I am a Research Scientist at Toyota Research Institute, working on modeling human interactions and improving human-AI collaboration.

Before TRI, I received my PhD from Georgia Tech, working in the CORE Robotics Lab under Professor Matthew Gombolay, and before that I was a Research Scientist in the same lab and in the RAIL Lab with Dr. Sonia Chernova.

I’ve interned at Apple and Google Brain Robotics, working on private federated learning, agile robot adaptation, and personalization for large language models.

I am also a recipient of the 2021 Apple Scholars in AI/ML PhD fellowship and was named a DAAD AInet Fellow!


See below for examples and links to my recent work!

FedPC

In FedPC, we present a new approach to personalization for federated deployment of large language models, using personal and context embeddings for each user.

Published Link
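The core idea can be sketched in a few lines. This is a toy illustration of conditioning a frozen, shared model on per-user personal and context embeddings; all names and shapes here are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # embedding dimension (illustrative)

# Shared base model parameters: trained federatedly, identical for all users.
shared_weights = rng.standard_normal((d, d))

# Each user keeps a small personal embedding on-device; raw data never leaves.
personal = {
    "user_a": rng.standard_normal(d),
    "user_b": rng.standard_normal(d),
}

# A context embedding captures the current situation (e.g., conversation topic).
context = rng.standard_normal(d)

def personalized_features(user_id, token_embedding):
    # Condition the shared model on the user's personal embedding plus context,
    # so the same global weights produce user-specific outputs.
    conditioned = token_embedding + personal[user_id] + context
    return shared_weights @ conditioned
```

Because only the small personal and context embeddings are user-specific, personalization stays cheap and the heavy shared parameters can be aggregated across the federation as usual.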

Evaluating Explainability

Explainability is increasingly popular and important for machine learning research and deployment, but little of this work is evaluated with real humans. In this user study, we compare seven explainability conditions with human participants.

Published Link
Preprint Link

Figure of clustered data from Cross-Loss Influence Functions to Explain Deep Network Representations

CLIF

Influence functions can help to show why a model makes certain decisions, but they had previously only been proven for matched training and test objectives. We show that influence functions also work with unsupervised and self-supervised learning.

Published Link

FedEmbed

We present a new approach to private, personalized federated learning, leveraging personal embeddings and clustering of users with similar preferences.

arXiv Link

LanCon-Learn

We present an approach to language-conditioned multi-task learning, using language-based command embeddings rather than conventional one-hot goal specifications.

Published Link
Preprint Link
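The difference between one-hot goals and language-based command embeddings can be sketched in a few lines. The bag-of-words encoder below is a toy stand-in for a real sentence encoder, and all task names are illustrative.

```python
import numpy as np

# Hypothetical 3-task setup.
tasks = ["pick up the red block", "open the drawer", "push the button"]

# Conventional approach: each task gets an arbitrary one-hot index, so
# related commands share no structure at all.
one_hot = np.eye(len(tasks))

# Language-conditioned approach: embed the command text so that related
# commands land near each other in embedding space. A toy bag-of-words
# embedding stands in for a real sentence encoder here.
vocab = sorted({w for t in tasks for w in t.split()})

def embed(command):
    vec = np.zeros(len(vocab))
    for w in command.split():
        if w in vocab:
            vec[vocab.index(w)] += 1.0
    return vec / max(np.linalg.norm(vec), 1e-8)

# A multi-task policy consumes the state concatenated with the goal embedding.
state = np.random.randn(8)
goal = embed("pick up the red block")
policy_input = np.concatenate([state, goal])
```

With one-hot goals, "pick up the red block" and "pick up the blue block" are completely unrelated; with language embeddings, they overlap heavily, which is what lets the policy generalize across related commands.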

Multimodal Punctuation Prediction

Speech-to-text systems transcribe raw audio into text, but do not always consider the structure of that text and how it might affect the meaning. In this work, we explore multimodal inputs and introduce context-dropout to improve punctuation prediction from raw audio and text.

Published Link
arXiv Link

ProLoNets

Reinforcement learning agents can waste hundreds of hours simply learning the rules of the world. With ProLoNets, we hard-code heuristics directly into an RL agent’s neural network weights before training even begins, enabling faster learning in challenging domains.

Published Link
arXiv Link
GitHub Link
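The intuition of encoding a heuristic as initial weights can be sketched as a single soft decision node. The rule, feature layout, and constants below are all illustrative, not the paper's actual API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy heuristic: "if enemy distance (feature 0) < 2.0, flee (action 1);
# otherwise attack (action 0)." We encode it as the *initial* weights of
# a differentiable decision node, so gradient descent can refine it later.
w = np.zeros(4)
w[0] = 1.0          # the node attends only to feature 0
b = -2.0            # comparison threshold: split around x[0] = 2.0
alpha = 5.0         # sharpness of the soft split

leaf_far = np.array([1.0, 0.0])    # action distribution: attack
leaf_near = np.array([0.0, 1.0])   # action distribution: flee

def policy(x):
    # Probability of taking the "far" branch; leaves are mixed accordingly.
    p_far = sigmoid(alpha * (w @ x + b))
    return p_far * leaf_far + (1 - p_far) * leaf_near

far_state = np.array([5.0, 0.0, 0.0, 0.0])
near_state = np.array([0.5, 0.0, 0.0, 0.0])
```

Because every part of this node (weights, bias, leaf distributions) is differentiable, the agent starts out following the heuristic but is free to learn away from it during training.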

Examples of discrete decision trees from Optimization Methods for Interpretable Differentiable Decision Trees Applied to Reinforcement Learning

Interpretable RL

After learning the weights of a differentiable decision tree for an RL task, it’s possible to convert the network into a discrete, ordinary decision tree while preserving performance. This yields a small, interpretable tree that the agent can use, improving human trust and efficiency.

Published Link