I am a 4th year PhD student at MIT CSAIL advised by Jacob Andreas and Julie Shah. I’ve spent wonderful summers at the Boston Dynamics AI Institute, MIT-IBM Watson AI Lab, Facebook AI Research (FAIR), and before grad school, two years as an AI Resident at Microsoft Research. I did my undergrad at Yale, where I got my start in research with Brian Scassellati and read a lot of dead philosophers.

I’m interested in building agents that learn representations from rich human knowledge, whether directly (e.g., from users) or through priors (e.g., from LMs). Currently, I’m thinking a lot about how to use pretrained models together with human feedback to interactively learn aligned representations and rewards.

A history buff at heart, I care deeply about working with non-academic communities to create safe, ethical, and equitable AI. I currently serve as a Special Government Employee for the Defense Innovation Unit (DIU). In a previous life, I worked at the White House Office of Science and Technology Policy (OSTP), National Institute of Standards and Technology (NIST), and Schmidt Futures. I also serve on the advisory board of the Yale Jackson School of Global Affairs, where I co-teach a course on AI for policymakers.

I love being outdoors, even in the brutal Boston winters. A current goal is to run a sub-3:00 marathon (this is how I’m doing). Reach out to chat about research, policy, or running! Preferred subject line: Your cat is dope.

email | cv | google scholar | twitter | linkedin


Education

  • Ph.D. Computer Science, Massachusetts Institute of Technology, 2023–
  • M.S. Computer Science, Massachusetts Institute of Technology, 2023
  • B.S. Cognitive Science, Yale University, 2018
  • B.A. Global Affairs, Yale University, 2018

People Financially Invested in My Future

  • Open Philanthropy
  • NSF Graduate Research Fellowship
  • Truman Scholarship
  • My parents

Recent News

[Feb 2024] I am giving a talk at the Mila Robot Learning Seminar.

[Feb 2024] I will be visiting Constellation, an AI safety research center, for two weeks!

[Jan 2024] Our paper Learning with Language-Guided State Abstractions was accepted to ICLR 2024.

[Dec 2023] Attending NeurIPS! I’m presenting Human-Guided Complexity-Controlled Abstractions in the main conference and co-organizing the Goal-Conditioned Reinforcement Learning Workshop.

[Nov 2023] Our paper Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback was accepted to TMLR.

[Nov 2023] Our papers Aligning Human and Robot Representations and Preference-Conditioned Language-Guided Abstractions were accepted to HRI 2024.

Publications

Pragmatic Feature Preferences: Learning Reward-Relevant Preferences from Human Feedback
Learning with Language-Guided State Abstractions
Aligning Robot Representations with Humans
Preference-Conditioned Language-Guided Abstractions
Getting Aligned on Representational Alignment
Human-Guided Complexity-Controlled Abstractions
Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Diagnosis, Feedback, Adaptation: A Human-in-the-Loop Framework for Test-Time Policy Adaptation
Strengthening Subcommunities: Towards Sustainable Growth in AI Research
Make Greenhouse-Gas Accounting Reliable — Build Interoperable Systems
Investigations of Performance and Bias in Human-AI Teamwork in Hiring
On the Nature of Bias Percolation: Assessing Multiaxial Collaboration in Human-AI Systems
Human-Machine Collaboration for Fast Land Cover Mapping
What You See Is What You Get? The Impact of Representation Criteria on Human Bias in Hiring
An Integrated Machine Learning Approach To Studying Terrorism
Conceptual Feasibility Study of the Hyperloop Vehicle for Next-Generation Transport
Early Detection of Boko Haram Attacks in Nigeria