I am a fourth-year PhD student at MIT CSAIL, advised by Jacob Andreas and Julie Shah. I’ve spent wonderful summers at the Boston Dynamics AI Institute, the MIT-IBM Watson AI Lab, and Facebook AI Research (FAIR), and before grad school I spent two years as an AI Resident at Microsoft Research. I did my undergrad at Yale, where I got my start in research with Brian Scassellati and read a lot of dead philosophers.
I’m interested in building agents that learn representations from rich human knowledge, whether directly (e.g., from feedback) or through priors (e.g., from LMs). Currently, I’m thinking a lot about how to use pretrained model priors together with human feedback to interactively learn user-aligned representations and rewards.
A history buff at heart, I care deeply about creating a world where AI is used safely, ethically, and equitably. I prioritize connecting with non-academic communities and have worked on AI policy at the White House Office of Science and Technology Policy (OSTP) and Schmidt Futures. I also serve on the advisory board of the Yale Jackson School of Global Affairs, where I co-teach a course on AI for policymakers.
I love being outdoors, even in the brutal Boston winters. A current goal is to run a sub-3:00 marathon (this is how I’m doing). Reach out to chat about research, policy, or running! Preferred subject line: Your cat is dope.
email | cv | google scholar | twitter | linkedin
Ph.D. Computer Science, 2023 – present
Massachusetts Institute of Technology
M.S. Computer Science, 2023
Massachusetts Institute of Technology
B.S. Cognitive Science, 2018
Yale University
B.A. Global Affairs, 2018
Yale University
[Dec 2023] Attending NeurIPS! I’m presenting Human-Guided Complexity-Controlled Abstractions at the main conference and co-organizing the Goal-Conditioned Reinforcement Learning Workshop.
[Nov 2023] Our paper Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback was accepted to TMLR.
[Nov 2023] Our papers Aligning Human and Robot Representations and Preference-Conditioned Language-Guided Abstractions were accepted to HRI 2024.
[Nov 2023] Our new preprint Getting Aligned on Representational Alignment is out.
[Nov 2023] I passed my quals at MIT!
[Sep 2023] Our paper Human-Guided Complexity-Controlled Abstractions was accepted to NeurIPS 2023.