Hey there! I am an incoming Ph.D. student at Stanford University and a research intern at the U.S. Navy Marine Mammal Program. I received my bachelor's at Stanford in 2024, where I studied computer science, psychology (minor), and creative writing (minor). I worked in Chelsea Finn's IRIS lab on robot learning. I am grateful to be supported by a Knight-Hennessy Fellowship.
I'm broadly interested in how robots can learn and reason like humans and animals. During my Ph.D., I want to get robots to acquire skills autonomously, create rich interpretations of reinforcement, and adapt quickly to unexpected situations. I'm also interested in how we can meaningfully quantify the diverse array of intelligences found in living creatures and artificial systems.
In my free time, I love to write short fiction and creative non-fiction. I'm working on a book that looks at the often-misrepresented stories of marine mammal trainers and the zoological industry.
Most novel situations can be solved with a large enough skill repertoire, but it takes interaction to find the right skill for the task. A key component of this approach is knowing when to try something new. In this project, I develop a measure of task progress that compares the robot's current strategy against a hypothetical expert. If the robot is progressing too slowly, its current strategy is likely suboptimal and it should try a new one.
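To make the idea concrete, here is a minimal sketch in Python. It is not the project's actual implementation: the expert progress model and the switching threshold are illustrative assumptions.

```python
# A minimal sketch of the progress-check idea, not the project's actual code.
# The expert progress rate and the slack threshold are illustrative assumptions.

def should_switch_strategy(progress_history, expert_rate, slack=0.5):
    """Return True if the current strategy is progressing too slowly.

    progress_history: list of task-progress estimates (0 = start, 1 = done),
        one per interaction step with the current strategy.
    expert_rate: expected progress per step for a hypothetical expert.
    slack: fraction of the expert's rate we are willing to tolerate.
    """
    if len(progress_history) < 2:
        return False  # not enough evidence yet
    steps = len(progress_history) - 1
    actual_rate = (progress_history[-1] - progress_history[0]) / steps
    # Switch if we are progressing at less than `slack` times the expert's pace.
    return actual_rate < slack * expert_rate


# Example: a strategy that has barely moved after five steps.
history = [0.00, 0.02, 0.03, 0.03, 0.04, 0.05]
print(should_switch_strategy(history, expert_rate=0.10))  # True -> try a new skill
```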
In other fields of machine learning, large datasets are commonly used to help train new models. Analogously, we want to use pre-collected robotics datasets to learn a target task. However, diverse robot datasets may contain irrelevant or even counterproductive demonstrations that hinder learning. In this project, we propose a data filtering method that intelligently extracts the data in a diverse robot dataset most relevant to the target task.
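As a rough illustration of what such filtering could look like (a sketch under my own simplifying assumptions, not the proposed method itself), one could score every demonstration by its embedding similarity to the target task and keep the most relevant ones:

```python
# A minimal sketch of relevance-based data filtering, assuming demonstrations
# and target-task examples are already embedded as fixed-length vectors.
# The embeddings, similarity measure, and keep_fraction are illustrative choices.
import numpy as np

def filter_by_relevance(demo_embeddings, target_embeddings, keep_fraction=0.2):
    """Keep the demonstrations most similar to the target task.

    demo_embeddings: (N, D) array, one embedding per prior demonstration.
    target_embeddings: (M, D) array, embeddings of target-task examples.
    Returns the indices of the kept demonstrations.
    """
    # Cosine similarity between every demonstration and every target example.
    demos = demo_embeddings / np.linalg.norm(demo_embeddings, axis=1, keepdims=True)
    targets = target_embeddings / np.linalg.norm(target_embeddings, axis=1, keepdims=True)
    similarity = demos @ targets.T          # (N, M)
    relevance = similarity.max(axis=1)      # best match to any target example

    k = max(1, int(keep_fraction * len(relevance)))
    return np.argsort(relevance)[-k:]       # indices of the k most relevant demos
```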
In many situations, vision alone is not sufficient to solve a task. For example, consider grabbing car keys from a dark bag. We would reach into the bag by looking for the opening, then localize the keys by listening for the jangling noises. Ideally, a robot should be able to accomplish the same feat. We tackled this key-extraction task on a real robot using a third-person camera and a microphone attached to the robot's gripper.
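For a sense of how camera and microphone signals might be combined, here is a hedged sketch of a simple audio-visual policy network; the encoder sizes and the 7-dimensional action are placeholders, not the project's actual architecture.

```python
# A minimal sketch of fusing camera and microphone inputs into one policy,
# assuming simple CNN/MLP encoders; the real architecture may differ.
import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    def __init__(self, action_dim=7):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (B, 16)
        )
        self.audio_encoder = nn.Sequential(
            nn.Linear(128, 32), nn.ReLU(),              # 128-bin audio feature frame
        )
        self.head = nn.Sequential(
            nn.Linear(16 + 32, 64), nn.ReLU(),
            nn.Linear(64, action_dim),                  # robot action
        )

    def forward(self, image, audio):
        feats = torch.cat([self.image_encoder(image), self.audio_encoder(audio)], dim=-1)
        return self.head(feats)

# One forward pass on dummy inputs: a 64x64 camera frame and one audio frame.
policy = AudioVisualPolicy()
action = policy(torch.randn(1, 3, 64, 64), torch.randn(1, 128))
print(action.shape)  # torch.Size([1, 7])
```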
Reinforcement learning (RL) was originally inspired by the same ideas that gave rise to animal behavior theory. In the past few decades, RL has become a widespread machine-learning paradigm, and behavior theory has seen great success in marine parks, research labs, and even the classroom. In this talk, I argue that these disciplines have more similarities than differences. Namely, natural and artificial learning processes have uncannily similar failure modes, which can be traced back to shared difficulties of perceiving and learning from an uncertain world.
I gave this talk for three years with Stanford Splash, an educational outreach program in the Bay Area. I also gave an abridged version of this talk at the International Marine Animal Trainers' Association (IMATA) conference, where I won first prize for research advancements.
Slide Deck Sample  /  Stanford Splash Program
I am a mentor for the Deep Learning Portal outreach program. The Portal is a program created by the IRIS lab and the Stanford CS department, designed to help disadvantaged students learn AI by providing access to existing online courses and hosting live office hours. Every week, I meet 1:1 with my mentee to discuss conceptual and technical questions related to machine learning.
For research to make a positive impact, we need effective storytelling to give it a voice that speaks with clarity, truthfulness, and nuance. Outside of research papers, I bring my storytelling to short fiction and creative non-fiction. Currently, I'm mostly interested in the human-animal relationship, the immigrant identity, and trauma in places of paradise. Two of my works have received a Stanford Creative Writing Prize. I've advised numerous published op-eds and long-form narratives, including one op-ed that started a national animal welfare movement.
Many things separate the human experience from the lives of all other animals. I'm fascinated by how we can process our lived experiences to a near-infinite depth. An animal might run away from a terrible past experience, but we might run towards the fire. I'm interested in how we face this terrible, this untouchable. I'm interested in how we seize our pasts and make them our own. Find my writing here.
I'm currently working on a non-fiction book that explores the human-animal relationship through people who work with animals. This work celebrates the lure of lives very different from our own. In this celebration, I challenge a common trend of romanticizing animal intelligence (especially whales & dolphins) and instead encourage the acceptance of intellectual diversity in the human realm and the animal kingdom. I also include a new look at the commonly misrepresented world of whale trainers: people who once rode killer whales in acrobatic shows, demonstrating powerful training techniques, uncanny athleticism, and ultimately, trust.
I learn best when I take notes. Over my four years at Stanford, I've taken hundreds of pages of notes on topics ranging from probability to psychology. They cover all levels of knowledge, from undergraduate introductions to cutting-edge AI algorithms. I'm working on making all of these notes public, and I hope they can be helpful to some people.
This simple Python-based program lets you use keyboard shortcuts to annotate audio, video, and live events with timestamped comments. The program exports your annotations as copy-and-paste text that you can add to any literature-review notes. I rely heavily on this tool to review hours of video for my book.
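For the curious, here is a stripped-down sketch of the core timestamping idea; these few lines stand in for the real tool's keyboard shortcuts and export format.

```python
# A stripped-down sketch of the timestamp-annotation idea, not the full tool.
# Type a note and press Enter to record it; type 'q' to quit and print the notes.
import time

def annotate():
    start = time.time()
    notes = []
    while True:
        text = input("note (or 'q' to quit): ")
        if text.strip().lower() == "q":
            break
        elapsed = int(time.time() - start)
        stamp = f"{elapsed // 60:02d}:{elapsed % 60:02d}"
        notes.append(f"[{stamp}] {text}")
    # Export as plain text you can paste into literature-review notes.
    print("\n".join(notes))

if __name__ == "__main__":
    annotate()
```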
Recently, I've also made a book annotator that allows you to make page-specific notes. This lets you digitize your annotations for physical books.
As a researcher, I often find myself reusing the same code across different applications. To help, I've been working on a large repository of code basics: a collection of snippets for model debugging, plot making, basic PyTorch models, and more.
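As one example of the kind of snippet the collection holds, here is a basic PyTorch MLP with a quick sanity-check forward pass; the layer sizes are placeholders rather than anything from the repository itself.

```python
# One example of a basic-PyTorch-model snippet: a small MLP plus a sanity check.
# The layer sizes below are placeholders.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim=32, hidden_dim=64, out_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Quick sanity check: one forward pass on random data.
model = MLP()
x = torch.randn(8, 32)
print(model(x).shape)  # torch.Size([8, 10])
```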