"And still 24 hours, maybe 60 good years, it's really not that long a stay..." -JB
Hey there! I am a second-year Ph.D. student at Stanford University working with Shuran Song in the Robotics and Embodied AI (REAL) Lab. I'm broadly interested in how robots can learn and reason like humans and animals. I am grateful to be supported by the Knight-Hennessy Fellowship and the NSF Graduate Research Fellowship.
I completed my undergrad degree in computer science at Stanford, where I conducted research in Chelsea Finn's IRIS lab on imitation learning methods. In the months before my PhD, I interned at the Navy Marine Mammal Program, where I worked hands-on with dolphins and sea lions. This once-in-a-lifetime experience convinced me that a better understanding of animal cognition will be critical in crafting better AI algorithms.
Beyond the robot lab and my continued involvement with the Navy dolphins, I'm a writer and an oral historian. Sometimes this journey takes me to advocate for animal care professionals, their animals, and responsible zoological facilities. But most of the time, I enjoy writing short stories. I hope you can read them soon.
Email  / Google Scholar  /  GitHub  /  CV
June 2025: I became the first (part-time) employee of Lead Dog, a company that develops video games for marine mammals and beyond.
April 2025: Honored to receive the NSF Graduate Research Fellowship!
March 2025: Our dolphin video game project CV-EVE won 2nd place at the International Marine Animal Trainer's Association (IMATA) conference.
December 2024: My first op-ed published about my oral history work.
Sept 2024: I started my CS Ph.D. at Stanford.
June 2024: Finished undergrad and started my internship at the U.S. Navy Marine Mammal Program in San Diego.
May 2024: My first narrative audio story published about whales!
March 2024: Honored to receive the Knight-Hennessy Fellowship!
Robot policies are getting more complicated, which means that tuning them becomes increasingly tedious. In DynaGuide, we propose a novel way of steering policies by using a dynamics model to influence the action inference process of a robot policy. DynaGuide works on any diffusion policy, including off-the-shelf real robot policies.
When deploying robot policies, we need to know when we've made mistakes so we can recover from them and try something else. In this paper, we propose a mistake detection approach that uses the Bellman error of a learned value function. This simple trick boosts success rates of trained policies, all without modifying the base policy.
When training a robot on large datasets, we need to select the right data to improve its performance without confusing it. In this work, we propose a simple data filtering approach using distances in a latent representation. This data filter has large impacts on robot performance in simulation and on a real setup.
In the real world, sound cues can carry information even when vision fails, such as when rooting around in a dark bag. In this work, we add the sound modality to a robot by mounting a microphone on the gripper. We show that the robot can locate and extract hidden keys from a bag.
For research to make a positive impact, we need effective storytelling that gives it a voice of clarity, truthfulness, and nuance. Outside of research papers, I bring my storytelling to short fiction and creative non-fiction. Currently, I'm mostly interested in the human-animal relationship, the immigrant identity, and trauma in places of paradise. Two of my works have received a Stanford Creative Writing Prize. I've advised numerous published op-eds and long-form narratives, including one op-ed that started a national animal welfare movement.
Many things separate the human experience from the lives of all other animals. I'm fascinated by how we can process our lived experiences to a near-infinite depth. An animal might run away from a terrible past experience, but we might run towards the fire. I'm interested in how we face this terrible, this untouchable. I'm interested in how we seize our pasts and make them our own. Find my writing here.
I'm currently working on a non-fiction book that explores the human-animal relationship through people who work with animals. This work celebrates the lure of lives very different from our own. In this celebration, I challenge a common trend of romanticizing animal intelligence (especially whales & dolphins) and instead encourage the acceptance of intellectual diversity in both the human realm and the animal kingdom. I also offer a new look at the commonly misrepresented world of whale trainers: people who once rode killer whales in acrobatic shows, demonstrating powerful training techniques, uncanny athleticism, and ultimately, trust.
I learn best when I take notes. Through my four years at Stanford, I've taken hundreds of pages of notes on topics ranging from probability to psychology. They cover all levels of knowledge, from undergraduate introductions to cutting-edge AI algorithms. I'm working on making all of these notes public, and I hope they can be helpful to some people.
During my time with the Navy Marine Mammal Program, I developed a computer vision controller that allows dolphins to interact with a video game system through their body motions. If you are a marine mammal facility and believe that your animals could benefit from a similar system, please reach out to me or Kelley Winship.
This simple Python-based program allows you to use keyboard shortcuts to annotate audio, video, and live events with timestamped comments. The program will export your annotations to copy-and-paste text that you can add to any literature review notes. I rely heavily on this tool to review hours of videos for my book.
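The core idea behind the annotator is simple: record the elapsed time when each comment is entered, then export everything as plain text. Here's a minimal sketch of that idea in Python; the class and method names are illustrative, not the tool's actual API.

```python
import time


class TimestampAnnotator:
    """Minimal sketch of a timestamped note-taker: log comments against
    an elapsed clock and export them as copy-and-paste text."""

    def __init__(self, clock=time.monotonic):
        # The clock is injectable so the annotator can be tested
        # without real waiting.
        self._clock = clock
        self._start = clock()
        self._notes = []  # list of (elapsed_seconds, comment) pairs

    def annotate(self, comment):
        # Record a comment with the time elapsed since the session began.
        self._notes.append((self._clock() - self._start, comment))

    def export(self):
        # Render notes as plain text, one "[MM:SS] comment" per line,
        # ready to paste into literature review notes.
        lines = []
        for elapsed, comment in self._notes:
            minutes, seconds = divmod(int(elapsed), 60)
            lines.append(f"[{minutes:02d}:{seconds:02d}] {comment}")
        return "\n".join(lines)
```

In the real tool, keyboard shortcuts would trigger `annotate` while the audio or video plays; the sketch only shows the timestamping and export logic.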
Recently, I've also made a book annotator that lets you take page-specific notes, so you can digitize your annotations for physical books.