ryan lowe
Hi! I'm Ryan. It's good to meet you.
Right now, I'm a researcher at the Meaning Alignment Institute. We're gathering a group of cross-disciplinary researchers interested in the project of aligning AI and institutions with what people value, which we call full-stack alignment.
I'm also running contemplative experiments for AI alignment researchers (and others) with Max Roth, under the banner of Connecting Intelligence.
Until 2024, I worked at OpenAI, where I led the "practical alignment" team that built InstructGPT, and co-led the alignment of GPT-4. Before that, I got my PhD in Computer Science from McGill University, supervised by Joelle Pineau, where I worked on dialogue systems and multi-agent RL. You can view some of my papers here.
I've been most influenced by my friends, my family, Jon Hansen, Rob Burbea, Joe and Tara Hudson, Dustin DiPerna, Joe Edelman, and the lineages of their teachers.