@ethanjperez
I lead the adversarial robustness team at Anthropic, where I'm working to reduce existential risks from AI systems. I helped develop Retrieval-Augmented Generation (RAG), a widely used approach for augmenting large language models with other sources of information. I also helped demonstrate that state-of-the-art AI safety training techniques do not ensure safety against sleeper agents. I received a best paper award at ICML 2024 for my work showing that debating with more persuasive LLMs leads to more truthful answers. I received my PhD from NYU under the supervision of Kyunghyun Cho and Douwe Kiela, funded by NSF and Open Philanthropy. Previously, I've spent time at DeepMind, Facebook AI Research, the Montreal Institute for Learning Algorithms, and Google. I was also named one of Forbes's 30 Under 30 in AI.
ethanperez.net
$0 in pending offers
I'm interested in funding anything related to making AI go well, including work on AI safety, policy, welfare, and more.
| For | Date | Type | Amount |
|---|---|---|---|
| Manifund Bank | about 13 hours ago | deposit | +100000 |
| Manifund Bank | almost 2 years ago | withdraw | 200100 |
| Compute and other expenses for LLM alignment research | almost 2 years ago | project donation | +200000 |
| Compute and other expenses for LLM alignment research | almost 2 years ago | project donation | +100 |
| Compute and other expenses for LLM alignment research | about 2 years ago | project donation | +200000 |
| Manifund Bank | about 2 years ago | withdraw | 200000 |