Making sure AI systems don't mess up acausal interactions
Jaeson Booker
Creating a fund exclusively focused on supporting AI Safety Research
Igor Ivanov
Apart Research
Funding ends June 2025: urgent support for a proven AI safety pipeline converting technical talent from 26+ countries into published contributors
Asterisk Magazine
Jim Maar
Reproducing the Claude poetry planning results quantitatively
Tamar Rott Shaham
Miles Tidmarsh
Enabling Compassion in Machine Learning (CaML) to develop methods and data to shift future AI values
Cameron Berg
Astelle Kay
Open benchmark and turnkey license that helps AI teams curb flattery in frontier models via the four-step VSPE Framework
Connor Axiotes
Filming a feature-length documentary on risks from AI for a non-technical audience on streaming services
Centre pour la Sécurité de l'IA
4M+ views on AI safety: Help us replicate and scale this success with more creators
Amritanshu Prasad
General Support for an AI Safety evals for-profit
Sandy Fraser
New techniques to impose minimal structure on LLM internals for monitoring, intervention, and unlearning
Florian Dietz
Revealing Latent Knowledge Through Personality-Shift Tokens
Mox
For AI safety, AI labs, EA charities & startups
Evžen Wybitul
Guy
Out of This Box: The Last Musical (Written by Humans)
Ronak Mehta
Funding for a new nonprofit organization focused on accelerating and automating safety work
Carlos Rafael Giudice
I've self-funded my ramp-up for six months, and interview/grant processes are taking longer than expected