Bryce Meyer
Sarah Wiegreffe
https://actionable-interpretability.github.io/
Chi Nguyen
Making sure AI systems don't mess up acausal interactions
Kristina Vaia
The official AI safety community in Los Angeles
Apart Research
Funding ends June 2025: Urgent support for proven AI safety pipeline converting technical talent from 26+ countries into published contributors
Jaeson Booker
Creating a fund exclusively focused on supporting AI Safety Research
Igor Ivanov
Asterisk Magazine
Tamar Rott Shaham
Jim Maar
Reproducing the Claude poetry planning results quantitatively
Connor Axiotes
Geoffrey Hinton & Yoshua Bengio Interviews Secured, Funding Still Needed
Centre pour la Sécurité de l'IA
4M+ views on AI safety: Help us replicate and scale this success with more creators
Guy
Out of This Box: The Last Musical (Written by Humans)
Mox
For AI safety, AI labs, EA charities & startups
Ronak Mehta
Funding for a new nonprofit organization focused on accelerating and automating safety work
Florian Dietz
Revealing Latent Knowledge Through Personality-Shift Tokens
Yuanyuan Sun
Building bridges between Western and Chinese AI governance efforts to address global AI safety challenges
Miles Tidmarsh
Enabling Compassion in Machine Learning (CaML) to develop methods and data to shift future AI values
Amritanshu Prasad
General Support for an AI Safety evals for-profit
Carlos Rafael Giudice
I've self-funded my ramp-up for six months, and interview/grant processes are taking longer than expected.