Sarah Wiegreffe
https://actionable-interpretability.github.io/
Robert Looman
Building a transparent, symbolic AGI that runs millions of tokens/sec on CPUs, making safe, explainable AI accessible to everyone.
Chi Nguyen
Making sure AI systems don't mess up acausal interactions
Jaeson Booker
Creating a fund exclusively focused on supporting AI Safety Research
Apart Research
Funding ends June 2025: Urgent support for a proven AI safety pipeline that converts technical talent from 26+ countries into published contributors
Igor Ivanov
Asterisk Magazine
Jim Maar
Reproducing the Claude poetry planning results quantitatively
Tamar Rott Shaham
Connor Axiotes
Geoffrey Hinton & Yoshua Bengio Interviews Secured, Funding Still Needed
Miles Tidmarsh
Enabling Compassion in Machine Learning (CaML) to develop methods and data to shift future AI values
Centre pour la Sécurité de l'IA
4M+ views on AI safety: Help us replicate and scale this success with more creators
Mox
For AI safety, AI labs, EA charities & startups
Guy
Out of This Box: The Last Musical (Written by Humans)
Florian Dietz
Revealing Latent Knowledge Through Personality-Shift Tokens
Ronak Mehta
Funding for a new nonprofit organization focused on accelerating and automating safety work.
Amritanshu Prasad
General support for an AI safety evals for-profit
Yuanyuan Sun
Building bridges between Western and Chinese AI governance efforts to address global AI safety challenges.
Carlos Rafael Giudice
I've self-funded my ramp-up for six months, and interview/grant processes are taking longer than expected.
Sandy Fraser
New techniques to impose minimal structure on LLM internals for monitoring, intervention, and unlearning.