Michaël Rubens Trazzi
Funding gap to pay for a video editor and scriptwriter
Chris Wendler
Help fund our student’s trip to NeurIPS to present his main conference paper on interpretable features in text-to-image diffusion models.
Ethan Nelson
Leveraging a 23K Subscriber Channel to Advance AI Safety Discourse
Aditya Raj
Current LLM safety methods treat harmful knowledge as removable chunks. This is an attempt to control the model, and it does not work.
Justin Olive
Funding to cover our expenses for 3 months during an unexpected shortfall
Leo Hyams
A 3-month fellowship in Cape Town, connecting a global cohort of talent to top mentors at MIT, Oxford, CMU, and Google DeepMind
Jhet Chan
A self-funded researcher presenting at the NeurIPS NeurReps workshop to showcase a new approach to geometry and cognition.
Sean Peters
I'd like to explore a research agenda at the intersection of time horizon model evaluation and control protocols.
20 Weeks' Salary to reach a neglected audience of 10M viewers
Thane Ruthenis
Research agenda aimed at developing methods for constructing powerful, easily interpretable world-models.
Muhammad Ahmad Janyau
Elevating Africa’s Missing Voice in Global AI Safety
Lindsay Langenhoven
Support our mission to educate millions through podcasts and videos before unsafe AI development outruns human control.
Connor Axiotes
Making God is now raising for post-production so we can deliver a festival-ready documentary fit for Netflix acquisition.
Fernando Yupanqui
I am making music about AI risk
Aditya Arpitha Prasad
Practicing Embodied Protocols that work with Live Interfaces
Apart Research
Funding ends June 2025: Urgent support for a proven AI safety pipeline converting technical talent from 26+ countries into published contributors
Anthony Duong
Hieu Minh Nguyen
LLMs often know when they are being evaluated. We’ll do a study comparing various methods to measure and monitor this capability.
David Chanin
Steve Petersen
Teleology, agential risks, and AI well-being