Sam Nadel
Experimental message testing and historical analysis of tech movements to identify how to effectively mobilize people around AI safety and governance
Gergő Gáspár
Help us solve the talent and funding bottlenecks for EA and AIS.
Dr. Jacob Livingston Slosser
Help get the Sapien Institute off the ground
Martin Percy
An experimental AI-generated sci-fi film dramatising AI safety choices. Using YouTube interactivity to elicit ≈880 conscious AI safety decisions per 1k viewers.
Pedro Bentancour Garin
Building the first external oversight and containment framework + high-rigor attack/defense benchmarks to reduce catastrophic AI risk.
Agwu Naomi Nneoma
Funding a Master's in AI, Ethics & Society to transition into AI governance, long-term risk mitigation, and safety-focused policy development.
Chris Canal
Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide
Atmadeep Ghoshal
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Rufo Guerreschi
Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely and proper US-China-led global AI treaty
Armon Lotfi
Multi-agent AI security testing that reduces evaluation costs by 10-20x without sacrificing detection quality
Justin Olive
Funding to cover our expenses for 3 months during an unexpected shortfall
Michaël Rubens Trazzi
Closing a funding gap to pay for a video editor and scriptwriter
Leo Hyams
A 3-month fellowship in Cape Town, connecting a global cohort of talent to top mentors at MIT, Oxford, CMU, and Google DeepMind
Cillian Crosson
$200k in 1:1 matched funding to support reporting on AI.
20 weeks' salary to reach a neglected audience of 10M viewers
Akshyae Singh
Scaling AI safety research to 10M+ people through the first distributed content creation project
Jared Johnson
Runtime safety protocols that modify reasoning without weight changes. Operational across GPT, Claude, and Gemini with zero security breaches in classified use