Euan McLean
Salary & support for 1 year of leadership of Integral Altruism - a movement bringing together EA and wisdom
Mox
An incubator & community space in SF for doers of good and masters of craft
Matei-Alexandru Anghel
A Safety Framework for Evaluating AI Humanity Alignment Through Progressive Escalation and Scope Creep
Jessica P. Wang
Germany’s talent is critical to the global effort to reduce catastrophic risks from artificial intelligence.
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make it the standard :)
AISA
Translating in-person convening to measurable outcomes
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Dominique Gian Leonardo
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Galen Wilkerson
Measuring and Visualizing Model Uncertainty During Inference
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Lawrence Wagner
Gergő Gáspár
Help us solve the talent and funding bottleneck for EA and AIS.
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mech-interp), with recordings, debates, and the guaranteedsafe.ai community hub.
Jade Master
Developing correct-by-construction world models for verification of frontier AI
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Rufo Guerreschi
Persuading a critical mass of key potential influencers of Trump's AI policy to champion a bold, timely, and well-designed US-China-led global AI treaty
David Carel
Accelerating the adoption of air filters in every classroom
Preeti Ravindra
AI Safety Camp 2026 project: Bidirectional failure modes between security and safety