Jacob Steinhardt
David Krueger
Our mission is to inform and organize the public to confront the societal-scale risks of AI and to put an end to the reckless race to develop superintelligent AI.
Xyra Sinclair
Unlocking the paradigm of agents + SQL + compositional vector search.
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems.
Lawrence Wagner
Anthony Ware
Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.
Ryan Celimon
Translating AI’s biggest threats into videos anyone can understand: AGI, misalignment, and job loss explained.
Will Shin
A global IP project reimagining ecology, future technology, and institutions through character-driven narratives.
Mirco Giacobbe
Developing the software infrastructure to make AI systems safe, with formal guarantees.
Muhammad Ahmad
A pilot to build policy and technical capacity for governing high-risk AI systems in Africa.
Gergő Gáspár
Help us solve the talent and funding bottlenecks for effective altruism (EA) and AI safety (AIS).
Centre pour la Sécurité de l'IA
Leveraging 12 Nobel signatories to harmonize lab safety thresholds and secure an international agreement during the 2026 diplomatic window.
Sandy Tanwisuth
We reframe the alignment problem as one of governing meaning and intent when they cannot be fully expressed.
Evžen Wybitul
Brian McCallion
A mechanistic, testable framework explaining LLM failure modes via boundary writes and attractor dynamics.
Christopher Kuntz
A bounded protocol audit and implementation-ready mitigation for intent ambiguity and escalation in deployed LLM systems.
Jasraj Hari Krishna Budigam
Reusable, low-compute benchmarking that detects data leakage, outputs “contamination cards,” and improves calibration reporting.
Chris Canal
Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide.
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Orpheus Lummis
Seminars on quantitative/guaranteed AI safety (formal methods, verification, mechanistic interpretability), with recordings, debates, and the guaranteedsafe.ai community hub.