Jessica P. Wang
Germany’s talent is critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this standard :)
Anthony Etim
Defense-first monitoring and containment to reduce catastrophic AI risk from stolen frontier model weights
Larry Arnold
A modular red-teaming and risk-evaluation framework for LLM safety
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Tomasz Kiliańczyk
Translating an AI safety report (1k+ downloads) for peer-reviewed publication to formalize "Emergent Depopulation" as a novel systemic risk.
Robert Craft
Grant request: £40,000 (seed funding) over 6 months
Jess Hines (Fingerprint Content)
Detect polarising story-frames early and build better narratives: fast, practical, adoptable.
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
Lawrence Wagner
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Preeti Ravindra
AI Safety Camp 2026 project: Bidirectional failure modes between security and safety
Gergő Gáspár
Help us solve the talent and funding bottleneck for EA and AIS.
Sara Holt
Short Documentary and Music Video
Melina Moreira Campos Lima
Assessing the Climate Potential of Catering Systems in Public Schools and Hospitals
Alex Leader
Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs
Jade Master
Developing correct-by-construction world models for verification of frontier AI
Ella Wei
A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.
Avinash A
Formalizing the "Safety Ceiling": An Agda-Verified Impossibility Theorem for AI Alignment