Keith Gariepy
An open-source hardware root-of-trust that physically enforces safety invariants on robots in <10ms, preventing AI hallucinations from causing kinetic harm.
AISA
Translating in-person convening to measurable outcomes
Haakon Huynh
Jessica P. Wang
Germany’s talent is critical to the global effort to reduce catastrophic risks from artificial intelligence.
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make it the standard :)
Remmelt Ellen
Tom Maltby
A Three-Month Falsification-First Evaluation of CREATE
Mercy Kyalo
Operational costs for AISEA
Larry Arnold
A modular red-teaming and risk-evaluation framework for LLM safety
Manraj Singh
Exploring benchmarking approaches that do not saturate over time
Vahit Feryad
Build an agentic LLM+VLM pipeline that generates product visuals and automatically verifies identity, color, and artifacts, enabling scalable, trustworthy e-commerce
Jacob Steinhardt
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Lawrence Wagner
Igor Labutin
A tech-infused immersive musical. Experience the future of storytelling where artificial intelligence meets the depths of human emotion.
Jess Hines (Fingerprint Content)
Detect polarising story-frames early and build better narratives—fast, practical, adoptable.
Mirco Giacobbe
Developing the software infrastructure to make AI systems safe, with formal guarantees
Gergő Gáspár
Help us solve the talent and funding bottleneck for EA and AIS.
Reconstructs longitudinal patient state, identifies causal drivers, and runs simulations to prevent high-severity clinical failures under fragmented data.