Pedro Bentancour Garin
Empirical testing of whether AI capability scaling leads to emergent agency or shutdown resistance in frontier systems.
Miles Tidmarsh
Open Welfare Alignment Evals for Frontier Models
Aria Wong
Jessica P. Wang
Germany’s talent is critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Hayley Martin
Support my postgraduate law studies and research in AI Governance
Connacher Murphy
A flexible simulation environment for assessing strategic and persuasive capabilities, benchmarking, and agent development, inspired by reality TV competitions.
Cameron Tice
AISA
Translating in-person convening into measurable outcomes
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make this the standard :)
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
Remmelt Ellen
Galen Wilkerson
Measuring and Visualizing Model Uncertainty During Inference
Habeeb Abdulfatah
Seeking funding to secure API infrastructure and permanently eliminate the rate limits bottlenecking open-source EA grant evaluation.
Jacob Steinhardt
Krishna Patel
Expanding proven isolation techniques to high-risk capability domains in Mixture-of-Experts models
Matthew Farr
I self-funded research into a new threat model; it is demonstrating impact (accepted at multiple venues and added to BlueDot's curriculum).
Warren Johnson
Novel safety failure modes discovered across 7 LLM providers with 35,000+ controlled inference trials. Targeting NeurIPS 2026.
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Lawrence Wagner
Boyd Kane
by buying gift cards for the game and handing them out at the OpenAI offices