Nicholas Kruus
LLMs could automate intelligence analysis. I wrote the first paper on governing this; $5k funds two months to revise it and scope an org's research branch
Aashka Patel
Redirecting India’s Middle‑Schoolers into AI Safety, Governance, and X‑Risk Work
Lawrence Wagner
A benchmark for studying how failures spread across multi-agent AI systems and whether they can be detected and interrupted in time.
Euan McLean
Salary & support for 1 year of leadership of Integral Altruism - a movement bridging EA with wisdom
Mox
An incubator & community space in SF; for doers of good and masters of craft
Johan Fredrikzon
Designing a Project Funding Proposal
Matei-Alexandru Anghel
A Safety Framework for Evaluating AI Humanity Alignment Through Progressive Escalation and Scope Creep
Pedro Bentancour Garin
Runtime safety, oversight, rollback, and control infrastructure for advanced AI in real-world, high-consequence environments.
Pu Wang (Jessica)
Germany’s talent pool is critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Suki Krishna
Investigate how LLMs behave in multi-agent environments, particularly how contextual framing and strategic advice can systematically manipulate coordination outcomes
AI Understanding
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
AISA
Translating in-person convening into measurable outcomes
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Gergő Gáspár
Help us solve the talent and funding bottleneck for EA and AIS.
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make it standard :)
Dominique Gian Leonardo