Karsten Brensing
Limited Legal Personhood as a Reversible Safety Instrument
Dr Richard Armitage
Covering the article processing charge for an accepted Analysis paper (Open Access) calling for collaboration between public health and existential risk studies
Zaelani
18+ preprints across multiple fields, all written on a 2GB RAM phone. $600 removes the only thing standing between me and the next body of work.
Aashka Patel
Redirecting India’s Middle‑Schoolers into AI Safety, Governance, and X‑Risk Work
Nicholas Kruus
LLMs could automate intelligence analysis. I wrote the first paper on governing this; $5k buys two months to revise it and scope an org research branch
Euan McLean
Salary & support for one year of leadership of Integral Altruism, a movement bridging EA with wisdom
Mox
An incubator & community space in SF; for doers of good and masters of craft
Jessica Pu Wang
Germany’s talents are critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Matei-Alexandru Anghel
A Safety Framework for Evaluating AI Humanity Alignment Through Progressive Escalation and Scope Creep
Lawrence Wagner
A benchmark for studying how failures spread across multi-agent AI systems and whether they can be detected and interrupted in time.
Pedro Bentancour Garin
Runtime safety, oversight, rollback, and control infrastructure for advanced AI in real-world, high-consequence environments.
Johan Fredrikzon
Designing a Project Funding Proposal
Mateusz Bagiński
One Month to Study, Explain, and Try to Solve Superintelligence Alignment
AISA
Translating in-person convening into measurable outcomes
Suki Krishna
Investigate how LLMs behave in multi-agent environments, particularly how contextual framing and strategic advice can systematically manipulate coordination outcomes
AI Understanding
Amrit Sidhu-Brar
Research on how to navigate the transition to a world with superintelligent AI systems
David Krueger
Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Nutrition labels transformed food safety through informed consumer choice, help me do the same for AI and make this standard :)