Pedro Bentancour Garin
Building early AI governance and safety systems focused on alignment, oversight and risk reduction before more capable AI arrives.
Emma Humphrey
$5,000 USD to bring 16 vetted academics and policy leads to NZ's first AI Safety Conference, ensuring national representation and cross-sector collaboration
Leticia Prados
Designing liability, insurance and fiduciary mechanisms for frontier AI using commercial space law as a structurally precise comparative framework
Salvatore Barbera
Building the missing public-mobilisation layer for AI safety in Italy and Southern Europe, starting with autonomous weapons and youth AI literacy.
Andwar Cheng
Open-source replay tool and benchmark prototype for identifying step-level semantic risk transitions in multi-agent AI traces — CI-backed and independently reproducible
Karsten Brensing
Limited Legal Personhood as a Reversible Safety Instrument
Aashka Patel
Inspiring India’s Middle‑Schoolers to pursue AI Safety, Governance, and X‑Risk Work
Joshua Michael Sparks
Stage 0 bridge for a focused-ultrasound research program targeting the neural architecture that maintains chronic suffering.
Jessica Pu Wang
Germany’s talents are critical to the global effort of reducing catastrophic risks brought by artificial intelligence.
Mox
An incubator & community space in SF for doers of good and masters of craft
Mu Zi
This round of funding will be used primarily for prototype hardening, artifact packaging, runtime evaluation, and preparation for external review.
Kumari Neha Priya
Urgent funding needed by May 14 for graduate policy training focused on AI governance
Adin Noel Kelly
Developing enforceable architectural constraints, safety mechanisms, and certification criteria to keep advanced AI systems aligned and non-conscious
Dr Richard Armitage
A trusted profession that has advocated against existential risks like nuclear war can do so again for AI — but clinicians must first be made aware of the risks
Matei-Alexandru Anghel
A Safety Framework for Evaluating AI Humanity Alignment Through Progressive Escalation and Scope Creep
Shuo Li Liu
A 12-month axiomatic alignment program: Savage-dominated EU maxima, a Debreu theorem for attention, and a 5× RLHF preference-aggregation gap.
Gaetan Selle
This is a small grant buying a large increase in high-quality Francophone AI risk communication from a creator who already has a track record.
Euan McLean
Salary & support for 1 year of leadership of Integral Altruism, a movement bridging EA with wisdom
Zaelani
18+ preprints across multiple fields, all written on a 2GB RAM phone. $600 removes the only thing standing between me and the next body of work.