Andwar Cheng
Open-source replay tool and benchmark prototype for identifying step-level semantic risk transitions in multi-agent AI traces — CI-backed and independently reproducible.
Joshua Michael Sparks
Stage 0 bridge for a focused-ultrasound research program targeting the neural architecture that maintains chronic suffering.
Ahmed Abdelhamed
Vlad M.
Model-agnostic R-Cycle / 4-voice / Oath-Lock protocol that reduces LLM reactivity at the orchestration layer. DOI-published. Working code. Solo founder.
Detecting the EC–EpC gap in deployed LLMs: when AI systems sound credible but misrepresent their own knowledge boundaries.
Kumari Neha Priya
Urgent funding needed by May 8 for graduate policy training focused on AI governance
Karsten Brensing
Limited Legal Personhood as a Reversible Safety Instrument
Developing enforceable architectural constraints, safety mechanisms, and certification criteria to keep advanced AI systems aligned and non-conscious
Aashka Patel
Inspiring India’s Middle‑Schoolers to pursue AI Safety, Governance, and X‑Risk Work
Mu Zi
This round of funding will be used primarily for prototype hardening, artifact packaging, runtime evaluation, and preparation for external review.
Adin Noel Kelly
Zaelani
18+ preprints across multiple fields, all written on a 2GB RAM phone. $600 removes the only thing standing between me and the next body of work.
Mox
An incubator & community space in SF; for doers of good and masters of craft
Shuo Li Liu
A 12-month axiomatic alignment program: Savage-dominated EU maxima, a Debreu theorem for attention, and a 5× RLHF preference-aggregation gap.
Jessica Pu Wang
Germany’s talent is critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Dr Richard Armitage
A trusted profession that has advocated against existential risks like nuclear war can do so again for AI — but clinicians must first be made aware of the risks
Alex Hakuzimana
Africa's Voice in Global AI Safety
Gaetan Selle
This is a small grant buying a large increase in high-quality Francophone AI risk communication from a creator who already has a track record.
Matei-Alexandru Anghel
A Safety Framework for Evaluating AI Humanity Alignment Through Progressive Escalation and Scope Creep
Euan McLean
Salary & support for 1 year of leadership of Integral Altruism - a movement bridging EA with wisdom