Nathan Thornhill
An ORCID-gated submission pipeline where a multi-model AI panel plus quality-control layer delivers rigorous peer review without institutional gatekeeping.
Emma Humphrey
US$5,000 to bring 16 vetted academics and policy leads to NZ's first AI Safety Conference, ensuring national representation and cross-sector collaboration.
Vangelis Gkagkelis
A 6-month pilot testing probabilistic forecasting for AI, misinformation, institutional trust, and social risk in Greece.
Fatika Umar Ibrahim
The first AI safety evaluation benchmark for Nigerian indigenous livestock systems, testing whether frontier models are safe to deploy in African food systems.
Tom Bibby
Social media content across YouTube, Instagram, and TikTok to grow AI x-risk awareness and build political momentum for a global pause.
Atmadeep Ghoshal
Requesting funding for an ICML 2026 spotlight position paper on ML safety for combating intimate partner violence.
Sankalp Gilda
Two co-authored workshop papers (LLM reasoning, agentic-AI accountability), presented April 2026 in Rio. Asking for partial trip reimbursement.
Modeling Cooperation
Software tools and research to quantify coordination failures and inform policy decisions.
Matthew A Cator
Funding the open-source launch of a working claim-state system and the local firewall bridge that carries verification-before-voice into governed agent action.
Alex Kwon
If your reward model is an LLM, you cannot tell whether the policy is gaming the reward or actually getting better. We built a simulator instead.
Karsten Brensing
Limited Legal Personhood as a Reversible Safety Instrument
Kumari Neha Priya
Urgent funding needed by May 8 for graduate policy training focused on AI governance
Aashka Patel
Inspiring India's Middle-Schoolers to pursue AI Safety, Governance, and X-Risk Work
Developing enforceable architectural constraints, safety mechanisms, and certification criteria to keep advanced AI systems aligned and non-conscious
Mu Zi
This round of funding will be used primarily for prototype hardening, artifact packaging, runtime evaluation, and preparation for external review.
AI Understanding
Building the first browser-based digital laboratory for interactive AI Safety education and failure-mode discovery.
Jessica Pu Wang
Germany's talent is critical to the global effort to reduce catastrophic risks posed by artificial intelligence.
Ida-Emilia Kaukonen
A 15,000+ page corpus on long-term interaction, symbolic language, unusual model behavior, and safety edge cases.
Dr Richard Armitage
A trusted profession that has advocated against existential risks like nuclear war can do so again for AI — but clinicians must first be made aware of the risks
Alex Hakuzimana
Africa's Voice in Global AI Safety