Kateryna Morozovska
Pu Wang (Jessica)
Germany’s talents are critical to the global effort of reducing catastrophic risks brought by artificial intelligence.
Elliot McKernon
A shared framework, case studies, and decision tools to help policymakers and AISIs identify gaps, prioritize interventions, and coordinate AGI readiness.
aya samadzelkava
LLMs scale language, not method. HP turns hypothesis-driven papers into machine-readable maps of variables, controls, statistics, and findings for researchers and AI.
Aashkaben Kalpesh Patel
Nutrition labels transformed food safety through informed consumer choice; help me do the same for AI and make it the standard :)
Haakon Huynh
Adam Boon
An executable reasoning quality framework that checks whether AI-generated arguments are logically sound — not just factually accurate. Live at usesophia.app.
Hayley Martin
Support my postgraduate law studies and research in AI Governance
Gergely Máté
An Interactive Tool for Navigating AI Career Risk
Finn Metz
Funding 5–10 AI security startups through Seldon’s second SF cohort.
Mirco Giacobbe
Developing the software infrastructure to make AI systems safe, with formal guarantees
Theia Vogel
Research, tutorial writing, and open-source libraries & tools for experimenting with language models
Justin Bianchini
A modular gene-editing platform for engineering new pigment patterns in ornamental plants, starting with a vein-pattern rescue line in petunias.
Markus Englund
Continue developing ‘copy-paste-detective’, software that detects signs of data fabrication in scientific research, and run it against 20,000 publicly available Excel datasets.
Jade Master
Developing correct-by-construction world models for verification of frontier AI
David Rozado
An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems
Habeeb Abdulfatah
Seeking funding to secure API infrastructure and permanently eliminate the rate limits bottlenecking open-source EA grant evaluation.
Matthew Farr
I self-funded research into a new threat model; it is demonstrating impact (accepted at multiple venues and added to BlueDot's curriculum).
Galen Wilkerson
Measuring and Visualizing Model Uncertainty During Inference
Warren Johnson
Novel safety failure modes discovered across 7 LLM providers with 35,000+ controlled inference trials. Targeting NeurIPS 2026.