Impactful giving,
efficient funding.
Manifund offers charitable funding infrastructure designed to improve incentives, efficiency, and transparency.
AI Safety Regranting
Including projects in all stages and from all rounds.
Oliver Klingefjord
Develop an LLM-based coordinator and test it against consumer spending with 200 people.
Jesse Hoogland
Addressing Immediate AI Safety Concerns through DevInterp
Brian Tan
Fund WhiteBox Research (~4 FTE for 9 months), mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila.
Adam Shai
Fund a new research agenda based on computational mechanics, bridging mechanism and behavior to develop a rigorous science of AI systems and capabilities.
Lovis Heindrich
Shreeda Segan
We know what works. Let's make sure it looks good.
Francisco Carvalho
Constance Li
Request for Retroactive Funding
Fazl Barez
Unlearning, AI Safety
Garrett Baker
Francis Dierick
Online platform where AIs and humans race to solve puzzles.
PIBBSS
Fund unique research approaches, field diversification, and the scouting of novel ideas by experienced researchers, supported by the PIBBSS research team.
Max Kaufmann
Grace Braithwaite
A Cambridge Biosecurity Hub and Cambridge Infectious Diseases Symposium on Avoiding Worst-Case Scenarios
Tianyi (Alex) Qiu
Early-stage exploration, agenda-setting, technical infrastructure, and community building.
Kyle Gracey
Strategy Consulting Support for AI Policymakers
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Dan Hendrycks
Tom McGrath
Find the best SAE training settings we can, then scale across models.
Florent Berthet
French center for AI safety