Problem: AGI might come within a few years and society isn’t ready
Many people in influential positions are barely paying attention to AGI.
Hundreds of people are trying to make AGI go well through technical research, influencing governments, launching high-stakes legislation like SB1047, etc. But they often make foreseeable mistakes because they lack a detailed understanding of AGI futures, for example underestimating the impact of internal deployment or underweighting the US executive branch.
Society isn’t ready for AGI and the associated risks.
Solution:
High-quality, full-stack futurism. Example: AI 2027.
‘High-quality’: Informed guesses based on deep thinking, trend extrapolations (a minimal sketch follows this list), detailed spreadsheets, industry rumors, conversations with experts, and tabletop exercises.
‘Full-stack’: Holistic scenario forecasting that covers not just compute/benchmark trends but also real-world capabilities/applications, alignment outcomes, takeoff models, and geopolitics.
Recommendations to make AGI go better. Example: 4 Ways to Advance Transparency in Frontier AI Development.
Informing the world about our forecasts and recommendations. Example: Tabletop exercises for AGI companies and policymakers.
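As a rough illustration of the ‘trend extrapolations’ ingredient above, here is a minimal Python sketch that fits an exponential trend to a capability metric and projects it a few years forward. The data points are hypothetical placeholders, not figures from AI 2027 or our spreadsheets.

```python
# Minimal trend-extrapolation sketch. The metric values below are
# hypothetical placeholders chosen for illustration, not real data.
import numpy as np

years = np.array([2020, 2021, 2022, 2023, 2024])
metric = np.array([1.0, 2.1, 4.3, 8.8, 17.5])  # roughly doubling each year

# Fit a straight line to log(metric), i.e. an exponential trend in the
# metric itself; centering the years keeps the fit well conditioned.
t = years - 2020.0
slope, intercept = np.polyfit(t, np.log(metric), deg=1)

# Extrapolate the fitted exponential a few years forward.
for year in range(2025, 2028):
    projected = np.exp(intercept + slope * (year - 2020))
    print(f"{year}: projected metric ~ {projected:.1f}")
```

Fitting a line to the log of the metric is a standard way to extrapolate an exponential trend; real forecasts layer uncertainty estimates, expert judgment, and takeoff models on top of simple fits like this.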
Track record
In April 2025, we released AI 2027, a detailed AGI scenario forecast. The reception included positive reactions both from people with similar views (e.g. Yoshua Bengio, Jack Clark) and from people with dissimilar views (e.g. Dean Ball), a Dwarkesh podcast, a Lawfare podcast, an NYT piece, ~1M unique website visitors, and well-attended talks at influential organizations. We’ve also run about 35 AGI tabletop exercises (TTXs), demand for which spread solely by word of mouth from satisfied participants. Two people told us that the TTX caused them to shift their career paths.
Funding opportunity
Update Sep 2025: We've recently received a $1.44M grant from the Survival and Flourishing Fund (SFF) and another $3.05M in funding from private donors. This means most of our funding gap has been filled. The next $500K in donations to the AI Futures Project will be matched by SFF.
Some excellent researchers are potentially interested in working with us. We need $1.9–4.7M, depending on how many job offers we make (0–5), to continue our work through the end of 2026. The budget below covers all of 2025–2026; the yellow row shows our funding gap from now through the end of 2026.

Planned activities
In April–June 2025, we’ll test out different activities and then pursue the ones that seem most promising. Funding us is a bet on our team and the broad types of work described above rather than on any specific activities. That said, we list some activities below, ordered by how likely we are to make them a priority in the second half of 2025 (with ≥1 FTE):
Our team
Daniel Kokotajlo, Executive Director: Daniel oversees the AI Futures Project. He previously worked as a governance researcher at OpenAI on scenario planning. In August 2021, Daniel predicted the fine-tuning of language models as chatbots, scaling to models costing >$100 million, chain-of-thought, reasoning models, and more. See also his Time100 AI 2024 profile.
Eli Lifland, Researcher: Eli specializes in forecasting AI capabilities. He also co-founded and advises Sage, which builds interactive AI explainers. He ranks #1 on the RAND Forecasting Initiative leaderboard.
Thomas Larsen, Researcher: Thomas specializes in forecasting AIs’ goals and AI policy. He previously founded the Center for AI Policy.
Romeo Dean, Researcher: Romeo specializes in forecasting AI chip production and usage.
Jonas Vollmer, COO: Jonas focuses on communications and operations. Jonas has 11 years of experience founding and leading AI safety nonprofits.
We also consult advisors with policy and communications experience before making highly consequential decisions.
AI Futures Project Inc is a 501(c)(3) public charity (EIN 99-4320292, previously Artificial Intelligence Forecasting Inc) and accepts donations here on Manifund, on Every.org, or through major DAF providers.