AI forecasting and policy research by the AI 2027 team

AI governance · Forecasting

Jonas Vollmer

Active grant: $15,300 raised of a $500,000 funding goal

Problem: AGI might come within a few years and society isn’t ready
  • Many people in influential positions are barely paying attention to AGI.

  • Hundreds of people are trying to make AGI go well through technical research, influencing governments, launching high-stakes legislation like SB1047, etc. But they often make foreseeable mistakes because they lack a detailed understanding of possible AGI futures, for example underestimating the impact of internal deployment or underweighting the US executive branch.

  • Society isn’t ready for AGI and the associated risks.

Solution: 
  1. High-quality, full-stack futurism. Example: AI 2027.

    1. ‘High-quality’: Informed guesses based on deep thinking, trend extrapolations, detailed spreadsheets, industry rumors, conversations with experts, and tabletop exercises. (For a toy illustration of the trend-extrapolation step, see the sketch below this list.)

    2. ‘Full-stack’: A holistic scenario forecast covering not just compute/benchmark trends but also real-world capabilities and applications, alignment outcomes, takeoff models, and geopolitics.

  2. Recommendations to make AGI go better. Example: 4 Ways to Advance Transparency in Frontier AI Development.

  3. Informing the world about our forecasts and recommendations. Example: Tabletop exercises for AGI companies and policymakers.
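
To make the ‘trend extrapolations’ ingredient above concrete, here is a minimal Python sketch of the kind of calculation involved: fit an exponential to the length of tasks an AI can complete, then read off when the trend crosses a chosen threshold. All data points, the threshold, and the dates are illustrative assumptions, not our actual inputs.

```python
# Toy trend extrapolation: exponential fit to AI task time horizons.
# All numbers below are hypothetical, for illustration only.
import math

# (years since 2020, task horizon in hours an AI can complete)
observations = [(0.0, 0.05), (2.0, 0.2), (3.0, 0.7), (4.5, 2.5)]

# Closed-form least-squares fit of log(horizon) = a + b * t,
# i.e. exponential growth of the horizon over time.
ts = [t for t, _ in observations]
ys = [math.log(h) for _, h in observations]
t_mean = sum(ts) / len(ts)
y_mean = sum(ys) / len(ys)
b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, ys)) / sum(
    (t - t_mean) ** 2 for t in ts
)
a = y_mean - b * t_mean

# The horizon doubles every ln(2)/b years.
print(f"Fitted doubling time: {12 * math.log(2) / b:.1f} months")

# Extrapolate: when does the horizon reach ~1 month of work (160 hours)?
threshold_hours = 160
t_cross = (math.log(threshold_hours) - a) / b
print(f"Trend crosses {threshold_hours}h around {2020 + t_cross:.1f}")
```

In our actual work, extrapolations like this are only one input among many; the spreadsheets, expert conversations, and tabletop exercises listed above serve to sanity-check and adjust the raw trends.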

Track record

In April 2025, we released AI 2027, a detailed AGI scenario forecast. The reception included positive reactions both from people with similar views (e.g. Yoshua Bengio, Jack Clark) and from people with dissimilar views (e.g. Dean Ball), a Dwarkesh podcast, a Lawfare podcast, an NYT piece, ~1M unique website visitors, and well-attended talks at influential organizations. We’ve also run about 35 AGI tabletop exercises (TTXs), demand for which has spread solely by word of mouth from satisfied participants. Two participants told us that the TTX caused them to shift their career path.

Funding opportunity

Some excellent researchers are potentially interested in working with us. We require $1.9–4.7M so we can make 0–5 job offers and continue our work through the end of 2026. The budget below covers all of 2025–2026; the yellow row shows our funding gap from now through the end of 2026.

Planned activities

In April–June 2025, we’ll test out different activities and then pursue the ones that seem most promising. Funding us is a bet on our team and the broad types of work described above rather than a bet on any specific activities. That said, here are some activities ordered by how likely we are to make them a priority in the second half of 2025 (with ≥1 FTE):

  • Likely priority (50–75% probability)

    • AGI tabletop exercise (TTX). Informed by AI 2027, we’ve developed a TTX that takes ~4 hours and involves 8–14 players who take on roles such as the US president, China, or the CEO of the leading AI company, aiming to accurately simulate their actor. We’ve run about 30 of these so far, to overall very positive reviews. In May, we will run several TTXs in DC for senior officials, and if these go well we may scale up to reach hundreds of DC folks in 2025.

    • Endorsed scenario ending and AGI policy playbook. We will write a new “endorsed” ending to AI 2027, in which the US executive branch responds wisely to AGI. The supplements to this ending will go into more detail about the government’s actions and options, constituting a “playbook” for what the US executive branch should do once AGI is achieved. We’re currently working on an initial version.

    • Frequent posts on our blog. Our blog has 5,000 subscribers thanks to AI 2027; our team (plus Scott Alexander) will cover forecast updates, near-term AI policy ideas, TTX results, and similar topics. For representative past content, see Daniel’s transparency article, Scott’s time horizon explainer, and Daniel’s article about training AGI in secret.

    • Policy and media engagement. We will continue to respond to inbound requests to take meetings with influential policymakers or media outlets, e.g., meeting with congresspeople or going on major podcasts and YouTube channels.

  • Potential priority (25–50% probability)

    • New AI 2027 scenario endings. We might write new endings beyond the “race” and “slowdown” endings. We’re especially excited about adding an ending with longer timelines, to communicate that we aren’t confident AGI will arrive by 2027 or soon after.

    • Improvements to key forecasts like our AI 2027 research on timelines, takeoff, etc.

    • An activity not listed here. As noted above, funding us is a bet on our team and the broad types of work described in the “Solution” section rather than on any specific activities; we might adopt a priority not mentioned here.

Our team
  • Daniel Kokotajlo, Executive Director: Daniel oversees the AI Futures Project. He previously worked at OpenAI as a governance researcher focused on scenario planning. In August 2021, Daniel predicted the fine-tuning of language models into chatbots, scaling to models costing >$100 million, chain-of-thought, reasoning models, and more. See also his Time100 AI 2024 profile.

  • Eli Lifland, Researcher: Eli specializes in forecasting AI capabilities. He also co-founded and advises Sage, which builds interactive AI explainers. He ranks #1 on the RAND Forecasting Initiative leaderboard.

  • Thomas Larsen, Researcher: Thomas specializes in forecasting AIs’ goals and AI policy. He previously founded the Center for AI Policy.

  • Romeo Dean, Researcher: Romeo specializes in forecasting AI chip production and usage.

  • Jonas Vollmer, COO: Jonas focuses on communications and operations. Jonas has 11 years of experience founding and leading AI safety nonprofits.

  • Advisors with policy and communications experience, whom we consult before making highly consequential decisions.

AI Futures Project Inc is a 501(c)(3) public charity (EIN 99-4320292, previously Artificial Intelligence Forecasting Inc) and accepts donations here on Manifund, on Every.org, or through major DAF providers.

Donations

  • Neel Nanda: $5,000 (2025-05-07)
  • Ryan Kidd: $5,000 (2025-05-07)
  • Gaurav Yadav: $30 (2025-05-07)
  • Michael Chen: $250 (2025-05-07)
  • Austin Chen: $5,000 (2025-05-07)
  • nikki: $20 (2025-05-07)