
AI forecasting and policy research by the AI 2027 team

AI governance · Forecasting

Jonas Vollmer

Active grant
$15,300 raised
$500,000 funding goal

Problem: AGI might come within a few years and society isn’t ready
  • Many people in influential positions are barely paying attention to AGI.

  • Hundreds of people are trying to make AGI go well through technical research, influencing governments, launching high-stakes legislation like SB1047, etc. But they often make foreseeable mistakes due to lacking a detailed understanding of AGI futures, for example underestimating the impact of internal deployment or underweighting the US executive branch.

  • Society isn’t ready for AGI and the associated risks.

Solution: 
  1. High-quality, full-stack futurism. Example: AI 2027.

    1. ‘High-quality’: Informed guesses based on deep thinking, trend extrapolations, detailed spreadsheets, industry rumors, conversations with experts, and tabletop exercises.

    2. ‘Full-stack’: A holistic scenario forecast covering not just compute/benchmark trends but also real-world capabilities/applications, alignment outcomes, takeoff models, and geopolitics.

  2. Recommendations to make AGI go better. Example: 4 Ways to Advance Transparency in Frontier AI Development.

  3. Informing the world about our forecasts and recommendations. Example: Tabletop exercises for AGI companies and policymakers.

Track record

In April 2025, we released AI 2027, a detailed AGI scenario forecast. The reception included positive reactions from both people with similar views (e.g. Yoshua Bengio, Jack Clark) and dissimilar views (e.g. Dean Ball), a Dwarkesh podcast, a Lawfare podcast, an NYT piece, ~1M unique website visitors, and well-attended talks at influential organizations. We’ve also run about 35 AGI tabletop exercises (TTXs), demand for which spread solely by word of mouth from satisfied participants. Two people told us that the TTX caused them to shift their career path.

Funding opportunity

Some excellent researchers are potentially interested in working with us. We require $1.9–4.7M to make 0–5 job offers and continue our work through the end of 2026. The budget below covers all of 2025–2026; the yellow row shows our funding gap from now through the end of 2026.

Planned activities

In April–June 2025, we’ll be testing out different activities; then we’ll pursue the ones that seem most promising. Funding us is a bet on our team and the broad types of work described above rather than a bet on any specific activities. That said, we list some activities below, ordered by how likely we are to make them a priority in the second half of 2025 (with ≥1 FTE):

  • Likely priority (50–75% probability)

    • AGI tabletop exercise (TTX). Informed by AI 2027, we’ve developed a TTX that takes ~4 hours and involves 8–14 players, who take on different roles, such as the US president, China, or the CEO of the leading AI company, and aim to simulate their actor accurately. We’ve run about 30 of these thus far, to overall very positive reviews. In May, we will run several TTXs in DC for senior officials, and if they go well we may scale up to reach hundreds of DC folks in 2025.

    • Endorsed scenario ending and AGI policy playbook. We will develop the playbook alongside a new “endorsed” ending to AI 2027, in which the US executive branch responds wisely to AGI. The supplements to this ending will go into more detail about the government’s actions and options, constituting a “playbook” for what the US executive branch should do once AGI is achieved. We’re currently working on an initial version.

    • Frequent posts on our blog. Our blog has 5,000 subscribers thanks to AI 2027; our team (plus Scott Alexander) will cover forecast updates, near-term AI policy ideas, TTX results, and similar topics. For representative past content, see Daniel’s transparency article, Scott’s time horizon explainer, and Daniel’s article about training AGI in secret.

    • Policy and media engagement. We will continue to respond to inbound requests to take meetings with influential policymakers or media outlets, e.g., meeting with congresspeople or going on major podcasts and YouTube channels.

  • Potential priority (25–50% probability)

    • New AI 2027 scenario endings. We might write new endings beyond the “race” and “slowdown” endings. We’re especially excited about adding an ending with longer timelines to communicate that we aren’t confident in AGI by 2027 or soon after.

    • Improvements to key forecasts like our AI 2027 research on timelines, takeoff, etc.

    • An activity not listed here. As noted above, funding us is a bet on our team and the broad types of work described in the “Solution” section rather than on any specific activities, so we might adopt a priority not mentioned here.

Our team
  • Daniel Kokotajlo, Executive Director: Daniel oversees the AI Futures Project. He previously worked as a governance researcher at OpenAI on scenario planning. In August 2021, Daniel predicted the fine-tuning of language models as chatbots, scaling to >$100 million models, chain-of-thought, reasoning models, and more. See also his Time100 AI 2024 profile.

  • Eli Lifland, Researcher: Eli specializes in forecasting AI capabilities. He also co-founded and advises Sage, which builds interactive AI explainers. He ranks #1 on the RAND Forecasting Initiative leaderboard.

  • Thomas Larsen, Researcher: Thomas specializes in forecasting AIs’ goals and AI policy. He previously founded the Center for AI Policy.

  • Romeo Dean, Researcher: Romeo specializes in forecasting AI chip production and usage.

  • Jonas Vollmer, COO: Jonas focuses on communications and operations. Jonas has 11 years of experience founding and leading AI safety nonprofits.

  • Advisors with policy and communications experience, whom we consult before making highly consequential decisions.

AI Futures Project Inc is a 501(c)(3) public charity (EIN 99-4320292, previously Artificial Intelligence Forecasting Inc) and accepts donations here on Manifund, on Every.org, or through major DAF providers.

Comments (4) · Donations (6)
donated $5,000

Austin Chen

1 day ago

Approving this proposal. AI Futures Project has been one of the most hyped efforts in my circles for a while now, and for good reason: it brings together some of the most accomplished individuals in the AI safety scene, working under a single banner. For this reason alone, I think the team is worth taking a bet on.

They've also already made their mark with AI 2027. I'd gotten a sneak peek and had actually been a bit unimpressed: I had high expectations, but imo the preview suffered from "too many cooks in the kitchen" wrt writing & site design. But by launch, AIFP had upped their game, with a polished product that's been well & widely received. It's certainly shaped how I think about the next few years of AI development. Kudos to the team for being willing to share an early v0, and then iterating to make it better over time!

I think the people working on this are super smart and probably know what they're doing, but I figured I'd throw in my unsolicited 2c:

  1. It seems like the core team is already heavy on researchers, so it's unclear to me that hiring more researchers is the right strategic move, vs. investing in roles that can produce great content for a wide audience. Right now their plan seems to be to partner with really great folks (e.g. Scott for writing, Oli for website design, Dwarkesh for podcasts), and it seems to be working so far, but I would guess that having in-house expertise on this could be super valuable, much more so than a marginal researcher.

  2. Specifically with the TTX, I haven't played through one myself, but my understanding is that it's currently costly to run (requiring an in-person expert facilitator). I'd be pretty excited about ways to automate that, scale it out, and get much wider distribution, e.g. by shipping an interactive web experience powered by LLMs, or packaging it as a commercial board game.

Anyways, AIFP is one of the most exciting efforts I'm currently tracking; I've made a small personal donation as a show of support. I expect that AIFP will be amply funded by larger parties like OpenPhil and SFF, and as Neel says, is not really in my comparative advantage; but I still think that independent donations are valuable for diversifying funding streams.


Jonas Vollmer

1 day ago

@Austin Your suggestion #2 is on our roadmap as a thing we might work on (an online version played by humans, an online LLM-powered version, and an offline board game you can order)!

donated $5,000

Neel Nanda

3 days ago

I don't believe AI 2027 will be my comparative advantage as a regranter, but I think AI 2027 is great, they're an excellent place to donate money, and I wanted to put my money where my mouth is with a token donation.

I was very impressed with AI 2027. While I don't agree with all of the assumptions, and it's substantially faster than I predict, I found it scarily plausible. It was very productive for concretising my own thoughts and making me more aware of key considerations, for example, whether an AI will try to align its successor. I think this kind of writing is valuable and I would love to see more.

It also seems to have been very well received publicly (much more so than I expected), getting a ton of attention while being much higher quality than the messages that many people may have previously been exposed to. The team has an excellent track record, and I think the success of AI 2027 gives them significant momentum, in addition to Daniel's profile and positive reputation as a whistleblower who tried to walk away from large amounts of money, so it's plausible to me that they can get a good amount of public/policy attention. Though I would be much more optimistic about this if they hired someone with substantial policy/lobbying experience.

While I don't expect to agree with the team on everything, I broadly think they have good judgment, care about the correct things, and will do things that are valuable and helpful. I view this donation and endorsement as a bet on the team and their future judgment, more so than any specific future plan.


Jonas Vollmer

1 day ago

@NeelNanda Thanks so much Neel, appreciate it!