Project summary
Fund 18 months of salary for a senior strategy consultant to provide pro bono support to individuals in government tasked with preparing AI policy events (such as international summits) and/or legislation central to AI safety.
What are this project's goals and how will you achieve them?
Background
Key decision moments for pioneering legislation or institutions tend to set the course of a whole policy field. Yet these windows of opportunity can fail to deliver transformative policy simply because bureaucrats and legislative staff face serious constraints on their time and expertise at these decisive moments.
Such conditions are observable in AI governance. Recent examples include the preparation of the UK AI Safety Summit and the final negotiations of the EU AI Act. For civil servants and political appointees, even a little more support and time to think through their next move in negotiations, or to contact an additional ally, can dramatically alter the impact of a policy or international treaty.
Much of the structure of the AI policy field is still undecided. New units and departments typically have small, understaffed teams. Staff may have expertise in policy development or expertise in AI, but often not both. This leaves individual staff members facing very challenging circumstances while also holding extraordinary leverage.
While policymakers are very interested in receiving outside support, the complicated and lengthy processes for accessing, and especially hiring, external consultants tend to block help exactly when it is needed most.
Proposal
Our activities can be roughly separated into four recurring phases:
1. Scout for upcoming windows of opportunity. These include international summits (such as the AI Action Summit in France in 2025), landmark legislative processes, budget decisions, etc. We focus on a small subset of opportunities (2-3 per year) and foster relationships with AI safety-focused insiders.
2. Support preparation for the decision-making moment. One example: developing a coalition-building strategy, including identifying influential new allies within the institution, e.g., organic leaders with authority on the issue at hand, and working out how their motivations and values can be connected to AI safety.
3. Maximize the chances of success during the negotiation or summit: help with rapid problem-solving in short one-on-one consultations, quick background research to strengthen the next iteration of a proposal, and briefs and short simulations to prepare for the next round of negotiations.
4. Debrief and hold learning sessions to strengthen the chances of success in the next window of opportunity.
How will this funding be used?
Senior Strategy Consultant salary & benefits (1.5 years): €200,000
Future Matters Managing Director (0.2 FTE): €20,000
Operations overhead: €30,000
Total: €250,000
Who is on your team and what's your track record on similar projects?
Through our strategy consulting work across the AI safety and governance field, Future Matters has both a deep understanding of the AI landscape and access to individuals inside government institutions who aim to advance AI safety. To provide support in critical decision-making moments, we can draw on a library of over 300 social science techniques for creating policy change. In addition to this research-backed expertise, we can build on substantial experience in policy.

Kyle Gracey, who leads Future Matters’ AI safety & governance strategy consulting, brings 17 years of combined experience in consulting, policy, politics, and social movements. They previously held positions in the United States government and military, including serving then-Vice President Joe Biden, and were originally trained in computer science. Working across global catastrophic risks in climate and biosecurity enables us to draw lessons from key decision-making moments and trajectory changes in these more mature policy fields.
Using this expertise, Future Matters has already been able to contribute to policymaking efforts at critical moments. Recent examples include:
Getting ideas into legislative processes:
AI: Connected a client with a staffer in the US Congress to discuss substantially increasing public spending on technical AI safety. As an outcome of this conversation, the client was invited to submit a legislative proposal to Congress, which we helped research.
Climate: Within just a few months, we used our expertise in power-building to build support for the EU policy recommendations from our climate policy prioritization research. Among those now actively engaging with our recommendations for the 5-year EU climate agenda are Kurt Vandenberghe (the EU’s “climate secretary”), two Directors at the EU’s Directorate-General for Climate Action, Jacob Werksman (the EU’s COP negotiator), influential Members of the European Parliament, German diplomats, and four former EU climate policy leaders.
Providing insight: When open letters sparked a sudden surge of interest in AI safety, Future Matters quickly produced briefings on normalizing AI risks and on using public moments to advance AI safety, to support AI safety actors.
Coordinating around major events: Developed and coordinated communication goals for leading AI safety organizations participating in the UK AI Safety Summit, ensuring a robust shared set of key messages.
What are the most likely causes and outcomes if this project fails? (premortem)
We might fail to identify policymakers who are interested in receiving our services, and/or we might fail to convince policymakers that our services are valuable enough for them to want to work with us. Relatedly, we might build relationships with policymakers, but too slowly to be useful in the key moments where our support would have been most helpful.
We might encounter issues on which we do not have sufficient knowledge to be useful to policymakers.
The outcome in all of these cases is that policymakers will not receive our support, or at least will not receive it in the most critical moments. This is essentially today’s status quo: policymakers will have less help than they ideally want to advance AI governance, leading to less, and lower-quality, AI governance being developed.
What other funding are you or your project getting?
We have not yet received any other funding for this project.