A six-month project that explores how historical nuclear policies and frameworks can inform AI governance, assessing the impact of policy measures like "Atoms for Peace," the Nuclear Non-Proliferation Treaty, and the International Atomic Energy Agency (IAEA). This work will analyze existing literature and debates on AI governance, providing strategic insights into emerging policies and their potential outcomes.
This project will take the shape of an overview and critique of historical and contemporary nuclear policy and of broadly analogous policies for AI. The current plan is to break the work down by specific policies in the nuclear space. A brief example of what this could look like for the set of US policies referred to as "Atoms for Peace" can be found in A Critical Review of "Chips for Peace": Lessons from "Atoms for Peace"; that report was a quick attempt produced for a hackathon, and there is considerable scope for improvement throughout.
The output will most likely take the form of a blog post for each set of policies, followed by an overall review at the conclusion of the project. The initial phase (approx. one month) will involve broad discovery: finding and categorizing literature and drafting rough notes on the topics to be covered. After that, the project will proceed in roughly monthly cycles of literature review, writing, feedback, and publication for each topic. Currently proposed topics are "Atoms for Peace", the Nuclear Non-Proliferation Treaty, and the IAEA, though this list will likely be expanded during the discovery phase. The final phase, likely 15-20 days, will involve synthesizing the prior outputs into a consolidated publication.
The decision to publish in a monthly, single-topic format before a final consolidation was made to avoid scope creep leading to a lack of output. It also allows us to set hard deadlines and maintain momentum through continuous feedback.
Most of the funding (100% up to $5k) will go toward our salary. Further funding will be partially used for travel to relevant conferences and events, visits to other researchers, and subscriptions to relevant closed-access journals, magazines, and books. Leftover funds will top up our salary and allow us to dedicate further time to the project.
We will work on this project regardless of funding; funding lets us dedicate time to the project rather than securing an income elsewhere, and gives us access to better resources. If the project is fully funded, a minimum of 1.5 FTE will be dedicated to it across the two of us.
I have a background in AI safety, governance, and machine learning, having worked as a Model Interaction Contractor at METR and as a course facilitator for the AI governance course run by BlueDot Impact. I was part of a team that developed an Autonomous Military Systems Conflict Avoidance Framework (AMSCAF) at the 2023 Yale Policy Institute Hackathon, where we placed 4th. I also authored the previously mentioned report, A Critical Review of "Chips for Peace": Lessons from "Atoms for Peace".
I will be working on this in collaboration with Jayat Joshi. Jayat has a background in policy and strategy, having specialized in AI and public policy at institutions such as Axiom Futures and the Mercatus Center at George Mason University. Together with two other team members, we previously produced a report titled "A Landscape Analysis of AI Governance in India" as part of Future Academy v2.0, earning 3rd place in the project phase.
Together, we have the technical knowledge, policy insight, and strategic understanding necessary to carry out this project successfully.
There are no likely adverse outcomes if this project fails; the main cost would be the absence of its outputs.