Approving this project as it fits our criteria of "charitable and few downsides". I think publishing a forecast on the effects of an AI treaty could be very helpful. I am more skeptical of "running an open letter urging governments to coordinate to make an AI safety treaty" -- I'd highly encourage working with other players in the AI governance space, as otherwise I expect the impact of an open letter to be ~nil. (Maybe that was already the plan, in which case, great!)
Forecasting - AI Governance Policies
Project summary
Samotsvety Forecasting produced conditional forecasts focusing on two potential policies that seek to reduce AI risk. The first, Policy PAUSE, involves the implementation of the 6-month moratorium on training AI systems more powerful than GPT-4, as recommended by the FLI open letter.
The second policy concerns the signing or implementation of a proposed AI treaty. It includes provisions on banning large AI training runs, dismantling large GPU clusters, and promoting international cooperation and capacity building in AI safety.
Tolga, a member of Samotsvety, wants to continue improving the forecasts and the treaty.
You can see the treaty, the forecasts, and related materials by emailing contact@bilge.no
What are this project's goals and how will they be achieved?
Draw attention to the need for international coordination to regulate AI progress, especially by preventing unsafe development (e.g. via a global moratorium)
Make progress on what this treaty could actually say, so that once the international community begins working on it, there is already a good model to draw from.
Figure out what is best/most important to include in a potential treaty and what is worth especially fighting for.
How will this funding be used?
Professional web developer: $2k
Web hosting for 1 year: $1k
Retrospective compensation for forecasters' work: $2k
Forecasters' compensation for the next project: $2k
Compensation for Tolga's and others' work on treaty: $2k
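Total requested: $9k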
Who is on the team and what's their track record on similar projects?
Tolga is part of a leading forecasting team, Samotsvety. He also has a track record with several other groups.
Simeon Campos, Akash Wasil, and Olivia Jimenez will also be part of this project, helping to spread and implement the results of the forecasts; each has their own track record in this area.
How could this project be actively harmful?
It could accelerate the drive towards an early AI treaty, but that drive might result not in a strong AI treaty like the one we are suggesting, but in a much weaker one that makes it harder to get a strong AI treaty later.
If we are maximally successful in getting a strong AI treaty agreed, the institutions that are set up could become captured by badly motivated or badly incentivized people.
What are the most likely causes and outcomes if this project fails? (premortem)
Doesn't get a lot of traction.
What other funding is this person or project getting?
This project hasn't had any funding so far, and all work done so far has been done on a voluntary basis. Tolga has not received any grants. Samotsvety has received retrospective funding for a couple of previous forecasting projects (e.g. Nuclear forecasting), but is currently unfunded. Tolga is part of some other forecasting groups (Swift Centre, INFER, Good Judgment), from which he earns about $1k per month in total.
Tolga Bilge
over 1 year ago
Thank you, that is indeed the plan and we have been doing this. We wouldn't go ahead with the open letter without significant support from AI safety and governance people. I agree with your assessment that running an open letter without support would be extremely unlikely to be successful.

Marcus Abramovitch
over 1 year ago
Main points in favor of this grant
Forecasting conditional policy effects and forecasting important questions are among the best uses of forecasters, in my opinion.
It's good to see forecasters' work being used for something actionable.
I want to encourage forecasters to do more of this kind of work.
I'm not aware of anyone else trying to quantify the impacts of different AI policies, which seems important.
Tolga seems quite bright and is a top superforecaster.
The grant is just very cheap compared to what it could accomplish.
I am hopeful that, if this is successful, Tolga will be doing a lot more of this kind of work when he graduates: making forecasts useful for AI policy.
Donor's main reservations
I think there is an order of magnitude too much funding going towards forecasting, and I am worried about contributing to this. However, this isn't the type of forecasting grant I am against (I am mainly against blindly funding forecasting platforms).
I'm skeptical of how much genuine forecasting work can be done in areas like this, versus just guessing and putting numbers on things to make them seem more credible when those numbers can be fairly meaningless.
Process for deciding amount
The amount Tolga wanted for this was very reasonable. Everyone is working at below-market-rate salaries or for free. I didn't want there to be a tradeoff between compensating previous forecasting work and ensuring there was some funding for the next forecasting steps.
Conflicts of interest
None.