Last year, a collaborator and I summarised every live project in AI safety: we tried to understand each project's theory of change, listed its outputs, personnel, and funding, and wrote an accompanying editorial.
We talked to a couple dozen researchers to check our glosses and get their views. The post was well-received (100 karma on the Alignment Forum, which is very rare) and is now, for example, a standard intro resource at 80,000 Hours. We did it pro bono (or rather, we failed to obtain retroactive funding).
We want to update the review for 2024: progress, shutdowns, trends, and our takes.
The original goal was to help new researchers orient and understand their options, to help everyone see where things stand, and to help funders quickly check what has already been funded. Simply collecting all the links in one place was perhaps half of the value.
This iteration: the same as above, but incorporating last year's feedback and seeking sign-off from more than 50% of those covered. We will also produce a professionalised version suitable for policy audiences.
$8K: bare-bones update (~80 hours). Skim everything, reuse last year's taxonomy, and solicit corrections in the comments.
$13K: much more effort on verifying details and seeking consensus among those covered, plus more editorial and synthesis.
$17K: add a section on academic and outgroup efforts, plus a glossy formal report optimised for policy audiences.
Wages.
Gavin and Stag produced last year's version. Stephen is the source of much of the (limited) descriptive statistics on the field.
We ran this project last year, and it was well-received. Habryka: "I think overall this post did a pretty good job of a lot of different work happening in the field. I don't have a ton more to say, I just think posts like this should come out every few months, and the takes in this one overall seemed pretty good to me."
N/A
$0 so far.