@Thomas
I work at the AI Futures Project, most recently on AI 2027.
https://www.linkedin.com/in/thomas-larsen/
$30,000 in pending offers
I would like to fund:
Talented early-career AI governance folks who could be accelerated with small amounts of funding.
Established think tanks that would like to do work on AI safety.
In the near term, I'm excited about work on transparency measures for frontier AI developers, because I think this will help improve governments' strategic awareness and make it harder for an AI lab to pursue an intelligence explosion in secret.
Once there is more societal wake-up, I think there should be an international treaty not to build superintelligence. I'm excited about work on verification methods to enforce such treaties, as well as work on developing agreements that could become politically viable.
Thomas Larsen
about 21 hours ago
Clarification: this work doesn't get funded by Good Ventures. OP may still recommend grants of this type to non-Good Ventures donors. In practice, this means the pool of possible funding is still much smaller, so I think the argument stands.
Thomas Larsen
2 days ago
I think this is very promising. This team includes some of the people with the clearest thinking in the world on acausal interactions. I've asked several people I trust a lot in this space and gotten universally positive references for the team.
My main concern is that thinking about acausal interactions is extremely difficult (so zero progress is somewhat likely) and sign-uncertain (so even if they did make progress, it's not clear it would be net helpful). Overall, my view is that it still seems good to have some people working on this, and I trust this team in particular to be thoughtful about the tradeoffs.
Also, this type of work doesn't get funded by OP.
Thomas Larsen
9 days ago
I'm donating at least $10k (via Manifund).
I think Lightcone has been hugely impactful (see the above, which I think is very compelling evidence of a large amount of impact), and it can't get funding from many of the usual sources.
COI Note: Lightcone contracted with AI Futures Project (the org I work for) on AI 2027, and is continuing to contract with us on follow-up work. This is a donation for general support, not anything like payment for services received.
Thomas Larsen
10 days ago
I'm donating $10k, and I think there's a good chance I'll come back and donate more.
I think AI Digest has done great work and will be able to put marginal funding to good use. Past work I'm particularly excited about:
The Agent Village. I think getting real-world experience with how AI agents handle tasks outside the narrow confines of a benchmark suite is a neglected form of capability evaluation.
Various explainers, especially the one on the METR graph.
Main reservations:
Nothing jumps out at me.
Conflicts of Interest:
No COIs.
| For | Date | Type | Amount |
|---|---|---|---|
| AI Digest | 7 days ago | project donation | $10,000 |
| Lightcone Infrastructure | 9 days ago | project donation | $10,000 |
| Manifund Bank | 2 months ago | deposit | +$100,000 |