1 & 2: The only broad survey of those in the field, as cited in the project summary, lists lack of funds as a critical bottleneck. On top of that, there is a long history of unexpected and sudden drops in funding, severe delays in decision timelines, and opaque decision criteria. I won’t go through every case I know of, but here are some examples. AI Safety Camp struggled to get funding (1), despite many in the community viewing it as highly impactful (2), and almost shut down as a result. Lighthaven, also regarded by many as highly impactful (3), struggled to get funding for years. Most recently, Apart Research, despite outperforming on their previous grant from LTFF (4), was turned down because LTFF is funding constrained, and OpenPhil did not respond in the expected timeframe (5). Regardless of how you feel about Apart’s impact, they were not turned down because they were judged to lack impact, but because of a decrease in available funds. Good Ventures has caused OpenPhil to stop funding certain impact areas (6), despite some thinking they are critically important (7). LTFF has also been known for being extremely capacity constrained (8). There is also the problem of how much of the funding flows from very few sources, creating single points of failure that can produce chaotic outcomes, such as the collapse of FTX Future Fund and the sudden decisions made by Good Ventures (9). It has likely also pushed ideas to conform to the world models of a few, most likely suffocating alternative ones (10).

I have spoken to many who have been in similar situations, where the problem did not seem to be a lack of promise in the project or skill in the grantee, but sudden shifts in funding and a lack of clearly communicated timelines to hear back. I have also spoken with individual researchers who had to leverage their own network and time to get promising research funded. From others I have spoken with, I know this has also led people to leave the AI Safety space altogether and work on capabilities research instead.

I think the indirect cost is harder to measure, but probably much greater. Many talented people might care about AI going well, but their threshold for sacrifice might be lower than the one the community currently demands of them. They want a reliable community with easy channels to get involved and dependable funding. I’m not going to pretend I can solve all of these issues, but the problem is real, and this is a start in a better direction.
3: I think there is too much “whale hunting”. As I said, high-leverage donors are useful, and I am fine with others continuing to pursue them, but they also carry the risks mentioned above: single points of failure, which produce funding shocks felt across the ecosystem, and conformity to the world models held by the donors. I’m aiming more for sub-billionaires. I see potential in wealthy individuals who are not very connected to the AI Safety space but are already concerned, and also in grassroots campaigns for a more dispersed fundraising approach. The latter could be very important in the coming years if AI continues to improve and gain more attention. By 2027, the funding landscape could scarcely resemble the current one, and setting up funds now that are ready to capitalize on that will be important. The set of projects that need funding, and the number of people capable of executing those projects, might also change. Even if you think most useful projects are being funded today, that doesn’t mean there won’t be a much wider range of useful projects tomorrow.
1: https://www.lesswrong.com/posts/EAZjXKNN2vgoJGF9Y/this-might-be-the-last-ai-safety-camp
2: https://thezvi.substack.com/p/the-big-nonprofits-post?open=false#%C2%A7ai-safety-camp
3: https://www.lesswrong.com/posts/5n2ZQcbc7r4R8mvqc/the-lightcone-is-nothing-without-its-people
4: https://forum.effectivealtruism.org/posts/x5R4mpJRqPwpQAPqv/why-is-apart-research-suddenly-in-dire-need-of-funding
5: https://forum.effectivealtruism.org/posts/x5R4mpJRqPwpQAPqv/why-is-apart-research-suddenly-in-dire-need-of-funding
6: https://www.goodventures.org/blog/an-update-from-good-ventures/
7: https://www.youtube.com/watch?v=uD37AKRx2fg&t=4965s
8: https://forum.effectivealtruism.org/posts/ee8Pamunhqabucwjq/long-term-future-fund-ask-us-anything-september-2023?commentId=NvuGEcKFLQrioBuRH
9: https://docs.google.com/document/d/1EYCMHa6_7Mudb4s1MDvppGMY5BmHEVvryGw9cX_dlQ8/edit?tab=t.0
10: https://www.lesswrong.com/posts/FdHRkGziQviJ3t8rQ/discussion-about-ais-funding-fb-transcript