Austin, glad to read your points.
I'm quite confused why other donors aren't excited to fund AISC.
This is often confusing, even to us as organisers. Some years we had to get by on little money, and in other years we would suddenly receive a large grant or an influx of donations.
The lowest-quality editions, in my opinion, were AISC3 (when I was burned out and we ran the edition across rooms in a Spanish hotel) and AISC4 (when COVID struck and we quickly switched to a virtual format). We ran those editions on a shoestring budget. But the year after, in 2021, we received $85k from LTFF and $35k from SFP.
Financial stability would help us organise better editions – being able to plan the next edition knowing we will have the money. "Stability" is not sexy, but it makes the difference between being able to fully dedicate oneself and plan long term as an organiser, versus organising on the fly.
Last time for AISC 10, they ended up raising a fair amount ($60k), but this time it looks like there's less support.
Last time, we asked Manifund staff to extend the fundraiser deadline (by 2 weeks) so that the fundraiser would not get cancelled. Looking at the datestamps of my email notifications, donations are coming in faster this time.
Having said that, if any funder here decided not to donate, I'd be curious why!
I'm specifically hesitant to fund the stop/pause agenda that Remmelt supports.
Before getting into this: we hosted ~5 stop/pause projects in 2024, and again now in 2025. Our program hosted about five times as many other projects. The majority of our projects are in theoretical or applied safety.
I'm giving my takes here. Robert will have different thoughts on the projects he supports.
We might get more stop/pause projects in 2026, which is what makes me most excited as an organiser. I'm also excited about technical projects that enable comprehensive assessments of model safety issues that AI companies have to address.
I'm generally worried about projects that assume it is simple – or somehow obviously doable – to make large machine learning systems safe, because I think it's bad for the community's epistemics. This is particularly a concern if alumni end up promoting their solutions to others in the community, or decide to commercialise them for companies, since that could support safety-washing. Safety-washing is a way for corporate actors to avoid accountability – it allows them to build dangerous systems and make them look safe, instead of actually scoping their development of systems to be safe. It's counterproductive to AI Governance.
I value folks with a security mindset who are clear about not wanting to make things worse. I'm unsure how much the camp has enabled people to think like that in the past. Some of our alumni even went on to work at OpenAI and DeepMind. So that would be a reason not to donate to us.
Again, these are my thoughts. Robert and Linda will have their thoughts.
For one, I don't like the polarization that the stop/pause framework introduces
Is the polarisation in the framework itself, or in how it gets implemented? Curious to hear your thoughts.
Various respected researchers (e.g. Yudkowsky, Yampolskiy, Shovelain) who have been researching the alignment problem for about the longest are saying that we are not on track to solve alignment (given the rate of development over previous years and/or sub-problems of control that may be genuinely intractable).
Slowing down AI development gives alignment researchers more time to work out the problem. It does not have to be polarising, so long as alignment researchers recognise the need for society-wide efforts to restrict corporate-AI scaling.
Where tensions can occur is when alignment folks indirectly boost work at AGI companies. For example, some alignment researchers at OpenAI have made confident public statements about being able to make AGI safe, and others have created shallow alignment techniques that made it easier to commercialise products. OpenAI received $30 million from OpenPhil, and 80k advised talented engineers to join OpenAI. One start-up dedicated to alignment even offered its state-of-the-art supercomputer to OpenAI. Similar things have happened at DeepMind and Anthropic.
There is a deep question here of whether the community wants to continue to risk accelerating AGI development in the hope of solving all the lethal sub-problems we have identified but been unable to solve yet.
if I had to "choose a side" I might very well come down on "AGI sooner would be good"
Why do you think "AGI sooner would be good"? Is the argument that faster development results in fewer competing architectures?
From my perspective, introducing this self-modifying autonomous machinery should be avoided, given the risk of losing all the life we care about on Earth. We should coordinate to avoid it. It is not just that allowing companies like OpenAI to push the frontiers of dangerous tech, with other actors (like Google) rushing after them, is bad. It is also that once the tech pushes all workers out of the loop and starts modifying and re-producing itself in runaway feedback loops, we lose all control. Under such exponential tech processes, mass destruction happens either way: whether one architecture ends up dominating our economy, or multiple architectures end up interfacing over high-bandwidth channels.
Even if you think the alignment problem could be solved eventually, it seems good to buy time. We can buy time by coordinating with other stakeholder groups to slow development down. Then we can build the capacity to research the problem more rigorously.
Linda, who I've heard good things about, won't be organizing this time around. (I'm unsure how much to read into this -- it might just be a vicious cycle where the organizers leave for lack of funding, and the funders don't fund for lack of organizers)
I don't want to speak for Linda here, so I asked her to comment :)