🌽
Linda Linsefors

@Linda

I'm an AI Safety community builder and researcher

https://docs.google.com/document/d/1NkYDp3zns-cyasAk_WDrhj6DlJ9QjMP24fM7jTvWPqM/edit

Comments

Linda Linsefors

2 months ago

@Austin

  • I'm quite confused why other donors aren't excited to fund AISC. Last time for AISC 10, they ended up raising a fair amount ($60k), but this time it looks like there's less support. Is this because the AISCs have been dropping in quality, as Oli claims? Or just that they've been doing a less good job of "fundraising"?

As Remmelt says, we're not obviously doing worse than at the same time last year. However, money has been scarcer for everyone in AI Safety since the FTX collapse. E.g. LessWrong/Lightcone is currently low on money, and there has been no drop in the quality of their work, as far as I can tell.

I don't want to completely discount Oli's statement, but I want to point out that it's one opinion by one person. It's pretty normal for people in AI Safety to feel unexcited about what the majority of people in AI Safety are doing. AI Safety Camp has always had a broad range of directions, so I'm not sure why Oli hasn't seen this as a problem before, but my guess is that the shift in structure caused Oli to shift the way they evaluate us, towards focusing more on the projects. Or it could be that this particular camp had fewer projects to Oli's taste. Or it could be that our new format produces less exciting projects according to Oli's taste. I don't know. It may also be relevant to know that Oli's shift towards not giving grant money to AISC happened at the same time as the big drop in AI Safety funding.

The way AISC accepts projects is that we have a minimum standard, and if that is met, the projects are accepted. Since our current format is highly scalable, projects have not had to compete against each other. Instead we focus on empowering all our research leads to explore what they believe in. I'm not at all surprised that someone looks at our list and thinks most of the projects are not that great, but I would also expect high disagreement on which are the good vs useless projects. I claim that AISC's openness to many types of projects is what causes both the small fraction of projects Oli does like and the ones he doesn't like.

If you're worried that we waste people's time on the less good projects (whichever you think they are): there are still many more people interested in AI Safety than there are opportunities. I think for many people, working on a sub-optimal project will still accelerate their AI Safety skills and reasoning more than being left to themselves.

If you're worried that some of our projects are actively harmful: we do evaluate for this, and it's the one point we're most strict on when deciding whether to accept projects.

Linda Linsefors

2 months ago

Weighing in on the pause/stop AI projects:

I'm in favour of all of the pause/stop AI projects, and I was the one who suggested listing these first on our website to give them an extra signal boost. It doesn't look like we are on track to solve alignment in time. This means that AI development needs to slow down. Too few people are speaking up about this. It's good to do this in a minimally polarising way, but not trying to slow down AI doesn't seem like an option if we want to survive.

Where I disagree with Remmelt is that I think it's also worth trying to solve technical alignment, alongside trying to slow down AI. But I do agree with Remmelt that a large part of the AI Safety community is way too friendly with the AI frontier labs.

Linda Linsefors

2 months ago

I'm leaving mainly because I'm tired of organising and want to do other things. This is very normal behaviour for me, and not because there is anything wrong with AISC. After running the same event a couple of times, I stop feeling inspired by the work, and continuing past that point becomes very draining.

I still think AISC is great and would be sad if it ended.


@Austin

🧡
