[CLOSED] Arkose may close soon

Arkose

Not funded · Grant
$0 raised

We are no longer requesting funding at this time.

Arkose is an AI safety fieldbuilding organisation that supports experienced machine learning professionals — such as professors and research engineers — to engage with the field. We focus on those new to AI safety, and have strong evidence that our work helps them take meaningful first steps.

Since December 2023, we’ve held nearly 300 one-on-one calls with mid-career machine learning researchers and engineers. In follow-up surveys, 79% reported that the call accelerated their involvement in AI safety[1]. Nonetheless, we’re at serious risk of shutting down in the coming weeks due to a lack of funding. Several funders have told us that we’re close to meeting their bar, but not quite there, leaving us in a precarious position. Without immediate support, we won’t be able to continue this work.

If you're interested in supporting Arkose and would like to learn more, please reach out here or email victoria@arkose.org. 

What evidence is there that Arkose is impactful?

AI safety remains significantly talent-constrained, particularly with regard to researchers who possess both strong machine learning credentials and a deep understanding of existential risk from advanced AI.

Arkose aims to address this gap by identifying talented researchers (e.g., those with publications at NeurIPS, ICML, and ICLR), engaging them through one-on-one calls, and supporting their immediate next steps into the field.

Following each call, we distribute a feedback form to participants. 52% of professionals complete the survey, and of those, 79% report that the call accelerated their involvement in AI safety.

We also send a short follow-up form 6 months later; responses show that professionals take a variety of concrete actions following their call with us.

We think that accelerating a senior machine learning professional's (e.g. a professor's) involvement in AI safety by 1-3 months is a great outcome, and we're excited about the opportunity to improve our call quality over the next year and support researchers even better.

Unfortunately, we're not able to share all of our impact analysis publicly. If you're interested in learning more (for instance, if you'd like to see summaries of particularly high or low impact calls), please reach out to victoria@arkose.org.

What would the funding allow you to achieve?

We need a minimum of a further $200,000, which, together with our remaining runway, would allow us to run for a year at our current size. During this period, we expect to run between 300 and 800 more calls with senior researchers while improving our public resources and expanding our support for researchers following the call.

How can I help?

At present, we're only seeking funding at or above our minimum viable level. That said, we're still enthusiastic about small donors — and may be able to continue if enough are willing to contribute. We're using Manifund's system to ensure donations only go through if we reach our minimum bar; otherwise, donors will be refunded.

We believe Arkose is a promising, cost-effective approach to addressing one of AI safety’s most pressing bottlenecks, and remain excited about this work. With additional funding, we’re in a position to continue reaching high-impact researchers, deepen our post-call support, and refine our approach. Without it, we will be forced to wind down operations in the coming weeks. If this work aligns with your interests, please do get in touch; the next few weeks will be a critical time for us.

[1] Of the 51% of participants who gave feedback immediately after their call.

Comments (9)

Similar projects (7)

Arkose
AI safety outreach to experienced machine learning professionals
Science & technology · Technical AI safety · Global catastrophic risks
$0 raised

Apart Research
Keep Apart Research Going: Global AI Safety Research & Talent Pipeline
Funding ends June 2025: Urgent support for proven AI safety pipeline converting technical talent from 26+ countries into published contributors
Technical AI safety · AI governance · EA community
$131K raised

Tyler Johnston
The Midas Project
AI-focused corporate campaigns and industry watchdog
AI governance · Global catastrophic risks
$0 raised

Jørgen Ljønes
80,000 Hours
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.
Technical AI safety · AI governance · Biosecurity · EA Community Choice · EA community · Global catastrophic risks
$4.92K raised

Brian Tan
WhiteBox Research’s AI Interpretability Fellowship
~4 FTE for 9 months to fund WhiteBox Research, mainly for the 2nd cohort of our AI Interpretability Fellowship in Manila
Technical AI safety · EA Community Choice
$2K raised

Remmelt Ellen
10th edition of AI Safety Camp
Technical AI safety · AI governance
$68.5K raised

David Conrad
Talos Network needs your support in 2025
We are fostering the next generation of AI Policy professionals through the Talos Fellowship. Your help will directly increase the number of places we can offer.
AI governance · EA community · Global catastrophic risks
$0 raised