
Asterisk AI Blogging Fellowship

Asterisk Magazine

Tags: Technical AI safety, AI governance, Global catastrophic risks

Active grant: $70,000 raised of a $70,000 funding goal. Fully funded and not currently accepting donations.

What is this?

A blog-building intensive for thinkers and writers to help improve the public conversation about the future of AI.

Why is it necessary?

There’s been an explosion of writing about AI over the past two years, but much of it is shallow, insular, or both. We need more writers who take the potential of transformative AI seriously, have real subject-matter knowledge, and can communicate to non-experts and across ideological divides.

Our own experience is that in-person conversations at AI hubs and coworking spaces are deeper, richer, and more nuanced than what’s available online. Important insights often don’t make it into the public discourse. We want to change that.

We’re particularly interested in supporting people who can cover technical AI safety critically and cogently, and who can bridge conceptual gaps between the safety community and other parts of the AI world. We’ve also noticed a serious talent gap around AI communications. A number of organizations working on AI policy have come to us for advice on hiring writers, and we usually don’t have anyone to recommend. The goals of this program are twofold: we’d like to produce good writing, but also good writers who can fill some of those gaps.

How will it work?

We're targeting the program specifically at people who have a solid understanding of some AI topic (technical safety, policy, hardware and infrastructure, China, AI for science, etc.) but don't yet feel comfortable writing in public. The goal of the intensive portion of the program is for each participant to produce three blog posts. We’ll have two mandatory meetings each week: one with the whole group, and one with a smaller breakout group whose members will read and critique each other’s writing.

We think we can attract a strong, intellectually diverse class of fellows, and this peer support will be an important part of the program. We’ll also provide workshops on skills like outlining and editing, one-on-one editorial support, and access to a network of mentors, including Scott Alexander, Dean Ball, Jordan Schneider, and Sam Bowman, with more to be announced.

The structure is inspired by the Roots of Progress blog-building initiative. We’ve interviewed participants and organizers in that program, as well as prospective participants for ours, to design a structure that gives fellows the support they need to start writing regularly. We’ll also solicit feedback from potential fellows to help finalize it.

Who are we?

The editors of Asterisk Magazine. We think we’ve produced some of the most thoughtful AI coverage out there, and are well equipped to find and support budding AI bloggers.

Where is the money going?

$40k will go to hiring a contractor for 10 weeks to help develop and run the program. We're very excited to have Avital Morris, a writer and researcher who has worked at the Forecasting Research Institute and the Center for AI Safety, has extensive experience with pedagogy, and has taught writing at the graduate level. The remaining $30k will go to stipends for the fellows, to increase commitment and retention.

Why Manifund?

In the past, our programs have been funded primarily by Open Philanthropy. That’s still the case, but we believe it’s important to diversify our donor base, especially for content related to AI. We don’t want to be (and don’t want our fellows to have to worry about being) too closely associated with any one organization or perspective.

Similar projects

Center for AI Safety: AI Safety & Society. High-quality, timely articles on AI safety. $250K raised.

AI Safety India: Fundamentals of Safe AI - Practical Track (Open Globally). Bridging theory to practice: a 10-week program building AI safety skills through hands-on application. Tags: Science & technology, Technical AI safety, AI governance, EA community, Global catastrophic risks. $0 raised.

Steven Kaas: Stampy’s AI Safety Info. Creating an interactive FAQ to explain existential risk from AI to all audiences. $0 raised.

Jonas Vollmer: AI forecasting and policy research by the AI 2027 team (AI Futures Project). Tags: AI governance, Forecasting. $35.6K raised.

Jaeson Booker: The AI Safety Research Fund. Creating a fund exclusively focused on supporting AI Safety Research. Tags: Technical AI safety. $100 raised of a $100K goal.

Thomas Larsen: General Support for the Center for AI Policy. Help us fund 2-3 new employees to support our team. Tags: AI governance. $0 raised.