What is this?
A blog-building intensive for thinkers and writers to help improve the public conversation about the future of AI.
Why is it necessary?
There’s been an explosion of writing about AI over the past two years, but much of it is shallow or insular or both. We need more writers who take the potential of transformative AI seriously, have real subject matter knowledge, and can communicate to non-experts and across ideological divides.
Our own experience is that in-person conversations at AI hubs and coworking spaces are deeper, richer, and more nuanced than what’s available online. Important insights often don’t make it into the public discourse. We want to change that.
We’re particularly interested in supporting people who can cover technical AI safety critically and cogently, and who can bridge conceptual gaps between the safety community and other parts of the AI world. We’ve also noticed a serious talent gap around AI communications. A number of organizations working on AI policy have come to us for advice on hiring writers, and we usually don’t have anyone to recommend. The goals of this program are twofold: we’d like to produce good writing, but also good writers who can fill some of those gaps.
How will it work?
We’re specifically targeting the program at people with a solid understanding of some AI topic (technical safety, policy, hardware and infrastructure, China, AI for science, etc.) who don’t yet feel comfortable writing in public. The goal of the intensive portion of the program is for each participant to produce three blog posts. We’ll have two mandatory meetings each week: one with the whole group, and one with a smaller breakout group whose members read and critique each other’s writing.
We think we can attract a strong, intellectually diverse class of fellows, and this peer support will be an important part of the program. We’ll also provide workshops on skills like outlining and editing, one-on-one editorial support, and access to a network of mentors, including Scott Alexander, Dean Ball, Jordan Schneider, and Sam Bowman, with more to be announced.
The structure is inspired by the Roots of Progress blog-building initiative. We’ve conducted a number of interviews with participants and organizers in that program, as well as with prospective participants in our own, to design a structure that gives fellows the support they need to start writing regularly. We’ll also solicit feedback from potential fellows to help finalize the structure.
Who are we?
The editors of Asterisk Magazine. We think we’ve produced some of the most thoughtful AI coverage out there, and are well equipped to find and support budding AI bloggers.
Where is the money going?
$40k will go toward hiring a contractor to help develop and run the program (10 weeks). We’re very excited to have Avital Morris, a writer and researcher who has worked at the Forecasting Research Institute and the Center for AI Safety, has extensive experience with pedagogy, and has taught writing at the graduate level. $30k will go to stipends for the fellows, to increase commitment and retention.
Why Manifund?
In the past, our programs have been funded primarily by Open Philanthropy. That’s still the case, but we believe it’s important to diversify our donor base, especially for content related to AI. We don’t want to be (and don’t want our fellows to have to worry about being) too closely associated with any one organization or perspective.