
Evitable: a new public-facing AI risk non-profit

AI governance · Global catastrophic risks

David Krueger

Proposal · Grant
Closes February 1st, 2026
$5,127 raised
$1,000 minimum funding
$1,500,000 funding goal


Project summary

Evitable is a new non-profit seeking donations. Our mission is to inform and organize the public to confront societal-scale risks from AI, and to put an end to the reckless race to develop superintelligence. We use high-profile media appearances, evidence-based communication, and novel messaging to shift the narrative. Our core message is “AI is not inevitable; it is a political choice.” We argue that an indefinite AI pause is feasible without mass surveillance or centralized control over AI, for instance via a robust international agreement to eliminate advanced AI chips and fabs.

We believe achieving Evitable’s goal of stopping or slowing AI progress is essential to reducing AI risk to an acceptable level. Most contemporary work in AI safety is unlikely to be sufficient in worlds where alignment is hard, or where catastrophic outcomes arise from other issues such as gradual disempowerment or offense/defense imbalance. A few organizations have goals similar to Evitable’s, but we differentiate ourselves via (i) our plan to create a gentle on-ramp for people who don't identify as activists, (ii) our positive framing ("evitability"), and (iii) our founder’s credentials.

What are this project's goals? How will you achieve them?

Our plans for 2026 include:

  • Q1: Get our message into the public conversation.

    • Produce accessible content to explain AI risks and Evitable’s proposed solutions. This will include writings, websites with explainers and infographics, videos, and social media posts.

    • Build media relationships and work with contacts to secure high-profile appearances.

    • Conduct message testing such as polling, focus groups, and A/B testing.

  • Q2: Build coalitions and reinforce messaging.

    • Build and deploy tools and processes for responding quickly to current events.

    • Develop partnerships with civil society orgs and AI advocates.

    • Create informative resources for AI advocates, such as training and communications guides.

  • Q3/4: Help make AI a prominent issue in advance of the US midterms. 

    • Create “media moments” that draw attention to AI risks and our messages.

    • Work with partners to identify key issues for focused communication.

    • Our goal is for the midterms to operate as a “referendum” on the desirability of continued frontier AI development in the broadest possible terms (i.e. “yes or no”).

Note that our grassroots lobbying activities will initially be limited in scope, but we plan to quickly set up a complementary 501(c)(4).

How will this funding be used?

Funding will be used to develop and run communications campaigns. At the outset, the primary expenses will be salaries for early employees and other costs of setting up our operations, including contracted communications and legal work, enterprise software, etc.

Who is on your team? What's your track record on similar projects?

As the founder of Evitable, AI professor David Krueger brings a history of successful communication about AI risk (e.g. initiating the CAIS Statement on AI Risk). David is supported by Luka Ladan (an experienced, PRSA-accredited publicist), Anna Kerby (a consultant from Future Matters), and his assistant Sam A; Gretchen Krueger (a researcher associated with Harvard’s Berkman Klein Center, formerly of OpenAI and AI Now, and David’s sister) is also assisting in a volunteer capacity. We’re hiring for communications, operations, and Chief of Staff roles, and are in conversations with experienced organizers.

What are the most likely causes and outcomes if this project fails?

Our project could fail by having low impact (if we don’t get much attention) or by having negative impact.

The most likely ways we might have negative impact are: (1) our work might move conversations about AI risk in an unproductive direction (e.g. harmful political polarization), and (2) our work might lead to reputational harm for us and those associated with us (e.g. the AI safety community). These risks are intrinsic to seeking to influence the public, and we take them very seriously.

Negative Outcome (1) seems more likely to the extent we:

  • mislead people

  • inflame popular sentiment about AI along partisan lines

Negative Outcome (2) seems more likely to the extent we:

  • are viewed as intentionally inflaming or misleading the public, or otherwise causing Outcome (1)

  • are viewed as ungrounded or lacking credibility

  • are closely associated with other organizations, ideas, movements, or people (whose reputational problems could then reflect on us)

Planned Mitigations:

We plan to mitigate the risks of low impact by:

  • working with experienced communications professionals

  • conducting message testing

  • exploring different communication strategies

In addition, we plan to mitigate the risk of negative impact by:

  • maintaining a high standard of fact-checking and avoiding hyperbolic rhetoric

  • avoiding politically coded language

  • seeking political balance in our hiring, funding, media engagements, etc.

  • testing messages to understand the likely impact of our work on different populations’ attitudes and beliefs towards AI and AI risk

  • “red teaming” our communications in anticipation of counter-narratives

  • seeking partnerships and engagement with organizations from across many different sectors of civil society 

  • developing a distinctive brand identity that is both (i) uncompromising, critical, and direct, and (ii) rigorous, principled, and fair

  • developing media crisis response plans

We will also take steps to minimize the risk of potentially damaging security breaches and legal challenges.

How much money have you raised in the last 12 months, and from where?

We are self-funded so far and in the early stages of fundraising. We anticipate funding from FLI, SFF, LTFF, and/or private donors; $40k has been tentatively committed from private donors so far.


Donation Offers

JJ Hepburn: $127 (3 days ago)
Ryan Kidd: $5K (3 days ago)