Manifund

Funding requirements

  • Sign grant agreement

  • Reach min funding

  • Get Manifund approval

Amplifying AI extinction risk awareness before it’s too late

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks

Lindsay Langenhoven

Proposal · Grant
Closes October 14th, 2025
$0 raised
$1,000 minimum funding
$50,000 funding goal

40 days left to contribute

Project summary

The AI Risk Network produces accessible podcasts and videos to close the public awareness gap on AI extinction risk. By educating millions about the dangers of unchecked AI development, we build the pressure needed for real safety regulations and guardrails. Our media-first approach makes complex issues clear, engaging, and actionable — empowering everyday people to demand change from leaders. With your support, we can expand our reach, sustain high-quality programming, and ensure the public has a voice before unsafe AI development outruns human control.

What are this project's goals? How will you achieve them?

For this project we would like to:

  • Expand public awareness of AI extinction risk by producing accessible, engaging podcasts and videos that reach broad audiences.

  • Grow our YouTube channel with weekly podcast episodes to build a consistent, trusted platform on AI extinction risk for the general public.

  • Promote our new shows including:

    • Unsafe Mode – Practical safety steps for families

    • Warning Shots – Revealing the week's hidden AI risks

    • Last Laugh – Comedians breaking through denial

    • Am I? – Researchers exploring the murky waters of AI consciousness

  • Promote audience engagement through targeted outreach campaigns, ensuring our content sparks real-world conversations and action.

  • Amplify advocacy impact by equipping the public with knowledge and tools to push for responsible AI governance.

How will this funding be used?

To support the production, dissemination, and promotion of The AI Risk Network's series of podcasts and videos.

Who is on your team? What's your track record on similar projects?

John Sherman, Executive Director and Founder
John is a veteran media producer and strategist with 15+ years in storytelling and campaign design. As founder of StoryFarm, he has worked with mission-driven organizations across causes. At GuardRailNow, he leads strategy and content while hosting flagship shows For Humanity and Last Laugh. His expertise in persuasion and narrative design drives the group’s media-first approach to advocacy.

Caroline Little, Content Project Manager
Caroline manages content production across GuardRailNow, ensuring timely, high-impact communication. She also supports strategic communications, vendor coordination, performance tracking, and public outreach. With expertise in digital strategy and project management, she keeps content organized, on message, and effectively delivered at scale.

Deena Englander, Operations Manager
Deena, a Lean Six Sigma Black Belt and founder of WorkStream Nonprofit, helps mission-driven teams streamline systems. At GuardRailNow, she manages operations, training, and performance to ensure clarity and scalability. Her data-driven approach improves workflows, vendor coordination, and efficiency, saving clients significant time and resources.

Sade McDougal, YouTube Specialist & Content Editor

Lindsay Jane Langenhoven, Content Writer & Editor

Gerry Shom, Video Editor

What are the most likely causes and outcomes if this project fails?

Causes

  • Insufficient funding to sustain video and podcast production at scale.

  • Limited promotion means episodes fail to reach beyond a small core audience.

  • Resource constraints leading to fewer shows, reduced quality, or gaps in consistency.

  • Operational bottlenecks if core staff capacity is stretched too thin.

Outcomes

  • Lost public awareness: millions miss the chance to understand AI extinction risks in time.

  • Narrative vacuum: Big tech and pro-AI acceleration voices dominate unchallenged.

  • Reduced influence on policymakers and media, weakening momentum for guardrails.

  • Diminished credibility for GuardRailNow and The AI Risk Network as a consistent, leading voice in AI safety.

How much money have you raised in the last 12 months, and from where?

Total Funding Received To Date: $198,728.20

  • Center for AI Safety ($160,000.00) 

  • Louis Berman ($25,000.00) 

  • Private donors ($13,824.38)

