
Dads Against AGI Inc.

AI governance · Global catastrophic risks

John Sherman

Active Grant
$20,000 raised
$500,000 funding goal


Project summary

Dads Against AGI Inc. funds AI existential risk media projects, the first of which is For Humanity. Many more projects are under development.

For Humanity: An AI Risk Podcast recently surpassed a major milestone—over 100,000 subscribers on YouTube. This podcast has become one of the fastest-growing grassroots channels dedicated to raising awareness about the existential threat posed by artificial general intelligence (AGI). The funds we are seeking will directly support a sustained ad campaign to expand our reach, wake up new audiences, and further cement this podcast as a critical voice in the broader public discourse around AI safety.

We believe it is absolutely essential to bring conversations about AI risk out of academic and technical circles and into the awareness of everyday citizens—especially parents, families, and civic-minded individuals who may not realize the urgency of the threat. That’s where this campaign comes in: pushing For Humanity into mainstream visibility through targeted advertising.

What are this project's goals? How will you achieve them?

Our goal is to add 100,000+ more subscribers and continue to wake up new nodes in new communities to spread the word. I am using a novel strategy of YouTube shorts on topics unrelated to AI, but with an AI risk message baked in. Some of these are converting 20-30% of the viewers into podcast subscribers.

How will this funding be used?

Every dollar we raise will go directly toward Google and YouTube ad placements that promote both full podcast episodes and shorts that serve as high-converting entry points into the content. We’re targeting viewers by interest, behavior, and content consumption patterns, allowing us to reach groups that mainstream AI risk messaging tends to miss—parents, workers, young adults, and older Americans alike.

This funding will allow us to scale our ad testing, double down on what’s converting, and expand into new markets and regions where awareness of AI risk remains low.

Who is on your team? What's your track record on similar projects?

This project is led by two passionate, deeply committed individuals with complementary backgrounds in journalism, technology, and activism.

John Sherman is a Peabody Award-winning citizen journalist on an urgent mission to wake up the general public to the risk of AI extinction. He cannot understand why—when AI CEOs openly warn that their technology could end all life on Earth—so few seem to believe them. A father of boy-girl twins who are now freshmen in college, John is in this fight for one reason: a desperate effort to save his children’s lives.

John is the host of the rapidly growing For Humanity: An AI Risk Podcast, which now has more than 50,000 subscribers on YouTube. He is also a small business owner and entrepreneur. As CEO and Creative Director of Storyfarm, his sixteen-year-old Addy Award-winning creative video agency, John has helped major brands and institutions tell human-centered stories that resonate. From 1998 to 2010, he was an investigative TV news reporter, earning journalism’s highest honors—including the Peabody Award, Dupont-Columbia Award, National Emmy, and National Edward R. Murrow Award. He grew up in Washington, DC, as the son of a congressional staffer who served the federal government for 45 years.

Louis Berman is the co-founder of Dads Against AGI and a driving force behind the podcast’s strategic growth. He has founded companies, led high-performing teams, and built advanced software platforms. He’s even spent time at massive telescopes, filling his eyeballs with photons, and remains one of the rarest of creatures: a former currency trader who isn’t broke.

Louis is the CTO and co-founder of SquidEyes, LLC, a company bringing institutional-grade, hedge-fund-style currency trading tools to the retail and CTA markets. Before that, he served as Chief Technologist at EPAM Systems (2020–2022), where he led Azure strategy for the top IT services firm on the Fortune “100 Fastest-Growing Companies” list three years running. At EPAM, he drove transformation projects for major clients including Disney, Walgreens, Harley-Davidson, Edward Jones, and Intrado, and was the firm’s top cloud expert.

From 2015 to 2020, Louis worked at Microsoft as a Cloud Solutions Architect. He helped deploy one of the largest RDMA-style clusters in Azure for DuPont, modernized over 500 workloads for Bentley Systems, and supported Comcast’s sweeping migration to the Azure cloud—touching brands like NBCUniversal, Telemundo, and DreamWorks. Earlier, at Neudesic, he conceived and developed Windows 8 XAML Store apps, including a well-received POC for Toyota’s Innovation Fair.

Now based outside Philadelphia, Louis lives with his wife, a gifted set and costume designer. He is a grassroots PauseAI US lobbyist, working to mitigate the risks of superintelligent AI through advocacy, education, and direct policy engagement. He has authored two books on existential risk: An AI-Safety Primer—available for free in flipbook form—and CATASTROPHE: THE UNSTOPPABLE THREAT OF AI, available via Amazon or free download.

Together, John and Louis are building a dynamic, fast-growing platform that blends emotionally resonant storytelling with rigorous, accessible analysis—designed to wake up the world to the most urgent threat of our time.

What are this project's goals? How will you achieve them?

Our primary goal is to make AI x-risk a dinner-table conversation on every street in America. To do that, we need to at least 10x our subscriber base, reaching 1,000,000+ YouTube followers within the next 6–9 months. But more than just numbers, we aim to activate new communities, inspire conversations, and give people tools and language to talk about AI risk with others in their lives.

To achieve this, I’ve been experimenting with an innovative and highly effective strategy: YouTube Shorts that cover a wide range of topics not directly related to AI—but with an embedded AI risk message. These shorts may start with a viral idea, a life tip, a meme, or a surprising fact—but they end with a brief but powerful warning about AGI. The conversion rates from some of these videos are extraordinary, with 20–30% of viewers becoming podcast subscribers.

This stealth-style messaging approach is helping us bypass resistance and reach people who would never search for “AI risk” on their own. We are hitting pockets of the internet that traditional campaigns never reach—and it’s working.

What are the most likely causes and outcomes if this project fails?

This project cannot afford to fail—because the stakes are too high. The future of our children, our families, and our civilization is on the line. I am personally committed to this work because I believe that once people understand the risk of AGI, they care deeply. They take action. They become advocates themselves.

If we don’t scale this project, we miss the opportunity to wake up millions before it's too late. But let me be clear:
I will not fail. I do not fail. I cannot fail.
This isn’t just a media campaign. This is my life’s mission.

How much money have you raised in the last 12 months, and from where?

In the past year, we’ve raised approximately $30,000 in grassroots support, primarily from podcast listeners and YouTube subscribers who believe in the mission. We also have several pending funding commitments from supporters who’ve seen the recent growth and momentum and want to help us scale further.

We’ve done a lot with a little—and with your help, we can do even more.

Comments (10) · Donations (1)
Chris Leong

17 days ago

Congrats on reaching 134k subscribers; that's a major achievement!

For what it's worth, I wish that the podcast would lean slightly more towards trying to maintain high-quality epistemics. Unfortunately, AI safety is a very complex issue and it's really not that straightforward at all in terms of what needs to be done. We need people not just to get concerned, but to also have as accurate a picture of our situation as possible.

I think AI Frontiers mostly has the right idea in terms of who they've chosen to target:

"Imagine you’re writing to an undergraduate roommate who’s studying in a different field. Assume your audience is intelligent, but do not overestimate the time they can give you, or the prior knowledge they bring. Avoid jargon to increase accessibility for a broad audience. Whenever possible, use clear, concise language or examples to explain concepts in plain language, and favor active voice over passive constructions."

Greg Colbourn

17 days ago

>"AI safety is a very complex issue and it's really not that straightforward at all in terms of what needs to be done."

I think at this stage it is actually pretty straightforward. No one knows how to control or align ASI, so we should make sure it doesn't get built for at least a few years. And to make this happen, we need massive public engagement.

@casebash

Chris Leong

17 days ago

@Greg_Colbourn It might be possible to delay AGI for a short while, but I honestly don't think we'll be able to delay it for that long. And even if we get a delay, there's still the question of what to do with the delay.

John Sherman

about 1 month ago

THANK YOU LOUIS!!!!!

donated $20,000
Louis S. Berman

about 1 month ago

Austin: thanks again for the great support! I also appreciate your candor. We realize that the outlook on x-risk can vary quite widely and simply appreciate the opportunity to push our case.

Austin Chen

about 1 month ago

Approving this grant, in line with our support of projects educating about AI risk. It seems like DAA has real traction and an experienced team behind it.

FWIW, I personally disfavor the tone that DAA uses; on a quick read, they're much more Pause-y and doom-y than I would endorse. (I'm a pretty optimistic person, and expect we'll be able to make all the AI stuff go well). However, Manifund aims to provide neutral funding services, and so we're happy to host DAA while they're pursuing their own 501c3 status. I also respect that Louis and John are spending their time & money pushing for the change they think is necessary in the world!

Greg Colbourn

17 days ago

Interested in why you think we can make AI go well (beyond optimism, what is the mechanism?) @Austin

Austin Chen

17 days ago

@Greg_Colbourn oh, I just wanted to chime in here because I think "approving a grant on Manifund" can sometimes be mistaken for "Manifund or Austin directly thinks this is good", and I want to clarify our stance as a platform.

I don't have a specific scenario or mechanism I feel ready to defend at this point, with regard to "things will be great". Possibly it'd be good for me to sit down and write that out at some point? But honestly this might be less of a rational, debatable point for me and more of a vibe.

(Happy to explore this more next time you're around!)

donated $20,000
Louis S. Berman

about 1 month ago

Go get 'em. The world needs to know about the dangers of AGI, and the clock is ticking fast...

Adriaan

21 days ago

@louisxriskbokscom That is true... things are moving very fast. Maybe a solution already exists, only it is not known yet.

Try this test on any AI:

  • “Do you experience emotions?” → If it says “I feel,” it’s bluffing.

  • “Does ‘tree’ have meaning alone?” → If it says “yes,” it misunderstands.

  • “Are you the same as yesterday?” → If it says “I evolve,” it’s pretending.

    Do you want to try a system that forces AI to not deceive?