

How to mobilize people on AI risk: experimental message testing

AI governance

Sam Nadel

Proposal (grant)
Closes December 19th, 2025
$0 raised
$25,000 minimum funding
$52,747 funding goal


Project summary

Polls consistently show that majorities are concerned about AI, support stronger regulation, and back pauses on risky AI research. Yet the AI safety movement faces a critical “mobilization gap”: an inability to convert passive concern into active participation. There are no mass demonstrations demanding AI accountability, little grassroots pressure on policymakers, and limited participation in advocacy organisations.

Despite AI's profound societal implications, virtually no empirical research exists on how to mobilize around AI governance. This project fills that gap through mixed-methods research:

  • Experimental message testing using split-sample survey designs embedded within representative polling. Experiments test different potential AI harms (job losses, AI warfare, surveillance, existential risks, etc.) and their effects on behavioural intentions and tangible actions such as signing a petition or joining a campaigning group (a power-analysis sketch for this design follows this list)

  • Historical analysis: Qualitative Comparative Analysis (QCA) to extract lessons from past technology movements (nuclear, GMO, biotech) that successfully mobilized around complex risks

  • Elite interviews: 20-30 interviews with movement leaders, funders, and advocates across AI safety, algorithmic justice, labour organising, and digital rights
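
To give a rough sense of what the experimental design can detect, below is a minimal power-analysis sketch in Python using statsmodels. The participant numbers (~1,600 split across US and UK studies of 8 conditions each) come from the budget section below; the significance level, target power, and equal allocation across conditions are illustrative assumptions rather than details from the proposal.

    # Minimal power-analysis sketch for the 8-condition message experiments.
    # Assumptions (illustrative, not from the proposal): alpha = 0.05,
    # target power = 0.80, equal allocation across conditions.
    from statsmodels.stats.power import FTestAnovaPower, TTestIndPower

    N_PER_COUNTRY = 800   # ~1,600 participants split across US and UK studies
    K_CONDITIONS = 8      # message conditions per study

    # Omnibus test: smallest Cohen's f detectable across all 8 conditions.
    min_f = FTestAnovaPower().solve_power(
        effect_size=None, nobs=N_PER_COUNTRY, alpha=0.05,
        power=0.80, k_groups=K_CONDITIONS,
    )

    # Pairwise contrast (one message condition vs. another, ~100 per cell):
    # smallest Cohen's d detectable, before multiple-comparison correction.
    min_d = TTestIndPower().solve_power(
        effect_size=None, nobs1=N_PER_COUNTRY // K_CONDITIONS,
        alpha=0.05, power=0.80, ratio=1.0,
    )

    print(f"Minimum detectable omnibus effect (Cohen's f): {min_f:.2f}")
    print(f"Minimum detectable pairwise effect (Cohen's d): {min_d:.2f}")

Under these assumptions, ~100 respondents per cell can detect pairwise differences of roughly d ≈ 0.4 (a medium effect), while the single-country fallback described in the budget (~1,500 participants, so ~187 per cell) would detect roughly d ≈ 0.29.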

This project will provide actionable evidence that AI advocates and funders can use to build effective democratic participation in AI governance - before crucial pathways become locked in.

What are this project's goals? How will you achieve them?

Overarching goal: Generate actionable evidence on effective AI mobilization strategies that AI safety campaigners/advocates and funders can use immediately.

Specific objectives

1. Strategic intelligence for AI safety campaigns:

  • Identify 3-5 areas of concern and message framings that increase mobilization intentions across different demographic groups

  • Document at least 3 historical case studies (e.g. nuclear, GMO, biotech) providing actionable lessons on technology movement mobilization

2. Research outputs:

  • Publish research report (8,000-10,000 words) with concrete recommendations for AI movement actors

  • Submit peer-reviewed article to leading journal (Social Movement Studies or Science, Technology & Human Values)

  • Produce practitioner-focused policy briefing (2,000 words) synthesizing key findings

3. Direct engagement with movement actors and funders:

  • Provide consulting support to at least 2 major AI advocacy organizations on messaging and strategy

  • Host practitioner workshop bringing together 20+ researchers, advocates, and funders

  • Present findings at a minimum of 2 conferences or events, reaching both academic and practitioner audiences

  • Brief at least 5 major philanthropic funders on effective AI mobilization strategies

All major outputs - research report, policy briefing, practitioner workshop, and presentations to advocacy groups and funders - will be completed within 6 months of receipt of funding. All research outputs will be published open-access to maximize reach across academic and practitioner communities.

How will this funding be used?

Research expenses: $13,175

  • Recruitment of ~1,600 participants for messaging experiments across both US and UK studies, each with 8 conditions: $13,175

Personnel: $36,825

  • Lead researcher salary (6 months, full-time equivalent): $36,825

Dissemination and impact: $2,750

  • Practitioner workshop (venue, refreshments, participant travel): $1,500

  • Policy briefing materials and distribution: $250

  • Conference registration fees, travel and accommodation: $1,000

Total: $52,750

If only the minimum funding of $25,000 is reached, this would enable a reduced scope focused on the core messaging experiments (~1,500 participants in either the US or the UK), with production and dissemination of a report/research brief covering key findings.

Who is on your team? What's your track record on similar projects?

I’m Director of Social Change Lab, a UK-based organization researching social movements to understand their impact. We've published peer-reviewed research in Nature Sustainability and Humanities and Social Sciences Communications, completed numerous impact evaluations of major social movement campaigns, and influenced hundreds of thousands of dollars in movement funding decisions. Our work has been featured in the New York Times, New York Magazine, The Washington Post, Vox, The Guardian, and The Observer, among other outlets. 

Our Director of Research (Markus Ostarek) and Senior Researcher (Cathy Rogers) are both experienced researchers, holding PhDs from top-ranking universities and having published extensively, including research and reports on social movements. Our founder and adviser, James Özden, worked extensively with Extinction Rebellion and is currently a philanthropic grantmaker for Mobius. I hold research positions at the University of Bath and the University of Exeter, am a PhD student at the London School of Economics, and was previously Head of Policy and Advocacy at Oxfam.

What are the most likely causes and outcomes if this project fails?

Potential failure modes:

  • Experimental findings don't generalize: messages that work in controlled settings don't translate to real-world mobilization. Mitigation: Test multiple framings across demographic groups; triangulate with interview insights and qualitative comparative analysis; validate findings through practitioner consultation.

  • Research timing misses critical window: AI governance debates move faster than research timeline. Mitigation: Share preliminary findings throughout the process; prioritise rapid dissemination over exhaustive analysis; maintain flexibility to pivot focus areas.

If this project fails, AI advocates will continue operating without empirical evidence on effective strategies, funders will lack data-driven guidance for resource allocation, and advocacy efforts will remain uncoordinated, reducing collective impact. That said, even incomplete findings would provide more evidence than currently exists in this field, meaning partial success still delivers value.

How much money have you raised in the last 12 months, and from where?

Social Change Lab has raised around $240,000 in the last 12 months from foundations, NGOs, and individual donors. This has been for a mixture of general operating expenses and project funding for work on social movements in AI safety, climate and animal rights. Our funders over the last 12 months have included: craigslist Charitable Fund, Red Panda Paw, Brian Mercer Trust, Climate Emergency Fund, Joseph Rowntree Foundation, Joseph Rowntree Reform Trust, Phauna, Project Slingshot, and Changing Ideas.

Supporting documents:

  • Our research page, which includes extensive work on AI, climate, and animal advocacy movements: https://www.socialchangelab.org/research

  • Our paper on the burgeoning AI safety movement: https://www.socialchangelab.org/ai-safety-movement

  • Opinion piece on the AI safety movement and lessons that can be learned from history: https://wagingnonviolence.org/2025/06/where-is-the-ai-safety-movement/
