OVERVIEW
The Race to Superintelligence: You Decide will be an AI-generated sci-fi film, dramatising the utopia or dystopia we might face depending on a key choice about the development of AI. Crucially, the project adds interactivity to YouTube to generate engagement and learning impact: you won’t just passively watch the choice being made - you’ll MAKE the choice yourself, which should greatly increase impact. Here’s a trailer: https://vimeo.com/1135361302?share=copy&fl=sv&fe=ci
We aim to get 880 conscious AI safety decisions per 1,000 viewers. This will generate far more data about user behaviour than a traditional YouTube video, which only reports how many viewers start watching, average view duration, and so on.
Beyond this, our hypothesis is that when viewers are required to make a conscious decision about AI safety, they will think harder about the subject and remember it better. Therefore we will run a test with 200 participants: half viewing the interactive version and half viewing an otherwise identical linear version. We will test their recollection of issues immediately after viewing and then again after 6 months. All results will be shared on the EA forum. Succeed or fail, these results should be useful to the wider AI safety YouTuber community.
This is a project from me, Martin Percy. For more than a decade, I’ve worked on new ways to use interactivity with video to train people more effectively. There are multiple published medical research papers on my approach, showing it can improve learning outcomes by up to 166% compared with traditional teaching methods. For this work, I’ve been fortunate enough to pick up a BAFTA award, an Emmy award and 11 Webby awards, and to give a TEDx talk. More info here:
https://en.wikipedia.org/wiki/Martin_Percy
However, I’m new to the EA community — which of course is a big disadvantage! If you’d like to talk to me before considering this project, please don’t hesitate to drop me a line at https://www.linkedin.com/in/martinpercy/
MORE DETAIL
First, let’s step back and look at the context that explains why this experiment might be useful.
Context 1: Passive views on YouTube are challenging for learning effectiveness and data collection.
EA-aligned organisations care very much about learning effectiveness and data collection. They use YouTube extensively, in spite of the following challenges inherent in the platform:
LEARNING EFFECTIVENESS: Studies show that people learn better when they are actively engaged in learning and making conscious decisions, rather than just watching passively. However, YouTube only supports passive video watching.
DATA COLLECTION: YouTube provides data on when users start a video and how long they watch on average; but it offers no reliable data on whether viewers actually engage with or understand the topic. Of course there are likes, shares, comments etc. - but these are unreliable as data sources.
This project will test a new way of using YouTube. We aim to create the first interactive YouTube film about AI safety, using YouTube end cards to enable conscious decision-making by viewers. In doing so, we will explore a new method for addressing YouTube’s current limitations in learning effectiveness and data collection. Succeed or fail, it should offer useful insights to other creators about how to use YouTube more effectively.
Context 2: AI-generated video is now good enough to vividly portray dystopian/utopian AI safety scenarios
Aside from these issues, another key problem with AI safety has always been how to get regular people to imagine a dystopian or utopian future with AI. Now, with the exponential improvement in AI-generated video, we can make a mini science-fiction film that convincingly portrays possible futures with AI. This leads to our proposed project…
The Race to Superintelligence: You Decide
This will be an interactive, AI-generated sci-fi film loosely based on the well-known AI 2027 scenario at https://ai-2027.com/ created by the AI Futures Project. They have kindly given permission for us to do this “unofficial” interactive adaptation of it (see below).
In the film, the user first sees a SETUP VIDEO that tells the story of a race between US and Chinese companies to develop superintelligent AI. The user must then make a choice, as in the original scenario, whether to “win the race” or “slow down”. They make their choice by clicking on one of two YouTube end cards.
This leads to one of two CONSEQUENCE VIDEOS. If they choose “win the race”, they see humanity destroyed by an out-of-control superintelligent AI. If they choose “slow down”, they see a much more positive future for humanity - though one that still presents clear risks.
After making this choice, the user can check what would have happened if they had chosen differently. They are then invited to make a final decision about what humanity should do now regarding AI: win the race, slow down, pause AI completely, or make no comment.
The user then sees one of four NEXT STEPS VIDEOS, each giving them a chance to reflect on their choices. Each video links to a different text-based page containing resources tailored to that user’s view of what humanity should do next. This makes it possible to offer far more accurate suggestions that align with the viewer’s real opinions.
This is the flowchart for the experience:
The pink “brain” icons show the two conscious decisions the user must make about AI. Because the user is actively deciding rather than passively watching, they should be significantly more likely to learn and remember the issues.
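In outline, the branching structure looks like this - a minimal sketch in pseudocode, where the labels are placeholders rather than final video titles:

```python
# Sketch of the branching structure; labels are illustrative placeholders.
FLOW = {
    "SETUP VIDEO": {                 # ends with Decision 1, via end cards
        "win the race": "CONSEQUENCE VIDEO: dystopia",
        "slow down": "CONSEQUENCE VIDEO: utopia, with remaining risks",
    },
    # The viewer may then check the other branch before Decision 2.
    "DECISION 2": {                  # second conscious decision
        "win the race": "NEXT STEPS VIDEO 1 -> tailored resource page",
        "slow down": "NEXT STEPS VIDEO 2 -> tailored resource page",
        "pause AI completely": "NEXT STEPS VIDEO 3 -> tailored resource page",
        "no comment": "NEXT STEPS VIDEO 4 -> tailored resource page",
    },
}
```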
Furthermore, we get far richer and more precise data about user opinions at scale. By structuring a YouTube experience around conscious choices, we move beyond passive impressions to measurable understanding - a new kind of educational engagement that can be tested and compared directly with traditional video.
What percentage of initial viewers will make each decision?
The drop-out rate for YouTube videos is steep. Going on industry averages, we can expect 39%–63% of viewers to still be watching after three minutes, with 50% as a realistic median figure; and 27%–51% to be watching after eight minutes, with 38% as a realistic median.
Based on usual YouTube drop-off rates, for every one thousand users who start to watch the setup video:
– We can expect ≈500 users to make Decision 1 after ≈3 minutes.
– We can expect ≈380 users to make Decision 2 after ≈8 minutes.
That gives us approximately 880 conscious decisions about AI safety overall per thousand users. What is unknown is whether the interaction makes people more or less likely to drop out of the experience than with a regular linear YouTube video. That will be a key finding of the experiment.
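As a sanity check, here is the arithmetic behind that 880 figure, using only the median retention estimates quoted above:

```python
# Back-of-envelope funnel using the median retention figures above.
viewers = 1000
still_watching_3min = 0.50   # median share still watching at ~3 minutes
still_watching_8min = 0.38   # median share still watching at ~8 minutes

decision_1 = viewers * still_watching_3min   # ~500 users make Decision 1
decision_2 = viewers * still_watching_8min   # ~380 users make Decision 2
total = decision_1 + decision_2              # ~880 conscious decisions

print(f"{total:.0f} conscious AI safety decisions per {viewers} viewers")
```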
How many viewers will we get in total? At what cost per viewer decision?
This is an obvious question, but very hard to answer. A key factor is which YouTube channel these videos go out on. We are already in discussion with some well-established relevant channels with large audiences. They are interested in hosting the videos but reluctant to commit at this stage, since it’s a significant departure from the regular, linear videos they usually post.
Assuming we do secure a YouTube channel with a significant subscriber base, how many viewers can we expect? There are already two excellent linear videos on YouTube based on the AI-2027.com scenario:
– We're Not Ready for Superintelligence (AI In Context, 283K subscribers): 8.1M views
– AI 2027: A Realistic Scenario of AI Takeover (Species | Documenting AGI, 209K subscribers): 3.8M views
However, we’re reluctant to predict that we’ll reach similar figures. In the absence of a precise benchmark, let’s work with a broad range:
Assumption 1: 1M total viewers → ≈880k conscious AI safety decisions → cost per decision ≈ $0.03
Assumption 2: 10k total viewers → ≈8.8k conscious AI safety decisions → cost per decision ≈ $3
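These per-decision costs follow directly from the total project cost of $24,375 (see the budget below):

```python
# Cost per conscious decision at the two viewer assumptions above,
# given the total project cost of $24,375.
budget = 24_375

for viewers in (1_000_000, 10_000):
    decisions = viewers * 880 / 1000   # ~880 decisions per 1,000 viewers
    print(f"{viewers:>9,} viewers -> {decisions:>9,.0f} decisions "
          f"-> ${budget / decisions:.2f} per decision")

# 1,000,000 viewers ->   880,000 decisions -> $0.03 per decision
#    10,000 viewers ->     8,800 decisions -> $2.77 per decision
```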
The goal of this project is to create - and test the effectiveness of - an experimental AI-generated film dramatising AI safety scenarios, combining YouTube and interactivity to generate engagement and learning impact. We will monitor user figures, comments, shares and so on - and also run the following experiment.
Research experiment
To measure the real educational value of this new interactive approach, we will run a controlled online study with 200 participants: 100 will watch the interactive version, and 100 will watch an otherwise identical linear version. Participants will complete short tests assessing their factual recall, understanding of AI safety concepts, and attitudes to AI - before viewing, immediately after viewing, and again 6 months later.
The aim is to determine whether interactivity genuinely improves learning effectiveness compared with traditional video. Results will be published on the EA Forum. Statistical analysis will be led by Dr Juemin Xu, an expert in cognitive and decision-making sciences.
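For context, 100 participants per arm is enough to detect a moderate effect. The formal analysis plan will be Dr Xu's; the sketch below simply assumes a standard two-sided, two-sample t-test on the test scores:

```python
# Illustrative power check only - not the formal analysis plan.
# Requires statsmodels (pip install statsmodels).
from statsmodels.stats.power import TTestIndPower

# Smallest standardised effect (Cohen's d) detectable with 80% power
# at alpha = 0.05, two-sided, with 100 participants per arm:
d = TTestIndPower().solve_power(nobs1=100, alpha=0.05, power=0.80,
                                ratio=1.0, alternative='two-sided')
print(f"Minimum detectable effect size: d = {d:.2f}")  # ~0.40
```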
Publishing research findings
We will publish a comprehensive summary of research findings on the EA Forum with an analysis by Dr Juemin Xu. We will make all data available on the EA Forum and submit our findings to relevant AI safety conferences. This ensures that other AI safety YouTubers and educators can learn from and build upon the results of this experiment, regardless of the outcome.
Permission to make an “unofficial adaptation” of AI 2027
The site ai-2027.com was created by the AI Futures Project. The COO of the Project, Jonas Vollmer, was kind enough to say in an email: “Yes, you are welcome to use our work; please just make sure to include an easily visible link to https://ai-2027.com/ and make it clear it's an unofficial representation… Note that several filmmakers are looking to option our source material. I expect this won't be a problem as long as it's fair use / derivative and short videos rather than feature-length or TV series”. The proposed project is within those very reasonable guidelines.
How will this funding be used?
This project’s total cost is $24,375 USD.
It will be completed in around 32 working days over 6 weeks.
Martin Percy (Concept, Writing, Direction & Supervision): $15,480
AI Generation Specialists: $2,900
Editor - 10 rounds of feedback: $1,595
Research & Testing (Dr Xu plus 100-user trial): $2,500
AI Tools & Licences: $1,200
Admin: $500
Marketing: $500
A detailed itemised budget is available on request.
CONCEPT, WRITER, PRODUCER & DIRECTOR: MARTIN PERCY
BAFTA, Emmy and 11× Webby winner making interactive films that turn viewers into doers. Education: M.A. (Hons) English, King’s College, Cambridge
EA courses: BlueDot Impact Future of AI; BlueDot Impact AGI strategy; Introductory EA Program; due to attend EA Connect in early December.
My work focuses on measurable learning outcomes, with results validated in peer-reviewed studies and large-scale deployments.
Wikipedia: https://en.wikipedia.org/wiki/Martin_Percy
Emergency skills training using gamified films:
Lifesaver (interactive CPR, UK):
– After 6 months, taught CPR 29% better than intensive face-to-face (F2F) training
– Lifesaver + F2F delivered 166% better results than F2F alone
– Used by over 3 million people since 2012 (~300k/year)
– Multiple real-world lives saved
Heart Class (school CPR, USA):
– After 6 months, 51% more students delivered high-quality CPR vs the leading US classroom video approach
CardiacCrash (group CPR, Canada): in a 22-school study:
– 98% of students performed good CPR while learning
– 85% would recommend it
– 98% of teachers want it run annually
Peer-reviewed sources include Resuscitation, Journal of the American College of Cardiology, and Circulation (abstract).
Trailer for one example gamified film (CardiacCrash: Monica): https://youtu.be/lONRVhIbLrg
Participatory workshops to debate societal emergencies
My participatory workshop AI Basics: Thrills or Chills? combines interactive films with live discussion to replace passive viewing with active debate.
In each two-hour watch party, participants watch ≈ 40 minutes of gamified video with 12 conscious decisions about AI and AI safety, and take part in ≈ 70 minutes of discussion.
It’s been run at venues including Cornell, Cambridge, a World Bank conference in Tokyo and the UK Cabinet Office, as well as in schools and sixth forms in deprived parts of Britain.
In Cornell University sessions:
– 92% reported significantly greater understanding of AI
– 94% would recommend the session
Trailer for AI Basics: Thrills or Chills?: https://youtu.be/llpT6PYk6Xc
OTHER ROLES
Editors and AI-generated image/video producers:
I have extensive contacts in the community of freelance editors and AI-generated image and video producers; specific collaborators will be confirmed once the project gets the go-ahead.
Test users:
A small group of representative YouTube viewers will be recruited closer to the testing phase to provide structured feedback on clarity, engagement and interactivity.
STATISTICAL ANALYSIS OF EXPERIMENTAL RESULTS
Dr Juemin Xu will lead the statistical analysis of the experimental results. Dr Xu researches judgement and decision-making; her work has been covered by the Wall Street Journal, The Economist, The New Yorker and others. https://www.linkedin.com/in/jueminxu/
STEERING GROUP
The project will be supported by a steering group, providing independent oversight and expert guidance throughout development, testing and evaluation.
Wing-Yee Lau - former Global Head of Strategy at TikTok; strategic and data-driven leader with experience across multiple areas and industries.
Nav Kumar - CEO, Astroware; building autonomous systems to red-team, defend and align frontier models.
Dr Atif Mohammed Ghani SFHEA FRSA - interdisciplinary producer, innovator, and inclusive educator. Innovate UK XR Educator Winner 2025. Lecturer, Royal College of Art.
We are still looking for members of the steering group - so if you are an expert in AI Safety and would like to get on board, please get in touch!
RISK 1 – FAILURE TO REACH A SIGNIFICANT NUMBER OF YOUTUBE VIEWS
Cause: For a variety of reasons (e.g. limited public interest, lack of marketing budget, or difficulty finding a suitable partner YouTube channel), the project may not reach a significant number of views.
Outcome: Reduced visibility and less meaningful data on how viewers engage with and respond to the project.
Mitigation: As soon as the project is greenlit, we will start looking for a suitable partner channel with the largest possible subscriber base. We will also reach out to influencers, bloggers, and others who may be interested in or sympathetic to the project - especially given its novelty - to help drive traffic and awareness.
RISK 2 – FAILURE TO FIND A SUITABLE PARTNER YOUTUBE CHANNEL
Cause: We may fail to find a suitable partner YouTube channel, for a variety of reasons (e.g. lack of interest in the subject, or reluctance to experiment with new interactive formats).
Outcome: Reduced reach and limited data on audience engagement.
Mitigation: Begin outreach as soon as the project starts. Aim to get introductions from the YouTube creator community to identify suitable partners early (e.g. 80,000 Hours, AI in Context, Future of Life Institute (FLI), Centre for AI Safety (CAIS), ColdFusion, Computerphile, etc.). We expect target channels to be more confident and enthusiastic once they can see a finished version in action rather than just a concept.
RISK 3 – REDUCED BUDGET
Cause: Project proceeds at less than the full $24,375 but above the minimum of $13,360, resulting in less time and fewer resources for production.
Potential outcomes would depend on the budget level achieved, but at $13,360:
The 200-person controlled trial would need to be scaled down.
Production days would be cut drastically, giving less time for AI image and video generation, refinement and continuity checking.
The number of feedback and revision rounds would fall from 10+ to 3.
All data would still be published, but without a full statistical report from Dr Juemin Xu.
The published record would be a simpler “cut-and-paste” presentation of raw data and summary observations.
Mitigation: Focus on completing a clear, fully functional version of the film and collecting core engagement data.
RISK 4 – LOW VIEWER ENGAGEMENT WITH INTERACTIVE ELEMENTS
Cause: Audiences may prefer to watch passively rather than make interactive choices.
Outcome: Lower-than-expected participation.
Mitigation: Use clear on-screen prompts, strong narrative motivation for each choice, and user testing to optimise timing and clarity.
If this still occurs, the finding remains valuable for future AI safety communicators, who will know to avoid this approach.
RISK 5 – PRACTICAL PRODUCTION ISSUES
Cause: Inconsistent AI-generated visuals or unclear choice points.
Outcome: Lower perceived quality and reduced viewer comprehension.
Mitigation: Maintain strict prompt discipline, continuity checks, and look-development standards, supported by iterative user testing.
Overall, even if one or more of these risks occur, all data, methods and analysis will be published openly. The project will therefore still succeed as an experiment by providing clear guidance for future AI safety educators and filmmakers.
FUNDING RAISED SO FAR
$0 (new project).