Manifund
Grow An AI Safety Tiktok Channel To Reach Ten Million People

Technical AI safetyAI governance

Michaël Rubens Trazzi

Active Grant
$12,400 raised
$40,000 funding goal


Project summary

In the past month, I have been posting daily AI Safety content on TikTok and YouTube, reaching more than 1M people.

This grant would pay for my time so I can keep posting daily content on TikTok and YouTube until the end of the year (20 weeks left). If I receive less than my target funding, I will work proportionally to how much funding I get (e.g., 5 weeks if I get $10k).

Why this matters: Short-form AI Safety content is currently neglected: most outreach targets long-form YouTube viewers, missing younger generations who get their information from TikTok. With 150M active TikTok users in the UK and US, this audience represents massive untapped potential for the talent pipeline (e.g., Alice Blair, who recently dropped out of MIT to work at the Center for AI Safety as a Technical Writer, exemplifies the kind of young talent I'd want to reach).

What impact I am planning to get:

  • Base case: Maintaining momentum from the past four weeks (1.3M views on YouTube + TikTok, i.e., 325k views/week) for 20 weeks would yield 6.5M views by the end of the year.

  • Best case: Maintaining momentum from the past two weeks (1M views on YouTube + TikTok, i.e., 500k views/week) for 20 weeks would yield 10M AI Safety views by the end of the year.
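The projection arithmetic above can be sanity-checked with a quick sketch (the weekly rates are derived from the view counts quoted in the two scenarios):

```python
# Sanity check for the base-case and best-case view projections.
WEEKS_LEFT = 20

# Base case: 1.3M views over the past four weeks.
base_weekly = 1_300_000 / 4            # 325k views/week
base_total = base_weekly * WEEKS_LEFT  # projected views by year end

# Best case: 1M views over the past two weeks.
best_weekly = 1_000_000 / 2            # 500k views/week
best_total = best_weekly * WEEKS_LEFT

print(f"Base case: {base_total / 1e6:.1f}M views")  # Base case: 6.5M views
print(f"Best case: {best_total / 1e6:.1f}M views")  # Best case: 10.0M views
```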

Below is TikTok's performance from Jul 14 to Aug 10:

What are this project's goals? How will you achieve them?

Project Goals:

  1. Reach 6.5-10M views by the end of the year (as outlined in summary above)

  2. Build an engaged audience of 15,000+ followers through an ecosystem approach: publish a mix of fully-safety content, partly/indirectly-safety content, and "AI is a big deal" videos to create a funnel where viewers progressively engage with AI safety ideas. Convert the most engaged viewers (those who visit my profile and watch pinned videos) into concrete actions through CTAs and links in bio (e.g., to aisafety.com). See comment for more details.

How I'll achieve them:

  1. Post 1-3 clips daily across TikTok and YouTube

  2. Focus on AI-Safety-related interviews, such as Geoffrey Hinton, Sam Altman, Ilya Sutskever, Tristan Harris, Eliezer Yudkowsky, etc.

  3. Post clips quickly after the source interviews appear online, so they get pushed by the algorithm

How will this funding be used?

This funding will be used to pay for my salary.

$40k for 20 weeks of work means $2k per week, which corresponds to a ~$100k/year salary, i.e., the opportunity cost of going back to working as an ML engineer in France. This enables me to work full-time on this project productively.

Essentially, every $2k pays for one week of work, which (in the best case) translates to ~500k AI Safety views, i.e., about $4 per 1,000 views. In comparison, running ads on TikTok would cost $5-15 per 1,000 views, and even then viewers would be much less engaged.
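A quick sketch of the cost-per-view comparison above (the ad-price range is the one quoted; the best-case weekly views come from the earlier projections):

```python
# Cost per 1,000 views of this grant vs. running TikTok ads.
weekly_pay = 40_000 / 20     # $2k per week of work
weekly_views = 500_000       # best-case views per week

cost_per_1k = weekly_pay / (weekly_views / 1_000)
print(f"Grant: ${cost_per_1k:.0f} per 1,000 views")  # Grant: $4 per 1,000 views

ad_low, ad_high = 5, 15      # quoted TikTok ad prices, $ per 1,000 views
print(f"Ads:   ${ad_low}-${ad_high} per 1,000 views")
```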

Note: If I get less than my target funding, I will work proportionally to how much funding I get (say 5 weeks if I get $10k).

Who is on your team? What's your track record on similar projects?

Team: Michaël Trazzi.

Track record:

  1. Growing my AI Safety TikTok to 1M views in the past month (with a single clip reaching 500k views) and my YouTube to 470k views (lifetime).

  2. Some examples of clips that have performed especially well on TikTok over the past month:

    1. Tristan Harris on Anthropic's blackmail results (150k views)

    2. Ilya Sutskever on AI being able to do all of human jobs, and making sure artificial superintelligences are honest (152k views)

    3. Daniel Kokotajlo on what happens in fast AI takeoff worlds "It's going to hit humanity like a truck" (54k views)

  3. I've recently edited two AI-safety-related short-form videos (1, 2) for another content creator, which ended up being the most-watched videos of the entire channel by a large margin (3-4x more views than all the other videos)

  4. Directed the SB-1047 documentary (website), which involved working with and learning from ~4 seasoned video editors for ~6 months.

What are the most likely causes and outcomes if this project fails?

The most likely causes of this project reaching fewer people than my target would be:

  1. Some weeks happen to have less interesting content to make clips about than recent weeks did. Answer: If this is true for some weeks, I expect other weeks will have more content to make clips about than average, which should at least balance it out. In practice, I expect that as we approach the end of the year, people will start talking more about AI, not less, so there will be more clips to make on average.

  2. The algorithm does not push my videos as much as it has for the past two weeks. Answer: One reason would be that TikTok starts pushing content discussing AI less. However, people's personal experience with AI is increasing, and with AGI / superintelligence clearly inside the Overton window thanks to content like AI 2027, the potential audience for these videos should grow, and the algorithm should push them more as people engage. Another reason would be that I somehow get shadow-banned or similar. In that case I could create a new account or transition to other platforms like WeChat or similar.

How much money have you raised in the last 12 months, and from where?

In the past 12 months I have raised $143k for the SB-1047 documentary (see post-mortem here). The funding was almost entirely from a previous Manifund grant. $20k came from the Future of Life Institute.

Comments (15) · Donations (5) · Similar (8)

Michaël Rubens Trazzi

about 7 hours ago

Update Aug 13:
- Corrected my projections to be more accurate and conservative: now targeting 6.5M-10M views by year end (I had initially made a math error where things were off by a factor of two).
- I've posted some more thoughts on LW / EAF regarding where I'm expecting most of the impact to be, expanding on what I call "progressive exposure".


Marcus Abramovitch

about 14 hours ago

Seems interesting though I am somewhat questioning the funding ask here. Is this really full time work that commands a full time 6 figure (annualized) salary to post 1-3 clips a day? And assuming you believe in AI safety sufficiently, if you only get $12.4k (current amount as of this comment) in funding, are you going to just quit posting in 6 weeks and a day?

FWIW, I don't want to single you out, I have this kind of critique of many, many people doing AI safety work but this just seems like a striking example of it.


Michaël Rubens Trazzi

about 7 hours ago

@MarcusAbramovitch Thanks for the questions. Let me address both points:

On the work involved: I spend 5-6 hours a day going through multiple podcasts to find the very best clips, most of which don't end up being posted. There's also editing/uploading work (2-3 hours) on top of that which is hard to see (adapting things from vertical to horizontal by scaling, repositioning, and changing backgrounds; iterating on different captions; fixing the audio; fixing subtitles; upscaling; uploading to different platforms; checking on different devices). It definitely adds up to a full day of work, especially as I do more clips.

On the six-figure (annualized) salary: I've spent some time thinking about how much money to ask for my time on this grant. An important phrase above is "work full-time on this project productively". I did consider other amounts that would mean basically only paying bills and nothing else, but I don't think that would have been sustainable or helpful for making this project go well.

To give more context, last year I made the mistake of under-budgeting on my salary for the SB-1047 documentary (see post-mortem here), which meant that I basically paid myself for only 2 of the ~8-9 months I spent on this. One lesson I learned from this is that compensating yourself for your time is not just a cherry on top after you have everything else figured out, but something necessary to work on something productively for extended periods of time.

> And assuming you believe in AI safety sufficiently, if you only get $12.4k (current amount as of this comment) in funding, are you going to just quit posting in 6 weeks and a day?

After 6 weeks of full-time work, I'd evaluate options to maximize the project's continued impact: transitioning to part-time while fundraising, mentoring someone to continue, or documenting my process for others to pick it up.

I appreciate the clarifying questions, let me know if you need anything else.


Cian

34 minutes ago

@MarcusAbramovitch Yeah, I also like the idea, but $100-$300 per tiktok clip seems weirdly expensive. If you drop this due to insufficient funding I hope someone picks up the idea as a hobby

donated $200

Nathan Metzger

1 day ago

The generality of this approach is a positive, since public awareness of AI risk itself is likely a prerequisite of good AI policy, which is likely a prerequisite of safe AI development.

donated $8,000

Neel Nanda

2 days ago

Seems like an interesting project, and impressive reach. What kinds of messages/calls to action do you hope to broadcast?

Also, presumably there's a typo above and you mean $10K for 5 weeks, not 10?


Jesse Richardson

2 days ago

Seconded -- I am interested in this project but want to hear more about what outcomes you hope to achieve from an expanded audience

donated $200

Nathan Metzger

2 days ago

I agree. Awareness is good in general, but some of the most watched clips don't really touch on AI Safety, and none of them have calls to action. ("Learn More Here," "Share This," "Call your representatives," etc.)


Michaël Rubens Trazzi

1 day ago

@NeelNanda Yes that was a typo, fixed it!

Regarding messages and outcomes (cc @NeelNanda, @Jesse-Richardson and @Haiku), see below my strategy which includes a diagram summarizing the approach (also included in the main proposal):

  1. Messages: my goal is to promote content that is fully or partly about AI Safety:

    1. Fully AI safety content: Tristan Harris (176k views) on Anthropic's blackmail results summarizes recent AI safety research in a way that is accessible to most people. Daniel Kokotajlo (55k views) on fast takeoff scenarios introduces the concept of automated AI R&D and related AI governance issues. These show that AI Safety content can get high reach if the delivery or editing is good enough.

    2. Partly / Indirectly AI safety content: Ilya Sutskever (156k views) on AI doing all human jobs, the need for honest superintelligence and AI being the biggest issue of our time. Sam Altman (400k views) on sycophancy. These help with general AI awareness that makes viewers receptive to safety messages moving forward.

    3. "AI is a big deal" content: Sam Altman (600k views) talking about ChatGPT logs not being private in the case of a lawsuit. These videos aren't directly about safety but establish that AI is becoming a major societal issue.

The overall strategy here is to prioritize posting fully-safety content that has the potential to have high reach, then go for the partly / indirectly safety content that walks people through why AI could be a risk, and sometimes post some content that is more generally about AI being a big deal, bringing even more people in.

  2. Outcomes: Rather than adding calls-to-action at the end of videos, which makes videos much less likely to reach a lot of people on TikTok (viewers exit instead of re-watching) and is quite uncommon on TikTok compared to YouTube, especially for clips, I'm expecting the outcomes to be:

    1. Engagement / Following: about 50k people (3-4%) have engaged with the content (shares, likes, comments, follows). I expect that people who engaged will continue seeing my content in the future (because TikTok will push it). In some cases, they will engage more and more with the content that is directly about safety, and eventually integrate into the broader AI Safety ecosystem (to a certain degree).

    2. Profile clicks: About 0.5% of viewers click on the channel's profile (I've received 5k+ profile views). The two outcomes from that are:

      1. Watching the pinned videos: 4k of the views on the 3 pinned videos came from these 5k profile clicks, meaning a large fraction of those who click on the profile watch the pinned videos. In the future, one of these pinned videos could carry a strong CTA that directly leads to outcomes we care about around informing the public / representatives about AI Safety, similar to this one, which had a very high conversion rate in getting viewers to take action.

      2. Clicking on the link in bio: so far I don't have a clickable link, but I plan to link to e.g. aisafety.com to redirect viewers to resources for learning more about AI Safety.

    3. Progressive exposure: Most people who eventually work on AI safety needed multiple exposures from different sources before taking action. Even viewers who don't click anywhere are getting those crucial early exposures, which add up over time.

donated $8,000

Neel Nanda

1 day ago

Gotcha, thanks! @michaeltrazzi

That seems a pretty reasonable plan and you've gotten good reach. I'm not confident this is a good idea, but I think that's plausible and more value of information here would be good, so I've donated another month's worth. Good luck!


Michaël Rubens Trazzi

1 day ago

Thanks @NeelNanda !


Jesse Richardson

about 21 hours ago

Thanks for sharing! My other question is how much time you're spending on this a week? Is the TikTok + YouTube stuff roughly a full-time job at the moment?


Michaël Rubens Trazzi

about 7 hours ago

@Jesse-Richardson Yes it's full-time.

I wrote down more details in my answer to Marcus here.

donated $200

Andrew G

2 days ago

Seems like a very promising approach!

donated $2,000

Brenton Milne

2 days ago

Great plan. Donated!