Elizabeth and Timothy Podcast on values in Effective Altruism (Funded)

Science & technology · EA community

Elizabeth Van Nostrand

Active Grant
$7,810 raised of a $13,045 funding goal

Project summary:

Elizabeth and Timothy create a podcast about truthseeking, values drift, and a potential EA Renaissance (or diaspora).

Edited to add: we have enough pledged donations to meet our goal of 5 episodes. Small donations are still emotionally meaningful, but not necessary at this time.

What are this project's goals? How will you achieve them?

Short term (up to 5 episodes): shed light on truthseeking, integrity, and group intelligence within Effective Altruism via podcast discussions. Timothy hopes to use this to help heal EA; Elizabeth has given up on that and hopes to find more like-minded people.

Long term: if successful, we'd like to expand to other topics that catch our interest and seem valuable to discuss publicly. These could include epistemic norms, hope for an EA renaissance or splinter group, group intelligence, building productive cultures that are good for the people in them, etc.

How will this funding be used?

Pay for both people’s time, equipment (e.g. microphones), software (e.g. Descript for editing), and services (currently just Elizabeth’s editing and research but these could be hired out in the future).

Who is on your team? What's your track record on similar projects?

Elizabeth Van Nostrand: known for science writing and epistemic spot checks at AcesoUnderGlass.com; has received grants from and done work for a variety of EA orgs. She wrote a year-plus blog sequence outlining problems with truthseeking within EA. The sequence received many compliments, but there is no concrete evidence of the kind of change she cares about.

Timothy Telleen-Lawton: has worked at a variety of EA orgs, and has been a member of Effective Altruism since 2013 and East Bay Rationality since 2016.

Our first episode has been released here. Discussion here.

What are the most likely causes and outcomes if this project fails?

The information falls on deaf ears, or generates a lot of heat and no light.

In earlier years there might have been a risk of inflaming tensions; that market now seems saturated, and one more source is unlikely to make it worse.

How much money have you raised in the last 12 months, and from where?

Elizabeth has a <$500 Patreon that is currently per-post but will become per-month in the near future.

We are particularly interested in crowdfunding because the number of donors (independent of donation size) is a predictor of impact. However, if successful, we plan to fundraise from larger donors for larger projects as well.

Similar projects
Michaël Rubens Trazzi: Making 52 AI Alignment Video Explainers and Podcasts (EA Community Choice) · $15.3K raised

Elizabeth Van Nostrand: EVN General Support Application · $50K raised

David Moss: Experiments to test EA / longtermist framings and branding (EA community) · $26.8K raised
Testing responses to “effective altruism”, “longtermism”, “existential risk” etc.

David Thorstad: Reflective altruism (EA community) · $2K raised

Aidan Alexander: FarmKind - A donation platform creating new donors for effective animal advocacy (Animal welfare · EA Community Choice) · $2.58K raised

Jørgen Ljønes: 80,000 Hours (Technical AI safety · AI governance · Biosecurity · EA Community Choice · EA community · Global catastrophic risks) · $4.92K raised
We provide research and support to help people move into careers that effectively tackle the world’s most pressing problems.

John Sherman (Dads Against AGI Inc.): Funding For Humanity: An AI Risk Podcast (AI governance · Global catastrophic risks) · $20.2K raised

Liron Shapira: Doom Debates - Podcast & debate show to help AI x-risk discourse go mainstream (Technical AI safety · AI governance · EA Community Choice · Global catastrophic risks) · $1.24K raised
Let’s warn millions of people about the near-term AI extinction threat by directly & proactively explaining the issue in every context where it belongs.