Shreeda Segan
We know what works. Let's make sure it looks good.
Oliver Habryka
Funding for LessWrong.com, the AI Alignment Forum, Lighthaven and other Lightcone Projects
Effective Altruism New Zealand
Zian Marie V. Bonoan
Your contribution makes a difference at EA Philippines!
Apart Research
Incubate AI safety research and develop the next generation of global AI safety talent via research sprints and research fellowships
Dusan D Nesic
Free, subsidized, or cheap office space outside the EU but in good timezones, with favorable visa policies (especially for Chinese and Russian citizens, but also others).
David Moss
Testing responses to “effective altruism”, “longtermism”, “existential risk” etc.
Luan Rafael Marques de Oliveira
Support to translate BlueDot Impact’s AI alignment curriculum into Brazilian Portuguese, for use in university study groups and an online course
David Thorstad
Vipul Naik
DLW is a compilation of EA donation data across funders.
Jeroen Willems
A Happier World explores exciting ideas with the potential to radically improve the world. It discusses the most pressing problems and how we can solve them.
David Federico Rivadeneira
Hyams
Request for retroactive funding
Louis S. Berman
AI-Risk Education for Politicians
Netiwit Chotiphatphaisal
Translating utilitarianism.net content and The Life You Can Save into Thai
LAUREL
Organizing a global AI ethics think tank providing dynamic AI research updates, a framework for implementing AI safety policies, and humanity income support
James Boswell
Funding Round #2
Siao Si Looi
12 months of funding for 3 people to work full-time on projects supporting AI safety efforts
Greg Colbourn
A centre hosting EAs’ research projects and upskilling, at ~1/3 of the average cost of funding such projects remotely
UtiliThai is a transformative two-year initiative in Thailand, focused on introducing utilitarianism, effective altruism, and longtermism to general Thai audiences