Manifund

Funding requirements

  • Sign grant agreement

  • Reach min funding

  • Get Manifund approval

Fund a Fellow for the Cooperative AI Research Fellowship!

Science & technology · Technical AI safety · AI governance · Global catastrophic risks

Leo Hyams

Proposal · Grant
Closes October 30th, 2025
$30 raised
$500 minimum funding
$225,500 funding goal


27 days left to contribute

You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.


Project Summary

The Cooperative AI Research Fellowship is a three-month research program running from January to April 2026. It will connect a global cohort of fellows with top researchers from Google DeepMind, MIT, Carnegie Mellon University, and Oxford, among others. Over 1,000 candidates have applied (the application deadline is October 1, 2025), including top African talent such as DeepMind scholars and authors at leading venues like NeurIPS, ICLR, and ICML, as well as global talent from institutions such as Oxford, CHAI, and Stanford.

This program provides research mentorship and professional development while building Cape Town's AI safety research ecosystem. Fellows will contribute to research domains identified by the Cooperative AI Foundation while developing skills and networks in neglected areas of AI safety. The program operates through partnerships with the University of Cape Town's African Hub for Safety, Peace, and Security, creating pathways for both African and international researchers into the emerging Cape Town hub.

This fellowship is a partnership between AI Safety South Africa (AISSA, program operations), the Cooperative AI Foundation (CAIF, research direction and mentor network), the University of Cape Town AI Initiative (institutional infrastructure and research positions), and the Principles of Intelligent Behaviour in Biological and Social Systems (PIBBSS, operational support).

Fellows will work on topics aligned with CAIF's research objectives, including multi-agent safety, AI for facilitating human cooperation, and mitigating gradual disempowerment. These topics will be the program's main focus, though funders can also support fellows working on alternative focus areas under the wildcard track.

Project Goals

Develop AI safety research capacity

  • Channel talent into research domains identified by CAIF

  • Enable fellows to develop research skills, build professional networks, and gain expertise in neglected areas of AI safety

  • Support growing demand for AI safety researchers at African institutions (University of Cape Town, University of the Witwatersrand, Stellenbosch University)

Establish Cape Town as a global AI safety hub

  • Create a cost-effective alternative to UK/US hubs with high quality of life

  • Provide pathways for African talent into AI safety research

  • Offer options for international researchers facing restrictive visa requirements

  • Coordinate responses to AI risk in Africa through partnerships with ILINA, UCT, and the Global Centre on AI Governance

  • Build visibility for Cape Town's AI safety ecosystem

Shape the African AI safety research agenda

  • Connect the emerging Cape Town research network with international mentors

  • Inform the research direction of the African Hub for Safety, Peace, and Security through this program's research outputs and network

Funding Usage

Each fellow costs around $20,500 to fully fund. The program currently has funding for four fellows and has capacity for up to 11 additional fellows; an ideal cohort size is 12 to 15 fellows.

This covers the following line items (% of total amount):

  • Fellowship stipends (46%)

  • Accommodation (10%)

  • Research manager salaries (9%)

  • Travel support (5%)

  • Additional operational overhead (4%)

  • Meals (4%)

  • Retreat (3%)

  • Compute (2%)

  • Office space (2%)

  • Memorabilia (1%)

Additionally, it includes:

  • Fiscal sponsorship cost (8%)

  • Buffer (6%)
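As a rough sanity check on the figures above, the numbers are internally consistent: the $225,500 funding goal is exactly 11 additional fellows at $20,500 each, and the listed line items sum to 100%. The sketch below (dollar amounts per line item are inferred by applying the stated percentages to the funding goal, not taken from the full budget) verifies this:

```python
# Sanity check on the fellowship budget figures.
# Assumes the $225,500 funding goal corresponds to the 11 additional fellows.

COST_PER_FELLOW = 20_500
ADDITIONAL_FELLOWS = 11
FUNDING_GOAL = 225_500

# Line items as a percentage of the total, per the breakdown above.
line_items = {
    "Fellowship stipends": 46,
    "Accommodation": 10,
    "Research manager salaries": 9,
    "Travel support": 5,
    "Additional operational overhead": 4,
    "Meals": 4,
    "Retreat": 3,
    "Compute": 2,
    "Office space": 2,
    "Memorabilia": 1,
    "Fiscal sponsorship cost": 8,
    "Buffer": 6,
}

# The funding goal matches 11 fellows at the per-fellow cost.
assert COST_PER_FELLOW * ADDITIONAL_FELLOWS == FUNDING_GOAL

# The line-item percentages account for the full budget.
assert sum(line_items.values()) == 100

# Approximate dollar allocation per line item at the full goal.
for name, pct in line_items.items():
    print(f"{name}: ${FUNDING_GOAL * pct // 100:,}")
```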

A full budget breakdown is available on request: email leo@aisafetysa.com.

Team & Track Record

Operations Team

Leo Hyams (Founder & Executive Director, AI Safety South Africa)

  • Supported at least 10 people in making major career pivots into AI safety, including a software engineer at BlueDot Impact, a MATS fellow, and authors of AI safety papers published at AAAI and NeurIPS.

  • Led a team of four to complete four bounties for the UK AI Security Institute's Bounty program.

  • Lead author on "Precursors, Proxies and Predictive Models for Long-Horizon Tasks," a NeurIPS 2025 workshop paper.

  • UK AISI Challenge Fund 2025 awardee for Science of Evals research.

Imaan Khadir (Operations Generalist, AI Safety South Africa)

  • Director of Effective Altruism South Africa, lead organizer for the EA SA Summit in 2024.

  • Holds a Master's in Development Policy and Practice from the University of Cape Town's Nelson Mandela School of Public Governance.

  • Led core operations for two AI Safety Fundamentals Courses hosted by AI Safety South Africa with 100+ participants each.

  • Previously an AI Governance Teaching Fellow for BlueDot Impact.

  • Don Lavoie Fellow at the Mercatus Center.

Tegan Green (Event Curator, AI Safety South Africa)

  • Organizer for ACX Cape Town.

  • Facilitator for BlueDot Impact.

  • Completed the Intro to Cooperative AI Course.

  • Extensive project management experience in the film industry.

  • Previously a UX Designer.

University of Cape Town Affiliates

Claude Formanek (PhD student, University of Cape Town)

  • CAIF PhD Fellow.

  • Research Engineer at InstaDeep.

  • Author of papers published at NeurIPS and ICLR.

Associate Professor Jonathan Shock (Director, University of Cape Town AI Initiative)

  • Co-PI of the African AI Hub on Safety, Peace, and Security.

Advisory Committee

  • Dušan D. Nešić (Operations Director, PIBBSS)

  • Cecilia Elena Tilli (Associate Director, Cooperative AI Foundation)

  • Lewis Hammond (Research Director, Cooperative AI Foundation)

  • Benjamin Sturgeon (Strategic Director, AI Safety South Africa)

Project Risks

The most urgent risk is the current cohort size. There is funding for four fellows, which is too small to achieve the network benefits typical of AI safety fellowships. The more fellows on the program, the more valuable the network created, and the more support each fellow receives from their colleagues. This is particularly important given that a primary aim is creating an international network of researchers connected to Cape Town. A cohort of 12-15 fellows would be ideal for balancing operational capacity with network effects.

Beyond cohort size, the program has secured excellent mentors, candidates, venues, and support staff, which mitigates risks associated with a new fellowship in a geographically distant location. The program also has experienced advisory staff to support execution.

Recent Funding Track Record

This project has secured $30,000 from the AI Safety Tactical Opportunities Fund. The Cooperative AI Foundation has pledged $100k conditional on securing an additional $35k, though this condition will likely be waived (currently under discussion).

AI Safety South Africa has received $117k from Open Philanthropy for general organizational support and recently won a Challenge Fund grant from the UK AISI for $134k for Science of Evals research.
