Connor Axiotes
1 day ago
Updates:
We have secured interviews with two of the ‘Godfathers of AI’ - Prof. Geoffrey Hinton & Prof. Yoshua Bengio.
We have wrapped our first 'leg' of production. We are now back in London to plan the rest (and largest part) of our filming in the UK, US, and Canada. We still have around 10-12 interviews to film.
We now have a funding gap of just over $150,000. Please help us finish filming today by donating.
Below we have some stills from our last shoot.
Neel Nanda
3 days ago
I think this kind of academic field building is cheap and valuable, and I like the emphasis on practicality of the Actionable Interpretability workshop, so I've fully funded this (and invited Sarah to apply). I'd happily fund this at a higher level if you can accommodate more people; including top paper authors or other notable people would be great.
Maia Adar
4 days ago
I just made a Manifund account in order to donate to this! I think it's a great topic to gather more info about. I'd love to see your report summarized into a simple graphic so that the info can spread more easily.
Oliver Habryka
5 days ago
Post on MDMA. Multiple people have told me it convinced them not to use MDMA, or allowed them to convince others not to do so. Note that this post is 7 years old, and if I were writing it today it would be much more quantified.
This is true for me! I have had a bunch of friends over the years who considered doing MDMA, and the previous post was quite helpful for changing those people's minds on what drugs to take (which I think was in-expectation quite good for them). My guess is it prevented about 1.5 people in-expectation from doing MDMA at some point as a result of me linking to it.
Austin Chen
7 days ago
Approving this project. I'm excited that Manifund can help support more speculative and philosophical work that is generally neglected. I do expect that understanding decision theory better will serve us well as we move into weirder worlds; and on a brief skim, their decision theory benchmark seems promising. Thanks to Thomas and Lauren for funding this!
Alexandra Bos
9 days ago
Participants rated the program highly: on average, they estimated it accelerated their founding journey by ~11 months. At the end of (the online) Phase 1 of the program, 66% of participants indicated that their time in Phase 1 was 3-10x or 10x+ as valuable as how they would otherwise have spent it; at the end of Phase 2 (in-person), 85% of participants indicated this.
Please find an overview of the organizations incubated in the program here: https://www.catalyze-impact.org/post/introducing-11-new-ai-safety-organizations-catalyze-incubation-program-cohort-winter-2024-25
To highlight some examples, these are three promising organizations that came out of our Nov-Feb '25 incubation program pilot:
• Luthien: Developing Redwood's AI Control approach into a production-ready solution. Founded by Jai Dhyani, an experienced ML engineer (Meta, Amazon) and MATS 6.0 graduate, where he worked with METR. Within its first two months, Luthien has already secured nearly $190k through our Seed Funding Circle.
• Wiser Human: a non-profit modeling AI threats for agentic use cases and producing compelling demos to hold AI developers accountable to their safety commitments. Co-founded by Francesca Gomez, who worked in digital risk management for many years and has a background in AI, and Sebastien Ben M'Barek, an experienced digital risk management professional with a software engineering and product management background. Wiser Human has received $15k in donations from our Seed Funding Circle.
• Coordinal Research: a non-profit accelerating technical AI safety agendas with research automation. Co-founded by Ronak Mehta, a CS postdoc and MATS 6.0 graduate, and Jacques Thibodeau, a former data scientist, MATS graduate, previous founder, and independent alignment researcher focused on automating alignment research. Coordinal has secured $110k in seed funding through members of our Seed Funding Circle.
Please find a few of the testimonials from program graduates below:
Jai Dhyani (Luthien): “Catalyze gave me the structure, information, and connections I needed to make Luthien a reality. When I started I had no idea how to build a company or a non-profit, but by the end of Catalyze I not only felt confident in my ability to get started, I was (and remain) optimistic that I will actually succeed in making a meaningful difference. Within three months of the end of the program I had over a year of runway and was well on my way to deploying an MVP.”
Cecilia Callas (AI safety comms organization): “Participating in Catalyze Impact was completely transformational for my career journey into AI Safety. (...) being immersed in a community of like-minded AI safety entrepreneurs and having access to advisors helped my co-founder and me to be much more successful, and much more quickly. (...) Within a few months of the Catalyze program concluding, we have secured seed funding for our AI safety communications project, have a clear direction for our organization and, perhaps most importantly, have affirmed that we can build careers in AI Safety.”
Francesca Gomez (Wiser Human): “The Catalyze Impact AI Safety Incubator really helped get our AI Safety work off the ground. Weekly sessions with the team and Catalyze’s group of mentors, domain experts in AI Safety, gave us first‑hand, candid feedback that really sharpened our thinking, which would not have been possible to do outside of the programme. By the time the cohort wrapped up, we had mapped a roadmap, secured initial seed funding, and produced the materials that later underpinned our larger grant applications. Another big benefit for us was how Catalyze plugged us straight into the London AI Safety ecosystem. (...) the sense of accountability and the ongoing flow of expertise continue to be invaluable as we grow.”
Ronak Mehta (Coordinal Research): “The Catalyze program was integral to the foundation of Coordinal Research. The mentorship, networking, and co-founder matching all directly contributed to the organization's founding. Having a dedicated, full-time commitment and space for 1) learning how to build an organization, 2) building out proofs of concept, and 3) networking with AI safety researchers, funders, and other founders was necessary, valuable, and fun, and I cannot imagine a scenario where Coordinal would exist without Catalyze. Learning what it takes to build a new organization alongside like-minded founders dedicated to AI safety was so valuable, in a way that typical startup incubators couldn't provide. The accountability felt extremely genuine, with everyone seriously considering how their organization could effectively contribute to AI safety.”
We spent the ~$16k we raised here primarily on salaries and runway before getting the pilot program funded, as outlined in the comments to this grant.
Thomas Larsen
9 days ago
Clarification: this work doesn't get funded by Good Ventures. OP may still recommend grants of this type to non-Good Ventures donors. In practice, this means the pool of possible funding is still much smaller, so I think the argument stands.
Thomas Larsen
10 days ago
I think this is very promising. This team seems to have some of the people who have the clearest thoughts in the world about acausal interactions. I've asked several people who I trust a lot in this space and gotten universally positive references about the team.
My main concern is that thinking about acausal interactions is extremely difficult (meaning that zero progress is somewhat likely) and sign uncertain (so, even if they did make progress, it's not clear this would be net helpful). Overall, my view is that it still seems good to have some people working on this, and I trust this team in particular to be thoughtful about the tradeoffs.
Also, this type of work doesn't get funded by OP.
Jai Dhyani
11 days ago
@Austin Luthien got a grant from SFF! Is there anything I can do now to facilitate the fiscal sponsorship stuff?
Jaeson Booker
11 days ago
@RyanKidd I would be interested in regranting for Manifund, but do fear that it might steer me toward looking good in the eyes of the other regranters, and away from funding what might actually be most needed. I am open to it, though.
Jaeson Booker
11 days ago
@RyanKidd I've actually already reached out to ARM and they were encouraging of the idea of creating new funds for AI Safety, separate from ARM. I had also mentioned a collaboration of some kind, but they said they are focused on figuring out the strategy for the fund in the coming months, and to reach out again at that point.
Ryan Kidd
11 days ago
@JaesonB, if I were in your shoes, I would prioritize these things in order:
1. Talk to ARM and ask them how they are capacity constrained, then help alleviate that constraint.
2. If ARM is constrained by hiring great grantmakers, help them build a great hiring pipeline.
3. If ARM is constrained by funding, build a TLYCS for AI safety, to encourage mass donations to ARM.
4. If ARM is constrained by unfixable factors (e.g., no one is driving it and they refuse help), first try cutting your teeth via regranting on Manifund. If this goes well, it doesn't seem crazy to set up another fund. Note that many early grantmakers for EA Funds also worked at GiveWell or Open Philanthropy.
Ryan Kidd
11 days ago
I think there's a gap for an organization like The Life You Can Save (TLYCS) for AI safety, which would encourage donations and pledges to top charities, but I don't really see how your proposed fund is a better alternative to scaling the AI Risk Mitigation Fund (ARM) or Manifund. If the argument is "we bring in additional donors because of our non-longtermist affiliation", the same could be said for ARM and Manifund. If the argument is "we add additional grantmaker capacity", I would counter "why can't these same grantmakers join ARM or just use Manifund?" (Possible answer: maybe ARM's bar is too high and Manifund isn't an attractive target for donations.) Basically, I think we do need more funders in the medium-to-long term, but the experience and reputation of the ARM grantmakers are, on average, much higher than those of your proposed grantmakers, and Manifund already exists as a short-term regranting solution that I would rather grow. To be clear, I don't want this project to not happen at all, but I would rather it be rescoped as something closer to TLYCS, as there is a much larger gap there than for another, less experienced funder constrained by the same things as existing funders.
Ryan Kidd
11 days ago
@JaesonB, I also fear those capacity constraints. I'm curious why you think the solution is a separate fund, rather than alleviating the capacity constraints of an already-proven fund? Additionally, why wouldn't exactly the same constraints (e.g., money, competent grantmakers) bottleneck your fund?
Jaeson Booker
12 days ago
@RyanKidd To my knowledge, they're still setting up or determining their next steps for the fund. Hopefully, it goes well, but I fear similar capacity constraints to LTFF.
Daniel Kokotajlo
12 days ago
Very excited about this project! I'm donating $100k, not through Manifund but directly. For more on my pitch, read this: https://www.lesswrong.com/posts/APfuz9hFz9d8SRETA/my-pitch-for-the-ai-village