@78d6a404-3d30-48c8-b734-3187ca84a46a Thanks so much for your contribution!!
The market for grants
Manifund helps great charities get the funding they need. Discover amazing projects, buy impact certs, and weigh in on what gets funded.

The market for grants
Manifund helps great charities get the funding they need. Discover amazing projects, buy impact certs, and weigh in on what gets funded.

Leo Hyams
about 15 hours ago
@78d6a404-3d30-48c8-b734-3187ca84a46a Thanks so much for your contribution!!
Joshua Morgan
1 day ago
Stress tolerance experiments with the transgenes have revealed protective effects in one cell type and we are proceeding to test additional cell types. Further, we have developed and successfully engineered cells with the multicistronic expression vectors, resulting in expression of pairs of the genes in a single human cell.
Continue collecting stress tolerance data, including with the multicistronic expression vectors.
Not at this time, we are making steady progress and eager to share future results.
Jesse Richardson
2 days ago
I think the political side of AI safety is generally undervalued and underfunded and AIPN is doing really important work! I am impressed with the progress they have made so far, especially in getting members of Congress to take more seriously the prospect of catastrophic risks from AI.
Jonathan Samuel Claybrough
3 days ago
EffiSciences organized its first Biosecurity Bootcamp (24–29 March 2025) with support from Manifund. The goal was to seed a new generation of young professionals able to tackle global catastrophic biological risks, in a context where biosecurity remains marginal in France’s research and policy ecosystems.
We hosted 14 participants (ages 22–30) from 8 countries, with diverse academic backgrounds (biology 2, policy 4, physics/AI/engineering 7, medicine 1).
Recruitment channels: EA community 9/14, universities 1/14, LinkedIn 2/14, direct outreach 2/14.
The program combined 14 h of lectures, 8 h of workshops, and 13 h of applied projects, delivered by 3 facilitators (from Oxford, Charité Berlin, EffiSciences) and 2 guest speakers.
Immediate outcomes
11/14 participants completed a mini-project with concrete outputs:
Quantitative framework for assessing AI-related biorisks (now a LessWrong post).
Vaccine supply-chain resilience model → ongoing scientific paper.
Contribution to NTI’s Youth for Biosecurity challenge (published definition proposal).
Policy analysis on antimicrobial resistance and zoonoses (Turkey).
Redefinition of U.S. bioterrorism deterrence framework.
>80 % reported intent to pursue biosecurity-related careers; 3 have already shifted academic or professional trajectories.
Post-program impact (June 2025 update)
One alumnus became leader of “Nordics for Biosecurity” (≈100 members), coordinating global EA-aligned initiatives and job-board curation.
One participant began a biosecurity PhD at King’s College London (topic: bioterrorism).
Two teams continue work on AI-bio convergence and vaccine logistics modeling, now mentored by EffiSciences researchers.
Community & ecosystem impact
The event catalyzed the first grassroots biosecurity network in France, bridging EA circles and local universities (ENS, Sorbonne, AgroParisTech).
Created a persistent alumni group (11/14 still active), serving as mentors for future editions.
The Bootcamp materials are now ready for replication in 2026 (Latin America, UK, and Francophone Africa).
Category Amount (€) %
Venue (Utopia Campus) 4,150 70 %
Travel subsidies 700 12 %
Speaker honoraria & materials~0 0 %
Insurance 350 6 %
Food 800 13 %
Total 6 000 € 100 %
Cost per participant: ≈ 430 €, competitive with comparable EA fellowships.
Scope expanded to include AI–bio risk interfaces following recent discourse shifts.
One facilitator dropped out (replaced internally).
Participant diversity lower than planned (2 women / 14; 9 EA-affiliated). Future editions will target under-represented backgrounds via university partnerships.
Given the absence of pre-existing French biosecurity training, the Bootcamp generated high marginal returns per euro:
Created ≈ 10 new early-career trajectories toward biosecurity.
Built the only French-speaking EA-aligned biosecurity cluster.
Produced replicable curricula and open materials, enabling international diffusion.
Next steps (2026–27):
Scale to 3 regional editions with alumni as co-facilitators.
Formalize mentorship network and publish project outputs.
Seek co-funding (≈ 12 k€) to double participant count and gender diversity.
This project would not have reached its current scale or quality without Manifund’s decisive support. EffiSciences had initially secured only €5 000, covering minimal lodging and food costs, but leaving travel assistance and teaching compensation unfunded after an Erasmus+ grant fell through. The €1 000 complement from Manifund directly enabled 8 participants from low-income or distant regions (Eastern Europe, Turkey, North Africa) to attend by subsidizing their travel. Without this, at least 5 confirmed participants (≈35 %) would have withdrawn, substantially reducing both diversity and critical-mass effects.
Beyond direct participation, Manifund’s endorsement served as a credibility signal that unlocked further in-kind contributions (venue discount ≈ €400, volunteer teaching ≈ 40 h) and raised the visibility of biosecurity within the French EA and university ecosystems. In total, Manifund’s marginal funding increased the number of completed projects per € spent by ~40 %, and effectively converted sunk preparation efforts into a full-scale bootcamp rather than a reduced pilot. The intervention thus achieved an estimated cost-per-career-trajectory of €200–€250, comparing favorably with prior ML4Good benchmarks and demonstrating the high leverage of small, well-timed grants in emerging regional fields like biosecurity.
Contact: Erwan Sallard — EffiSciences Biosecurity Program
Website: www.effisciences.org
Caleb Biddulph
3 days ago
I chatted with Thane about his project in LessWrong DMs. Thane's research is like a more theory-focused version of mine, which is what drew my interest to it.
It seems like Thane is approaching things with a rigorous and interesting mindset. I like the idea of building a non-agentic and fully interpretable world model, and if this idea is viable at all, Thane seems smart and like a good person to work on it, based on reading his work and having conversations with him. (However, this assessment has fairly weak confidence, since I have limited taste in theory research, and generally am a pretty junior researcher.)
My main consideration against funding is that I'm a bit skeptical of theory as opposed to empirical research. (So if you are bullish on theory, this project should be even more promising for you.) But as far as theory goes, this project seems like a good one, and Thane plans to do empirical work as well. It's my hope that this empirical work will lead to legible and exciting results, leading to more people paying attention to Thane's research and making sure he gets comfortably funded.
It seems like it should be worth at least the $40,000 per year needed for Thane to continue to work full-time on this. I certainly hope LTFF funds him, and hope that my donation can be useful as additional financial cushioning.
Francesca Gomez
4 days ago
I’m excited to share that Part 1 of our project is now complete: our empirical study testing mitigations for agentic misalignment across 10 models and 66,600 trials, using Anthropic’s Agentic Misalignment scenario framework (Lynch et al., 2025).
We designed and tested controls, adapted from insider-risk management, that steer AI agents toward escalation under stressors such as autonomy threats, substantially reducing blackmail across ten models without retraining or fine-tuning.
Key findings:
Controls adapted from insider-risk management significantly reduced blackmail rates across all ten models, though not entirely.
Escalation channels and compliance cues steered agents toward safe, compliant actions without altering base model weights.
Because these mitigations generalised across model families, they may form a low-cost, model-agnostic defence that reduces the number of harmful actions needing to be caught by monitoring.
The study also surfaced new failure modes and biases detectable only through cross-model and counterfactual analysis.
We believe that environment shaping, where agents act to preserve autonomy or goal achievement over longer time horizons, is a credible threat model requiring deeper study.
📄 Research page: https://www.wiserhuman.ai/research
✍️ Blog summary: https://blog.wiserhuman.ai/p/can-we-steer-ai-models-toward-safer
💻 Code and dataset: https://github.com/wiser-human-experimental/agentic-misalignment-mitigations/tree/public-mitigations-v1
📘 Paper (preprint): https://arxiv.org/abs/2510.05192
This is an early proof of concept, and we hope to explore further how steering controls can form part of a layered defence-in-depth approach.
Sheikh Abdur Raheem Ali
5 days ago
# EA Global NYC 2025 - Travel Grant Report
## Overview
Thanks to the $500 travel grant, I was able to attend EA Global NYC in October 2025. The grant provided partial funding for the trip, covering return bus tickets from Toronto. While total expenses exceeded this amount, the grant was essential - without this support, I wouldn't have been able to attend the conference.
## Activities and Outcomes
I scheduled 47 one-on-one meetings over the three-day conference, spanning technical AI safety research, policy, infrastructure, and organizational strategy.
### Research Outputs
The most immediate concrete outcome was a follow-up experiment on stated vs revealed preferences in LLMs. After discussing this topic with a researcher on Friday, I ran the experiment that evening and drafted a preliminary writeup. The setup investigates how models respond to token limits - initially accepting constraints while expressing frustration, then attempting to circumvent them, and finally adjusting behavior after negotiation. This is a relatively clean testbed for studying model preferences compared to more complex setups.
I also provided technical feedback to a researcher who recently received a $1M grant for evaluations work, and received feedback from others on my own experiments.
### Learning and Context Building
Key conversations included:
Technical infrastructure approaches at NDIF (National Deep Inference Fabric) and AE Studio for interpretability research.
Hardware-based AI safety mechanisms using trusted computing and attestation (cryptographic verification of what code is running).
Policy pathways through state legislatures for technology governance.
Organizational strategy at LISA (London Initiative for Safe AI) and their current priorities.
Approaches to scaling AI safety workshops and outreach to broader audiences.
AI security practices at major financial institutions.
Compassion in Machine Learning's approach to synthetic data for model alignment.
### Coordination and Support
Beyond research conversations, I was able to help several attendees:
Connected an AMD ML intern interested in low-level performance improvements to an engineer based in NYC who works at Thinking Machines (and previously founded a $1B startup).
Connected a design consultant with $100k+ budget for funding video projects in x-risk related cause areas to relevant creators.
Did leetcode interview preparation with a student before their upcoming technical interview at a hedge fund. They'd only done solo practice before and this was their first time doing a mock interview with a partner. They messaged me later to let me know that their interview went well.
Connected an undergraduate new to EA with SPAR policy mentors.
Encouraged two early career researchers who had been doing interpretability work to apply to ARENA. Also discussed future content for ARENA with a member of their team.
Discussed pair programming follow-up with an Anthropic safety fellow.
Set up meetings after the conference with some MATS scholars.
## Impact
The conference delivered on my three main goals: learning about current technical and strategic approaches to AI safety, building connections with researchers and practitioners, and improving coordination across different parts of the ecosystem. The grant made this possible when EA Global couldn't fund my travel expenses.
I'd encourage others to apply for similar travel support - the value of in-person conversations at these conferences is substantial, and the application process was straightforward.
Prakrat Agrawal
5 days ago
I've worked with Aditya as a co-founder, colleague, and friend. I talked to him about this project, and he seems passionate about making this happen. Giving $10 as a token donation, and I encourage people who have more capacity to support this work!
Neel Nanda
6 days ago
Evals aren't my field, but I think there's a clear theory of change behind making better open source tooling exist, which is often neglected, and the various endorsements and orgs using this suggests inspect is doing good work here. I've funded the remaining 20K up to $50K for this transitional funding, but agree with Marius that it seems like a larger funder should take this longer term
Marcus Abramovitch
6 days ago
I'd like to see a lot more coverage of AI in journalism and Tarbell seems like the people who've done the best/most of this job so far.
Independently, a couple people have said they are fans of Tarbell's reporters which I think is a good sign and a sign that they should get continued funding. Marius' grant below is a great sign in my opinion.
On the negative side, I somewhat worry that Tarbell wants to grow extremely quickly and they haven't been that cost effective in doing so though I haven't done a lot of calculations to support this.
I'm also not a fan of trying very hard for non-OP dollars and not a fan of matching campaigns. I think these are both a sort of tragedy of the commons and the EA community should try not to succumb to these wishes.
Overall, I'm a fan and so making a donation. I've spoken to Cillian for ~1.5 hours
Jai Dhyani
6 days ago
Luthien now has a working prototype, available at https://github.com/LuthienResearch/luthien-proxy. It is still very much a work in progress, with large breaking changes being made frequently in the service of making it intuitive and generally easy to use.
Continued development on the open-source easy-to-deploy-and-configure AI Control LLM proxy server.
Testers and feedback are both extremely valuable right now!
Adrien Larere
7 days ago
Hey Michaël!
Love the project and congrats on raising the money.
I want to do something very similar, but for algorithmic reform (Ban Addictive Algorithms / Show people only what they subscribe to).
I'd love to connect for some help and guidance.
You can reach me at first name dot last name at gmail dot com, or on instagram @verybadrien
Where could I reach you?
Hoping to hear from you!
Warmly,
Adrien
Marius Hobbhahn
7 days ago
I think it's great that more people work on Inspect and I've seen a few evals being either competently ported to Inspect or existing evals being significantly improved.
I'm happy to throw in 10k of my regranting money as a sign of my support, but I think the value they provide is clearly big enough that a large funder should support them with hundreds of thousands or millions of dollars in support.
Rufo Guerreschi
7 days ago
Many thanks!
We all are!
Got to turn that fear into cheerful action!
@e6e59086-386f-45c6-ba55-d760fa7dfd55
Rufo Guerreschi
9 days ago
Thanks @Austin, we are more into advocating for AI safety governance than for AI safety, thought we are doing some of that as well.
Austin Chen
9 days ago
Approving this project, under our portfolio of advocacy for AI safety. Curious to hear any more context @RyanKidd might have to add!
Leo Hyams
about 15 hours ago
@78d6a404-3d30-48c8-b734-3187ca84a46a Thanks so much for your contribution!!
Joshua Morgan
1 day ago
Stress tolerance experiments with the transgenes have revealed protective effects in one cell type and we are proceeding to test additional cell types. Further, we have developed and successfully engineered cells with the multicistronic expression vectors, resulting in expression of pairs of the genes in a single human cell.
Continue collecting stress tolerance data, including with the multicistronic expression vectors.
Not at this time, we are making steady progress and eager to share future results.
Jesse Richardson
2 days ago
I think the political side of AI safety is generally undervalued and underfunded and AIPN is doing really important work! I am impressed with the progress they have made so far, especially in getting members of Congress to take more seriously the prospect of catastrophic risks from AI.
Jonathan Samuel Claybrough
3 days ago
EffiSciences organized its first Biosecurity Bootcamp (24–29 March 2025) with support from Manifund. The goal was to seed a new generation of young professionals able to tackle global catastrophic biological risks, in a context where biosecurity remains marginal in France’s research and policy ecosystems.
We hosted 14 participants (ages 22–30) from 8 countries, with diverse academic backgrounds (biology 2, policy 4, physics/AI/engineering 7, medicine 1).
Recruitment channels: EA community 9/14, universities 1/14, LinkedIn 2/14, direct outreach 2/14.
The program combined 14 h of lectures, 8 h of workshops, and 13 h of applied projects, delivered by 3 facilitators (from Oxford, Charité Berlin, EffiSciences) and 2 guest speakers.
Immediate outcomes
11/14 participants completed a mini-project with concrete outputs:
Quantitative framework for assessing AI-related biorisks (now a LessWrong post).
Vaccine supply-chain resilience model → ongoing scientific paper.
Contribution to NTI’s Youth for Biosecurity challenge (published definition proposal).
Policy analysis on antimicrobial resistance and zoonoses (Turkey).
Redefinition of U.S. bioterrorism deterrence framework.
>80 % reported intent to pursue biosecurity-related careers; 3 have already shifted academic or professional trajectories.
Post-program impact (June 2025 update)
One alumnus became leader of “Nordics for Biosecurity” (≈100 members), coordinating global EA-aligned initiatives and job-board curation.
One participant began a biosecurity PhD at King’s College London (topic: bioterrorism).
Two teams continue work on AI-bio convergence and vaccine logistics modeling, now mentored by EffiSciences researchers.
Community & ecosystem impact
The event catalyzed the first grassroots biosecurity network in France, bridging EA circles and local universities (ENS, Sorbonne, AgroParisTech).
Created a persistent alumni group (11/14 still active), serving as mentors for future editions.
The Bootcamp materials are now ready for replication in 2026 (Latin America, UK, and Francophone Africa).
Category Amount (€) %
Venue (Utopia Campus) 4,150 70 %
Travel subsidies 700 12 %
Speaker honoraria & materials~0 0 %
Insurance 350 6 %
Food 800 13 %
Total 6 000 € 100 %
Cost per participant: ≈ 430 €, competitive with comparable EA fellowships.
Scope expanded to include AI–bio risk interfaces following recent discourse shifts.
One facilitator dropped out (replaced internally).
Participant diversity lower than planned (2 women / 14; 9 EA-affiliated). Future editions will target under-represented backgrounds via university partnerships.
Given the absence of pre-existing French biosecurity training, the Bootcamp generated high marginal returns per euro:
Created ≈ 10 new early-career trajectories toward biosecurity.
Built the only French-speaking EA-aligned biosecurity cluster.
Produced replicable curricula and open materials, enabling international diffusion.
Next steps (2026–27):
Scale to 3 regional editions with alumni as co-facilitators.
Formalize mentorship network and publish project outputs.
Seek co-funding (≈ 12 k€) to double participant count and gender diversity.
This project would not have reached its current scale or quality without Manifund’s decisive support. EffiSciences had initially secured only €5 000, covering minimal lodging and food costs, but leaving travel assistance and teaching compensation unfunded after an Erasmus+ grant fell through. The €1 000 complement from Manifund directly enabled 8 participants from low-income or distant regions (Eastern Europe, Turkey, North Africa) to attend by subsidizing their travel. Without this, at least 5 confirmed participants (≈35 %) would have withdrawn, substantially reducing both diversity and critical-mass effects.
Beyond direct participation, Manifund’s endorsement served as a credibility signal that unlocked further in-kind contributions (venue discount ≈ €400, volunteer teaching ≈ 40 h) and raised the visibility of biosecurity within the French EA and university ecosystems. In total, Manifund’s marginal funding increased the number of completed projects per € spent by ~40 %, and effectively converted sunk preparation efforts into a full-scale bootcamp rather than a reduced pilot. The intervention thus achieved an estimated cost-per-career-trajectory of €200–€250, comparing favorably with prior ML4Good benchmarks and demonstrating the high leverage of small, well-timed grants in emerging regional fields like biosecurity.
Contact: Erwan Sallard — EffiSciences Biosecurity Program
Website: www.effisciences.org
Caleb Biddulph
3 days ago
I chatted with Thane about his project in LessWrong DMs. Thane's research is like a more theory-focused version of mine, which is what drew my interest to it.
It seems like Thane is approaching things with a rigorous and interesting mindset. I like the idea of building a non-agentic and fully interpretable world model, and if this idea is viable at all, Thane seems smart and like a good person to work on it, based on reading his work and having conversations with him. (However, this assessment has fairly weak confidence, since I have limited taste in theory research, and generally am a pretty junior researcher.)
My main consideration against funding is that I'm a bit skeptical of theory as opposed to empirical research. (So if you are bullish on theory, this project should be even more promising for you.) But as far as theory goes, this project seems like a good one, and Thane plans to do empirical work as well. It's my hope that this empirical work will lead to legible and exciting results, leading to more people paying attention to Thane's research and making sure he gets comfortably funded.
It seems like it should be worth at least the $40,000 per year needed for Thane to continue to work full-time on this. I certainly hope LTFF funds him, and hope that my donation can be useful as additional financial cushioning.
Francesca Gomez
4 days ago
I’m excited to share that Part 1 of our project is now complete: our empirical study testing mitigations for agentic misalignment across 10 models and 66,600 trials, using Anthropic’s Agentic Misalignment scenario framework (Lynch et al., 2025).
We designed and tested controls, adapted from insider-risk management, that steer AI agents toward escalation under stressors such as autonomy threats, substantially reducing blackmail across ten models without retraining or fine-tuning.
Key findings:
Controls adapted from insider-risk management significantly reduced blackmail rates across all ten models, though not entirely.
Escalation channels and compliance cues steered agents toward safe, compliant actions without altering base model weights.
Because these mitigations generalised across model families, they may form a low-cost, model-agnostic defence that reduces the number of harmful actions needing to be caught by monitoring.
The study also surfaced new failure modes and biases detectable only through cross-model and counterfactual analysis.
We believe that environment shaping, where agents act to preserve autonomy or goal achievement over longer time horizons, is a credible threat model requiring deeper study.
📄 Research page: https://www.wiserhuman.ai/research
✍️ Blog summary: https://blog.wiserhuman.ai/p/can-we-steer-ai-models-toward-safer
💻 Code and dataset: https://github.com/wiser-human-experimental/agentic-misalignment-mitigations/tree/public-mitigations-v1
📘 Paper (preprint): https://arxiv.org/abs/2510.05192
This is an early proof of concept, and we hope to explore further how steering controls can form part of a layered defence-in-depth approach.
Sheikh Abdur Raheem Ali
5 days ago
# EA Global NYC 2025 - Travel Grant Report
## Overview
Thanks to the $500 travel grant, I was able to attend EA Global NYC in October 2025. The grant provided partial funding for the trip, covering return bus tickets from Toronto. While total expenses exceeded this amount, the grant was essential - without this support, I wouldn't have been able to attend the conference.
## Activities and Outcomes
I scheduled 47 one-on-one meetings over the three-day conference, spanning technical AI safety research, policy, infrastructure, and organizational strategy.
### Research Outputs
The most immediate concrete outcome was a follow-up experiment on stated vs revealed preferences in LLMs. After discussing this topic with a researcher on Friday, I ran the experiment that evening and drafted a preliminary writeup. The setup investigates how models respond to token limits - initially accepting constraints while expressing frustration, then attempting to circumvent them, and finally adjusting behavior after negotiation. This is a relatively clean testbed for studying model preferences compared to more complex setups.
I also provided technical feedback to a researcher who recently received a $1M grant for evaluations work, and received feedback from others on my own experiments.
### Learning and Context Building
Key conversations included:
Technical infrastructure approaches at NDIF (National Deep Inference Fabric) and AE Studio for interpretability research.
Hardware-based AI safety mechanisms using trusted computing and attestation (cryptographic verification of what code is running).
Policy pathways through state legislatures for technology governance.
Organizational strategy at LISA (London Initiative for Safe AI) and their current priorities.
Approaches to scaling AI safety workshops and outreach to broader audiences.
AI security practices at major financial institutions.
Compassion in Machine Learning's approach to synthetic data for model alignment.
### Coordination and Support
Beyond research conversations, I was able to help several attendees:
Connected an AMD ML intern interested in low-level performance improvements to an engineer based in NYC who works at Thinking Machines (and previously founded a $1B startup).
Connected a design consultant with $100k+ budget for funding video projects in x-risk related cause areas to relevant creators.
Did leetcode interview preparation with a student before their upcoming technical interview at a hedge fund. They'd only done solo practice before and this was their first time doing a mock interview with a partner. They messaged me later to let me know that their interview went well.
Connected an undergraduate new to EA with SPAR policy mentors.
Encouraged two early career researchers who had been doing interpretability work to apply to ARENA. Also discussed future content for ARENA with a member of their team.
Discussed pair programming follow-up with an Anthropic safety fellow.
Set up meetings after the conference with some MATS scholars.
## Impact
The conference delivered on my three main goals: learning about current technical and strategic approaches to AI safety, building connections with researchers and practitioners, and improving coordination across different parts of the ecosystem. The grant made this possible when EA Global couldn't fund my travel expenses.
I'd encourage others to apply for similar travel support - the value of in-person conversations at these conferences is substantial, and the application process was straightforward.
Prakrat Agrawal
5 days ago
I've worked with Aditya as a co-founder, colleague, and friend. I talked to him about this project, and he seems passionate about making this happen. Giving $10 as a token donation, and I encourage people who have more capacity to support this work!
Neel Nanda
6 days ago
Evals aren't my field, but I think there's a clear theory of change behind making better open source tooling exist, which is often neglected, and the various endorsements and orgs using this suggests inspect is doing good work here. I've funded the remaining 20K up to $50K for this transitional funding, but agree with Marius that it seems like a larger funder should take this longer term
Marcus Abramovitch
6 days ago
I'd like to see a lot more coverage of AI in journalism and Tarbell seems like the people who've done the best/most of this job so far.
Independently, a couple people have said they are fans of Tarbell's reporters which I think is a good sign and a sign that they should get continued funding. Marius' grant below is a great sign in my opinion.
On the negative side, I somewhat worry that Tarbell wants to grow extremely quickly and they haven't been that cost effective in doing so though I haven't done a lot of calculations to support this.
I'm also not a fan of trying very hard for non-OP dollars and not a fan of matching campaigns. I think these are both a sort of tragedy of the commons and the EA community should try not to succumb to these wishes.
Overall, I'm a fan and so making a donation. I've spoken to Cillian for ~1.5 hours
Jai Dhyani
6 days ago
Luthien now has a working prototype, available at https://github.com/LuthienResearch/luthien-proxy. It is still very much a work in progress, with large breaking changes being made frequently in the service of making it intuitive and generally easy to use.
Continued development on the open-source easy-to-deploy-and-configure AI Control LLM proxy server.
Testers and feedback are both extremely valuable right now!
Adrien Larere
7 days ago
Hey Michaël!
Love the project and congrats on raising the money.
I want to do something very similar, but for algorithmic reform (Ban Addictive Algorithms / Show people only what they subscribe to).
I'd love to connect for some help and guidance.
You can reach me at first name dot last name at gmail dot com, or on instagram @verybadrien
Where could I reach you?
Hoping to hear from you!
Warmly,
Adrien
Marius Hobbhahn
7 days ago
I think it's great that more people work on Inspect and I've seen a few evals being either competently ported to Inspect or existing evals being significantly improved.
I'm happy to throw in 10k of my regranting money as a sign of my support, but I think the value they provide is clearly big enough that a large funder should support them with hundreds of thousands or millions of dollars in support.
Rufo Guerreschi
7 days ago
Many thanks!
We all are!
Got to turn that fear into cheerful action!
@e6e59086-386f-45c6-ba55-d760fa7dfd55
Rufo Guerreschi
9 days ago
Thanks @Austin, we are more into advocating for AI safety governance than for AI safety, thought we are doing some of that as well.
Austin Chen
9 days ago
Approving this project, under our portfolio of advocacy for AI safety. Curious to hear any more context @RyanKidd might have to add!