Manifund

The market for grants

Manifund helps great charities get the funding they need. Discover amazing projects, buy impact certs, and weigh in on what gets funded.

Comments
Keep Apart Research Going: Global AI Safety Research & Talent Pipeline
Apart Research

about 17 hours ago

@AntonMakiievskyi

TL;DR: no big funders have retracted funding; we have undersold our impact in grant applications by not focusing on the metrics that we now know work best; our network within SF isn't as strong as other orgs'; and the AIS funding ecosystem faces general problems. See a longer response to this question here.

Keep Apart Research Going: Global AI Safety Research & Talent Pipeline
🌷

1 day ago

Apart Research publishes multiple papers in AI safety. I don't like the entrepreneurial vibe, but they have results.

Keep Apart Research Going: Global AI Safety Research & Talent Pipeline
Anton Makiievskyi

1 day ago

With such an amazing track record and previous support from big funders, I wonder why they (the big funders) are letting you run out of money. Did they all refuse additional support?

'Making God': a Documentary on AI Risks for the Public
Tristan Williams

1 day ago

I think Mike's got a great vision for the project here, and though it's an expensive one, the prospect of having it reach streaming platforms is really appealing; it seems sensible that you need a certain level of quality to get there. Godspeed.

Am I? — A research documentary at the edge of machine consciousness
Constance Li

1 day ago

@camberg23 Glad that you have given this some thought and have a hard deadline for publication. Good luck with the film, and I look forward to watching it this fall!

Calibration City
wasabipesto

1 day ago

@Austin Thank you for the advice and the generous donation! I think those points are spot-on - you aren't the first to bring up how wordy it is. Both of those links are phenomenal resources, I have a few ideas on how I can make it better already.

Calibration City
Austin Chen

2 days ago

@wasabipesto brier.fyi looks fantastic, great work! This is the kind of core infrastructural work that helps inform expert forecasters, and also makes forecasting more understandable to newcomers. Visual design-wise, the diagrams look great, too! I've donated an additional $5k in recognition of the time & effort you've put into making this happen.

Also some quick feedback:

  • The home page is quite wordy atm -- I'd suggest radically trimming or collapsing text from the home page, and letting the visualizations speak for themselves more. You could take inspiration from PlasticList, perhaps -- they put most of the explanation on other pages.

  • Also, suggest shorter line lengths for readability, capping around 100 chars per line. See also these typography tips.

  • I'm slightly peeved at how low Manifold currently ranks, but I'm hoping it's a kick in the pants for the Manifold team to figure out why and how to make Manifold more accurate, and also a signal to users about which markets are less trustworthy.

Keep Apart Research Going: Global AI Safety Research & Talent Pipeline
Charbel-Raphael Segerie

2 days ago

Apart has been useful for me to quickly experiment with ideas and iterate on them. I had organised multiple hackathons before knowing Apart, and their format is vastly more effective at converting talent per unit of effort. While I was head of EffiSciences’ AI Safety Unit, this was one of my favorite event formats, and it is one of the formats I encourage alumni of ML4Good to run. Empirically, each Apart hackathon that I organized in Paris enabled the long-term careers of 0.6 people in AI safety (see the table). That is, on average, 0.6 new full-time people started working on AI safety after each Apart hackathon event in Paris.

Calibration City
wasabipesto

2 days ago

Progress update

Brier.fyi is live! 🎉

Calibration City was great, but we needed new data and a new way to convey our insights. Now we have Brier.fyi - a more intuitive, guided exploration of prediction market accuracy.

Go check it out first, and then come back here for the progress update!

Calibration City 🌇

When we started this proposal, we already had a good MVP. We had a data pipeline to ingest data from the public APIs of Kalshi, Manifold, Metaculus, and Polymarket, but it was brittle and missed some data. We were able to do some cool things with that data, and we learned a lot. Most importantly, we showed that prediction markets are pretty well calibrated! Over time I added some features like customizing the bin method, weighting the averages, advanced filtering, and a simple Brier score analysis. However, I was always reluctant to actually show the overall Brier score of any particular platform or category, since the platforms and categories are so fundamentally different that a raw comparison would be misleading. Instead, I fell back to showing general trends and comparisons.
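
For readers new to the terminology, here is a minimal sketch of what "calibration" and "Brier score" mean for a batch of resolved markets. This is my illustration, not Brier.fyi's actual code, and the data is made up.

```python
# Toy example, not the Brier.fyi pipeline: score a list of
# (predicted probability, resolved YES?) pairs.
from statistics import mean

def brier_score(predictions: list[tuple[float, bool]]) -> float:
    """Mean squared error between each probability and its 0/1 outcome."""
    return mean((p - (1.0 if won else 0.0)) ** 2 for p, won in predictions)

def calibration_table(predictions: list[tuple[float, bool]], n_bins: int = 10) -> list[tuple[float, float]]:
    """Per probability bin: (average predicted probability, observed YES frequency).
    A platform is well calibrated when the two numbers roughly match in every bin."""
    bins: list[list[tuple[float, bool]]] = [[] for _ in range(n_bins)]
    for p, won in predictions:
        bins[min(int(p * n_bins), n_bins - 1)].append((p, won))
    return [
        (mean(p for p, _ in b), mean(1.0 if won else 0.0 for _, won in b))
        for b in bins
        if b  # skip empty bins
    ]

markets = [(0.92, True), (0.85, True), (0.30, False), (0.65, True), (0.20, False)]
print(f"Brier score: {brier_score(markets):.3f}")  # lower is better; always guessing 50% scores 0.25
print(calibration_table(markets, n_bins=5))
```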

Matching 🔥

I really wanted to be able to answer the questions “How accurate are prediction markets in general?” and “Which market platform is most accurate?”

In order to do that, I needed to be able to compare apples to apples. My proposal for this project was to group identical markets from each platform, then grade the markets in those groups against each other. And that’s what we did! Right now we have 931 linked markets and I’m adding more every day. With these scores we can finally answer who’s the most accurate!
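
To make the grading concrete, here is a hedged sketch of the approach described above; the group IDs, snapshot probabilities, and schema are invented for illustration and are not Brier.fyi's actual data model. Markets judged to be about the same event share a group, each platform's probability at a common snapshot time is scored against the shared resolution, and scores are then averaged per platform.

```python
# Illustrative only: cross-platform scoring over linked markets.
from collections import defaultdict
from statistics import mean

# Each row: (event group, platform, probability one month before close, resolved YES?).
linked_markets = [
    ("fed-cuts-rates-june", "kalshi",     0.62, True),
    ("fed-cuts-rates-june", "polymarket", 0.58, True),
    ("fed-cuts-rates-june", "manifold",   0.55, True),
    ("team-a-wins-final",   "kalshi",     0.25, False),
    ("team-a-wins-final",   "polymarket", 0.31, False),
]

squared_errors: defaultdict[str, list[float]] = defaultdict(list)
for _group, platform, prob, won in linked_markets:
    squared_errors[platform].append((prob - (1.0 if won else 0.0)) ** 2)

for platform, errors in sorted(squared_errors.items()):
    print(f"{platform}: Brier {mean(errors):.3f} over {len(errors)} linked markets")
```

Scoring only within shared groups is what keeps the comparison apples to apples.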

The results:

  • On average, prediction markets are pretty accurate! One month before close, 62% of markets were already within 30 percentage points of the correct resolution - an error of at most 0.30, which squares to a Brier score of at most 0.09.

  • No prediction market platform is a clear winner on all topics! Kalshi technically leads this score, but by only a few percent. However, looking at each question category shows that most platforms have a few niches where they shine - Kalshi and Polymarket are good at sports, while Metaculus is best at scientific topics. See all of the scores on the platforms page.

New Features 💡

In addition to the market matching, I had some improvements I already wanted to make to the site:

  • We now have metrics for market volume and an estimate of the number of traders on Polymarket.

  • We decompose multiple-choice markets into binary markets, now allowing us to score almost all markets on each platform! (A sketch of the decomposition follows below.)
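
A sketch of one natural way to do that decomposition (my illustration; the site's actual handling may differ): each answer becomes its own yes/no market at that answer's probability, and only the winning answer resolves YES.

```python
# Hypothetical decomposition of a multiple-choice market into binary markets.
def decompose(answer_probs: dict[str, float], winner: str) -> list[tuple[str, float, bool]]:
    """One (answer, probability, resolved YES?) binary market per answer."""
    return [(answer, prob, answer == winner) for answer, prob in answer_probs.items()]

mc_market = {"Candidate A": 0.50, "Candidate B": 0.30, "Candidate C": 0.20}
for answer, prob, resolved_yes in decompose(mc_market, winner="Candidate A"):
    brier = (prob - (1.0 if resolved_yes else 0.0)) ** 2
    print(f"{answer}: p={prob:.2f}, resolved {'YES' if resolved_yes else 'NO'}, Brier {brier:.3f}")
```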

Additionally, I was noticing some issues with Calibration City I wanted to address:

  • The previous extract-transform-load data pipeline took a long time to run and failed often, which led to me not refreshing the database for months at a time. Now the entire thing is automated and much more resilient, allowing us to gather new markets every single day (see the sketch after this list).

  • Over time the site became slower and slower, and it didn't always load correctly on the first visit. Caching didn't work quite right, so there was always a lot of load on the server. The new site is completely static, cached properly, and loads instantly with all data. It's also much easier to develop, and anyone can grab the data for their own experiments.

  • The old site was often cited as proof that markets are accurate without explanation or context, leaving visitors confused unless the person who linked it also gave an explanation. The primary calibration chart looks neat, but doesn’t really mean anything unless you already know about calibration. The Introduction page was supposed to be a remedy for that, but basically nobody has read it. In response, every chart and visualization on the new site has some sort of explanation of what the chart means, and most also have the results and context presented in a way new users can understand. We also address the primary question most users have - “How accurate are prediction markets?” - right at the very beginning.

  • There was a split between users trying to prove that prediction markets in general are great and those trying to prove that their favorite site is the best. The old site had enough data that you could try to prove either one, but it wasn't built with that in mind. In the new site I kept both viewpoints front and center and tried to answer each honestly and directly.
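
On the pipeline point in the first bullet above: the post doesn't show any implementation, but the resilience it describes amounts to "one flaky platform API should not abort the whole daily refresh." Here is a hypothetical sketch of that shape; the platform list, function names, and retry policy are mine, not the project's.

```python
# Hedged sketch: per-platform retries with backoff so a daily refresh
# survives individual API failures. Not Brier.fyi's actual code.
import time

PLATFORMS = ["kalshi", "manifold", "metaculus", "polymarket"]

def fetch_markets(platform: str) -> list[dict]:
    """Placeholder for a real API client call; assumed to raise on failure."""
    raise NotImplementedError(f"wire up the {platform} client here")

def daily_refresh(max_attempts: int = 3, backoff_seconds: float = 5.0) -> dict[str, list[dict]]:
    """Fetch every platform, retrying each independently."""
    results: dict[str, list[dict]] = {}
    for platform in PLATFORMS:
        for attempt in range(1, max_attempts + 1):
            try:
                results[platform] = fetch_markets(platform)
                break
            except Exception as err:  # one failing platform no longer aborts the run
                if attempt == max_attempts:
                    print(f"{platform}: giving up after {attempt} attempts ({err})")
                else:
                    time.sleep(backoff_seconds * attempt)  # simple linear backoff
    return results
```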

Wrap-Up 🏁

I still have a lot of work to do here, but I'm closing this project because I think I've completed my main goal. My focus now is getting user feedback, making things crystal clear, and matching more markets together. My roadmap lives alongside the project code on GitHub, and both are open to community contribution.

To all of my donors: thank you for your contributions and kind words. Without your encouragement Brier.fyi would not have happened. Feel free to get in touch with me anytime. I’ll be at Manifest next weekend if anyone wants to say hello!

General support for research activities
Amritanshu Prasad

2 days ago

If you would like to fund the AI safety evals org I am starting, please donate at Suav Tech, an AI Safety evals for-profit | Manifund

General support for research activities
Neevan Sharma

2 days ago

I know him well - he’s experienced, reliable, and doing meaningful work in AI policy, ethics, and governance.

Definitely a project worth funding

General support for research activities
Sushant Shah

2 days ago

I know Mr. Amritanshu, and he's one of the most active people in the AI Ethics field that I know of.

Understanding Trust
Austin Chen

3 days ago

Approving this project! While I'm not very familiar with this field of research myself, Abram is well-regarded in the AI safety space; I've spoken with another funder I trust who was interested in supporting Abram's work.

I'm also glad that Manifund can help out here, by serving as a simple, low-cost option for fiscal sponsorship!

The First Workshop on Mechanistic Interpretability for Vision
tamar rott shaham

3 days ago

I think it is important to arrange a social dinner as part of the mechanistic interpretability for vision workshop (transferring back from charity to cash)

Keep Apart Research Going: Global AI Safety Research & Talent Pipeline
Bart Bussmann

3 days ago

During an Apart Research hackathon I got my first hands-on experience with mechanistic interpretability and fell in love with it. Now I'm working on mechinterp full-time and have published mechinterp papers at ICLR and ICML. Generally their hackathons are very accessible, well-organized, and a great entry point for people interested in AI safety.

I believe this is a great funding opportunity for any funders interested in getting more people into working on AI safety!

Enable whistleblowers at AI companies
Samuel Shadrach

3 days ago

@travelling_salesman Thank you!

Enable whistleblowers at AI companies
Samuel Shadrach

3 days ago

@Romain_D Thank you! Feel free to reach out here, via my email, or anonymously with any further questions.

The First Workshop on Mechanistic Interpretability for Vision
Neel Nanda

4 days ago

I recommended that the organisers of the workshop apply here for funding. I think that mechanistic interpretability is an impactful subfield of AI safety, and I care about there being a thriving academic field. Workshops at top conferences are fairly high-leverage ways for people to form connections, spark potential research collaborations, introduce junior people to senior people, etc. The workshop will happen regardless, but I am happy to give them a bit of funding to do additional things like organising dinners for speakers, organisers, and other notable people.

Am I? — A research documentary at the edge of machine consciousness
Cameron Berg

4 days ago

Strongly agree, and thanks for calling this out.

We are planning to publish whatever we have by the end of August at the latest as a first step.

Though we can't guarantee the end result will go viral, early signals are extremely promising, and we are looking to execute/scale this project in a professional but minimally polished manner for precisely the reason you outline: the perfect will be the enemy of the good here, and this space is moving far too quickly for us to be complacent. @liconstan

Am I? — A research documentary at the edge of machine consciousness
Constance Li

4 days ago

My 2 cents: I'm a bit concerned about documentaries because I know many people who spend years creating them, only to get stuck when they can't find a distribution platform. They often then go on arduous film-festival circuits to gain recognition and reviews in hopes of landing a distribution deal, but even that often doesn't work out.

If you want to release it for free on YouTube then that works fine, but high viewership is not a given, especially if you do not have a highly subscribed channel or a big promotion plan.

I would say that the biggest failure risk for this project is spending a lot of time on a film that no one outside of our small community watches.

You can always record the emergent behaviors for the film so that you can show they existed even if AI companies later obfuscate them, but you can't get the time and money back if you end up stuck at the distribution stage.

'Making God': a Documentary on AI Risks for the Public
Marcus Abramovitch

4 days ago

This has been on my radar for a while now. Communication to the masses about AI seems like the highest-value neglected thing to me. This is a fairly expensive project, but I think it can move a conversation that is typically held on LessWrong to the dinner table.
