@Loppukilpailija Thanks!
@NunoSempere
Researcher & forecaster
https://nunosempere.com
This is a donation to this user's regranting budget, which is not withdrawable.
$0 in pending offers
I don't yet know what I will do with this money. Some threads that I am considering:
Grants whose appeal other funding sources can't understand.
Thiel-style funding: Grants to formidable people outside the EA community for doing things that they are intrinsically motivated to do and which might have a large positive effect on the world.
Targeted grants in the forecasting sphere, particularly around more experimentation.
Giving a large chunk of it to Riesgos Catastróficos Globales (https://riesgoscatastroficosglobales.com/) in particular.
Bets of the form "I am quite skeptical that you would do [some difficult thing], but if you do, happy for you to take my money, and otherwise I will take yours".
Bounties: like the above, but less adversarial, because you do get the amount if you succeed, but don't lose anything if you don't.
Non-AI longtermism.
Grants to the Spanish and German-speaking communities.
I am also considering doing things a bit differently from what the current EA ecosystem does, just for the information value. For example:
Giving feedback at depth on applications that people pitch me on
The rationale is that this feedback could improve people's later career paths. I think that other funding orgs don't do this because they get overwhelmed with applications. But I'm not overwhelmed at the moment!
Putting bounties on people referring applications
Using Manifold prediction markets on the success of grants as a factor for evaluation
Requires grantees willing to be more transparent, though
That said, I'm generally seeking to do "the optimal thing", so if I get some opportunity that I think is excellent I'll take it, even if it doesn't fall into the above buckets.
Also, I guess that $50k is not that large an amount, so I'm either going to have to be fairly strategic, or get more money :)
As for myself personally, I'm maybe best known for starting Samotsvety Forecasting (samotsvety.org/), an excellent forecasting team; for being a prolific EA Forum poster (https://forum.effectivealtruism.org/users/nunosempere?sortedBy=top), though I've since emigrated to nunosempere.com/blog; or for having done some work at the Quantified Uncertainty Research Institute on topics of estimation and evaluation.
Nuño Sempere
3 months ago
This continues to run, you can keep track at https://blog.sentinel-team.org. We also have a weekly private project update mailing list to which I'm happy to add people with whom I have some rapport.
One important development has been getting a cofounder, Rai Sur (rai.dev) to complement my skillset; he's been working out great.
Nuño Sempere
3 months ago
I like the varied and different tools. I'm a bit worried about the minimum funding bar not being hit.
Nuño Sempere
5 months ago
A foresight team continuously looking for things that could become large-scale or existential catastrophes
An emergency response composed of capable yet mostly unattached people who could convene in the event of a catastrophe
A website http://sentinel-team.org/
It also has a short writeup of the project's history https://sentinel-team.org/sentinel-history/
Weekly public minutes from the foresight team https://sentinel-team.org/blog/
A fiscal sponsor
Further funding
A collaboration with Nathaniel Cooke on foresight methods beyond forecasting <https://forecasting.substack.com/p/a-gentle-introduction-to-risk-frameworks>. We later parted ways, but this was a good output.
I'm probably forgetting something
In short, the project has overall been going well. The idea was to have two components, the first of which was a foresight team that could raise an alarm if something happens. This foresight team is going great; I have three very obsessive, very competent forecasters, in addition to myself, and some tooling to aid them.
The emergency response team has been going well, but less so. It exists, it has some competent people, and we had a trial run with the Iran attacks on Israel.
But in general I just feel much better about our ability to, say, detect a Chinese invasion of Taiwan two weeks, or even just a few days, before it happens than I feel about our ability to do anything about it.
Some steps on the horizon:
Improve emergency response team.
Integrate more info sources.
Put out analytical pieces sharing lessons learnt
Reach out to potential collaborators and similar projects
Consider finding more funding
Introductions to potential emergency response team members. For details see https://sentinel-team.org/emergency_response_team/
Mentorship seems like it would be super useful to me. Are you one or two levels above me in life? Have you set up something cool and want to share pointers? I'd be grateful.
I'm currently conflicted about funding. I'd appreciate help either with acquiring more, or with deciding that it's a distraction.
Funding is not the bottleneck on the, say $5k to $10k range, but funding on the $100k to $5M range would allow me to make this project more awesome.
I'm thinking that I prefer a smaller project that is sustainable ~forever over a larger project that lives or dies by [large funder]'s word. But is this a good way to think about it? And even if it is, should I instead attempt to build something that shines twice as bright but lasts half as long?
I'm procrastinating on applying to the SFF grant round. Partly this is because I find the application baroque. Help, or just coworking on it, would be appreciated.
On the other hand, this project is sustainable at the current spend, so looking for more funding feels like a distraction.
I'm currently ~not really paying myself. I'm probably fine with this until the end of this year, though. Is this a good move?
I am very grateful to Manifund.
Writing the project proposal and getting early funding was important for coordinating between people interested in supporting the project.
Getting early funding from peers, from people whose respect I cherished, was important for me psychologically. It made me more excited. It was a hard-to-fake signal of promisingness.
Early funding has been useful to not have money be a bottleneck.
Nuño Sempere
11 months ago
I continue to be excited about RCG and its role in the EA Spain/LatAm communities.
I was waiting until the end of the year to see if I found more promising options. I was considering APART (https://manifund.org/projects/help-apart-expand-global-ai-safety-research), but I don't think I'll have time to evaluate it in more depth; still, I've tentatively reserved some of my funds for it.
Nuño Sempere
11 months ago
I guess that another way of expressing the above might be that this seems potentially good, but given the large amount of funding it is asking for, it feels like someone should evaluate this in-depth, rather than casually?
Nuño Sempere
11 months ago
This looks shiny to me. I am considering funding it for a small amount.
Pros:
Successes and accomplishments seem valuable
Proxies look good
Writeup seems thoughtful
Cons:
I don't understand why other people haven't funded this yet
Maybe this application is exaggerating stuff?
Maybe the organization adds another step in the chain to impact, and it would be more efficient to fund individual people instead?
Maybe the biggest one: how do I know the success is counterfactual? Say that someone participated in a hackathon/fellowship/etc, and then later got a research position in some Oxford lab. How do I know that the person wouldn't have gotten something similarly impressive in the absence of your organization?
Nuño Sempere
11 months ago
I thought bundling was a neat idea. Contra other comments, I don't think this would only be valuable if you also solved discounting. And discounting could maybe be achieved by the platform offering to match your returns (on resolved or exited markets).
Nuño Sempere
12 months ago
Project has some value, may have influenced organizations like Charity Entrepreneurship
Project has some learning value as well
None.
Negotiation with project lead based on expected number of hours.
None
Nuño Sempere
12 months ago
I think this project is my bar for funding. If I don't find other projects I'm as excited by, I'm planning to donate my remaining balance to it.
Nuño Sempere
12 months ago
I feel that the project could be potentially valuable, and I hope Marcel will be a bit more ambitious/have a bit more runway and leeway. I feel that there could be room for more funding, but I'd want some specific commitments in exchange.
Nuño Sempere
12 months ago
@vandemonian Are you currently constrained by funding? Do you have the capacity to put in more effort if you get more funding?
Nuño Sempere
about 1 year ago
Funding this. I like the lumenator part, but I particularly like the more ambitious life trajectory point.
On your application, you mention:
returning the money left if I decided that this was not a good idea anymore
Please consider not doing this; rather, please either pivot to a better opportunity or keep it until a good opportunity arises.
Nuño Sempere
about 1 year ago
Overall I don't really understand the biosecurity ecosystem or how this would fit in, so I'm thinking I'm probably a bad funder here. Still, some questions:
Do you already have some decision-makers who could use these estimates to make different decisions?
How valuable do you think that this project is without the long covid estimate?
Who is actually doing this work? Vivian and Richard, or Joel and Aron?
Why are you doing this $3.6k at a time, rather than having set up some larger project with existing biosecurity grantmakers?
Nuño Sempere
about 1 year ago
I have too many conflicts of interest to fund this myself, but here are some thoughts:
I like thinking of Nathan's work in terms of the running theme of helping communities arrive at better beliefs, collectively. And figuring out how to make that happen.
On the value of that line of work:
- I have a pretty strong aversion to doing that work myself. I think that it's difficult to do and requires a bunch of finesse and patience that I lack.
- I buy that it's potentially very valuable. Otherwise, you end up with a Cassandra situation, where those who have the best models can't communicate them to others. Or you get top-down decisions, where a small group arrives at an opinion and transmits it from on high. Or you get various more complex problems, where different people in a community have different perspectives on a topic, and they don't get integrated well.
- I think a bottleneck at my previous job, at the Quantified Uncertainty Research Institute, was not taking this social dimension into account and putting too much emphasis on technical aspects.
One thing Nathan didn't mention is that estimaker, viewpoints, and his podcast can feed into each other: e.g., he has interviewed a bunch of people and gotten them to make quantified models about AI using estimaker (Katja Grace: https://www.youtube.com/watch?v=Zum2QTaByeo&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=8, Rohit Krishnan: https://www.youtube.com/watch?v=cqCYMgEnP7E&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=10, Garett Jones: https://www.youtube.com/watch?v=FSM94rmJUAU&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=4, Aditya Prasad: https://www.youtube.com/watch?v=rwTb7VgSZKU&list=PLAA8NhPG-VO_PnBm3EkxGYObLIMs4r2wZ&index=6). This plausibly seems like a better way forward than the MIRI conversations https://www.lesswrong.com/s/n945eovrA3oDueqtq.
Generally, you could imagine an interesting loop: viewpoint elicitation surfaces disagreements => representatives of each faction make quantified models => some process explains the quantified models to a public => you do an adversarial collaboration on the quantified models, parametrizing unresolvable disagreements so that members of the public can input their values but otherwise reuse the model.
I see reason to be excited about epistemic social technology like that, and about having someone like Nathan figure things out in this space.
Nuño Sempere
about 1 year ago
I think that RCG's object-level work is somewhat valuable, and also that they could greatly contribute to making the Spanish and Latin-American EA community become stronger. I think one could make an argument that this doesn't exceed some funding bar, but ultimately that argument doesn't go through for me.
| For | Date | Type | Amount |
| --- | --- | --- | --- |
| Fund Sentinel for Q1-2025 | 8 days ago | project donation | +50 |
| Fund Sentinel for Q1-2025 | 10 days ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | 16 days ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | 21 days ago | project donation | +350 |
| <89b29643-1793-4dec-8713-59b3c13edb86> | 22 days ago | profile donation | +100 |
| Fund Sentinel for Q1-2025 | 22 days ago | project donation | +15 |
| Fund Sentinel for Q1-2025 | 23 days ago | project donation | +5000 |
| Fund Sentinel for Q1-2025 | 23 days ago | project donation | +29 |
| Fund Sentinel for Q1-2025 | 24 days ago | project donation | +500 |
| Fund Sentinel for Q1-2025 | 30 days ago | project donation | +10000 |
| Fund Sentinel for Q1-2025 | 30 days ago | project donation | +200 |
| Future-Proofing Forecasting: Easy Open-Source Solution | about 1 month ago | project donation | -50 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | -50 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +50 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +1000 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +200 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +500 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +1000 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +500 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +200 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +500 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +100 |
| Fund Sentinel for Q1-2025 | about 1 month ago | project donation | +1000 |
| Play money prediction markets | 2 months ago | project donation | -50 |
| CEEALAR | 2 months ago | project donation | -350 |
| Make ALERT happen | 3 months ago | project donation | +216 |
| Make ALERT happen | 3 months ago | project donation | +100 |
| Make ALERT happen | 3 months ago | project donation | +50 |
| Make ALERT happen | 3 months ago | project donation | +100 |
| Manifund Bank | 3 months ago | deposit | +700 |
| Manifund Bank | 7 months ago | deposit | +5001 |
| Manifund Bank | 7 months ago | return bank funds | -10000 |
| The Base Rate Times | 11 months ago | project donation | -1500 |
| Support Riesgos Catastroficos Globales | 11 months ago | project donation | -12500 |
| Manifund Bank | 11 months ago | withdraw | -18000 |
| Make ALERT happen | 12 months ago | project donation | +950 |
| Make ALERT happen | 12 months ago | project donation | +2050 |
| Update Big List of Cause Candidates | 12 months ago | project donation | -1000 |
| Make ALERT happen | 12 months ago | project donation | +5000 |
| Make ALERT happen | 12 months ago | project donation | +5000 |
| Make ALERT happen | 12 months ago | project donation | +5000 |
| A Lumenator Company, or: A More Ambitious Life Trajectory | about 1 year ago | project donation | -5000 |
| Support Riesgos Catastroficos Globales | about 1 year ago | project donation | -20000 |
| Manifund Bank | over 1 year ago | deposit | +50000 |