strong team with skin in the game
@gleech
I cofounded the consultancy Arb, and am a fellow at Cosmos, CFI and Foresight.
gleech.org
I'm a regrantor most interested in technical AI alignment, control, and security projects. I'm also interested in funding macrostrategy and coordination (things like my Shallow Review of the field but better).
Gavin Leech
8 days ago
Peli is a frugal philosopher who cowrote one of my favourite essays on alignment.
The idea: He wants to invent virtue post-training, inspired by e.g. the actual normative practice of fields like maths, which often appeals to self-instantiation. It's an open question whether current capabilities allow for instilling stable loops like this and I'm glad someone is trying.
Counterfactual: The current project struck out with the usual funders, I guess because theory is de-emphasised now, because virtue ethics is usually unoperationalisable, because he's not good at sales, or because they don't know the following about him.
Track record: Besides the (great) essay above, I've worked with Peli on two technical ML research papers and was impressed with his experimental skill, design skill, and precision. He also did some invited technical replication work on Turner 2023. He's competent at ML experiments at the level of a decent PhD student in the field, and he is very used to working totally independently, from inception onward.
Concerns: If I didn't know the above, the proposal doc would worry me with how non-ML-technical it is (intentionally so: it does some original philosophical work as a prerequisite for the ML part). He's also been looking for collaborators without much success; I hope that independent funding makes that search an easier pitch.
I expect distribution to be the weakest part of the project. The inferential distance might be too high for the narrower part of the ingroup audience to bridge to him, even if he presents them with good data. But places like PAW, AIES and HAAISS will certainly engage. He's capable of the conference-paper grind but doesn't seem motivated by it; maybe a collaborator could bring the will to actively disseminate the results. Still, if distribution is the worst risk for a speculative, ambitious project, then we're in a good spot.
Cost-effectiveness: Very high: $40k per FTE-year to take an intriguing idea and bring it to testability.
Conflict of interest: As noted, Peli has worked on several projects at my company Arb and I've known him for years on Twitter.
Gavin Leech
28 days ago
The post went up roughly on time (29th December) and was fairly well-received (though it garnered less karma than last year). Comments were good and only the Alex Altair entry required notable edits. This is evidence but not strong evidence that the current version is error-free.
Our conference scrape surfaced some academic work that I think is underappreciated on LW, though less than I hoped.
I'm very happy with the new fields (target case and broad approach) and our data entry on them.
Change: Following comments from funders we didn't do the "glossy" PDF version. Surplus money will go towards the 2025 version.
100% on salaries for the team. Thanks especially to Shoshannah Tekofsky, a highly graceful research manager.
Gavin Leech
7 months ago
Thanks very much to all donors! A private donor has offered to fill the difference so please stop sending me money (mods, if there's a way to close projects I can't see it). We've started work.
Gavin Leech
over 1 year ago
The predecessor was my most important project last year. I've personally verified that there's a great deal of demand for some version of this, from both "customer" orgs and institutions and "supplier" volunteers. Nuno has some rare and essential qualities (honesty, clarity, infovorism) while lacking some others. But the shoestring version still excites me, and I vote with my feet.
| For | Date | Type | Amount ($) |
|---|---|---|---|
| Orexin Pilot Experiment for Reducing Sleep Need | 7 days ago | project donation | 500 |
| Virtue-Ethical Rationality and Training Dynamics | 7 days ago | project donation | 10,000 |
| Manifund Bank | about 1 month ago | deposit | +100,000 |
| Manifund Bank | 6 months ago | withdraw | 10,000 |
| Shallow review of AI safety 2024 | 6 months ago | project donation | +10,000 |
| Manifund Bank | 7 months ago | withdraw | 10,860 |
| Shallow review of AI safety 2024 | 7 months ago | project donation | +8,000 |
| Shallow review of AI safety 2024 | 7 months ago | project donation | +1,000 |
| Shallow review of AI safety 2024 | 7 months ago | project donation | +1,000 |
| Shallow review of AI safety 2024 | 7 months ago | project donation | +10 |
| Shallow review of AI safety 2024 | 7 months ago | project donation | +500 |
| Shallow review of AI safety 2024 | 7 months ago | project donation | +50 |
| Shallow review of AI safety 2024 | 7 months ago | project donation | +300 |
| Make ALERT happen | over 1 year ago | project donation | 5,000 |
| Manifund Bank | over 1 year ago | deposit | +5,000 |