From December 2023 to March 2024, I worked on creating a proactive AI safety advocacy organization, tentatively called AI Watch. The work was funded through the end of 2023, but I wasn't compensated for my time on the project after that. In late March, my co-founder decided to switch to a different project, and I ended up dropping the comms org idea.
This grant would retroactively fund that prior work, as well as provide transition support while I find a new job or grant.
Another funder recommended this grant in May but was ultimately unable to disburse the funding due to restrictions on the forms of funding their legal entity can provide to individuals. This request asks for what that funder had agreed to give. It would compensate me for past work, performed with the expectation that I would receive the earlier grant, and alleviate some of the financial strain that arose from the complicated funding situation. It would also give me some extra time to figure out what to do next once my current, temporary employment ends.
Aside from work on the AI advocacy project, my time has been spent, and through November will be spent, on:
Helping start an AI governance forum (ongoing)
Applying for a variety of jobs in the AI governance space
Searching for potential co-founders/collaborators
Writing up lessons from the AI Watch project, case studies of past advocacy wins (e.g. seatbelts, global warming), and relevant takeaways from recent work on elections.
Getting up to speed on current AI policy proposals and doing quick distillation work on them.
Exploring other project ideas
For the past few years I (Alex Lintz) have been something of an entrepreneur in the AI governance space. I've helped start The AI Governance Archive (TAIGA), the AI Safety Communications Centre (AISCC), and the Long-term AI Strategy Retreat (LAISR). With a small team, I'm currently starting an organization that will focus on election-related projects. I'm also working with another team to plan strategy, find funding, and hire a founder who can take charge of starting an AI governance forum with our help & supervision. Outside of this, I've worked on (often semi-private) research projects aiming to understand the strategic landscape in AI governance.
Austin Chen
3 months ago
Approving as part of Manifund's portfolio of AI safety advocacy! I agree with Adam that one important way funders like us can help the ecosystem is by derisking charitable entrepreneurship through such grants.
@AlexL do you think this other funder would be structurally able to donate to Manifund (as a US-based 501c3)? If the only blocker for your prior funder was the fact that this was a grant to an individual, Manifund is happy to provide a lightweight fiscal sponsorship service and accept the donation on your behalf (and use the funds to replenish Adam's regrantor budget). To be clear, this is completely optional -- we're happy to proceed with this grant either way -- but helping your original funder follow through on their commitment feels like a better allocation of credit for this grant (and of course, permits Adam to fund more causes he finds important).
Alex Lintz
3 months ago
@Austin I suspect this wasn't the blocker, but I'll reach out to the funder and check.
Adam Gleave
3 months ago
Alex Lintz has a track record of good work in this space, and took a career risk in order to start a potentially high-impact organization. Although this particular org didn't pan out, that is par for the course in entrepreneurship. I'm excited for Alex to have more shots on goal.
I would not in general offer retroactive funding for activities of this nature -- I'd encourage people to either secure sufficient personal runway to derisk this, or secure funding before starting a new org -- but Alex's case is a bit unusual: he had a good-faith reason to expect to receive funding for this, and (after having heard the specifics of the case) it fell through for somewhat arbitrary reasons outside of his control.