Manifund
AI Security Startup Accelerator Batch #2

Science & technology · Technical AI safety · Global catastrophic risks

Finn Metz

Active grant
$355,000 raised of $355,000 funding goal
Fully funded and not currently accepting donations.

Project summary

Seldon Labs runs a for-profit accelerator for AI security and assurance startups.

In our first SF batch, we worked with companies like Andon Labs, Lucid Computing, and Workshop Labs, who went on to raise significant follow-on funding, close contracts with labs such as xAI and Anthropic, and receive coverage in outlets like TIME.

With this round, we're running Batch #2 in SF for 5–10 teams working on AI security, assurance, and related infrastructure. The funding enables us to directly support founders relocating to and building in SF, and run a focused, in-person program aimed at turning technically strong, safety-motivated teams into durable AI security companies.

Structure note: This is structured as a dilutive investment (YC SAFE) into Seldon Labs PBC, routed through Manifund as fiscal sponsor. Returns, if any, accrue to the funder's Manifund balance.

What are this project's goals? How will you achieve them?

Increase the number and quality of AI security/assurance startups that materially reduce catastrophic AI risk, and help them reach product-market fit and funding.

We'll run an in-person SF accelerator (Jan–Apr 2026) for 5–10 teams focused on AI security infrastructure, AI assurance and governance tooling, and adjacent safety-relevant infrastructure. We provide upfront capital, work with founders on problem selection, go-to-market, and governance, and leverage our network to help secure early pilots and raise follow-on funding.

How will this funding be used?

Direct founder support (upfront investments into batch companies) and program operations (team, space, events, legal/admin).

Who is on your team? What's your track record on similar projects?
Core team:

  • Finn Metz: Co-founder, Seldon Labs. Background in VC/PE. Co-founder of the AI Safety Founders community.

  • Esben Kran: Co-founder, Seldon Labs. Co-founder of Apart Research. Deep connections across AI safety labs, researchers, and funders.

Track record (Batch #1): Our first batch worked with Andon Labs, Lucid Computing, Workshop Labs, and others. Those teams raised >$10m collectively, closed contracts with major labs (xAI and Anthropic), and received multiple features in TIME.

We're advised by Nick Fitz (Juniper Ventures) and Eric Ries (LTSE, The Lean Startup).

What are the most likely causes and outcomes if this project fails?

Most likely ways this could fail:

  • We can’t convert enough top-tier applicants into a strong batch (e.g. key founders decide to join labs or move into other roles instead).

  • We under-resource the program, providing less support than Batch #1 (e.g. not enough staff / time for hands-on help).

  • Some companies drift away from safety-critical problems despite our theory-of-change guidance and governance work.

If any of these happens, funders still get several competent teams building AI-security-adjacent tools, but with lower impact density.

How much money have you raised in the last 12 months, and from where?

Over the last 12 months, Seldon Labs PBC has received $53k in non-dilutive funding from the Survival and Flourishing Fund (SFF). Seldon has also received in-kind contributions and investments from private investors.

Comments

Akshyae Singh

about 3 hours ago

Hard vouch for the Seldon team; Finn and Esben are extremely fast-moving, with an in-depth understanding of the for-profit AI safety space.


Austin Chen

about 22 hours ago

Approving this project; we've been happy to host the first cohort of Seldon at Mox, and I'm excited for more incubation and support in the space of AI safety startups. Finn and Esben are doing good work and I'm looking forward to seeing what's next!