AI Security Startup Accelerator Batch #2

Science & technology · Technical AI safety · Global catastrophic risks

Finn Metz

Active Grant
$355,000 raised
$355,000 funding goal
Fully funded and not currently accepting donations.

Project summary

Seldon Labs runs a for-profit accelerator for AI security and assurance startups.

In our first SF batch, we worked with companies like Andon Labs, Lucid Computing, and Workshop Labs, which went on to raise significant follow-on funding, close contracts with labs such as xAI and Anthropic, and receive coverage in outlets like TIME.

With this round, we're running Batch #2 in SF for 5–10 teams working on AI security, assurance, and related infrastructure. The funding enables us to directly support founders relocating to and building in SF, and run a focused, in-person program aimed at turning technically strong, safety-motivated teams into durable AI security companies.

Structure note: This is structured as a dilutive investment (YC SAFE) into Seldon Labs PBC, routed through Manifund as fiscal sponsor. Returns, if any, accrue to the funder's Manifund balance.

What are this project's goals? How will you achieve them?

Increase the number and quality of AI security/assurance startups that materially reduce catastrophic AI risk, and help them reach product-market fit and funding.

We'll run an in-person SF accelerator (Jan–Apr 2026) for 5–10 teams focused on AI security infrastructure, AI assurance and governance tooling, and adjacent safety-relevant infrastructure. We provide upfront capital, work with founders on problem selection, go-to-market, and governance, and leverage our network to help secure early pilots and raise follow-on funding.

How will this funding be used?

Direct founder support (upfront investments into batch companies) and program operations (team, space, events, legal/admin).

Who is on your team? What's your track record on similar projects?
Core team:

  • Finn Metz: Co-founder, Seldon Labs. Background in VC/PE. Co-founder of the AI Safety Founders community.

  • Esben Kran: Co-founder, Seldon Labs. Co-founder of Apart Research. Deep connections across AI safety labs, researchers, and funders.

Track record (Batch #1): Our first batch worked with Andon Labs, Lucid Computing, Workshop Labs, and others. Those teams collectively raised over $10M, closed contracts with major labs (xAI and Anthropic), and received multiple features in TIME.

We're advised by Nick Fitz (Juniper Ventures) and Eric Ries (LTSE, The Lean Startup).

What are the most likely causes and outcomes if this project fails?

Most likely ways this could fail:

  • We can’t convert enough top-tier applicants into a strong batch (e.g. key founders decide to join labs or move into other roles instead).

  • We under-resource the program, providing less support than Batch #1 (e.g. not enough staff / time for hands-on help).

  • Some companies drift away from safety-critical problems despite our theory-of-change guidance and governance work.

Outcome if that happens: you still get several competent teams building AI-security-adjacent tools, but with lower impact density.

How much money have you raised in the last 12 months, and from where?

Over the last 12 months, Seldon Labs PBC has received $53k in non-dilutive funding from the Survival and Flourishing Fund (SFF). Seldon has also received in-kind contributions and investments from private investors.

Similar projects

Chris Canal

Operating Capital for AI Safety Evaluation Infrastructure

Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide

Technical AI safety · AI governance · Biosecurity
$400K raised

SaferAI

General support for SaferAI

Support for SaferAI’s technical and governance research and education programs to enable responsible and safe AI.

AI governance
$100K raised

Amritanshu Prasad

Suav Tech, an AI Safety evals for-profit

General Support for an AI Safety evals for-profit

Technical AI safety · AI governance · Global catastrophic risks
$0 raised

AI Safety India

Fundamentals of Safe AI - Practical Track (Open Globally)

Bridging Theory to Practice: A 10-week program building AI safety skills through hands-on application

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks
$0 raised

Apart Research

Apart Research: Research and Talent Acceleration

Support the growth of an international AI safety research and talent program

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks
$0 raised

Allison Duettmann

Increasing the funding distributed by Foresight Institute's AI safety grants

Focused on 1. BCI and WBE for safe AI, 2. cryptography and security for safe AI, and 3. safe multipolar AI

Science & technology · Technical AI safety · AI governance
$0 raised

Remmelt Ellen

11th edition of AI Safety Camp

Cost-efficiently support new careers and new organisations in AI Safety.

Technical AI safety · AI governance
$45.1K raised