
Funding requirements

  • Sign grant agreement
  • Reach min funding
  • Get Manifund approval

Guaranteed Safe AI Seminars 2026

Science & technology · Technical AI safety · AI governance · Global catastrophic risks

Orpheus Lummis

Proposal · Grant
Closes November 15th, 2025
$2,300 raised
$1,000 minimum funding
$30,000 funding goal


Project summary

Guaranteed Safe AI Seminars is a monthly series convening researchers and engineers to advance AI systems with quantitative safety guarantees. We advance the GSAI approach, which combines a world model, a safety specification, and a verifier to produce auditable guarantees rather than relying only on empirical testing. This is increasingly important as systems approach ASL-3+. Each session ships a recording, and we are building GuaranteedSafe.ai into a community hub that curates papers, code, and resources across these three pillars.
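To illustrate how the three pillars fit together, here is a minimal, hypothetical Python sketch; the names (WorldModel, SafetySpec, verify) are ours rather than from the GSAI paper or the seminars. The point it makes is that the verifier checks a policy against the specification over all states reachable under the world model, so a passing result is an auditable guarantee relative to the model and spec, not a sample of test outcomes.

# Hypothetical sketch only: minimal interfaces for the three GSAI pillars.
from dataclasses import dataclass
from typing import Callable, Iterable, List

State = dict    # assumed: some representation of the environment state
Action = str    # assumed: some representation of an action

@dataclass
class WorldModel:
    # States the model considers reachable by taking `action` in `state`.
    transition: Callable[[State, Action], Iterable[State]]

@dataclass
class SafetySpec:
    # True iff the state satisfies the safety specification.
    is_safe: Callable[[State], bool]

def verify(model: WorldModel,
           spec: SafetySpec,
           policy: Callable[[State], Action],
           initial_states: Iterable[State],
           horizon: int) -> bool:
    """Check that every state reachable under `policy` within `horizon`
    steps satisfies the spec. The check covers all modeled trajectories,
    so a True result is a guarantee relative to the model and the spec."""
    frontier: List[State] = list(initial_states)
    for _ in range(horizon):
        next_frontier: List[State] = []
        for state in frontier:
            if not spec.is_safe(state):
                return False
            next_frontier.extend(model.transition(state, policy(state)))
        frontier = next_frontier
    return all(spec.is_safe(state) for state in frontier)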

Traction: Since April 2024, the Guaranteed Safe AI Seminars series has grown steadily: sessions draw 15–25 live participants and 80–200 replays within 90 days, with ~260 subscribers and ~600 RSVPs over the past year. Prominent past speakers include Yoshua Bengio (Mila), Steve Omohundro, Tan Zhi Xuan (NUS/A*STAR), Charbel-Raphaël Ségerie (CeSIA), Jobst Heitzig, Evan Miyazono (Atlas Computing), Rafael Kaufmann (Gaia), Agustín Martinez Suñé (Oxford OXCAV), Louis Jaburi, and GasStationManager (independent).

References:

  • https://luma.com/guaranteedsafeaiseminars

  • https://www.horizonevents.info/events/guaranteed-safe-ai-seminars

  • https://www.youtube.com/playlist?list=PLOutnjp2BEJeQM2J49_KvdpuZlaQXPboy

  • https://arxiv.org/abs/2405.06624v3

What are this project's goals? How will you achieve them?

Goals (Jan–Dec 2026, 12 months):

  1. Run 12 monthly seminars (talk + Q&A).

  2. Run a structured debate session leading to a distilled write-up post (on our blog and LessWrong).

  3. Operate GuaranteedSafe.ai as a curated community/info hub (papers, code, talks).

  4. Host 4 curated mixers (thematic lightning talks to start with, then breakouts and 1:1s) to convert talks into collaborations.

How we’ll achieve them:

  • Program design: maintain a 3–6-month speaker pipeline spanning formal methods & PL, verification, mech-interp, specification & controls, ensuring each session clearly maps to GSAI.

  • Standardized production: speaker orientation, live moderation, recording with light edit.

  • Community & curation: GuaranteedSafe.ai maintained collaboratively, with quarterly “what’s new” roundups, potentially also published in Quinn Dougherty’s Progress in Guaranteed Safe AI newsletter.

  • Partnerships & distribution: cross-post with labs’ and aligned orgs’ newsletters; invite co-hosts for the debate and mixers; post on the AI safety events & training newsletter.

  • Measurement & iteration: run a yearly subscriber survey, and track attendance, replays, and subscriber growth.

  • Risk management: backup host/organizer, recorded-only fallback, and scope reduction if funding tightens.

How will this funding be used?

Minimum ($0): the series will continue, but sporadically and without commitments, because we believe in the project but are capacity-constrained.

Core ($10K): commitment to organize the monthly seminars, which mainly involves organizer time for speaker curation and coordination, light editing, event tooling and domain costs, distribution, and lightweight design.

Mainline ($30K): we will run the monthly seminars at higher quality, organize the mixers and a debate with its write-up, build the community hub on guaranteedsafe.ai with bibliography curation, and onboard a second organizer for occasional support and backup.

Who is on your team? What's your track record on similar projects?

Organizer: Orpheus Lummis (https://orpheuslummis.info)

  • Founder, Horizon Omega (https://horizonomega.org, Canadian not-for-profit #1584536-0)

  • Co-curator, Mila AI Safety Reading Group (biweekly with authors presenting).

  • Organizer of Montréal AI Safety, Governance, Ethics group (~1600 nominal members).

  • Since 2018: AI Safety Unconferences at NeurIPS (~185 participants total), Virtual AI Safety Unconference (~400 registrations), Limits to Control workshop.

Advisory pool: prior speakers and domain experts for recommendations and peer review.

In the mainline scenario, we will hire a part-time second organizer to share the work and derisk the project.

What are the most likely causes and outcomes if this project fails?

  • Risks: organizer bandwidth, funding shortfall.

  • Mitigations: multi-month speaker backlog, backup organizer, standardized ops, recorded-only fallback if needed, partnerships with relevant groups for cross-posting.

  • Failure outcomes: loss of momentum for quantitative-safety agendas.

How much money have you raised in the last 12 months, and from where?

No funds raised in the last 12 months. In the past, we received $6K from the LTFF to cover 6 months of the series in 2024.

Comments

Comment posted about 22 hours ago:

For career scientists like me who transitioned to technical AI safety research from other fields and are thus not well integrated into the AI safety community outside traditional academia, seminar series like this are a crucial opportunity for networking and for gaining visibility in the field by presenting their own research. At the same time, the guaranteed safe AI approach is under-represented in the AI safety community and in other talk series and onboarding programmes such as MATS, which makes this seminar series a crucial anchor point for the emerging guaranteed safe AI community.