Project summary
Guaranteed Safe AI Seminars is a monthly series convening researchers and engineers to advance AI systems with quantitative safety guarantees. We promote the GSAI approach, which combines a world model, a safety specification, and a verifier to produce auditable guarantees rather than relying only on empirical testing. This is increasingly important as systems approach ASL-3+. Each session ships a recording, and we are building GuaranteedSafe.ai into a community hub that curates papers, code, and resources across these three pillars.
Traction: Since April 2024, the Guaranteed Safe AI Seminars series has seen steady growth: 15–25 live participants per session, 80–200 replays within 90 days, ~260 subscribers, and ~600 RSVPs over the past year. Speakers have included: Yoshua Bengio (Mila), Steve Omohundro, Tan Zhi Xuan (NUS/A*STAR), Charbel-Raphaël Ségerie (CeSIA), Jobst Heitzig, Evan Miyazono (Atlas Computing), Rafael Kaufmann (Gaia), Agustín Martinez Suñé (Oxford OXCAV), Louis Jaburi, GasStationManager (independent).
References:
What are this project's goals? How will you achieve them?
Goals (Jan–Dec 2026, 12 months):
Run 12 monthly seminars (talk + Q&A).
Run a structured debate session leading to a distilled write-up post (on our blog and LessWrong).
Operate GuaranteedSafe.ai as a curated community/info hub (papers, code, talks).
Host 4 curated mixers (thematic lightning talks to start with, then breakouts and 1:1s) to convert talks into collaborations.
How we’ll achieve them:
Program design: maintain a 3–6-month speaker pipeline spanning formal methods & PL, verification, mech-interp, specification & controls, ensuring each session clearly maps to GSAI.
Standardized production: speaker orientation, live moderation, recording with light edit.
Community & curation: GuaranteedSafe.ai maintained collaboratively, with quarterly “what’s new” roundups, potentially also published in Quinn Dougherty’s Progress in Guaranteed Safe AI newsletter.
Partnerships & distribution: cross-post with labs’ and aligned orgs’ newsletters; invite co-hosts for the debate and mixers; post in the AI Safety Events & Training newsletter.
Measurement & iteration: run a yearly subscriber survey, and track attendance, replays, and subscriber growth.
Risk management: backup host/organizer, recorded-only fallback, and reduced scope if funding tightens.
How will this funding be used?
Minimum ($0): the series will continue, but sporadically and without commitments, because we believe in the project but are capacity-constrained.
Core ($10K): commitment to organize the monthly seminars, which mainly covers organizer time for speaker curation and coordination, light editing, event tooling and the domain, distribution, and lightweight design.
Mainline ($30K): we will be able to run the monthly seminars at higher quality, organize the mixers and a debate with its write-up, build out the community hub on GuaranteedSafe.ai with bibliography curation, and onboard a second organizer for occasional support and backup.
Who is on your team? What's your track record on similar projects?
Organizer: Orpheus Lummis (https://orpheuslummis.info)
Advisory pool: prior speakers and domain experts for recommendations and peer review.
In the mainline scenario, we will hire a part-time second organizer to share the work and de-risk the project.
What are the most likely causes and outcomes if this project fails?
Risks: organizer bandwidth, funding shortfall.
Mitigations: multi-month speaker backlog, backup organizer, standardized ops, recorded-only fallback if needed, partnerships with relevant groups for cross-posting.
Failure outcomes: loss of momentum for quantitative-safety agendas.
How much money have you raised in the last 12 months, and from where?
No funds raised in the last 12 months. Previously, we received $6K from the LTFF to cover 6 months of the series in 2024.