CeSIA can leverage its Global Call for AI Red Lines to catalyze international agreements. We seek $150k+ to activate our plan.
Maria Ressa, Nobel Peace Prize laureate, announcing the call at the opening of the UN General Assembly: "We urge governments to establish clear international boundaries to prevent unacceptable risks for AI. At the very least, define what AI should never be allowed to do."
We initiated and co-led the Global Call for AI Red Lines:
Signed by 12 Nobel laureates, 10 former heads of state, and 90+ organizations
300+ media mentions
Presented at the UN General Assembly (video) and the UN Security Council (video)
We have the opportunity to translate this political mandate into concrete policy.
"As an expert in multilateral Tech diplomacy and a tenured practitioner of UN processes, I assess that CeSIA’s approach to securing formal recognition of AI red lines at UN level - including under a binding instrument - as a significant chance of success pending adequate resources and sustained diplomatic engagement." - Jérôme Barbier
Yoshua Bengio presented our Call for Red Lines at the UN Security Council: "Earlier this week, with 200 experts, including former heads of state and Nobel laureates [...], we came together to support the development of international red lines to prevent unacceptable AI risks."
The problem: The Global Call says, “These red lines should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds.” BUT existing frontier AI labs have diverging safety thresholds (e.g., OpenAI’s 5x AI R&D acceleration threshold vs. Anthropic’s 2x), and those thresholds won’t be enforced by default: a system that accelerates AI R&D 3x, for instance, would cross Anthropic’s threshold but not OpenAI’s. Without harmonization, this creates a race to the bottom that disadvantages safety-conscious labs, and some companies don’t even have any threshold. This is why we need common, enforced safety standards, which entails 1) developing a shared standard via a harmonization process and 2) enforcing it via a binding international framework.
Our approach combines two tracks:
Technical harmonization: Convene lab safety teams and other stakeholders to develop common threshold definitions, then embed these into regulatory frameworks (EU AI Act Code of Practice, AISI guidelines)
Diplomatic engagement: Secure a champion state and build toward a UN General Assembly resolution on AI red lines, leveraging the 2026 diplomatic calendar (Delhi AI Summit, French G7 presidency, UN Global Dialogue)
Why now: Multiple forecasts converge on the next few years as the period when dangerous AI capabilities start emerging. The diplomatic windows in 2026 (G7, UN Global Dialogue) may be the last opportunity before capabilities outpace governance. The momentum behind the Global Call will fade quickly if it is not activated soon. You are buying speed to match the 2026 window.
Why CeSIA: We combine what's rarely found together: technical depth (we can conduct the risk modeling and threshold harmonization work, having published research on AI evaluations) and policy experience (our recommendations have been integrated into the EU AI Act's Code of Practice, and we've briefed multiple key institutions on AI risks). France holds the G7 presidency in 2026, is a driving force in EU AI regulation, and is a permanent member of the UN Security Council, making it the ideal champion state for international AI red lines, and CeSIA is ideally placed to leverage those opportunities. We have spoken with other aligned organizations active at the UN, and they told us they will not be working on red lines.
The ask: $150k for the Q1-Q2 2026 diplomatic sprint while institutional funding is finalized. This covers the two track leads, key diplomatic trips, and the lab harmonization workshop.
We aim to convert the momentum from the Global Call for AI Red Lines into binding international agreements that establish clear limits on dangerous AI capabilities.
We will achieve this through two parallel tracks:
Track 1: Technical Harmonization of Risk Thresholds
Frontier AI labs (Anthropic, OpenAI, DeepMind) each have their own frontier safety framework, with risk thresholds that differ quantitatively on key metrics like AI R&D acceleration. We will:
Convene workshops with lab safety teams to develop harmonized threshold definitions (building on our October 2024 pilot workshop)
Work with AI Safety Institutes (EU AI Office, UK AISI) to embed harmonized thresholds into regulatory guidelines
Target EU AI Act Code of Practice Measure 4.1, which requires companies to define "systemic risk acceptance criteria" (see the relevant section in our research agenda)
Track 2: Diplomatic Engagement
We will push for red lines in the most important international AI governance fora, put them on the agenda, and work to secure a UN General Assembly resolution on red lines. Here are the highlights of the engagement calendar:
January 2026: Meetings with diplomatic missions in NYC and Geneva to identify champion states through discussions with country representatives
February 2026: AI Impact Summit in Delhi; IASEAI Conference workshop on red lines, aiming to build consensus among civil society organizations on risk thresholds through a structured survey and discussion
June 2026: G7 Summit (France holds the presidency): initiate a coalition of the willing for AI red lines, or alternatively an amendment to the Hiroshima AI Process to gain transparency on the levels of unacceptable risk defined by industry
July 2026: UN Global Dialogue Summit: organize a public consultation on AI red lines / add red lines to the agenda
September 2026: Target UN General Assembly resolution on AI red lines
We are requesting $150k for the immediate Q1-Q2 2026 diplomatic sprint. Full 12-month execution requires ~$400k.
Key activities funded:
2-3 UN diplomatic trips (NYC/Geneva)
Lab harmonization workshop at AAAI Singapore (January 2026)
Event organization at AI Impact Summit Delhi
Professional UN lobbying consultation
Core Team:
Charbel-Raphaël Segerie - Executive Director
Initiator and co-lead of the Global Call for AI Red Lines
Co-founded CeSIA, ML4Good, AI Safety Atlas
Teaches AI Safety at ENS
OECD AI Expert
Pauline Charazac - Engagement Lead
AI Governance, former OECD Advisor
UNESCO Women4Ethical AI Expert Member
Sciences Po Guest Lecturer
Extensive experience and network within G7, G20, IMF, World Bank, OECD, FATF
CeSIA’s core team, for support on analysis and operations:
Florent Berthet - COO
Arthur Grimonpont - Head of Editorial
Charles Martinet - Head of Public Affairs
Advisors/Partners:
Jérôme Barbier & Ulysse Erich (UN Consulting), 10+ years of combined experience in the UN system
Anoush Tatevossian (The Future Society, former UN staffer)
Simon Institute for Long-term Governance, extensive UN experience
Professional UN lobbying firms (APCO)
Just over a year and a half ago, CeSIA did not exist. Today, we are one of the leading AI safety organizations in France and Europe, directly informing policy at national and European levels.
Global Call for AI Red Lines:
You can find key information about the Global Call at the top of the document
Successful pilot “AI Safety Frontier Lab Coordination Workshop” (October 2024) with 10 org leaders and frontier lab staff, demonstrating the feasibility of harmonization discussions
Policy Influence:
Recommendations integrated into the EU AI Act's Code of Practice
First organization to present on the topic of loss of control at the headquarters of key international organizations
Briefings to high-level policy makers in France
On top of this policy influence, we also have significant experience in field-building. Our team created ML4Good, which has become a key talent pipeline training 200 people per year across the world who want to upskill in AI safety and AI governance. We created the AI Safety Atlas, used by thousands of students.
"What if no champion state emerges?" If we don’t find any state that can champion this push, we would be unable to move forward on the ambitious aspects of the diplomatic track. If that were the case, we would pivot to focus on the harmonization track.
"What if labs refuse harmonization?" If labs refuse harmonizations, we will pivot to the regulatory track, pushing AISIs and the EU AI Office to set thresholds externally.
"What about geopolitical tensions?" We would hedge with a coalition-of-the-willing approach (G7, like-minded democracies). Even bilateral agreements and strengthened voluntary commitments would reduce race dynamics.
"Why would companies accept constraints?" Without common rules, safety-cautious actors are disadvantaged. Anthropic and DeepMind have incentives to level the playing field against less cautious competitors. We frame harmonization as protection, not constraint, for responsible labs. It would also establish a clear and predictable regulatory framework that reduces uncertainty for companies.
1. Understanding: At the very least, we will document and share what we do, gaining significant insights and options.
2. Awareness: Key policymakers (high-level diplomats, ministers, heads of state) are more aware of AI risks, which prepares us for future efforts when political conditions improve.
3. Policy norm: Establish a norm that risk thresholds should be internationally coordinated, by putting this on the agenda of major international fora.
4. Technical readiness: Better operationalizations of risk thresholds ready for use
5. Lab harmonization: Safety-conscious companies harmonize thresholds
6. Political agreement: UN resolution/G7 statement mentioning red lines
7. Hard resolution: UN resolution initiating negotiations by the end of 2026
Even at Levels 3-4, this project produces substantial value: regulatory-ready technical frameworks and an established norm that AI governance requires international coordination.
Contact: charbel@securite-ia.fr