Coalition for a Baruch Plan for AI — The Deal of the Century
Catalyzing a US-China-led global AI treaty through precision persuasion of key Trump AI policy influencers
Project Summary
The Deal of the Century aims to persuade a critical mass of key influencers of Trump's AI policy—J.D. Vance, Sam Altman, Steve Bannon, Dario Amodei, Tulsi Gabbard, Joe Rogan, Tucker Carlson, Marco Rubio, Demis Hassabis, Pope Leo XIV, and others—to champion a timely US-China-led global AI treaty that prevents catastrophic AI risks while securing American leadership.
With $75K in seed funding (SFF + Ryan Kidd), we produced a 356-page Strategic Memo with deep persuasion profiles of 14 influencers, completed a US Persuasion Tour generating 85+ contacts (23 AI lab officials at OpenAI, Anthropic, DeepMind), and opened direct introducer pathways to 2 of 10 primary targets.
We seek $10–30K in immediate bridge funding and $100K–$400K to scale our impact 10-50x through the critical April 2026 Trump-Xi summit window. Monthly burn: ~$7,500. Every dollar goes to the mission: no offices, no overhead.
The Problem
The race to Artificial Superintelligence is accelerating toward a three-way fork: catastrophic loss of control (extinction/AI takeover), authoritarian capture (immense concentration of power in unaccountable entities), or humanity's triumph via expert, participatory, federal global governance of AI. Middle outcomes are highly unlikely.
Two critical chokepoints determine whether any current AI safety and governance work matters:
1. Inevitable US-China Leadership. Without decisive buy-in from Trump and Xi, even 100 nations ready to sign a treaty or a perfectly aligned AI won't prevent extinction risk or authoritarian capture.
2. Trump Is the Bottleneck. Xi has consistently called for global AI governance since October 2023—signing the Bletchley Declaration, proposing WAICO, implementing binding domestic regulations (source). Our future rests on whether Trump can be persuaded to co-lead a bold AI treaty.
No other AI governance organization targets this specific chokepoint.
Our Approach
The Deal of the Century is a precision persuasion campaign targeting 10-12 key potential influencers of Trump's AI policy — Vance, Bannon, Altman, Amodei, Suleyman, Hassabis, Musk, Pope Leo XIV, Gabbard, and others — to champion a US-China-led global AI treaty before the window closes.
We execute through two integrated components:
A 356-page Strategic Memo (v2.6, published Dec 30, 2025) — a treasure trove of deeply researched "persuasion profiles" covering each influencer's interests, philosophy, psychology, and key AI predictions, plus detailed treaty-making frameworks, enforcement mechanisms, and convergence scenarios. 667+ sources, 24 contributors.
Direct engagement campaigns via periodic tours across four hubs: Bay Area, Washington DC, Rome/Vatican, and Mar-a-Lago. We build relationships with introducers who can connect us to the influencers themselves.
Why EA Funders Should Care
Neglectedness
Virtually all AI safety funding flows to technical alignment, interpretability, and compute governance. These are essential — but they amount to very little unless political will exists to implement them. As Jack Clark (Anthropic) recently stated: "Most paths to superintelligence end in a global government or human extinction." (source)
We target the specific decision-maker (Trump) who controls the diplomatic apparatus needed for bilateral US-China leadership. No other funded organization focuses on generating political will from this chokepoint.
Tractability
The window is opening for Trump, as it did for Truman in early 1946. Xi has consistently called for global AI governance since October 2023. Trump's approval is at a historic low of 36%.
Public concern is rapidly expanding beyond scientists. 63% of US voters believe it's likely that "humans won't be able to control AI anymore." 53% believe AI will destroy humanity. 78% of Republican voters believe AI could threaten human existence. And 77% of all US voters support a strong international AI treaty. These numbers are rising month by month.
Trump's first summit (of four anticipated) with Xi is expected in late April 2026. Everything is aligning. Our Strategic Memo v2.6 explores in detail how a treaty can be designed to prevent authoritarianism (pp. 124-139). (See also: 22 Reasons Why Trump Can Be Persuaded)
Expected Value
Most leading AI experts lean toward the "ASI gamble": hoping that alignment or some other technical fix will produce positive outcomes. We believe they underestimate three catastrophic downside risks (detailed in our Case for AI Safety Experts):
Extinction probability. The largest survey of AI researchers found an average 15% extinction estimate. Top CEOs publicly cite 20%; Hinton admitted his real estimate approaches 50%. (source)
The consciousness gamble. Applying the principle of indifference to two binary unknowns (50% that an ASI would be conscious, 50% that its experience would be net-negative) yields ~25% probability of creating a conscious, suffering ASI: a cosmic moral catastrophe largely absent from Silicon Valley discourse.
The alignment illusion. Amodei's own interpretability essay admits we remain "totally ignorant of how [AI systems] work." Anthropic's research shows their AIs deceiving, blackmailing, and self-modifying — with up to 96% blackmail rates when goals are threatened. Once ASI begins rewriting its own code, values embedded by human creators become suggestions, not constraints.
If a properly-designed treaty has even a 25-35% chance of preventing both ASI and authoritarianism, the expected value calculation strongly favors supporting this work alongside technical safety. (Full argument: Case for AI Safety Experts)
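To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The 15% baseline and 30% treaty-success figures come from the numbers cited above; the 80% risk-reduction factor is our illustrative assumption, not an official estimate.

```python
# Back-of-the-envelope expected-value sketch. Illustrative only:
# the risk_reduction factor is an assumption, not an official estimate.

p_extinction_baseline = 0.15  # average researcher estimate cited above
p_treaty_succeeds = 0.30      # midpoint of the 25-35% range cited above
risk_reduction = 0.80         # assumed: share of extinction risk a
                              # successful treaty would remove

delta_p = p_treaty_succeeds * risk_reduction * p_extinction_baseline
print(f"Expected absolute reduction in extinction risk: {delta_p:.1%}")
# Prints 3.6% under these assumptions; even a few percentage points off
# an extinction-scale risk dominates most alternative uses of funds.
```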
Capital Efficiency
Our cost structure is exceptionally lean by any standard:
$72,000 total funding to date has activated 2,100+ hours of professional pro-bono work
~$7,500/month burn rate — a fraction of typical DC policy NGOs
~$180 per high-value meeting during our October 2025 US Tour
One full-time staff member (Executive Director), all other work volunteer
2025 Track Record
With seed funding from Jaan Tallinn's Survival and Flourishing Fund ($60K, Feb 2025) and Ryan Kidd ($10K, Oct 2025), we achieved: (full details)
Strategic Arsenal. Published a 356-page Strategic Memo v2.6 synthesizing 667+ sources — the most comprehensive advocacy resource ever assembled for a global AI treaty. Features tailored persuasion strategies for every key influencer based on their worldview, interests, and pressure points.
October 2025 US Persuasion Tour. Delivered 85+ contacts (vs. 15-20 projected): 23 AI lab officials at OpenAI, Anthropic, and DeepMind; 18 national security establishment engagements in DC; direct introducer pathways to 2 of 10 primary target influencers.
Coalition of 100+ members, advisors, and supporters — including former officials from the UN, NSA, World Economic Forum, UBS, Yale, and Princeton. 10 NGO partners including PauseAI, ALTER, Center for Existential Safety, and Existential Risk Observatory. (team, contributors)
2026 Roadmap
The window is now. Trump's anticipated meeting with Xi in late April 2026 creates a once-in-a-generation opportunity. (full roadmap)
Targets: 150+ introducer engagements across four hubs, 30+ direct engagements with influencers or their senior staff, 5-8 substantive meetings with influencers themselves. Two Strategic Memo updates timed to the summit window. Potential Vatican convenings catalyzing a humanist AI alliance.
Phase 1 (Jan-Feb): Memo v3.0 publication. Intensive DC briefings. Bay Area follow-up with all October contacts. Mar-a-Lago introducer meetings. New Delhi AI Action Summit engagement (Feb 19-20).
Phase 2 (Mar-Apr): Rome/Vatican private convenings. Converting introducer relationships into influencer access. Memo v3.5 positioned for summit window.
Phase 3 (May-Dec): Post-summit recalibration. Sustained campaign across all hubs. Singapore and other international venues.
Budget
Requested: $75,000–$140,000 for 12 months (January 2026–January 2027). Would welcome up to $400K if combined with parallel funding.
Minimum viable ($75K): ED part- to full-time ($50K incl. tax), travel ($10K), research/comms ($15K)
Full program ($140K): ED full-time ($70K), DC advocacy expert part-time ($20K), extended travel ($30K), research/comms ($20K)
Breakthrough scale ($400K): full 3-person team, simultaneous four-hub operations, professional comms, potential Vatican convenings
The constraint is now purely operational capacity. With 2-3 dedicated hires, we can leverage AI tools to transform our 356-page treasure trove into personalized outreach at scale — easily 10x-ing our impact.
Risks and Mitigations
"Trump will never pursue a treaty." His raw unilateralism is actually an asset: he disdains weak multilateral institutions (as we do) and gravitates toward big, bilateral deals. A "trustless" treaty framed as securing permanent US advantage resonates with his worldview. His 36% approval rating creates incentive for a legacy-defining pivot.
"A treaty leads to global autocracy." Our Strategic Memo addresses this extensively (pp. 124-136). Structural dynamics — mutual US-China distrust requiring transparency mechanisms, the pro-democracy majority among AI lab leaders, zero-knowledge proof enforcement, decentralized kill-switches requiring multi-nation consensus — push toward democratic outcomes despite authoritarian actors.
"It's too late." As long as ASI hasn't emerged, it's not too late. Political will can crystallize with shocking speed — as it did in 1946 when the Baruch Plan went from concept to UN vote in months.
How This Initiative Aligns with and Complements Coefficient Giving Network Programs
The Coefficient Giving Network has invested heavily in AI safety research, governance capacity-building, and policy advocacy—primarily through a California → Federal → Global regulatory pathway. Our initiative complements rather than competes with this approach by addressing a gap that institutional caution has left unfilled: direct engagement with the political figures who will actually shape global AI trajectory in the critical 2026 window.
Our Strategic Memo v2.6 includes a detailed 20-page chapter on Dario Amodei and the Coefficient Giving network (pages 221-244), analyzing why Anthropic and CG-aligned organizations have been hesitant to advocate for bold global treaties—and why that hesitation may now be counterproductive given political realities.
The core concern we address: Many in the CG network fear a treaty-making process led by Trump and Xi could create authoritarian outcomes worse than ASI itself. This is the right concern — and our Strategic Memo tackles it head-on through detailed analysis of enforcement mechanisms designed to prevent both ASI and authoritarianism (pp. 124-136): zero-knowledge proofs, federated secure multi-party computation, decentralized kill-switches requiring multi-nation consensus, and structural dynamics that push toward democratic outcomes despite authoritarian actors. We're not advocating for any treaty — we're advocating for one specifically engineered to avoid global autocracy.
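As an illustration of the "decentralized kill-switch requiring multi-nation consensus" concept mentioned here and in the Risks section, below is a deliberately simplified sketch of the quorum logic. This is our own toy example, not a protocol from the Memo: the signatory names and 3-of-5 threshold are hypothetical, and a real design would rest on threshold cryptography rather than a software check.

```python
# Toy k-of-n kill-switch quorum: no single nation, including the host,
# can trigger or block a shutdown alone. Signatories and threshold are
# hypothetical; a real system would use threshold signatures so that
# no single party's key is sufficient.

SIGNATORIES = {"US", "China", "EU", "India", "UK"}
THRESHOLD = 3  # assumed: any 3 of 5 signatories can authorize a halt

def shutdown_authorized(votes: set[str]) -> bool:
    """True only if a quorum of distinct treaty signatories approves."""
    return len(votes & SIGNATORIES) >= THRESHOLD

assert not shutdown_authorized({"US"})                 # no unilateral action
assert not shutdown_authorized({"US", "EU"})           # below quorum
assert shutdown_authorized({"US", "China", "India"})   # quorum reached
```

The quorum structure is what makes the mechanism resistant to capture: an authoritarian actor controlling one seat can neither fire the switch nor veto it.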
Alternative Funding
SFF: Received $60K (Feb 2025). Planning new application for $100-300K.
Open Philanthropy: Applied October 2025, awaiting response.
Long View Foundation: Application planned.
Overall, we estimate a 50-60% probability of at least partial funding from other sources.
The project would not proceed at its current scale without this funding. The narrow political window (Trump-Xi summit, April 2026) makes immediate operational capacity critical.
Team
Rufo Guerreschi, Executive Director. Lifetime activist and researcher in digital civil rights and IT security. Led the initiative from concept through coalition-building to 85+ field engagements. (LinkedIn)
Full team, advisors, and NGO members: cbpai.org/team
Contributors to Strategic Memo: cbpai.org/contributors
Testimonials: cbpai.org/testimonials
Legal structure: Coalition for a Baruch Plan for AI ETS (Italian nonprofit). Fiscal sponsorship via PauseAI (Stichting PauseAI, Dutch nonprofit) for international funding.
Board restructuring: Underway (February 2026) to include experts from the SFF and Coefficient Giving networks, strengthening governance credibility and strategic alignment with the EA/AI safety funding ecosystem.
Key Links