The Center for AI Policy (CAIP) is a nonpartisan research organization that develops policy and conducts advocacy to mitigate catastrophic risks from AI.
CAIP is seeking funding to support a 3-month project to raise awareness in Congress about AI-enhanced chemical and biological weapons risk and advocate for bipartisan federal legislation that meaningfully addresses these risks. To achieve this goal, CAIP will conduct research, publish white papers, develop educational materials, build a coalition of key stakeholders, advocate for specific legislation, and organize tabletop exercises, expert briefings, red teaming challenges, and meetings with Congressional offices and federal agencies. CAIP will leverage our growing network of grassroots groups and key stakeholders in Congress, federal agencies, academia, industry, and the public safety community to ensure stakeholder buy-in and collaboration.
CAIP’s mission addresses all catastrophic AI risks, but this project zeroes in on the immediate threat of AI-enhanced chemical and biological weapons, where ripe opportunities exist to advance bipartisan legislation and strengthen existing governance frameworks. Last December, a bipartisan group of four Senators introduced a bill that would create an AI safety review office at the deputy Cabinet level to counter this threat; as Congress wraps up its budget negotiations, it will soon return to this kind of substantive work. With your support, we can put containing AI-driven chem-bio risk near the top of Congress’s agenda.
Even aside from the political timing, we need to act fast. Frontier models already match or exceed human experts on measures of chem-bio lab expertise. AI models are also able and willing to coach arbitrary users, including terrorists and other bad actors, through creating new bioweapons, toxic chemicals, and other weapons of mass destruction.
Without more funding in the next several weeks, CAIP will need to downsize and slow our operations considerably. This project will extend our runway through Q3 2025. We don’t receive Open Philanthropy funding; they have declined to fund us in 2025. Our funding from SFF is only a fraction of what we need to survive. Manifund support would be a major lifeline to help keep CAIP’s doors open and our critical work moving forward. As our track record shows, we’ve been able to influence AI policy discourse in a short amount of time. We are immensely grateful for any support that the Manifund community can provide!
The project’s goal is to raise awareness about AI-enhanced chemical and biological weapons risk and advocate for bipartisan federal policies and actions that meaningfully address these risks. These efforts will bolster public safety and U.S. national security, while making broader regulation of frontier AI more feasible in the future.
CAIP will achieve this by implementing a coordinated, comprehensive advocacy strategy composed of the following activities:
Coalition building:
- Engage key stakeholders across government agencies, academia, and private industry to refine policy proposals and build consensus.
- Leverage and grow CAIP’s existing grassroots network through an educational tabletop exercise on chem-bio AI risk with public health stakeholders, along with presentations that increase awareness of the risk and amplify CAIP’s advocacy.
- Launch a chem-bio red teaming challenge with CAIP’s Policy Advocacy Network, a group of college AI safety teams from dozens of universities across the U.S.
Congressional engagement:
- Organize Congressional briefings to raise awareness about chem-bio AI threats featuring respected experts from academia, industry, and government.
- Bring chem-bio experts to DC for facilitated private meetings with Congressional offices and federal agencies.
- Distribute targeted briefing papers, highlighting how various legislative proposals could address these risks.
- Advocate for the Preserving American Dominance in AI Act, the Nucleic Acid Standards for Biosecurity Act, and bills related to synthetic DNA such as the Strategy for Public Health Preparedness and Response to Artificial Intelligence Threats Act and the MedShield Act.
- Produce persuasive educational materials emphasizing the urgency of, and existing bipartisan support for, reintroducing and passing this legislation.
Administration engagement: Meet with executive branch stakeholders (OSTP, HHS, DHS, DoD) to increase awareness of AI chem-bio risk and inform executive actions on AI.
Host a Chem-Bio AI Risk Tabletop Exercise at a state or local public safety or public health department in the DC area, using expert-vetted scenarios that accurately depict potential AI-driven chem-bio threats. This event will bring together senior-level participants from the public health, emergency response, defense, AI research, and policy communities, including key government participants.
Research and public materials development:
- Explain, extend, and apply existing research on AI chem-bio risk, developing it into actionable policy solutions.
- Record CAIP podcast episodes with AI policy analysts, biodefense experts, and legislators to discuss chem-bio AI risk management and solutions.
- Publish white papers, blog posts, guest articles in publications covering tech, public health, and national security, and social media content to amplify public awareness and policymaker attention.
This funding will be used to cover the salaries of core CAIP staff for three months, as well as direct costs associated with all proposed activities and events (tabletops, red teaming, briefings, advocacy events, etc.) including food and beverages, honoraria, travel and lodging, consulting fees, printing, and operating costs.
With our minimum goal of $50,000, we can organize an expert briefing on Capitol Hill, a tabletop exercise, some Congressional engagement, and some public materials such as blog posts, but our impact will be limited.
At our median goal of $250,000, we would still need to limit our engagement with Congress and federal agencies and cut several activities. We would likely achieve meaningful impact, but at reduced levels.
Our $400,000 goal will allow us to achieve the highest impact. Advocacy is multifaceted; it requires constant engagement with policymakers, key stakeholders, constituents, and the media. AI is a new and emerging technology that is poorly understood by elected officials and the public alike, so continuous, hands-on education about its harms and how to mitigate them is necessary. We’ve seen how tabletop exercises and demonstrations of AI capabilities change people’s minds. Grassroots groups and key stakeholders are eager to work with us, and we’ve built relationships with Congressional offices to the point where they ask for our policy advice and endorsement on AI safety bills. This is why we are taking a multi-level approach to this project, and why we believe we can achieve the highest level of impact with $400,000.
The CAIP team bios can be found on the Our Team page on our website.
CAIP is managed by Executive Director Jason Green-Lowe. Our advocacy team includes government relations directors Kate Forscey and Brian Waldrip, who engage with Congressional offices; director of external affairs Mark Reddish, who engages with industry, academia, think tanks, and other nonprofits; and national advocacy coordinator Ivan Torres, who runs our grassroots advocacy efforts. Our research and policy team includes senior policy analyst Claudia Wilson, policy analyst Joe Kwon, and research associate Tristan Williams, who write original research papers, reports, and responses to federal agencies’ requests for comment. Jakub Kraus, our technical content lead, runs our weekly newsletter, produces our podcast, and organizes our external events. Marta Sikorski Martin, our director of development, runs our fundraising efforts.
We have a strong track record of raising awareness about catastrophic AI risks and advocating for AI safety bills that protect the public. In 2024, we invested substantial time and effort into building relationships with Congressional offices, grassroots groups, key stakeholders, and the media, and those relationships are now generating results. These critical audiences now seek out our expertise and advice on AI safety policy and opportunities for collaboration. Our policy ideas are covered in mainstream outlets like The New York Times and Fox News, as well as local media that reach constituents in key districts.
● Legislative impact:
- We were the first organization to publish model legislation addressing permitting requirements for the largest AI systems, hardware security, monitoring and reporting, liability reform, and emergency powers. We released an updated version of this model legislation on April 30.
- In 2024, CAIP successfully added stronger AI safety language to 3 bills introduced in Congress. CAIP officially endorsed 10 AI safety bills, and Congressional offices announced 3 of those endorsements in press releases.
- In 2025, Congressional offices asked CAIP to provide AI-focused amendment ideas for the FY26 National Defense Authorization Act and specific bill language for AI-related FY26 appropriations requests, as well as policy input on modest AI safety bills.
- CAIP was invited by Congressional offices to provide AI and cyber-related questions for two House committee hearings, in which some of our questions were asked and helped shape the discussion.
- Federal agencies have also asked CAIP to respond to specific requests for information, citing our expertise in AI.
- CAIP’s September 2024 comments on the U.S. Artificial Intelligence Safety Institute’s first public draft of NIST AI 800-1, Managing Misuse Risk for Dual-Use Foundation Models, were incorporated into the second public draft, for example through greater detail on timelines across the AI lifecycle.
● Coalition building:
- In 2024, CAIP published an open letter supporting the 118th Congress’s bipartisan responsible AI legislation; the letter openly warned Congress about the “possibility of misaligned autonomous AI agents” and was co-signed by mainstream organizations like Public Citizen, the Federation of American Scientists, the Software & Information Industry Association, and the Open Markets Institute.
- In March 2025, CAIP launched the Policy Advocacy Network, a group of dozens of college AI safety teams across the United States. The network provides members with networking, capacity building, and advocacy opportunities related to AI safety at the local, regional, and national levels.
● High-impact events with key stakeholders:
- We’ve organized 20 events so far, among them 8 expert briefings on Capitol Hill for Congressional staffers on AI issues at the forefront of the policy debate, including AI and education, privacy, automation and the workforce, elections, music, and most recently, cybersecurity.
- In January 2025, CAIP convened public safety leaders, federal officials, and AI experts in Virginia for a tabletop exercise on AI Threats to Emergency Response. Participants identified methods for addressing the various ways AI can interfere with public safety operations and simulated defending against AI-enhanced swatting attacks. After the event, an overwhelming majority of participants reported feeling significantly more concerned than before the exercise that AI will disrupt operations in the next six months.
- In February 2025, CAIP organized a first-of-its-kind Congressional exhibition on advanced AI on Capitol Hill, featuring leading AI research groups from 14 universities around the country presenting interactive demos of the risks posed by the latest generation of AI. Representative Bill Foster (D-IL-11) joined many Congressional staffers in testing real-time AI tools that designed new viruses, created audio deepfakes, and misrepresented drone deployments. Feedback showed that the event changed some staffers’ perspectives on AI risk and the need for AI safety legislation.
● Our publications and materials are widely read by important audiences:
- 100 Congressional staffers subscribe to CAIP’s newsletter, which provides in-depth editorial coverage of the week’s AI policy news from a safety-focused perspective.
- CAIP’s formal response to President Trump’s 2025 AI Action Plan and the Office of Science and Technology Policy’s Request for Information was featured in The New York Times, GovTech, Just Security, and Tech Policy Press.
- Our press statement on the release of the Bipartisan AI Task Force’s landmark report was picked up by mainstream outlets like Fox News, which noted CAIP’s concern about the report’s lack of emphasis on catastrophic risk and the absence of decisive action on AI safety from Congress. FoxNews.com averaged 361 million visits in November 2024.
- Our press statement on the AI safety landscape under the new Trump Administration was picked up by the online news outlets Wired and Decrypt, which carried our warning that a careless or hasty repeal of the Biden AI EO could undermine the reliability of AI safety testing. Wired has a digital monthly reach of 20 million; Decrypt reaches 5 million readers.
- CAIP’s February press release on reported mass layoffs at NIST’s AI Safety Institute was picked up by several online tech media outlets, including Fortune, Wired, TechCrunch, and TECHi, which collectively have an estimated monthly online readership of 65 million people.
One of the biggest potential causes of project failure is a lack of funding. CAIP is experiencing a funding crisis right now. Without more funds to close our budget gap, we will have to downsize and considerably slow our operations. If that happens, then the AI safety advocacy movement will suffer a serious setback. We will lose almost two years of hard work in building strong relationships with Congress that have been essential to getting smart AI risk mitigation policies in front of staffers and elected officials. Our efforts to mobilize grassroots groups across the country and build broad, organized political support for AI safety legislation will vanish. Unfortunately, there is no other organization that will pick up the torch when we are gone. By the time anyone does, the AI safety advocacy movement will already be too far behind.
Although Republicans hold unified control of the federal government, we can’t guarantee that Congress will act on AI safety policies this year. But CAIP’s work will still have critical value: if an AI crisis does happen, CAIP will have prepared Congress with the knowledge and policy tools it needs to act quickly and decisively, and our model legislation can be used in whole or in part to meet the moment and the needs of Congress.
CAIP has raised $288,000 from SFF, $250,000 from a private donor, and $7,000 in smaller individual contributions over the last 12 months, for a total of $545,000 against an annual budget of $1.8 million.
CAIP doesn’t receive Open Philanthropy funding, despite our efforts to convince them to fund our work. Since our founding in 2023, we’ve relied mostly on individual major and mid-level donors.
Like many nonprofits right now, we have been struggling to raise enough funding to cover our budget for 2025. Manifund funding would be a major lifeline for CAIP. Thank you sincerely for your consideration.