AI governance

27 proposals · 59 active projects · $3.76M
Grants: 212 · Impact certificates: 8

Jessica P. Wang

Safe AI Germany (SAIGE)

Germany’s talent is critical to the global effort to reduce the catastrophic risks posed by artificial intelligence.

Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks
3 · 1 · $0 / $285K

Remmelt Ellen

12th Edition of AI Safety Camp

Technical AI safety · AI governance
2 · 0 · $0 / $90K

Jess Hines (Fingerprint Content)

Department of Future Listening: Narrative Risk Radar (UK pilot)

Detect polarising story-frames early and build better narratives—fast, practical, adoptable.

AI governance · Forecasting · Global catastrophic risks · Global health & development
1 · 0 · $0 / $300K

AIVA OS: Causal Intelligence for Medicine

Reconstructs longitudinal patient state, identifies causal drivers, and runs simulations to prevent high-severity clinical failures under fragmented data.

Science & technology · AI governance · Global health & development
1 · 0 · $0 / $500K

Igor Labutin

AI:Save Our Souls

A tech-infused immersive musical. Experience the future of storytelling where artificial intelligence meets the depths of human emotion.

Science & technology · Technical AI safety · AI governance
2 · 0 · $0 / $120K

Jacob Steinhardt

Transluce: Fund Scalable Democratic Oversight of AI

Technical AI safety · AI governance
5 · 2 · $40.3K / $2M

David Krueger

Evitable: a new public-facing AI risk non-profit

Our mission is to inform and organize the public to confront societal-scale risks of AI, and put an end to the reckless race to develop superintelligent AI.

AI governance · Global catastrophic risks
5 · 3 · $5.28K / $1.5M

Amrit Sidhu-Brar

Forethought

Research on how to navigate the transition to a world with superintelligent AI systems

AI governance · Global catastrophic risks
4 · 3 · $365K / $3.25M

Lawrence Wagner

Reducing Risk in AI Safety Through Expanding Capacity.

Technical AI safety · AI governance · EA community · Global catastrophic risks
2 · 2 · $10K raised

Sara Holt

Paper Clip Apocalypse (War Horse Machine)

Short Documentary and Music Video

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1 · 1 · $0 / $40K

Mirco Giacobbe

Formal Certification Technologies for AI Safety

Developing the software infrastructure to make AI systems safe, with formal guarantees

Science & technology · Technical AI safety · AI governance
2 · 2 · $128K raised

Gergő Gáspár

Runway till January: Amplify's funding ask to market EA & AI Safety 

Help us solve the talent and funding bottleneck for EA and AIS.

Technical AI safety · AI governance · EA community · Global catastrophic risks
9 · 6 · $520 raised

Chris Canal

Operating Capital for AI Safety Evaluation Infrastructure

Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide

Technical AI safety · AI governance · Biosecurity
3 · 7 · $400K raised

Evžen Wybitul

Retroactive: Presenting a poster at the ICML technical AI governance workshop

AI governance
1 · 3 · $1.3K raised

Alex Leader

Offensive Cyber Kill Chain Benchmark for LLM Evaluation

Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1 · 2 · $0 / $3.85M

Ella Wei

Testing a Deterministic Safety Layer for Agentic AI (QGI Prototype)

A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1 · 0 · $0 / $20K

Joseph E Brown

Architectural Governance to Prevent Authority Drift in AI Systems

A constraint-first approach to ensuring non-authoritative, fail-closed behavior in large language models under ambiguity and real-world pressure

Science & technology · Technical AI safety · AI governance
1 · 1 · $0 / $30K

Mackenzie Conor James Clark

AURA Protocol: Measurable Alignment for Autonomous AI Systems

An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1 · 1 · $0 / $75K

David Rozado

Disentangling Political Bias from Epistemic Integrity in AI Systems

An Integrative Framework for Auditing Political Preferences and Truth-Seeking in AI Systems

Science & technology · Technical AI safety · ACX Grants 2025 · AI governance · Forecasting · Global catastrophic risks
1 · 1 · $50K raised

Unfunded Projects


Anthony Ware

Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap

Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.

Technical AI safety · AI governance · Global catastrophic risks
2 · 1 · $0 raised