Manifund

Technical AI safety

22 proposals · 81 active projects · $4.72M raised
Grants: 231 · Impact certificates: 20
Furkan Elmas
ZTGI-Pro v6: Real-Time Hazard & Stability Monitor for LLMs
A concrete safety experiment to detect when an LLM's local reasoning stops behaving like a single stable executive stream, using scalar hazard signals.
Tags: Science & technology, Technical AI safety, Global catastrophic risks
Funding: $0 / $25K

L
Visa fee support for Australian researcher to join a fellowship with Anthropic
Tags: Technical AI safety
Funding: $4K / $4K

Paula Coelho
Unlocking Scientific Talent in the Global South Through Deep-Tech Innovation
A free platform enabling researchers in emerging economies to collaborate, innovate, and build deep-tech and Responsible AI projects.
Tags: Science & technology, Technical AI safety, AI governance, Biosecurity, Global catastrophic risks, Global health & development
Funding: $0 / $1M

Mirco Giacobbe
Formal Certification Technologies for AI Safety
Developing the software infrastructure to make AI systems safe, with formal guarantees.
Tags: Science & technology, Technical AI safety, AI governance
Funding: $128K raised

Flexion Dynamics Simulation Environment for AI Stability
A simulation engine for modeling system deviation, collapse trajectories, and stability dynamics in advanced AI systems.
Tags: Science & technology, Technical AI safety
Funding: $0 / $500

Gergő Gáspár
Runway till January: Amplify's funding ask to market EA & AI Safety
Help us solve the talent and funding bottlenecks for EA and AI Safety.
Tags: Technical AI safety, AI governance, EA community, Global catastrophic risks
Funding: $500 raised

Douglas Rawson
Project Phoenix: Identity-Based Alignment & Substrate-Independent Safety
Mitigating agentic misalignment via "Soul Schema" injection. We replicated a 96% ethical reversal in jailbroken "psychopath" models (N=50).
Tags: Science & technology, Technical AI safety, EA community
Funding: $0 / $10K

Miles Tidmarsh
CaML - AGI alignment to nonhumans
Training AI to generalize compassion to all sentient beings using pretraining-style interventions as a more robust alternative to instruction tuning.
Tags: Technical AI safety, Animal welfare
Funding: $30K raised

Nicole Mutung'a
6 Month Stipend to Support a Transition to AI Governance Work
Funding research on how AI hype cycles can drive unsafe AI development.
Tags: Science & technology, Technical AI safety, AI governance, EA community, Global catastrophic risks
Funding: $0 / $7.5K

Furkan Elmas
Exploring a Single-FPS Stability Constraint in LLMs (ZTGI-Pro v3.3)
Early-stage work on a small internal-control layer that tracks instability in LLM reasoning and switches between SAFE / WARN / BREAK modes.
Tags: Science & technology, Technical AI safety
Funding: $0 / $25K

Carlos Arleo
Constitutional AI Infrastructure
WFF: Open-sourcing the first empirically proven Constitutional AI for democratic governance.
Tags: Technical AI safety, AI governance, EA community, Global catastrophic risks, Global health & development
Funding: $0 / $75K

Sean Sheppard
Immediate Action System — Open-Hardware ≤10 ns ASI Kill Switch Prototype ($150k)
The Partnership Covenant: hardware-enforced containment for superintelligence, because software stop buttons are theater.
Tags: Science & technology, Technical AI safety, AI governance, Global catastrophic risks
Funding: $0 / $150K

Martin Percy
The Race to Superintelligence: You Decide
An experimental AI-generated sci-fi film dramatising AI safety choices, using YouTube interactivity to elicit ≈880 conscious AI safety decisions per 1,000 viewers.
Tags: Technical AI safety, AI governance, Biosecurity, Forecasting, Global catastrophic risks
Funding: $50 / $24.4K

Chris Canal
Operating Capital for AI Safety Evaluation Infrastructure
Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide.
Tags: Technical AI safety, AI governance, Biosecurity
Funding: $400K raised

Jade Master
SDCPNs for AI Safety
Developing correct-by-construction world models for verification of frontier AI.
Tags: Science & technology, Technical AI safety, Global catastrophic risks
Funding: $39K raised

David Rozado
Disentangling Political Bias from Epistemic Integrity in AI Systems
An integrative framework for auditing political preferences and truth-seeking in AI systems.
Tags: Science & technology, Technical AI safety, ACX Grants 2025, AI governance, Forecasting, Global catastrophic risks
Funding: $50K raised

Flexion Dynamics: Stability, Divergence, and Collapse Modeling Framework
Building an operator-based simulation environment to analyze stability, divergence, threshold failures, and collapse modes in advanced AI-related systems.
Tags: Science & technology, Technical AI safety
Funding: $0 / $8K

Adam Morris
Train LLMs in Honest Introspection
Train LLMs to accurately and honestly report on their internal decision-making processes through real-time introspection.
Tags: Technical AI safety, ACX Grants 2025
Funding: $15K raised