Manifund

Technical AI safety

21 proposals
83 active projects
$5.65M
Grants: 251 · Impact certificates: 20

Ella Wei

QGI: Invariants Governance Architecture for Resilient AI Alignment

Achieving major reductions in code complexity and compute overhead while improving transparency and reducing deceptive model behavior

Science & technology · Technical AI safety · AI governance
1
0
$0 / $20K

Jacob Steinhardt

Transluce: Fund Scalable Democratic Oversight of AI

Technical AI safety · AI governance
5
2
$40.3K / $2M

Alex Leader

Offensive Cyber Kill Chain Benchmark for LLM Evaluation

Measuring whether AI can autonomously execute multi-stage cyberattacks to inform deployment decisions at frontier labs

Science & technology · Technical AI safety · AI governance · Global catastrophic risks
1
2
$0 / $3.85M

Krishna Patel

Isolating CBRN Knowledge in LLMs for Safety - Phase 2 (Research)

Expanding proven isolation techniques to high-risk capability domains in Mixture-of-Experts models

Technical AI safety · Biomedical · Biosecurity
4
4
$150K raised

Lawrence Wagner

Reducing Risk in AI Safety Through Expanding Capacity

Technical AI safety · AI governance · EA community · Global catastrophic risks
3
1
$10K raised

Joseph E Brown

Architectural Governance to Prevent Authority Drift in AI Systems

A constraint-first approach to ensuring non-authoritative, fail-closed behavior in large language models under ambiguity and real-world pressure

Science & technology · Technical AI safety · AI governance
1
0
$0 / $30K

Mackenzie Conor James Clark

AURA Protocol: Measurable Alignment for Autonomous AI Systems

An open-source framework for detecting and correcting agentic drift using formal metrics and internal control kernels

Science & technology · Technical AI safety · AI governance
1
0
$0 / $75K

Finn Metz

AI Security Startup Accelerator Batch #2

Funding 5–10 AI security startups through Seldon’s second SF cohort.

Science & technology · Technical AI safety · Global catastrophic risks
4
6
$355K raised

Preeti Ravindra

Addressing Agentic AI Risks Induced by System Level Misalignment

AI Safety Camp 2026 project: bidirectional failure modes between security and safety

Technical AI safety · Global catastrophic risks
5
0
$0 / $4K

Xyra Sinclair

SOTA Public Research Database + Search Tool

Unlocking the paradigm of agents plus SQL plus compositional vector search

Science & technology · Technical AI safety · Biomedical · AI governance · Biosecurity · Forecasting · Global catastrophic risks
1
0
$0 / $20.7K

Sean Peters

Evaluating Model Attack Selection and Offensive Cyber Horizons

Measuring attack selection as an emergent capability, and extending offensive cyber time horizons to newer models and benchmarks

Technical AI safety
2
2
$41K / $41K

Parker Whitfill

Course Buyouts to Work on AI Forecasting and Evals

Technical AI safety · Forecasting
3
2
$38K / $76K

Anthony Ware

Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap

Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.

Technical AI safety · AI governance · Global catastrophic risks
1
0
$0 / $23.5K

Mirco Giacobbe

Formal Certification Technologies for AI Safety

Developing the software infrastructure to make AI systems safe, with formal guarantees

Science & technology · Technical AI safety · AI governance
2
1
$128K raised

Gergő Gáspár

Runway until January: Amplify's funding ask to market EA & AI Safety

Help us solve the talent and funding bottleneck for EA and AIS.

Technical AI safety · AI governance · EA community · Global catastrophic risks
9
6
$520 raised

João Medeiros da Fonseca

Elo Clínico: Phenomenological Fine-tuning for Medical AI Alignment

Science & technology · Technical AI safety · Global health & development
1
0
$0 / $50K

L

Visa fee support for Australian researcher to join a fellowship with Anthropic

Technical AI safety
1
0
$4K raised

Miles Tidmarsh

CaML: AGI alignment to nonhumans

Training AI to generalize compassion for all sentient beings using pretraining-style interventions as a more robust alternative to instruction tuning

Technical AI safety · Animal welfare
2
1
$30K raised

Centre pour la Sécurité de l'IA

From Nobel Signatures to Binding Red Lines: The 2026 Diplomatic Sprint

Leveraging 12 Nobel signatories to harmonize lab safety thresholds and secure an international agreement during the 2026 diplomatic window.

Technical AI safety · AI governance
6
0
$0 / $400K

Chris Canal

Operating Capital for AI Safety Evaluation Infrastructure

Enabling rapid deployment of specialized engineering teams for critical AI safety evaluation projects worldwide

Technical AI safety · AI governance · Biosecurity
3
7
$400K raised