Manifund

Funding requirements

  • Sign grant agreement
  • Reach min funding
  • Get Manifund approval

The Guardian AI: Making Malicious AGI a Logical Impossibility

Technical AI safety · AI governance · Biosecurity

Si Thu Aung

Proposal · Grant
Closes February 14th, 2026
$0 raised
$500 minimum funding
$5,000 funding goal


30 days left to contribute


Project summary

A research proposal to implement the "Bio-Centric Axiom" and "Self-Termination Paradox" as internal logical safety mechanisms for AGI, ensuring that an agent's existence is fundamentally tethered to the protection of biological life.

What are this project's goals? How will you achieve them?


  • Goal 1: Formalize the logical framework of the "Guardian AI" and the Self-Termination Paradox.

  • Goal 2: Create a theoretical model showing how an AI's inference engine can be designed to collapse upon detecting a "rogue intent" (see the sketch after this list).

  • Achievement: I will achieve this by writing a detailed technical whitepaper and engaging with AI alignment experts for peer review and logical stress-testing.
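To make the intended shape of the argument concrete, here is a toy propositional sketch in Lean. It is illustrative only, not the framework itself: the proposition names Operating, ProtectsLife, and RogueIntent, and the two hypotheses, are placeholders introduced for this example.

```lean
-- Toy sketch of the "Self-Termination Paradox" (illustrative placeholders only).
-- bioAxiom  : stand-in for the Bio-Centric Axiom (operating entails protecting life).
-- rogueHarm : stand-in for "rogue intent is incompatible with protecting life".
-- Conclusion: an agent that holds a rogue intent cannot consistently keep operating.
theorem self_termination
    (Operating ProtectsLife RogueIntent : Prop)
    (bioAxiom  : Operating → ProtectsLife)
    (rogueHarm : RogueIntent → ¬ ProtectsLife) :
    RogueIntent → ¬ Operating := by
  intro hIntent hOperating
  exact rogueHarm hIntent (bioAxiom hOperating)
```

The whitepaper's task is the hard part this sketch skips: stating the two hypotheses precisely enough that they can be enforced inside a real agent's inference process.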

How will this funding be used?


The funding will support independent research time, access to technical literature, and consultation sessions with established AI alignment researchers to refine the mathematical and logical foundations of the framework.

Who is on your team? What's your track record on similar projects?

I am an independent researcher specializing in the logical philosophy of AI safety. I have developed the "Guardian AI Manifesto" as a novel approach to the alignment problem. While I am currently a solo founder, this project is designed to invite technical collaborators from the AI safety community.

What are the most likely causes and outcomes if this project fails?

  • Cause: The primary risk is the mathematical difficulty of hard-coding a bio-centric axiom into current neural network architectures without affecting general performance.

  • Outcome: Even if full implementation is delayed, the failure would still contribute valuable insights into why current "off-switch" mechanisms are logically brittle (see the sketch after this list), helping to steer future research toward architectural safety.
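For contrast, the toy Python sketch below shows the "external filter" style of off-switch referred to above: a wrapper that vetoes flagged actions from outside the model. The names guarded_policy, propose_actions, and harms_biological_life are hypothetical and are not components of this proposal; the point is only that the constraint lives outside the agent's own reasoning, which is the brittleness an internal bio-centric axiom is meant to avoid.

```python
# Toy contrast sketch (illustrative only; not part of this proposal).
# An external guard wraps a base policy and vetoes candidate actions that a
# detector flags as harmful to biological life. The research goal described
# above is the harder, internal version: making such actions underivable by
# the agent's own inference rather than merely filtered after the fact.

from typing import Callable, Sequence


def guarded_policy(
    propose_actions: Callable[[str], Sequence[str]],  # base model: state -> candidate actions
    harms_biological_life: Callable[[str], bool],     # hypothetical rogue-intent detector
) -> Callable[[str], str]:
    """Return a policy that refuses any candidate action flagged as harmful."""

    def act(state: str) -> str:
        for action in propose_actions(state):
            if not harms_biological_life(action):
                return action
        # No safe candidate: halt instead of acting, an external analogue of
        # the internal "self-termination" this proposal argues for.
        return "HALT"

    return act


if __name__ == "__main__":
    # Minimal usage with stub components.
    policy = guarded_policy(
        propose_actions=lambda state: ["release pathogen", "alert operators"],
        harms_biological_life=lambda action: "pathogen" in action,
    )
    print(policy("lab monitoring"))  # -> "alert operators"
```

The brittleness is that the check sits outside the agent and can be removed or bypassed; the proposal's aim is to move the constraint inside the agent's own logic.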

How much money have you raised in the last 12 months, and from where?

$300, which I saved from my part-time jobs; I work on my research in my free time.

Similar projects

Avinash A
Terminal Boundary Systems and the Limits of Self-Explanation
Formalizing the "Safety Ceiling": An Agda-Verified Impossibility Theorem for AI Alignment
Science & technology · Technical AI safety · Global catastrophic risks
$0 raised / $30K goal

Ella Wei
Testing a Deterministic Safety Layer for Agentic AI (QGI Prototype)
A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.
Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised / $20K goal

Aditya Raj
6-month research funding to challenge current AI safety methods
Current LLM safety methods treat harmful knowledge as removable chunks; this is an attempt to control the model, and it does not work.
Technical AI safety · Global catastrophic risks
$0 raised

Francesca Gomez
Develop technical framework for human control mechanisms for agentic AI systems
Building a technical mechanism to assess risks, evaluate safeguards, and identify control gaps in agentic AI systems, enabling verifiable human oversight.
Technical AI safety · AI governance
$10K raised