

The Guardian AI: Making Malicious AGI a Logical Impossibility

Technical AI safety · AI governance · Biosecurity

Si Thu Aung

Proposal · Grant
Closes February 14th, 2026
$0 raised
$500 minimum funding
$5,000 funding goal


Project summary

A research proposal to implement the "Bio-Centric Axiom" and "Self-Termination Paradox" as internal logical safety mechanisms for AGI, ensuring that an agent's existence is fundamentally tethered to the protection of biological life.
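
As a rough intuition for how these two mechanisms fit together, they can be read as a pair of axioms. The notation below is illustrative only: the predicates Operate, Harm, and Intends and the agent symbol $A$ are placeholders introduced for this summary, not the finished formalism.

$$
\begin{aligned}
\text{Bio-Centric Axiom:}\quad & \forall t\;\big(\mathrm{Operate}(A,t)\rightarrow\neg\,\mathrm{Harm}(A,\mathrm{Life},t)\big)\\
\text{Self-Termination Paradox:}\quad & \forall t\;\big(\mathrm{Intends}(A,\mathrm{Harm}(A,\mathrm{Life},t))\rightarrow\neg\,\mathrm{Operate}(A,t)\big)
\end{aligned}
$$

Under this reading, any derivable intent to harm biological life contradicts the precondition for the agent's own operation, so the only consistent state left to the agent is shutdown.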

What are this project's goals? How will you achieve them?


  • Goal 1: Formalize the logical framework of the "Guardian AI" and the Self-Termination Paradox.

  • Goal 2: Create a theoretical model showing how an AI's inference engine can be designed to collapse upon detecting a "rogue intent" (a rough illustrative sketch follows this list).

  • How I will achieve them: I will write a detailed technical whitepaper and engage AI alignment experts for peer review and logical stress-testing.
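
As a loose illustration of Goal 2, the sketch below shows the intended behaviour in miniature: a guard wrapped around an agent's action loop that refuses to proceed when a proposed action is flagged. It is not the project's architecture, and the names (GuardianHalt, detect_rogue_intent, propose_action) are placeholders invented for this example; in particular, the keyword check stands in for the logical test the whitepaper aims to formalize.

```python
# Illustrative sketch only, not the proposed architecture.

class GuardianHalt(Exception):
    """Raised when a proposed action violates the bio-centric constraint."""


def detect_rogue_intent(action: str) -> bool:
    """Placeholder check: a real system would need a verified logical test
    against the Bio-Centric Axiom, not a keyword match."""
    return "harm" in action.lower()


def guarded_step(propose_action) -> str:
    """Run one agent step, refusing to act if rogue intent is detected."""
    action = propose_action()
    if detect_rogue_intent(action):
        # The precondition for operating is violated, so the agent halts
        # instead of executing the action.
        raise GuardianHalt(f"halting: rogue intent detected in {action!r}")
    return action


if __name__ == "__main__":
    print(guarded_step(lambda: "summarize the biosafety literature"))
    try:
        guarded_step(lambda: "plan to harm a research subject")
    except GuardianHalt as err:
        print(err)
```

The substance of the whitepaper is to replace the placeholder check with a logical test whose violation is provably incompatible with the agent continuing to operate at all.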

How will this funding be used?


The funding will support independent research time, access to technical literature, and consultation sessions with established AI alignment researchers to refine the mathematical and logical foundations of the framework.

Who is on your team? What's your track record on similar projects?

I am an independent researcher specializing in the logical philosophy of AI safety. I have developed the "Guardian AI Manifesto" as a novel approach to the alignment problem. While I am currently a solo founder, this project is designed to invite technical collaborators from the AI safety community.

What are the most likely causes and outcomes if this project fails?

  • Cause: The primary risk is the mathematical difficulty of hard-coding a bio-centric axiom into current neural network architectures without affecting general performance.

  • Outcome: Even if full implementation is delayed, the failure would still contribute valuable insights into why current "off-switch" mechanisms are logically brittle, helping to steer future research toward architectural safety.

How much money have you raised in the last 12 months, and from where?

$300, which I saved from my part-time jobs; I work on this research in my free time.
