
Terminal Boundary Systems and the Limits of Self-Explanation

Science & technology · Technical AI safety · Global catastrophic risks

Avinash A

Proposal · Grant
Closes January 26th, 2026
$0 raised
$15,000 minimum funding
$30,000 funding goal


Project summary


I am an independent researcher who has developed the ASE (Absolute Self-Explanation) Impossibility Theorem. Using Symmetric Monoidal Closed Categories, I have proven that "Absolute Self-Explanation", a prerequisite for many current superalignment strategies, is a mathematical impossibility for agentic systems. This research identifies structural failure points in AI architecture that empirical testing cannot catch. I am seeking $15,000 for a 3-month sprint to finalize the Agda formalization of these proofs and publish a machine-verifiable "Axiomatic Audit" for frontier AI labs.

Why is this high-impact?

Current safety efforts are "patching holes" in a boat. My research proves that the hull itself has a logical limit. By defining the Terminal Boundary, I help the ecosystem avoid a trillion-dollar catastrophic failure caused by trying to scale systems past their logical safety capacity.

What are this project's goals? How will you achieve them?

  1. Machine Verification: Translate the categorical proofs (Yoneda-theoretic naturality failure, Lawvere fixed-point obstructions) into Agda to provide a mathematically certain "No-Go Theorem" for AI Safety; a toy version of the underlying fixed-point argument is sketched after this list.

  2. Define the "Safety Ceiling": Create a formal framework for labs (OpenAI, Anthropic) to identify which alignment goals are physically/logically impossible versus which are engineering challenges.

  3. The Human-AI "Missing Link": Develop a follow-up framework for "Open-Boundary Alignment," which models the missing logical connection between human intent and AI autonomy.
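To give a concrete, deliberately toy picture of the kind of obstruction involved, the sketch below states Lawvere's fixed-point theorem specialized to plain types. It is written in Lean 4 (using only core Lean, no libraries) rather than Agda, is my illustration of the diagonal argument rather than an excerpt from the TBS/ASE development, and the names (`lawvere_fixed_point`, `φ`, `A`, `Y`) are placeholders.

```lean
-- Minimal sketch, assuming only core Lean 4 (no mathlib); not taken from the
-- TBS/ASE development. Lawvere's fixed-point theorem, specialized to types:
-- if a "self-description" map φ : A → (A → Y) hits every function A → Y,
-- then every endomap f : Y → Y has a fixed point. Diagonal arguments of this
-- shape are the kind of obstruction the proposal formalizes categorically.
theorem lawvere_fixed_point {A Y : Type} (φ : A → (A → Y))
    (hsurj : ∀ g : A → Y, ∃ a : A, φ a = g) (f : Y → Y) :
    ∃ y : Y, f y = y :=
  -- Surjectivity gives some a with φ a = fun x => f (φ x x).
  Exists.elim (hsurj (fun x => f (φ x x))) (fun a ha =>
    -- Evaluating at a itself: φ a a = f (φ a a), so φ a a is a fixed point of f.
    ⟨φ a a, (congrFun ha a).symm⟩)
```

The categorical version replaces types with objects of a symmetric monoidal closed category, functions with morphisms into an internal hom, and surjectivity with point-surjectivity; that generalization is the part targeted for Agda formalization.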

How will this funding be used?


  • Stipend ($12,000): To support 3 months of full-time research and formalization, preventing my exit from the field due to financial constraints.

  • Compute & Verification Tools ($2,000): For formal verification overhead and library development.

  • Open-Source Publication ($1,000): To ensure all proofs and Agda libraries are publicly available for the AI Safety community.

Who is on your team? What's your track record on similar projects?


I am the sole principal investigator and have worked as an independent researcher for 6 years. My track record is defined by high-conviction, self-funded deep work in the categorical foundations of AI safety.

  • Project Evolution: Over the last 6 years, I have moved from theoretical abstractions to the development of the Terminal Boundary Systems (TBS) framework.

  • Deliverables: I have produced two core technical papers ("Terminal Boundary Systems" and "The ASE Impossibility Theorem") and am currently developing a machine-verifiable formalization in Agda.

  • Execution: Operating without institutional support for 6 years demonstrates a high level of research discipline, resourcefulness, and a long-term commitment to solving the most difficult 'Safety Ceiling' problems in AI.

What are the most likely causes and outcomes if this project fails?

Likely Causes of Project Failure:

  • Formalization Bottleneck: The Agda formalization of Symmetric Monoidal Closed Categories is highly complex. Failure could occur if the translation from category theory to machine-verified code hits a 'complexity wall' that exceeds the current 3-month sprint timeline; the sketch after this list gives a sense of the structure involved.

  • Conceptual Friction: The AI safety community may struggle to adopt a 'structural limit' approach over the current 'empirical testing' paradigm.
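To make the formalization overhead concrete, here is a minimal sketch of the ambient structure, written in Lean 4 with mathlib purely for illustration; the project itself targets Agda, none of this comes from the TBS papers, and the names `C`, `A`, `X`, `Y` are placeholders.

```lean
import Mathlib

open CategoryTheory
open CategoryTheory.MonoidalCategory

-- Minimal sketch, assuming Lean 4 + mathlib (illustration only; the project
-- targets Agda, and none of this is taken from the TBS papers). A symmetric
-- monoidal closed category bundles a category, a tensor product with its
-- coherence isomorphisms, a symmetry, and internal homs.
variable (C : Type*) [Category C] [MonoidalCategory C]
  [SymmetricCategory C] [MonoidalClosed C]

-- The internal hom `(ihom A).obj Y` is an "object of maps from A to Y"
-- living inside C itself, rather than an external hom-set.
example (A Y : C) : C := (ihom A).obj Y

-- The defining adjunction: morphisms A ⊗ X ⟶ Y correspond to morphisms
-- X ⟶ (ihom A).obj Y (currying).
example (A X Y : C) (f : A ⊗ X ⟶ Y) : X ⟶ (ihom A).obj Y :=
  MonoidalClosed.curry f
```

These `example` lines only name the objects and the adjunction; the difficulty referred to in the bullet above lies in carrying out the actual proofs on top of this structure.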

Likely Outcomes of Project Failure:

  • Field Risk: Without a proven 'Safety Ceiling,' labs will continue to pursue Absolute Self-Explanation, a goal my theory suggests is mathematically impossible. This leads to a false sense of security in AI alignment.

  • Catastrophic Failure: If agentic systems are deployed without acknowledging these structural boundaries, we risk Modal Collapse—where an AI's internal logic deviates from human reality in an unobservable, uncorrectable way.

  • Personal Risk: My exit from the field. After 6 years of self-funding, a lack of institutional support would mean the permanent loss of this specific mathematical early-warning system for the safety community.

How much money have you raised in the last 12 months, and from where?


Answer: "In the last 12 months, I have raised $0 in external funding. The project has been 100% self-funded through my own personal resources and 6 years of dedicated research labor.

I have reached a 'critical mass' where the theoretical work is complete, but the computational formalization (Agda) requires dedicated runway that my personal resources can no longer sustain. I am seeking this grant to transition from an 'Independent Explorer' to a 'Funded Developer' of safety-critical formal tools."

Similar projects

Anthony Ware, "Shallow Review of AI Governance: Mapping the Technical–Policy Implementation Gap"
Identifying operational bottlenecks and cruxes between alignment proposals and executable governance.
Technical AI safety · AI governance · Global catastrophic risks
$0 / $23.5K

Aditya Raj, "6-month research funding to challenge current AI safety methods"
Current LLM safety methods treat harmful knowledge as removable chunks. This is controlling a model and it does not work.
Technical AI safety · Global catastrophic risks
$0 raised

Jared Johnson, "Beyond Compute: Persistent Runtime AI Behavioral Conditioning w/o Weight Changes"
Runtime safety protocols that modify reasoning, without weight changes. Operational across GPT, Claude, Gemini with zero security breaches in classified use.
Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised

Ella Wei, "Testing a Deterministic Safety Layer for Agentic AI (QGI Prototype)"
A prototype safety engine designed to relieve the growing AI governance bottleneck created by the EU AI Act and global compliance demands.
Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $20K

Brian McCallion, "Boundary-Mediated Models of LLM Hallucination and Alignment"
A mechanistic, testable framework explaining LLM failure modes via boundary writes and attractor dynamics.
Technical AI safety · AI governance
$0 / $75K

Matthew Farr, "MoSSAIC"
Probing possible limitations and assumptions of interpretability | Articulating evasive risk phenomena arising from adaptive and self-modifying AI.
Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 raised

Sandy Tanwisuth, "Alignment as epistemic system governance under compression"
We reframe the alignment problem as the problem of governing meaning and intent when they cannot be fully expressed.
Science & technology · Technical AI safety · AI governance
$0 / $20K