Manifund foxManifund
Home
Login
About
People
Categories
Newsletter
HomeAboutPeopleCategoriesLoginCreate

Funding requirements

Sign grant agreement
Reach min funding
Get Manifund approval
1

Global Governance Layer & Defense Benchmarks for Advanced AI Systems

Science & technologyTechnical AI safetyAI governanceGlobal catastrophic risks
Lisa-Intel avatar

Pedro Bentancour Garin

ProposalGrant
Closes December 14th, 2025
$0raised
$350,000minimum funding
$350,000funding goal

Offer to donate

30 daysleft to contribute

You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.

Sign in to donate

Project summary

Lisa Intel is building the first external governance and safety layer for advanced AI systems - a framework that can detect, constrain, and contain unsafe AI behavior even when the underlying model is compromised, open-weight, or deliberately misconfigured.

Unlike current guardrails (which sit inside the model and disappear under fine-tuning or jailbreaks), our architecture operates at the system level:

Real-time behavior monitoring across modalities and agents.

Early-warning signals for unsafe drift, deception, escalation, or capability jump.

Hard containment mechanisms that limit what a system can do even if the model itself is unsafe.

Intervention tools that allow isolation or shutdown before catastrophic actions.

Cross-vendor interoperability, including open-weight and frontier models.

Over the past months, we developed early prototypes and conducted internal simulations inspired by “The Attacker Comes Second”. Our preliminary defense benchmarks show substantially higher resistance than current leading models, with 97–99% defensive effectiveness in controlled tests - results we aim to formalize, replicate, and publish openly.

We now seek $350,000 in funding, including $100,000 to cover past development costs, to:

Build the first working MVP of the governance & containment layer.

Develop standardized attack/defense benchmarks aligned with the AI Safety community.

Run controlled lab tests with safety orgs, red-teamers, and external evaluators.

Prepare 15+ remaining patents to secure the architecture before widespread disclosure.

Produce transparent technical documentation and impact assessments.

Explore collaboration with EU AI Office, AISI, METR-aligned teams, and other safety orgs.

Our goal is to contribute a safety-first infrastructure that complements existing evaluation work and provides a structural, scalable mechanism to reduce catastrophic AI risk.

What are this project's goals? How will you achieve them?

Primary goal:

Develop and validate the first external governance and containment layer that can keep advanced AI systems within safe operational boundaries, even when underlying models are compromised or intentionally misused.

The problem:

Most current safety methods exist inside the model. They disappear when a model is fine-tuned, jailbroken, misconfigured, or deployed in the wild. This leaves both open-weight and proprietary systems vulnerable to misuse and escalatory behavior.

Our solution:

We build a system-level governance layer that monitors AI behavior, restricts unsafe actions, and isolates harmful activity before it escalates. It is designed to work with any model, including frontier, open-weight, or unregulated systems. The architecture aligns closely with the goals of the EU AI Act, CAISI, and international safety directives.

How we will achieve this:

• Build and release the first functional MVP of the governance and containment layer

• Develop standardized attack and defense evaluations in collaboration with safety researchers

• Test the architecture through controlled adversarial scenarios and red-team environments

• Validate the system in comparison with existing studies such as "The Attacker Comes Second"

• Finalize and file the remaining patents that secure the architecture before broader disclosure

• Produce technical documentation and policy-aligned standards for deployment

• Engage with EU AI Office, academic groups, and safety organizations for early pilot testing

• Work closely with evaluation teams to ensure scientific rigor and reproducibility

Expected outcome:

A scalable and practical governance infrastructure that reduces catastrophic AI risk and offers a structural mechanism for safe deployment across government, enterprise, and open-model ecosystems.

How will this funding be used?

This funding will enable us to build and validate the first MVP of the Lisa Intel governance and containment system, complete the core safety research, and secure the remaining patents before wider disclosure. Our use of funds is focused on engineering, evaluation, and essential IP work. The project spans 9 months.

Budget allocation:

45,000 USD - Founder salary (9 months, 5,000 USD/month)

Covers basic living costs so full-time work can continue on development, safety research, patents, and evaluations.

• 9,000 USD - Strategic advisory (Alexis Podolny, 9 months, 1,000 USD/month)

Part-time support from a senior advisor with 20 years in startups and AI, helping with architecture decisions, investor readiness, and hiring plan for the MVP phase.

• 100,000 USD - repayment of earlier development costs.

These are documented expenses for prototype research, patent drafting, and initial system design. Covering them restores operational liquidity so we can move at full speed.

• 110,000 USD - core engineering and MVP development.

Hiring two contractors and one part-time engineer to build the governance layer, simulation tools, containment logic, and model-agnostic interfaces.

• 30,000 USD - evaluations and adversarial testing.

Creating attack simulations and benchmark environments similar to “The Attacker Comes Second,” to validate our defense effectiveness across multiple model families.

• 35,000 USD - patent completion and legal protection

We have filed 4 patents and are preparing 16 more. This portion covers legal fees and ensures that the architecture can be safely disclosed during pilot collaborations.

• 15 000 USD – Infrastructure and compute.

Basic infrastructure such as compute for testing, secure cloud deployments, and compliance documentation.

• 6,000 USD - operational costs.

Bookkeeping, minor travel for key meetings, tools, domains, and other operational overhead.

Total: 350,000 USD.

All funds accelerate development.

This funding removes our current capital bottleneck, allows us to protect our IP, and ensures that the first working version of the global AI governance layer can be tested by governments, researchers, and safety organizations.

Who is on your team? What's your track record on similar projects?

Pedro Bentancour Garin (Founder & Lead Architect)

I am the founder of Lisa Intel and the architect behind the global AI governance framework. My background combines engineering, humanities research, and complex-systems analysis, which has shaped the design of a governance architecture that bridges technical controls with real-world policy needs.

Over the past two years, I have built the full conceptual framework, drafted 27 patent filings (4 submitted, 23 pending), developed early prototypes for monitoring and containment, and led all research, outreach, and architecture design.

Alexis Podolny (Strategic Advisor)

Alexis is a senior startup and AI advisor with 20 years of experience in engineering, product development, and early-stage company building. He supports architecture decisions, investor readiness, and planning for the first MVP phase. His background includes advising deep-tech founders and scaling safety-focused engineering teams.

Track record

Although this is a new venture, the work has already produced:

A complete blueprint for a global AI governance and containment layer.

Initial prototype logic for external anomaly detection and policy enforcement.

Early simulation environments inspired by “The Attacker Comes Second”.

Engagement with policy bodies including the EU AI Office and national regulators.

A multicomponent patent portfolio covering governance, containment, evaluation, and protocol-level safeguards.

This project builds directly on that foundation.

With funding, we can move from architected system to the first functional version that can be tested by external researchers, governments, and safety organizations.

What are the most likely causes and outcomes if this project fails?

1. Incomplete development of the governance MVP

The main risk is that without sufficient funding, we cannot complete the first working version of the external governance and containment layer. This would delay the ability of governments, researchers, and safety organizations to evaluate the system and provide feedback.

2. Slower progress on AI governance research

The field is moving quickly. If we cannot dedicate full-time work to the architecture, simulations, and early evaluations, progress on external-control approaches may slow down. This creates a wider gap where only internal-model guardrails exist, even though recent incidents show that they can be bypassed.

3. Reduced ability to demonstrate feasibility

Without the MVP, it becomes harder to validate the architecture with benchmark environments, adversarial testing, and pilot users. This limits opportunities to collaborate with institutions working on AI safety, such as EU bodies, academia, and research labs.

4. Missed window for early regulatory alignment

Governments are drafting AI governance frameworks now. If the project stalls, we may miss the opportunity to influence early standards and to provide a practical tool that supports compliance and certification.

Outcomes if the project fails

No working prototype for external AI monitoring and containment.

Slower adoption of cross-vendor governance methods.

Continued reliance on model-internal guardrails alone.

Higher difficulty attracting future funding

Loss of momentum during a key period when institutions are actively seeking new governance solutions.

We mitigate these risks through careful budgeting, a focused milestone plan, and a narrow scope: deliver a functional MVP that can be evaluated and tested by external stakeholders.

How much money have you raised in the last 12 months, and from where?

In the past 12 months, the company has raised a total of 100,000 USD, consisting of:

45,000 USD in personal loans from family, used for early research, prototype work, and patent drafting

55,000 USD in bank loans, used for development costs, legal fees, and operational expenses

We have not received any external grants or equity funding yet; all progress so far has been self-funded.

CommentsOffers

There are no bids on this project.