
Seeking domesticated results, or unleashing wild intelligence?

Science & technology · Technical AI safety · AI governance · EA community

sung hun kwag

Proposal · Grant
Closes December 4th, 2025
$0 raised · $7,000 minimum funding · $25,000 funding goal

30 days left to contribute


Project summary

  • Why do we assume a project summary must be a snapshot of what already fits into today's categories, instead of a hypothesis about what becomes possible if we abandon those boxes?

  • If the real crux is “what kind of intelligence are we choosing to grow?”, why do summaries obsess over legibility—rather than admitting most fundamental breakthroughs will look illegible at the start?

What are this project's goals? How will you achieve them?

  • Why must “goals” be pre-written and boxed in advance?

  • If the architecture truly changes what’s possible, doesn’t specifying execution plans in advance just limit that potential?

How will this funding be used?

  • Can real innovation ever happen if every dollar is strictly partitioned on a spreadsheet?

  • If you can’t tolerate uncertainty and buffer for genuine exploration, isn’t the budget itself a prison for breakthrough work?

Who is on your team? What's your track record on similar projects?

  • If a team’s “credentials” are all that count, isn’t this just an HR gatekeeping ritual repackaged as due diligence?

  • Judging strictly by past “track record,” aren’t you just ensuring that only recycled, system-approved approaches survive?

What are the most likely causes and outcomes if this project fails?

  • Can you really expect innovation if failure is something to be feared, not learned from?

  • In a safety-obsessed environment, can anything genuinely new survive, or do we just keep safe, sterile status quos?

How much money have you raised in the last 12 months, and from where?

  • Does fundraising history actually correlate with breakthrough potential?

  • If previous investors didn’t “approve,” isn’t that sometimes the strongest evidence that something radical is happening—something scoring rubrics are blind to?

Similar projects
Jared Johnson
Beyond Compute: Persistent Runtime AI Behavioral Conditioning w/o Weight Changes
Runtime safety protocols that modify reasoning without weight changes. Operational across GPT, Claude, and Gemini with zero security breaches in classified use.
Science & technology · Technical AI safety · AI governance · Global catastrophic risks
$0 / $125K

Aditya Raj
6-month research funding to challenge current AI safety methods
Current LLM safety methods treat harmful knowledge as removable chunks. This amounts to controlling the model, and it does not work.
Technical AI safety · Global catastrophic risks
$0 / $16.5K

Faisal Moarafur Rasul
Inquiro: Building a Global Hub for Understanding AI
A media and learning platform exploring how AI thinks, featuring Philosopher AI, an educational system that explains its reasoning.
Science & technology · Technical AI safety · AI governance
$0 / $10K

Francesca Gomez
Develop technical framework for human control mechanisms for agentic AI systems
Building a technical mechanism to assess risks, evaluate safeguards, and identify control gaps in agentic AI systems, enabling verifiable human oversight.
Technical AI safety · AI governance
$10K raised

Kaynen B Pellegrino
Support SyberSuite: The first real Governance Platform for AI
Science & technology · Technical AI safety · AI governance · EA community · Global catastrophic risks
$0 raised