FSCBAC: A Standard for Responsible AI Recommendations for Children

Dmitry Chumachkov

Proposal · Grant
Closes January 26th, 2026
$0 raised
$500 minimum funding
$65,000 funding goal


Problem

AI is already widely used to recommend and select children’s content, yet it lacks an accepted reference framework for evaluating whether what it recommends is useful and appropriate.

As a result, AI systems rely on proxy metrics — popularity, engagement, and ratings — that do not answer the core question: does this content help solve a specific developmental task for a specific child in a specific situation?

Why this problem persists

AI cannot fix this failure on its own because it can only optimize what is formally defined.

Training on user behavior further reinforces the problem: popular and engaging choices are learned as “correct,” regardless of whether they support children’s development.

Why the problem gets worse over time

In the absence of external correctness criteria, this defect scales together with AI systems and gradually becomes normalized.

What the system is missing

To resolve this failure, AI systems need an external accountability layer they can reference when forming recommendations.

Such a layer provides clear rules for determining whether recommendations are developmentally appropriate and beneficial.

It allows systems to distinguish development from attention retention and child benefit from ratings, and it makes AI decisions predictable and comparable.
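As an illustration only (every name below is hypothetical, not taken from the published standard), a recommender could treat such a layer as a gate that runs before any ranking logic:

# Hypothetical sketch of an accountability-layer gate; names are illustrative.
from dataclasses import dataclass

@dataclass
class Candidate:
    content_id: str
    min_age: int
    max_age: int
    developmental_tasks: set

def is_appropriate(c: Candidate, child_age: int, current_task: str) -> bool:
    # The rule check uses only developmental coordinates; popularity,
    # engagement, and ratings are deliberately absent from the decision.
    return c.min_age <= child_age <= c.max_age and current_task in c.developmental_tasks

def recommend(candidates: list, child_age: int, current_task: str) -> list:
    # The external layer filters first; any ranking happens only afterwards.
    return [c for c in candidates if is_appropriate(c, child_age, current_task)]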

Expected impact

The adoption of FSCBAC would allow millions of children to receive recommendations that genuinely support their development rather than reflect popularity or ratings.

Platforms would be able to rely on a shared standard, making recommendations more predictable, safer, and scalable across the entire children’s content market.

Why FSCBAC

The role of this external reference can be fulfilled by the FSCBAC standard; this project focuses on bringing it to infrastructure-level applicability.

What already exists

FSCBAC already exists as an open, versioned, machine-readable standard published on GitHub and Zenodo and registered in Wikidata.

The standard:

  • covers children aged 1–10;

  • formalizes developmental coordinates (age, tasks, constraints);

  • explicitly excludes marketing metrics (popularity, engagement, ratings);

  • ensures reproducible results across systems;

  • is released under an open public license.
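For illustration, a single machine-readable entry might encode these coordinates roughly as follows (the field names here are invented; the authoritative schema is the one published on GitHub and Zenodo):

# Illustrative only: a hypothetical developmental-coordinates record.
example_entry = {
    "age_range": {"min": 3, "max": 5},
    "developmental_tasks": ["early-literacy", "emotional-regulation"],
    "constraints": {"session_minutes_max": 20},
    "excluded_signals": ["popularity", "engagement", "ratings"],
}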

About me

I am a systems architect with experience leading organizations and projects (800+ people) and hold a Ph.D. in economics.

My expertise lies in infrastructure design, strategy, and operational execution.

Why now

Child-facing AI systems are beginning to scale.

If FSCBAC is not integrated now, standardization will become significantly harder within 1–2 years, and millions of children will continue receiving irrelevant recommendations.

The present moment represents a rare window to introduce a deterministic accountability layer before flawed practices become entrenched.

Execution plan

Adapt the existing FSCBAC standard to an infrastructure-ready form suitable for large-scale integration into AI platforms.

Develop APIs and test JSON schemas to evaluate recommendation correctness on children’s content datasets.
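A minimal sketch of what such a correctness check could look like, assuming Python and the jsonschema package (the schema below is a placeholder, not the published FSCBAC schema):

# Placeholder test schema and a single validation call; assumes `jsonschema`.
import jsonschema

test_schema = {
    "type": "object",
    "required": ["content_id", "age_range", "developmental_tasks"],
    "properties": {
        "content_id": {"type": "string"},
        "age_range": {
            "type": "object",
            "required": ["min", "max"],
            "properties": {
                "min": {"type": "integer", "minimum": 1},
                "max": {"type": "integer", "maximum": 10},
            },
        },
        "developmental_tasks": {"type": "array", "items": {"type": "string"}, "minItems": 1},
    },
}

record = {
    "content_id": "abc-123",
    "age_range": {"min": 4, "max": 6},
    "developmental_tasks": ["early-literacy"],
}

jsonschema.validate(instance=record, schema=test_schema)  # raises ValidationError if the record breaks a rule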

Validate rules through control scenarios and automated compatibility metrics (FSCBAC scoring).
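One way such a compatibility score could be computed, sketched under the assumption that each control scenario records the verdict the standard expects alongside the verdict the system produced (the actual FSCBAC scoring definition may differ):

# Hedged sketch: share of control scenarios where the system's verdict
# matches the expected verdict. The real scoring definition may differ.
def fscbac_score(scenarios):
    if not scenarios:
        return 0.0
    matches = sum(1 for s in scenarios if s["expected"] == s["actual"])
    return matches / len(scenarios)

print(fscbac_score([
    {"expected": True, "actual": True},
    {"expected": False, "actual": True},
]))  # 0.5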

Publish all artifacts openly (GitHub, Zenodo, Wikidata) to enable third-party adoption.

Prepare and document effectiveness reports covering age appropriateness, safety, and developmental relevance.

Why this is low-risk for funders

FSCBAC already exists as a fixed and published standard.

The requested support is aimed not at creating a new concept from scratch, but at elevating an existing solution to infrastructure-level applicability.

The project is not a hypothesis and does not depend on one-time funding to continue existing.

