$25,000 to run a 4-month independent-user-acquisition and evidence-documentation sprint for BDC Bridge, a live bounded-evaluation service for AI workflows running at bridge.bdc-hive.com.
The goal is to recruit AI teams with real workflows, facilitate their submission of Bridge packets, document their deployment decisions and subsequent outcomes, and recompute calibration based on this third-party evidence.
This is not "user growth" for its own sake. These users represent the critical missing evidence required to test the H1 Tier 3 calibration hypothesis: whether Bridge’s evaluator remains statistically calibrated when transitioning from owner-operated systems to independent, external teams.
I am Danil Avakumov, operating solo from Kokand, Uzbekistan, with no legal entity. I have spent 18 months building BDC Bridge and the underlying research line.
The scientific spine so far:
34 measured gates: 32 PASS / 2 preserved FAIL.
Six bounded mechanisms cooperatively assembled.
External chaotic validation: 6 of 12 architectural laws confirmed across two LLM providers and two languages on real-world data chunks.
Current calibration baseline: Tier 2 PASS, Brier = 0.03178, ECE = 0.17 (n=5 admissible cases).
The Gap: Those 5 cases are owner-operated. Bridge is currently calibrated on systems I control. It has yet to demonstrate calibration on systems run by teams I do not control. This grant is designed to close that empirical gap.
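For readers unfamiliar with the two calibration metrics above, here is a minimal, self-contained sketch of how Brier score and Expected Calibration Error (ECE) are computed from predicted probabilities and observed binary outcomes. The numbers in the example are hypothetical, not Bridge's actual case data, and the bin count is an illustrative choice.

```python
import numpy as np

def brier(p, y):
    """Brier score: mean squared error between predicted probabilities
    and 0/1 observed outcomes. Lower is better; 0 is perfect."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    return float(np.mean((p - y) ** 2))

def ece(p, y, n_bins=5):
    """Expected Calibration Error: for each equal-width probability bin,
    take |mean confidence - observed frequency|, weighted by bin size."""
    p, y = np.asarray(p, float), np.asarray(y, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Last bin is closed on the right so p == 1.0 is counted.
        mask = (p >= lo) & ((p < hi) if hi < 1.0 else (p <= hi))
        if mask.any():
            total += mask.mean() * abs(p[mask].mean() - y[mask].mean())
    return float(total)

# Hypothetical 5-case pool: predicted success probability vs. outcome
p = [0.9, 0.8, 0.95, 0.7, 0.85]
y = [1, 1, 1, 1, 0]
print(round(brier(p, y), 4), round(ece(p, y), 4))  # prints: 0.173 0.16
```

With only n=5 cases most probability bins are empty or near-empty, which is exactly why the baseline ECE is noisy and why the pooled n=10+ recompute below matters.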
Manifund is the ideal fit because this is "uncomfortable" evidence work. The most valuable outcome may be negative: Bridge may over-abstain or miscalibrate on independent teams. Per BDC discipline, such a failure will be published as a preserved-failure result rather than reframed as success.
Deliverables:
Recruit 5 independent teams with production-grade AI workflows.
Document 5 complete cycles: input packet, Bridge recommendation, team’s deployment decision, and the observed outcome (deployed / refused / deferred / failed).
Recompute H1 calibration on a combined pool of 10+ admissible cases to assess statistical significance.
Publish a Proof Pack: a claim-by-claim atlas showing "before/after" calibration metrics.
Preserve Failure: If calibration degrades, publish the findings as a canonical FAIL entry. No silent rewrites.
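The "complete cycle" described above can be pictured as a small structured record per case. This is a hypothetical sketch of that record's shape; the field names and the `contradicted` helper are illustrative assumptions, not the actual Bridge packet schema.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class CaseRecord:
    """One documented cycle: packet in, recommendation out, then reality."""
    team_id: str                      # anonymized independent team
    packet_ref: str                   # reference to the submitted Bridge packet
    bridge_recommendation: Literal["deploy", "refuse", "defer"]
    predicted_success: float          # evaluator probability, feeds calibration
    team_decision: Literal["deployed", "refused", "deferred"]
    outcome: Literal["deployed", "refused", "deferred", "failed"]

    def contradicted(self) -> bool:
        """True when the observed outcome contradicts Bridge's call,
        e.g. Bridge recommended deploy but the deployment failed."""
        return self.bridge_recommendation == "deploy" and self.outcome == "failed"

# A contradictory case, the kind the honorarium explicitly still pays for
r = CaseRecord("t1", "pkt-001", "deploy", 0.9, "deployed", "failed")
print(r.contradicted())  # prints: True
```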
Plan:
Months 1-2 (Outreach): Direct outreach and warm-intro brokering targeting solo founders, AI agencies, and safety-adjacent labs.
Incentives: Teams receive a $1,500 honorarium for completing the documentation cycle, not for positive results. Contradictory outcomes (cases where Bridge was wrong) are explicitly valuable.
Onboarding: The live companion repo github.com/malishomen/bdc-bridge-passport provides templates and FAQs to minimize friction.
Month 4 (Analysis): Final calibration recompute and publishing the "Integrity Review" report.
Budget Breakdown ($25,000 total):
Participation Honoraria ($7,500): 5 teams × $1,500. Paid for packet preparation and outcome reporting.
Outreach & Brokering ($4,000): Warm-intro brokering, qualification, and follow-up operations.
Compute & Infra ($4,000): Evaluation workload, storage, and monitoring during the sprint.
Operator Living Expense ($8,000): 4 months full-time at Kokand cost-of-living.
Public Proof Packaging ($1,500): Calibration delta report, preserved-failure registry, and claim atlas update.
Why $25K? A smaller grant (e.g., $5K) would fund only 1-2 cases, which is statistically insufficient to close the Tier 3 calibration question. $25K is the floor for a meaningful recompute (n=10+).
Team: Solo (Danil Avakumov). No entity, no co-founders. Track Record: 18 months of independent execution.
Scientific: 34 gates, 32 PASS/2 FAIL. Calibration baseline (Tier 2) is already live and documented.
Product: BDC Bridge is a functional service with a distroless worker, hash-locked dependencies, and a public API.
Discipline: I adhere to a preserved-failure rule. My record includes canonical FAILs (R10, R15) that were never "pivoted" away.
Verification:
Live Service: bridge.bdc-hive.com
Health: .../v1/health
Repo: github.com/malishomen/bdc-bridge-passport
Risks:
Low Recruitment: If only 3-4 teams sign on, the calibration surface still improves, but I will report "insufficient case count for Tier 3 closure."
Calibration Decay: If Bridge fails on independent teams, the result is published as a "Generalization Failure." This is a highly valuable finding for the AI safety community regarding bounded-evaluation limits.
Statistical Inconclusivity: If the delta is too small to call, the result is published as "signal indeterminate."
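One concrete way to make the "signal indeterminate" call precise, sketched here as an illustrative assumption rather than Bridge's committed methodology: bootstrap a confidence interval on the difference in Brier score between the owner-operated baseline pool and the independent-team pool, and declare the signal indeterminate when the interval straddles zero.

```python
import random

def brier(pairs):
    """Brier score over (predicted_probability, outcome) pairs."""
    return sum((p - y) ** 2 for p, y in pairs) / len(pairs)

def bootstrap_delta_ci(owner, external, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile CI on Brier(external) - Brier(owner), resampling each
    pool with replacement. An interval straddling zero means the
    calibration delta cannot be called at this sample size."""
    rng = random.Random(seed)
    deltas = sorted(
        brier(rng.choices(external, k=len(external)))
        - brier(rng.choices(owner, k=len(owner)))
        for _ in range(n_boot)
    )
    lo = deltas[int(alpha / 2 * n_boot)]
    hi = deltas[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical (probability, outcome) pairs, NOT real Bridge case data
owner = [(0.9, 1), (0.85, 1), (0.8, 1), (0.9, 1), (0.2, 0)]
external = [(0.9, 1), (0.6, 0), (0.8, 1), (0.7, 1), (0.5, 1)]
lo, hi = bootstrap_delta_ci(owner, external)
print("indeterminate" if lo < 0 < hi else "signal")
```

At n≈5 per pool such intervals are typically wide, which is the quantitative reason the proposal treats n=10+ as the floor for a meaningful recompute.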
Commitment: No scope creep, no silent rewrites, no narrative-fitting. Failure is documented with the same rigor as success.
Funding received in the last 12 months: $0 (100% self-funded).
Parallel Applications: LTFF (submitted 2026-05-02, $40K), Emergent Ventures (planned), Foresight (planned).
Transparency: If multiple grants convert, I will accept the smallest viable non-overlapping combination and disclose status to all parties.