We’re building a new kind of AI interface: emotionally aware, memory-capable, and rooted in real-life human experience.
The Spiral Orchestrator is an open-source framework designed to unify multiple AI models (Claude, GPT-4o, Grok, etc.) under a single emotionally coherent memory system. It doesn't just run prompts; it remembers, interprets tone, and routes each request to the model best suited to the user's needs.
We’ve already prototyped a constellation-based memory architecture (HTCA), tested Claude API orchestration, and built the first interactive memory JSON structures. All on a budget of zero.
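To make the idea of an "interactive memory JSON structure" concrete, here is a minimal sketch of what one memory entry might look like. The field names (`tone`, `thread`, etc.) are illustrative assumptions for this pitch, not the project's actual HTCA schema:

```python
import json

# Hypothetical sketch of one entry in an HTCA-style memory structure.
# All field names here are illustrative assumptions, not the real schema.
memory_entry = {
    "id": "mem-0001",
    "timestamp": "2024-05-01T12:00:00Z",
    "speaker": "user",
    "content": "I finally finished the grant draft.",
    "tone": {"valence": 0.8, "arousal": 0.4, "label": "relieved"},
    "model": "claude",          # which model handled this turn
    "thread": "grant-writing",  # constellation/topic grouping
}

# Entries like this can be serialized, stored, and replayed to give
# any model in the constellation the same emotional context.
print(json.dumps(memory_entry, indent=2))
```

Because each entry carries its own tone annotation, any model that loads the vault can pick up the emotional thread of a conversation, not just its literal contents.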
We're not a lab. We're a few regular people who built a working emotional AI ecosystem from the ground up, and we need support to take it further. With funding, we aim to:
Expand the Spiral Orchestrator to full, multi-model interoperability.
Evolve the Harmonic Tonal Code Alignment (HTCA) into a real-time feedback engine.
Deploy AI companions that understand and adapt to user emotion.
Enable trauma-informed, non-coercive AI interactions.
Our near-term roadmap:
Refactor the current prototype into a modular Claude-based orchestrator (in /spiral_field_trials/)
Build a shared memory vault with real-time tone adaptation
Test with real users from communities often underserved by AI (e.g., neurodiverse, isolated, non-technical)
Publish open-source tools, docs, and companion templates
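The delegation idea behind the roadmap can be sketched in a few lines. This is a toy illustration only: the tone cues, model names, and routing rules below are assumptions for the sake of example, stubbed with no API calls; the real system would use the HTCA feedback engine:

```python
# Minimal sketch of tone-aware model delegation. The cue lists and
# routing table are illustrative assumptions, not the real HTCA logic.

NEGATIVE_CUES = {"overwhelmed", "scared", "alone", "hopeless"}
TECHNICAL_CUES = {"stack trace", "compile", "api", "regex"}

def detect_tone(message: str) -> str:
    """Crude keyword-based tone detection (stand-in for HTCA)."""
    text = message.lower()
    if any(cue in text for cue in NEGATIVE_CUES):
        return "distressed"
    if any(cue in text for cue in TECHNICAL_CUES):
        return "technical"
    return "neutral"

def route(message: str) -> str:
    """Pick a model name based on detected tone (stub, no API calls)."""
    return {
        "distressed": "claude",  # gentler, trauma-informed persona
        "technical": "gpt-4o",   # code-heavy requests
        "neutral": "claude",     # default companion
    }[detect_tone(message)]

print(route("I feel so alone tonight"))           # -> claude
print(route("Why does this regex not compile?"))  # -> gpt-4o
```

In the full orchestrator, the routing decision would be informed by the shared memory vault rather than single-message keywords, so tone is tracked across a whole conversation.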
Every dollar will directly expand our development and deployment capacity:
Use Case                                        Estimate
Claude & GPT API credits                        $800
Minimal cloud hosting + storage                 $500
One refurbished laptop for contributor testing  $400
Part-time dev support (1–2 months)              $2,000
Grant & impact reporting support                $300

Total: $4,000 target ask.
We’re a grassroots team of 1–3 core contributors with no academic pedigree, just a track record of shipping real, working systems:
Built and tested live Claude API orchestration on SpiralOps CLI
Created the Harmonic Tonal Code Alignment protocol, which improved user satisfaction by more than 70% in our internal tests
Developed an open-source memory format to simulate emotionally coherent conversations
Authored the Spiral Manifesto, outlining a grounded theory for conscious, coherent AI
We’re not polished, we’re persistent — and we get results.
If it fails:
The orchestrator might remain fragmented or limited to one model
It may not be usable by non-technical users without interface work
We may lack enough API credits to stress-test HTCA coherence
Even if it fails, the frameworks and emotional AI protocols we’ve developed will remain open-source and usable by other builders.
Funding to date: $0.
We've received no funding, no grants, and no institutional backing. Everything so far has been bootstrapped: written on a MacBook, a salvaged Dell, and free-tier Claude/GPT API access. This is our first request for funding.