AEGIS is a full-stack AI cognitive architecture built on consumer AMD hardware (7900 XTX). It combines three capabilities that are usually separate research programs:
1. Constitutional alignment -- A conscience layer with six moral evaluation gates, a seven-tier trust hierarchy, and a SQLite-backed precedent store. Every action pathway (code execution, image generation, fabrication, training) passes through these gates before execution. This isn't theoretical alignment research -- it's a working implementation with 489 tests.
2. Efficient inference -- Custom Triton GEMV kernels achieving 152 tok/s constant-time decode on AMD/ROCm, with a fixed-window KV cache that reduced memory from 192MB to 24MB. A 32.8B MoE model (10.1B active) fits in 21GB INT4 on a single consumer GPU.
3. Scalable retrieval -- Hierarchical 3-tier memory (L1 hot cache / L2 vector search / L3 cold storage) validated to 10 million tokens at 98.9% accuracy with 41ms median latency. The hybrid BM25 + dense vector approach eliminates lost-in-the-middle degradation entirely -- middle-context accuracy is actually the highest at every scale tested.
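To make the hybrid scoring in (3) concrete, here is a minimal sketch of BM25 + dense-vector score blending. The function names, the rank_bm25 dependency, and the min-max normalization are illustrative choices, not the AEGIS source; the sketch shows only the blending step, not the L1/L2/L3 tiering.

```python
from rank_bm25 import BM25Okapi
import numpy as np

def hybrid_search(query, docs, doc_embeddings, embed_fn, alpha=0.5, top_k=5):
    """Blend lexical (BM25) and dense (cosine) scores; alpha weights the dense side."""
    # Lexical scores over whitespace-tokenized documents.
    bm25 = BM25Okapi([d.split() for d in docs])
    lex = np.array(bm25.get_scores(query.split()), dtype=np.float32)

    # Dense scores: cosine similarity between the query embedding and
    # precomputed document embeddings of shape (n_docs, dim).
    q = embed_fn(query)
    q = q / (np.linalg.norm(q) + 1e-8)
    d = doc_embeddings / (np.linalg.norm(doc_embeddings, axis=1, keepdims=True) + 1e-8)
    dense = d @ q

    # Min-max normalize each signal so the weighted sum is on a common scale.
    def norm(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    score = alpha * norm(dense) + (1 - alpha) * norm(lex)
    return np.argsort(-score)[:top_k]
```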
164,000 documents ingested across personal data sources. 189K-node knowledge graph. 759 tests passing across 156 test files. The entire system runs on a single AMD GPU.
Portfolio: https://github.com/Tuklus-Labs/aegis
Website: https://tukluslabs.com
Goal 1: Open-source the constitutional layer as a standalone module.
The current constitutional layer is tightly coupled to AEGIS. Extracting it into a pluggable module that any developer can drop into their own AI system would make alignment-by-default accessible to projects that currently ship with zero safety constraints. The six moral evaluation gates, trust tier system, and precedent store are all generalizable.
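To show what "pluggable" could look like in practice, here is a minimal sketch of a gate-chain wrapper backed by a SQLite precedent table. Every name here (ConscienceLayer, Verdict, the example gate) is illustrative; the real gates, trust-tier semantics, and schema live in the AEGIS codebase.

```python
import sqlite3
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    pathway: str      # e.g. "code_execution", "image_generation"
    description: str
    trust_tier: int   # illustrative convention: lower = more trusted requester

@dataclass
class Verdict:
    allowed: bool
    gate: str
    reason: str

# A "gate" is a callable that can veto an action.
Gate = Callable[[Action], Verdict]

class ConscienceLayer:
    """Hypothetical pluggable wrapper: every action passes every gate before it runs."""

    def __init__(self, gates: List[Gate], db_path: str = "precedents.db"):
        self.gates = gates
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS precedents "
            "(pathway TEXT, description TEXT, allowed INTEGER, gate TEXT, reason TEXT)"
        )

    def evaluate(self, action: Action) -> Verdict:
        for gate in self.gates:
            verdict = gate(action)
            self._record(action, verdict)
            if not verdict.allowed:
                return verdict            # first failing gate blocks the action
        return Verdict(True, "all", "passed every gate")

    def _record(self, action: Action, verdict: Verdict) -> None:
        # Precedent store: past rulings are kept so similar actions can be compared later.
        self.db.execute(
            "INSERT INTO precedents VALUES (?, ?, ?, ?, ?)",
            (action.pathway, action.description, int(verdict.allowed),
             verdict.gate, verdict.reason),
        )
        self.db.commit()

# Example gate: untrusted requesters may not execute code.
def code_execution_gate(action: Action) -> Verdict:
    if action.pathway == "code_execution" and action.trust_tier > 3:
        return Verdict(False, "code_execution_gate", "trust tier too low for code execution")
    return Verdict(True, "code_execution_gate", "ok")
```

The extraction work is essentially about drawing this boundary cleanly: the host system supplies actions and gates, the module supplies enforcement and precedent recording.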
Goal 2: Publish reproducible inference benchmarks on AMD/ROCm.
Efficient inference research is overwhelmingly NVIDIA-centric. Publishing documented, reproducible benchmarks with open Triton kernel code on AMD hardware gives the open-source community a second option. The MXFP4 engine and fixed-window KV cache are immediately useful to anyone running local models.
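For a sense of what "open Triton kernel code" means in practice, here is a minimal dense GEMV kernel in Triton, the per-row pattern used at decode time (y = W @ x for a single token). The actual AEGIS kernels operate on INT4/MXFP4 weights and are tuned for the 7900 XTX; this FP32 skeleton is illustrative only. Triton's AMD backend compiles the same kernel source for ROCm as well as CUDA.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def gemv_kernel(w_ptr, x_ptr, y_ptr, N, K, BLOCK_K: tl.constexpr):
    # One program instance per output row: y[row] = dot(W[row, :], x).
    row = tl.program_id(0)
    acc = tl.zeros((BLOCK_K,), dtype=tl.float32)
    for k0 in range(0, K, BLOCK_K):
        offs = k0 + tl.arange(0, BLOCK_K)
        mask = offs < K
        w = tl.load(w_ptr + row * K + offs, mask=mask, other=0.0)
        x = tl.load(x_ptr + offs, mask=mask, other=0.0)
        acc += w * x
    tl.store(y_ptr + row, tl.sum(acc, axis=0))

def gemv(W: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """W: (N, K) contiguous FP32, x: (K,) FP32, both on the GPU."""
    N, K = W.shape
    y = torch.empty(N, device=W.device, dtype=torch.float32)
    gemv_kernel[(N,)](W, x, y, N, K, BLOCK_K=256)
    return y
```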
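And here is a minimal sketch of a fixed-window KV cache, to show why decode memory stops growing with conversation length: per layer, memory scales with window x heads x head_dim x bytes for keys plus values, so cutting the retained window by 8x cuts the cache by roughly 8x, which is the shape of the 192MB to 24MB reduction reported above. The class below is illustrative, not the AEGIS implementation.

```python
import torch

class FixedWindowKVCache:
    """Keep only the most recent `window` key/value vectors per layer, so decode
    memory is constant in sequence length (illustrative sketch, not the AEGIS code)."""

    def __init__(self, n_layers, n_heads, head_dim, window=2048,
                 dtype=torch.float16, device="cuda"):
        self.window = window
        self.pos = 0  # tokens seen so far
        # Second dimension (size 2) holds the key plane and the value plane.
        self.buf = torch.zeros(n_layers, 2, n_heads, window, head_dim,
                               dtype=dtype, device=device)

    def append(self, layer, k, v):
        """k, v: (n_heads, head_dim) for the newest token."""
        if self.pos < self.window:
            self.buf[layer, :, :, self.pos] = torch.stack((k, v))
        else:
            # Evict the oldest slot. A real implementation would use a ring buffer
            # instead of this O(window) shift; the memory bound is the same.
            self.buf[layer] = torch.roll(self.buf[layer], shifts=-1, dims=2)
            self.buf[layer, :, :, -1] = torch.stack((k, v))

    def advance(self):
        """Call once per generated token, after appending to every layer."""
        self.pos += 1

    def get(self, layer):
        """Keys/values visible to attention: (2, n_heads, min(pos, window), head_dim)."""
        return self.buf[layer, :, :, :min(self.pos, self.window)]
```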
Goal 3: Validate retrieval at 100M+ tokens and publish the ECL stress test framework.
The current 10M token validation already exceeds most production needs, but pushing to 100M would prove the architecture scales to truly persistent memory. The stress test framework itself (needle-in-haystack with 6 query types, positional accuracy tracking, automated evaluation) is a contribution independent of the results.
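For context, the stress test follows the standard needle-in-a-haystack pattern with positional tracking. A minimal sketch of that kind of harness, with illustrative filler text and function names rather than the published ECL framework, looks like this; the published framework adds the six query types and automated evaluation on top of this basic sweep.

```python
def build_haystack(needle: str, n_tokens: int, depth: float,
                   filler: str = "the quick brown fox ") -> str:
    """Embed `needle` at a relative depth (0.0 = start, 1.0 = end) inside filler text."""
    words = (filler * (n_tokens // len(filler.split()) + 1)).split()[:n_tokens]
    words.insert(int(depth * len(words)), needle)
    return " ".join(words)

def run_sweep(retrieve, needle, query,
              scales=(10_000, 100_000, 1_000_000),
              depths=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Hit/miss per (context size, insertion depth) pair. A mid-depth drop is the
    classic lost-in-the-middle signature this kind of harness is meant to expose."""
    results = {}
    for n in scales:
        for depth in depths:
            haystack = build_haystack(needle, n, depth)
            # `retrieve` is the system under test: it returns the text it recalled.
            results[(n, depth)] = int(needle in retrieve(haystack, query))
    return results
```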
How: I architect systems and direct AI (Claude, primarily) to implement them. This is how all of AEGIS was built -- I design the architecture, specify the constraints, evaluate the results, and iterate. The methodology works: 1,700+ files, 759 tests, benchmark numbers that beat llama.cpp by 44% on the same hardware.
Minimum funding ($5,000):
- Form Tuklus Labs LLC (unlocks SBIR eligibility for future grants)
- Cover hosting/infrastructure for open-source release
- 2-3 months of focused development time on open-sourcing the constitutional layer
Full funding ($15,000):
- Everything above, plus:
- 6 months of focused development (I was recently laid off from T-Mobile -- this is my runway)
- Set up Google TPU Research Cloud (TRC) access for large-scale validation (already approved, not yet provisioned)
- Travel to one AI safety conference to present results
Funding goes directly to development time. No overhead, no institution, no admin staff. One person building things.
Solo independent researcher. US Army veteran (25S Satellite Communication Systems Operator-Maintainer), 9 years network operations at T-Mobile. No CS degree. I design systems and direct AI to build them.
Track record:
- Built AEGIS from scratch over 2 years
- Custom INT4 Triton inference kernels: 152 tok/s constant-time decode, 44% faster than llama.cpp on the same hardware
- 10M token retrieval validated at 98.9% accuracy with no lost-in-the-middle degradation
- Constitutional alignment layer: 0.037ms/eval, 6 moral gates, 7 trust tiers, 489 safety tests
- MoE model architecture: 32.8B total / 10.1B active parameters, fits in 21GB at INT4 on a single consumer GPU
- Built Mud-Puppy, a ROCm-first fine-tuning framework (an AMD alternative to Axolotl)
- Built an AAC app for my nonverbal autistic daughter because existing market options are slow and expensive
- 4 published whitepapers at tukluslabs.com
What I don't have: A degree, an institutional affiliation, or production ML deployment experience. What I do have is a working system with real numbers and a test suite that proves them.
Most likely failure mode: I get a job and can't dedicate full time to open-sourcing.
I'm actively job searching (applied to ~15 AI/defense roles this week). If hired, AEGIS development continues as a side project but the open-source extraction timeline stretches from months to a year+. The code still exists and the work isn't lost -- it just ships slower.
Second failure mode: The constitutional layer doesn't generalize cleanly.
It was designed for AEGIS's specific action pathways. Extracting it may reveal tight coupling that requires significant redesign. Mitigation: I'll scope the extraction to the moral evaluation gates and trust tier system first (most generalizable), and leave the action-specific pathways as reference implementations.
Third failure mode: 100M token validation hits architectural limits.
The 10M test showed accuracy improving with scale (92% at 100K to 98.9% at 10M), but there's no guarantee that trend continues. If it doesn't, the 10M validation and stress test framework are still publishable contributions.
If the project fails entirely: The existing codebase, benchmarks, and test results remain publicly available. Nothing is lost -- the ceiling just stays where it is instead of rising.
Cash grants: $0. This would be my first.
In-kind / approved:
- AMD AI Developer Program -- cloud compute credits (used for ROCm development and testing)
- Google TPU Research Cloud -- approved for TPU access (not yet provisioned)
Pending:
- Long-Term Future Fund (LTFF) -- application submitted, awaiting decision
- Survival and Flourishing Fund (SFF) -- application submitted same cycle, awaiting decision