Manifund
Francisco Antonio Da Costa Barroso

@Meryylle

Independent AI researcher from Brazil building novel open-source architectures. Creator of Lunaris MoC (Mixture of Collaboration), a new expert-communication paradigm for sparse language models. 17 years old, entirely self-funded, shipping code daily.

https://www.linkedin.com/in/francisco-antonio-0434aa284/
$0 total balance
$0 charity balance
$0 cash balance

$0 in pending offers

About Me

I'm Francisco Antonio, a 17-year-old independent AI researcher based in Brazil. I design and implement novel neural network architectures from scratch — not fine-tunes, not wrappers, but new computational mechanisms for how language models process information.

My current project is Lunaris MoC (Mixture of Collaboration), an open-source architecture that replaces the isolated expert paradigm in Mixture-of-Experts models with a mediator-based communication system. Instead of experts processing tokens independently, MoC introduces an O(K) mediator hub that enables bidirectional information flow between experts, combined with dual-axis adaptive compute that learns per-token how much reasoning and collaboration each input needs.
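To make the idea concrete, here is a minimal sketch of a mediator-based expert layer. This is not the Lunaris MoC implementation (see the repository for that); it is an illustrative reconstruction of the description above, with all class names, layer shapes, and gating details assumed. It shows the core pattern: K experts pooled into one mediator state and a broadcast back, so inter-expert communication costs O(K) rather than O(K^2) for all-pairs exchange. The dual-axis adaptive-compute mechanism is omitted for brevity.

```python
import torch
import torch.nn as nn


class MediatorMoCLayer(nn.Module):
    """Illustrative mediator-hub expert layer (NOT the official Lunaris MoC code).

    Experts process the input, their outputs are pooled into a single
    mediator state (one pass over K experts), and the mediator's message
    is broadcast back so each expert's output is conditioned on the
    others: bidirectional flow in O(K) instead of O(K^2).
    """

    def __init__(self, d_model: int, num_experts: int, d_ff: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )
        self.gate = nn.Linear(d_model, num_experts)   # per-token routing weights
        self.to_mediator = nn.Linear(d_model, d_model)    # experts -> hub
        self.from_mediator = nn.Linear(d_model, d_model)  # hub -> experts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        weights = torch.softmax(self.gate(x), dim=-1)                # (B, S, K)
        outs = torch.stack([e(x) for e in self.experts], dim=-2)     # (B, S, K, D)
        # Upward pass: weighted pool of expert outputs into the mediator hub.
        hub = self.to_mediator((weights.unsqueeze(-1) * outs).sum(dim=-2))
        # Downward pass: broadcast the mediator's message to every expert.
        outs = outs + self.from_mediator(hub).unsqueeze(-2)
        # Combine communicated expert outputs with the routing weights.
        return (weights.unsqueeze(-1) * outs).sum(dim=-2)            # (B, S, D)


# Usage: one forward pass at toy scale.
layer = MediatorMoCLayer(d_model=16, num_experts=4, d_ff=32)
y = layer(torch.randn(2, 5, 16))
print(y.shape)  # torch.Size([2, 5, 16])
```

A dense all-pairs scheme would instead exchange messages between every expert pair; routing everything through one hub is what keeps the communication cost linear in the number of experts.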

What I've built so far — with approximately $8.50 in cloud compute:

A complete architecture implementation (1,200 lines of production-quality PyTorch)

A full training pipeline with routing diagnostics, adaptive compute tracking, and wandb integration

Initial experiments comparing MoC against MoE and dense baselines at 64M parameters, in which MoC achieves 6.2% better perplexity than MoE at an identical parameter count

A curated 150B-token pretraining dataset (FineWeb-Edu + FineMath + Stack-Edu)

An SFT dataset under active development (50K+ examples)

Everything is open source: https://github.com/Auren-Research/lunaris

My constraints are purely financial. I have the technical skills, the architecture, the data pipeline, and the experimental methodology — I just need compute to validate at scale. My immediate goal is running controlled MoC vs MoE comparisons at 1-2B parameters, which requires approximately $1,500-3,000 in GPU hours.

I'm currently awaiting responses from NLnet, the Thiel Fellowship, and Emergent Ventures. I have no institutional affiliation, no advisor, and no funding — just a laptop, an RX 6600, and a lot of determination.

Projects

Lunaris MoC: Open-Source Mixture of Collaboration Architecture for LLMs (pending grant agreement signature)