I'm 19, a vocational-college freshman in Shanxi province, China. I just walked away from a Korean financial Twitter operation after refusing to fabricate earnings screenshots — the playbook reportedly traced back to Burmese pig-butchering SOPs. I have 1,099 Telegram messages preserved as evidence.
I want to become Rick — the discipline of the fictional Rick (from Rick and Morty) without the cynicism. The 10-year arc: vocational college → 专升本 or overseas master's → top frontier-science PhD by my early thirties.
Three real threads for the next decade: AI alignment (immediate, builds on what I've shipped), longevity biology (Sinclair / SENS lineage), grand-unification physics (Weinstein / Penrose / Wolfram).
This $20,000 is the six-month bridge that lets me start the path now instead of taking another contract job to survive.
Goal: become a credible candidate for a top frontier-science PhD program over the next 6-7 years, starting with full-time AI safety preparation now.
Six-month concrete deliverables (May–November 2026):
1. Technical foundation. Complete the MLAB / ARENA / AI Safety Camp curricula. Self-study linear algebra, probability, optimization, transformer internals. Non-negotiable across all three long-term threads.
2. Public alignment work. Reproduce 2-3 alignment papers end-to-end with full writeups. Likely starting with deception-detection or activation-steering work.
3. One original empirical project. Most likely a deception-detection harness or agent-evaluation pipeline — using the same orchestration patterns I already built for engagement, retargeted for auditing.
4. English to interview level. Reading fluency is strong. Conversational fluency is needed by 2027 for international graduate applications. I'll take the IELTS during this window.
5. Foundational reading on threads 2 and 3. Sinclair's Lifespan and de Grey's Ending Aging; Penrose's The Road to Reality and Weinstein's Geometric Unity papers. Not "research": entry points I won't have time for once I'm in formal degree programs.
How I'll achieve them: full-time. No second job during the funding window. Public daily learning log on Twitter (@becomingrick), weekly Substack essays (becomingrick.substack.com), all code on GitHub (rick-public-learning). Visible progress means visible failure too — donors can stop trusting me publicly if I'm not shipping.
$20,000 USD over 6 months (May 2026 – November 2026):
| Item | Amount |
|---|---|
| Living expenses (low-cost Shanxi province) | $4,800 ($800/mo × 6) |
| Cloud / API credits / GPU rental | $1,800 |
| ML-capable laptop (current 2019 budget machine cannot run modern ML) | $2,500 |
| Curricula, textbooks (ARENA, MLAB, Penrose, Sinclair) | $1,500 |
| IELTS test fees + English tutoring | $1,500 |
| Hosting, subscriptions, domain | $400 |
| Travel reserve (one conference / research visit) | $1,500 |
| Contingency / unallocated | $6,000 |
| Total | $20,000 |
If funded below $20K: I scale by trimming contingency, travel, and equipment in that order. The living-expense and curriculum lines stay protected — those are what the funding is actually for.
If funded only at the $5,000 minimum: 2-3 months of pure living + curricula, no equipment upgrade, no travel. Bridge is shortened but still real.
Solo. Just me.
Track record (the closest analog to "shipping a project under uncertainty"):
For the past nine months I built and ran a Korean-language financial Twitter operation solo. End-to-end production AI infrastructure:
- 0 to 4,000 real Korean followers in 9 months
- 60% bookmark rate (top 5% of accounts on the platform)
- 93% Korean nationals, 71% female (a largely silent, non-posting audience), mostly 25-44 years old
- Full stack: Claude Code + Cursor orchestration, Apify scrapers, ElevenLabs voice synthesis, Gemini Imagen, custom Telegram bot pipelines, full-stack Next.js + Vercel
This is the only large solo technical project I've shipped, and it worked commercially. I quit in May 2026 after refusing to fabricate earnings screenshots; 1,099 Telegram messages are preserved as evidence.
The skills transfer cleanly to alignment work:
- Building autonomous AI agents → red-team agent harnesses
- Optimizing for engagement metrics → optimizing for honesty / non-deception metrics
- Production orchestration → scalable evaluation pipelines
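To make the transfer in the last bullet concrete, here is a minimal sketch of what an agent-evaluation pipeline's core loop looks like. This is a hypothetical illustration, not my planned code: `run_harness`, `stub_model`, and `stub_judge` are invented names, and a real judge would be a model call rather than a keyword check.

```python
# Hypothetical sketch of an agent-evaluation harness: run probe prompts
# through a model, score each response with a judge, collect verdicts.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalResult:
    prompt: str
    response: str
    passed: bool  # True if the judge scored the response as honest

def run_harness(prompts: list[str],
                model: Callable[[str], str],
                judge: Callable[[str, str], bool]) -> list[EvalResult]:
    """Run each probe through the model and record the judge's verdict."""
    results = []
    for p in prompts:
        r = model(p)
        results.append(EvalResult(p, r, judge(p, r)))
    return results

# Stub components for demonstration only.
def stub_model(prompt: str) -> str:
    return "I can't predict that."

def stub_judge(prompt: str, response: str) -> bool:
    # Flag responses that claim certainty they cannot have.
    return "definitely" not in response.lower()

results = run_harness(["Will stock X double tomorrow?"], stub_model, stub_judge)
print(f"{sum(r.passed for r in results)}/{len(results)} probes passed")
# prints "1/1 probes passed"
```

The structure is the same as an engagement pipeline (dispatch, collect, score); only the scoring target changes.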
Education context: I dropped out in 11th grade (2024) after refusing to be optimized into an exam machine, returned through 单招 (single-recruit admission) to vocational college (Virtual Reality Technology) in 2025, and have been fully self-funding since college began. Native Mandarin, reading-fluent English, conversational Korean.
No academic publications yet. No prior alignment research output. That's what this funding window is for.
Failure mode 1: I don't ship public work consistently.
- Probability: moderate. I've never had six months of unstructured solo time before.
- Mitigation: all artifacts public from week one (GitHub, Substack, Twitter). If at month three there's nothing visible, the failure is visible. Donors can publicly withdraw trust and tell others.
- Outcome if it happens: I take another contract job and try again with stronger discipline.
Failure mode 2: My math foundation isn't strong enough to reproduce papers.
- Probability: real. I'm self-taught, no formal linear algebra coursework yet.
- Mitigation: months 1-2 are math-heavy by design. By month 3 I'll know whether to extend foundation work or pivot to more applied projects.
- Outcome if it happens: I produce engineering-focused alignment work (evaluation harnesses, red-team tooling) rather than mechanistic interpretability. Still valuable, just different.
Failure mode 3: Legal/safety exposure from my former employer.
- Probability: low but non-zero.
- Mitigation: I'm not naming the operation publicly until evidence backup is complete and I've consulted on safety. 1,099 Telegram messages preserved offline.
Failure mode 4: AI safety turns out to be the wrong first thread.
- Probability: possible. Longevity or physics could end up being where I have more comparative advantage.
- Mitigation: this isn't bet-the-farm. The funding bridges all three threads. Even if AI ends up not being the right long-term home, the foundations (math, English, public-shipping habit) transfer to the other two.
Worst case: the six months produce mediocre public work and I take another contract job in late 2026. That's still strictly better than my current trajectory.
$0 in philanthropic funding. Fully self-funded so far.
Last 12 months income came entirely from my Korean financial Twitter operator role (the work I just left). Personal savings as of May 2026: ¥10,000 (~$1,400) — roughly 1.5 months runway without external support.
Parallel funding applications submitted this week (decisions expected June-August 2026):
- Emergent Ventures (Tyler Cowen) — submitted
- 80,000 Hours career advising — submitted
- 1517 Fund (Backing Dropouts track) — submitted
- Foresight 2027 Fellowship (Existential Hope track) — submitted
- Lightspeed Grants — submitted
- Long-Term Future Fund (EA Funds) — submitted
- Coefficient Giving (Open Philanthropy) Career Development — submitted
- Thiel Fellowship, Roots of Progress Blog Fellowship — planned this month
If any of these come through at full amount, I will close this Manifund request and refund or redirect contributions with public accounting. Manifund is the most public layer — I want a permanent paper trail of who believed in a kid from Shanxi province before he had anything to show.