AI-Powered Knowledge Management System for Alignment Research

Science & technology · Technical AI safety

Quentin Feuillade--Montixi

Proposal · Grant · Closes November 14th, 2025

$50 raised · $14,000 minimum funding · $20,000 funding goal


Proposal Summary

This grant proposal focuses on the essentials; for full details, see my LessWrong post: Thinking Partners: Building AI-Powered Knowledge Management Systems.

In 1945, Vannevar Bush identified a fundamental problem: information systems work like filing cabinets, but minds work by association. Despite 80 years of attempts to solve this, we still waste hours reconstructing context and lose connections between ideas.

Personal knowledge management systems can be powerful thinking tools. Niklas Luhmann built a paper-based system that became a "communication partner," revealing unexpected connections between ideas separated by years. He published over 70 books and hundreds of articles, but maintaining the system required decades of disciplined manual work.

I'm building tools that solve Bush's challenge, using AI to automate what he couldn't imagine: creating systems that work like memory, not like filing cabinets. The result: scattered notes, conversations, and ideas become an active thinking partner that processes knowledge continuously and constructs intelligent context for any task.

I'm asking for $14-20k USD over 4 months to build an MVP with AI safety researchers as design partners, before validating with broader knowledge-worker communities.

Solo founder: Quentin Feuillade--Montixi, currently in Paris (relocating to San Francisco for this project)


About the project

The problem:

Knowledge work has four critical bottlenecks:

Capturing: We forget 50% of new information within a day, 90% within a week. Insights from conversations and connections noticed while reading disappear before we document them. Manual capture systems require us to stop what we are doing to take notes, breaking flow and slowing thinking.

Retrieving: Information and insights are scattered across folders, applications, and documents. Finding what we need wastes hours.

Processing: Complex thinking requires synthesizing dozens of sources and tracking how ideas evolved across months and contexts. We can't bring enough knowledge together simultaneously to think at the complexity required for efficient and innovative research.

Sharing: Every blog post, grant application, or email requires manually reconstructing and repackaging the same information over and over again.

To increase our individual capabilities, we need tools that address these bottlenecks without requiring decades of manual discipline. Current solutions fail: Manual PKM systems like Notion or Obsidian require unsustainable discipline. AI assistants have poor memory and degrade with long context (20-40% performance drops). RAG systems dump raw, unorganized documents into context, making them nearly useless for complex thinking.

What I'm building:

A system with two components:

Memory system: Continuously processes incoming information - summarizing content, linking concepts across contexts, surfacing unexpected connections, building hierarchies, and reorganizing as understanding evolves. It leverages LLMs to automate what Bush envisioned but couldn't implement: associative trails that work like memory.
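
To make the processing loop concrete, here is a minimal sketch of what ingestion could look like, assuming an OpenAI-style client for summaries and embeddings. The names (MemoryNode, ingest, link_threshold) are illustrative, not the project's actual API:

```python
# Minimal sketch of the memory-system ingestion loop described above.
# Assumes an OpenAI-style client; all names here are illustrative.
from dataclasses import dataclass, field

import numpy as np
from openai import OpenAI

client = OpenAI()

@dataclass
class MemoryNode:
    source: str
    text: str
    summary: str = ""
    embedding: np.ndarray | None = None
    links: list[str] = field(default_factory=list)  # sources of related nodes

memory: list[MemoryNode] = []

def ingest(source: str, text: str, link_threshold: float = 0.8) -> MemoryNode:
    """Summarize new content, embed it, and link it to related nodes."""
    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Summarize in 2-3 sentences:\n\n{text}"}],
    ).choices[0].message.content
    emb = np.array(client.embeddings.create(
        model="text-embedding-3-small", input=text).data[0].embedding)
    node = MemoryNode(source=source, text=text, summary=summary, embedding=emb)
    # Surface associative links by semantic similarity, not keyword matches.
    for other in memory:
        sim = float(emb @ other.embedding /
                    (np.linalg.norm(emb) * np.linalg.norm(other.embedding)))
        if sim >= link_threshold:
            node.links.append(other.source)
            other.links.append(node.source)
    memory.append(node)
    return node
```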

Interfaces: Capture everything you read, write, discuss, and work on. They connect to existing tools (Google Drive, Notion, Slack, email, voice notes) and feed data to the memory system. They also construct intelligent context for AI conversations, complex thinking, writing, and other tasks by following associative trails rather than dumping static documents.
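
Continuing the illustrative sketch above, "following associative trails" might look like a breadth-first walk over the link graph from the best-matching node, packing summaries into a budget rather than dumping whole documents:

```python
# Sketch of trail-following context construction, continuing the
# MemoryNode sketch above (all names remain illustrative).
from collections import deque

def build_context(query: str, max_chars: int = 6000) -> str:
    """Assemble context for `query` by following associative links."""
    q = np.array(client.embeddings.create(
        model="text-embedding-3-small", input=query).data[0].embedding)
    by_source = {n.source: n for n in memory}
    # Start from the node most similar to the query (assumes memory is
    # non-empty; OpenAI embeddings are unit-length, so dot product = cosine).
    start = max(memory, key=lambda n: float(q @ n.embedding))
    seen, queue, parts, used = {start.source}, deque([start]), [], 0
    while queue and used < max_chars:
        node = queue.popleft()
        part = f"[{node.source}] {node.summary}"
        parts.append(part)
        used += len(part)
        for src in node.links:  # follow the trail outward one hop at a time
            if src not in seen:
                seen.add(src)
                queue.append(by_source[src])
    return "\n".join(parts)
```

In a real system the budget would be token-based and traversal weighted by relevance and recency, but the shape of the idea is the same: context assembled by association, not by document dump.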

Why this matters for AI safety:

AI alignment requires extreme conceptual complexity while racing against capability development. Researchers can't afford to lose hours reconstructing context, hunting through notes, or manually crafting AI prompts. They can't easily synthesize across dozens of sources, track how arguments evolved through months of discussions, or bring enough context together simultaneously to do the complex thinking alignment requires. We need cognitive augmentation to accelerate and improve the quality of AI safety research.

Longer-term possibilities:

If the basic system works, the architecture could enable interesting extensions: avatars grounded in documented expertise, coordination networks for research teams, and infrastructure where humans remain architecturally integral as multi-agent AI systems develop. If we can't align AGI in time, having infrastructure where humans stay in the loop might matter (see this section).


What are this project's goals and how will I achieve them?

Goal: Within 4 months, build an MVP that 50+ knowledge workers use weekly because it genuinely saves them time and improves their work.

Month 1: Design partners first

  • Recruit 5-10 AI safety researchers as design partners before writing significant code

  • Deep workflow interviews: how they process papers, synthesize arguments, lose context, use AI assistants

  • Map common research patterns and specific pain points

  • Start building infrastructure

  • Plan Bay Area move

Months 2-3: Core MVP with weekly validation

  • Memory system: automated ingestion (Google Drive, Notion, document uploads), LLM-powered organization (summarization, concept linking, hierarchy building)

  • Interfaces: MCP integration with ChatGPT/Claude for intelligent prompt construction (see the sketch after this list)

  • Weekly testing sessions with design partners, iterate based on what actually helps
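
A hedged sketch of that MCP integration, using the official `mcp` Python SDK's FastMCP server to expose the build_context sketch above as a tool; the server and tool names are illustrative, not the project's real ones:

```python
# Sketch of the MCP integration: exposes the build_context function from
# the previous sketch (assumed defined in this module) as a tool that an
# MCP-capable client such as Claude Desktop can call.
from mcp.server.fastmcp import FastMCP

server = FastMCP("weavemind-memory")

@server.tool()
def construct_context(query: str, max_chars: int = 6000) -> str:
    """Build an associatively assembled context bundle for `query`."""
    return build_context(query, max_chars=max_chars)

if __name__ == "__main__":
    server.run()  # stdio transport by default; register in the client config
```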

Month 4: Validation and honest assessment

  • Launch to broader communities: AI safety researchers, AI governance, policy research, academic researchers

  • Track actual weekly usage and retention (not just signups)

  • Structured interviews: What became easier? How did AI conversation quality change? What connections did you discover?

  • Document detailed learnings for community value regardless of outcome

Success means three concrete signals:

  1. Usage intensity: 50+ knowledge workers using weekly, 20+ using daily

  2. Genuine value: Users report saving 2+ hours/week on context reconstruction, discovering connections they would have missed, or working on more complex synthesis than before

  3. Organic growth: 30%+ of new users from word-of-mouth referrals

If I see all three: Apply to accelerators (YC, Entrepreneur First, SeldonLab) with validated metrics.

If I see one or two: Extend runway while iterating.

If I see none: Pivot hard, or stop. Document learnings publicly on LessWrong, open-source useful components.

Four months is enough to know if this actually helps people.


How will this funding be used?

Option 1 - San Francisco (basic): $14,000 (minimum to stay sustainable)

  • Living expenses: $12,000 ($3,000/month)

  • Infrastructure/development: $2,000

This keeps me operational in SF but with tight living constraints and less buffer for unexpected costs.

Option 2 - San Francisco (comfortable): $20,000 (optimal for project success)

  • Living expenses: $16,000 ($4,000/month)

  • Travel/visa: $2,000

  • Infrastructure/development: $2,000

Note: I'm planning to relocate to SF regardless of funding level to be embedded in the AI community. At $14k, I'll make it work but with a more precarious living situation and less focus buffer. The $6k difference buys sustainable runway and removes financial stress that could impact project execution.

Both options allow me to work embedded in the community as solo founder until there's clear evidence this helps people.


Who is on the team and what's the track record?

Solo founder. Here's why I can execute:

I invented state-of-the-art jailbreaking techniques and intuitively understand LLM behaviors. I created Persona Modulation, the technique used in the NeurIPS 2023 SoLaR workshop paper "Scalable and Transferable Black-Box Jailbreaks for Language Models via Persona Modulation". I spent a year developing an improvement that automatically broke all SOTA models on every tested behavior in under 10 minutes, presented at the PARIS AI Summit, Day 1. I was a SERI MATS 3 scholar researching LLM cognition and working on cyborgism, which resulted in a new research direction called LLM Ethology. I worked as a contractor for METR (formerly ARC Evals) evaluating GPT-4 and other models before public release, I'm an Anthropic HackerOne red teamer, and I won GraySwan's jailbreak championship with a single prompt. This matters for building knowledge tools because intelligent prompt engineering that avoids context rot requires understanding exactly how LLMs fail: when context length increases, models degrade in specific, predictable ways.

I've shipped production AI systems that companies pay for. I built the PRISM Eval platform from the ground up: over 60k lines of production code (frontend, backend, infrastructure, and prompting) implementing autonomous jailbreak systems. Amazon AGI and other major companies under NDA paid for it, and the platform is still running in production.

I can build complex systems fast. At 42 school, I ranked 2nd out of over 300 students in my cohort on pure software engineering skills while actively participating in 42AI (vice president in 2021-2022, president in 2022-2023), teaching ML to 400+ students. Multiple hackathon wins: Apart Research AI Safety, LVMH OpenDare, Google Cloud Climate. I also worked part-time for Outmind as their only NLP engineer for a year and a half during my studies, so I know how to design systems that connect to multiple knowledge sources.

I live with this problem daily. I'm an AI safety researcher. This isn't a market opportunity I discovered through research; it's friction that actively slows down my own work. I feel the pain of reconstructing context across conversations, synthesizing evolving arguments from literature, and losing connections between ideas. I'll be my first user, though I'm committed to building with design partners to avoid anchoring purely on my own use cases.

I can build the MVP solo in 3 months.


What are the most likely causes and outcomes if this project fails?

Four main failure modes:

Pain point isn't acute enough: Low retention after initial trial

Mitigation: Design partners validate pain before building. If 5-10 AI safety researchers don't feel it acutely, stop or pivot.

Response: Pivot to knowledge workers with clearer pain (consultants, policy analysts), or stop if the primitive doesn't solve real problems.

Technical execution harder than expected: Context construction doesn't avoid degradation, or offers no improvement over manual prompting

Mitigation: My LLM expertise reduces this risk. I'll know within 2 months.

Response: Simplify scope to pure memory organization (without AI integration), or stop if core premise fails.

Adoption friction too high: Completed setup but no usage

Mitigation: Start with single integration (Google Drive). Expand based on what design partners actually use.

Response: Pivot to B2B where IT handles setup, or stop if UX fundamentally doesn't work.

Privacy concerns block adoption: Design partners express concerns about sensitive research

Mitigation: Local-first architecture option.

Response: Focus on non-sensitive use cases or stop if this is a fundamental barrier.


How much money have I raised in the last 12 months and from where?

$0 for WeaveMind. First fundraising attempt. The project is brand new.

PRISM Eval was revenue-funded through customer payments. Self-funded through consulting and savings since then. No grants or investment in the last 12 months.

Contact

Email: contact@weavemind.ai | Website: https://weavemind.ai

I would be happy to discuss technical details, share research background, or answer questions about the approach. I am interested in feedback even if you're not looking to fund this work.


Donation Offers

Esben Kran: $50 (10 days ago)