Train great open-source sparse autoencoders

Technical AI safety

Tom McGrath

Active grant
$4,025 raised of a $10,000 funding goal


Project summary

tl;dr: determine the best currently available training setup for sparse autoencoders (SAEs) and disseminate this knowledge. Train SAEs for steadily larger models (starting with GPT-2-small for MATS scholars), then scale up as budget and time allow.

Project proposal doc with more details: https://docs.google.com/document/d/15X28EEHo7pM2CYkfZqk05A0MZi4ImvTSSVaC9wtFLyI/edit?usp=sharing
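
For readers unfamiliar with the setup: an SAE here is an overcomplete autoencoder trained on a model's internal activations, with a sparsity penalty so that each activation is explained by a few interpretable features. Below is a minimal PyTorch sketch of the standard recipe, assuming residual-stream activations of width d_model; the class and parameter names are illustrative, not SAELens's API or this project's exact setup.

```python
# Minimal sparse autoencoder sketch. Names and initialization scales are
# illustrative assumptions, not the project's actual training configuration.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor):
        # Encode: sparse, non-negative feature activations.
        f = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: reconstruct the original activation from the features.
        x_hat = f @ self.W_dec + self.b_dec
        return x_hat, f

def sae_loss(x, x_hat, f, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that enforces sparsity.
    mse = (x - x_hat).pow(2).sum(dim=-1).mean()
    l1 = f.abs().sum(dim=-1).mean()
    return mse + l1_coeff * l1
```

The hyperparameters the project compares live in exactly these choices: the expansion factor d_sae / d_model, the sparsity coefficient, the activation function, and the initialization scheme.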

What are this project's goals and how will you achieve them?

  1. Determine good hyperparameters for sparse autoencoders on realistic LLMs by running a comprehensive architecture and hyperparameter comparison (a sketch of such a sweep follows this list).

  2. Use this knowledge to train a suite of high-quality SAEs for GPT-2-small, then scale up further as resources allow, targeting ~1B and ~8B models in sequence.

  3. Disseminate knowledge on SAE training through a technical report.
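
As promised in goal 1, here is a rough sketch of what an architecture and hyperparameter comparison involves. The grid values and the train_and_eval_sae stub are hypothetical placeholders, not the project's actual settings:

```python
# Hypothetical sweep for comparing SAE architectures and hyperparameters.
from itertools import product

def train_and_eval_sae(cfg: dict) -> dict:
    # Placeholder: train an SAE with this config on cached activations and
    # return evaluation metrics (reconstruction MSE, L0, dead-feature count).
    return {"mse": float("nan"), "l0": float("nan"), "dead_features": -1}

grid = {
    "architecture": ["relu", "gated", "topk"],  # candidate SAE variants
    "expansion_factor": [8, 16, 32],            # d_sae / d_model
    "l1_coeff": [1e-4, 5e-4, 1e-3],             # sparsity penalty strength
    "lr": [1e-4, 3e-4],
}

results = []
for values in product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    results.append((cfg, train_and_eval_sae(cfg)))
```

Even this small illustrative grid is 54 training runs, which is why the budget below goes almost entirely to compute.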

How will this funding be used?

Compute!

Who is on your team and what's your track record on similar projects?

Lead: Tom McGrath - former DeepMind interpretability researcher.

Collaborator: Joseph Bloom - maintainer of SAELens and contributor to Neuronpedia.

What are the most likely causes and outcomes if this project fails? (premortem)

The most likely failure mode is failing to replicate results obtained by major labs, leading to low SAE performance.

What other funding are you or your project getting?

Tom McGrath: none, currently self-funding

My collaborators Joseph Bloom and Johnny Lin are funded to work on Neuronpedia.

Donations

Lun donated $25 (2024-08-29)

Neel Nanda donated $4,000 (2024-05-10)