Apollo Research
over 1 year ago
Thanks for the compliments. We'd like to quickly add some context to the reservations.
1. We understand that we're more expensive on a per-person basis than independent researchers. However, on top of salaries, our costs also include compute, pensions, insurance, travel costs, visas, office space, etc., which are usually not factored into applications from independent researchers. Furthermore, as discussed in the response to Austin Chen's comment, we're currently significantly cheaper than other AI safety labs. For example, we budget ~$200k/year in total cost per person, while many technical positions at Bay Area AI safety orgs have >$200k/year starting salaries (not yet including pensions, insurance, compute, etc.).
2. Given that we don't train any models, it would require a lot of effort for us to pivot to becoming an AI accelerator. Also, as you can see in our sharing and security policy (https://www.apolloresearch.ai/blog/security), we're very aware of the potential to accidentally accelerate AI research and thus engage in various security measures such as differential publishing.
3. Obviously, our spending target is not infinite. The reason we are asking for more money at the moment is that we have many obvious ways to use it effectively right now--primarily by hiring more researchers and engineers. As suggested in another comment, a short runway both makes planning harder and may make it impossible to hire very senior staff, even when they are interested in working for us.
Apollo Research
over 1 year ago
Some people have been concerned about us scaling too fast, and we think this is good advice in general. However, a couple of considerations make us think that, in our particular situation, the risk of growing too slowly is bigger than the risk of growing too fast.
1. We have an actionable interpretability and evals agenda. In both cases, we're only limited by the number of experiments we can run. If we were bottlenecked by ideas we wouldn't try to scale.
2. We already form a coherent team. Many people in the team have worked together before and it currently feels like our team dynamic is very good. Otherwise, we would focus more on building cohesion first.
3. We have the management capacity. Multiple people in the team have supervised others before. This really doesn't feel like a problem right now.
4. The talent is amazing. There are some really great people in the AI safety space. They are motivated, capable, and nice to work with. It's really not that hard to onboard such candidates.
5. Time is ticking. AI alignment isn't gonna solve itself and there will be lots of evals organizations popping up that don't care about catastrophic risk. Making sure that there are multiple leading organizations that care about these risks seems important.
6. We can always stop scaling for a while. If we realize after the first hiring round that we're scaling too fast, we can always postpone the next round and focus on consolidation. On the other hand, if we realize that we need more people, it's hard to hire fast. Most rounds take 3 months from job ad to onboarding.
Apollo Research
over 1 year ago
Thanks for the high praise; we will try to live up to that expectation. These are totally reasonable questions, and we're happy to answer them in more detail.
1. That mostly depends on your definition of "a lot". We have received enough funding to work for ~one year with the current team but we don't have enough funding to hire the number of people we think we can productively onboard. We have checked with our biggest funders and are allowed to disclose the amounts (which is why we didn't respond immediately). We have received ~$1.5M from OpenPhil and ~$500k from SFF speculation grants (the round has not closed yet, so we don't know how much we'll receive in total). We have a small number of other funders that add up to <$200k in addition to the two big institutional funders.
While this is a substantial amount of money, running an AI safety research lab is unfortunately quite expensive, with salaries and compute being the biggest cost drivers. For comparison, Redwood started with $10M (and a $10M top-up after one year), and another alignment non-profit (anonymous on OpenPhil's website but not hard to deduce) started with $14.5M. Both of these organizations aim to pay competitively with FAANG salaries, which are ~$200-500k for ML talent before equity.
2. We mostly agree. We hope to be fully funded by larger institutional funders or aligned investors in the near future. However, right now, we could easily onboard multiple new people but just don't have the funds to do so. Therefore, Manifund funding would directly make a big difference for us. Furthermore, the fact that people are enthusiastic about Apollo Research has meant that multiple funders expected someone else to have already covered our entire funding gap, or wanted to give less on the expectation that others would give more. Lastly, we're specifically asking for 3 positions because these have not been covered by other funders and we want to fill them as soon as possible.
3. We estimate that we could productively spend $5-7M in the first year, $10-20M in the second, and even more later. Furthermore, having more than 10 months of runway makes planning much easier and allows us to hire people with more experience, who are often less willing to join an organization with a short runway. We don't intend to compete with Anthropic, OpenAI, and DeepMind on salaries, but it is a drawback if our salaries are 2-10x lower (which they currently are). For reference, a SWE at Anthropic London earns GBP 250k-420k before equity. Therefore, we would like to raise salaries to retain and attract talent in the long run (maybe so that they are only 1.5-5x lower).
Regarding talent: We are well connected in the AI safety scene and think that there is a lot of talent around that is currently looking for jobs but unable to find any. We think it is really bad for the ecosystem and for AI alignment that this talent is unable to fully contribute or has to jump from independent grant to independent grant without stability or the ability to plan (this also heavily selects against people with families, who are often more experienced). Our agenda is fairly straightforward and we have the management capacity to onboard new people, so we would really like to hire more people fairly soon. We think it is likely that we will be 15-30 people by next June if we have the funds to do so.
We're happy to answer more questions or clarify any of the above if needed!
| For | Date | Type | Amount |
|---|---|---|---|
| Manifund Bank | 6 months ago | withdraw | 4000 |
| Apollo Research: Scale up interpretability & behavioral model evals research | 6 months ago | project donation | +4000 |
| Manifund Bank | 12 months ago | withdraw | 15160 |
| Apollo Research: Scale up interpretability & behavioral model evals research | 12 months ago | project donation | +15000 |
| Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | +160 |
| Manifund Bank | over 1 year ago | withdraw | 310249 |
| Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | +250 |
| Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | +9999 |
| Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | +100000 |
| Apollo Research: Scale up interpretability & behavioral model evals research | over 1 year ago | project donation | +200000 |