Apart is an AI safety research organization incubating global talent to mitigate existential risks from AI. Our weekend-long research sprints and the 3-6 month Apart Lab Fellowship offer a remote-first, globally accessible opportunity to engage in object-level AI safety research. In addition to supporting 4 fellows as lead authors on top publications, Apart Lab has enabled two fellows to secure positions at Oxford and another to join ARC Evals (now METR) as a contractor. With our sprints held online and in over 50 locations, including underserved areas, we are committed to diverse, inclusive AI safety research that directly addresses existential risks. See also https://apartresearch.com/
With this grant, we want to sustain and grow our efforts by engaging 1,000 sprint participants and inviting 30 additional lab fellows within the next 6 months.
Expand AI Safety Research Sprints: We aim to double the number of participants in our AI safety research sprints by engaging 1,000 sprint participants in the first half of 2024 while maintaining participant quality. Each sprint will explore promising agendas in technical AI safety and AI governance for existential risk reduction. We plan to achieve this through strategic partnerships with aligned organizations, improved scheduling, outreach, and high-quality event infrastructure and mentorship.
Improve the Apart Lab Fellowship: We intend to increase the capacity and quality of our mentorship for 30 new Apart Lab fellows in our Spring 2024 cohorts, focusing on impactful AI safety projects in AI governance, AI security, interpretability and model evaluations. The objective is to help our fellows generate research that is directed at real-world impact, beneficial for governance and alignment, in addition to academic publications. This effort crucially includes facilitating the transition of skilled fellows into full-time work in AI safety roles, thereby strengthening their long-term contributions to the field.
Our 2023 track record indicates that these goals are feasible: In 2023, we hosted 17 research sprints with over 1,000 participants, generating 170+ research projects. 10 of our 24 Apart Lab fellows have already completed their research, resulting in three publications accepted at top-tier academic ML venues (NeurIPS, ACL, ICLR) and seven currently under review; two of our fellows have secured research positions at Oxford University. We are also working with the Cooperative AI Foundation on a report on multi-agent risks from advanced AI and are co-organizing the Workshop on the Scaling Behavior of Large Language Models, accepted at EACL 2024.
Apart Research is led by Esben Kran (Director), Fazl Barez (Co-director & Research Director), and Jason Hoelscher-Obermaier (Research Lead). Our team has close ties and affiliations with many renowned projects & institutions, such as FLI, CSER, the Alan Turing Institute, the Torr Vision Group at the University of Oxford, and PIBBSS. Furthermore, our team members have previously worked for ARC Evals (now METR), Amazon, Aarhus University, the University of Edinburgh, the University of Vienna, and 3 ML startups.
See also: https://apartresearch.com/about
https://www.linkedin.com/company/apartresearch/
We have prepared a detailed budget and realistically rely on multiple sources of funding to achieve our goals. Our current ask of $59k USD is derived from our worst-case scenario. We have also included information on how the funding would be used in a medium- and best-case scenario.

Worst case - Our current funding applications (Manifund, LTFF & EA FF) don't yield significant results and don't cover salary costs: In this scenario, we would likely rely on the track records of our leadership team to apply for grants individually, and we would use this money to cover operational, administrative, and research costs ($59k USD total).

Medium case - Our current funding applications yield some results and cover the costs of sustaining Apart for the next six months: In this scenario, we would allocate the grant to the growth initiatives necessary to fully achieve our ambitious goals of incubating 1,000 research sprint participants and 30 new lab fellows in the first half of 2024.

Best case - Our current funding applications exceed our budget and cover the costs of both sustaining and expanding our research incubation efforts: In this optimistic scenario, the additional funds would be invested in strengthening our team and quality assurance. Specifically, we would plan to hire additional senior mentors internally, with the goal of ensuring that all 1,000 participants and our 30 lab fellows are consistently involved in high-quality, professional AI safety research.

Budget breakdown:

Sustaining current efforts: $164,000
- Salary costs: $105,000
- Operational & administrative costs: $33,000 (our ask)
- Research costs (compute, APIs, conference attendance): $26,000 (our ask)

Growth initiatives: $84,000
- Expansion of core team (2 additional FTEs): $44,000
- Operations and fundraising support (1 FTE): $22,000
- Research assistant for Apart Lab (1 FTE): $22,000

Stipends for Apart Lab fellows (30 fellows): $30,000
External senior mentors (200 hours): $10,000
In case the answer to "How much money do you need?" is formatted incorrectly, please see the full response to this form at this link: https://docs.google.com/document/d/1P1AJ4QxAaLQgVQFnzys5iQ1JB2JUgzF7rL7dNyZOFvQ/edit?usp=sharing
Website: https://apartresearch.com/
Manifund application: https://manifund.org/projects/help-apart-expand-global-ai-safety-research
78% - with this grant, we could take a large step toward sustaining our efforts. However, ultimate success depends on multiple funding sources and solid execution on our end. Project risks include, but are not limited to: insufficient research taste at Apart, insufficient research taste among participants and fellows, and Lab graduates not realizing their potential after the fellowship. Given our track record and current team spirit, we are optimistic that we are well equipped to handle these project-related risks and have the capacity and motivation to raise the necessary funds.