
General support for SaferAI

AI governance

SaferAI

Active Grant
$100,025 raised
$300,016 funding goal


Project summary

SaferAI is a governance and research non-profit focused on AI risk management. Their work centres on standards & governance, with involvement in the EU AI Act Code of Practice, the NIST US AISIC, and the OECD G7 Hiroshima Process taskforce. They additionally produce ratings of frontier AI developers’ risk management practices. To support this work, they conduct fundamental research into quantitative risk management for AI, with a particular focus on assessing risks from LLM cyberoffensive capabilities.

What are this project's goals and how will they be achieved?

SaferAI aims to incentivize the development and deployment of safer AI systems through better risk management. The organisation focuses on research to advance the state of the art in AI risk assessment and on developing methodologies and standards for risk management of these AI systems.

How will this funding be used?

SaferAI’s leadership projects that the next marginal $300k will be used to:

  • Hire ML researchers to work with their Head of Research, Malcolm Murray, to further develop risk models for AIs.

  • Move their Executive Director, Siméon, onto payroll; he has not been drawing a salary and has instead been supporting himself from other grants.

  • Expand the involvement of part-time senior staff and advisors.

  • Hire a pool of research assistants that can be tapped for assorted projects.

Leadership estimates total room for more funding of €2M+, given the span of opportunities they see.

Who is on the team and what's their track record on similar projects?

SaferAI is led by their founder and Executive Director, Siméon Campos, who previously co-founded EffiSciences. Their work on quantitative risk assessment is led by Malcolm Murray (Head of Research), previously a Managing VP at Gartner with over 10 years of experience in risk management. Their work on rating frontier AI companies has involved Henry Papadatos (Managing Director; technical component), who holds an MS in Robotics & Data Science, and Gábor Szórád (Head of Product; product development), an experienced entrepreneur and manager who has led teams of 8,000+. Their policy engagement is led by Chloe Touzet (Head of Policy), who holds a DPhil from Oxford and spent 5 years as a researcher at the OECD, with standards development by James Gealy (Head of Standards), who has prior engineering experience in the aviation industry. They’re advised by Cornelia Kutterer, former Senior Director of EU Government Affairs at Microsoft. A full list of staff and backgrounds is available on their about page.

What are the most likely causes and outcomes if this project fails? (premortem)

Although SaferAI has attracted a number of highly experienced individuals, many of the team are junior (including the Executive Director, Siméon), and the more experienced individuals mostly have experience in parallel fields (e.g. aviation standards; risk management outside AI; policy experience in labor). In other words, although all the requisite skills and experience are represented somewhere on the SaferAI team, combining all of them will require a high level of coordination that may be challenging to achieve.

SaferAI are operating in a challenging domain, with significant lobbying efforts from AI companies actively obstructing safety standards proposed by civil society. Even if they execute well, it is quite possible that many of their initiatives will fail to culminate in a desirable outcome, an issue shared with all projects in this space.

What other funding is this person or project getting?

For various reasons, SaferAI cannot publicly disclose a detailed list of all their funders, but they have disclosed that they have multiple institutional and private funders from the wider AI safety space. They may be able to share more details in private.

Comments (1) · Donations (2)
Adam Gleave (donated $100,000) · 3 months ago

Main points in favor of this grant

Since their start in 2023, SaferAI have rapidly established themselves as a reputable and informed source on AI risk management and standards. They have assembled a team with diverse relevant experience and built connections with key stakeholders in the policy community to effectively disseminate their research output.

Donor's main reservations

SaferAI’s rapid growth coupled with an inexperienced leadership team is likely to put a strain on management capacity. This risk is exacerbated by the team taking on a diverse range of projects (standards, ratings, direct policy advising) that, while complementary, will impose additional overhead on the leadership team to coordinate. The team intends to explore a more focused approach over the next 6 months to mitigate this risk.

Process for deciding amount

In principle I would be excited to fund at least the next $300k of SaferAI’s funding gap based on their described marginal uses of funding. My overall remaining grantmaking budget is around $125k, so I am allocating the bulk of my remaining budget (and around 25% of my total Manifund budget) to SaferAI. I wish to retain some discretionary funds for small, high-value projects. I have not conducted an extensive search for possible projects in this space, so I make no claim that SaferAI is the highest-impact project at the margin, but I believe it clears the more general bar for funding of technical AI governance projects.

Conflicts of interest

None; I have no significant relationship with any of the SaferAI team. I know Siméon best but this has been limited to infrequent (~twice a year) chats at the sides of conferences.