Incubate and, if successful, spin out an international network organization, with members active in AI safety and/or ethics. The network will facilitate and provide infrastructure for member organizations to coordinate their efforts in areas such as communications, policy, and advocacy; help to foster trust and information sharing among groups; and serve as a capacity-building space for new organizations. Potential member organizations have already told us this is needed, and smaller efforts to implement pieces of this work have already been successfully tested by us and others. We know how to build this from our experience working for the largest international climate network.
The growing ecosystem of organizations trying to reduce AI harms faces a major challenge. As more think tanks, advocacy nonprofits, and social movements appear, they are not coordinating their efforts, even when their goals and campaigns align. These groups are missing opportunities to drive the narrative and advance impactful risk-reduction policies, even as AI corporations become more influential and governments decide how to respond. For example, the UK AI Safety Summit could have been a moment when AI ethics and safety organizations delivered a united set of principles for international regulation. Instead, coordination was minimal; governments and major AI companies dominated the media discourse, and other organizations’ voices took a back seat.
A network organization is an institution that can help to overcome this challenge. With a membership composed of organizations active on the same issues, the network organization serves (at least) the following functions for its members:
Coordination infrastructure for shared strategy, policy, communications, & advocacy
Capacity building, especially for smaller and newer organizations
Confidential information sharing
Trust building across organizations
For example, the network might:
Develop shared policy positions, campaigns, and messages in the lead-up to and during a major international conference
Create a multi-year strategy for advancing policies in important countries
Create a capacity-building program for newly established organizations from less well-resourced countries
In short, network organizations act as a neutral space for members to build collective power.
The AI safety and ethics communities lack a network organization. Yet rapid growth in the number of organizations, coupled with important political and policy developments, such as the UK’s AI Safety Summit, suggests that a network organization could be valuable. Further, as the safety and ethics communities are to some degree split, the lack of a coordinating mechanism between them leaves them less able to combine their power, wastes their resources on competing with each other for influence, and leaves them more at risk of being further divided by opponents of increased AI oversight.
A prominent example of a successful network organization in a parallel community is the 35-year-old Climate Action Network - International (CAN). CAN has a membership of 1,900 organizations in 130 countries fighting climate change, and legally independent intermediary networks in 22 countries and regional groups (such as the EU). Other network organizations exist in various forms in global health, global development, and beyond.
Efforts by individual AI organizations, or small groups of organizations, to create some of the network functions above, suggest that there is already interest in the kinds of services a network organization can provide. However, efforts so far:
Have been piecemeal, involving relatively few organizations or only one strategic element (e.g. just communications but not policy)
Typically haven’t bridged the safety and ethics divide
Are usually led by entities that have their own policy or ideological positions
Were not sustained for more than a few months or beyond a single event
In our conversations with dozens of individuals active in AI risk reduction, we have also been repeatedly told that such a network could be valuable. Further, no one else has expressed an interest in establishing such a network themselves.
Future Matters will incubate, and, if successful, spin out an international network organization for AI safety and ethics. Over the next year, we will:
Gather input from potential member organizations on the ideal services and scope of the network
Establish a founding membership and process to expand and diversify membership
Create regular coordination mechanisms (e.g. recurring calls and meetings) and identify shared positions and goals, likely in the areas of policy, advocacy and communications strategy.
We expect that the initial membership will be composed especially of organizations active in the European Union, United Kingdom, and United States; with further outreach to groups in Canada, India, Australia, the Middle East, Japan, Singapore, and the rest of Southeast Asia; and initial contact in Latin America and Africa. We will strive for the equitable inclusion of organizations identifying with AI ethics and/or AI safety. Member organizations will eventually pay modest dues to join the network, scaled to their organizations' budgets. This will help to diversify the network organization’s revenue sources, although evidence from other networks suggests that dues will cover only a minority of the network’s expenses.
AI Network Executive Director, Salary & Benefits: €150,000
AI Network Member Coordinator, Salary & Benefits: €94,000
Future Matters Managing Director (0.2 FTE): €20,000
Overhead: €36,000
Total 1-Year Budget: €300,000
Kyle Gracey - Future Matters Strategy Consultant (future AI Network Executive Director)
Justus Baumann - Future Matters Managing Director
These and other Future Matters bios are here.
The AI Network Member Coordinator will be hired if our fundraising is successful.
Future Matters already provides strategy consulting and facilitation services to AI risk reduction organizations, focusing on political, policy, communications, and campaign strategy advice. This includes experience coordinating multiple organizations to arrive at shared strategies. We believe these existing relationships will allow us to build a membership base more quickly. We do not advance specific policy or advocacy positions of our own. Our only position is that the world needs to better protect itself against the risks from advanced AI, and that civil society organizations play an essential role in this struggle.
We also have experience working directly for Climate Action Network - International, and have first-hand knowledge of the processes, organizations, and structures that allowed this network to be so successful. Our AI Strategy Consultant, Kyle Gracey, even wrote a compilation of its history and structure.
Moreover, four of our staff have experience founding and successfully running several nonprofit organizations, bringing expertise in everything from finance and operations to policy development, communications strategy and executive leadership.
We believe we have the experience, the connections, and the knowledge to build a first iteration of this network over the next year.
The network fails to attract a sufficient* number of members to build significant collective power, or attracts sufficient members but provides little added value to their work. This could happen because 1) we fail to build services and infrastructure that enough member organizations find valuable, causing them to either leave the network or remain but not actively participate; and/or 2) founder effects from the initial cohort of member organizations lock in certain values and practices that disincentivize further member organizations from joining.
Outcome: We think the most likely outcome in this failure mode is a return to the status quo, plus wasted effort by participating organizations during the time when they were active in the network. The potential coordination benefits of the network will mostly go unrealized. Some benefits may still be realized if the kinds of piecemeal efforts that we've seen so far continue to be attempted by other organizations. For example, organizations in one country might still try to coordinate on communications. If the attempted coordination by network members goes especially badly (for example, if there are major conflicts between safety and ethics organizations caused by trying to coordinate), then current goodwill and cooperation among organizations might actually become worse than if the network had never been attempted.
Partial success/failure: Division along safety & ethics lines, or geographical lines. An intermediate position between success and failure is that we succeed in creating a network that is more limited in scope than our vision, because some element of the composition of the membership does not work. An obvious break point would be between safety and ethics organizations, since there is already some division between these groups. We might succeed in creating an international AI safety network, or an international AI ethics network, but not one uniting the two. Or, we might succeed in creating a more regional network, such as an EU or U.S. safety & ethics network, but not a more global one, due to differences among organizations in different countries in their AI strategies, or an unwillingness to work on issues beyond their borders. The longer-term impact of this outcome is less clear. The network might remain "stuck" in its current membership and composition, having the desired coordination and synergistic effects among its members, but leaving many other organizations out. Possibly, several independent networks emerge, separated by these lines of division. They each have impact, but they never coordinate or merge. Or, the first network might eventually grow beyond its initial composition and become an international safety & ethics network over a longer period of time.
* We roughly estimate that a dozen or more active organizations are needed by the end of year one for the network to be on a path to keep growing its power and to be worth spinning out into its own organization.
We've already received €155,000 from several individual donors, and are incrementally fundraising to close our remaining gap.