Funding a Consortium of Legal Scholars and Academics doing AI Safety

AI governance

Peter Salib

Not funded · Grant
$0 raised

Project summary

Given the movement around AI legislation and regulation, a number of legal academics have become interested in AI safety. After a previous AI safety x legal scholarship workshop, they’ve formed a consortium and are interested in helping build the field further. 

One of their goals is to initiate a symposium on AI safety at a top law review. Such a symposium would include an RFP for legal scholarship on AI safety as well as an in-person event for the papers that are selected. If this were successfully accomplished at a top law review, it would lend considerable credibility and prestige to AI safety in legal academia and pave the way for future field-building efforts.

This regrant seeks to cover the costs of such a symposium, which would also increase the likelihood that a law review accepts the proposal.

What are this project's goals, and how will they be achieved?

The project’s goal is to initiate a symposium on AI safety at a top law review. The consortium will leverage its connections to reach out to the editorial boards of different law reviews and pitch the opportunity.

How will this funding be used?

Funding will cover the costs of the symposium, specifically the flights and hotel stays of the academics attending.

Who is on the team and what's their track record on similar projects?

Yonathan Arbel is an Associate Professor of Law at the University of Alabama. 

Peter Salib is an Assistant Professor of Law at the University of Houston Law Center.

Kevin Frazier is an Assistant Professor of Law at St. Thomas University.

All three have published papers in prestigious legal journals and understand the norms and customs of legal academia.

What are the most likely causes and outcomes if this project fails? (premortem)

The main failure mode would be if no top-tier law review is interested in doing a symposium on AI safety. This might happen if interest in AI safety isn’t high enough. If this is the case, the consortium has discretion to use the funds for a separate project, so long as that project is focused on reducing AI x-risk by increasing the prominence of AI safety in law. As an example, these funds could be used in a prize competition.

What other funding is this person or project getting?

None that I'm aware of for the legal symposium.
