Project summary
Given the movement around AI legislation and regulation, a number of legal academics have become interested in AI safety. After a previous AI safety x legal scholarship workshop, they’ve formed a consortium and are interested in helping build the field further.
One of their goals is to initiate a symposium on AI safety at a top law review. Such a symposium would include an RFP for legal scholarship on AI safety as well as an in-person event for the selected papers. If this were successfully accomplished at a top law review, it would lend significant credibility and prestige to AI safety in legal academia and pave the way for future field-building efforts.
This regrant seeks to cover the costs of such a symposium, which should also increase the likelihood that a law review accepts the proposal.
What are this project's goals and how will they be achieved?
The project’s goal is to initiate a symposium on AI safety at a top law review. The consortium will leverage its connections to reach out to the editorial boards of different law reviews and pitch the opportunity.
How will this funding be used?
Funding will cover the costs of the symposium, specifically the flights and hotel stays of the academics attending.
Who is on the team and what's their track record on similar projects?
Yonathan Arbel is an Associate Professor of Law at the University of Alabama.
Peter Salib is an Assistant Professor of Law at the University of Houston Law Center.
Kevin Frazier is an Assistant Professor of Law at St. Thomas University.
All three have published papers in prestigious legal journals and understand the norms and customs of legal academia.
What are the most likely causes and outcomes if this project fails? (premortem)
The main failure mode is that no top-tier law review is interested in hosting a symposium on AI safety, which could happen if interest in the topic isn’t yet high enough. In that case, the consortium has discretion to use the funds for a separate project, so long as that project is focused on reducing AI x-risk by increasing the prominence of AI safety in law. For example, the funds could be used for a prize competition.
What other funding is this person or project getting?
None that I'm aware of for the legal symposium.