You're pledging to donate if the project hits its minimum goal and gets approved. If not, your funds will be returned.
This project will establish a legal watchdog and evaluation organization dedicated to analyzing court decisions in AI-related lawsuits across Western jurisdictions. Western courts are increasingly setting precedents that could inadvertently bottleneck AI research, deployment, and infrastructure. If the West creates a hostile legal environment for AI development, it risks ceding its technological leadership to authoritarian regimes, primarily China. Because authoritarian regimes hold vastly different value systems, an AGI scenario in which they lead significantly increases both P(doom) and the likelihood of severe S-risks (suffering risks). By evaluating legal precedents, publishing expert legal analyses, and intervening in key cases, this organization aims to ensure the West maintains the legal and regulatory runway necessary to lead in safe AGI development.
Goals:
Monitor and Evaluate: Systematically track AI-related litigation across key Western jurisdictions (US, EU, UK) and grade the quality, technical accuracy, and long-term implications of judicial rulings.
Inform and Steer Precedent: Provide high-quality legal analyses to prevent innovation-stifling or technologically illiterate precedents that could cripple Western AI capabilities.
Mitigate S-Risks: Indirectly reduce P(doom) and S-risks by ensuring Western, democratic nations maintain their competitive edge over authoritarian states in the race to AGI.
Execution: I will leverage the extensive network of leading legal experts (including scholars from Stanford and the University of Vienna) that I have cultivated during my LL.M. (Master of Laws) studies. We will achieve our goals by:
Publishing public "scorecards" and briefs on pivotal AI lawsuits, translating complex technical realities into actionable legal frameworks for judges and policymakers.
Forming an advisory board of world-class international tech law experts to review and validate our evaluations.
Drafting amicus curiae (friend of the court) briefs in critical AI cases to inject long-term AI safety and geopolitical context directly into the courtroom.
The funding will be used to transition this initiative from an independently funded, network-building phase into a fully operational organization. Specifically:
Legal & Technical Research: Hiring or contracting specialized legal clerks and AI policy analysts to continuously monitor court dockets and draft evaluations.
Expert Consultation Fees: Compensating top-tier legal scholars and technologists for reviewing our briefs and contributing to amicus filings.
Operational & Publication Costs: Building a public-facing database for our legal evaluations, web hosting, and distribution of our materials to legal professionals and policymakers.
Legal Filing Fees: Covering the administrative and legal costs associated with submitting amicus briefs or expert testimonies in relevant jurisdictions.
I am the founder and lead researcher. My track record is defined by high personal conviction, skin in the game, and strategic networking. I have personally invested roughly $30,000 of my own capital to pursue an LL.M. in International Business Law at the University of Vienna.
I undertook this specific academic path not just for the credential, but as a strategic maneuver to access, study under, and network with the world's leading law experts—including visiting scholars and professors from Stanford and other top institutions. This has given me the ideal position from which to secure expert opinions, understand international legal frameworks, and build the foundational network required to evaluate high-stakes AI litigation effectively. While this organization is new, the foundational groundwork, legal expertise, and network of advisors are already in place.
Causes of Failure:
Lack of Judicial Receptiveness: Judges and courts may ignore our evaluations and amicus briefs, relying instead on traditional, backward-looking legal arguments.
Pacing Problem: The volume and speed of AI litigation might outpace our team's capacity to evaluate and intervene effectively.
Optics/Partisanship: Our arguments regarding geopolitical competition and S-risks might be viewed by traditional courts as outside the scope of specific civil disputes (e.g., copyright infringement).
Outcomes if it fails: If we fail to influence the legal landscape, the project will dissolve and the counterfactual will proceed: Western courts will likely establish disjointed, restrictive precedents driven by legacy industries. This would create a chilling effect on Western AI R&D, slowing progress just enough for authoritarian regimes, which face no comparable domestic legal bottlenecks, to close the gap and dictate the future of AGI, leaving the world highly vulnerable to S-risks.
I have personally bootstrapped this initiative by investing approximately $30,000 of my own funds into completing an LL.M. at the University of Vienna to build the legal expertise and expert network needed to make this project successful. I have raised $0 in external funding over the last 12 months.