The landscape of existential risk from AI (AI x-risk) is full of misconceptions and unnecessary conflicts. By offering a short history of AI x-risk for a non-expert audience, this project points a way toward useful policy and improved general understanding, unburdened by the entrenched positions of the past. The goal is twofold: on the one hand, to persuade traditional x-risk proponents that near-term harms are worthwhile considerations when estimating the flourishing of future beings and, on the other hand, to help persistent critics of the x-risk position appreciate the argument that several less devastating but easier-to-grasp risks might compound and accumulate to existential levels. In other words, the ambition is to demonstrate, through a historical understanding of how thinking about AI x-risk has developed, that the "either-or" framing that has plagued this debate is no longer valid. If recent events in the AI space writ large are any guide, we need to swiftly overcome the false notion that we must choose between the short term and the long term.
Moreover, it's time we recognized the role of computing technology in how better-known existential risks such as nuclear war, pandemics, and climate change have been conceptualized and, conversely, the place that an understanding of these other risks has had in how the notion of AI as a threat has matured since the 1960s and, more sharply, since the 1990s.
To sum up: x-risk from AI is too important an issue to remain a topic discussed primarily in philosophy departments at elite universities and among philosophically curious AI researchers at prominent labs. This book uses history to open up the discussion in the present toward a future that looks attractive to non-expert communities, i.e. where most people are found.
This project will result in a short, accessible book on the history of x-risk from AI, no longer than 100 pages. Because the book is mainly historical, it won't be outdated by the time of publication, unlike so much of the AI literature.
I aim to spend the better part of the 2025-26 academic year writing, editing, and revising this book (writing short, concise pieces is always more time-consuming). For this year of writing, I will seek affiliations that will elevate my thinking on the topic and provide useful opportunities for comments and reviews from experts in the field: the Stanford Existential Risks Initiative (SERI) and the Centre for the Study of Existential Risk (CSER) at Cambridge (I know people at both centers and am already speaking with them to find avenues of cooperation).
The funds will be used exclusively to finance my time as a researcher and author: 10–12 months, starting September 2025.
I'm the sole author of this book. Naturally, I'll be working with an editor at a press (I'm in touch with Stanford University Press and MIT Press).
I have been a researcher in the history of technology and media for over a decade. With a background in computer science and a Ph.D. in the history of ideas, I'm ideally positioned to write this history. For the past three years I've been running a research project on errors and mistakes in the history of artificial intelligence (funded by the Swedish Research Council). I'm employed at KTH Royal Institute of Technology in Stockholm and have been a visiting scholar at UC Berkeley and Stanford in the Bay Area (I'm currently at Stanford). For several years, I've been teaching classes and publishing on the history of death and dying (with a focus on the 20th century and nuclear radiation), the history of futurology and futures studies, and the history of technological critique in the areas of the environment, labor, philosophy, and media. These experiences, along with the x-risk course I've been designing, have made me well prepared to begin writing this book.
Because Swedish-born thinkers have been disproportionately influential in the AI x-risk field (most notably Bostrom, Sandberg, and Tegmark, as well as Nobel laureate Hannes Alfvén), being a native speaker of Swedish will be helpful in investigating the origins and emergence of the field proper (i.e. studying archival materials, correspondence, interviews, school essays, etc.).
Without funding, it will be difficult to find the time to write this book; I need funding to write it instead of spending that time teaching and pursuing other projects. Anyone with even a little knowledge of the AI safety and x-risk domains will understand that it matters whether things get done now or a few years down the road.
I was fortunate to receive $10K from Ryan Kidd and Austin Chen to start developing a course on the history of x-risk. The book project I'm now proposing is an extension of that work, which helpfully informs the core elements of the book.