Project summary
aisafety.info is a website started by Rob Miles that hosts hundreds of answers to questions about existential risk from artificial intelligence. These answers contain introductions to basic AI safety arguments and concepts, summaries of technical content, and responses to objections.
The human-written content in the web interface covers the most common questions. To answer the “long tail” of uncommon questions, we’re also building an automated distiller. This chatbot searches a database of alignment literature and summarizes the results with citations to the sources. We’ve been making progress on improving the quality of the dataset and minimizing hallucinations.
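To illustrate the retrieve-then-summarize pattern the distiller follows, here is a minimal, hypothetical sketch. The corpus, scoring scheme, and function names are illustrative assumptions only; the real system uses a much larger database of alignment literature and a language model for the summarization step.

```python
# Hypothetical sketch of the distiller's retrieve-then-cite flow.
# Data, names, and scoring are illustrative assumptions, not the real implementation.
from collections import Counter
import math

# Toy stand-in for the alignment-literature database: (source_id, text) pairs.
CORPUS = [
    ("arbital-goodhart", "Goodhart's law says a proxy measure breaks down under optimization pressure."),
    ("paper-rlhf", "Reinforcement learning from human feedback trains models on human preference comparisons."),
    ("post-inner-alignment", "Inner alignment concerns whether a learned optimizer pursues the intended objective."),
]

def tokenize(text: str) -> list[str]:
    return [w.strip(".,'\"").lower() for w in text.split()]

def score(query: str, doc: str) -> float:
    """Crude relevance score: term overlap weighted by inverse document frequency."""
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    n_docs = len(CORPUS)
    total = 0.0
    for term in q:
        if term in d:
            df = sum(1 for _, text in CORPUS if term in tokenize(text))
            total += math.log((n_docs + 1) / (df + 1)) + 1
    return total

def retrieve(query: str, k: int = 2):
    """Return the k best-matching sources for a question."""
    ranked = sorted(CORPUS, key=lambda item: score(query, item[1]), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Compose a cited answer from retrieved passages.
    In the real system a language model would summarize these passages;
    here we simply quote them so the citation structure is visible."""
    passages = retrieve(query)
    return " ".join(f"{text} [{source}]" for source, text in passages)

if __name__ == "__main__":
    print(answer("What is inner alignment?"))
```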
What are this project's goals and how will you achieve them?
We aim to provide a reliable and accessible source of information about existential risk from AI. This resource caters to all audiences, whether they are new to the topic, looking to explore in more depth, seeking answers to their objections, or hoping to get involved with research or other projects. We think improving people’s understanding of AI risk helps improve the odds of a good outcome in a way that is relatively robust to different assumptions about the strategic landscape.
To create and maintain this resource, we’re hoping to:
Keep expanding and improving our written content. A team of distillation fellows and volunteers has been editing the approximately three hundred answers on the site, and hundreds more are in progress. We intend to increase contact with experts to help ensure the site reflects humanity's best current understanding of AI safety.
Refine our prototype chatbot and make it a major component of the user interface.
Redesign and improve the front end, using A/B testing to figure out how to present information in ways that make it easier to take in and share.
Develop an API that lets external websites embed our search function (a minimal sketch of such an endpoint appears after this list). The Campaign for AI Safety and other groups have reached out to us about strategic partnerships; these projects, as well as the upcoming rebuilt version of aisafety.com, would use our content in their social media campaigns.
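As a rough illustration of the embed API mentioned above, the sketch below exposes a search route that another site could call. The route, parameters, and response shape are assumptions for illustration, not our actual API; the placeholder search function stands in for the real content database.

```python
# Hypothetical sketch of a search-embed endpoint for external sites (standard library only).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def search(query: str) -> list[dict]:
    """Placeholder search backend; the real service would query the content database."""
    demo = [{"title": "What is AI alignment?", "url": "https://aisafety.info"}]
    return [hit for hit in demo if query.lower() in hit["title"].lower()] or demo

class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urlparse(self.path)
        if parsed.path != "/api/search":
            self.send_error(404)
            return
        query = parse_qs(parsed.query).get("q", [""])[0]
        payload = json.dumps({"query": query, "results": search(query)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        # Allow embedding sites on other origins to call the endpoint.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), SearchHandler).serve_forever()
```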
How will this funding be used?
We will use it to fund further distillation fellowships, similar to the current and previous ones, so we can continue improving the content. With larger amounts of funding, we may also hire software developers to continue working on the code for the interface and chatbot, and a CEO to direct them.
Who is on your team and what's your track record on similar projects?
Rob Miles is a leading AI safety communicator and will be our quality control manager. Steven Kaas has been leading our team of distillation fellows and developing systems to ensure content is high-quality and engaging. Our distillation fellows come from a wide range of backgrounds; a remote group of five to ten of them has been working on content for the past several months. plex has over 10 years of community management experience and has been the glue holding the project together for the last two and a half years. Chris Canal has experience in entrepreneurship, software management, and ML; he is ready to lead the dev team and iterate the website into a much better state using user research and analytics. We also have several excellent developers who have been contributing in their free time.
Our work has happened on Rob Miles's Discord, on GitHub, and in various Google Docs. For an indication of what future work will look like, you can look at those places, at the site itself, or at our roundups of newly posted content.
What are the most likely causes and outcomes if this project fails? (premortem)
It’s possible that the project won’t get enough traction to justify the cost, e.g. if the quality never becomes high enough to justify a full launch. (That said, we believe there has been visible, steady progress so far.) It could “fail” in a different sense if presenting views that are common in the AI safety community causes people to make bad decisions. This could happen because these views turn out to be wrong or badly conceptualized, or because they influence people in an unintended way, e.g. by convincing them that AI has great power without also convincing them that it’s hard to get alignment right.
What other funding are you or your project getting?
We have received around $46k from SHfHS and $54k from LTFF, both for running content writing fellowships. We have been offered a $75k speculation grant from Lightspeed Grants for an additional fellowship, and have made a larger application to them for the dev team, which has not been accepted. We have also recently applied to Open Philanthropy.