This course will get you up to speed on extreme risks from AI and governance approaches to mitigating these risks. You can learn more about the facilitated version of the course here.
In this course, we examine risks posed by increasingly advanced AI systems, ideas for addressing these risks through standards and regulation, and foreign policy approaches to promoting AI safety internationally. There is much uncertainty and debate about these issues, but the following resources offer a useful entry point for understanding how to reduce extreme risks in the current AI governance landscape.
This course was developed by subject matter experts, with input from leaders in the field such as Ben Garfinkel, Haydn Belfield, and Claire Boine. See who else is involved on our website.
By the end of this course, you should be able to understand a range of agendas in AI governance and make informed decisions about your next steps to engage with the field.
Course Overview
First, we'll examine the risks posed by advances in AI. Machine learning has advanced rapidly in recent years, and further advances threaten various catastrophic outcomes, including powerful AI systems acting against human interests, AI-assisted bioterrorism, and AI-exacerbated conflict.
Standards and regulations, such as model evaluations and security requirements, could help address extreme risks from frontier AI. One approach to reducing risks from states that do not regulate AI is for a cautious coalition of countries to lead in AI (e.g., through hardware export controls, information security, and immigration policy) and use this lead to promote safety. Another approach is expanding the cautious coalition, which may be achievable through treaties with privacy-preserving verification mechanisms. We'll provide resources to help you examine both approaches.
Other prominent (sometimes complementary) AI governance ideas include lab governance; mitigating misuse of generally capable frontier systems; slowing down AI now; and a "CERN for AI." We'll briefly overview these approaches so that you know what others are discussing.
The final week discusses ways you can contribute through policy work, strategy research, "technical governance" work, and other paths.
Audio
Audio versions of many core readings are available on Apple Podcasts, Google Podcasts, Spotify, and this RSS feed.