Aligning Superintelligence: Consequentialist Objectives Pose Catastrophic Risk
Abstract
Because human preferences are too complex to codify, AIs operate with misspecified objectives. Optimizing such objectives often produces undesirable outcomes, a phenomenon known as reward hacking. These outcomes are not necessarily catastrophic; indeed, most examples of reward hacking in the previous literature are benign, and the objective can typically be modified to resolve the issue.
We study the prospect of catastrophic outcomes induced by AIs operating in complex environments. We argue that, when capabilities are sufficiently advanced, pursuing a fixed consequentialist objective tends to result in catastrophic outcomes. We formalize this by establishing conditions that provably lead to such outcomes. Under these conditions, simple or random behavior is relatively safe. Catastrophic risk arises due to extraordinary competence rather than incompetence.
With a fixed consequentialist objective, avoiding catastrophe requires constraining AI capabilities. In fact, constraining capabilities by the right amount not only averts catastrophe but also yields valuable outcomes.
Our results apply to any objective produced by modern industrial AI development pipelines.
Joint work with Henrik Marklund and Alex Infanger.
Bio
Benjamin Van Roy is a Professor at Stanford University, where he has served on the faculty since 1998. His research focuses on reinforcement learning and alignment. Beyond academia, he founded the Efficient Agent Team at DeepMind and Enuvis (acquired by SiRF/Qualcomm). He has also led research programs at Morgan Stanley and Unica (acquired by IBM). He received the SB in Computer Science and Engineering and the SM and PhD in Electrical Engineering and Computer Science, all from MIT, where his doctoral research was advised by John N. Tsitsiklis.
He is a Fellow of INFORMS and IEEE. He has served on the editorial boards of Machine Learning; Mathematics of Operations Research, for which he edited the Learning Theory Area; Operations Research, for which he edited the Financial Engineering Area; the INFORMS Journal on Optimization; and Foundations and Trends in Machine Learning. He has been a recipient of the MIT George C. Newton Undergraduate Laboratory Project Award, the MIT Morris J. Levin Memorial Master’s Thesis Award, the MIT George M. Sprowls Doctoral Dissertation Award, the National Science Foundation CAREER Award, the Stanford Tau Beta Pi Award for Excellence in Undergraduate Teaching, the Management Science and Engineering Department’s Graduate Teaching Award, the INFORMS Frederick W. Lanchester Prize, and the INFORMS Philip McCord Morse Lectureship Award.
He has graduated dozens of doctoral students, who have gone on to careers in academia (Carnegie Mellon, Columbia, Cornell, MIT, Northwestern, Rice, Stanford, USC), technology (Adobe, Amazon, DeepMind, Meta, Microsoft, Netflix, OpenAI, Spotify, Tesla, xAI), and finance (Citadel, DE Shaw, Goldman Sachs, Jane Street, Morgan Stanley, Two Sigma).