
If Anyone Builds It, Everyone Dies

The Case Against Superintelligent AI

Eliezer Yudkowsky and Nate Soares

$36.99

Paperback

Forthcoming

English
The Bodley Head Ltd
30 September 2025
The founder of the field of AI risk explains why superintelligent AI is a global suicide bomb and we must halt development immediately

AI is the greatest threat to our existence that we have ever faced.

The technology may be complex, but the facts are simple. We are currently on a path to build superintelligent AI. When we do, it will be vastly more powerful than us. Whether it 'thinks' or 'feels' is irrelevant: it will have objectives, and they will be completely different from ours. And regardless of how we train it, even the slightest deviation from human goals will be catastrophic for our species, meaning extinction. Precisely how this happens is unknowable, but what we do know is that when it happens, it will happen incredibly fast. However it happens, all paths lead to the same conclusion: superintelligent AI is a global suicide bomb, the labs developing it have no adequate plan or set of policies for tackling this issue, and we will not get a second chance.

From the leading thinkers in the field of AI risk, If Anyone Builds It, Everyone Dies explains with terrifying clarity why in the race to build superintelligent AI, the only winning move for our species is not to play.
By:   Eliezer Yudkowsky, Nate Soares
Imprint:   The Bodley Head Ltd
Country of Publication:   United Kingdom
Dimensions:   Height: 234mm,  Width: 153mm,  Spine: 40mm
Weight:   700g
ISBN:   9781847928931
ISBN 10:   1847928935
Pages:   304
Publication Date:   30 September 2025
Audience:   College/higher education, Professional and scholarly, General/trade, Primary, Undergraduate
Format:   Paperback
Publisher's Status:   Forthcoming

Eliezer Yudkowsky (Author)
Eliezer Yudkowsky is the co-founder of the Machine Intelligence Research Institute (MIRI) and the founder of the field of AI alignment research. He is one of the most influential thinkers and writers on the topic of AI risk, and his 2023 TIME magazine op-ed is largely responsible for sparking the current concern and discussion around the potential for human extinction.

Nate Soares (Author)
Nate Soares is the president of MIRI and one of its most senior researchers. He has been working in the field of AI alignment for over a decade, after previous experience at Microsoft and Google.

Reviews for If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI

The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. Their brilliant gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I’ve ever read, and I’ve waded through scores of them. We really must rub our eyes and wake the fuck up! -- Stephen Fry

If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can. -- Tim Urban, co-founder of Wait But Why

The best no-nonsense, simple explanation of the AI risk problem I've ever read. -- Yishan Wong, former CEO of Reddit

Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous. -- Emmett Shear, former interim CEO of OpenAI

