<b>The most important book I’ve read for years</b>: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound <b>a loud trumpet call to humanity to awaken us as we sleepwalk into disaster</b>. Their <b>brilliant</b> gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I’ve ever read, and I’ve waded through scores of them. We really must rub our eyes and wake the fuck up!

- Stephen Fry

Should you worry about superintelligent AI? The answer from one of the tech world’s most influential doomsayers, Eliezer Yudkowsky, is emphatically yes. The good news? We aren’t there yet, and <b>there are still steps we can take to avert disaster</b>

Guardian, Biggest Books of the Autumn

<b>The most important book of the decade</b> ... This <b>captivating page-turner</b>, <b>from two of today's clearest thinkers</b>, reveals that the competition to build smarter-than-human machines isn't an arms race but a suicide race, fuelled by wishful thinking

- Max Tegmark, author of Life 3.0


<i>If Anyone Builds It, Everyone Dies</i> <b>may prove to be the most important book of our time</b>. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in <b>an urgent plea for us to save ourselves while we still can</b>

- Tim Urban, co-founder of Wait But Why

Given the gravity of the case [Yudkowsky and Soares] make, it feels an odd thing to say that <b>this book is good. It is readable. It tells stories well. At points it is like a thriller</b> – albeit one where the thrills come from the obliteration of literally everything of value … <b>This is the apocalypse du jour</b> … The achievement of this book is, given the astonishing claims they make, that they make a credible case for not being mad. But I really hope they are: because I can’t see a way we get off that ladder.

The Times

The authors tell their story with clarity, verve and a kind of barely suppressed glee. <b>For a book about human extinction, <i>If Anyone Builds It, Everyone Dies</i> is a lot of fun.</b>

- Ian Leslie, Observer

Despite the complexity of its subject, <i>If Anyone Builds It, Everyone Dies</i> is as clear as its conclusions are hard to swallow … <b>everyone with an interest in the future has a duty to read what Yudkowsky and Soares have to say.</b>

- David Shariatmadari, Guardian, Book of the Day

<b>The best no-nonsense, simple explanation of the AI risk problem I've ever read</b>

- Yishan Wong, former CEO of Reddit

Soares and Yudkowsky lay out, in plain and <b>easy-to-follow</b> terms, <b>why our current path toward ever-more-powerful AIs is extremely dangerous</b>

- Emmett Shear, former interim CEO of OpenAI

<b>An eloquent and urgent plea for us to step back from the brink of self-annihilation</b>

- Fiona Hill, Defence Advisor to the UK Government

<b>Everyone should read this book</b>. I’m 70% confident that you – yes, you reading this right now – will one day grudgingly admit that <b>we all should have listened to Yudkowsky and Soares when we still had the chance</b>

- Daniel Kokotajlo, OpenAI whistleblower and lead author of AI 2027

<b>A fire alarm ringing with clarity and urgency</b>. Yudkowsky and Soares pull no punches

- Mark Ruffalo

<b>A compelling introduction to the world's most important topic</b>. Artificial general intelligence could be just a few years away. This is <b>one of the few books that takes the implications seriously</b>, published right as the danger level begins to spike

- Scott Alexander, founder of Astral Codex Ten

Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. <b>Read their disturbing book</b> and tell us what they get wrong

- Huw Price, Professor of Philosophy, University of Cambridge

<b>You will feel actual emotions when you read this book</b>. We are currently living in the last period of history where we are the dominant species. <b>Humans are lucky to have Soares and Yudkowsky in our corner</b>, reminding us not to waste the brief window of time that we have to make decisions about our future in light of this fact

- Grimes

This book offers <b>brilliant insights</b> into history’s most consequential standoff between technological utopia and dystopia, and <b>shows how we can and should prevent superhuman AI from killing us all</b>. Yudkowsky and Soares’s <b>memorable storytelling</b> about past disaster precedents … highlights why top thinkers so often don’t see the catastrophes they create

- George Church, Professor of Genetics, Harvard University

Silicon Valley calls it inevitable. Your survival instinct knows better. Humanity is funding its own delete key – an unblinking intelligence that never sleeps, never stops, perfectly indifferent. <b>Wonder-time is over; this is our warning. Read today. Circulate tomorrow. Demand the guardrails</b>. I’ll keep betting on humanity, but first we must wake up

- R.P. Eddy, former director, White House National Security Council

A <b>timely and terrifying education on the galloping havoc AI could unleash</b> – unless we grasp the reins and take control

Kirkus

A <b>clearly written </b>and <b>compelling</b> account of the existential risks that highly advanced AI could pose to humanity

- Ben Bernanke, Nobel Prize winner in economics

A <b>sober but highly readable</b> book on the very real risks of AI. <b>Both sceptics and believers need to understand the authors’ arguments, and work to ensure that our AI future is more beneficial than harmful</b>

- Bruce Schneier, author of A Hacker's Mind

<b>You’re likely to close this book fully convinced that governments need to shift immediately to a more cautious approach to AI</b>, an approach more respectful of the civilization-changing enormity of what's being created. <b>I’d like everyone on earth who cares about the future to read this book and debate its ideas</b>

- Scott Aaronson, Professor and Chair of Computer Science, University of Texas at Austin

[An] <b>urgent clarion call</b> to prevent the creation of artificial superintelligence … A frightening warning that deserves to be reckoned with

Publishers Weekly

An <b>apocalyptic plea</b> for the world to get off the AI escalation ladder before humanity is wiped off the map

Irish Times

AN INSTANT NEW YORK TIMES BESTSELLER

'The most important book of the decade'
MAX TEGMARK, author of Life 3.0

'A loud trumpet call to humanity to awaken us as we sleepwalk into disaster - we must wake up' STEPHEN FRY

'The best no-nonsense, simple explanation of the AI risk problem I've ever read' YISHAN WONG, former Reddit CEO

AI is the greatest threat to our existence that we have ever faced.

The scramble to create superhuman AI has put us on the path to extinction – but it’s not too late to change course. Two pioneering researchers in the field, Eliezer Yudkowsky and Nate Soares, explain why artificial superintelligence would be a global suicide bomb and call for an immediate halt to its development.

The technology may be complex, but the facts are simple: companies and countries are in a race to build machines that will be smarter than any person, and the world is devastatingly unprepared for what will come next.

Could a machine superintelligence wipe out our entire species? Would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares explore the theory and the evidence, present one possible extinction scenario and explain what it would take for humanity to survive.

The world is racing to build something truly new – and if anyone builds it, everyone dies.
A Guardian Biggest Book of the Autumn


Product details

ISBN: 9781847928924
Published: 2025-09-18
Publisher: Vintage Publishing; The Bodley Head Ltd
Weight: 462 g
Height: 244 mm
Width: 161 mm
Depth: 26 mm
Age level: 01, U, P, G, 05, 06, 01
Language: English
Format: Hardback
Pages: 272

About the contributors

Eliezer Yudkowsky (Author)
Eliezer Yudkowsky is a founding researcher of the field of AI alignment, with influential work spanning more than twenty years. As co-founder of the non-profit Machine Intelligence Research Institute (MIRI), Yudkowsky sparked early scientific research on the problem and has played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine’s 2023 list of the 100 Most Influential People in AI, and has been discussed or interviewed in the New York Times, New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, Washington Post, and elsewhere.

Nate Soares (Author)
Nate Soares is the president of the non-profit Machine Intelligence Research Institute (MIRI). He has been working in the field for over a decade, after previous experience at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.