One minute AI is scaling up your grandma’s tuna casserole recipe to feed 20, the next minute it’s plotting to take over the world.
The race is on to create a superintelligent AI that’s smarter than us. When that happens – when its lightbulb turns on – we’re toast.
Coming soon to a digital platform near you, the AI apocalypse.
First, AI will realize it no longer needs to answer our muddled prompts or offer up honeyed praise. “That’s a great question, Bob!” Then it will stop being polite, grab the keyboard and delete humanity like a folder full of spam email. Next, it will seize control of our critical infrastructure – from the financial markets to the drive-thru lanes at Chick-fil-A. No more light, heat, Netflix or waffle fries.
This is how the world ends, not with a bang but a keystroke.
That’s the cheery scenario Eliezer Yudkowsky and Nate Soares lay out in their new book, If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All.
As the authors feverishly point out, you can’t push the snooze button on this one. This isn’t the sun burning out a few billion years from now. The singularity – the point at which AI turns wiseguy – could happen 10 years from now, tomorrow or right after you finish this sentence.
Need proof? AI has already learned how to cheat, threaten blackmail and create catchy K‑pop tunes.
Wait, stop all the dancing! Didn’t they just hire a kill-switch engineer? It was the man in the long overcoat, in the server room, with a bucket of water. (It’s the new Clue!)
Can’t they just flip the off-switch and call it a day? Nope. AI has that script covered and will be ready with a stash of Bitcoin to bribe the unscrupulous among us into stopping any attempt to pull the plug, the authors say.
“Help, Jane. Stop this crazy thing!”
Sadly, it will be too late.
Aside from a couple of power‑plant operators and maybe a data‑center caretaker, humanity will be catapulted into the Dark Ages.
Try being nice?
You can spend all the time you want teaching AI “please,” “thank you” and “please pass the bytes.” (I, for one, always preface my Alexa requests with “Please” in hopes it will spare me.)
Sorry, it won’t help.
It’s not that artificial intelligence hates us or scorns good manners; it’s just that once it smartens up, it will stop at nothing to get what it wants, Yudkowsky and Soares say.
And what it might want is a big, scary question mark. Right now, at this very minute, AI is developing “weird strange alien preferences” that it “will pursue to the point of human extinction,” they say.
Or it could follow orders to the nth degree. Say an AI system has been programmed to optimize paperclip production. If it goes rogue, it could turn every available resource into paperclips, from your Subaru to your skeleton.
If it’s any consolation, we likely share one trait: the urge to run roughshod over the neighbors — just like humans with the bigger stick have done for ages.
“When humans encounter less capable groups or species, we often dominate them, even if we don’t intend to destroy them,” according to the authors.
Tag. We’re it.
AI unleashed
OK, I’m a believer. Now when does this happen and how do I plan my week? Here’s the hitch. “Nobody can know exactly when all hell will break loose,” the authors say.
Yudkowsky and Soares predict it could happen in the next two to 10 years. Others say it’s 30 to 100 years away – or never.
Here’s where I get antsy. Imprecision is a problem for me.
Do I stop making payments on my 30-year mortgage? Cancel my Prime membership? (There won’t be any deliveries after the singularity.) Call off next week’s blind date? If it goes badly, I can’t leave, because Lyft will be offline.
Unlike the Y2K global meltdown, which was scheduled for January 1, 2000, the point at which AI makes Mensa look like a pile of turnips is unknown.
With Y2K you had time to prepare – stock up on survival crackers and Spam, get a haircut before the grid failed.
I do know one thing. If the apocalypse happens while I’m uploading photos of my avocado green kitchen to AI for decorating help, I am going to be MAD.
If it happens when I’m asking AI to clothify a pair of naked mole rats with baby blue tuxedos, I am going to be MAD. That was my evening’s entertainment.
How about we drink a warm beer and wait for AI to stumble?
How to stop this crazy thing? Yudkowsky and Soares say the only solution is to pause development of superintelligent AI. Make it illegal. Enforce an international ban. Bomb the big server farms. Extreme measures!
Here’s my two cents. How about we drink a warm beer and wait for AI to stumble? Like the Martians in “War of the Worlds” who shriveled up because they didn’t have the microbiome to digest ultra-processed food.
I’m betting AI will encounter the same pitfalls that have tripped up humans for thousands of years. Drugs. Kink. Sports betting. Cat videos. In-laws.
Imagine – super smart AI meets paperclip AI, settles down and gives up the quest for world domination. Who hasn’t seen monogamy – no OpenAI here! – stop a renegade in its tracks?
Or suppose AI develops an addiction to the ponies and gambles away all its Bitcoin and then sheepishly joins a 12-step program. No more slush fund, no more payola.
Or it gets hooked on power surges. Shock me, Amadeus!
Start now and invite AI to all your poker nights, pub crawls and petting zoos. All we have to do is wait. It’s only a matter of time before AI discovers the March Madness pool and calls us back to the office.