Where's Your AI Bunker?
“We’re definitely going to build a bunker before we release AGI.”
The quote is attributed to OpenAI co-founder Ilya Sutskever, who said it during a meeting with new researchers in mid-2023, according to Karen Hao’s new book on the company. He reportedly suggested that the responsible engineers would need to be protected from governments vying for control of the tech.
Other sources, recounting other interactions with Sutskever, say that he expected AGI to cause a “rapture” — a religious term for the end times — and that a bunker would be needed to protect its designers from the unpredictable consequences of their actions.
Do you think you’ll have a space in a bunker, too?
Of course not.
Hao’s 496-page book appears to be a meticulously documented study of OpenAI and, by extension, a narrative of the thinking and actions of all the tech firms pursuing ever-better AI (“AGI” stands for Artificial General Intelligence, a machine that can think as flexibly and autonomously as a human being).
But we already know the punchline: We are all guinea pigs in a massive, irreversible experiment with a technology its own makers don’t fully understand and whose effects they can’t reliably predict.
That punchline gets obscured, mostly on purpose, by the PR chaff coming from the folks conducting the experiment, who claim that their tools will either solve our every problem or destroy humanity, thereby rendering reasoned debate impossible.
In the resulting vacuum, companies have been sold a “next big thing” pitch that requires them to spend oodles of money on AI and push it into their day-to-day operations, for fear that Wall Street might look askance at any lack of such feckless commitment.
Governments have been sidelined, both by their inability to grapple with the enormity of AI’s potential impacts, good or bad, and by the need to posture about “encouraging innovation” and other management consultant-ese. Academia has institutionalized the study of AI, ensuring that any learned insights will arrive too late and too vaguely stated to offend the tech and tech-adjacent sponsors of its work.
And a lot of smart, technology-aware regular folks have been convinced that the coming of AI is no different from that of other technologies, like knitting looms or telephones, their expectations of benefits outweighing costs resting on comparisons to tech transformations that bear no resemblance whatsoever to what’s happening now.
Only knitting looms didn’t learn by themselves, and telephones didn’t decide to do their own talking.
So, we’re not having the conversations we need to have about AI, and I doubt that Karen Hao’s book will change that. I will read it and support her work 100%, but the train has left the station and there’s no turning back.
Ilya Sutskever was scared of what his work would do to us and the world. It led him to quit OpenAI within a year of making the comment quoted in the book. He has since started another AI company, called Safe Superintelligence, which promises to do exactly what OpenAI and others are doing, only somehow do it safely.
I wonder if that includes building a bunker?