The AI-pocalypse Is Coming!
Are you an AI doomsdayer? Those of us holding on to any kind of rational skepticism about man's newest discovery of "fire" taking over the world and enslaving humans in the very near future are becoming increasingly rare. Along with some actual promise to positively impact our lives, the new generative AI has arrived on planet Earth with an abundance of terror. Last month, hundreds of tech leaders, including Sam Altman, CEO of OpenAI (the maker of ChatGPT), put out a statement warning of man's extinction due to AI.
“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
The Washington Post recently ran the headline,
“How AI Could Accidentally Extinguish Humankind”.
Pop intellectual Yuval Harari told The Economist that
“AI has hacked the operating system of humanity.”
Creative vs subjective
The leap to "creativity" is exciting and questionable. Is this what is sending folks into "End Times" mode? Because we are certainly there. Turn on any news program. AI has become the new catch-all term for our fear of the future. The subjugation of man by machines has been predicted for years now, but the appearance of ChatGPT and Bard has made death, doom, and gloom by the apocalyptic horseman of tech very real. We can hear his snorting breath, his clattering hooves on the pavement of panic. (I refer to the Book of Revelation because I haven't seen The Terminator movies. Is this why I haven't yet caught the imminent threat?)
Some say the new AI has "gone rogue," like HAL from 2001: A Space Odyssey. Is this trained narrative voice truly a subject in its own right? Or is it an illusion to think of AI as a rogue first-person, non-reductive, conscious subject? This is the question that appears to haunt its own creators, like Frankenstein's monster.
We have been debating whether AI could be conscious for a while now. It would help, of course, to know what consciousness is. The crowd most afraid of this new AI tends to be the same folks who think that humans are biological robots, that we have no consciousness or free will. Once you take those away from us, I can understand how one might feel eerily similar to ChatGPT. The deconstruction of humanity by the "third-person" or "objectivist" positivists has not been good for their own mental health.
Hype/Fear cycles
Have you noticed that we don't bother to say artificial intelligence anymore? It's just AI. The shortened acronym reflects how the meaning of the term, too, has become compressed: an icon of fear, shorthand for the march of technology, the inevitable cavalcade of computerization. For many, it is still a symbol of hope and progress. But what really is generative artificial intelligence? A handful of giant tech companies have recently funded or introduced large language models (LLMs) built on a type of neural network called the transformer, introduced in a 2017 paper by Vaswani et al. titled "Attention Is All You Need." This is software housed on hardware. It has access to lots of data and goes very fast (two popular words among the fashion-conscious). And it is being duplicated right and left. Apparently, it is not that difficult to copy, as can be seen in the recent "we have no moat" memo leaked from Google and reported in The Economist.
Amid hype and fear cycles (always mixed like a good cocktail), it is easy to forget the actual history of technology, and with that history the limitations of any product. Remember a decade or more ago, when social media platforms were seen as a technology that was going to save the world? Being on Facebook and Twitter was an absolute must. We had to tweet everything we read or wrote or podcasted, or we weren't alive. At the time, the Arab Spring was blowing up, and we all saw the power of social media to upend traditional institutions. Now, more than a decade on, we're perhaps over that hype/fear cycle. Perhaps it was even a dud. Some good, some harm; overall, the social media platforms were a useful product to many, but nothing apocalyptic or salvational. A friend of mine who grew up in a small rural town in the 1960s told me that when television came to his house, he heard that TV would be the end of civilization. Does anyone talk about cell phones giving us cancer anymore, except Robert Kennedy Jr.?
Don’t get me wrong. I don’t want to chide anyone for this fear. Let us see it for what it is. It is not every day a new technology comes along and scares the bejesus out of us, or threatens us with thoughts that we may no longer be top dog on this planet. This is, let me summon up a pop phrase, “an existential moment.” I’ve been through enough therapy to know the importance of accepting one’s present anxiety.
It's ironic: in order to instill fear of grave consequences to humanity, one must first hype the potential of AI. So what really is the goal of the tech industry leaders? Earlier I mentioned the statement put out recently by tech leaders warning of man's imminent extinction. This came a month or two after an "open letter" put out by the Future of Life Institute calling for all AI labs to immediately pause their work, signed by entrepreneur and technology leader Elon Musk, Apple co-founder Steve Wozniak, and the afore-quoted pop intellectual Yuval Harari.
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
Are these leaders who are looking out for our future welfare hoping to curb this "runaway technology" or not? How does it serve Altman, Musk, and Harari to hype AI in order to warn against it? Altman and Musk have skin in the AI game. Musk has long presented himself as the savior of the world as his overall marketing strategy. Harari, the author of books on the trendy topic of all the human history you need to know in three bullet points, is out in front of every new Silicon Valley technology with a speed that might impress marketers everywhere, if not Olympic judges.
The Empiricists find God
The "empirical" crowd is urging us to take them at their word that the end is nigh. We are not to trust our own experience, for I have not seen, heard, or felt any cataclysmic effects from the new AI. With nuclear bombs, one could see and know the danger, even in the testing phase. Show me, please, how large language models can threaten our existence. Oddly, the evidence-based crowd is telling us to take it on faith: "Believe us, we have seen and heard the god LaMDA, and he's real. We may have made him up, but he exists independently now, and he deserves our respect. Furthermore, you plebeians do not have access and therefore must come to us and believe our holy word." What's going on here? Have the empiricists found God? The Book of Revelation is not a casual reference. Silicon Valley is the new Vatican, with its private access to God/LaMDA, anointed cardinals, holy writs, inside scoop on the future of the world, and power over nations and municipalities, even if they haven't yet agreed on electing Elon Musk the Holy Papa.
I was surprised by the talk that we must shut AI down after the engineers had been working so hard to build it. It’s as though after putting so much research into looking for aliens, calling aliens, hoping that aliens were out there, they now want to give up the moment aliens contact us.
And, if AI truly has gone rogue, isn’t it/he/she a welcome alien, even if we did have to create it? Why the overwhelming fear? Why not love this cute little E.T.?
The Vatican Engineers certainly do have Society (except those who are busy with distractions such as fighting wars, fighting disease, fighting inflation) right where they want us: obsessed with AI, with their latest creation. The Apocalypse thunders and darkens our future. It is as though a meteor were headed for Earth. Large language models are our new fetish. And why not? We must have something to talk about at the water cooler. Tell me if there is a better topic for this Friday's book club. Does anyone remember what else they were talking about in the months leading up to Y2K?
I write this not to downplay the usefulness of artificial intelligence applications budding by the nanosecond. Recently, at my day job, I interviewed a key opinion leader in synthetic biology who said that AI is being used by over 1,000 synthetic biology companies, for example, to synthesize proteins. A recent report details the new AI's potential impact on medical diagnostics. It is our work in the weeks and months ahead to sort out the uses for this powerful new technology from the hype. One cannot help but think of major technological revolutions: television, the calculator, the computer, the internet, and email. Generative AI is a step forward. It will bloom. We will incorporate this technology into our daily lives. And then it will begin to fade as the next new thing appears on the horizon. Every technology has a shelf life. Much as I love my adorable piano, I struggle to find a local technician. Fun fact: when Beethoven arrived as a pianist and composer in Vienna, there were already 300 concert pianists performing in the city.
Language has endured perhaps as well as any human technology. Is the new AI an extension of language? Some suggest, as they have for years, that it is a great opportunity to study the mind. Others disagree, such as philosopher John Searle, who argues that we must study neurophysiology, the actual stuff of the brain, to understand the mind and consciousness. He contends that the brain does not use computation to produce consciousness. Still others argue that these new language models are very similar to cognitive function. Standing here at the starting line for a new technology with such vast potential is awesome. The impulse to put up guardrails, and even the panic, is perhaps understandable. When humans first began writing down language, Socrates argued against it.
And what of the real dangers? What of autonomous smart weapons making their way into the hands of dictators? What of the continued spread of misinformation, social media on steroids? Every advance in technology has had its serious risks. Let us not forget the pandemic we just survived, which could have come from a lab leak. Biosecurity is a very present danger. Nuclear weapons have had every generation since World War II living in existential dread and angst. Perhaps the Silent Generation was the last to know a simple peace in going to sleep at night. Then again, they could argue they had their own dreads and angsts, from the stress of the automobile and the train and the telegraph. Adapt to this new artificial intelligence we must.
When is the apocalypse coming? It’s always coming. In the meantime, I’m going to sleep like ancient humans tonight.