One question is whether the algorithmic processes of today’s chatbots, which operate by scanning the relationships between symbols in vast text libraries, are a form of mental organization that can surface new information and insight.
If so, it takes a human to recognize it. The bots don’t know or care whether their output has true information value to human recipients. Whether they even have motives apparently isn’t clear to the most successful of them. “ChatGPT is not inherently programmed with motives or intentions,” it tells me, and yet “developers may have programmed it with certain goals or intentions.”
The new chatbots are a quicker way to get answers than traditional search. Me: “Is the Hyundai Santa Cruz subject to the chicken tax?” Bing chatbot: “The Hyundai Santa Cruz is not subject to the Chicken Tax because it is produced in the United States. The Chicken Tax is a 25% tariff on imported light trucks that dates back to 1964.”
But you also have to check because the system is designed to produce sensible sentences, not to be accurate. ChatGPT also risks becoming stillborn informationally precisely because it cuts off the flow of advertising dollars to the underlying sources it feeds on. If the new bots are not to be self-extinguishing, a business model will have to be found.
One fear, loss of jobs, strikes me as wrongheaded when, in fact, we need giant productivity gains just to pay off Social Security and Medicare and meet the needs of our aging and retired population.
If journalists are especially alarmed by ChatGPT, that’s because more than other humans we suspect we are algorithms too. Our idea of truth on any subject is whatever pattern of words and phrases prevails at any moment in our milieu. I’m not making a joke. OpenAI’s leader Sam Altman, in his ingratiating appearance before Congress this week, offered a reminder to counter much media hysteria: “It’s important to understand that GPT-4 is a tool, not a creature.”
Capabilities are said to be advancing faster than we can assess them; unexpected attributes may emerge, like self-awareness, causing some to insist on attributing “rights” to the non-creature. If so, that would still be up to us, just as whether to attribute rights to a cow or unborn infant is up to us.
The fear most often voiced is that, intentionally or accidentally, a reward system will be introduced that causes an AI to decide its goals are best pursued by enslaving humanity or getting rid of it.
Through some wrinkle not clearly specified, it controls the analog tools to do so, including the analog tools known as human beings, whom it blackmails into serving its ends.
My immediate fear is different: The new chatbots’ ability to generate infinite reams of textual output will swamp the human texts on which they feed, filling up the information space with derivative spew. Google search won’t be obsolete after all; it will be urgently upgraded to weed out dead-end algorithmic blather in favor of those texts and other unfaked documents that are genuinely imbued with human reasoning and knowledge.
In any scenario, China is unlikely to join a moratorium that some have proposed to delay the arrival of general machine intelligence or licensing schemes that are promoted in hopes of allowing into the world only AIs that do things we like.
Maybe China will be the site of a future AI Chernobyl from which we can all learn, but artificial intelligence seems sure to advance as long as human civilization doesn’t destroy itself by the means already available to it. And, on the whole, it seems better that it do so.
A serious theory holds that we don’t detect cosmic signals from advanced civilizations because advanced civilizations don’t survive their own technological innovations. Yet the same theorizers would also admit that technology is our only hope. The fossil record suggests a species like Homo sapiens will go extinct a lot sooner than the schedule that worries Elon Musk, the AI critic who nevertheless wants to plan for the death of the sun in five billion years or so. One study estimates that the average mammalian species lasts just two million years.
The certain prognosis of natural history, then, is that, while our technology may doom us, we are certainly doomed without it. Unless faster-than-light travel can be invented, robots and artificial intelligence would seem the only plausible way of distributing our biological seed to distant planets, then constructing the initial habitat and support infrastructure to allow it to thrive there.
And, no, warp drive may not be a deus ex machina. To argue that faster-than-light travel is feasible may be tantamount to arguing we’re already doomed because some advanced civilization already roaming the galaxy surely won’t welcome competition from us. My guess is that a universe that requires AI for interstellar colonization will be safer for us than a universe that enables lightspeed travel.