In May, more than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories warned.
This came on the heels of another high-profile letter, signed by the likes of Elon Musk and Steve Wozniak, a co-founder of Apple, calling for a six-month moratorium on the development of advanced A.I. systems.
Meanwhile, the Biden administration has urged responsible A.I. innovation, stating that “in order to seize the opportunities” it offers, we “must first manage its risks.” In Congress, Senator Chuck Schumer called for “first of their kind” listening sessions on the potential and risks of A.I., a crash course of sorts from industry executives, academics, civil rights activists and other stakeholders.
The mounting anxiety about A.I. isn’t because of the boring but reliable technologies that autocomplete our text messages or direct robot vacuums to dodge obstacles in our living rooms. It is the rise of artificial general intelligence, or A.G.I., that worries the experts.
A.G.I. doesn’t exist yet, but some believe that the rapidly growing capabilities of OpenAI’s ChatGPT suggest its emergence is near. Sam Altman, a co-founder of OpenAI, has described it as “systems that are generally smarter than humans.” Building such systems remains a daunting – some say impossible – task. But the benefits appear truly tantalizing.
Imagine Roombas, no longer condemned to vacuuming the floors, that evolve into all-purpose robots, happy to brew morning coffee or fold laundry – without ever being programmed to do these things.

Sounds appealing. But should these A.G.I. Roombas get too powerful, their mission to create a spotless utopia might get messy for their dust-spreading human masters. At least we’ve had a good run.
Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization. Mr. Altman, the face of this campaign, embarked on a global tour to charm lawmakers. Earlier this year he wrote that A.G.I. might even turbocharge the economy, boost scientific knowledge and “elevate humanity by increasing abundance.”
This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral.
They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.
But this ideology – call it A.G.I.-ism – is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.
Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, certainly not to the market.
Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. – or at least the rush to build it – is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.
Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation.
Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.
It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.
Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).
These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how.
A.G.I. will never overcome the market’s demands for profit.
Remember when Uber, with its cheap rates, was courting cities to serve as their public transportation systems?
It all began nicely, with Uber promising implausibly cheap rides, courtesy of a future with self-driving cars and minimal labor costs. Deep-pocketed investors loved this vision, even absorbing Uber’s multibillion-dollar losses.
But when reality descended, the self-driving cars were still a pipe dream. The investors demanded returns, and Uber was forced to raise prices. Users who had relied on it to replace public buses and trains were left on the sidewalk.
The neoliberal instinct behind Uber’s business model is that the private sector can do better than the public sector – the market bias.
It’s not just cities and public transit. Hospitals, police departments and even the Pentagon increasingly rely on Silicon Valley to accomplish their missions.
With A.G.I., this reliance will only deepen, not least because A.G.I. is unbounded in its scope and ambition. No administrative or government services would be immune to its promise of disruption.
Moreover, A.G.I. doesn’t even have to exist to lure them in. This, at any rate, is the lesson of Theranos, a former darling of America’s elites that promised to “solve” health care with a revolutionary blood-testing technology. Its victims are real, even if its technology never was.
After so many Uber- and Theranos-like traumas, we already know what to expect of an A.G.I. rollout. It will consist of two phases. First, the charm offensive of heavily subsidized services. Then the ugly retrenchment, with the overdependent users and agencies shouldering the costs of making them profitable.
As always, Silicon Valley mavens play down the market’s role. In a recent essay titled “Why A.I. Will Save the World,” Marc Andreessen, a prominent tech investor, even proclaims that A.I. “is owned by people and controlled by people, like any other technology.”
Only a venture capitalist can traffic in such exquisite euphemisms. Most modern technologies are owned by corporations. And they – not the mythical “people” – will be the ones that will monetize saving the world.
And are they really saving it? The record, so far, is poor. Companies like Airbnb and TaskRabbit were welcomed as saviors for the beleaguered middle class; Tesla’s electric cars were seen as a remedy to a warming planet. Soylent, the meal-replacement shake, embarked on a mission to “solve” global hunger, while Facebook vowed to “solve” connectivity issues in the Global South. None of these companies saved the world.
A decade ago, I called this solutionism, but “digital neoliberalism” would be just as fitting. This worldview reframes social problems in light of for-profit technological solutions. As a result, concerns that belong in the public domain are reimagined as entrepreneurial opportunities in the marketplace.
A.G.I.-ism has rekindled this solutionist fervor. Last year, Mr. Altman stated that “A.G.I. is probably necessary for humanity to survive” because “our problems seem too big” for us to “solve without better tools.” He’s recently asserted that A.G.I. will be a catalyst for human flourishing.
But companies need profits, and such benevolence, especially from unprofitable firms burning investors’ billions, is uncommon. OpenAI, having accepted billions from Microsoft, has contemplated raising another $100 billion to build A.G.I. Those investments will have to be earned back, and the service’s operating costs are staggering. (One estimate from February put the expense of operating ChatGPT at $700,000 per day.)
Thus, the ugly retrenchment phase, with aggressive price hikes to make an A.G.I. service profitable, might arrive before “abundance” and “flourishing.” But how many public institutions will, by then, have mistaken fickle market subsidies for affordable technologies and become dependent on OpenAI’s expensive offerings?
And if you dislike your town outsourcing public transportation to a fragile start-up, would you want it farming out welfare services, waste management and public safety to the possibly even more volatile A.G.I. firms?
A.G.I. will dull the pain of our thorniest problems without fixing them.
Neoliberalism has a knack for mobilizing technology to make society’s miseries bearable. I recall an innovative tech venture from 2017 that promised to improve commuters’ use of a Chicago subway line. It offered rewards to discourage metro riders from traveling at peak times. Its creators leveraged technology to influence the demand side (the riders), seeing structural changes to the supply side (like raising public transport funding) as too difficult. Tech would help Chicagoans adapt to the city’s deteriorating infrastructure rather than fix it to meet the public’s needs.
This is the adaptation bias – the aspiration that, with a technological wand, we can become desensitized to our plight. It’s the product of neoliberalism’s relentless cheerleading for self-reliance and resilience.
The message is clear: gear up, enhance your human capital and chart your course like a start-up. And A.G.I.-ism echoes this tune. Bill Gates has trumpeted that A.I. can “help people everywhere improve their lives.”
The solutionist feast is only getting started: Whether it’s fighting the next pandemic, the loneliness epidemic or inflation, A.I. is already pitched as an all-purpose hammer for many real and imaginary nails. However, the decade lost to the solutionist folly reveals the limits of such technological fixes.
To be sure, Silicon Valley’s many apps – to monitor our spending, calories and workout regimes – are occasionally helpful. But they mostly ignore the underlying causes of poverty or obesity. And without tackling the causes, we remain stuck in the realm of adaptation, not transformation.
There’s a difference between nudging us to follow our walking routines – a solution that favors individual adaptation – and understanding why our towns have no public spaces in which to walk – a prerequisite for a politics-friendly solution that favors collective and institutional transformation.
But A.G.I.-ism, like neoliberalism, sees public institutions as unimaginative and not particularly productive. They should just adapt to A.G.I., at least according to Mr. Altman, who recently said he was nervous about “the speed with which our institutions can adapt” – part of the reason, he added, “why we want to start deploying these systems really early, while they’re really weak, so that people have as much time as possible to do this.”
But should institutions only adapt? Can’t they develop their own transformative agendas for improving humanity’s intelligence? Or do we use institutions only to mitigate the risks of Silicon Valley’s own technologies?
A.G.I. undermines civic virtues and amplifies trends we already dislike.
A common criticism of neoliberalism is that it has flattened our political life, rearranging it around efficiency. “The Problem of Social Cost,” a 1960 article that has become a classic of the neoliberal canon, preaches that a polluting factory and its victims should not bother bringing their disputes to court. Such fights are inefficient – who needs justice, anyway? – and stand in the way of market activity. Instead, the parties should privately bargain over compensation and get on with their business.
This fixation on efficiency is how we arrived at “solving” climate change by letting the worst offenders continue as before. The way to avoid the shackles of regulation is to devise a scheme – in this case, carbon trading – that lets polluters buy credits to match the extra carbon they emit.
This culture of efficiency, in which markets measure the worth of things and substitute for justice, inevitably corrodes civic virtues.
And the problems this creates are visible everywhere. Academics fret that, under neoliberalism, research and teaching have become commodities. Doctors lament that hospitals prioritize more profitable services such as elective surgery over emergency care. Journalists hate that the worth of their articles is measured in eyeballs.
Now imagine unleashing A.G.I. on these esteemed institutions – the university, the hospital, the newspaper – with the noble mission of “fixing” them. Their implicit civic missions would remain invisible to A.G.I., for those missions are rarely quantified even in their annual reports – the sort of materials that go into training the models behind A.G.I.
After all, who likes to boast that his class on Renaissance history got only a handful of students? Or that her article on corruption in some faraway land got only a dozen page views? Inefficient and unprofitable, such outliers miraculously survive even in the current system. The rest of the institution quietly subsidizes them, prioritizing values other than profit-driven “efficiency.”
Will this still be the case in the A.G.I. utopia? Or will fixing our institutions through A.G.I. be like handing them over to ruthless consultants? They, too, offer data-bolstered “solutions” for maximizing efficiency. But these solutions often fail to grasp the messy interplay of values, missions and traditions at the heart of institutions – an interplay that is rarely visible if you only scratch their data surface.
In fact, the remarkable performance of ChatGPT-like services is, by design, a refusal to grasp reality at a deeper level, beyond the data’s surface. So whereas earlier A.I. systems relied on explicit rules and required someone like Newton to theorize gravity – to ask how and why apples fall – newer systems like A.G.I. simply learn to predict gravity’s effects by observing millions of apples fall to the ground.
However, if all that A.G.I. sees are cash-strapped institutions fighting for survival, it may never infer their true ethos. Good luck discerning the meaning of the Hippocratic oath by observing hospitals that have been turned into profit centers.
Margaret Thatcher’s other famous neoliberal dictum was that “there is no such thing as society.”
The A.G.I. lobby unwittingly shares this grim view. For them, the kind of intelligence worth replicating is a function of what happens in individuals’ heads rather than in society at large.
But human intelligence is as much a product of policies and institutions as it is of genes and individual aptitudes. It’s easier to be smart on a fellowship in the Library of Congress than while working several jobs in a place without a bookstore or even decent Wi-Fi.
It doesn’t seem all that controversial to suggest that more scholarships and public libraries will do wonders for boosting human intelligence. But for the solutionist crowd in Silicon Valley, augmenting intelligence is primarily a technological problem – hence the excitement about A.G.I.
However, if A.G.I.-ism really is neoliberalism by other means, then we should be ready to see fewer – not more – intelligence-enabling institutions. After all, they are the remnants of that dreaded “society” that, for neoliberals, doesn’t really exist. A.G.I.’s grand project of amplifying intelligence may end up shrinking it.
Because of such solutionist bias, even seemingly innovative policy ideas around A.G.I. fail to excite. Take the recent proposal for a “Manhattan Project for A.I. Safety.” This is premised on the false idea that there’s no alternative to A.G.I.
But wouldn’t our quest for augmenting intelligence be far more effective if the government funded a Manhattan Project for culture and education and the institutions that nurture them instead?
Without such efforts, the vast cultural resources of our existing public institutions risk becoming mere training data sets for A.G.I. start-ups, reinforcing the falsehood that society doesn’t exist.
Depending on how (and if) the robot rebellion unfolds, A.G.I. may or may not prove an existential threat. But with its antisocial bent and its neoliberal biases, A.G.I.-ism already is: We don’t need to wait for the magic Roombas to question its tenets.