Generative AI has polarized artificial intelligence (AI) experts, with some calling for a six-month moratorium on building new systems and others arguing against the idea on the grounds that the benefits of AI far outweigh the perceived risks. Mint explores the merits of these views:
Who’s suggesting a moratorium and why?
More than a thousand people, including Elon Musk, Yoshua Bengio, Stuart Russell, Gary Marcus and Andrew Yang, have called for a six-month moratorium on training systems that are "more powerful than GPT-4", arguing that such systems should be developed only when the world is confident it can contain the risks. But AI experts like Andrew Ng, co-founder of Coursera, counter that the moratorium is a "terrible idea… I'm seeing many new applications in education, healthcare, food, that'll help many people. Improving GPT-4 will help. Let's balance the huge value AI is creating vs realistic risks."
Why the alarm over generative AI?
AI has gathered momentum in the last 6-7 years by augmenting our smartphones, wearables, laptops, and cars. Sectors like healthcare, retail, oil and gas, utilities, and banking, financial services and insurance now deploy smart chatbots. AI gives us insights from past data and predictive analytics for the future. But the exponential progress in generative AI models, which are used to create new content, appears to have alarmed many, ever since the launch of OpenAI's ChatGPT in December 2022. They fear these models will think and act like humans, plagiarize the work of artists, and take over routine jobs.
What kind of jobs are most at risk?
GPTs could impact at least 10% of the work tasks of 80% of the US workforce, says a study by OpenAI, OpenResearch and the University of Pennsylvania. Programming and writing jobs are more susceptible than those that require scientific and critical thinking. A 26 March note by Goldman Sachs says that, globally, generative AI could expose 300 million full-time jobs to automation.
Can a moratorium be implemented?
It's almost utopian to expect big tech companies, which are not only trying to outrun each other in the race for AI but also have to show returns to shareholders, to halt the progress of these models, even temporarily. Language models tend to hallucinate (convincingly provide wrong answers), but they also benefit society. Ng says: "Having governments pause emerging technologies they don't understand is anti-competitive, sets a terrible precedent, and is awful innovation policy."
Is there a right way to regulate AI?
This is not the first time AI has come under the scanner. "States should place moratoriums on the sale and use of AI systems until adequate safeguards are put in place," UN human rights chief Michelle Bachelet has said. But Ng believes "regulations around transparency and auditing would be more practical." The 2022 'Blueprint for the US AI Bill of Rights' states: "Users should be notified that they are using an automated system and how it contributes to outcomes that impact them." As should be evident, there's no one answer.