The explosion of generative AI – which can create text, photos and videos in response to open-ended prompts – has in recent months spurred both excitement about its potential and fears that it could make some jobs obsolete, upend economies and even possibly overpower humans.
“We are flying down the highway in this car of AI,” said Ian Swanson, CEO and co-founder of Protect AI, which helps businesses secure their AI and machine learning systems, during a Reuters MOMENTUM panel on Tuesday.
“So what do we need to do? We need to have safety checks. We need to do the proper basic maintenance and we need regulation.”
Regulators need look no further than social media platforms to understand how the unchecked growth of a new industry can lead to negative consequences, such as creating an information echo chamber, said Seth Dobrin, CEO of Trustwise.
“If we expand the digital divide … that’s going to lead to disruption of society,” Dobrin said. “Regulators need to think about that.”
Regulation is already being prepared in several countries to tackle issues around AI. The European Union’s proposed AI Act, for example, would classify AI applications into different risk levels, banning uses considered “unacceptable” and subjecting “high-risk” applications to rigorous assessments.
U.S. lawmakers last month introduced two separate AI-focused bills, one that would require the U.S. government to be transparent when using AI to interact with people and another that would establish an office to determine if the United States remains competitive in the latest technologies.
One emerging threat that lawmakers and tech leaders must guard against is the possibility of AI making nuclear weapons even more powerful, Anthony Aguirre, founder and executive director of the Future of Life Institute, said in an interview at the conference.
Developing ever more powerful AI also risks eliminating jobs faster than humans can learn new skills and move into other industries, he said.
“We’re going to end up in a world where our skills are irrelevant,” he said.
The Future of Life Institute, a nonprofit aimed at reducing catastrophic risks from advanced artificial intelligence, made headlines in March when it released an open letter calling for a six-month pause on the training of AI systems more powerful than OpenAI’s GPT-4. It warned that AI labs have been “locked in an out-of-control race” to develop “powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
“It seems like the most obvious thing in the world not to put AI into nuclear command and control,” he said. “That doesn’t mean we won’t do that, because we do a lot of unwise things.”