Friday, November 22, 2024

AI poses ‘risk of extinction,’ industry leaders warn


A group of industry leaders warned Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars.


“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in AI.

The signatories included top executives from three of the leading AI companies: Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and Dario Amodei, CEO of Anthropic.

Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern AI movement, signed the statement, as did other prominent researchers in the field.

The statement comes at a time of growing concern about the potential harms of AI. Recent advancements have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.

These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building – and, in many cases, are furiously racing to build faster than their competitors – poses grave risks and should be regulated more tightly.

This month, Altman, Hassabis and Amodei met with President Joe Biden and Vice President Kamala Harris to talk about AI regulation. In Senate testimony after the meeting, Altman warned that the risks of advanced AI systems were serious enough to warrant government intervention and called for regulation of AI for its potential harms.

Dan Hendrycks, executive director of the Center for AI Safety, said that the open letter represented a “coming-out” for some industry leaders who had expressed concerns – but only in private – about the risks of the technology they were developing.

“There’s a very common misconception, even in the AI community, that there only are a handful of doomers,” Hendrycks said. “But, in fact, many people privately would express concerns about these things.”

