While several countries have been looking at ways to regulate AI, European lawmakers have taken the lead by drafting AI rules aimed at setting a global standard for a technology key to almost every industry and business. The draft rules could be approved as early as next month.
“We worked on these risk framework that could be applied in a different format, less mandatory than the EU one, but it could be a model for other applications in other parts of the world,” Benifei told the Reuters NEXT conference in New York.
Executives and experts attending the conference stressed the importance of establishing guardrails to AI to prevent threats to society and democracy.
The U.S. Congress has talked about passing significant legislation to address the harm that AI might do, including issues such as how it could affect elections.
President Joe Biden has signed an executive order requiring developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government before they are released to the public.
Liz O’Sullivan, who sits on the National AI Advisory Committee providing advice on U.S. AI strategy, said on Wednesday that AI was fundamentally conservative rather than creative, largely replicating situations it has seen, including any bias. O’Sullivan, the chief executive of Vera, a company that helps firms deploy AI, noted that potential regulation of AI might include measures such as audits by outside stakeholders, impact assessments of risk, and controls like the ability to turn off AI systems.
Last week, Britain published a paper known as the “Bletchley Declaration”, agreed with 28 countries including the U.S. and China, aimed at boosting global efforts to cooperate on AI safety.
The Group of Seven wealthy nations last month agreed on a voluntary code of conduct for companies developing advanced AI, a landmark step in how major countries govern the technology amid privacy concerns and security risks.
“We are in line with many issues that are in the voluntary commitments we see all over the world, but also in the U.S.,” Benifei said. “But we put it in a law so that it’s not a voluntary commitment.”
“We can build these common alphabet because it’s very important to deal with higher level challenges on AI development, for example, the risk of AI used as weapons,” he said.