The voluntary code of conduct will be a landmark for how major countries govern AI, amid privacy concerns and security risks, the document seen by Reuters showed.
Leaders of the Group of Seven (G7) economies, comprising Canada, France, Germany, Italy, Japan, Britain and the United States, along with the European Union, kicked off the process in May at a ministerial forum dubbed the "Hiroshima AI process".
The 11-point code “aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems”, the G7 document said.
It “is meant to help seize the benefits and address the risks and challenges brought by these technologies”.
The code urges companies to take appropriate measures to identify, evaluate and mitigate risks across the AI lifecycle, as well as tackle incidents and patterns of misuse after AI products have been placed on the market.
Companies should post public reports on the capabilities, limitations and the use and misuse of AI systems, and also invest in robust security controls.

The EU has been at the forefront of regulating the emerging technology with its hard-hitting AI Act, while Japan, the United States and countries in Southeast Asia have taken a more hands-off approach than the bloc to boost economic growth.
European Commission digital chief Vera Jourova, speaking at a forum on internet governance in Kyoto, Japan earlier this month, said that a Code of Conduct was a strong basis to ensure safety and that it would act as a bridge until regulation is in place.