
AI too important to be not regulated, says Google


Google on Friday published a whitepaper with suggestions for a policy agenda for responsible AI progress. In a blog post, the company said AI was too important not to regulate.


“Calls for a halt to technological advances are unlikely to be successful or effective, and risk missing out on AI’s substantial benefits and falling behind those who embrace its potential,” Kent Walker, president of global affairs, Google & Alphabet, said in the blog post.

He said broad-based efforts—across government, companies, universities, and more—were needed to help translate technological breakthroughs into widespread benefits, while mitigating risks.

Walker said individual practices, shared industry standards, and sound government policies would be essential to getting AI right.

In the whitepaper, Google said it encourages governments to focus on three key areas—unlocking opportunity, promoting responsibility, and enhancing security.

“Economies that embrace AI will see significant growth, outcompeting rivals that are slower on the uptake. AI will help many different industries produce more complex and valuable products and services, and help increase productivity despite growing demographic challenges,” Walker said.

Discover the stories of your interest


He added that AI also promises a boost both to small businesses using AI-powered products and services to innovate and grow, and to workers who can focus on non-routine and more rewarding elements of their jobs. To make this happen, Walker urged policymakers to invest in innovation and competitiveness, promote legal frameworks that support responsible AI innovation, and prepare workforces for AI-driven job transitions.

“For example, governments should explore foundational AI research through national labs and research institutions, adopt policies that support responsible AI development (including privacy laws that protect personal information and enable trusted data flows across national borders), and promote continuing education, upskilling programmes, movement of key talent across borders, and research on the evolving future of work,” he said.

In the same breath, he said that if AI is not developed and deployed responsibly, AI systems could also amplify current societal issues, such as misinformation, discrimination, and misuse of tools.

To promote responsible AI, he said, some challenges will need fundamental research to better understand AI’s benefits and risks and how to manage them, others will need risk-based regulation, and still others will require new organisations and institutions.

“For example, leading companies could come together to form a Global Forum on AI (GFAI), building on previous examples like the Global Internet Forum to Counter Terrorism (GIFCT). International alignment will also be essential to develop common policy approaches that reflect democratic values and avoid fragmentation,” he said.

Further, Walker said that AI has important implications for global security and stability. Generative AI can help create (but also identify and track) mis- and dis-information and manipulated media. AI-based security research is driving a new generation of cyber defences through advanced security operations and threat intelligence, while AI-generated exploits may also enable more sophisticated cyberattacks by adversaries.

He said it was important to put technical and commercial guardrails in place to prevent malicious use of AI and to work collectively to address bad actors.

“Governments should explore next-generation trade control policies for specific applications of AI-powered software that are deemed security risks, and on specific entities that provide support to AI-related research and development in ways that could threaten global security,” he suggested.

Walker concluded that a policy agenda centred on the pillars of opportunity, responsibility, and security can unlock the benefits of AI and ensure that those benefits are shared by all.

“As we have said before, AI is too important not to regulate, and too important not to regulate well. From Singapore’s AI Verify framework to the UK’s pro-innovation approach to AI regulation to America’s National Institute of Standards & Technology’s AI Risk Management Framework, we are encouraged to see governments around the world seriously addressing the right policy frameworks for these new technologies, and we look forward to supporting their efforts,” he said.

Just over a week ago, at Google’s annual developer conference, CEO Sundar Pichai said the growth of AI is as big a technology shift as we have seen. And while he spoke extensively about being bold in the company’s generative AI efforts, he was equally emphatic about the need to be responsible.

He reiterated that we are at an inflection point: AI can significantly improve the lives of billions of people, help businesses thrive and grow, and support society in answering our toughest questions, but we must be clear-eyed that it will come with risks and challenges.
