“India took a slightly different position from most countries there (AI Safety Summit). For us safety is important but safety is an issue accompanied with our goals and ambition to grow AI and make sure the ecosystem for AI, startup innovation is as important a goal for us as the safety of AI is. It is not a binary choice that we want to do safety at the expense of the risk of growth of the digital economy,” Chandrasekhar said.
The minister attended the AI Safety Summit 2023, held at Bletchley Park, Buckinghamshire, in the United Kingdom, where 28 nations, including the US and China, participated.
He said it is not enough for the US, Europe or India to have a framework to regulate artificial intelligence; nations across the globe need to come together to build standards and a methodology to determine safe and trusted AI models.
When asked about the impact of the US executive order on AI development in India, the minister said every country will have its own idea of AI regulation, but it has to be a model acceptable to the rest of the world.
“Till the global framework is in place, everybody in the AI ecosystem is going to be challenged by uncertainty about what one country will do, what will other countries do in terms of regulating AI,” Chandrasekhar told reporters.
He said that 28 countries have agreed to closer scrutiny of safety and trust, and now there has to be a discussion on the methodology and processes to do it. That discussion will take place at GPAI (Global Partnership on Artificial Intelligence) 2023, the minister said. GPAI 2023 is scheduled to be held in India from December 12 to 14, 2023.
“AI represents for us in India a big opportunity as it does for the rest of the world. So it should not be demonised to a point where we are regulated out of existence and innovation. We should talk about how to determine safety and trust and who should determine safety and trust,” Chandrasekhar said.
The minister said there are concerns around four types of harms from AI: workforce disruption, the impact on individuals' privacy, harms which are non-criminal, and the weaponisation and criminalisation of AI.
“It is not enough that India determines the model to be safe and trusted. It should be India determining a model to be safe and trusted and acceptable to the rest of the world as well,” Chandrasekhar said.
The minister said the world will have some idea of a framework for AI regulation after an AI summit to be held in South Korea around April-May 2024.
When asked whether a global consensus on AI can be hoped for at a time when there is no global regulation even on cybersecurity, the minister said that over the last 15 years regulation has lagged behind innovation.
“There is a phrase called techno-optimism that all of the countries and governments around the world only looked at technology doing good. It is only in the last 5-7 years that this whole phenomenon of criminalisation, harms, and cyber crimes have really rocketed. There is no global framework. There are some protocols for CERT organizations of different countries to collaborate, but there’s no overall legal framework,” Chandrasekhar said.
He said that countries certainly do not want to repeat that mistake with AI.