The Ministry of Electronics and Information Technology (MeitY) and the Ministry of Corporate Affairs (MCA) may not be on the same page on regulating artificial intelligence (AI) in the country. While the MCA, through the proposed Digital Competition law, is looking to curb potential harm (ex-ante), MeitY may be considering the actual harm posed to users.
These diametrically different approaches to regulating AI could pose difficult issues for the government, which is keen to harness the transformative technology’s full potential through responsible regulation, economy watchers said. Care should therefore be taken, they added, to ensure that the growth of this new technology is not hindered and that innovation is not stifled.
Balancing innovation and safety will be essential to creating a sustainable and inclusive AI landscape that benefits humanity as a whole, experts said.
Bone of contention
An MCA-appointed inter-ministerial panel — Committee on Digital Competition Law (CDCL) — is in the final stages of firming up a draft of the proposed Digital Competition Bill, which will seek to bring digital gatekeepers, including AI platforms, under the ex-ante framework.
On Friday, Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, said the Centre’s approach to regulating AI will be through the prism of “user harm” — to ensure that it does not harm ‘digital citizens’. This approach sits oddly with the proposed Digital Competition Bill mooted by the MCA-appointed panel.
The Bill envisages special obligations that digital gatekeepers must follow on an ex-ante basis, without waiting for ‘actual user harm’.
Why regulate AI
AI has rapidly emerged as a transformative technology, promising immense benefits and advancements across various sectors of society. However, as it becomes more integrated into daily life, concerns surrounding its ethical implications, potential risks and societal impact have also grown. In response, regulatory frameworks are being developed to govern the development, deployment and use of AI technologies.
The exponential growth of AI has raised ethical concerns and risks, such as bias and discrimination, privacy infringement and job displacement. The lack of accountability and transparency in AI algorithms has further fuelled the urgency for regulation.
AI systems must be developed and utilised responsibly to ensure they align with human values, avoid harm and promote trust in the technology.
Governments worldwide have recognised the importance of AI regulation and have taken steps to address its challenges. The European Union’s General Data Protection Regulation (GDPR) has established rules for data protection, including AI-generated data. Additionally, the EU has proposed the Artificial Intelligence Act, aiming to create a harmonised regulatory framework for AI applications, ensuring transparency, accountability and safety.