New Delhi
Saturday, December 14, 2024
India’s position as a neutral jurisdiction key to AI framework


India’s role as a neutral jurisdiction will be critical in the evolution of artificial intelligence frameworks, said industry leaders.


Building AI development and learning ecosystems that prevent propagation of biases is integral to creating responsible AI frameworks, they said, speaking at a roundtable with ET.

IT industry body Nasscom had earlier this month released guidelines for responsible AI usage. For the development of generative AI solutions, the guidelines recommend cautious use and risk assessment of potential harms throughout the lifecycle of the solutions. The guidelines also call for publicly disclosing data and algorithm sources unless developers can prove that disclosing such information could harm public safety.

Since much of AI usage involves natural language interfaces, it is essential to bring people with strong backgrounds in the human sciences, such as sociology and philosophy, into the development of these solutions, said Tejal Patil, General Counsel, Wipro.

“…our traditional methods of software development focus on siloed skills,” said Patil. “However, the role of the developer will expand exponentially to include checking for biases, planning, testing and governance of these solutions and there will be an increasing need to bring additional expertise.”

While different geographies have adopted different approaches to regulate AI, India’s role as a neutral jurisdiction will be critical in the evolution of AI frameworks, she said.

At present, the Indian government has said it will look at AI regulation only through the prism of user harm.

Hasit Trivedi, Tech Mahindra’s chief technology officer, digital technologies, and global head – AI, said that from a skills perspective, software development ecosystems tend to lack knowledge about the legal implications of IP infringement. These issues have been called out globally amid increasing cases of generative AI platforms producing visual, software and textual content without due credit to the original creators of the base content.

“Cross-functional teams of technology, legal experts, marketing and human resources, among others, have to come together to ensure that enterprises have the right strategy to deal with such a disruptive technology,” he added.

Misinformation, IP infringement, data privacy violations, propagation of biases, large-scale disruption of life and livelihood, environmental degradation and malicious cyberattacks are some of the top harms and malpractices that the Nasscom guidelines have called out.

Ashish Aggarwal, Nasscom’s vice president and head of public policy, said while biases within AI systems may not be intentional, they are a major threat to the efficacy of these systems.

“We believe that bias is a serious problem whether it is introduced to AI systems knowingly or unknowingly,” said Aggarwal. “One of the important ways of addressing this issue during the development of AI solutions is to ensure that businesses work with diverse and inclusive teams that bring in multiple viewpoints and help to call out such biases,” he added.

The Nasscom guidelines also call for explainability of the outputs generated by generative AI algorithms, as well as grievance redressal mechanisms to address mishaps during the development or use of such solutions.
