NEW DELHI: Companies must adopt proactive and more responsible policies to mitigate bias in artificial intelligence (AI)-based decision-making within their organizations, according to a study by the Aapti Institute and the United Nations Development Programme (UNDP) released on Wednesday.
Adapting policies and regulations to the increasing digitization of business could help companies address the impact of AI on human rights, a growing concern as more enterprises automate services, the study noted. Conversely, the absence of conducive company policies and regulations can exacerbate the effects of AI and automation on workers' human rights, it said.
Companies often use algorithm-based decision-making as a guise to “obfuscate deliberate company policies”, rather than working to establish responsible and explainable AI models, according to the study.
An explainable AI model is one in which the decisions an algorithm makes, and the logic behind them, can be accounted for, making it easier to identify underlying biases and retrain the algorithm accordingly.
Algorithmic bias, according to the study, has the greatest impact on financial services, healthcare, retail and the gig economy. In these sectors, the workers most affected belong to vulnerable and marginalized groups with limited direct access to technology, which restricts their ability to seek recourse if they feel wronged by an automated decision made by their employers.