The Meta-owned encrypted messaging app banned 2,069,000 accounts through its abuse detection technology, which operates at three stages of an account’s lifecycle: at registration, during messaging, and in response to negative feedback, received in the form of user reports and blocks. It also ‘actioned’ 18 accounts on the basis of 500 user complaints.
‘Accounts Actioned’ denotes reports on which WhatsApp took remedial action, meaning it either banned an account or restored a previously banned account as a result of the complaint.
“Over the years, we have consistently invested in Artificial Intelligence and other state-of-the-art technology, data scientists and experts, and in processes, in order to keep our users safe on our platform,” said a WhatsApp spokesperson.
The disclosures are mandated by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. WhatsApp took these actions based on grievances received from users in India, and on accounts detected through its own prevention and detection methods as violating Indian law or WhatsApp’s Terms of Service.
In addition to responding to and actioning user complaints received through the grievance channel, WhatsApp deploys tools and resources to prevent harmful behaviour on the platform. The company said it is particularly focused on prevention because it believes it is far better to stop harmful activity from happening in the first place than to detect it after harm has occurred.