In October, Google had received 24,569 complaints from users and removed 48,594 pieces of content based on those complaints, while 384,509 pieces of content were removed as a result of automated detection.
The US-based company has made these disclosures as part of compliance with India’s IT rules that came into force in May this year.
Google, in its latest report, said it had received 26,087 complaints in November (November 1-30, 2021) from individual users located in India via designated mechanisms, and the number of removal actions taken as a result of those complaints stood at 61,114.
These complaints relate to third-party content that is believed to violate local laws or personal rights on Google’s significant social media intermediaries (SSMI) platforms, the report said.
“Some requests may allege infringement of intellectual property rights, while others claim violation of local laws prohibiting types of content on grounds such as defamation. When we receive complaints regarding content on our platforms, we assess them carefully,” it added.
The removals fell under several categories, including copyright (60,387), trademark (535), circumvention (131), court order (56) and graphic sexual content (5).
Google explained that a single complaint may specify multiple items that potentially relate to the same or different pieces of content, and each unique URL in a specific complaint is counted as an individual “item” for removal.
For user complaints, the “removal actions” figure represents the number of items where a piece of content was removed or restricted during the one-month reporting period as a result of a specific complaint. For automated detection, it represents the number of instances where Google removed content or prevented a bad actor from accessing the Google service as a result of its automated detection processes.
Google said that, in addition to reports from users, the company invests heavily in fighting harmful content online and uses technology to detect and remove it from its platforms.
“This includes using automated detection processes for some of our products to prevent the dissemination of harmful content such as child sexual abuse material and violent extremist content.
“We balance privacy and user protection to: quickly remove content that violates our Community Guidelines and content policies; restrict content (e.g., age-restrict content that may not be appropriate for all audiences); or leave the content live when it doesn’t violate our guidelines or policies,” it added.
Google said automated detection enables it to act more quickly and accurately to enforce its guidelines and policies. These removal actions may result in removing the content or terminating a bad actor’s access to the Google service, it added.
Under the IT rules, large digital platforms – those with over 5 million users – have to publish compliance reports every month, mentioning the details of complaints received and the action taken thereon.
These reports must also include the number of specific communication links or parts of information that the intermediary has removed or disabled access to through any proactive monitoring conducted using automated tools.