About half of Indians surveyed said they are unable to differentiate between a person's real voice and a cloned one, while 83 percent of voice scam victims reported losing money, online security firm McAfee said in a report.
The survey of 7,054 people, including 1,010 respondents from India, was conducted across seven countries and focused on artificial intelligence-enabled voice scams by impostors.
The report suggests agreeing on a verbal codeword with family members and trusted close friends as one protective measure against voice scams.
“About half (47 percent) of Indian adults have experienced or know someone who has experienced some kind of AI voice scam, which is almost double the global average (25 percent). 83 percent of Indian victims said they had a loss of money — with 48 percent losing over Rs. 50,000,” the report said.
McAfee's survey examined how artificial intelligence (AI) technology is fueling a rise in online voice scams, noting that just three seconds of audio is enough to clone a person's voice.
“The survey reveals that more than half (69 percent) of Indians think they don’t know or cannot tell the difference between an AI voice and real voice,” the report said.
The survey found that 66 percent of Indian respondents said they would reply to a voicemail or voice note purporting to be from a friend or loved one in need of money.
“Particularly if they thought the request had come from their parent (46 percent), partner or spouse (34 percent), or child (12 percent). Messages most likely to elicit a response were those claiming that the sender had been robbed (70 percent), was involved in a car incident (69 percent), lost their phone or wallet (65 percent) or needed help while travelling abroad (62 percent),” the report said.

The survey also found that the rise of deepfakes and disinformation has made people warier of what they see online: 27 percent of Indian adults said they are now less trusting of social media than ever before, and 43 percent are concerned about the rise of misinformation or disinformation.
“Artificial Intelligence brings incredible opportunities, but with any technology, there is always the potential for it to be used maliciously in the wrong hands. This is what we’re seeing today with the access and ease of use of AI tools helping cybercriminals to scale their efforts in increasingly convincing ways,” McAfee CTO Steve Grobman said.