The Indian Computer Emergency Response Team (CERT-In) has warned that AI-based platforms, such as ChatGPT, Bard and Bing AI, could be used by scammers to target individuals and organisations.
A ‘threat actor’ could use such an application to write malicious code, exploit vulnerabilities, and conduct scanning to construct malware or ransomware for a targeted system, warns the cybersecurity watchdog. Cybercriminals could also use artificial intelligence (AI) language models to scrape information from the Internet, such as articles, websites, news, and posts, potentially collecting Personally Identifiable Information without the explicit consent of the owners to build a corpus of text data.
“AI-based applications can generate output in the form of text as if written by humans. This can be used to disseminate fake news, run scams, generate misinformation, create phishing messages, or produce deepfake texts,” said CERT-In in an advisory.
A ‘threat actor’ could ask for a promotional e-mail, a shopping notification, or a software update notice in their native language and get a well-crafted response in English, which could then be used in phishing campaigns, it added.
The advisory assumes significance in the context of recent concerns raised by the likes of Elon Musk. Prominent experts have asked artificial intelligence labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.
The open letter has many notable signatories, including Tesla and Twitter CEO Elon Musk; Apple co-founder Steve Wozniak; Swedish-American physicist and popular science writer Max Tegmark; Israeli historian Yuval Noah Harari; and many more. It comes even as reports suggest that big tech players have recently laid off employees from their ‘responsible AI’ teams.
Consequences
The letter signed by the tech doyens stated that AI labs are locked in an out-of-control race to develop ever more powerful digital minds, without the planning and management needed to appraise and control the consequences of such technologies.