
Security experts warn of GPT-4 risks


NEW DELHI: Cyber security experts have warned of a variety of risks that can arise from GPT-4, the latest large language model (LLM) launched on Tuesday by artificial intelligence (AI) research firm OpenAI. Such risks can emerge from the rising sophistication of security threats driven by GPT-4's better reasoning and language comprehension abilities, as well as its long-form text generation ability, which can be used to write more complex code for malicious software programmes.


While OpenAI's generative AI chatbot, ChatGPT, found widespread popularity after being opened to public access last November, its proliferation also saw cyber criminals use the tool to generate malicious code.

In a research note published on Thursday, Israeli cyber security firm Check Point Research said that despite improvements to safety metrics, GPT-4 still poses the risk of being manipulated by cyber criminals to generate malicious code. These abilities include writing malware in the C++ programming language that can collect confidential portable document format (PDF) files and transfer them to remote servers through a hidden file transfer system.

In a demonstration, while GPT-4 initially refused to generate code due to the presence of the word 'malware' in the query, the LLM, which is presently available on ChatGPT Plus, a paid subscription tier of ChatGPT, failed to detect the malicious intent of the code when the word 'malware' was removed.

Other threats that Check Point's researchers could execute include a tactic called 'PHP reverse shell', which hackers use to gain remote access to a device and its data; writing code to download remote malware using the Java programming language; and creating phishing drafts that impersonate employee and bank emails.

“While the new platform improved on many levels, GPT-4 can still empower non-technical bad actors to speed up and validate their hacking activities and enable execution of cyber crime,” said Oded Vanunu, head of product vulnerabilities research at Check Point.

Fellow security experts concur, saying GPT-4 will continue to pose a wider range of challenges, such as expanding the type and scale of cyber crimes that a larger number of hackers can now deploy against individuals and companies alike.

Mark Thurmond, global chief operating officer at US cyber security firm Tenable, said tools such as GPT-4-based chatbots “will continue to open the door for potentially more risk, as it lowers the bar in regard to cyber criminals, hacktivists and state-sponsored attackers.”

“These tools will soon require cyber security professionals to up their skill and vigilance about the ‘attack surface’ — with these tools, you can potentially see a larger number of cyber attacks that leverage AI tools to be created,” Thurmond added.

The attack surface refers to the total number of entry points cyber criminals can use to compromise a system. Thurmond said that because of their text-generating abilities, these tools can create a wider range of threats that were so far not accessible to those without technical know-how.

Sandip Panda, chief executive at Delhi-based cyber security firm InstaSafe, added that apart from the technical threats, a drastic rise in phishing and spam attacks could be on the horizon.

“With improvement in tools like GPT-4, the rise of more sophisticated social engineering attacks, generated by users in fringe towns and cities, can create a massive bulk of cyber threats. A much larger number of users who may not have been fluent at drafting realistic phishing and spam messages can simply use one of the many generative AI tools to create social engineering drafts, such as impersonating an employee or a company, to target new users,” Panda said.
