Email Phishing Attacks Up 1,265% Since ChatGPT Launched: SlashNext

Generative AI has revolutionized almost every facet of daily life in a relatively short time. An unfortunate side effect of its rush into the mainstream, however, is a corresponding surge in phishing scams that exploit the technology. A new report by cybersecurity firm SlashNext says phishing emails have jumped 1,265% since the launch of ChatGPT.

On top of the development of malware-generating AI tools like WormGPT, Dark Bart, and FraudGPT, which are spreading on the dark web, cybercriminals are also finding new ways to jailbreak OpenAI’s flagship AI chatbot.

“When ChatGPT released in February, we saw a dramatic increase in the number of phishing attacks, obviously partly driven by, just in general, overall attacks because of the success,” SlashNext CEO Patrick Harr told Decrypt.

A phishing attack is a cyberattack delivered by email, text, or social media message that appears to come from a reputable source. Phishing attacks can also direct victims to malicious websites that trick them into signing transactions with their crypto wallets, draining their funds.
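As a toy illustration of why these link-based lures work, the Python sketch below (ours, not SlashNext’s) checks a URL for a few classic phishing tells: punycode hosts, lookalike domains, and credential-harvesting keywords. The brand list and URL are hypothetical; real filters are far more sophisticated.

```python
import re
from urllib.parse import urlparse

# Hypothetical brands a filter might protect; real tools use large curated lists.
PROTECTED_DOMAINS = {"paypal.com", "metamask.io", "coinbase.com"}

def phishing_signals(url: str) -> list[str]:
    """Return a list of naive red flags found in a URL (illustrative only)."""
    signals = []
    host = urlparse(url).hostname or ""

    # Punycode hosts can disguise lookalike Unicode characters.
    if host.startswith("xn--") or ".xn--" in host:
        signals.append("punycode host")

    # A protected brand appearing inside another site's hostname, e.g.
    # paypal.com.account-verify.example, is a classic credential-phishing pattern.
    for brand in PROTECTED_DOMAINS:
        if brand in host and not host.endswith(brand):
            signals.append(f"lookalike of {brand}")

    # Keywords commonly used to create urgency in credential lures.
    if re.search(r"(verify|login|wallet|airdrop|unlock)", url, re.I):
        signals.append("credential-lure keyword")

    return signals

print(phishing_signals("http://paypal.com.account-verify.example/login"))
# ['lookalike of paypal.com', 'credential-lure keyword']
```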

According to SlashNext’s report, roughly 31,000 phishing attacks were sent daily in the last quarter of 2022, which already represented a 967% increase in credential phishing. SlashNext said 68% of all phishing attacks are text-based business email compromise (BEC) attempts, and 39% of all mobile-based attacks were SMS phishing (smishing) attacks.

“While there has been some debate about the true influence of generative AI on cybercriminal activity, we know from our research that threat actors are leveraging tools like ChatGPT to help write sophisticated, targeted business email compromises and other phishing messages,” Harr said.

Hackers Keep Finding New and Sophisticated Ways to Use AI for Crime

“At its core, these are link-based attacks that try to get you to give up your credentials, username, and password,” Harr added, noting that phishing attacks can also lead to more persistent ransomware being installed. “The Colonial Pipeline attack was a credential phishing attack; [the attackers] were able to gain access to a user’s username and password.”

As cybercriminals use generative AI to target victims, Harr said cybersecurity professionals should go on the offensive and fight AI with AI.

“These companies have to incorporate [AI] directly into their security programs so that [the AI] is constantly scouring through all their messaging channels to remove these [threats],” he said. “That’s exactly why we use generative AI in our own tool sets to detect and not just block, but we also use it to predict how the next one’s going to happen.”

But while Harr is optimistic about the ability of AI to catch rogue AIs in the act, he acknowledges that it will take more than telling ChatGPT to look out for cyber threats.

“You have to have the equivalent of a private large language model application sitting on top of that, which is tuned to look for the nefarious threats,” he said.
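Harr didn’t describe SlashNext’s internals, but a bare-bones version of that idea, an LLM layer screening inbound messages for phishing tells, might look something like the sketch below. It assumes OpenAI’s standard Python client; the model choice and prompt are our own placeholders, not anything SlashNext has published.

```python
# Minimal sketch of an LLM-based phishing screen, not SlashNext's actual system.
# Assumes the official OpenAI Python client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an email security filter. Classify the message as PHISHING or "
    "BENIGN and give a one-sentence reason. Look for credential lures, "
    "urgency, lookalike domains, and payment or wallet requests."
)

def screen_message(body: str) -> str:
    """Ask the model to classify one inbound message (illustrative only)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; production systems tune their own models
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": body},
        ],
    )
    return response.choices[0].message.content

print(screen_message("Your account is locked. Verify your wallet seed at http://..."))
```

A production deployment would sit inline on email, SMS, and chat channels and quarantine flagged messages rather than just print a verdict, which is closer to the “constantly scouring” behavior Harr describes.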

While AI developers like OpenAI, Anthropic, and Midjourney have worked hard to build guardrails against using their platforms for nefarious purposes like phishing attacks and spreading misinformation, determined users keep finding ways to circumvent them.

Last week, the RAND Corporation released a report suggesting that terrorists could learn how to carry out a biological attack using generative AI chatbots. While the chatbot it tested would not explain how to build a weapon, the report found that jailbreaking prompts could get it to discuss how such an attack might be carried out.

Researchers have also found that by prompting ChatGPT in less commonly tested languages like Zulu and Gaelic, they could bypass its restrictions and get the chatbot to explain how to get away with robbing a store.

In September, OpenAI launched an open call for offensive cybersecurity professionals, also known as red teamers, to help find security holes in its AI models.

“Companies have to rethink their security postures,” Harr concluded. “They need to use generative AI-based tools to detect and respond to these things or, more importantly, not just respond [but] also detect, block, and stop [attacks] before they take action.”
