The New Frontier of Cybercrime: Generative AI

As generative AI tools like ChatGPT become more accessible, so does the threat of cybercriminals weaponizing the technology for increasingly persuasive and customized phishing campaigns.

Generative artificial intelligence (AI) tools like ChatGPT are gaining immense popularity, but they also pose dangers in the wrong hands. Threat actors on the dark web are showing increased interest in leveraging these AI systems to create more persuasive and tailored phishing campaigns, potentially increasing the success rate of cyberattacks.

The severity of the issue lies in its implications: both amateur and professional attackers could use these tools to generate increasingly persuasive phishing emails tailored to their intended audiences, raising the likelihood of a successful attack.

Access to AI Systems Being Sold on Dark Web

According to threat exposure management company Flare, ChatGPT was mentioned more than 27,000 times on dark web forums and marketplaces over a recent six-month period.

Flare researchers have identified over 200,000 OpenAI credentials being sold on the dark web in the form of stealer logs. While this is a small fraction of ChatGPT's estimated 100 million monthly users, it demonstrates that threat actors recognize the potential for abuse.

A June 2023 report from cybersecurity firm Group-IB revealed that dark web marketplaces were trading stealer-malware logs containing over 100,000 ChatGPT accounts. As cybercriminals explore the capabilities of generative AI, that interest has already produced concerning developments.

Malicious ChatGPT Alternative Created

A threat actor has developed "WormGPT," a ChatGPT-style chatbot marketed as an AI that "lets you do all sorts of illegal stuff." WormGPT is built on the open-source GPT-J language model and was reportedly trained on a diverse range of data sources, with a particular focus on malware-related material.

Security researchers at SlashNext gained access to WormGPT to test its potential for phishing. They found that WormGPT could generate remarkably persuasive business email compromise (BEC) messages intended to dupe employees into sending fraudulent payments.

Dangers of Weaponized AI for Phishing and BEC

The experiment highlighted several advantages generative AI offers threat actors:

  • Sophisticated grammar and wording that lends messages an air of legitimacy
  • Messaging custom-tailored to the intended target
  • A lower barrier to entry that lets less skilled attackers mount complex attacks

While defending against this threat is difficult, organizations can take proactive steps, including:

  • Employee training on verifying urgent payment requests
  • Enhanced email verification processes
  • Flagging keywords and phrases associated with BEC scams (a minimal sketch of this approach follows below)
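
To make the keyword-flagging idea concrete, here is a minimal Python sketch that scores an inbound message against phrases commonly seen in BEC lures (urgent payment pressure, banking-detail changes, requests for secrecy). The phrase list, the threshold of 2, and the names `bec_score` and `should_flag` are illustrative assumptions, not a vetted detection ruleset.

```python
import re

# Illustrative phrases that often appear in BEC lures. This list is an
# assumption for demonstration purposes, not a production ruleset.
BEC_PATTERNS = [
    r"wire transfer",
    r"urgent payment",
    r"updated? (?:bank|account) details",
    r"change of bank",
    r"process (?:the|this) payment today",
    r"keep this (?:confidential|between us)",
]

def bec_score(subject: str, body: str) -> int:
    """Count how many BEC-associated phrases appear in the message."""
    text = f"{subject}\n{body}".lower()
    return sum(1 for pattern in BEC_PATTERNS if re.search(pattern, text))

def should_flag(subject: str, body: str, threshold: int = 2) -> bool:
    """Flag a message for human review once enough phrases match.

    The threshold of 2 is an assumed default chosen to avoid flagging a
    single incidental phrase; it should be tuned against real mail flow.
    """
    return bec_score(subject, body) >= threshold

if __name__ == "__main__":
    subject = "Urgent payment - updated bank details"
    body = ("Please process this payment today via wire transfer "
            "and keep this confidential.")
    print(should_flag(subject, body))  # True: several phrases match
```

Keyword matching alone is easy for an attacker (or an LLM) to rephrase around, which is why the list above pairs it with employee training and payment-verification processes rather than treating it as a standalone control.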

AI Phishing Represents an Escalating Cyber Threat

As generative AI capabilities rapidly advance, threat actors are already exploring ways to weaponize these tools for social engineering at scale. The early evidence of stolen AI account credentials and the creation of tools like WormGPT is likely just the tip of the iceberg.

While AI chatbots create new opportunities for efficiency and innovation, their potential for abuse should not be discounted. Organizations must recognize the emerging threat of AI-powered phishing and make cybersecurity readiness a top priority to protect their data, assets, and reputations against this escalating risk.

Proactive defense measures, security training, and vigilant monitoring of cybercriminal underground activities will be crucial in defending against this new breed of highly persuasive and customized social engineering attacks.
