Uncovering ChatGPT's Role in Crypto Botnets on Twitter

ChatGPT-generated text is spreading misinformation across Twitter's feeds. A new study reveals how cryptocurrency scammers are harnessing the persuasive power of AI to con unwitting users.

A recent study has revealed how artificial intelligence (AI) tools like ChatGPT are being used to power cryptocurrency spam botnets on Twitter. Researchers at Indiana University's Observatory on Social Media uncovered a network of 1,140 fake accounts, dubbed "Fox8", that constantly posted tweets linking to spammy crypto "news" sites. The study highlights how AI is enabling the automation and proliferation of misinformation networks on social platforms.

ChatGPT Enables Efficient Automation of Fake Accounts

The Indiana researchers found that the Fox8 botnet leveraged ChatGPT to automatically generate unique content for each of its fake accounts. By using AI to produce original tweets, the botnet's operator could efficiently run more than a thousand accounts that appeared to be real Twitter users interested in crypto trading.

"Without ChatGPT, managing so many accounts and continuously creating new content would be incredibly time-consuming," said lead researcher Professor Filippo Menczer. "The AI makes it trivial to operate networks of this scale."

The botnet took advantage of ChatGPT's natural language generation capabilities to craft tweets, respond to other users, and add human-like variability between accounts. This mass production of AI-powered content allowed the network to aggressively spread cryptocurrency hype and drive traffic to affiliated sites.

Outsmarting Moderators Through "Human-Like" Behavior

The botnet avoided detection for months by mimicking human behavior with ChatGPT's aid. The AI helped introduce natural patterns of activity over time and add nuanced imperfections to the writing.

"The bots could post at random intervals, make slight grammatical mistakes, employ slang, and reference recent news to appear more authentic," explained Menczer. "If it wasn't for the 'as an AI assistant' disclaimer ChatGPT appends, they may have flown under the radar indefinitely."

By leveraging AI to orchestrate complex, human-like behavior, malicious actors can disguise their bots as real users. Social platforms like Twitter now face the challenge of advancing their moderation quickly enough to counter this new generation of AI-powered misinformation tools.

The Need for AI Detection to Combat Disinformation

The Indiana researchers chose not to notify Twitter of the Fox8 botnet, as they found the company unresponsive following Elon Musk's takeover. However, the study emphasizes the need for platforms to actively develop advanced AI capabilities that can identify fake accounts and combat the spread of misinformation.

"It's an arms race, where disinformation networks will find new ways to exploit AI, and companies must stay ahead to counter these threats," said Menczer. "Detecting AI-generated content and coordinated inauthentic behaviour will only grow more critical moving forward."

Staying ahead of AI-powered misinformation will require social networks to prioritize investment in natural language processing, anomaly detection, and machine learning techniques. Policymakers may also need to consider regulations focused on bot detection and disclosure to protect the online information ecosystem. The Fox8 case study provides a glimpse of the AI-enabled disinformation attacks that platforms should prepare for today.
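As one concrete example of the anomaly detection mentioned above, consider posting cadence: human accounts tend to post in irregular bursts, while scripted accounts often keep a suspiciously regular schedule. The sketch below scores an account by the coefficient of variation of its inter-post gaps; the data format and any cutoff for flagging are assumptions for illustration, not a published method.

```python
from statistics import mean, stdev

def timing_anomaly_score(timestamps):
    """Score how machine-like an account's posting cadence looks.

    Assumes `timestamps` is a sorted list of Unix post times. Returns the
    coefficient of variation of inter-post gaps: human posting tends to be
    bursty (high CV), while scheduled bots are regular (low CV).
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None  # too little activity to judge
    return stdev(gaps) / mean(gaps)

# Example: a bot posting every ~5 minutes vs. a bursty human.
bot_times = [0, 300, 600, 905, 1200, 1500]
human_times = [0, 40, 3600, 3700, 90000, 90500]
print(timing_anomaly_score(bot_times))    # near 0: suspiciously regular
print(timing_anomaly_score(human_times))  # well above 1: human-like bursts
```

A real detector would combine timing with content similarity and follower-network features rather than rely on any single signal.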
