Cryptocurrency Scammers Exploit ChatGPT-Powered Botnet on Social Platform X

Author: CoinSense

A recent investigation by researchers at Indiana University Bloomington has unveiled the use of a botnet powered by ChatGPT, a sophisticated AI language model developed by OpenAI, to promote cryptocurrency scams on X (formerly known as Twitter).

The botnet – dubbed Fox8 due to its connection with crypto-related websites – was composed of 1,140 accounts that utilized ChatGPT to generate and post content as well as engage with other posts. The auto-generated content aimed to entice unsuspecting users into clicking on links that led to crypto-hyping websites.

The researchers detected the botnet’s activity by identifying a specific phrase, “As an AI language model…,” which ChatGPT occasionally uses in response to certain prompts.

This led them to manually scrutinize accounts they suspected were operated by bots. Despite the relatively unsophisticated methods employed by the Fox8 botnet, it managed to publish seemingly convincing messages endorsing cryptocurrency sites, illustrating the ease with which AI can be harnessed for scams.
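For illustration, a minimal sketch of that keyword-based heuristic in Python is shown below. The post and account structure here is a hypothetical stand-in; the researchers' actual data pipeline and tooling have not been published.

```python
# Minimal sketch of the detection heuristic described above:
# flag accounts that post ChatGPT's self-identifying phrase.
# The `posts` structure is a hypothetical example, not the researchers' real dataset.

TELL_TALE = "as an ai language model"

def flag_suspect_accounts(posts):
    """Return the set of accounts whose posts contain the tell-tale phrase."""
    suspects = set()
    for post in posts:
        if TELL_TALE in post["text"].lower():
            suspects.add(post["account"])
    return suspects

# Example usage with made-up posts:
posts = [
    {"account": "@crypto_hype_1", "text": "As an AI language model, I cannot endorse..."},
    {"account": "@regular_user", "text": "gm everyone"},
]
print(flag_suspect_accounts(posts))  # {'@crypto_hype_1'}
```

Flagged accounts would still need manual review, which is exactly the step the researchers describe taking next.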

Micah Musser, an expert in AI-driven disinformation, believes that this discovery might only scratch the surface of a larger issue, given the popularity of large language models and chatbots.

“This is the low-hanging fruit,” Musser said in an interview with WIRED. “It is very, very likely that for every one campaign you find, there are many others doing more sophisticated things.”

OpenAI’s usage policy explicitly prohibits the use of its AI models for scams and disinformation. The researchers stress that such botnets are difficult to identify when they are carefully configured, since they could evade detection and game recommendation algorithms to spread disinformation more effectively.

Filippo Menczer, the Indiana University professor who led the research into Fox8, said the team only noticed the botnet because the scammers were sloppy. “Any pretty-good bad guys would not make that mistake,” he stated.

Spam Bots On X

Spam bots have long plagued the online crypto community and are a common grievance among influencers in the space. Such bots are usually easy to spot on platforms like YouTube and X, but have nevertheless managed to steal millions from victims by impersonating celebrities and promoting malicious giveaways.

Though Elon Musk promised to “defeat the spam bots” after buying Twitter, Menczer believes the bots have become more common since his takeover. The researcher and his team no longer report their findings about the bots to X.

“They are not really responsive,” Menczer says. “They don’t really have the staff.”

Musk confirmed over the weekend that X will be removing its blocking feature, sparking further criticism from creators concerned that they won’t be able to curate their feeds to remove scammers and impersonators. 
 
