An extensive bot problem on X (previously known as Twitter) has been uncovered by researchers at Indiana University’s Observatory on Social Media. According to their report, about 1,140 AI-powered accounts automatically generate content and fabricate personas using stolen selfies.

The researchers identified the botnet, dubbed “Fox8,” as a network of phony identities on X. It uses ChatGPT, an AI language model, to generate posts that spread harmful material and promote dubious websites. The bots have been observed luring users into investing in fictitious cryptocurrencies and even stealing from real cryptocurrency wallets.

These phony accounts interact with human-run accounts, such as those that provide cryptocurrency news and information, and frequently use hashtags like #bitcoin, #crypto, and #web3. The researchers found that the Fox8 bots are more convincing to human users because they not only produce content but also engage with one another: they write plausible profile descriptions, retweet and reply to each other, and even accumulate followers and friends.
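As a purely illustrative sketch (not the researchers’ method), the coordination signals described above, shared crypto hashtags plus mutual retweets and replies within a small cluster of accounts, could be surfaced with a simple script. The account names, post data, and field names below are hypothetical assumptions.

```python
from collections import defaultdict

# Hypothetical sample of posts; in practice these would come from a
# platform data export. Field names ("author", "text", "retweet_of")
# are assumptions for this sketch.
posts = [
    {"author": "bot_a", "text": "Don't miss #bitcoin gains!", "retweet_of": "bot_b"},
    {"author": "bot_b", "text": "#crypto to the moon", "retweet_of": "bot_c"},
    {"author": "bot_c", "text": "Join the #web3 revolution", "retweet_of": "bot_a"},
    {"author": "human_1", "text": "Market analysis: #bitcoin dips", "retweet_of": None},
]

CRYPTO_TAGS = {"#bitcoin", "#crypto", "#web3"}

def coordination_signals(posts):
    """Count, per author, crypto-hashtag use and retweets of other in-group accounts."""
    authors = {p["author"] for p in posts}
    stats = defaultdict(lambda: {"crypto_posts": 0, "in_group_retweets": 0})
    for p in posts:
        tags = {w.lower() for w in p["text"].split() if w.startswith("#")}
        if tags & CRYPTO_TAGS:
            stats[p["author"]]["crypto_posts"] += 1
        if p["retweet_of"] in authors:
            stats[p["author"]]["in_group_retweets"] += 1
    return dict(stats)

print(coordination_signals(posts))
```

In this toy example the three bot accounts each post crypto hashtags and retweet only one another, while the human account shows no in-group retweets; real analyses would of course weigh many more signals.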

The bots’ main objective is to flood X users with AI-generated posts, increasing the likelihood that real users will view and click on the phony content. In the past, botnets like Fox8 were comparatively easy to identify because of their implausible language and content. With advances in language models like ChatGPT, however, it is becoming much harder to distinguish these accounts from genuine human accounts.
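To see why, consider the kind of crude check that could catch older, template-driven botnets: flagging accounts whose posts are near-duplicates or reuse a tiny vocabulary. The sketch below is an assumption for illustration, not a technique described in the study; fluent, varied LLM-generated posts slip past it easily.

```python
def lexical_diversity(texts):
    """Ratio of unique words to total words across an account's posts."""
    words = [w.lower() for t in texts for w in t.split()]
    return len(set(words)) / len(words) if words else 0.0

def looks_templated(texts, threshold=0.4):
    """Legacy-style heuristic: very low diversity suggests copy-paste spam."""
    return lexical_diversity(texts) < threshold

# An old-style spam bot repeating the same template is caught easily...
spam = ["Buy $COIN now, 100x soon!"] * 10
# ...but varied, fluent posts like those an LLM produces pass the check.
llm_like = [
    "Interesting moves in the market today; long-term fundamentals look solid.",
    "Web3 adoption keeps growing as new projects launch across ecosystems.",
    "Diversification matters more than chasing any single token's pump.",
]

print(looks_templated(spam))      # True  -> flagged
print(looks_templated(llm_like))  # False -> passes
```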

The researchers report that the Fox8 accounts are now so convincing that even sophisticated detection algorithms struggle to tell AI-generated content apart from human-written content. The study underscored the need for investment in appropriate regulation and tools to address the problem.

According to the researchers, the lack of effective techniques for identifying AI-generated content makes it difficult to stop the spread of harmful material and phony accounts on social networking sites like X.

X is said to have removed the 1,140 phony accounts connected to the botnet following the publication of the study. The researchers nonetheless call for closer attention to AI-generated fake accounts and content, which remain a concern, in order to limit their spread and potential harm.

By highlighting the growing sophistication of AI-generated content and its implications for online platforms, the study underscores the need for strong measures to curb its harmful effects on social media.