Hackers hate AI Slop even more than you do



The complaint sounds familiar. “I’m disappointed that you’re including AI garbage on the site,” one exasperated person, who posted anonymously, said in an online message. “No one is asking for this – we want you to improve the site, stop charging for new features.”

Only, it’s not a regular internet user whining about AI being forced into their favorite app. Instead, they’re complaining about a cybercrime forum’s plans to introduce more generative AI. Like millions of others, scammers, miscreants, and low-level hackers are getting annoyed with AI intruding into their lives and with the rise of low-quality AI slop posted in their online communities.

“People don’t like it,” says Ben Collier, a security researcher and senior lecturer at the University of Edinburgh. As part of a recent study on how low-level cybercriminals use AI, Collier and fellow researchers have seen a growing backlash over the use of generative AI in underground cybercrime forums and hacking groups.

During the generative AI boom and hype cycles of the past few years, some people posting on hacking forums moved from being positive about how AI could help hacking to greater skepticism about the technology, according to the study, which also involved researchers from the University of Cambridge and the University of Strathclyde.

The researchers analyzed 97,895 AI-related conversations on cybercrime forums from the launch of ChatGPT in 2022 to the end of last year. They found complaints about people throwing out “bullet point explainers” of basic cybersecurity concepts, moans about the number of low-quality posts, and concerns that Google’s AI search overviews are reducing the number of visitors to the forums.

For decades, cybercrime message boards and marketplaces, often Russian in origin, have allowed scammers to do business together. These are places where stolen data can be traded, hacking jobs advertised, and fraudsters talk shit about their competitors. While scammers often try to cheat each other, the forums also have a sense of community. For example, users build reputations for being trustworthy, and forum owners hold writing competitions.

“These are essentially social spaces. They really hate other people using (AI) on the forums,” says Collier. He says the social dynamics of the groups can be messed up by potential cybercriminals trying to get a better reputation by posting AI-generated hacking explainers. “I think a lot of them are a little ambivalent about AI because it undermines their claim to be a skilled person.”

Posts reviewed by WIRED on Hack Forums, a self-styled space for those interested in talking about hacking and sharing techniques, show an irritation caused by people creating posts with AI. “I see many members using AI to make their threads/posts and it pisses me off as they don’t even take the time to write a simple sentence or two,” wrote one poster. Another put it more bluntly: “Stop posting AI shit.”

In several cases, Collier says, users of various forums appear to be annoyed by AI posts because they want to make friends. “If I want to talk to an AI chatbot, there are plenty of sites for me to do that…I come here for human interaction,” said one post cited in the research.

Since ChatGPT launched at the end of 2022, there has been significant interest in AI’s hacking capabilities and how the technology could transform online crime. Both sophisticated hackers and those less capable have tried to use AI in their attacks. While some organized fraudsters have boosted their operations with ever more realistic AI face-swapping technology and social engineering messages translated with AI, much attention has been paid to generative AI’s ability to write malicious code and discover vulnerabilities.



Eva Grace
