Malicious AI Arrives on the Dark Web

By Mercedes Page

Published 23 August 2023

Nefarious non-state actors are already harnessing AI to scale up their malicious activities. Just as legitimate users have moved on from exploring ChatGPT to building similar tools, the same has happened in the shadowy world of cybercrime.

Artificial intelligence has advanced at an unprecedented pace over the past few months. While governments, industry, civil society and multilateral bodies alike deliberate how best to regulate it, nefarious non-state actors are already harnessing AI to scale up their malicious activities.

Since the launch of OpenAI’s ChatGPT in November last year, forums on the dark web have been buzzing about ways to harness the technology. Just as people around the world have shared tips on using ChatGPT and other AI tools to enhance efficiency or outsource tasks, dark web users have been sharing tips on how to jailbreak the technology to get around safety and ethical guardrails or use it for more sophisticated malicious activity. Now, just as legitimate users have moved on from exploring ChatGPT to building similar tools, the same has happened in the shadowy world of cybercrime.

In recent weeks the dark web has become a breeding ground for a new generation of standalone AI-powered tools and applications designed to cater to a cybercriminal’s every illicit need.

The first of these tools, WormGPT, appeared on the dark web on 13 July. Marketed as a 'blackhat' alternative to ChatGPT with no ethical boundaries, WormGPT is based on the open-source GPT-J large language model developed in 2021. Sold on a monthly (€100) or yearly (€550) subscription, WormGPT offers, according to its anonymous seller, a range of features such as unlimited character input, memory retention and coding capabilities. Allegedly trained on malware data, its primary uses are generating sophisticated phishing and business email attacks and writing malicious code. The tool is constantly being updated with new features, which are advertised on a dedicated Telegram channel.

Hot on WormGPT's heels, FraudGPT appeared for sale on the dark web on 22 July. The tool, based on GPT-3 technology, is marketed as an advanced bot for offensive purposes. Its uses include writing malicious code, creating undetectable malware and hacking tools, writing phishing pages and scam content, and finding security vulnerabilities. Subscriptions range from US$200 a month to US$1,700 for an annual licence. According to the security firm that discovered it, FraudGPT is likely focused on generating quick, high-volume phishing attacks, while WormGPT is more focused on generating sophisticated malware and ransomware capabilities.