

Cybercrime Enters Subscription Era: AI Hacking Tools Rented for Less Than the Cost of Netflix

Last updated: January 21, 2026
  • Renting artificial intelligence tools such as “Dark LLMs” and deepfake generators is now possible on various dark web forums for $30 a month or less, making it easier than ever for cybercriminals to launch advanced scams.

  • Financial losses from AI-enabled fraud have reached astronomical levels, with deepfake scams causing verified losses of $347 million in a single quarter.

  • Researchers warn this automation removes traditional clues from attacks, making it harder to identify real perpetrators and leaving static defenses struggling.


Imagine cybercrime tools being as easy to subscribe to as your favorite streaming service. That scary reality is here. AI has become the cheap, off-the-shelf plumbing for modern digital crime. Researchers have uncovered a booming marketplace. Criminals are renting weaponized AI for subscription fees anyone can afford.

The Dark Web AI Marketplace Boom Amid Spike in Sales of Deepfakes

Forget complex hacking setups. According to data from Group-IB, a global cybersecurity company, AI crimeware advertisements are now widespread on dark web forums, and posts of this kind have increased 371% since 2019.

The conversation is exploding. Throughout 2025 alone, more than 23,000 fresh posts discussed different types of AI tools, drawing almost 300,000 replies.

The business model for these crimes is fairly simple. Attackers have turned once skill-intensive attack stages into automated workflows and now sell them as subscriptions through shady software-as-a-service operations.

One of the ugliest trends is “Dark LLMs.” Developers build these self-hosted language models specifically to power scams and malware. They are not jailbroken chatbots. They run hidden behind Tor and deliberately ignore safety rules.

Several vendors already sell them for around $30 monthly, and they have over 1,000 combined users. This deliberate stripping of safeguards mirrors the severe risks posed when mainstream AI platforms are exploited, as seen in recent watchdog reports.

Deepfakes and Real-World Damage

The deepfake trade is booming right alongside. Criminals can buy complete synthetic identity kits for as little as $5, including AI-generated faces and cloned voices.

This proliferation of AI-forged identities is a cornerstone of the modern attack chain, fuelling the unprecedented scale of breaches documented over the past year.

Sales skyrocketed from 2024 into 2025, pointing to a rapidly growing market.

The financial damage is very real. Group-IB reported that deepfake fraud caused verified losses of $347 million in a single quarter. This includes cloned executives and fake video calls.

In one case, the firm helped a bank identify more than 8,000 deepfake-powered fraud attempts over eight months.

Scam call centers now use synthetic voices for first contact. Language models then coach the human operators during calls. Malware developers are also testing AI tools for reconnaissance. They are laying groundwork for more autonomous attacks in the future.

An Unprecedented Shift in Scale

“AI gives criminals unprecedented reach,” said Group-IB’s Anton Ushakov. He explained how the technology lets criminals launch scams with ease and hyper-personalize them at scale.

He warns tomorrow could bring autonomous AI executing attacks that once required human expertise.

For defenders, AI removes the usual clues. When attackers can generate voices, text, and video on demand, identifying the real culprit becomes far more difficult, and static security defenses struggle to keep up.

Cybercrime hasn’t reinvented itself. AI has simply automated much of what criminals used to do manually, wrapped it in a monthly subscription model, and made advanced attack automation available worldwide. Now everyone has to cope with the fallout.


About the Author

Memchick E

Digital Privacy Journalist

Memchick is a digital privacy journalist who investigates how technology and policy impact personal freedom. Her work explores surveillance capitalism, encryption laws, and the real-world consequences of data leaks. She is driven by a mission to demystify digital rights and empower readers with the knowledge to protect their anonymity online.
