
AI-Generated Code Leaves Criminal Carding Platform Exposed, Researchers Say

Last updated: May 8, 2026
  • Jerry’s Store, a dark web carding platform, exposed nearly 345,000 stolen credit card records after its AI-generated backend left a public directory completely unprotected on the internet.

  • Cybersecurity experts say the incident is a sharp warning about “vibe coding,” in which developers ship AI-generated code without adequate security review.

  • The breach also points to a broader trend: criminal networks increasingly exploiting top-tier AI tools to run sophisticated fraud operations.


A dark web marketplace built largely by artificial intelligence just leaked nearly 345,000 credit card records. The platform’s operators used AI-generated code to build their backend systems and skipped the basic security steps.

The platform, Jerry’s Store, operated as a carding marketplace. Criminals verified stolen credit cards here before putting them to use. The site tested cards by running small transactions through legitimate platforms like Amazon and Lyft, then reading the transaction responses to identify which cards still worked.

Cybersecurity researchers who uncovered the breach pointed directly at the development process. According to their findings, the operators built the platform’s backend using Cursor, a legitimate and popular AI coding assistant. The code Cursor helped generate, however, was shipped without authentication controls, exposing hundreds of thousands of records to anyone who found it.

“Vibe Coding” Opens Real-World Security Holes

Security experts are now using this incident to raise a serious alarm about a growing development trend they call “vibe coding.” The term describes a pattern where builders rely heavily on AI to write functional code but skip the rigorous human review that catches vulnerabilities before anything goes live.

Cursor itself is a legitimate tool, used by software developers around the world. The problem is the assumption that AI-generated code is production-ready without a proper security audit. In Jerry’s Store’s case, that assumption left the entire operation wide open.

According to cyber protection experts, this risk applies to criminal enterprises, startups, small businesses, and even large organizations. Without authentication layers, encryption standards, and basic access controls, functional software becomes a serious liability.
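The missing control here is not exotic: a deny-by-default check in front of any route that serves stored records would have kept the directory closed. The following Python sketch is purely illustrative (the platform's actual code has not been published, and the token name and handlers are hypothetical), showing the kind of authentication gate the exposed backend reportedly lacked:

```python
import hmac

# Hypothetical credential; in practice this would come from a secrets
# store or identity provider, never a hard-coded string.
API_TOKEN = "example-secret-token"

def is_authorized(presented_token: str) -> bool:
    # Constant-time comparison avoids leaking token contents via timing.
    return hmac.compare_digest(presented_token, API_TOKEN)

def serve_record(record_id: int, presented_token: str) -> tuple[int, str]:
    # Deny by default: unauthenticated requests never reach stored data.
    if not is_authorized(presented_token):
        return 401, "Unauthorized"
    return 200, f"record {record_id}"
```

A backend that omits the `is_authorized` step, as the researchers say this one did, serves every record to anyone who finds the endpoint.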

The Jerry’s Store breach makes that lesson impossible to ignore, and it came from the criminals themselves.

How a Carding Platform Ran Like a Legitimate Business

What makes this incident particularly striking is how professionally Jerry’s Store operated before it collapsed. The platform did not simply sell stolen card data. It offered a verification service, running cards through small charges on well-known consumer platforms to confirm which ones remained usable.

That process mirrors what legitimate payment processors do when checking card validity. Jerry’s Store, however, did it with stolen credentials, at scale, and charging other criminals for the results.
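For context on what legitimate validity checking looks like before any charge is attempted: processors typically start with offline structural checks, such as the standard Luhn checksum that every major card number must satisfy. A short Python sketch of that public, well-documented algorithm (a general illustration of processor practice, not anything recovered from Jerry’s Store):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if len(digits) < 13:  # payment card numbers are 13-19 digits
        return False
    checksum = 0
    # Double every second digit from the right; subtract 9 if the
    # doubled value exceeds 9, then sum everything.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0
```

A Luhn check only confirms that a number is well-formed; confirming that a card is live and funded requires an authorization attempt, which is exactly the step Jerry’s Store abused at scale.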

According to the cybersecurity researchers who examined the platform, its modus operandi reflects a broader shift in how criminal networks structure their infrastructure. AI tools have significantly lowered the barrier to entry: a group that previously needed experienced developers can now spin up a functional platform quickly and cheaply.

That same accessibility, however, is exactly what caused the breach. Speed without review left a critical gap open, and the data spilled out.

Regulators Face a Growing and Poorly Understood Threat

The exposure of Jerry’s Store is drawing strong attention from cybersecurity professionals who believe regulators are not moving fast enough on AI-related vulnerabilities.

AI misconfiguration, they argue, deserves the same level of regulatory scrutiny as any traditional data breach. The source of the vulnerability does not reduce the scale of the damage.

Nearly 345,000 credit card records belonging to real people now sit exposed and at risk of fraud. Financial institutions will spend considerable resources identifying and replacing compromised cards. And the criminals behind Jerry’s Store will likely rebuild, perhaps more carefully this time.

The broader message is direct. AI is now a tool for both sides of the cybersecurity divide, and whenever either side cuts corners on security, the people whose data ends up in these systems pay the price.

The scale of card fraud on the dark web is staggering. Beyond the 345,000 cards exposed in Jerry’s Store, a separate UK-focused investigation found 1,800 stolen bank cards actively for sale, representing just a fraction of the overall criminal card economy. For full details on the UK card crisis, see UK dark web crisis: 1,800 stolen bank cards found for sale.


About the Author

Memchick E

Digital Privacy Journalist

Memchick is a digital privacy journalist who investigates how technology and policy impact personal freedom. Her work explores surveillance capitalism, encryption laws, and the real-world consequences of data leaks. She is driven by a mission to demystify digital rights and empower readers with the knowledge to protect their anonymity online.
