Credential stuffing attacks: anatomy, detection, and defense

Credential stuffing remains one of the most scalable and persistent threats on the internet. While defenders have improved at catching basic abuse, attackers now operate with mature tools, shared playbooks, and infrastructure designed to evade traditional detection.

These attacks are no longer opportunistic. They’re systematic — fueled by automation, credential leaks, and access to cheap proxy networks. And when they succeed, they lead directly to account takeover (ATO) — unauthorized access to user accounts, often followed by fraud, abuse, or data theft.

This post breaks down how credential stuffing works, why it’s so effective, and what actually helps detect and stop it in production. It’s written for fraud analysts, detection engineers, and product teams responsible for defending login flows.

What is a credential stuffing attack?

Credential stuffing is a type of automated attack where threat actors take large lists of previously leaked username-password pairs and try them en masse against login endpoints of unrelated services. The underlying assumption: many users reuse the same password across platforms, so a breach on one site can compromise accounts elsewhere.

This makes credential stuffing particularly dangerous; the service under attack doesn’t need to be breached itself. If your users reused their credentials elsewhere, they’re vulnerable.

It’s important to note: credential stuffing isn’t the same as account takeover, but it’s one of the most common ways attackers achieve it. When a login attempt using leaked credentials succeeds, the attacker has taken over the account. From there, they can extract value, change ownership, or use the account as a launchpad for further abuse.

To understand the mindset, consider a manual version: you want to break into a streaming account, so you try your target’s email and a few password guesses — maybe something personal. If you’re more determined, you dig up a password from a past breach involving that email and try that. But this is slow, tedious, and rarely fruitful.

Now automate it. Instead of one email, you have a million. Instead of guessing, you use real leaked credentials. Tools like OpenBullet let you configure the login flow for a specific site, load in a credential list and proxy pool, and fire off thousands of login attempts per minute. Hits are saved. Everything else is discarded.

Credential stuffing is about scale: turning a manual tactic into an industrialized pipeline, and turning recycled credentials into account takeovers.

How credential stuffing attacks work

Credential stuffing isn’t a single exploit. It’s a pipeline, a series of coordinated steps attackers refine to increase efficiency, evade detection, and maximize payout.

1. Sourcing credentials

The process begins with credential acquisition. Attackers collect username-password pairs from public leaks (e.g., Collection #1), stealer logs, or breach dumps traded in underground marketplaces. Many of these lists circulate in Telegram channels and forums, often formatted as email:password.

Some actors build private collections by merging multiple breaches, de-duping, and enriching with metadata (e.g., source service, password reuse hints) to improve their hit rate.

2. Choosing targets

With credentials in hand, attackers pick targets. They prioritize services where accounts are valuable and password reuse is likely, such as gaming platforms, e-commerce sites, and streaming services. Monetization opportunities and the strength of a service’s bot defenses both factor into this choice.

3. Setting up automation

With targets identified, attackers configure bots to automate login attempts at scale. These bots simulate legitimate login traffic by replicating the request format used by the target application, typically specifying the right HTTP method, headers, and payload structure.

To improve reliability and avoid basic blocks, bots are set up to:

  • Rotate through proxy lists to distribute traffic
  • Throttle requests to mimic human pacing
  • Randomize traits like user agents or header order to avoid easy fingerprinting

This configuration phase is straightforward, often involving GUI tools like OpenBullet or shared attack configurations. Once deployed, the bot runs autonomously — attempting thousands of logins per minute and logging successful hits for later use or resale.

4. Launch and scale

Once configured, the campaign is launched. The bot runs through the credential list, sending login attempts — sometimes thousands per minute. Sophisticated setups throttle requests and inject randomized behavior to mimic human traffic. Valid logins (“hits”) are logged automatically.

At this stage, attackers often step back. The tooling does most of the work.

5. Monetize and reuse

Verified credentials are immediately monetized. Some are used to make fraudulent purchases, drain stored value, or access restricted content. Others are sold or bundled into new combo lists marketed as “verified hits”. In some cases, attackers pivot — using access to reset passwords on other platforms or conduct phishing from a trusted inbox.

The diagram below illustrates how an attacker can use a bot to test credentials obtained through a data breach against a target website (here, a streaming service). The bot automatically saves successful username/password combinations to a file or database so the compromised accounts can be monetized later.

Why attackers conduct credential stuffing attacks

Credential stuffing is attractive because it’s cheap to launch, hard to trace, and profitable even with success rates well below 1%.

Attackers don’t need to write code or reverse-engineer login flows. Tools like OpenBullet or SilverBullet let them load prebuilt configuration files for specific targets — often shared openly in Telegram groups or forums. With just a credential list and proxy pool, they’re ready to launch.

At scale, the math works in their favor. A 0.1% hit rate on a million credentials yields 1,000 valid logins, with minimal overhead. Campaigns are often run from regions with weak enforcement, using crypto for payments and residential proxies for obfuscation.

Monetization options are broad:

  • Drain gift cards or loyalty balances
  • Make purchases using stored payment methods
  • Resell accounts (especially gaming and streaming)
  • Use hijacked accounts for phishing or scams

The barrier to entry is low, and the tools keep improving, making credential stuffing a persistent threat for any service with a login form.

How credential stuffing impacts businesses

Credential stuffing hurts businesses even if they’ve never had a breach. Because attackers rely on credential reuse, any platform with a login form becomes a potential target, and the consequences ripple across teams.

From the user’s perspective, a compromised account feels like a platform failure. Trust erodes quickly, especially if attackers make purchases, drain stored value, or abuse saved payment methods. Even when the credentials came from another breach, users often blame the site where they experienced the takeover.

Fraud losses add up fast. Attackers exploit compromised accounts to steal loyalty points, trigger promo codes, or issue chargebacks. For platforms with stored payment methods or inventory (like gaming skins or digital goods), the financial impact can be significant.

Customer support teams often bear the brunt, dealing with surges in password reset requests, account lockouts, and complaints about unauthorized activity.

Legal and compliance teams may also get pulled in. Even if the business wasn’t the source of the breach, regulators increasingly expect platforms to detect and respond to signs of compromise affecting their users.

Lastly, detection teams face the constant risk of false positives. Overly aggressive rules — especially those relying on IP reputation or breach-check APIs — can lock out real users. That risk is amplified for customers on mobile networks, shared IPs, or using reused (but valid) credentials.

The result: real-world costs, degraded UX, and increased pressure on teams across the org.

Types of bots used in credential stuffing attacks

Credential stuffing attacks don’t rely on one tool — they rely on the ability to simulate valid logins across web, mobile, and API surfaces. The attacker’s choice of tooling depends on the target’s attack surface and the sophistication of its defenses.

Common approaches include:

  • HTTP clients: Lightweight tools like curl or Python’s requests library are fast and easy to script. They work well at scale but are noisy unless carefully tuned to mimic real browsers (e.g., header ordering, TLS behavior).
  • Browser automation frameworks: Tools like Puppeteer and Playwright offer full browser control. When patched to remove automation artifacts (like navigator.webdriver), they can run JavaScript and evade basic bot defenses, at the cost of speed.
  • Anti-detect browsers: Designed to spoof fingerprinting signals (canvas, WebGL, timezone, fonts), tools like Undetectable or Hidemium make automated traffic look more like real user sessions, especially at scale.
  • Mobile emulators and rooted devices: For mobile APIs, attackers often use Genymotion, Bluestacks, or rooted Android devices to simulate traffic from actual apps. These setups can spoof device metadata and bypass web-based detection layers.

At the center of most large-scale campaigns is OpenBullet (along with successors and forks like OpenBullet 2 and SilverBullet), a family of modular frameworks purpose-built for credential testing. These tools offer:

  • A visual interface for creating and running credential stuffing campaigns
  • Config files that define request logic, success conditions, and parsing rules for each target
  • Built-in proxy rotation, retry logic, and result handling
  • Plugin support for CAPTCHA solving, headless browsing, and advanced fingerprint spoofing

The popularity of OpenBullet stems from its accessibility. Attackers can download shared configs from Telegram or forums, plug in a credential list, and start testing immediately — no custom coding required. You can learn more about OpenBullet in one of our recent blog posts.

Industries most targeted by credential stuffing

Credential stuffing targets any service with a login form, but attackers focus their efforts where compromised accounts offer tangible value. That typically means platforms with stored payment methods, loyalty programs, or data they can resell or exploit.

Gaming platforms are frequent targets. Rare skins, in-game currencies, and linked payment methods turn player accounts into digital assets. Once an account is compromised, attackers can sell it outright on a marketplace (see the screenshot below) or sell its valuable in-game items for real or virtual currency.

We discuss how bots target video game platforms with credential stuffing in this article.

E-commerce sites are rich in stored value. Attackers use stolen logins to drain gift cards, trigger refunds, or place fraudulent orders with saved credit cards. Some even harvest personal data (addresses, preferences) to use in social engineering or phishing.

Streaming services are attractive for a different reason: resale. Premium accounts are bundled and sold in bulk, often through Telegram channels or dark web shops. A single breached login may be shared by dozens of users — until it’s locked or flagged.

Financial platforms pose more risk and require more sophistication, but the payout justifies it. Compromised logins can lead to unauthorized transfers, identity theft, or synthetic identity creation. Attackers often chain accounts — using a breached email or bank login to compromise others.

SaaS tools — particularly in B2B contexts — are attractive targets due to the breadth of access they provide. Compromised accounts may expose sensitive business assets like billing records, internal documents, customer PII, or API keys. Even limited or read-only access can be monetized through scraping, data resale, or abuse of integrations.

But the risk isn’t just about information leakage. As we covered in this article on free tier abuse, attackers can also exploit the functionality of the SaaS product itself — running jobs, generating output, or triggering API usage without paying for it. Since many SaaS platforms operate on a pay-as-you-go model, the legitimate account owner often ends up footing the bill for this unauthorized activity.

Mobile apps are a frequent blind spot in credential stuffing defenses. Many mobile login APIs lack the client-side instrumentation used on web, such as JavaScript-based fingerprinting or behavioral telemetry. As a result, they’re often easier to target.

Attackers exploit this by mimicking mobile clients using emulators, rooted devices, or scripts that replay login flows. Without strong device attestation or anomaly detection on mobile traffic, these automated requests can blend in.

These aren’t theoretical risks. In one real-world case, Castle blocked over 558,000 credential stuffing attempts during a 4-day attack on a major U.S. on-demand staffing app. The attack exclusively targeted the mobile login endpoint, using distributed infrastructure to evade rate limits and avoid detection until volume-based protections kicked in.

The table below summarizes the main exposure for each industry and how attackers monetize accounts compromised through credential stuffing attacks.

Industry    | Monetization Potential
------------|-----------------------
Gaming      | Resell accounts with rare items, drain virtual currency, commit in-game fraud
E-commerce  | Abuse refund systems, steal loyalty points, place fraudulent orders, resell data
Streaming   | Sell “shared” premium accounts on black markets
Finance     | Unauthorized transfers, identity theft, chaining access to other accounts
SaaS (B2B)  | Exfiltrate billing data, customer PII, API keys; abuse org-wide access
Mobile apps | All of the above, with added stealth; mobile flows often lack strong fingerprinting

How to detect credential stuffing attacks

Credential stuffing doesn’t always stand out at the individual request level. Each login attempt might appear valid on its own, using real credentials, normal headers, and common user agents. But at scale, attackers leave behind patterns that legitimate users don’t. If you suspect credential stuffing, here are the signals to monitor.

Volume anomalies: Credential stuffing is fundamentally a scale-driven attack. It generates abrupt, often unnatural shifts in login traffic, especially in failure patterns and timing.

  • Unusual volume of failed logins for non-existent accounts: Attackers often use large credential lists scraped from unrelated breaches. As a result, many of the email addresses they test won’t exist in your system at all. If you see a spike in login attempts to unknown accounts, especially with high entropy or unusual formats, this is a strong signal of automated abuse.
  • Drop in login success rate across high volume: In normal conditions, the ratio of successful to failed logins tends to remain relatively stable over time. Credential stuffing floods your endpoints with mismatched credentials, causing that ratio to plummet. A sudden, sharp drop — particularly alongside volume spikes — often signals an active attack.
  • Login surges during off-hours: Campaigns may be launched during nights or weekends, either to avoid detection or simply because the attacker operates in a different time zone than your user base. Watch for traffic surges outside your platform’s typical usage patterns.
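
To make these volume signals concrete, here is a minimal sketch of sliding-window monitoring over raw login events. It assumes each event carries a timestamp, a success flag, and a flag for whether the email matched an existing account; the field names and thresholds are illustrative, not recommendations.

```python
# Minimal sketch: sliding-window monitoring of login volume signals.
# Event fields (ts, email, success, account_exists) are assumptions; adapt
# them to whatever your login service actually logs.
from collections import deque
from dataclasses import dataclass

@dataclass
class LoginEvent:
    ts: float             # unix timestamp
    email: str
    success: bool
    account_exists: bool  # did the email match a known account?

class LoginWindow:
    def __init__(self, window_seconds: int = 300):
        self.window = window_seconds
        self.events: deque[LoginEvent] = deque()

    def add(self, event: LoginEvent) -> None:
        self.events.append(event)
        # Drop events that fell out of the sliding window.
        while self.events and self.events[0].ts < event.ts - self.window:
            self.events.popleft()

    def signals(self) -> dict:
        n = len(self.events)
        if n == 0:
            return {"volume": 0, "failure_ratio": 0.0, "unknown_ratio": 0.0}
        failures = sum(1 for e in self.events if not e.success)
        unknown = sum(1 for e in self.events if not e.account_exists)
        return {
            "volume": n,
            "failure_ratio": failures / n,  # climbs sharply during stuffing campaigns
            "unknown_ratio": unknown / n,   # attempts against non-existent accounts
        }

# Illustrative alerting rule: compare against a baseline you trust for this window.
def is_anomalous(sig: dict, baseline_volume: int) -> bool:
    return (
        sig["volume"] > 5 * baseline_volume
        and (sig["failure_ratio"] > 0.8 or sig["unknown_ratio"] > 0.5)
    )
```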

Environmental inconsistencies: Even when attackers try to mimic real users, their infrastructure often leaks patterns.

  • Unfamiliar geographies: Logins from countries where you don’t have a customer base, or from multiple regions in a short timeframe.
  • Proxy and VPN usage: High usage of known proxy networks, especially free or residential ones.
  • Fingerprint reuse: Same browser characteristics (e.g., unusual user agent, WebGL renderer, canvas fingerprint, timezone) appearing across many sessions, suggesting automation or anti-detect tools.
  • Uniform TLS or header signatures: Unusual spikes linked to specific HTTP headers or TLS fingerprints across login attempts may indicate a shared bot framework.
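
A rough sketch of how fingerprint reuse might be surfaced: count how many distinct accounts and IPs each combined fingerprint (for example, a JA3 hash plus canvas hash plus user agent) touches within your retention window. The specific fields are assumptions; substitute whatever signals you already collect.

```python
# Minimal sketch: flag fingerprints that recur across an unusual number of
# distinct accounts or IPs. The fingerprint components are assumptions.
from collections import defaultdict

class FingerprintReuse:
    def __init__(self, account_threshold: int = 20):
        self.accounts_per_fp: dict[str, set[str]] = defaultdict(set)
        self.ips_per_fp: dict[str, set[str]] = defaultdict(set)
        self.account_threshold = account_threshold

    def observe(self, ja3: str, canvas_hash: str, user_agent: str,
                account_id: str, ip: str) -> bool:
        fp = f"{ja3}|{canvas_hash}|{user_agent}"
        self.accounts_per_fp[fp].add(account_id)
        self.ips_per_fp[fp].add(ip)
        # One fingerprint touching many accounts from many IPs is rare for
        # legitimate traffic and common for shared bot frameworks.
        return (len(self.accounts_per_fp[fp]) >= self.account_threshold
                and len(self.ips_per_fp[fp]) > 1)
```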

Behavioral irregularities: Credential stuffing bots often produce patterns of interaction that differ from real user behavior, either through excessive regularity or inconsistent session hygiene.

  • Consistent, robotic login timing: Legitimate users don’t attempt logins at fixed intervals. But bots often do — for example, one attempt every 1.2 seconds. Even “low and slow” attacks, where bots throttle themselves to one login per IP per hour, can reveal anomalies when aggregated across sessions.
  • Clustering around similar usernames: High volumes of login attempts targeting similar email patterns (e.g. many @gmail.com addresses or usernames with incremental numbers) suggest automation using generic breach data or brute-force permutations.
  • Session or device reuse across IPs: Reuse of session tokens, cookies, or fingerprinted device traits across multiple IPs or user identifiers is uncommon in legitimate traffic. It typically signals poor session management by bots, especially when rotating proxies.
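
One way to quantify the robotic-timing signal is to look at the spread of gaps between consecutive attempts for a given key (an IP, fingerprint, or targeted username). The sketch below is illustrative; the thresholds would need tuning against your own traffic.

```python
# Minimal sketch: detect machine-paced login attempts by measuring how regular
# the gaps between attempts are for a given key (IP, fingerprint, or username).
import statistics
from collections import defaultdict, deque

recent_attempts: dict[str, deque[float]] = defaultdict(lambda: deque(maxlen=20))

def record_attempt(key: str, ts: float) -> bool:
    """Return True when the attempt stream for `key` looks machine-paced."""
    attempts = recent_attempts[key]
    attempts.append(ts)
    if len(attempts) < 10:
        return False
    gaps = [b - a for a, b in zip(attempts, list(attempts)[1:])]
    mean_gap = statistics.mean(gaps)
    stdev_gap = statistics.pstdev(gaps)
    # Human retries are bursty; a near-constant interval (low coefficient of
    # variation) across many attempts is a strong automation signal.
    return mean_gap > 0 and (stdev_gap / mean_gap) < 0.1
```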

Credential stuffing reveals itself through correlation, not in isolation. Monitor these indicators over time, across user segments, and at the session level. The earlier you detect campaign-level activity, the faster you can investigate and mitigate the damage.

How to block credential stuffing

Detecting credential stuffing is only half the battle. The real challenge is stopping the attack without disrupting real users. Blocking too broadly locks out legitimate traffic. Blocking too narrowly lets attackers through.

The goal isn’t perfect prevention — it’s cost escalation. Credential stuffing thrives when defenses are static, predictable, and easy to model. Effective mitigation raises the attacker’s costs: more CAPTCHAs to solve, more infrastructure to rotate, more detection to evade. The harder it is to maintain a high hit rate, the faster the campaign breaks down.

Apply progressive friction: Introduce barriers only when risk warrants it. Done right, this slows attackers without punishing real users.

  • Trigger CAPTCHA or JavaScript challenges on suspicious behavior. This deters basic scripts and forces attackers to invest in solvers or headless frameworks.
  • Use risk-based authentication to step up to 2FA or email verification when the session looks new, untrusted, or unusual.
  • Block known abusive IPs (like open proxies or VPN gateways), but tune carefully — many real users share IPs in mobile or corporate environments.
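
As a rough illustration of progressive friction, the sketch below maps a few risk signals to an escalating action. The signal names, weights, and cut-offs are hypothetical placeholders, not tuned values.

```python
# Minimal sketch of progressive friction: escalate from no friction to CAPTCHA
# to step-up auth to a block as evidence accumulates. Weights are hypothetical.
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool        # device/fingerprint previously seen on this account
    ip_is_proxy: bool         # IP matches a known proxy/VPN network
    failed_ratio_spike: bool  # global failure-ratio anomaly currently active
    timing_robotic: bool      # machine-paced attempts tied to this session

def decide(ctx: LoginContext) -> str:
    score = 0
    score += 0 if ctx.known_device else 1
    score += 2 if ctx.ip_is_proxy else 0
    score += 2 if ctx.failed_ratio_spike else 0
    score += 3 if ctx.timing_robotic else 0

    if score >= 5:
        return "block"        # overwhelming evidence of automation
    if score >= 3:
        return "step_up_2fa"  # require 2FA or email verification
    if score >= 1:
        return "captcha"      # light friction for mildly suspicious traffic
    return "allow"            # trusted session, no added friction
```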

Implement adaptive rate limits: Start with basic IP-based rate limiting — it’s a useful first step. Limiting the number of login attempts from a single IP can block naive attacks and prevent one node from overwhelming your service.

But this isn’t a silver bullet. IP-based rules are easy to evade with rotating proxies. Think of IP throttling as a safety cap — a way to contain worst-case scenarios, not a core detection mechanism.

To detect real credential stuffing campaigns, you need rate limiting that adapts to attacker behavior:

  • Rate-limit by device, session, or username, not just IP. Bots often recycle credentials or identifiers across distributed infrastructure.
  • Monitor timing consistency — repeated logins at fixed intervals (e.g., every few seconds) signal automation, even if IPs are unique.
  • Detect slow-mode attacks: Some bots intentionally spread login attempts over hours or days (e.g., one login per IP per hour) to avoid triggering thresholds. When aggregated, these “low and slow” campaigns still reveal patterns.

Effective rate limiting doesn’t just suppress volume — it exposes attacker strategy. Look for coordination, repetition, and timing anomalies across sessions.
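
Here is a minimal sketch of rate limiting keyed on more than just IP: per-IP, per-username, and per-device sliding windows checked together. The limits are illustrative, and a production version would keep its counters in shared storage (for example, Redis) rather than process memory.

```python
# Minimal sketch: sliding-window rate limiting across several dimensions.
# Limits are illustrative; tune them against your own login traffic.
import time
from collections import defaultdict, deque

LIMITS = {                 # (max attempts, window in seconds) per key type
    "ip": (30, 60),
    "username": (5, 300),
    "device": (10, 300),
}

_attempts: dict[tuple[str, str], deque[float]] = defaultdict(deque)

def allow_attempt(ip: str, username: str, device_id: str) -> bool:
    now = time.time()
    keys = (("ip", ip), ("username", username), ("device", device_id))
    # Prune windows and check every dimension before recording the attempt.
    for key_type, value in keys:
        max_attempts, window = LIMITS[key_type]
        bucket = _attempts[(key_type, value)]
        while bucket and bucket[0] < now - window:
            bucket.popleft()
        if len(bucket) >= max_attempts:
            return False   # over the limit on at least one dimension
    for key_type, value in keys:
        _attempts[(key_type, value)].append(now)
    return True
```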

Invest in bot detection: Off-the-shelf bots are noisy. They leak automation flags, miss headers, and fail obvious checks. But the bots that matter — the ones that run large-scale credential stuffing campaigns — are tuned to blend in.

Catching them requires deeper behavioral analysis:

  • Detect automation artifacts like navigator.webdriver, inconsistencies in WebGL or audio fingerprints, or signs of Chrome DevTools Protocol (CDP) injection.
  • Monitor interaction patterns — mouse movement, typing cadence, tab visibility, and focus shifts. But tread carefully: real users behave unpredictably. Autofill, password managers, screen readers, and mobile UX all generate noise that breaks naive rules.
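
As a sketch of what these checks can look like server-side, the snippet below inspects a hypothetical client telemetry payload collected at login. The field names are assumptions about your own instrumentation, and each indicator should be treated as evidence to combine with other signals, not a standalone verdict.

```python
# Minimal sketch: server-side checks over a client telemetry payload collected
# at login. All field names (webdriver, keypress_events, autofill_used, ...)
# are hypothetical; use whatever your own instrumentation actually reports.
def automation_indicators(telemetry: dict) -> list[str]:
    indicators = []
    if telemetry.get("webdriver") is True:
        indicators.append("navigator.webdriver set")
    if telemetry.get("keypress_events", 0) == 0 and not telemetry.get("autofill_used", False):
        # No typing and no autofill/password manager: credentials were injected.
        indicators.append("credentials submitted without input events")
    if telemetry.get("pointer_events", 0) == 0 and telemetry.get("touch_events", 0) == 0:
        indicators.append("no pointer or touch activity before submit")
    return indicators
```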

The most robust systems link sessions across time and infrastructure. They correlate repeated behaviors across devices, IPs, and user accounts — exposing patterns that wouldn’t stand out in isolation.

Building that kind of system takes time. If you don’t already have the stack for it, tools like Castle provide real-time risk scoring, fingerprint correlation, and visibility into campaign-level behavior — without needing to maintain your own pipeline.

Why blocking at scale is hard, and what works

Credential stuffing isn’t a fixed threat. It’s a moving target. Attackers constantly test, adapt, and evolve faster than any static defense can keep up.

Block IPs? They route traffic through residential proxy pools or mobile IPs. Fingerprint devices? They switch to anti-detect browsers that spoof everything from screen resolution to WebGL renderer and vendor. Add CAPTCHA? They offload challenges to solver APIs, click farms, or OCR-based bots. And mobile? Many login APIs are under-instrumented, making emulator-based attacks hard to catch.

The core challenge isn’t just detection — it’s doing so without breaking the experience for real users. Overblock and you introduce friction that degrades user trust. Underblock and bots scale freely. There’s no universal threshold or rule that holds at scale.

That’s why defending against credential stuffing in production takes more than point solutions. It requires an adaptive system designed to learn, iterate, and respond to the tactics attackers use in real time.

What effective production defense looks like

1. Behavioral analytics

Understand how sessions behave — not just what headers they send. Bots can spoof metadata, but they struggle to mimic human interaction. Analyze timing, navigation depth, event pacing, and how users respond to friction (e.g., CAPTCHA, 2FA). Real users have entropy. Bots try to simulate it, but rarely succeed at scale.
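
One way to put a number on that entropy is to measure how spread out the gaps between in-session events (keystrokes, clicks, scrolls) are. The sketch below computes a simple Shannon entropy over binned inter-event gaps; the binning and any threshold you attach to it are assumptions to validate against real traffic.

```python
# Minimal sketch: quantify the "entropy" of in-session interaction timing.
# Inputs are client-collected event timestamps; bin size is an assumption.
import math

def timing_entropy(event_timestamps: list[float], bin_ms: float = 50.0) -> float:
    """Shannon entropy (bits) of inter-event gaps, bucketed into fixed bins."""
    gaps = [b - a for a, b in zip(event_timestamps, event_timestamps[1:])]
    if not gaps:
        return 0.0
    counts: dict[int, int] = {}
    for gap in gaps:
        bucket = int((gap * 1000) // bin_ms)
        counts[bucket] = counts.get(bucket, 0) + 1
    total = len(gaps)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Scripted sessions that replay events at fixed or narrowly jittered intervals
# collapse into a few buckets and score near zero; real interaction spreads out.
```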

2. Session linking

Attackers rotate IPs, user agents, and even devices, but infrastructure reuse leaves behind patterns. By linking sessions over time using cookies, TLS or JA3 fingerprints, and device traits, you can detect distributed attacks that wouldn’t trigger individual alerts.
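
A minimal sketch of session linking, assuming you can attach stable identifiers (a long-lived cookie, a JA3/TLS fingerprint, a device hash) to each session: treat every shared identifier as an edge and cluster sessions with union-find. Weak identifiers like a bare JA3 hash are shared by many legitimate clients, so in practice you would weight or combine them before merging.

```python
# Minimal sketch: cluster sessions that share any strong identifier, so a
# campaign that rotates IPs still surfaces as one connected group.
from collections import defaultdict

parent: dict[str, str] = {}

def find(x: str) -> str:
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def union(a: str, b: str) -> None:
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

def link_session(session_id: str, cookie_id: str, ja3: str, device_hash: str) -> None:
    # Every shared identifier merges this session into an existing cluster.
    for identifier in (f"cookie:{cookie_id}", f"ja3:{ja3}", f"device:{device_hash}"):
        union(f"session:{session_id}", identifier)

def clusters() -> dict[str, list[str]]:
    groups: dict[str, list[str]] = defaultdict(list)
    for node in list(parent):
        if node.startswith("session:"):
            groups[find(node)].append(node)
    return groups   # large clusters spanning many IPs deserve manual review
```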

3. Evolving detection signals

The bot ecosystem moves quickly. New Puppeteer forks, anti-fingerprint patches, solver APIs — all ship weekly. Static detections degrade fast. Your system needs to ingest fresh signals, retrain detection logic, and respond to campaign-level shifts as they emerge.

4. Real-time visibility

You can’t defend against what you can’t see. Real-time visibility means dashboards that highlight login anomalies by segment, risk-score deltas across sessions, and the ability to trace coordinated activity across infrastructure. Post-incident forensics isn’t enough — you need live telemetry to intercept campaigns mid-flight.

Attackers today don’t just run scripts. They operate infrastructure: headless Chromium in sandboxed VMs, proof-of-work logic to bypass friction, and evasive JS execution, as we showed in our TikTok VM teardown.

That’s why point solutions like CAPTCHA or IP blocklists rarely hold up. They’re friction layers, not foundations. Real defense is layered, adaptive, and operational — built to evolve as fast as the threats it faces.

Final thoughts

Credential stuffing isn’t amateur hour anymore. It’s a mature, professionalized ecosystem powered by open-source tools and thriving marketplaces. Attackers don’t need to write code. They can launch at scale using shared OpenBullet configs, CAPTCHA solvers, residential proxies, and mobile emulators.

This isn’t trial-and-error. It’s workflow automation. Campaigns are tested, tuned, and rerun. Defenses are benchmarked. Infrastructure is modular and disposable. Static rules don’t stand a chance.

Stopping this kind of threat requires more than strong controls — it requires adaptive systems. Systems that track behavior over time, correlate across sessions, and evolve alongside attacker tactics. Not just friction like CAPTCHA or IP blocklists, but full-stack visibility into how credential stuffing campaigns operate and how they shift.

That’s what Castle is built for. If credential stuffing is hitting your login flows, and you need to move from guessing to understanding, we can help.