How bot detection misfires on non-mainstream browsers and privacy tools

Every time there's a Hacker News thread about bots, bot detection, or CAPTCHAs, a familiar complaint shows up: people using VPNs, ad blockers, Firefox forks, or privacy tools get bombarded with CAPTCHAs or blocked entirely. It feels like modern anti-bot systems are punishing users just for trying to protect their privacy or use a non-mainstream browser.

Most users notice this when they hit a CAPTCHA, or worse, when they’re stuck solving a series of increasingly hostile ones. But it’s not just CAPTCHAs: anti-bot and anti-fraud systems often act on these false positives silently, applying stricter checks, blocking access outright, or degrading the user experience whenever they detect signals they associate with bots, even when those signals come from real users.

Tools like ad blockers, VPNs, anti-fingerprinting browser extensions, and privacy-oriented browsers such as Firefox (with the resistFingerprinting option) or Brave are common among technically savvy users. But the use of these tools increases the chances of being flagged as suspicious.

In this post, I’ll explain why that happens. We'll look at how common privacy tools interfere with JavaScript execution and browser fingerprinting, two mechanisms that anti-bot and anti-fraud systems rely on heavily, how that interference affects detection, and why it leads to false positives.

Note: I’ll use “anti-bot” and “anti-fraud” interchangeably throughout this article. While they serve slightly different purposes, they rely on similar techniques and signals.

Also, this isn’t a post about how Castle handles these cases specifically, though we’ll share some of our approach at the end. The focus here is on the broader technical reasons why non-standard browsers and privacy countermeasures often trigger stricter detection rules across the web, regardless of the vendor.

TL;DR

In case you don’t want to read the full article, these are the main reasons why privacy tools and non-mainstream browsers often trigger false positives or get blocked by anti-bot systems:

  • Ad blockers → Often block anti-bot scripts, making it look like the browser doesn’t execute JavaScript — a common bot behavior.
  • Anti-fingerprinting tools and privacy browsers → Modify key fingerprint attributes, creating inconsistencies that resemble automation or spoofing.
  • Non-mainstream or forked browsers → May lack expected APIs or features, triggering detection models that assume tampering or evasion.
  • VPNs and Tor → Use shared IPs frequently abused by bots. Many systems preemptively block or challenge this traffic, causing collateral damage.
  • Bias in detection models and analyst logic → Privacy-related traits often correlate with bots, leading systems to flag legitimate users based on flawed assumptions.

False positives from non-mainstream setups

There are several reasons why using privacy tools or non-standard browsers can trigger false positives in anti-bot systems.

Ad blockers and blocked anti-bot scripts

One of the most common privacy tools is the ad blocker. For example, uBlock Origin with blocking lists like EasyList or EasyPrivacy. These tools block scripts known to track users or serve ads. However, some anti-bot or anti-fraud scripts also get blocked, even when they're used only for security purposes. We won’t debate whether these scripts should or shouldn’t be in such lists; the key point is that once they’re blocked, detection systems lose visibility into what the browser is doing.

As a result, from the perspective of the detection system, it may look like JavaScript is completely disabled, a behavior typically associated with basic bots. In practice, this can lead to CAPTCHAs or full blocks, or trigger fallback challenges that are slower and more invasive.

Some users argue that blocking JavaScript selectively, e.g. via an ad blocker, should make them look more human, not less. But the system can’t distinguish between a browser that intentionally blocks one script and a bot that skips JavaScript execution entirely. Many bots, especially those built on HTTP clients or stripped-down headless browsers, don’t execute JavaScript at all. Even more advanced bots often block what they consider "non-essential" JavaScript to save bandwidth and reduce proxy usage. So, when a legitimate user’s browser blocks a key script, it can look exactly like automation.
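
To make that concrete, below is a minimal sketch of the kind of fallback logic a site might wire around its detection script. The script URL, the window.__antiBotReady flag, and showCaptchaChallenge are hypothetical placeholders, not a real vendor API; the point is that a blocked script, a failed load, and a bot that never runs JavaScript all end up in the same branch.

```javascript
// Minimal sketch (hypothetical names throughout): load a detection script and fall
// back to a challenge if it never reports back.
function showCaptchaChallenge() {
  // Placeholder for whatever fallback the site uses (CAPTCHA, block page, ...).
  console.log('No telemetry received: falling back to a challenge');
}

const sensor = document.createElement('script');
sensor.src = 'https://antibot.example.com/sensor.js'; // hypothetical script URL
sensor.async = true;
document.head.appendChild(sensor);

setTimeout(() => {
  // Whether the script was blocked by an ad blocker, failed to load, or JavaScript
  // never ran at all, the outcome looks identical from the site's point of view.
  if (!window.__antiBotReady) {
    showCaptchaChallenge();
  }
}, 3000);
```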

Non-mainstream browsers and anti-fingerprinting tools

The problem is different when it comes to tools that modify the browser fingerprint. This includes browser forks like Brave and LibreWolf, or configurations like Firefox with resistFingerprinting enabled.

Most anti-bot and anti-fraud systems rely on browser fingerprinting to distinguish real browsers from spoofed or automated ones. Fingerprinting doesn’t just help with user tracking, it’s often used to validate whether a browser is real and internally consistent. Detection systems gather large sets of signals using JavaScript: GPU information, supported APIs, error behavior, stack traces, available WebGL extensions, and more. These signals are then checked for internal consistency.

For instance, a user might present a user agent like this:

```
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36
```

If this browser claims to be Chrome 134 but lacks a feature like GPUAdapterInfo.subgroupMinSize, it raises a red flag: either the browser is spoofed or modified, or something is interfering with the fingerprint.
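
As a rough illustration, a consistency check of this kind might look like the sketch below. It reuses the Chrome 134 / GPUAdapterInfo pair from the example above; real systems test many such version-to-feature pairs rather than a single one.

```javascript
// Sketch of a user agent vs. feature-set consistency check.
function claimsChrome134OrLater(ua) {
  const match = ua.match(/Chrome\/(\d+)/);
  return match !== null && parseInt(match[1], 10) >= 134;
}

function hasSubgroupAdapterInfo() {
  // GPUAdapterInfo (WebGPU) exposes subgroupMinSize in recent Chrome releases.
  return typeof GPUAdapterInfo !== 'undefined' &&
         'subgroupMinSize' in GPUAdapterInfo.prototype;
}

if (claimsChrome134OrLater(navigator.userAgent) && !hasSubgroupAdapterInfo()) {
  // The declared browser and the actual feature set disagree: a spoofed UA,
  // a modified build, or something interfering with the environment.
  console.log('Inconsistent fingerprint: UA claims Chrome 134+ but the API is missing');
}
```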

Privacy-focused tools often change values such as the number of CPU cores, canvas fingerprint, timezone, or screen resolution. These changes may be intentional, to prevent tracking, but from a detection standpoint, they introduce noise and inconsistency.

Forked browsers like Pale Moon or LibreWolf can make this worse. These forks often present themselves as Firefox in the user agent, but they may lack APIs, report different fingerprints, or be out of sync with upstream Firefox behavior. Detection systems that expect a full Firefox fingerprint may classify them as suspicious or forged.

VPNs and Tor make things harder

VPNs and Tor don’t directly affect fingerprinting, but they create another problem: shared and flagged IP addresses.

Most VPNs and all Tor exit nodes reuse IPs across many users. Bots also operate through these IPs, so over time, detection systems associate them with automated behavior. As a result, many WAFs and bot detection systems block or challenge traffic from these IPs by default, even if the current session is benign.

This creates collateral damage. Legitimate users who happen to share an IP with abusive traffic get caught in the crossfire. Some organizations even block Tor and data center IPs outright as a policy, regardless of observed activity.

Bias in ML models and human assumptions

There’s also bias, both in machine learning models and in the human teams using them.

Many bots share common traits:

  • Inconsistent fingerprints
  • Suspicious IPs (VPN, Tor, datacenter)
  • Absence of JS execution

Over time, ML models trained on bot datasets start to correlate these features with fraud. But correlation isn’t causation. Just because many bots use VPNs doesn’t mean all VPN users are bots.

The problem compounds when false positives leak into training data. A model that learns from already biased labels ends up reinforcing those same patterns in future predictions, a feedback loop that worsens the issue for privacy-focused users.

Incorrect mental models from operators

Finally, there’s the human side. Many customers of anti-bot products operate based on rigid rules or flawed assumptions about what “normal” traffic looks like.

For example, it’s common to see someone define a rule like “block any session that uses more than two IPs in 5 minutes.” That seems reasonable, until you consider mobile networks, VPNs bundled with antivirus software, or Apple’s Private Relay. All of those can change a user’s IP mid-session.
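
As an illustration, that naive rule boils down to something like the sketch below (the event shape is hypothetical). Nothing in it distinguishes a user on carrier-grade NAT or Private Relay from an attacker rotating proxies.

```javascript
// Naive heuristic: flag a session seen from more than two IPs within five minutes.
const WINDOW_MS = 5 * 60 * 1000;
const MAX_IPS = 2;

function isSuspicious(events, now = Date.now()) {
  // events: [{ ip: '203.0.113.7', timestamp: 1712345678901 }, ...] (hypothetical shape)
  const recentIps = new Set(
    events
      .filter(e => now - e.timestamp <= WINDOW_MS)
      .map(e => e.ip)
  );
  return recentIps.size > MAX_IPS;
}
```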

Other times, sites assume that users from unexpected geographies or with “unusual” configurations must be bots. But that assumption reflects a narrow mental model, not reality. Legitimate users can trigger multiple heuristics, and unless the system accounts for edge cases, those users get blocked or challenged more than necessary.

Deep dive on LibreWolf, a privacy-focused Firefox fork

LibreWolf is a custom Firefox fork built with privacy and security in mind. Unlike the default Firefox distribution, LibreWolf removes telemetry, disables auto-updates, and ships with hardened privacy defaults out of the box. It includes features like:

  • Built-in blocking of known trackers
  • Stricter cookie and referrer policies
  • Disabled WebRTC, Pocket, and other Mozilla services
  • resistFingerprinting enabled by default

While these changes are great for privacy, they also make LibreWolf diverge significantly from the expected fingerprint of a standard Firefox installation. This can unintentionally make users stand out, or worse, appear suspicious, to bot detection and anti-fraud systems that rely on fingerprinting consistency.

Comparing LibreWolf and Firefox fingerprints

We compared the fingerprint of LibreWolf with that of a vanilla Firefox build on the same device. All tests were run locally with no network-related changes; only the browser configuration varied. In the following examples, the Firefox fingerprints are on the left and the LibreWolf fingerprints are on the right.

We start with screen-related attributes, accessible via the window.screen object. These include screen.width, screen.height, screen.colorDepth, and screen.pixelDepth.
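
For reference, a fingerprinting script reads these values directly, roughly like this:

```javascript
// Screen-related signals, read exactly as exposed by the browser.
const screenSignals = {
  width: screen.width,
  height: screen.height,
  colorDepth: screen.colorDepth,
  pixelDepth: screen.pixelDepth,
};
console.log(screenSignals);
```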

Despite running on the same hardware, these values differ between LibreWolf and Firefox. The resolution appears lower in LibreWolf, and the color depth is also different. These discrepancies result from fingerprint randomization or hardening settings that modify or mask actual values.

But it’s not only about screen-related signals. Other attributes differ as well, such as the number of CPU cores, collected via navigator.hardwareConcurrency, and the timezone, collected with Intl.DateTimeFormat().resolvedOptions().timeZone. In particular, even though I’m located in France, LibreWolf claims my timezone is linked to Iceland.
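
These two signals are collected just as directly; a minimal sketch:

```javascript
// CPU core count and timezone, as read by a fingerprinting script.
const hardwareSignals = {
  cpuCores: navigator.hardwareConcurrency,
  timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
};
// On vanilla Firefox these reflect the real machine; with resistFingerprinting they
// are typically clamped or spoofed (the France-to-Iceland timezone above is one example).
console.log(hardwareSignals);
```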

WebGL is another area where detection logic pays close attention. It’s commonly used to extract GPU-related fingerprinting features like WEBGL_debug_renderer_info. In LibreWolf, these signals are either disabled or unavailable, which leads to collection errors.
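
A typical collection routine looks roughly like the sketch below. On LibreWolf the extension lookup tends to fail or return nothing, which is exactly the kind of collection error mentioned above.

```javascript
// Read the GPU vendor and renderer via WEBGL_debug_renderer_info, surfacing errors
// instead of letting them bubble up.
function getWebglRendererInfo() {
  try {
    const canvas = document.createElement('canvas');
    const gl = canvas.getContext('webgl');
    if (!gl) return { error: 'webgl unavailable' };
    const ext = gl.getExtension('WEBGL_debug_renderer_info');
    if (!ext) return { error: 'debug_renderer_info unavailable' };
    return {
      vendor: gl.getParameter(ext.UNMASKED_VENDOR_WEBGL),
      renderer: gl.getParameter(ext.UNMASKED_RENDERER_WEBGL),
    };
  } catch (e) {
    return { error: String(e) };
  }
}
console.log(getWebglRendererInfo());
```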

These missing or broken values don’t necessarily imply malicious behavior. But when fingerprinting systems expect a GPU vendor and renderer and receive nothing (or an exception), they often treat it as an anomaly.

Why it matters

Each of these deviations (slightly altered screen dimensions, a spoofed timezone, missing WebGL info) might not raise a red flag on its own. But when combined, they form a pattern of inconsistencies that increases the detection system’s confidence that something is off.

Modern anti-bot systems don’t rely on a single signal. They correlate raw values with expected ones (based on user agent, platform, etc.), verify whether features match known browser capabilities, and compare results across multiple fingerprint layers. If several signals are missing or inconsistent, even if caused by a privacy tool like LibreWolf, the session may be flagged as risky or automated.

For instance:

  • Screen size checks may be cross-validated using media queries.
  • Fingerprint components are tested for consistency with the declared user agent (e.g. claiming to be Firefox with a given version but missing expected APIs).
  • A timezone mismatch may raise suspicion if it contradicts the IP geolocation.

In isolation, none of these issues are conclusive. But together, they increase the probability that the user is not using a standard, trusted browser, and that’s enough for some detection systems to introduce friction.
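
To make the first of those checks concrete, here is a minimal sketch of cross-validating window.screen against CSS media queries, using the deprecated but still widely supported device-width and device-height features. A mismatch suggests one layer has been spoofed independently of the other, though it can also have benign causes such as fingerprint randomization in a privacy browser.

```javascript
// Cross-check the values reported by window.screen against what media queries match.
function screenMatchesMediaQueries() {
  const widthOk = window.matchMedia(`(device-width: ${screen.width}px)`).matches;
  const heightOk = window.matchMedia(`(device-height: ${screen.height}px)`).matches;
  return widthOk && heightOk;
}

if (!screenMatchesMediaQueries()) {
  // The JavaScript layer and the CSS layer disagree about the screen dimensions.
  console.log('Screen dimensions and media queries disagree');
}
```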

How we handle privacy tools and non-mainstream browsers at Castle

At Castle, we design our detection systems with the diversity of real-world users in mind. Just because someone uses a non-mainstream browser or a VPN doesn’t mean they should be treated as suspicious by default.

That said, the challenge is real: there’s a tradeoff between reducing false positives and not letting fraudsters through. Many bots mimic real users, and some deliberately adopt privacy tools or tweak browser fingerprints to evade detection. This makes it harder to distinguish between a legitimate privacy-conscious user and an attacker.

Some privacy-focused browsers identify themselves, for example, Brave exposes the navigator.brave property. Others, like Firefox forks, reuse the Firefox user agent string but behave differently. Our goal is to detect these differences accurately without overreacting. That means supporting a broad set of browser variants, not just Chrome and vanilla Firefox, while still catching automation and spoofed environments.
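
For example, a script can probe Brave's self-identification roughly as follows; navigator.brave.isBrave() resolves to true in Brave, while Firefox forks offer no equivalent hint and have to be inferred from fingerprint differences instead.

```javascript
// Detect Brave via its own self-identification API.
async function isBraveBrowser() {
  return !!(navigator.brave &&
            typeof navigator.brave.isBrave === 'function' &&
            await navigator.brave.isBrave());
}

isBraveBrowser().then((brave) => {
  console.log(brave ? 'Brave detected via navigator.brave' : 'Not Brave (or the API is hidden)');
});
```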

This isn’t just about browsers. The same principle applies to IP reputation. Users behind VPNs, proxies, or privacy-preserving services like Apple Private Relay may look suspicious in legacy detection systems that rely too much on IP signals. We aim to avoid penalizing users simply for using those tools, especially when there’s no other indication of malicious behavior.

Ultimately, we try to strike the right balance between security, privacy, and usability. It’s not perfect, and it never will be. But reducing false positives, especially on traffic from privacy-aware users, is something we actively care about when designing detection logic.