Research · 3 min read

Introducing Castle’s Research Team

How we think about research at Castle

Bot detection and fraud prevention are adversarial by default. It is a cat-and-mouse game: attackers iterate, defenders respond, and the cycle keeps moving.

AI has accelerated this dynamic on both sides. Attackers use it to quickly develop new bots, scale manual fraud operations, and iterate faster while making their automation more stealthy. Defenders, including us at Castle, use it to speed up experimentation, explore ideas, and shorten the path from signal to production.

But faster iteration alone does not break the cycle.

At some point, staying ahead requires a deeper shift: understanding how attackers actually operate.

That means studying the fraud ecosystem itself.

This is where the leverage comes from. Not just reacting to bypasses, but anticipating them.

At Castle, we do use AI heavily. It helps us test ideas faster and scale research efforts. But it is not a substitute for domain expertise. The core challenge is still to identify signals that survive contact with real attackers, not just clean lab conditions.

In practice, this shapes where we focus our research.

Research at Castle is not separate from the product. Because attackers continuously probe our assumptions and ship bypasses, research directly shapes how detection works in production.

The goal is simple: design systems with the attacker in mind, not just react after the fact.

Meet the Research Team

Our research team includes specialists across fingerprinting, automation analysis, mobile security, and software protection.

Antoine Vastel

Head of Research at Castle, focused on bot and fraud detection. He works on device fingerprinting, behavioral analysis, and large-scale detection systems that identify automated abuse and account fraud. His work builds on more than a decade of research on browser fingerprinting and adversarial detection, combining academic foundations with real-world insight into attacker behavior to develop resilient signals and detection strategies. He regularly publishes technical research on Castle’s blog and contributes to open-source projects related to fraud and bot detection.

Kurt Nistelberger

Software engineer at Castle, focused on mobile security and reverse engineering. He develops and secures the Android and iOS mobile SDKs, building protections against tampering and dynamic attacks. His work leverages deep knowledge of mobile internals, attacker techniques, and obfuscation to deliver strong client-side protections and reliable integrity signals for Castle’s detection systems.

Naif Mehanna

Research engineer at Castle, focused on bot detection and automated detection systems. He uncovers, validates, and productionizes new signals that improve Castle’s ability to identify automated abuse at scale. His work is grounded in a deep understanding of real-world bot behavior and evasion tactics, and he specializes in hardware fingerprinting and high-fidelity device signals to push detection performance.

Nicolas Sama

Software engineer at Castle, focused on software protection. He builds and hardens obfuscation and tamper-resistance systems, works closely with runtime specifications and compiler internals, and applies reverse-engineering expertise to make defenses resilient against real-world attackers. He also turns behavioral and integrity indicators into robust risk scores and detection features.

Roadmap and future plans

This year, we are expanding how we share our internal research.

Bot detection and fraud prevention tend to be more secretive than other areas of cybersecurity. Unlike vulnerability research, where disclosure norms are well established, anti-fraud teams often keep detection techniques confidential.

This caution is understandable. But it also creates an imbalance.

Much of the attacker tradecraft we analyze is already discussed openly in online communities. Attackers collaborate, share tooling, and iterate in the open. Defenders, on the other hand, often work in isolation, rediscovering the same patterns independently.

We believe there is value in closing that gap by sharing research responsibly.

This does not mean exposing sensitive production details or proprietary detection logic. But it does mean publishing technical analyses of the systems and techniques we observe in the wild.

These will not be surface-level summaries. They will be technical deep dives grounded in hands-on experimentation and real-world observations, with the goal of helping practitioners better understand how these systems behave in practice.

By the time this is published, our first deep dive will already be available (link). This is the first of many. We plan to publish consistently and contribute to a more open and informed discussion around adversarial detection.
