
Defining AX: Attacker Experience

"Think like an attacker." Despite being good advice, if not a cliche, it seems to be easier said than done. In hopes that security is not just an afterthought, designers and engineers on product teams are now getting more involved in addressing security concerns early on in the process. Yet when product teams make early attempts to create an attacker persona, they often end up with little more than a caricature in a mask or hoodie. Why does the current approach tend to be so unhelpful, and how can we improve it? Product and security teams should have a better way to define personas for attackers based on data.

Beneath the Hoodie is a Human

Product teams are focused on listening to users and observing their behavior. This approach keeps the team's opinions out of the mouths of the users. How do we keep the biases inherited from our roles from influencing our ideas about attackers? What biases lead to misconceptions about attackers?

First, engineers may be overly interested in the sophistication of specific attacks while largely ignoring their impact or likelihood. As a result, anticipating relatively simple attacks (even those with serious implications) may fall by the wayside. Attackers, among other things, are counting on widespread neglect from developers, and very few attackers have the time or skill to perform more sophisticated attacks.

Second, design roles may be more focused on protecting the seamlessness of UX than on using it as a means of defense. With the tendency to focus on the user as a sympathetic character, security concerns crop up only as a necessary evil. In the ideal design, security would be nearly invisible. To the design team, security tends to be reactive and negative, something that limits a design's potential. As such, an attacker may register as merely an inconvenience worth avoiding, not exploring.

Third, even specialists in offensive security, who can "attack like attackers," center their focus on the modes of attack, not the motives behind them. They do not perfectly mirror attackers, nor are they meant to. The current (and ambiguous) distinction between red teaming and pen-testing leaves room to expand the scope and realism of simulated attacks. These teams are typically more thorough than a real attacker in order to support defense in depth, so their own experience may not reflect an attacker's.

Information security inherited the practice of red team analysis from government agencies. Their concern was far less about the methods of attack and far more about the perspective of the adversary, which was especially important when the adversary came from a different social milieu. (In a similar sense, attackers are wholly different from market competitors.) In an intelligence tradecraft manual, the section on red team analysis (page 31) provides a good starting point for how to empathize with anyone, including an antagonist:

Develop a set of "first-person" questions that the adversary would ask, such as: "How would I perceive incoming information; what would be my personal concerns; or to whom would I look for an opinion?"

It is worth noting that attackers are almost certainly considering these questions when they plan any attack with a social component.

Attackers Already Understand Our Perspective

Attackers often combine several different approaches in an attack, and the vast majority of significant breaches include a component of social engineering. According to Verizon's analysis of 1,600 breaches, 90% involved phishing. Attackers have their own warped version of user empathy: the wording of a phishing email, the appearance of a UI redressing attack, and even the path of a robotic mouse on a target site. As Christopher Hadnagy describes in his book Social Engineering, fraudsters do not need a fully developed psychological profile of their victims; they just need enough information to pick the best approach.

Beyond just thinking like victims, attackers also consider the perspectives of security teams. Where those teams have been replaced or supplemented with signature- and behavior-based tools, attackers anticipate how the defenders try to find them. To thwart security teams, attackers choose the timing of their attacks deliberately. They may pick holiday rushes, when activity is abnormally high, or late nights and weekends, when defenders are less available. Attackers that use bots, which constitute 52% of Internet traffic, have researched how defenders try to identify bots and then work to make those bots appear more human-like.
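To make behavior-based detection concrete, here is a minimal sketch in Python of one illustrative signal: request timing that is too regular to be human. The function name, the 0.1 threshold, and the assumption that per-session timestamps are already collected are all hypothetical; real bot-management systems combine many such signals, which is exactly why attackers study and imitate human behavior.

from statistics import mean, pstdev

def looks_automated(timestamps, min_requests=10):
    """Flag a session whose request timing is suspiciously regular.

    Humans produce irregular gaps between actions; naive bots often fire
    requests at near-constant intervals. The threshold below is
    illustrative, not a tuned recommendation.
    """
    if len(timestamps) < min_requests:
        return False  # not enough data to judge either way
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    average_gap = mean(gaps)
    if average_gap == 0:
        return True  # bursts faster than any human could click
    # Low spread relative to the mean means machine-like regularity.
    return pstdev(gaps) / average_gap < 0.1

# Twenty requests exactly 0.5 seconds apart are flagged as automated.
print(looks_automated([i * 0.5 for i in range(20)]))  # True

An attacker who knows defenders look for this signal will, of course, add jitter; that back-and-forth is the arms race described above.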

Using Dark Patterns to Study Attackers

A widespread controversy in the UX world is that some designers intentionally create bad experiences for the benefit of the business. Known as "dark patterns" or "black hat UX," these design tactics can deceive users into making decisions against their interests, or otherwise deliver a negative surprise. For example, while sites may make it as straightforward as possible to subscribe to a service, their cancellation page may be intentionally hard to find. Annoying mobile popups have "X" buttons so small that we almost always hit the ad instead. An app that is free to download takes us through its 10-minute user onboarding, only for us to discover that there is actually a subscription.

All of these dark patterns, such as the roach motel and the classic bait and switch, challenge the notion that UX design is purely "for the users." Yet the only reason we consider them ethically wrong is the assumption that the users mean well. So what if we could use dark patterns in a separate, targeted way to distract or misguide attackers?

We can detect attackers by thinking about what legitimate users are likely to do and taking notice when someone is not doing those things. Once an attacker's behavior becomes evident, we have the opportunity to guide them down a different journey than our other users. For example, we can implement a dark pattern just for attackers by guiding them into a mock application with mock data, solely for the purpose of observing their behavior and preventing real damage. Honeypots are an established technique of creating dummy hosts that attackers believe are real systems, and some organizations develop entire honeynets to research the motives and methods of attackers.
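As a hedged sketch of what such an attacker-only dark pattern might look like in code, the Python below serves decoy records instead of real ones once a session is flagged, while logging the attacker's requests for later analysis. The is_suspicious() check, the data stores, and the session shape are hypothetical placeholders for whatever detection and storage an application already has.

import logging

logging.basicConfig(level=logging.INFO)

REAL_RECORDS = {"alice": {"plan": "pro", "email": "alice@example.com"}}
DECOY_RECORDS = {"alice": {"plan": "trial", "email": "alice@decoy.example"}}

def is_suspicious(session):
    # Placeholder: in practice, combine bot heuristics, enumeration
    # patterns, impossible travel, and similar signals.
    return session.get("flagged", False)

def get_account(session, username):
    if is_suspicious(session):
        # Same journey, different destination: the attacker sees a
        # plausible response while we observe and contain them.
        logging.info("decoy served to session %s for user %s",
                     session.get("id"), username)
        return DECOY_RECORDS.get(username, {})
    return REAL_RECORDS.get(username, {})

print(get_account({"id": "s1", "flagged": True}, "alice"))

The same routing idea scales up from a single endpoint to a full honeypot or honeynet.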

Journey's End

It’s easy for an attacker to take advantage of an app’s UX. For example, if an attacker attempts an account takeover and the failed login tells them "user not found," they can iterate through a list of accounts to infer which users do exist. That constitutes a direct risk not just to privacy but also to security: if the attacker knows which accounts exist, they can target those accounts more directly to steal their credentials through some other channel. According to OWASP's authentication guidelines, even the HTTP response should be generic to prevent the attacker from gleaning any details.
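In the spirit of OWASP's guidance, a minimal sketch of an enumeration-resistant login check might look like the Python below. The user store, the hashing scheme (a bare SHA-256, used only to keep the example short), and the message text are illustrative assumptions; the point is that both failure cases return the same generic response and perform a comparison either way.

import hashlib
import hmac
import secrets

USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}  # demo store only
GENERIC_ERROR = "Invalid username or password."  # identical text for both failures

def login(username, password):
    supplied = hashlib.sha256(password.encode()).hexdigest()
    stored = USERS.get(username)
    if stored is None:
        # Compare against a random digest so response timing does not
        # reveal whether the account exists.
        hmac.compare_digest(supplied, secrets.token_hex(32))
        return GENERIC_ERROR
    if not hmac.compare_digest(supplied, stored):
        return GENERIC_ERROR
    return "Welcome back."

print(login("nobody", "guess"))          # Invalid username or password.
print(login("alice", "guess"))           # Invalid username or password.
print(login("alice", "correct horse"))   # Welcome back.

A production system would additionally use a salted, slow password hash and return the same HTTP status code and body for both failure cases.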

To make matters more challenging, while severe legal punishments exist, they are difficult or impossible to rely on when attackers are outside the relevant jurisdictions. Most of the cybercrime actors mentioned in CrowdStrike's threat report reside in countries that have no extradition treaties with the countries of their targets. Since attribution alone is challenging, the defender is limited to denying access and reporting the attack to shared threat intelligence.

If anything, we should design with the foreknowledge that we will always be attacked, and that the attacker may get away with it. Thinking this way shifts our mindset from only minimizing attacks to continually learning from them.

The following are some questions that UX designers, product engineers, and security practitioners can review together to form the basis of AX:

  • Based on all the paths of a genuine user's journey, where are our opportunities to discover (1) whether they are human and (2) whether what they are doing is malicious?
  • How would an attacker perceive information in the application?
  • Where would an attacker find avenues for attack?
  • How do attackers know if they've successfully gathered information?
  • What would be an attacker's personal concerns?
  • To whom would attackers look for an opinion if they encounter difficulty?
  • How would an attacker benefit from the attack?
  • How can we alter the attacker experience without breaking it for a genuine user?
  • How can we guide attackers down a path where we learn as much as possible, and they get as little as possible?