
Mirror Chess Is Not Good Cyber – Forbes

Mirror Chess: a great way to lose at the Game of Kings

Micah Ward, Cybereason

This article could easily be subtitled “Why Seemingly Logical Approaches to Security Open New Forms of Secondary Risk and Vulnerability.” Put another way, predictability in any conflict with an intelligent opponent isn’t a good strategy.

My good friend and colleague David Berliner and I were discussing a collection of bad strategies that we see in security, and in searching for an analogy we came up with the notion of “mirror chess.” For those who don’t play, mirroring in chess is a strategy where you simply copy the moves of your opponent. Generally speaking, it is a terrible approach: chess contains built-in asymmetries, and an opponent who knows in advance exactly which moves you will make can exploit them. This, in turn, led to a well-received advanced session at RSA Conference in 2018, but we’ve since advanced our thinking on the subject and on how to use this concept of mirroring to improve security operations.

To those not steeped in all things chess, the game looks symmetrical. When someone says “chess,” an image comes to mind of the ordered starting position of pieces, with black and white facing down across an array of ordered, alternating squares. However, the game is actually asymmetrical in the following ways:

  • The board isn’t symmetrical: with the board set up properly, the square at each player’s near-right corner is always white, and its left-hand peer is always black.
  • The pieces aren’t symmetrical: each queen starts on a square of her own color, so the White king sits to the right of his queen while the Black king sits to the left of hers. Looked at along either axis of the board’s Cartesian grid, king faces king and queen faces queen, but the arrangement has no rotational symmetry.
  • The moves aren’t symmetrical: the players alternate, each taking a move on their own turn, on a timeline slightly slipped from the other’s temporally. White initiates and, with proper play, Black responds. In theory, between two equally skilled “perfect” players, the initiative always favors White.

Mirror chess became our analogy for examining mirroring behavior in the cyber domain as well. We explored the asymmetries in cybersecurity, and they are more profound in cyber conflict than they are in chess: the stakes are higher, the terrain is bigger and more varied, the impacts are (in almost all cases) more significant, and there are many more people in play in any given “game.”

So how is cybersecurity “asymmetrical”? David and I started by looking at all the ways that we see a defensive team acting on auto-pilot in strategy and execution and narrowed it down to three. Feel free to disagree with these or to add your own dimensions, but these are a good start:

  1. Mirror Adversary Behavior: In the dance of move and countermove, we can become predictable and, therefore, exploitable. In the wake of CVE-2018-4878, companies started shutting down Flash, which had a measurable effect on Flash-based services and technology. What if that had been the attackers’ actual goal: a denial of service? What if they had instead done this to Word, or to an ICS program?
  2. Mirroring Past Behavior: This is when the defender either mirrors past successful strategies or is manipulated by the attacker into a predictable course of action. If attackers know that defenders always follow a standard playbook response to a given exploit, they can intentionally trigger it to distract the defenders, extending time-to-live, or even trick the defenders into further damaging the environment themselves. A classic example of this is the growing use of ransomware as a wiper, anticipating that security teams will try to contain and clean up the ransomware attack without realizing this action is destroying evidence of a deeper incident (see, among other sources, Assaf Dahan’s research on MBR-ONI).

    The difference from the mirroring in (1) is who is doing the mirroring and who is exploiting it: here the defender is the one mirroring, in a way that the attacker can predict. This is also true when the attacker sets up an expectation in the defender and then does something different. The best example of this is false-flag operations. After all, defenders generally love attribution (hint: I don’t) because it lets them personify their opponent and focus on idiosyncrasies, but this is inherently manipulable by an organized, intelligent attacker.

  3. Mirroring Other, First-Order-Chaos Fields (e.g. IT): This is entirely an “own goal” scenario; no opponent is required. (A first-order chaotic system, like the weather, does not respond to predictions made about it; a second-order chaotic system does.) Companies will behave like…companies. In other words, if we are used to business processes being managed a certain way (in HR, in marketing, in sales, in R&D, in IT and really any other department), we’ll do the same in security, right? What works as “good business discipline and wisdom” should be applicable everywhere, right? No. Actually, that’s not the case.

    Security is the only part of the business where there is an intelligent opponent. In IT, the opponent is nature and predictable natural processes. In R&D, the opponent is entropy in the balancing act of quality, scope and time (hence shipping is the art of cutting), and so on. But in security we face second-order chaos, as mentioned above, precisely because of that intelligent opponent. (This is true in sales and marketing too, where there is business competition, but not elsewhere in the technology stack.)
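To make the predictability problem in (2) concrete, here is a toy game-theoretic sketch (the channel names, the 30% mixing rate and the scenario itself are hypothetical, chosen only for illustration): a defender who always follows the playbook chases the attacker’s decoy every time, while a defender who randomizes even a fraction of its effort catches a corresponding fraction of the real attacks.

```python
import random

def run_rounds(defender, rounds=10_000, seed=0):
    """Simulate an attacker who has learned the defender's playbook:
    every round it raises a loud decoy alert on channel "A" while
    quietly attacking on channel "B"."""
    rng = random.Random(seed)
    caught = 0
    for _ in range(rounds):
        if defender(rng) == "B":  # defender happened to watch the real channel
            caught += 1
    return caught / rounds

def playbook_defender(rng):
    # Deterministic playbook: always chase the loudest alert (the decoy).
    return "A"

def randomized_defender(rng):
    # Mixed strategy: spend 30% of investigation effort off-playbook.
    return "B" if rng.random() < 0.3 else "A"

det_rate = run_rounds(playbook_defender)    # fully predictable: never catches the attack
mix_rate = run_rounds(randomized_defender)  # catches roughly 30% of attacks
```

The specific mixing rate is arbitrary; the point is simply that any deterministic response rule hands a learning attacker a guaranteed evasion path, while a mixed strategy does not.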

In looking at risks, David and I rapidly came to the conclusion that there are some very obvious (“no, duh”) types of risks associated with mirror complications. For the most part, we don’t care about the simple risks that others already see, so we dove deeper to find the secondary risks. For each area of analysis, we looked at four dimensions: the complication, the predictable impact, the predictable response and the mirror problem.

Let’s start with a simple example and do exactly that: complications from asymmetries in rates of innovation between adversaries and defenders (an instance of (1) above). Here the major complication is that adversaries are generally faster to innovate than three groups: target companies, product suppliers to those companies, and the services that manage security operations for those companies, whether in-house or as-a-service. The predictable impact is an increase in zero-day techniques and exploit development in new, uncovered vectors in the Kill Chain. The predictable response is the lionization of machine learning for its own sake, followed by an increase in automation and machine learning; but the secondary mirror problem this creates is distinct: defenders get in a rut following adversaries, the adversaries’ time-to-live goes up, and ever more aggressive uses of machine learning (whether appropriate or not) continue, creating a vicious circle in effect.

This mention of machine learning is important, incidentally, and probably worth a whole new article at some point. Machine learning is incredibly valuable as a toolkit in our new Data Age, and that is equally true for cybersecurity as for any other domain. However, it is far too often dragged out and thrown around as if it were a single thing capable of anything. It’s become almost a MacGuffin in cybersecurity and is bandied about as a form of magic, and that’s a dangerous thing.

However, machine learning carries mirror complications of its own. The major complication is fourfold: over-learning (overfitting), not having enough data, having too many variables for machine learning to produce significant actionable insight, and the fallacy of equating statistical significance with security significance. The predictable impact is that it can create blind spots and, ironically, increase options for adversaries. The predictable response is to add more machine learning and to deploy traditional countermeasures on top of it in a “hybrid” approach; and the secondary mirror problem is that the data store gets corrupted, output becomes stochastic, risk increases and trust in machine learning erodes.
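The over-learning and too-many-variables complications can be seen in miniature with a pure-Python sketch (a toy stand-in, not any particular security product’s model): a model with as many free parameters as training points fits the noise exactly, then performs far worse off-sample than a trivial baseline.

```python
def lagrange(xs, ys, x):
    """Evaluate the unique degree-(n-1) polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The "true" signal is flat (1.0); training labels carry alternating +/-0.1 noise.
xs = list(range(8))
ys = [1.0 + 0.1 * (-1) ** i for i in xs]

# An 8-parameter model on 8 points fits the training data exactly...
train_err = max(abs(lagrange(xs, ys, x) - y) for x, y in zip(xs, ys))

# ...but swings wildly between the points it memorized.
test_xs = [x + 0.5 for x in xs[:-1]]
overfit_err = max(abs(lagrange(xs, ys, x) - 1.0) for x in test_xs)

# A trivial baseline (predict the mean) stays close to the true signal.
baseline_err = abs(sum(ys) / len(ys) - 1.0)
```

Zero training error alongside large held-out error is exactly the blind-spot pattern described above: the model reports perfect statistical fit while telling you nothing significant about the underlying signal.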

In my next article, which I am creatively thinking of calling “Mirror Chess, Part 2” (unless I think of a better name), I’ll dive a little deeper into some of the mirror complications of so-called next-generation technologies and policy-driven systems. I’ll also look at concrete takeaways for all security practitioners to avoid mirror complications and advise how to be on the lookout for these secondary issues.

