Decoys, take 2

About a year ago I blogged about the strategy of decoys and promised a second post.  Here it is.  Taken almost straight out of the user manual: there are many levels at which our tamper-proofing implementation makes use of decoys.

Some instructions are like red flags: they only occur in “suspicious” circumstances. Besides generating such instructions on the fly, our tool also generates a few of these instructions in a whole bunch of places, adding a handful of extra needles to the proverbial haystack.  Let the attacker guess which needle is the right one.
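To make the needle-in-the-haystack idea concrete, here is a minimal sketch in C. The padding slots, the image layout and the particular byte sequence are all assumptions for illustration; they are not the tool's actual mechanism.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical sketch: plant the same "suspicious looking" byte
     * sequence into several pre-reserved slots of a code image, so the
     * one occurrence that matters hides among look-alike decoys.
     * The fragment below (rdtsc, int3, nop) is only an example of bytes
     * an attacker might flag. */
    static const uint8_t red_flag[] = { 0x0F, 0x31, 0xCC, 0x90 };

    void scatter_decoys(uint8_t *image, const size_t *slots, size_t nslots)
    {
        for (size_t i = 0; i < nslots; i++)
            memcpy(image + slots[i], red_flag, sizeof red_flag);
        /* Only one slot is later wired into a real check; the others
         * exist purely to widen the attacker's search space. */
    }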

Not just individual instructions can be recognized; the same is true for instruction patterns.  In fact, looking for patterns is a common attack method. (While this is not relevant to the topic, rest assured there is quite some effort in the tool to avoid patterns.)  More relevant to the topic: when some vanilla extra code is needed somewhere, the tool can deliberately generate the very patterns it otherwise tries to avoid.
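From the attacker's side, a pattern search is little more than a byte-level signature scan. A hedged sketch, with a made-up signature:

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    /* Sketch of the attack alluded to above: scan a code image for a
     * known byte pattern (the "signature" of some protection idiom).
     * The signature itself would come from code the attacker has seen
     * before; nothing here is specific to our tool. */
    static ptrdiff_t find_pattern(const uint8_t *code, size_t len,
                                  const uint8_t *pat, size_t patlen)
    {
        for (size_t i = 0; i + patlen <= len; i++)
            if (memcmp(code + i, pat, patlen) == 0)
                return (ptrdiff_t)i;   /* offset of the first match */
        return -1;                     /* no match found */
    }

Planting the very patterns the tool otherwise avoids turns each such scan into a list of hits, most of which lead nowhere.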

And of course, extra, never-executed code can be added. Let the attacker spend the appropriate extra effort.  Software engineers can easily forget how resourceful and clever attackers are.  Attackers worth the name will gather good statistics of the instructions, or groups of instructions, in use and find out that some code just doesn’t quite look right. The cleverest way to get decoy code with the right distribution for a specific application is also the simplest one: just let the programmer specify some extra binary input files of his choice; the tool simply uses those instructions and does not worry about any math, statistics, or being fancy.  Using code found in the real application to be protected as decoys, however, would be a rather stupid idea.
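The statistical attack can be sketched roughly as follows. Treating raw bytes as a stand-in for opcodes is a simplification (a real attacker would decode instructions first), and the scoring is invented for illustration:

    #include <stdint.h>
    #include <stddef.h>
    #include <math.h>

    /* Compare the byte histogram of a suspect region against a baseline
     * distribution taken from the rest of the program; a large score
     * means the region "doesn't quite look right". */
    static double distribution_score(const uint8_t *region, size_t rlen,
                                     const double baseline[256])
    {
        double hist[256] = { 0.0 };
        for (size_t i = 0; i < rlen; i++)
            hist[region[i]] += 1.0 / (double)rlen;

        double score = 0.0;                /* small score: looks normal */
        for (int b = 0; b < 256; b++) {
            double d = hist[b] - baseline[b];
            score += d * d;
        }
        return sqrt(score);
    }

Decoy instructions harvested from programmer-supplied donor binaries keep this score low by construction, which is exactly why no explicit modeling is needed.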

Extra code can be more than a bundle of instructions to look at:  we distinguish such simple, never-executed instruction sequences from more clever instruction sequences which get executed but have no real effect.  Unlike the former, such code really runs in the live application.
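An executed-but-ineffective decoy could look roughly like the C fragment below; the constants and the variable name decoy_sink are invented for the example.

    #include <stdint.h>

    /* Illustrative only: a decoy that really runs but changes nothing
     * the program cares about.  It mixes its inputs into a value that is
     * written but never read for any real decision.  The volatile
     * qualifier keeps the compiler from deleting the dead store. */
    static volatile uint32_t decoy_sink;

    void executed_decoy(uint32_t a, uint32_t b)
    {
        uint32_t h = a * 2654435761u;        /* looks like hashing...   */
        h ^= (b << 7) | (b >> 25);           /* ...and like a rotate... */
        decoy_sink = h;                      /* ...but nothing uses it. */
    }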

Don’t forget: the tool only does the tool’s work; to really fool an attacker, the programmer can help by making extra decoys look genuinely attractive.  This touches one of our standard mantras: an application doesn’t need to be changed at all to be protected, but when real first-class, high-level protection is desired, the tool’s efforts can be greatly augmented by a programmer who knows what he is doing.

Should the user add decoys for tamper-proofing?

Using White Hawk Software tools does not prevent users from adding some protection code of their own.

Decoys are among the best defenses. However, the use of decoys can be tricky and may even become dangerous.

Consider a decoy having been introduced. What are the possibilities?

  • The decoy is not detected.
    Nothing happens; no good, no harm. The decoy is still there for possible later detection.
  • The decoy is detected, but confused with the real thing.
    The best possible outcome: the attacker stops searching because he thinks he has found what he was looking for.
  • The decoy is detected and recognized to be a decoy.
    The worst possible outcome: the attacker is reinforced in the belief that there must be something worth hiding, and he will multiply his efforts to search for the real thing.

Use of decoys is a strategic decision which can be made only after evaluating the possible outcomes and their consequences.

Should decoys be protected?  Of course. If a decoy is not protected, doesn’t that just scream that this code is intended for viewing?  On the other hand: don’t protect it too well, or an attacker will never notice the decoy and won’t waste his time on it.
So, how well should decoys be protected?  That is a difficult question; maybe protect a decoy just a tiny bit less than the real code. Or have several decoys and protect them at different levels.

When time permits I plan to blog about how NestX86 itself takes advantage of decoys at different levels.  …Here it is