About a year ago I blogged on decoys, and I promised a second post. Here it is. Taken almost straight out of the user manual, here are the many levels at which our tamper-proofing implementation makes use of decoys.
Some instructions are like red flags: they only occur in “suspicious” circumstances. Besides generating such instructions on the fly, our tool plants a few of these instructions in a whole bunch of places, adding a small collection of extra needles to the proverbial haystack. Let the attacker guess which needle is the right one.
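As a toy sketch of the needle-in-a-haystack idea (the byte pattern, offsets, and counts below are invented for illustration, not taken from the actual tool):

```python
import random

# Illustrative only: a stand-in byte pattern for a rarely seen
# "suspicious" instruction encoding. Not a real tool value.
RED_FLAG = b"\xcd\x2d"

def plant_decoys(code: bytes, real_offset: int, n_fakes: int, seed: int = 0) -> bytes:
    """Copy the red-flag pattern used by the one real check into
    n_fakes additional locations, so the real needle hides among fakes."""
    rng = random.Random(seed)
    buf = bytearray(code)
    # Fakes first, at distinct even offsets so they cannot clobber each other.
    for off in rng.sample(range(0, len(buf) // 2 - 1), n_fakes):
        buf[2 * off:2 * off + 2] = RED_FLAG
    # The real needle goes in last, so it always survives intact.
    buf[real_offset:real_offset + 2] = RED_FLAG
    return bytes(buf)

blob = plant_decoys(bytes(256), real_offset=100, n_fakes=8)
needles = [i for i in range(len(blob) - 1) if blob[i:i + 2] == RED_FLAG]
print(len(needles))  # many candidate needles; only offset 100 is real
```

An attacker scanning for the red-flag pattern now finds many candidates and must examine each one; the defender knows which offset matters.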
Not just instructions may be recognized; the same is true for instruction patterns. In fact, looking for patterns is a common attack method. (While this is not relevant to the topic, rest assured the tool spends quite some effort avoiding patterns.) More relevant to the topic: when some vanilla extra code is needed somewhere, the tool can deliberately generate the known patterns it otherwise tries to avoid.
And of course, extra, never-executed code can be added. Let the attacker spend the appropriate extra effort. Software engineers can easily forget how resourceful and clever attackers are. Attackers worth the name will use good statistics of the instructions, or groups of instructions, in use, and find out that some code just doesn’t quite look right. The most clever solution for getting code with a distribution appropriate to a specific application is also the most simple one: just let the programmer specify some extra binary input files of his choice; the tool will simply use those instructions and not worry about any math, statistics, or being fancy. Using code found in the real application being protected as decoys, however, would be a rather stupid idea.
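To make the statistical attack concrete, here is a minimal sketch (byte values and the distance measure are invented for the example) of how filler whose byte mix differs from the surrounding code stands out, while filler drawn from similar material blends in:

```python
import math
from collections import Counter

def byte_distribution(data: bytes) -> dict:
    """Relative frequency of each byte value 0..255."""
    counts = Counter(data)
    total = len(data)
    return {b: counts[b] / total for b in range(256)}

def distance(p: dict, q: dict) -> float:
    # Euclidean distance between two frequency vectors; a real
    # attacker would use better statistics, this is just a toy.
    return math.sqrt(sum((p[b] - q[b]) ** 2 for b in range(256)))

# "Real" code: a heavily biased byte mix, as compiler output tends to be.
real = bytes([0x89, 0x8B, 0x48, 0xE8] * 200)
# Decoys built from the same kind of material blend in...
good_decoy = bytes([0x8B, 0x48, 0x89, 0xE8] * 50)
# ...while uniform filler looks statistically wrong.
bad_decoy = bytes(range(256))

p = byte_distribution(real)
print(distance(p, byte_distribution(good_decoy)))  # near zero
print(distance(p, byte_distribution(bad_decoy)))   # clearly larger
```

This is exactly why letting the programmer supply extra binary input files works: the decoys inherit a realistic distribution for free, with no math needed in the tool.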
Extra code might be more than a bundle of instructions to look at: we distinguish such simple extra instruction sequences from more clever instruction sequences which also get executed but have little effect. Unlike the former, such code will really run in the live application.
Don’t forget: the tool only does the tool’s work; to really fool an attacker, the programmer can help by making the extra decoys look genuinely attractive. This touches one of our standard mantras: an application doesn’t need to be changed at all to be protected, but when real first-class, high-level protection is desired, the tool’s efforts can be greatly augmented by a programmer who knows what he is doing.
A simple measure against crackers is detecting whether the program is being debugged. There is a race between defenders adding new recognition features and attackers finding ways to fool them. In fact, White Hawk Software is under no illusion that we can always recognize when an attacker uses a debugger. So we use a large number of different methods, annoying the attacker so that he never knows whether he is finished after clearing the next hurdle.
For an extra kick, the WHS-protected program mounts its defense in such a subtle way that the attacker might not recognize the defense at all. Maybe a wrong algorithm is chosen, or the precision becomes dismal… Sadly, we cannot create subtle misbehavior automatically: in practice it turns out either not to be subtle enough, or not to disturb the attacker at all. Real subtlety is the domain of manual protections and is therefore restricted to very high-end defenses. Here, White Hawk Software can give the user enough control to create and integrate such a protection manually. One thing, however, we can do automatically: add a delay between detection and reaction, so the attacker might not see the spot where his presence was detected. Another trick against scripted attackers is not to trigger every time but to use some randomness.
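The delay-plus-randomness idea can be sketched in a few lines. Everything here (class name, delay, probability) is invented for illustration; the point is only the separation of the detection site from the reaction site:

```python
import random
import time

class DelayedResponse:
    """Detect now, act later, and only sometimes."""

    def __init__(self, min_delay: float, trigger_prob: float, seed=None):
        self.detected_at = None
        self.min_delay = min_delay
        self.trigger_prob = trigger_prob
        self.rng = random.Random(seed)

    def note_detection(self):
        # Record the event silently; do NOT react at the detection site,
        # so the attacker cannot tell where he was spotted.
        if self.detected_at is None:
            self.detected_at = time.monotonic()

    def should_misbehave(self) -> bool:
        # Called later from unrelated code paths. React only after the
        # delay has passed, and even then only with some probability,
        # so a scripted attacker cannot correlate cause and effect.
        if self.detected_at is None:
            return False
        if time.monotonic() - self.detected_at < self.min_delay:
            return False
        return self.rng.random() < self.trigger_prob

guard = DelayedResponse(min_delay=0.05, trigger_prob=0.5, seed=1)
guard.note_detection()
print(guard.should_misbehave())  # False: still too soon after detection
```

In a real protection the equivalent of `should_misbehave` would be sprinkled through many code paths, each far from the detection point.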
We want to present some tricks for recognizing a debugger. The absolute last thing we want to do is educate attackers, and that is indeed not happening: rest assured, attackers already know what is written here; it’s ok to let the other people know too. By the way: when researching known anti-debugger tricks, we found a large number of them to be broken. Traditionally it is the good guy who uses a debugger, and malware that tries to prevent debugging. For this reason (at least we occasionally think so) textbooks describe detection methods which don’t quite work completely. We will stay within that tradition: we will stay far too general and high-level to risk exposing new information to criminals.
There are very different classes of debuggers. For example: breakpoints can be created by modifying the code, or implemented in hardware. A debugger can stop all threads, or just one single thread. An emulated processor could be used. The program could be run on special hardware built for debugging. No single measure will recognize them all.
Finally, here are some ways to detect that your program may be being debugged:
- The operating system has an API to detect debuggers. Look for that call.
- There are some bits set by debuggers which can be checked directly without using the API. (E.g. Windows Process Environment Block)
- Recognize presence of trap instructions. (And don’t trip over data with the same encoding.)
- Special-purpose breakpoint registers can be “used” by us, so use by a debugger becomes conflicted.
- Debuggers themselves can have bugs which are known and exploited by malware. (But eventually those will become fixed.)
- Enumerate the running processes and recognize a few known debuggers.
- Recognize the user-interface of a debugger on the monitor.
- Check the execution timing, recognize the slowdown from single stepping. (But ignore page faults.)
- In Windows, only one debugger can be “attached”: attach yourself first.
- Execution inside a special sandbox might be recognized by the environment looking too simple.
- Check or damage the content of interrupt or trap vectors used for debugging.
- Create an artificial bug, and try to catch that bug. A debugger may snatch the bug away, exposing the bug to the user and itself to the protected application.
- Certain special instruction sequences cannot be single stepped. Does the debugger know?
- Some measures use unusual instructions, being visible like red flags. We use decoys to hide the ones that are really used for anti-debug in a pile of fakes.
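A couple of the checks above have simple, runnable analogues. The sketch below is hedged: a real protection would use native mechanisms such as the Windows `IsDebuggerPresent` API or the PEB `BeingDebugged` flag; here we show the same two ideas (an attached-debugger check and a timing check) at the Python level, with an arbitrary illustrative threshold:

```python
import sys
import time

def tracer_attached() -> bool:
    # Python-level debuggers (pdb, most IDEs) install a trace function.
    # A native check would instead query OS facilities such as Windows'
    # IsDebuggerPresent, or read the PEB BeingDebugged flag directly.
    return sys.gettrace() is not None

def looks_single_stepped(threshold: float = 0.5) -> bool:
    # Single-stepping slows a tight loop by orders of magnitude.
    # The 0.5-second threshold is an invented value for this sketch;
    # a real check would calibrate it, and ignore page-fault noise.
    start = time.perf_counter()
    total = 0
    for i in range(100_000):
        total += i
    return (time.perf_counter() - start) > threshold

print(tracer_attached())       # False in a normal, undebugged run
print(looks_single_stepped())  # False in a normal, undebugged run
```

Either check on its own is trivial to defeat; the point of the whole list is to pile many such hurdles on top of each other, hidden among decoys.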
A few references: any internet search will turn up these anti-debugger tricks and many more methods. The ones above are some we have used.