Frequently Asked Questions

 

Everything you ever wanted to know about code tamper-proofing (but couldn’t find any answers)!

Because White Hawk Software protection can be directed very precisely, the performance cost of protection can be kept negligible. Our tool can be set up so that inner loops stay unmodified while preparation and finalization code is strongly protected. The actual performance loss of the protected program depends on what is protected and how the protection is done, and it can be made very small.

No. Our tool is sufficiently automated that a software engineer can quickly learn how to use it. A protection engineer might know more about the actual protections, but a software engineer usually understands the application better and knows what needs to be protected and what is performance critical. The software engineer who created the application has an advantage. Nevertheless, for the highest grade of protection, a software engineer and a protection engineer might work together.

The user interface parts of an application are a typical attack point for a cracker.

Furthermore, it is quite difficult to analyze the usage of all text strings. Because White Hawk protection can be guided from source code, strings can be encapsulated in a way the protection engine recognizes, so that aliasing can be excluded. This is not yet available in our BETA.

No, it is not. There are some advantages to source code modifications, but the disadvantages dominate. Protecting object files can make an attacker’s analysis rather difficult; each flag might (or might not) carry essential information. Indirect jumps are far more easily done at the machine code level. Source languages are mostly aimed at clarity rather than its opposite, obfuscation. While White Hawk Software doesn’t use source code as its input, the customer can (optionally) guide the protection with explicit source code.

Only to some extent. The final released application will not be debuggable.

Of course the application should be debugged as early as possible. It is possible to apply a partial protection, which comes much closer to the released application than the unprotected version does. That way the protection process (as specified by the user) can be debugged in the normal way. Differences in behavior from the finished application will be limited to timing and code layout differences, and very few bugs originate in those layers. It is possible to have user errors in the actual protection script, but those become hard to track down only in rather complex, high-level scripts.

Yes. For example you can protect parts of a library. You can link additional object files, independent of whether these are part of the application or are extra parts of the protection. Customers can write code snippets which interact with the protection.

Yes. Some protection mechanisms rely on sequential behavior; others do not. The user must give correct behavior information to the protection tool so that only valid mechanisms are used. In fact, some of the more secure protection tests can make explicit use of knowledge of the threading. Unless explicitly disabled, the protection tool will make use of threads even for a previously single-threaded application.

Certainly.

Not automatically. It is possible to protect each program on its own. Eventually we will provide more powerful primitives which interweave the processes with each other. An operating system with good proximity detection will most likely run the watchdog thread on a processor of its own.

Some languages will work, others will not. The analysis of the binary code makes use of knowledge of the particular compiler. When a compiler generates code patterns which the protection tool does not recognize, it is possible that the code cannot be analyzed. It is easy to implement the necessary heuristics for unusual compilers, but you must ask White Hawk to do that. There are other limits, however: once code is protected, not even White Hawk Software will be able to analyze that code and run a second protection over it. Furthermore, assembly language programs tend to use concepts which are difficult to analyze; for example, self-modifying code and jump targets without labels will thwart automatic analysis. That is not as surprising as one would think: there must have been a reason to use assembly code in the first place.
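As a rough illustration of why jump targets that only exist at run time defeat purely static analysis, here is a small generic C sketch (not White Hawk code, and far simpler than the hand-written assembly cases mentioned above): the call target is computed from input, so no fixed branch destination appears in the code.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative only: a dispatch whose target is computed at run time,
     * so a static analyzer cannot see a fixed, labeled branch target. */
    static void op_add(void) { puts("add"); }
    static void op_sub(void) { puts("sub"); }
    static void op_mul(void) { puts("mul"); }

    int main(int argc, char **argv)
    {
        void (*ops[])(void) = { op_add, op_sub, op_mul };
        /* The index comes from external input, not from a compile-time label. */
        unsigned idx = (argc > 1) ? (unsigned)atoi(argv[1]) % 3u : 0u;
        ops[idx]();      /* indirect call: target unknown until run time */
        return 0;
    }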

No such case is known to us yet. Our protection is very strong.

Yes, that is true, in theory. There exists a proof that there is an asymptotic upper limit on protection strength (which looks too low). In reality, the cracking effort against a good protection becomes enormous long before the theoretical limit is reached.

We don’t know; we haven’t seen a quantum computer yet and don’t know what its real limitations will be. Once quantum computers are available a lot of algorithms will need to be modified. Protections (and cryptanalysis) are no exceptions. For the near future we expect quantum computers to be used for “standard” algorithms. We believe it will take somewhat longer until quantum computing can be applied to such unstructured problems as reverse engineering protected code.

April 2016: NIST published “Report on Post-Quantum Cryptography” (NISTIR 8105): http://nvlpubs.nist.gov/nistpubs/ir/2016/NIST.IR.8105.pdf
That is a summary; it has a substantial list of scientific references. 

Yes. White Hawk Software protection is performed on binary code. Protection of code for different computer architectures will require different protection tools. White Hawk Software plans to extend the number of platforms as resources allow.

Yes and no. At this time we have no transformations which specifically hide vector instructions (of course we do have encryption). However, we can easily obfuscate program flow, indices, and loop counters.
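To make “obfuscating a loop counter” concrete, here is a generic C sketch (our assumption of a simple affine encoding; not one of our actual transformations): the visible counter never takes the plain 0..N-1 values, and the real index is only recovered locally where it is needed.

    #include <stdio.h>

    /* Generic sketch: the visible counter k is an affine transform of the
     * real index i (k = 7*i + 13), so the plain progression 0..N-1 never
     * appears directly in the loop header. */
    int main(void)
    {
        enum { N = 8 };
        int data[N] = {0};

        for (int k = 13; k != 7 * N + 13; k += 7) {
            int i = (k - 13) / 7;      /* recover the real index locally */
            data[i] = i * i;
        }
        for (int i = 0; i < N; i++)
            printf("%d ", data[i]);
        putchar('\n');
        return 0;
    }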

Yes. Even hardware protection needs to have encryption keys hidden somewhere. The transition between trusted and untrusted areas is a good candidate for White Hawk software protection. The substantial complexity of trusted hardware makes it likely that somebody might find software bugs that let them enter the system. Hardware protection takes the viewpoint of the processor: it typically protects the processor from an application. Remember that White Hawk Software protection is different: we protect an application from the processor (or rather, its owner).

Note that traditional encryption and White Hawk Software protection protect against very different attack scenarios. If your scenario can be protected with encryption alone, luck is on your side. In most cases protecting the encryption key is a good idea. White Hawk protection always makes use of encryption, however.

White Hawk Software at this time does not have classical white box cryptographic methods. If you add white box cryptography from another source, we strongly recommend applying tamper-proofing on top of it. Classical white box cryptography alone is broken, but with added tamper-proofing we consider it safe.


The complexity of a protected application makes it more difficult for an attacker to gather reliable, repeated information about the protected parts. If this kind of attack is a concern for an application, we recommend placing constant tables into heap memory. With White Hawk doing heap fuzzing, statistical analysis becomes very difficult.
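A minimal sketch of that recommendation (generic C, not output of our tool): instead of reading a constant table at a fixed address in the binary’s data section, the application copies it into heap memory at startup, so its location varies between runs and can be moved further by heap fuzzing.

    #include <stdlib.h>
    #include <string.h>

    /* The sensitive constants live at a fixed address only long enough to
     * be copied; lookups then go through a heap pointer whose address varies. */
    static const unsigned char kTable[16] = {
        0x3a, 0x91, 0x5c, 0x07, 0xe2, 0x48, 0xb3, 0x6d,
        0x19, 0xf4, 0x80, 0x2b, 0xc6, 0x55, 0x0e, 0x97
    };

    static unsigned char *g_table;   /* heap copy used at run time */

    void table_init(void)
    {
        g_table = malloc(sizeof kTable);
        if (g_table != NULL)
            memcpy(g_table, kTable, sizeof kTable);
    }

    unsigned char table_lookup(unsigned idx)
    {
        return g_table[idx & 15u];
    }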

White Hawk has the concept of code cloning. During obfuscation, the clone and its original diverge. At runtime a (pseudo) random number generator is used to decide which branch to execute. This aspect was implemented precisely to prevent dynamic instrumentation from succeeding.
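A toy sketch of the idea (our tool works on machine code, not C source, and its clones diverge far more): two functionally equivalent but differently structured clones of the same computation, with a pseudo-random choice at run time deciding which one executes, so an instrumented trace differs from run to run.

    #include <stdint.h>
    #include <stdlib.h>

    /* Clone A: straightforward forward checksum. */
    static uint32_t sum_a(const uint8_t *p, size_t n)
    {
        uint32_t s = 0;
        for (size_t i = 0; i < n; i++)
            s += p[i];
        return s;
    }

    /* Clone B: same result, different structure (walks backwards). */
    static uint32_t sum_b(const uint8_t *p, size_t n)
    {
        uint32_t s = 0;
        while (n-- > 0)
            s += p[n];
        return s;
    }

    /* A PRNG picks which clone runs; both compute the same value, but a
     * dynamic trace sees different code paths on different runs. */
    uint32_t checksum(const uint8_t *p, size_t n)
    {
        return (rand() & 1) ? sum_a(p, n) : sum_b(p, n);
    }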

White Hawk Software protections have elements aimed at making static analysis difficult, and other elements which make dynamic analysis difficult. A White Hawk Software protection uses several interlocking defenses. To thwart such attacks we recommend making use of stack-fuzzing, heap-fuzzing and code-cloning.

We call this the ‘zipper effect’. With thousands of code snippets protecting the application and each other, the attacker is unlikely to find a meaningful order. With the use of the background watchdog this becomes very difficult for an attacker. White Hawk’s technology creates synergies between protection measures, so that the overall protection is greater than the sum of the individual measures.

White Hawk Software has one patent application pending and more in preparation. The White Hawk Software advantage comes from the clever composition of many different protection elements and the deep integration with machine code. Most White Hawk protections fall under trade secrets.

Yes. In fact protecting at the highest optimization levels is a White Hawk competitive advantage enabled by our patent (pending).

Technically, of course; legally, we don’t know. We do know some advanced techniques which are beyond what can be used in White Hawk Software for now. We may also be limited by the requirement to make protections easy enough for engineers to apply. On the other hand, some of our transformations are among the most powerful ones.

Yes, but that is in no way sufficient to break a White Hawk Software protected application.

Of course this sounds a little provocative, but we have heard this question asked in earnest! The solution is relatively simple: use damage and repair aspects, so that the whole binary code is never completely in clear text in memory at any particular time.

Yes, but this is not sufficient to break a White Hawk Software protected application.

White Hawk Software detects when the application is running inside most debuggers. Timing checks, multi-threading and watchdog threads take care of other debugger attacks.

See Troubles with Debuggers.

White Hawk Software has means to detect simulators and stop execution of the application. To become dangerous, a simulator would need to be quite complete, including timing and multi-processing.

Not as part of the protection; it can be done the same way as for an unprotected application. The difference our protection makes is that the code cannot be modified.

That is so because an attacker of a crypto algorithm might eventually figure out the obscure algorithm. However, the obfuscations of the White Hawk Software tools are parametrized and combined. It is not that each individual transformation cannot be cracked; it is the combination of transformations that makes the application very difficult to crack. This does not match the description of security through obscurity, which implies human-made obscurity.

Because an element of (pseudo) randomness is included, fuzzing has limited success against a well-protected application.

No. These are different tools; they serve different purposes. In fact, using both can be synergistic.

Static analysis: a tool to find and remove bugs. Bugs are a known way for a hacker to enter an application, but not all bugs will be found, and there are other ways to attack an application as well.

Tamper-proofing: will not eliminate bugs, but the application (together with its bugs) will be protected. Furthermore, bugs will be harder to find.

No problem. White Hawk protects object code and has only very basic requirements about the compiler to be used.

By using a different “seed” value for a protection, our tool can generate completely different protections from the same input object file.
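A simplified sketch of what seed-driven diversity means (generic C, not our tool’s actual interface or decision logic): the same seed reproduces the same choices, while a different seed yields a completely different, yet equivalent, sequence of transformation decisions.

    #include <stdint.h>
    #include <stdio.h>

    /* Tiny deterministic PRNG (xorshift32); a protection run is reproducible
     * for a given seed and entirely different for another seed. */
    static uint32_t next(uint32_t *state)
    {
        uint32_t x = *state;
        x ^= x << 13; x ^= x >> 17; x ^= x << 5;
        return *state = x;
    }

    int main(void)
    {
        uint32_t seed = 0xC0FFEEu;   /* change the seed, change every choice */
        uint32_t s = seed;
        const char *variants[] = { "clone", "inline-decoy", "reorder", "encrypt" };

        for (int site = 0; site < 6; site++)
            printf("site %d -> %s\n", site, variants[next(&s) & 3u]);
        return 0;
    }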

White Hawk is proud of its extension mechanisms. You might actually not even need an extension: a protection script can make use of shared memory and can include customer functions in its core functionality.

Yes. We think our solution is unique. This is particularly useful for protections at the highest strength levels.

See also “Can a protection change the behavior of an application?” and “Can I link and use explicit customer behavior with the protection?”.

Your application is still protected. The tool has a large variety of mechanisms, builds code on the fly, and makes extensive use of PRNGs. Decoy code can be user generated. Access to our source code might answer an attacker’s occasional question, but it does not weaken the protection substantially. Users must, however, be careful to protect the protection logs and the source code of a protected application.

No, you don’t HAVE to modify the source code. However, for a good protection you might want to add “markers” to tell the protection about the source code. The protection tool does not need access to the source code; it finds the markers in the compiled binary. The markers are removed and have no performance cost besides a possible “register spill”. (Patent pending.)
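To make the idea concrete, here is a purely hypothetical C sketch of what a marker might look like; the macro names and the mechanism are our illustration for this FAQ, not White Hawk’s actual interface. A call to a distinctive, deliberately non-inlined function survives into the object code, where a post-processing tool can recognize it, use it to locate the region, and then strip it out.

    /* Hypothetical illustration only: markers the build leaves in the binary
     * so a post-processing tool can locate a region, then remove the calls. */
    void __attribute__((noinline)) wh_marker(int id) { (void)id; }

    #define PROTECT_REGION_BEGIN(id)  wh_marker((id) * 2)
    #define PROTECT_REGION_END(id)    wh_marker((id) * 2 + 1)

    int check_license(const char *key)
    {
        PROTECT_REGION_BEGIN(7);      /* visible to the tool in object code */
        int ok = (key != 0 && key[0] == 'A');
        PROTECT_REGION_END(7);        /* both calls are stripped later      */
        return ok;
    }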

It sounds useful, but writing a message right away would open an attack vector, telling an attacker exactly where to get in.

In 95% of all protections this is undesirable.

One obvious behavior change is a performance loss. This is, however, under the engineer’s control: our tool gives precise control over the locations used for a protection, which can be used to limit performance costs.

Dangerous are programming errors where correct behavior erroneously depends on the timing of the application instead of on correct code synchronization. This can lead to “bugs”; however, such bugs were really already in the application before it was protected.

Another example is a simple bug in the protection itself, when the engineer uses “damage” and “repair” aspects: consider some code that is “damaged”, but the engineer forgot to issue a “repair” before that code is executed.
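A minimal sketch of that pitfall, using plain data rather than machine code to keep it self-contained (the real aspects operate on binary code): “damage” XORs bytes the routine needs, and “repair” must undo it before the routine runs; forgetting the repair produces wrong results even though nothing crashes.

    #include <stdint.h>
    #include <stddef.h>

    static uint8_t secret[4] = { 10, 20, 30, 40 };   /* stands in for code bytes */

    static void damage(void) { for (size_t i = 0; i < 4; i++) secret[i] ^= 0x5A; }
    static void repair(void) { for (size_t i = 0; i < 4; i++) secret[i] ^= 0x5A; }

    static int use_secret(void)                      /* the "damaged" region */
    {
        int s = 0;
        for (size_t i = 0; i < 4; i++) s += secret[i];
        return s;                                    /* expected: 100 */
    }

    /* Correct ordering: repair happens before the region is used. */
    int correct_flow(void) { damage(); repair(); return use_secret(); }

    /* Buggy ordering: repair was forgotten, so the result is wrong. */
    int buggy_flow(void)   { damage(); /* repair() missing! */ return use_secret(); }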

Lastly, and very rarely: certain protection primitives CAN change the program semantics on purpose. This is very powerful, very advanced, and highly unusual. In general we do NOT recommend doing this, and it will never happen by accident. Using such mechanisms has the high cost of not being debuggable before the protection is applied. As a simple example: it is possible to add tracing into a protection.

Yes, you can, but it is not necessarily in your best interest. Our tool “is just a tool”; you can use it in many different ways.

Not touching the source code is desirable when protecting legacy code with limited effort. An example of a primitive which doesn’t require source code knowledge is simply running a watchdog thread that checks for unexpected modifications and for traces of being debugged. You don’t have to program such a thread; it is a “primitive” included with the protection tool.
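As a rough, generic illustration of what such a watchdog does (a sketch assuming a POSIX system; the tool’s built-in primitive is more sophisticated and also checks for debuggers): a background thread periodically re-checksums a protected byte range and aborts if the value ever changes.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Stand-in for a protected byte range; a real protection would checksum
     * the machine code of selected functions instead of this array. */
    static const uint8_t protected_region[64] = { 0xde, 0xad, 0xbe, 0xef };

    static uint32_t checksum(const uint8_t *p, size_t n)
    {
        uint32_t h = 2166136261u;                    /* FNV-1a hash */
        while (n-- > 0) { h ^= *p++; h *= 16777619u; }
        return h;
    }

    static void *watchdog(void *arg)
    {
        uint32_t expected = *(uint32_t *)arg;
        for (;;) {
            if (checksum(protected_region, sizeof protected_region) != expected)
                abort();                             /* tampering detected */
            usleep(50 * 1000);                       /* re-check periodically */
        }
        return NULL;
    }

    void start_watchdog(void)
    {
        static uint32_t expected;
        expected = checksum(protected_region, sizeof protected_region);
        pthread_t t;
        pthread_create(&t, NULL, watchdog, &expected);
        pthread_detach(t);
    }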

At the opposite end, a complex, involved protection is desirable for high-level protection of critical infrastructure.

By the way:  An engineer can add markers into the source code, but that does not imply that the tool will access source code directly.  It detects the markers in compiled code (Patent pending).

In addition to using markers, it is also possible to denote ranges in the application code by using function names.

Our tool would be perfect for hiding what an application does… Virus writers use obfuscations all the time; many obfuscation techniques were developed by them to hide malware. Criminals do not care about robustness and stability; they just want fast obfuscation. That may be why WHS has problems reusing obfuscations as we find them on the web.

The deep reason White Hawk Software technology is actually hard to misuse is quite simple: White Hawk Software’s obfuscations cannot be hidden. They are quite large and strong, and may need setup and decoys… This is unacceptable to criminals; for them, an obfuscation and its strength are secondary to the ability to hide its presence. White Hawk Software may hide some key aspects of a program, but the very fact that code has been obfuscated is visible, almost in plain view.

Lastly: criminals prefer not to be found, and our customer list is unlikely to be a desirable place to hide. We keep customer data close to our chest and, upon request, won’t publicize it, but we have no plans to hide customer identities from law enforcement.

See also our blog post “What If”.
