North Korea says: No, We Didn’t Hack Sony.

These days a new cybercrime story gets reported every week. This week’s news is about an attack on Sony Pictures Entertainment which, among other problems, made movies publicly available and created substantial damage.


An important aspect of most cybercrime is the fact that hacks usually cannot be attributed to their real source. Just because a computer was attacked by another computer, maybe in North Korea or maybe somewhere else, doesn’t establish the real source. That computer may itself be an innocent victim, used in turn by another computer in some other part of the world. There can be a chain of dozens of computers; even Aunt Emma’s computer may be part of such a chain. Therefore it is a very bad idea for most people to start counterattacking cyber criminals by themselves.

This attack differs from old-fashioned cybercrime in what it tries to do. It is not simply stealing money, where the loss of the victim matches the gain of the criminal; it is not simply leaking credit card numbers. The loss to Sony is “strategic”: it isn’t what is gone and has been stolen. The loss in this case directly hurts Sony’s ability to do further business. As of today, such crimes are commonplace in newspaper talk about state actors, cyber-“terrorism” and in hype like cyber-“war”. What is new and unusual here is that such losses are inflicted on normal, commercial business enterprises.

Adding several layers of protection could significantly minimize the risk of such attacks. Obfuscation of your code as provided by White Hawk Software can be one of these protective layers.


Eat your own dog food

We reached an interesting waypoint: “We’ve got to eat our own dog food.” We created a protection for the protection-tool. First the obvious: we need tamper-proofing for the same reason everybody else does. But the second point is more interesting: just about every customer will ask us whether we protect our own tool. We simply have to do it.

Of course we would have a good excuse: the majority of the NestX86 protection-tool is written in a high-level, byte-coded language, while our product is about protecting machine code in object files. That may be a very good excuse technically, but to our customers it may nevertheless feel lame.

We can’t really handle byte codes, but we still had to fake it. Instead of writing another protection-tool, we made a custom protection. It is a prime example of how good expert work and protection design can make up for the lack of automatic tools. We also learned the lesson we want our customers to learn: one can develop one’s own protection, but buying a tool is much cheaper. (We knew that before.)

We know our program. We know which parts we really want to protect. We know which parts might give the most insight into the internal workings of the important parts. Our tool’s performance is so good that we can easily give away some computer cycles for the protection. Doing it manually, we can add some devious decoys: not random decoys, but decoys aimed at the particular circumstances. In addition we can protect the base libraries together with the real tool. The sheer quantity of protected code should discourage most attackers. (How can an attacker crack the binary if the source code with all its documentation is still hard to comprehend?) We don’t randomly rename identifiers; we make sure our renaming causes confusion and some aliasing. A small amount of artificial sub-classing, use of undocumented private algorithms and some multithreading should dot the i’s and cross the t’s. Not yet having a protection tool doesn’t mean we didn’t create a number of special-purpose hacks that modify our source code before compilation.

For now we skip semantics-preserving transformations, until we have a decent automatic analyzer and composer for byte codes. If we weren’t a penny-pinching startup company, we would have bought a tool from one of our competitors. The free tools we tried were too difficult to use for our program base, and the grin of a competitor’s salesperson selling us a tamper-proofing tool would have been completely unbearable.

Go try to hack our tool while there is still some possibility; with the next release, or maybe the one after that, you can forget it. Sorry, this is rhetorical only: for legal reasons, the license for our tool prohibits reverse engineering.



Good bye Windows XP

Official support for Microsoft Windows XP has ended.  Panic begins.

But what if you still need to stick to an XP platform? Maybe the software is just a small part of a complex system? For software producers who, for whatever reason, must still create code for an old Windows XP platform after its end of life, there is a solution: White Hawk Software helps make the software hacker-proof.

Even when the application software has been tamper-proofed, hackers might still take over an XP machine. The protected software may go down with the machine, but it will not keep running and produce wrong results.

If the machine can’t be protected, try protecting its software.

Model, simulate, theorize – or just do it?

A recent article in Wired magazine explained how some very smart mathematicians had theorized for years that there was a way to use encryption techniques to protect executable code as well as data. As far as I can tell, most of them never got around to it, as the mathematical simulation and proof that this would work was estimated to be a multi-year project. However, some new research and concept tools in this area are close to coming to fruition, hence the article.

Software code protection tools
Coopers Hawk in nest – thanks to Cornell Univ.

But what if someone with a lot of experience in obfuscation and related tools created a new, comprehensive tool set that uses a variety of techniques simultaneously to properly protect sensitive parts (or the whole) of a software system? Tools that can balance speed, protection and size? Tools that can protect object code as well as work on source code? That is what Dr. Jacobi has been doing for White Hawk for the past three years with intense applied science, starting from a clean slate. He also previously worked for a major vendor in this area, where completely different techniques were applied.

He has developed a software protection technique based on random control of novel obfuscations, mutually checking protection aspects, and algorithmic combinations of diverse code primitives. We are busy packaging the X-86 version of this as NEST-X86 for demo and beta testing in late March 2014. Forget trying to model its strengths and weaknesses, as each company will implement their chosen protection plans in different ways with this tool set.

Do think about signing up to be a beta tester or even a beta breaker – if you can.

(C) Copyright 2014 White Hawk Software

Cyber Threats to the Aviation Industry – Front and Back

A long time ago, long before there was an internet or even multi-user computers, I was so fascinated by how one air traffic control system could simultaneously handle all the planes for the three New York area airports that I wrote a paper on it as part of my postgraduate degree program. Recently the SenseCy blog called out some highlights from the AIAA official release of “A Framework for Aviation Cyber Security”, which discusses the connectivity challenge in a networked world.

With the enormous number of computers involved today in air traffic control, airport and ground control, as well as on-board control, the concerns about cyber security in this special industry have expanded dramatically. Perhaps because air travel is the primary method of international travel, and because other transportation systems don’t fall out of the sky when they fail, more attention needs to be paid to aviation than to rail, ship or car travel (not that those aren’t susceptible to attacks too).

Since the development of drones and their much wider deployment in recent conflicts, even Joe Public knows you can take control of planes from the right computers. This has also been portrayed in movies and TV shows. My concern is that all the attention seems to be paid to protecting these “front end access” systems. But what if malicious code has infected the back-end systems or embedded code, for example? Such infections may lie dormant for a long time and then cause a lot of problems. A virus made its way into the International Space Station via a simple USB drive an astronaut brought aboard, so this is not just a theoretical discussion.

I hope and trust that some more attention will be paid to making back-end and embedded systems more tamper-proof before I next leave for the airport.

(C) Copyright 2014 White Hawk Software

Ensure Your Space Programs are Tamper-Proof too!

Picture of International Space Station that itself was infected with a virus. Photo thanks to Wiki Commons

Renowned Russian virus and security expert Eugene Kaspersky revealed recently that a virus had been discovered even on board the International Space Station, despite it being a long way from the nearest internet node. It turns out an astronaut accidentally took the virus along on a USB “thumb” drive for use on one of the many laptops deployed in the space station. See the full story in the International Business Times.

The moral of this story is that you don’t have to be attached to the internet to be infected. So don’t just run virus checkers and hope for the best: mission-critical software should all be tamper-proof, so that no malware can hook in and cause any damage whatsoever.

(C) Copyright 2013 White Hawk Software