
Avoiding AV Detection

As a follow-up to my post on the USB Stick O' Death, I wanted to go a little more in depth on the subject of AV evasion. Following my release of (some of) my code for obfuscating my payload, it became apparent that researchers at various antivirus companies read my blog (Oh hai derr researchers! Great to have you with us! I can haz job?) and updated their virus definitions to detect my malicious payload. To be perfectly honest, I was hoping this would happen, as I figured it would be a teachable moment on just how ineffective current approaches to virus detection can be, would give readers a real-world look at how AV responds to new threats, and would demonstrate one of the possible approaches an attacker could take to evade AV software. My main goal in this research was to see how much effort it would take to become undetectable again, and the answer was 'virtually none'.

In this post, I will first look at how I was able to evade detection by many AV products simply by using a different compiler and by stripping debugging symbols. Then, I will look at how I was able to defeat Microsoft's (and many other AV products') detection mechanisms simply by "waiting out" the timeout period of their simulations of my program's execution. However, a quick note before we begin: I'm by no means an expert on antivirus, as this exercise was partly to further my understanding of how AV works, and these explanations and techniques are based on my admittedly poor understanding of the technologies behind them. If I mistakenly claim something that isn't true, or you can shed light on some areas that I neglect, please comment. I would love to learn from you.

Compiler Confusion

In my original post, I mentioned that I ended up using a copy of mingw64 from this PPA rather than from the standard Ubuntu repositories. While you'd think this detail wouldn't matter significantly, it ends up being almost everything. My malicious payload, when compiled with that version of mingw64 rather than the default one, has a dramatically lower detection rate. Why is this?

Well, apparently the two versions have enough differences in their backend algorithms that the executable generated with Ubuntu's version trips some heuristic definitions, while one made with the PPA version doesn't. The reason that Ubuntu's version is detected rather than the PPA's is obvious: attackers are more likely to have used the default one in the repos. In addition, I actually saw three or four AV companies add detection for the Ubuntu version of the executable, but have only seen one possible additional detection of the PPA version (it might have been a fluke, but I used it as my example anyway), which seems to indicate that the researchers trying to replicate my executables were using the default mingw64 as well. This confuses me a bit, as I've been uploading my attempts to VirusTotal and linking to them, but hopefully they will analyze them at some point.

I haven't yet spent the time to figure out what specifically causes the difference for AV between the two versions of mingw64, and I imagine it varies by AV product, but it illustrates quite clearly the large challenge facing AV companies in simply dealing with different compilers and different optimization routines (payload detection rate with Ubuntu's mingw64 / payload detection rate with PPA mingw64).
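If you want to reproduce the comparison, a minimal sketch along these lines is all it takes. To be clear, the compiler paths are hypothetical placeholders for the Ubuntu-repo and PPA toolchains, and payload.c stands in for whatever hide_payload.py generated:

import hashlib
import subprocess

# Hypothetical paths: adjust to wherever the Ubuntu-repo and PPA builds of
# mingw64 actually live; x86_64-w64-mingw32-gcc is the usual cross-compiler name.
compilers = {
    "ubuntu-mingw64": "/usr/bin/x86_64-w64-mingw32-gcc",
    "ppa-mingw64": "/opt/mingw64-ppa/bin/x86_64-w64-mingw32-gcc",
}

# Compile the same (hypothetical) payload.c with each toolchain and hash the
# results; the two binaries differ, and those are what get sent to VirusTotal.
for name, cc in compilers.items():
    exe = "payload-%s.exe" % name
    subprocess.check_call([cc, "-O2", "-o", exe, "payload.c"])
    digest = hashlib.sha256(open(exe, "rb").read()).hexdigest()
    print("%s: %s" % (name, digest))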

Debugging Symbols Debacle

While the different compiler issue might be eye-opening to some, I had suspected that it would be an easy way to prevent detection, at least early in the lifecycle of a piece of malware. I was not expecting, however, for debugging symbols to factor into the detection issue at all. They have no impact on the actual behavior of the code itself, only on code size and on apparent file similarity by some metrics. However, in my tests, simply stripping out the debugging symbols (strip --strip-debug filename) managed to erase the detections I was getting from two AV products, Ikarus and Emsisoft (detection results). My guess is that their scanning algorithms look for files that are a certain percentage similar to previously seen malicious executables, and then subject those to a sandboxed execution to try to obtain a memory dump of the running code. Because the stripped executable is (by their metrics) not very similar, this sandboxed execution was not triggered, and the malicious code that is revealed in memory during execution was never discovered. However, this is certainly just a guess, and I would love for someone more well versed in the finer points of AV operation to explain it to me.
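To make that guess concrete (and, again, it is only a guess at how these products work), here is a rough sketch of the kind of byte-level similarity check I have in mind; the file names and the threshold are hypothetical:

import difflib

def similarity(path_a, path_b):
    # Crude byte-for-byte similarity ratio between two files (0.0 to 1.0).
    a = open(path_a, "rb").read()
    b = open(path_b, "rb").read()
    return difflib.SequenceMatcher(None, a, b).ratio()

# Hypothetical triage step: compare a new sample against a known-bad executable.
# The unstripped build stays similar enough to get flagged for sandboxed
# execution; stripping the debug symbols drops it below the (made-up) threshold.
if similarity("known_bad.exe", "suspect.exe") > 0.9:
    print("similar enough -- sandbox it and grab a memory dump")
else:
    print("not similar -- skip the expensive analysis")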

Timing Troubles

Even with these two methods, however, I was still not completely escaping AV detection. The closest I had come was by using the PPA mingw64 version and stripping debugging symbols, which got me back to my original detection rate, where Microsoft was the only product detecting my executable. It was clear at this point that Microsoft was detecting my code by running it in a sandbox or simulator of some kind, where it was then able to obtain a memory dump that revealed the presence of my meterpreter payload. Being the perfectionist that I am when it comes to these things, I still wanted 0% detection rather than 1/43. So how to do this?

Assuming my understanding of the problem was correct, there were two possible approaches: avoid triggering the heuristic definition that marked the file for further scrutiny, or somehow defeat the sandbox itself. Surprisingly, I ended up going with the latter. After a good number of failed attempts to modify the code in such a way that it avoided the heuristic trigger, I came up with a different idea: outlasting the sandbox timeout. Because it made no sense for the sandbox to let an application run indefinitely, I figured that Microsoft had some set limit on the number of instructions or the amount of time it would simulate program execution for before killing it and taking a memory dump. As it turns out, this assumption proved correct, and that timeout period wasn't very long at all. To test my theory, I created two slightly modified versions of my original executable, in which I simply inserted two nested loops into my decryption loop. I could have used anything that chewed up CPU cycles, so this implementation choice is arbitrary and intentionally ridiculous. The configurations (which can be inserted into my original hide_payload.py script) are below, followed by a rough sketch of how I imagine they end up in the generated code.

Short loops:

pre_loop = "int j, k;\n"
pre_enc = "for(j=0;j<2;j++){for(k=0;k<2;k++){"
enc = "tmp[i]=sc[i]^key;\n"
post_enc = "}}\n"
post_loop = "//do nothing\n"
post_func = "//do nothing\n"

Long loops:

pre_loop = "int j, k;\n"
pre_enc = "for(j=0;j<500;j++){for(k=0;k<100;k++){"
enc = "tmp[i]=sc[i]^key;\n"
post_enc = "}}\n"
post_loop = "//do nothing\n"
post_func = "//do nothing\n"

As these detection rates show (short loop detection rate / long loop detection rate), Microsoft's AV is quitting program execution before the meterpreter payload is fully decrypted, preventing detection: the short-loop version is detected, while the long-loop version is not. A look at the code in OllyDbg also shows that the only difference between the two generated executables is the number of loop iterations.

As it turns out, this approach is entirely feasible as a method of avoiding detection: the long loops take less than a second to finish in my test VM, a negligible delay for the added invisibility. In addition, Microsoft's is not the only AV that can be defeated via this technique. It appears that most of the AVs detecting my payload were using similar techniques, and that most proved similarly weak (improved Ubuntu mingw64 version). These products include BitDefender, F-Secure, GData, and nProtect. However, they are certainly not the only ones affected, and they should in fact be commended for detecting my executable in the first place. Oddly enough, a product called VBA32 began detecting a virus called "BScope.Jackz.e" after I added these loops, which makes me curious as to whether someone is already exploiting this weakness.

Convincing Conclusion

To begin my carefully crafted conclusion to this alliterative analysis of antivirus, I want to first note that this post is certainly not meant to disrespect the work that AV companies do: detecting malware is a difficult business, and the attacker has a significant advantage. My malware is not anywhere near the hardest threat that AV has to deal with, either. Indeed, this probably partly explains why it was so easy to bypass detection: the adults had better things to do than write definitions specifically for my crappy toy crypter. I seriously doubt that the writers of Conficker could get away with such simple methods of avoiding detection. That said, it seems to me that implementing detection based on live behavioral techniques could improve AV effectiveness significantly, far more than toiling hopelessly away in research labs at the Sisyphean task of writing definitions that catch every known or slightly modified piece of malware. At some point, if not already, that task will become entirely impossible, and a new solution will need to be devised.

Of course, in this discussion, we must also remember that AV is essentially the very last line of defense between an attacker and code execution; if an attacker manages to get to this point, our system protections have already failed miserably. To me, AV is analogous to the TSA: expensive, intrusive, and not terribly effective for the effort involved. This is not to say that AV is not helpful in preventing attacks; both AV and the TSA are decent barriers that make it more difficult to exploit common attack vectors. However, neither is the be-all and end-all of security in its respective field, and, with effective security policies in place, each should serve only as a contingency plan of sorts, becoming important only if all else fails. I hope this small test case has illustrated that a system's overall security posture (disabling AutoRun/AutoPlay, disallowing USB drives, keeping software up-to-date, etc.) matters far more than simply having up-to-date antivirus software installed on a given system, and that we need to keep researching ways to improve AV detection so that it need not rely solely on reactive virus definitions, but can instead be aided by proactive detection techniques.