Follow-up On d0z.me: Some Thoughts

Security/Bug Fixes

I'm a big believer in being up front and loud about security vulnerabilities and software bugs when they are found, so I want to first and foremost tell anyone using the d0z.me source code to grab an updated version. I have to apologize: I really had not expected this site to get anywhere near as popular as quickly as it did, so I had not spent much time testing it before I put it into production. This is a sadly common mistake that the best of us tend to fall into :-/ . As it turned out, one of the regexes I was using in a sanitization routine did not function correctly, allowing for XSS and SQL injection via a specially crafted URL. The hole was patched quickly, and no significant user data was taken (as I keep none), but I wanted to make sure everyone knew in case they're putting up their own versions. Props to the person on Reddit who pointed out the flaw to me. I'm not sure who it was, as the comment has since been deleted, but props all the same.
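For anyone running their own copy, the fix boils down to validating the submitted URL properly and never splicing it by hand into SQL or HTML. Below is a minimal sketch of that kind of handling in Python 3 (the function and table names are purely illustrative, not the actual d0z.me code):

    import sqlite3
    from html import escape
    from urllib.parse import urlparse

    def store_url(conn, url):
        """Validate a submitted URL, then insert it with a parameterized query."""
        parsed = urlparse(url)
        if parsed.scheme not in ("http", "https") or not parsed.netloc:
            raise ValueError("only absolute http(s) URLs are accepted")
        # Parameterized query: the driver does the quoting, so no SQL injection.
        conn.execute("INSERT INTO urls (target) VALUES (?)", (url,))
        conn.commit()

    def render_link(url):
        """HTML-escape the URL before echoing it back into a page (prevents XSS)."""
        return '<a href="%s">%s</a>' % (escape(url, quote=True), escape(url))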

Secondly, a minor bug: a reader named Max pointed out that I had mistyped a couple of characters in my charset. I should have just used Python's string.letters + string.digits, but oh well. Thanks for the report!
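For reference, building the slug charset that way is a one-liner; a quick sketch (string.letters is the Python 2 name mentioned above, string.ascii_letters is the portable equivalent):

    import random
    import string

    # Python 2's string.letters + string.digits is the charset referred to above;
    # string.ascii_letters is the locale-independent equivalent that also works on Python 3.
    CHARSET = string.ascii_letters + string.digits

    def random_slug(length=6):
        """Pick a random short-URL slug from the alphanumeric charset."""
        return ''.join(random.choice(CHARSET) for _ in range(length))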

Mitigation Techniques

Now, onto happier / more interesting topics!

DNS Blocking

An oldie, but a goodie. Used all the time to fight malware, this technique mitigates malicious shorteners simply by changing the DNS entry so that users are redirected to a warning page. It was interesting to see that, within a day of its release, d0z.me was already being blocked by OpenDNS. Impressive response time, although given that the site was posted all over Slashdot and Reddit before it got blocked, I suspect the response was faster than it would normally be.

[Screenshot: OpenDNS blocking d0z.me]

I'm not going to spend much space discussing this one because, rather obviously, it is not a feasible technique in the long run; it hasn't been particularly effective against malware either. Attackers can register domains far more quickly than they can be blocked, and not all DNS providers offer this kind of protection to their users in the first place. Still, it is a decent first line of defense on the user's end, and will at least block the older and more popular malicious URL shorteners.

HTTP Referrer + mod_rewrite

Some smart people have been discussing the use of mod_rewrite (or similar) to redirect requests based on their HTTP referrer in order to block these attacks (interested readers can find an excellent write-up at jaymill.net). Although it's an interesting approach that might deter the casual attacker, I see a couple of big problems with it.
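Concretely, the idea is to refuse or divert any request whose Referer points at a known-malicious shortener. The write-up linked above does this with mod_rewrite rules; purely for illustration, here is the same check sketched as a tiny Python WSGI middleware (the blocklist contents are obviously up to the site operator, and d0z.me is just the example entry):

    BLOCKED_REFERRERS = ("d0z.me",)  # domains of shorteners the operator wants to refuse

    class RefererFilter(object):
        """WSGI middleware: reject requests whose Referer matches a blocklist."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            referer = environ.get("HTTP_REFERER", "")
            if any(domain in referer for domain in BLOCKED_REFERRERS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Requests from this referrer are not accepted.\n"]
            return self.app(environ, start_response)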

Firstly, it is absolutely trivial to make this attack work with a whole host of different HTTP referrers, simply by hosting a couple of static files on more trusted domains. The Referer header, and the Origin header used for HTML5 cross-origin requests, are based on the domain that serves the attacking page and script. To circumvent referrer-based protections, an attacker only needs to upload a small HTML file and a JavaScript file to Google Pages, Google App Engine, Amazon, etc. to give the requests a new source domain. I don't believe most sites would be OK with blocking referrers from a large number of these kinds of domains just to avoid this attack, unless the attack was already underway and serious. The fact that so many domains can be used also means that whoever manages the site would have to keep track of, and enter rules for, every single one of them, which could quickly become a herculean task.

I took the liberty of writing up a PoC demonstrating that, using a hidden iframe, one can make attacks like d0z.me appear to come from a different referrer (spareclockcycles.org). You can view it at http://d0z.me/poc_refer.html (note: this link attacks example.com, which should be local). If you look at the requests in Wireshark, you can see that the Origin and Referer headers are set to spareclockcycles.org and spareclockcycles.org/evil.html, respectively.

[Screenshot: Wireshark capture]

This was tested with both Firefox and Chrome, and the behavior conforms to what the working draft has to say on the topic. Some browsers might be non-compliant, but the big two apparently aren't.

Secondly, while this mitigates the attack somewhat, it simply raises the bar for how many users an attacker needs to recruit through clicks on malicious links. Enough traffic will still overwhelm the server: valid connections are still fully established, and the server still has to do some amount of processing on each request before it can reject it.

I believe Jeremy mentioned both of these issues, to some degree, in his post, and seemed to think both were of small enough concern that they don't really matter. I have to respectfully disagree on that point. Yes, this raises the bar slightly for an attacker, but in the long run it poses very little hindrance to anyone with a modest level of technical ability. It is certainly a good recommendation for those currently under attack; however, I think a more solid approach to this issue is needed, and that we eventually need to address it with a complete solution rather than with temporary band-aids.

Changing Cross-Origin Requests Standards

It is clear to me that, if we want a real fix, this problem is going to have to be addressed with modifications on both ends, servers and browsers alike.

The main problem seems to be that browsers, when making HTML5 cross-origin requests, do not cache the fact that they have been denied access by a certain server. Most servers have no reason to allow cross-origin requests, and so rightly deny them across the board, but they do so (per the official specifications) by replying without an Access-Control-Allow-Origin header naming the requesting page's origin. The browser, in turn, caches the denial for that particular piece of content, but will willingly try to grab another piece of content immediately. This is particularly obvious if you watch d0z.me work in Chrome: the JavaScript console fills up with denial after denial, but the browser continues to pound the server with requests.

[Screenshot: Chrome cross-origin errors]

This seems silly at first glance, but it is actually a side effect of the fact that different files and directories on the same site can have different cross-origin policies (i.e., you can make specific files accessible cross-origin by adding the proper header only when responding to requests for them). The browser doesn't want to cache a denial for the entire site when it's possible that some of the content could be allowed. So here is our problem: how do we remember cross-origin denials for sites that serve no such content, while still allowing sites to have differing policies for different sections?
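To make that concrete, here is a minimal sketch (Python standard library; the paths and allowed origin are made up for illustration) of a server with exactly that kind of split policy. A browser that gets denied on one path has no way of knowing, from that response alone, whether every other path on the site will be denied too:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SplitPolicyHandler(BaseHTTPRequestHandler):
        """Only /public/* opts into cross-origin access; the rest of the site does not."""

        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            if self.path.startswith("/public/"):
                # This path is meant to be readable cross-origin.
                self.send_header("Access-Control-Allow-Origin", "http://example.org")
            # Every other path simply omits the header; the browser denies the
            # cross-origin read, but only after the request has already been sent.
            self.end_headers()
            self.wfile.write(b"hello\n")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), SplitPolicyHandler).serve_forever()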

My first thought was to modify the standards to require any site that serves cross-origin content in any capacity to include an "Allows-Cross-Origin" header in all of its responses to cross-origin requests. That way, if a browser makes a cross-origin request to a server and doesn't see this header, it can cache the denial for a certain period of time before allowing any further requests to be sent, assured that there is no cross-origin content available on that server. This would mitigate the attack for a large portion of servers, as most have little reason to allow cross-origin requests in the first place. For those that do need to provide this service, it might be advisable for browsers to begin rate-limiting cross-origin requests, so as to minimize the potential effectiveness of this attack on those sites as well. There is no valid reason I can think of for a browser to need to make 10,000 cross-origin requests a minute, so why let it?
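No browser implements anything like this today; the sketch below simply writes the proposed client-side logic out in Python, using the Allows-Cross-Origin header from the proposal and made-up constants, to show how little state it would actually require:

    import time

    DENIAL_TTL = 600              # seconds to remember "no cross-origin content here"
    MAX_REQUESTS_PER_MINUTE = 60  # illustrative per-origin cap

    _denied_until = {}   # target origin -> timestamp until which requests are skipped
    _request_times = {}  # target origin -> timestamps of recent requests

    def should_send_cross_origin_request(target, now=None):
        """Decide whether a cross-origin request to `target` should be sent at all."""
        now = time.time() if now is None else now
        if _denied_until.get(target, 0) > now:
            return False  # previously saw no Allows-Cross-Origin header; back off
        recent = [t for t in _request_times.get(target, []) if now - t < 60]
        if len(recent) >= MAX_REQUESTS_PER_MINUTE:
            return False  # rate limit for this target reached
        recent.append(now)
        _request_times[target] = recent
        return True

    def record_cross_origin_response(target, headers, now=None):
        """Cache a site-wide denial if the server never advertises cross-origin content."""
        now = time.time() if now is None else now
        if "Allows-Cross-Origin" not in headers:
            _denied_until[target] = now + DENIAL_TTL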

This is by no means the only possible solution, however, and I would love to hear what other ideas people have on the matter. I could very well have overlooked a much simpler and more effective fix, or missed problems with my own.

d0z.me Improvements/Updates

As I mentioned earlier, d0z.me was simply something I hacked together in a few hours for research purposes (and, of course, fun). This unfortunately led to code that is really not up to anyone's standards, including mine, as I was more concerned with testing the attack than with writing good code. I also did not experiment much with the best ways to exploit the HTML5 cross-origin approach, as I was planning to do more of that after I released the tool. I have been in discussion with the researcher who originally reported on this problem, Lavakumar Kuppan, who very kindly helped identify a number of places where my code could be improved.

Because of all this, I am working on releasing a more secure, reliable, and effective PoC, written for Django and Google App Engine. The strain of keeping both spareclockcycles.org and d0z.me running has not been easy on my server, so it will be nice to push half the load into the cloud. This release will also hopefully include some significant refinements to the stealth and speed of the DoS. It will probably be delayed until after the holidays, though, as I want to give concerned parties some time to come up with fixes for this issue before releasing a more potent tool, and to give myself some time to make sure I get things right.

Statistics Analysis

I know I promised @sanitybit that I would have some stats on d0z.me today, but sadly this post has already gotten way too long, and I'd like to spend some quality time poking around in the numbers. Quick stats, however: at the time of this writing, I've had ~30,000 page views (14,000 unique visitors) on the d0z.me domain and ~19,000 page views (16,000 unique visitors) on spareclockcycles.org since Sunday. Definitely not bad, especially for someone who is used to visitor counts in the tens...

But yeah, welcome to all my readers! More interesting statistics are still coming, so come back soon.


Written by admin in Vulnerabilities on Wed 22 December 2010. Tags: d0z.me, ddos, html5

