Patrick Beart
TCP Service Fingerprinting with CORS-Denied HTTP Requests
23rd of May 2024, updated 26th of September 2024

Note: Unfortunately, before I could publish this (but after I made a test page and performed some experiments showing that it did once work), most major browsers appear to have patched the CORS fingerprinting technique. You can still try the demo here 🡥, but it isn't likely to work.

CORS (Cross-Origin Resource Sharing) 🡥 is a web-browser security mechanism which tries to prevent scripts from making web requests to, and reading responses from, unauthorised resources. CORS uses information received from the target server itself to decide whether to allow access to it: this may seem like a counterintuitive property, but it lets target websites opt in to being requested by other sites, by setting certain headers in their responses. This is necessary for services such as a public weather API, which expect to be used from the browsers of people visiting many other websites.
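As a rough sketch of what that opt-in looks like from the server's side (the port and payload here are made up for illustration), a service sets the Access-Control-Allow-Origin response header to tell browsers who may read its responses:

```typescript
// Minimal sketch of a service opting in to cross-origin use, using Node's
// built-in http module. Port 8080 and the payload are illustrative only.
import { createServer } from "node:http";

createServer((_req, res) => {
  // "*" tells browsers that a page from any origin may read this response.
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ forecast: "sunny" }));
}).listen(8080);
```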

Checking what you can do after you do it

The browser will first make the request to the target server. If the CORS policy the browser receives from the target server turns out to mean the request shouldn't have been allowed, the browser behaves as though the request failed: it does not allow any returned data to be accessed by the script which initiated the request (the script whose potential maliciousness the mechanism is trying to defend against). There's an interesting similarity here with a CPU feature called speculative execution 🡥, and if you have heard of it then you may already be able to imagine where this is going.
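To make that concrete, here is a minimal sketch of what the requesting script actually observes, assuming a hypothetical local service on port 8080 which sends no permissive CORS headers:

```typescript
// Run from a page on some other origin; the target is hypothetical.
try {
  // The browser really does send this request and receive the reply...
  const res = await fetch("http://127.0.0.1:8080/");
  console.log(await res.text()); // only reachable if CORS allowed the read
} catch (err) {
  // ...but on a CORS failure the script just gets an opaque error,
  // with no status code, headers or body attached.
  console.log(err); // e.g. TypeError: Failed to fetch
}
```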

Many modern CPUs have some kind of prefetching, which essentially means they try to work around the long delay required to fetch the next instruction from memory by beginning to fetch it before it's needed. Working out which instruction comes next is sometimes more complicated than "the one after this one", which is where branch prediction comes in. A feature called speculative execution takes this a step further: the processor doesn't just fetch a likely next instruction, it actually executes (runs) it. As with CORS, if the guess turns out to be wrong (for example, if a branch went the other way than predicted), the processor "undoes" the effects and returns itself to how it was, before loading and executing the correct instruction.

In both cases, the specification of the defined behaviour of the system says that we're not allowed to see what the result of the forbidden action would have been. In the CORS case, the response is the same as if the request had failed; in the CPU case, the ISA-defined processor state after a failed speculation is the same as if the speculation had never happened. That is why, in both cases, the issue comes down to a sidechannel: an "out-of-band" aspect of the system whose behaviour might not be specified. Here, the sidechannel is timing. Timing is often a highly variable, performance-dependent aspect of a system which might be considered to vary almost randomly, but in both of these cases it is affected by a factor we should not be allowed to gain knowledge of.
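Here is a sketch of what measuring that timing sidechannel looks like in the browser, again against a hypothetical local target. Browsers deliberately coarsen performance.now() to hinder timing attacks, but the milliseconds-versus-seconds gap we care about here survives that easily:

```typescript
// Time how long a request we expect to be blocked takes to fail.
const start = performance.now();
try {
  await fetch("http://127.0.0.1:8080/"); // hypothetical probe target
} catch {
  // The error is deliberately uniform -- but *when* it arrives is not.
  // A CORS rejection can land milliseconds after the server answers,
  // while a request to nothing may take many seconds to time out.
  console.log(`failed after ${(performance.now() - start).toFixed(1)} ms`);
}
```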

Nothing

In the web request case, if the request we make really has no server to answer it, the request will generally time out, that is, give up rather than wait forever for a reply. If the request fails due to CORS, however, the browser may cause it to fail (in a way which, for security reasons, looks to us just like a time-out) as soon as it knows it cannot succeed. In practice this means that, although the response in both cases is a failure, it will be a quicker failure if something is actually there and simply does not want to talk to us, and we can use this difference in time-to-failure to determine the existence of things we shouldn't be able to access at all. We can therefore identify programs and services running on a user's computer and their local network which either do not answer HTTP requests properly or answer with failure-causing CORS policies, but whose responses nonetheless reveal their existence, and use them to build a profile of the user.
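Putting the pieces together, a fingerprinting page might look something like the sketch below. The port list, the timeout and the 500 ms threshold are all assumptions for illustration, not a tested tool; a real page would calibrate the threshold against a port known to have nothing behind it. 25565 is the default Minecraft server port.

```typescript
// Classify a local port by how quickly a cross-origin fetch to it fails.
async function probePort(port: number, timeoutMs = 3000): Promise<string> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  const start = performance.now();
  try {
    // Default fetch mode is "cors": without permissive headers the promise
    // rejects, but only after a connection has actually been attempted.
    await fetch(`http://127.0.0.1:${port}/`, { signal: controller.signal });
    return "open (permissive CORS)";
  } catch {
    const elapsed = performance.now() - start;
    // Fast failure: something answered and the browser blocked the read.
    // Slow failure (or our own timeout): probably nothing listening.
    return elapsed < 500 ? "something there" : "nothing there";
  } finally {
    clearTimeout(timer);
  }
}

// 25565 is the default Minecraft server port; the others are examples.
for (const port of [25565, 8080, 631]) {
  probePort(port).then((state) => console.log(`port ${port}: ${state}`));
}
```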

This isn't a terrible problem in modern web browsers, but I think it's rather interesting as an unexpected consequence of the performance-focused design of web browsers, the difficulty of foreseeing sidechannels, and the extreme attention which web browser developers (mostly) pay to privacy. You can try out an online demo of this here 🡥. The CPU speculative execution vulnerabilities follow a similar pattern to the web ones: extreme performance focus, surprising sidechannels, and the unacceptability of even obscure security problems.

If you haven't heard of the CPU architecture features I've mentioned here, vulnerabilities following this pattern (transient execution vulnerabilities) first came to wide attention as Spectre and Meltdown 🡥 in 2017 and 2018 (and were quite a bit worse than being able to tell whether someone has Minecraft installed). You can read about this topic in general on Wikipedia, or in any good edition of Hennessy and Patterson 🡥.