Closed Bug 1536058 Opened 5 years ago Closed 5 years ago

Partition the HTTP cache per the top-level document's origin

Categories

(Core :: Networking, defect, P2)


Tracking


RESOLVED FIXED
mozilla70
Tracking Status
firefox70 --- fixed

People

(Reporter: freddy, Assigned: sstreich)

References

(Depends on 1 open bug)

Details

(Whiteboard: [necko-triaged])

Attachments

(2 files)

The attack is as follows:

  • the user visits an evil page, on which the attacker can:
  • remove foo.com/loggedin.js from the cache (explanation below)
  • render foo.com in an iframe
  • observe through timing whether loggedin.js is loaded
  • the attacker now knows whether the victim is logged in to foo.com

A website can reliably remove a given resource from the cache by issuing a new request for that resource that bypasses the cache and returns an error.
Examples of requests that bypass the cache are POST requests or fetch() with the cache: "reload" option. The website can ensure that the request fails by supplying an overly long request URL, an overlong HTTP Referer header, or other custom headers.
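
A minimal sketch of the probe in JavaScript, assuming foo.com only references loggedin.js when the user is logged in. The eviction step is left abstract (its exact mechanics are discussed below), and the function names and the 20 ms threshold are made-up illustrations, not values from this report:

  // 1. Evict foo.com/loggedin.js from the HTTP cache (details deliberately
  //    abstract here; see the failing cache-bypassing requests described above).
  async function evictLoggedinJs() { /* ... */ }

  // 2. Render foo.com in an iframe. If the victim is logged in, foo.com
  //    re-fetches loggedin.js over the network and puts it back into the cache.
  function renderFooInIframe() {
    return new Promise((resolve) => {
      const frame = document.createElement("iframe");
      frame.src = "https://foo.com/";
      frame.onload = resolve;
      document.body.appendChild(frame);
    });
  }

  // 3. Time a request for loggedin.js. A fast response means it was served
  //    from the cache, i.e. foo.com just loaded it, i.e. the victim is logged in.
  async function probeLoggedinJs() {
    const start = performance.now();
    await fetch("https://foo.com/loggedin.js", { mode: "no-cors" });
    return performance.now() - start;
  }

  (async () => {
    await evictLoggedinJs();
    await renderFooInIframe();
    const elapsed = await probeLoggedinJs();
    console.log(elapsed < 20 ? "victim likely logged in" : "victim likely logged out");
  })();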

This has been disclosed privately by Eduardo Vela (@sirdarckcat).

I think this could be fixed by not removing entries from the cache when the HTTP response returns an error.

Group: network-core-security

:michal, could you take a look? Thanks.

Flags: needinfo?(michal.novotny)
Priority: -- → P2
Whiteboard: [necko-triaged]

I don't think the entry can be removed using POST. Every POST request has a unique ID which is part of the key, so it's a different entry in the cache. I'm not sure about fetch(). Honza, can we easily avoid dooming the cache entry in case of a server error?

Flags: needinfo?(michal.novotny) → needinfo?(honzab.moz)

POSTs are uniquely isolated (it's impossible to remove cache entries with them).

fetch() with cache mode 'reload' will doom the entry immediately, before we even start the fetch (FETCH_CACHE_MODE_NO_STORE -> INHIBIT_CACHING | LOAD_BYPASS_CACHE -> nsICacheStorage::OPEN_TRUNCATE); this is on purpose, to support concurrent write/read access on identical entries.

A cross-origin fetch({ cache: "reload" }) loads with the anonymous flag (another level of isolation), so the attack is not possible unless the resource indicating the login state is also loaded with the anonymous flag (e.g. fonts and various cross-origin no-credentials requests).

But fetch({ credentials: "include", cache: "reload" }) will rewrite the global cache entry (no isolation), so a wider attack is possible here.

I think the same applies to XHR.

Note that query strings in the URL will isolate (it's simply a different URI), and hashes are never sent. Request headers are customizable except for the forbidden header names (https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name); one way comes to my mind (which I will not write down here) that may be used to trigger an error using a header not on that list.

We have recently implemented first-party isolation of the cache/storage for requests to tracking domains, but that doesn't apply here: the probe request goes to a regular site, not one on the tracking list.

Adding some tracking experts for further comments.

Flags: needinfo?(honzab.moz)
Flags: needinfo?(ehsan)

Hi Honza,

Sorry about the late response, I was at a work week and have been catching up with things since last week...

I don't think this attack vector is possible for third-party trackers right now since they are already double-keyed based on the origin of the top-level document.

It seems to me that for the fetch({ credentials: "include", cache: "reload" }) case, protecting against the side-channel attack without double-keying the cache based on the origin of the top-level document would be quite difficult, if at all possible, since even if steps 2 and 3 of the STR in comment 0 failed, attacker.example could still observe the logged-in state of foo.com by embedding foo.com resources inside a top-level attacker.example page.

Basically, I think that in order to fix this bug we need to partition our cache based on the origin of the top-level page, similar to what Safari does. That is a long-term anti-tracking goal too (in order to prevent ETag-based tracking vectors), so this gives us yet another privacy-related reason for doing so, I think.

What do you think?
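
(Editorial illustration: a rough sketch of what such a double-keyed lookup could look like; makeCacheKey and the key format are hypothetical and not Necko's actual implementation.)

  // Double-keyed cache lookup: the resource URL plus the top-level document's origin.
  function makeCacheKey(topLevelOrigin, resourceUrl) {
    return `${topLevelOrigin}|${resourceUrl}`;
  }

  // The same resource embedded under two different top-level sites gets two
  // independent entries, so evicting one cannot be observed from the other.
  makeCacheKey("https://news.example", "https://foo.com/loggedin.js");
  // -> "https://news.example|https://foo.com/loggedin.js"
  makeCacheKey("https://attacker.example", "https://foo.com/loggedin.js");
  // -> "https://attacker.example|https://foo.com/loggedin.js"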

Flags: needinfo?(ehsan)

(CCing annevk and ckerschb, who have been involved in a cross-vendor discussion about these kinds of attacks)

Ehsan: Double-keying the cache sounds like the most robust solution. I just learned that Chrome is experimenting with this in beta. Their reason for doing so is indeed the cross-site leak discussed here.
Chrome has a public document discussing the threat model and the possible solutions in detail: https://docs.google.com/document/d/1U5zqfaJCFj_URrAmSxJ0C7z0AilLLJ30lgAqShVWnck/edit. I've also been glancing at their implementation at https://cs.chromium.org/search/?q=kSplitCacheByTopFrameOrigin&type=cs

Additionally, it might be interesting to discuss removing cache: reload from the web platform; that will be discussed in https://github.com/whatwg/fetch/issues/902

Type: enhancement → defect

FWIW, in the mid-to-long term we would probably also move to that model (I think as part of the feature being developed in bug 1549587). So I guess the question here is whether we want to do something specifically to partition the cache sooner than that.

I think it would be good to move ahead sooner, given the severity of the attack. A large site has indicated they'd be willing to run an experiment if we gave them a setting to double-key only the HTTP cache (i.e., first-party isolation does not work, as that is a much larger change from the status quo).

The one other thing we need to get clarity on, standards-wise or as a start across some browsers, is what the second key should be. Is this going to be the top-level URL's origin, or the top-level URL's host's registrable domain (maybe plus scheme)? Using the registrable domain would probably lessen the impact on some properties.

(In reply to Anne (:annevk) from comment #9)

The one other thing we need to get clarity on, standards-wise or as a start across some browsers, is what the second key should be. Is this going to be the top-level URL's origin, or the top-level URL's host's registrable domain (maybe plus scheme)? Using the registrable domain would probably lessen the impact on some properties.

FWIW, Chrome appears to be using the origin: https://cs.chromium.org/chromium/src/net/http/http_cache.cc?dr=C&l=615

It would be interesting to double check what key WebKit uses to double-key its HTTP cache, as it is the only shipping implementation...

But speaking about our options here, I think the registrable domain (perhaps plus scheme) makes more sense than the origin, given the broader aspect of partitioning the browsing data. Given that cookies aren't origin-bound, it's unclear to me how we could possibly partition all browsing data based on origin without breaking sites which try to set a cookie on the root domain from a subdomain and such...

I think the argument for origin is that a compromise of example.com would not allow for HTTP-cache attacks on mail.example.com. I strongly suspect we deal with cookies and the HTTP cache separately, so we could have different strategies (also note that most cookies ignore the scheme). (I do realize that leaving other side-channels open will likely mean attacks remain possible, but I think there's value in driving attackers to more complex solutions while addressing rather trivial-to-execute attacks. [I also agree that checking with WebKit would be good here.])
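
(Editorial illustration of the two candidate keys; getRegistrableDomain is a hypothetical stand-in for a Public Suffix List lookup, and the URLs are examples.)

  // Candidate partition keys for a resource loaded under https://mail.example.com/inbox.
  const topLevelUrl = new URL("https://mail.example.com/inbox");

  // Option 1: top-level origin — mail.example.com and www.example.com get separate
  // partitions, so a compromised www.example.com cannot probe mail.example.com's cache.
  const originKey = topLevelUrl.origin;                 // "https://mail.example.com"

  // Option 2: scheme + registrable domain — all example.com hosts share one partition,
  // which lessens the impact on sites spread across subdomains.
  function getRegistrableDomain(host) {
    return host.split(".").slice(-2).join(".");         // naive: ignores suffixes like co.uk
  }
  const domainKey = `${topLevelUrl.protocol}//${getRegistrableDomain(topLevelUrl.hostname)}`;
  // -> "https://example.com"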

Hmm, yes, that's a fair argument.

(FWIW, the HTTP cache partitioning we've implemented for third-party tracking resources is also origin-based! https://searchfox.org/mozilla-central/rev/99a2a5a955960b0e58ceade1db1f7652d9db4ba1/netwerk/protocol/http/nsHttpChannel.cpp#4051).

https://github.com/whatwg/fetch/issues/904 has some further discussion on standardizing a double-keyed HTTP cache. And again, the sooner we ship this the better I think (even if there are still exploits available through popups that can only be addressed by the site adopting Cross-Origin-Opener-Policy; see https://github.com/whatwg/html/labels/topic%3A%20cross-origin-opener-policy and bug 1543066 for that).

Depends on: 1560017
Depends on: 1557346
No longer depends on: 1560017
Assignee: nobody → streich.mobile
Keywords: dev-doc-needed
See Also: → 1553003
Depends on: 1553003
Depends on: 1572544
Depends on: 1572546

Sebastian, do you mind sending an "Intent to Prototype" as per our exposure guidelines to dev.platform please? Thanks!

Summary: Websites removing resources from the cache predictably creates a sidechannel → Partition the HTTP cache per the top-level document's origin

:annevk put this on my radar and I'll be tracking it from a Product Management point-of-view.

(In reply to :Ehsan Akhgari from comment #17)

Sebastian, do you mind sending an "Intent to Prototype" as per our exposure guidelines to dev.platform please? Thanks!

Reminder!

Flags: needinfo?(streich.mobile)

Sent! :)

Flags: needinfo?(streich.mobile)
Keywords: checkin-needed

Pushed by nerli@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/a5e791146ef5
Add Cache-Isolation behind a pref r=ckerschb,Ehsan
https://hg.mozilla.org/integration/autoland/rev/4fe91f01854e
Add a Test for cache isolation r=ckerschb,Ehsan

Keywords: checkin-needed
Status: NEW → RESOLVED
Closed: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla70
Regressions: 1576039

Hey Sebastian, are you also planning to work on bug 1572544, 1572546 and 1553003? Thanks!

Depends on: 1576297
Regressions: 1582247
Blocks: 1590107

Removing dev-doc-needed; this was not enabled in any Fx version at this point, so it seems too early to document it.

Keywords: dev-doc-needed

Blocks: 1687569

Regressions: 1687569

This bug added browser.cache.cache_isolation, which is not enabled by default, not even on Nightly, so it was not the cause of the regression. What you are concerned with is privacy.partition.network_state. I filed bug 1687618 to get rid of browser.cache.cache_isolation.

No longer blocks: 1590107
No longer regressions: 1687569

Thanks for clarifying; that solution allows extensions like SingleFile to use the browser cache.
I solved all the problems with my page-saving extensions by disabling (setting to false) these prefs in about:config:
privacy.partition.network_state
network.http.rcwn.enabled

Now the extensions use the cache (no more double downloads, e.g. an image fetched once by the page and once by the extension) and waste less network data. See bug 1687569
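
(For reference, a sketch of the same about:config changes made from a user.js file in the profile directory; note that turning these off trades away the partitioning protection discussed in this bug.)

  // user.js in the Firefox profile directory, mirroring the about:config changes above.
  user_pref("privacy.partition.network_state", false);  // disable network-state partitioning
  user_pref("network.http.rcwn.enabled", false);        // disable race-cache-with-network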

