storage-partitioning's Issues

Scenario Validation (Embedded Component (Tableau))

It is unclear exactly where to post this since it is more of a scenario than a comment directly on storage-partitioning, but I am picking this repository because I have to pick one. I have tried to outline our scenario, the expectations of our customers and end users, and the impact of all of the options currently on the table. Hopefully this issue can be used to have a discussion and I won't have to post on all the repositories.

Background

Tableau is a service which can run as a SaaS offering or as an on-premises, customer-managed server installation. Tableau uses cookies primarily for session management. After a user logs in to Tableau, a session is created and a cookie is used to maintain this information (along with a CSRF cookie used as part of CSRF protection). One of our primary (not unique) use cases is embedded analytics, where our visualizations and experiences are embedded as a component (iframe) inside a customer site or a third-party application. When cookies stop working, our ability to maintain sessions is broken and our user experience most often degrades into an endless login loop. We hit similar issues during the fixes for SameSite attribute enforcement (which just required updating everywhere we generate cookies), but we are trying to make sure that the current proposals take our customer needs into consideration.

Requirements

  • Our embedding experience should feel like a native part of the app it is in. That means we should not require re-prompting for authentication and we should not require the user to “accept” that a component service uses cookies.
  • A single Tableau tenant ‘site’ can be embedded in many different apps, including wikis, chat, portals, or collaboration apps. Those apps are not necessarily all in the same domain, as they could themselves be SaaS services (with vanity subdomains), and so do not necessarily have a relationship outside of, potentially, user identity.
  • Many of our embedded customers also use Tableau directly. For them, a common solution to SSO is sharing an IDP between Tableau and the embedding app, like Ping or Okta. With this established, and with CSP frame-ancestors to establish security (ideally), they are able to perform the auth handshake with no clicks / pop-ups, at which point Tableau can generate a session.
    • We are working on alternatives to establish trust with embedding client applications but the end result is still a session (cookie).
    • Customers not able to meet our current SSO experience security requirements can fall back to “Click To Sign In” experience where we launch a popup to do the auth handshake but it is a fallback and not the approach desired by our customers.
  • We do not own all layers of the stack between our service and the browser. On-premises customers can choose to put load balancers or authenticating proxies between us and the browser. For our SaaS service we have layers of AWS (technically our choice, but still limited by their requirements).

Options

We have broken down options going forward and provided insight into each as well as a look at how proposals from the committee might affect them.

  • Stop Using Cookies (for embedding) — We have tried to evaluate this using our existing iframe model and also considering a non-framed model (which would have introduced even more security related concerns). This comes with the following concerns:
    • We don’t own the layers between us and the User Agent so it is not even clear that we can do this. Cookies are still a fundamental part of some LBs (https://aws.amazon.com/about-aws/whats-new/2021/02/application-load-balancer-supports-application-cookie-stickiness/). When SameSite enforcement hit we had to work with numerous on-prem customers to deal with issues in their network stack to make things work again.
    • Native elements like ‘img’ tags could not be used without a full-scale switch to data: URIs in src attributes. Our images still require auth, which is provided by the session cookie (this path does not require the CSRF header). This is just one example.
    • Session management is a primary use case for cookies. That is what we are using them for. Breaking that ability seems ...
  • Keep using cookies but become first-party — via mechanisms like CNAME cloaking or SSL terminating proxies. Concerns:
    • CNAME cloaking is being looked at as a ‘hole’ in the current proposals and not a long-term solution
    • Configuring SSL terminating proxies for all places we could be embedded is not feasible / scalable for our customers.
    • Any option like this can lead to cookie ‘leakage’ depending on the Domain and Path attributes that are set so it could introduce security issues as it attempts to address others
  • Keep using cookies ‘as-is’ — this relies on some of the proposals currently being discussed that we are hoping to chime in on.
    • Partitioned Storage — as demonstrated by Firefox’s implementation and our testing, this seems very promising. The only information we need shared across sites is IDP related and seems to be getting addressed (although how WebID applies to embedded content is not yet clear to me)
    • First Party Sets — This also seems reasonably promising. It requires us to finish up some work for ‘vanity’ URLs (in progress and highly desired), as we would need to identify tenants via subdomains, but the main concern is how we can support embedding a single tenant ‘site’ into multiple applications which could be in different domains. It would be odd to put them all in the same domain set, especially since multiple of them would be ‘owners’. Also, third-party content (also using iframes) can itself be embedded into Tableau, so how could we manage all of that?
    • StorageAccess API — depending on the algorithm used, this can work, as demonstrated by our testing. However, that only verifies the ‘hasStorageAccess’ API, which does not require an explicit user interaction (click). The requestStorageAccess API can only work in our ‘Click to Sign In’ scenario and so breaks our seamless SSO requirement, and even then some browser implementations won't even ask the question if we have never been a first party. (A sketch of the two API calls follows this list.)
    • WebID — This is mostly focused on making sure that our integration with IDPs continues to work. I am not sure if it falls under the bucket of us continuing to use cookies, but it is something where I am trying to understand how it relates to embedded content using an IDP.
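
For reference, a minimal sketch of the two Storage Access API calls mentioned in the StorageAccess API bullet, under the assumption that the embedded frame drives them (signInButton is a hypothetical element; requestStorageAccess() generally has to be called from a user gesture, which is why it only fits the "Click to Sign In" fallback):

// Sketch only. hasStorageAccess() can be called at any time:
const hasAccess = await document.hasStorageAccess();

if (!hasAccess) {
  // requestStorageAccess() must be called from a user gesture (e.g. a click),
  // so it cannot power the seamless, no-click SSO flow described above.
  signInButton.addEventListener('click', async () => {
    try {
      await document.requestStorageAccess();
      // Unpartitioned cookies are now available; proceed with the session handshake.
    } catch (e) {
      // Denied, e.g. if the user rejects or we have never been visited as a first party.
    }
  });
}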

Current ‘Conclusions’

We have data that we can share which breaks down the impact of the currently available implementations and our testing of them across different browsers. Our hope is to work and communicate with the working groups to understand what the expectations are so that we can continue to meet the requirements of our customers and end users.

SessionStorage partitioning

I noticed that there is a behavior gap between WebKit and Gecko on partitioning SessionStorage. Is there a better option between the two we can align on here?

It may be useful to note that this came up while we were prototyping keeping Storage partitioned even after the Storage Access API is granted. A major IDP uses SessionStorage and relies upon it not being partitioned in some use cases. That means it breaks with the combination of always-partitioned Storage and partitioned SessionStorage.

First-party sets and Storage Partitioning

First-Party Sets is one of the approaches to solving third-party cookie issues. Is there a way to make it work for cases when local storage is used for communication between trusted websites on different domains? For example, by making localStorage unpartitioned but only allowing access to it if the domains are specified in the FPS relation. We have quite a complex communication mechanism in place that relies on Local Storage and StorageEvents (sketched below); it was impacted by the third-party storage partitioning rollout, and it would take quite a lot of effort to refactor. Right now I don't see any other option except using a backend for this kind of communication.
Also, it seems that the Storage Access API works only with cookies; maybe it could be applied to other storage types too?
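
To illustrate the kind of pattern referred to above, here is a minimal sketch (domain and key names are hypothetical) of cross-site communication through Local Storage and StorageEvents. Both sites embed an iframe from a shared origin; before third-party storage partitioning those iframes shared one localStorage, so a write in one fired a storage event in the other:

// Inside the shared hub.example iframe embedded on siteA.example: send a message.
function send(message) {
  localStorage.setItem('channel', JSON.stringify({ message, ts: Date.now() }));
}

// Inside the hub.example iframe embedded on siteB.example: receive it.
window.addEventListener('storage', (event) => {
  if (event.key === 'channel' && event.newValue) {
    const { message } = JSON.parse(event.newValue);
    window.parent.postMessage(message, '*'); // forward to the embedding page
  }
});

// With storage partitioning, each embedding gets its own partition of hub.example's
// localStorage, so the storage event never reaches the iframe on the other site.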

Expose partitionedness

Exposing whether an environment is partitioned, mainly through an HTTP request header, came up in the last cookie discussion privacycg/meetings#19 (again). There are a couple of different ideas floating around, addressing various use cases around security and developer ergonomics.

#31 and #25 also relate to this in that for cookies people have suggested a different keying setup, which really drives home the point that we have to be very careful with what we end up doing in this space.


I think having an equivalent to Sec-Fetch-Site that tells you something about your ancestor documents (none, same-origin, same-site, or cross-site) still makes a lot of sense. However, in an A1 -> B -> A2 scenario this header would signal cross-site for A2, which might not make it clear enough that it can still set SameSite=None cookies (depending on how #31 gets decided). It would indicate that CHIPS cookies would work, however, so maybe that is good enough. (The main alternative I can think of is that we'd expose a separate "what is my site relation with the top-level" header, but I'm not convinced that carries its weight.)
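
Purely as an illustration (the header name here is hypothetical, not a proposal), the navigation request for the A2 frame in an A1 -> B -> A2 scenario might carry something like:

Sec-Fetch-Ancestors: cross-site

with the value drawn from the same set as Sec-Fetch-Site (none, same-origin, same-site, cross-site), describing the ancestor documents rather than the request initiator.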

Ability to get localStorage value from third party iframe always blocked?

I think this is working as intended, but the examples on https://developer.chrome.com/en/docs/privacy-sandbox/storage-partitioning/ don't really fit my example, so I'll outline my use case and you can let me know if this is intended. TL;DR: attempting to retrieve an auth token from our auth site returns undefined when trying to access localStorage from an iframe of the auth site.

Prereq

  • Application Site - app.example.com
  • Authorization Site - auth.example.com

Steps:

  1. A user lands on auth.example.com to authenticate
  2. A user successfully authenticates and auth.example.com sets a localStorage value of their auth token - localStorage.setItem('token', 'a.a.a')
  3. auth.example.com then redirects the user over to app.example.com
  4. app.example.com looks for a "token" in localStorage but can't find one
  5. app.example.com then loads an iframe to auth.example.com/token.html and postMessages to the iframe with a message of "get-token" (see the sketch below)
  6. auth.example.com/token.html receives the message and then postMessages back to the parent with the value of localStorage.getItem('token')
  7. app.example.com receives "undefined" for the token from the message event

If third party storage partitioning is off, then app.example.com receives the token correctly in Step 7.
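
For concreteness, a minimal sketch of steps 5 and 6 (token.html and the message shapes are as described above; the rest is illustrative):

// On app.example.com: load the auth iframe and ask it for the token.
const frame = document.createElement('iframe');
frame.src = 'https://auth.example.com/token.html';
frame.style.display = 'none';
document.body.appendChild(frame);

window.addEventListener('message', (event) => {
  if (event.origin !== 'https://auth.example.com') return;
  console.log('token from auth iframe:', event.data.token); // empty when storage is partitioned
});

frame.addEventListener('load', () => {
  frame.contentWindow.postMessage('get-token', 'https://auth.example.com');
});

// On auth.example.com/token.html: reply with the value from localStorage.
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://app.example.com') return;
  if (event.data === 'get-token') {
    // With third-party storage partitioning on, this reads the storage partitioned
    // under (app.example.com, auth.example.com), which is empty - not the value
    // written in step 2 under top-level auth.example.com.
    event.source.postMessage({ token: localStorage.getItem('token') }, event.origin);
  }
});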

Follow up

In practice, both auth.example.com and app.example.com are on the same domain, so we don't actually run into this problem (the token is found correctly in Step 4). However, when developing locally we use "localhost" for "app.example.com", and it is in local development where this issue is happening. Has any consideration been given to excluding "localhost" from these rules?

Consider affordance for embedded frames in extension pages based on externally_connectable

Currently there is an affordance in place for extensions so that they can embed frames with web origins in extension pages, which will then be treated as first-party. (Reference)

The current affordance however requires an extension to have host_permissions over the web origin.

If the web origin belongs to the extension author, in most cases it wouldn't need or request host permissions since it can directly communicate with the page using sendMessage having declared it as externally_connectable in its manifest.

Having minimal permissions in this case harms the experience since the scenario doesn't fit into the current affordance.

Q: Can we consider extending the affordance to consider frames first party on extension pages if the extension has the embedded webpage origin declared as externally_connectable in its manifest?

Clear-Site-Data for partitioned storage can be used for cross-site tracking

Back when WebKit considered whether or not to implement Clear-Site-Data, we noted that clearing partitioned data upon receiving that header can be used for cross-site tracking purposes. Since not many others were considering partitioned storage at the time, we never filed issues about it, at least not that I'm aware of.

The attack is about one first party site having control over website data under another first party site.

Imagine site.example registering these 33 domains: haveSetPartitionedData.example and bucket1.example through bucket32.example.

site.example runs script in the first party context on a great many websites. As part of its execution on those sites, it injects 33 invisible iframes for the domains mentioned above.

Let's say site.example is executing its script on news.example. If a cross-site user ID has not yet been planted for news.example, the haveSetPartitionedData.example iframe will not have website data yet and communicates to the bucket1.example through bucket32.example iframes to start fresh. The bucket1.example through bucket32.example iframes all store '1' in their partitioned storage and report back to the haveSetPartitionedData.example iframe when they are done. Now the haveSetPartitionedData.example iframe stores the fact that 32 '1's have been stored in the news.example partition.

Every time the user visits site.example, site.example gets to see its unpartitioned cookies which identifies the user. Let's say it uses a 32-bit ID for the user. It now makes sure to send Clear-Site-Data response headers matching the '0's in the unpartitioned cookie ID for the corresponding bucket domains. For example, let's say the user ID has '0's in bit 4, 6, and 20. Then site.example would make sure website data is cleared for bucket4.example, bucket6.example, and bucket20.example.

Now when the user visits news.example, the haveSetPartitionedData.example's iframe will have website data set and communicates to the bucket1.example through bucket32.example iframes to report their '1's and '0's (no website data means '0') to the site.example script on news.example.

Voilà, cross-site user ID established.

Only accepting Clear-Site-Data from the current first party website would mitigate this attack but not fix it. Further, if this attack is combined with browser/device fingerprinting, it only needs to add enough cross-site bits to reach ≈32 bits in total.
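
A compact sketch of the write and read sides of the bucket encoding described above (domain names as in the description; the bit layout is only illustrative):

// On site.example as a first party: decide which bucket domains to clear so that
// the buckets left with partitioned data spell out the 32-bit user ID ('0' = cleared).
function bucketsToClear(userId) {
  const clear = [];
  for (let bit = 1; bit <= 32; bit++) {
    if (((userId >>> (bit - 1)) & 1) === 0) {
      clear.push('bucket' + bit + '.example'); // send Clear-Site-Data for this domain
    }
  }
  return clear;
}

// Later, in the site.example script running on news.example: the 32 bucket iframes
// report whether they still have partitioned data, and the ID is reassembled.
function reassembleId(bucketHasData) { // array of 32 booleans for bucket1..bucket32
  let id = 0;
  for (let bit = 1; bit <= 32; bit++) {
    if (bucketHasData[bit - 1]) {
      id |= 1 << (bit - 1);
    }
  }
  return id >>> 0; // unsigned 32-bit user ID
}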

Partitioned popups

@krgovind at the last Privacy CG call you floated an idea around popups, whereby you could open a popup and get a handle to it, but the popup would end up being partitioned in some way. I was wondering how serious that idea was, as there are other proposals around popup handling and I wonder to what extent they should be pursued jointly.

cc @hemeryar

Explore cookie partitioning

EDIT: We published an explainer expanding on this idea: https://github.com/DCtheTall/CHIPS/


During the CG meeting today, the topic of partitioning cookies came up.

@annevk mentioned that Firefox is currently experimenting with this. Also see his previous comment.

@johnwilander previously wrote that Safari attempted this change and rolled it back due to a couple of concerns that are broadly relevant:

  • Developer confusion
  • Multiple sets of cookies increase the memory footprint

Both of these issues might be alleviated by using an opt-in model for partitioned cookies.

One potential solution is to have the developer specify a cookie attribute PerPartition (name needs bikeshedding), that is parsed in embedded/third-party contexts:

Set-Cookie: SID=31d4d96e407aad42; Secure; HttpOnly; PerPartition

The browser then stores that cookie in a partition keyed on (top-level-site, embedded-site)

Subsequently, when the browser makes a request to the embedee, it includes a cookie header with only the opted-in cookies and a header to indicate the top-level site:

Cookie: SID=31d4d96e407aad42
Sec-TopLevelSite: https://toplevel.site

Note: The question of whether it is acceptable to expose the first-party to a partitioned third-party is being explored in #14

Expose the first party to a partitioned third party

During the CG meeting there was a question whether the first party location should be exposed to third parties (both via HTTP and JavaScript). And some agreement that it might make sense, modulo referrer policy.

What is the state of third party storage today in the various browsers?

Chrome now blocks third-party storage in incognito mode.
I believe Firefox blocks third-party storage for sites on the tracking list.
I don't know what Safari does today.
I don't know what Edge does today.

It's obviously much easier to simply throw on third-party storage access and then fill in unpartitioned storage once requestStorageAccess resolves. Do we have good reasons not to simply do that? Or perhaps we could provide a single partitioned storage mechanism, but not all of them.

Add the blob URL store

We should do it in such a way that end users can still open them in the address bar though. And perhaps they should force COOP.

What about SameSite?

As discussed recently, there are various properties of the SameSite cookie attribute that need to be evaluated for how they would work in a partitioned world without third party cookies. A probably incomplete list of things I've seen mentioned:

  1. #31
  2. Do we still allow cross-site POST with SameSite=None?
  3. When an embedded iframe navigates from a cross-site context to same-site, SameSite=None cookies are sent. Do we want to keep this behavior?
  4. Sending cookies as part of FedCM requests (fedidcg/FedCM#248)

Underlying is the question of what the SameSite attribute itself should look like in the future. We could, for example, decide to deprecate the attribute entirely and use alternative attributes to preserve aforementioned security-related use cases with more granular control.

Accessing session storage in nested documents

Our web application has a nested document structure, A1->B->A2.

<html>
<body>
  A1
  <iframe src="tableau cloud URL">
    B
    <iframe src="Same domain as A1">
       A2
    </iframe>
  </iframe>
</body>
</html>

A1 and A2 are contents we created on AWS, and they are within the same domain. We use AWS Cognito for user authentication and store access tokens in the browser's session storage in A1.
B is a page on Tableau's cloud.
A2 is an HTML from AWS embedded in B, and it calls the REST API we provide on AWS using JavaScript.
In this call, we set the access token that A1 saved in the session storage in the Authorization header.

With StoragePartitioning enabled, A2 cannot access the access token from the session storage, and the REST API from A2 can no longer be called.
Authentication using AWS Cognito and saving to the session storage are done using libraries provided by AWS, and the display in B or A2 uses features provided by Tableau, so the only part we can program is within the JavaScript in A2.

Could you please provide a way in the JavaScript within A2 to reference the session storage saved in A1?

Storage partitioning allowances for custom protocol frames

For a site the user has added as a registered protocol handler for a safelisted scheme or a web+ custom protocol, storage partitioning will break a key use case if it separates the handler site from its main storage (e.g. IndexedDB): loading the registered protocol as an iframe's src to establish a protocol-based app-to-app API channel.

To understand the use case, consider an example, web+wallet, wherein a user has added a site as their web+wallet handler. The web+wallet community ships a small lib that creates a frame loading the web+wallet protocol as an iframe's src, allowing the top-level site to interact with whatever site the user has installed as their web+wallet handler via the postMessage API conduit. It is important we not break this functionality for frames loaded with custom protocol handler pages, as this is the only means installed handlers have to provide a background process/API channel to sites that integrate support for them.
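
A minimal sketch of that pattern (the protocol URL, message shapes, and handler behavior are illustrative, not taken from an actual web+wallet library):

// On the integrating top-level site: the custom protocol in the iframe's src resolves
// to whatever site the user registered as their web+wallet handler.
const frame = document.createElement('iframe');
frame.src = 'web+wallet:connect';
frame.style.display = 'none';
document.body.appendChild(frame);

frame.addEventListener('load', () => {
  frame.contentWindow.postMessage({ type: 'wallet-request', method: 'getAddress' }, '*');
});

window.addEventListener('message', (event) => {
  if (event.data && event.data.type === 'wallet-response') {
    console.log('reply from the installed handler:', event.data);
  }
});

// To answer, the handler page needs its own storage (e.g. IndexedDB). If partitioning
// keys that storage by the embedding top-level site, the handler is cut off from the
// data it wrote when visited directly, and the channel breaks.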

Recommendation: because registering a protocol handler already requires an explicit top-level visit to the domain of the registered site + the direct, overt, explicit user choice to install a site as a handler, custom protocol frames should be exempt from partitioning.

Cookie partitioning issues on PSL domains

As per the ongoing discussion on PSL in privacycg/private-click-measurement#78, it's become apparent that a domain present on the PSL can still be loaded within a browser. This has been tested across Safari, Chrome, Firefox with consistent results - the PSL domain will load and be rendered in the browser.

The example referenced in the other issue is http://gov.au, which is on the PSL and is a static holding page for the Australian government. You'll note that the browser will load this page and cookies can successfully be set for this domain, potentially causing scoping issues for subdomains that should probably be treated independently of the parent domain.

This is a security issue, especially when many of the proposals like the linked one rely on cookie separation as part of the set of privacy guarantees.

We should discuss how to resolve this.

Sharing of HTTP and fetch caches

Overview

Currently, it is possible for standard browser navigations and JavaScript based fetch/XHR calls to hit the same HTTP cache. This may not be true as caches are partitioned further in the future. However, there are advantages to allowing these network requests to hit the same cache, in particular for same-origin applications.

Implementation-wise, this can be utilized if the website sends responses in a "polyglot response" format that is both well-formed HTML and parseable by JavaScript, either to extract HTML chunks or to extract structured data for client-side rendering.

Proposal

Formalize circumstances under which the HTTP cache is shared with fetch/XHR for same-origin requests, even as further partitioning occurs. This may be automatic by convention, or may require specific parameters opting in to the behavior for fetch/XHR to utilize the HTTP cache instead of a separate cache. For example, using { mode: 'same-origin' } for fetches.
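
As a sketch of what the same-origin opt-in could look like in practice (renderFromHtml is a hypothetical client-side renderer; whether the existing cache option suffices or a new parameter is needed is exactly what this proposal leaves open):

// On a history back navigation, try the shared HTTP cache first and only go to the
// network on a miss. { mode: 'same-origin' } marks this as the same-origin case.
async function restorePage(url) {
  let response;
  try {
    // 'only-if-cached' requires mode: 'same-origin' and fails on a cache miss.
    response = await fetch(url, { mode: 'same-origin', cache: 'only-if-cached' });
    if (!response.ok) throw new Error('cache miss');
  } catch (e) {
    response = await fetch(url, { mode: 'same-origin' }); // fall back to the network
  }
  const html = await response.text();
  // The "polyglot response" is well-formed HTML that the client can also mine for
  // the dynamic chunks it needs to re-render without a server round trip.
  renderFromHtml(html);
}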

Use Cases

Use Case 1 - Browser navigation to client-side navigation

Consider a hybrid app that prerenders the initial page and sends complete HTML to the client, then loads subsequent pages using JavaScript, fetch/XHR, and the History API. Each page contains dynamic content, e.g. the results of a search query.

First, the client initiates a client-side navigation to a new page which destroys the initial page content. Next, the user hits the back button to return to the initial page. A new network request must be initiated from JavaScript to fetch the dynamic content from the server.

If the fetch/XHR request can call the initial URL, hit the HTTP cache, and extract the needed dynamic content, it can avoid this network request, server computation, and added latency. This is similar to a fully SSR experience - a back button in this scenario would hit the HTTP cache from disk without an additional network request.

While there are other solutions to this like storing the initial page content in memory or other storage like SessionStorage, these have their own downsides, and also do not help with the reverse situation:

Use Case 2 - Client-side navigation to browser navigation

Consider the same hybrid app. Now, the user navigates client-side one or more times, then clicks a link to an external site. The user then hits the back button to return to the hybrid app, and BFCache misses. In a fully SSR app, the back navigation would again instantly restore the page from disk cache without a network request. In the hybrid case, we will experience a cache miss since the URL was originally fetched via JavaScript, and is now fetched via a browser navigation.

However, if we are using the polyglot response approach and fetched/XHRed the same URL that was pushed to the history stack, it will already be in the HTTP cache and the back navigation will be performed instantaneously without an additional network request.

Even utilizing custom caching in-memory or with other storage, this case can't be solved for browser-based navigations without an additional network request.

Examples

Below are example flows performed in desktop Chrome showing how this works today.

Browser navigation to client-side navigation

  1. In the console, execute await fetch('https://www.google.com/search?q=test+query+1', {mode: 'same-origin', cache: 'only-if-cached'}).
  2. Observe the cache misses.
  3. Navigate to https://www.google.com/search?q=test+query+1
  4. Observe the page loads with type document and has a size, meaning it downloaded from the server.
  5. In the console, execute await fetch('https://www.google.com/search?q=test+query+1', {mode: 'same-origin', cache: 'only-if-cached'}).
  6. Observe the network request of type fetch loaded from (disk cache).

Client-side navigation to browser navigation

  1. Navigate to https://www.google.com/search?q=test+query+1
  2. Navigate to https://www.google.com/search?q=test+query+2
  3. Navigate to https://www.google.com/search?q=test+query+3
  4. Navigate to https://www.google.com/search?q=test+query+3 again
  5. Observe direct navigations never hit (disk cache).
  6. Using browser navigation buttons, go back -> forward.
  7. Observe the browser back/forward buttons hit (disk cache).
  8. Now from https://www.google.com/search?q=test+query+3, right click the refresh button and Empty Cache and Hard Reload.
  9. Again using browser navigation buttons, go back -> forward.
  10. Observe the cache was cleared because the back button missed cache, but forward hit it again.
  11. From https://www.google.com/search?q=test+query+3, right click the refresh button and Empty Cache and Hard Reload again.
  12. From the console, execute await fetch('https://www.google.com/search?q=test+query+2').
  13. Hit the back button.
  14. Observe the browser hits the cache populated by the fetch.

Consider including a "cross-site ancestor chain" bit in the storage key

Currently service workers have poor SameSite cookie protections because their "site for cookies" is simply set to the origin:

https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis#section-5.2.2.2

In contrast, documents take into account the top-level-site and the ancestor chain when computing "site for cookies":

https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis#section-5.2.1

This is problematic because it means adding a service worker to a site can reduce the safety of SameSite cookies.

With storage partitioning we have the opportunity to fix this. We already plan to include the top-level site in the storage key, which will allow us to include it in the "site for cookies" computation for service workers. We lack any ancestor chain information, however.

The ancestor chain is important for "site for cookies" because it helps protect against clickjacking attacks. To extend this protection to service workers we propose:

Include a "cross-site ancestor chain" bit in the storage key. This bit would be true if there are any sites between the current context and the top-level context that are cross-site to the current context. So it would be true for A -> B -> C or A -> B -> A. It would be false for A -> A or A -> B.

With this bit in the storage key, we could compute a "site for cookies" value for service workers that is equivalent to that of any document controlled by that service worker.

This was discussed at the recent service worker virtual F2F: w3c/ServiceWorker#1604.

A way to define an origin as safe, to disable partitioning

If a website uses iframes that are not same-origin but are still controlled by the same authority, wouldn't it make sense to have a way to disable storage partitioning?
Not having any way to disable it forces you to ask for a user gesture on the iframe for that iframe to have access to APIs like a service worker.

It would work the same kind of way as CSP or CORS: explicitly defining the domains that the embedder and the embedded content accept.

I've not seen anything regarding a way to disable partitioning after looking through the issues / docs.

What state should be able to have its keying relaxed?

This issue ties into the Storage Access API, but is also relevant here as it has implications for the architecture of the affected pieces of state.

Firefox has an implementation where Cookies and all of Storage go between having additional keying and not having additional keying depending on the Storage Access API.

Safari allows Cookies to go between blocked and not having additional keying (i.e., first-party access).

I'm not aware what Chrome is planning here.

more things to isolate

Here's a list of additional things that are isolated by privacy.firstparty.isolate in Firefox and Tor Browser:

  • speculative connections
  • fetch caching and requests
  • XHR caching and requests
  • HPKP
  • OCSP cache and requests
  • intermediate CA cache
  • auto form-fill
  • favicon cache
  • page info media previews
  • image cache
  • "save page as" requests

A1 -> B -> A2 nested documents and cookies (and SameSite=None)

Assume a document nesting scenario of A1 -> B -> A2 whereby A1 and A2 are same-origin with each other and cross-site with B. In the real world this sometimes materializes as a publisher embedding an ad distributor that then decides to display an ad from the publisher.

As discussed in #25 and elsewhere it's generally considered good practice for A2 to be severed from A1 to avoid confused deputy attacks, which is why browsers are considering adding the "has cross-origin ancestors" bit to the partitioning key.

Now unlike other state, cookies have the unique ability to indicate these confused deputy attacks are defended against through the usage of SameSite=None. As such, the argument has been made that sending unpartitioned cookies to A2 is okay, as long as they use SameSite=None.

This creates some weirdness in that from a theoretical perspective B and A2 should not really be any different in terms of their relationship with A1. As in, both of them are partitioned. However, given the existing use cases and the unique ability of cookies to indicate confused deputy attacks were considered upon creation (to be clear, I somewhat doubt web developers consider that in detail, they also just want things to work) it might be acceptable to privilege A2.

Alternatives:

  • SameSite=None does not have special privileges that allow it to ignore the "has cross-origin ancestors" bit when setting the cookie. (This would be my personal preference as this kind of logic where we only look at part of the total key seems rather scary.)
  • We introduce another attribute specifically for these "has cross-origin ancestors" bit scenarios. I don't think there's enough benefit to the churn this ends up necessitating. SameSite=None already indicates a disregard for security.

(We discussed this scenario as part of privacycg/meetings#19.)

Define terminology for a site's storage with various kinds of keys

The storage of a top-level frame is keyed by just its origin, while storage for a subframe is keyed by at least its own origin and the top-level origin. Intuitively, we often talk about the situation of being keyed by just one origin as having access to "first-party storage", but that's not really defined anywhere, and I don't know of shared terminology for subframes' keying situation.

This explainer should say how other specifications should describe the various situations. It should probably also eventually define ways for other specifications to define the storage access of their own environment settings objects, but that seems farther away.

Definition of third party

I think there's roughly two definitions of third party that are important for the web platform:

  1. Third-party origin: settings object's origin is not same origin with the settings object's top-level origin. (E.g., Permissions Policy largely uses this.)
  2. Third-party site: settings object's origin is not same site with the settings object's top-level origin. (E.g., state partitioning largely uses this.)

Potential usage in prose if we want to formalize these as terms rather than using the longer phrase: If settingsObject has a third-party origin, then ...?
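
A sketch of the two checks, with site comparison simplified to registrable domains (the real check is HTML's "same site", which also handles opaque origins, IP addresses, etc.):

// Both checks compare an environment settings object's origin with its top-level
// origin; only the granularity differs.
function isThirdPartyOrigin(settingsObject) {
  return !sameOrigin(settingsObject.origin, settingsObject.topLevelOrigin);
}

function isThirdPartySite(settingsObject) {
  return !sameSite(settingsObject.origin, settingsObject.topLevelOrigin);
}

function sameOrigin(a, b) {
  return a.scheme === b.scheme && a.host === b.host && a.port === b.port;
}

function sameSite(a, b) {
  return a.scheme === b.scheme && registrableDomain(a.host) === registrableDomain(b.host);
}

function registrableDomain(host) {
  // Stand-in: last two labels; a real implementation consults the Public Suffix List.
  return host.split('.').slice(-2).join('.');
}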


There's an interesting thing that @bakulf pointed out to me, which is that cookies have their own definition of this concept, and that definition considers the entire ancestor chain. So when example.com/1 embeds thirdparty.example and that embeds example.com/2, per the above definitions /2 would not have a third-party origin/site, but at the same time it would not get SameSite cookies.

This does not seem hugely problematic to me and I don't think we can/should really change either definition at this point, but it's worth keeping this in mind.


Mainly wanted to write this down here to ensure we actually have agreement on this as we often say third party without being concrete about it.

cc @clelland

Do service/shared workers and BroadcastChannel deserve a special strategy?

From https://bugzilla.mozilla.org/show_bug.cgi?id=1495241#c1 (more context at https://privacycg.github.io/storage-partitioning/):

A problem with isolating service workers is that they are somewhat intrinsically linked to globalThis.caches aka the Cache API, a typical origin-scoped storage API. And that in turn is expected to be the same as localStorage or Indexed DB as sites might have interdependencies between the data they put in each.

Possible solutions:

  1. Using the "storage access" principal is what dFPI does and creates a strange transition scenario in that you have the old and new service worker that can each talk to a different group. At that point all the third parties the old service worker is in touch with can be given the first party data from the new service worker. Also, once B embedded in A is granted storage access, A might be able to tell some additional things about B, but I'm not sure how avoidable that is anyway.
  2. We could attempt to disable service workers (as well as BroadcastChannel and shared workers) when a document does not have storage access to avoid the weirdness of being able to communicate with documents in a third party and first party state at the same time. (An assumption here is that sites do not assume that if they have storage they also have service workers (as well as BroadcastChannel and shared workers).)
  3. We could scope service workers (as well as BroadcastChannel and shared workers) to the agent cluster (or perhaps browsing context group).
    1. If we did this unconditionally it would largely defeat the point of BroadcastChannel and shared workers, which is to be able to share work across many instances of an application (e.g., consider having multiple editable documents open in separate tabs; a minimal sketch of that cross-tab pattern follows this list). And it might also defeat the clients API in service workers.
    2. If we only did this for third parties we would again hit the problematic transition scenario when there's a popup. Though perhaps it's reasonable to consider an opener popup (as opposed to a noopener popup) in a special way to encourage sites to adopt Cross-Origin-Opener-Policy and get their own browsing context group? I.e., while you get first-party storage, you still don't get top-shelf communication channels.
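
For context on what option 3.1 would defeat, a minimal sketch of the basic cross-tab sharing BroadcastChannel exists for (channel name and handler are illustrative):

// Run in every tab of the application; all same-origin tabs share the channel today,
// which is exactly what scoping to an agent cluster would break.
const channel = new BroadcastChannel('document-sync');

// Announce a local edit to the other open tabs.
channel.postMessage({ type: 'edit', docId: 'doc-42', change: '...' });

// Apply edits announced by other tabs.
channel.onmessage = (event) => {
  if (event.data.type === 'edit') {
    applyRemoteChange(event.data); // hypothetical application function
  }
};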

Based on this I still favor 2, but 3.2 is also interesting.

cc @andrewsutherland @jakearchibald @inexorabletash @jkarlin @johnwilander

What state should be blocked?

This is related to #7.

In particular if you allow the Storage category to have its keying relaxed, there's an argument to be made that BroadcastChannel and shared/service workers ought to be blocked rather than have additional keying as sites could end up in a state where they have both third-party and first-party BroadcastChannel, for instance. And they cannot really be told apart either other than the site knowing when it allocated them relative to its current Storage Access API state.

Note that it's not a good solution to let part of the Storage category have its keying relaxed and part of it not. Sites often use multiple storage APIs for various bookkeeping purposes. Making their data inconsistent with each other is bad news. Blocking on the other hand doesn't really have that problem and might even be doable given that BroadcastChannel and shared worker are not supported by Safari.

Effectively this is a variant of the issue with same-origin frames having synchronous communication access being able to end up in different states. (Though we made a decision there to not let that happen.)
