from Hacker News

Information Leaks via Safari’s Intelligent Tracking Prevention

by GuardLlama on 1/22/20, 5:35 PM with 35 comments

  • by om2 on 1/23/20, 1:35 AM

    We've addressed the issues disclosed to us, and if you try any of the 5 POCs in the paper you will find they no longer work in the latest Safari. Details of the fixes here: https://webkit.org/blog/9661/preventing-tracking-prevention-...

    There may be room for more improvement here, but be aware that what the POCs illustrate is no longer an active vulnerability.

    In addition, we don't believe this channel was ever exploited in the wild.

    (If anyone is aware of other issues in this area, I encourage you to practice responsible disclosure and report to Apple or to the WebKit project.)

  • by arkadiyt on 1/22/20, 6:44 PM

    Reposting from the other [1] thread:

    Basically, Safari keeps track of which domains are being requested in a 3rd-party context (e.g. I load example.com in my browser and the page loads the Facebook SDK - Safari increments a counter for Facebook by 1). Once a given domain has been seen as a third party on 3 distinct sites, Safari strips cookies and some other data from 3rd-party requests to that domain.

    The problem is that advertisers can use this to fingerprint users: register arbitrary domains, make 3rd-party requests to them, and detect whether each request had its data stripped. Each domain is an additional "bit" of data.
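
    To make the leak concrete, here is a minimal client-side sketch of that probing idea. The probe domains and the /check endpoint are hypothetical stand-ins for attacker infrastructure; the server side would simply echo back whether the request arrived with a cookie attached.

      // Hypothetical sketch: each attacker-controlled probe domain serves a
      // /check endpoint reporting whether the browser attached its cookie.
      // If ITP has classified the domain, the cookie is stripped and the
      // endpoint sees none -- that observable difference is one bit.
      const PROBE_DOMAINS = [
        "probe0.example", "probe1.example", "probe2.example", "probe3.example",
      ]; // hypothetical attacker-registered domains

      async function readBit(domain: string): Promise<number> {
        // credentials: "include" asks the browser to attach third-party
        // cookies; for ITP-classified domains they get stripped.
        const res = await fetch(`https://${domain}/check`, {
          credentials: "include",
          mode: "cors",
        });
        const { sawCookie } = await res.json(); // endpoint echoes what it saw
        return sawCookie ? 0 : 1;
      }

      async function fingerprintBits(): Promise<string> {
        const bits = await Promise.all(PROBE_DOMAINS.map(readBit));
        return bits.join(""); // e.g. "0110" -- k probe domains give k bits
      }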

    This is similar to "HSTS Cookies" [2] and also to issues with Chrome's XSS auditor, which is why it was removed [3].

    [1]: https://news.ycombinator.com/item?id=22120136

    [2]: https://nakedsecurity.sophos.com/2015/02/02/anatomy-of-a-bro....

    [3]: https://twitter.com/justinschuh/status/1220021377064849410

  • by lmkg on 1/22/20, 6:52 PM

    There is a fundamental difficulty when trying to implement privacy: A limit on the disclosure of information is itself a disclosure of information.

    A good privacy design needs to confront this issue directly. Sometimes there's nothing to be done. I think in some cases it's mathematically unsolvable (cf. Cynthia Dwork's paper on Differential Privacy). But an explicit consideration can at least surface some trade-offs. The more fine-grained and selective your redactions, the more information they reveal.
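
    To make the differential-privacy angle slightly more concrete, here is an illustration-only sketch of randomized response, the textbook DP mechanism - not anything ITP actually does. The point is that a noised decision provably bounds how much any single observed decision reveals, whereas a deterministic, selective one (strip iff a counter hit 3) reveals the underlying state exactly.

      // Illustration only, not part of ITP: randomized response reports the
      // true bit with probability p and the flipped bit otherwise. Observing
      // one answer reveals at most ln(p / (1 - p)) about the true value
      // (the epsilon of differential privacy); with p = 0.75 that is ln 3.
      function randomizedResponse(truth: boolean, p = 0.75): boolean {
        return Math.random() < p ? truth : !truth;
      }

      // A deterministic redaction leaks its trigger condition exactly; a
      // noised one leaks a bounded amount, at the cost of sometimes making
      // the "wrong" blocking decision.
      const decision = randomizedResponse(/* shouldStrip = */ true);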

  • by rasz on 1/23/20, 4:30 AM

    The last time Google researchers made similar discoveries, back in 2012, they were used to ... track users :-)

    https://www.ghacks.net/2012/02/21/microsoft-google-is-also-b...

    "We used known Safari functionality to provide features that signed-in Google users had enabled. It’s important to stress that these advertising cookies do not collect personal information."

    and on bypassing IE's third-party cookie protection: "impractical to comply with Microsoft’s request while providing modern web functionality." Google says complying with tracking protection is impractical!

  • by _underfl0w_ on 1/22/20, 9:12 PM

    Haven't read TFA yet, but at first glance this sounds similar to the approach used by the "Privacy Badger" browser extension - if it sees the same tracker on multiple sites, it "learns" and begins blocking it. Would it also be susceptible to similar information leaks with this threat model?
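
    For reference, Privacy Badger's heuristic is roughly: count the distinct first-party sites on which a given third-party domain is seen tracking, and start blocking once that count reaches 3. A rough sketch of that kind of learner (hypothetical names, not the extension's actual code):

      // Sketch of a Privacy-Badger-style learner, with hypothetical names.
      const THRESHOLD = 3;
      const seenOn = new Map<string, Set<string>>(); // tracker -> first-party sites

      function observe(trackerDomain: string, firstPartySite: string): void {
        const sites = seenOn.get(trackerDomain) ?? new Set<string>();
        sites.add(firstPartySite);
        seenOn.set(trackerDomain, sites);
      }

      function shouldBlock(trackerDomain: string): boolean {
        return (seenOn.get(trackerDomain)?.size ?? 0) >= THRESHOLD;
      }

    To the extent that this learned state is per-browser and its effect (blocked vs. not blocked) is observable from a page, the same probe-domain trick discussed above would in principle apply to it as well, one bit per domain.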

  • by noizejoy on 1/22/20, 7:57 PM

    I’ve been following privacy issues and technology for a while, but haven’t come across a foundational discussion of (a) the merits and (b) the technical implementations of the different approaches to avoiding fingerprinting:

    “hiding” vs. “blending in” (making me look identical to countless others - maybe even randomizing who I look like in a smart way).

    I wonder if any subject area experts reading this thread would be willing to share a summary of their knowledge and thoughts here.

  • by nattaylor on 1/22/20, 7:23 PM

    Conversely, Chrome is heading in the right direction:

    >Chrome plans to more aggressively restrict fingerprinting across the web. One way in which we’ll be doing this is reducing the ways in which browsers can be passively fingerprinted, so that we can detect and intervene against active fingerprinting efforts as they happen. [0]

    This will include things like restricting the volume of browser API checks allowed, etc., to reduce the number of bits that can be used in a fingerprint.
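
    The Privacy Budget proposal doesn't have a public reference implementation, but the idea can be sketched with made-up costs and a made-up API: each fingerprintable surface a site reads is charged an estimated number of identifying bits, and once a per-site budget is exhausted, further reads get a generic answer.

      // Sketch of the "privacy budget" idea (hypothetical costs and API,
      // not Chrome's implementation).
      const BUDGET_BITS = 10; // made-up per-site allowance
      const COST_BITS: Record<string, number> = {
        userAgent: 2, screenResolution: 4, installedFonts: 8, canvasHash: 12,
      };
      const spent = new Map<string, number>(); // site -> bits consumed so far

      function readSurface(site: string, surface: string, realValue: string): string {
        const used = spent.get(site) ?? 0;
        const cost = COST_BITS[surface] ?? 1;
        if (used + cost > BUDGET_BITS) {
          return "generic"; // over budget: hand back a non-identifying default
        }
        spent.set(site, used + cost);
        return realValue;
      }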

    [0] https://blog.chromium.org/2019/05/improving-privacy-and-secu...

  • by summerlight on 1/22/20, 6:47 PM

    Wow. I understand ITP's high-level design, but I didn't know its implementation was so naive. Maintaining a global database with a few rules that can easily be reverse engineered, and giving any document access to it? How did it get through internal review? Does Apple have any privacy/security review process for its major products?

    I understand that privacy engineering is very hard and can sometimes be non-obvious, with implicit statistical dependency chains, but this kind of direct problem could (or should?) have been caught at an early stage of design. Anyway, ITP is all about privacy and deserves attention from dedicated privacy engineers.