from Hacker News

Long-Term Consequences of Spectre and Its Mitigations

by sankha93 on 1/17/18, 1:14 AM with 68 comments

  • by mwcampbell on 1/17/18, 2:34 AM

    > I think it would be a grave mistake to simply give up on mixing code with different trust labels in the same address space. Apart from having to redesign a lot of software, that would set a hard lower bound on the cost of transitioning between trust zones. It would be much better if hardware mitigations can be designed to be usable within a single address space.

    I wonder what software redesigns he has in mind. As far as I can tell, best practices are already trending toward only one trust zone per address space. Some might argue that that's the whole point of multiple address spaces. I suspect that Spectre will accelerate this trend.

    I do know how difficult this kind of change can be. The example I have in mind started before Spectre, and is unique to one platform. On Windows, developers of third-party screen readers for the blind are going through a painful transition where they can no longer inject code into application processes in order to make numerous accessibility API calls with low overhead. This change particularly impacts the way screen readers have been making web pages accessible since 1999. For the curious, here's a blog post on this subject: https://www.marcozehe.de/2017/09/29/rethinking-web-accessibi...

  • by andreiw on 1/17/18, 2:25 AM

    One thing curiously missing from this article is ARM’s laudable in-depth analysis - https://developer.arm.com/support/security-update, and their efforts (https://developer.arm.com/support/security-update/compiler-s...) to bring in architecture-neutral compiler intrinsics to address variant 1.

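    For context, "variant 1" is the bounds-check-bypass pattern those intrinsics target. A minimal C sketch (array names and sizes are purely illustrative, not from ARM's guidance) of the vulnerable gadget and the branchless index-clamping style of fix that such speculation-safe intrinsics automate:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative variant-1 (bounds-check bypass) gadget.
   array1/array2 and their sizes are hypothetical names. */
#define ARRAY1_SIZE 16
uint8_t array1[ARRAY1_SIZE];
uint8_t array2[256 * 512];

/* Vulnerable: the branch may be predicted taken even when
   idx >= ARRAY1_SIZE, so array1[idx] can be read speculatively and
   its value leaked into the cache via the dependent array2 access. */
uint8_t victim(size_t idx) {
    if (idx < ARRAY1_SIZE)
        return array2[array1[idx] * 512];
    return 0;
}

/* Mitigated: clamp the index with a branchless mask so that even a
   mispredicted branch cannot produce an out-of-bounds read. The mask
   trick relies on ARRAY1_SIZE being a power of two. */
uint8_t victim_masked(size_t idx) {
    if (idx < ARRAY1_SIZE) {
        size_t safe = idx & (ARRAY1_SIZE - 1); /* power-of-two clamp */
        return array2[array1[safe] * 512];
    }
    return 0;
}
```

    The point of an architecture-neutral intrinsic is that the compiler emits whatever clamp or barrier the target CPU needs, so source code like the masked version above stays portable.
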
  • by Animats on 1/17/18, 5:33 AM

    The article is by someone with no involvement in the CPU business. We need to hear from CPU architects and manufacturers. This is a fundamental CPU design defect and needs to be fixed in silicon.

  • by phkahler on 1/17/18, 3:32 PM

    >> browsers are trying to keep the problem manageable by making it difficult for JS to extract information from the timing channel (by limiting timer resolution and disabling features like SharedArrayBuffer that can be used to implement high-resolution timers), but this unfortunately limits the power of Web applications compared to native applications.

    I don't see a problem with that. "Web applications" are inherently untrusted code. If it were not for untrusted code these attacks would not be an issue, so it doesn't seem unfair for a mitigation to negatively affect them.

  • by moyix on 1/17/18, 3:08 PM

    It's interesting to pair this with the thoughts of Adrian Sampson (an academic who works on hardware architecture), particularly his musings about other attack vectors:

    > The second thing is that it’s not just about speculation. We now live in a world with side channels in microarchitectures that leave no real trace in the machine’s architectural state. There is already work on leaks through prefetching, where someone learns about your activity by observing how it affected a reverse-engineered prefetcher. You can imagine similar attacks on TLB state, store buffer coalescing, coherence protocols, or even replacement policies. Suddenly, the SMT side channel doesn’t look so bad.

    http://www.cs.cornell.edu/~asampson/blog/spectacular.html

  • by mehrdadn on 1/17/18, 8:09 AM

    Could someone please explain why there is so much focus on Spectre vulnerabilities in JavaScript and hardly any on HTML/CSS? It seems that a server could also cause the client to perform speculative execution via pure HTML, or is that not possible for some reason? The focus on JavaScript as though it's somehow special is rather baffling to me, and makes me wonder whether I really understand the fundamental issues.

  • by faragon on 1/17/18, 2:13 PM

    In my opinion, the worst long-term consequence is that even once newer CPUs fix these issues in hardware, we'll still pay a performance cost, because code will be compiled to run on both old and new CPUs. It's just like having a new CPU whose fancy features go unused because code is compiled to be backwards compatible.
  • by brndnmtthws on 1/17/18, 12:45 PM

    One thing is for sure: I doubt Intel will be lowering their prices or refunding anyone a portion of the price of their previously purchased CPUs.

    Look what happened after the VW diesel scandal ('dieselgate'): VW had to pay for repairs and compensate buyers (my friend bought one of the affected cars and got about $6k, IIRC). Some people even went to jail.

    Intel (or any other CPU maker) will probably not suffer a similar fate. The situation is a bit different, because they may not have known about the problem. Still, everyone who bought a CPU is going to take a 10-30% performance haircut because Intel made a mistake, and Intel isn't going to have to pay for it.

  • by leoc on 1/17/18, 3:05 PM

  • by fulafel on 1/17/18, 3:54 PM

    Does anyone know how things are going in GPU land? Don't they support concurrent separate protection domains these days too?