by Kristine1975 on 1/4/18, 1:09 PM with 242 comments
by tptacek on 1/4/18, 2:36 PM
It's hard to get your head around how big a deal this is. This vulnerability is so bad they killed x86 indirect jump instructions. It's so bad compilers --- all of them --- have to know about this bug, and use an incantation that hacks ret like an exploit developer would. It's so bad that to restore the original performance of a predictable indirect jump you might have to change the way you write high-level language code.
It's glorious.
by dzdt on 1/4/18, 2:56 PM
Ouch! This is independent of other performance hits, like the kernel syscall overhead that was the hot topic yesterday. This is pretty crazy.
by AaronFriel on 1/4/18, 7:29 PM
I feel bad for all of the engineers currently working on performance-sensitive applications in these languages. There's a whole lot of Java, .NET, and JavaScript that's about to get slower[1]. Enterprise-y, abstract-class-heavy (i.e. vtable-using) C++ will get slower. Rust trait objects get slower. Haskell type classes that don't optimize out get slower.
What a mess.
[1] These mitigations will need to be implemented for interpreters, and JITs will want to switch to emitting "retpoline" code for dynamic dispatch. There's no world in which I don't expect the JVM, V8, and others to switch to these by default soon.
by rntz on 1/4/18, 6:08 PM
Maybe I'm being naive, but would a simple modulo instruction work? Consider the example code from https://googleprojectzero.blogspot.com/2018/01/reading-privi...:
unsigned long untrusted_offset_from_caller = ...;
if (untrusted_offset_from_caller < arr1->length) {
    unsigned char value = arr1->data[untrusted_offset_from_caller];
    ...
}
If instead we did:

    unsigned char value = arr1->data[untrusted_offset_from_caller % arr1->length];
Would this produce a data dependency that prevents speculative execution from reading an out-of-bounds memory address? (Ignore for the moment that a sufficiently smart compiler might "optimize" out the modulo here.)
by jzl on 1/4/18, 4:10 PM
by leni536 on 1/4/18, 3:10 PM
The obvious drawback is that it effectively disables sharing code in memory; it would still allow sharing code on disk, though. So it would be a middle ground between dynamic and static linking as they exist today.
https://www.technovelty.org/c/position-independent-code-and-...
by ealexhudson on 1/4/18, 2:04 PM
by badrequest on 1/4/18, 2:16 PM
by vfaronov on 1/4/18, 3:02 PM
What do people more knowledgeable in the field think about this?
by phkahler on 1/4/18, 2:59 PM
by coldcode on 1/4/18, 2:24 PM
by peapicker on 1/4/18, 3:57 PM
Trusting a compiler that you merely hope was used to build all the executables on your system isn't enough to be the final solution. [1]
[1] https://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html
by cws125 on 1/4/18, 5:13 PM
* https://lkml.org/lkml/2018/1/4/432
* http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a...
It appears that Skylake and later can actually predict through retpolines (the return predictor there can fall back to the indirect branch predictor when the return stack underflows)? Some hardware features called IBRS, IBPB, and STIBP (not a lot of details on these are out there yet) are supposedly coming in a microcode update.
by jgowdy on 1/4/18, 4:16 PM
by teilo on 1/4/18, 4:08 PM
by nathell on 1/4/18, 4:01 PM
There's a lot of prominence being given to all kinds of damage malicious users might inflict, and ways to prevent or mitigate, but little to the malice itself. Whence does it arise? What emotions drive those users? What unmet needs?
Meanwhile, when these slowing-down patches for Spectre and Meltdown arrive, I intend not to run them, to the extent possible. I intend to keep aside a patched VM for critical stuff, like banking or others' data entrusted to me. But I don't want my machine slowed down just because someone, sometime, might invest the effort to aim these attacks at it. Given how transparent I want to be with my life, that's a risk I'm willing to take.
by fooker on 1/4/18, 2:18 PM
Also, any insight about performance impact here?
by contrarian_ on 1/4/18, 2:59 PM
by Pelam on 1/4/18, 3:58 PM
Something like that could allow the CPU to speculate aggressively while preventing information-leak exploits.
by userbinator on 1/4/18, 4:52 PM
It also sets a very bad precedent: I understand people want to mitigate/fix as much as possible, but this is basically giving an implicit message to the hardware designers: "it doesn't matter if our instructions are broken, regardless of how widespread in use they already are --- they'll just fix it in the software."
by sempron64 on 1/4/18, 2:13 PM
by strongholdmedia on 1/5/18, 10:11 PM
> We built multi-tenant cloud computing on top of processors and chipsets that were designed and hyper-optimized for
> single-tenant use. We crossed our fingers that it would be OK and it would all turn out great and we would all profit.
> In 2018, reality has come back to bite us.
This is the root of all the problems.
by crb002 on 1/4/18, 4:32 PM
Right now many function calls don't safely wipe registers or the side-channel state (caches and the other microarchitectural channels found via Spectre). There really need to be two kinds of function calls. Maybe a C pragma?
The compiler takes wiping at function calls as a flag; the code has pragmas that override the flag.
by okneil on 1/4/18, 2:02 PM
by lousken on 1/4/18, 5:45 PM
by eptcyka on 1/4/18, 3:01 PM
by silimike on 1/4/18, 4:11 PM
by andrewmcwatters on 1/4/18, 3:20 PM