by symisc_devel on 6/25/21, 10:58 PM with 97 comments
by malkia on 6/26/21, 1:35 AM
by sillysaurusx on 6/26/21, 1:22 AM
It can be two to three orders of magnitude higher throughput. Equally important, lower latency.
(Throughput tends to be a consequence of low latency, but not always.)
I'm not saying this should be the norm, though. You probably don't need this design. But when you do, e.g. processing millions of stock market messages, there's no substitute.
EDIT: I love hearing about the designs and questions, but it was probably a mistake for me not to be explicit. Sorry! The thing I'm referring to is LMAX Disruptor pattern: https://lmax-exchange.github.io/disruptor/
I learned about it in 2011-ish, and it deeply changed my perspective on high speed designs.
by tialaramex on 6/26/21, 7:53 AM
It keeps using exchange(), which swaps the old value in memory for your value and gives back the old value, but it sets std::memory_order_acquire, with the author apparently thinking that since this wants to acquire a lock, that is enough.
But it isn't. The exchange() call is two memory operations, a load and a store, so what you wanted here is Acquire and Release semantics, i.e. memory_order::acq_rel.
What has been written is effectively Relaxed semantics for the store, and C++ doesn't do a great job of explaining that to programmers.
by raphlinus on 6/26/21, 1:07 AM
by samsquire on 6/26/21, 2:04 AM
I'm currently playing with multithreading right now. I'm implementing snapshot isolation multiversion concurrency control.
In theory you can avoid locks (except for data structure locks) by creating a copy of the data you want to write and detecting conflicts at read and commit time. Set the read timestamp of a piece of data to the timestamp of the transaction that reads it (timestamps are just monotonically increasing numbers). If someone with a higher transaction timestamp comes along, they abort and restart because someone got in before them.
At the moment I have something that mostly works but occasionally executes a duplicate. I'm trying to eradicate the last source of bugs but as with anything parallel, it's complicated due to the interleavings.
My test case is to spin up 100 threads, with each thread trying to increment a number. The end numbers should be 101 and 102. If there was a data race, then the numbers will be lower.
https://github.com/samsquire/multiversion-concurrency-contro...
by jeffbee on 6/26/21, 4:10 AM
by AnanasAttack on 6/26/21, 12:53 AM
by inshadows on 6/26/21, 3:32 AM
by RcouF1uZ4gsC on 6/26/21, 1:20 AM
by gpderetta on 6/26/21, 9:22 AM
I never thought about whether it could lead to deadlocks or not, and it is an interesting question. I assume there must be some formal proof against it, but I would have to think about it (my first hunch is that if the reordering could cause a deadlock, then the code wasn't safe in any case, like not acquiring locks in a consistent order).
by gentleman11 on 6/26/21, 1:31 AM
by MichaEiler on 6/26/21, 11:52 AM
by amelius on 6/26/21, 8:58 AM