by simplegeek on 12/18/24, 1:22 PM with 48 comments
by titzer on 12/20/24, 3:22 PM
[1] On weak-memory hardware it is effectively impossible to write correct programs without using at least one of atomic read-modify-write operations or memory barriers, because both the compiler and the hardware can reorder reads and writes, to a maddening degree.
(I previously posted this comment as a reply to a comment that got flagged.)
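For readers who haven't run into this before, here is a minimal C++ sketch of the publication pattern the comment alludes to; the names data and ready are mine, not from the comment:

    // Writer publishes data, reader waits for the flag. With a plain bool this
    // is a data race, and on a weakly ordered CPU (ARM, POWER) the reader can
    // genuinely observe ready == true while data still looks stale, because
    // both the compiler and the hardware may reorder the accesses.
    #include <atomic>
    #include <cassert>
    #include <thread>

    int data = 0;
    std::atomic<bool> ready{false};

    void writer() {
        data = 42;                                     // plain store
        ready.store(true, std::memory_order_release);  // release: earlier writes
                                                       // may not move past this
    }

    void reader() {
        while (!ready.load(std::memory_order_acquire)) // acquire: later reads
            ;                                          // may not move before this
        assert(data == 42);  // holds only because of the release/acquire pairing
    }

    int main() {
        std::thread t1(writer), t2(reader);
        t1.join();
        t2.join();
    }

The release/acquire pair is a barrier in the comment's sense; an atomic read-modify-write (e.g. a compare-and-swap on a lock word) gets you the needed ordering by a different route.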
by cjfd on 12/20/24, 1:57 PM
by kolektiv on 12/20/24, 8:35 AM
by pphysch on 12/20/24, 4:32 PM
This is a comforting idea, but enumerating every possible state of a real-world (integrated/distributed) software system is a fool's task.
Spend less time on formal verification (fancy word for perfectionism) and more time on risk analysis and cybernetics. Instead of praying that nothing ever goes wrong, plan for faults and design systems that integrate human and machine to respond rapidly and effectively.
"Correctable" is a much more practical target than "always correct".
by divan on 12/20/24, 6:01 PM
Visualizing Concurrency in Go (2016)
by dragontamer on 12/20/24, 2:36 PM
With SIMD, the instruction pointer is synchronized across all lanes, giving you fewer states to reason about.
Then GPUs mess that up by letting us run blocks/thread groups independently, but now GPUs have highly efficient barrier instructions that line everyone back up.
It turns out that SIMD's innate assurance of instruction synchronization at the lane level is why warp-based coding / wavefront coding is so efficient, though: none of those barriers are necessary anymore.
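A minimal CUDA sketch of that contrast (mine, not the commenter's; the kernel names and the 32-element sum are just illustrative): the block-wide reduction needs __syncthreads() after every step, while the warp-wide version leans on the lanes advancing together (through the *_sync shuffle intrinsics) and uses no shared memory or block barrier at all.

    #include <cstdio>

    // Block-wide sum: threads in a block are not in lockstep, so every step of
    // the tree reduction has to be followed by a barrier.
    __global__ void block_sum(const int *in, int *out) {
        __shared__ int buf[32];
        int t = threadIdx.x;
        buf[t] = in[t];
        __syncthreads();                        // line everyone back up
        for (int stride = 16; stride > 0; stride >>= 1) {
            if (t < stride) buf[t] += buf[t + stride];
            __syncthreads();                    // and again after every step
        }
        if (t == 0) *out = buf[0];
    }

    // Warp-wide sum: the 32 lanes of a warp share an instruction pointer, so a
    // shuffle step already sees its neighbour's updated value; no shared memory
    // and no __syncthreads() needed.
    __global__ void warp_sum(const int *in, int *out) {
        int v = in[threadIdx.x];
        for (int offset = 16; offset > 0; offset >>= 1)
            v += __shfl_down_sync(0xffffffffu, v, offset);
        if (threadIdx.x == 0) *out = v;
    }

    int main() {
        int h_in[32], h_out, *d_in, *d_out;
        for (int i = 0; i < 32; ++i) h_in[i] = i;
        cudaMalloc(&d_in, sizeof(h_in));
        cudaMalloc(&d_out, sizeof(int));
        cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);
        block_sum<<<1, 32>>>(d_in, d_out);
        cudaMemcpy(&h_out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
        printf("block_sum = %d\n", h_out);      // 0 + 1 + ... + 31 = 496
        warp_sum<<<1, 32>>>(d_in, d_out);
        cudaMemcpy(&h_out, d_out, sizeof(int), cudaMemcpyDeviceToHost);
        printf("warp_sum  = %d\n", h_out);
        cudaFree(d_in);
        cudaFree(d_out);
    }

(On Volta and later the lockstep guarantee is weaker, which is why the shuffle intrinsics carry an explicit lane mask.)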
by tansan on 12/20/24, 11:27 AM
> How concurrency works: A text guide
by rramadass on 12/20/24, 4:14 PM
The 1st edition divides the algorithms under two broad sections, viz. "Message Passing" (e.g. the chapter Mutual Exclusion) and "Shared Memory" (e.g. the chapter Mutual Exclusion II), while the 2nd edition removes these section headings and simply groups them under functional categories (e.g. both of the above under one chapter, Mutual Exclusion).
Book website - https://mitpress.mit.edu/9780262037662/distributed-algorithm...
by tapland on 12/20/24, 10:00 AM
by 7bit on 12/20/24, 10:26 AM
by drewcoo on 12/20/24, 11:17 PM
by neonsunset on 12/20/24, 10:20 AM
by rubee64 on 12/20/24, 2:42 PM
by charlieflowers on 12/20/24, 1:04 PM
So is the ability to compute the entire state space and prove that liveness and/or safety properties hold the single main basis of a model checker's effectiveness? Or are there other fundamental capabilities as well?
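For a concrete picture of the "compute the entire state space" part, here is a toy explicit-state checker in C++ (nothing like TLA+ or SPIN internally; the deliberately broken test-then-set lock is just an illustration): it walks every interleaving of two processes and reports the state where both sit in the critical section. Liveness checking needs more machinery over the same state graph (cycle detection for behaviours that never make progress).

    #include <array>
    #include <cstdio>
    #include <queue>
    #include <set>
    #include <tuple>

    // One global state of the toy system: each process's program counter plus
    // the shared lock flag. pc: 0 = idle, 1 = passed the "is it free?" test,
    // 2 = in the critical section.
    struct State {
        std::array<int, 2> pc{};
        bool locked = false;
        bool operator<(const State &o) const {
            return std::tie(pc, locked) < std::tie(o.pc, o.locked);
        }
    };

    // One atomic step of process i (a broken lock: test and set are separate).
    // Returns false if the process has no enabled step in this state.
    static bool step(State &s, int i) {
        switch (s.pc[i]) {
            case 0: if (s.locked) return false;                  // test
                    s.pc[i] = 1; return true;
            case 1: s.locked = true; s.pc[i] = 2; return true;   // set + enter
            case 2: s.locked = false; s.pc[i] = 0; return true;  // leave
        }
        return false;
    }

    int main() {
        std::set<State> seen;
        std::queue<State> frontier;
        seen.insert(State{});
        frontier.push(State{});
        while (!frontier.empty()) {                  // breadth-first exploration
            State s = frontier.front(); frontier.pop();
            if (s.pc[0] == 2 && s.pc[1] == 2) {      // safety property violated
                std::printf("violation after exploring %zu states: "
                            "both processes in the critical section\n",
                            seen.size());
                return 1;
            }
            for (int i = 0; i < 2; ++i) {            // branch on who moves next
                State next = s;
                if (step(next, i) && seen.insert(next).second)
                    frontier.push(next);
            }
        }
        std::printf("property holds over all %zu reachable states\n", seen.size());
    }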
by cess11 on 12/20/24, 12:30 PM
I find the BEAM/OTP process model relatively simple to use, and it moves some of the friction from logic bugs to a throughput problem.
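As a very rough analogy only (nothing like the real BEAM, which adds cheap preemptive processes, links/monitors and supervision trees on top), the shape of that model in C++ is a thread that owns all of its state and is reachable only through a mailbox; the Mailbox type and the counter process here are mine:

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>

    // A blocking mailbox: the only channel into a "process".
    struct Mailbox {
        std::queue<int> q;
        std::mutex m;
        std::condition_variable cv;

        void send(int msg) {
            { std::lock_guard<std::mutex> lk(m); q.push(msg); }
            cv.notify_one();
        }
        int receive() {                       // blocks until a message arrives
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !q.empty(); });
            int msg = q.front(); q.pop();
            return msg;
        }
    };

    // A counter "process": all state is local, so there is nothing to lock and
    // no shared-memory interleaving to reason about; only message order matters.
    void counter(Mailbox &inbox) {
        int total = 0;
        for (;;) {
            int msg = inbox.receive();
            if (msg < 0) break;               // poison pill: shut down
            total += msg;
            std::printf("total = %d\n", total);
        }
    }

    int main() {
        Mailbox inbox;
        std::thread proc(counter, std::ref(inbox));
        inbox.send(1);
        inbox.send(2);
        inbox.send(-1);                       // ask the process to stop
        proc.join();
    }

One way to read the "throughput problem" in the comment: contention shows up as a growing mailbox rather than as a corrupted shared variable.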