from Hacker News

Asynchronous IO: the next billion-dollar mistake?

by YorickPeterse on 9/6/24, 3:52 PM with 5 comments

  • by nyrikki on 9/6/24, 4:26 PM

    > More specifically, what if instead of spending 20 years developing various approaches to dealing with asynchronous IO (e.g. async/await), we had instead spent that time making OS threads more efficient, such that one wouldn't need asynchronous IO in the first place?

    1) Moore's law for single cores has been over for a while.
    2) We are necessarily in a distributed world.
    3) Amdahl's law still applies (a worked sketch follows below).
    4) Concurrent operations would still be needed.
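
    To put a number on 3): Amdahl's law caps speedup at 1 / ((1 - p) + p/n) for a program whose parallel fraction is p, running on n cores. A minimal Java sketch; the values p = 0.95 and n = 64 are purely illustrative assumptions, not figures from the comment:

      public class Amdahl {
          // Amdahl's law: maximum speedup on n cores when a fraction p
          // of the work can run in parallel.
          static double speedup(double p, int n) {
              return 1.0 / ((1.0 - p) + p / n);
          }

          public static void main(String[] args) {
              // Illustrative values only: even with 95% parallel work,
              // 64 cores yield roughly 15.4x, not 64x.
              System.out.printf("speedup(p=0.95, n=64) = %.1f%n", speedup(0.95, 64));
          }
      }

    The ceiling holds no matter how cheap threads become, which is the point: faster OS threads would not remove the need to structure concurrent work.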

  • by wmf on 9/6/24, 4:52 PM

    Linux has already optimized threads extensively; AFAIK the overhead is mostly in the CPU at this point (fortunately Intel/AMD continue to reduce system call and context switch overhead).
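
    For a rough feel of that per-thread cost, a minimal Java sketch that times spawning and joining plain OS-backed threads; the thread count is an arbitrary assumption, and the result is an order-of-magnitude hint rather than a real benchmark:

      import java.util.ArrayList;
      import java.util.List;

      public class ThreadCost {
          public static void main(String[] args) throws InterruptedException {
              int n = 10_000; // arbitrary; adjust for your machine
              long start = System.nanoTime();
              List<Thread> threads = new ArrayList<>();
              for (int i = 0; i < n; i++) {
                  Thread t = new Thread(() -> { /* trivial body */ });
                  t.start();
                  threads.add(t);
              }
              for (Thread t : threads) t.join();
              long elapsed = System.nanoTime() - start;
              // Rough per-thread cost: creation + scheduling + teardown.
              System.out.printf("~%d us per thread%n", elapsed / 1_000 / n);
          }
      }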

    Using raw threads also has its own footguns, but fortunately we have libraries like java.util.concurrent (j.u.c) and TBB that provide safer abstractions over threads. Unfortunately these libraries matured after async did, so most programmers aren't familiar with concepts like fork/join.
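
    To make the fork/join reference concrete, a minimal java.util.concurrent sketch using RecursiveTask; the array size and split threshold are arbitrary illustrative choices:

      import java.util.concurrent.ForkJoinPool;
      import java.util.concurrent.RecursiveTask;

      public class SumTask extends RecursiveTask<Long> {
          private static final int THRESHOLD = 10_000; // arbitrary split cutoff
          private final long[] data;
          private final int lo, hi;

          SumTask(long[] data, int lo, int hi) {
              this.data = data; this.lo = lo; this.hi = hi;
          }

          @Override
          protected Long compute() {
              if (hi - lo <= THRESHOLD) {  // small enough: sum directly
                  long sum = 0;
                  for (int i = lo; i < hi; i++) sum += data[i];
                  return sum;
              }
              int mid = (lo + hi) >>> 1;
              SumTask left = new SumTask(data, lo, mid);
              left.fork();                                        // left half runs on the pool
              long right = new SumTask(data, mid, hi).compute();  // right half inline
              return left.join() + right;                         // wait for left, combine
          }

          public static void main(String[] args) {
              long[] data = new long[1_000_000];
              java.util.Arrays.fill(data, 1L);
              long sum = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
              System.out.println(sum); // 1000000
          }
      }

    fork() schedules the left half on the pool while the current thread computes the right half inline; join() blocks until the forked half completes, and the pool's work-stealing keeps cores busy without the caller writing any async plumbing.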