by truth_seeker on 5/28/23, 5:51 PM with 88 comments
by fanf2 on 5/28/23, 8:13 PM
by dan-robertson on 5/28/23, 9:37 PM
Question I’ve not figured out yet: how can one trace io_uring operations? The API seems kinda incompatible with ptrace (which is what strace uses), but maybe there is an appropriate place to attach an eBPF program? Or maybe users of io_uring will have to add their own tracing?
by rektide on 5/28/23, 7:56 PM
by xiphias2 on 5/28/23, 7:41 PM
I would think that writing the kernel part would be the hardest, but it's usually the event loop implementations that don't use what the Windows/macOS/Linux kernels offer.
by truth_seeker on 5/28/23, 6:26 PM
by alberth on 5/28/23, 7:33 PM
by ithinkso on 5/28/23, 8:27 PM
The GitHub post does it normally: 'Add io_uring support for several asynchronous file operations:'
by moralestapia on 5/28/23, 9:31 PM
If you haven't yet, please go check it out, write a program with it and be amazed.
So glad to be a contributor.
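For anyone who wants to try it, here is a minimal liburing sketch of my own (not code from this thread): it queues one read, submits it, and waits for the completion. The file path and buffer size are arbitrary, and it assumes liburing is installed and a kernel recent enough for IORING_OP_READ (roughly 5.6+). Build with something like: cc demo.c -luring

    /* Queue a single asynchronous read with liburing and wait for its completion. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <liburing.h>

    int main(void) {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0) {      /* 8-entry submission queue */
            perror("io_uring_queue_init");
            return 1;
        }

        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        char buf[256];
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_read(sqe, fd, buf, sizeof(buf) - 1, 0);  /* describe the read */
        io_uring_submit(&ring);                                 /* hand it to the kernel */

        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);                         /* block for the completion */
        if (cqe->res >= 0) {
            buf[cqe->res] = '\0';
            printf("read %d bytes: %s", cqe->res, buf);
        }
        io_uring_cqe_seen(&ring, cqe);

        close(fd);
        io_uring_queue_exit(&ring);
        return 0;
    }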
by loeg on 5/28/23, 7:33 PM
by quietbritishjim on 5/29/23, 9:54 AM
Does this mean libuv already supported io_uring for non-file operations? Or does it still not?
Async file operations are useful in some applications, but they're not the main thing people normally think of when they hear async IO.
by MichaelMoser123 on 5/29/23, 12:30 AM
Just one question: what about older versions of Linux that don't have io_uring? Does it fall back gracefully to older system calls, or are those older versions of Linux no longer supported?
by heyoni on 5/28/23, 7:59 PM
by destructionator on 5/29/23, 1:41 AM
This isn't to say that io_uring is bad, just don't draw too much of a conclusion from any benchmark of their old impl beyond the context of their old impl specifically.
by gigatexal on 5/29/23, 8:43 AM
by samsquire on 5/29/23, 7:13 AM
I've been studying how to create an asynchronous runtime that works across threads. My goal: neither CPU-bound nor IO-bound work should slow down the event loop.
How do you write code that elegantly defines a state machine across threads/parallelism/async IO? How do you efficiently define choreographies between microservices, threads, servers and flows?
I've only written two Rust programs, but in Rust you can presumably use Rayon (CPU scheduling) and Tokio (IO scheduling).
I wrote about using the LMAX Disruptor ringbuffer pattern between threads.
https://github.com/samsquire/ideas4#51-rewrite-synchronous-c...
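A minimal sketch, not from the linked write-up, of a single-producer/single-consumer ring buffer between two threads: the basic structure the LMAX Disruptor pattern builds on (the names and capacity here are arbitrary).

    /* Single-producer/single-consumer ring buffer between two threads (C11 atomics).
     * Indices only ever increase; slots are addressed by masking. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define RING_CAP 1024                    /* must be a power of two */

    struct ring {
        _Atomic uint64_t head;               /* next slot the consumer will read  */
        _Atomic uint64_t tail;               /* next slot the producer will write */
        void *slots[RING_CAP];
    };

    /* Producer side: returns false if the ring is full. */
    static bool ring_push(struct ring *r, void *item) {
        uint64_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
        uint64_t head = atomic_load_explicit(&r->head, memory_order_acquire);
        if (tail - head == RING_CAP)
            return false;
        r->slots[tail & (RING_CAP - 1)] = item;
        atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
        return true;
    }

    /* Consumer side: returns false if the ring is empty. */
    static bool ring_pop(struct ring *r, void **item) {
        uint64_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
        uint64_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
        if (head == tail)
            return false;
        *item = r->slots[head & (RING_CAP - 1)];
        atomic_store_explicit(&r->head, head + 1, memory_order_release);
        return true;
    }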
I am designing a state machine formulation syntax that is thread-safe and parallelises effectively. It looks like EBNF syntax or a bash pipeline. Parallel steps go in curly brackets. There is an implied inter-thread ringbuffer between pipes. It is inspired by Prolog, whereby there can be multiple conditions or "facts" before a stateline "fires" and transitions. Transitions always go from left to right, but within a stateline (what sits between pipe symbols) the facts can fire in any order. A bit like a countdown latch.
states = state1 | {state1a state1b state1c} {state2a state2b state2c} | state3
You can think of each fact as an "await", but all at the same time:
initial_state.await = { state1a.await state1b.await state1c.await }.await { state2a.await state2b.await state2c.await } | state3.await
In io_uring and LMAX Disruptor, you split all IO into two halves: submit and handle. Here is a liburing state machine that can send and receive in parallel:
accept | { submit_recv! | recv | submit_send } { submit_send! | send | submit_recv }
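A rough liburing sketch of that submit/handle split (my own illustration; the OP_RECV/OP_SEND tags and helper names are made up, and it is simplified to a half-duplex echo loop rather than the full-duplex machine above):

    /* Each operation is queued as an SQE tagged via user_data ("submit" half);
     * one loop dispatches the completions ("handle" half). */
    #include <liburing.h>
    #include <stddef.h>

    enum { OP_RECV = 1, OP_SEND = 2 };

    static char buf[4096];

    static void submit_recv(struct io_uring *ring, int fd) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        io_uring_prep_recv(sqe, fd, buf, sizeof(buf), 0);
        sqe->user_data = OP_RECV;            /* tag so the handler knows the state */
    }

    static void submit_send(struct io_uring *ring, int fd, size_t len) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        io_uring_prep_send(sqe, fd, buf, len, 0);
        sqe->user_data = OP_SEND;
    }

    void echo_loop(struct io_uring *ring, int fd) {
        submit_recv(ring, fd);
        for (;;) {
            io_uring_submit(ring);
            struct io_uring_cqe *cqe;
            if (io_uring_wait_cqe(ring, &cqe) < 0)
                return;
            switch (cqe->user_data) {
            case OP_RECV:                    /* data arrived: echo it back */
                if (cqe->res <= 0) { io_uring_cqe_seen(ring, cqe); return; }
                submit_send(ring, fd, (size_t)cqe->res);
                break;
            case OP_SEND:                    /* echo done: wait for more data */
                submit_recv(ring, fd);
                break;
            }
            io_uring_cqe_seen(ring, cqe);
        }
    }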
I want there to be ring buffers between groups of states, so we have full-duplex sending and receiving. Here is a state machine for async/await between threads:
next_free_thread = 2
task(A) thread(1) assignment(A, 1) = running_on(A, 1) |
paused(A, 1)
running_on(A, 1)
thread(1)
assignment(A, 1)
thread_free(next_free_thread) = fork(A, B)
| send_task_to_thread(B, next_free_thread)
| running_on(B, 2)
paused(B, 1)
running_on(A, 1)
| { yield(B, returnvalue) | paused(B, 2) }
{ await(A, B, returnvalue) | paused(A, 1) }
| send_returnvalue(B, A, returnvalue)
by Ahmad498 on 5/29/23, 1:43 AM
by 29athrowaway on 5/28/23, 11:27 PM