by mighty_plant on 7/14/24, 5:52 AM with 189 comments
by pron on 7/17/24, 8:46 AM
So it looks like their goal was to try adopting a new technology without changing any of the aspects that were designed for the old technology and optimised around it.
by cayhorstmann on 7/17/24, 6:36 PM
What "CPU-intensive apps" did they test with? Surely not acmeair-authservice-java. A request does next to nothing. It authenticates a user and generates a token. I thought it at least connects to some auth provider, but if I understand it correctly, it just uses a test config with a single test user (https://openliberty.io/docs/latest/reference/config/quickSta...). Which would not be a blocking call.
If the request tasks don't block, this is not an interesting benchmark. Using virtual threads for non-blocking tasks is not useful.
So let's hope that some of the tests used tasks that do block. The authors describe that a modest number of concurrent requests (< 10K) didn't show the increase in throughput that virtual threads promise. That's not a lot of concurrent requests, but one would expect an improvement in throughput once the number of concurrent requests exceeds the pool size. That may be hard to see, though, because OpenLiberty's default is to keep spawning new threads (https://openliberty.io/blog/2019/04/03/liberty-threadpool-au...). I would imagine that in actual deployments with high concurrency, the pool size would be limited to prevent the app from running out of memory.
If it never gets to the point where the number of concurrent requests significantly exceeds the pool size, this is not an interesting benchmark either.
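To make the blocking-vs-non-blocking point concrete, here is a minimal, self-contained sketch (not the article's benchmark; the 100 ms sleep and the pool size of 200 are made-up stand-ins): when every task blocks, a bounded platform-thread pool caps throughput at roughly pool size divided by block time, while a virtual-thread-per-task executor is limited only by how many tasks are in flight.

    import java.time.Duration;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.IntStream;

    public class BlockingThroughputDemo {

        // Stand-in for a request that blocks, e.g. waiting on a downstream auth provider.
        static void handleRequest() {
            try {
                Thread.sleep(Duration.ofMillis(100));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        // Submits `requests` blocking tasks and returns the wall-clock time to drain them.
        static long runMillis(ExecutorService pool, int requests) {
            long start = System.nanoTime();
            try (pool) { // close() shuts down and waits for all submitted tasks (JDK 19+)
                IntStream.range(0, requests)
                         .forEach(i -> pool.submit(BlockingThroughputDemo::handleRequest));
            }
            return Duration.ofNanos(System.nanoTime() - start).toMillis();
        }

        public static void main(String[] args) {
            int requests = 10_000;
            // Bounded platform-thread pool, like a capped server worker pool.
            long fixed = runMillis(Executors.newFixedThreadPool(200), requests);
            // One virtual thread per task: a sleeping task doesn't hold a carrier thread.
            long virtual = runMillis(Executors.newVirtualThreadPerTaskExecutor(), requests);
            System.out.printf("fixed(200): %d ms, virtual: %d ms%n", fixed, virtual);
        }
    }

If the tasks never block, the two executors finish in roughly the same time, which is exactly the "not an interesting benchmark" case above.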
by pansa2 on 7/17/24, 7:00 AM
by exabrial on 7/17/24, 4:28 AM
A number of years ago I remember trying to have a sane discussion about "non-blocking", and my point was that something will block eventually no matter what: anything from the buffer on the NIC being full to your CPU running at anything less than 100%. Does it shake out to any real advantage?
by bberrry on 7/17/24, 12:46 PM
by LinXitoW on 7/17/24, 8:07 AM
In one project I had to basically turn a reactive framework into a one-thread-per-request framework, because passing around the MDC (a key-value map of extra logging information) was a horrible pain. Getting it to actually jump ship from thread to thread AND deleting it at the correct time was basically impossible.
Has that improved yet?
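For reference, here is a minimal sketch (assuming SLF4J's MDC on the classpath and a JDK 21 virtual-thread-per-task executor; the request IDs are hypothetical) of why this gets simpler with one virtual thread per request: the MDC is backed by thread-locals, so it is created and cleared inside the request's own thread and never has to be copied from one pooled thread to another.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class MdcPerRequestDemo {
        private static final Logger log = LoggerFactory.getLogger(MdcPerRequestDemo.class);

        // One virtual thread per request: the MDC stays with the request from start to finish.
        static void handle(String requestId) {
            MDC.put("requestId", requestId); // lives in this virtual thread's thread-locals
            try {
                log.info("started");         // log lines carry requestId automatically
                // ... call downstream services, blocking freely ...
                log.info("finished");
            } finally {
                MDC.clear();                 // the thread dies with the request anyway
            }
        }

        public static void main(String[] args) {
            try (ExecutorService pool = Executors.newVirtualThreadPerTaskExecutor()) {
                for (int i = 0; i < 3; i++) {
                    String id = "req-" + i;  // hypothetical request id
                    pool.submit(() -> handle(id));
                }
            }
        }
    }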
by davidtos on 7/17/24, 8:26 AM
[1] https://davidvlijmincx.com/posts/virtual-thread-performance-...
by taspeotis on 7/17/24, 6:12 AM
It’s a shame this article paints a neutral (or even negative) experience with virtual threads.
We rewrote a boring CRUD app that spent 99% of its time waiting for the database to respond to be async/await from top to bottom. CPU and memory usage went way down on the web server because so many requests could be handled by far fewer threads.
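The comment doesn't say which language, but the shape of that rewrite is roughly the following (a hedged Java sketch, with hypothetical BlockingDb/AsyncDb interfaces standing in for a real driver): the blocking handler holds a thread for the whole database round trip, while the async one returns a future and frees the thread, so far fewer threads cover the same number of in-flight requests.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.CompletionStage;

    public class AsyncCrudSketch {
        // Hypothetical blocking DB client: the calling thread is parked for the round trip.
        interface BlockingDb { String findUser(long id); }

        // Hypothetical async DB client: returns immediately, completes when the DB responds.
        interface AsyncDb { CompletionStage<String> findUser(long id); }

        // Thread-per-request style: one thread is tied up per in-flight request.
        static String handleBlocking(BlockingDb db, long id) {
            return "hello " + db.findUser(id);
        }

        // Async style: the handler returns a future; no thread is held while waiting,
        // which is why far fewer threads (and less memory) cover the same load.
        static CompletionStage<String> handleAsync(AsyncDb db, long id) {
            return db.findUser(id).thenApply(user -> "hello " + user);
        }

        public static void main(String[] args) {
            BlockingDb fakeBlocking = id -> "user-" + id; // stand-ins for a real driver
            AsyncDb fakeAsync = id -> CompletableFuture.completedFuture("user-" + id);

            System.out.println(handleBlocking(fakeBlocking, 1));
            handleAsync(fakeAsync, 2).thenAccept(System.out::println);
        }
    }

Virtual threads aim to give the blocking style the resource profile of the async style without the top-to-bottom rewrite.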
by tzahifadida on 7/17/24, 4:53 AM