by rck on 3/9/23, 5:02 AM with 203 comments
by fxtentacle on 3/9/23, 8:06 AM
I'm surprised you did not look at "Dear ImGui", "Noesis", and "JUCE". All three of them are heavily used in gaming, are rather clean C++, use full GPU acceleration, and have visual editors available. JUCE especially is used for A LOT of hard-realtime professional audio applications.
"When we started building Zed, arbitrary 2D graphics rendering on the GPU was still very much a research project."
What are you talking about? JUCE has had GPU-accelerated spline shapes and SVG animations since 2012?
BTW, I like the explanations for how they use SDFs for rendering basic primitives. But that technique looks an awful lot like the 2018 GPU renderer from KiCad ;) And lastly, that glyph atlas for font rendering is only 1 channel? KiCad uses a technique with RGB gradients so that the rendered glyphs can be anti-aliased without accidentally rounding sharp corners. Overall, this reads to me like they did not do much research before starting, which is totally OK, but then they shouldn't say stuff like "did not exist" or "was still a research project".
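For context, a rough sketch of the SDF approach being discussed, written as plain Rust rather than any actual shader from Zed or KiCad: the signed distance to a rounded rectangle, sampled per pixel and mapped to an anti-aliased coverage value. The function names and the simple clamp-based coverage mapping are illustrative only.

    // Illustrative only: the math of SDF-based primitive rendering, not any
    // real renderer's shader code. A GPU version would run this per fragment.

    /// Signed distance from point `p` to a rounded rect centered at the origin
    /// with half-extents `half_size` and corner radius `radius`.
    /// Negative inside the shape, positive outside.
    fn rounded_rect_sdf(p: (f32, f32), half_size: (f32, f32), radius: f32) -> f32 {
        let qx = p.0.abs() - half_size.0 + radius;
        let qy = p.1.abs() - half_size.1 + radius;
        let outside = (qx.max(0.0).powi(2) + qy.max(0.0).powi(2)).sqrt();
        let inside = qx.max(qy).min(0.0);
        outside + inside - radius
    }

    /// Map distance to pixel coverage: opaque well inside, transparent outside,
    /// with a ~1px smooth edge for anti-aliasing.
    fn coverage(distance: f32) -> f32 {
        (0.5 - distance).clamp(0.0, 1.0)
    }

    fn main() {
        // Sample a few pixels of a 200x100 rect with 8px corners.
        for &p in &[(0.0, 0.0), (99.0, 49.0), (120.0, 60.0)] {
            let d = rounded_rect_sdf(p, (100.0, 50.0), 8.0);
            println!("p = {:?}, distance = {:.1}, coverage = {:.2}", p, d, coverage(d));
        }
    }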
by yason on 3/9/23, 9:45 AM
What causes user interfaces to hiccup is that it's too easy to do stuff in the main UI thread. At first it doesn't matter, but the work does accumulate, and eventually the UI begins to freeze briefly, for example after you press a button. The user interface gets intermingled with the program logic, and the program's execution becomes visible to the user as UI stalls.
It would be very much possible to run the user interface in its own thread, pinned to a single CPU as a high-priority task, updating at vsync rate as soon as there are dirty areas in the window, doing nothing more than forwarding UI events to the processing thread. This is closer to how games work: the rendering thread does rendering, and other, slower-paced threads run physics, simulation, and game logic at a suitable pace. With games it's obvious, because rendering is hard and needs to be fast, so anything else that might slow down rendering must be moved away, but UIs shouldn't be any different. An instantly reacting UI feels natural to a human; one that takes its time to act will slow down the brain.
But you don't need a GPU for that.
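A minimal sketch of the thread split described above, using only std channels and made-up event/update types: the UI thread ticks at roughly vsync rate, forwards input, and draws whatever state it currently has, while all slow work lives on a worker thread.

    // Minimal sketch of the split the parent comment describes (types are made up):
    // a UI loop that only draws and forwards input, and a worker thread that runs
    // the actual program logic and sends results back to be rendered.
    use std::sync::mpsc;
    use std::thread;
    use std::time::{Duration, Instant};

    enum UiEvent { ButtonPressed(u32) }
    enum Update { Redraw(String) }

    fn main() {
        let (event_tx, event_rx) = mpsc::channel::<UiEvent>();
        let (update_tx, update_rx) = mpsc::channel::<Update>();

        // Worker thread: all potentially slow work lives here, never on the UI thread.
        thread::spawn(move || {
            for event in event_rx {
                match event {
                    UiEvent::ButtonPressed(id) => {
                        thread::sleep(Duration::from_millis(200)); // pretend this is expensive
                        let _ = update_tx.send(Update::Redraw(format!("button {id} done")));
                    }
                }
            }
        });

        // "UI thread": runs at roughly vsync rate, never blocks on the worker.
        let frame = Duration::from_micros(16_667);
        let mut label = String::from("idle");
        let _ = event_tx.send(UiEvent::ButtonPressed(1)); // simulate a click

        for _ in 0..30 {
            let start = Instant::now();
            while let Ok(Update::Redraw(text)) = update_rx.try_recv() {
                label = text; // apply finished work without waiting for it
            }
            println!("frame drawn, label = {label}");
            if let Some(rest) = frame.checked_sub(start.elapsed()) {
                thread::sleep(rest);
            }
        }
    }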
by maeln on 3/9/23, 9:11 AM
That would be my advice to anyone making a gpu-accelerated ui library in 2023: Try to support accessibility, and even better: make it a first class citizen.
by msvan on 3/9/23, 6:02 PM
by xlii on 3/9/23, 10:48 AM
I love immediate feedback but getting it ranges from hard to nigh impossible. E.g. I have a complex Emacs setup for rendering Pikchr diagrams, but there are a lot of problems to solve from diagram conception to the end result, so I thought, hey, why not make my own cool RT editor - in Rust, obviously.
Unfortunately I learned that GUIs are a tough problem, especially if the idea is hobby-based and there's only one developer on it. Ultra-responsive GUIs are cool; I have a prototype in egui (not sure if that's as fast as Zed's premise, but it feels fast nonetheless) and yet it doesn't support multiple windows, which I wanted to have.
120 FPS with direct rendering sounds AWESOME just for the sake of it, but I believe that for the end user layout will be more important than refresh rate, and that's a different beast to tame.
Personally I "almost" settled for Dioxus (shameless plug: [1], there's a link to a YT video) and I'm quite happy with it. Having the editor in a WebView feels really quirky though (e.g. no textareas; I'm intercepting key events and rendering into a div glyph-by-glyph directly).
by Animats on 3/9/23, 6:04 PM
I'd like to see "gedit", for Linux, fixed. It can stall on large files, and, in long edit sessions, will sometimes mess up the file name in the tab. Or "notepad++" for Linux.
by nottorp on 3/9/23, 12:01 PM
by DeathArrow on 3/9/23, 10:05 AM
by almostdigital on 3/9/23, 11:16 AM
by fassssst on 3/9/23, 1:41 PM
by tayistay on 3/9/23, 7:04 PM
Is their GPUI library open source?
by monkeydust on 3/9/23, 3:31 PM
by scotty79 on 3/9/23, 1:59 PM
by soulbadguy on 3/9/23, 11:35 AM
by bjconlan on 3/10/23, 11:19 PM
I'm surprised nobody pointed out lite/lite-xl here either. Its UI rendering is very similar (although fonts go via a texture atlas, like a game would do it), and it doesn't focus overly on the GPU but optimises those paths like games did circa DirectX 9 / OpenGL 1.3.
There are great details of the approach taken with lite at https://rxi.github.io
Lite-xl might have evolved the renderer, but the code here is very consumable for me.
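A hedged sketch of the "fonts via a texture" approach mentioned above, with made-up types and a naive shelf packer rather than lite's actual code: rasterize each glyph once, pack it into a single-channel atlas, and cache its region so later frames only need to emit textured quads.

    // Illustrative glyph atlas, not lite/lite-xl's implementation.
    use std::collections::HashMap;

    #[derive(Clone, Copy)]
    struct AtlasRegion { x: u32, y: u32, w: u32, h: u32 }

    struct GlyphAtlas {
        texture: Vec<u8>,   // single-channel coverage, atlas_size x atlas_size
        atlas_size: u32,
        cursor_x: u32,      // naive shelf packer: fill rows left to right
        cursor_y: u32,
        row_height: u32,
        cache: HashMap<char, AtlasRegion>,
    }

    impl GlyphAtlas {
        fn new(atlas_size: u32) -> Self {
            GlyphAtlas {
                texture: vec![0; (atlas_size * atlas_size) as usize],
                atlas_size,
                cursor_x: 0,
                cursor_y: 0,
                row_height: 0,
                cache: HashMap::new(),
            }
        }

        /// Return the atlas region for `ch`, rasterizing and packing it on first use.
        /// `rasterize` is a placeholder for whatever font library produces the bitmap.
        fn get_or_insert(&mut self, ch: char, rasterize: impl Fn(char) -> (u32, u32, Vec<u8>)) -> AtlasRegion {
            if let Some(&region) = self.cache.get(&ch) {
                return region;
            }
            let (w, h, bitmap) = rasterize(ch);
            if self.cursor_x + w > self.atlas_size {
                self.cursor_x = 0;
                self.cursor_y += self.row_height;
                self.row_height = 0;
            }
            let region = AtlasRegion { x: self.cursor_x, y: self.cursor_y, w, h };
            // Copy the glyph bitmap into the atlas; a real renderer would also
            // upload this sub-rect to the GPU texture here.
            for row in 0..h {
                for col in 0..w {
                    let dst = ((region.y + row) * self.atlas_size + region.x + col) as usize;
                    self.texture[dst] = bitmap[(row * w + col) as usize];
                }
            }
            self.cursor_x += w;
            self.row_height = self.row_height.max(h);
            self.cache.insert(ch, region);
            region
        }
    }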
by tiffanyh on 3/9/23, 6:41 PM
It should be noted that the main person behind Zed is Nathan Sobo, who created Atom while he was at GitHub; Atom's Electron shell is the basis of Visual Studio Code today.
As such, I have high hopes that Zed will be a much faster version of Visual Studio Code, and I'm excited to see what he & his team make.
by flohofwoe on 3/9/23, 1:56 PM
IME the main theme in achieving high performance, not just in games and not just in rendering, is to avoid frequent 'context switches' and instead queue/batch a lot of work (e.g. all rendering commands for one frame) and then submit all of this work at once "to the other side" in a single call. This is not just how modern 3D APIs work; it's the same basic idea behind 'miracle APIs' like io_uring.
This takes care of the 'throughput problem', but it can easily lead to a 'latency problem' (if the work to be done needs to travel through several such queues which are 'pipelined' together).
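A tiny illustration of that batching idea, with placeholder types and no real GPU API: record everything the frame needs into one command list, then hand the whole list over in a single submit call instead of many small ones.

    // Placeholder command list, not any actual 3D API.
    enum DrawCmd {
        Quad { x: f32, y: f32, w: f32, h: f32, color: [f32; 4] },
        Glyphs { run_id: u32 },
    }

    #[derive(Default)]
    struct Frame { commands: Vec<DrawCmd> }

    impl Frame {
        fn push(&mut self, cmd: DrawCmd) { self.commands.push(cmd); }

        // One "context switch" per frame instead of one per primitive.
        fn submit(self) {
            println!("submitting {} commands in one call", self.commands.len());
            // real code: encode into a GPU command/vertex buffer and submit once
        }
    }

    fn main() {
        let mut frame = Frame::default();
        frame.push(DrawCmd::Quad { x: 0.0, y: 0.0, w: 800.0, h: 24.0, color: [0.1, 0.1, 0.1, 1.0] });
        frame.push(DrawCmd::Glyphs { run_id: 42 });
        frame.submit();
    }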
by btryder on 3/9/23, 5:04 PM
Curious if you guys have thought about VR / AR possibilities with GPUI?
by jug on 3/9/23, 12:10 PM
by computing on 3/9/23, 4:59 PM
Second, forgive a naive question since I know nothing about graphics, but would the method described in the article perform better than Alacritty + Neovim?
by haberman on 3/9/23, 6:27 PM
I was really excited when I saw that demo. Why didn't this turn into a final product that people could use?
by FpUser on 3/9/23, 3:56 PM
by tgflynn on 3/9/23, 12:35 PM
I suspect a lot of time is spent on the CPU side updating vertex and other data and pushing it to the GPU, so it would be useful to have some more detail on how they are handling that.
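For illustration, one plausible shape of that CPU-side work, with placeholder types and a stand-in for the actual buffer upload call (this is not GPUI's API): pack per-quad instance data into one contiguous buffer each frame and push it to the GPU in a single copy.

    // Hypothetical per-frame instance upload; types and the upload call are placeholders.
    #[repr(C)]
    #[derive(Clone, Copy)]
    struct QuadInstance {
        origin: [f32; 2],
        size: [f32; 2],
        corner_radius: f32,
        color: [f32; 4],
    }

    fn build_instances(cursor_line: f32) -> Vec<QuadInstance> {
        // CPU-side work: re-derive what changed this frame (layout, selections, cursor).
        vec![QuadInstance {
            origin: [8.0, cursor_line * 18.0],
            size: [2.0, 18.0],
            corner_radius: 0.0,
            color: [1.0, 1.0, 1.0, 1.0],
        }]
    }

    // Stand-in for e.g. a Metal/wgpu "write these bytes into a GPU buffer" call.
    fn upload_to_gpu(bytes: &[u8]) {
        println!("uploading {} bytes in one copy", bytes.len());
    }

    fn main() {
        let instances = build_instances(12.0);
        // One reinterpretation + one upload per frame; the GPU then draws all quads instanced.
        let bytes: &[u8] = unsafe {
            std::slice::from_raw_parts(
                instances.as_ptr() as *const u8,
                instances.len() * std::mem::size_of::<QuadInstance>(),
            )
        };
        upload_to_gpu(bytes);
    }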
by trollied on 3/9/23, 5:27 PM
by p0nce on 3/9/23, 3:30 PM
by sebastianconcpt on 3/9/23, 1:34 PM
by DeathArrow on 3/9/23, 10:14 AM
by 29athrowaway on 3/9/23, 4:35 PM
by RonnieOwnsLexus on 3/9/23, 12:01 PM
by avereveard on 3/9/23, 12:51 PM
And it's not just about electricity cost and heat stress; it will conflict with everything else that needs the GPU, including watching 4K video on the second monitor, which does have a legitimate case for requiring hardware acceleration since it moves a lot of data 60 times per second, and your editor doesn't.
And the limited resource is not just the GPU itself; the onboard memory next to it is scarce in its own right. I'd be really mad at software that prevents me from multitasking.