Hello World on the GPU (2019)

by thdespou on 11/15/23, 11:47 AM with 42 comments

  • by JonChesterfield on 11/16/23, 5:14 PM

    As of this year (ish), `int main() {puts("hello, world\n");}` stands a decent chance of running on a GPU and doing the right thing if you compile it with clang. Terminal application style. Should be able to spell it printf shortly; variadic functions turn out to be a bit of a mess.
  • by raytopia on 11/16/23, 6:04 PM

    Great parody of WebGPU and other low-level graphics APIs.
  • by runetech on 11/16/23, 9:35 PM

    If nothing else, I am grateful for the introduction to Selah Sue (the music that plays when you press, well... the play symbol in the top animation).

    Spectacular vibe! Combined with the fullscreen animation, it's almost reminiscent of the demo-scene. I enjoyed the rest of the actual web page much more after that.

    I salute thee whoever made this. Much appreciated!

  • by dragontamer on 11/16/23, 5:17 PM

    There's a degree of GPU style going on here, but it's not OpenGL or DirectX.

      for y in 0..height {
        for x in 0..width {

          // Get target position
          let tx = x + offset;
          let ty = y;

          // ... (the rest of the per-pixel work continues in the article)
        }
      }
    So this code, in a language I'm not too familiar with, clearly expresses a GPU concept. The difference is that this 2-dimensional for-loop is executed in parallel on modern GPUs, in the so-called pixel shader.

    A pixel shader involves all sorts of complications in practice, and it deserves at least a few days of studying the rendering pipeline to understand. But the tl;dr is that a pixel shader launches a thread (erm... a SIMD lane? A... work-item? A shader invocation?) per pixel, and then the device drivers do some magic to group them together.

    Like, in the raw hardware, pixel 0-0 is going to be rendered at the same time as pixel 0-1, pixel 0-2, etc. etc. And the body of this "for loop" is the code that runs for each of them.

    Sure, it's SIMD, and it's all kinds of complicated to fully describe what's going on here. But the bulk of GPU programming (or at least, of pixel shaders) is recognizing the one-thread-per-pixel (erm, SIMD-lane-per-pixel) approach.
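
    To make that one-thread-per-pixel idea concrete, here is a rough sketch (not the article's code; it leans on Rust's rayon crate as a stand-in for the GPU's scheduler, and the names `shade` and `rasterize` are made up for illustration):

      // One-thread-per-pixel, sketched on the CPU with rayon.
      use rayon::prelude::*;

      // The per-pixel "kernel": it depends only on this pixel's own
      // coordinates, never on its neighbours, so every invocation is
      // independent of every other one.
      fn shade(x: u32, y: u32, width: u32, height: u32) -> [u8; 3] {
          let r = (255 * x / width) as u8;
          let g = (255 * y / height) as u8;
          [r, g, 0]
      }

      fn rasterize(width: u32, height: u32) -> Vec<[u8; 3]> {
          (0..width * height)
              .into_par_iter() // each "lane" gets exactly one pixel
              .map(|i| shade(i % width, i / width, width, height))
              .collect()
      }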

    ------------------

    Anyway, I think this post is... GPU-enough. I'm not sure this truly executes on a GPU, given how the code was written. But I'd give it my stamp of approval as far as "describing code as if it were being run on a GPU", even if it cheats for simplicity in many spots.

    The #1 most important part is that the "rasterize" routine is written in the embarrassingly parallel mindset. Every pixel could, in theory, be processed in parallel. (Notice that no pixel needs race conditions, locks, or sequencing with any other pixel.)

    And the #2 part is having the "sequential" CPU code communicate with the "embarrassingly parallel" rasterize routine in a simple, logical, and readable manner. And this post absolutely accomplishes that.

    It's harder to write this cleanly than it looks. But having someone show you how it is done, as this post does, helps with the learning process.
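
    For illustration, the shape being praised looks roughly like this (a sketch continuing the hypothetical `rasterize` above, not the article's actual code):

      fn main() {
          // Sequential CPU code: set up the inputs...
          let (width, height) = (640, 480);

          // ...make one call into the embarrassingly parallel routine...
          let framebuffer = rasterize(width, height);

          // ...and carry on sequentially with the result.
          println!("rendered {} pixels", framebuffer.len());
      }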

  • by hutzlibu on 11/16/23, 5:55 PM

    "Graphics programming can be intimidating. It involves a fair amount of math, some low-level code, and it's often hard to debug. Nevertheless I'd like to show you how to do a simple "Hello World" on the GPU. You will see that there is in fact nothing to be afraid of."

    57 created objects later

    "Hm. Damn"

    Well... there is a reason it is usually "hello triangle" in GPU tutorials. Spoiler alert: GPUs ain't easy.