by tlack on 10/7/24, 7:36 PM with 30 comments
by photonthug on 10/8/24, 11:18 PM
This is really cool. At about 150 lines, terse indeed. And while it makes sense that APL could work well with GPUs, I'm kind of surprised there's enough of it still out in the wild that there's already a reliable toolchain for doing this.
by sakras on 10/9/24, 3:57 AM
I've actually spent the better part of last year wondering why we _haven't_ been using APL for deep learning. And actually I've been wondering why we don't just use APL for everything that operates over arrays, like data lakes and such.
Honestly, APL is probably a good fit for compilers. I seem to remember a guy who had some tree-wrangling APL scheme, and could execute his compiler on a GPU. But I can't find it now.
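One well-known approach in data-parallel compilers (the commenter may be recalling work in this vein) is to flatten the AST into plain arrays — e.g. a parent-pointer vector — so that tree operations become gathers and scans that vectorize naturally. A minimal NumPy sketch of the idea, with all names and the example tree mine:

```python
import numpy as np

# AST as flat arrays: node i's parent is parent[i]; the root (node 0) points to itself.
# Example tree for the expression (1 + 2) * 3:
#   node 0: '*'   node 1: '+'   node 2: '1'   node 3: '2'   node 4: '3'
parent = np.array([0, 0, 1, 1, 0])

# Compute every node's depth simultaneously by repeated parent-chasing.
# Each iteration is one gather (parent[cursor]) over the whole node array --
# exactly the kind of step that maps well to a GPU.
cursor = np.arange(len(parent))
depth = np.zeros(len(parent), dtype=int)
while True:
    not_root = cursor != parent[cursor]   # which nodes haven't reached the root yet
    if not not_root.any():
        break
    depth += not_root                      # count one more hop for those nodes
    cursor = parent[cursor]                # everyone jumps to their parent at once

# depth is now [0, 1, 2, 2, 1]: '*' at the root, '1' and '2' two levels down.
```

The same representation supports sibling ordering, scope resolution, and rewrites as whole-array operations, which is what makes a compiler written this way amenable to GPU execution.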
by bornaahz on 10/9/24, 12:17 PM
I am the author of this project. If anyone has any questions concerning trap, I'd be more than happy to address them.
by anonzzzies on 10/9/24, 9:14 AM
    k-torch llm(61) 14M 2 14 6 288 288 x+l7{l8x{x%1+E-x}l6x}rl5x+:l4@,/(hvi,:l3w)Ss@S''h(ki,:ql2w)mql1w:rl0x (18M 2 32000 288)
which, someone on Discord told me, can apparently run on the GPU (but I'm not sure whether that's true).
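That one-liner looks like a compressed llama-style transformer step: the `{x%1+E-x}` fragment reads as x ÷ (1 + e⁻ˣ), i.e. SiLU, and the `x+l7{...}rl5x` shape resembles a residual SwiGLU feed-forward over an RMS-normed input. A rough NumPy sketch of just that feed-forward piece, under those assumptions (all function and weight names are mine, not from the K code; 288 matches the embedding dimension in the comment's shape list):

```python
import numpy as np

def rms_norm(x, gain, eps=1e-5):
    # RMSNorm: divide by the root-mean-square of x, then apply a learned gain.
    return gain * x / np.sqrt(np.mean(x * x) + eps)

def silu(x):
    # SiLU / swish: x / (1 + e^-x) -- apparently what {x%1+E-x} spells in K.
    return x / (1.0 + np.exp(-x))

def ffn_block(x, gain, w1, w2, w3):
    # Residual SwiGLU feed-forward: x + W2 @ (silu(W1 h) * (W3 h)), h = rmsnorm(x).
    h = rms_norm(x, gain)
    return x + w2 @ (silu(w1 @ h) * (w3 @ h))

rng = np.random.default_rng(0)
d, hidden = 288, 768                      # d = 288 as in the comment's dims
x = rng.standard_normal(d)
gain = np.ones(d)
w1 = rng.standard_normal((hidden, d)) * 0.02
w2 = rng.standard_normal((d, hidden)) * 0.02
w3 = rng.standard_normal((hidden, d)) * 0.02

y = ffn_block(x, gain, w1, w2, w3)        # y has the same shape as x
```

Everything here is matrix products and elementwise maps over whole arrays, which is why the K version compresses so far and why the claim that it runs on a GPU is at least plausible.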
by gcanyon on 10/9/24, 1:39 AM
It sure did to me, even as someone who has written (a trivial amount of) J. But the argument that follows is more than convincing.
by smartmic on 10/9/24, 7:23 AM