by _yo2u on 4/10/24, 3:16 PM with 60 comments
by jsheard on 4/10/24, 4:27 PM
by modeless on 4/10/24, 3:47 PM
by mlsu on 4/10/24, 6:52 PM
And they mention a compiler in PyTorch — is that open sourced? I really liked the Google Coral chips; they are perfect little chips for running image recognition and bounding-box tasks. But since the compiler is closed source, it's impossible to extend them beyond what Google had in mind for them when they came out in 2018, and they are completely tied to TensorFlow, with a very risky software support story going forward (it's a Google product, after all).
Is it the same story for this chip?
by chessgecko on 4/10/24, 5:49 PM
Still, this looks like it would make for an amazing prosumer home AI setup. You could probably fit 12 accelerators on a wall outlet with change left over for a CPU. That would be enough memory to serve a 2T-parameter model at 4-bit, with reasonable dense performance for small training runs and image stuff. It potentially wouldn't cost too much to make either, since there's no need to pay for CoWoS or HBM.
I'd definitely buy one if they ever decided to sell it and could keep the price under like $800/accelerator.
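The power and memory arithmetic behind this comment can be sanity-checked with a quick back-of-envelope calculation. Everything here is an assumption, not a figure from the article: 25 W per accelerator (the number cited elsewhere in this thread), a standard US 15 A / 120 V outlet, a hypothetical 150 W host-CPU budget, and 4-bit (0.5 byte/parameter) weight-only quantization.

```python
# Back-of-envelope check of the home-setup numbers above.
# All constants are assumptions for illustration, not chip specs.

ACCELERATORS = 12
WATTS_PER_ACCEL = 25           # assumed per-accelerator draw (see 25 W figure in thread)
OUTLET_WATTS = 15 * 120        # 1800 W available on a 15 A / 120 V circuit
CPU_BUDGET_WATTS = 150         # hypothetical host-CPU allowance

total_accel_watts = ACCELERATORS * WATTS_PER_ACCEL
headroom = OUTLET_WATTS - total_accel_watts - CPU_BUDGET_WATTS

params = 2e12                  # "2T model"
bytes_per_param = 0.5          # 4-bit quantization
total_weight_gb = params * bytes_per_param / 1e9
per_accel_gb = total_weight_gb / ACCELERATORS

print(f"accelerators: {total_accel_watts} W, outlet headroom: {headroom} W")
print(f"weights: {total_weight_gb:.0f} GB total, ~{per_accel_gb:.0f} GB per accelerator")
```

Under these assumptions the 12 accelerators draw 300 W, leaving well over a kilowatt of headroom, and the 2T model needs ~1 TB of weights, i.e. roughly 83 GB of memory per accelerator.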
by teaearlgraycold on 4/10/24, 4:39 PM
I can only imagine the lack of fear Jensen experiences when reading this.
by prng2021 on 4/10/24, 11:00 PM
by jrgd on 4/10/24, 5:13 PM
by duchenne on 4/10/24, 6:23 PM
by ein0p on 4/10/24, 7:43 PM
by sroussey on 4/10/24, 3:57 PM
Low power: 25 W.
They could use higher-bandwidth memory if their workloads were more than recommendation engines.
by throwaway48476 on 4/10/24, 6:37 PM
by xnx on 4/10/24, 4:00 PM
by bevekspldnw on 4/10/24, 10:07 PM
I feel like Zuck figured out he's just running an ads network, that the world is a long way away from some VR fever dream, and that it's time to focus on milking each DAU for as many clicks as possible.