by fariszr on 12/6/23, 7:24 PM with 16 comments
by filterfiber on 12/6/23, 8:01 PM
I know they are "technically supported" by ONNX, PyTorch, etc., but almost every project I've seen comes with a bunch of asterisks: lots of people needing different workarounds, and subpar performance relative to the hardware.
Maybe it's because most people are on consumer cards? Nvidia, however, doesn't have this issue; people on Pascal are only just now hitting compatibility problems from not supporting a high enough CUDA version.
The dream is that they release a 4090-class competitor for a few grand. It doesn't even need full 4090-level performance; just strap on 48 GB+ of HBM and it would be an absolutely incredible deal for individuals learning and researching, who would then use the full enterprise hardware for full-scale training and commercial inference.
by rreichman on 12/6/23, 8:28 PM
[1] - https://www.semianalysis.com/p/amd-mi300-performance-faster-...
by Racing0461 on 12/6/23, 10:35 PM