by pella on 1/20/25, 7:31 PM with 173 comments
by latchkey on 1/20/25, 8:16 PM
However you want to dissect this specific issue, I'd generally consider this a positive step and nice to see it hit the front page.
https://www.reddit.com/r/ROCm/comments/1i5aatx/rocm_feedback...
by sorenjan on 1/21/25, 2:05 PM
It's also a strange value proposition. If I'm a programmer in some supercomputer facility and my boss has bought a new CDNA based computer, fine, I'll write AMD specific code for it. Otherwise why should I? If I want to write proprietary GPU code I'll probably use the de facto standard from the industry giant and pick CUDA.
AMD could be collaborating with Intel and a myriad of other companies and organizations and focus on a good open cross-platform GPU programming platform. I don't want to have to think about who makes my GPU! I recently switched from an Intel CPU to an AMD one, obviously no problem. If I had needed new software written specifically for AMD processors I would have just bought a new Intel, even though AMD are leading in performance at the moment. Even Windows on ARM seems to work OK, because most things aren't written in x86 assembly anymore.
Get behind SYCL, stop with the platform specific compilation nonsense, and start supporting consumer GPUs on Windows. If you provide a good base the rest of the software community will build on top. This should have been done ten years ago.
by cherryteastain on 1/21/25, 12:33 AM
All because they went with a boneheaded decision to require per-device code compilation (gfx1030, gfx1031...) instead of compiling to an intermediate representation like CUDA's PTX. Doubly boneheaded considering the graphics API they helped create, Vulkan (which grew out of AMD's Mantle), literally does that via SPIR-V!
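The difference the comment is pointing at can be sketched as a toy model in pure Python. Nothing below is real tooling, and the architecture names are just examples: the point is that a binary shipping only pre-built per-arch code objects fails on any GPU not in its list, while one that also carries an intermediate representation (PTX for CUDA, SPIR-V for Vulkan) lets the driver JIT-compile for a device that didn't exist at build time.

```python
# Toy model (not real ROCm/CUDA tooling): forward compatibility of
# per-arch binaries vs. binaries that also ship an IR fallback.

ROCM_STYLE_BINARY = {"gfx1030", "gfx1031"}  # only code objects baked in at build time
CUDA_STYLE_BINARY = {"sm_80"}               # native code for one arch...
CUDA_HAS_PTX = True                         # ...plus PTX the driver can JIT later

def can_run(device_arch, baked_arches, has_ir_fallback):
    """A binary runs if it has a matching code object, or an IR the driver can JIT."""
    return device_arch in baked_arches or has_ir_fallback

# A GPU released after the build:
print(can_run("gfx1032", ROCM_STYLE_BINARY, False))   # no matching object, no IR -> False
print(can_run("sm_90", CUDA_STYLE_BINARY, CUDA_HAS_PTX))  # driver JITs the PTX -> True
```

In practice HIP binaries can bundle code objects for many gfx targets at once, but each target still has to be chosen at build time; the toy only illustrates why an IR fallback changes that.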
by wtcactus on 1/20/25, 9:35 PM
Either the management at AMD is not smart enough to understand that without the computing software side they will always be a distant number 2 to NVIDIA, or the management at AMD considers it hopeless to ever be able to create something as good as CUDA because they don’t have and can’t hire smart enough people to write the software.
Really, it’s just baffling why they continue on this path to irrelevance. Give it a few years and even Intel will get ahead of them on the GPU side.
by superkuh on 1/20/25, 7:54 PM
By the time a (consumer) AMD device is supported by ROCm, it'll only have a few years of ROCm support left before support is removed. The support lifespan for AMD cards with ROCm is very short. You end up having to use Vulkan, which is not optimized for this, of course, and a bit slower. I once bought an AMD GPU two years after release, and one year after I bought it, ROCm support was dropped.
by __turbobrew__ on 1/20/25, 11:47 PM
Compare this to Nvidia, where I just imported the Go nvml library and it built the cgo code and automatically linked to nvidia-ml.so at runtime.
by ac29 on 1/20/25, 8:53 PM
Windows support is also bad, but it covers significantly more than one GPU.
by criticalfault on 1/21/25, 8:14 PM
What are the chances for AMD to consider alternatives:

- adopt oneAPI and try to fight Nvidia together with Intel
- use Vulkan and implement a PyTorch backend
- adopt SYCL
by wkat4242 on 1/21/25, 3:59 PM
And they drop support too quickly too. The Radeon Pro VII is already out of support, barely five years since release.
This way it will never be a counterpart to CUDA.
by jms55 on 1/21/25, 1:24 AM
Furthermore, don't people use PyTorch (and other libraries? I'm not really clear on what ML tooling is like; it feels like there are hundreds of frameworks, and I haven't seen any simplified list explaining the differences. I would love a TLDR for this) rather than ROCm/CUDA directly anyway? So the main draw can't be ergonomics, at least.
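The layering this comment is asking about can be sketched with a pure-Python toy (the backend names are illustrative labels, not PyTorch internals): most users write against one framework API, and the vendor library underneath is selected by which build of the framework they installed, not by their own code.

```python
# Toy illustration of framework-level dispatch: the user calls one API,
# and the vendor backend chosen at install time does the actual work.

BACKENDS = {
    "cuda": "cuBLAS/cuDNN via CUDA",
    "hip":  "rocBLAS/MIOpen via ROCm",
    "cpu":  "plain CPU BLAS",
}

def matmul(a, b, backend="cpu"):
    """One user-facing API; the backend only changes where the work runs."""
    # Real frameworks dispatch per-tensor to native kernels; this toy
    # computes the product in Python and records the would-be backend.
    rows = [[sum(x * y for x, y in zip(r, c)) for c in zip(*b)] for r in a]
    return rows, BACKENDS[backend]

result, ran_on = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]], backend="hip")
print(result)   # [[19, 22], [43, 50]] regardless of backend
print(ran_on)
```

This is why the parent comments care so much about framework backends: if the ROCm backend under PyTorch works well, most users never touch ROCm directly, the same way most CUDA users never write raw CUDA.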