by rx_tx on 6/3/24, 4:34 AM with 292 comments
by buildbot on 6/3/24, 6:07 AM
by dhx on 6/3/24, 5:49 AM
The article makes the layout appear to be as follows (rough bandwidth arithmetic is sketched after the list):
* 16x PCIe 5.0 lanes for "graphics use" connected directly to the 9950X (~63GB/s).
* 1x PCIe 5.0 lane for an M.2 port connected directly to the 9950X (~4GB/s). Motherboard manufacturers seemingly could repurpose "graphics use" PCIe 5.0 lanes for additional M.2 ports.
* 7x PCIe 5.0 lanes connected to the X870E chipset (~28GB/s), used as follows:
  * 4x USB 4.0 ports connected to the X870E chipset (~8GB/s).
  * 4x PCIe 4.0 ports connected to the X870E chipset (~8GB/s).
  * 4x PCIe 3.0 ports connected to the X870E chipset (~4GB/s).
  * 8x SATA 3.0 ports connected to the X870E chipset (some >~2.4GB/s part of the ~8GB/s shared with WiFi 7).
  * WiFi 7 connected to the X870E chipset (some >~1GB/s part of the ~8GB/s shared with the 8x SATA 3.0 ports).
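A rough sanity check of those figures in C, assuming ~3.94GB/s of usable bandwidth per PCIe 5.0 lane (32 GT/s with 128b/130b encoding) and a halving per older generation. The grouping is just my reading of the article, not an official block diagram:

    /* Back-of-the-envelope PCIe bandwidth arithmetic for the list above.
     * Assumes ~3.94 GB/s usable per PCIe 5.0 lane (32 GT/s, 128b/130b),
     * halving for each older generation. Illustrative only. */
    #include <stdio.h>

    static double lane_gbs(int gen)
    {
        double gen5 = 32.0 * (128.0 / 130.0) / 8.0;   /* ~3.94 GB/s per 5.0 lane */
        return gen5 / (double)(1 << (5 - gen));       /* 4.0 halves it, 3.0 quarters it */
    }

    int main(void)
    {
        printf("16x PCIe 5.0 lanes (graphics):  ~%.0f GB/s\n", 16 * lane_gbs(5));
        printf(" 1x PCIe 5.0 lane  (M.2):       ~%.0f GB/s\n",  1 * lane_gbs(5));
        printf(" 7x PCIe 5.0 lanes (to X870E):  ~%.0f GB/s\n",  7 * lane_gbs(5));
        printf(" 4x PCIe 4.0 lanes (chipset):   ~%.0f GB/s\n",  4 * lane_gbs(4));
        printf(" 4x PCIe 3.0 lanes (chipset):   ~%.0f GB/s\n",  4 * lane_gbs(3));
        return 0;
    }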
by irusensei on 6/3/24, 12:43 PM
But now I'm seeing lots of things I'm locked out of: faster Ethernet standards, the fun that comes with tons of GPU memory (no USB4, and I can't add 10GbE either), faster and larger memory options, AV1 encoding. It's just sad that I bought a laptop right before those things were released.
Should have gone with a proper PC. Not making that mistake again.
by gautamcgoel on 6/3/24, 5:17 AM
by mmaniac on 6/3/24, 8:24 AM
The mobile APUs are way more interesting.
by ArtTimeInvestor on 6/3/24, 5:46 AM
If so, is this unique - that a whole industry relies on one company?
by gattr on 6/3/24, 2:29 PM
by ComputerGuru on 6/3/24, 5:23 PM
In light of the "very good but not incredible" generation-over-generation improvement, I guess we can now play the "can you get more performance for fewer dollars buying used last-gen HEDT or Epyc hardware than with the newest Zen 5 releases?" game (NB: not "value for your dollar" but "actually better performance").
by tripdout on 6/3/24, 5:14 AM
by sylware on 6/3/24, 12:17 PM
But I am more interested in the cleanup of the GPU hardware interface (it should be astonishingly simple to program the GPU via its various ring buffers, as is rumored to be the case on the Nvidia side) AND in the squashing of all hardware shader bugs: look at Valve's ACO compiler errata in Mesa; AMD's shader hardware is a minefield of bugs. Hopefully GFX12 fixed ALL KNOWN SHADER HARDWARE BUGS (sorry, ACO is written in that horrible C++, I don't know what went through Valve's head, and no, Rust syntax is as complex as C++, so that's toxic too).
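For anyone unfamiliar with the ring buffers mentioned above: GPU command submission generally boils down to the driver writing command packets into a circular buffer and publishing a write pointer for the GPU to chase. The C sketch below is a generic, hypothetical illustration of that pattern (the names cmd_ring and ring_submit are made up; this is not AMD's actual submission interface):

    /* Generic command-ring sketch: the CPU appends packets and advances a
     * write pointer; the GPU executes packets and advances a read pointer.
     * Hypothetical illustration only -- not AMD's real hardware interface. */
    #include <stdint.h>
    #include <stdio.h>

    #define RING_ENTRIES 256u              /* power of two, so '%' could become a mask */

    struct cmd_ring {
        uint32_t packets[RING_ENTRIES];    /* command dwords */
        uint32_t wptr;                     /* CPU bumps this after writing */
        uint32_t rptr;                     /* GPU bumps this as it executes */
    };

    /* Returns 0 on success, -1 if there is not enough free space. */
    static int ring_submit(struct cmd_ring *r, const uint32_t *cmds, uint32_t n)
    {
        uint32_t used = r->wptr - r->rptr;         /* wrap-safe unsigned math */
        if (used + n > RING_ENTRIES)
            return -1;                             /* full: caller must wait */
        for (uint32_t i = 0; i < n; i++)
            r->packets[(r->wptr + i) % RING_ENTRIES] = cmds[i];
        r->wptr += n;            /* a real driver would also ring a doorbell register */
        return 0;
    }

    int main(void)
    {
        struct cmd_ring ring = {0};
        uint32_t nop = 0xC0001000u;                /* made-up packet encoding */
        if (ring_submit(&ring, &nop, 1) == 0)
            printf("submitted: wptr=%u rptr=%u\n",
                   (unsigned)ring.wptr, (unsigned)ring.rptr);
        return 0;
    }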
by Sparkyte on 6/3/24, 7:08 AM
by Pet_Ant on 6/3/24, 1:33 PM
by adriancr on 6/3/24, 5:51 AM
by nubinetwork on 6/3/24, 7:37 AM
by Tepix on 6/3/24, 11:38 AM
It's as if our planet wasn't being destroyed at a frightening speed. We're headed towards a cliff, but instead of braking, we're accelerating.
by jiggawatts on 6/3/24, 8:32 AM
Sooner or later, AI will need to run on the edge, and that'll require RAM bandwidths measured in multiple terabytes per second, as well as "tensor" compute integrated closely with CPUs (see the rough arithmetic at the end of this comment).
Sure, a lot of people see LLMs as "useless toys" or "overhyped" now, but people said that about the Internet too. What it took to make everything revolve around the Internet, instead of it being just a fad, was broadband. When everyone had fast always-on Internet at home and in their mobile devices, then nobody could argue that the Internet wasn't useful. Build it, and the products will come!
If every gaming PC had the same spec as a GB200 or MI300, then games could do real-time voice interaction with "intelligent" NPCs with low latency. You could talk to characters, and they could talk back. Not just talk, but argue, haggle, and debate!
"No, no, no, the dragon is too powerful! ... I don't care if your sword is a unique artefact, your arm is weak!"
I feel like this is the same kind of step-change as floppy drives to hard drives, or dialup to fibre. It'll take time. People will argue that "you don't need it" or "it's for enterprise use, not for consumers", but I have faster Internet going to my apartment than my entire continent had 30 years ago.
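On the "multiple terabytes per second" point: memory-bound LLM decoding has to stream roughly the full weight set for every generated token, so the required bandwidth is approximately model_bytes * tokens_per_second. A rough C sketch with assumed, purely illustrative model sizes and speeds:

    /* Rough memory-bandwidth estimate for local LLM decoding, using the
     * approximation that each generated token reads the full weight set once.
     * Model sizes and target speeds below are illustrative assumptions. */
    #include <stdio.h>

    static double needed_tb_per_s(double params_billion, double bytes_per_param,
                                  double tokens_per_s)
    {
        double model_bytes = params_billion * 1e9 * bytes_per_param;
        return model_bytes * tokens_per_s / 1e12;     /* bytes/s -> TB/s */
    }

    int main(void)
    {
        /* e.g. a 70B-parameter model with 8-bit weights at conversational speed */
        printf("70B @ 1.0 B/param, 30 tok/s: ~%.1f TB/s\n", needed_tb_per_s(70, 1.0, 30));
        /* a small 8B model with 4-bit weights is far less demanding */
        printf(" 8B @ 0.5 B/param, 30 tok/s: ~%.2f TB/s\n", needed_tb_per_s(8, 0.5, 30));
        return 0;
    }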
by alberth on 6/3/24, 12:40 PM
Question: Am I understanding this correctly that AMD will be using a TSMC node that's two years old, but in a way kind of older than that?
Because N4 was essentially an "N5+" (and the current gen is an "N3+").
EDIT: why the downvotes for a question?
by aurareturn on 6/3/24, 5:18 AM
It will be significantly slower in ST than the M4, and even more so against the M4 Pro/Max.
by papichulo2023 on 6/3/24, 5:42 AM