by gok on 12/11/24, 3:44 PM with 80 comments
by xnx on 12/11/24, 5:40 PM
Wow. I knew custom Google silicon was used for inference, but I didn't realize it was used for training too. Does this mean Google is free of dependence on Nvidia GPUs? That would be a huge advantage over AI competitors.
by lanthissa on 12/12/24, 12:59 PM
If TPUs are really that good, why on earth would Google not sell them? People say it's better to rent, but how can that be true when you look at the value of Nvidia?
by blackeyeblitzar on 12/11/24, 7:55 PM
Plus, big tech companies have the data and customers, and will probably be the only surviving big AI training companies. I doubt startups can survive this game: they can't afford the chips, can't build their own, don't have existing products to leech data off of, and don't have control over distribution channels like OS or app stores.
by randomcatuser on 12/11/24, 5:57 PM
Would be neat if anyone has benchmarks!!
by teleforce on 12/13/24, 3:22 AM
[1] Dataflow architecture: https://en.wikipedia.org/wiki/Dataflow_architecture
[2] The GPU is not always faster:
by Hilift on 12/11/24, 8:40 PM
... "The growing importance of multi-step reasoning at inference time necessitates accelerators that can efficiently handle the increased computational demands."
Unlike others, my main concern with AI is that any savings we got from converting petroleum generating plants to wind/solar were blasted away by AI power consumption months or even years ago. Maybe Microsoft is on to something with the TMI (Three Mile Island) revival.
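A rough back-of-envelope sketch in Python of the quoted point: why multi-step reasoning at inference time multiplies compute, and hence energy, per query. Every number below is an illustrative assumption, not a measurement:

    # Dense decoder rule of thumb: ~2 * parameters FLOPs per generated token.
    # All figures here are assumed, illustrative values.
    params = 70e9                 # assumed model size: 70B parameters
    flops_per_token = 2 * params

    direct_tokens = 300           # assumed length of a direct answer
    reasoning_steps = 8           # assumed number of reasoning steps
    tokens_per_step = 300         # assumed tokens generated per step

    direct_flops = flops_per_token * direct_tokens
    reasoning_flops = flops_per_token * reasoning_steps * tokens_per_step

    print(f"direct answer:  {direct_flops:.2e} FLOPs")
    print(f"with reasoning: {reasoning_flops:.2e} FLOPs")
    print(f"multiplier:     {reasoning_flops / direct_flops:.0f}x")
    # Under these assumptions, reasoning costs ~8x the compute of a direct
    # answer, which translates directly into accelerator-hours and power drawn.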