by jkuria on 1/13/25, 2:57 PM with 57 comments
by jkuria on 1/14/25, 4:34 AM
by openrisk on 1/14/25, 7:24 AM
by gnabgib on 1/14/25, 4:53 AM
New Training Technique for Highly Efficient AI Methods (2 points, 5 hours ago) https://news.ycombinator.com/item?id=42690664
DiLoCo: Distributed Low-Communication Training of Language Models (46 points, 1 year ago, 14 comments) https://news.ycombinator.com/item?id=38549337
by aimanbenbaha on 1/14/25, 1:20 PM
Federated learning lowers the barrier to entry and expands the ecosystem, letting more participants pool compute and/or datasets so that smaller players can train models.
DiLoCo, introduced by Douillard, minimizes communication overhead by letting workers train locally and only periodically averaging their weight updates. What the article misses, though, is that each GPU in the distributed cluster still needs enough VRAM to hold a full copy of the model to complete training. That's where DisTrO comes in: it reduces inter-GPU communication even further with a decoupling technique (DeMo) that shares only the fast-moving components of the optimizer state across the GPU cluster.
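Roughly, the DiLoCo idea looks like the sketch below (a toy least-squares problem, not the paper's actual setup; the worker count, inner-step count, and plain averaging as the outer step are simplifications I picked for illustration):

```python
import numpy as np

# Toy DiLoCo-style loop (illustrative only): K workers each run H local SGD
# steps on their own data shard, then only the averaged weight delta is
# communicated and applied to the shared "global" parameters.
rng = np.random.default_rng(0)
dim, workers, local_steps, rounds, lr = 8, 4, 50, 10, 0.05

# Each worker owns a different least-squares shard: minimize ||A_k x - b_k||^2
shards = [(rng.normal(size=(32, dim)), rng.normal(size=32)) for _ in range(workers)]
global_x = np.zeros(dim)

for r in range(rounds):
    deltas = []
    for A, b in shards:
        x = global_x.copy()              # start from the shared parameters
        for _ in range(local_steps):     # inner loop: no communication at all
            grad = 2 * A.T @ (A @ x - b) / len(b)
            x -= lr * grad
        deltas.append(x - global_x)      # only this delta would be sent
    # Outer step: average the deltas (DiLoCo itself uses a fancier outer optimizer)
    global_x += np.mean(deltas, axis=0)
    loss = np.mean([np.mean((A @ global_x - b) ** 2) for A, b in shards])
    print(f"round {r}: loss {loss:.4f}")
```

The point is that each worker communicates one parameter-sized vector per round instead of a gradient per step.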
> And what if the costs could drop further still? The dream for developers pursuing truly decentralised ai is to drop the need for purpose-built training chips entirely. Measured in teraflops, a count of how many operations a chip can do in a second, one of Nvidia's most capable chips is roughly as powerful as 300 or so top-end iPhones. But there are a lot more iPhones in the world than gpus. What if they (and other consumer computers) could all be put to work, churning through training runs while their owners sleep?
This aligns with DisTrO: according to its authors, it could also let consumer devices like desktop gaming PCs join the compute cluster and share the workload. There is also an open-source project called exo that splits models across idle local devices, though it is limited to inference.
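As a rough illustration of the layer-splitting idea behind exo (not its actual API; the device count, layer sizes, and toy MLP here are invented for the example):

```python
import numpy as np

# Illustrative sketch only: split a toy MLP's layers across several "devices"
# and run inference by handing the activation vector along the pipeline.
rng = np.random.default_rng(1)
layer_sizes = [16, 32, 32, 32, 8]
layers = [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
          for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def partition(items, num_devices):
    """Assign contiguous slices of layers to each device."""
    per = -(-len(items) // num_devices)  # ceiling division
    return [items[i:i + per] for i in range(0, len(items), per)]

devices = partition(layers, num_devices=2)  # e.g. a laptop plus a phone

def run_inference(x):
    # Each "device" only ever holds its own shard of the weights; only the
    # small activation vector crosses the device boundary.
    for shard in devices:
        for W, b in shard:
            x = np.maximum(x @ W + b, 0.0)  # ReLU MLP layer
    return x

print(run_inference(rng.normal(size=16)).shape)  # -> (8,)
```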
Again, this might still be relevant: the article mentions that DiLoCo made the model respond better to instruction prompts and reasoning questions it had never encountered during pre-training, and Arthur seems to think test-time training will make his approach the norm.
Sources:
DisTrO: https://github.com/NousResearch/DisTrO
DeMo: https://arxiv.org/pdf/2411.19870
Exo: https://github.com/exo-explore/exo
by m3kw9 on 1/14/25, 4:57 AM
by whazor on 1/14/25, 9:55 AM
However, in my naïveté, I wonder whether vastly simpler algorithms could end up with similar results. Regular compression techniques run at speeds of up to 700 MB/s.
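As a back-of-the-envelope sketch of that idea, one could quantize a weight-update tensor and push it through an off-the-shelf compressor such as zlib (the tensor size and 8-bit scheme below are arbitrary choices, and a synthetic random tensor will compress differently from real gradients):

```python
import zlib
import numpy as np

# Quantize a fake weight-update tensor to 8 bits, then apply generic lossless
# compression, and compare the sizes that would go over the wire.
rng = np.random.default_rng(2)
update = rng.normal(scale=1e-3, size=1_000_000).astype(np.float32)

scale = np.abs(update).max() / 127.0
quantized = np.round(update / scale).astype(np.int8)      # lossy 8-bit step
compressed = zlib.compress(quantized.tobytes(), level=6)  # lossless step

print(f"raw fp32:  {update.nbytes / 1e6:.1f} MB")
print(f"int8:      {quantized.nbytes / 1e6:.1f} MB")
print(f"int8+zlib: {len(compressed) / 1e6:.1f} MB")
```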
by neom on 1/14/25, 3:05 PM
(edit: I may also not be accounting enough for pairing a pre-trained general model with a fine-tuned specialized model?)
by FrustratedMonky on 1/14/25, 1:20 PM
There was a distributed protein-folding project a couple of decades ago.
I remember there were even protein-folding apps that could run on game consoles while they weren't being used for games.
But maybe protein-folding code is more parallelizable across machines than AI models are.