from Hacker News

Addition is all you need for energy-efficient language models

by InvisibleUp on 10/9/24, 4:47 AM with 126 comments

  • by shrubble on 10/9/24, 1:39 PM

    I remember that many years ago, when floating point computation was expensive for Intel CPUs to do, there were multiple ways that programmers used integer trickery to work around this.

    Chuck Moore of Forth fame demonstrated taking values such as 1.6 multiplied by 4.1, doing all the intermediate calculations with integers (16 * 41), and then formatting the output by putting the decimal point back in the "right place". This worked as long as the values, once scaled by 10, still fit in a 16-bit integer (stayed below 65536), for instance. For embedded chips where, for instance, you have a 10-bit analog reading that needs to be computed on quickly multiple times per second, this worked well.
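
    A minimal Python sketch of that scaled-integer trick (the scale factor of 10 and the 16-bit limit are simply the values from the example above, not Moore's actual Forth code):

      SCALE = 10  # one decimal digit of precision

      def to_fixed(x):
          return round(x * SCALE)        # 1.6 -> 16, 4.1 -> 41

      def fixed_mul(a, b):
          product = a * b                # 16 * 41 = 656 (scale is now 100)
          assert product < 65536         # must still fit in a 16-bit unsigned integer
          return product // SCALE        # rescale back to tenths: 656 -> 65

      def to_float(a):
          return a / SCALE

      print(to_float(fixed_mul(to_fixed(1.6), to_fixed(4.1))))  # 6.5 (exact: 6.56)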

    I also recall talking many years ago with a Microsoft engineer who had worked on the Microsoft Streets and Trips program (https://archive.org/details/3135521376_qq_CD1 for a screenshot). They too had managed to fit what would normally be floating point numbers, and the calculations on them, into some kind of packed integer format with only the precision that was actually needed; it was faster on the CPUs of the day and also compressed more easily to fit on the CD-ROM.

  • by visarga on 10/9/24, 6:04 AM

    > can potentially reduce 95% energy cost by elementwise floating point tensor multiplications and 80% energy cost of dot products

    If this were about convolutional nets, then optimizing compute would be a much bigger deal. Transformers are lightweight on compute and heavy on memory. The weakest link in the chain is fetching the model weights into the cores. The 95% and 80% energy reductions cited are for the multiplication operations in isolation, not for the entire inference process.

  • by tantalor on 10/9/24, 1:22 PM

    [2023] GradIEEEnt half decent: The hidden power of imprecise lines

    http://tom7.org/grad/murphy2023grad.pdf

    Also in video form: https://www.youtube.com/watch?v=Ae9EKCyI1xU

  • by js8 on 10/9/24, 8:24 AM

    Haven't read it, but isn't this just logarithmic tables in some form?

    I'm not asking in order to dismiss it; I genuinely feel I don't understand logarithms on a fundamental level (of logic gates etc.). If multiplication can be replaced with table lookup and addition, then there has to be a circuit that gives you difficult addition and easy multiplication, or any combination of those tradeoffs.
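
    For what it's worth, here is a minimal Python sketch of the log-table idea, assuming two lookup tables (log2 and 2^x) at an arbitrary quantization step; it only illustrates the "multiply = add in the log domain" identity, not the paper's actual L-Mul circuit:

      import math

      STEP = 1.0 / 1024  # quantization step; an arbitrary illustrative choice

      def log_lookup(x):
          # stand-in for a small ROM holding log2 of quantized positive inputs
          return math.log2(round(x / STEP) * STEP)

      def exp_lookup(e):
          # stand-in for a second ROM holding 2**e for quantized exponents
          return 2.0 ** (round(e / STEP) * STEP)

      def mul_via_add(a, b):
          # the multiplication itself is now a single addition
          return exp_lookup(log_lookup(a) + log_lookup(b))

      print(mul_via_add(1.6, 4.1), 1.6 * 4.1)  # close to the exact 6.56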

  • by cpldcpu on 10/9/24, 6:15 AM

    It puzzles me that there does not seem to be a proper derivation and discussion of the error term in the paper. It's all treated indirectly, by way of inference results.
  • by pjc50 on 10/9/24, 10:02 AM

    "We recommend training and hosting L-Mul-based models on devices integrated with specialized architectural designs. Patent pending"

    (from a footnote in the method section)

  • by CGamesPlay on 10/9/24, 6:18 AM

    I believe this reduces the compute required, but it still uses 8 bits per value, so it does not reduce the memory needed to run inference and doesn't particularly make the models more accessible for inference. Is this storage method suitable for training? That could potentially be an interesting application.
  • by ein0p on 10/9/24, 6:37 PM

    More than 10x the amount of energy is spent moving bytes around. Compute efficiency is not as big of an issue as people think. It’s just that the compute is in the wrong place now - it needs to be right next to memory cells, bypassing the memory bus, at least in the initial aggregations that go into dot products.
  • by presspot on 10/9/24, 8:14 PM

    From my experience, the absolute magicians in fixed point math were the 8-bit and 16-bit video game designers. I was in awe of the optimizations they did. They made it possible to calculate 3D matrix maths in real time, for example, in order to make the first flight simulators and first person shooter games.
  • by Buttons840 on 10/9/24, 10:49 AM

    Would a neural network using this integer-addition approach be faster? The paper does not claim it would be, so I'm assuming not?

    What about over time? If this L-Mul operation (the matrix operation based on integer addition) proved to be much more energy efficient and became popular, would new hardware be created that was faster?

  • by cpldcpu on 10/9/24, 10:19 AM

    Bill Dally from Nvidia introduced a log representation that basically allows replacing a multiplication with an add, without loss of accuracy (in contrast to the proposal above).

    https://youtu.be/gofI47kfD28?t=2248

  • by scotty79 on 10/9/24, 8:21 AM

    All You Need is Considered Harmful.
  • by concrete_head on 10/9/24, 9:59 AM

    Just to add an alternative addition-based architecture into the mix.

    https://www.youtube.com/watch?v=VqXwmVpCyL0

  • by dwrodri on 10/12/24, 12:21 AM

    7 years of the same title format is all you need.
  • by md_rumpf on 10/9/24, 5:53 AM

    The return of the CPU?!
  • by A4ET8a8uTh0 on 10/9/24, 1:37 PM

    Uhh.. I hate to be the one to ask this question, but shouldn't we focus on making LLMs work well first and only then on the desired optimizations? To use everyone's car analogy, it's like making sure the earliest cars used less coal. It's a fool's errand.
  • by m3kw9 on 10/9/24, 9:58 PM

    So instead of say 2x3 you go 2+2+2?
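
    Not quite: the idea, as I understand it, is closer to adding the floats' exponent and mantissa bit fields than to repeating additions. A rough Python sketch of that flavor of trick, using the classic Mitchell-style bit-pattern addition rather than the paper's exact L-Mul formula:

      import struct

      BIAS = 0x3F800000  # bit pattern of 1.0f; subtracting it corrects the doubled exponent bias

      def f2i(x):
          return struct.unpack("<I", struct.pack("<f", x))[0]

      def i2f(b):
          return struct.unpack("<f", struct.pack("<I", b & 0xFFFFFFFF))[0]

      def approx_mul(a, b):
          # one integer addition (plus a constant) instead of a float multiply;
          # positive inputs only, for simplicity
          return i2f(f2i(a) + f2i(b) - BIAS)

      print(approx_mul(2.0, 3.0), 2.0 * 3.0)  # 6.0 exactly (both mantissas are zero)
      print(approx_mul(1.6, 4.1), 1.6 * 4.1)  # about 6.5 vs the exact 6.56
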
  • by ranguna on 10/9/24, 9:45 AM

    I've seen this claim a few times over the last couple of years, and I have a pet theory about why it isn't explored much:

    Nvidia funds most research around LLMs, and they also fund other companies that fund further research. If transformers were to use addition and remove all usage of floating point multiplication, there's a good chance the GPU would no longer be needed, or at the least, cheaper ones would be good enough. If that were to happen, no one would need Nvidia anymore and their trillion dollar empire would start to crumble.

    University labs get free GPUs from Nvidia -> university labs don't want to do research that would make said GPUs obsolete, because Nvidia won't like that.

    If this were true, it would mean that we are stuck on an inefficient research path due to corporate greed. Imagine if this really were the next big thing, and we just don't explore it further because the ruling corporation doesn't want to lose its market cap.

    Hopefully I'm wrong.