from Hacker News

Speed Is All You Need: On-Device Acceleration of Large Diffusion Models

by Pelayu on 4/30/23, 9:22 PM with 8 comments

  • by nl on 4/30/23, 11:10 PM

    Interestingly these are OpenCL kernels so in theory some of the optimizations might run out-of-the-box on CPUs.

    It would be instructive to compare their speedups on the iPhone to the Apple CoreML implementation: https://github.com/apple/ml-stable-diffusion

  • by DennisAleynikov on 4/30/23, 10:34 PM

    This is incredible, can't wait to run it. Is there a code sample somewhere to reproduce their Samsung S23 results?

  • by sigmoid10 on 4/30/23, 10:15 PM

    This is definitely a welcome development, but I'm getting so tired of all these papers trying to pay homage to the original Transformer paper in their title. It isn't funny anymore, it gives no due credit, and it says nothing about quality. On top of that, the original paper's title was a pretty poor choice in hindsight, highlighting how the original authors didn't foresee the gigantic impact of their paper.