from Hacker News

XLA: The TensorFlow compiler framework

by hurrycane on 1/9/17, 5:55 AM with 16 comments

  • by ktta on 1/9/17, 7:33 AM

    This page has the first mention of Google's TPUs since their initial announcement.

    Anyone know what the status is? When will TPUs be available for use in Google Cloud?

    I'm confused as to why they announced it at a big event like Google I/O, rather than in a paper or even a simple blog post, if they aren't going to give people access to them. There's some hint of it being offered in conjunction with TF and other ML cloud offerings in the blog post[1], and this 'XLA compiler framework' looks like it's related. But I'm still wondering how long people will have to wait.

    [1]:https://cloudplatform.googleblog.com/2016/05/Google-supercha...

  • by Seanny123 on 1/9/17, 9:18 AM

    Neato! I'm surprised they went with a JIT compiler over a full ahead-of-time compiler, but that might just be me not understanding: a) compilers, b) how a JIT compiler would apply to this situation (a sketch of enabling the JIT follows this comment).

    My lab-mate Jan Gosmann recently did something similar for our spiking neural network software, Nengo [1]. Although it isn't deep learning, Nengo also builds a computational graph of operations. He ended up optimising the layout of the operations' data in memory to increase the efficiency of NumPy operations and reduce the time spent in Python (a second sketch after this comment illustrates the idea). He's in the process of writing a paper about it.

    [1] https://github.com/nengo/nengo/pull/1035
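
    To make the JIT question above concrete: in the TF 1.x releases where XLA first shipped, the JIT was opt-in via a session config flag, and compilation happens at run time once concrete shapes are known (an ahead-of-time path, tfcompile, also exists for fixed shapes). A minimal sketch, assuming an XLA-enabled TensorFlow 1.x build; the graph and shapes here are made up for illustration:

      import numpy as np
      import tensorflow as tf

      # Opt in to XLA JIT compilation globally (TF 1.x API; XLA was
      # experimental at the time of this thread).
      config = tf.ConfigProto()
      config.graph_options.optimizer_options.global_jit_level = (
          tf.OptimizerOptions.ON_1)

      # matmul + bias + relu are neighbouring ops of the kind XLA can
      # fuse into a single compiled kernel instead of dispatching each
      # one through the executor separately.
      x = tf.placeholder(tf.float32, shape=[None, 256])
      w = tf.Variable(tf.random_normal([256, 128]))
      b = tf.Variable(tf.zeros([128]))
      y = tf.nn.relu(tf.matmul(x, w) + b)

      with tf.Session(config=config) as sess:
          sess.run(tf.global_variables_initializer())
          # The JIT compiles lazily here, when shapes become concrete.
          out = sess.run(y, feed_dict={x: np.ones((32, 256), np.float32)})
          print(out.shape)  # (32, 128)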
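
    On the Nengo-style optimisation: the general idea is that packing the data behind many small NumPy operations into one contiguous block lets a single vectorized call replace thousands of Python-level calls. A rough sketch of the technique in general, not of Jan Gosmann's actual optimizer:

      import time

      import numpy as np

      # Many small, independently allocated arrays: every update pays
      # Python-call and NumPy-dispatch overhead.
      small = [np.random.rand(100) for _ in range(10000)]

      t0 = time.perf_counter()
      for a in small:
          a *= 2.0
      print("separate arrays: %.4fs" % (time.perf_counter() - t0))

      # The same values packed into one contiguous block: views into
      # the block stand in for the original arrays, and one vectorized
      # operation updates all of them at once.
      merged = np.concatenate(small)
      views = [merged[i * 100:(i + 1) * 100] for i in range(10000)]

      t0 = time.perf_counter()
      merged *= 2.0
      print("merged block:    %.4fs" % (time.perf_counter() - t0))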

  • by mafribe on 1/9/17, 2:25 PM

    The compiler does not appear to be open source at this point. Anybody know when that will change? Which team at Google is writing it?

  • by pilooch on 1/9/17, 12:21 PM

    It bears some similarities to Nvidia's TensorRT, which is closed source.