by hurrycane on 1/9/17, 5:55 AM with 16 comments
by ktta on 1/9/17, 7:33 AM
Anyone know what the status is? When TPUs will be allowed to be used in Google Cloud?
I'm confused as to why they announced it at a big event like Google I/O, rather than in a paper or even a simple blog post, if they aren't going to give people access to them. There's some hint of it being offered in conjunction with TF and other ML cloud offerings in the blog post[1], and this 'XLA compiler framework' looks like it's related. But I'm still wondering how long people will have to wait.
[1]:https://cloudplatform.googleblog.com/2016/05/Google-supercha...
by Seanny123 on 1/9/17, 9:18 AM
My lab-mate Jan Gosmann recently did something similar for our spiking neural network software Nengo [1]. Although it isn't Deep Learning, it also builds a computational graph of operations. He ended up optimising the layout of the operations in memory to increase the efficiency of Numpy operations and reduce the amount of time spent in Python. He's in the process of writing a paper about it.
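The layout idea described above can be sketched in plain NumPy. This is a hypothetical illustration, not Nengo's actual optimiser: the population sizes, decay factors, and function names are all made up. The point is that packing many small state vectors into one contiguous buffer lets a single NumPy call replace a Python loop of per-vector calls, which cuts interpreter overhead.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [50, 80, 120]          # per-population state sizes (made up)
decays = [0.9, 0.8, 0.7]       # per-population decay factors (made up)

# Naive layout: one small array per population, one NumPy call each.
states = [rng.standard_normal(n) for n in sizes]

def step_naive(states, decays):
    # Each multiply is a separate NumPy call with Python-call overhead.
    return [d * s for d, s in zip(decays, states)]

# Packed layout: concatenate all states into one contiguous buffer.
# Each population becomes a view into that buffer, so a single
# vectorized multiply updates every population at once.
buf = np.concatenate(states)
offsets = np.cumsum([0] + sizes)
views = [buf[offsets[i]:offsets[i + 1]] for i in range(len(sizes))]
decay_buf = np.concatenate([np.full(n, d) for n, d in zip(sizes, decays)])

def step_packed(buf, decay_buf):
    buf *= decay_buf  # one NumPy call instead of one per population

step_packed(buf, decay_buf)

# The packed update matches the naive per-population update.
for view, expected in zip(views, step_naive(states, decays)):
    assert np.allclose(view, expected)
```

Because `buf` is a fresh copy made by `np.concatenate`, mutating it in place leaves the original per-population arrays untouched, which is what makes the equivalence check above valid.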
by mafribe on 1/9/17, 2:25 PM
by pilooch on 1/9/17, 12:21 PM