by wrs on 2/14/18, 8:11 PM
>produce the high-performance codes that the machine learning community needs
Somewhat OT, but I've been wondering for a long time… Is the HPC community the only place the word "codes" is used like this? In usual CS parlance programming is done using a substance called "code" ("the high-performance code the community needs"), but in HPC literature the word "codes" is used, as if programming consisted of distinct objects. Does this arise from some divergent history (would I have called my LINPACK library punched card deck a "code"?) or what?
by phaedrus on 2/14/18, 11:40 PM
This web page is also the first I've heard of Halide and Polyhedral Compilation. This is exciting to me because I've been working on relational (database) data and logic comprehensions, and in a case of convergent evolution Halide looks a lot like my notation and Polyhedral Compilation looks much like diagrams I've been drawing on my whiteboard. Where can I learn more about this?
by falcor84 on 2/14/18, 8:26 PM
Could someone please explain how this compares to the TensorFlow approach? I can only assume that it's omitted from the article due to marketing reasons.
by cgmg on 2/15/18, 5:36 AM
by tehsauce on 2/14/18, 9:08 PM
I'm a fan of evolutionary algorithms, but are they really effective enough here to be comparable to an engineer tuning code? They might be able to find a good configuration of a few canned options but real optimization often requires some creativity or at least an understanding of the hardware. Will certainly be interesting to see this in practice!
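For a sense of what "finding a good configuration of a few canned options" looks like, here is a minimal evolutionary-search sketch in Python. The search space, cost function, and all names are hypothetical stand-ins (a real autotuner would benchmark actual generated kernels instead of evaluating a synthetic cost):

```python
import random

random.seed(0)  # for reproducibility of this sketch

# Hypothetical discrete tuning space: tile sizes and unroll factors.
SPACE = {"tile": [8, 16, 32, 64], "unroll": [1, 2, 4, 8]}

def cost(cfg):
    # Stand-in for a real benchmark run; a synthetic cost with its
    # minimum at tile=32, unroll=4.
    return abs(cfg["tile"] - 32) + 4 * abs(cfg["unroll"] - 4)

def mutate(cfg):
    # Re-roll one randomly chosen parameter.
    key = random.choice(list(SPACE))
    new = dict(cfg)
    new[key] = random.choice(SPACE[key])
    return new

def evolve(generations=50, pop_size=8):
    # Random initial population, then select-and-mutate with elitism:
    # the best half survives each generation unchanged.
    pop = [{k: random.choice(v) for k, v in SPACE.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in survivors]
    return min(pop, key=cost)

best = evolve()
```

The point of the comment stands, though: this kind of search only ever picks among options someone already parameterized; it won't invent a new algorithm.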
by jabl on 2/14/18, 9:16 PM
Slightly resembles the Tensor Contraction Engine for quantum chemistry/physics ( http://www.csc.lsu.edu/%7Egb/TCE/ ), although it predates this by a couple of decades.
by alexbeloi on 2/14/18, 9:49 PM
The most surprising thing to me is that they can parameterize a nontrivial section of the implementation space of a function, or that such a section exists that hasn't been optimized away by the compiler.
by amelius on 2/14/18, 11:22 PM
I think tensors are an overly crude way to model things. It's like computer science went back to the 60s and replaced all data structures by homogeneous blocks of memory.
Edit: of course a computer works best with blocks of memory; that doesn't mean a human developer should have the same view. As a simple example, think of the output vector of a classifier. Why is it a vector, and not a structure? Or think of the internals of an LSTM network; there is more structure in there than just tensors.
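The classifier example can be made concrete with a short Python sketch (the class and field names are illustrative, not from any real framework): the same three probabilities carried as an anonymous vector versus as a named structure.

```python
from typing import NamedTuple

# A classifier's raw output: a flat vector of class probabilities.
# Which index means which class lives only in the programmer's head.
probs = [0.7, 0.2, 0.1]

# The same information as an explicit structure (hypothetical names).
class ClassProbs(NamedTuple):
    cat: float
    dog: float
    other: float

structured = ClassProbs(*probs)
# structured.cat carries its meaning with the value instead of
# relying on a remembered index convention.
```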
by charlescearl on 2/15/18, 4:01 PM
by grondilu on 2/15/18, 12:30 AM
From the documentation on arxiv:
> Variables not defined anywhere, implicitly become index variables.
That seems like a bold choice. Wasn't there a trend in programming languages, even very high level ones, to encourage variable declaration?
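For context, a Tensor Comprehensions definition looks roughly like this (matrix multiplication, adapted from the arXiv paper; exact syntax may differ slightly):

```
def matmul(float(M,K) A, float(K,N) B) -> (C) {
    C(m,n) +=! A(m,k) * B(k,n)
}
```

None of m, n, k is declared anywhere: their ranges are inferred from the tensor sizes, and k, which appears only on the right-hand side, implicitly becomes a reduction index summed over by `+=!`.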
by pathsjs on 2/15/18, 4:53 PM
I could not find how to integrate this into a C++ program. Does it need ATen or is there a lower level of integration? Is there even the possibility of getting C bindings?
by tomrod on 2/14/18, 11:53 PM
Neat!
This needs Python bindings, stat!