by gr2020 on 6/19/19, 9:47 PM with 71 comments
by Who_me on 6/20/19, 3:29 AM
One of the biggest questions is why the Quadro RTX 6000? Few things:
1. Cost: it has the same performance as the RTX 8000. The difference is the extra RAM (48 GB vs. 24 GB), which comes at a steep premium. Cost is important to us because it lets us offer a more affordable price point.
2. We have all heard of or used the Tesla V100, and it's a great card. The biggest issue is that it's expensive. One of the things that caught our eye is that the RTX 6000 has strong single-precision, Tensor, and INT8 performance, and the Quadro RTX line also supports INT4. https://www.nvidia.com/content/dam/en-zz/Solutions/design-vi... https://images.nvidia.com/content/technologies/volta/pdf/tes... Yes, these are the manufacturer's numbers, but they gave us pause. As always, your mileage may vary.
3. RT cores. This is the first time (TMK) that a cloud provider is bringing RT cores into the market. There are many use cases for RT that have yet to be explored. What will we come up with as a community?!
Now, with all that said, there is a downside: FP64, a.k.a. double precision. The Tesla V100 handles it very well, whereas the Quadro RTX 6000 does poorly in comparison. Those workloads are important, but the goal was to find a solution that fits the vast majority of use cases.
So is the marketing true? Do you need a Tesla to get the best ML/AI/etc. performance, or is the Tesla starting to show its age? Give the cards a try; I think you'll find these new RTX Quadros on the Turing architecture are not the same as the Quadros of the past.
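If you want to sanity-check the marketing numbers on whatever card you end up on, a rough sketch like this works (assuming PyTorch with CUDA is installed; the matrix size and iteration count are arbitrary, and real training workloads will differ):

    # Rough matmul throughput check: FP32 vs FP16 (tensor cores on Volta/Turing).
    import time
    import torch

    def bench(dtype, n=8192, iters=20):
        a = torch.randn(n, n, device="cuda", dtype=dtype)
        b = torch.randn(n, n, device="cuda", dtype=dtype)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(iters):
            a @ b
        torch.cuda.synchronize()
        elapsed = time.time() - start
        # 2 * n^3 floating-point ops per matmul
        return 2 * n**3 * iters / elapsed / 1e12

    print(f"FP32: {bench(torch.float32):.1f} TFLOPS")
    print(f"FP16: {bench(torch.float16):.1f} TFLOPS")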
by picozeta on 6/20/19, 1:10 AM
A GTX 1080 for $100 a month. Granted, it's older, but it still works for DL. Let's say you do 10 experiments a month at ~20 hours each. That's $0.50/hour, and I don't think this is 3 times faster.
If you then do even more training, the effective price drops further.
//DISCLAIMER: I do not work for them, but I used it for DL in the past and it was definitely cheaper than GCP or AWS. If you have to run lots of experiments (more than a year's worth), go with your own hardware, but do not underestimate the convenience of >100 MB/s when you download many big training sets.
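For anyone doing the flat-rate vs. hourly math themselves, here's a back-of-the-envelope sketch (the prices are placeholders, not any provider's actual rates):

    # Break-even between a flat-rate GPU box and hourly cloud billing.
    FLAT_RATE_PER_MONTH = 100.0   # placeholder: e.g. a dedicated GTX 1080 server
    CLOUD_RATE_PER_HOUR = 1.50    # placeholder: e.g. an hourly GPU instance

    break_even_hours = FLAT_RATE_PER_MONTH / CLOUD_RATE_PER_HOUR
    print(f"Flat rate wins past ~{break_even_hours:.0f} GPU-hours/month")

    for hours in (20, 100, 200, 400):
        flat = FLAT_RATE_PER_MONTH / hours
        print(f"{hours:>4} h/month: flat ${flat:.2f}/h vs cloud ${CLOUD_RATE_PER_HOUR:.2f}/h")

The usual caveat: the flat-rate box only wins if you actually keep it busy, and convenience (fast downloads, no hardware maintenance) doesn't show up in the arithmetic.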
by ilaksh on 6/20/19, 12:12 AM
One thing I noticed when recently trying to get a GPU cloud instance: the high core counts are usually locked until you put in a quota increase request. Then sometimes they want to call you.
So I wonder if Linode will have to do that or if they can figure out another way to handle it that would be more convenient.
I also wonder if Linode could somehow get Windows onto these? I know they generally don't run anything other than Linux, though. My graphics project, where I am trying to run several hundred ZX Spectrum libretro cores on one screen, only runs on Windows.
by keytarsolo on 6/19/19, 11:47 PM
Linode skews more towards smaller-scale customers with many of their offerings, so I think the GPUs here make sense. The real test will be how often they upgrade them and what they upgrade them to.
by thenightcrawler on 6/19/19, 11:50 PM
edit: could be wrong, but I thought I read that AWS was $0.65 an hour for deep learning GPU use. edit2: Did a quick look; the $0.65 doesn't include the actual instance, so it's around $1.80 an hour on the low end. I think this is cheaper.