from Hacker News

Supercharging TensorFlow.js with SIMD and multi-threading

by Marat_Dukhan on 10/26/20, 4:13 AM with 19 comments

  • by wffurr on 10/26/20, 2:23 PM

    Unfortunately, this feature is (still) stuck behind an origin trial and requires serving three different WebAssembly binaries to get correct fallback behavior across different browsers.

    Feature detection for WebAssembly[0] is stuck in spec discussions, and SIMD general availability is blocked on either that or its own mechanism for backwards compatibility[1].

    The issue is that a WebAssembly binary that contains instructions unknown to the engine (e.g. SIMD instructions not supported by a particular engine) won't validate, even if the functions aren't used at runtime. The only way to work around this is to compile your binary NxMx... times and detect which feature set is supported before loading a binary. It's a real pain in the tail when trying to support new WebAssembly features.

    e.g. check out this snippet from canvas.apps.chrome which supports WebAssembly threads on Chrome with a non-thread fallback for e.g. mobile / Firefox:

            var X;
            try {
                X = (new WebAssembly.Memory({
                    initial: 1,
                    maximum: 1,
                    shared: !0
                })).buffer instanceof SharedArrayBuffer ? !0 : !1
            } catch (a) {
                X = !1
            }
            var ua = r(X ? ["js/threads/ink.js", "defines_threads.js"] : ["js/nothreads/ink.js", "defines.js"])
              , va = ua.next().value
              , wa = ua.next().value;
    
    [0]: https://github.com/WebAssembly/conditional-sections
    [1]: https://github.com/WebAssembly/simd/issues/356
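    A sketch of the same pattern for SIMD detection (not from canvas.apps.chrome; this is the approach popularized by the wasm-feature-detect library, and the fallback URLs are made up for illustration). Since a module containing unknown instructions fails validation, you can probe support by validating a tiny module that uses one SIMD instruction:

```javascript
// Minimal WebAssembly module whose body contains SIMD instructions.
// Engines without SIMD support will fail to validate it, so
// WebAssembly.validate doubles as a feature check.
const simdTestModule = new Uint8Array([
  0, 97, 115, 109, 1, 0, 0, 0,                   // wasm magic + version
  1, 5, 1, 96, 0, 1, 123,                        // type section: () -> v128
  3, 2, 1, 0,                                    // function section
  10, 10, 1, 8, 0, 65, 0, 253, 15, 253, 98, 11,  // code: i32.const; i8x16.splat; i8x16.popcnt
]);

const hasSimd = WebAssembly.validate(simdTestModule);

// Pick a binary the same way the threads check above does
// (hypothetical paths, one build per feature combination):
const wasmUrl = hasSimd ? "js/simd/model.wasm" : "js/scalar/model.wasm";
```

    Combined with the SharedArrayBuffer check above, that's already 2x2 = 4 binaries to build and serve, which is the NxMx... blowup the comment describes.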
  • by etaioinshrdlu on 10/26/20, 5:33 AM

    If I read this right, this is much faster than the WebGL backend on the devices tested.

    If the CPU really is faster than the GPU here, that demonstrates just how inefficient the WebGL backend is compared to something like CUDA.

  • by drej on 10/26/20, 7:20 AM

    As for traditional TensorFlow, the easiest way we found to improve performance (easily 2x) was to find/create builds tailored to our machines. With Python, the prebuilt wheels (understandably) target a low common denominator of CPU features. If you find/build your own (e.g. if you have AVX-512), you can easily get pretty decent performance gains.

    (Yes, there are unofficial wheels for various CPUs, but I'm not sure whether those pass your security requirements.)
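    A rough sketch of what a tailored build looks like, following TensorFlow's documented build-from-source flow (exact flags and paths vary by version and platform):

```shell
# Build a TensorFlow wheel tuned to the host CPU.
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
./configure   # interactive: CUDA, optimization flags, etc.

# -march=native lets the compiler use every instruction set the build
# machine supports (AVX2, AVX-512, FMA, ...), unlike the generic wheels.
bazel build --config=opt --copt=-march=native \
    //tensorflow/tools/pip_package:build_pip_package

# Package and install the resulting wheel.
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tf_pkg
pip install /tmp/tf_pkg/tensorflow-*.whl
```

    Note that a wheel built with -march=native will crash with illegal-instruction errors on CPUs older than the build machine, so this only makes sense for a known fleet.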

  • by tpetry on 10/26/20, 6:07 AM

    Looks a lot like https://github.com/microsoft/onnxjs, but onnx.js adds multithreading via web workers, since multithreading will take a long time to become available in wasm.
  • by dzhiurgis on 10/26/20, 10:50 AM

    28ms on 2018 iPhone without threads or SIMD, 24ms on Chrome MBP 2019 with threads and no SIMD, 11ms with SIMD.
  • by ajtulloch on 10/26/20, 1:00 PM

    Awesome work Marat.
  • by The_rationalist on 10/26/20, 12:34 PM

    Couldn't TensorFlow leverage WebGL / WebGPU? Also, it's really sad that there's no WebCL adoption yet.