from Hacker News

Andrew Ng: Unbiggen AI

by leptoniscool on 2/9/22, 8:40 PM with 8 comments

  • by yeldarb on 2/10/22, 2:45 AM

    Totally. With computer vision especially, you often want to deploy the model on an edge device with limited bandwidth, power, and footprint (think remote camera monitoring, drones, robots, and autonomous vehicles).

    Requiring a huge desktop or server-grade graphics card (let alone a box full of them) just to fit the model into memory misses the mark.

    We’ve done a lot of work getting models to be performant on the Luxonis OAK (OpenCV AI Kit) and NVIDIA Jetson devices.
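
    For illustration, one common path (the specific model and file names below are just placeholders, not anyone's actual pipeline): export a small backbone to ONNX, then hand it to a device-specific compiler such as TensorRT on the Jetson or the blob converter for the OAK's Myriad X. Roughly:

      import torch
      import torchvision

      # A small backbone is a more realistic fit for edge hardware than a
      # server-scale model; mobilenet_v3_small here is just a stand-in.
      model = torchvision.models.mobilenet_v3_small()
      model.eval()

      # ONNX is a common intermediate format before device-specific
      # compilation (e.g. TensorRT on Jetson, the OAK blob converter).
      dummy_input = torch.randn(1, 3, 224, 224)
      torch.onnx.export(
          model,
          dummy_input,
          "mobilenet_v3_small.onnx",
          opset_version=13,
          input_names=["image"],
          output_names=["logits"],
      )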

  • by 22c on 2/10/22, 12:20 AM

    I've had several discussions with friends about how I think AI should be more modular. You could have a general model which, e.g., classifies an object as a "fruit", then passes it off to a separate, more specialised model that classifies that fruit as a "banana".

    This way you can improve your fruit classifier without needing to make changes to the general classifier. I think it also opens up possibilities like having a general "offline" model on a smartphone that hands off to more specialised models when connected to the internet.

    It would also be cool if you could download offline models for things you're particularly interested in, like bird species, etc.

    I think one of the problems with a really large AI that attempts to classify everything is a sort of "tunnel vision": eventually the AI has to guess what something is instead of saying "the best I can do is that this is an animal, but let me go ask a buddy of mine who's an expert on animals".
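
    A tiny sketch of that handoff, with placeholder callables standing in for real models (the registry-of-specialists wiring is just one way it could look):

      from typing import Callable, Dict, Tuple

      Label = Tuple[str, float]  # (label, confidence)

      def classify(image, coarse_model: Callable[..., Label],
                   specialists: Dict[str, Callable[..., Label]]) -> Label:
          # Coarse pass: e.g. ("fruit", 0.92).
          coarse_label, confidence = coarse_model(image)
          specialist = specialists.get(coarse_label)
          if specialist is None:
              # No expert installed for this category: the "best I can do
              # is an animal" answer.
              return coarse_label, confidence
          # Fine pass by the downloaded expert: e.g. ("banana", 0.88).
          return specialist(image)

      # Experts can be added, swapped, or downloaded on demand without
      # retraining the coarse classifier.
      specialists = {
          "fruit": lambda img: ("banana", 0.88),  # placeholder models
          "bird": lambda img: ("wren", 0.75),
      }
      coarse = lambda img: ("fruit", 0.92)        # placeholder coarse model
      print(classify(None, coarse, specialists))  # -> ('banana', 0.88)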

  • by random314 on 2/10/22, 6:49 AM

    I am not sure why Ng seems to suggest he pioneered the use of GPUs for ML. It was pioneered by the Toronto lab led by Hinton, afaik.