by rashidujang on 2/26/24, 2:08 PM with 1 comments
AI infrastructure moves really quickly and best practices are constantly evolving, so I was wondering: what is HN's opinion on the best way to host custom AI models for inference in Q1 2024?
by PaulHoule on 2/26/24, 3:10 PM
A hard question to answer, because we don't know your use case, what kind of models you're using, whether privacy is a consideration, all of that. I mean, it's one thing to do super-low-power inference for wake words or something like that on the edge, another to do it on a customer's phone or PC, and another to have a huge model that runs on a cluster.
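For the simplest of those cases, a single model self-hosted on one machine, a minimal sketch might look like the following: a Hugging Face pipeline wrapped in a FastAPI endpoint. This is an illustrative example, not a recommendation from the thread; the model name, route, and framework choice are all assumptions.

    # server.py -- minimal sketch of self-hosted inference on one machine.
    # Assumes: pip install fastapi uvicorn transformers torch pydantic
    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import pipeline

    app = FastAPI()

    # Load the model once at startup. This small sentiment model is a
    # placeholder; swap in your own custom model here.
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert/distilbert-base-uncased-finetuned-sst-2-english",
    )

    class PredictRequest(BaseModel):
        text: str

    @app.post("/predict")
    def predict(req: PredictRequest):
        # The pipeline returns a list of {"label": ..., "score": ...} dicts;
        # return the top result for the single input.
        return classifier(req.text)[0]

    # Run with: uvicorn server:app --host 0.0.0.0 --port 8000

Once you're past a single box (batching, GPUs, autoscaling, privacy constraints), the trade-offs the comment above raises start to dominate, and the right answer depends heavily on the use case.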