by lulzury on 7/7/24, 5:25 PM with 5 comments
I feel there's a large opportunity here for a more privacy-friendly, on-device solution that doesn't send the user's data to OpenAI.
Is RAM the current main limitation?
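For a rough sense of the RAM question: at 16-bit precision each parameter takes 2 bytes, and 4-bit quantization cuts that to roughly half a byte, so the weights alone set a hard memory floor. A minimal sketch (the model sizes and quantization levels are illustrative assumptions, not measurements):

```python
# Back-of-envelope RAM needed just to hold an LLM's weights.
# Activations and KV cache add more on top of this.

def weights_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory footprint of the weights alone."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    fp16 = weights_memory_gb(params, 2.0)  # 16-bit weights
    q4 = weights_memory_gb(params, 0.5)    # 4-bit quantized weights
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.1f} GB at 4-bit")
```

So a 7B model fits in ~4 GB once quantized, but a 70B model needs ~35 GB even at 4-bit, which is already beyond most consumer machines.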
by throwaway888abc on 7/7/24, 5:30 PM
by throwaway425933 on 7/7/24, 10:31 PM
by FrenchDevRemote on 7/7/24, 7:41 PM
(V)RAM + processing power + storage (I mean, what kind of average user wants to clog half their hard drive for a subpar model that outputs 1 token a second?)
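A rough sketch of why generation ends up that slow: token-by-token decoding is largely memory-bandwidth bound, since each generated token reads roughly the full set of weights once. The bandwidth figures below are illustrative assumptions, not measurements:

```python
# Rough decode-speed ceiling: throughput ~= memory bandwidth / model size,
# because every generated token streams (approximately) all weights.

def max_tokens_per_sec(model_gb: float, bandwidth_gb_s: float) -> float:
    """Upper bound on decode speed from memory bandwidth alone."""
    return bandwidth_gb_s / model_gb

model_gb = 3.5  # e.g. a 7B model quantized to 4-bit
for device, bw_gb_s in [("laptop DDR4", 25), ("Apple M-series unified memory", 100), ("discrete GPU VRAM", 500)]:
    print(f"{device}: ~{max_tokens_per_sec(model_gb, bw_gb_s):.0f} tokens/sec ceiling")
```

The 1 token/sec case falls out of the same arithmetic: once a model doesn't fit in (V)RAM and spills to swap or disk, effective bandwidth drops by orders of magnitude.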
by Crier1002 on 7/8/24, 4:36 AM
IMO the main limitation is access to powerful GPUs for running models locally, and the size of some models, which causes UX problems with cold starts.
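A back-of-envelope sketch of that cold-start cost: at minimum, the weights have to be read from disk into memory once per launch. The model size and disk speeds below are illustrative assumptions:

```python
# Lower bound on cold-start latency: time to read the weights off disk.
# Ignores parsing, allocation, and GPU upload, which add more.

def cold_start_seconds(model_gb: float, disk_read_gb_s: float) -> float:
    """Minimum seconds to stream the model file from disk."""
    return model_gb / disk_read_gb_s

model_gb = 35.0  # e.g. a 70B model quantized to 4-bit
for disk, speed_gb_s in [("SATA SSD", 0.5), ("NVMe SSD", 3.0), ("fast NVMe", 7.0)]:
    print(f"{disk}: ~{cold_start_seconds(model_gb, speed_gb_s):.0f} s just to read weights")
```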