by AJRF on 1/27/25, 4:32 PM with 1 comments
I've recently started running local LLMs, and one problem I encountered was inconsistent information on whether a given model will fit within a certain amount of VRAM.
I created a simple calculator that helps determine whether a model can run on your hardware. It will also tell you how much VRAM you need at different quantization levels.
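For anyone curious about the underlying idea, here's a rough back-of-the-envelope sketch of how VRAM can be estimated from parameter count and quantization level. This is my own illustrative approximation (weights only, plus a flat overhead factor for KV cache and activations), not necessarily the exact formula the calculator uses, and the bits-per-weight values are approximate.

  # Rough VRAM estimate for loading an LLM at a given quantization level.
  # Approximate bits per weight for common quant formats (illustrative values).
  QUANT_BITS = {"fp16": 16, "q8_0": 8.5, "q5_k_m": 5.5, "q4_k_m": 4.8}

  def estimate_vram_gb(params_billions: float, quant: str, overhead: float = 1.2) -> float:
      """Approximate VRAM (GiB) needed: weights plus a fixed overhead factor."""
      bytes_per_param = QUANT_BITS[quant] / 8
      weight_gib = params_billions * 1e9 * bytes_per_param / (1024 ** 3)
      return weight_gib * overhead

  if __name__ == "__main__":
      for quant in QUANT_BITS:
          print(f"7B @ {quant}: ~{estimate_vram_gb(7, quant):.1f} GiB")

Running this for a 7B model gives roughly 16 GiB at fp16 down to about 5 GiB at q4_k_m, which lines up with the usual rule of thumb that 4-bit quants fit a 7B model comfortably on an 8 GB card.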
It doesn’t work with all models yet, but I’m working on building a more stable dataset to pull from.
Feedback is appreciated!