from Hacker News
Load a large language model in 4bit and train it using Google Colab and PEFT
by ritabratamaiti on 6/17/23, 3:18 AM with 1 comment
by ritabratamaiti on 6/17/23, 3:19 AM
From the Huggingface blog post: Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA (https://huggingface.co/blog/4bit-transformers-bitsandbytes)
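
The linked post walks through this workflow with the transformers, bitsandbytes, and peft libraries. As a rough illustration only, here is a minimal sketch of loading a causal language model in 4-bit and attaching LoRA adapters for training; the model name and LoRA hyperparameters are illustrative assumptions, not values taken from the post.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    model_id = "facebook/opt-350m"  # assumed small model that fits a free Colab GPU

    # 4-bit quantization settings: NF4 with double quantization.
    # Use torch.float16 as the compute dtype on GPUs without bfloat16 support (e.g. a Colab T4).
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=bnb_config,
        device_map="auto",  # place layers on the available GPU automatically
    )

    # Prepare the quantized model for k-bit training, then wrap it with LoRA
    # adapters so only a small set of added parameters is trained.
    model = prepare_model_for_kbit_training(model)
    lora_config = LoraConfig(
        r=8,
        lora_alpha=32,
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # only the LoRA weights require gradients

From here the wrapped model can be passed to a standard Trainer loop; the base weights stay frozen in 4-bit while the LoRA adapters are updated, which is what makes training on a single Colab GPU feasible.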