from Hacker News

QLoRA 4-bit finetuning of LLMs

by kashifr on 5/24/23, 4:59 PM with 1 comment

  • by kashifr on 5/24/23, 5:01 PM

    An efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance!
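
    For the curious, the recipe boils down to loading the frozen base model in 4-bit and training small 16-bit LoRA adapters on top of it. A minimal sketch with the Hugging Face transformers + peft + bitsandbytes stack is below; the checkpoint name and LoRA hyperparameters are illustrative assumptions, not taken from the post or paper.

      # Sketch of a QLoRA-style setup (assumed checkpoint and hyperparameters).
      import torch
      from transformers import AutoModelForCausalLM, BitsAndBytesConfig
      from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

      # Quantize the frozen base weights to 4-bit NF4 with double quantization,
      # doing matmuls in bfloat16.
      bnb_config = BitsAndBytesConfig(
          load_in_4bit=True,
          bnb_4bit_quant_type="nf4",
          bnb_4bit_use_double_quant=True,
          bnb_4bit_compute_dtype=torch.bfloat16,
      )

      model = AutoModelForCausalLM.from_pretrained(
          "huggyllama/llama-65b",          # assumed model id for illustration
          quantization_config=bnb_config,
          device_map="auto",
      )
      model = prepare_model_for_kbit_training(model)

      # Only the small LoRA adapter matrices are trained; gradients flow
      # back through the frozen 4-bit base model.
      lora_config = LoraConfig(
          r=64,                            # assumed rank
          lora_alpha=16,
          lora_dropout=0.05,
          target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
          task_type="CAUSAL_LM",
      )
      model = get_peft_model(model, lora_config)
      model.print_trainable_parameters()   # a tiny fraction of 65B is trainable

    The memory win comes from the base weights sitting in 4-bit while only the adapter parameters (a fraction of a percent of the total) carry optimizer state.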