from Hacker News

Tell HN: Don't just call fine-tuning “training”

by sbussard on 3/28/23, 4:01 PM with 1 comments

It's easy to misunderstand claims of running LLMs locally, as if anyone could write the next ChatGPT on their laptop.

Even though fine-tuning is a type of training, it is not the hard part; the hard part is the compute-intensive pretraining that produces the base model in the first place. So one fix is to communicate more clearly and always call fine-tuning fine-tuning. There are a lot of new people wanting to get into the field, and having clarity in your claims will help us out.
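
The gap shows up directly in code. A minimal sketch, assuming the Hugging Face transformers library (gpt2 is just an illustrative checkpoint):

    from transformers import AutoConfig, AutoModelForCausalLM

    # Fine-tuning: start from weights someone else already spent the
    # enormous pretraining compute on, then adjust them on a small dataset.
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Training from scratch: same architecture, but randomly initialized
    # weights. Getting from here to GPT-2 quality requires the full
    # pretraining corpus and compute budget -- that is the hard part.
    config = AutoConfig.from_pretrained("gpt2")
    scratch_model = AutoModelForCausalLM.from_config(config)

Both paths end in a training loop, which is why the terms blur, but only the second one starts from zero.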

Thanks

  • by coxomb on 3/28/23, 7:55 PM

    Well, if the LLM is closed and proprietary, there is no insight into how the training data is even used. It's just a black box we have to use blindly and 'hope' the designers are combining fine-tuning with better training data.