by makaimc on 7/24/24, 2:10 PM with 2 comments
by sixhobbits on 7/24/24, 5:09 PM
I'm not sure if the code samples actually work, but they look super generic, and e.g. it talks about using "accuracy" to evaluate and a 10% test split in a way that doesn't make sense to me.
An LLM is never going to generate exactly the same answer as your gold-standard answer, so evaluating your model is a challenge in its own right that would have been great to address here, but it was skipped over in favour of an ad.
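To make that concrete, here is a rough sketch of the problem (the example data and the stdlib difflib overlap score are placeholders of my own, not anything from the post): exact-match "accuracy" scores correct-but-reworded generations as zero.

    # Illustration only: exact string match vs. a soft overlap score
    # for free-form LLM output. The example data is made up.
    from difflib import SequenceMatcher

    gold = [
        "The capital of France is Paris.",
        "Water boils at 100 degrees Celsius.",
    ]
    generated = [
        "Paris is the capital of France.",
        "At sea level, water boils at 100 C.",
    ]

    def overlap(a: str, b: str) -> float:
        # Character-level similarity in [0, 1]; crude, but enough to
        # show the gap between "wrong" and "not verbatim".
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    exact = sum(g == p for g, p in zip(gold, generated)) / len(gold)
    soft = sum(overlap(g, p) for g, p in zip(gold, generated)) / len(gold)

    print(f"exact-match accuracy: {exact:.2f}")  # 0.00, even though both answers are right
    print(f"mean overlap score:   {soft:.2f}")   # noticeably higher

In practice you would reach for task-specific metrics, embedding similarity, or an LLM-as-judge setup, which is exactly the discussion the post skips.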
Also, a lot of the stuff under "why fine tune" seems off. You can do most of that with an LLM directly, without any fine-tuning.
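For example, something like the following few-shot prompt covers a typical classification use case with no training run at all (the model name, client call, and ticket text are my own illustrative assumptions, not from the article):

    # Illustration only: in-context examples instead of fine-tuning.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # A couple of worked examples in the chat history stand in for
    # the labelled training data a fine-tune would need.
    messages = [
        {"role": "system", "content": "Classify each support ticket as billing, bug, or other. Reply with the label only."},
        {"role": "user", "content": "I was charged twice this month."},
        {"role": "assistant", "content": "billing"},
        {"role": "user", "content": "The export button crashes the app."},
        {"role": "assistant", "content": "bug"},
        {"role": "user", "content": "Can I change my invoice address?"},
    ]

    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(response.choices[0].message.content)  # expected: billing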
Overall, this post looks a lot like the in-depth, long-form content I usually love seeing on HN, but I am suspicious that it is actually vapourware that follows the form of a technical guide without actually being one (e.g. written by someone nontechnical, or partially auto-generated).
by zwaps on 7/25/24, 4:14 AM
Otherwise, this is a basic and incomplete Hugging Face tutorial.
I am sorry, but not a good showing.