from Hacker News

DPO fine-tuned Mistral 7B beats Llama 70B on MT Bench

by clmnt on 10/10/23, 7:26 PM with 3 comments

  • by amilios on 10/10/23, 7:59 PM

    Can anyone corroborate this anecdotally? I.e. has anyone actually looked at the output of the two models side-by-side on common tasks? There's a lot of talk these days about academic benchmarks being pretty "broken" for modern LMs and not properly showcasing the differences between models. I wonder if that's the case here, or if the model is genuinely better.
  • by brucethemoose2 on 10/11/23, 8:05 PM

    > <|system|>, <|user|> and <|model|>.

    Oh hey, that's almost Metharme's format.

    It must originate from an older model, as most new models don't use that syntax.
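
    For context, a minimal Python sketch of how a chat template built on these role tokens might concatenate turns. The newline separators and the helper name are assumptions for illustration, not the model's confirmed format; check the model card for the exact template.

      def build_prompt(system, turns):
          # Concatenate role-tagged segments into one prompt string.
          # `turns` is a list of (user_message, model_reply) pairs;
          # leave the final reply empty to elicit the next completion.
          parts = ["<|system|>\n" + system]
          for user_msg, model_msg in turns:
              parts.append("<|user|>\n" + user_msg)
              parts.append("<|model|>\n" + model_msg)
          return "\n".join(parts)

      print(build_prompt("You are a helpful assistant.",
                         [("What is DPO?", "")]))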