from Hacker News

A statistical approach to model evaluations

by RobinHirst11 on 11/23/24, 12:37 PM with 49 comments

  • by fnordpiglet on 11/29/24, 6:56 PM

    This does feel a bit like an undergrad introduction to statistical analysis, and it's surprising anyone felt the need to explain these things. But I also suspect most AI people out there nowadays have limited math skills, so maybe it's helpful?
  • by Unlisted6446 on 11/30/24, 2:36 AM

    All things considered, although I'm in favor of Anthropic's suggestions, I'm surprised that they're not recommending more (nominally) advanced statistical methods. I wonder if this is because more advanced methods don't have any benefits or if they don't want to overwhelm the ML community.

    For one, they could consider using equivalence testing for comparing models, instead of significance testing. I'd be surprised if their significance tests were not significant given 10,000 eval questions, and I don't see why they couldn't ask the competing models the same 10,000 eval questions.
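    For readers unfamiliar with it, equivalence testing flips the usual null hypothesis: instead of asking whether two models differ at all, you ask whether their difference lies within a margin you consider negligible. A minimal TOST (two one-sided tests) sketch for two accuracy proportions, using a normal approximation; the counts and margin below are made up for illustration:

```python
import numpy as np
from scipy import stats

def tost_two_proportions(correct_a, correct_b, n, margin=0.02):
    """TOST for equivalence of two models' accuracies on the
    same n eval questions (normal approximation, independent
    samples). Returns the TOST p-value: small means the
    accuracies are equivalent within +/- margin."""
    p_a, p_b = correct_a / n, correct_b / n
    diff = p_a - p_b
    se = np.sqrt(p_a * (1 - p_a) / n + p_b * (1 - p_b) / n)
    # One-sided test against H0: diff <= -margin
    p_lower = 1 - stats.norm.cdf((diff + margin) / se)
    # One-sided test against H0: diff >= +margin
    p_upper = stats.norm.cdf((diff - margin) / se)
    # Reject both one-sided nulls -> conclude equivalence
    return max(p_lower, p_upper)

# Hypothetical scores on 10,000 questions: 85.2% vs 84.9%
p_value = tost_two_proportions(8520, 8490, 10000, margin=0.02)
```

    With 10,000 questions the standard error of an accuracy difference is small enough that a margin of a couple of percentage points is easily resolvable, which is why equivalence claims become feasible at that scale.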

    My intuition is that multilevel modelling could help with the clustered standard errors, but I'll assume that they know what they're doing.
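    For context on the clustered-errors point: when eval questions come in related groups (e.g. several questions about one passage), per-question scores are correlated within a group, and the naive standard error of mean accuracy is too small. A minimal sketch of the cluster-robust variance for a simple mean, which is what clustered standard errors reduce to in the intercept-only case; the scores and cluster labels here are illustrative:

```python
import numpy as np

def clustered_se_of_mean(scores, cluster_ids):
    """Cluster-robust standard error of mean accuracy,
    allowing arbitrary correlation within each cluster.
    Sums residuals within a cluster before squaring, so
    correlated errors inflate the variance estimate."""
    scores = np.asarray(scores, dtype=float)
    cluster_ids = np.asarray(cluster_ids)
    resid = scores - scores.mean()
    n = len(scores)
    var = sum(resid[cluster_ids == c].sum() ** 2
              for c in np.unique(cluster_ids)) / n ** 2
    return np.sqrt(var)

# Hypothetical 0/1 scores, questions grouped by source passage
scores = [1, 1, 0, 0, 1, 1, 1, 0]
passages = [0, 0, 1, 1, 2, 2, 3, 3]
se = clustered_se_of_mean(scores, passages)
```

    A multilevel model would instead estimate the within- and between-cluster variance components explicitly; the sandwich estimator above is the simpler, assumption-light route.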

  • by ipunchghosts on 11/29/24, 11:12 PM

    I have been advocating for this since at least 2018. You can see my publication record as evidence!

    "Random seed xxx is all you need" was another demonstration of this need.

    You actually want a Wilcoxon rank-sum test, as many metrics are not Gaussian, especially as they approach their limits, e.g. accuracy near 99% or 100%. There the distribution becomes highly sub-Gaussian.

  • by intended on 11/30/24, 5:43 AM

    Since when the heck did "evals" change what they refer to? Evals were what you did to check whether the output of a model was correct. What happened?