from Hacker News

Show HN: Visualize OpenAI Evals of GPT-4

by confutio on 3/16/23, 3:47 PM with 0 comments

OpenAI's new evals library (https://github.com/openai/evals) makes it easy to create and run evaluations to find the limitations of GPT-4.
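As a rough sketch (the eval name and flags here are illustrative; check the openai/evals README for the exact options), a single run from its CLI looks something like `oaieval gpt-4 test-match --record_path /tmp/evallogs/test-match.jsonl`, which records each sample and completion to a JSONL log.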

The evals CLI, however, just prints an overall metric once the run finishes and gives you no insight into what kinds of outputs and failures GPT-4 actually produced.

zeno-evals (https://github.com/zeno-ml/zeno-evals) is a simple one-line command that lets you pass in the log output from OpenAI evals (https://github.com/openai/evals/) and interactively explore the actual generated data!
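For example (the file name is hypothetical and the exact invocation may differ; see the zeno-evals README), `zeno-evals /tmp/evallogs/test-match.jsonl` would open a local Zeno UI in your browser where you can filter and inspect individual prompts, completions, and failure cases.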

Try it out with `pip install zeno-evals`