from Hacker News

Better LLM response format – A simple trick reduces costs and response time

by livshitz on 7/25/23, 10:10 AM with 6 comments

  • by livshitz on 7/25/23, 10:10 AM

    I've written a post that may be of interest to those of you working with GPT (or any other LLM) and expecting JSON as output. Here's a simple trick that can help reduce costs and improve response times (see the token-count sketch at the end of the thread).
  • by keskival on 7/25/23, 1:07 PM

    I wonder how well LLMs understand the YAML Schema format.

    I have found that providing a JSON Schema is an excellent way to reduce improvisation in outputs intended for machine consumption (see the schema-prompt sketch at the end of the thread).

  • by villgax on 7/25/23, 12:13 PM

    Might make more sense when invoking a 3rd-party API, but for self-run LLMs, TypeChat with JSON is just fine instead of adapting to YAML across your stack.
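
A minimal sketch of the trick discussed in the thread, assuming (as the title and comments suggest) that it amounts to asking the model to answer in YAML rather than JSON: the same structured data usually serializes to fewer tokens as YAML, so output cost and latency drop. The sample record is made up for illustration; the sketch only needs the tiktoken and PyYAML packages.

    import json

    import tiktoken
    import yaml

    # An illustrative payload the model might be asked to produce.
    record = {
        "users": [
            {"id": 1, "name": "Alice", "email": "alice@example.com", "active": True},
            {"id": 2, "name": "Bob", "email": "bob@example.com", "active": False},
        ]
    }

    # Tokenizer used by the GPT-3.5/GPT-4 family of models.
    enc = tiktoken.get_encoding("cl100k_base")

    as_json = json.dumps(record, indent=2)
    as_yaml = yaml.safe_dump(record, sort_keys=False)

    # Compare how many output tokens each format would cost.
    print("JSON tokens:", len(enc.encode(as_json)))
    print("YAML tokens:", len(enc.encode(as_yaml)))

    # The YAML reply still parses back into the same structure on your side.
    assert yaml.safe_load(as_yaml) == record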
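
A minimal sketch of keskival's approach of putting a JSON Schema in the prompt to constrain the output shape, then validating whatever comes back. The schema, prompt wording, and sample reply are illustrative assumptions, and the actual LLM call is omitted; only the jsonschema package is required.

    import json

    from jsonschema import ValidationError, validate

    # Hypothetical schema describing the shape we want the model to return.
    USER_SCHEMA = {
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer", "minimum": 0},
        },
        "required": ["name", "age"],
        "additionalProperties": False,
    }

    # Embed the schema in the prompt so the model knows the exact shape expected.
    prompt = (
        "Extract the person described in the text below.\n"
        "Respond with a single JSON object that validates against this JSON Schema, "
        "and nothing else:\n"
        f"{json.dumps(USER_SCHEMA, indent=2)}\n\n"
        "Text: Alice turned 31 last week."
    )

    # Pretend this came back from the model; in practice you would send `prompt`
    # to your LLM of choice and read its reply here.
    reply = '{"name": "Alice", "age": 31}'

    try:
        validate(instance=json.loads(reply), schema=USER_SCHEMA)
        print("reply matches the schema")
    except (json.JSONDecodeError, ValidationError) as err:
        print("reply rejected:", err)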