from Hacker News

Decoding LLM Uncertainties for Better Predictability

by shayanjm on 10/16/23, 4:00 PM with 2 comments

  • by armcat on 10/16/23, 7:59 PM

    Great work! I love the use of simple techniques like normalized entropy and cosine distance to reveal what the model is "thinking". The example with random number generation is very cool! I actually managed to get that example to work by telling the model that it's allowed to sample with replacement, AND by giving it an example of repeated numbers (just telling it that it can repeat the numbers won't work). Then I get a perfectly uniform distribution at each step (the spikes are all the same length). I can definitely see how something like this could be used to guide prompt engineering strategies.
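(The normalized entropy mentioned above can be sketched in a few lines. This is a generic illustration, not the authors' code: Shannon entropy over a token's probability distribution, divided by log(n) so the score lands in [0, 1] regardless of vocabulary size. A score near 1 means the model is maximally unsure which token comes next; near 0 means one token dominates.)

```python
import math

def normalized_entropy(probs):
    """Shannon entropy of a probability distribution, divided by
    log(n) so the result is in [0, 1] for any number of outcomes."""
    n = len(probs)
    if n <= 1:
        return 0.0
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(n)

# Near-uniform distribution over 10 candidate tokens -> high uncertainty
print(normalized_entropy([0.1] * 10))          # -> 1.0 (uniform = maximal)
# Peaked distribution -> low uncertainty
print(normalized_entropy([0.91] + [0.01] * 9))
```

In the random-number example, equal-height "spikes" correspond to this score staying near 1 at the sampling positions, i.e. the model assigning roughly equal probability to each digit.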
  • by shayanjm on 10/16/23, 4:00 PM

    Building off our last research post, we wanted to figure out ways to quantify "ambiguity" and "uncertainty" in prompts/responses to LLMs. We ended up discovering two useful forms of uncertainty: "Structural" and "Conceptual" uncertainty.

    In a nutshell: Conceptual uncertainty is when the model isn't sure what to say, and Structural uncertainty is when the model isn't sure how to say it.
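    (One way to make this distinction concrete, assuming you sample several responses to the same prompt and embed them: if the embeddings are far apart by cosine distance, the samples disagree in meaning, suggesting conceptual uncertainty; if they are close in meaning but token-level entropy is high, the model is unsure only about phrasing, suggesting structural uncertainty. The embeddings below are made-up toy vectors, not real model output.)

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity; near 0 means the vectors point the
    same way, larger values mean they diverge."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Hypothetical embeddings of two sampled responses to one prompt.
resp_a = [0.2, 0.9, 0.1]
resp_b = [0.8, 0.1, 0.4]

# A large pairwise distance would point toward conceptual uncertainty
# (the samples mean different things); a small one with high token
# entropy would point toward structural uncertainty (same idea,
# different wording).
print(cosine_distance(resp_a, resp_b))
```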

    You can play around with this yourself in the demo!