from Hacker News

Ask HN: Avoiding irrelevant or undesirable model context in RAG

by davidajackson on 11/18/23, 7:52 PM with 1 comments

Consider a RAG prompt of, say, the format:

Answer question <q> using context <c>.

Say <c> contradicts what the model learned in training.

There are certain situations where one wants the LLM to use its model knowledge, and others where one does not. Is there any formal research in this area?
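One common (if imperfect) lever is the instruction itself. As a minimal sketch, the hypothetical templates below (not from any specific paper) show how the same question can be framed to steer the model toward the retrieved context or toward its parametric knowledge:

```python
def build_prompt(question: str, context: str, prefer_context: bool) -> str:
    """Build a RAG prompt that either pins the answer to the context
    or lets the model fall back on what it learned in training."""
    if prefer_context:
        instruction = (
            "Answer the question using ONLY the context below. "
            "If the context contradicts what you believe, follow the context."
        )
    else:
        instruction = (
            "Answer the question. Treat the context below as a hint that "
            "may be wrong; prefer your own knowledge if they conflict."
        )
    return f"{instruction}\n\nContext: {context}\n\nQuestion: {question}"
```

In practice instruction-following on this is unreliable, which is part of why the question is interesting.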

  • by Ephil012 on 11/18/23, 8:08 PM

    At my company, we developed an open source library that measures whether the context the model received is accurate. While that's not exactly what you're asking, in theory you could use it to detect when an LLM deviates from the context, and then tweak the LLM so it doesn't always follow the provided context.

    Shameless plug for the library: https://github.com/TonicAI/tvalmetrics