by selalipop on 5/8/24, 8:33 PM with 1 comments
But instead of a chatbot, this generates a set of guardrails for a chatbot based on your webpage.
-
For example, if your website has information about a hotel, an LLM using RAG would attempt to answer most questions about hotels.
But by default there's no real-time information on things like weather or traffic conditions.
Rather than risk the chatbot hallucinating an answer, the guardrail model detects queries likely to produce a hallucination and preemptively blocks them from reaching the underlying model.
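To make the idea concrete, here's a minimal sketch of what a pre-query guardrail could look like. All names (guardrail_check, COVERED_TOPICS, rag_chatbot, etc.) are hypothetical, and the keyword matching stands in for whatever classifier or model the actual tool uses:

    # Minimal sketch of a pre-query guardrail (hypothetical names throughout).
    # The guardrail runs before the RAG chatbot and blocks questions the
    # source webpage can't answer, instead of letting the model guess.

    from dataclasses import dataclass

    # Topics the hotel webpage actually covers (derived from the page content).
    COVERED_TOPICS = {"rooms", "rates", "amenities", "check-in", "parking"}

    # Topics that need real-time data the page doesn't have.
    BLOCKED_TOPICS = {"weather", "traffic", "flight status"}


    @dataclass
    class GuardrailResult:
        allowed: bool
        reason: str


    def guardrail_check(query: str) -> GuardrailResult:
        """Decide whether a query should reach the underlying chatbot."""
        q = query.lower()
        if any(topic in q for topic in BLOCKED_TOPICS):
            return GuardrailResult(False, "requires real-time data not on the page")
        if any(topic in q for topic in COVERED_TOPICS):
            return GuardrailResult(True, "covered by the webpage")
        # Unknown topic: err on the side of blocking to avoid hallucination.
        return GuardrailResult(False, "not covered by the webpage")


    def rag_chatbot(query: str) -> str:
        # Placeholder for the actual retrieval-augmented chatbot call.
        return f"[RAG answer for: {query}]"


    def answer(query: str) -> str:
        result = guardrail_check(query)
        if not result.allowed:
            return f"Sorry, I can't answer that ({result.reason})."
        return rag_chatbot(query)  # hand off to the normal RAG pipeline


    if __name__ == "__main__":
        print(answer("What's the weather like near the hotel today?"))
        print(answer("Do your rooms have parking included?"))

In practice the keyword sets would be replaced by a small model or LLM classifier trained on the scraped webpage, but the control flow (check first, only then call the chatbot) is the point.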