from Hacker News

Who is building LLM Chatbots, and what issues are you running into?

by petervandijck on 3/5/24, 1:14 AM with 11 comments

Heya, like probably everyone, we are building some internal LLM chatbots for customers of ours. I'd love to hear hands-on insights into what people are doing, why, what's working for them and what isn't, etc.
  • by BoorishBears on 3/5/24, 6:25 PM

    Chain of thought is underutilized. It almost never makes sense to show the user the "bare" response of the LLM. It's so easy to have LLMs self-critique, think through user intent, etc. to drastically improve the final output.
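
    A minimal sketch of that pattern, assuming the OpenAI Python client and a placeholder model name; the draft and the critique stay internal, and only the revised answer ever reaches the user:

      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
      MODEL = "gpt-4"    # placeholder; any capable chat model works

      def ask(system: str, user: str) -> str:
          resp = client.chat.completions.create(
              model=MODEL,
              messages=[{"role": "system", "content": system},
                        {"role": "user", "content": user}],
          )
          return resp.choices[0].message.content

      def answer(question: str) -> str:
          # 1. Draft an answer (never shown to the user).
          draft = ask("Answer the user's question.", question)
          # 2. Self-critique: check intent, accuracy, and missing caveats.
          critique = ask(
              "Critique the following answer: did it address the user's "
              "actual intent, and is anything wrong or missing?",
              f"Question: {question}\n\nAnswer: {draft}",
          )
          # 3. Revise using the critique; only this final output is shown.
          return ask(
              "Rewrite the answer, fixing the issues raised in the critique.",
              f"Question: {question}\n\nDraft: {draft}\n\nCritique: {critique}",
          )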
  • by petervandijck on 3/5/24, 1:16 AM

    For example, we're building an LLM chatbot that pulls in data from a technical book publisher. They have 20 years of technical books and 20 years of videotaped conference talks.

    Hard:

    - We're using LangChain, which isn't always great

    - The data pipeline was trickier than I had initially thought

    - Indexing embeddings (in Postgres) is just hard (requires tons of RAM); see the sketch after this list
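
    A sketch of what that indexing step can look like, assuming pgvector with an IVFFlat index; the table name, column name, and lists value are hypothetical and need tuning. The main memory lever during the build is maintenance_work_mem:

      import psycopg2

      # Assumed connection details; adjust for your setup.
      conn = psycopg2.connect("dbname=books user=postgres")
      conn.autocommit = True
      cur = conn.cursor()

      # The index build is what eats RAM: raising maintenance_work_mem for
      # this session keeps the build in memory instead of spilling to disk.
      cur.execute("SET maintenance_work_mem = '2GB'")

      # IVFFlat index on a pgvector column; lists is a tuning knob
      # (roughly sqrt(row count) is a common starting point).
      cur.execute("""
          CREATE INDEX IF NOT EXISTS chunks_embedding_idx
          ON chunks USING ivfflat (embedding vector_cosine_ops)
          WITH (lists = 1000)
      """)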

    But the hardest thing has been working on conversation quality. We've started using LangSmith, which came out fairly recently and has been a godsend for tracing and observability. But it's not perfect, and I wish there were better tools out there.
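
    For anyone curious, getting traces flowing is mostly configuration; a minimal sketch assuming the langsmith Python package, with placeholder key and project names:

      import os
      from langsmith import traceable

      # Enable tracing for LangChain runs; key and project are placeholders.
      os.environ["LANGCHAIN_TRACING_V2"] = "true"
      os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-key>"
      os.environ["LANGCHAIN_PROJECT"] = "book-chatbot"

      # Plain Python functions can be traced too, not just LangChain chains.
      @traceable
      def retrieve(query: str) -> list[str]:
          # ... embedding lookup would go here ...
          return []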