by eskibars on 7/17/24, 6:01 AM with 1 comment
I lead product at Vectara, and we've just released a new LLM in our platform that outperforms GPT-4 and Gemini 1.5 Pro on RAG tasks.
Vectara is a Retrieval Augmented Generation (RAG) platform, primarily deployed as SaaS, with a generous free tier so you can try it at no cost.
The way we've been able to offer something "better but cheaper" is by focusing much of our attention on taking smaller models (which can be hosted cost-efficiently) and fine-tuning them for specific tasks: in this case, RAG. The result is a model that is less capable at arbitrary tasks like crafting creative stories, but we've learned that many enterprises don't see the "creativity" of LLMs as a positive, since it also leads to hallucinations.
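To make the idea concrete, here's a minimal sketch (assuming a standard Hugging Face supervised fine-tuning setup, not our actual training pipeline) of what task-specific fine-tuning of a smaller model for RAG can look like: the model is trained on (retrieved context + question) -> grounded answer examples. The base model name and the example data below are placeholders for illustration only.

    # Minimal sketch of RAG-style supervised fine-tuning of a small model.
    # Not Vectara's actual pipeline; model name and data are illustrative assumptions.
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                              TrainingArguments, DataCollatorForLanguageModeling)

    model_name = "mistralai/Mistral-7B-v0.1"  # assumed small base model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Hypothetical training examples: the answer must stay grounded in the context.
    examples = [
        {"context": "The free tier includes 50 MB of indexed text.",
         "question": "What does the free tier include?",
         "answer": "Per the context, it includes 50 MB of indexed text."},
    ]

    def to_features(ex):
        # Pack context + question + answer into one causal-LM training string.
        text = (f"Context:\n{ex['context']}\n\n"
                f"Question: {ex['question']}\n"
                f"Answer: {ex['answer']}{tokenizer.eos_token}")
        return tokenizer(text, truncation=True, max_length=1024)

    ds = Dataset.from_list(examples).map(
        to_features, remove_columns=["context", "question", "answer"])

    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
    args = TrainingArguments(output_dir="rag-sft", per_device_train_batch_size=1,
                             num_train_epochs=1, logging_steps=10)
    Trainer(model=model, args=args, train_dataset=ds,
            data_collator=collator).train()

In practice you'd use a much larger, carefully curated dataset of grounded question/answer pairs, but the trade-off is the one described above: the model gets better at answering strictly from retrieved context and worse at open-ended generation.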
Would love feedback!
by llm-apprentice on 7/19/24, 6:27 PM