by Rauchg on 7/8/24, 2:21 PM with 29 comments
by AaronFriel on 7/8/24, 5:27 PM
This is very good. But it is, unfortunately, still bound by the dominant paradigm of web APIs: the speech-to-text model doesn't get its first byte until I'm done talking, the LLM doesn't get its first byte until the speech-to-text model is done transcribing, and the text-to-speech model doesn't get its first byte until the LLM call is complete.
When all of these things are very fast, it can feel seamless, but each stage contributes to a floor of latency that makes lifelike conversation hard to reach. Most of these models should be capable of streaming prefill, if not streaming decode (for the transformer-like models), but inference servers target the lowest common denominator of the web: a synchronous POST.
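To make that floor concrete, here is a minimal TypeScript sketch of the request-response chain described above. The endpoint paths and response shapes are invented for illustration; the point is the await structure, where each stage's time-to-first-byte stacks on top of the previous stage's total time.

    // Hypothetical endpoints; the shape of the chain is the point.
    async function voiceTurn(audio: Blob): Promise<ArrayBuffer> {
      // 1. STT gets no audio until the user stops talking, and we get
      //    no text until the full transcription is done.
      const sttRes = await fetch("/api/transcribe", { method: "POST", body: audio });
      const { text } = await sttRes.json();

      // 2. The LLM sees its first byte of input only after the complete
      //    transcript arrives.
      const llmRes = await fetch("/api/complete", {
        method: "POST",
        body: JSON.stringify({ prompt: text }),
      });
      const { reply } = await llmRes.json();

      // 3. TTS gets its first byte only once the LLM call is complete.
      const ttsRes = await fetch("/api/speak", {
        method: "POST",
        body: JSON.stringify({ text: reply }),
      });
      return ttsRes.arrayBuffer(); // total latency ≈ sum of all three stages
    }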
When only three very fast models are involved, that's workable. But the latency only compounds once you combine these with agentic systems and tool calling.
The sooner we adopt end-to-end, bidirectional streaming for AI, the sooner we'll reach more lifelike, friendly, low-latency experiences. After all, inter-speaker gaps in person-to-person conversations are often in the sub-100ms range, and between friends can even be negative! We won't have real "agents" until models can interrupt one another and talk over each other. Otherwise these latencies compound into a pretty miserable experience.
Relatedly, Guillermo: I've contributed PRs to the AI SDK to reduce the latency of tool-calling APIs, and to Next.js to add WebSockets. Let's break free of request-response and remove the floor on latency.
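For contrast, a hedged sketch of what end-to-end bidirectional streaming could look like over a single WebSocket. The message types and helper functions here are hypothetical, not the AI SDK's or Next.js's actual API; the point is that audio flows up and partial results flow down continuously, and the client can interrupt mid-reply.

    // Hypothetical message protocol over one WebSocket; "partial_transcript",
    // "token", "audio", and "cancel" are invented names for illustration.
    declare function showCaption(text: string): void;          // stub: render live transcript
    declare function appendToReply(token: string): void;       // stub: render LLM tokens
    declare function playAudioChunk(chunk: ArrayBuffer): void; // stub: feed the audio output

    const ws = new WebSocket("wss://example.com/converse");
    ws.binaryType = "arraybuffer";

    // Upstream: push microphone frames as they're captured, so STT prefill
    // starts while the user is still talking.
    function onMicFrame(frame: ArrayBuffer) {
      if (ws.readyState === WebSocket.OPEN) ws.send(frame);
    }

    // Downstream: every stage forwards partial output, so synthesized audio
    // can start playing before the LLM has finished its sentence.
    ws.onmessage = (event: MessageEvent) => {
      if (typeof event.data === "string") {
        const msg = JSON.parse(event.data);
        if (msg.type === "partial_transcript") showCaption(msg.text);
        if (msg.type === "token") appendToReply(msg.token);
      } else {
        playAudioChunk(event.data); // raw audio chunks, played as they land
      }
    };

    // Barge-in: let the user interrupt mid-reply, which is what makes
    // sub-100ms (or negative) inter-speaker gaps possible.
    function onUserInterrupt() {
      ws.send(JSON.stringify({ type: "cancel" }));
    }

The same idea applies server-side between stages: feed partial transcripts into the LLM's prefill and partial LLM output into TTS, rather than waiting for each call to return.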
by Y_Y on 7/8/24, 4:27 PM
(Not your server, not your code!)
by isoprophlex on 7/8/24, 4:37 PM
Is it really open source, then, given that (as far as I can tell) Whisper and Llama have open weights but not open data, and that speech synthesis component is seemingly fully proprietary?
Loving the new wave of ultrafast voice assistants though, and your execution in particular is very good.
by leobg on 7/8/24, 5:08 PM
I take it that Show HN is not just about the creation but also about the creator and the journey behind what’s being shown.
by ashryan on 7/8/24, 5:01 PM
I haven't been using LLM-powered voice assistants much since I usually prefer text. One thing I noticed playing around with this demo is that the conversational uncanny valley becomes much more apparent when you're speaking with the LLM.
That's not a knock on this project, but wow, it's something I want to think about more.
Thanks for sharing!
by oynqr on 7/8/24, 5:52 PM
Why is this still so easy?