by 0asa on 6/16/23, 8:54 AM with 6 comments
Do you think we'll see LLMs integrated into voice assistants in the near future? Are there technical limitations, or privacy and security concerns, that need to be addressed first? What's holding them back from doing it now?
by didntcheck on 6/16/23, 9:29 AM
I'm quite certain that Amazon and the others are putting a lot of time into investigating this avenue; it's just not ready for rollout yet. I'm looking forward to it. I have an Echo, and it definitely does use a lot of fuzzy matching to understand basic rewordings of queries, but it's still very much a limited set of defined "functions", just with a fuzzy call site. The user has to break their high-level intent down into small requests to those primitives, keep all the "variables" in their head, and pass them to each "function call". Smarter assistants that can remember context are something that's been worked on, and allegedly deployed, almost since the start, but ChatGPT is the only thing I've seen that lives up to the promise in the real world.
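The "fixed functions with a fuzzy call site" model the comment describes can be sketched roughly like this. This is a hypothetical illustration, not Alexa's actual skill API: each primitive has a fixed signature, the matcher tolerates rewordings, but every slot value must be supplied explicitly on each request and nothing carries over between turns.

```python
import re

# Fixed primitives with rigid signatures (hypothetical, for illustration).
def set_timer(minutes):
    return f"Timer set for {minutes} minutes"

def get_weather(city):
    return f"Fetching weather for {city}"

# "Fuzzy call sites": several phrasings map to the same primitive,
# but each pattern must itself capture every required slot.
PATTERNS = [
    (re.compile(r"(?:set|start) a timer for (\d+) minutes?"), set_timer),
    (re.compile(r"(?:what'?s|what is) the weather in (\w+)"), get_weather),
]

def handle(utterance):
    for pattern, func in PATTERNS:
        m = pattern.search(utterance.lower())
        if m:
            return func(*m.groups())
    # No memory of prior turns: a follow-up like "make it 10 instead"
    # has no primitive to land on.
    return "Sorry, I don't understand"

print(handle("Set a timer for 5 minutes"))  # Timer set for 5 minutes
print(handle("make it 10 instead"))         # Sorry, I don't understand
```

The contrast with an LLM-backed assistant is that the latter would resolve "make it 10 instead" against conversational context instead of forcing the user to restate the full intent with all its slots.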
by 0asa on 6/23/23, 7:48 PM
by senttoschool on 6/16/23, 9:00 AM
Probably because LLMs hallucinate and often answer with the wrong information. Can you imagine Siri spitting out wrong information, causing some accident or health risk? Apple's pristine PR image would never allow something like that. Quite frankly, I'm surprised that Google and Microsoft are willing to take that risk.
I totally get why OpenAI said "f it, let's just ship despite hallucination and accuracy issues" when those only affect maybe 5% of queries. OpenAI doesn't have an established worldwide brand to protect.
But could you imagine the news stories if Apple released ChatGPT?
by jrepinc on 6/16/23, 2:59 PM