from Hacker News

Show HN: Speech to Speech and Visual canvas for LLM interactions

by zekone on 11/25/23, 8:26 PM with 1 comments

  • by zekone on 11/25/23, 8:26 PM

    hi folks, would appreciate your thoughts and feedback on this kind of user experience. while ai models are increasingly multi-modal, our interactions with them are still synchronous and limited to a single mode.

    here i offer a demo of what a combined interaction could look like, pairing speech-to-speech with a visual canvas, mimicking how people normally give presentations or convey complex ideas.

    thanks

    https://twitter.com/albfresco/status/1728435103443845508