from Hacker News

Questions about LLMs in Group Chats

by vvoruganti on 9/9/24, 11:19 AM with 19 comments

  • by theturtle32 on 9/13/24, 6:42 AM

    I've been contemplating these exact same thoughts and ideas, and indeed have been very surprised at how little exploration there seems to be around these nuances!

  • by vintro on 9/13/24, 9:42 PM

    given the mechanics of language models, i think it's really interesting to consider them in group settings. how do you create an environment where they can "decide" to respond? how do they make that decision?

    this is something that's talked about a lot in education -- how to foster a productive group discussion: https://www.teacher.org/blog/what-is-the-harkness-discussion...
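    One way to sketch such an environment, purely as an illustration (the agent interface and the toy `EchoAgent` here are hypothetical, not from the thread): poll every participant each round and let each one decide for itself whether to speak.

```python
class EchoAgent:
    """Toy agent: speaks only when the latest message mentions its name.
    A real LLM agent would make a model call in wants_to_respond()."""

    def __init__(self, name):
        self.name = name

    def wants_to_respond(self, history):
        # history is a list of (speaker, text) tuples
        return bool(history) and self.name in history[-1][1]

    def respond(self, history):
        return f"{self.name} heard you"


def run_group_chat_round(agents, history):
    """One round of a group chat: poll every agent in turn; each agent
    decides whether to contribute, so silence is a valid choice."""
    for agent in agents:
        if agent.wants_to_respond(history):
            history.append((agent.name, agent.respond(history)))
    return history
```

    The point of the sketch is that "deciding to respond" becomes an explicit per-agent predicate rather than an implicit property of the chat loop.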

  • by knowaveragejoe on 9/13/24, 6:10 AM

    This is very interesting and thought provoking. On its face it sounds so simple - just put the bots in a room together and let them go at it. But of course it's much more complicated than that, at least if you want good and/or interesting results.

  • by rasengan on 9/13/24, 12:14 PM

    I did a small test a year and a half ago (very basic) and it was already funny xD

    https://github.com/realrasengan/chatgpt-groupchat-test

  • by skeptrune on 9/13/24, 4:43 AM

    I'm interested. Could you use an LLM function call to decide whether or not to respond, instead of randomness, so it feels more intelligent?

    Also, I have no idea what the use case for this would be but making it work sounds cool and kinda fruitful.
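    A minimal sketch of that function-call idea, assuming an OpenAI-style tool schema (the `should_respond` tool and its fields are illustrative, not a real API): offer the model a tool to call instead of a free-form reply, then parse its arguments into a go/no-go decision, defaulting to silence when the output is malformed.

```python
import json

# Hypothetical tool schema handed to the model: rather than replying at
# random, the model "calls" should_respond with a boolean and a reason.
SHOULD_RESPOND_TOOL = {
    "type": "function",
    "function": {
        "name": "should_respond",
        "description": "Decide whether to reply to the latest group-chat message.",
        "parameters": {
            "type": "object",
            "properties": {
                "respond": {"type": "boolean"},
                "reason": {"type": "string"},
            },
            "required": ["respond"],
        },
    },
}


def parse_decision(tool_call_arguments: str) -> bool:
    """Parse the model's tool-call arguments (a JSON string) into a
    go/no-go decision. Stays silent if the output is malformed."""
    try:
        args = json.loads(tool_call_arguments)
        return bool(args.get("respond", False))
    except (json.JSONDecodeError, AttributeError, TypeError):
        return False
```

    Defaulting to "don't respond" on parse failure is a deliberate choice: in a group chat, a bot that stays quiet on ambiguity is less annoying than one that barges in.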

  • by internetter on 9/13/24, 4:40 AM

    What does this achieve?