from Hacker News

The many ways that digital minds can know – A better way to think about LLMs

by moultano on 7/5/23, 5:17 PM with 16 comments

  • by turnsout on 7/5/23, 9:25 PM

    Nice article, and I like the choice of the word "integration" rather than "generalization" to describe the ability of a model to take an internal representation and apply it in a new scenario.

    I continue to think that Relational Frame Theory [0] both explains why these models work so well, and also how they're able to integrate knowledge through nothing but language. I believe that a researcher could show that LLMs emergently encode "frames" that describe relationships between concepts; that frames can be combined to form more complex expressions; and that frames can be reused in different contexts to make "novel" connections.

    [0]: https://en.wikipedia.org/wiki/Relational_frame_theory

  • by ftxbro on 7/6/23, 12:08 AM

    The best way to think about LLMs, since even before ChatGPT, has always been the simulators essay: https://generative.ink/posts/simulators/

  • by tkgally on 7/5/23, 11:16 PM

    Interesting "brain drop," as the author describes it. It reminds me, though, of theories of human language that treat language as cognitive processing inside individuals' minds rather than as a social phenomenon. It might be more useful to think about LLMs in terms of how they interact with us and with other software.

  • by rob74 on 7/6/23, 7:58 AM

    The article claims to balance the views of LLM promoters and detractors, but the title itself already uses terms like "digital mind" and "knowing," which the detractors will probably strongly oppose...