from Hacker News

Building AGI Using Language Models

by leogao on 8/18/20, 1:34 AM with 19 comments

  • by hprotagonist on 8/18/20, 2:51 AM

    I remain unconvinced that encoding statistical information about syntactic manipulation alone will somehow magically convert into semantic knowledge and agency if you just try really hard and do it a lot.
  • by ilaksh on 8/18/20, 7:39 AM

    To me GPT-3 excitement is equivalent to when people get hyped about "defeating aging" after seeing some resveratrol trial or something.

    Language is only part of it. And you can't get complete understanding without integrating spatial information. Take a look at Josh Tenenbaum's work for an explanation of why.

  • by msamogh1 on 8/18/20, 3:13 AM

    It talks about how you can go from a language model that simply generates text to an agent that is capable of performing actions in the real world.

    Essentially, the missing pieces in the picture come down to input and output modules. "How do you formulate any given problem into a form that a language model can answer?".
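    The input/output framing in that comment can be sketched as a thin wrapper that serializes a task into a prompt and parses the model's completion back out. This is a minimal sketch, not anything from the linked post: `generate` is a hypothetical stand-in for a real completion API (e.g. GPT-3), replaced here by a toy that only "knows" simple arithmetic so the example runs end to end.

    ```python
    import operator

    # Toy stand-in for a language-model completion API (hypothetical).
    # A real agent would call an actual model here; this version only
    # handles simple arithmetic prompts so the sketch is self-contained.
    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

    def generate(prompt: str) -> str:
        # Extract the expression between "Q:" and "=" and evaluate it.
        expr = prompt.split("Q:")[-1].split("=")[0].strip()
        a, op, b = expr.split()
        return str(OPS[op](int(a), int(b)))

    def solve(task: str) -> str:
        # Input module: serialize the task into a text prompt.
        prompt = f"Answer the question.\nQ: {task} =\nA:"
        # Output module: parse the completion back into a usable answer.
        return generate(prompt).strip()

    print(solve("2 + 3"))  # prints "5"
    ```

    The point of the sketch is that the "agent" logic lives entirely in the input and output modules; the model itself only ever maps text to text.
    
    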

  • by bionhoward on 8/18/20, 3:03 PM

    If the agent state is symbolic, that’s cool and interesting, but isn’t reality sub-symbolic?
  • by mrfusion on 8/18/20, 2:33 AM

    I don’t completely follow this. Can anyone explain?