by eirikurh on 3/12/23, 10:31 AM with 3 comments
by richardjam73 on 3/12/23, 11:16 AM
Humans might see that a ball dropped from a height will fall to the ground, and they can form a mental model of such a thing. If enough humans write that on the internet, then an LLM might also say the ball will fall. The model the LLM has, however, is that those words go together. Humans hold both models in their mind: that the words go together, and that the ball falls. I think people are confusing these two different types of models and assuming the LLM has both when it only has one. A human raised in an environment without language will still understand the model of a ball falling, but an LLM will not.
by hubadu on 3/12/23, 10:51 AM
Once you know roughly how ChatGPT works under the hood (see the ChatGPT API), most of these claims about ChatGPT seem to emerge from us humans being easily tricked by our own anthropomorphism.
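For reference, here is roughly what that API looks like (a minimal sketch using the openai Python package, circa early 2023; the model name and prompt are only illustrative). The interface is just a list of messages in and a text continuation out, which is part of why reading a mental model into the output is so tempting.

    # Minimal sketch of a ChatGPT API call. Assumes the openai package is
    # installed and OPENAI_API_KEY is set in the environment.
    import openai

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=[
            {"role": "user",
             "content": "What happens to a ball dropped from a height?"},
        ],
    )

    # The service returns a text continuation of the conversation;
    # nothing in the interface exposes any model of the physical world.
    print(response["choices"][0]["message"]["content"])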