by PunchTornado on 11/20/24, 7:52 AM with 1 comments
by philipswood on 11/20/24, 4:39 PM
An interesting paper suggesting that while LLMs produce seemingly human-like output, they use very un-human-like computational approaches with significant weaknesses.
Suggesting that, while they are not stochastic parrots, they're nowhere near human-level intelligence yet.
They struggle with tasks like:
> John deceived Mary and Lucy was deceived by Mary. In this context, did Mary deceive Lucy?
> Franck read to himself and John read to himself, Anthony and Franck. In this context, was Franck read to?