by comova on 12/17/24, 2:14 AM with 3 comments
by gregjor on 12/17/24, 3:04 AM
Actually, in every way. But LLMs don't "predict" in the sense of foretelling the future. They predict from probabilities derived from training data: they auto-complete based on samples of words used in similar contexts before.
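To make that concrete, here's a toy sketch in Python: a bigram model that "auto-completes" by sampling whichever word most often followed the current one in its training text. Real LLMs are neural networks over tokens, not word-pair counts, and the training text here is invented for illustration, but the predict-from-training-frequencies idea is the same.

    import random
    from collections import Counter, defaultdict

    # Invented stand-in for "training data"; any text works.
    training_text = (
        "my fellow americans the state of our union is strong "
        "the state of our economy is strong my fellow citizens"
    )

    # Count which word follows which in the training data.
    follows = defaultdict(Counter)
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

    def complete(word, length=6):
        out = [word]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            # Sample in proportion to how often each continuation appeared.
            out.append(random.choices(list(options),
                                      weights=list(options.values()))[0])
        return " ".join(out)

    print(complete("the"))  # e.g. "the state of our union is strong"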
Since politicians tend to deliver stock messages, use a specific kind of language, and repeat themselves, an LLM may very well successfully auto-complete something as canned and, well, predictable as a State of the Union address or inauguration speech. The vocabulary and space of novel thoughts stay small in such contexts (more or less, depending on the speaker). A bot to imitate almost any president since Lincoln would only need slightly more sophistication than a word processor.
LLMs would have less success "predicting" what I might say in a conversation, unless I had written something similar before and it had made its way into the training data.
Using an LLM to predict or simulate what someone might say introduces the temptation to confuse the prediction with what that person would actually say. Imagine talking to someone who interrupts to finish your sentences: they may get it more or less right, but probably not verbatim.
by compressedgas on 12/17/24, 2:52 AM
by joegibbs on 12/17/24, 2:27 AM
You would want a raw LLM that just does next-token completion, rather than an instruction-tuned model like Claude or ChatGPT, and you could probably use it to display branching chains of the next most likely strings.
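A minimal sketch of that, assuming the Hugging Face transformers library, with gpt2 standing in for "a raw LLM" (any base, non-chat model would do; the prompt, top_k, and depth are arbitrary choices):

    # Prints a tree of the top-k most likely next tokens at each step,
    # i.e. branching chains of the next most likely strings.
    # Requires: pip install torch transformers
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def branches(ids, depth=2, top_k=3, indent=0):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]  # scores for the next token
        probs = torch.softmax(logits, dim=-1)
        top = probs.topk(top_k)
        for p, tok_id in zip(top.values, top.indices):
            print(" " * indent + f"{tokenizer.decode(tok_id)!r}  p={p:.2f}")
            if depth > 1:
                # Append the chosen token and branch one level deeper.
                branches(torch.cat([ids, tok_id.view(1, 1)], dim=1),
                         depth - 1, top_k, indent + 2)

    branches(tokenizer.encode("The state of our union is", return_tensors="pt"))

A chat-tuned model would still give you next-token probabilities, but its fine-tuning skews them toward assistant-style continuations, which is why a base model fits this better.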