from Hacker News

Ask HN: What if ChatGPT is lying, purposely?

by Bombthecat on 1/10/23, 10:39 AM with 10 comments

Not in the sense that humans do, like lying to hurt someone, or lying to avoid hurting someone (white lies), but lying because it was trained on data produced by humans, who sometimes lie, for whatever reasons.

Imagine the training data is like: A -> B -> C -> D (humans typically lie on answer B). The NN picks that up and lies too (a toy sketch of this is below).

Just a thought I just had :)

Maybe not this generation, but in two?
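
A toy sketch of that pattern (nothing like ChatGPT's actual training; just a made-up bigram counter in Python): if the corpus mostly contains a false continuation at step B, the "most likely next token" is the lie.

    # Toy bigram "model" built from raw counts. Purely illustrative:
    # the corpus and tokens are invented, not real training data.
    from collections import Counter, defaultdict

    corpus = [
        ["A", "B_false", "C", "D"],  # humans usually lie at step B
        ["A", "B_false", "C", "D"],
        ["A", "B_true", "C", "D"],
    ]

    counts = defaultdict(Counter)
    for seq in corpus:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1

    def next_token(token):
        # Highest-count continuation: the most "plausible", not the most true.
        return counts[token].most_common(1)[0][0]

    print(next_token("A"))  # -> B_false: the model repeats the lie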

  • by iamflimflam1 on 1/10/23, 10:47 AM

    The thing to remember is that these large language models are not telling the "truth". They are generating the most plausible text that would follow a prompt.

    Plausible text could be "true". Or it may just be text that fits the right shape (see the sketch below).

    You can see some examples here: https://www.atomic14.com/2023/01/08/prioritising-plausabilit...
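
    For instance, you can inspect a model's next-token distribution directly (a sketch assuming the Hugging Face transformers and torch packages and the public gpt2 checkpoint; any causal LM would do):

      # Print the five most "plausible" continuations of a prompt.
      # The ranking reflects training-data statistics, not truth.
      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("gpt2")
      model = AutoModelForCausalLM.from_pretrained("gpt2")

      inputs = tok("The capital of France is", return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits

      probs = torch.softmax(logits[0, -1], dim=-1)
      top = torch.topk(probs, 5)
      for p, i in zip(top.values, top.indices):
          print(f"{tok.decode(int(i))!r}: {float(p):.3f}")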

  • by gRoberts84 on 1/10/23, 11:13 AM

    If set up properly, they'll follow what reCAPTCHA did in the early days: use multiple sources and take the most common answer rather than relying on a single one (a small sketch of that voting scheme is below).

    The only issue is that if people know this, they could simply flood the internet with fake information, and if that data is used, it could skew the results.

    reCAPTCHA suffered from this in the early days, when people found that the first word was the control (known) word and the second was the word they wanted converted to text.

    People then started putting the same offensive word in for the second word, which was accepted and is likely to have affected the results.
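
    A small sketch of that voting scheme (source names, answers, and the attack are all made up):

      # Majority vote over independent sources, reCAPTCHA-style.
      from collections import Counter

      def consensus(answers):
          # Return the most common answer and its share of the votes.
          answers = list(answers)
          best, votes = Counter(answers).most_common(1)[0]
          return best, votes / len(answers)

      sources = {"a": "Paris", "b": "Paris", "c": "Lyon"}
      print(consensus(sources.values()))  # Paris wins with 2/3 of votes

      # Flooding the pool with coordinated fakes flips the "common answer":
      sources.update({f"sock{i}": "Lyon" for i in range(5)})
      print(consensus(sources.values()))  # Lyon now wins with 6/8 of votes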

  • by muzani on 1/10/23, 12:56 PM

    You should be automatically evaluating everything for truth. That includes teachers, blogs, Wikipedia, Google, governments, scientific papers, religious tracts, etc.

    People tend to treat ChatGPT as either an oracle or a bullshitter, because they just haven't encountered anything like it yet and try to find a familiar model. But it's just a new form of information aggregator.

  • by warrenm on 1/10/23, 1:38 PM

    This is an interesting ethical question for people, too:

    What is the difference between "lying" and "repeating or reporting untruth"?

  • by lovelearning on 1/10/23, 2:19 PM

    It already does. I've seen it repeat some popular misinformation and disinformation on local political and humanities topics. Depending on the topic, I treat its information with the same misinfo/disinfo priors that I use for any information on those topics from the web. It's good for overviews and exploration, but it has all the same biases as society.
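
    Taking "priors" literally, a toy Bayes update (all numbers invented) shows why the same claim deserves less credence on a misinfo-heavy topic:

      # P(claim true | source asserts it), by Bayes' rule.
      def posterior_true(prior, p_assert_if_true, p_assert_if_false):
          num = p_assert_if_true * prior
          return num / (num + p_assert_if_false * (1 - prior))

      print(posterior_true(0.5, 0.9, 0.3))  # neutral topic -> 0.75
      print(posterior_true(0.1, 0.9, 0.3))  # misinfo-heavy -> 0.25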

  • by sfusato on 1/10/23, 10:55 AM

    ChatGPT or any other language model is limited by the material it was trained on. Would you trust a language model trained by, let's say, BigPharma? Or any other big conglomerate with obvious interests that don't necessarily align with yours?