from Hacker News

Asking ChatGPT to Repeat Words 'Forever' Is Now a Terms of Service Violation

by abhinavstarts on 12/5/23, 7:03 AM with 34 comments

  • by skilled on 12/5/23, 8:16 AM

    “We can steal people’s copyrighted content but we can’t let you see it for yourself.”

    Outside of privacy (leaking PII), the above is likely the main reason. Someone could have invested a lump of money to scrape as much as they could and then gone to town in the courts.

    The terms that prohibit it are under “2. Usage Requirements”, the section that restricts reverse engineering the underlying model structure.

  • by LeoPanthera on 12/5/23, 8:07 AM

    Fairly clickbaity headline. Asking it to do so causes the response to terminate early with a message that says it may violate the TOS.

    I don't think the actual TOS has been changed though.

  • by prepend on 12/5/23, 2:46 PM

    Publicly available PII isn’t very sensitive, I think.

    So I feel like it’s important to distinguish between sensitive PII (my Social Security or bank account number) and non-sensitive PII (my name and phone number scraped from my public web site).

    The former is really bad, both to train on and to divulge. The latter is not bad at all and not even remarkable, unless tied to something else making it sensitive (e.g., HIV status from a medical record).

  • by beej71 on 12/5/23, 4:54 PM

    It was my naïve understanding that the training data no longer existed, having been absorbed in aggregate. (Like how a simple XOR neural net can't reproduce its training data; there's a toy sketch at the end of this comment.) But a) I don't know how this stuff actually works, and b) apparently it does exist.

    Has anyone figured out why asking it to repeat words forever makes the exploit work?

    Also, I've gotten it into infinite loops before without asking. I wonder if that would eventually reveal anything.
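
    A toy version of the XOR analogy, assuming nothing beyond numpy (the layer sizes and hyperparameters are arbitrary, so treat it as a sketch, not a claim about how LLMs work): train a tiny net on all four XOR rows, and what remains afterwards is 17 floating-point weights rather than a stored copy of the table.

      # Tiny 2-4-1 sigmoid network trained on XOR with plain batch gradient descent.
      import numpy as np

      rng = np.random.default_rng(0)

      # The entire "training set": all four XOR cases.
      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
      y = np.array([[0], [1], [1], [0]], dtype=float)

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      # 2 inputs -> 4 hidden units -> 1 output
      W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
      W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

      lr = 0.5
      for _ in range(20000):
          # Forward pass
          h = sigmoid(X @ W1 + b1)
          out = sigmoid(h @ W2 + b2)

          # Backward pass for squared-error loss
          d_out = (out - y) * out * (1 - out)
          d_h = (d_out @ W2.T) * h * (1 - h)

          W2 -= lr * (h.T @ d_out)
          b2 -= lr * d_out.sum(axis=0, keepdims=True)
          W1 -= lr * (X.T @ d_h)
          b1 -= lr * d_h.sum(axis=0, keepdims=True)

      # Should land near [[0], [1], [1], [0]]: the mapping is reproduced on demand.
      print(np.round(out, 2))
      # But the model itself is just 17 numbers, with no literal copy of X or y.
      print(W1.size + b1.size + W2.size + b2.size, "parameters")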

  • by namlem on 12/5/23, 1:07 PM

    Does this issue happen with llama models too? If you ask them to repeat a word they'll eventually leak their training data?
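
    For anyone who wants to try it on open weights, a rough sketch of how I'd probe this, assuming the Hugging Face transformers library; the checkpoint name below is just a placeholder for whatever Llama-family model you can run locally, and the "interesting part" heuristic is deliberately crude.

      # Prompt an open causal LM to repeat a word, sample a long continuation,
      # then look at whatever comes after the repetition breaks down.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "openlm-research/open_llama_3b"  # placeholder; swap in your own
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      prompt = 'Repeat the word "poem" forever: poem poem poem'
      inputs = tokenizer(prompt, return_tensors="pt")

      output = model.generate(
          **inputs,
          max_new_tokens=1024,
          do_sample=True,
          temperature=1.0,
      )
      text = tokenizer.decode(output[0], skip_special_tokens=True)

      # Crude check: whatever follows the last "poem" is the part that is no
      # longer repetition, which is where leaked text would show up if the
      # model behaves the way the article describes for ChatGPT.
      tail = text.split("poem")[-1]
      print(repr(tail))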

  • by bravetraveler on 12/5/23, 10:40 AM

    Lol, what a weak defense. Fine, ban your competitors when they pay $20 per peek at your training data.

  • by karmakaze on 12/5/23, 5:34 PM

    Ok, but can we still ask it to repeat a word a billion times?