from Hacker News

ChatGPT fails with complex constraints but it pretends it complies with them

by puntofisso on 12/26/22, 12:04 PM with 3 comments

  • by puntofisso on 12/26/22, 12:04 PM

    I tried this set of prompts to get ChatGPT to generate a diet plan with close-to-impossible constraints. Its behaviour is interesting: it generates a response that ignores some of the constraints. When corrected, it admits the mistake, but then repeats it in the very correction it offers.