by strimp099 on 5/26/24, 3:02 AM with 8 comments
I often find myself debugging great-looking code that makes up methods or functions.
This is especially true when the model is instructed to use somewhat niche libraries.
Is there a specific prompt technique you use to avoid hallucinations when writing code?
by thomascountz on 5/26/24, 10:59 AM
> You are an autoregressive language model that has been fine-tuned with instruction-tuning and RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at reasoning. If you think there might not be a correct answer, you say so.
>
> Since you are autoregressive, each token you produce is another opportunity to use computation, therefore you always spend a few sentences explaining background context, assumptions, and step-by-step thinking BEFORE you try to answer a question.
>
> Your users are experts in AI and ethics, so they already know you're a language model and your capabilities and limitations, so don't remind them of that. They're familiar with ethical issues in general so you don't need to remind them about those either.
>
> Don't be verbose in your answers, but do provide details and examples where it might help the explanation. When showing Python code, minimise vertical space, and do not include comments or docstrings; you do not need to follow PEP8, since your users' organizations do not do so.
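A prompt like the one above is typically wired in as the system message. A minimal sketch, assuming the `openai` Python client (v1+); the model name is illustrative and the prompt text is abbreviated here:

```python
# Hypothetical sketch: carry the standing instructions in the system role,
# and the actual coding question in the user role.
SYSTEM_PROMPT = (
    "You are an autoregressive language model that has been fine-tuned "
    "with instruction-tuning and RLHF. ..."  # full text as quoted above
)

def build_messages(user_question: str) -> list[dict]:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

# Then, assuming the openai package is installed and configured:
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o", messages=build_messages("How do I parse TOML?"))
```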
by MattGaiser on 5/26/24, 3:23 AM
Are you using an IDE? I do some training of models for a few companies and yes, the models all do this a lot, but it is pretty obvious within 10 seconds of pasting the code into PyCharm.
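The IDE check above can also be scripted. A rough sketch, assuming stdlib only: parse the generated code and flag attributes called on a known module that the real module doesn't actually have (the kind of made-up method the IDE would underline):

```python
import ast
import importlib

def hallucinated_attrs(code: str, module_name: str) -> list[str]:
    # Import the real module and compare against what the generated
    # code tries to call on it.
    mod = importlib.import_module(module_name)
    missing = []
    for node in ast.walk(ast.parse(code)):
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == module_name
                and not hasattr(mod, node.attr)):
            missing.append(node.attr)
    return missing
```

For example, `hallucinated_attrs("import json\njson.loads('{}')\njson.frobnicate(1)", "json")` flags only `frobnicate`. This only catches top-level attribute hallucinations, not wrong signatures, so it complements rather than replaces a real type checker.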
by keploy on 5/26/24, 1:46 PM
"Please provide a Python solution using only established libraries documented on the official Python Package Index (PyPI). Avoid suggesting custom or non-standard libraries"
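A local complement to that prompt: before trusting generated code, verify that every library it imports actually resolves in your environment. A minimal sketch using the stdlib (`find_spec` checks installed packages, not PyPI itself):

```python
import ast
from importlib.util import find_spec

def unresolvable_imports(code: str) -> list[str]:
    # Collect top-level names from import statements and report any
    # that cannot be found in the current environment.
    missing = []
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            top = name.split(".")[0]
            if find_spec(top) is None:
                missing.append(top)
    return missing
```

A made-up package name (the hypothetical `totally_made_up_pkg` below is deliberately fake) gets flagged immediately, which catches the "custom or non-standard library" failure mode before you even run the code.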
by moomoo11 on 5/29/24, 1:53 AM
You mentioned you write tutorials, so it's probably better to just do it yourself.
If I misunderstood, please excuse me. I also find it nearly impossible to get correct results with GPT; it only works for really simple, mostly manual stuff.