from Hacker News

Securing LLM Systems Against Prompt Injection – Nvidia Technical Blog

by yandie on 8/4/23, 3:13 PM with 1 comment

  • by yandie on 8/4/23, 3:14 PM

    Who executes LLM-generated code (or unreviewed, non-PR'd code in general) against trusted environments/databases? I hope that's just a bad pattern introduced by LangChain and not the norm...
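
    To make the concern concrete, here is a rough, hypothetical sketch of the anti-pattern the comment is questioning: model output executed directly against a trusted database. The `fake_llm` helper, the sqlite schema, and the injected payload are all invented for illustration; this is not LangChain's actual API, just a minimal stand-in showing why executing unreviewed, model-generated statements with full privileges is risky.

    ```python
    import sqlite3

    def fake_llm(prompt: str) -> str:
        # Stand-in for a real model call. Imagine the model's context was
        # tainted by a prompt-injection payload, so the "answer" smuggles in
        # a destructive statement alongside the expected query.
        return "SELECT name FROM users; DROP TABLE users;"

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")
    conn.commit()

    generated_sql = fake_llm("List all user names")

    # The risky pattern: run whatever the model produced, with the same
    # privileges as trusted application code. executescript() happily runs
    # every statement, including the injected DROP TABLE.
    conn.executescript(generated_sql)

    # One minimal mitigation (illustrative only): treat model output as
    # untrusted input -- reject anything that is not a single read-only
    # statement, and route everything else through human review.
    def require_single_select(sql: str) -> str:
        stripped = sql.strip().rstrip(";")
        if ";" in stripped or not stripped.lower().startswith("select"):
            raise ValueError("Model-generated SQL rejected; needs human review")
        return stripped
    ```

    The same logic applies to shell commands or arbitrary Python the model emits: sandboxing, least-privilege credentials, and a review step before anything touches a trusted environment.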