by jsdeveloper on 3/21/24, 5:03 AM with 1 comments
Within minutes of conversation and an imaginary scenario, anyone can jailbreak an AI model and get recipes for making dangerous stuff. The question is: why would these companies train the LLM on such dangerous data in the first place? And to whom are they providing this as an extra service, given that they have clearly trained on it and are already serving it?
Selecting the dataset was their job, and all of these terrible outcomes could simply be avoided at training time.
by joegibbs on 3/21/24, 5:45 AM