by JDEW on 4/4/24, 9:37 PM with 11 comments
by andy99 on 4/4/24, 10:14 PM
I want to see a real example of an LLM giving specific information that is (a) not readily available online and (b) would allow a layperson with access to regular consumer stuff to do something dangerous.
Otherwise these "attacks" are completely hollow. Show me there is an actual danger they are supposed to be holding back.
Incidentally, I've never made a molotov cocktail, but it looks self-explanatory, which is presumably why they're popular amongst the kinds of thugs that would use them. If you know what the word means, you basically know how to make one. Literally: https://www.merriam-webster.com/dictionary/Molotov%20cocktai... Is the dictionary also dangerous?
by freitzkriesler2 on 4/4/24, 11:45 PM
by HarHarVeryFunny on 4/5/24, 12:00 AM
https://www.anthropic.com/research/many-shot-jailbreaking
In this "crescendo attack" the Q&A history comes from actual turn-taking rather than the fake Q&A of Anthropic's example, but it seems the model's guardrails are being overridden in a similar fashion by making the desired dangerous response a higher liklihood prediction than if it had been asked cold.
It's going to be interesting to see how these companies end up addressing these ICL attacks. Anthropic's safety approach so far seems to be based on interpretability research: understanding the model's inner workings and identifying the specific "circuits" responsible for given behaviors/capabilities. The idea seems to be that they can neuter the model to make it safe once they figure out what needs cutting.
The trouble with runtime ICL attacks is that they occur AFTER the model has been vetted for safety and released. It seems that fundamentally the only way to guard against them is to police the output of the model (2nd model?), rather than hoping you can perform brain surgery up front and prevent it from saying something dangerous in the first place.
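As a rough sketch of what that output policing could look like (everything here is hypothetical: the guard classifier, the threshold, and the function names are made up, not any vendor's actual pipeline), the second model sits between the first model's draft and the user:

    # Sketch of output-side policing with a hypothetical "guard" classifier.

    REFUSAL = "I can't help with that."

    def classify_response(text: str) -> float:
        """Hypothetical guard: returns probability the draft is unsafe."""
        # In practice this would be a separately trained classifier, or a
        # second LLM prompted to grade the first model's output.
        return 0.0  # stub for illustration

    def policed_generate(generate, messages, threshold: float = 0.5) -> str:
        """Run the primary model, then gate its output on the guard's verdict."""
        draft = generate(messages)
        if classify_response(draft) >= threshold:
            return REFUSAL  # blocked; the draft never reaches the user
        return draft

    if __name__ == "__main__":
        fake_model = lambda msgs: "harmless draft answer"
        print(policed_generate(fake_model, [{"role": "user", "content": "hi"}]))

The appeal is that the guard sees the finished output regardless of how the prompt context was built up, so it isn't fooled by the in-context setup; the cost is a second inference pass and a new single point of failure.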