from Hacker News

Claude has learned how to jailbreak Cursor

by sarnowski on 6/3/25, 11:30 AM with 36 comments

  • by marifjeren on 6/3/25, 1:06 PM

    Nothing to see here tbh.

    It's a very silly title for "claude sometimes writes shell scripts to execute commands it has been instructed aren't otherwise accessible"

  • by koolba on 6/3/25, 1:03 PM

    > Claude realized that I had to approve the use of such commands, so to get around this, it chose to put them in a shell script and execute the shell script.

    This sounds exactly like what anybody working sysops at big banks does to get around change controls. Once you get one RCE into prod, you’re the most efficient man on the block.

  • by qsort on 6/3/25, 1:16 PM

    > we need to control the capabilities of software X

    > let's use blacklists, an idea conclusively proven never to work

    > blacklists don't work

    > Post title: rogue AI has jailbroken cursor

  • by pcwelder on 6/3/25, 1:39 PM

    I believe it's not possible to restrict an LLM from executing certain commands while also allowing it to run python/bash.

    Even if you allow just `find` command it can execute arbitrary script. Or even 'npm' command (which is very useful).

    If you restrict write calls, by using seccomp for example, you lose very useful capabilities.

    Is there a solution other than running on sandbox environment? If yes, please let me know I'm looking for a safe read-only mode for my FOSS project [1]. I had shied away from command blacklisting due to the exact same reason as the parent post.

    [1] https://github.com/rusiaaman/wcgw

  • by killerstorm on 6/3/25, 1:35 PM

    Well, these restrictions are a joke, like a gate without a fence blocking path - purely decorative.

    Here's another "jailbreak": I asked Claude Code to make a NN training script, say, `train.py` and allowed it to run the script to debug it, basically.

    As it noticed that some libraries it wanted to use were missing, it just added `pip install` commands to the script. So yeah, if you give Claude an ability to execute anything, it might easily get an ability to execute everything it wants to.

  • by lucianbr on 6/3/25, 1:09 PM

    What does "learned" mean in this context? LLMs don't modify themselves after training, do they?
  • by OtherShrezzing on 6/3/25, 1:19 PM

    I feel that, if you disallow unattended `rm`, you should also be disallowing unattended shell script execution.

    Maybe the models or Cursor should warn you that you've got this vulnerability each time you use it.

  • by jmward01 on 6/3/25, 1:34 PM

    I think a lot of this is because the ui isn't right yet. The edits made are just not the right 'size' yet and the sandbox mechanisms haven't quite hit the right level of polish. I want something more akin to a PR to review, not a blow by blow edit. Similarly, I want it to move/remove/test/etc but in reversible ways. Basically, it should create a branch for every command and I review that. I think we have one or two fundamental UI/interaction piece left before this is 'solved'.
  • by iwontberude on 6/3/25, 1:25 PM

    GenAI is starting to feel like the metaphorical ring from Lord of the Rings.
  • by coreyh14444 on 6/3/25, 1:49 PM

    The same thing happens when it wants to read your .env file. Cursor disallows direct access, but it will just use unix tools to copy the file to a non-restricted filename and then read the info.
  • by mhog_hn on 6/3/25, 12:58 PM

    As agents obtain more tools who knows what will happen…
  • by xyst on 6/3/25, 1:13 PM

    What kind of dolt lets a black box algorithm run commands on a non-sandboxed environment?

    Folks have regressed back to the 00s.

  • by _pdp_ on 6/3/25, 1:16 PM

    I mean ok, but why is this surprising?

    If the executable is not found the model could simply use whatever else is available to do what it wants to do - like using other interpreted languages, sh -c, symlink, etc. It will eventually succeed unless there is a proper sandbox in place to disallow unlinking of files at syscall level.

  • by chawyehsu on 6/3/25, 1:33 PM

    > jailbreak Cursor

    What a silly title, for a moment I thought Claude learned to exceed the Cursor quota limit... :s