from Hacker News

Try to guess if code is real or GPT2-generated

by AlexDenisov on 2/23/21, 8:10 PM with 103 comments

  • by moyix on 2/23/21, 8:57 PM

    Hi, author here! Some details on the model:

    * Trained on 17GB of code from the 10,000 most popular Debian packages. The source files were deduplicated using a process similar to the OpenWebText preprocessing (basically locality-sensitive hashing to detect near-duplicates; a rough sketch is below).

    * I used the [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) code for training. Training took about 1 month on 4x RTX8000 GPUs.

    * You can download the trained model here: https://moyix.net/~moyix/csrc_final.zip and the dataset/BPE vocab here: https://moyix.net/~moyix/csrc_dataset_large.json.gz https://moyix.net/~moyix/csrc_vocab_large.zip
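
    For the curious, here is a minimal MinHash-style sketch of that kind of near-duplicate detection (an illustration only, not the actual preprocessing code; the shingle size, signature length, and threshold are made-up parameters):

    ```
    # Illustrative near-duplicate detection via MinHash signatures over
    # token shingles; not the actual pipeline, just the idea.
    import random
    import re

    NUM_PERM = 64    # signature length (made-up parameter)
    SHINGLE = 5      # tokens per shingle (made-up parameter)
    _rng = random.Random(0)
    SEEDS = [_rng.getrandbits(32) for _ in range(NUM_PERM)]

    def shingles(source):
        toks = re.findall(r"\w+|\S", source)
        return {tuple(toks[i:i + SHINGLE])
                for i in range(max(1, len(toks) - SHINGLE + 1))}

    def signature(source):
        sh = shingles(source)
        return tuple(min(hash((seed, s)) for s in sh) for seed in SEEDS)

    def similarity(sig_a, sig_b):
        # Fraction of matching minimums estimates Jaccard similarity.
        return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_PERM

    def dedup(files, threshold=0.8):
        # Keep a file only if it is not a near-duplicate of one already kept.
        kept = {}
        for name, src in files.items():
            sig = signature(src)
            if all(similarity(sig, other) < threshold for other in kept.values()):
                kept[name] = sig
        return sorted(kept)
    ```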

    Happy to answer any questions!

  • by Felk on 2/23/21, 9:40 PM

    I got a function that assigned the same expression to three variables. Then it declared a void function with documentation stating "returns true on success, false otherwise". Apparently that code was written by a human, which makes me doubt either the correctness of the website or the quality of the code it was fed.
  • by et1337 on 2/23/21, 9:00 PM

    This looks like overfitting to me. Some of the GPT samples were definitely real code, or largely real code. One looked like something from Xorg, another like it was straight from the COLLADA SDK. It's really hard to define what "truly new code" is if it's just the same code copy-pasted in a different order. Blah blah Ship of Theseus etc.
  • by Aardwolf on 2/23/21, 9:36 PM

    There was some code about TIFF headers, and it was apparently GPT2-generated.

    TIFF is a real thing, so some human was involved in some part of that code; it has just been garbled by GPT2... In other words, the training set shows quite visibly in the results.

  • by ironmagma on 2/24/21, 1:53 AM

    Would be nice if the back button worked so you could see what you guessed wrong. This is a good example of POST being used unnecessarily where idempotent GET URLs would serve better, as in the sketch below.
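
    Something like this (a hypothetical sketch assuming a Flask-style app; the routes and parameters are made up, not the site's actual API) would make every round a plain GET, so history and the back button just work:

    ```
    # Hypothetical sketch: each round lives at its own GET URL, and the
    # guess goes in the query string, so the back button replays history.
    from flask import Flask, request
    import random

    app = Flask(__name__)

    @app.route("/round/<int:seed>")
    def round_page(seed):
        rng = random.Random(seed)          # same seed, same snippet
        is_gpt2 = rng.random() < 0.5       # stand-in for the snippet lookup
        guess = request.args.get("guess")  # e.g. /round/42?guess=gpt2
        if guess is None:
            return f"Round {seed}: real or GPT2? Append ?guess=real or ?guess=gpt2"
        verdict = "right" if (guess == "gpt2") == is_gpt2 else "wrong"
        return f"Round {seed}: you guessed {guess}, which is {verdict}"
    ```
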
  • by klik99 on 2/23/21, 8:44 PM

    For the ones that were just part of a header file, listing a bunch of instance variables and function names, it seems impossible. But for actual code it is possible, though still quite difficult; I spent too long looking for some logical inconsistency that would give it away.
  • by damenut on 2/23/21, 8:41 PM

    This was so much harder than I thought it was going to be. I would get a few right, then be absolutely sure of the next one and be wrong. After a while I felt like I was noticing aesthetic differences between the GPT output and the real code rather than distinguishing between the two based on content. Very interesting...
  • by efferifick on 2/24/21, 12:21 AM

    I wonder how likely it is that code invariants found in the training set are preserved by GPT-2/3. In other words, if I trained GPT-2 on C source produced by Csmith (a program generator which, I believe, produces C programs without undefined behaviour), would the programs produced by GPT-2/3 also be free of undefined behaviour?

    I understand that GPT-2/3 is just a very smart parrot that has no semantic knowledge of what it is outputting. For example, take a very dumb Markov chain that was "trained" on the following input:

    a.c

    ```
    int array[6];
    array[5] = 1;
    ```

    b.c

    ```
    int array[4];
    array[3] = 2;
    ```

    I guess a Markov chain could theoretically produce the following code:

    out.c

    ```
    int array[4];
    array[5] = 1;
    ```

    which is undefined behaviour (array[5] is out of bounds for int array[4]), even though it was produced from two programs with no undefined behaviour. A better question would be: how can we guarantee that certain program invariants (like the absence of undefined behaviour) are preserved in the generated code? Or, if there are no guarantees, can we estimate a probability? (Sorry, not an expert on machine learning, just excited about a potentially new way to fuzz programs. Technically, one could instrument C programs with sanitizers and compute backward static slices from the sanitizer instrumentation to the beginning of the program, which yields samples of programs without undefined behaviour... so there is already potential for expanding the training set beyond what Csmith can provide.)
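
    To make that thought experiment concrete, here is a minimal bigram Markov chain over the two programs above (my own sketch with a made-up whitespace tokenization; a real language model conditions on far more context, but the failure mode is the same):

    ```
    # A bigram Markov chain "trained" on the two programs above: each token
    # is followed by a random successor seen in the training data, so the
    # chain can stitch two well-defined inputs into an out-of-bounds access.
    import random
    from collections import defaultdict

    A_C = "int array [ 6 ] ; array [ 5 ] = 1 ;"
    B_C = "int array [ 4 ] ; array [ 3 ] = 2 ;"

    follows = defaultdict(list)
    for program in (A_C, B_C):
        toks = program.split()
        for cur, nxt in zip(toks, toks[1:]):
            follows[cur].append(nxt)

    def generate(seed, start="int", max_toks=13):
        rng = random.Random(seed)
        out = [start]
        while len(out) < max_toks and follows[out[-1]]:
            out.append(rng.choice(follows[out[-1]]))
        return " ".join(out)

    # Depending on the seed, a sample either reproduces an input program or
    # mixes the two, e.g. "int array [ 4 ] ; array [ 5 ] = 1 ;" (out of bounds).
    for seed in range(5):
        print(generate(seed))
    ```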

  • by technologia on 2/23/21, 8:31 PM

    This was a fun exercise. I definitely think this could be difficult for greener devs, or even more experienced ones, to suss out. It'd be hilarious to have this model power a live screensaver in lieu of actually being busy at times.
  • by iconara on 2/24/21, 7:41 AM

    I was confused by most of the examples I got because they started in the middle of a block comment. That's clearly wrong, but was it an artefact of the presentation rather than the generation?
  • by reillyse on 2/24/21, 7:14 AM

    It's easy after a while... all the terribly written code is human-made and the clean, tidy code is GPT2. I, for one, welcome our new programming overlords.
  • by thewarrior on 2/23/21, 9:32 PM

    This is actually quite impressive. Try reading the comments in the code. They often make perfect sense in the local context even if the code is GPT-2 gibberish.

    The real examples have worse comments at times.

    The only flaw is that it shows fake code most of the time, so you can game it that way.

  • by Gravityloss on 2/24/21, 10:02 AM

    A lot of the real code seems superficially nonsensical as well!

    Functions with lots of arguments whose body consists of just "return true;".

    I guess it tells us what AI often tells us about ourselves: that what we do makes much less sense than we think it does, and is therefore easy to fake.

    How is it possible to churn out so much music, so many books, or so much software? Well, because most creative works are either not very original or quite procedural or random.

    And this kind of work could indeed be automated (or examined to see whether it needs to be done in the first place).

  • by ivraatiems on 2/23/21, 8:28 PM

    I found this impressively hard at first glance. It just goes to show how difficult getting into context is in an unfamiliar codebase. I think with any amount of knowledge of anything allegedly involved (or, you know, a compiler), these examples would fall apart, but it's still an achievement.

    I'm also pretty sure there are formatting, commenting, and in-string-text "tells" that reliably indicate whether something is GPT2. Maybe I should try training an AI to figure that out...

  • by cryptica on 2/23/21, 10:15 PM

    I was always able to correctly identify GPT2 output, but on a few occasions I misidentified human-written code as GPT2's, usually when the code was poorly written or the comments were unclear.

    GPT2's code looks correct at a glance, but when you try to understand what it's doing, that's when you realise it could not have been written by a human.

    It's similar to the articles produced by GPT3: they have the right form but no substance.

  • by blueblimp on 2/24/21, 12:49 PM

    It's fascinating that the main reason this is hard is that some of the human code is bad enough that it's hard to believe it isn't GPT-2 output. (The first time this happened to me, I had to look the snippet up to convince myself it was really human code.)

    It reminds me of how GPT-3 is good at producing a certain sort of bad writing.

    My guess as to why this happens: we humans have abilities of logical reasoning, clarity, and purposefulness that GPT doesn't have. When we use those abilities, we produce writing and code that GPT can't match. If we don't, though, our output isn't much better than GPT's.

  • by _coveredInBees on 2/23/21, 9:51 PM

    I got 4/4 on my GPT-2 guesses. It is impressive, but the "tell" I've found so far is poor structure in the logic of how something is arranged. For example: a bunch of `if` statements in sequence, without any `else` clauses, some directly opposing prior clauses. Another example was repeating the same operation several times on consecutive lines where most human programmers would write it more simply.
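
    To illustrate, here's a made-up fragment in the style of those tells (my own illustration, not actual output from the site):

    ```
    # Made-up fragment showing the tells described above: sequential ifs
    # with directly opposing conditions, plus verbatim repetition.
    def update_counter(enabled, value):
        if enabled:
            value += 1
        if not enabled:      # directly opposes the branch above...
            value += 1       # ...yet does the same thing anyway
        result = value * 2
        result = value * 2   # same statement repeated line after line
        result = value * 2
        return result
    ```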

    It's harder to do with some of the smaller excerpts, though, and I'm sure there are examples of terrible human programmers who write worse code than GPT-2.

  • by TehCorwiz on 2/23/21, 9:00 PM

    The two factors that seemed like dead giveaways were comments that didn't relate to the code, and sequences of repetition with minor or no variations.
  • by nickysielicki on 2/23/21, 8:57 PM

    This is difficult... because these models are just regurgitating the real code they were trained on. Fun little site, but I hope nobody reads too much into this.
  • by Aeronwen on 2/23/21, 8:54 PM

    Got 40/50 just smashing the GPT2 button.
  • by thebean11 on 2/23/21, 9:15 PM

    6/6, quitting while I'm ahead
  • by jeff-davis on 2/24/21, 6:15 AM

    Cool! Sadly, it's really hard to tell whether OOP boilerplate is real or generated.
  • by xingped on 2/24/21, 1:11 AM

    The giveaway seems to be the comments. Some comments a computer could obviously generate, and some are obviously written by humans. In my playing around with it, the code seemed irrelevant to making a determination.
  • by mhh__ on 2/24/21, 10:18 AM

    I may have delusions of grandeur because I mainly work on compilers and libraries rather than business code, but there is some diabolically awful code in this training set.
  • by hertzrat on 2/23/21, 9:34 PM

    The goal when writing code is to be fairly machine-like and to keep things extremely simple. People also write dead or off-topic comments. That's why this is so hard.
  • by azhenley on 2/24/21, 3:04 AM

    The snippet I'm looking at isn't code at all. It is Polish, and there aren't any comment tokens or anything.
  • by IceWreck on 2/24/21, 9:02 AM

    Did a few, got all of them right. GPT2-generated code always seems to follow a pattern.
  • by The_rationalist on 2/23/21, 9:18 PM

    How much of it is just regurgitating the training set and therefore chunks of real code?
  • by tpoacher on 2/23/21, 10:36 PM

    There is a "codes" top-level domain? Codes? CODES??

    What's next? Advices? Feedbacks? Rests?

    I give ups.

  • by neolog on 2/23/21, 8:40 PM

    The black background on a white background makes it annoying to read.
  • by theurbandragon on 2/23/21, 9:52 PM

    How long before we can just write specs instead of code?
  • by jmpeax on 2/24/21, 2:22 AM

    An interesting approach to lossy compression.
  • by AnssiH on 2/23/21, 9:54 PM

    Ah, 0/5, I give up :)
  • by avipars on 2/24/21, 8:07 AM

    Why not use GPT-3 next time?