from Hacker News

I asked ChatGPT to write the code to print "Hello, world" as a junior developer

by vnglst on 3/26/24, 6:17 AM with 37 comments

  • by mastazi on 3/26/24, 7:21 AM

    This reminds me of this classic post that was discussed many times here on HN. Maybe newer readers will enjoy it.

    https://gwern.net/doc/cs/2005-09-30-smith-whyihateframeworks...

  • by wokwokwok on 3/26/24, 7:20 AM

    Notice the “I asked it to refactor…” … “now how would a…”

    This is a sequence of prompts in a single context, which is “pre-biasing” the LLM to respond the way you want it to.

    Many such “emergent behaviours” are from the asker, not the LLM.

    When you look in the mirror, you only see yourself.

    > In the final exercise of this experiment, I challenged ChatGPT to step back, assess the code's objectives, and propose a better solution. The attempt was unsuccessful at first, as it continued to tweak the existing program and preserve the "existing architecture." Only when I instructed it to envision starting from scratch did it offer a new solution, as follows:

    Surprise.

    Naive usage.

  • by everling on 3/26/24, 7:23 AM

  • by big_paps on 3/26/24, 6:48 AM

    Even being a freelance programmer, this reminds me a bit of the evolution of my own programming style until I hit burnout several years ago. Now I try to be that junior programmer again. Even if it's good to know all these programming patterns, it's also helpful to know how to write in a compact, dense style. Pico8 and Dwitter helped me with that.
  • by wruza on 3/26/24, 6:46 AM

    It decided to repeat a well-known joke.
  • by reilly3000 on 3/26/24, 6:54 AM

    I’m a Principal Cloud Engineer and I practice minimum viable software. Less to grok, less to maintain, less debugging effort during an incident. In the context of a larger system it may make sense to implement a logger for wrapping messages with consistent contextual values because DRY.

    I’ll give ChatGPT a pass because the prompt was so contrived. I would be curious how it would respond if prompted with a more open-ended problem.
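The context-wrapping logger alluded to above can be sketched in a few lines; the field names and function names here are illustrative, not from the comment.

```typescript
// Minimal sketch of a DRY context-wrapping logger: one factory stamps
// every message with shared contextual fields, so call sites stay simple.
type Context = Record<string, string>;

function makeLogger(ctx: Context) {
  return (msg: string): string => {
    // Render the shared context once per message, e.g. "service=checkout region=eu-west-1"
    const tags = Object.entries(ctx)
      .map(([k, v]) => `${k}=${v}`)
      .join(' ');
    return `[${tags}] ${msg}`;
  };
}

const log = makeLogger({ service: 'checkout', region: 'eu-west-1' });
// log('hello') → "[service=checkout region=eu-west-1] hello"
```

The payoff is during an incident: every log line from a subsystem carries the same searchable fields without each call site repeating them.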

  • by ro_bit on 3/26/24, 6:33 AM

    I don't get it. I'm only starting my career, but most of the tenured/senior developers I know spend too much time in meetings to write that much boilerplate.
  • by yakshaving_jgt on 3/26/24, 7:49 AM

    How I write a "Hello, World!" program as an expert Haskell programmer

        {-# LANGUAGE ImportQualifiedPost #-}
        {-# LANGUAGE OverloadedStrings   #-}
        {-# LANGUAGE QuasiQuotes         #-}
        {-# LANGUAGE TemplateHaskell     #-}
    
        module Main where
    
        import Foreign.C.Types
        import Language.C.Inline qualified as C
    
        C.include "<stdio.h>"
        C.include "<stdlib.h>"
    
        main :: IO ()
        main = [C.block| void {
          system("python3 -c 'print(\"Hello, World!\")'");
          } |]
  • by soneil on 3/26/24, 12:30 PM

    I think it'd be interesting to see the actual prompts that went with this.

    I followed it through to the accompanying blog post[1] and I found the examples for "we close at 6pm on Friday" interesting because none of them work. They fail the stated problem (they don't test for Friday) and the unstated problem (when do we re-open? the examples re-open at midnight). And of course, without the prompts I can only guess whether the fault was in the answer or the question.

    If the complex version worked and the simple version was naïve - or vice versa - it'd make for a much more interesting conclusion.

    [1] https://koenvangilst.nl/blog/keeping-code-complexity-in-chec...
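For what it's worth, a version that handles both the stated problem (actually checking for Friday) and the unstated one (when we reopen) might look like the sketch below. The Monday-9am reopening time is my assumption, since the post never specifies one.

```typescript
// Sketch: "we close at 6pm on Friday", assuming (not stated in the post)
// that we stay closed over the weekend and reopen Monday at 9am.
function isClosed(d: Date): boolean {
  const day = d.getDay();    // 0 = Sunday, 5 = Friday, 6 = Saturday
  const hour = d.getHours();
  if (day === 5) return hour >= 18;        // Friday: closed from 6pm
  if (day === 6 || day === 0) return true; // closed all weekend
  if (day === 1) return hour < 9;          // Monday: closed until 9am
  return false;                            // open the rest of the week
}
```

The two failure modes the comment calls out map to the first and last branches: dropping the `day === 5` guard closes every day at 6pm, and dropping the reopening branches reopens at midnight.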

  • by mo_42 on 3/26/24, 6:58 AM

    Seems like it learned the boilerplate from Java and transferred it to TypeScript. I don't know of such jokes in the JS ecosystem, but in Java this is a classic one.

    Tried the same with Haskell and the Fibonacci series because there's a similar joke on the internet. It's similar but not as stereotypical.

  • by palad1n on 3/26/24, 7:23 AM

    Has anyone ever asked ChatGPT to apply the lessons of the Tao of Programming? Just curious.
  • by self_awareness on 3/26/24, 6:52 AM

    This joke is funny until you need to add unit testing to the project.

    Then you start to realize that everything needs to sit behind some abstraction.
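The point above can be shown with a minimal sketch: the moment "Hello, World!" needs a unit test, the output target becomes an injected dependency. The names here are mine, not from the thread.

```typescript
// Sketch: injecting the output sink makes even "Hello, World!" testable,
// because a test can capture output instead of writing to the console.
function greet(write: (s: string) => void): void {
  write('Hello, World!');
}

// Production: pass the real sink.
// greet(console.log);

// Test: capture output in an array instead of printing.
const captured: string[] = [];
greet((s) => captured.push(s));
// captured now holds ['Hello, World!']
```

That injected `write` parameter is exactly the "abstraction" the comment is joking about: harmless here, but the same move, repeated, is how the staff-engineer version of the program grows.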

  • by kleiba on 3/26/24, 6:38 AM

    This reads almost too clichéd to be believable.
  • by edpichler on 3/26/24, 1:13 PM

    That's because it was trained on Internet text: lots of memes and jokes.
  • by ByQuyzzy on 3/26/24, 6:57 AM

    Really starting to hate AI and thinly veiled ads for AI.
  • by feverzsj on 3/26/24, 6:48 AM

    Not sure if it's satire or stereotype.
  • by ninetyninenine on 3/26/24, 7:27 AM

    It understands progressive complexity in code. Keyword: understand.

    Because there's no underlying training data that a stochastic parrot could simply regurgitate to produce this output, the gap left by that lack of data can only be bridged by something best described with a high-level word: "understanding".

    ChatGPT understood the request and delivered answers to it. I'm not saying those answers are correct; that doesn't even matter. What matters is that ChatGPT gave a biased answer to the request, one that indicates understanding of several concepts, including ranking in human social structures and the complexity of code.

    One could say the answers are incorrect, that ChatGPT hallucinated them because staff engineers don't actually write code like that. But that misses the point: the act of hallucination indicates "understanding" regardless.