from Hacker News

Show HN: 13Sheep – a JavaScript game largely authored by ChatGPT

by neeldhara on 3/26/23, 12:21 AM with 26 comments

13 Sheep is a quick roll-and-write game designed by Moritz Dressler, intended for one or more players. By drawing fences on a grid, players try to protect as many sheep as possible before the wolf comes. I reproduced the rules in an online experience, allowing for additional user customization compared to the pen-and-paper version. A lot of the code comes from a conversation I had with ChatGPT.

Here's a blog post detailing the prompts used in the conversation with ChatGPT: https://www.neeldhara.com/blog/13sheep/

  • by 8organicbits on 3/26/23, 2:23 AM

    The chat log is very helpful.

    > I think I spent close to a good twelve hours (including a couple of early throw-away prototypes, and all the failed attempts on the flood filling) altogether… at some point it did get a little addictive, and perhaps there was a sunk cost argument for not letting go halfway through.

    Also very helpful. I get the sense that we're seeing lots of cherry-picked results that underreport all the toil needed to get to the magic prompt that made it all work. Maybe we'll all get better at prompt engineering, but I think a lot of the hype is based on that misunderstanding.

  • by TechBro8615 on 3/26/23, 2:41 AM

    This reminds me of a game from the 1990s, but I don't remember the name of it. It involved something with rats and cheese and drawing walls to keep them from it. Does anyone remember this game?

    EDIT: I think I'm remembering Rodent's Revenge: https://en.wikipedia.org/wiki/Rodent%27s_Revenge

  • by dave333 on 3/27/23, 4:03 AM

    Inspired by your example, I just had a go at creating a mountain-climbing game in a similar fashion, and I agree it does well on the concepts, but there are lots of bugs in the details. For example, it referred to certain divs by class and id but failed to assign those class or id attributes to the respective divs. Maybe there are separate elements generating the response that do not always know exactly what the others have generated.

    Given the elementary nature of the bugs, it should be easy to add a code-review step that finds and fixes them before presenting the response to the user. When I pointed out a failure, it regenerated a new, different version of the responsible code but missed the cause of the bug.

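
The mismatch described in that comment (a script referencing an id the generated markup never declares) is cheap to catch statically. A minimal sketch of such a "code review" pass, using hypothetical snippets rather than the commenter's actual code:

```javascript
// Hypothetical generated output, illustrating the bug class: the script
// looks up id="score", but the markup never assigns that id, so the DOM
// lookup would return null at runtime.
const generatedHtml = '<div class="peak"></div>';
const generatedJs = 'document.getElementById("score").textContent = "0";';

// A cheap static review pass: compare ids declared in the markup against
// ids referenced by the script, and report any that are missing.
function findMissingIds(html, js) {
  const declared = new Set([...html.matchAll(/id="([^"]+)"/g)].map(m => m[1]));
  const referenced = [...js.matchAll(/getElementById\("([^"]+)"\)/g)].map(m => m[1]);
  return referenced.filter(id => !declared.has(id));
}

console.log(findMissingIds(generatedHtml, generatedJs)); // reports the missing "score" id
```

A real reviewer would also need to handle `querySelector`, class references, and ids added dynamically, but even this crude check would flag the failure mode the comment describes before the code ever runs.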
  • by sylware on 3/26/23, 2:46 AM

    Some guys asked ChatGPT (3.5) to write a vectorized quicksort in x86_64 AVX2, then AVX-512 assembly, then SPIR-V (GPU).

    It seems it is not too bad for small, "well-known" algorithms. I am thinking of high-level-language "ports" to "human" assembly.

  • by Waterluvian on 3/26/23, 2:52 AM

    I tried to break it by asking for -1 rounds. It caught that, but then it told me to pick between 10-20, not 5-20 as previously indicated. This feels like a ChatGPT-style bug: I find it doesn't like to use abstractions such as “MIN_GAME_COUNT”, possibly because it doesn't do a great job of going back to what it previously wrote unless you ask it to.

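
The inconsistency reported there (an error message quoting 10-20 while the stated range is 5-20) is the classic hazard of duplicated literals, which the `MIN_GAME_COUNT`-style abstraction guards against. A minimal sketch, with hypothetical names and bounds:

```javascript
// Hypothetical constants; the actual game's names and bounds may differ.
const MIN_ROUNDS = 5;
const MAX_ROUNDS = 20;

// With a single source of truth for the bounds, the validation check and
// the error message cannot drift apart the way two hard-coded literals can.
function validateRounds(n) {
  if (!Number.isInteger(n) || n < MIN_ROUNDS || n > MAX_ROUNDS) {
    return { ok: false, message: `Pick a number between ${MIN_ROUNDS} and ${MAX_ROUNDS}.` };
  }
  return { ok: true };
}
```

Whether ChatGPT reliably produces this structure unprompted is exactly the commenter's question; the sketch just shows what the fix looks like.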
  • by xupybd on 3/26/23, 2:16 AM

    This reminds me of Sheepish, the iPhone game.

    Unfortunately the only record I can find of it is a negative review.

    https://arstechnica.com/gadgets/2009/02/10-lessons-of-iphone...

  • by beders on 3/26/23, 6:32 AM

    – a JavaScript game largely authored by Stack Overflow

    FTFY

  • by Vanit on 3/26/23, 3:55 AM

    This is neat, thanks for sharing!
  • by upwardbound on 3/26/23, 1:01 AM

    Impressive!
  • by Thorentis on 3/26/23, 3:19 AM

    The word "authored" is being used completely wrong here.

    What ChatGPT did is equivalent to what a writing assistant does when an author hands them a list of bullet points and asks them to expand it into a synopsis.

    We would never say that a brick layer designed a building, or that a printer wrote a book. ChatGPT did not author this game. The rules were created by humans. The code was churned out by a machine.