by vnglst on 3/26/24, 6:17 AM with 37 comments
by mastazi on 3/26/24, 7:21 AM
https://gwern.net/doc/cs/2005-09-30-smith-whyihateframeworks...
by wokwokwok on 3/26/24, 7:20 AM
This is a sequence of prompts in a single context, which is “pre-biasing” the LLM to respond the way you want it to.
Many such “emergent behaviours” are from the asker, not the LLM.
When you look in the mirror, you only see yourself.
> In the final exercise of this experiment, I challenged ChatGPT to step back, assess the code's objectives, and propose a better solution. The attempt was unsuccessful at first, as it continued to tweak the existing program and preserve the "existing architecture." Only when I instructed it to envision starting from scratch did it offer a new solution, as follows:
Surprise.
Naive usage.
by everling on 3/26/24, 7:23 AM
by big_paps on 3/26/24, 6:48 AM
by wruza on 3/26/24, 6:46 AM
by reilly3000 on 3/26/24, 6:54 AM
I’ll give ChatGPT a pass because the prompt was so contrived. I would be curious how it would respond if prompted with a more open-ended problem.
by ro_bit on 3/26/24, 6:33 AM
by yakshaving_jgt on 3/26/24, 7:49 AM
    {-# LANGUAGE ImportQualifiedPost #-}
    {-# LANGUAGE OverloadedStrings #-}
    {-# LANGUAGE QuasiQuotes #-}
    {-# LANGUAGE TemplateHaskell #-}

    module Main where

    import Foreign.C.Types
    import Language.C.Inline qualified as C

    C.include "<stdio.h>"
    C.include "<stdlib.h>"

    -- Hello World with maximum indirection: Template Haskell splicing
    -- inline C, which shells out to Python.
    main :: IO ()
    main = [C.block| void {
        system("python -c 'print(\"Hello, World!\")'");
    } |]
by soneil on 3/26/24, 12:30 PM
I followed it through to the accompanying blog post[1] and I found the examples for "we close at 6pm on Friday" interesting because none of them work. They fail the stated problem (they don't test for Friday) and the unstated problem (when do we re-open? the examples re-open at midnight). And of course, without the prompts I can only guess whether the fault was in the answer or the question.
If the complex version worked and the simple version was naïve - or vice versa - it'd make for a much more interesting conclusion.
[1] https://koenvangilst.nl/blog/keeping-code-complexity-in-chec...
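For what it's worth, a check that actually meets the stated constraint has to branch on the weekday, not just the hour. A minimal Haskell sketch, with the caveat that the reopening time (Monday 09:00) is my own assumption, since the prompt leaves it unstated:

    import Data.Time

    -- Closed from Friday 18:00 through the weekend; reopening at
    -- Monday 09:00 is an assumed policy, not part of the prompt.
    isClosed :: LocalTime -> Bool
    isClosed t =
        case dayOfWeek (localDay t) of
            Friday   -> localTimeOfDay t >= TimeOfDay 18 0 0
            Saturday -> True
            Sunday   -> True
            Monday   -> localTimeOfDay t < TimeOfDay 9 0 0
            _        -> False

    -- e.g. isClosed (LocalTime (fromGregorian 2024 3 29) (TimeOfDay 19 0 0)) is True (a Friday evening)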
by mo_42 on 3/26/24, 6:58 AM
I tried the same with Haskell and the Fibonacci series, because there's a similar joke on the internet. The output is similar, but not as stereotypical.
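Such jokes typically play on the contrast between the beginner's direct recursion and the "clever" lazy one-liner. A minimal sketch of that contrast (the names and labels below are mine, not from any particular version of the joke):

    -- "Freshman" version: direct recursion, exponential time.
    fibNaive :: Integer -> Integer
    fibNaive 0 = 0
    fibNaive 1 = 1
    fibNaive n = fibNaive (n - 1) + fibNaive (n - 2)

    -- "Senior" version: the lazy, self-referential list.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    main :: IO ()
    main = print (map fibNaive [0 .. 9]) >> print (take 10 fibs)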
by palad1n on 3/26/24, 7:23 AM
by self_awareness on 3/26/24, 6:52 AM
Then you start to realize that everything needs to sit behind some abstraction.
by kleiba on 3/26/24, 6:38 AM
by edpichler on 3/26/24, 1:13 PM
by ByQuyzzy on 3/26/24, 6:57 AM
by feverzsj on 3/26/24, 6:48 AM
by ninetyninenine on 3/26/24, 7:27 AM
Because there's no underlying training data that a stochastic parrot would need to generate this output, the gap left by that lack of data can only be bridged by something that can be described with a high-level word: "understanding".
chatGPT understood the request and delivered answers to it. I'm not saying those answers are correct; that doesn't even matter. What matters is that chatGPT gave a biased answer to that request, and that bias indicates understanding of several concepts, including ranking in human social structures and the complexity of code.
One could say that the answers are incorrect, that chatGPT hallucinated them because staff engineers don't write code like that. These people don't get it: the act of hallucination indicates "understanding" regardless.