by mpaepper on 4/4/23, 9:37 PM with 35 comments
by cube2222 on 4/4/23, 11:06 PM
That said, the bit about "The trick to avoid hallucination" in the attached blog post is a very neat idea. Obvious in hindsight, as it often is:
> So the trick is that we send a stop pattern, which in this case is when we see Observation: in its output, because then it has created a Thought, an Action, and used a tool, and hallucinates the Observation: itself :D
> This stop parameter is a normal parameter of the OpenAI API by the way, so nothing special to implement there.
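For illustration, here is a minimal sketch of the stop-pattern trick. The real call would pass `stop=["Observation:"]` to the OpenAI completions endpoint; since we can't call the API here, `truncate_at_stop` just mimics the truncation the server performs, and the example completion is made up.

```python
# Sketch of the stop-pattern trick from the blog post. With the real API
# you would pass stop=["Observation:"] in the request; this helper only
# mimics the server-side truncation locally for illustration.

def truncate_at_stop(text: str, stop: str = "Observation:") -> str:
    """Cut the completion at the first stop pattern, as the API would."""
    idx = text.find(stop)
    return text if idx == -1 else text[:idx]

# What the model might emit if allowed to run on: it hallucinates the
# Observation itself instead of waiting for the tool result.
raw = (
    "Thought: I should look up the population.\n"
    'Action: search("population of France")\n'
    "Observation: 67 million\n"       # hallucinated by the model
    "Final Answer: 67 million"
)

clipped = truncate_at_stop(raw)
# clipped now ends right after the Action line, so the real tool output
# can be appended as the Observation instead of the hallucinated one.
```

The agent then appends the genuine tool result after `clipped` and sends the transcript back for the next completion.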
In general, it's magical when you put these things in a feedback loop. The way they can automatically respond to errors and adjust their actions is really cool - take a look at the last gif here[1].
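The feedback loop described above can be sketched in a few lines. Everything here is made up for illustration (the `fake_llm` stub stands in for a model API call, and `calculate` is a toy tool); the point is only that the tool's real output is appended to the transcript and fed back in.

```python
import re

# Toy ReAct-style feedback loop. The "LLM" is a stub that produces a
# final answer once it has seen an Observation; a real agent would call
# a model API in its place.
def fake_llm(transcript: str) -> str:
    if "Observation:" in transcript:
        return "Thought: I have the result.\nFinal Answer: 4"
    return "Thought: I need to compute this.\nAction: calculate[2 + 2]\n"

def calculate(expr: str) -> str:
    # Toy tool; trusted input only.
    return str(eval(expr, {"__builtins__": {}}))

def run_agent(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = fake_llm(transcript)
        transcript += step
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: calculate\[(.+?)\]", step)
        if match:
            # Feed the real tool result back in as the Observation.
            transcript += f"Observation: {calculate(match.group(1))}\n"
    return "no answer"
```

Calling `run_agent("What is 2 + 2?")` runs one tool round-trip and returns `"4"`; error messages from a failing tool would flow back through the same `Observation:` channel, which is what lets the model adjust its next action.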
by summarity on 4/4/23, 11:46 PM
It worked quite well. Too well, almost: I started a meta-conversation where I asked another GPT4 instance to come up with conversations SoulverGPT could have with a user where the addition of solving is beneficial. This worked, and eventually even found a bug in Soulver - essentially fuzzing the language.
by mkmk on 4/4/23, 11:01 PM
Really neat to see the parallels to OODA loops, which are a frequent feature (implicit or explicit) of human decision-making processes.
by jamesfisher on 4/5/23, 7:02 AM
by skybrian on 4/5/23, 12:17 AM
by furyofantares on 4/4/23, 11:14 PM
That said, while I am very, very impressed with GPT4 for lots of uses right now, so far it's just not clear to me that feeding it back into itself is fruitful at this point.
When I use GPT4 for coding, I give it MUCH higher level instructions than I use on a search engine, much closer to the actual problem I'm solving. But I'm still breaking the problem down into smaller problems; I need to read the output, fix errors or instruct it to fix errors, and then have it build more features on. It's similar with creative processes, brainstorming, and other writing.
These agents largely strike me as an attempt to replace this whole fact-checking/editing type routine with the LLM itself; but seeing as it's the thing the LLM is not yet good at, I'm not sure how much progress can be made there, vs just waiting for GPT5 and hoping it's another big leap in capabilities.
by eshnil on 4/5/23, 2:37 AM
by joshcam on 4/5/23, 12:59 AM
If I had GPT-4 API access, it would have found that on the first try. Sigh.
--
I've noticed it does skip the thought process sometimes.
Question: Who is the president? What year was he born? Name one other famous person born in that year. Thought:
Final answer is The current president of the United States is Joe Biden, born in 1942. Some other famous people born in the same year include Harrison Ford, Muhammad Ali, and Aretha Franklin.
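One way to catch a skipped step like the transcript above is a simple format check before accepting the answer. This is just an assumed heuristic, not anything from the linked post: it flags completions where a final answer appears without a non-empty Thought line before it.

```python
import re

def skipped_thought(completion: str) -> bool:
    """Heuristic: did the model jump straight to a final answer without
    emitting a non-empty Thought line first? Format assumed from the
    Question/Thought/Final answer transcript in this thread."""
    answer = re.search(r"Final [Aa]nswer", completion)
    # Require actual content on the Thought line itself.
    thought = re.search(r"Thought:[ \t]*\S", completion)
    if answer is None:
        return False  # no final answer yet, nothing to flag
    return thought is None or thought.start() > answer.start()
```

An agent loop could re-prompt (e.g. by appending `Thought:` and asking for a continuation) whenever this check fires, instead of returning the unreasoned answer.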
by wjessup on 4/5/23, 2:27 AM
by antiatheist on 4/5/23, 2:30 AM
Though the project makes a decent GUI to use ChatGPT if anyone is interested: https://github.com/blipk/gptroles
You can run and edit code snippets in the chat interface
by qaq on 4/4/23, 11:22 PM