by primordialsoup on 6/4/23, 3:26 PM with 36 comments
by coffeebeqn on 6/4/23, 4:27 PM
It would do things like Google something, find the result wasn't relevant, try again, get an error from one of the pages, and then seemingly start doing something completely incoherent related to the error message.
by fullstackchris on 6/4/23, 6:47 PM
Basically, this effort ended up failing because, well, problem-solving itself is inherently complex.
https://en.wikipedia.org/wiki/Fifth-generation_programming_l...
Seems like the exact same thing happened with ChatGPT / AutoGPT / GPT-4, and this will keep happening.
by itsuka on 6/4/23, 7:15 PM
I was thinking that perhaps we have been working with abstractions that are too low-level. Instead of providing a set of tools such as API calls or text splitters, wouldn't it be more reliable to give agents templates or workflows of successful tasks, such as trimming videos or booking restaurants?
These templates would consist of a set of function calls, or a graph of connected components in low-code tools like LangFlow. I believe auto agents already use a similar concept, caching successful tasks for future reuse. The idea is to populate these caches with the most common use cases, and to use retrieval if they grow too large, so that cache misses are rare and lower-level abstractions (tools) become the fallback rather than the baseline. Templates, like prompts, should be portable (e.g. JSON) so that everyone doesn't have to reinvent the wheel. While this may not be as impressive as a fully autonomous agent, and may not work for the generalized case, it should produce a more predictable outcome, I think.
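To make that concrete, here's a rough sketch in Python of what such a template cache could look like (every name here is hypothetical, not any real agent framework's API): templates are portable JSON workflows of tool calls, retrieved by similarity to the task description, with low-level tools only as the cache-miss fallback.

  # Hypothetical template cache: portable workflows of tool calls,
  # retrieved by similarity; low-level tools are the miss fallback.
  import difflib
  import json
  from dataclasses import asdict, dataclass, field
  from typing import Optional

  @dataclass
  class Step:
      tool: str                       # name of a registered tool/function
      args: dict = field(default_factory=dict)

  @dataclass
  class TaskTemplate:
      task: str                       # e.g. "trim a video to a time range"
      steps: list = field(default_factory=list)

      def to_json(self) -> str:
          # Portable representation, so templates can be shared
          # instead of everyone reinventing the wheel.
          return json.dumps(asdict(self))

  class TemplateCache:
      def __init__(self, threshold: float = 0.6):
          self.templates = []
          self.threshold = threshold

      def add(self, template: TaskTemplate) -> None:
          self.templates.append(template)

      def retrieve(self, task: str) -> Optional[TaskTemplate]:
          # Naive string similarity; a large cache would want
          # embedding-based retrieval instead, as suggested above.
          best, score = None, 0.0
          for t in self.templates:
              s = difflib.SequenceMatcher(
                  None, task.lower(), t.task.lower()).ratio()
              if s > score:
                  best, score = t, s
          return best if score >= self.threshold else None

  cache = TemplateCache()
  cache.add(TaskTemplate(
      task="trim a video to a time range",
      steps=[Step("download_video", {"url": "..."}),
             Step("ffmpeg_trim", {"start": "00:00:05", "end": "00:00:30"})],
  ))

  hit = cache.retrieve("trim a video to 30 seconds")
  if hit:
      print("cache hit:", hit.to_json())  # replay the known-good workflow
  else:
      print("cache miss: plan from scratch with low-level tools")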
by Ozzie_osman on 6/4/23, 5:55 PM
It's really turtles all the way down.
by ianbutler on 6/4/23, 5:02 PM
by olup on 6/5/23, 4:56 AM
by simonw on 6/4/23, 5:16 PM