by deshraj on 9/5/24, 4:18 PM with 23 comments
by twothamendment on 9/5/24, 8:51 PM
by ggnore7452 on 9/5/24, 8:48 PM
by imranq on 9/5/24, 8:06 PM
by chipdart on 9/5/24, 7:55 PM
Instead of long-term memory I'd be happy if it had short-term reliability. I've lost count of the number of times this week that Claude failed to process prompts because it was down.
by quantadev on 9/5/24, 9:07 PM
Personally I use LangChain/Python for this; that way, any new AI features I create easily work across ALL LLMs, and my app just lets the end user pick the LLM they want to run on. Every feature I have works on every LLM.
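(A minimal sketch of the provider-agnostic pattern this comment describes, using LangChain's common chat-model interface. The specific packages, model IDs, and the `summarize` feature are illustrative assumptions, not the commenter's actual code.)

    # Sketch: one feature, many LLM backends, chosen by the end user.
    from langchain_openai import ChatOpenAI
    from langchain_anthropic import ChatAnthropic
    from langchain_core.messages import HumanMessage

    # The end user picks a backend; every feature talks to the same interface.
    MODELS = {
        "gpt-4o": lambda: ChatOpenAI(model="gpt-4o"),
        "claude-3-5-sonnet": lambda: ChatAnthropic(model="claude-3-5-sonnet-20240620"),
    }

    def summarize(text: str, llm_choice: str) -> str:
        """A hypothetical 'feature' written once; it runs on whichever LLM the user picked."""
        llm = MODELS[llm_choice]()
        response = llm.invoke([HumanMessage(content=f"Summarize:\n\n{text}")])
        return response.content

    if __name__ == "__main__":
        print(summarize("LangChain exposes one interface over many model providers.", "gpt-4o"))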
by decide1000 on 9/5/24, 5:34 PM