by thor-rodrigues on 8/19/24, 2:35 PM with 24 comments
by gavmor on 8/19/24, 3:28 PM
> Tool responses can be provided via messages with the `tool` role.
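That is, the model only *requests* a call; the client executes the function and feeds the result back as a `tool`-role message. Roughly, with the Ollama Python client (the model name and weather function are illustrative):

```python
import ollama

def get_current_weather(city: str) -> str:
    # Hypothetical stand-in; a real tool would call a weather API.
    return f'18 degrees and sunny in {city}'

messages = [{'role': 'user', 'content': 'What is the weather in Toronto?'}]

response = ollama.chat(
    model='llama3.1',  # any tool-capable model
    messages=messages,
    tools=[{
        'type': 'function',
        'function': {
            'name': 'get_current_weather',
            'description': 'Get the current weather for a city',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {'type': 'string', 'description': 'Name of the city'},
                },
                'required': ['city'],
            },
        },
    }],
)

# The model returns tool calls; the client runs them and replies
# with `tool`-role messages carrying the results.
messages.append(response['message'])
for call in response['message'].get('tool_calls', []):
    result = get_current_weather(**call['function']['arguments'])
    messages.append({'role': 'tool', 'content': result})

print(ollama.chat(model='llama3.1', messages=messages)['message']['content'])
```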
by hm-nah on 8/19/24, 4:21 PM
Is the environment that the tools run in sandboxed?
I’m unclear on when/how/why you’d want an LLM executing code on your machine or in a non-sandboxed environment.
Anyone care to enlighten?
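One relevant detail: in Ollama's design the model never executes anything itself. It returns a `tool_calls` list and the client decides what, if anything, to run, so sandboxing is entirely the caller's job. A minimal sketch of that dispatch point (the allow-list and subprocess jail are illustrative, and a timeout alone is not a real sandbox):

```python
import subprocess

def run_python(code: str) -> str:
    # Illustrative containment only: a separate process with a timeout.
    # Real isolation for untrusted code needs a container, VM, or jail.
    proc = subprocess.run(
        ['python3', '-c', code],
        capture_output=True, text=True, timeout=5,
    )
    return proc.stdout or proc.stderr

# Allow-list: the model can only request tools named here.
SAFE_TOOLS = {'run_python': run_python}

def dispatch(tool_call: dict) -> str:
    name = tool_call['function']['name']
    if name not in SAFE_TOOLS:
        return f'error: tool {name!r} is not allowed'
    return SAFE_TOOLS[name](**tool_call['function']['arguments'])
```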
by SV_BubbleTime on 8/19/24, 3:46 PM
Is this to the effect of running a local LLM that reads your prompt and then decides which specialized LLM to hand it off to? If that is the case, isn’t there going to be a lot of latency switching models back and forth, since most people run the single largest model that will fit on their GPU?
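For what it’s worth, the routing pattern described here would look something like the sketch below (Ollama Python client; the model names and the two-way split are illustrative). The latency concern is real: each specialist is loaded into VRAM on first use, so swapping between large models is slow unless they fit in memory together.

```python
import ollama

# Illustrative router: a small model picks a label, a specialist answers.
SPECIALISTS = {'code': 'codellama', 'general': 'llama3.1'}

def route(prompt: str) -> str:
    label = ollama.chat(
        model='llama3.1',  # router model; ideally a small, fast one
        messages=[{'role': 'user', 'content':
                   'Reply with exactly one word, code or general: '
                   'which specialist should handle this prompt?\n\n' + prompt}],
    )['message']['content'].strip().lower()
    specialist = SPECIALISTS.get(label, SPECIALISTS['general'])
    # The specialist is loaded into VRAM on first use, which is where
    # the model-swapping latency shows up.
    answer = ollama.chat(model=specialist,
                         messages=[{'role': 'user', 'content': prompt}])
    return answer['message']['content']

print(route('Write a quicksort in Rust'))
```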