from Hacker News

Ollama now supports tool calling with popular models in local LLM

by thor-rodrigues on 8/19/24, 2:35 PM with 24 comments

  • by koinedad on 8/19/24, 5:01 PM

    Pretty sweet to be able to run models locally and get more advanced usage like tool calling; excited to try it out.
  • by gavmor on 8/19/24, 3:28 PM

    Where is `get_current_weather` implemented?

    > Tool responses can be provided via messages with the `tool` role.
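
    (A minimal sketch of the flow, assuming the ollama-python chat API described in the announcement: `get_current_weather` is an ordinary function you implement yourself, and its output goes back to the model as a `tool`-role message. The model name and the stubbed weather lookup below are illustrative.)

        import json
        import ollama

        # Your own code -- Ollama only emits a structured request to call it;
        # the model never executes anything itself.
        def get_current_weather(city: str) -> str:
            # Stubbed result; a real implementation would query a weather API.
            return json.dumps({"city": city, "temperature": "22 C"})

        messages = [{"role": "user", "content": "What is the weather in Toronto?"}]

        response = ollama.chat(
            model="llama3.1",  # illustrative; any model tagged for tool use
            messages=messages,
            tools=[{
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    "description": "Get the current weather for a given city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }],
        )

        # Feed each requested call's result back as a `tool`-role message,
        # then let the model compose the final answer.
        messages.append(response["message"])
        for call in response["message"].get("tool_calls") or []:
            args = call["function"]["arguments"]
            messages.append({"role": "tool", "content": get_current_weather(**args)})

        final = ollama.chat(model="llama3.1", messages=messages)
        print(final["message"]["content"])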

  • by fancy_pantser on 8/19/24, 5:23 PM

    I see Command-R+ but not Command-R marked for tool use. The model is geared for it, it's much easier to fit on commodity hardware like 4090s, and Ollama's own description of it even mentions tool use. I think it's just not labeled for some reason. It works really well with the provided ollama-python package and with other tools that already offered function-calling capabilities via Ollama's API.

    https://ollama.com/library/command-r
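
    (A quick way to check, sketched against the ollama-python client: pass the same `tools=` definition to `command-r` and see whether `tool_calls` comes back in the reply. Assumes the model has already been pulled locally.)

        import ollama

        # Probe whether command-r emits structured tool calls through Ollama's
        # tools API, regardless of how it is tagged on the library page.
        resp = ollama.chat(
            model="command-r",  # assumes `ollama pull command-r` has been run
            messages=[{"role": "user", "content": "What is the weather in Oslo?"}],
            tools=[{
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    "description": "Get the current weather for a given city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }],
        )

        # A non-empty list here means the model produced a structured tool call.
        print(resp["message"].get("tool_calls"))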

  • by codeisawesome on 8/19/24, 3:28 PM

    How does this compare to Agent Zero (frdel/agent-zero on GitHub)? It seems to provide similar functionality and uses Docker to run the generated scripts/code.
  • by hm-nah on 8/19/24, 4:21 PM

    The first thing I think of when anyone mentions agent-like “tool use” is:

    - Is the environment that the tools are run from sandboxed?

    I’m unclear on when/how/why you’d want an LLM executing code on your machine or in a non-sandboxed environment.

    Anyone care to enlighten?
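
    (One common pattern, sketched here without reference to any particular framework: the function you register as the tool is itself a wrapper that runs model-generated code in a throwaway Docker container with no network and capped resources, so the model never touches the host shell directly. The image name and limits are illustrative.)

        import subprocess

        def run_sandboxed(code: str, timeout: int = 10) -> str:
            """Execute model-generated Python in a throwaway container:
            no network, capped memory/CPU, removed when it exits."""
            cmd = [
                "docker", "run", "--rm",
                "--network", "none",   # no outbound access
                "--memory", "256m",    # cap resources
                "--cpus", "0.5",
                "--pids-limit", "64",
                "python:3.12-slim",    # illustrative base image
                "python", "-c", code,
            ]
            try:
                proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
                return proc.stdout or proc.stderr
            except subprocess.TimeoutExpired:
                return "error: execution timed out"

        # Register run_sandboxed (not a raw shell) as the tool the model can call.
        print(run_sandboxed("print(2 + 2)"))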

  • by SV_BubbleTime on 8/19/24, 3:46 PM

    I have to guess, since programmer blog-post writing (plus autism?) assumes “everyone already knows everything about my project because I do!”

    Is this about running a local LLM that reads your prompt and then decides which specialized LLM to hand it off to? If so, isn't switching models back and forth going to add a lot of latency, since most people run the single largest model that will fit on their GPU?