by rdescartes on 12/7/24, 3:55 AM
by chirau on 12/7/24, 2:47 AM
This is wonderful news.
I was actually scratching my head over how to structure a regular prompt to produce CSV data without extra nonsense like "Here is your data" and "Please note blah blah" at the beginning and end, so this is most welcome: I can define exactly what I want returned and then just push the structured output to CSV.
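Roughly what I have in mind, as an untested sketch (assuming the ollama Python client and that the new format parameter accepts a JSON schema; the Expense model and field names are my own, not from the docs):

    import csv
    from ollama import chat          # assumes the ollama Python client (pip install ollama)
    from pydantic import BaseModel

    # My own example schema -- adjust the fields to taste.
    class Expense(BaseModel):
        amount: float
        merchant: str
        category: str

    texts = [
        "Spent 42.50 on groceries at Trader Joe's",
        "Coffee at Blue Bottle, 6.75",
    ]

    with open("expenses.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(Expense.model_fields))
        writer.writeheader()
        for text in texts:
            resp = chat(
                model="llama3.1",  # any locally pulled model
                messages=[{"role": "user", "content": f"Extract the expense from: {text}"}],
                format=Expense.model_json_schema(),  # constrain output to the schema
            )
            # Response shape may differ slightly between client versions.
            row = Expense.model_validate_json(resp.message.content)
            writer.writerow(row.model_dump())

No "Here is your data" to strip out; the CSV has exactly the columns I asked for.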
by quaintdev on 12/7/24, 3:47 AM
Yay! It works. I used gemma2:2b and gave it the text below:
You have spent 190 at Fresh Mart. Current balance: 5098
and it gave the output below:
{\n\"amount\": 190,\n\"balance\": 5098 ,\"category\": \"Shopping\",\n\"place\":\"Fresh Mart\"\n}
by guerrilla on 12/7/24, 7:12 AM
No way. This is amazing and one of the things I actually wanted. I love ollama because it makes using an LLM feel like using any other UNIX program. It makes LLMs feel like they belong on UNIX.
Question though. Has anyone had luck running it on AMD GPUs? I've heard it's harder but I really want to support the competition when I get cards next year.
by bluechair on 12/7/24, 1:39 AM
Has anyone seen how these constraints affect the quality of the LLM's output?
In some instances, I'd rather parse Markdown or plain text if it means the quality of the output is higher.
by quaintdev on 12/7/24, 1:40 AM
So can I use this with any supported model? The reason I'm asking is that I can only run 1b-3b models reliably on my hardware.
by JackYoustra on 12/7/24, 5:55 AM
PRs for this have been open for something like a year! I'm a bit sad about how quiet the maintainers have been about them.
by lxe on 12/7/24, 2:14 AM
I'm still running oobabooga because of its ExLlamaV2 (exl2) support, which does much more efficient inference on dual 3090s.
by highlanderNJ on 12/7/24, 12:22 PM
by xnx on 12/7/24, 2:58 AM
Is there a best approach for providing structured input to LLMs? Example: feed in 100 sentences and get each one classified in different ways. It's easy to get structured data out, but my approach of prefixing line numbers seems clumsy.
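The clumsy version I mean looks roughly like this (Python client; the schema and prompt wording are my own, untested):

    from ollama import chat  # assumes the ollama Python client

    sentences = ["The delivery was late.", "Great battery life.", "Too expensive."]

    # Number each sentence so the model can refer back to it by index.
    numbered = "\n".join(f"{i}: {s}" for i, s in enumerate(sentences))

    # Ask for one classification object per input index.
    schema = {
        "type": "object",
        "properties": {
            "results": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "index": {"type": "integer"},
                        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
                        "topic": {"type": "string"},
                    },
                    "required": ["index", "sentiment", "topic"],
                },
            }
        },
        "required": ["results"],
    }

    resp = chat(
        model="llama3.1",
        messages=[{"role": "user", "content": f"Classify each sentence:\n{numbered}"}],
        format=schema,
    )
    print(resp.message.content)

It works, but nothing stops the model from skipping or repeating an index, hence the question.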
by ein0p on 12/7/24, 6:49 AM
That's very useful. To see why, try to get an LLM to _reliably_ generate JSON output without this. Sometimes it will, but sometimes it'll just YOLO and produce something you didn't ask for that can't be parsed.
by rcarmo on 12/7/24, 9:53 AM
I must say it is nice to see the curl example first. As much as I like Pydantic, I still prefer to hand-code the schemas, since it makes it easier to move my prototypes to Go (or something else).
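Concretely, I keep the schema in a plain JSON file and load it at runtime, so a later Go port can reuse the exact same file (untested Python sketch; the path and prompt are just examples):

    import json
    from ollama import chat  # assumes the ollama Python client

    # The schema lives in version control as plain JSON rather than as
    # Pydantic models, so a Go prototype can reuse the exact same file.
    with open("schemas/contact.json") as f:   # example path, not a convention
        schema = json.load(f)

    resp = chat(
        model="llama3.1",
        messages=[{
            "role": "user",
            "content": "Extract the contact details from: 'Reach Ana at ana@example.com or +351 555 0100.'",
        }],
        format=schema,
    )
    print(resp.message.content)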
by seertaak on 12/8/24, 2:28 AM
Could someone explain how this is implemented? I saw on Meta's Llama page that the model has intrinsic support for structured output. My 30,000 ft mental model of an LLM is as a text completer, so it's not clear to me how this is accomplished.
Are llama.cpp and ollama leveraging Llama's intrinsic structured-output capability, or is this something bolted onto the output after the fact? (And if the former, how is the capability guaranteed across other models?)
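My naive guess is that the constraint lives in the sampler rather than the model, something like this toy sketch (not actual ollama/llama.cpp code; the model interface here is imaginary):

    # Toy illustration of what I imagine "constrained decoding" means:
    # at each step, discard any next token that would make the partial
    # output an invalid prefix of the target format, then pick from what
    # is left. `next_token_scores` and `is_valid_prefix` are hypothetical
    # stand-ins, not real library calls.
    def constrained_generate(next_token_scores, is_valid_prefix, max_tokens=256):
        output = ""
        for _ in range(max_tokens):
            scores = next_token_scores(output)          # {token: score}
            allowed = {t: s for t, s in scores.items()
                       if is_valid_prefix(output + t)}  # mask invalid tokens
            if not allowed:
                break
            output += max(allowed, key=allowed.get)     # greedy pick
        return output

If that's roughly right, it would also explain why it works across models (the constraint sits in the sampler, not the weights), but I'd love confirmation.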
by vincentpants on 12/7/24, 2:10 AM
Wow, neat! The first step toward format ambivalence! Curious to see how well this performs on the edge, where headroom is always so scarce!
Amazing work as always, looking forward to taking this for a spin!
by lormayna on 12/7/24, 7:24 AM
This is fantastic news!
I spent hours fine-tuning my prompt to summarise text and output JSON, and I still hit issues sometimes.
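The schema I'm after is roughly this (untested Python sketch; field names are mine):

    from ollama import chat  # assumes the ollama Python client

    article_text = open("article.txt").read()  # whatever needs summarising

    # Rough sketch of the summary schema I keep trying to prompt for by hand.
    schema = {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "summary": {"type": "string"},
            "key_points": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "summary", "key_points"],
    }

    resp = chat(
        model="llama3.1",
        messages=[{"role": "user", "content": f"Summarise this article: {article_text}"}],
        format=schema,
    )
    print(resp.message.content)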
Is this feature also available from Go?
by diimdeep on 12/7/24, 1:09 PM
Very annoying marketing, pretending to be anything other than just a wrapper around llama.cpp.