from Hacker News

Llama 4 Now Live on Groq

by gok on 4/5/25, 8:13 PM with 48 comments

  • by Game_Ender on 4/5/25, 9:01 PM

    To help those who got a bit confused (like me): this Groq is the company making accelerators designed specifically for LLMs, which they call LPUs (Language Processing Units) [0]. They want to sell you their custom machines that, while expensive, are much more efficient at running LLMs. There is also Grok [1], which is xAI's series of LLMs and competes with ChatGPT and other models like Claude and DeepSeek.

    EDIT: It seems that Groq has stopped selling their chips and will now only partner to fund large build-outs of their cloud [2].

    0 - https://groq.com/the-groq-lpu-explained/

    1 - https://grok.com/

    2 - https://www.eetimes.com/groq-ceo-we-no-longer-sell-hardware

  • by simonw on 4/5/25, 9:02 PM

    It's live on Groq, Together and Fireworks now.

    All three of those can also be accessed via OpenRouter - with both a chat interface and an API:

    - Scout: https://openrouter.ai/meta-llama/llama-4-scout

    - Maverick: https://openrouter.ai/meta-llama/llama-4-maverick

    Scout claims a 10 million input token context length, but the available providers currently seem to limit it to 128,000 (Groq and Fireworks) or 328,000 (Together). I wonder who will win the race to get that full-sized 10 million token window running?

    Maverick claims 1 million; Fireworks offers 1.05M, while Together offers 524,000. Groq isn't offering Maverick yet.
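
    A minimal sketch of calling one of those OpenRouter endpoints through its OpenAI-compatible chat completions API; the model slug comes from the Scout link above, while the environment variable name and the prompt are just assumptions.

      import os
      import requests

      # Call Llama 4 Scout via OpenRouter's OpenAI-compatible endpoint.
      # OPENROUTER_API_KEY is an assumed environment variable name.
      resp = requests.post(
          "https://openrouter.ai/api/v1/chat/completions",
          headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
          json={
              "model": "meta-llama/llama-4-scout",
              "messages": [{"role": "user", "content": "Summarize this thread in one sentence."}],
          },
          timeout=60,
      )
      resp.raise_for_status()
      print(resp.json()["choices"][0]["message"]["content"])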

  • by parhamn on 4/5/25, 9:20 PM

    I might be biased by the products I'm building, but it feels to me that function-calling support is table stakes now? Are open source models just missing the dataset to fine-tune one?

    Very few of the models supported on Groq/Together/Fireworks support function calling, and rarely the interesting ones (DeepSeek V3, the large Llamas, etc.).
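
    For reference, this is roughly what an OpenAI-style tools request looks like against Groq's OpenAI-compatible endpoint (the URL quoted later in the thread); whether any given hosted model honors it is exactly the gap being described. The tool schema is a hypothetical example and the model name is an assumption, not a statement of support.

      import os
      import requests

      # Hypothetical tool definition in the OpenAI function-calling schema.
      tools = [{
          "type": "function",
          "function": {
              "name": "get_weather",
              "description": "Get the current weather for a city.",
              "parameters": {
                  "type": "object",
                  "properties": {"city": {"type": "string"}},
                  "required": ["city"],
              },
          },
      }]

      resp = requests.post(
          "https://api.groq.com/openai/v1/chat/completions",
          headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
          json={
              "model": "meta-llama/llama-4-scout-17b-16e-instruct",
              "messages": [{"role": "user", "content": "What's the weather in Lisbon?"}],
              "tools": tools,
          },
          timeout=60,
      )
      resp.raise_for_status()
      message = resp.json()["choices"][0]["message"]
      # If the model supports tools, any tool_calls show up here; otherwise
      # it just answers in plain text.
      print(message.get("tool_calls") or message.get("content"))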

  • by minimaxir on 4/5/25, 9:24 PM

    Although Llama 4 is too big for mere mortals to run without many caveats, the economics of calling a dedicated-hosted Llama 4 are more interesting than expected.

    $0.11 per 1M tokens, a 10 million token context window (not yet implemented on Groq), and faster inference due to fewer activated parameters allow for some specific applications that were not cost-feasible with GPT-4o/Claude 3.7 Sonnet. That all depends on whether the quality of Llama 4 is as advertised, of course, particularly around that 10M context window.
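
    As a back-of-envelope illustration of that point: the $0.11/1M figure is the one quoted above, while the GPT-4o comparison price and the workload size are assumptions for illustration only.

      # Rough cost comparison for a batch job, e.g. classifying 100k documents
      # at ~2,000 input tokens each. $0.11/1M is the figure quoted above; the
      # GPT-4o input price is an assumed illustrative number, not a quote.
      docs = 100_000
      tokens_per_doc = 2_000
      total_tokens = docs * tokens_per_doc          # 200M input tokens

      llama4_price_per_m = 0.11                     # from the comment
      gpt4o_price_per_m = 2.50                      # assumed, for comparison

      print(f"Llama 4: ${total_tokens / 1e6 * llama4_price_per_m:,.2f}")   # ~$22
      print(f"GPT-4o : ${total_tokens / 1e6 * gpt4o_price_per_m:,.2f}")    # ~$500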

  • by greeneggs on 4/5/25, 8:49 PM

    FYI, the last sentence, "Start building today on GroqCloud – sign up for free access here…" links to https://conosle.groq.com/ (instead of "console")

  • by vessenes on 4/5/25, 8:54 PM

    Just tried this, thank you. Couple of questions: it looked like just Scout access for now; do you have plans for larger model access? Also, it seems like context length is always fairly short with you guys; is that an architectural or a cost-based decision?

  • by sinab on 4/6/25, 12:40 AM

    I got an error when passing a prompt with about 20k tokens to the Llama 4 Scout model on Groq (despite Llama 4 supporting up to a 10M token context). Groq responds to the POST https://api.groq.com/openai/v1/chat/completions with a 413 (Payload Too Large) error.

    Is there some technical limitation on the context window size with LPUs, or is this a temporary stop-gap measure to avoid overloading Groq's resources? Or something else?
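
    A minimal sketch of handling that failure mode against the endpoint quoted above; the model ID is the one mentioned elsewhere in the thread, and treating 413 as "chunk or truncate the prompt" is an assumption about a provider-side cap, not documented behavior.

      import os
      import requests

      def chat(prompt: str) -> str:
          # POST to the endpoint from the comment above; GROQ_API_KEY is an
          # assumed environment variable name.
          resp = requests.post(
              "https://api.groq.com/openai/v1/chat/completions",
              headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
              json={
                  "model": "meta-llama/llama-4-scout-17b-16e-instruct",
                  "messages": [{"role": "user", "content": prompt}],
              },
              timeout=120,
          )
          if resp.status_code == 413:
              # Provider-side payload cap hit well below the model's claimed
              # 10M window; split or truncate the prompt and retry.
              raise ValueError("Payload too large for this provider; chunk the prompt.")
          resp.raise_for_status()
          return resp.json()["choices"][0]["message"]["content"]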

  • by jasonjmcghee on 4/5/25, 9:13 PM

    Seems to be about 500 tk/s. That's actually significantly less than I expected / hoped for, but fantastic compared to nearly anything else. (specdec when?)

    Out of curiosity, the console is letting me set max output tokens to 131k but errors above 8192. What's the max intended to be? (8192 max output tokens would be rough after getting spoiled by the 128K output of Claude 3.7 Sonnet and the 64K of the Gemini models.)
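
    One way to probe that limit empirically from the API side; max_tokens is the standard OpenAI-compatible parameter, the endpoint and model ID are the ones mentioned in this thread, and the two values tried are just the numbers above.

      import os
      import requests

      # Try the advertised 131k cap, then 8192, and see which the API accepts.
      for max_tokens in (131_072, 8_192):
          resp = requests.post(
              "https://api.groq.com/openai/v1/chat/completions",
              headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
              json={
                  "model": "meta-llama/llama-4-scout-17b-16e-instruct",
                  "messages": [{"role": "user", "content": "Write a long story."}],
                  "max_tokens": max_tokens,
              },
              timeout=120,
          )
          print(max_tokens, resp.status_code, resp.reason)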

  • by growdark on 4/5/25, 9:12 PM

    Would it be realistic to buy and self-host the hardware to run, for example, the latest Llama 4 models, assuming a budget of less than $500,000?
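
    For a rough sense of scale, a back-of-envelope weight-memory estimate; the parameter counts (roughly 109B total for Scout, ~400B for Maverick) are approximate public figures treated as assumptions here, and KV cache, activations, and serving overhead are ignored.

      # Back-of-envelope GPU memory needed just for the weights at different
      # precisions. Parameter counts are assumptions; overheads are ignored.
      def weight_gb(params_billion: float, bytes_per_weight: float) -> float:
          return params_billion * 1e9 * bytes_per_weight / 1e9

      for name, params in [("Scout (~109B total)", 109), ("Maverick (~400B total)", 400)]:
          for label, bpw in [("bf16", 2.0), ("fp8", 1.0), ("int4", 0.5)]:
              print(f"{name} @ {label}: ~{weight_gb(params, bpw):.0f} GB of weights")
      # e.g. Maverick at bf16 is ~800 GB of weights alone, i.e. a multi-GPU node.
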
  • by geor9e on 4/6/25, 12:31 AM

    I'm glad I saw this because llama-3.3-70b-versatile just stopped working in my app. I switched it to meta-llama/llama-4-scout-17b-16e-instruct and it started working again. Maybe Groq stopped supporting the old one?
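
    For what it's worth, a minimal sketch of a fallback that keeps an app working across a model deprecation like this; the two model IDs are the ones from the comment, Groq's OpenAI-compatible base URL follows from the endpoint quoted earlier in the thread, and the error types caught are an assumption about how a retired model fails.

      import os
      from openai import OpenAI, NotFoundError, BadRequestError

      # Standard OpenAI client pointed at Groq's OpenAI-compatible API.
      client = OpenAI(
          base_url="https://api.groq.com/openai/v1",
          api_key=os.environ["GROQ_API_KEY"],
      )

      def chat(prompt: str) -> str:
          # Try the old model ID first, then fall back to the Llama 4 Scout ID.
          for model in ("llama-3.3-70b-versatile",
                        "meta-llama/llama-4-scout-17b-16e-instruct"):
              try:
                  resp = client.chat.completions.create(
                      model=model,
                      messages=[{"role": "user", "content": prompt}],
                  )
                  return resp.choices[0].message.content
              except (NotFoundError, BadRequestError):
                  continue  # model retired or rejected; try the next one
          raise RuntimeError("No configured model is available")
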
  • by imcritic on 4/5/25, 9:12 PM

    All I get is {"error":{"message":"Not Found"}}