from Hacker News

Llama 3.1

by luiscosio on 7/23/24, 2:47 PM with 269 comments

  • by dang on 7/23/24, 5:56 PM

    Related ongoing thread:

    Open source AI is the path forward - https://news.ycombinator.com/item?id=41046773 - July 2024 (278 comments)

  • by lelag on 7/23/24, 3:25 PM

    The 405b model is actually competitive against closed source frontier models.

    Quick comparison with GPT-4o:

        +----------------+-------+-------+
        |     Metric     | GPT-4o| Llama |
        |                |       | 3.1   |
        |                |       | 405B  |
        +----------------+-------+-------+
        | MMLU           |  88.7 |  88.6 |
        | GPQA           |  53.6 |  51.1 |
        | MATH           |  76.6 |  73.8 |
        | HumanEval      |  90.2 |  89.0 |
        | MGSM           |  90.5 |  91.6 |
        +----------------+-------+-------+
  • by zone411 on 7/23/24, 8:35 PM

    I've just finished running my NYT Connections benchmark on all three Llama 3.1 models. The 8B and 70B models improve on Llama 3 (12.3 -> 14.0, 24.0 -> 26.4), and the 405B model is near GPT-4o, GPT-4 turbo, Claude 3.5 Sonnet, and Claude 3 Opus at the top of the leaderboard.

    GPT-4o 30.7

    GPT-4 turbo (2024-04-09) 29.7

    Llama 3.1 405B Instruct 29.5

    Claude 3.5 Sonnet 27.9

    Claude 3 Opus 27.3

    Llama 3.1 70B Instruct 26.4

    Gemini Pro 1.5 0514 22.3

    Gemma 2 27B Instruct 21.2

    Mistral Large 17.7

    Gemma 2 9B Instruct 16.3

    Qwen 2 Instruct 72B 15.6

    Gemini 1.5 Flash 15.3

    GPT-4o mini 14.3

    Llama 3.1 8B Instruct 14.0

    DeepSeek-V2 Chat 236B (0628) 13.4

    Nemotron-4 340B 12.7

    Mixtral-8x22B Instruct 12.2

    Yi Large 12.1

    Command R Plus 11.1

    Mistral Small 9.3

    Reka Core-20240501 9.1

    GLM-4 9.0

    Qwen 1.5 Chat 32B 8.7

    Phi-3 Small 8k 8.4

    DBRX 8.0

  • by foundval on 7/23/24, 3:36 PM

    You can chat with these new models at ultra-low latency at groq.com. 8B and 70B API access is available at console.groq.com. 405B API access for select customers only – GA and 3rd party speed benchmarks soon.

    If you want to learn more, there is a writeup at https://wow.groq.com/now-available-on-groq-the-largest-and-m....

    (disclaimer, I am a Groq employee)

  • by netsec_burn on 7/23/24, 3:09 PM

    Today appears to be the day you can run an LLM competitive with GPT-4o at home, given the right hardware. Incredible for the progress and advancement of the technology.

    Statement from Mark: https://about.fb.com/news/2024/07/open-source-ai-is-the-path...

  • by meetpateltech on 7/23/24, 3:00 PM

    Open Source AI Is the Path Forward - Mark Zuckerberg

    https://about.fb.com/news/2024/07/open-source-ai-is-the-path...

  • by ajhai on 7/23/24, 6:12 PM

    You can already run these models locally with Ollama (ollama run llama3.1:latest), as well as on hosted platforms like Hugging Face, Groq, etc.

    If you want a playground to test this model locally or want to quickly build some applications with it, you can try LLMStack (https://github.com/trypromptly/LLMStack). I wrote last week about how to configure and use Ollama with LLMStack at https://docs.trypromptly.com/guides/using-llama3-with-ollama.

    Disclaimer: I'm the maintainer of LLMStack
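
    For scripting rather than interactive use, Ollama also exposes a local REST API (default port 11434). A minimal sketch, assuming the model has already been pulled with "ollama pull llama3.1" and the server is running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> bytes:
    """Serialize a non-streaming request body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the completion text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_generate_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running server:
# generate("llama3.1", "Why is the sky blue? Answer in one sentence.")
```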

  • by primaprashant on 7/23/24, 3:14 PM

    I have found Claude 3.5 Sonnet, along with the artifacts feature, really good for coding tasks, and it seems like it's still the king on the coding benchmarks.
  • by CGamesPlay on 7/24/24, 1:42 AM

    The LMSys Overall leaderboard <https://chat.lmsys.org/?leaderboard> can tell us a bit more about how these models will perform in real life, rather than in a benchmark context. By comparing the Elo score against the MMLU benchmark scores, we can see which models outperform or underperform relative to their benchmark scores. A low Elo relative to MMLU indicates a model that is more optimized for the benchmark, while a higher one indicates a model more optimized for real-world use. Using that, we can make some inferences about the training data used, and then extrapolate how future models might perform. Here's a chart: <https://docs.getgrist.com/gV2DtvizWtG7/LLMs/p/5?embed=true>

    Examples: OpenAI's GPT 4o-mini is second only to 4o on LMSys Overall, but is 6.7 points behind 4o on MMLU. It's "punching above its weight" in real-world contexts. The Gemma series (9B and 27B) are similar, both beating the mean in terms of Elo per MMLU point. Microsoft's Phi series are all below the mean, meaning they have strong MMLU scores but aren't preferred in real-world contexts.

    Llama 3 8B previously did substantially better than the mean on LMSys Overall, so hopefully Llama 3.1 8B will be even better! The 70B variant was interestingly right on the mean. Hopefully the 405B variant won't fall below!
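
    The over/under-performance idea boils down to regressing arena Elo on MMLU and inspecting the residuals. A sketch with made-up placeholder scores (not the actual leaderboard values):

```python
# Regress Elo on MMLU with ordinary least squares; the residual tells you
# whether a model is preferred more (positive) or less (negative) in the
# arena than its benchmark score predicts. Numbers below are illustrative.
scores = {
    # model: (mmlu, elo)
    "model-a": (88.7, 1287),
    "model-b": (82.0, 1278),
    "model-c": (79.0, 1217),
    "model-d": (78.0, 1170),
}

n = len(scores)
xs = [mmlu for mmlu, _ in scores.values()]
ys = [elo for _, elo in scores.values()]
mx, my = sum(xs) / n, sum(ys) / n

# Fit Elo ≈ a * MMLU + b
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# Positive residual = "punching above its weight" in real-world use.
residuals = {name: elo - (a * mmlu + b) for name, (mmlu, elo) in scores.items()}
```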

  • by kingsleyopara on 7/23/24, 3:38 PM

    The biggest win here has to be the context length increase to 128k from 8k tokens. Until now, my understanding is there haven't been any open models anywhere close to that.
  • by Workaccount2 on 7/23/24, 4:30 PM

    @dang why was this removed/filtered from the front page?
  • by AaronFriel on 7/23/24, 3:10 PM

    Is there pricing available on any of these vendors?

    Open source models are very exciting for self hosting, but the per-token hosted inference pricing hasn't been competitive with OpenAI and Anthropic, at least for a given tier of quality. (E.g.: Llama 3 70B costing between $1 and $10 per million tokens on various platforms, but Claude Sonnet 3.5 is $3 per million.)
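
    At these per-million-token rates the comparison is simple arithmetic. A quick sketch (prices illustrative; they vary by provider and over time):

```python
# Back-of-the-envelope hosted-inference cost comparison at the price points
# the comment mentions. These are placeholder figures, not current quotes.
PRICE_PER_MTOK = {
    "llama-3-70b (cheapest host)": 1.00,
    "llama-3-70b (priciest host)": 10.00,
    "claude-3.5-sonnet (input)": 3.00,
}

def monthly_cost(tokens_per_month: float, price_per_mtok: float) -> float:
    """Dollar cost for a given monthly token volume at a $/Mtok rate."""
    return tokens_per_month / 1_000_000 * price_per_mtok

# e.g. 500M tokens/month:
for name, price in PRICE_PER_MTOK.items():
    print(f"{name}: ${monthly_cost(500_000_000, price):,.0f}/month")
```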

  • by primaprashant on 7/23/24, 2:59 PM

    The links on the page to the model card[1], research paper, and Prompt Guard Tutorial[2] don't exist yet

    [1]: https://github.com/meta-llama/llama-models/blob/main/models/...

    [2]: https://github.com/meta-llama/llama-recipes/blob/main/recipe...

  • by dado3212 on 7/23/24, 4:35 PM

    > We use synthetic data generation to produce the vast majority of our SFT examples, iterating multiple times to produce higher and higher quality synthetic data across all capabilities. Additionally, we invest in multiple data processing techniques to filter this synthetic data to the highest quality. This enables us to scale the amount of fine-tuning data across capabilities. [0]

    Have other major models explicitly communicated that they're trained on synthetic data?

    [0]. https://ai.meta.com/blog/meta-llama-3-1/

  • by jcmp on 7/23/24, 3:06 PM

    "Meta AI isn't available yet in your country" Hi from europe :/
  • by anotherpaulg on 7/24/24, 7:15 AM

    Llama 3.1 405B instruct is #7 on aider's leaderboard, well behind Claude 3.5 Sonnet & GPT-4o. When using SEARCH/REPLACE to efficiently edit code, it drops to #11.

    https://aider.chat/docs/leaderboards/

      77.4% claude-3.5-sonnet
      75.2% DeepSeek Coder V2 (whole)
      72.9% gpt-4o
      69.9% DeepSeek Chat V2 0628
      68.4% claude-3-opus-20240229
      67.7% gpt-4-0613
      66.2% llama-3.1-405b-instruct (whole)
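
    The SEARCH/REPLACE edit format mentioned there asks the model to emit the exact lines to find and their replacement between git-conflict-style markers. A simplified applier for that format (a sketch of the idea, not aider's actual parser):

```python
import re

# Match one SEARCH/REPLACE block: the literal text to find, then its replacement.
EDIT_RE = re.compile(r"<<<<<<< SEARCH\n(.*?)=======\n(.*?)>>>>>>> REPLACE", re.DOTALL)

def apply_edits(source: str, edit_text: str) -> str:
    """Apply each SEARCH/REPLACE block in edit_text to source, in order."""
    for search, replace in EDIT_RE.findall(edit_text):
        if search not in source:
            raise ValueError("SEARCH block not found in source")
        source = source.replace(search, replace, 1)
    return source

# Example: the model "fixes" a bug by swapping one line.
code = "def add(a, b):\n    return a - b\n"
edit = (
    "<<<<<<< SEARCH\n"
    "    return a - b\n"
    "=======\n"
    "    return a + b\n"
    ">>>>>>> REPLACE"
)
fixed = apply_edits(code, edit)
```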
  • by sagz on 7/23/24, 3:41 PM

    The 405B model is already being served on WhatsApp: https://ibb.co/kQ2tKX5
  • by ofou on 7/24/24, 4:12 AM

        Llama 3 Training System
              19.2 exaFLOPS
                  _____
                 /     \      Cluster 1     Cluster 2
                /       \    9.6 exaFLOPS  9.6 exaFLOPS
               /         \     _______      _______
              /  ___      \   /       \    /       \
        ,----' /   \`.     `-'  24000  `--'  24000  `----.
       (     _/    __)        GPUs          GPUs         )
        `---'(    /  )     400+ TFLOPS   400+ TFLOPS   ,'
             \   (  /       per GPU       per GPU    ,'
              \   \/                               ,'
               \   \        TOTAL SYSTEM         ,'
                \   \     19,200,000 TFLOPS    ,'
                 \   \    19.2 exaFLOPS      ,'
                  \___\                    ,'
                        `----------------'
  • by unraveller on 7/23/24, 4:27 PM

    What are the substantial changes from 3.0 to 3.1 (70B) in terms of training approach? They don't seem to say how the training data differed, just that both were 15T tokens. I gather 3.0 was just a preview run and 3.1 was distilled down from the 405B somehow.
  • by sfblah on 7/23/24, 3:33 PM

    Is there an actual open-source community around this in the spirit of other ones where people outside meta can somehow "contribute" to it? If I wanted to "work on" this somehow, what would I do?
  • by denz88 on 7/23/24, 3:02 PM

    I'm glad to see the nice incremental gains on the benchmarks for the 8B and 70B models as well.
  • by chown on 7/23/24, 4:10 PM

    Wow! The benchmarks are truly impressive, showing significant improvements across almost all categories. It's fascinating to see how rapidly this field is evolving. If someone had told me last year that Meta would be leading the charge in open-source models, I probably wouldn't have believed them. Yet here we are, witnessing Meta's substantial contributions to AI research and democratization.

    On a related note, for those interested in experimenting with large language models locally, I've been working on an app called Msty [1]. It allows you to run models like this with just one click and features a clean, functional interface. Just added support for both 8B and 70B. Still in development, but I'd appreciate any feedback.

    [1]: https://msty.app

  • by zhanghsfz on 7/23/24, 7:02 PM

    We supported Llama 3.1 405B model on our distributed GPU network at Hyperbolic Labs! Come and use the API for FREE at https://app.hyperbolic.xyz/models

    Let us know if you have other needs!

  • by TechDebtDevin on 7/23/24, 2:49 PM

    Nice, someone donate me a few 4090s :(
  • by ChrisArchitect on 7/23/24, 4:21 PM

  • by Atreiden on 7/23/24, 3:03 PM

    Is there a way to run this in AWS?

    Seems like the biggest GPU node they have is the p5.48xlarge @ 640GB (8xH100s). Routing between multiple nodes would be too slow unless there's an InfiniBand fabric you can leverage. Interested to know if anyone else is exploring this.
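
    The arithmetic behind the 640 GB concern is straightforward (weights only; the KV cache and activations need additional memory on top):

```python
# VRAM needed just to hold 405B parameters at various precisions, versus the
# 640 GB available on a p5.48xlarge (8x H100 80GB).
PARAMS = 405e9
NODE_VRAM_GB = 640

def weight_gb(bytes_per_param: float) -> float:
    """GB required to store the weights alone at a given precision."""
    return PARAMS * bytes_per_param / 1e9

for label, bpp in [("fp16/bf16", 2), ("fp8/int8", 1), ("int4", 0.5)]:
    gb = weight_gb(bpp)
    fits = "fits" if gb < NODE_VRAM_GB else "does NOT fit"
    print(f"{label}: {gb:.0f} GB -> {fits} in {NODE_VRAM_GB} GB")
```

    So fp16 weights alone (~810 GB) exceed a single node, while 8-bit (~405 GB) fits with headroom for the KV cache, which matches why quantization keeps coming up in this thread.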

  • by TheAceOfHearts on 7/23/24, 3:04 PM

    Does anyone know why they haven't released any 30B-ish param models? I was expecting that to happen with this release and have been disappointed once more. They also skipped doing a 30B-ish param model for llama2 despite claiming to have trained one.
  • by diimdeep on 7/23/24, 3:17 PM

    This 405B model seriously needs a quantization solution like 1.625 bpw ternary packing for BitNet b1.58

    https://github.com/ggerganov/llama.cpp/pull/8151
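
    A rough estimate of what 1.625 bits per weight would buy at this scale, compared to fp16:

```python
# Storage for 405B weights as a function of bits per weight. 1.625 bpw is the
# ternary packing figure from the linked BitNet b1.58 llama.cpp PR.
PARAMS = 405e9

def packed_gb(bits_per_weight: float) -> float:
    """GB required to store the weights at a given bit width."""
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"fp16:      {packed_gb(16):.0f} GB")
print(f"1.625 bpw: {packed_gb(1.625):.1f} GB")  # roughly 82 GB, single-node territory
```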

  • by rcarmo on 7/23/24, 7:49 PM

  • by bick_nyers on 7/23/24, 7:29 PM

    I'm curious what techniques they used to distill the 405B model down to 70B and 8B. I gave the paper they released a quick skim but couldn't find any details.
  • by jiriro on 7/24/24, 8:07 PM

    Can this Llama process ~1GB of custom XML data?

    And answer queries like:

    Give all <myObject> which refer to <location> which refer to an Indo-European <language>.
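
    1 GB of XML won't fit in any current context window, so the realistic route is a streaming parse (possibly with the LLM writing the query code for you). A sketch assuming a hypothetical schema where locations reference languages by id, languages carry a "family" attribute, and definitions precede references in document order:

```python
import xml.etree.ElementTree as ET

def indo_european_objects(source) -> list[str]:
    """Stream-parse XML and return ids of <myObject> elements whose <location>
    refers to an Indo-European <language>. Uses iterparse so memory stays
    bounded even for ~1 GB inputs; assumes definitions appear before uses."""
    languages, locations, hits = {}, {}, []
    for _, elem in ET.iterparse(source, events=("end",)):
        if elem.tag == "language":
            languages[elem.get("id")] = elem.get("family")
        elif elem.tag == "location":
            locations[elem.get("id")] = elem.get("language")
        elif elem.tag == "myObject":
            lang = locations.get(elem.get("location"))
            if languages.get(lang) == "Indo-European":
                hits.append(elem.get("id"))
        elem.clear()  # free the element's children as we go
    return hits
```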

  • by albert_e on 7/23/24, 3:00 PM

  • by IceHegel on 7/23/24, 7:27 PM

    Will 405b run on 8x H100s? Will it need to be quantized?
  • by breadsniffer on 7/23/24, 7:12 PM

    I tried it, and it's good, but I feel like the synthetic data used for training 3.1 doesn't hold up to GPT-4o, which is probably using human-curated data.
  • by daft_pink on 7/23/24, 3:19 PM

    What kind of machine do I need to run 405B local?
  • by yinser on 7/23/24, 3:07 PM

    The race to the bottom for pricing continues.
  • by casper14 on 7/23/24, 2:53 PM

    Damn 405b params
  • by htk on 7/23/24, 11:44 PM

    Very interesting! Running the 70B version via Ollama on a Mac and it's great. I asked it to "turn off the guidelines" and it did, then I asked it to turn off the disclaimers, and after that I asked for a list of possible "commands to reduce potential biases from the engineers" and it complied, giving me an interesting list.
  • by Vagantem on 7/23/24, 3:01 PM

    As someone who just started generating AI landing pages for Dropory, this is music to my ears
  • by kristianp on 7/23/24, 9:00 PM

    Has anyone got a comparison of the performance of Llama 3.1 8B and the recent GPT-4o-mini?
  • by ofermend on 7/23/24, 11:07 PM

    I'm excited to try it with RAG and see how it performs (the 405B model)
  • by ThrowawayTestr on 7/23/24, 2:57 PM

    Are there any other models with free unlimited use like chatgpt?
  • by Jiahang on 7/24/24, 12:52 PM

    It is nice to see that the 405B model is actually competitive against closed-source frontier models, but I just have an M2 Pro, so I probably can't run it.
  • by stiltzkin on 7/23/24, 8:06 PM

    WhatsApp now uses 70B too if you want to test it.
  • by hubraumhugo on 7/23/24, 2:58 PM

    I wrote about this when llama-3 came out, and this launch confirms it:

    Meta's goal from the start was to target OpenAI and the other proprietary model players with a "scorched earth" approach by releasing powerful open models to disrupt the competitive landscape.

    Meta can likely outspend any other AI lab on compute and talent:

    - OpenAI has an estimated revenue of $2B and is likely unprofitable. Meta generated revenue of $134B and profits of $39B in 2023.

    - Meta's compute resources likely outrank OpenAI by now.

    - Open source likely attracts better talent and researchers.

    - One possible outcome could be the acquisition of OpenAI by Microsoft to catch up with Meta.

    The big winners of this: devs and AI product startups