from Hacker News

Chrome's New Embedding Model: Smaller, Faster, Same Quality

by kaycebasques on 5/13/25, 2:39 PM with 11 comments

  • by jbellis on 5/13/25, 5:50 PM

    TIL that Chrome ships an internal embedding model, interesting!

    It's a shame that it's not open source; it's unlikely that there's anything super proprietary in an embeddings model that's optimized to run on CPU.

    (I'd use it if it were released; in the meantime, MiniLM-L6-v2 works reasonably well. https://brokk.ai/blog/brokk-under-the-hood)

  • by darepublic on 5/13/25, 7:18 PM

    > Yes – Chromium now ships a tiny on‑device sentence‑embedding model, but it’s strictly an internal feature.

    > What it’s for: “History Embeddings.” Since ~M‑128 the browser can turn every page‑visit title/snippet and your search queries into dense vectors so it can do semantic history search and surface “answer” chips. The whole thing is gated behind two experiments.

    ^ response from chatgpt
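The semantic history search described in the quoted response boils down to ranking stored page vectors by similarity to a query vector. A minimal sketch, using hand-picked 3-d vectors as stand-ins for real embedding-model output (the titles, vectors, and `cosine` helper here are illustrative, not Chrome's actual API):

```python
import numpy as np

# Toy "history": page titles mapped to dense vectors. Real systems would
# get these from an embedding model (e.g. a small sentence encoder).
history = {
    "Chrome embedding model shrinks": np.array([0.9, 0.1, 0.0]),
    "Best pasta recipes":             np.array([0.0, 0.2, 0.9]),
    "On-device ML in the browser":    np.array([0.7, 0.5, 0.1]),
}

# Stand-in embedding for a query like "browser ML model".
query_vec = np.array([0.85, 0.2, 0.05])

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank history entries by similarity to the query.
ranked = sorted(history, key=lambda t: cosine(history[t], query_vec),
                reverse=True)
print(ranked[0])  # best semantic match for the query
```

The same nearest-neighbor ranking works unchanged with real embeddings; only the vector dimensionality and how the vectors are produced differ.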

  • by pants2 on 5/13/25, 7:25 PM

    What does Chrome use embeddings for?

  • by Alifatisk on 5/14/25, 6:22 AM

    How does this affect Chrome’s load on the system? Will this make older devices’ fans start spinning as soon as I load up Chrome? Can anyone who’s more into embeddings tell?

  • by owebmaster on 5/18/25, 11:27 AM

    can this be used as a JS API?