by kaycebasques on 5/13/25, 2:39 PM with 11 comments
by jbellis on 5/13/25, 5:50 PM
It's a shame that it's not open source; it's unlikely that there's anything super proprietary in an embeddings model that's optimized to run on CPU.
(I'd use it if it were released; in the meantime, MiniLM-L6-v2 works reasonably well. https://brokk.ai/blog/brokk-under-the-hood)
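For reference, MiniLM-L6-v2 is the kind of small model you can run entirely on CPU via the sentence-transformers library. A minimal sketch of that fallback (the model name and API below are the standard Hugging Face / sentence-transformers ones, not anything from the unreleased Chrome model):

    # Embed short text snippets on CPU with the open all-MiniLM-L6-v2 model.
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2", device="cpu")

    snippets = [
        "def cosine(a, b): ...",
        "HTTP retry logic with exponential backoff",
    ]
    # encode() returns one 384-dimensional vector per input string.
    embeddings = model.encode(snippets, normalize_embeddings=True)
    print(embeddings.shape)  # (2, 384)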
by darepublic on 5/13/25, 7:18 PM
What it’s for: “History Embeddings.” Since ~M-128 the browser can turn every page-visit title/snippet and your search queries into dense vectors so it can do semantic history search and surface “answer” chips. The whole thing is gated behind two experiments:
^ response from chatgpt
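To make the quoted description concrete, semantic history search boils down to embedding page titles/snippets and the query as dense vectors, then ranking by cosine similarity. A hedged sketch under that assumption, using an open stand-in model and made-up history entries (not Chrome's internal history-embeddings model):

    # Rank browsing history by semantic similarity to a query.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2", device="cpu")

    history = [
        "Cheap flights to Lisbon - comparison",
        "Rust borrow checker explained",
        "Best sourdough starter recipe",
    ]
    history_vecs = model.encode(history, normalize_embeddings=True)

    query_vec = model.encode("how do I fly to Portugal on a budget", normalize_embeddings=True)
    scores = util.cos_sim(query_vec, history_vecs)[0]

    # The top-scoring page matches semantically even with no keyword overlap.
    best = int(scores.argmax())
    print(history[best], float(scores[best]))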
by pants2 on 5/13/25, 7:25 PM
by Alifatisk on 5/14/25, 6:22 AM
by owebmaster on 5/18/25, 11:27 AM