from Hacker News

Turbopuffer: Fast search on object storage

by Sirupsen on 7/9/24, 2:48 PM with 64 comments

  • by softwaredoug on 7/9/24, 8:01 PM

    Having worked with Simon, I can say he knows his sh*t. We talked a lot about what the ideal search stack would look like when we worked together at Shopify on search (him more infra, me more ML+relevance). I discussed how I just want a thing in the cloud to provide my retrieval arms, let me express ranking in a fluent, "py-data"-first way, and get out of my way.

    My ideal is that turbopuffer ultimately is like a Polars dataframe where all my ranking is expressed in my search API. I could just lazily express some lexical or embedding similarity, boost by various attributes, maybe recency, popularity, etc., to get a first pass (again, all just dataframe math). Then compute features for a reranking model I run on my side - dataframe math - and it "just works": it runs all of this as some kind of query execution DAG and stays out of my way.
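    A toy sketch of that "ranking as dataframe math" shape, written against plain Polars on a local frame; the column names, weights, and data here are made up for illustration and are not anything turbopuffer actually exposes:

        # Hypothetical first-pass ranking expressed as lazy dataframe math.
        import polars as pl

        candidates = pl.DataFrame({
            "doc_id": [1, 2, 3],
            "bm25": [12.1, 9.4, 15.0],            # lexical score from a first-pass retriever
            "embedding_sim": [0.71, 0.93, 0.55],  # cosine similarity to the query vector
            "recency_days": [3, 40, 1],
            "popularity": [120, 15, 980],
        })

        first_pass = (
            candidates.lazy()
            # Blend lexical and vector similarity, then boost by recency and popularity.
            .with_columns(
                (
                    0.6 * pl.col("embedding_sim")
                    + 0.4 * (pl.col("bm25") / pl.col("bm25").max())
                    + 0.1 / (1 + pl.col("recency_days"))
                    + 0.05 * pl.col("popularity").log1p()
                ).alias("score")
            )
            .sort("score", descending=True)
            .head(2)      # top-k candidates to feed a reranking model
            .collect()    # ideally this whole plan would run server-side as a DAG
        )
        print(first_pass)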

  • by cmcollier on 7/9/24, 8:18 PM

    Unrelated to the core topic, I really enjoy the aesthetic of their website. Another similar one is from Fixie.ai (also, interestingly, one of their customers).
  • by nh2 on 7/10/24, 3:58 AM

    > $3600.00/TB/month

    It doesn't have to be that way.

    At Hetzner I pay $200/TB/month for RAM. That's 18x cheaper.

    Sometimes you can reach the goal faster with less complexity by removing the part with the 20x markup.

  • by omneity on 7/9/24, 10:40 PM

    > In 2022, production-grade vector databases were relying on in-memory storage

    This is irking me. pg_vector has existed since before that, doesn't require in-memory storage, and can definitely handle vector search over 100m+ documents in a decently performant manner. Did they have a particular requirement somewhere?
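    For what it's worth, a minimal sketch of the disk-backed pgvector pattern being described, via psycopg2 and assuming a hypothetical items(id bigint, embedding vector(3)) table; the ivfflat index type and the <=> operator are pgvector's, everything else here is illustrative:

        import psycopg2

        conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
        cur = conn.cursor()

        # An IVFFlat (or HNSW) index lives on disk like any other Postgres index,
        # so the working set does not have to be pinned in RAM.
        cur.execute(
            "CREATE INDEX IF NOT EXISTS items_embedding_idx "
            "ON items USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100)"
        )
        conn.commit()

        # Nearest-neighbour query: <=> is pgvector's cosine-distance operator.
        cur.execute(
            "SELECT id FROM items ORDER BY embedding <=> %s::vector LIMIT 10",
            ("[0.1, 0.2, 0.3]",),
        )
        print(cur.fetchall())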

  • by bigbones on 7/9/24, 8:35 PM

    Sounds like a source-unavailable version of Quickwit? https://quickwit.io/
  • by eknkc on 7/9/24, 9:29 PM

    Is there a good general-purpose solution where I can store a large read-only database in S3 or something and do lookups directly on it?

    DuckDB can open Parquet files over HTTP and query them, but I found it triggers a lot of small requests, reading from a bunch of places in the files. I mean a lot.

    I mostly need key/value lookups and could potentially store each key in a separate object in S3, but for a couple hundred million objects... It would be a lot more manageable to have a single file and maybe a cacheable index.
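    A rough sketch of that "single file plus cacheable index" idea, using boto3 and a hypothetical layout (values concatenated into data.bin, plus an index.json mapping key -> (offset, length) that is fetched once and cached); at a couple hundred million keys you'd want a more compact, sorted index, but the access pattern is the same:

        import json
        import boto3

        s3 = boto3.client("s3")
        BUCKET = "my-bucket"  # hypothetical bucket and object names

        # One request to fetch the index; cache it in memory or on local disk.
        index = json.loads(
            s3.get_object(Bucket=BUCKET, Key="index.json")["Body"].read()
        )

        def lookup(key):
            """Single ranged GET per lookup against the one big data file."""
            entry = index.get(key)
            if entry is None:
                return None
            offset, length = entry
            resp = s3.get_object(
                Bucket=BUCKET,
                Key="data.bin",
                Range=f"bytes={offset}-{offset + length - 1}",
            )
            return resp["Body"].read()

        print(lookup("some-key"))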

  • by solatic on 7/10/24, 4:19 PM

    Is it feasible to try to build this kind of approach (hot SSD cache nodes sitting in front of object storage) with prior open-source art (Lucene)? Or are the search indexes themselves also proprietary in this solution?

    Having witnessed some very large Elasticsearch production deployments, being able to throw everything into S3 would be incredible. The applicability here isn't only for vector search.

  • by zX41ZdbW on 7/10/24, 6:45 AM

    A correction to the article. It mentions

        Warehouse BigQuery, Snowflake, Clickhouse ≥1s Minutes
    
    For ClickHouse, it should be: read latency <= 100ms, write latency <= 1s.

    ClickHouse is also well suited to logging, real-time analytics, and RAG.

  • by drodgers on 7/9/24, 9:22 PM

    I love the object-storage-first approach; it seems like such a natural fit for the cloud.
  • by cdchn on 7/10/24, 4:32 AM

    The very long introductory page has a ton of very juicy data in it, even if you don't care about the product itself.
  • by arnorhs on 7/10/24, 12:31 PM

    This looks super interesting. I'm not that familiar with vector databases. I thought they were mostly something used for RAG and other AI-related stuff.

    Seems like a topic I need to delve into a bit more.

  • by endisneigh on 7/10/24, 1:33 AM

    Slightly relevant - do people really want article recommendations? I don’t think I’ve ever read an article and wanted a recommendation. Even with this one - I sort of read it and that’s it; no feeling of wanting recommendations.

    Am I alone in this?

    In any case this seems like a pretty interesting approach. Reminds me of Warpstream which does something similar with S3 to replace Kafka.

  • by CyberDildonics on 7/9/24, 9:08 PM

    Sounds like a filesystem with attributes in a database.
  • by yawnxyz on 7/10/24, 4:21 AM

    Can't wait for the day they get into GA!
  • by vidar on 7/9/24, 8:45 PM

    Can you compare this to S3 Athena (ELI5)?
  • by yamumsahoe on 7/9/24, 11:29 PM

    Unsure if they are in the same category, but how does this compare to Quickwit?
  • by hipadev23 on 7/10/24, 1:19 AM

    Those are some woefully disappointing and incorrect metrics you've got for ClickHouse there (read and write latency are both sub-second; the storage medium would be "Memory + Replicated SSDs"), but I understand what you're going for and why you categorized it where you did.