by levkk on 5/26/25, 4:55 PM with 80 comments
Here’s a walkthrough of how it works: https://www.youtube.com/watch?v=y6sebczWZ-c
Running Postgres at scale is hard. Eventually, one primary isn’t enough, at which point you need to split it up. Since there is currently no good tooling out there to do this, teams end up breaking their apps apart instead.
If you’re familiar with PgCat, my previous project, PgDog is its spiritual successor but with a fresh codebase and new goals. If not, PgCat is a pooler for Postgres also written in Rust.
So, what’s changed and why a new project? Cross-shard queries are supported out of the box. The new architecture is more flexible, completely asynchronous and supports manipulating the Postgres protocol at any stage of query execution. (Oh, and you guessed it — I adopted a dog. Still a cat person though!)
Not everything is working yet, but simple aggregates like max(), min(), count(*) and sum() are in. More complex functions like percentiles and average will require a bit more work. Sorting (i.e. ORDER BY) works, as long as the values are part of the result set, e.g.:
SELECT id, email FROM users
WHERE admin = true
ORDER BY 1 DESC;
PgDog buffers and sorts the rows in memory before sending them to the client. Most of the time, the working set is small, so this is fine. For larger results, we’ll need to spill to disk, just like Postgres does, but for the OLTP workloads PgDog is targeting, we want to keep things fast. Sorting currently works for bigint, integer, and text/varchar. It’s pretty straightforward to add the other data types; I just need to find the time and make sure to handle binary encoding correctly.

All standard Postgres features work as normal for unsharded and direct-to-shard queries. As long as you include the sharding key (a column like customer_id, for example) in your query, you won’t notice a difference.
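For example, with customer_id as the sharding key (orders is a made-up table), the first query below goes straight to one shard, while the second fans out to all shards and PgDog sums the partial counts before returning a single row:

SELECT id, total FROM orders
WHERE customer_id = 1234;

SELECT count(*) FROM orders;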
How does this compare to Citus? In case you’re not familiar, Citus is an open source extension for sharding Postgres. It runs inside a single Postgres node (a coordinator) and distributes queries between worker databases.
PgDog’s architecture is fundamentally different. It runs outside the DB: it’s a proxy, so you can deploy it anywhere, including managed Postgres like RDS, Cloud SQL and others where Citus isn’t available. It’s multi-threaded and asynchronous, so it can handle thousands, if not millions, of concurrent connections. Its focus is OLTP, not OLAP. Meanwhile, Citus is more mature and has good support for cross-shard queries and aggregates. It will take PgDog a while to catch up.
My Rust has improved since my last attempt at this and I learned how to use the bytes crate correctly. PgDog does almost zero memory allocations per request. That results in a 3-5% performance increase over PgCat and a much more consistent p95. If you’re obsessed with performance like me, you know that small percentage is nothing to sneeze at. Like before, multi-threaded Tokio-powered PgDog leaves the single-threaded PgBouncer in the dust (https://pgdog.dev/blog/pgbouncer-vs-pgdog).
Since we’re using pg_query (which itself bundles the Postgres parser), PgDog can understand all Postgres queries. This is important because we can not only correctly extract the WHERE clause and INSERT parameters for automatic routing, but also rewrite queries. This will be pretty useful when we add support for more complex aggregates, like avg(), and cross-shard joins!
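To sketch the avg() case (not implemented yet; this is just how the rewrite decomposes, with an illustrative orders table):

-- the client sends:
SELECT avg(price) FROM orders;

-- an average of per-shard averages would be wrong, so each shard
-- would instead receive:
SELECT sum(price), count(price) FROM orders;

-- and PgDog returns sum-of-sums divided by sum-of-counts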
Read/write traffic split is supported out of the box, so you can put PgDog in front of the whole cluster and ditch the code annotations. It’s also a load balancer, so you can deploy it in front of multiple replicas to get 4 9’s of uptime.
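In practice, that means statements are routed by kind, with no client-side hints (users is just an example table):

-- reads can be load-balanced across replicas
SELECT * FROM users WHERE id = 25;

-- writes always go to the primary
UPDATE users SET last_seen = now() WHERE id = 25;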
One of the coolest features so far, in my opinion, is distributed COPY. This works by hacking the Postgres network protocol and sending individual rows to different shards (https://pgdog.dev/blog/hacking-postgres-wire-protocol). You can just use it without thinking about cluster topology, e.g.:
COPY temperature_records (sensor_uuid, created_at, value)
FROM STDIN CSV;
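From psql, for instance, a client-side \copy is sent over the wire as COPY ... FROM STDIN, so PgDog can split the stream row by row (the file name is made up):

\copy temperature_records (sensor_uuid, created_at, value) FROM 'readings.csv' CSV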
The sharding function is straight out of Postgres partitions and supports uuid v4 and bigint. Technically, it works with any data type, but I just haven’t added all the wrappers yet. Let me know if you need one.

What else? Since we have the Postgres parser handy, we can inspect, block and rewrite queries. One feature I was playing with is ensuring that the app passes the customer_id in all queries, to avoid data leaks between tenants. Brain dump of that in my blog here: https://pgdog.dev/blog/multi-tenant-pg-can-be-easy.
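A sketch of what that check could look like, with customer_id as the tenant key (invoices is an illustrative table):

-- fine: scoped to a single tenant
SELECT * FROM invoices WHERE customer_id = 42 AND status = 'open';

-- flagged (or blocked): no tenant key, could read other tenants' rows
SELECT * FROM invoices WHERE status = 'open';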
What’s on the roadmap: (re)sharding Postgres using logical replication, so we can scale DBs without downtime. There’s a neat trick for doing this quickly on copy-on-write filesystems (like EBS used by RDS, Google Cloud volumes, ZFS, etc.). I’ll publish a blog post on this soon. More at-scale features like blocking bad queries and just general “I wish my Postgres proxy could do this” stuff. Speaking of which, if you can think of any more features you’d want, get in touch. Your wishlist can become my roadmap.
PgDog is being built in the open. If you have thoughts or suggestions about this topic, I would love to hear them. Happy to listen to your battle stories with Postgres as well.
Happy hacking!
Lev
by jashmatthews on 5/27/25, 3:25 AM
I've been looking into PgDog for sharding a 40TB Postgres database at the moment, vs. building something ourselves. This could be a good opportunity to collaborate, because what we need is something more like Vitess for PostgreSQL. The scatter-gather stuff is great, but what we really need is config management via something like etcd, shard splitting, best-effort transactions for doing schema changes across all shards, etc.
Almost totally unrelated, but have you had good success using pg_query.rs to rewrite queries? Maybe I misunderstood how pg_query.rs works, but rewriting an AST seems like a nightmare given that the AST types don't really support mutability or deep cloning. I ended up using the sqlparser crate, which supports mutability via Visitors. I have a side project I'm chipping away at to build online schema change for PG using shadow tables and logical replication, à la gh-ost.
Jake
by denysvitali on 5/27/25, 6:49 AM
Such a cool project, good job Lev!
by williamdclt on 5/26/25, 10:13 PM
I don’t know that I’d want my sharding to be so transparently handled / abstracted away. First, because usually sharding is on the tenancy boundary, and I’d want friction on breaking this boundary. Second, because the implications of joining across shards are not the same as in-shard (performance, memory, CPU), and I’d want to make that explicit too.
That takes nothing out of this project, it’s really impressive stuff and there’s tons of use cases for it!
by mijoharas on 5/26/25, 6:01 PM
Congrats on the launch Lev, and keep it up!
by xnickb on 5/26/25, 7:22 PM
For me, the key point in such projects is always the handling of distributed queries. It's exciting that PgDog tries to stay transparent/compatible while operating at the network layer.
Of course the limitations that are mentioned in the docs are expected and will require trade-offs. I'm very curious to see how you will handle this. If there is any ongoing discussion on the topic, I'd be happy to follow and maybe even share ideas.
Good luck!
by Existenceblinks on 5/27/25, 12:55 AM
> Unique indexes: Not currently supported. Requires query rewriting and a separate execution engine to validate uniqueness across all shards.
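(A common workaround in sharded setups, until that lands: include the sharding key in the unique constraint. Per-shard uniqueness is then global, since all rows for one key live on the same shard. Column names are illustrative.)

CREATE UNIQUE INDEX ON users (customer_id, email);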
But still looks promising.
by PeterZaitsev on 5/27/25, 7:06 PM
What can be a challenge with solutions like this is getting the last 1% right when it comes to sharding tricky queries properly (or at least detecting queries that aren't handled properly), and also isolation and consistency.
by aeyes on 5/26/25, 10:00 PM
The benchmarks presented only seem to address standard pooling; I'd like to see what it looks like once query parsing and cross-shard joins come into play.
by jimmyl02 on 5/26/25, 9:42 PM
If not, what is the approach to enable restarts without downtime (say, one node crashes)?
by sroussey on 5/26/25, 5:30 PM
What’s the long term (business) plan to keep it updated?
by ewalk153 on 5/26/25, 10:27 PM
Hot shard management is a job in and of itself and adds a lot of operational complexity.
by hn_throwaway_99 on 5/27/25, 2:38 PM
https://news.ycombinator.com/item?id=44071418
Quote from the article:
> At OpenAI, we utilize an unsharded architecture with one writer and multiple readers, demonstrating that PostgreSQL can scale gracefully under massive read loads.
Of course, if you have a lot of write volume this would be an unsuitable architecture, but just a reminder that pg can scale a lot more than many people think with just a single writer.