by Vonng on 3/15/24, 3:43 AM with 142 comments
by egnehots on 3/15/24, 7:22 AM
- The codebase is old and huge, accruing some heavy technical debt, making it a less than ideal foundation for iterating quickly on a new paradigm like AI and vector databases.
- Some ancient design decisions have aged poorly, such as its one-connection-per-process model, which is not as efficient as distributing async tasks over thread pools. If not mitigated through an external connection pooler, you can easily run into real production issues.
- Certain common use cases suffer from poor performance; write amplification, for example, is a known issue. Many junior developers assume they can cheaply update a timestamp or increment a counter on a wide main table, not realizing that each such UPDATE writes out an entire new version of the row.
So, yes, PG is one of the best compromises available on the database market today. It's robust, offers good enough performance, and is feature-rich. However, I don't believe it can become the ONE database for all purposes.
Using a dedicated tool best suited for a specific use case still has its place; SQLite and DuckDB, for instance, are very different solutions with interesting trade-offs.
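To make the write-amplification point above concrete, here's a toy MVCC model in plain Python (an illustration only, not actual Postgres internals): updating a single hot column still writes out a complete new row version, so the write cost scales with row width, not with the size of the changed column.

```python
# Toy MVCC sketch: every UPDATE appends a full copy of the row.

rows = {}          # tid -> list of row versions (each version is a full dict)
bytes_written = 0  # rough proxy for write amplification

def insert(tid, row):
    global bytes_written
    rows[tid] = [dict(row)]
    bytes_written += len(str(row))

def update(tid, **changes):
    """MVCC-style update: copy the latest version, apply changes, append."""
    global bytes_written
    new_version = dict(rows[tid][-1])
    new_version.update(changes)
    rows[tid].append(new_version)
    bytes_written += len(str(new_version))  # the *entire* row is rewritten

# A wide row: one hot timestamp column plus many cold columns.
wide = {"updated_at": 0, **{f"col{i}": "x" * 100 for i in range(20)}}
insert(1, wide)
for t in range(1, 6):
    update(1, updated_at=t)  # touches one column, rewrites ~2 KB each time

print(len(rows[1]))  # 6 row versions now exist for one logical row
```

The usual mitigation is the one hinted at above: move the hot column into its own narrow table so each update only rewrites a small row.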
by jillesvangurp on 3/15/24, 7:41 AM
I played with postgresql a while ago to implement search. It's not horrible. But it's nowhere near Elasticsearch in terms of its capabilities. It's adequate for very narrow use cases where search ranking really doesn't matter much (i.e. your revenue is not really impacted by poor precision and recall metrics). If your revenue does depend on that (e.g. because people buy stuff that they find on your website), you should be a bit more careful about monitoring your search performance and using the right tools to improve it.
But for everything else you only have a handful of tools to work with to tune things. And what little there is is hard to use and kind of clunky. Great if that really is all you need and you know what you are doing, but if you've used Elasticsearch and know how to use it properly you'll find yourself missing quite a few things. Maybe some of those things will get added over time, but for now it simply does not give you a lot to work with.
That being said, if you go down that path the trigram support in postgres is actually quite useful for implementing simple search. I went for that after trying the very clunky tsvector support and finding it very underwhelming for even the simplest of use cases. Trigrams are easier to deal with in postgres and you can implement some half decent ranking with it. Great for searching across product ids, names, and other short strings.
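For the curious, the similarity measure behind pg_trgm can be approximated in a few lines of plain Python (a rough sketch of the idea, not the extension's exact normalization rules): break each word into padded 3-grams and compare the two trigram sets by overlap over union.

```python
# Rough approximation of pg_trgm-style trigram similarity:
# lowercase, pad each word with two leading and one trailing space,
# extract 3-grams, then compare the sets.

def trigrams(s):
    out = set()
    for word in s.lower().split():
        padded = "  " + word + " "
        out |= {padded[i:i + 3] for i in range(len(padded) - 2)}
    return out

def similarity(a, b):
    ta, tb = trigrams(a), trigrams(b)
    if not ta and not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

print(similarity("postgres", "postgresql") > similarity("postgres", "mysql"))  # True
```

In Postgres itself this is `CREATE EXTENSION pg_trgm;` plus a GIN index with `gin_trgm_ops`, after which you can filter with the `%` operator and rank with `similarity()`.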
by zachmu on 3/15/24, 5:27 PM
https://github.com/dolthub/doltgresql/
We're doing this because our main product (Dolt) is MySQL-compatible, but a lot of people prefer postgres. Like, they really strongly prefer postgres. When figuring out how to support them, we basically had three options:
1) Foreign data wrapper. This doesn't work well because you can't use non-native stored procedure calls, which are used heavily throughout our product (e.g. CALL DOLT_COMMIT('-m', 'changes'), CALL DOLT_BRANCH('newBranch')). We would have had to invent a new UX surface area for the product just to support Postgres.
2) Fork postgres, write our own storage layer and parser extensions, etc. Definitely doable, but it would mean porting our existing Go codebase to C, and not being able to share code with Dolt as development continues. Or else rewriting Dolt in C, throwing out the last 5 years of work. Or doing something very complicated and fragile to call a Go library from C code.
3) Emulation. Keep Dolt's Go codebase and query engine and build a Postgres layer on top of it to support the syntax, wire protocol, types, functions, etc.
Ultimately we went with the emulation approach as the least bad option, but it's an uphill climb to get to enough postgres support to be worth using. Our main effort right now is getting all of postgres's types working.
by monero-xmr on 3/15/24, 4:23 AM
I'm not a database developer, and last time I researched this (a few years ago) I found many good reasons for not enabling this from postgres contributors. But it would still be very useful.
by fulafel on 3/15/24, 6:44 AM
by TOMDM on 3/15/24, 5:38 AM
The feature I'd love to see added that has been kicking around the mailing list for ages now would be incremental view maintenance.
Being able to keep moderately complex analysis workloads fresh in realtime would be such a boon.
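The idea behind incremental view maintenance can be sketched in a few lines (a plain-Python illustration, not how the actual IVM patches are implemented): apply each base-table delta to the materialized aggregate instead of re-running the whole view query.

```python
# IVM sketch: maintain SUM/COUNT per group as deltas arrive,
# rather than re-aggregating the base table on every refresh.
from collections import defaultdict

base = []                           # base table: (region, amount) rows
view = defaultdict(lambda: [0, 0])  # region -> [sum(amount), count(*)]

def insert_row(region, amount):
    base.append((region, amount))
    agg = view[region]              # O(1) delta, not a full re-aggregation
    agg[0] += amount
    agg[1] += 1

def delete_row(region, amount):
    base.remove((region, amount))
    agg = view[region]
    agg[0] -= amount
    agg[1] -= 1

insert_row("eu", 10)
insert_row("eu", 5)
insert_row("us", 7)
delete_row("eu", 5)
print(dict(view))  # {'eu': [10, 1], 'us': [7, 1]}
```

SUM and COUNT are the easy cases because deltas compose; MIN/MAX and joins are where real IVM gets hard, which is part of why the feature has been kicking around the mailing list for so long.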
by paulmd on 3/15/24, 7:57 AM
And a surprising amount of other stuff (similar to lisp inner platforms) converges on half-hearted, poorly-implemented replications of Postgres features… in this world you either evolve to Cassandra/elastic or return to postgres.
(not saying one or the other is better, mind you… ;)
by lmm on 3/15/24, 6:05 AM
by ksec on 3/15/24, 8:22 AM
I am not aware of such a machine in a single node, unless it is talking about vCPUs / threads. Intel's 288-core Sierra Forest doesn't offer a dual-socket option. So I have no idea where the 512 x86 cores came from.
by issung on 3/15/24, 5:25 AM
by ksec on 3/15/24, 9:16 AM
I have been stating this since at least 2020 if not earlier.
We are expecting the DDR6 and PCIe 7.0 specs to be finalised by 2025. You could expect them to be on the market no later than 2027. Although I believe we have reached the SSD IOPS limit without some special SSD with Z-NAND. I assume (I could be wrong) this makes SSD bandwidth on servers less important. In terms of the TSMC roadmap, that is about 1.4nm or 14A, although in the server sector they will likely be on 2nm. Hopefully we should have 800Gbps Ethernet by then with ConnectX card support. (I want to see the Netflix FreeBSD serving 1.6Tbps update)
We then have software and DBs that are faster and simpler to scale. What used to be a huge cluster of computers that was mentally hard to comprehend is now just a single computer, or a few larger servers, doing the job.
There is 802.3dj 1.6Tbps Ethernet looking at completion in 2026, although products tend to take much longer to reach the market compared to memory and PCI Express.
AMD Zen6C in ~2025/2026 with 256 cores per socket; in a dual-socket system that is 512 cores, or 1024 vCPUs / threads.
The future is exciting.
by patrickdavey on 3/15/24, 6:09 AM
One thing I've always liked about MySQL is that it pretty much looks after itself, whereas with Postgres I've had issues before doing upgrades (this was with brew though) and I'm not clear on whether it looks after itself for vacuuming etc.
Should I just give it a go the next time I'm upgrading? It does seem like a tool I need to get familiar with.
by swcode on 3/15/24, 5:39 AM
by artyom on 3/15/24, 4:23 PM
But Postgres is a work of art, and compared to all the other relational database options, if it's ultimately crowned the king of them all, it'd be well deserved.
I'd also say that the PG protocol and the extensions ecosystem are as important as the database engine.
by jpalomaki on 3/15/24, 7:08 AM
The analytics part should scale independently. Often this is only needed occasionally, so scale-to-zero (like Snowflake) would be great.
by thinkerswell on 3/15/24, 5:57 AM
by NorwegianDude on 3/15/24, 10:41 AM
I have seen a lot of people praising Postgres over e.g. MariaDB. But more often than not it seems to be people who lack knowledge.
Take this linked post, where the author points out "The untuned PostgreSQL performs poorly (x1050)" later followed by "This performance can’t be considered bad, especially compared to pure OLTP databases like MySQL and MariaDB (x3065, x19700)".
First of all, those are not pure OLTP databases. And if the author had taken a closer look at the benchmark, he would have seen that MariaDB using ColumnStore is at x98. That's 10x the performance of Postgres out of the box, and 200x faster than the author stated.
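The ratios are easy to check against the quoted multipliers (lower is faster):

```python
# Benchmark multipliers as quoted above (relative slowdown factors):
pg_untuned = 1050         # "untuned PostgreSQL performs poorly (x1050)"
mariadb_quoted = 19700    # the MariaDB figure the post's author cited
mariadb_columnstore = 98  # MariaDB ColumnStore in the same benchmark

print(round(pg_untuned / mariadb_columnstore, 1))   # 10.7 -> ~10x faster than untuned Postgres
print(round(mariadb_quoted / mariadb_columnstore))  # 201  -> ~200x faster than the quoted figure
```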
by givemeethekeys on 3/15/24, 6:03 AM
How does one go about finding paying customers when developing a new database tool? How does one figure out the size of the market, and pricing structure?
by elliotwagner on 3/16/24, 11:24 AM
by throwawaaarrgh on 3/15/24, 7:08 AM