by janpio on 3/19/24, 7:45 PM with 53 comments
by kstrauser on 3/19/24, 8:39 PM
I've been using PostgreSQL for decades, and I feel so spoiled. It always Just Works. Not to say there've never been bugs, but compared to anything else with that much surface area, it's a brilliant piece of engineering.
It's astonishing how often it's a perfectly fine stand-in for the "right" solution. Need a K-V store to hold a bunch of JSON docs indexed by UUID? Fine. Want to make an append-only log DB? Why not. Should you do those things? Probably not, but unless you specifically need to architect for global-scale concurrent usage, it's likely to work out just fine.
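As a rough sketch of that K-V pattern (not the commenter's code; Python with psycopg2, and the "docs" table and "dbname=app" DSN are placeholders for illustration), a jsonb column keyed by UUID plus a GIN index gets you surprisingly far:

    # Sketch: Postgres as a K-V store for JSON docs keyed by UUID.
    # The "docs" table and the DSN are assumptions for illustration.
    import json
    import uuid
    import psycopg2

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS docs (
                id   uuid PRIMARY KEY,
                body jsonb NOT NULL
            )
        """)
        # A GIN index makes containment queries (body @> '{...}') fast.
        cur.execute("CREATE INDEX IF NOT EXISTS docs_body_idx"
                    " ON docs USING gin (body)")
        doc_id = str(uuid.uuid4())
        cur.execute("INSERT INTO docs (id, body) VALUES (%s, %s)",
                    (doc_id, json.dumps({"kind": "example", "n": 1})))
        cur.execute("SELECT body FROM docs WHERE id = %s", (doc_id,))
        print(cur.fetchone()[0])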
For me, it's the default place to stick data unless I have a specific requirement that only something else can meet. I've never once regretted using it to launch a production system, and only a couple of times have I needed to migrate off it due to performance demands.
Thanks, PostgreSQL team! You rock.
by whartung on 3/19/24, 9:03 PM
I came from the Old Days when we had to chisel BASIC code into cooling silicon. Having something like a SQL RDBMS just sitting there, busy or not, maybe just wasting away, ready for any weird nonsense you throw at it, is just a treasure.
I have Postgres on my Mac. I've had Postgres on my Mac since I've had a Mac, so, what, 2006? I still have DBs on there that are now pushing 17 years old (after several PG version upgrades). I have the space, and no reason to delete them. Just there. Old projects, strange experiments, idle.
That I have this much capability languishing is amazing.
SQL databases used to be a Big Deal. They were a large step up from hand-coding B-tree indexes. I remember once we got a call from a client complaining about performance on a system we installed. We popped in, took a look around, and, yeah, we had dropped the ball: not a single index had been created on their system. It was just the tables. No wonder it was slowing down. Ten minutes of mad index creation later, all was well.
If you weren't there in those days, it's hard to appreciate the shift: suddenly indexes were (mostly) a performance concern, rather than a core thing the entire system was designed around. A paradigm shift in development.
SQL DBs were amazing. They were also rare and expensive, with custom libraries needed to access them. But they also came with generic query tools: no code to write just to beat on the data or dump out quick queries, just the SQL front end. Powerful. Capable. So, yeah, I held them on a bit of a pedestal.
And I can now just let one of those things, with untold modern capability and range, sit idle on my machine, just like I can leave a Calculator window open, waiting for whenever I decide I need to work with it.
Extraordinary.
by xnx on 3/19/24, 8:35 PM
"Postgres is eating the database world" (5 days ago, 138 comments)
https://news.ycombinator.com/item?id=39711863
Edit: included the wrong link
by ralusek on 3/19/24, 8:38 PM
RDS Proxy/PgBouncer-style pooling should be the default connection behavior. Ideally there would be no persistent connection at all; something more akin to HTTP would be great.
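For context, a minimal sketch of client-side pooling with psycopg2's built-in pool (the DSN is a placeholder); PgBouncer achieves something similar server-side, without touching application code:

    # Sketch: reuse connections via a client-side pool instead of
    # opening a fresh one per request. "dbname=app" is a placeholder.
    from psycopg2.pool import SimpleConnectionPool

    pool = SimpleConnectionPool(minconn=1, maxconn=10, dsn="dbname=app")

    conn = pool.getconn()      # borrow an already-open connection
    try:
        with conn, conn.cursor() as cur:
            cur.execute("SELECT 1")
    finally:
        pool.putconn(conn)     # hand it back for reuse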
Vacuuming is ridiculous. It doesn't make sense to me what could possibly take so long, nor that it needs to be blocking (I understand that it's now parallelizable-ish). Using a comparatively slow interpreted language, I can iterate through millions of items on disk and do any number of things within a few seconds at most. Yet I have had databases with, like, a few thousand rows somehow take hours upon hours to vacuum/analyze.
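For what it's worth, autovacuum can be tuned per table, and a manual VACUUM is easy to trigger; a sketch assuming psycopg2 and a hypothetical "events" table. One real gotcha: VACUUM refuses to run inside a transaction block, hence autocommit:

    # Sketch: tighten autovacuum for one hot table, then vacuum by hand.
    # "events" and the DSN are placeholders.
    import psycopg2

    conn = psycopg2.connect("dbname=app")
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    with conn.cursor() as cur:
        # Vacuum at ~5% dead rows instead of the 20% default, with no
        # cost-based throttling delay for this table.
        cur.execute("""
            ALTER TABLE events SET (
                autovacuum_vacuum_scale_factor = 0.05,
                autovacuum_vacuum_cost_delay   = 0
            )
        """)
        cur.execute("VACUUM (VERBOSE, ANALYZE) events")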
Nested transactions would be great. I know there are savepoints, but they don't work well when dealing with anything in parallel.
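For completeness, here is what the savepoint workaround looks like within a single session (psycopg2, hypothetical "accounts" table); the parallelism complaint stands, since savepoints only nest inside one connection's transaction:

    # Sketch: a savepoint as a poor man's nested transaction.
    # "accounts" and the DSN are placeholders.
    import psycopg2

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        cur.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
        cur.execute("SAVEPOINT risky")
        try:
            # Suppose this statement violates a constraint...
            cur.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 999")
        except psycopg2.Error:
            # ...only the work since the savepoint is undone; the outer
            # transaction stays healthy and commits on leaving the block.
            cur.execute("ROLLBACK TO SAVEPOINT risky")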
And finally, my #1 complaint: please let ME decide when to roll back/invalidate a transaction. Say I want to write something like an upsert: my code says "insert this record, and if I catch a unique constraint error, update the record instead." In Postgres, the moment that initial insert errors, my transaction is invalidated! I could have done 100 other things in this transaction so far, all invalidated because of a DB error that I was expecting to catch and handle myself at the application level, and now the entire transaction has to be rolled back. WHY?
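There are two standard escape hatches for readers hitting this: push the upsert into the statement itself with ON CONFLICT (so no error is ever raised), or fence the expected failure with a savepoint. A sketch assuming psycopg2 and a hypothetical "users" table:

    # Sketch: avoiding the aborted-transaction trap on upserts.
    # "users" and the DSN are placeholders.
    import psycopg2
    from psycopg2 import errors

    conn = psycopg2.connect("dbname=app")
    with conn, conn.cursor() as cur:
        # Option 1: let Postgres resolve the conflict; nothing to catch.
        cur.execute("""
            INSERT INTO users (id, name) VALUES (%s, %s)
            ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name
        """, (42, "Alice"))

        # Option 2: fence the insert with a savepoint so an expected
        # unique violation doesn't poison the rest of the transaction.
        cur.execute("SAVEPOINT attempt")
        try:
            cur.execute("INSERT INTO users (id, name) VALUES (%s, %s)",
                        (42, "Bob"))
        except errors.UniqueViolation:
            cur.execute("ROLLBACK TO SAVEPOINT attempt")
            cur.execute("UPDATE users SET name = %s WHERE id = %s", ("Bob", 42))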