by bagol on 6/8/25, 6:09 PM with 6 comments
by jjice on 6/9/25, 1:59 PM
The CEO (the one who wrote most of the initial code but was by then uninvolved in the software) never wanted to touch them - he saw the entire server as a house of cards. The problem was, not updating the thing just led to piles and piles of out-of-date packages, and eventually the entire distro it ran went past EOL. I'd say it lasted six years without an update until I showed that it _needed_ an update when they went for SOC 2. It honestly wasn't that bad to replicate - some strange edge cases, but like a day of work.
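Quantifying the backlog is the easy part; a minimal sketch, assuming a Debian-family distro (the actual distro wasn't named) with apt available:

    # refresh the package indexes, then list everything that's behind
    sudo apt update
    sudo apt list --upgradable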
Their MySQL instance was about the same age, but they chose a version older than the stable release at the time, so it also went EOL and they left it sitting there. Hell, I don't know if they've updated it to this day. For some context, it's a small enough DB (in terms of size) that you could probably even get away with a long-running SQL dump.
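For a DB that size, the dump-and-restore is roughly two commands; a minimal sketch, assuming InnoDB tables and standard mysqldump flags:

    # take a consistent snapshot without locking writes (InnoDB only)
    mysqldump --single-transaction --routines --triggers --all-databases > dump.sql
    # replay it on the replacement server
    mysql < dump.sql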
What scares me is that I have to imagine so many companies operate this way, and many of them handle much more sensitive data. C'est la vie, I guess.
by PaulShin on 6/9/25, 8:33 AM
It was a simple Node.js + PostgreSQL server running on a firewalled on-premise machine. We let it run untouched for just over two years. No OS updates, no package updates. It was terrifying, but it was the client's explicit demand: stability over new features. The lesson was that "uptime" and "security" can mean different things to different customers.
For our new company, Markhub, we're on the opposite end of the spectrum. We run a modern CI/CD pipeline on a cloud infrastructure. We deploy multiple times a week, sometimes multiple times a day.
My takeaway is that the "right" answer depends entirely on the service's promise to the customer. For some, reliability means "it never changes." For modern SaaS like ours, reliability means "it's always improving and secure." The real challenge is building a system that can deliver both when needed—which is exactly what we had to architect for our first on-premise enterprise deal.
by metadat on 6/9/25, 4:41 PM
Professionally, not long - the business has always cared infinitely more about reducing the risk of a security incident than about keeping a server up without restarting.
Unprofessionally, from 2019-2023 I achieved more than 1,200 days of uptime on a public-facing server. The only reason is how risky it can be to apply updates to some machines - you never know if some boot error will pop up and require driving across town and fiddling with it for several hours. Amusingly, I checked it for updates this weekend and Ubuntu Livepatch support had expired! How nice is that. (Note: I really don't recommend Ubuntu - Debian is so much better in every way I know of these days.)
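If you want to catch that expiry before it bites, the client will tell you; a minimal sketch, assuming Canonical's canonical-livepatch client is installed on the box:

    # shows whether the machine is still receiving kernel livepatches
    canonical-livepatch status --verbose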
Of course, YMMV. Yolo!
Sidenote: Achieving 10+ year uptimes with BSD isn't difficult or particularly noteworthy at all, which I find impressive.
by mmarian on 6/9/25, 5:33 PM
by zeke on 6/9/25, 5:10 PM
    19:08:00 up 1709 days, 16:01, 2 users, load average: 0.00, 0.00, 0.00
I'll probably just order a newer server and move over before rebooting now. There is no money riding on it.
by rizoa on 6/11/25, 2:30 PM