by brakmic on 4/20/23, 5:02 AM with 53 comments
by reacharavindh on 4/20/23, 9:00 AM
The "vendor agnostic" approach was the right call to make at that point in time. Sure, every business is unique and some cases fit better for a hands-off (lets pay for the convenience of AWS taking care of managed offerings) approach. But, it is a fallacy to think there is no cost to pay for that decision.
The cost of operating your vendor agnostic infrastructure is replaced by your team now needing to learn the intricacies of AWS such as IAM, AWS's way of networking, backups etc. Those operational needs dont just go away, they just become "easier" and more defined as AWS's way of doing it.
As a consultant, one must know where to draw the line and recommend the appropriate route.
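To make the "AWS's way of doing it" point concrete: even a routine task like scoping an app to one S3 bucket means learning IAM's policy grammar. A minimal sketch using boto3, where the policy name and bucket ARNs are made-up placeholders:

```python
import json

import boto3

# Hypothetical policy: read-only access to a single S3 bucket. Even this
# small grant requires knowing IAM's document grammar (Version, Statement,
# Action names, and that ListBucket targets the bucket ARN while GetObject
# targets the objects under it).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::example-app-bucket"],
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::example-app-bucket/*"],
        },
    ],
}

iam = boto3.client("iam")
created = iam.create_policy(
    PolicyName="example-app-s3-read",
    PolicyDocument=json.dumps(policy_document),
)
print(created["Policy"]["Arn"])
```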
by withinboredom on 4/20/23, 6:18 AM
I don’t have to pay someone else to handle this, but if I did, I would get rid of k8s in a heartbeat. I’ve seen a devops team of only a few people manage tens of thousands of traditional servers, but I doubt such a small team could handle a k8s cluster of the same size.
I’m considering moving back to traditional architecture for my blog and other projects. K8s has been fun, but there’s too much magic everywhere.
by thrashh on 4/20/23, 6:21 AM
Low-key, I hate touching Google-created projects: technically sound on paper, but a guaranteed usability disaster in practice.
by cocoland2 on 4/20/23, 10:07 AM
We had a lot of critical infra running on k8s, and then tech debt slowly started accumulating. Clusters have to get updated, older DNS versions in k8s are slow, and the networking strained (older Weave versions were bursting at the seams when traffic exploded as many applications were onboarded). SRE teams got overwhelmed, and the constant requests for adding PVCs (Kafka & C* were on k8s) took a toll (a sketch of such a request follows this comment). Sanity prevailed in the end, and the decision was made to move to hosted PaaS infra. I no longer work there; I'm just reminiscing about what we went through.
Though a "cloud-independent" solution will save pennies, it will definitely drown you in dollars of personnel costs and uptime/SLA penalties.
History repeats itself because we don't learn from our mistakes (ours or others').
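For a sense of what those recurring PVC requests looked like, here is a minimal sketch using the official kubernetes Python client (ca. 2023 versions); the claim name, namespace, size, and storage class are all hypothetical:

```python
from kubernetes import client, config

# Authenticate from the local kubeconfig; inside a pod you would use
# config.load_incluster_config() instead.
config.load_kube_config()

# Hypothetical claim for one Kafka broker's data volume: name, namespace,
# size, and storage class are placeholders.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data-kafka-0"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-ssd",
        resources=client.V1ResourceRequirements(requests={"storage": "500Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="kafka", body=pvc
)
```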
by dikei on 4/20/23, 8:14 AM
by mt42or on 4/20/23, 6:12 AM
by bradwood on 4/20/23, 6:06 AM
New shop: Lambda and EventBridge = life is good.
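For context, the Lambda-plus-EventBridge pattern alluded to here is roughly the following Python sketch; the bus name, event source, and detail-type are invented placeholders, and the routing rule that connects producer to handler is configured separately:

```python
import json

import boto3


def handler(event, context):
    # Lambda entry point. EventBridge delivers the standard envelope:
    # source, detail-type, and the JSON payload under "detail".
    order = event["detail"]
    print(f"processing order {order.get('order_id')}")
    return {"status": "ok"}


def publish_order_created(order_id: str) -> None:
    # Producer side: any service can drop a custom event onto the bus;
    # EventBridge rules decide which targets (Lambdas, queues, ...) get it.
    boto3.client("events").put_events(
        Entries=[
            {
                "EventBusName": "app-bus",
                "Source": "shop.orders",
                "DetailType": "OrderCreated",
                "Detail": json.dumps({"order_id": order_id}),
            }
        ]
    )
```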
by holografix on 4/20/23, 6:32 AM
Why didn’t the customer use EKS?
by metacatdud on 4/26/23, 6:30 AM
Every time I try to point out that systems are getting more and more complicated and call for simplicity, I am pushed away.
I've heard you are not taken seriously if you don't use a well-established cloud provider or the like.
Truth be told, there are not many projects you work on, or will work on, that need this kind of thing.
We thought the cloud would help us with a lot of things, but at what cost? And by cost I mean stress, data protection, money, etc.
by SilverBirch on 4/20/23, 11:41 AM
by theoldlove on 4/20/23, 5:55 AM
by qeternity on 4/20/23, 11:23 AM
We run Rancher across a couple of bare-metal clusters, and it's been a mostly amazing experience (ca. 3 years). The only issues we had were with Rancher-specific bugs, but those have been resolved, and for the most part our infra is pretty autonomous. We do all HA at the application layer, so local NVMe as opposed to network storage. This means Patroni, Redis Sentinel/Cluster, etc. (a Sentinel sketch follows this comment). But it broadly just works. Maybe we're not big enough to bump into issues, but I couldn't imagine migrating to the labyrinth of vendor lock-in masquerading as cloud services.
What am I missing? Why do we have such a wildly different experience to others?
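One plausible reading of "HA at the application layer" is the Redis Sentinel pattern: the client discovers the current master itself instead of depending on network storage or infrastructure-level failover. A minimal sketch with redis-py, where the sentinel hosts and the "mymaster" service name are placeholders:

```python
from redis.sentinel import Sentinel

# The client asks the sentinels which node is currently master, so failover
# is handled in the application layer rather than hidden by shared storage.
sentinel = Sentinel(
    [("sentinel-1", 26379), ("sentinel-2", 26379), ("sentinel-3", 26379)],
    socket_timeout=0.5,
)

master = sentinel.master_for("mymaster", socket_timeout=0.5)  # writes
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)  # reads

master.set("greeting", "hello")
print(replica.get("greeting"))
```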
by morelisp on 4/20/23, 11:50 AM
It feels like there's something deeper hiding here, more along the lines of "our developers really don't / can't care about how the software is operating in production."
by kyugkyugyuog on 4/20/23, 11:10 AM