by config_yml on 4/5/25, 1:22 PM with 12 comments
by horsawlarway on 4/5/25, 3:04 PM
I started with MicroK8s, and while it's a functional solution for some use cases, I was genuinely disappointed in its overall behavior (especially with small node-count clusters, in the 3 to 10 node range).
The biggest hit was Dqlite. I had a tremendous number of problems that originated explicitly with Dqlite: unexpectedly high CPU usage, failure to form consensus with even node counts (especially after a network split), configuration files that had to be manually deleted or renamed to get specific hosts back into the cluster, and generally poor performance over the long term (a 2-year-old cluster slowed to basically a standstill spinning on Dqlite).
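For anyone wondering why even node counts are dicey: Dqlite is Raft-based, and Raft needs a strict majority of voting members to elect a leader, so an even cluster size tolerates no more failures than the next odd size down - and a clean 50/50 network split leaves neither side with quorum. A quick illustrative sketch of the math (generic Raft arithmetic, nothing Dqlite-specific):

    # Raft-style quorum: a strict majority of voting members must agree.
    # Illustration only -- generic Raft arithmetic, not Dqlite-specific code.
    def quorum(nodes: int) -> int:
        return nodes // 2 + 1

    for n in range(2, 7):
        q = quorum(n)
        print(f"{n} nodes: quorum={q}, tolerates {n - q} failure(s)")

    # 2 nodes: quorum=2, tolerates 0 failure(s)
    # 3 nodes: quorum=2, tolerates 1 failure(s)
    # 4 nodes: quorum=3, tolerates 1 failure(s)  <- no better than 3
    # 5 nodes: quorum=3, tolerates 2 failure(s)
    # 6 nodes: quorum=4, tolerates 2 failure(s)  <- a 3/3 split has no majority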
I have not used Dqlite in other projects, so it's possible this was a MicroK8s problem, but based on my experience with MicroK8s... I won't touch either of these projects again.
I switched to K3s about 3 years ago and have had essentially no problems since: considerably fewer random headaches, no unexpected performance degradation, very stable, and incredibly pleasant to work with.
---
I have also migrated about half of my workloads to Longhorn-backed PVs at this point (coming from a large shared NAS exposed over NFS). While I've had a couple more headaches here than with K3s directly, this has been surprisingly smooth sailing as well, and it gives me much more flexibility in how I manage my block devices (for context, I'm small, so just under a petabyte of storage, of which ~60% is in use).
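From the workload side, the migration is mostly just a storage class swap on the PVC. Here's a minimal illustrative sketch using the official kubernetes Python client - the namespace, claim name, and size are placeholders, and "longhorn" is the storage class a default Longhorn install creates (adjust if yours differs):

    # Minimal sketch: request a Longhorn-backed volume instead of an NFS one.
    # Assumes the kubernetes Python client and a working kubeconfig; the
    # namespace, claim name, and size below are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside the cluster

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],  # Longhorn exposes block devices
            storage_class_name="longhorn",   # was an NFS-backed class before
            resources=client.V1ResourceRequirements(
                requests={"storage": "10Gi"},
            ),
        ),
    )

    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )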
If you want to run a cluster on hardware you own rather than rent - K3s and Longhorn are amazing tools to do so, and I really have to give Rancher/SUSE a hand here. It's nice tooling.
by sofixa on 4/5/25, 2:50 PM
A few years back I wrote about it, and most of the core principles of the article are still valid:
https://atodorov.me/2021/02/27/why-you-should-take-a-look-at...
Disclaimer: I work at HashiCorp, but I've held that opinion since before joining, and in fact it's among the reasons I joined.
by arkaniad on 4/5/25, 6:10 PM
Lately there's also RKE2 (https://docs.rke2.io/), which I've been growing fond of. It's only marginally trickier to set up, with the bonus of being a more 'standard' cluster distribution with more knobs to twist.
Not that I'd be shy of running K3s in production, but it seems easier to follow the 'standard Kubernetes way' of doing things without having to diff against some of K3s's default configuration choices - which, again, aren't bad at all for folks who don't need all of the different options.
For edge workloads, smaller clusters, and less experienced operators who want to run Kubernetes themselves without depending on a managed provider, K3s is pretty much impossible to beat.