by hjacobs on 1/20/19, 4:12 PM with 236 comments
by m0zg on 1/20/19, 8:24 PM
Most importantly: I want Kubernetes to have a lot fewer moving parts than it currently has. Being "extensible" is a noble goal, but at some point cognitive overhead begins to dominate. Learn to say "no" to good ideas.
Unfortunately there are already a lot of K8S configs and specific software written against it, so people are unlikely to switch to something more manageable. Fortunately, if complexity continues to proliferate, it may collapse under its own weight, leaving no option but to move somewhere else.
by dvnguyen on 1/20/19, 6:32 PM
The first disappointment is setting up a local development environment. I failed to get minikube running on a MacBook Air 2013 and an Ubuntu ThinkPad. Both have VT-x enabled and run Docker and VirtualBox flawlessly. Their online interactive tutorial was good though, enough for learning purposes.
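For anyone hitting the same wall: on machines where VirtualBox already works, the whole local bring-up is supposed to be just a couple of commands, which makes these failures all the more frustrating. A rough sketch, assuming the VirtualBox driver spelled out explicitly (flags as they were at the time):

  # Start a single-node cluster in a VirtualBox VM
  minikube start --vm-driver=virtualbox

  # If startup hangs or fails, these usually point at the culprit
  minikube status
  minikube logs

  # Once the VM is up, the node should register
  kubectl get nodes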
Production setup is a bigger disappointment. The only easy and reliable ways to get a production-grade Kubernetes cluster are to lock yourself into a big cloud provider, or into an enterprise OS (Red Hat/Ubuntu), or to introduce a new layer on top of Kubernetes [1]. Locking myself into enterprise Ubuntu/Red Hat is expensive, and I'm not comfortable adding a new, moving, unreliable layer on top of Kubernetes, which is itself built on top of Docker. One thing I like about the Docker movement is that it commoditizes infrastructure and reduces lock-in. I can design my infrastructure so it uses an open-source-based cloud product first and can easily move to other providers or to self-hosting if needed. With Kubernetes, things are going the other way. Even if I never moved out of the big 3 (AWS/Azure/GCloud), migration could be painful, since each provider's Kubernetes may introduce further lock-in for logging, monitoring, and so on.
by cygned on 1/20/19, 8:15 PM
I was able to set it up successfully a couple of times, with varying amounts of time required. The last time, I gave up after four days because I realized that what I needed was an "I just want to run a simple cluster" solution, and while k8s might provide that, its flexibility makes it hard for me to use.
by manigandham on 1/20/19, 10:46 PM
by nisa on 1/20/19, 9:47 PM
Of course it's 2019 and you have to migrate Hadoop to run on k8s now :)
My impression is that if you are a small shop and have the money, use k8s on Google and be happy, but don't attempt to set it up yourself.
If you only have a few dedicated boxes somewhere, just use Docker Swarm and something like Portainer.
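For a setup like that, the Swarm route really is only a handful of commands; a rough sketch (the address and stack name are placeholders, and portainer/portainer was the image name at the time):

  # On the first box: make the Docker engine a swarm manager
  docker swarm init --advertise-addr 192.0.2.10

  # On each additional box: join using the token printed above
  # docker swarm join --token <token> 192.0.2.10:2377

  # Run Portainer on the manager as a management UI
  docker volume create portainer_data
  docker run -d -p 9000:9000 --name portainer --restart always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer

  # Deploy an application stack from an ordinary compose file
  docker stack deploy -c docker-compose.yml myapp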
by awinter-py on 1/20/19, 5:19 PM
The adoption failures are mostly networking issues specific to their cloud. Performance and box limits vary widely depending on the cloud vendor, and I still don't quite understand the performance penalty of the different overlay networks / adapters.
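One way to at least put numbers on that penalty is to benchmark pod-to-pod throughput and compare it with raw node-to-node throughput on the same boxes; the gap is roughly the overlay/CNI cost. A rough sketch, assuming an image that ships iperf3 (networkstatic/iperf3 is used here purely as an example):

  # Start an iperf3 server pod and note its pod IP
  kubectl run iperf3-server --image=networkstatic/iperf3 --restart=Never -- -s
  kubectl get pod iperf3-server -o jsonpath='{.status.podIP}'

  # Run a one-off client pod against that IP (substitute the real pod IP)
  kubectl run iperf3-client --image=networkstatic/iperf3 --restart=Never --rm -it -- -c <server-pod-ip>

  # Repeat the same measurement directly between the nodes to get the baseline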
by stonewhite on 1/20/19, 9:42 PM
I really liked, and miss, the beauty of simplicity in Marathon: everything was a task, whether the load balancer, the autoscaler, or the app servers. I think it failed because provisioning was not easy, it lacked first-class integrations with cloud vendors, and its documentation was horrible.
Kind of sad to see it lost the hype battle, and since then even Mesosphere had to come up with a K8s offering.
by bdcravens on 1/20/19, 7:28 PM
1) no matter what I think I know, there are too many dark corners to create an adequate course
2) K8S is such a dumpster fire that I shouldn't encourage others
3) there's a hell of an opportunity here
Thoughts? Worth pursuing? Anything in particular that should be included that usually isn't in this kind of training?
by stunt on 1/20/19, 8:18 PM
For the majority, it adds only a little value when you compare it to the added infrastructure complexity, the cost of the learning curve, and the ongoing operation and maintenance.
by tnolet on 1/20/19, 4:51 PM
by hjacobs on 1/20/19, 4:28 PM
by dcomp on 1/20/19, 9:22 PM
for f in */*.yaml ...
with a directory structure of:
drwxrwsrwx+ 1 root 1002 176 Jan 20 21:15 .
drwxrwsrwx+ 1 root 1002 194 Nov 17 20:06 ..
drwxrwsrwx+ 1 root 1002 68 Jan 20 20:50 0-pod-network
drwxrwsrwx+ 1 root 1002 104 Nov 1 11:18 1-cert-manager
drwxrwsrwx+ 1 root 1002 34 Jul 11 2018 2-ingress
-rwxrwxrwx+ 1 root 1002 93 Jan 20 21:15 apply-config.sh
drwxrwsrwx+ 1 root 1002 22 Jul 14 2018 cockpit
drwxrwsrwx+ 1 root 1002 36 Jul 3 2018 samba
drwxrwsrwx+ 1 root 1002 76 Jul 6 2018 staticfiles
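Presumably apply-config.sh just walks those directories in lexical order (hence the numeric prefixes) and feeds every manifest to kubectl. A minimal sketch of that idea, not the actual 93-byte script:

  #!/bin/sh
  # Apply every manifest; the glob sorts lexically, so the numbered
  # directories (0-pod-network, 1-cert-manager, 2-ingress) go first.
  for f in */*.yaml; do
      kubectl apply -f "$f"
  done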
by AaronFriel on 1/21/19, 5:57 AM
* About half of the post-mortems involve issues with AWS load balancers (mostly ELB, one with ALB)
* Two of the post-mortems involve running control plane components dependent on consensus on Amazon's `t2` series nodes
This was pretty surprising to me because I've never run Kubernetes on AWS. I've run it on Azure using acs-engine and more recently AKS since its release, and on Google Cloud Platform using GKE. It's a good reminder not to run critical code on T series instances, because AWS can and will throttle or pause these instances.
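A quick way to audit an existing cluster for that last point is to surface each node's instance type via the standard node labels; a sketch, assuming the label name current at the time (beta.kubernetes.io/instance-type) and a kubeadm-style master role label:

  # Show the instance type of every node as an extra column
  kubectl get nodes -L beta.kubernetes.io/instance-type

  # Limit it to control-plane nodes; anything reporting t2.* is
  # exposed to CPU-credit throttling under sustained load
  kubectl get nodes -l node-role.kubernetes.io/master -L beta.kubernetes.io/instance-type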
by peterwwillis on 1/20/19, 6:19 PM
by hjacobs on 1/30/19, 6:13 PM