by carlosrdrz on 10/3/18, 8:20 AM with 269 comments
by solatic on 10/4/18, 10:49 AM
Does that mean that Kubernetes should now be used for all hobbyist projects? No. If I'm thinking of playing around with a Raspberry Pi or other SBC, do I need to install Kubernetes on the SBC first? If I'm thinking of playing around with IoT or serverless, should I dump AWS- or GCE-proprietary tools because nobody will ever run anything that can't run on Kubernetes ever again? If I'm going to play around with React or React Native, should I write up a backend just so I have something to run in a Kubernetes cluster, because all hobbyist projects must run Kubernetes now, because it's cheap enough for hobbyist projects? If I'm going to play around with machine learning at home and buy a machine with a heavy GPU, should I figure out how to get Kubernetes to schedule my machine learning workload correctly instead of just running it directly on that machine, because, uhhh, maybe someday I'll have three such machines with powerful GPUs plus other home servers for all my other hobbyist projects?
No, no, no, no, no. Clearly.
But maybe I envision my side project turning into a full-time startup someday. Maybe I see all the news about Kubernetes and think it would be cool to be more familiar with it. Nah, probably too expensive. Oh wait, I can get something running for $5? Hey, that's pretty neat!
Different people will use different solutions for different project requirements.
by jypepin on 10/4/18, 9:24 AM
I recently set up a DigitalOcean droplet and set up my blog there to actually understand how it works. It was great because I learned a ton and feel in control. Pretty simple setup - single droplet, Rails with Postgres, Capistrano to automate deploys, and a very simple NGINX config. It took me multiple days to set up everything, compared to the 5 minutes Heroku would have required - and it's not as nice as what Heroku offers.
Still, I'd wait as long as I can before moving off something as simple as Heroku for _anything_. I understand it gets expensive quickly, but I'd really want to see the cost of Heroku weighed against the time the engineering team spends managing all the complexities of devops, automated deploys, and scaling - and I'm not even mentioning all the data/logging/monitoring things that Heroku lets you add with 1 click.
by nimbius on 10/4/18, 1:03 PM
Professional mechanics use high grade tools that can cost thousands of dollars each. We have laser alignment rigs, plasma cutters, computer controlled balancing and timing hardware, and high performance benchmarking hardware that can cost as much as the car you're working on. We have a "Kubernetes" like setup because we service hundreds of cars a month.
The shade-tree mechanic wrenching on her fox-body Mustang, on the other hand? Her hand-me-down toolbox and a good set of sockets will get her by for about 90% of what she wants to do on that car. She doesn't need to perform a manifold remap, so she doesn't need a gas fluid analyzer any better than her own two ears.
I should also clarify that these two models are NOT mutually exclusive. If I take home an old Chevy from work, I can absolutely work on it with my own set of tools. And if the shade-tree mechanic wants to turn her Mustang into a professional race car, she can send it to a professional "Kubernetes"-type shop that will scale that car up proper.
by vbsteven on 10/4/18, 9:38 AM
My current setup uses a couple of Hetzner dedicated machines, and services are deployed with Ansible playbooks. The playbooks install and configure nginx, install the right version of Ruby/Java/PHP/Postgres, and configure and start systemd services. These playbooks end up copied and slightly modified for each project, and sometimes they interfere with one another in subtle ways (different Ruby versions, conflicting ports, etc.).
With my future Kubernetes setup, I would just package each project into its own self-contained container image, write a Kubernetes deployment/pod spec, update the ingress, and I'm done.
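As a rough sketch, the per-project spec described above could be as small as a single Deployment manifest (the project name, image, and port here are all hypothetical placeholders):

```yaml
# Hypothetical deployment spec for one self-contained project.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myproject
  template:
    metadata:
      labels:
        app: myproject
    spec:
      containers:
        - name: myproject
          image: registry.example.com/myproject:1.0.0   # built once per project
          ports:
            - containerPort: 3000
```

Because each project ships its own image, the Ruby/Java/PHP version conflicts between projects disappear: they are isolated inside their containers.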
by whydoineedthis on 10/4/18, 7:12 PM
As someone who can set up and run a Kubernetes cluster in my sleep, I can tell you that it is a superb production-ready platform that solves many real-world problems.
With that in mind, Kubernetes has constraints too: running networked Elixir containers is possible, but not ideal from Elixir's perspective; dealing with big data takes extra consideration; etc.
All said, if you have an interest in DevOps/Ops/SysAdmin-type technologies, learning Kubernetes is a fine way to spend your time. Once you have a few patterns under your belt, you'll move your stack to production for real users much faster, and that has value.
I think the initial author (not this article, the other one) was just pointing out that you can indeed run Kubernetes pretty cheap, and that is useful information and a good introduction. This article is clickbait designed to mooch off the other's success.
by pstadler on 10/4/18, 12:01 PM
Kubernetes is likely here to stay. If you're interested in running a cluster to understand what the hype is all about and to learn something new, you should do it. Also, ignore everybody telling you that this platform wasn't meant for that.
Complexity is a weak argument. Once your cluster is running you just write a couple of manifests to deploy a project, versus: ssh into machine; useradd; mkdir; git clone; add virtual host to nginx; remember how certbot works; apt install whatever; systemctl enable whatever; pm2 start whatever.yml; auto-start project on reboot; configure logrotate; etc. Can this be automated? Sure, but I'd rather automate my cluster provisioning.
by wheresvic1 on 10/4/18, 9:30 AM
- pm2 for uptime (pm2 itself is set up as a systemd service; it's really simple to do, and pm2 can install itself as a systemd service)
- I create and tag a release using git
- on the production server, I have a little script that fetches the latest tag, wipes the old install, does a fresh npm install, and restarts pm2.
- nginx virtual host with SSL from Let's Encrypt (setting this up was a breeze given the amount of integration and documentation available online)
Ridiculously simple and I only pay for a single micro instance which I can use for multiple things including running my own email server and a git repo!
The only semi-problem I have is that a release is not automagically deployed; I would have to write a git hook to run my deployment script. But in a way I'm happy to do manual deployments as well, to keep an eye on how each one went :)
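For reference, the nginx virtual host described above might look roughly like this (the domain name and upstream port are placeholders; the certificate paths assume a standard certbot layout):

```nginx
# Hypothetical vhost: proxy a Node app on port 3000, TLS via certbot-issued certs.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Redirect plain HTTP to HTTPS.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```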
by majewsky on 10/4/18, 9:34 AM
[1] Gitea (Github clone), Murmur (voice-chat server), Matrix Synapse (instant messaging), Prosody (XMPP), nginx (for those services and for various static websites)
by AYBABTME on 10/4/18, 4:41 PM
In the end, the steps you take to deploy with rsync and run your systemd service are the same (conceptually) you'd take to run on K8S, but translated to some YAML and a docker push. In one case you need to learn a new paradigm, in the other case you deal with something you already know. Not having to learn something new is an argument, but it doesn't mean your bare-Linux approach is simpler than the K8S approach. You just know it more.
by caymanjim on 10/4/18, 3:33 PM
Why separate your code into multiple files? Why write tests? Why use a code linter? Why use virtual environments? Why write a Makefile?
If you're working on a small personal project, or you're a newer developer learning the ropes, or the project is temporary, not important, doesn't need to scale, etc. then it's simply a matter of personal choice. It doesn't make sense to get bogged down learning a lot of tools and processes if there's no compelling business need and you're just trying to get the job done.
If you already know how to use these tools, though, they usually make your life a whole lot easier. I learn how to use complex systems in my career, where they're necessary and helpful. I apply these same tools and practices on my personal projects now, because once you know how to use something like Kubernetes, there's little cost to it and many of the benefits still apply.
by comboy on 10/4/18, 11:07 AM
Unless the personal project is something that you really care about, potential startup or something like that, then obviously you choose something that you are already proficient in because then it's about getting stuff done and moving forward.
So while it may make sense to discuss what technology is good or bad for some kind of companies, I think we won't arrive at any ultimate conclusion like "X is good/bad for personal projects".
by giobox on 10/4/18, 8:53 PM
As soon as this author mentioned he was happy with using Ansible, systemd, etc. instead (which are all reasonable tools for what they are), he lost me - collectively, this is much more work for me as the sole developer than a simple Docker container setup for virtually all web app projects, in my experience. If you understand these relatively complex tools, you can likely learn Docker well enough in an hour or two, and the payoff in future time savings will make this time well spent.
In my experience, "Dockerising" a web app is much, much less time-consuming than trying to script it in Ansible (or Chef, Puppet, <name your automation tool>), and of course much less error-prone too. I've yet to meet an Ansible setup that didn't become brittle or require maintenance eventually. If you are using straightforward technologies (Ruby, Java, Node, whatever), your Dockerfile is often just a handful of lines at most. You can even configure it as a "service" without having to bother with systemd service definitions and the like at all.
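For a typical Node app, that "handful of lines" might look like this (the base image, entry point, and port are assumptions; adjust to your app):

```dockerfile
# Minimal sketch of a Dockerfile for a Node web app.
FROM node:10-alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer between builds.
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Copying `package*.json` before the rest of the source means dependency installation is only re-run when the dependencies actually change, which keeps rebuilds fast.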
by codegladiator on 10/4/18, 11:45 AM
You don't need to run a new cluster for every project. You can deploy multiple projects in a single cluster. I was running close to 5 different projects in a single cluster, backed by about 3-6 machines (machines added/removed on demand).
Kubernetes is basically like your own Heroku. You can create namespaces for your projects. No scripts. You can deduce everything (how a service is deployed, what the config is, what the architecture is) from the config files (YAML).
> Is a single Nginx virtual host more complex or expensive than deploying a Nginx daemon set and virtual host in Kubernetes? I don't think so.
Yes, it is. I wonder if the author has actually tried setting this up themselves. I do realise I had similar opinions before I had worked with Kubernetes, but after working with it, I cannot recommend it enough.
> When you do a change in your Kubernetes cluster in 6 months, will you remember all the information you have today?
Yes - why does the author think otherwise? Or, if this is a real argument, why does the author think their Ansible setup would be at the top of their head either? I had one instance where I had to bring a project back up on prod (it was a collection of 4 services + 2 databases, not including the LB) after 6-8 months of keeping it "down". Guess what: I just scaled the instances from 0 to 3, SSL is back, all services are back, everything is up and running.
This is not to say you won't have issues - I had plenty during the time I started trying it out. There is a learning curve, and please do try out the ecosystem multiple times before thinking of using it in production.
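The "scaled from 0 to 3" step described above is a one-liner per service against the cluster (the deployment and namespace names here are hypothetical):

```shell
# Bring a mothballed project back up by scaling its deployments out again.
kubectl scale deployment api worker frontend --replicas=3 -n myproject

# Watch the pods come back up.
kubectl get pods -n myproject -w
```

Because the manifests, secrets, and ingress rules are still stored in the cluster, nothing else needs to be reconfigured by hand.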
by reilly3000 on 10/4/18, 9:08 AM
by markbnj on 10/4/18, 3:47 PM
by fcgravalos on 10/4/18, 10:53 AM
My day-to-day work is 100% dedicated to automating the Kubernetes cluster lifecycle: maintaining clusters, monitoring them, and creating tools around them. Kubernetes is a production-grade container orchestrator; it solves really difficult problems, but it brings some challenges of its own. All of its components work distributed across the cluster, including network plugins, firewall policies, the control plane, etc. So be prepared to understand all of it.
Don't get me wrong, I love Kubernetes and if you want to have some fun go for it, but don't use the HA features as an excuse to do it.
But overall, saying "NO" to rsync or Ansible for deploying your small project just because it's not fancy enough sounds to me like: "Are you really going to the supermarket by car, when there are helicopters out there?"
Great article!
by isugimpy on 10/4/18, 12:47 PM
Containers (and thus Kubernetes) aren't the magical solution to every problem in the world. But they help, and the earlier you can get to an automated, consistent build/deploy process for anything that'll actually serve real customers, the better off you are. Personally, I'd rather design with containers in mind from day one, because it's what I'm comfortable with. There's nothing wrong with deploying code as a serverless-style payload, or even running on a standalone VM, but you need to start planning for how something should work in the real world as early as you reasonably can.
by stuaxo on 10/4/18, 12:26 PM
Every new thing that you add, adds complexity. If that thing interfaces with another, then there is complexity at the interfaces of both.
Modern tools that atomise everything reduce density (and thus complexity), but people aren't paying attention to the amount of abstractions they are adding and their cost.
by pawurb on 10/4/18, 2:38 PM
by cs02rm0 on 10/4/18, 10:05 AM
It needs a certain scale before the overheads are worth it.
by tomc1985 on 10/4/18, 4:52 PM
by throw2016 on 10/5/18, 6:14 PM
Unfortunately, the devops community always wanted to promote itself as the only option for containers, and even though their tools were based on the LXC project, they did not explain the technical decisions and tradeoffs made, because they did not want users to think there are valid alternatives. This is the source of fundamental confusion among users about containers.
Why are you using single process containers? This is a huge technical decision that is barely discussed, a non standard OS environment adds significant technical debt at the lowest point of your stack. Why are you using layers to build containers? Why not just use them as runtime? What are the tradeoffs of not using layers? What about storage? Can all users just wish away state and storage? Why are these important technical issues about devops and alternative approaches not discussed? Unless you can answer these questions you cannot make an informed choice.
There is a culture of obfuscation in devops. You are not merely using an Nginx reverse proxy or Haproxy but a 'controller', using networking or mounting a filesystem is now a 'driver'. So most users end up trying Kubernetes or Docker and get stuck in the layers of complexity when they could benefit from something straightforward like LXC.
by doppel on 10/4/18, 1:07 PM
Using something you are familiar with, even if it's just a 10-line bash script plus a simple virtual private server with an nginx config, is usually faster than having to orchestrate everything. If you want to invest the time in setting up Kubernetes for all your personal projects, it would probably make sense.
Basically, is it worth it? https://xkcd.com/1205/
by marenkay on 10/4/18, 4:19 PM
by mcs_ on 10/4/18, 2:38 PM
In the office we don't work with Docker, containers, cloud, etc.; we run legacy ASP.NET 2.0 on-premise without any kind of automation (just a couple of us coordinating the releases and copy-pasting onto the customer's Windows Server 2008).
Kubernetes for personal projects? In my case, after 10 years of on-premise deployments, VM Ware, SQL Clusters, web.config, IIS, ARR and the rest of the things related, YES!
I absolutely want 3 hosts for less than $100, a GitLab account for $4, a free Cloudflare account, code, and deploy.
by tracker1 on 10/4/18, 5:29 PM
Of course, if you're in a workplace on a project likely to see more than a few hundred simultaneous users in a given application, definitely look at what K8s offers.
Edit: as for deploys, get CI/CD working from your source code repository. GitLab, Azure DevOps (formerly VSTS), CircleCI, Travis, and many others are free to very affordable for this. Take the couple of hours to get this working, and when you want to update, just push to the correct branch of your source repo.
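As a sketch of that setup, a minimal GitLab CI pipeline could look like this (the image, job names, and the deploy script path are placeholders; the deploy step is whatever your host needs):

```yaml
# .gitlab-ci.yml sketch: test on every push, deploy only from master.
stages:
  - test
  - deploy

test:
  stage: test
  image: node:10
  script:
    - npm ci
    - npm test

deploy:
  stage: deploy
  script:
    - ./scripts/deploy.sh   # hypothetical deploy script for your target host
  only:
    - master
```

With this in place, updating production is just a merge to `master`; the pipeline does the rest.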
by thrower123 on 10/4/18, 9:26 PM
by tiangolo on 10/8/18, 2:12 PM
by djhworld on 10/4/18, 5:39 PM
But they're tiny, tiny things that are very personal (i.e. they have 1 user - me)
If you're getting to the point where you need to scale things using a Kubernetes cluster or whatever, it seems to me that the thing has graduated from "personal project" to an actual product that needs Kubernetes features like resilience and so on.
I mean, I'd love the idea of having a Kubernetes cluster to throw some things onto, but I really don't have the patience to set it all up right now; it seems like way too much cost and effort.
by hosh on 10/4/18, 7:42 PM
Like everything, there are tradeoffs. If there were a fairly easy way to do a one-node Kubernetes setup (say, Minikube), I would probably just go that route. One doesn't have to use the full feature set of Kubernetes to get one or two things that are advantageous.
As it is, I set up Minikube on the dev machines for the team I am on. I might consider Kubernetes for my personal side project if I knew Minikube would do well on machines with under 1 GB of memory (it doesn't, really).
Pre-emptible VMs that cost less than $5 are interesting, and I might do something like that.
by strzibny on 10/5/18, 4:57 PM
For anybody who is interested in understanding these basic building blocks, I decided to write https://vpsformakers.com/.
by kujaomega on 10/4/18, 2:02 PM
by beat on 10/4/18, 9:20 PM
by jitl on 10/4/18, 9:11 AM
by chilicuil on 10/4/18, 4:05 PM
by z3 on 10/4/18, 10:30 AM
by gant on 10/4/18, 9:26 PM
So I'm calling it quits for now. Just running the cluster requires a small ops team.
by dropmann on 10/4/18, 10:27 AM
In case you are using GKE, you actually need two ingresses to support IPv6 + IPv4
This adds up to about 10 times the cost of a single droplet. For personal projects this seems kind of wasteful to me.
by lazyant on 10/4/18, 11:42 PM
by wwarner on 10/4/18, 3:19 PM
by hardwaresofton on 10/4/18, 10:23 AM
I'd argue that a lot of the complexity people find in Kubernetes is essential when you consider what it takes to run an application in any kind of robust manner. Take the simplest example - reverse proxying to an instance of an application, a process (containerized or not) that's bound to a local port on the machine. If you want to edit your nginx config manually to add new upstreams when you deploy another process, then reload nginx, be my guest. If you find and set up tooling that helps you do this by integrating with nginx directly or with your app runtime, that's even better. Kubernetes solves this problem once and for all, consistently, for a large number of cases, regardless of whether you use haproxy, nginx, traefik, or whatever else as your "Ingress Controller". In Kubernetes, you push the state you want your world to be in to the control plane, and it makes it so or tells you why not.
Of course, the cases where Kubernetes might not make sense are many:
- Still learning/into doing very manual server management (i.e. systemd, process management, user management) -- ansible is the better pick here
- Not using containerization (you really kinda should be at this point; if you read past the hype train, there are valuable concepts underneath)
- Not interested in packaged solutions for the issues that Kubernetes solves in a principled way, when you could solve them relatively quickly/well ad hoc
- Launching/planning on launching a relatively small amount of services
- Are running on a relatively small machine (I have a slightly beefy dedicated server, so I'm interested in efficiently running lots of things).
A lower-risk/simpler solution for personal projects might be something like Dokku[0], or Flynn[1]. In the containerized route, there's Docker Swarm[2] +/- Compose[3].
Here's an example -- I lightly/lazily run https://techjobs.tokyo (which is deployed on my single-node k8s cluster), and this past weekend I put up https://techjobs.osaka. The application itself was generically written so all I had to do for the most part was swap out files (for the front page) and environment variables -- this meant that deploying a completely separate 3-tier application (to be fair the backend is SQLite), only consisted of messing with YAML files. This is possible in other setups, but the number of files and things with inconsistent/different/incoherent APIs you need to navigate is large -- systemd, nginx, certbot, docker (instances of the backend/frontend). Kubernetes simplified deploying this additional almost identical application in a robust manner massively for me. After making the resources, bits of kubernetes got around to making sure things could run right, scale if necessary, retrieve TLS certificates, etc -- all of this is possible to set up manually on a server but I'm also in a weird spot where it's something I probably won't do very often (making a whole new region for an existing webapp), so maybe it wouldn't be a good idea to write a super generic ansible script (assuming I was automating the deployment but not with kubernetes).
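The "messing with YAML files" for the second site might amount to little more than an Ingress resource like this (the hostnames echo the ones above; the service name, secret name, and annotation are placeholders typical of 2018-era setups with a cert-manager-style controller):

```yaml
# Hypothetical Ingress routing the second, near-identical site to its own service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: techjobs-osaka
  annotations:
    kubernetes.io/tls-acme: "true"   # ask the cert controller to fetch a certificate
spec:
  tls:
    - hosts:
        - techjobs.osaka
      secretName: techjobs-osaka-tls
  rules:
    - host: techjobs.osaka
      http:
        paths:
          - backend:
              serviceName: techjobs-osaka
              servicePort: 80
```

Once applied, the ingress controller wires up routing and TLS without touching nginx, certbot, or systemd by hand.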
Of course, Kubernetes is not without its warts - I have more than once found myself in a corner off the beaten path, thoroughly confused about what was happening, and sometimes it took days to fix. But that's mostly because of my penchant for using relatively new/young/burgeoning technology (for example, kube-router recently instead of canal for routing), and the lack of business value in my projects (if my blog goes down for a day, I don't really mind).
[0]: http://dokku.viewdocs.io/dokku
[1]: https://github.com/flynn/flynn/
by billylindeman on 10/4/18, 2:10 PM
by jeremychone on 10/3/18, 3:25 PM
by barbecue_sauce on 10/4/18, 5:28 PM
by michaelmior on 10/4/18, 6:37 PM
I assume "any" should be "every"?
by mychael on 10/4/18, 10:40 PM
by adamc on 10/4/18, 7:50 PM
by amthna on 10/4/18, 5:37 PM
by Annatar on 10/4/18, 10:50 AM
Yep! For anything that goes beyond the initial viability test, I make an OS package. SmartOS has SMF, so integrating automatic startup/shutdown is as easy as delivering a single SMF manifest and running svccfg import in the package postinstall. For the configuration, I just make another package which delivers it, edits it dynamically and automatically in postinstall if required, and calls svcadm refresh svc://...
It's easy. It's fast. The OS knows about all my files. I can easily remove it or upgrade it. It's clean. When I'm done, I make another ZFS image for imgadm(1M) to consume, and Bob's my uncle.
by Annatar on 10/4/18, 10:55 AM
No, the author of the Kubernetes article missed the point so completely and utterly that it's not even funny: none of those Kubernetes complications are necessary if one runs SmartOS and, optionally, as a bonus, Triton.
Since doing something the harder and more complicated way for the same effect is irrational, and presumably the author of the Kubernetes article isn't irrational, I'm compelled to presume that he just didn't know about SmartOS and Triton, or that he is interested in boosting his resume rather than researching the most robust and simplest technology. If resume boosting with Kubernetes is his goal, then his course of action makes sense, but the company where he works won't get the highest reliability and lowest complexity it could. So good for him, suboptimal for his (potential) employer. And that's a concern, moreover a highly professional one. I'm looking at this through the employer's eyes, but then again, I really like sleeping through entire nights without an incident. A simple and robust architecture is a big part of that. Resume boosting isn't.