by rocketpastsix on 7/20/21, 1:34 PM with 250 comments
by imglorp on 7/20/21, 2:15 PM
by smashed on 7/20/21, 2:20 PM
> Ably is a Pub/Sub messaging platform that companies can use to develop realtime features in their products.
At this time, https://status.ably.com/ is reporting all green.
Although their entire website is returning 500 errors, including the blog.
It is very hard not to point out the irony of the situation.
In general I would not be so critical, but this is a company claiming to run highly available, mission-critical distributed computing systems. Yet they publish a popular blog article and it brings down their entire web presence?
by imstil3earning on 7/20/21, 2:19 PM
Perhaps my team runs a simpler cluster, but we have been running a Kubernetes cluster for 2+ years as a team of 2 and it has been nothing less than worth it.
The way the author describes the costs of moving to Kubernetes makes me think that they don't have the experience with Kubernetes to actually realize the major benefits beyond the initial costs.
by dijit on 7/20/21, 5:25 PM
But I have to say that Kubernetes is not the devil. Lock-in is the devil.
I recently underwent the task of getting us off of AWS, which was not as painful as it could have been (I talk about it here[0])
But the thing is: I like auto healing, auto scaling and staggered rollouts.
I had previously implemented/deployed this all myself using custom C++ code, salt and a lot of python glue. It worked super well but it was also many years of testing and trial and error.
Doing all of that again is an insane effort.
Kubernetes is 80% of the same stuff if your workload fits in it, but you have to learn the edge cases, which of course expands the learning curve tremendously beyond the standard Python, Linux, and Terraform stuff most operators know.
Anyway.
I’m not saying go for it. But don’t replace it with lock-in.
[0]: https://www.gcppodcast.com/post/episode-265-sharkmob-games-w...
by ojhughes on 7/20/21, 3:35 PM
by lallysingh on 7/20/21, 2:29 PM
> Packing servers has the minor advantage of using spare resources on existing machines instead of additional machines for small-footprint services. It also has the major disadvantage of running heterogeneous services on the same machine, competing for resources. ...
Have a look at your CPU/memory resource distributions, specifically the tails. That 'spare' resource is often 25-50% of the resource, used only for the last 5% of usage. Cost optimization on the cloud is a matter of raising utilization. Have a look at your pods' usage covariance and you can find populations that can stochastically 'take turns' on that extra CPU/RAM.
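A minimal sketch of that kind of covariance check, assuming you can pull per-pod CPU samples out of your metrics store (the pod names and numbers here are made up):

```python
import numpy as np

# Hypothetical per-pod CPU samples (cores), one row per pod, columns = time.
pods = {
    "api":    np.array([3.1, 0.4, 0.5, 2.9, 0.3]),
    "batch":  np.array([0.2, 2.8, 2.9, 0.3, 3.0]),
    "worker": np.array([2.9, 0.5, 0.4, 3.0, 0.4]),
}

names = list(pods)
usage = np.vstack([pods[n] for n in names])
cov = np.cov(usage)  # pairwise covariance of CPU usage over time

# Negatively covariant pairs peak at different times, so they can
# "take turns" on the same spare CPU/RAM when packed onto one node.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if cov[i, j] < 0:
            print(f"co-locate candidates: {names[i]} + {names[j]} "
                  f"(cov={cov[i, j]:.2f})")
```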
> One possible approach is to attempt to preserve the “one VM, one service” model while using Kubernetes. The Kubernetes minions don’t have to be identical, they can be virtual machines of different sizes, and Kubernetes scheduling constraints can be used to run exactly one logical service on each minion. This raises the question, though: if you are running fixed sets of containers on specific groups of EC2 instances, why do you have a Kubernetes layer in there instead of just doing that?
The real reason is your AWS bill. Remember that splitting up a large .metal into smaller VMs means that you're paying the CPU/RAM bill for a kernel + basic services multiple times for the same motherboard. Static allocation is inefficient when exposed to load variance. Allocating small VMs to reduce the sizes of your static allocations costs a lot more overhead than tuning your pod requests and scheduling prefs.
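Back-of-the-envelope, with made-up numbers (the per-VM reserve for the kernel and base services is an assumption, not a measured figure):

```python
# Hypothetical: a 96-vCPU / 384 GiB host split into 24 small VMs,
# versus one large VM packed with pods by a scheduler.
host_vcpu, host_mem_gib = 96, 384
per_vm_overhead_vcpu, per_vm_overhead_gib = 0.5, 1.0  # kernel + base services (assumed)

n_small_vms = 24
small_overhead_vcpu = n_small_vms * per_vm_overhead_vcpu  # 12 vCPUs lost
small_overhead_gib = n_small_vms * per_vm_overhead_gib    # 24 GiB lost

big_overhead_vcpu = 1 * per_vm_overhead_vcpu              # one kernel
big_overhead_gib = 1 * per_vm_overhead_gib

print(f"24 small VMs: {small_overhead_vcpu} vCPU / {small_overhead_gib} GiB overhead")
print(f"1 big VM:     {big_overhead_vcpu} vCPU / {big_overhead_gib} GiB overhead")
# You pay the kernel + base-services bill 24 times on the same motherboard.
```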
Think of it like trucks for transporting packages. Yes you can pay AWS to rent you just the right truck, in the right number for each package you want to carry. Or you can just rent big-rigs and carry many, many packages. You'll have to figure out how to pack them in the trailer, and to make sure they survive the vibration of the trip, but you will almost certainly save money.
EDIT: Formatting
by mjlee on 7/20/21, 2:13 PM
by flowerlad on 7/20/21, 5:54 PM
All I had to learn was Docker and Kubernetes. If Kubernetes didn't exist I would have had to learn myriad cloud-specific tools and services, and my application would be permanently wedded to one cloud.
Thanks to Kubernetes my application can be moved to another cloud in whole or in part. Kubernetes is so well designed, it is the kind of thing you learn just because it is well designed. I am glad I invested the time. The knowledge I acquired is both durable and portable.
by orf on 7/20/21, 3:01 PM
Jokes aside, it sounds like they should just use ECS instead.
by mrweasel on 7/20/21, 2:53 PM
This is one of the first cases where I think that maybe Kubernetes would be the right solution, and it's an article about not using it. While there's a lot of information in the article, there might be some underlying reason why this isn't a good fit for Kubernetes.
One thing that is highlighted very well is the fact that Kubernetes is pretty much just viewed as orchestration now. It's no longer about utilizing your hardware better (in fact it uses more hardware in many cases).
by kgraves on 7/20/21, 3:13 PM
Instead it is easier to critique the low-hanging fruit rather than discussing their actual reasons for not using this 'kubernetes' software.
So is their blog the main product? If not, then the 'their blog gone down lol' quips are irrelevant.
I found their post rather interesting and didn't suffer any issues the rest of the 'commenters' are facing.
by gtirloni on 7/20/21, 2:28 PM
by Ensorceled on 7/20/21, 3:01 PM
ably cto: Go talk to the CMO, I can't help.
ably ceo: What!!
ably cto: Remember when you said the website was "Totally under the control of the CMO" and "I should mind my own business"? Well I don't even have a Wordpress login. I literally can't help.
by dadro on 7/20/21, 2:36 PM
by andreineculau on 7/20/21, 2:17 PM
Google Cache doesn't work either https://webcache.googleusercontent.com/search?q=cache:YECd_I...
Luckily there's an Internet Archive https://web.archive.org/web/20210720134229/https://ably.com/...
by 100011_100001 on 7/20/21, 2:11 PM
They have essentially semi-mocked Kubernetes without any of the benefits.
by 0xbadcafebee on 7/20/21, 4:48 PM
And if you eventually need to use Kubernetes, you can always spin up EKS. Just don't rush to ship on the Ever Given when an 18 wheeler works fine.
by rcarmo on 7/20/21, 6:00 PM
It's OK to not use k8s. We should normalize that.
by Notanothertoo on 7/20/21, 3:55 PM
I also now have a k3s cluster at home. The learning curve was insane, and I hated it all for about 8 weeks, but then it all just clicked and it's working great. The arrogance of rolling your own without fully assessing the standard speaks volumes. Candidates figured that out and saw the red flag. Writing your own image bootstrapper... what about all the other features, plus the community and things like Helm charts?
by trhoad on 7/20/21, 2:14 PM
by kaydub on 7/20/21, 9:31 PM
ECS Fargate has been awesome. No reason to add the complexity of K8s/EKS. We're all in on AWS and everything works together.
But this... you guys re-invented the wheel. You're probably going to find it's not round in certain spots too.
by phendrenad2 on 7/20/21, 4:20 PM
by motoboi on 7/20/21, 5:50 PM
And we got to maintain the code all by ourselves too! It might take a bit too long to implement a new feature, but hey! It's ours!"
Really, Kubernetes is complex, but the problem it solves is even more complex.
If you are OK solving a part of the problem, nice. You just built a competitor to Google. Good luck hiring people who come in already knowing how to operate it.
Good luck trying to keep it modern and useful too.
But I totally understand the appeal.
by benlivengood on 7/20/21, 7:21 PM
Where I disagree with this article is on Kubernetes stability and manageability. The caveat is that GKE is easy to manage and EKS is straightforward but not quite easy. Terraform with a few flags for the google-gke module can manage dozens of clusters, with helm_release resources making the clusters production-ready with very little human management overhead. EKS is still manageable but does require a bit more setup per cluster; it all lives in the automation, though, and can be standardized across clusters.
Daily autoscaling is one of those things that some people can get away with, but for most it won't save money. For example, prices for reservations/commitments are ~65% of on-demand. Can a service really scale so low during off-hours that average utilization from peak machine count is under 35%? If so, then autoscale aggressively and it's totally worth it. Most services I've seen can't actually achieve that and instead would be ~60% utilized over a whole day (mostly global customer bases). The exception is if you can scale (or run entirely, with loose enough SLOs) into spot or preemptible instances, which should be about as cheap as committed instances at the risk of someday not being available.
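A simplified sketch of that break-even arithmetic, comparing the two pure strategies (commit to the peak machine count vs. autoscale entirely on on-demand); real setups often commit to a baseline and burst on top, which shifts the numbers:

```python
ON_DEMAND = 1.00   # normalized hourly price
COMMITTED = 0.65   # ~65% of on-demand, per the comment

def daily_cost_committed(peak_machines: float) -> float:
    """Commit to the peak count and run it 24/7 at the discounted rate."""
    return peak_machines * COMMITTED * 24

def daily_cost_autoscaled(avg_machines: float) -> float:
    """Autoscale on on-demand instances; pay only for the average count."""
    return avg_machines * ON_DEMAND * 24

# A service averaging 60 machines against a 100-machine peak:
print(daily_cost_committed(100))   # 1560.0
print(daily_cost_autoscaled(60))   # 1440.0 -- only marginally cheaper here,
# and committing to a baseline under the burst usually beats both.
```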
by icythere on 7/20/21, 6:02 PM
It _forces_ you to become a "YAML engineer" and to forget the other parts of the system. I was interviewed by a company, and when I replied that the next step I could take was to write some operators for the ops things, they simply rejected me because I'm too experienced lolz
by pm90 on 7/20/21, 2:16 PM
I celebrate a diversity of opinion on infrastructure but… if I were a CTO/VP of engineering and I read that line, that would be enough to convince me to use Kubernetes.
by bdcravens on 7/20/21, 5:15 PM
by emodendroket on 7/21/21, 3:44 AM
by andrewmcwatters on 7/20/21, 7:18 PM
This type of language isn't something that should come out of a company, and it may be a signal that developers refused to offer their services for reasons other than just that they don't use K8s.
by nunez on 7/20/21, 4:05 PM
I'm saying this because while their architecture seems reasonable, albeit crazy expensive (though I'd say it's small-scale if they use network CIDRs and tags for service discovery), it also seems like they wrote this without even trying to use Kubernetes. If they did, it isn't expressed clearly by this post.
For instance, this:
> Writing YAML files for Kubernetes is not the only way to manage Infrastructure as Code, and in many cases, not even the most appropriate way.
and this:
> There is a controller that will automatically create AWS load balancers and point them directly at the right set of pods when an Ingress or Service section is added to the Kubernetes specification for the service. Overall, this would not be more complicated than the way we expose our traffic routing instances now.
> The hidden downside here, of course, is that this excellent level of integration is completely AWS-specific. For anyone trying to use Kubernetes as a way to go multi-cloud, it is therefore not very helpful.
Sound like theoretical statements rather than ones driven by experience.
Few would ever use raw YAML to deploy Kubernetes resources. Most would use tools like Helm or Kustomize for this purpose. These tools came online relatively soon after Kubernetes saw growth and are battle-tested.
One would also know that while ingress controllers _can_ create cloud-provider-specific networking appliances, swapping them out for other ingress controllers is not only easy to do, but, in many cases, it can be done without affecting other Ingresses (unless they are using controller-specific functionality).
I'd also ask them to reconsider how they are using Docker images as a deployment package. They're using Docker images as a replacement for tarballs. This is evidenced by them using EC2 instances to run their services. I can see how they arrived at this (Docker images are just filesystem layers compressed as a gzipped tarball), but because images were meant to be used by containers, dealing with where Docker puts those images and moving things around must be a challenge.
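As a rough illustration of the images-are-just-tarballs point, a sketch that exports an image with `docker save` and lists its filesystem layers (the image name is hypothetical):

```python
import json
import subprocess
import tarfile

IMAGE = "myservice:latest"  # hypothetical image name

# `docker save` exports the image as a plain tar archive of its layers.
subprocess.run(["docker", "save", "-o", "image.tar", IMAGE], check=True)

with tarfile.open("image.tar") as tar:
    manifest = json.load(tar.extractfile("manifest.json"))
    for layer in manifest[0]["Layers"]:
        print("layer:", layer)
        # Each layer is itself a tarball of filesystem changes; extracting
        # them in order reconstructs the rootfs, much like a release tarball.
```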
I would encourage them to try running their services on Docker containers. The lift is pretty small, but the amount of portability they can gain is massive. If containers legitimately won't work for them, then they should try something like Ansible for provisioning their machines.
by hkt on 7/20/21, 2:48 PM
by voidfunc on 7/20/21, 2:12 PM
This is not too surprising. Candidates want to join companies that are perceived to be hip and with it technology-wise in order to further their own resume.
by ameyv on 7/20/21, 3:21 PM
Ably is using the Ghost blogging platform (https://ghost.ably.com); I can see the requests in the network console.
by ampdepolymerase on 7/20/21, 4:10 PM
by knodi on 7/20/21, 8:46 PM
by exabrial on 7/20/21, 6:03 PM
by codetrotter on 7/20/21, 2:39 PM
by ossusermivami on 7/20/21, 4:11 PM
by aiven on 7/20/21, 4:51 PM
by ep103 on 7/20/21, 2:14 PM
Also Employers: Some of our applicants turned down our job offer when we revealed that we don't use specific technologies!
by pantulis on 7/20/21, 2:53 PM
by blacktriangle on 7/20/21, 6:14 PM
https://www.sebastianbuza.com/2021/07/20/no-we-dont-use-kube...
Another pub/sub startup published an almost identical blog post, also today, July 20, 2021.
by JediPig on 7/20/21, 2:23 PM
Seems like a "me too, I know more than you" type of project. Looks geared towards cryptos, though.
by pictur on 7/20/21, 2:18 PM