by polvi on 6/12/15, 3:44 PM with 30 comments
by justizin on 6/12/15, 5:56 PM
This was basically the repeated experience I had which caused me to abandon etcd for the time being.
I found that it could barely ever heal, and if it can barely ever heal, what the fuck good is it? A 3-node CoreOS cluster I ran _always_ crashed when it attempted a coordinated update, and could rarely be repaired even with hours of help from #CoreOS.
Because CoreOS pushes out updates whose bundled etcd is incompatible with the version already running in the cluster, the etcd cluster could never survive the upgrade.
Add this to the fact that the CEO of CoreOS told me in person that he expected them to be the _only_ Operating System on the internet, and I'm generally not along for the ride with CoreOS any longer.
Consul, Mesos, and Docker are looking good.
Anyone interested in this space should check out:
https://github.com/CiscoCloud/microservices-infrastructure
by jefe78 on 6/12/15, 5:27 PM
In all seriousness, this is really interesting. They solved some of the problems associated with persisting a cluster and we're likely going to use that. Feels weird thanking them for anything though.
Edit: Is anyone using CoreOS in a physical DC? We're using AWS with ~1.5k VMs but have another 5-6k hosts in physical DCs. Trying to move us towards containers but struggling.
by yeukhon on 6/12/15, 6:01 PM
For example, we use CF (old version), and we hit https://github.com/coreos/etcd/issues/863.
by narsil on 6/13/15, 8:16 AM
Autoscaling Groups can be configured to have instances join multiple ELBs. One is the regular ELB used to access the instances; the other is an internal ELB that only allows instances in the cluster to connect to each other on the etcd port (controlled via security groups).
When an instance comes up, it adds itself to the cluster via the internal ELB's hostname. The hostname is set in Route 53.
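A minimal sketch of what that self-registration step might look like, assuming etcd 2.x's /v2/members HTTP API on the client port (2379) and a hypothetical internal ELB hostname (etcd-internal.example.com) — not an exact copy of our setup:

    import json
    import socket
    import urllib.request

    # Hypothetical Route 53 name for the internal ELB described above.
    ELB_HOSTNAME = "etcd-internal.example.com"
    MEMBERS_URL = "http://%s:2379/v2/members" % ELB_HOSTNAME

    def join_cluster():
        # Advertise this instance's peer URL to the existing cluster.
        my_ip = socket.gethostbyname(socket.gethostname())
        body = json.dumps({"peerURLs": ["http://%s:2380" % my_ip]}).encode()
        req = urllib.request.Request(
            MEMBERS_URL,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            # etcd returns the new member's ID; the local etcd is then
            # started with initial-cluster-state=existing.
            return json.load(resp)

    if __name__ == "__main__":
        print(join_cluster())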
The biggest issues we've been having with etcd continue to be simultaneous reboots and/or joins to the cluster. It would also be great if the membership timeout feature that used to exist in 0.4 made its way back in. Right now, each member has to be explicitly removed rather than eventually timing out if it hasn't joined back in.
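One way to approximate that timeout in the meantime is a small reaper that lists members and removes any that no longer respond. A rough sketch, assuming the same hypothetical ELB hostname, the etcd 2.x /v2/members API, and its per-member /health endpoint; a real version would need a grace period so a briefly unreachable member isn't evicted:

    import json
    import urllib.error
    import urllib.request

    # Same hypothetical internal ELB hostname as above.
    MEMBERS_URL = "http://etcd-internal.example.com:2379/v2/members"

    def reachable(client_url):
        # etcd 2.x exposes a /health endpoint on each member's client URL.
        try:
            with urllib.request.urlopen(client_url + "/health", timeout=2) as resp:
                return json.load(resp).get("health") == "true"
        except (urllib.error.URLError, OSError, ValueError):
            return False

    def reap_dead_members():
        with urllib.request.urlopen(MEMBERS_URL) as resp:
            members = json.load(resp)["members"]
        for m in members:
            urls = m.get("clientURLs") or []
            if not urls:
                continue  # added but not started yet; leave it alone
            if any(reachable(u) for u in urls):
                continue
            # Explicit removal, since members never time out on their own.
            req = urllib.request.Request(MEMBERS_URL + "/" + m["id"], method="DELETE")
            urllib.request.urlopen(req)

    if __name__ == "__main__":
        reap_dead_members()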
Looking forward to hearing any other approaches folks have taken.