by lexaude on 6/6/14, 4:58 PM with 6 comments
by thu on 6/6/14, 7:06 PM
The approach is taken from a blog post where Open vSwitch was used instead of Tinc: http://goldmann.pl/blog/2014/01/21/connecting-docker-contain...
I really like this approach: I run a SkyDNS per group of containers. That DNS is used in `docker run --dns`. Containers can then look up services naturally (via DNS) and those services can actually be running on different machines. Those containers can be running on my laptop or multiple machines across multiple datacenters and there's no difference to them.
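A minimal sketch of that setup, assuming SkyDNS is published on the host's docker0 bridge IP (172.17.42.1 was the usual default at the time; the image, addresses, and service names below are illustrative):

```shell
# Run SkyDNS for this group of containers (flags are illustrative; a real
# SkyDNS deployment also needs its backend/configuration set up).
docker run -d --name skydns -p 172.17.42.1:53:53/udp skynetservices/skydns

# Point application containers at that resolver with --dns; they can then
# look up peers by name regardless of which host actually runs the service.
docker run -d --dns 172.17.42.1 --name web myapp

# Inside the container, services resolve like any other hostname, e.g.
#   mysql.services.example.local -> 10.0.1.7
```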
by 286c8cb04bda on 6/6/14, 6:22 PM
I wonder how this scales when you have multiple databases, or have read-only slaves, or something like that.
Then developers have to remember to send some traffic to localhost:3306 and some traffic to localhost:3307, and who knows how many more ports.
Documentation never manages to stay up-to-date, so perhaps you could use some sort of Service Discovery Protocol to map these semi-arbitrary numbers to more memorable names.
Then, as long as you know what port the service-discovery-service runs on, you could simply query it for the address to reach your databases.
Maybe that's too much work, though. We could just stuff everything in /etc/hosts.
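For what it's worth, a miniature version of that "map semi-arbitrary numbers to memorable names" service has shipped on Unix for decades: /etc/services, queried via getservbyname(3). A small Python sketch (assumes a standard /etc/services, as on most Linux distributions):

```python
import socket

# /etc/services maps well-known service names to port numbers -- exactly
# the "memorable names for arbitrary ports" lookup described above.
print(socket.getservbyname("http", "tcp"))   # 80

# Many distributions also ship an entry for MySQL's default port; guard
# the lookup, since minimal images may omit the entry.
try:
    print(socket.getservbyname("mysql", "tcp"))
except OSError:
    print("no mysql entry in /etc/services")
```

The catch, of course, is that this file is static per host, which is why dynamic setups reach for DNS-based discovery instead.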
by contingencies on 6/7/14, 7:17 AM
What about IPv6? Non-standard protocols? Layer 2 connectivity requirements? Same across multiple data centers where latency and packet loss are potentially higher than negligible and volatile? Link disruption handling? Failover protocol? Service ordering? Distinction between startup and execution time dependencies? Who to call/notify when something breaks? How to do so? How to escalate? Integration of high security requirements such as multi-party authentication and signoff using crypto and/or multi-factor (phone, sms, crypto-devices, etc.)?
There's so much to do here; this post barely scratches the surface.