by andrewgodwin on 1/27/14, 6:23 PM with 44 comments
by thu on 1/27/14, 9:41 PM
Docker is a tool to run processes with some isolation and (this is the big selling point) nicely packaged with "all" their dependencies as images.
To understand "all" their dependencies, think of the C dependencies of, e.g., a Python or Ruby app. That's not the kind of dependency that e.g. virtualenv can solve properly. Think also of assets, or configuration files.
So instead of running `./app.py` freshly downloaded from some Git <repo>, you would run `docker run <repo> ./app.py`. In the former case, you would need to take care of, say, the C dependencies. In the latter case, they are packaged in the image that Docker will download from <repo> prior to running the ./app.py process in it. (Note that the two <repo> are not the same thing: one is a Git repo, the other is a Docker repo.)
So really, at this point, that's what Docker is about: running processes. Now, Docker offers quite a rich API for running those processes: sharing volumes (directories) between containers (i.e. running images), forwarding ports from the host to the container, displaying logs, and so on.
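For instance, a typical invocation might look like this (the image name, paths, and port are made up for illustration; the flags are standard Docker CLI options):

    # Run the app from an image, sharing a host directory into the
    # container and forwarding a port from the host:
    docker run -d -v /srv/data:/data -p 8080:8080 --name myapp <repo>/myapp ./app.py

    # Display the logs of that running container:
    docker logs myapp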
But that's it: Docker, as of now, remains at the process level. While it provides options to orchestrate multiple containers to create a single "app", it doesn't address the management of such a group of containers as a single entity.
And that's where tools such as Fig come in: talking about a group of containers as a single entity. Think "run an app" (i.e. "run an orchestrated cluster of containers") instead of "run a container".
Now, I think that Fig falls short of that goal (I haven't played with it; that's just from a glance at its documentation). Abstracting over the command-line arguments of Docker by wrapping them in a YAML file is the easy part (i.e. launching a few containers). The hard part is managing the cluster the way Docker manages the containers: displaying aggregated logs, replacing a particular container with a new version, moving a container to a different host (and thus abstracting the networking between different hosts), and so on.
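To make the easy part concrete, here is a minimal sketch; the service names, ports, and commands are made up, but this is roughly the shape of a fig.yml and how you launch the group:

    # Write a minimal Fig config describing two linked containers:
    cat > fig.yml <<'EOF'
    web:
      build: .
      ports:
        - "8000:8000"
      links:
        - db
    db:
      image: postgres
    EOF

    # Launch the whole group of containers as one "app":
    fig up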
This is not a negative critique of Fig; many people are working on that problem. For instance, I solve that very problem with ad-hoc bash scripts. In doing so, we are all just exploring the design space.
I believe that Docker itself will provide that next level in the future; it's just that people need these features quickly.
tl;dr:
Docker -> processes
Fig (and certainly Docker in the future) -> clusters (or formations) of processes
by fit2rule on 1/27/14, 6:52 PM
What this needs is the ability to be pointed at a working VM - let's say, an Ubuntu 13.10 server - and then just figure out what's different about it compared to the distro release.
Something like the blueprint tool, in fact.
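The core of that idea can be sketched with plain dpkg: record the package set of a pristine distro install, then diff a working VM against it (the filenames here are made up, and this only covers packages, not config drift):

    # On a pristine Ubuntu 13.10 install, record the baseline package set:
    dpkg --get-selections > baseline.txt

    # On the working VM, record its package set and diff it against the baseline:
    dpkg --get-selections > current.txt
    diff baseline.txt current.txt   # lines unique to current.txt are what was added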
by cameronmaske on 1/27/14, 7:44 PM
by bryanlarsen on 1/27/14, 8:36 PM
by airnomad on 1/28/14, 11:05 AM
So I put those files in VCS so the next guy could just clone the repo, run `make devel`, and get the app running, ready to code on.
So unless you want to use Docker at deployment time, don't split the app into multiple containers - you get more running parts to integrate and no gain. Instead, use supervisord and run all the processes in a single container (a sketch follows below).
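A rough sketch of that single-container setup, assuming two made-up processes (a web server and a worker) managed by supervisord:

    # Write a supervisord config that runs every process in one container
    # (the program names and commands are hypothetical):
    cat > supervisord.conf <<'EOF'
    [supervisord]
    nodaemon=true

    [program:web]
    command=python ./app.py

    [program:worker]
    command=python ./worker.py
    EOF

    # Inside the container, supervisord becomes the single top-level process:
    supervisord -c supervisord.conf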
There are a few hacky parts (e.g. how to inject SSH keys into the container), but so far it's really cool.
I also wrote a few wrapper scripts around lxc-attach so I can run things like ./build/container/command.sh tail -n 20 -f /path/to/log/somewhere/on/container
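A minimal version of such a wrapper might look like this (the container name is an assumption, and the real scripts presumably do more):

    #!/bin/sh
    # command.sh: run an arbitrary command inside the container via lxc-attach.
    # The container name is hardcoded here purely for illustration.
    CONTAINER=myapp
    exec lxc-attach -n "$CONTAINER" -- "$@"

    # Usage, e.g. tailing a log that lives inside the container:
    #   ./command.sh tail -n 20 -f /path/to/log/somewhere/on/container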
I can't share any code, but I'm happy to answer questions at [HNusername]@gmail.com
by ewindisch on 1/28/14, 5:34 PM
I must say that this is great. I've been advocating this sort of use of Docker for a while, as most people still think of Docker containers as individual units. I'm happy to see others adopting the viewpoint of using container groups.
However, it is something I do hope to eventually see supported within Docker itself.
Also, I've recently been telling people that since October you have been able to do this exact same thing using OpenStack Heat. Using Heat with Docker is similar to Fig (even the configuration syntax is quite similar), but it requires the heavyweight Heat service and an OpenStack cloud, which means that for most people it isn't even an option. It's great that Fig now provides a solid, lightweight alternative.
As 'thu' has already said, people want and need these features quickly, and I expect that over the next year we'll see serious interest growing around using these solutions and solving these problems.
by slowmover on 1/27/14, 7:20 PM
by aidos on 1/27/14, 11:40 PM
by hoprocker on 1/28/14, 7:10 AM
by rwmj on 1/27/14, 7:38 PM
by maikhoepfel on 1/27/14, 9:38 PM
by tudborg on 1/27/14, 7:36 PM
by finishingmove on 1/27/14, 10:56 PM
by frozenport on 1/28/14, 1:34 AM
For example, EC2 disabled some of their extended instruction sets to ensure uniformity, but I am not sure how long this will last. Then we will have to deal with Docker deployment problems.
I propose we dig deep into our Gentoo roots and build the dependencies on demand.
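A sketch of that build-on-demand idea: compile a dependency on the target host so the compiler can use whatever instruction sets are actually present (the library name and URL are made up):

    # Fetch and build a hypothetical C dependency from source on the target:
    curl -LO https://example.org/libfoo-1.0.tar.gz
    tar xzf libfoo-1.0.tar.gz && cd libfoo-1.0

    # -march=native tunes the build to the instruction sets of this CPU:
    CFLAGS="-O2 -march=native" ./configure && make && make install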
by notastartup on 1/27/14, 8:19 PM
One application I thought of is deploying to a client: you just get them to use the instance and there's zero configuration needed. But then, if you need to make updates to the code base, how do you push the code changes to all the deployed Fig/Docker instances that are already running?
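One plausible answer (a sketch, not something Fig documented; the image and container names are hypothetical) is to publish an updated image and have each deployed host pull it and recreate its containers:

    # On each deployed host, fetch the updated image:
    docker pull <repo>/myapp

    # Stop and remove the old container, then start one from the new image:
    docker stop myapp && docker rm myapp
    docker run -d --name myapp <repo>/myapp ./app.py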