from Hacker News

Fig: Fast, isolated development environments using Docker

by andrewgodwin on 1/27/14, 6:23 PM with 44 comments

  • by thu on 1/27/14, 9:41 PM

    I'm not involved with this project, but there is some confusion in this thread; maybe I can share my point of view:

    Docker is a tool to run processes with some isolation; the big selling point is that those processes come nicely packaged with "all" their dependencies as images.

    To understand "all" their dependencies, think of the C dependencies of, say, a Python or Ruby app. Those are not the kind of dependencies that a tool like virtualenv can solve properly. Think also of assets or configuration files.

    So instead of running `./app.py` freshly downloaded from some Git <repo>, you would run `docker run <repo> ./app.py`. In the former case, you would need to take care of, say, the C dependencies. In the latter case, they are packaged in the image that Docker downloads from <repo> prior to running the ./app.py process in it. (Note that the two <repo> are not the same thing: one is a Git repo, the other is a Docker repo.)
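
    As a concrete sketch of the two workflows (the repository and image names here are made up):

        # Plain checkout: the C dependencies are your problem
        git clone https://github.com/example/app.git && cd app
        ./app.py    # fails unless libxml2, libpq, etc. are already installed

        # Docker: the image pulled from the Docker repo bundles those dependencies
        docker run example/app ./app.py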

    So really, at this point, that's what Docker is about: running processes. Now, Docker offers a quite rich API to run those processes: shared volumes (directories) between containers (i.e. running images), forwarding ports from the host to the container, displaying logs, and so on.
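
    For illustration, the CLI equivalents of those features (the container and image names are made up):

        # Share a host directory into the container as a volume
        docker run -v /host/data:/data example/app ./app.py

        # Forward port 80 on the host to port 8000 in the container
        docker run -p 80:8000 example/app ./app.py

        # Display the logs of a container
        docker logs my_container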

    But that's it: Docker, as of now, remains at the process level. While it provides options to orchestrate multiple containers to create a single "app", it doesn't address the management of such a group of containers as a single entity.

    And that's where tools such as Fig come in: they let you talk about a group of containers as a single entity. Think "run an app" (i.e. "run an orchestrated cluster of containers") instead of "run a container".
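
    For a flavour of what "an app as a group of containers" looks like, here is a minimal, hypothetical fig.yml (service and image names are made up; Fig's documentation has the real syntax):

        web:
          build: .             # image built from the app's Dockerfile
          command: ./app.py
          ports:
            - "8000:8000"
          links:
            - db               # Fig wires the web container to the db container
        db:
          image: orchardup/postgresql

    A single `fig up` then builds and starts the whole group.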

    Now, I think that Fig falls short of that goal (I haven't played with it; that's just from a glance at its documentation). Abstracting over the command-line arguments of Docker by wrapping them in a YAML file is the easy part (i.e. launching a few containers). The hard part is managing the cluster the way Docker manages the containers: displaying aggregated logs, replacing a particular container with a new version, moving a container to a different host (and thus abstracting over the networking between different hosts), and so on.

    This is not a negative critique of Fig; many people are working on that problem. For instance, I solve that very problem with ad-hoc bash scripts. In doing so, we are all just exploring the design space.

    I believe that Docker itself will provide that next level in the future; it's just that people need these features quickly.

    tl;dr:

    Docker -> processes

    Fig (and certainly Docker in the future) -> clusters (or formations) of processes

  • by fit2rule on 1/27/14, 6:52 PM

    I'd love to use this... but who has time to learn yet another configuration and provisioning management tool? I mean, I can make the time, and will, but since this is just another Docker management tool, let's use this moment to pick on it a little bit...

    What this needs is the ability to be pointed at a working VM (let's say, an Ubuntu 13.10 server) and then just figure out what's different about it compared to the distro release.

    Something like the blueprint tool, in fact.

  • by cameronmaske on 1/27/14, 7:44 PM

    I've been using Fig on some side projects. It's incredibly exciting how easy it makes configuring what could otherwise be a quite involved development environment. Installing Redis is a 3-line addition to a fig.yml file (https://github.com/orchardup/fig-rails-example/blob/master/f...). It also has amazing potential as a host-agnostic development environment shared across a team.
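
    The linked file is truncated above, but such an addition might look something like this (the image name is a guess; Fig's documentation has the canonical example):

        redis:
          image: orchardup/redis

    together with a corresponding entry under the web service's links.
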
  • by bryanlarsen on 1/27/14, 8:36 PM

    Is this meant to be a next-generation Vagrant? What advantages does it have over vagrant-lxc?

  • by airnomad on 1/28/14, 11:05 AM

    I just spent a few days dockerizing my development environment, and now I'm able to recreate the complete environment in one command. It took less than 100 lines of bash and Dockerfiles.

    So I put those files in VCS so the next guy can just clone the repo, run make devel, and get the app running, ready to code on.
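
    A hypothetical make devel target along those lines (the image name, port, and mount point are made up):

        # Makefile -- build the image, then start a dev container with the source mounted in
        # (recipe lines must start with a tab)
        devel:
        	docker build -t myapp-dev .
        	docker run -i -t -p 8000:8000 -v $(CURDIR):/src myapp-dev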

    So unless you want to use Docker for deployment too, don't split the app into multiple containers: you get more moving parts to integrate and no gain. Instead, use supervisord and run all the processes in a single container.
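
    A minimal sketch of that supervisord setup (the program names and commands are made up):

        ; supervisord.conf -- run several processes in one container
        [supervisord]
        nodaemon=true                ; stay in the foreground as the container's main process

        [program:web]
        command=python /src/app.py

        [program:redis]
        command=redis-server

    The Dockerfile then just ends with something like CMD ["supervisord", "-c", "/etc/supervisord.conf"].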

    There are a few hacky parts (like how to inject SSH keys into the container), but so far it's really cool.

    I also wrote a few wrapper scripts around lxc-attach so I can run ./build/container/command.sh tail -n 20 -f /path/to/log/somewhere/on/container
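
    A sketch of such a wrapper, assuming Docker's LXC execution driver of the era (where the LXC container name is the full Docker container ID); the container name and the inspect template field are assumptions:

        #!/bin/bash
        # command.sh -- run an arbitrary command inside the dev container
        ID=$(docker inspect --format '{{.ID}}' myapp-dev)   # field is '.Id' on some Docker versions
        sudo lxc-attach -n "$ID" -- "$@"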

    I can't share any code, but I'm happy to answer questions at [HNusername]@gmail.com

  • by ewindisch on 1/28/14, 5:34 PM

    I'm an employee of Docker, Inc. although my thoughts are my own, etc...

    I must say that this is great. I've been advocating this sort of usage of Docker for a while, as most people still think of Docker and containers as individual units. I'm happy to see others adopting the viewpoint of using container groups.

    However, it is something I do hope to eventually see supported within Docker itself.

    Also, I've recently been telling others how, since October, you've been able to do this exact same thing using OpenStack Heat. Using Heat with Docker is similar to Fig (even the configuration syntax is quite similar), but it requires the heavyweight Heat service and an OpenStack cloud, which means that for most people it isn't even an option. It's great that Fig now provides a solid, lightweight alternative.
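
    From memory, a Heat template fragment using that Docker plugin looked roughly like this; the resource type and property names below are recollections, not verified syntax:

        heat_template_version: 2013-05-23
        resources:
          app:
            type: DockerInc::Docker::Container
            properties:
              image: example/app     # hypothetical image
              cmd: ./app.py          # property name is an assumption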

    As 'thu' has said already, people want and need these features quickly, and I expect that in the next year we'll see serious interest growing around using these solutions and solving these problems.

  • by slowmover on 1/27/14, 7:20 PM

    Can anyone explain, in a nutshell, what features this tool provides beyond just using Docker itself?

  • by aidos on 1/27/14, 11:40 PM

    Looks interesting. I know that Docker is all about the single-process model, but there are some images I've been meaning to play with that align themselves more with the single-app (including dependencies) model, which Fig also seems to attempt to solve:

    https://github.com/phusion/baseimage-docker

    https://github.com/phusion/passenger-docker

  • by hoprocker on 1/28/14, 7:10 AM

    This looks great. It's a lot like what I've scraped together using a fabfile and Python dicts for configuration, but much more formal. I'm excited to try it out.

  • by rwmj on 1/27/14, 7:38 PM

    See also libvirt-sandbox (in Fedora for more than a year), which lets you use either KVM or LXC to sandbox apps:

    https://fedoraproject.org/wiki/Features/VirtSandbox
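
    For comparison, its command-line front end runs a single command in a throwaway sandbox; the exact invocations below are illustrative:

        # Run a shell in an LXC-backed sandbox
        virt-sandbox -c lxc:/// /bin/sh

        # The same command, isolated inside a KVM guest
        virt-sandbox -c qemu:///session /bin/sh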

  • by maikhoepfel on 1/27/14, 9:38 PM

    So, why is this development-only? What's missing to make it production-ready?

  • by tudborg on 1/27/14, 7:36 PM

    This is exactly what I have been working on for the past few days, just in a way nicer package. I think I will ditch my current work and use this instead. Thanks.

  • by finishingmove on 1/27/14, 10:56 PM

    Can't help but think about the Framework Interoperability Group whenever I read FIG. This looks awesome, though.

  • by frozenport on 1/28/14, 1:34 AM

    How does Docker handle ABI incompatibility?

    For example, EC2 disabled some of their extended instruction sets to ensure uniformity, but I am not sure how long this will last. Then we will have to deal with Docker deployment problems.

    I propose we dig deep into our Gentoo roots and build the dependencies on demand.

  • by notastartup on 1/27/14, 8:19 PM

    Can someone explain to me the applications of Fig and Docker? Also, how do they differ?

    One application I thought of is deploying to a client: you just get them to use the instance and there's zero configuration needed. But then, what if you need to make updates to the code base? How do you push the code changes to all the deployed Fig/Docker instances that are already running?