by someguy1233 on 7/7/15, 11:46 AM with 154 comments
I run many websites/applications that need isolation from each other on a single server, but I just use the pretty-standard OpenVZ containers to deal with that (yes, I know I could use KVM servers instead, but I haven't run into any issues with VZ so far).
What's the difference between Docker and normal virtualization technology (OpenVZ/KVM)? Are there any good examples of when and where to use Docker over something like OpenVZ?
by tinco on 7/7/15, 2:50 PM
Docker is exactly like OpenVZ. It became popular because it took what OpenVZ offers with its Application Templates feature, put that front and center, and made it much more user-friendly.
So instead of following this guide: https://openvz.org/Application_Templates
users of Docker write a Dockerfile, which in a simple case might be:
FROM nginx
COPY index.html /usr/share/nginx/html
So no fussing with finding a VE somewhere, downloading it, customizing it, installing stuff manually, stopping the container and tarring it; Docker does all that for you when you run `docker build`. Then you can push your nice website container to the public registry, ssh to your machine and pull it from the registry. Of course you can have your own private registry (we do) so you can have proprietary Docker containers that run your apps/sites.
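That whole workflow might look roughly like this (the image name and registry host below are made up for illustration):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t mysite .

# Tag and push it to a private registry (hypothetical host)
docker tag mysite registry.example.com/mysite
docker push registry.example.com/mysite

# On the target server: pull and run it
docker pull registry.example.com/mysite
docker run -d -p 80:80 registry.example.com/mysite
```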
From my perspective, the answer to your question would be: Always prefer Docker over OpenVZ, they are the same technology but Docker is easier to use.
But I've never really invested in OpenVZ so maybe there's some feature that Docker doesn't have.
by hueving on 7/7/15, 3:14 PM
No security patching story at your workplace? No problem, containers don't have one either! If someone has shipped a container that embedded a vulnerable library, you better hope you can get a hold of them for a rebuild or you have to pull apart the image yourself. It's the static linking of the 21st century!
by KaiserPro on 7/7/15, 12:52 PM
Docker is a glorified chroot and cgroup wrapper.
There is also a library of prebuilt Docker images (think of an image as a tar of a chroot) and a library of automated build instructions.
The library is the most compelling part of Docker. Everything else is basically a question of preference.
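In that spirit, the core isolation can be approximated by hand. A rough sketch of what the "chroot and cgroup wrapper" amounts to, with made-up paths and limits (requires root, cgroup v1 layout):

```shell
# Unpack an "image" (just a tarred root filesystem) somewhere
mkdir -p /srv/rootfs
tar -xf image.tar -C /srv/rootfs

# Create a cgroup and cap its memory at 512 MB
mkdir /sys/fs/cgroup/memory/demo
echo 536870912 > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes

# Move this shell into the cgroup; children inherit it
echo $$ > /sys/fs/cgroup/memory/demo/tasks

# Run a shell in fresh PID/mount/network namespaces,
# chrooted into the unpacked filesystem
unshare --pid --fork --mount --net chroot /srv/rootfs /bin/sh
```

Docker adds layered filesystems, image distribution and a nicer UX on top of essentially this.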
You will hear a lot about "build once, deploy anywhere". While true in theory, your mileage may vary.
What Docker is currently good for:
o micro-services that talk on a messaging queue
o supporting a dev environment
o build system hosts
However, if you wish to assign IP addresses to each service, Docker is not really mature enough for that. Yes, it's possible, but not very nice. You're better off looking at KVM or VMware.
There is also no easy hot migration, so there is no real solution for HA clustering of non-HA images. (Once again possible, but not without a lot of heavy lifting; VMware provides it with a couple of clicks.)
Basically Docker is an attempt at creating a traditional Unix mainframe system (not that this was the intention): a large lump of processors and storage that is controlled by a singular CPU scheduler.
However, true HA clustering isn't easy. Fleet et al force the application to deal with hardware failures, whereas VMware and KVM handle it in the hypervisor.
by grhmc on 7/7/15, 12:24 PM
This is the core problem that Docker solves, and in such a way that developers can do most of the dependency wrangling for me. I don't even mind Java anymore, because the CLASSPATHs can be figured out once, documented in the Dockerfile in a repeatable, programmatic fashion, and then ignored.
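For instance, a hypothetical Dockerfile for a Java app might record the classpath once and for all (the image tag, paths and class name here are invented):

```dockerfile
FROM java:8
COPY app.jar lib/ /opt/app/
# The CLASSPATH is figured out once, documented here, and then ignored
ENV CLASSPATH=/opt/app/app.jar:/opt/app/lib/*
CMD ["java", "com.example.Main"]
```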
In my opinion the rest of it is gravy. Nice tasty gravy, but I don't care so much about the rest at the moment.
Edit: As danesparza points out, nobody has mentioned immutable architecture. This is what we do at Clarify.io. See also: https://news.ycombinator.com/item?id=9845255
by shawnee_ on 7/7/15, 3:22 PM
But it's not, at least in my experience; not to mention that as of now, anything running Docker in production (probably a bad idea) is wide open to the OpenSSL security flaw in versions 1.0.1 and 1.0.2, despite knowledge of this issue being out there for at least a few days.
Docker's currently "open" issue on github: https://github.com/docker/compose/issues/1601
Other references: https://mta.openssl.org/pipermail/openssl-announce/2015-July... http://blog.valbonne-consulting.com/2015/04/14/as-a-goat-im-...
by alextgordon on 7/7/15, 12:56 PM
You can tear down the host server, then recreate it with not much more than a `git clone` and `docker run`.
2. Precise test environment. I can mirror my entire production environment onto my laptop. No internet connection required! You can be on a train, on a plane, on the beach, in a log cabin in the woods, and have a complete testing environment available.
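One way to carry that environment offline (image names are examples) is to save the production images to a tarball and load them on the laptop:

```shell
# On a machine with registry access: export the images to a tar file
docker save -o prod-images.tar myapp:1.0 postgres:9.4

# On the laptop, with no internet connection needed:
docker load -i prod-images.tar
docker run -d postgres:9.4
docker run -d myapp:1.0
```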
Docker is not a security technology. You still need to run each service on a separate host kernel, if you want them to be properly isolated.
by danesparza on 7/7/15, 1:30 PM
Docker gives you the ability to version your architecture and 'roll back' to a previous version of a container.
by zwischenzug on 7/7/15, 1:56 PM
- Docker is nothing new - it's a packaging of pre-existing technologies (cgroups, namespaces, AUFS) into a single place
- Docker has traction, ecosystem, community and support from big vendors
- Docker is _very_ fast and lightweight compared to VMs in terms of provisioning, memory usage, cpu usage and disk space
- Docker abstracts applications, not machines, which is good enough for many purposes
Some of these make a big difference in some contexts. I went to a talk where someone argued that Docker was 'just a packaging tool'. A sound argument, but packaging is a big deal!
Another common saw is "I can do this with a VM". Well, yes you can, but try spinning up 100 VMs in a minute and see how your MacBook Air performs.
by akshaykarle on 7/7/15, 1:42 PM
OpenVZ, LXC, Solaris zones and BSD jails, on the other hand, mainly run a complete OS, and their focus is quite different from packaging your applications and deployments.
You can also have a look at this blog post, which explains the differences in more detail: http://blog.risingstack.com/operating-system-containers-vs-a...
by jacques_chester on 7/7/15, 12:13 PM
Add in image registries and a decent CLI and the developer ergonomics are outstanding.
Technologies only attract buzz when they're accessible to mainstream developers on a mainstream platform. The web didn't matter until it was on Windows. Virtualization was irrelevant until it reached x86; containerization was irrelevant until it reached Linux.
Disclaimer: I work for a company, Pivotal, which has a more-than-passing interest in containers. I did a presentation on the history which you might find interesting: http://livestre.am/54NLn
by sudioStudio64 on 7/7/15, 5:44 PM
Some people have mentioned security...patching in particular. Containers won't help if you don't have patching down. At the very least it lets you patch in the lab and easily promote the entire application into production.
I think that the security arguments are a canard. By making it easier and faster to deploy you should be able to patch app dependencies as well. I, for one, would automate the download and install of the latest version of all libs in a container as part of the app build process. Hell, build them all from source.
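That automation might be as simple as a few lines in the Dockerfile (a Debian-based base image is assumed here), so every rebuild picks up the latest patched libraries:

```dockerfile
FROM debian:jessie
# Rebuilding the image re-applies all current security updates
RUN apt-get update && apt-get -y upgrade && \
    rm -rf /var/lib/apt/lists/*
```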
IT departments need to be able to easily move applications around instead of the crazy build docs that have been thrown over the wall for years.
by jtwebman on 7/7/15, 2:30 PM
by hmans on 7/7/15, 1:46 PM
by csardi on 7/7/15, 12:18 PM
by mariocesar on 7/7/15, 1:17 PM
It would be more than great if the docker tool had something like `docker serve` that started its own local registry.
For this use case, switching to Go was a great solution: building the binary is everything you need.
As for Docker being helpful for development: definitely yes. I switched to postgres, elasticsearch and redis containers instead of installing them on my computer; it's easy to flush and restart, and having different versions of services is also more manageable.
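That dev setup might look something like this (container names, ports and image tags are examples):

```shell
# Run backing services as throwaway containers instead of host installs
docker run -d --name dev-postgres -p 5432:5432 postgres:9.4
docker run -d --name dev-redis -p 6379:6379 redis:2.8

# Flush and restart a service from scratch
docker rm -f dev-postgres
docker run -d --name dev-postgres -p 5432:5432 postgres:9.4

# Trying a different version is just a different tag
docker run -d --name dev-postgres-93 -p 5433:5432 postgres:9.3
```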
by spaceisballer on 7/7/15, 12:14 PM
by dschiptsov on 7/7/15, 1:06 PM
by hosh on 7/7/15, 5:10 PM
What differentiates Docker is not virtualization, so much as package management. Docker is a package management tool that happens to allow you to execute the content of the package with some sort of isolation.
Further, when you look at it from that angle, you start seeing its flaws as well as its potential. It's no accident that Rocket and the Open Container Project are arising to standardize the container format. Other, less-well-known efforts include distributing the containers themselves over a p2p distribution system, such as IPFS.
by Kiro on 7/7/15, 2:12 PM
by pjc50 on 7/7/15, 2:35 PM
by tobbyb on 7/7/15, 8:54 PM
Docker took the LXC OS container template as a base, modified the container OS init to run a single app, built the OS file system with layers of aufs or overlayfs, and disabled storage persistence. And this is the app container.
This is an opinionated use case of containers that adds significant complexity; it's more a way to deploy app instances in a PaaS-centric scenario.
A lot of the confusion around containers comes from the absence of informed discussion on the merits and demerits of this approach, and from not understanding that there are easy-to-use OS containers like LXC that are perfectly usable by end users, just as VMs are, and then app containers that do a few more things on top of this.
You don't need to adopt Docker to get the benefits of containers; you adopt Docker to get the benefits of Docker, and often this distinction is not made.
A lot of users whose first introduction to containers is Docker tend to conflate Docker with containers, and thanks to some 'inaccurate' messaging from the Docker ecosystem think LXC is 'low level' or 'difficult' to use. Why would anyone try LXC if they think it's low level or difficult? But those who do will be pleasantly surprised by how simple and straightforward it is.
For those who want to understand containers, without too much fuss, we have tried to provide a short overview in a single page in the link below.
https://www.flockport.com/containers-minus-the-hype
Disclosure - I run flockport.com that provides an app store based on LXC containers and tons of tutorials and guides on containers, that can hopefully promote more informed discussion.
by theknarf on 7/7/15, 2:28 PM
I think that's the best way I can summarise what Docker _is_.
by mbrock on 7/7/15, 12:58 PM
Where I've worked in the past, setting up a new development or production environment has been difficult and relied on half-documented steps, semi-maintained shell scripts, and so on. With a simple setup of a Dockerfile and a Makefile, projects can be booted by installing one program (Docker) and running "make".
You could do that with other tools as well, but Docker, and even more so the emerging "standards" for container specification, seems like an excellent starting point.
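A minimal sketch of that setup, with an invented project name (note that Makefile recipe lines must be tab-indented):

```makefile
# Boot the whole project with a single `make run`,
# assuming only Docker is installed
IMAGE = myproject

build:
	docker build -t $(IMAGE) .

run: build
	docker run --rm -p 8080:8080 $(IMAGE)
```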
by bfirsh on 7/7/15, 2:40 PM
by corradio on 7/7/15, 12:32 PM
by johnminter on 7/7/15, 8:19 PM
https://github.com/mine-cetinkaya-rundel/useR-2015/blob/mast...
by lgunsch on 7/7/15, 8:08 PM
https://en.wikipedia.org/wiki/Operating-system-level_virtual...
Edit: formatting.
by somberi on 7/8/15, 3:08 AM
by theneb on 7/7/15, 1:10 PM
by justincormack on 7/7/15, 12:32 PM
by xaduha on 7/7/15, 3:18 PM
I think it was here [1], but deleted now.
[1] http://ibuildthecloud.tumblr.com/post/63895248725/docker-is-...
by tfn on 7/9/15, 3:29 PM
TL;DR: It's better for deploying applications and running them than using home-made scripts.
by atsaloli on 7/7/15, 12:16 PM
by programminggeek on 7/7/15, 8:07 PM
by kolyshkin on 7/10/15, 1:50 AM
Technologically, both OpenVZ and Docker are similar, i.e. they are containers -- isolated userspace instances relying on Linux kernel features such as namespaces. [Shameless plug: most of the namespaces functionality is there because of OpenVZ engineers' work on upstreaming.] Both Docker and OpenVZ have tools to set up and run containers. This is where the similarities end.
The differences are:
1. System containers vs application containers
OpenVZ containers are very much like VMs, except for the fact that they are not VMs but containers, i.e. all containers on a host run on top of one single kernel. Each OpenVZ container has everything (init, sshd, syslogd, etc.) except the kernel (which is shared).
Docker containers are application containers, meaning Docker only runs a single app inside (e.g. a web server, a SQL server, etc.).
2. Custom kernel vs vanilla kernel
OpenVZ currently comes with its own kernel. Ten years ago there were very few container features in the upstream kernel, so OpenVZ had to provide its own kernel, patched for container support. That support includes namespaces, resource management mechanisms (CPU scheduler, I/O scheduler, User Beancounters, two-level disk quota, etc.), virtualization of /proc and /sys, and live migration. Over ten years of work by OpenVZ kernel devs and other interested parties (such as Google and IBM), a lot of this functionality has made it into the upstream Linux kernel. That opened the way for other container orchestration tools to exist -- including Docker, LXC, LXD, and CoreOS. While many small things are still missing, the last big thing -- checkpointing and live migration -- was also recently implemented upstream; see the CRIU project (a subproject of OpenVZ, so another shameless plug: it is OpenVZ who brought live migration to Docker). Still, OpenVZ comes with its own custom kernel, partly to retain backward compatibility, partly because some features are still missing from the upstream kernel. Nowadays that kernel is optional, but still highly recommended.
Docker, on the other hand, runs on top of a recent upstream kernel, i.e. it does not need a custom kernel.
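For a taste of the checkpoint/restore functionality mentioned above, CRIU can also be driven directly (requires root; the PID and image directory here are examples):

```shell
# Checkpoint a running process tree to disk
criu dump -t 1234 -D /tmp/ckpt --shell-job

# Restore it later, potentially on another machine
criu restore -D /tmp/ckpt --shell-job
```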
3. Scope
Docker has a broader scope than OpenVZ. OpenVZ just provides you with a way to run secure, isolated containers, manage them, tinker with resources, live migrate, snapshot, etc. But most of the OpenVZ stuff is in the kernel.
Docker has some other things in store, such as Docker Hub -- a global repository of Docker images, Docker Swarm -- a clustering mechanism to work with a pool of Docker servers, etc.
4. Commercial stuff
OpenVZ is the base for a commercial solution called Virtuozzo, which is not available for free but adds some more features, such as a cluster filesystem for containers, rebootless kernel upgrades, more/better tools, better container density, etc. With Docker there's no such thing. I am not saying it's good or bad, just stating the difference.
This is probably it. Now, it's not that OpenVZ and Docker are opposed to each other, in fact we work together on a few things:
1. OpenVZ developers are the authors of CRIU, P.Haul, and the CRIU integration code in Docker's libcontainer. This is the software that enables checkpoint/restore support for Docker.
2. Docker containers can run inside OpenVZ containers (https://openvz.org/Docker_inside_CT)
3. OpenVZ devs are the authors of libct, a C library to manage containers, a proposed replacement for or addition to Docker's libcontainer. When using libct, you can use the enhanced OpenVZ kernel for Docker containers.
There's more to come, stay tuned.
by droidztix on 7/8/15, 6:47 AM