by spang on 7/9/15, 10:30 PM with 125 comments
by svieira on 7/10/15, 3:09 AM
* Downloading any new dependencies to a cached folder on the server (this was before wheels had really taken off)

* Running pip install -r requirements.txt from that cached folder into a new virtual environment for that deployment (`/opt/company/app-name/YYYY-MM-DD-HH-MM-SS`)

* Switching a symlink (`/some/path/app-name`) to point at the latest virtual env.

* Running a graceful restart of Apache.
Fast, zero downtime deployments, multiple times a day, and if anything failed, the build simply didn't go out and I'd try again after fixing the issue. Rollbacks were also very easy (just switch the symlink back and restart Apache again).
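For illustration, a minimal sketch of that deploy flow; the release and symlink paths follow the comment, while the dependency-cache location is a placeholder:

```sh
# Build a fresh virtualenv for this release, installing only from the local cache
RELEASE=/opt/company/app-name/$(date +%Y-%m-%d-%H-%M-%S)
virtualenv "$RELEASE"
"$RELEASE/bin/pip" install --no-index --find-links /var/cache/pip-downloads \
    -r requirements.txt

# Switch the live symlink to the new environment, then restart Apache gracefully
ln -sfn "$RELEASE" /some/path/app-name
apachectl graceful

# Rollback: re-point the symlink at the previous release directory and
# run "apachectl graceful" again
```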
These days the things I'd definitely change would be:
* Use a local PyPI rather than a per-server cache

* Use wheels wherever possible to avoid re-compilation on the servers.
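A hedged sketch of the wheel approach, with the wheelhouse directory as a placeholder:

```sh
# On a build host: compile/collect wheels once into a wheelhouse directory
pip wheel -r requirements.txt -w ./wheelhouse

# On each server: install from the wheelhouse only, never from the network
pip install --no-index --find-links ./wheelhouse -r requirements.txt
```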
Things I would consider:
* Packaging (deb / fat-package / docker) to avoid having any extra work done per machine + easy promotions from one environment to the next.
by morgante on 7/10/15, 3:13 AM
Their first reason (not wanting to upgrade a kernel) is terrible considering that they'll eventually be upgrading it anyway.
Their second is slightly better, but it's really not that hard. There are plenty of hosted services for storing Docker images, not to mention that "there's a Dockerfile for that."
Their final reason (not wanting to learn and convert to a new infrastructure paradigm) is the most legitimate, but ultimately misguided. Moving to Docker doesn't have to be an all-or-nothing affair. You don't have to do random shuffling of containers and automated shipping of new images; there are certainly benefits of going wholesale Docker, but it's by no means required. At the simplest level, you can just treat the Docker container as an app and run it as you normally would, with all your normal systems. (i.e. replace "python example.py" with "docker run example")
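As a rough illustration of that swap (assuming a Dockerfile in the project and an image tag of `example`):

```sh
# Build the image once, then run it wherever "python example.py" used to run
docker build -t example .
docker run --rm example
```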
by Cieplak on 7/9/15, 10:43 PM
by doki_pen on 7/10/15, 1:44 AM
Basically, what it comes down to is a build script that builds a deb with the virtualenv of your project, versioned properly (build number, git tag), along with any other files that need to be installed (think init scripts and some "about" file describing the build). It also should do things like create users for daemons. We also use it to enforce consistent package structure.
We use devpi to host our python libraries (as opposed to applications), reprepro to host our deb packages, standard python tools to build the virtualenv and fpm to package it all up into a deb.
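A hypothetical sketch of the fpm step alone, not the commenter's actual build script; the package name, version variables and postinst.sh file are placeholders:

```sh
# Build the project's virtualenv, then let fpm wrap it into a versioned deb
APP=myapp
VERSION=1.2.3-${BUILD_NUMBER:-0}

virtualenv "/opt/$APP"
"/opt/$APP/bin/pip" install .

fpm -s dir -t deb \
    --name "$APP" --version "$VERSION" \
    --deb-user "$APP" --deb-group "$APP" \
    --after-install postinst.sh \
    "/opt/$APP"
```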
All in all, the bash build script is 177 LoC and is driven by a standard build script we include in every application's repository, defining variables and optionally overriding build steps (if you've used portage...).
The most important thing is that you have a standard way to create python libraries and applications to reduce friction on starting new projects and getting them into production quickly.
by remh on 7/9/15, 11:07 PM
https://www.datadoghq.com/blog/new-datadog-agent-omnibus-tic...
It's more complicated than the solution proposed by Nylas, but ultimately it gives you full control of the whole environment and ensures that you won't hit ANY dependency issue when shipping your code to weird systems.
by kbar13 on 7/9/15, 10:41 PM
by tschellenbach on 7/9/15, 11:58 PM
Deploys are harder if you have a large codebase to ship. rsync works really well in those cases. It requires a bit of extra infrastructure, but is super fast.
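A hedged sketch of an rsync-based push; the host, paths and service name are placeholders:

```sh
# Push the code to the app server, skipping VCS metadata and compiled files
rsync -az --delete \
    --exclude '.git' --exclude '*.pyc' \
    ./ deploy@app-server:/srv/myapp/current/

# Then restart the application on the remote side
ssh deploy@app-server 'sudo service myapp restart'
```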
by sandGorgon on 7/9/15, 10:59 PM
For someone trying out building Python deployment packages using deb, rpm, etc., I really recommend Docker.
by sophacles on 7/9/15, 11:05 PM
On the app end we just build a new virtualenv, and launch. If something fails, we switch back to the old virtualenv. This is managed by a simple fabric script.
by nZac on 7/10/15, 2:19 AM
Bitbucket and GitHub are reliable enough for how often we deploy that we aren't all that worried about downtime from those services. We could also pull from a dev's machine should the situation be that dire.
We have looked into Docker but that tool has a lot more growing to do before "I" would feel comfortable putting it into production. I would rather ship a packaged VM than Docker at this point; there are too many gotchas that we don't have time to figure out.
by viraptor on 7/10/15, 3:30 AM
It's really not hard to deploy a package repository. Either a "proper" one with a tool like `reprepro`, or a stripped-down one that is basically just .deb files in one directory. There's really no need for curl+dpkg. And a proper repository gives you dependency handling for free.
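For illustration, the stripped-down variant can be as little as this (paths, hostname and package name are placeholders):

```sh
# Serve a directory of .deb files plus a generated Packages index over HTTP
cd /var/www/repo
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

# On the clients, point apt at the trivial repository, e.g. in sources.list:
#   deb [trusted=yes] http://repo.example.com/ ./
apt-get update && apt-get install myapp
```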
by perlgeek on 7/10/15, 11:03 AM
You can set a different base path in debian/rules with `export DH_VIRTUALENV_INSTALL_ROOT=/your/path/here`
by serkanh on 7/10/15, 1:50 PM
by erikb on 7/10/15, 9:11 AM
Do people really do that? Git pull their own projects onto the production servers? I spent a lot of time putting all my code into versioned wheels for deployment, even if I'm the only coder and the only user. Application and development are, and should be, two different worlds.
by objectified on 7/10/15, 7:41 AM
by rfeather on 7/10/15, 1:46 AM
by StavrosK on 7/10/15, 1:12 AM
by avilay on 7/10/15, 1:24 AM
1. Create a python package using setup.py

2. Upload the resulting .tar.gz file to a central location

3. Download to prod nodes and run pip3 install <packagename>.tar.gz
Rolling back is pretty simple - pip3 uninstall the current version and re-install the old version.
Any gotchas with this process?
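A minimal sketch of that flow, with the package name, version and host as placeholders:

```sh
python3 setup.py sdist                             # 1. builds dist/myapp-1.0.0.tar.gz
scp dist/myapp-1.0.0.tar.gz repo-host:/srv/dist/   # 2. upload to a central location

# 3. on the prod node:
pip3 install myapp-1.0.0.tar.gz

# rollback: uninstall the current version, reinstall the previous one
pip3 uninstall -y myapp
pip3 install myapp-0.9.9.tar.gz
```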
by velocitypsycho on 7/10/15, 3:09 AM
I vaguely remember .deb files having install scripts, is that what one would use?
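For reference, .deb packages do support maintainer scripts (preinst, postinst, prerm, postrm); a hedged sketch of a postinst, with the user and service names as placeholders:

```sh
#!/bin/sh
# DEBIAN/postinst (or fpm's --after-install): runs on the target machine after
# the package's files are unpacked; user and service names are placeholders.
set -e
if [ "$1" = "configure" ]; then
    adduser --system --no-create-home myapp || true
    service myapp restart || true
fi
```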
by lifeisstillgood on 7/9/15, 11:39 PM
by webo on 7/10/15, 1:25 AM
So how is this solving the first issue? If PyPI or the Git server is down, this is exactly like the git & pip option.
by compostor42 on 7/10/15, 3:01 AM
How has your experience with Ansible been so far? I have dabbled with it but haven't taken the plunge yet. Curious how it has been working out for you all.
by BuckRogers on 7/10/15, 1:13 AM
by ah- on 7/9/15, 10:55 PM
by jacques_chester on 7/10/15, 2:50 AM
cf push some-python-app
So far it's worked pretty well. It works for Ruby, Java, Node, PHP and Go as well.
by daryltucker on 7/9/15, 11:00 PM
by theseatoms on 7/10/15, 3:23 PM
by stefantalpalaru on 7/10/15, 12:06 AM
No, the state of the art where I'm handling deployment is "run 'git push' to a test repo, where a post-update hook runs a series of tests; if those tests pass, it pushes to the production repo, where a similar hook does any required additional operations".
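A hedged sketch of such a hook; the test command and the `production` remote are placeholders:

```sh
#!/bin/sh
# hooks/post-update in the bare test repo
set -e
WORKDIR=$(mktemp -d)

# Check the pushed branch out into a scratch tree and run the test suite there
git --work-tree="$WORKDIR" checkout -f master
(cd "$WORKDIR" && ./run_tests.sh)
rm -rf "$WORKDIR"

# Only reached if the tests passed: forward to the production repo, whose own
# hook performs the remaining deployment steps
git push production master
```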
by hobarrera on 7/9/15, 11:44 PM
Looks like these guys never heard of things like CI.