Python is an excellent language. It’s easy to learn, easy to use, and comes with a rich standard library and an even richer ecosystem of third-party packages. Python is available everywhere and can be used for almost any task.
Unfortunately, deployment is a different story. I love how compiled languages work: you bundle a single binary and just run it, with no dependency hell.
Nowadays, Docker can come to the rescue. The problem is, Docker has plenty of issues of its own. I wouldn’t want to run Docker without a container orchestrator such as Kubernetes, but with that, you have to manage many services and solve a lot of new problems you wouldn’t otherwise have. I could write a whole blog post about that alone.
I can see the benefits of containers; in some cases, nothing else makes sense. But a container is not a hammer, and not everything is a nail.
So, how can Python be deployed sensibly when a container doesn’t make sense?
I like Debian, and Debian-like systems are the most popular ones. Well, at least we can agree on that for the desktop; servers are more complicated. Anyway, the rest of this applies mostly the same either way.
So… my answer after many years of tweaking is a Debian package with dh_virtualenv, systemd services, included configuration files, and a Makefile taking care of the virtualenv and all the actions needed during development (and deployment).
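As an illustration of that last piece, here is a minimal Makefile sketch. The target names, the `venv` directory, and the `requirements.txt` file are my assumptions for a typical project layout, not anything mandated by the tooling:

```makefile
# Minimal development Makefile sketch (hypothetical layout).
VENV = venv
PIP = $(VENV)/bin/pip

.PHONY: test package clean

# Create the virtualenv and install dependencies into it.
$(VENV): requirements.txt
	python3 -m venv $(VENV)
	$(PIP) install -r requirements.txt

# Run the test suite inside the virtualenv.
test: $(VENV)
	$(VENV)/bin/python -m pytest tests/

# Build the Debian package (debian/ directory assumed to exist).
package:
	dpkg-buildpackage -us -uc

clean:
	rm -rf $(VENV)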
Let’s elaborate a little. First, the Debian package. I have seen many deployments done with Git or Fabric or scp or whatever, but the problem is, those tools end up doing jobs that the system’s packaging tool already handles. There is no need to write scripts to manage configuration files, remove an app, and so on. Debian also has well-defined places for different kinds of files (logs, PID files, binaries, and so on).
The problem with a Debian package is dependencies. Every dependency in your Python setup.py has to be mirrored in debian/control. That would be easy if Debian always had up-to-date versions of, ideally, all Python libraries available as Debian packages. This is not the case, though. At Seznam.cz, we solved that problem with a custom Debian repository and custom builds of Python libraries. Not an optimal solution, but it works well if you can afford it.
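To illustrate the mirroring, here is a hypothetical sketch for an app that depends on Flask and requests; the version constraints and package names are examples only, and each Python dependency has to exist as a Debian package with a matching version:

```python
# setup.py (excerpt) — dependencies declared on the Python side
install_requires=[
    "flask>=1.0",
    "requests",
],
```

```
# debian/control (excerpt) — the same dependencies repeated as Debian packages
Depends: ${misc:Depends}, python3-flask (>= 1.0), python3-requests
```

Keeping the two lists in sync by hand is exactly the maintenance burden described above.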
I started to use dh_virtualenv, a helper for building Debian packages that creates a virtualenv during the build and keeps all Python dependencies as plain Python modules in one bundle. I used this technique at CZ.NIC as well, and I never ran into any problem with it. Thanks to that solution, you don’t touch any Python libraries installed system-wide: you can install any Python library as a Debian package or via pip, and your app will not be affected.
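The wiring for dh_virtualenv is small: the debian/rules file just asks debhelper to run with the virtualenv plugin, and the Python dependencies are picked up from the project itself (requirements.txt or setup.py). A minimal sketch:

```makefile
#!/usr/bin/make -f
# debian/rules — let debhelper build the package through dh-virtualenv,
# which creates the virtualenv and installs the app's Python
# dependencies into it at build time.
%:
	dh $@ --with python-virtualenv
```

With this in place, debian/control no longer needs to list every Python library, only system-level dependencies such as the Python interpreter itself.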
The next thing is something to (not just) start your app. I used init.d scripts for many years; they had their problems, but over the years I solved most of them, so they worked just fine. Today, though, we need to move to systemd. Whether you or I like it doesn’t matter; it’s here. I think it’s okay, better than init scripts, but it lacks proper documentation with suitable examples.
Which is also one reason I’m writing this blog post. After years of copy & paste and improving it for every new app, I created a working example, and I want to keep that example up-to-date as a reference guide. The documentation is not very good, examples are hard to find, and I haven’t come across best practices for this kind of deployment.
Here it is!
The example contains more ideas, like the Makefile or the included configurations. You may have a different opinion, and that’s fine; you don’t have to do it that way. It’s the way I build web applications. The critical part is the Debian package with dh_virtualenv and a working systemd unit for uwsgi. Use any framework or configuration system you like!
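For a flavor of what that systemd part can look like, here is a minimal unit sketch. The service name, the /opt/venvs/myapp path (a common dh-virtualenv install location), and the uwsgi.ini path are my assumptions; adjust them to your package:

```ini
# /lib/systemd/system/myapp.service — hypothetical names and paths
[Unit]
Description=myapp web application (uWSGI)
After=network.target

[Service]
# uWSGI is installed inside the package's virtualenv by dh-virtualenv,
# so we start it from there rather than from a system-wide install.
ExecStart=/opt/venvs/myapp/bin/uwsgi --ini /etc/myapp/uwsgi.ini
Restart=on-failure
User=www-data
Group=www-data

[Install]
WantedBy=multi-user.target
```

Shipped inside the Debian package, this unit gets registered at install time, and the app is then managed with the usual `systemctl start myapp` / `systemctl enable myapp`.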