It's a dream come true

Ever since I was a wide-eyed little boy, I would look up at the stars and wonder in wonder: “What if I could lease my very own, beefy, dedicated Hetzner server and have an easy way to deploy all my projects onto that?” But lo, my dreams were dashed because Docker wouldn’t be invented for another twenty years, and Hetzner did not accept Mastercard at the time.

Decades later, with Docker finally invented and Hetzner accepting all major credit cards, my dream lay all but forgotten, because Docker could not do zero-downtime deploys natively and I hated it. That was how things remained, until my friend Theodore told me that he tried Dokku and that it worked very well.

I had heard of Dokku (and Fig, Deis, Flynn, Kubernetes, etc etc), but I never paid too much attention, as these PaaSes struck me as too webscale for my simple projects. All I wanted was a way to skip all the boilerplate configuration of deploying a Django app, and Ansible wasn’t cutting it, as it was still too much plumbing.

Since Theodore tried it and said it was apparently pretty easy to deploy with, though, I figured I’d give it a shot and see. It helped that Dokku was explicitly designed to be light and self-contained, whereas Kubernetes is for much larger deployments, so Dokku fit my use case exactly.

Trying Dokku out

To try Dokku out, I needed a project. Luckily, I had just the thing, as my friend Stelios and I had written Eternum (an IPFS file pinning service) just a few weeks earlier, so it was a prime candidate for Dokkuization.

Portable storage closets

Spoiler alert: Dokku turned out to be fantastic and I love it now. I’ve only played with it for half a day, but managed (after a few hours of work) to make provisioning a new app a matter of minutes, and to have new versions deploy with minimal (or zero) downtime. It looks like it will really make deploying new applications much easier (with full TLS support through Let’s Encrypt, no less), and guess what: I’m going to tell you exactly how to deploy your own Django projects to a Dokku server in less than five minutes.

Let’s see how!

Deploying Django

If you’re a regular reader, you might remember that I tried to deploy Django with Docker, and, while it worked rather well, deployments took an unacceptable amount of downtime. Dokku fortunately does not suffer from that problem, as it waits for your new version to be ready before switching over, minimizing downtime to almost zero (we’ll talk about that later).

I’ll show you exactly what you need to do, step by step, to deploy your application, but you can also take a look at the Dokku deployment guide, which is very similar (but more generic). The following guide assumes you have Dokku set up on a server already, so do that first by installing Dokku and then continue reading.


As you might already know, what Dokku does is provision a bunch of Docker containers and take care of the connections between them. It also handles the web server part by routing vhosts to your app, and generally handles all the nitty-gritty of getting your code working in a production setting.

Eternum, the project I want to deploy, uses Django, Postgres, Redis and Celery, so the guide will focus on these components. Hopefully your application will be similar, but by the end you should know enough about how Dokku works to adapt the process to your project regardless.

Zero-downtime deployments

The way Dokku does zero-downtime deployments is pretty simple and effective: It launches the new version of your application in a container while the old version is still running, waits for the new version to be up and checks that it’s successfully running (by optionally hitting some URLs you specify and expecting success), and, if everything goes well, switches over to the new version so that no connections are dropped.

If you have no migrations to perform, that is ideal: you get literally zero downtime, as no connections are dropped or refused. However, there is a slight issue with migrations, which is not Dokku-specific; it’s the same issue you’ll have with any other stack, and the first thing I ask whenever I hear people talk about zero-downtime deployments: what happens to connections served by the old version after the new version has changed the schema?


Since migrations (necessarily) happen before switching over to the new code version, there is an interval of time between when the migrations are done and when the new version is deployed, when things are being checked, or are reloading, or whatnot. In that interval, the old code will look for old fields and will not find them, or will try to insert data and hit new fields that it doesn’t know about, triggering a “field cannot be NULL” exception, which means downtime.

Even worse, if your deployment ends up failing, the migrations won’t be rolled back (they usually could be, but that is hard to do automatically), and you’ll end up with your application hitting database errors until you can deploy a working version.

You can generally avoid this by only creating backwards-compatible migrations, e.g. by only adding columns with default values, or only deleting columns that are no longer in use. That pretty much has to be handled on a case-by-case basis, as there’s no hard-and-fast rule, just be aware that it can happen and think about whether the few seconds of downtime that your migrations will bring are acceptable to you.
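
To make that concrete, here’s a small standalone sketch (using SQLite purely for illustration; the table and columns are made up) of the difference between a backwards-compatible and an incompatible schema change:

```python
import sqlite3

# Hypothetical schema standing in for a Django model's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Backwards-compatible migration: the new column has a default, so the
# "old" code below, which doesn't know the column exists, keeps working.
conn.execute("ALTER TABLE users ADD COLUMN plan TEXT DEFAULT 'free'")

# Old code inserting a row without the new column: no error.
conn.execute("INSERT INTO users (name) VALUES ('alice')")
plan = conn.execute("SELECT plan FROM users WHERE name = 'alice'").fetchone()[0]
print(plan)  # free

# By contrast, a NOT NULL column without a default breaks old writers.
try:
    conn.execute("ALTER TABLE users ADD COLUMN email TEXT NOT NULL")
except sqlite3.OperationalError as e:
    # SQLite refuses the migration outright; Postgres would instead leave
    # old INSERTs failing with "null value in column" errors.
    print("migration rejected:", e)
```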

As I said above, this problem is not Dokku-specific, so you’re probably already aware of it, but I figured it’s worth repeating here because it’s an important point to make for any system that claims zero-downtime deployments.

Preparing your application

Alright, now that the theory is out of the way, let’s get our application ready to run on Dokku. The first step is to adapt the Django part of your application so Dokku will know what you want to run and how.

The way I’ve deployed my project here is compatible with my previous post about Docker, and they work very well together, so I recommend following both procedures. The Dockerfile from the previous post will work well with some minor modifications, and will allow you to both deploy the app to Dokku and to use docker-compose to launch a complete stack locally so you can develop with something that matches production very closely.
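
I won’t reproduce the compose file from that post here, but a minimal sketch might look something like this (the service names and database credentials match the settings file shown later in this post; the image tags, command and ports are assumptions, so adjust to your project):

```yaml
version: "2"
services:
  web:
    build: .
    # The dev server is just for local work; production uses uwsgi via Dokku.
    command: python /code/manage.py runserver 0.0.0.0:8000
    environment:
      - IN_DOCKER=1
    ports:
      - "8000:8000"
    depends_on:
      - db
      - redis
  db:
    image: postgres:9.6
    environment:
      - POSTGRES_PASSWORD=password
  redis:
    image: redis
```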

If you don’t already have a project, you can easily create a Docker- and Dokku-compatible one (along with various other goodies) by using my django-project-template with a single command. If you already have a project, just keep reading (or visit the repo for django-project-template to see the complete/up-to-date files).

To begin with, I have created a directory called misc/dokku/ at the root of my project directory to hold all the Dokku-related files, as I like having them in one place. That directory’s contents look like this:

├── app.json
├── CHECKS
├── DOKKU_SCALE
├── Procfile
└── uwsgi.ini

Let’s start with the two files that need to be changed and are not located in that directory: Django’s settings.py and the Dockerfile.

Your settings.py will be very specific to your application, but I’ll give you some guidelines and snippets here so you can build on them and modify them to your liking.

import os
import re

# BASE_DIR is defined earlier in the standard settings.py.

# We should get secrets from the environment, never store them in git.
SECRET_KEY = os.environ.get("SECRET_KEY", 'secret')

# Set DEBUG to False if the NODEBUG env var has been set.
DEBUG = os.environ.get("NODEBUG") is None

# Set the allowed hosts based on the environment.
ALLOWED_HOSTS = ["web", "localhost"] if os.environ.get("NODEBUG") is None else [""]

if os.environ.get("IN_DOCKER"):
    # Stuff for when running in docker-compose.

    CELERY_BROKER_URL = 'redis://redis:6379/1'
    CELERY_RESULT_BACKEND = 'redis://redis:6379/1'

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': "postgres",
            'USER': 'postgres',
            'PASSWORD': 'password',
            'HOST': "db",
            'PORT': 5432,
        }
    }
elif os.environ.get("DATABASE_URL"):
    # Stuff for when running in Dokku.

    # Parse the DATABASE_URL env var.
    USER, PASSWORD, HOST, PORT, NAME = re.match(
        r"^postgres://(?P<username>.*?):(?P<password>.*?)@(?P<host>.*?):(?P<port>\d+)/(?P<db>.*?)$",
        os.environ.get("DATABASE_URL", ""),
    ).groups()

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': NAME,
            'USER': USER,
            'PASSWORD': PASSWORD,
            'HOST': HOST,
            'PORT': int(PORT),
        }
    }

    CELERY_BROKER_URL = os.environ.get("REDIS_URL", "") + "/1"
    CELERY_RESULT_BACKEND = os.environ.get("REDIS_URL", "") + "/1"
else:
    # Stuff for when running locally.

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }
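
You can sanity-check the DATABASE_URL parsing on its own with a made-up URL (the credentials and hostname below are placeholders):

```python
import re

# Hypothetical URL in the postgres://user:pass@host:port/db format.
url = "postgres://user1:hunter2@dokku-postgres-db:5432/mydb"

USER, PASSWORD, HOST, PORT, NAME = re.match(
    r"^postgres://(?P<username>.*?):(?P<password>.*?)@(?P<host>.*?):(?P<port>\d+)/(?P<db>.*?)$",
    url,
).groups()

print(USER, HOST, PORT, NAME)  # user1 dokku-postgres-db 5432 mydb
```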

The gist of the matter here is that secrets (and production-specific settings in general) should not be stored in the settings file. Instead, you should shift them into environment variables.

In general, what I like doing is having the settings file contain all the necessary information and dummy secret keys for running the application locally (or in compose), and having everything be overridable by environment variables for production.

The Dockerfile

This is what my Dockerfile looks like:

FROM python:latest
RUN apt-get update

# Install some necessary dependencies.
RUN apt-get install -y swig libssl-dev dpkg-dev netcat

# Install the requirements. This is done early so the requirements
# don't need to be reinstalled every time something unrelated changes,
# which would otherwise happen due to the way Docker does image
# caching.
RUN pip install -U pip
ADD requirements.txt /code/
RUN pip install -Ur /code/requirements.txt

# Add the Dokku-specific files to their locations.
ADD misc/dokku/CHECKS /app/
ADD misc/dokku/* /code/

# Copy the code and collect static media.
# You can use whitenoise to serve them, or CloudFront to proxy them.
COPY . /code/
RUN python /code/manage.py collectstatic --noinput

This will build your image and work for both docker-compose and Dokku.

Next, let’s deal with the files in the misc/dokku/ directory.

uwsgi.ini

Since I use uwsgi to serve my app, the configuration is fairly simple:

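The original file isn’t reproduced here, but a minimal uwsgi.ini along these lines should work (everything except module and http-socket is just a reasonable default, and "yourproject" is a placeholder for your project’s module):

```ini
[uwsgi]
; The two mandatory settings: the WSGI module and the port Dokku expects.
module = yourproject.wsgi:application
http-socket = :5000

; Reasonable defaults; tune to taste.
master = true
processes = 2
vacuum = true
die-on-term = true
```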

Fairly self-explanatory. I’m not married to any of the settings, feel free to change them and tell me about your favorites. The only mandatory lines are module and http-socket, as your app needs to be listening on port 5000.

Procfile

The Procfile is a very simple list of commands to run for each worker. In our case, we just need a web worker to run uwsgi and a celery worker to run celery. Feel free to add more workers as you need for your project.

web: /usr/local/bin/uwsgi --chdir=/code --ini=/code/uwsgi.ini
worker: /usr/local/bin/celery -A yourproject worker -P gevent

DOKKU_SCALE

To actually run the workers specified in the Procfile, we need to tell Dokku how many we want of each. For example, we can have 3 Django workers and 4 Celery workers. I just needed one of each, so the configuration is trivial:

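The file isn’t reproduced here, but assuming the standard DOKKU_SCALE format (one worker=count entry per line, matching the Procfile names), one of each would be:

```text
web=1
worker=1
```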

Adjust to suit your needs accordingly.

app.json

The app.json tells Dokku what to run before and after the deployment. In our case, we only need to migrate the database before each run.

{
  "scripts": {
    "dokku": {
      "predeploy": "python /code/manage.py migrate --noinput"
    }
  }
}

That’ll do it!

CHECKS

The final file is the CHECKS file, which lists which URLs you want Dokku to check before considering your deployment successful. I highly recommend setting this, otherwise failed deployments may erroneously be considered successful.

It’s a simple list of URLs and some optional text to look for in the page. The URLs can be absolute or relative, but I prefer something in between, as relative URLs sometimes failed for me (Dokku sometimes wouldn’t pass the correct hostname and Django would reply with a 400 error code).

Here’s what it might look like:

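The original file isn’t shown, so here’s a hypothetical example (the domain and the expected text are placeholders; the optional WAIT setting controls how long Dokku waits before checking):

```text
WAIT=5
http://yourdomain.com/ Welcome
```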

That’s it for the project files! You shouldn’t need to make any more changes to your Django application to get it deployed.

Preparing the server

The second (and last) step is to prepare the server to accept your application. This is pretty straightforward: you just instantiate the containers for your services and link them to your app.

Here are the commands you need to run (on the Dokku server):

# Replace "yourproject" everywhere with your project's name.

# Create a container for your project.
sudo dokku apps:create yourproject

# Install the postgres plugin (you can skip this if you have done it before).
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git

# Create a database for your project.
sudo dokku postgres:create yourproject-database

# Install the Redis plugin.
sudo dokku plugin:install https://github.com/dokku/dokku-redis.git

# Create a Redis instance for your project.
sudo dokku redis:create yourproject-redis

# Link the above instances to your project, this will set up networking
# and expose environment variables to your project so you can connect.
sudo dokku postgres:link yourproject-database yourproject
sudo dokku redis:link yourproject-redis yourproject

# I generally like setting this variable so my settings file knows to disable
# DEBUG and change various other options for running on production.

# If you don't want the variable to be set globally, just change `--global` to
# your project's name.
sudo dokku config:set --no-restart --global NODEBUG=1

# Add other environment variables to taste.
sudo dokku config:set --no-restart yourproject SECRET_KEY=somelongkey

That’s pretty much it, your server is now set up and ready to receive your project. Just add the remote and push it:

# Replace "yourserver.com" with your Dokku server's hostname.
git remote add dokku dokku@yourserver.com:yourproject
git push dokku master

You will see the application getting deployed in git’s output, and hopefully it will end with your application being live. Afterwards, you can finish the installation by getting a TLS certificate from Let’s Encrypt.

Run these on the server:

# When you're done testing, add the final domain and remove the testing subdomain.
sudo dokku domains:add yourproject yourdomain.com www.yourdomain.com
sudo dokku domains:remove yourproject yourproject.yourserver.com

# Install the Let's Encrypt plugin.
sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git

# Set your email for Let's Encrypt.
sudo dokku config:set --no-restart --global DOKKU_LETSENCRYPT_EMAIL=<your@email>

# Get the certificate and set HTTP to redirect to HTTPS.
sudo dokku letsencrypt yourproject

# Add the cron job to autorenew certificates (only ever needed once, not per-project).
sudo dokku letsencrypt:cron-job --add

# Install the redirect plugin to redirect non-www to www.
sudo dokku plugin:install https://github.com/dokku/dokku-redirect.git

# Redirect non-www to www.
sudo dokku redirect:set yourproject yourdomain.com www.yourdomain.com

And your application should be up and running, with an autorenewed TLS cert, all ready for production.

To open a Django shell, you can run:

# Use --rm so the container gets removed after running.
sudo dokku --rm run yourproject python /code/manage.py shell


For your convenience, here’s a script that will create a project following the above instructions. Just put the following in a file called mkproject and run it whenever you want to create a new Dokku project:

#!/bin/sh
randpw(){ < /dev/urandom tr -dc A-Za-z0-9 | head -c${1:-64};echo;}

if [ $# -ne 2 ]; then
    echo "Usage:\n    mkproject <appname> <naked domain>"
    exit 1
fi
sudo dokku apps:create $1
sudo dokku postgres:create $1-db
sudo dokku redis:create $1-redis

sudo dokku postgres:link $1-db $1
sudo dokku redis:link $1-redis $1

sudo dokku config:set --no-restart $1 NODEBUG=1
sudo dokku config:set --no-restart $1 SECRET_KEY=`randpw`

read -p "Now push your code to Dokku, wait for it to deploy successfully and press any key here." mainmenuinput

sudo dokku domains:add $1 $2 www.$2
sudo dokku letsencrypt $1
sudo dokku redirect:set $1 $2 www.$2


That’s it! This is a huge load off my shoulders because, even with my Ansible scripts, I had to spend hours deploying and making sure everything worked as expected. Dokku will hopefully cut the time to deploy a new project from hours to mere minutes, and ensure that the server never runs a broken version.

I hope the above guide helps you, and if you know of a better way to do some of the steps above, or have any feedback, please leave a comment here or tweet to me. Happy deployments!