Using Drupal and Docker in production

Want to run a fully dockerized Drupal setup in production? Read on.

In the previous post, we created a setup to run Drupal + Docker locally. With a skip and a jump, we can make the same setup run in production as well. We’ll do a deep dive into that in this post.

12-factor apps and other goodies

One thing I’ll keep repeating is how effectively we can “12-factor”ize our app. This makes a host of other best practices, like one-step builds and backup/restore, a lot easier. Drupal is a stateful application, so it cannot be readily 12-factorized. For example, the sites/default/files directory resides inside the code base, so Drupal cannot be scaled horizontally as easily as a stateless app. We still strive to make Drupal as 12-factor as possible by storing configuration in files, using environment variables etc. Here’s a quick rundown of the 12-factor tenets and how we apply them in our Docker-Drupal context:

One code base, many deploys

Holds good for us by default. We can use the same code base and deploy to a production and staging environment separately.


Dependencies

We explicitly declare and manage dependencies using composer.json and composer.lock.


Config

Site config is stored in files as part of the code. Other config, like database credentials and keys, is stored in environment variables. We also keep Docker-specific settings in a .env file.
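As a sketch of this tenet, the database section of settings.php can read its credentials from the environment instead of hard-coding them. The variable names below are illustrative assumptions, not the exact ones from this setup:

```php
<?php

// Read DB credentials from environment variables (names are assumptions,
// matching whatever you define in your .env file).
$databases['default']['default'] = [
  'driver'   => 'mysql',
  'host'     => 'mariadb',   // the compose service name, resolved on the overlay network
  'port'     => 3306,
  'database' => getenv('MYSQL_DATABASE'),
  'username' => getenv('MYSQL_USER'),
  'password' => getenv('MYSQL_PASSWORD'),
];
```

This keeps settings.php identical across environments; only the environment changes.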

Backing services

These are the other services needed to run the app, in our case MariaDB. The 12-factor tenet treats a backing service as an attached resource. We add it as a service in our compose file.
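As a sketch, the MariaDB backing service might be attached in the compose file like this (the image tag and variable names are assumptions; the init directory follows the dump-import step described later in this post):

```yaml
services:
  mariadb:
    image: mariadb:10.1      # pin a tag you have tested; this one is an assumption
    environment:
      - MYSQL_DATABASE=${MYSQL_DATABASE}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
    volumes:
      # SQL dumps placed here are imported on first boot
      - ./mariadb-init2/${ENV}:/docker-entrypoint-initdb.d
    networks:
      - internal
```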

Build, release and run

We don’t have a formal build, release and run pipeline yet; we will come back to this in a future installment.


Processes

Every part of the stack is a single container running one process (PHP FPM, Nginx, MariaDB etc.).

Port binding

This essentially means that each of the services in the web app is bound to a port and a URL which can be referred to by another app. We refer to the DB in our settings.php using this approach. The Nginx container refers to the PHP FPM process running on a port using the same approach.
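For instance, the Nginx config can reach the PHP FPM container by service name and port; a minimal sketch, assuming the compose service is called `php` and PHP FPM listens on its default port:

```nginx
location ~ \.php$ {
    # "php" resolves to the PHP FPM container on the shared overlay network;
    # 9000 is PHP FPM's default port.
    fastcgi_pass php:9000;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```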


Concurrency

In a 12-factor setup, scaling happens via the process model. We’ll see how we can achieve this using Docker containers in this post.

Disposability

The docker-compose up and down commands give us the ability to stop and start the app gracefully and to build new images on demand.

Dev/Prod parity

That’s the whole premise for adopting this setup! We can have a clone of the production setup in both local and staging.


Logs

Docker allows you to stream logs either to stdout or to a file.

Admin processes

The 12factor site states this as, “Run admin/management tasks as one-off processes”. I think drush fits the bill perfectly here.

domain names & traefik

How would you manage domain names in a VM-based setup? You would map them to the VMs’ IPs. But what about when you run containers, with different processes running on the same IP but on different ports? That’s exactly what Traefik handles for you. It actually offers much more than that: Traefik is a modern reverse proxy which integrates easily with your Docker containers.

When I first heard about Traefik, it sounded too good to be true. The closest thing I’d worked with for similar functionality was nginx proxy. After fiddling with nginx proxy unsuccessfully for a few days, I gave Traefik a try, and never looked back. Traefik plays well with docker-compose and consumes the label metadata associated with a Docker container to expose it as a route. I’d also like to mention here that Traefik is very well documented. We will cover Traefik installation and setup in the deployment stage towards the end of this post.

Free certificates

Traefik allows you to use Let’s Encrypt to automatically generate and manage SSL certificates for your domain. All it requires is a challenge and the associated configuration. A challenge is a task posed to you by the Let’s Encrypt Certificate Authority to prove that you own and control the domain you’ve requested an SSL certificate for. There are many ways to do this. In our case, we will be using the DNS challenge, where Let’s Encrypt asks you to add TXT records to your domain and then verifies that they are there. We can automate the whole thing by supplying a DigitalOcean read-write API token (as I manage my domain in DigitalOcean) to Traefik, so that it can add the TXT records via the API and complete the verification. We also state in Traefik’s configuration that we use the DNS challenge to verify that the domain belongs to us.

The next step is to add Traefik-related annotations to our containers. We can do this in the production.yml file by annotating each container appropriately. You might want to choose which containers you expose to the outside world; not all containers need to be exposed. For instance, the database will only be visible to the php container. We can set these rules in the production.yml file under the networks section. There are two overlay networks available to a container. One is specific to the compose file and its associated containers; in our case, the nginx, mariadb and php containers all belong to this network by default. The other is the external proxy network, which we create in the deployment section below. If you want a container to be visible to Traefik, you should add it to that external proxy network.

In addition to the network info, you provide additional Traefik-specific metadata, like which port you’re exposing and which domain you want to expose it under. Our updated production.yml looks like this:

    services:
      nginx:
        extends:
          file: docker-compose.yml
          service: nginx
        volumes:
          - ./:/code
          - ./deploy/nginx/config/default:/etc/nginx/conf.d/default.conf
          - ./deploy/nginx/config/drupal:/etc/nginx/include/drupal
        labels:
          - traefik.backend=example
          - traefik.port=80
        networks:
          - internal
          - proxy

You can see that nginx is part of both the internal and proxy networks.

We also declare both the internal and the external proxy networks in the docker compose file.

    networks:
      proxy:
        external: true
      internal:
        external: false

checking your app logs

Once you have booted your Docker containers, it is easy to check the logs of each service, like Nginx, PHP FPM and the database. For instance, you can check the Nginx logs by running:

$ docker-compose -f production.yml logs nginx

Just replace the service name in the above command to check the logs of the respective service. You can also add a -f option after the logs command to stream the logs.

a sample deployment

Now that we have a production-ready setup, we can deploy it to our servers. If you don’t have a server ready, I recommend spinning up a new one on DigitalOcean, and be sure to secure the server if you are running a production site. Once you’ve done that, clone your Drupal code to a convenient location on the server. By now, the docker-compose.yml and its accompanying production.yml files should be checked in at the top-level directory of your codebase.

Also, install docker and docker-compose on the production server; this is a one-time task. We will be running all the services, including Traefik, inside containers. Next, you set up your Traefik service. For this, we first create a Docker network (let’s call it proxy). Every dockerized web app we create will have containers which belong either to the overlay network specific to that stack, or to both that overlay network and the proxy network. This lets us choose which stacks to expose to Traefik and thus the outside world.

Let’s create the proxy network.

$ docker network create proxy

Traefik comes with a web console as well and requires some basic configuration to run. Here’s the Traefik configuration for our Drupal site(s),

defaultEntryPoints = ["http", "https"]

[web]
address = ":8080"

  [web.auth.basic]
  users = ["admin:$apr1$NpIqapqV$PReV1wDm6xXjvqpl7PYqN0"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

The Traefik web console requires username/password credentials, which is what the users line above sets. The weird-looking password hash can be obtained by running htpasswd.

$ htpasswd -nb admin <my-password>

To tell Traefik that I’m using the DNS challenge, I have to add this part to my traefik.toml file,

[acme]
email = ""
storage = "acme.json"
entryPoint = "https"
onHostRule = true

  [acme.dnsChallenge]
  provider = "digitalocean"

  [acme.httpChallenge]
  entryPoint = "http"

We decided to run Traefik as a Docker container itself. As it comes with a fair bit of configuration, it’s better run from a docker-compose file of its own.

version: '2'

services:
  traefik:
    image: traefik
    restart: always
    command: --docker
    ports:
      - 80:80
      - 443:443
    networks:
      - proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD/traefik.toml:/traefik.toml
      - $PWD/acme.json:/acme.json
    container_name: traefik
    environment:
      # DigitalOcean token used by the DNS challenge
      - DO_AUTH_TOKEN=${DO_AUTH_TOKEN}
    labels:
      - traefik.port=8080

networks:
  proxy:
    external: true

A few things of note here.

  • Traefik consumes ports 80 and 443, the HTTP and HTTPS ports of your system. Make sure you don’t run any other process on those ports.
  • You can see in the above config that Traefik runs on the proxy network.
  • I inject the DigitalOcean API token into the Traefik container (I use DigitalOcean to manage my infrastructure and DNS). This is what lets Traefik complete the DNS challenge described above.

Also, this setup is completely separate from your app codebase; the reasoning behind this is to reuse the same Traefik setup for different web apps running on the same machine.

Let’s boot the Traefik setup we created:

$ docker-compose up -d

NOTE: you might need to create an empty file called acme.json with restricted permissions (chmod 600) for Traefik to start successfully, in the same directory alongside traefik.toml and docker-compose.yml.

You can check the Traefik web console by hitting, in your browser, the domain you gave in the docker compose file above. The first time, it will prompt for the credentials you configured in the toml file.

The next step is to boot your Drupal setup. Go to the Drupal codebase and create a .env file. This file:

  • is NOT checked in to your code base
  • contains environment-specific details and some sensitive information related to your site, like MySQL credentials.

If you want to run a staging setup of your site, you clone the codebase and create a different .env file. This file will be read by Docker when booting all your containers. Here’s a sample env file.
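A minimal sketch of such a .env file; every name and value here is an illustrative assumption, to be replaced with your own:

```shell
# Illustrative values only -- replace with your own.
MYSQL_DATABASE=drupal
MYSQL_USER=drupal
MYSQL_PASSWORD=some-strong-password
MYSQL_ROOT_PASSWORD=another-strong-password

# Domain under which Traefik exposes the site (hypothetical)
DOMAIN=example.com

# Picks the DB dump directory, e.g. mariadb-init2/production
ENV=production
```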


Finally, boot your docker containers.

$ docker-compose -f production.yml up

Some Drupal specific steps you need to do,

  1. If you are porting an existing site, create a DB dump file in the /mariadb-init2/${ENV} directory of your server (this path is in your production yml file). Do this before booting the containers so that MariaDB picks it up when booting.
  2. Run composer to install dependencies. To run composer in a docker setup,
    $ docker-compose -f production.yml run php composer install
  3. To run drush,
    $ docker-compose -f production.yml run php ./vendor/bin/drush --root=/code/web cr
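Step 1 above, porting an existing site, might look like this; the dump filename and ENV value are assumptions:

```shell
# Place an existing site's SQL dump where the MariaDB container looks for
# init scripts on first boot (path comes from production.yml).
# "production" here corresponds to the ENV value in your .env file.
mkdir -p mariadb-init2/production
cp backup.sql mariadb-init2/production/
```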

You can add drush to $PATH in the Dockerfile, and create an alias so that you need not specify --root, but all those are cosmetic changes. You get the idea 🙂
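A minimal sketch of those cosmetic changes in the PHP image’s Dockerfile, assuming the codebase is mounted at /code as above:

```dockerfile
# Make project-local drush available without the full path.
ENV PATH="/code/vendor/bin:${PATH}"

# Optional: default drush to the Drupal web root so --root isn't needed
# (only takes effect in interactive bash shells).
RUN echo 'alias drush="drush --root=/code/web"' >> /root/.bashrc
```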

Once you have your setup running, you can hit the domain you specified in the .env file to view the site. Congratulations! You have successfully created a fully dockerized production setup of your Drupal site.

How do you deploy changes to this setup? It would be awesome if we could just do a git push and have the deployment happen automagically, right? We will walk through exactly that setup in the next post! Till then, adieu.

docker Drupal Planet production