Drupal on OpenShift: Deploying your first Drupal site

Learn how to deploy your first Drupal 8 site on OpenShift.

We saw the business value of running OpenShift in the last post. Now we will look at how to build and deploy your first Drupal 8 site on OpenShift.

Docker vs OpenShift

First, we have to understand the relationship between Docker containers and OpenShift. Containers are still the fundamental units of an OpenShift cluster, but they are abstracted by an entity called a pod. A pod is a collection of containers which share the same port space and IP address. In a Drupal context, for instance, a pod will contain an Nginx container and a PHP-FPM container. A pod stays true to the unwritten dictum of containers: one process per container. Accordingly, we will build 2 containers, one for Nginx and one for Drupal.
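As a rough sketch, a pod with these two containers could be declared like this (the names and images here are illustrative, not the exact resources we create below):

```yaml
# Illustrative pod sketch: two containers sharing one IP and port space.
apiVersion: v1
kind: Pod
metadata:
  name: drupal-pod
spec:
  containers:
  - name: nginx        # serves HTTP on port 8080
    image: nginx-openshift:1.0
  - name: php-fpm      # serves FastCGI on port 9000, reachable from nginx as 127.0.0.1
    image: drupal-openshift:1.0
```

Because both containers share the pod's network namespace, Nginx can reach PHP-FPM at 127.0.0.1:9000 without any service discovery.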

Building the Nginx container

This container will contain Nginx and the configuration required for our Drupal site. I’m assuming “/web” to be the top-level directory, following the Drupal composer project convention, but this can be changed in the Nginx configuration file before building the image.

This custom Nginx container will differ from the official Nginx container in 3 ways:

  1. It will run Nginx as a non-root user. Unlike plain Kubernetes, OpenShift mandates this; running as non-root is a required security practice for containers in an OpenShift cluster.
  2. The default and only Nginx configuration will be the one which supports running a Drupal 8 site.
  3. A non-root user cannot bind Nginx to port 80. Hence, we will run Nginx on an unconventional port: 8080.
FROM nginx:1.15

RUN useradd -u 1001 -r -g 0 -d /app -s /sbin/nologin -c "Default Application User" default \
&& mkdir -p /app \
&& chown -R 1001:0 /app && chmod -R g+rwX /app

COPY nginx.conf /etc/nginx
COPY drupal.conf /etc/nginx/conf.d/default.conf

RUN chown -R 1001:0 /var/log && chmod -R g+rwX /var/log
RUN chown -R 1001:0 /var/cache/nginx && chmod -R g+rwX /var/cache/nginx
RUN chown -R 1001:0 /var/run && chmod -R g+rwX /var/run
RUN chown -R 1001:0 /etc/nginx && chmod -R g+rwX /etc/nginx

EXPOSE 8080

USER 1001
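The drupal.conf copied into the image above is not reproduced in this post; a minimal sketch of what it needs to do might look like the following (the paths and the FPM address are assumptions based on the setup described here, not the exact file from the repository):

```nginx
# Minimal sketch of a Drupal server block: listen on the unprivileged
# port 8080 and hand PHP requests to the FPM container in the same pod.
server {
    listen 8080;
    root /app/web;
    index index.php;

    location / {
        # Route clean URLs through Drupal's front controller.
        try_files $uri /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # PHP-FPM shares the pod's network namespace.
        fastcgi_pass 127.0.0.1:9000;
    }
}
```

A production-grade Drupal configuration would add protections such as denying access to private files and .php files under sites/*/files.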

Let’s build and push our Nginx container to the public docker registry.

$ docker build -t lakshminp/nginx-openshift:1.0 .

$ docker push lakshminp/nginx-openshift:1.0

Building our Drupal container

Let’s build our Drupal container which will live inside our pod. This container will contain:

  1. All the required prerequisites to run Drupal 8, like PHP 7.1 and its dependencies.
  2. Composer and a globally installed Drush
  3. A running PHP FPM process
  4. The source code of your Drupal site with composer dependencies built in

We can do this in one of two ways. The first is to build the container in one go, adding our source code and running composer in the last step. The other, better, way is to build a base image with everything except the most frequently changing part: the source code of your Drupal application.

Then, we will use this base image to build the final container, with the source code and composer dependencies injected. This will also help us conceptually understand a feature of OpenShift we’ll use later, called Source-to-Image (S2I).

FROM php:7.1-fpm

RUN apt-get update \
&& apt-get install -y libfreetype6-dev libjpeg62-turbo-dev libpng-dev wget git

RUN docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \
&& docker-php-ext-install gd \
&& docker-php-ext-install pdo pdo_mysql opcache zip \
&& docker-php-ext-enable pdo pdo_mysql opcache zip

# Install Composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer

RUN set -ex; \
\
# Drush
composer global require drush/drush:^8.0; \
\
# Drush launcher
wget -O drush.phar \
"https://github.com/drush-ops/drush-launcher/releases/download/0.6.0/drush.phar"; \
chmod +x drush.phar; \
mv drush.phar /usr/local/bin/drush; \
\
# Drupal console
curl https://drupalconsole.com/installer -L -o drupal.phar; \
mv drupal.phar /usr/local/bin/drupal; \
chmod +x /usr/local/bin/drupal; \
\
# Clean up
composer clear-cache;

# /app will map to nginx container
RUN useradd -u 1001 -r -g 0 -d /app -s /sbin/nologin -c "Default Application User" default \
&& mkdir -p /app \
&& chown -R 1001:0 /app && chmod -R g+rwX /app

# This is where the source code will be cloned.
RUN mkdir /code && chown -R 1001:0 /code && chmod -R g+rwX /code

# Drupal files directory will map to this.
RUN mkdir /shared && chown -R 1001:0 /shared && chmod -R g+rwX /shared


USER 1001

WORKDIR /app

You can see that we create “/app” and “/code”. At deploy time, we will copy the code from “/code” to “/app”, since “/app” will be a volume shared between the Nginx and Drupal containers. Let’s build and push this base image.

$ docker build -t lakshminp/drupal-openshift-base:1.0 .

$ docker push lakshminp/drupal-openshift-base:1.0

Now, injecting our source code on top of this base image is fairly easy.

FROM lakshminp/drupal-openshift-base:1.0

RUN git clone --depth=1 https://github.com/badri/drupal-8-composer.git /code
RUN cd /code && rm -rf .git && composer install

I’ve used my own code repository, but feel free to use yours, provided it satisfies the following conditions:

  1. It uses the Drupal composer project file structure.
  2. The docroot is “/web”. You can customize this, but for now we’ll stick with it.
  3. You need to make your code more 12-factor compatible.

By the last point I mean that your settings are derived from environment variables. Create a new “settings.openshift.php” in your “sites/default” directory with the following contents:

<?php
$databases['default']['default'] = array (
   'database' => getenv('MYSQL_DATABASE'),
   'username' => getenv('MYSQL_USER'),
   'password' => getenv('MYSQL_PASSWORD'),
   'host' => getenv('MYSQL_HOST'),
   'port' => getenv('MYSQL_PORT'),
   'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
   'driver' => 'mysql',
   'prefix' => '',
   'collation' => 'utf8mb4_general_ci',
);
$settings['hash_salt'] = json_encode($databases);

And include it conditionally from your “settings.php”:

if (getenv('OPENSHIFT_BUILD_NAME')) {
  if (file_exists($app_root . '/' . $site_path . '/settings.openshift.php')) {
    include $app_root . '/' . $site_path . '/settings.openshift.php';
  }
}

Once you’ve done these, you can build and push your final drupal container.

$ docker build -t lakshminp/drupal-openshift:1.0 .

$ docker push lakshminp/drupal-openshift:1.0

Shipping your containers to OpenShift

We’ve so far created 2 docker images for our containers, but haven’t actually deployed them to our OpenShift cluster. We need a functional cluster for that. The quickest way to create a bare minimal cluster is to use Minishift. You can install Minishift on your favorite OS and start using it instantly. There are other, slower but more robust, ways to create a cluster. Once you have the cluster set up, you can start deploying your applications.

OpenShift resources 101

Everything in OpenShift is a resource. I’ve written about the different resources in OpenShift before. To quickly recap:

A pod is a set of containers which share the same IP address and port space. In our case, the pod will contain a PHP-FPM container running on port 9000 and an Nginx container running on port 8080.

OpenShift drupal pod

A replicaset specifies how many copies of your pod should exist in the cluster. You scale up your web application by bumping up the number of replicas of your pod.

A service is a resource which groups all replicas carrying the same label into a single entity, and specifies which ports your pods expose to the rest of the cluster. In our case, if we run 5 replicas of the PHP-FPM + Nginx pod in the cluster, we tag them as a “drupal” service and expose 8080 as the only port.

OpenShift Drupal service

Creating a DB in OpenShift

I’ll introduce a few new resources relevant to our context. The first one is secrets. Secrets provide a way to inject configuration and sensitive data (like passwords and API keys) into our app pods. In our case, we need a secret resource to supply the credentials of our Drupal database.

kind: Secret
apiVersion: v1
metadata:
  name: drupal-8
  labels:
    app: drupal-8
stringData:
  database-user: drupal8
  database-password: eSoiDenThicO
  database-root-password: nKatIcTRIToR

Persistent volume claims

The next resource type is a persistent volume claim, or PVC. A PVC is a way to abstract the storage of an app in a cluster. Most consumers of a cluster are developers, who needn’t bother with the innards of storage. For example, suppose I want 100 GB of SSD storage in the “Asia Pacific” region of my cluster. Without an abstraction, I would need to know the specifics of the storage backend, like API keys and other details. This is information which can’t be shared with every developer.

Instead, OpenShift manages the storage for you, with the specifics handled by cluster operators. A developer just states their preferences: what kind of storage they want, how much, and so on. They simply raise a “claim” for storage, and it is the responsibility of the OpenShift cluster to allocate the claimed storage. Think of it as raising a request to your IT support department for 100 GB of SSD storage, except that here the IT support department is your OpenShift cluster.

In our case, we need the MariaDB database to persist data, as containers are ephemeral otherwise. Let’s create a PVC resource for the same.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: drupal-8-db
  labels:
    app: drupal-8  
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

As you can see, we’ve requested 1 GiB of storage for the database data. Let’s create this resource in our cluster:

$ oc apply -f mariadb-pvc.yml

Deploying the DB to the cluster

The next step is to deploy this database in the cluster. We use a resource called DeploymentConfig. I’ll do a deep dive into this resource when we deploy the Drupal container.

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: drupal-8-db
  labels:
    app: drupal-8  
spec:
  replicas: 1
  selector:
    name: drupal-8-db
    app: drupal-8    
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        name: drupal-8-db
        app: drupal-8  
    spec:
      containers:
      - env:
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              key: database-user
              name: drupal-8
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              key: database-password
              name: drupal-8
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: database-root-password
              name: drupal-8
        - name: MYSQL_DATABASE
          value: drupal8
        image: ' '
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 3306
          timeoutSeconds: 1
        name: mysql
        ports:
        - containerPort: 3306
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -i
            - -c
            - MYSQL_PWD="$MYSQL_PASSWORD" mysql -h 127.0.0.1 -u $MYSQL_USER -D $MYSQL_DATABASE
              -e 'SELECT 1'
          failureThreshold: 3
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/mysql/data
          name: drupal-8-db-data
      volumes:
      - name: drupal-8-db-data
        persistentVolumeClaim:
          claimName: drupal-8-db
  triggers:
  - imageChangeParams:
      automatic: true
      containerNames:
      - mysql
      from:
        kind: ImageStreamTag
        name: mariadb:10.2
        namespace: openshift
    type: ImageChange
  - type: ConfigChange

That’s quite a mouthful. This essentially tells OpenShift to deploy an instance of the “mariadb:10.2” image in the cluster. A few things are of note in this config:

  1. It consumes the secrets resource created earlier and injects those values as environmental variables into the MariaDB container.
  2. There is a “readiness probe” in the config, which answers the question “Is my container ready to serve requests yet?”. In this case, if we can connect to the MySQL DB and run a sample query successfully, the container is ready to serve traffic.
  3. There is a “liveness probe” which tells whether the container is alive. In this case, OpenShift tries to establish a connection on the given TCP port; if it succeeds, the container is considered alive, and it is flagged as dead otherwise. This can even be an HTTP request to the app/service inside your container.
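As an illustration of that last point, an HTTP variant of the liveness probe might look like the following sketch (the /health path and port here are assumptions for illustration, not part of this MariaDB setup):

```yaml
# Hypothetical HTTP liveness probe: the container is considered alive
# if GET /health returns a 2xx/3xx response.
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3
```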

Let’s create this resource in OpenShift.

$ oc apply -f mariadb-dc.yml

Lastly, we have to expose this MariaDB deployment to other containers in the cluster. To do that, we create a service resource.

apiVersion: v1
kind: Service
metadata:
  name: drupal-8-db
  labels:
    app: drupal-8  
spec:
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    name: drupal-8-db
    app: drupal-8  

and add it to the cluster.

$ oc apply -f mariadb-svc.yml

Deploying our Drupal app

Our Drupal app also contains a similar list of resources.

  1. A persistent volume claim for the “sites/default/files” folder.
  2. A deployment config specifying how many replicas should run, how the containers should be wired together, etc.
  3. A service for exposing the Nginx process.

Let’s dissect the Drupal deployment config in detail.

What does a deployment config mean?

A deployment involves creating replicas of the given container image in the cluster.

A deployment config is an OpenShift-only resource which specifies under what circumstances an app will be redeployed. Our application will be redeployed if:

  1. We change the number of replicas of the app
  2. We change the container image/tag of the application (by changing the source code, we create a new version of the image, which changes the container tag)

These are called triggers and are a part of the deployment config spec.

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: drupal-8
  labels:
    app: drupal-8
spec:
  replicas: 1
  selector:
    app: drupal-8
  template:
    metadata:
      labels:
        app: drupal-8
    spec:
      volumes:
        # Create the shared files volume to be used in both pods
        - name: app
          emptyDir: {}
        - name: drupal-8-files
          persistentVolumeClaim:
            claimName: drupal-8-files
      containers:
      - name: php-fpm
        image: 'lakshminp/drupal-openshift:1.0'
        env:
        - name: MYSQL_USER
          valueFrom:
            secretKeyRef:
              key: database-user
              name: drupal-8
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              key: database-password
              name: drupal-8
        - name: MYSQL_HOST
          value: drupal-8-db
        - name: MYSQL_PORT
          value: "3306"
        - name: MYSQL_DATABASE
          value: drupal8
        - name: OPENSHIFT_BUILD_NAME
          value: "1"
        volumeMounts:
          - name: app
            mountPath: /app
          - name: drupal-8-files
            mountPath: /shared
        lifecycle:
          postStart:
            exec:
              command:
                - "/bin/sh"
                - "-c"
                - > 
                  cp -fr /code/. /app;
                  rm -rf /app/web/sites/default/files;
                  ln -s /shared /app/web/sites/default/files;
      - name: nginx
        image: 'lakshminp/nginx-openshift:1.0'
        ports:
          - name: http
            containerPort: 8080
        volumeMounts:
          - name: app
            mountPath: /app
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - php-fpm
        from:
          kind: "ImageStreamTag"
          name: "drupal-openshift:latest"
          namespace: openshift
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - nginx
        from:
          kind: "ImageStreamTag"
          name: "nginx-openshift:latest"
          namespace: openshift

The above deployment config states that OpenShift will create 1 replica of a pod (comprising PHP-FPM and Nginx containers). The PHP-FPM container mounts a persistent volume at /app/web/sites/default/files, and a deployment will be triggered if the “drupal-openshift” or “nginx-openshift” image changes or the replica count changes. Now, where does OpenShift get these images from? If you guessed Docker Hub, you’re wrong.
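To see concretely what the postStart lifecycle hook does, here is a local simulation of its copy-and-symlink steps, using a /tmp/demo sandbox in place of the pod’s volumes (an illustrative assumption, not part of the actual deployment):

```shell
# Simulate the postStart hook: /tmp/demo/{code,app,shared} stand in for
# the image's /code, the shared emptyDir /app, and the PVC /shared.
mkdir -p /tmp/demo/code/web/sites/default/files /tmp/demo/app /tmp/demo/shared
echo "<?php // front controller" > /tmp/demo/code/web/index.php

# The three commands from the hook:
cp -fr /tmp/demo/code/. /tmp/demo/app          # copy the baked-in code into the shared volume
rm -rf /tmp/demo/app/web/sites/default/files   # drop the files dir shipped with the code
ln -s /tmp/demo/shared /tmp/demo/app/web/sites/default/files  # point it at persistent storage
```

After this runs, the app volume holds the code, and its files directory is a symlink into the shared persistent volume, which is exactly the state both the Nginx and PHP-FPM containers expect.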

Managing Docker images in OpenShift

OpenShift provides an easy and consistent way to manage container images using its internal registry (in case you didn’t know, unlike Kubernetes, OpenShift runs its own per-cluster docker registry). You can tag external images using your own scheme inside OpenShift; these tags are called image streams. Image streams help prevent a wrongful deployment, for example when an external image changes underneath you, thereby providing stability.

Before we import our Drupal and Nginx images into OpenShift’s registry, we have to ensure we are logged in as the system admin user, a user with special privileges like access to the “openshift” namespace. This is the namespace where the standard images are stored in the registry.

We can log in as the system admin user in Minishift using the command,

$ oc login <minishift-ip>:8443 -u system:admin

You can find out your Minishift IP by running,

$ minishift ip

Ensuring that we are in the “openshift” namespace,

$ oc project openshift

We import the Nginx and Drupal images from Docker Hub.

$ oc import-image drupal-openshift:1.0 --from=lakshminp/drupal-openshift:1.0 --confirm

$ oc import-image nginx-openshift:1.0 --from=lakshminp/nginx-openshift:1.0 --confirm

And tag them as the latest images.

$ oc tag drupal-openshift:1.0 drupal-openshift:latest

$ oc tag nginx-openshift:1.0 nginx-openshift:latest
OpenShift imagestreams

Now, instead of having a file for each resource, we can club all the resources into a single file and use it to deploy our app.

$ oc apply -f drupal-8.yml

If you messed up things and want to start over again, you can run,

$ oc delete all -l app=drupal-8

This deletes all the resources with the label “app=drupal-8”.

Exposing our Drupal site using routes

Once we have a deployment config in place and a valid imagestream, we will have the Drupal and DB pods up and running in minutes. Similar to how a service exposes a pod to the rest of the cluster, a route is a resource which exposes a service to the outside world.

We expose a service as a route using the command,

$ oc expose svc/drupal-8
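Under the hood, this command creates a Route resource for us; a sketch of the roughly equivalent YAML looks like this (the cluster fills in a generated hostname from its routing subdomain):

```yaml
# Approximate Route resource generated by `oc expose svc/drupal-8`.
apiVersion: v1
kind: Route
metadata:
  name: drupal-8
  labels:
    app: drupal-8
spec:
  to:
    kind: Service
    name: drupal-8
```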

You can find this route’s URL and visit it in the browser to install and use Drupal. To get the URL, run:

$ oc get routes

Triggering a redeployment

Let’s verify how a new image automatically triggers a redeployment. The most frequent reason for a new deployment is pushing new code to your code base. As a simple example, let’s add a new module to our Drupal setup.

$ composer require drupal/token
$ git commit -a -m "Add token module"
$ git push origin master

Now, we build a newer image using the newly pushed code.

$ docker build -t lakshminp/drupal-openshift:1.1 .
$ docker push lakshminp/drupal-openshift:1.1

This does not trigger anything so far, as we’ve created a new image in an external repository that OpenShift is not aware of. The next step is to import this image into OpenShift.

$ oc import-image drupal-openshift:1.1 --from=lakshminp/drupal-openshift:1.1 --confirm

Our Drupal deployment config is triggered only when the “drupal-openshift” image with the “latest” tag changes. So, let’s tag the “1.1” version as latest.

$ oc tag drupal-openshift:1.1 drupal-openshift:latest

You will see that this fires a new deployment automatically.

To verify that our new code has landed after the deployment is complete, you can inspect the module list and confirm that the token module is listed.

Verify that token module is in code.

Next steps

That was quite a journey. We built our own containers containing the source code and deployed our first Drupal app. We also created a trigger system which deploys a new container whenever one is built. This approach is still complex, as it involves a lot of moving parts. I’d also critique certain aspects of it:

  1. You build and push containers containing possibly proprietary code to a public registry like Docker Hub. Not something you do in the real world.
  2. It is a hard requirement that your team needs to have some ops knowledge like building containers.
  3. Every time you want to deploy, you need to build a new image and push it to OpenShift’s registry. Though this can be automated (as a post-hook script or a CI step, for example), we can make this workflow more seamless.
  4. Creating a new Drupal site involves manually editing the YAML files, which could be messy. Overall, it is not a good developer experience.

We shall address all the above concerns and more in the next post!
