In this post, we take a deep dive into the elementary resources that get created behind the scenes when you create a new app in OpenShift.
We previously created an OpenShift cluster on DigitalOcean. Let’s deploy apps on that server now.
To begin with, you need the oc command line tool, which you can download from the official OpenShift client downloads. Once you have the client tool, you need to log in with your credentials. You can copy the login command from the menu in the OpenShift web console.
The command will be of the format,
$ oc login https://console.<your-domain>.com:8443 --token=ABCDXyz123
Using OpenShift resources
There are many ways to create apps in OpenShift. In this post, we will focus on using the base elements of an OpenShift cluster: resources like pods, deployments, and replication controllers. Even though we don't create apps this way in actual usage, it helps to understand what happens under the hood.
Pods are the fundamental units of abstraction in an OpenShift cluster, similar to Kubernetes. In fact, any operation you do with a Kubernetes resource can be done in OpenShift as well. Let’s go ahead and create a pod which runs a simple web server.
$ oc apply -f apache_root_pod.yml
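In case you don't have the file handy, apache_root_pod.yml could look something like the following sketch. The image name httpd:2.4 is an assumption; any official Apache image that binds to port 80 as root will reproduce the problem we're about to hit.

```yaml
# Hypothetical sketch of apache_root_pod.yml.
# The stock httpd image starts Apache as root and binds to port 80,
# which OpenShift's non-root policy will reject.
apiVersion: v1
kind: Pod
metadata:
  name: apache
  labels:
    app: demo1
spec:
  containers:
  - name: apache
    image: httpd:2.4        # official image; listens on port 80 as root
    ports:
    - containerPort: 80
```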
You can then check the status of the pod.
$ oc get pods
NAME     READY   STATUS              RESTARTS   AGE
apache   0/1     ContainerCreating   0          4s
Let’s take a look at the pod’s logs,
$ oc logs -f apache
(13)Permission denied: AH00072: make_sock: could not bind to address [::]:80
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
Our pod didn't start successfully, and rightly so. OpenShift is designed to run containers as non-root; this is a deliberate design decision. Even though containers are isolated constructs, OpenShift takes the extra security measure of ensuring that your container process runs as a non-root user. The implication is that most official Docker images don't run in OpenShift out of the box! But don't sweat over it, as OpenShift maintains non-root versions of most stacks and services.
Let’s replace this with the non-root equivalent.
But first, remove the old pod.
$ oc delete pod apache
$ oc create -f apache_pod.yml
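A minimal sketch of what apache_pod.yml might contain is below. The image name is an assumption (borrowed from the image used later in this post); the key differences from the root version are a non-root image and a container port above 1024.

```yaml
# Hypothetical sketch of apache_pod.yml.
apiVersion: v1
kind: Pod
metadata:
  name: apache
  labels:
    app: demo1            # the service we create later selects this label
spec:
  containers:
  - name: apache
    image: lakshminp/apache:v1   # assumption: a non-root Apache build
    ports:
    - containerPort: 8080        # unprivileged port, usable by non-root
```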
Services and routes
There can be multiple pods with the same configuration and containers. A "service" is a resource which groups all these similar pods into a single entity. Why do we need services? If I have more than one pod with the same configuration (which typically happens when scaling up), I need a way to address all of them collectively. That's where services help.
Let’s create a service.
$ oc apply -f apache_service.yml
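For reference, apache_service.yml could look something like this sketch; the selector and ports are inferred from the label and container port used elsewhere in this post.

```yaml
# Hypothetical sketch of apache_service.yml.
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: demo1          # matches every pod carrying this label
  ports:
  - port: 8080          # the port exposed on the service's cluster IP
    targetPort: 8080    # the container port traffic is forwarded to
```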
Let’s try to access the service,
$ oc get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
apache-service   ClusterIP   172.30.172.205   <none>        8080/TCP   16m

$ curl -s http://172.30.172.205:8080
We use port 8080 because that's what we specified in the pod. The above curl will time out. But if you SSH into the cluster and curl from inside, it works:
$ ssh -i <your ssh private key> firstname.lastname@example.org
...
[root@openshift-master ~]# curl -s http://10.130.0.10:8080 | head
What just happened? Your service is by default visible and exposed only inside the OpenShift cluster, not outside it.
To expose your service to the outside world, we create a new entity called a route, associated with the service. Routes are resources specific to OpenShift and are not found in Kubernetes. Think of a route as a mapping between your service and a domain name.
Let’s expose the service as a route.
$ oc expose svc/apache-service
Now, we can access our app using a domain name. This is usually of the format app-name.project-name.your-app-subdomain. You can check the route using the command,
$ oc get route
A route is another resource, very similar to a pod or a service. Hence, we can add it using YAML as well. Let’s try adding another route to the same service, this time using YAML.
Make sure you change the "host" in the route YAML file to your host before you run this command:
$ oc apply -f apache_route.yml
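The route manifest might look something like the sketch below; the route name and host are placeholders you should replace with your own values.

```yaml
# Hypothetical sketch of apache_route.yml.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: apache-route
spec:
  host: apache.apps.<your-domain>.com   # change this to your own host
  to:
    kind: Service
    name: apache-service   # the service this route maps to
```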
We saw earlier that a service is a tagged collection of similar pods. Let’s create another pod with the same specification.
$ oc apply -f apache_pod2.yml
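A sketch of apache_pod2.yml follows; it is identical to the first pod's spec except for how the name is assigned. The image name is again an assumption.

```yaml
# Hypothetical sketch of apache_pod2.yml.
apiVersion: v1
kind: Pod
metadata:
  generateName: apache-   # the cluster appends a random suffix, e.g. apache-x7x9q
  labels:
    app: demo1            # same label, so the service picks this pod up too
spec:
  containers:
  - name: apache
    image: lakshminp/apache:v1
    ports:
    - containerPort: 8080
```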
If you observe the spec for this second pod, it uses generateName: apache- instead of a name property. That's just another way to give a pod a unique name: the cluster appends a random suffix to the prefix you supply.
Now, because our service selects all pods carrying the app: demo1 label, this pod gets added as one of the service's endpoints. This means the service can serve traffic from either the first pod or the second. We can verify this by looking at the service's endpoints.
$ oc describe svc/apache-service | grep -i endpoints
Endpoints:  10.129.0.8:8080,10.130.0.10:8080
We see that our apache server is served by 2 pods. Let’s try deleting one of them.
$ oc delete pod apache
Now, only one pod serves our app, as indicated by the service endpoints.
$ oc describe svc/apache-service | grep -i endpoints
Endpoints:  10.129.0.8:8080
It's quite common for pods to perish in a containerized setup. To ensure that a specified number of pods is always running, we create a new construct called a replica set. It's like telling the OpenShift cluster, "Hey! Always make sure that 2 copies of my Apache pod are running."
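The replica set manifest apache_rs.yml might look like this sketch; the image is an assumption, and older clusters may need apiVersion: extensions/v1beta1 (which matches the replicaset.extensions output shown later).

```yaml
# Hypothetical sketch of apache_rs.yml.
apiVersion: apps/v1       # older clusters: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: apache-rs
spec:
  replicas: 2             # always keep 2 copies of the pod running
  selector:
    matchLabels:
      app: demo1
  template:               # the pod template the replica set stamps out
    metadata:
      labels:
        app: demo1
    spec:
      containers:
      - name: apache
        image: lakshminp/apache:v1
        ports:
        - containerPort: 8080
```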
$ oc apply -f apache_rs.yml
$ # After a minute...
$ oc get pods
NAME              READY   STATUS    RESTARTS   AGE
apache-rs-948q6   1/1     Running   0          11s
apache-rs-bw7vt   1/1     Running   0          11s
Try deleting one of the pods, and notice that they get “replicated” again.
$ oc delete pod apache-rs-948q6
pod "apache-rs-948q6" deleted

$ oc get pods
NAME              READY   STATUS    RESTARTS   AGE
apache-rs-55pl9   1/1     Running   0          22s
apache-rs-bw7vt   1/1     Running   0          7m
Let's say we want to ship a newer image of our "apache" app. The issue with replica sets is that to ship new code, we have to destroy the existing replica set.
$ oc delete rs apache-rs
replicaset.extensions "apache-rs" deleted
Edit the replica set yaml to point to the new image, and recreate the replica set.
To avoid this, OpenShift has another abstraction on top of replica sets called the deployment config. It contains the same information as a replica set, and more. The main problem deployments solve is triggering an automatic redeployment when the underlying image changes.
Let's create a deployment config.
$ oc apply -f apache_dc.yml
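A sketch of what apache_dc.yml might contain follows. The pod template is an assumption carried over from the earlier examples; the important addition is the triggers section at the bottom.

```yaml
# Hypothetical sketch of apache_dc.yml.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: apache-dc
spec:
  replicas: 2
  selector:
    app: demo1
  template:
    metadata:
      labels:
        app: demo1
    spec:
      containers:
      - name: apache
        image: apache:latest       # resolved from the image stream below
        ports:
        - containerPort: 8080
  triggers:
  - type: ImageChange              # redeploy when the image stream tag changes
    imageChangeParams:
      automatic: true
      containerNames:
      - apache
      from:
        kind: ImageStreamTag
        name: apache:latest
```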
The deployment config looks similar to the replica set for the most part, except for the triggers section, which specifies what will trigger a new deployment; in our case, a new image. A deployment config by itself won't deploy anything; we have to fire it initially.
$ oc rollout latest dc/apache-dc
Error from server (BadRequest): cannot trigger a deployment for "apache-dc" because it contains unresolved images
The "unresolved images" error occurs because OpenShift cannot find the image in its internal Docker registry. Let's add it from our public Docker image.
$ oc tag --source=docker lakshminp/apache:v1 myproject/apache:latest
Tag apache:latest set to lakshminp/apache:v1.
I'm essentially tagging the public image lakshminp/apache:v1 into the project's image stream as apache:latest. We can confirm this by listing the image streams.
$ oc get is
NAME     DOCKER REPO                        TAGS     UPDATED
apache   172.30.1.1:5000/myproject/apache   latest   About an hour ago
We roll out the deployment again, create the service, and expose it as a route.
$ oc apply -f apache_service.yml
$ oc expose svc/apache-service
So far, not very different from a replica set; in fact, we incurred an extra step with the image stream tag. The payoff comes now: to ship a new version, we simply retag the image, and the image change trigger redeploys it automatically.
$ oc tag --source=docker lakshminp/apache:v2 myproject/apache:latest
Tag apache:latest set to lakshminp/apache:v2.
You can see that OpenShift terminates the old pods and creates new ones.
$ oc get pods
NAME                 READY   STATUS        RESTARTS   AGE
apache-dc-2-btpqz    1/1     Running       0          5m
apache-dc-2-nqxkc    0/1     Terminating   0          5m
apache-dc-3-7b8c2    1/1     Running       0          2s
apache-dc-3-8qvqz    1/1     Running       0          6s
apache-dc-3-deploy   1/1     Running       0          8s
Creating apps from raw YAML files like this is not how we typically do things. In the next post, we'll see how to scale this workflow and improve the developer experience.