GETTING STARTED WITH KUBERNETES – PART 3

Welcome back to the third post in my Kubernetes series. This post is a continuation of my previous posts; please click on the links below to read the first and second posts in case you haven’t read them.

 

By the end of this post, you will have a working local Kubernetes setup with the three microservices (Vue, Flask-RESTful, and PostgreSQL) deployed and working together as a whole.

 

PROJECT SETUP

  • Run the following commands in your terminal; here we clone the repo and create the YAML files for our K8s objects:
git clone https://github.com/oluchilinda/Devops-Setup.git
cd Devops-Setup
git checkout docker-setup
mkdir kubernetes
cd kubernetes
touch flask-deployment.yml ingress.yml \
postgres-persistent-vol.yml \
postgres-deployment.yml secret.yml vue-deployment.yml

Your folder structure will look like this:

├── note_ui
├── backend
├── kubernetes
│   ├── flask-deployment.yml
│   ├── ingress.yml
│   ├── postgres-persistent-vol.yml
│   ├── postgres-deployment.yml
│   ├── secret.yml
│   ├── vue-deployment.yml
├── .gitignore
├── .dockerignore
├── docker-compose.yml

  • Set up Ingress on Minikube with the NGINX Ingress Controller

Launch a new terminal, start minikube locally, and enable the NGINX Ingress controller with the commands below (the hyperkit driver is macOS-specific, so pick the driver that matches your OS):

minikube start --driver=hyperkit 
minikube addons enable ingress

Verify that the NGINX Ingress controller is working by running the command below

kubectl get pods -n kube-system
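Depending on your minikube version, the ingress controller Pods may land in the ingress-nginx namespace rather than kube-system. If the command above shows nothing ingress-related, this alternative check (an assumption about newer minikube releases) should find them:

kubectl get pods -n ingress-nginx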

Configuring the services

To configure our app's services (Vue, Flask, and PostgreSQL) in Kubernetes, we need to define a Deployment and a Service per app in a YAML file, plus a PersistentVolume, a PersistentVolumeClaim, and a Secret for PostgreSQL.

SETTING UP POSTGRESQL APPLICATION

Using this guide from the Kubernetes docs, which explains how to run a stateful application like MySQL, we can repeat the same steps here:

  • Create a PersistentVolume referencing a disk in your environment.
  • Configure secrets, to store our database credentials
  • Create a PostgreSQL Deployment.
Create a Persistent Volume

Since containers are ephemeral, we need to configure a volume, via a PersistentVolume and a PersistentVolumeClaim, to store the Postgres data outside of the Pod.

PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource.

Using the guide available in the Kubernetes docs, add the following code to the kubernetes/postgres-persistent-vol.yml file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-volume
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "data/postgres-pv"

The configuration file specifies that the volume is at /data/postgres-pv on the cluster’s Node. The configuration also specifies a size of 5 gibibytes and an access mode of ReadWriteOnce, which means the volume can be mounted as read-write by a single Node. It defines the StorageClass name standard for the PersistentVolume, which will be used to bind PersistentVolumeClaim requests to this PersistentVolume.

A PersistentVolumeClaim (PVC) is a request for storage by a user.

Now create a PersistentVolumeClaim that requests a volume of at least five gibibytes that can provide read-write access for at least one Node.

Add the new code below to the kubernetes/postgres-persistent-vol.yml file; the full file now looks like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv-volume
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

#persistent-volume-claim  (New code) 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

 

Create the PersistentVolume and PersistentVolumeClaim:

$ kubectl apply -f ./kubernetes/postgres-persistent-vol.yml

You see:

persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
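Optionally, confirm that the claim bound to the volume; a quick check (both STATUS columns should read Bound):

kubectl get pv postgres-pv-volume
kubectl get pvc postgres-pv-claim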

In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource such as a Google Compute Engine persistent disk, an NFS share, or an Amazon Elastic Block Store volume.

Secrets:

Kubernetes Secrets let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys.

We’ll set up a Secret to store our Postgres database credentials.

There are different types of Secret; we use Opaque, which holds arbitrary user-defined data. Check the Kubernetes docs here for more info.

Open your terminal and run the following commands, substituting your own database username, password, and database name.

The user, password, and database name fields are base64-encoded strings.

$ echo -n "db_username" | base64
ZGJfdXNlcm5hbWU=

$ echo -n "db_password" | base64
ZGJfcGFzc3dvcmQ=

$ echo -n "db_name" | base64
ZGJfbmFtZQ==

kubernetes/secret.yml 

Copy the base64-encoded values into kubernetes/secret.yml:

apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
data:
  user: ZGJfdXNlcm5hbWU=
  password: ZGJfcGFzc3dvcmQ=
  db: ZGJfbmFtZQ==

Add the Secrets object:

$ kubectl apply -f ./kubernetes/secret.yml
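To confirm the Secret landed in the cluster, or to let kubectl handle the base64 encoding for you, the sketch below shows both (it assumes kubectl 1.18+ for --dry-run=client and reuses the placeholder values from above):

# Inspect the Secret and decode one of its keys
kubectl get secret postgres-credentials
kubectl get secret postgres-credentials -o jsonpath='{.data.user}' | base64 --decode

# Alternative: generate the same manifest without encoding by hand
kubectl create secret generic postgres-credentials \
  --from-literal=user=db_username \
  --from-literal=password=db_password \
  --from-literal=db=db_name \
  --dry-run=client -o yaml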

 

CREATE A POSTGRESQL DEPLOYMENT

We are going to create two objects: a Service and a Deployment.

Service

To access the deployment or container, we need to expose the PostgreSQL service. Kubernetes provides different types of services like ClusterIP, NodePort, and LoadBalancer.

With ClusterIP we can access the PostgreSQL service within Kubernetes. NodePort gives the ability to expose the service endpoint on the Kubernetes nodes.
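To see what NodePort does in practice, once the postgres Service below has been applied you can inspect the node port Kubernetes assigned (a quick check; the high port number will differ on your machine):

# PORT(S) shows the service port and the assigned node port, e.g. 5432:31234/TCP
kubectl get service postgres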

kubernetes/postgres-deployment.yml 

apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    service: postgres
spec:
  type: NodePort
  ports:
  - port: 5432
  selector:
    service: postgres

Deployment

The Deployment controls the creation of Pods so that the desired number is always running. It creates them from the specified image and adds any configuration that is needed; the Pods run the app.

Add the code below in the  kubernetes/postgres-deployment.yml 

apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    service: postgres
spec:
  selector:
    service: postgres
  type: NodePort
  ports:
  - port: 5432
  



#Deployment   (new code)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  labels:
    name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      service: postgres
  template:
    metadata:
      labels:
        service: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:11
        env:
          - name: POSTGRES_USER
            valueFrom:
              secretKeyRef:
                name: postgres-credentials
                key: user
          - name: POSTGRES_PASSWORD
            valueFrom:
              secretKeyRef:
                name: postgres-credentials
                key: password
          - name: POSTGRES_DB
            valueFrom:
              secretKeyRef:
                name: postgres-credentials
                key: db
        volumeMounts:
          - name: postgres-volume-mount
            mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-volume-mount
        persistentVolumeClaim:
          claimName: postgres-pv-claim
      restartPolicy: Always

 

This YAML file describes a Deployment that runs PostgreSQL and references the PersistentVolumeClaim we created earlier. The file defines a volume mount for /var/lib/postgresql/data and backs it with the 5 GiB claim we have already provisioned.

Using secretKeyRef, the container pulls the database credentials that we declared in the Secret. We create only a single Pod replica for our DB and pull the PostgreSQL image directly from Docker Hub (image: postgres:11).

Create the Deployment:

$ kubectl create -f ./kubernetes/postgres-deployment.yml

You see:

service/postgres created
deployment.apps/postgres created
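Before moving on, it is worth checking that the database Pod is up and that the credentials from the Secret actually work; a minimal sketch, assuming the placeholder db_username and db_name values used earlier:

# Confirm the Deployment and its single replica are ready
kubectl get deployment postgres
kubectl get pods -l service=postgres

# Open a psql shell inside the Postgres container (local socket, no password needed)
kubectl exec -it deploy/postgres -- psql -U db_username db_name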

 

SETTING UP FLASK  APPLICATION

We are going to create two objects: a Service and a Deployment.

Add the code below in the  kubernetes/flask-deployment.yml 

apiVersion: v1
kind: Service
metadata:
  name: flask
  labels:
    service: flask
spec:
  selector:
    app: flask
  ports:
  - protocol: "TCP"
    port: 5000
    targetPort: 5000
  type: LoadBalancer



---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask
  labels:
    name: flask
spec:
  replicas: 4
  selector:
    matchLabels:
      app: flask
  template:
    metadata:
      labels:
        app: flask
    spec:
      containers:
      - name: flask
        image: oluchilinda/flask-setup:latest   
        env:
        - name: FLASK_ENV
          value: "development"
        - name: FLASK_APP
          value: "main.py"
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: user
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: password
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: db
      restartPolicy: Always

Our YAML file is telling Kubernetes the following:

  • You want a load-balanced service exposing port 5000 (the same port the container listens on)
  • You want four instances of the flask container running replicas: 4
  • We are pulling our flask image from docker hub image: oluchilinda/flask-setup:latest

Create the Deployment:

$ kubectl create -f ./kubernetes/flask-deployment.yml
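You can watch the rollout and confirm all four replicas come up; a quick check:

# Wait for the rollout to finish, then list the Flask Pods
kubectl rollout status deployment/flask
kubectl get pods -l app=flask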

 

SETTING UP VUE APPLICATION

We are going to create two objects: a Service and a Deployment.

Add the code below in the  kubernetes/vue-deployment.yml

apiVersion: v1
kind: Service
metadata:
  name: vue
  labels:
    service: vue
spec:
  selector:
    app: vue
  ports:
  - port: 8080
    targetPort: 8080

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vue
  labels:
    name: vue
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vue
  template:
    metadata:
      labels:
        app: vue
    spec:
      containers:
      - name: vue
        image: oluchilinda/vue-setup:latest
      restartPolicy: Always

Create the Deployment:

$ kubectl create -f ./kubernetes/vue-deployment.yml
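If you want to peek at the Vue frontend before wiring up the Ingress, you can port-forward the Service to your machine; a quick sketch (press Ctrl+C to stop the forward):

# Forward local port 8080 to the vue Service, then open http://localhost:8080
kubectl port-forward service/vue 8080:8080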

 

SETTING UP INGRESS

Ingress: exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster, so we can reach the app externally.

Using the guide available in the K8s docs, add the code below to kubernetes/ingress.yml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: noteapp-ingress
spec:
  rules:
  - host: note.app
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: vue
            port:
              number: 8080
      - pathType: Prefix
        path: /v1/notes/
        backend:
          service:
            name: flask
            port:
              number: 5000
      - pathType: Prefix
        path: /home/
        backend:
          service:
            name: flask
            port:
              number: 5000

Here, we defined the following HTTP rules:

  1. / – routes requests to the Vue Service
  2. /home/ – routes requests to the Flask Service
  3. /v1/notes/ – routes requests to the Flask Service

 

Create the Ingress object:

$ kubectl apply -f ./kubernetes/ingress.yml

To see the description of the Ingress, run the command below in your terminal. Recall from the ingress.yml file that we named it noteapp-ingress.

 

$ kubectl describe ingress noteapp-ingress
Name:             noteapp-ingress
Namespace:        default
Address:          192.168.64.2
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  note.app    
              /            vue:8080 (172.17.0.11:8080)
              /v1/notes/   flask:5000 (172.17.0.10:5000,172.17.0.7:5000,172.17.0.8:5000 + 1 more...)
              /home/       flask:5000 (172.17.0.10:5000,172.17.0.7:5000,172.17.0.8:5000 + 1 more...)
Annotations:  <none>
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  12m   nginx-ingress-controller  Ingress default/noteapp-ingress
  Normal  UPDATE  12m   nginx-ingress-controller  Ingress default/noteapp-ingress

View the state of the ingress you added:

kubectl get ingress noteapp-ingress

You see

NAME              CLASS    HOSTS      ADDRESS        PORTS   AGE
noteapp-ingress   <none>   note.app   192.168.64.2   80      13m

Where  192.168.64.2 is the IP allocated by the Ingress controller to satisfy this Ingress.
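You can already exercise the routing rules without touching /etc/hosts by sending the Host header yourself; a quick sanity check (assumes curl is installed and uses the Ingress address above):

# Hit the Flask route through the Ingress by spoofing the Host header
curl -H "Host: note.app" http://$(minikube ip)/home/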

Get all pods

kubectl get pods

If all the Pods show a Running status, everything is up and running.

You can also visit the web-based Kubernetes user interface to get an overview of applications running on your cluster.

Type minikube dashboard in your terminal and a web page opens automatically.

 

TEST:

Let's check that everything is working as expected.

Add your Ingress host to the local hosts file (/etc/hosts) by running echo "$(minikube ip) note.app" | sudo tee -a /etc/hosts in the terminal.

Remember that in our Ingress YAML file we set the host to note.app:

spec: 
  rules: 
   - host: note.app

Our app is now accessible at http://note.app/

You can install HTTPie (a command-line HTTP client) using pip.
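For example:

pip install httpie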

Now run the following in your terminal:

http GET http://note.app/home/
http POST http://note.app/v1/notes/   title="First day in Nairobi" notes="A visit to Yaba"
http POST http://note.app/v1/notes/   title="First day in Nairobi"  notes="I  love Nairobi, it is so beautiful"

We have the following output

 

 

Voilà, we are done. To keep going on your own:

  • You can learn how to deploy Kubernetes clusters on managed cloud services like AKS (Azure Kubernetes Service), EKS (Amazon Elastic Kubernetes Service), and GKE (Google Kubernetes Engine)
  • Learn important tips on managing Kubernetes in production and read Kubernetes Failure Stories here

You can access this project on GitHub here.

 
