Getting Started with Docker (Beginner’s Guide)

Today, I'll be talking about containerizing your Python apps using Docker.

We will cover a lot of important concepts, and by the end of the tutorial you will have learned the following:

  1. How to containerize a Flask RESTful API (backend) and a frontend UI (Vue)
  2. Important concepts like the Dockerfile, Docker Compose, and Docker volumes and networks
  3. How to use Docker Compose to build, run, and connect multiple containers together


What is Containerization?

It is a method of packaging software code together with all its dependencies so that it runs uniformly and consistently on any infrastructure.

Containerization allows your applications to be “written once and run anywhere.”

Many people assume that since containers isolate applications, they are the same as virtual machines. They look similar, but the fundamental difference is that containers share the same kernel as the host, without the need to virtualize or emulate anything.

Image Source: Docker

With virtualization, by contrast, when a virtual machine is spun up, the hypervisor virtualizes an entire system, from the CPU to RAM to storage. To support this virtualized system, an entire guest operating system needs to be installed.


Image Source: Docker


What is Docker?

Docker is a tool that enables you to create, deploy, and run applications using the concept of containerization (containers).

As a Python programmer, you should already be familiar with virtual environments such as virtualenv, venv, and Pipenv, which are a way to isolate Python packages.

A simple difference between virtual environments and Docker is that virtual environments can only isolate Python packages. They cannot isolate non-Python software like a PostgreSQL or MySQL database, and they still rely on a global, system-level installation of Python on your computer.

A virtual environment points to an existing Python installation; it does not contain Python itself. Docker, on the other hand, uses Linux containers to isolate the entire operating system, not just the Python parts. In other words, Docker will install Python itself inside the container, and it can also install and run a production-level database.


Once Docker has been installed, run this to check your Docker version:

$ docker --version
Docker version 19.03.13, build 4484c46d9d


Important Keywords

  • Docker Image: A Docker image is a template that contains a set of instructions for creating a container that can run on the Docker platform; it is built from a Dockerfile. Images become containers at runtime.
  • Docker Container: A container is a lightweight, executable package of software that includes everything needed to run an application.
  • Docker Hub: A hosted repository service provided by Docker for finding and sharing container images with your team, more like a GitHub or Bitbucket for hosting your Docker images.
  • Docker Engine: A client-server application that provides the platform, the runtime, and the tooling for building and managing Docker images. It comprises the Docker API, the Docker daemon, and the Docker CLI.
  • Docker Daemon: A service that runs in the background of the host computer and handles the heavy lifting of most Docker commands.
  • Docker CLI: The primary way we interact with Docker; it exposes the set of commands we use.


Clone this repo; in it, I designed a simple note app with Flask-RESTful and built a simple frontend with Vue to consume it.

git clone

To get the backend app running, run the following in your terminal:

cd backend
python3 -m venv env
source env/bin/activate
pip install -r requirements.txt
export FLASK_DEBUG=1
export FLASK_ENV=development
flask run


Open another terminal and run the following to set up your frontend:

cd note_ui
npm install
npm run serve


Let’s containerize the backend and frontend services

To get our Flask and Vue code running in containers, we do the following:

  • Create a Dockerfile for each and package them separately as Docker images
  • Then run containers based on those images


A Dockerfile is a simple text file that contains the list of commands the Docker client calls while creating an image; it’s a simple way to automate the image-creation process.

Simply put, a Dockerfile is a set of instructions that tells Docker how to build an image.

A typical Dockerfile is made up of the following:

  • A FROM instruction that tells Docker what the base image is
  • WORKDIR instruction sets the current working directory for RUN, CMD, ENTRYPOINT, COPY, and ADD instructions.
  • COPY supports the basic copying of files to the container.
  • RUN instruction to run some shell commands (for example, install-dependent programs not available in the base image)
  • A CMD or an ENTRYPOINT instruction that tells Docker which executable to run when a container is started.
  • The ENV instruction sets environment variables in the image.
  • The VOLUME instruction tells Docker to create a directory on the host and mount it to a path specified in the instruction.
  • The EXPOSE instruction tells Docker that the container listens for the specified network ports at runtime.
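Putting those instructions together, a minimal generic Dockerfile might look like the sketch below. Every name here (the base image, the script, the paths) is illustrative only, not part of this tutorial's apps:

```dockerfile
# Base image to start from
FROM python:3.8-slim

# Environment variable baked into the image
ENV APP_ENV=production

# Working directory for all following instructions
WORKDIR /app

# Copy the dependency list first, so this layer is cached across builds
COPY requirements.txt .

# Run a shell command at build time (install dependencies)
RUN pip install -r requirements.txt

# Copy the rest of the project files into /app
COPY . .

# Document the port the app listens on at runtime
EXPOSE 5000

# Declare a mount point for persistent data
VOLUME /app/data

# Default executable when a container starts
CMD ["python", "main.py"]
```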


Now create two Dockerfiles, one in the root of the note_ui folder and one in the root of the backend folder. First, the one for our Vue frontend:

FROM node:lts-alpine

# install simple http server for serving static content
RUN npm install -g http-server

# make the 'app' folder the current working directory
WORKDIR /app

# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./

# install project dependencies
RUN npm install

# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .

# build app for production with minification
RUN npm run build

CMD [ "http-server", "dist" ]

Run the following on your command line

docker build -t app-ui .
docker run -it -p 8080:8080 --rm app-ui
Starting up http-server, serving dist
Available on:
Hit CTRL-C to stop the server

Our Vue app is up and running.


# pull official base image
FROM python:3.8.1-slim-buster

# set work directory
WORKDIR /app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt

# copy project
COPY . /app/

# bind gunicorn to all interfaces on port 5000 (the port we publish with docker run)
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "main:app"]

PYTHONDONTWRITEBYTECODE: prevents Python from writing .pyc files to disk
PYTHONUNBUFFERED: ensures our console output is sent straight to the terminal and not buffered

We chose Gunicorn as our default web server. To use it, we must bind it to an application callable (what the application server uses to communicate with your code) as an entry point: CMD ["gunicorn", "--bind", "0.0.0.0:5000", "{subfolder}.{module_file}:app"]
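For clarity, main:app simply means "the callable named app inside the module main". In this tutorial app is the Flask instance, but Gunicorn will serve any WSGI callable. Below is a minimal, hypothetical stand-in (plain WSGI, no Flask) just to show the interface Gunicorn expects; the module name and response body are invented for illustration:

```python
# minimal_wsgi.py -- a hypothetical stand-in for the Flask `app` in main.py.
# Gunicorn could serve it with: gunicorn --bind 0.0.0.0:5000 minimal_wsgi:app

def app(environ, start_response):
    """A minimal WSGI callable: receives the request environ dict and a
    start_response function, and returns an iterable of byte strings."""
    body = b'{"status": "ok"}'
    start_response("200 OK", [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Flask's app object implements this same calling convention under the hood, which is why gunicorn main:app works without any Flask-specific configuration.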

Run the following on your terminal

docker build -t app-backend .
docker run -it -p 5000:5000 --rm app-backend

The following output will be displayed:

[2020-12-01 21:16:40 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2020-12-01 21:16:40 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
[2020-12-01 21:16:40 +0000] [1] [INFO] Using worker: sync


Container Orchestration with Docker Compose

Containerized applications are usually composed of several running containers. We already have our frontend and backend as independent containers, and we would also need to set up our database server.

Looking at our setup, we have three tiers:

  • UI tier – the Vue app
  • Logic tier – the Python component we focus on
  • Data tier – a PostgreSQL database to store the data the logic tier needs

However, we want these three components to communicate with each other, and coordinating all these containers becomes much harder if we only use Dockerfiles. This is where Docker Compose comes in handy.

Unlike the Dockerfile, which is a set of instructions telling the Docker engine how to build an image, the Compose file is a YAML configuration file that defines the services, networks, and volumes required for the application to start.

This would be the setup for our application


Our Docker Compose file:

  • takes care of pulling the PostgreSQL image from Docker Hub and launching the postgres container
  • builds our server and client services from their respective Dockerfiles in the backend and note_ui folders
  • runs the application

It builds the images locally and then runs the containers from them. It also takes care of creating a default network and placing all containers in it so that they can reach each other.

To use Docker Compose, you’ll need to define how to build your containers with YAML in a docker-compose.yml file. Now, add a docker-compose.yml file to the project root folder and copy the code below.

version: '3.7'

services:
  server:
    build:
      context: ./backend
      dockerfile: Dockerfile
    command: gunicorn --bind 0.0.0.0:5000 main:app
    restart: always
    ports:
      - 5000:5000
    env_file: app.env
    depends_on:
      - postgres
    networks:
      - backend-network
      - frontend-network

  postgres:
    image: postgres:11
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data
    env_file: db.env
    networks:
      - backend-network

  client:
    build:
      context: ./note_ui
      dockerfile: Dockerfile
    restart: always
    ports:
      - 8080:8080
    depends_on:
      - server
    networks:
      - frontend-network

volumes:
  postgres_data:

networks:
  backend-network:
  frontend-network:




  • version: Docker Compose version 3 is the current major version of the Compose file format, with a version key of 3 or 3.x. Version 3 removes several deprecated options, such as volume_driver.
  • services: The first root key of the Compose YAML; it holds the configuration of the containers to be created. We defined our application’s services: server (the Flask backend), postgres (the database server), and client (our Vue UI).

  • build: Contains the configuration options applied at build time. The build key can specify the path to the build context (where we build our images) and the Dockerfile location.
  • env_file: Provides the path to an environment file, which is read to set the environment variables for the service.

For our environment files, we have two: app.env (app environment variables) and db.env (database variables).



You’ll notice I set the PostgreSQL host to postgres instead of localhost.


This is because each container in Docker is a separate host, which means you can’t reach PostgreSQL using localhost; you have to use the hostname of the PostgreSQL container, which by default is the name of the service defined in the Docker Compose file (postgres).

I struggled with the error psycopg2.OperationalError: could not connect to server: Connection refused when I had set POSTGRES_HOST=localhost.
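As a hypothetical sketch, the two env files could look something like this. POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB are standard variables read by the official postgres image; the values, and the exact variables your own app reads, are placeholders:

```
# db.env  (read by the postgres service; values are placeholders)
POSTGRES_USER=notes_user
POSTGRES_PASSWORD=change_me
POSTGRES_DB=notes_db

# app.env  (read by the server service)
# Note: the host is the Compose service name, NOT localhost
POSTGRES_HOST=postgres
```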

  • depends_on: Sets the dependency requirements across services: our server depends on postgres, while the client (Vue app) depends on the server.
  • ports: Maps ports on the host to ports exposed by the container.
  • volumes: Docker volumes are the recommended method of persisting data stored in containers, because a container’s data doesn’t persist when the container is removed, and extracting data out of a container is difficult.

Every time we take down our containers, we lose the data stored in previous sessions. To avoid that and persist our PostgreSQL (DB) data between different containers, we use volumes. For this, we simply define a named volume in the Compose file and specify a mount point for it in the postgres service:

  postgres:
    image: postgres:11
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data


The volumes key has the value postgres_data:/var/lib/postgresql/data, which means Docker will mount the named volume postgres_data at /var/lib/postgresql/data inside the container.
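For the named volume to exist, it also has to be declared under the top-level volumes key of the Compose file, which tells Compose to create (or reuse) it:

```yaml
volumes:
  postgres_data:
```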

  • networks: By default, Compose sets up a single network for your app. Instead of just using that default network, we are going to specify our own networks with the top-level networks key.

The client service is isolated from the postgres service because they do not share a common network – only the server can talk to both.

version: "3.7"
services:
  postgres:
    image: postgres:11
    networks:
      - backend-network
  server:
    networks:
      - backend-network
      - frontend-network
  client:
    networks:
      - frontend-network
networks:
  backend-network:
  frontend-network:

  • restart: Provides the restart policy for the container. By default, the restart policy is set to “no”, meaning Docker will not restart the container no matter what.

The following restart policies are available:

  • no: Container will never restart
  • always: Container will always restart after exit
  • on-failure: Container will restart if it exits due to an error
  • unless-stopped: Container will always restart unless exited explicitly or if the Docker daemon is stopped
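In the Compose file, the policy is set per service. For example, a database you want revived after crashes but left down when you stop it manually could use the following (a hypothetical alternative to the always policy used in this tutorial):

```yaml
services:
  postgres:
    image: postgres:11
    restart: unless-stopped  # restart on failure, stay stopped after a manual stop
```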


docker-compose up -d --build
docker-compose exec server flask db init
docker-compose exec server flask db migrate -m "first migration"
docker-compose exec server flask db upgrade
docker-compose up
docker-compose down
  1. docker-compose up -d --build : reads the Compose file, scans for build keys, builds and tags the images, and starts the containers in detached mode.
  2. docker-compose exec : lets you run ad hoc commands in your containers. As seen below, we use it to run database migrations for our Flask app using Alembic.
docker-compose exec server flask db init
docker-compose exec server flask db migrate -m "first migration"
docker-compose exec server flask db upgrade
  3. docker-compose up : starts the containers.
  4. docker-compose down : stops the containers, then removes the containers and networks (add the -v flag to also remove volumes).
  5. docker-compose logs : shows logs for all services. To see only the logs for the frontend UI, run docker-compose logs client; for our backend service, docker-compose logs server.



$ docker-compose up

Open a new terminal and run the following to make POST requests to our server:

http POST http://localhost:5000/v1/notes/ title="First day in Nairobi" notes="A visit to Yaba"
http POST http://localhost:5000/v1/notes/ title="First day in Nairobi" notes="I love Nairobi, it is so beautiful"
http POST http://localhost:5000/v1/notes/ title="First day learning DevOps" notes="I learning Containerization today"
http POST http://localhost:5000/v1/notes/ title="First day learning IaaC" notes="I learning DevOps today"

Now check our app in the browser: open http://localhost:5000/v1/notes/ , our REST API.

Then open our Vue UI in the browser.


VOILA, we are done!

The source code for this setup can be found here

Publish  your images to Docker Hub

Before we can push an image to Docker Hub, we will first need an account on  Docker Hub.

After you create your account, you will have your own unique username. The next step is to build the image with docker build -t <username>/flask-setup . (insert your own username), log in, and push.

Open two separate terminals, and run the following command below

cd backend
docker build -t oluchilinda/flask-setup .
docker login
docker push oluchilinda/flask-setup
cd note_ui
docker build -t oluchilinda/vue-setup .
docker login
docker push oluchilinda/vue-setup


What’s next?

In a future post, I will write more about:

  • Container orchestration with Kubernetes (a platform for scheduling and automating the deployment, management, and scaling of containerized applications)
  • Going a step further and dockerizing this Flask application with Nginx






