Practice 2 - Working with Docker
In this lab, we will take a look at how to install Docker, how to use Docker CLI commands, and how to build clusters using Docker Swarm. Docker containers allow you to package your application (and everything needed to run it) in a “container image”. Inside a container, you can include a base operating system, libraries, files and folders, environment variables, volume mount points, and your application binaries.
Key terminology:
- Docker image
- Lightweight, stand-alone, executable package that includes everything needed to run a piece of software
- Includes code, a runtime, libraries, environment variables and config files
- Docker container
- Runtime instance of an image - what the image becomes in memory when actually executed.
- Completely isolated from the host environment.
Build, Ship, and Run Any App, Anywhere
- Docker is available in two editions: Community Edition (CE) and Enterprise Edition (EE)
- Supported platforms: macOS, Microsoft Windows 10, CentOS, Debian, Fedora, RHEL, Ubuntu and more.
References
Referred documents and web sites contain supportive information for the practice.
Manuals
- Docker fundamentals: https://docs.docker.com/engine/docker-overview/
- Docker CLI: https://docs.docker.com/engine/reference/commandline/cli/
- Building a Docker image: https://docs.docker.com/engine/reference/builder/
- Docker swarm: https://docs.docker.com/engine/swarm/
Exercise 2.1. Installation of docker inside OpenStack instance
In this task, you are going to install Docker on Ubuntu and run basic commands to get comfortable with the Docker commands used in the next tasks.
- Create a virtual machine with the Ubuntu 18.04 OS as carried out in Practice 1 and connect to the virtual machine remotely via SSH.
- NB! DO NOT use your previous lab image/snapshot!
- NB! Extra modifications required to change docker network settings:
- Create a directory in the virtual machine in the path:
sudo mkdir /etc/docker
- Create a file in the docker directory:
sudo vi /etc/docker/daemon.json
with the following content:
{ "default-address-pools": [{ "base":"172.80.0.0/16","size":24 }] }
- This change is required because otherwise, Docker will use network addresses that collide with the university networks, and you WILL lose access to the instance.
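As a sanity check, you can validate the file before (re)starting Docker: a malformed daemon.json prevents the Docker daemon from starting at all. A minimal sketch (writing to /tmp here purely for illustration; on the VM the file lives at /etc/docker/daemon.json):

```shell
# Write the address-pool configuration (illustrative path; use /etc/docker/daemon.json on the VM)
cat > /tmp/daemon.json <<'EOF'
{ "default-address-pools": [ { "base": "172.80.0.0/16", "size": 24 } ] }
EOF

# Validate the JSON syntax -- a typo here would stop the Docker daemon from starting
python3 -m json.tool /tmp/daemon.json
```

With this pool, Docker carves /24 subnets (172.80.0.0/24, 172.80.1.0/24, ...) out of 172.80.0.0/16 for its networks instead of the default 172.17.0.0/16 range that collides with the university network.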
- Update the apt repo
sudo apt-get update
- Install packages to allow apt to use a repository over HTTPS:
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
- Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
- Use the following command to set up the stable repository.
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
- Update the apt package index, install Docker
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
NB! To run docker commands with non-root privileges
- Create a docker group (if it already exists, skip this step):
sudo groupadd docker
- Add a user to docker group:
sudo usermod -aG docker $USER
- Activate the changes:
newgrp docker
- Check the installation by displaying docker version:
docker --version
Exercise 2.2. Practicing docker commands
This task mainly helps you learn the basic Docker CLI commands, such as run, pull, listing images, attaching data volumes, working with exec (like SSH-ing into a container), checking the IP address, and port forwarding.
- Pull an image from Docker Hub and run an Ubuntu container in detached mode (https://docs.docker.com/engine/reference/run/#detached-vs-foreground), assign your name as the container name, install an HTTP server (use the docker exec command), and use port forwarding to access the container's HTTP traffic via host port 80.
- Create a login account at Docker Hub sign-up page
- Log in to your docker account from the docker host terminal:
docker login
- Provide an input to the following:
- Username: your docker hub id
- Password: Docker hub password
- NB! The login step is not mandatory, but it is recommended because Docker Hub now limits the number of pulls from a particular IP. Since you are using the university network through VPN, everyone appears to Docker Hub as a single IP. (Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit)
- Pull an image from docker hub:
docker pull ubuntu
- Check the downloaded images in local repository:
docker images
- Run a simple ubuntu container:
docker run -dit -p 80:80 --name <<yourname>> ubuntu
- <<yourname>> = please type your name
- Get the bash shell of container:
docker exec -it <container_name> sh
- Exit from the container:
exit
- Connect to container and update the apt repo:
docker exec -dit <container_name> apt-get update
- Install http server:
docker exec -it <container_name> apt-get install apache2
- Check the status of http server:
docker exec -it <container_name> service apache2 status
- If not running, start the http server:
docker exec -it <container_name> service apache2 start
- Check the web server from the container's host machine:
curl localhost:80
- Check the ip address of the container:
docker inspect <container_id> | grep -i "IPAddress"
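docker inspect prints a JSON array, so instead of grepping the raw text you can post-process it with a proper JSON parser. A sketch using a saved sample (the JSON below is a trimmed, hypothetical excerpt of real inspect output, and the address is made up):

```shell
# Hypothetical trimmed excerpt of `docker inspect <container_id>` output, saved for illustration
cat > /tmp/inspect_sample.json <<'EOF'
[ { "NetworkSettings": { "IPAddress": "172.80.0.2" } } ]
EOF

# Extract just the address field instead of grepping raw text
python3 -c 'import json; print(json.load(open("/tmp/inspect_sample.json"))[0]["NetworkSettings"]["IPAddress"])'
```

On a real container, `docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container_id>` achieves the same in one step.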
- Host directory as a data volume: here you mount a host directory in a container, which is useful for testing applications. For example, if you store source code in a host directory and mount it in the container, changes to the code in the host directory affect the application running in the container.
- Accessing a host file system on container with read only and read/write modes:
- Create a directory with the name test:
mkdir test && cd test
- Create a file:
touch abc.txt
- Run a container with the -v parameter to mount the host directory into the container
- Read only:
docker run -dit -v /home/ubuntu/test/:/home/:ro --name vol1 ubuntu sh
- Access the files in the container under the path /home and try to create a new file from inside the container (the write should fail with a "Read-only file system" error):
docker exec -it vol1 sh
cd /home
ls
exit
- Read/write:
docker run -dit -v /home/ubuntu/test/:/home/:rw --name vol2 ubuntu
docker exec -it vol2 sh
cd /home
ls
- Try to create some text files, then exit
- You can see the created files on the host machine:
cd /home/ubuntu/test/
ls
- Stop and delete the container:
docker stop vol1
docker rm vol1
- NB! Take the screenshot here
docker ps
- Data volume containers: a popular practice with Docker data sharing is to create a dedicated container that holds all of your persistent, shareable data, and to mount that data into other containers once they are created and set up.
- Create a data volume container and share data between containers.
- Create a data volume container
docker run -dit -v /data --name data-volume ubuntu
docker exec -it data-volume sh
- Go to the volume and create some files:
cd /data && touch file1.txt && touch file2.txt
- Exit the container:
exit
- Run another container and mount the volume from the earlier container:
docker run -dit --volumes-from data-volume --name data-shared ubuntu
docker exec -it data-shared sh
- Go to the data directory in the created container and list the files:
cd /data && ls
- NB! Take the screenshot here
docker ps
Exercise 2.3. Building a Dockerfile and using Docker Compose
The task is to create your own docker image and try to run it. More information about Dockerfiles: https://docs.docker.com/engine/reference/builder/. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession. Compose, in turn, is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services; then, with a single command, you create and start all the services from your configuration. To learn more about all the features of Compose, see the Compose documentation.
Dockerfile commands:
FROM | Set the base image |
LABEL | Add metadata to the image |
RUN | Execute commands in a new image layer |
CMD | Default command to run in the container; allowed only once |
EXPOSE | Container listen on the specific ports at runtime |
ENV | Set environment variables |
COPY | Copy the files or directories into container’s filesystem |
WORKDIR | Set working directory |
- The scenario is to deploy a web server with Flask and Jinja; the goal is to run a simple HTML file to display your name
- Create a directory for the project:
mkdir testcompose
- Enter into the directory:
cd testcompose
- Create a file called app.py in your project directory and paste this in:
nano app.py
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def hello():
    name = "<YOURNAME>"
    return render_template('index.html', name=name)
- Create a folder called templates in your project directory and create an HTML page, the content of which will be rendered using the Flask application:
mkdir templates
cd templates
nano index.html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
  </head>
  <body>
    <div>
      <h3>Hi!! I am {{ name }}</h3>
    </div>
  </body>
</html>
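The {{ name }} placeholder above is filled in by Jinja when render_template runs. As a crude illustration of the substitution (plain string replacement standing in for Jinja's template engine; "Alice" is just a placeholder value):

```shell
# Crude stand-in for Jinja rendering: substitute the {{ name }} placeholder by hand.
# This is only an illustration of what render_template does with name="Alice".
python3 - <<'EOF'
html = "<h3>Hi!! I am {{ name }}</h3>"
print(html.replace("{{ name }}", "Alice"))
EOF
```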
- Navigate back to the testcompose directory
cd ..
- Create a Dockerfile in this directory to build a container image using the Dockerfile commands
sudo vi Dockerfile
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
- The functionality of each of these commands is indicated below:
- Build an image starting with the Python 3.7 image.
- Set the working directory to /code.
- Set environment variables used by the flask command.
- Install gcc and other dependencies
- Copy requirements.txt and install the Python dependencies.
- Add metadata to the image to describe that the container listens on port 5000
- Copy the current directory . in the project to the workdir . in the image.
- Set the default command for the container to flask run
- Create a file called docker-compose.yml in your project directory and paste the following:
nano docker-compose.yml
version: "3.3"
services:
  web:
    build: .
    ports:
      - "8080:5000"
- NB! The web service uses an image that’s built from the Dockerfile in the current directory. Compose then maps the exposed container port, 5000, to host port 8080.
- Create another file called requirements.txt in your project directory. The file should include only one line - which defines that flask should be installed as a package:
flask
- Install docker-compose using
sudo apt install docker-compose
- From your project directory, start up your application by running
docker-compose up
- Check from another terminal the created docker container:
docker ps
- Check the running container at http://Your_VM_IP:8080/
- NB! Take the screenshot of your web application running.
- Tip: if you need to change the app (e.g. modify the .py file), you also need to re-build the Docker image (docker-compose will otherwise just keep using the old version of the image). Do this with:
docker-compose build
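During development, rebuilding after every edit gets tedious. As a sketch of an alternative (not required for this lab, and assuming Flask 1.x as pulled by the python:3.7-alpine image), Compose can bind-mount the project directory into the container so host-side edits are visible immediately, and FLASK_ENV=development turns on Flask's auto-reloader:

```yaml
# Development-only variant of docker-compose.yml (sketch, not part of the required task)
version: "3.3"
services:
  web:
    build: .
    ports:
      - "8080:5000"
    volumes:
      - .:/code              # mount the source so host edits show up in the container
    environment:
      FLASK_ENV: development # Flask 1.x: enables debug mode and the auto-reloader
```

With this variant, editing app.py on the host restarts the Flask server inside the container without a rebuild; only Dockerfile changes (e.g. new dependencies) still require docker-compose build.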
Exercise 2.4. Shipping a docker image to Docker hub
Docker hub is a hosted repository service provided by Docker for finding and sharing container images with your team.(https://www.docker.com/products/docker-hub)
- Create a login account in Docker Hub using the link https://hub.docker.com/signup and sign up.
- Push the docker image to docker hub
- First, log in to your docker account from the docker host terminal:
docker login
- Provide an input to the following:
- Username: your docker hub id
- Password: Docker hub password
- Tag your image before pushing it to Docker Hub (mywebserver refers to the image built in Exercise 2.3; if you built it with docker-compose, run docker images to find its actual name, e.g. testcompose_web):
docker tag mywebserver your-dockerhub-id/mywebv1.0
- Finally push the image to docker hub:
docker push your-dockerhub-id/mywebv1.0
- Scenario 1: Modify the container by creating some directories and save the changes in the image using docker commit command.
- Scenario 2: Export the container as a tar file by using the command docker export.
- NB! Take the screenshot of docker hub account showing your image
Exercise 2.5. Working with docker swarm.
In the following task you are required to construct a swarm cluster and deploy an HTTP web server service (using the Docker image created in Task 2.3).
Docker Swarm is used for cluster management and orchestration of containers. It is supported in Docker Engine by SwarmKit. A swarm consists of multiple Docker hosts which run in swarm mode and act as managers (which manage membership and delegation) and workers (which run swarm services). A given Docker host can be a manager, a worker, or perform both roles.
Requirements to setup a docker swarm:
- Docker Engine: the current Docker Engine supports SwarmKit, so there is no need to install anything extra on your machines.
- At least two docker hosts.
- IPs assigned to each host.
How to set up docker swarm cluster
NB! You do not need to run this command, because the manager is already configured with Manager_IP: 172.17.66.255
- Choose any docker host to designate as the manager, and run the following command to generate the token which the workers can use to join the cluster
docker swarm init --advertise-addr Manager_IP
NB! You need run this command in your terminal
- The manager will generate the token as
docker swarm join --token SWMTKN-1-3d0ww604j60rbqjemna522o405t74erlnhn723pr14h3rhb91l-9e97nb4mf6nue6f2catdukyoo 172.17.66.255:2377
- In your docker host terminal, copy the above command to join the cluster.
NB! You do not need to run this command; it is only for reference
- The cluster can be seen in manager host
docker node ls
Docker Service: to deploy an application image when Docker Engine is in swarm mode, you create a service. Frequently a service is the image for a microservice within the context of some larger application. When you create a service, you define its desired state (number of replicas, network and storage resources available to it, ports the service exposes to the outside world, and more). For more information, refer to this link: Docker swarm service
Services, tasks, and containers: when a service is deployed on the swarm, the swarm manager receives the service definition and schedules the service onto nodes in the form of one or more replica tasks.
Creating a service in the swarm cluster: you need a Docker image of your service stored in Docker Hub, so that it is accessible to all the nodes in the cluster.
- NB! Here, you do not need to run the command below on your docker host. Choose one port between 8080 and 8200 (it should not be the same as your friend's) and your docker image name, then come to me to run your service. If you are performing this task after the lab hours, email me (shivananda.poojara@ut.ee) your image name in Docker Hub.
- NB! You do not need to run this command; it is only for your information.
- In the swarm manager, run the command to create your service
docker service create --replicas 3 -p 8081:5000 --name web shivupoojar/mywebv1.0
You are creating a service with 3 replica tasks, forwarding container HTTP traffic from port 5000 to host port 8081, with the service name web and the service image stored in Docker Hub. The swarm scheduler then creates the service with 3 replicas running on the nodes.
- NB! You do not need to run these commands
- You can view the service running using
docker service ls
in the manager host
- Access the service running on the manager at http://Manager-IP:<<YOUR_CHOSEN_PORT>>
- NB! Here, take a screenshot of your service running in your browser at http://Manager-IP:<<YOUR_CHOSEN_PORT>>
- You can inspect your service from the manager node
docker service inspect web
- Scaling can be done by
docker service scale web=4
Deliverables
- Upload the screenshots taken as mentioned in Exercise 2.2 (2 screenshots), Exercise 2.3, Exercise 2.4 and Exercise 2.5.
- Pack the screenshots into a single zip file and upload them through the following submission form.
- Your instance must be terminated!