Lab 10
Introduction to Docker
In this lab we will cover Docker installation, Docker images, Docker containers, Docker Compose, and Docker volumes. Docker containers allow you to package your application (and everything needed to run it) in a "container image". Inside a container you can include a base operating system, libraries, files and folders, environment variables, volume mount-points, and your application binaries. The main difference between containers and virtual machines is that virtual machines are managed by a hypervisor and run on virtualized hardware, while container systems obtain operating system services from the underlying host and isolate the applications using virtual-memory hardware.
To elaborate, Docker is a tool that automates the deployment of applications in lightweight containers so that applications work consistently across different environments. Added benefits include easy sharing and copying: in many cases tens, if not hundreds, of containers can be spun up with minimal differences.
Docker has a few important features:
- Multiple containers run on the same hardware
- High productivity
- Maintains isolated applications
- Quick and easy configuration.
The following figure shows the overall DevOps picture and where Docker fits in.
Note: Extra modifications are required to change Docker's network settings. These prevent Docker from overriding your VM's network, which happens to be in the same subnet that the university uses. Also make sure you have set a password for the root user in case this needs to be fixed manually later, as network login might be broken.
- Create a directory in the virtual machine in the path:
sudo mkdir /etc/docker
- Create a file in the docker directory and edit it:
/etc/docker/daemon.json
- Copy the following script:
{
  "bip": "192.168.67.1/24",
  "fixed-cidr": "192.168.67.0/24",
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "default-address-pools": [
    { "base": "192.168.167.1/24", "size": 24 },
    { "base": "192.168.168.1/24", "size": 24 },
    { "base": "192.168.169.1/24", "size": 24 },
    { "base": "192.168.170.1/24", "size": 24 },
    { "base": "192.168.171.1/24", "size": 24 },
    { "base": "192.168.172.1/24", "size": 24 },
    { "base": "192.168.173.1/24", "size": 24 },
    { "base": "192.168.174.1/24", "size": 24 }
  ]
}
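The steps above can be sketched as a single shell session; this assumes you have saved the JSON shown above into a local file named daemon.json (an illustrative name):

```shell
# Create the config directory (idempotent thanks to -p)
sudo mkdir -p /etc/docker
# Install the prepared configuration file
sudo cp daemon.json /etc/docker/daemon.json
# Apply the new network settings; only relevant once Docker is installed and running
sudo systemctl restart docker
```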
Before the installation it is a good idea to update all the packages in your VM.
Docker Installation for CentOS 8
Docker installation on CentOS 8 must be done manually; first we have to add the repository that Docker is downloaded from.
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf list docker-ce
sudo dnf install docker-ce --nobest -y
Note: For CentOS 7 you do not need to run the commands above; just start from the following command, but before the installation upgrade your OS as mentioned above.
sudo yum install -y docker
To check the version and the docker information,
docker --version
docker info
Use the following commands to start the docker service,
sudo systemctl start docker.service
or sudo systemctl start docker
sudo systemctl status docker.service
or sudo systemctl status docker
sudo systemctl enable docker.service
or sudo systemctl enable docker
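As a sketch, the start and enable steps can also be combined; this assumes your systemd version supports the --now flag (it does on CentOS 7 and 8):

```shell
# Start the service immediately and enable it at boot in one step
sudo systemctl enable --now docker
# Confirm the daemon is running; prints "active" when the service is up
systemctl is-active docker
```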
Now check docker information from the following command
docker info
You should now see information on the console similar to the following screenshot.
Docker is now up and running. Next we check the images with the following command.
docker images
The above command lists all the images, though there are no images at the moment, as you can see in the following screenshot.
To list containers, you can use the ps command shown below,
docker ps
To list all containers (including stopped ones), use the -a flag with ps; see the following complete command,
docker ps -a
At the moment there is no container, as you can see in the above screenshot. To run a container (hello-world), use the following command. If the image is not available on your system, Docker will get it from Docker Hub, an online repository, then run the image and start the container.
docker run hello-world
After running the command you will see that Docker starts pulling the image if it is not already present.
You can confirm the cached image using the docker images command,
docker images
Docker images
Docker images are templates used to create Docker containers; a container is a running instance of an image. As you have already guessed, copies of images can be stored both locally and online. Docker Hub has thousands of prebuilt images, along with the composition files to modify them. This gives developers easy access to services and infrastructure to test and modify.
Docker Commands for Images
docker images
docker images --help
docker pull <image name:tag>
or docker pull ubuntu:18.04 (for example).
The image name can be found on the Docker Hub website, https://hub.docker.com, as shown in the following screenshot.
After pulling an ubuntu image, use the following command to run an ubuntu container. Feel free to explore around; if you happen to break it you can always re-pull the image or restart it.
docker run -it ubuntu
or docker run -it ubuntu bash
Note: -it = interactive terminal.
You can also set the name of the container, e.g. docker run --name MyUbuntu1 -it ubuntu bash.
The output of the above command is in the following screen shot.
Find out the output of the following commands,
docker ps
docker inspect ubuntu
(This command shows detailed information, including the image's layer stack.)
Find out the difference between docker images -f "dangling=false" and docker images -f "dangling=true". Also experiment with other flags such as -q and -f.
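As a sketch of how these flags combine, dangling (untagged) image IDs can be listed quietly and removed in one pipeline; this assumes you actually have dangling images to clean up:

```shell
# -f "dangling=true" filters to untagged layers; -q prints only the image IDs
docker images -f "dangling=true" -q
# Pipe the IDs into rmi to delete them; xargs -r skips the call when the list is empty
docker images -f "dangling=true" -q | xargs -r docker rmi
```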
To get a shorter version of a long prompt such as XXXXXXX:~ user$, run export PS1="\u$" and the prompt becomes user$.
What is the output of the commands "docker system df" and "docker system prune"?
You can also delete the image using docker rmi -f ubuntu:18.04 (the -f flag removes the image forcefully).
Now you have a basic idea of Docker on a Linux system.
Docker Container
As explained earlier, a Docker container is a runtime instance of an image; in other words, containers are running instances of Docker images, largely isolated from the host environment.
A Docker container provides a way to run multiple isolated systems on a single server or host. Each container shares the kernel (and often libraries) with the host operating system. Because each container shares the OS with the host, Docker containers are very light: the size of a Docker container can be in megabytes (not GBs), and they start extremely fast, in mere seconds, compared to virtual machines, which are bigger and take minutes rather than seconds to boot.
Docker Container Commands
Just as you pulled and ran images from Docker Hub, similar commands are used for containers too. Here we will use the start/stop commands for a Docker container,
docker start/stop <container_name>
Find out the behaviour of pause and unpause by replacing start in the above command. Similarly, check top and stats. Afterwards see the output of docker attach <container_name or ID> and of docker kill/rm <container_name or ID>. Use the history command to see the image history.
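The lifecycle commands above can be sketched end-to-end with a named container; the name web_demo and the sleep workload are just illustrations:

```shell
docker run -d --name web_demo ubuntu sleep 300   # start a detached container with a long-running process
docker pause web_demo                # freeze all its processes
docker unpause web_demo              # resume them
docker top web_demo                  # show the processes inside the container
docker stats --no-stream web_demo    # one-shot resource usage snapshot
docker stop web_demo                 # graceful stop
docker rm web_demo                   # remove the stopped container
```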
Dockerfile
A Dockerfile is a text document that contains all the commands a user could run on the command line to assemble an image. Using docker build, users can rebuild images, making changes as they desire.
Until now we have been getting images from the Docker Hub repository; now we will create our own image.
Create a folder in your user's home directory and inside it open a file named Dockerfile. Add the following lines.
FROM centos
MAINTAINER Student <email>
RUN dnf clean all && dnf upgrade -y && dnf update -y && echo hostname
CMD ["echo", "Hello World..... and this is my first docker image"]
(The email in MAINTAINER is optional.)
Here we use the official CentOS image from Docker Hub as our base. We run a few commands, namely dnf clean, for testing, but we could add anything here. You could even create a build script that makes a copy of the VM you are working on.
Afterwards we use docker build and point it at our directory. Docker will look for the Dockerfile, parse it, and build a container image. Something similar should work,
docker build -t myimage1:1.0 folder_path
or docker build -t myimage1:1.0 .
(note the trailing dot, meaning the current directory).
Here -t myimage1:1.0 sets the name and tag of the image. The output should look like the screenshot below.
Docker will assign the image an ID, which you can check with docker images. To run that image you can provide the image ID,
docker run bc66c5ed778a
Then the output should look like the one provided below,
That shows the image built successfully and works correctly. For further reading, the following links may be of use.
https://github.com/wsargent/docker-cheat-sheet#dockerfile
https://docs.docker.com/engine/reference/builder/#environment-replacement
From here you could create an account on Docker Hub and push your image. This easy way of sharing services and images has made Docker a popular tool.
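A push sketch, assuming you already have a Docker Hub account; <your-dockerhub-username> is a placeholder to replace with your own:

```shell
docker login                                                     # prompts for your Docker Hub credentials
docker tag myimage1:1.0 <your-dockerhub-username>/myimage1:1.0   # retag with your namespace
docker push <your-dockerhub-username>/myimage1:1.0               # upload to Docker Hub
```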
Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. We use a YAML file to configure the application's services, and with a single command we can build and start all the containers. Compose is used in all environments: production, staging, development, testing, as well as CI (continuous integration) workflows.
Compose can be used to build multi-service environments, with multiple containers running services separately instead of one big container. This improves security and also allows simpler containers, most of which have already been uploaded to Docker Hub. A typical Compose workflow looks like the following:
- Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
- Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
- Run docker-compose up and compose starts and runs your entire app.
Basically, you define each of the containers you want to deploy, plus certain characteristics of each container deployment. Once you have a multi-container deployment description file, you can deploy the whole solution in a single action orchestrated by the docker-compose up command. Further information can be found at the following link
Installation of Docker compose on Linux Systems
Compose can be downloaded from the Compose repository releases on GitHub. Run the following command to download the stable release of Docker Compose,
curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Once downloaded, the Docker Compose binary will be available under /usr/local/bin/docker-compose. The download will take some time, as it is pulled directly from the GitHub repository. After downloading, you will also need to set the execute permission on the file. After changing the permissions, check the version or help to verify it is working.
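The permission and verification steps described above can be sketched as:

```shell
sudo chmod +x /usr/local/bin/docker-compose   # make the downloaded binary executable
docker-compose --version                      # verify the install reports a version
docker-compose --help | head -n 5             # quick look at the available commands
```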
Note: If you have problems running compose you might want to run the following commands and restart the docker service.
systemctl unmask docker.service
systemctl unmask docker.socket
Now your task is to create multiple containers. First check whether any containers are running, check which images are available, and also check the docker-compose version. After that, create a directory named "docker_compose01", go inside it, and create a file named "docker-compose.yml". In that file, write the following,
version: '3'
services:
  mysql_database:
    image: "mysql:latest"
    environment:
      - MYSQL_ROOT_PASSWORD=password
Note: mysql_database is a service, with its image defined underneath it; that image is used to build the container. As you already know, latest is a tag, and you can choose an exact tag from the Docker Hub link. Next come a few options such as ports and environment. Although many things can go here, remember that MySQL has some mandatory parameters that must be passed via the environment option, such as MYSQL_ROOT_PASSWORD=password (the password could be anything).
Next, create another service named web_test with image: "nginx:latest". These images will be downloaded if they are not available locally; then save the file.
For further clarification, this screenshot will help.
To create the containers, use the command docker-compose up. It will start downloading the MySQL and nginx images from the internet, as you can see in the following picture.
After that, open another terminal and check the Docker images and containers; you will see the images and the two running containers. Alternatively, you can create the containers with
docker-compose up -d
(the -d flag means detached mode, so the process runs in the background).
To terminate or stop the containers, use docker-compose down, then check the containers and images again; you will find that the containers are gone, but the images are still there.
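A quick way to inspect a running Compose stack from the project directory (docker_compose01 in this lab) is sketched below:

```shell
cd docker_compose01
docker-compose ps                     # list the services with their state and ports
docker-compose logs mysql_database    # show the logs of one service
docker-compose down                   # stop and remove the containers
```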
Congratulations, this task is finished: you have created multiple containers and then terminated them successfully. Note: Microservices provide scaling facilities; for instance, if you want to scale up a particular service by running multiple database containers, you can use the following command,
docker-compose up -d --scale mysql_database=4
As you can see in the following picture, the --scale flag increases the service (mysql_database) to 4 instances.
For practice, build a Docker image, run the container, and deploy an Apache web server on Ubuntu, following the naming convention in the picture.
Docker Volume
Docker volumes are the intermediate level. By default all files are stored inside the Docker container, and when the container is terminated everything vanishes. If you want to store some data persistently, there are different ways to do that: one is a volume, the other is a bind mount. Bind mounts have limited functionality compared to volumes, and Docker volumes have several advantages over bind mounts:
- Volumes are easier to back up or migrate than bind mounts.
- You can manage volumes using Docker CLI commands or the Docker API.
- Volumes work on both Linux and Windows containers.
- Volumes can be more safely shared among multiple containers.
- Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.
- New volumes can have their content pre-populated by a container.
Your task is to understand and explore the following commands:
docker volume
docker volume create vol_name
Implement the following statements,
Check the list and see the name of the volume.
Also check which images are available.
Create a container for nginx, mounting the nginx directory with this volume.
Hint: Set location like /usr/share/nginx/html.
Check the docker container again.
Check which port is mapped to the host operating system.
Hint: Check the ip address.
Create a new container and map the same Docker volume to a directory in it.
Now check the containers again and you will find two or more containers.
Stop and terminate this container, then create a new container using the same Docker volume.
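The volume tasks above can be sketched as follows; vol_name matches the lab, while the container names web1/web2 and the host ports 8080/8081 are arbitrary illustrative choices:

```shell
docker volume create vol_name
docker volume ls                              # confirm the volume exists
# Mount the volume over nginx's document root and publish container port 80 on host port 8080
docker run -d --name web1 -p 8080:80 -v vol_name:/usr/share/nginx/html nginx
docker ps                                     # the container should be listed as running
# A second container can share the same volume, so both serve the same files
docker run -d --name web2 -p 8081:80 -v vol_name:/usr/share/nginx/html nginx
```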
Next, bind mounts are discussed.
Bind Mount
Bind mounts have been around since the early days of Docker. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine. By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.
The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist. Bind mounts are very performant, but they rely on the host machine’s filesystem having a specific directory structure available.
Implement the following statements,
Create a directory for mounting. Hint: /opt/nginx/html (If that directory does not exist, which flag will create that particular directory on your host operating system?)
Create a container, mounting this directory onto the nginx directory mentioned above.
Check the container; what are the IP address and port of the host machine?
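A bind-mount sketch matching the hint above; the container name web_bind and host port 8082 are illustrative choices (note that -v creates the host directory on demand if it is missing, but creating it explicitly lets you put content in it first):

```shell
sudo mkdir -p /opt/nginx/html                 # -p also creates missing parent directories
echo "hello from the host" | sudo tee /opt/nginx/html/index.html
# Bind-mount the host directory over nginx's document root
docker run -d --name web_bind -p 8082:80 -v /opt/nginx/html:/usr/share/nginx/html nginx
curl http://localhost:8082/                   # the served content comes from the host directory
```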
Congratulations, you are sharing file between the host machine and the container successfully.