Docker on CentOS 7

What is Docker?

  • Docker is an open platform for developing, shipping, and running applications. 
  • Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. 
  • Docker manages the lifecycle of the container.
  • The use of containers to deploy applications is called containerization.

History

  • Built on core Linux kernel components and first released in 2013.
  • It began as an internal project at a platform-as-a-service company called dotCloud, which was later renamed Docker.

Why Docker Containers?

  • Flexible: Even the most complex applications can be containerized.
  • Lightweight: Containers leverage and share the host kernel, making them much more efficient in terms of system resources than virtual machines.
  • Portable: You can build locally, deploy to the cloud, and run anywhere.
  • Loosely coupled: Containers are highly self-sufficient and encapsulated, allowing you to replace or upgrade one without disrupting others.
  • Scalable: You can increase and automatically distribute container replicas across a datacenter.
  • Secure: Containers apply aggressive constraints and isolation to processes without any configuration required on the part of the user.

How Does Docker Work?

Docker Editions

  • Docker Community Edition (CE) is ideal for individual developers and small teams looking to get started with Docker and experiment with container-based apps.
  • Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship, and run business critical applications in production at scale.

Time-based Release Schedule

  • Starting with Docker 18.03, Docker uses a time-based release schedule.
  • Docker CE Edge  - Monthly.
  • Docker CE Stable - Quarterly, with patch releases as needed.
  • Docker EE - Twice a year, with patch releases as needed.

Understanding the Docker Internals

Namespace

  • Docker makes use of kernel namespaces to provide the isolated workspace called the container.
  • Docker creates a set of namespaces for that container. 
    • PID namespace for process isolation.
    • NET namespace for managing network interfaces.
    • IPC namespace for managing access to IPC resources.
    • MNT namespace for managing filesystem mount points.
    • UTS namespace for isolating the hostname and domain name (UTS: UNIX Time-Sharing system).
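
As a quick, hedged illustration (it assumes a running container named web01 and the lsns tool from util-linux on the host), you can list the namespaces Docker created for a container's main process:

# lsns -p $(docker container inspect -f '{{.State.Pid}}' web01)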

CGroup

  • Resource allocation and isolation
  • Enforce limits and constraints
    • Memory cgroup - manages accounting, limits and notifications.
    • CPU cgroup - manages user/system CPU time and usage.
    • BlkIO cgroup - measures and limits the amount of block I/O per group.
    • net_cls and net_prio cgroups - tag network traffic for traffic control and prioritization.
    • Devices cgroup - controls read/write access to devices.
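
A minimal sketch of cgroups in action (the container name and limit values are illustrative, not part of the course labs): start a container with memory and CPU constraints, then read the memory limit back from its configuration:

# docker container run --name limited -dit -m 256m --cpus 0.5 centos
# docker container inspect -f 'Memory limit: {{.HostConfig.Memory}}' limited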

Union Filesystem

  • Union file systems operate by creating layers, making them very lightweight and fast.
  • Docker Engine uses UnionFS to provide the building blocks for containers.
  • Docker Engine can use multiple UnionFS variants, including:
    •  AUFS, btrfs, vfs, and devicemapper.
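
To see the layering in practice, a hedged example (assuming the nginx image is present locally or can be pulled) shows each filesystem layer that makes up an image:

# docker image pull nginx
# docker image history nginx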

Container Format

  • Docker Engine combines the namespaces, control groups and UnionFS into a wrapper called a container format. 
  • The default container format is libcontainer.

Security

  • Docker Engine makes use of AppArmor, Seccomp and Capabilities, which are kernel features, for security purposes.
    • AppArmor - restricts a program's capabilities with per-program profiles.
    • Seccomp - filters the syscalls a program is allowed to issue.
    • Capabilities - split root privileges into distinct units that can be granted to or dropped from a container.
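
A small, hedged demonstration of capabilities (the CHOWN capability and the command are chosen for illustration): dropping a capability makes the corresponding privileged operation fail inside the container, while the same command without --cap-drop succeeds.

# docker container run --rm --cap-drop CHOWN centos chown nobody /tmp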

Drawbacks of Docker

  • Containers don't run at bare-metal speeds. 
  • All containers share the host system's kernel.
  • Windows and Linux containers cannot run on the same host.
  • Data persistence is complicated.
  • Not all applications benefit from containers. 
  • Managing a large number of containers is challenging – especially when it comes to clustering containers. 
  • Solutions:
    • Docker Swarm
    • Kubernetes
    • Rancher
    • OpenShift

Docker Architecture

  • Docker uses a Client-Server Architecture.
  • The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing Docker containers.
  • The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.
  • The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
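
Since the daemon exposes a REST API on the UNIX socket, you can talk to it directly. A hedged sketch (it assumes a curl build with --unix-socket support, curl 7.40 or later, which is newer than the stock CentOS 7 curl); this returns the same information the docker version command shows:

# curl --unix-socket /var/run/docker.sock http://localhost/version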

Docker Components

  •  Docker Daemon or Server/Engine
  •  Docker Client
  •  Docker Registry (Ex: Docker Hub)
  •  Docker Objects
    •  Images
    •  Containers

Docker Daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as:
  •  Images
  •  Containers
  •  Networks
  •  Volumes

Docker Client

  • The Docker client (the docker command) is the primary tool to interact with Docker.
  • When you use commands such as docker container run, the client sends them to the Docker daemon (dockerd), which carries them out.
  • The docker command uses the Docker API.

Docker Registry

  • A Docker registry stores Docker images.
  • Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default.
  • You can even run your own Private Registry.
  • If you use Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR).
  • When you use the docker image pull or docker container run commands, the required images are pulled from your configured registry.

Docker Objects

  • Images - An image is a read-only template with instructions for creating a Docker container.
  • Containers - A container is a runnable instance of an image.

Lab 1 – Installing and Configuring Docker

Docker is an open platform for developing, shipping, and running applications.

Docker helps the user to ship code faster, test faster, deploy faster and shorten the cycle between writing code and running code.

In this lab, you will install, configure, and enable the Docker daemon on one of the servers and pull a simple image to get familiar with Docker commands.

Environment Setup

1.1 Ensure you have logged in as the root user with the password linux on the master server.

1.2 Set hostname for the server

# hostnamectl set-hostname master --static

# hostnamectl

1.3 Verify the server IP Address

# ip a

1.4 Make sure that SELinux is disabled.

# sed -i 's/enforcing/disabled/g' /etc/selinux/config

1.5 Disable Firewall for this Lab environment.

# systemctl disable --now firewalld

1.6 Enable IP forwarding feature:
IP forwarding is the ability of an operating system to accept an incoming network packet on one interface, recognize that it is not meant for the system itself, and forward it on to another network accordingly.

# cat >> /etc/sysctl.conf <<EOF
net.ipv4.ip_forward = 1
EOF

# sysctl -p

# sysctl -a | grep net.ipv4.ip_forward

1.7 Update the system with the latest packages and reboot the host to take effect.

# yum update -y

# reboot

1.8 Check the current kernel version, SELinux status, and firewall status by running the commands below.
Note: Docker requires a 64-bit installation and a kernel version of at least 3.10.

# uname -r

# sestatus

# systemctl status firewalld

2 Install Docker

2.1 Install the Docker package.

# wget https://download.docker.com/linux/centos/docker-ce.repo
# mv docker-ce.repo /etc/yum.repos.d/
# yum -y install docker-ce

2.2 Once docker is installed, you will need to start the Docker daemon. Run the below command to enable and start the docker daemon on boot.

# systemctl enable --now docker

2.3 Verify the Status of the Docker.

# systemctl status docker

Verify Docker

2.4 Verify Docker is installed correctly by running a test image in a container

# docker container run hello-world

2.5 The “docker container ls” command only shows running containers by default. To see all containers, use the -a flag.

# docker container ls -a

2.6 As you noticed in the previous steps, Docker was “Unable to find image ‘hello-world:latest’ locally”, so it fetched the image from the default public registry (Docker Hub) and saved it locally.
The “docker image ls” command lists all the Docker images stored locally.

# docker image ls

3. Clean up

3.1 To remove all the “containers” run the below commands:

# docker container rm `docker container ls -a -q` -f

3.2 To remove all the “Images” run the below commands:

# docker image rm `docker images -q` -f

# docker container ls

3.3 Verify that Docker images are removed:

# docker image ls

3.4 Run the below command for more information on a command:

Information available like options and Management commands.

# docker --help

Lab 2 – Managing Docker Containers

Introduction
Docker provides the ability to package and run an application in an isolated environment called a container.

In this lab, you will cover the following:

Objective:
  • Create and manage the lifecycle of a container
  • Clean up

1 Ensure that you have logged in as the root user with the password linux.

1.1 Let us list the containers.

# docker container ls

1.2 Let us create a container.

# docker container create nginx

1.3 Let us list the container created.

# docker container ls

Note: ls only displays the containers in the up state

# docker container ls -a

Note: By default, Docker assigns a random name composed of two dictionary words separated by a ‘_’.

1.4 Let us name the container.


# docker container create --name web01 nginx

Note: As the image was already pulled in the previous steps, it was not pulled again.

1.5 Let us list the container created.

# docker container ls -a

1.6 Let us start the container created.

# docker container start web01

1.7 Let us list the container created.

# docker container ls

1.8 Let us create a new container using run option.

# docker container run --name server01 centos

Note: docker run command is the combination of docker create + docker start.

1.9 Let us list the container created.

# docker container ls -a

Note: The container is in exited status: containers run to completion, and the centos image's default command exits immediately without an interactive terminal.

1.10 Let us create a new container with interactive terminal (-i -t) option.

# docker container run --name server02 -i -t centos
# ps
# exit

Note: The container is created and interactive terminal is displayed. You can run commands inside the container and exit out.

1.11 Let us list the container.

# docker container ls -a

1.12 Let us create a new container with detached interactive terminal (-i -t) option.

# docker container run --name server03 -dit centos

Note: The container was created in the background, as we used the -d (detach) option.

1.13 Let us list the container.

# docker container ls -a

1.14 Let us now attach the container server03, by executing the below command.

# docker container attach server03
# ps
Press ctrl+p,ctrl+q

1.15 Let us list the container.

# docker container ls

Note: The container continues to run as we had used ctrl+p ctrl+q to exit from the container.
1.16 Let us run a command inside the container using exec option.

# docker container exec server03 cat /etc/resolv.conf

1.17 Let us rename the existing container.

# docker container rename server03 server003

1.18 Let us list the container.

# docker container ls

1.19 Let us pause the existing container.

# docker container pause server003

1.20 Let us list the container.

# docker container ls

1.21 Let us unpause the existing container.

# docker container unpause server003

1.22 Let us list the container.

# docker container ls

1.23 Let us stop the existing container.

# docker container stop server003

1.24 Let us list the container.

# docker container ls

1.25 Let us restart the existing container.

# docker container restart server003

1.26 Let us list the container.

# docker container ls

1.27 Let us verify the top running process inside the container.

# docker container top server003

1.28 Let us verify the stats of the running containers.

# docker container stats

Note: Press ctrl+c to exit from the output screen, or run the below command to disable streaming stats.

# docker container stats --no-stream

1.29 Let us verify the logs of the containers.

# docker container logs server003 --timestamps

1.30 Let us publish the container port on a custom host port.

# docker container run --name web02 -dit -p 8080:80 nginx

1.31 Let us list the container.

# docker container ls

Note: The container listens on port 80, which is published on host port 8080.

1.32 Let us publish all exposed container ports on random host ports.

# docker container run --name web03 -dit -P nginx

1.33 Let us list the container.

# docker container ls

1.34 Let us inspect the container.

# docker container inspect web02 | grep -e "HostPort" -e "IPAddress"

1.35 Let us access the webserver by using the container IP (use the IPAddress value from the inspect output above; the example assumes 172.17.0.4).

# curl 172.17.0.4

1.36 Let us access the webserver by using the docker host ip and port exposed.

# curl 192.168.100.10:8080

1.37 Let us access the webserver by using the Docker host IP and the random port assigned by -P (check docker container ls for the actual port; the example assumes 32769).

# curl 192.168.100.10:32769
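
Rather than guessing, a hedged way to look up the random host port that -P assigned to web03 from the step above (the port number in the previous command is just an example):

# docker container port web03 80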

1.38 Let us check the space consumed by the container.

# docker container ls -sa

1.39 Let us kill the container.

# docker container kill web02

1.40 Let us list the container.

# docker container ls -a

1.41 Let us remove all the stopped containers.

# docker container prune

1.42 Let us list the container.

# docker container ls -a

1.43 Let us remove the server003 container gracefully.

# docker container stop server003
# docker container rm server003

1.44 Let us cleanup, by removing the containers forcefully.

# docker container rm web01 -f

1.45 Let us list the container.

# docker container ls -a

1.46 Let us also remove the images downloaded.

# docker image rm nginx centos

1.47 Let us cleanup.

# docker container rm `docker container ls -a -q` -f
# docker image rm `docker images -q` -f

Docker Registry

What is Docker Registry?

  • The Registry is a stateless, highly scalable server side application that stores and lets you distribute Docker images.
  • It serves as a target for your docker image push & docker image pull commands. 

What is the use-case?

You should use the Registry if you want to:
  • Tightly control where your images are being stored
  • Fully own your images distribution pipeline
  • Integrate image storage and distribution tightly into your in-house development workflow

Terminology - Docker Registries

Image : 
  • An image is essentially a template for Docker containers. It consists of a manifest and an associated series of layers.
Layer : 
  • A layer represents a filesystem difference.
  •  An image’s layers are combined together into a single image that forms the base for a container’s filesystem.
Registry :
  • A registry is a content delivery and storage system for named Docker images. 
Repository : 
  • A repository is simply a collection of different versions for a single Docker Image 
  • Similar to a git repository.

Run the local Registry

  • The registry Docker image is configured to listen on port 5000 in the container, so we will publish it on host port 5000 as well.
  • You can launch the registry via the following command:
# docker container run -d -p 5000:5000 --name localregistry registry

Basic Commands

  • Tag the image so that it points to your registry
# docker image tag ubuntu localhost:5000/myfirstimage
  • Push it
# docker image push localhost:5000/myfirstimage
  • Pull it back
# docker image pull localhost:5000/myfirstimage
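
To confirm the push landed in the local registry, a hedged check using the registry's HTTP API (the v2 endpoints below are part of the standard registry:2 image):

# curl http://localhost:5000/v2/_catalog
# curl http://localhost:5000/v2/myfirstimage/tags/list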

Remove Local Registry

  • Stop Registry and Remove all data
# docker container stop localregistry
# docker container rm -v localregistry

Docker Hub

Docker Hub is a cloud-based registry service which allows you to:
  • Link to code repositories
  • Build your images and Test them
  • Stores manually pushed images
Docker Hub is the default registry used when you docker image push or docker image pull.

Features

Image Repositories: Find, manage, push and pull images from community and official repositories
Automated Builds: Automatically create new images when you make changes to a source code repository
Webhooks: A feature of Automated Builds, Webhooks let you trigger actions after a successful push to a repository
Organizations: Create work groups to manage access to image repositories

Pushing & Pulling to Docker Hub

Getting an image to Docker Hub
  • Log in on https://hub.docker.com/
  • Click on Create Repository.
  • Choose a name and a description for your repository and click Create.
  • Log into the Docker Hub from the command line
# docker login --username=yourhubusername

Lab: Docker Hub

Introduction
In this lab, you will learn how to sign up with Docker Hub, tag an image, and push it to your repository.
Objective:
  • Sign up with Docker Hub
  • Tag an image
  • Push an image to Docker Hub
  • Clean up

1 Ensure that you have logged-in as root user with password as linux.

1.1 Creating a Docker Hub Account
Open a web browser, navigate to the URL below, and complete the simple sign-up on Docker Hub.

https://hub.docker.com/

Once you have signed up, verify your email address and log in to your Docker Hub account.

1.2 Let us login to Docker Hub, by executing the below command.

# docker login

1.3 Let us pull an image from Docker Hub, by executing the below command.

# docker image pull alpine
# docker image ls

1.4 Let us tag an image, by executing the below command.

# docker image tag alpine eyesoncloud/alpine:ver1
# docker image ls

1.5 Let us push an image to Docker Hub, by executing the below command.

# docker image push eyesoncloud/alpine:ver1

1.6 Let us verify the image push to Docker Hub, by logging into Docker Hub account.

1.7 Let us remove the local image, by executing the below command.

# docker image rm eyesoncloud/alpine:ver1

1.8 Let us pull the image from our repository, by executing the below command.

# docker image pull eyesoncloud/alpine:ver1

Cleanup

2.1 Let us cleanup, by executing the below command.

# docker container rm `docker container ls -a -q` -f
# docker image rm `docker images -q` -f

Lab – Configuring Local Registry

Introduction
Docker provides a public registry called Docker Hub to store Docker images. While Docker lets users upload their images to Docker Hub for free, anything uploaded there is also public. This might not be the best option for every project.
This guide shows how to set up your own private Docker registry. By the end of this lab you will be able to push a custom Docker image to the private registry and pull the image back from a host.

1.1 Run a local registry
Use a command like the following to start the registry container:

# docker container run -d -p 5000:5000 --restart=always --name registry registry:2

# docker container ls

1.2 Copy an image from Docker Hub to your registry
You can pull an image from Docker Hub and push it to your registry. The following example pulls the ubuntu:16.04 image from Docker Hub and re-tags it as my-ubuntu, then pushes it to the local registry. Finally, the ubuntu:16.04 and my-ubuntu images are deleted locally and the my-ubuntu image is pulled from the local registry.

1.3 Pull the ubuntu:16.04 image from Docker Hub.

# docker image pull ubuntu:16.04

# docker image ls

1.4 Tag the image as localhost:5000/my-ubuntu. This creates an additional tag for the existing image. When the first part of the tag is a hostname and port, Docker interprets this as the location of a registry, when pushing.

# docker image tag ubuntu:16.04 localhost:5000/my-ubuntu
# docker image ls

1.5 Push the image to the local registry running at localhost:5000

# docker image push localhost:5000/my-ubuntu
# docker image ls

1.6 Remove the locally-cached ubuntu:16.04 and localhost:5000/my-ubuntu images, so that you can test pulling the image from your registry. This does not remove the localhost:5000/my-ubuntu image from your registry.

# docker image rm localhost:5000/my-ubuntu docker.io/ubuntu:16.04
# docker image ls

1.7 Pull the localhost:5000/my-ubuntu image from your local registry.

# docker image pull localhost:5000/my-ubuntu
# docker image ls

2 Cleanup

2.1 To remove all the images run the below commands:

# docker image rm `docker images -q` -f

Note: Ignore the error if the in-use registry image is not deleted.

2.2 Verify that docker images are removed:

# docker images

Lab - Building an Image using Dockerfile

Introduction
A Dockerfile defines the environment that goes inside your container.
In this lab, you will learn how to create a simple Dockerfile, and you can expect the build of the app defined in this Dockerfile to behave the same wherever it runs.
Objectives:
1. Building an Image from a Dockerfile
2. Run a Container using a newly built image

1.1 Login as “root” user.

1.2 Create a directory for the build context and change into it.

# mkdir example && cd example

1.3 Create a Dockerfile.

# cat > Dockerfile <<EOF
FROM centos
RUN yum -y install epel-release
RUN yum -y update
RUN yum -y install nginx
RUN mkdir -p /data/storage
WORKDIR /data/storage
ADD index.html /usr/share/nginx/html/index.html
EXPOSE 80/tcp
CMD ["nginx", "-g daemon off;"]
EOF

# cat > index.html << EOF
Welcome to Docker Learning
EOF

1.4 Use the docker image build command to build an image from the Dockerfile.

# docker image build -t centos:nginx .

1.5 Let us list the image that was built.

# docker image ls

1.6 Let us run a container using a newly built image.

# docker container run --name web-nginx -dit centos:nginx

1.7 Let us inspect the container to capture the ip address

# docker container inspect web-nginx | grep IPAddress

1.8 Let us access the containerized webserver, by executing the below command (use the IP address from the inspect output; the example assumes 172.17.0.2).

# curl 172.17.0.2

2 Cleanup

2.1 To remove all the containers run the below commands.

# docker container rm `docker container ls -a -q` -f

2.2 To remove all the images run the below commands.

# docker image rm `docker image ls -q` -f

2.3 Verify that the containers are removed:

# docker container ls -a

2.4 Verify that docker images are removed:

# docker image ls

Docker Volume Service

Storage for Containers

  • Containers often require storage for capturing/saving data beyond the container life cycle.
  • The Docker volume service is the best option to keep data for future use.

Manage data in Docker

  • By default all files created inside a container are stored on a writable container layer.
  • The data doesn’t persist when that container no longer exists.
  • A container’s writable layer is tightly coupled to the host machine where the container is running. 
  • Writing into a container’s writable layer requires a storage driver to manage the filesystem.
  • The storage driver provides a union filesystem, using the Linux kernel. 
  • This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.
  • Docker has two options for persisting data
    • Volumes
    • Bind Mounts

Docker Volumes – Life Cycle

  • Volumes are often a better choice than persisting data in a container’s writable layer
  • Volume does not increase the size of the containers using it
  • Volume’s contents exist outside the lifecycle of a given container.
Volumes
  • Stored in a part of the host filesystem and managed by Docker
  • Location - /var/lib/docker/volumes/
  • Non-Docker processes should not modify this part of the filesystem
  • Volumes are the best way to persist data in Docker
Bind mounts
  • Stored anywhere on the host system
  • Non-Docker processes on the Docker host or a Docker container can modify them at any time
tmpfs
  • Stored in the host system’s memory only, and are never written to the host system’s filesystem.

Volumes

  • Created and Managed by Docker.
  • Run docker volume create command to create volume.
  • To remove unused volumes use docker volume prune.
  • When you create a volume, it may be named or anonymous.
  • Volumes can be more safely shared among multiple containers.
  • Volumes are easier to Backup or Migrate than the bind mounts.
  • When no running container is using a volume, the volume is still available to Docker and is not removed automatically.
  • Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt the contents of volumes, or to add other functionality.

Bind Mounts

  • Available since the early days of Docker.
  • Bind mounts have limited functionality compared to volumes.
  • When you use a bind mount, a file or directory on the host machine is mounted into a container.
  • The file or directory is referenced by its full path on the host machine. 
  • The file or directory does not need to exist on the Docker host already.
  • It is created on demand if it does not yet exist.
  • Bind mounts are very performant, but they rely on the host machine’s filesystem having a specific directory structure available.
  • If you are developing new Docker applications, consider using named volumes instead. 
  • You can’t use Docker CLI commands to directly manage bind mounts.

tmpfs Mounts

  • A tmpfs mount is not persisted on disk, either on the Docker host or within a container.
  • It can be used by a container during the lifetime of the container, to store non-persistent state or sensitive information. 
  • For instance, internally, swarm services use tmpfs mounts to mount secrets into a service’s containers.
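
A minimal sketch of a tmpfs mount (the container name, mount point, and size are illustrative): files written under /scratch live only in memory and vanish when the container stops.

# docker container run --name tmpdemo -dit --tmpfs /scratch:rw,size=64m centos
# docker container exec tmpdemo sh -c 'echo scratch-data > /scratch/file && df -h /scratch'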

Use cases for volumes

  • When you want to store your container’s data on a remote host or a cloud provider, rather than locally.
  • Sharing data among multiple running containers. 
    • Multiple containers can mount the same volume simultaneously, either read-write or read-only.
    • Volumes are only removed when you explicitly remove them.
  • When the Docker host is not guaranteed to have a given directory or file structure.
    • Volumes help you decouple the configuration of the Docker host from the container runtime.
  • When you need to back up, restore, or migrate data from one Docker host to another, volumes are a better choice. 
    • You can stop containers using the volume, then back up the volume’s directory (such as /var/lib/docker/volumes/<volume-name>).

Use cases for bind mounts

  • Sharing configuration files from the host machine to containers.
    • This is how Docker provides DNS resolution to containers by default, by mounting /etc/resolv.conf from the host machine into each container.
  • Sharing source code or build artifacts between a development environment on the Docker host and a container.
    • For instance, you may mount a Maven target/ directory into a container, and each time you build the Maven project on the Docker host, the container gets access to the rebuilt artifacts.
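
A short, hedged bind-mount example (the /tmp/src path and file are illustrative): any file the host writes into the mounted directory is immediately visible inside the container.

# mkdir -p /tmp/src && echo "hello from host" > /tmp/src/index.html
# docker container run --rm -v /tmp/src:/data centos cat /data/index.html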

Use cases for tmpfs mounts

  • tmpfs mounts are best used for cases when you do not want the data to persist either on the host machine or within the container.
  • This may be for security reasons or to protect the performance of the container when your application needs to write a large volume of non-persistent state data.

Container and Layers

  • The major difference between a container and an image is the top writable layer.
  • All writes to the container that add new or modify existing data are stored in this writable layer.
  • When the container is deleted, the writable layer is also deleted.
  • The underlying image remains unchanged.
  • Because each container has its own writable container layer, and all changes are stored in this container layer, multiple containers can share access to the same underlying image and yet have their own data state.
  • Docker uses storage drivers to manage the contents of the image layers and the writable container layer. 
  • Each storage driver handles the implementation differently, but all drivers use stackable image layers and the copy-on-write (CoW) strategy.

Container size on disk

  • To view the approximate size of a running container
  • docker container ls -s
    • size: the amount of data (on disk) that is used for the writable layer of each container.
    • virtual size: the amount of data used for the read-only image data used by the container plus the container’s writable layer size.

The copy-on-write (CoW) strategy

  • Copy-on-write is a strategy of sharing and copying files for maximum efficiency.

Docker storage drivers

  • overlay2 - Preferred storage driver, for all currently supported Linux distributions, and requires no extra configuration.
  • aufs - Was the preferred storage driver for Docker 18.06 and older.
  • devicemapper - Was the recommended storage driver for CentOS and RHEL, as their kernel version did not support overlay2.
  • btrfs and zfs – Supports advanced options, such as creating “snapshots”, but require more maintenance and setup. 
  • vfs - Intended for testing purposes, and for situations where no copy-on-write filesystem can be used.

Check your current storage driver

  • Run the docker info command.
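
For example (the grep filter is just a convenience, not a required flag):

# docker info | grep -i 'storage driver'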

Volume Plug-ins

  • Docker has support for plugins to interact with 3rd party storage solutions.
  • This allows the Docker to take advantage of the features of these storage solutions.
  • Starting with version 1.8, Docker added support for 3rd party plugins.
  • This enables the engine deployments to be integrated with external storage systems and volumes to persist beyond the lifetime of a single engine host.

Volume Plug-ins

  • Install appropriate plugin
# docker plugin install --grant-all-permissions <plugin>
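
As a hedged example, the vieux/sshfs plugin from the Docker documentation lets a volume live on a remote host over SSH (the user, host, and path below are placeholders):

# docker plugin install --grant-all-permissions vieux/sshfs
# docker volume create -d vieux/sshfs -o sshcmd=user@remotehost:/remote/path sshvolume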

Use-cases of read-only volume


  • Multiple containers can mount the same volume, and it can be mounted read-write for some of them and read-only for others, at the same time.
  • Container only needs read access to the data

# docker container run --name nginxtest -dit -v nginx-vol:/usr/share/nginx/html:ro nginx:latest

  • Use docker container inspect nginxtest to verify the read-only mount.

"Mounts": [
"Type": "volume", 
"Name": "nginx-vol", 
"Source": "/var/lib/docker/volumes/nginx-vol/_data", 
"Destination": "/usr/share/nginx/html",
"Driver": "local", 
"Mode": "", 
"RW": false,
"Propagation": "" 
}
  ],

Lab - Docker Volume Service

Introduction
Volumes are the preferred mechanism for persisting the data in Docker.

In this lab, you will cover the following:

Objective:
  • Create and manage volumes
  • Start a container with a volume
  • Persist a volume using a container
  • Use a read-only volume
  • Backup, restore, or migrate data volumes
  • Backup a container
  • Restore container from backup
  • Remove all unused volumes

Ensure that you have logged in as the root user with the password linux.

1. Create and manage volumes

1.1 Let us list the volumes, by executing the below command.

# docker volume ls

1.2 Let us create a volume, by executing the below command.

# docker volume create my-vol

1.3 Let us List volumes, by executing the below command.

# docker volume ls

1.4 Let us inspect the volume my-vol, by executing the below command.

# docker volume inspect my-vol

1.5 Let us remove the volume my-vol, by executing the below command.

# docker volume rm my-vol

2. Start a container with a volume

If you start a container with a volume that does not yet exist, Docker creates the volume for you. The following step mounts the volume myvol2 into /app/ in the container.

 # docker container run --name devtest -dit -v myvol2:/app nginx:latest

2.1 Let us inspect the container devtest to verify that the volume was created and mounted correctly.

# docker inspect devtest | grep "Mounts" -A 10

2.2 Let us stop the container.

# docker container stop devtest

2.3 Let us remove the container.

# docker container rm devtest

2.4 Let us remove the volume.

# docker volume rm myvol2

3. Persist a volume using a container

3.1 This step starts an nginx container and populates the new volume nginx-vol with the contents of the container’s /usr/share/nginx/html directory, which is where Nginx stores its default HTML content.

# docker container run --name nginxtest -dit -v nginx-vol:/usr/share/nginx/html nginx:latest
# cat > /var/lib/docker/volumes/nginx-vol/_data/index.html << EOF
"HELLO FROM DOCKER"
EOF

3.2 Let us find the IP address of the container.

# docker container inspect nginxtest | grep -i ipaddress

3.3 Let us access the web container to verify the data (use the IP address from the previous step; the example assumes 172.17.0.2).

# curl 172.17.0.2

3.4 Let us stop the container.

# docker container stop nginxtest

3.5 Let us remove the container.

# docker container rm nginxtest

3.6 Let us verify that the data persists even after the container is deleted.

# cat /var/lib/docker/volumes/nginx-vol/_data/index.html

3.7 Let us remove the volume.

# docker volume rm nginx-vol

4. Use a read-only volume

4.1 In this step, let us mount a volume as read-only.
 
# docker container run --name nginxtest -dit -v nginx-vol:/usr/share/nginx/html:ro nginx:latest

4.2 Let us inspect the container nginxtest to verify that the volume was created and mounted read-only.

# docker container inspect nginxtest | grep "Mounts" -A 10

4.3 Let us verify that the mount is read-only by attempting to create a file; the command should fail with a “Read-only file system” error.

# docker container exec nginxtest touch /usr/share/nginx/html/testfile

4.4 Let us stop the container.

# docker container stop nginxtest

4.5 Let us remove the container.

# docker container rm nginxtest

4.6 Let us remove the volume.

# docker volume rm nginx-vol

5. Backup, restore, or migrate data volumes

Volumes are useful for backups, restores, and migrations. Use the --volumes-from flag to create a new container that mounts that volume.
Backup a container
5.1 Let us create a new container named dbstore:

 # docker container run --name dbstore -dit -v dbdata:/dbdata ubuntu

5.2 Let us populate some files inside the directory which is mounted to dbstore container.

# hostnamectl | tee /var/lib/docker/volumes/dbdata/_data/host-data
# cat /var/lib/docker/volumes/dbdata/_data/host-data

5.3 Let us launch a new container, mount the volume from the dbstore container, and mount the current host directory as /backup.

Pass a command that tars the contents of the dbdata volume to a backup.tar file inside the /backup directory.

# docker container run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

Let us verify the tar file on the host system.

# ls -lh backup.tar
# docker container ls -a

5.4 Restore container from backup

With the backup just created, you can restore it to the same container, or to another container made elsewhere.
For example, create a new container named dbstore2:

 # docker container run --name dbstore2 -dit -v dbdata:/dbdata ubuntu

5.5 Then un-tar the backup file into the new container's data volume:

 # docker container run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar"

5.6 Let us verify the data by running the below command, to confirm the restore is successful:

 # docker container exec dbstore2 cat /dbdata/host-data

Cleanup

6.1 Let us clean up, by executing the below commands.

# docker container rm `docker container ls -a -q` -f

# docker image rm `docker image ls -q` -f

# docker volume prune -f


Docker Networking

Networking overview

  • Docker containers use the Linux bridge feature by default to communicate.
  • Whether your Docker hosts run Linux, Windows, or a mix of the two, you can use Docker to manage them in a platform-agnostic way.

CNM – Container Network Model

  • CNM is an open-source networking specification for containers.
  • CNM defines Networks, Endpoints, Sandboxes.
  • Libnetwork is Docker’s implementation of CNM.
  • Libnetwork is extensible via pluggable drivers which allows various network technologies.

CNM built on 5 Objects

The CNM is built mainly on five objects:

Network Controller

  • Provides the entry-point into Libnetwork that exposes simple APIs for Docker Engine to allocate and manage networks. 

Driver

  • Owns the Network
  • Responsible for managing the Network
  • Supports multiple drivers to satisfy various use-cases and deployment scenarios.

Network

  • Provides connectivity between a group of endpoints that belong to the same network and isolate from the rest.
  • Whenever a network is created or updated, the corresponding Driver will be notified of the event.

Endpoint

  • Provides the connectivity for container in a Network.

Sandbox

  • Created when users request to create an endpoint on a network.
  • A Sandbox can have multiple endpoints attached to different networks representing container’s network configuration such as:
    • IP-Address
    • MAC-Address
    • Routes
    • DNS

Network Drivers

  • Bridge networks - Best when you need multiple containers to communicate on the same Docker host.
  • Host: Uses the host’s networking directly. 
  • None: Disables all networking. 
  • Overlay networks - Best when you need containers running on different Docker hosts to communicate, or when multiple applications work together using swarm services.
  • Macvlan networks - Best when you are migrating from a VM setup or need your containers to look like physical hosts on your network, each with a unique MAC address.
  • Third-party network plugins - allow you to integrate Docker with specialized network stacks.
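
As a quick, hedged illustration of driver selection (the network name and subnet are illustrative), you can pick the driver and addressing when creating a network:

# docker network create -d bridge --subnet 172.25.0.0/16 demo-net
# docker network inspect demo-net | grep -i subnet
# docker network rm demo-net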

Lab: Docker Networking

Introduction

Docker abstracts the underlying networking of the host to provide networking capability to running containers. Containers running on a Docker host can communicate with each other with the help of Docker networks.

In this lab, you will cover the following:

Objective:

  • Use of the default bridge network
  • Use of the user-defined bridge networks 

1. Ensure that you have logged-in as root user.

1.1 Let us list the current networks before you do anything else.

# docker network ls

1.2 Let us inspect the default bridge network by using the below command.

# docker network inspect bridge

1.3 Let us create two containers to verify the default bridge network.

# docker container run --name demo-1 -dit centos

# docker container run --name demo-2 -dit centos

1.4 Let us inspect the bridge network to see which containers are connected to it.

# docker network inspect bridge | grep Containers -A 15

1.5 Using the docker attach command, enter the demo-1 container. From within demo-1, make sure you can reach the other container and connect to the internet by pinging google.com. The -c 2 flag limits the command to two ping attempts.

# docker container attach demo-1

# ip a s

# ping -c 2 172.17.0.2

# ping -c 2 172.17.0.3

# curl ifconfig.io

# ping -c 2 www.google.com

Note: Press ctrl+p,ctrl+q to come out of the container gracefully

1.6 Using the docker attach command, enter the demo-2 container. From within demo-2, make sure you can reach the other container and connect to the internet by pinging google.com. The -c 2 flag limits the command to two ping attempts.

# docker container attach demo-2

# ip a s

# ping -c 2 172.17.0.2

# ping -c 2 172.17.0.3

# curl ifconfig.io

# ping -c 2 www.google.com

Note: Press ctrl+p,ctrl+q to come out of the container gracefully

1.7 Let us clean-up the containers, by executing the below command.

# docker container rm `docker container ls -a -q` -f

2. Customized or user-defined Bridge Networks

2.1 Let us create a bridge by the name demo-bridge

# docker network create --driver bridge demo-bridge

2.2 let us list the docker networks

# docker network ls

2.3 Inspect the demo-bridge network. This shows you its subnet and the fact that no containers are connected to it:

# docker network inspect demo-bridge

2.4 Create your four containers. Notice the --network flags. You can only connect to one network during the docker run command, so you need to use docker network connect afterward to connect one of the containers to the default bridge network as well.

# docker container run --name centos-1 -dit --network demo-bridge centos

# docker container run --name centos-2 -dit --network demo-bridge centos

# docker container run --name centos-3 -dit centos

# docker container run --name centos-4 -dit --network demo-bridge centos

# docker network connect bridge centos-4

Note: In the above steps we created several containers; for some of them we specified the demo-bridge network, and centos-4 was additionally connected to the default bridge network.

2.5 Let us list the container created

# docker container ls -a

2.6 Inspect the demo-bridge network to verify which containers are attached to it.

# docker network inspect demo-bridge | grep -i name -A 4

Note: Containers centos-2, centos-1, and centos-4 are connected to the demo-bridge network.

2.7 Inspect the default bridge network to verify which containers are attached to it.

# docker network inspect bridge | grep -i name -A 4

Note: Containers centos-3, and centos-4 are connected to the default bridge network

2.8 On user-defined networks like demo-bridge, containers can not only communicate by IP address, but can also resolve a container name to an IP address. This capability is called automatic service discovery.

2.8.1 Let’s connect to centos-1 and test this out. centos-1 should be able to resolve centos-2 and centos-4 (and centos-1, itself) to IP addresses.

# docker container attach centos-1

# ping -c 2 centos-1

# ping -c 2 centos-2

# ping -c 2 centos-4

2.8.2 From centos-1, you should not be able to connect to centos-3 at all, since it is not on the demo-bridge network.

# ping -c 2 centos-3

Note: Detach from centos-1 using detach sequence, CTRL + p, CTRL + q

2.8.3 Remember that centos-4 is connected to both the default bridge network and demo-bridge. It should be able to reach all of the other containers.

However, you will need to address centos-3 by its IP address. Attach to it and run the tests.

# docker container attach centos-4

# ping -c 2 centos-1

# ping -c 2 centos-2

# ping -c 2 centos-3

# ping -c 2 172.17.0.2

# ping -c 2 172.18.0.4

Note: Detach from centos-4 using detach sequence, CTRL + p, CTRL + q

Cleanup

3. Let us clean-up all containers and the demo-bridge network.

# docker container rm `docker container ls -a -q` -f

# docker image rm `docker image ls -a -q` -f

# docker network rm demo-bridge

Docker Compose

Docker Compose Overview

  • Compose is a tool for defining and running multi-container Docker applications.
  • Use a YAML file to configure your application’s services.
  • A single command creates and starts all the services from the configuration.
  • Compose works in all environments: production, staging, development, testing, as well as CI workflows.
  • Using Compose is basically a three-step process:
    • Define your app’s environment with a Dockerfile so it can be reproduced anywhere.
    • Define the services that make up your app in docker-compose.yml so they can be run together in an isolated environment.
    • Run docker-compose up and Compose starts and runs your entire app.

Features

  • Multiple isolated environments on a single host
  • Preserve volume data when containers are created
  • Only recreate containers that have changed
  • Variables and moving a composition between environments

Common use cases

  • Development environments
  • Automated testing environments
  • Single host deployments
Lab – Docker Compose

Introduction
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.
Compose works in all environments: production, staging, development, testing, as well as CI workflows.
1. Install Docker Compose

1.1 Log in as the “root” user.

1.2 Run this command to download Docker Compose (this lab uses version 1.23.2):

# curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

1.3 After the download has completed, make the binary executable with the command:

# chmod +x /usr/local/bin/docker-compose

1.4 Verify the version that has been installed.

# docker-compose --version

1.5 Make sure bash completion is installed.

# curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose > /etc/bash_completion.d/docker-compose

1.6 Install git, by executing the below command.

# yum install -y git

1.7 Let us clone the git repository, which contains the manifests.

# git clone https://github.com/EyesOnCloud/docker.git
2. Get started with Docker Compose

Docker Compose is a four-step process:
  • Create a directory for the install files.
  • Create a YAML file named docker-compose.yml in that directory.
  • Put the application group launch commands into docker-compose.yml.
  • From inside that directory, use the command docker-compose up to launch the container(s).

2.1 Create a directory for a simple project called test_compose1:

# mkdir test_compose1 && cd test_compose1

2.2 Create a YAML file named docker-compose.yml.

In this example, you’ll take the base application from GitHub and complete the docker-compose.yml file for it. This application uses Node, NPM, and MongoDB.

Just like the Dockerfile, the docker-compose.yml file tells Docker how to build what you need for your containers.

a. Choose Your Docker Compose Version

The first line of any docker-compose.yml file is the version setting.

b. Define Node and Mongo Services

Services are how Docker refers to each container you want to build in the docker-compose file. In this case, you’ll create two services:
  • NodeJS application
  • MongoDB database

First, tell Docker what image you want to build the app service from by specifying that you’ll be building from the sample:1.0 image. You’ll specify that indented under the app tag.

Of course, that image doesn’t exist yet, so you’ll need to let Docker know where to find the Dockerfile to build it by setting a build context. If you don’t, Docker will try to pull the image from Docker Hub, and when that fails it will fail the docker-compose command altogether.

Here, you’ve specified that the build context is the current directory, so when Docker can’t find the sample:1.0 image locally, it will build it using the Dockerfile in the current directory.

Next, you’ll tell Docker what the container name should be once it has built the image to create the container from. Now, when Docker builds the image, it will immediately create a container named sample_app from that image.

By default, NodeJS apps run on port 3000, so you will need to map that port to 80 since this is the “production” docker-compose file. You do that using the ports tag in YAML.

Here, you have mapped port 80 on the host operating system to port 3000 in the container. That way, when you have moved this container to a production host, users of the application can go to the host machine’s port 80 and have those requests answered from the container on port 3000.

Your application will be getting data from a MongoDB database, and to do that the application will need a connection string that it will get from an environment variable called “MONGO_URI”.

c. Create a Docker Network

For the application service to actually be able to reach the sample database, it will need to be on the same network. To put both of these services on the same network, create one in the docker-compose file by using the networks tag at the top level.

This creates a network called “samplenet” using a bridge type network. This will allow the two containers to communicate over a virtual network between them.

Back in the app section of the file, join the app service to the “samplenet” network.

d. Create the MongoDB Service

Now the app service is ready, but it will not be much good without the DB service. So add the same kinds of things in the next section for the DB service.

This service builds from the official MongoDB 3.0.15 image and creates a container named “sample_db”. It also joins the “samplenet” network with an alias of “sampledb”.
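
Putting the pieces above together, a hedged sketch of what the described docker-compose.yml could look like (the MONGO_URI value and the exact field choices are assumptions for illustration; the authoritative file is the one copied from the repository in the next step):

version: '3'
services:
  app:
    image: sample:1.0
    build: .
    container_name: sample_app
    ports:
      - "80:3000"
    environment:
      # connection string value is an assumption for illustration
      - MONGO_URI=mongodb://sampledb:27017/sample
    networks:
      - samplenet
  db:
    image: mongo:3.0.15
    container_name: sample_db
    networks:
      samplenet:
        aliases:
          - sampledb
networks:
  samplenet:
    driver: bridge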

2.3 Copy the prepared manifest from the cloned repository into the current folder:

# cp ~/docker/docker-compose.yml ~/test_compose1/

Let us view the manifest, by executing the below command.

# cat -n docker-compose.yml

With all that done, your final docker-compose.yml file should look like the manifest you just viewed.
2.4 Create a Dockerfile that references the base image:

# cat > Dockerfile <<EOF
FROM node:latest
COPY . /app
WORKDIR /app
RUN ["npm", "install"]
EXPOSE 3000/tcp
CMD ["npm", "start"]
EOF
2.5 Use Docker Compose to launch this test container with the command:

# docker-compose up -d

This command runs docker-compose in the background. If you want to see all of the output of docker-compose, run this command without the -d flag. Remember, you will need to issue this command from inside the test_compose1 directory.

2.6 Verify that the containers were created by using the command:

# docker-compose ps

2.7 Verify the detailed container information using the -a flag:

# docker container ls -a
3. Stopping and Deleting Containers (stop, rm and down)

3.1 To stop all the containers in an application group, use the command:

# docker-compose stop

3.2 After the containers have been stopped, you can remove all containers which were started with Docker Compose with the command:

# docker-compose rm

Note: Press “y” to remove all containers.

3.3 To both stop all containers in an application group and remove them, along with everything that was created by docker-compose, run the below command.

# docker-compose down

3.4 Verify that the docker-compose containers are removed:

# docker-compose ps

3.5 Verify the detailed container information using the -a flag:

# docker container ls -a
Build Services with Docker Compose

4. Create a Ghost Blog and MySQL Service

4.1 Create a directory for the Ghost project and change into it:

# mkdir ~/ghost-app && cd ~/ghost-app

4.2 Copy the manifest from the cloned repository into the current folder as docker-compose.yml:

# cp ~/docker/docker-ghost-compose.yml ~/ghost-app/docker-compose.yml

4.3 Let us view the manifest.

# cat -n docker-compose.yml

4.4 Bring up the Ghost blog service by starting the Docker Compose services.

# docker-compose up -d

Conclusion

Confirm by accessing the below URL from a web browser.

http://IP-Address:80
5. Cleanup

5.1 Stop and bring down the application.

# docker-compose stop
# docker-compose rm
# docker-compose down

5.2 To remove all the images, run the below command.

# docker image rm `docker image ls -q` -f

5.3 Verify that the containers are removed:

# docker container ls

5.4 Verify that the Docker images are removed:

# docker image ls

