Introduction#
This is the first article in the work practice series on containers, mainly introducing the basics and practices of Docker.
As a backend developer, when I first started working I mostly debugged locally and didn't know much about Docker. It wasn't until I began working on more complex low-level blockchain development that I ran into the complicated dependencies of chains and their related tools, along with plenty of version conflicts. Each time I had to configure a complex environment on my local machine or a server, and after every restart many services and configurations had to be redeployed, which was cumbersome and prone to puzzling cross-platform errors.
Therefore, I gradually moved to writing project-specific Dockerfiles and building images for subsequent development and debugging. The deployment machines only need Docker (and Docker Compose) installed, with no need to install the various dependencies locally, which is very convenient. Later, I also worked with my lead to set up the project's CI/CD pipeline based on Docker images, GitLab CI, and a k8s environment, which greatly improved development and debugging efficiency.
This article will summarize the concepts and practices related to Docker based on these experiences, hoping to be helpful.
Introduction to Docker#
The services we develop usually run as binaries directly on the operating system, while Docker is a container technology that packages our application and its dependencies into a container. An image is typically based on a lightweight Linux base image and built up as a stack of layers, with our application usually at the top layer; these layers and dependencies are specified in the Dockerfile.
Deploying with containers has many obvious advantages over running on a local machine or remote server.
- No need to install various environments and dependencies on the operating system (apart from Docker itself). With the traditional way of running services directly on the host, development becomes very cumbersome, requiring constant back-and-forth between development and operations to get environments configured and deployed. Moreover, deploying multiple services on one machine easily leads to dependency and version conflicts.
- Independent deployment environments can be created. By writing Dockerfiles for different projects to build images, we can package the required environment and dependencies in the image, allowing us to easily run different versions of the same application or run multiple instances of general services like MySQL. Management can be done through Docker commands or Docker Compose commands, allowing for one-click start/pause.
- Docker does not strongly depend on the version of the operating system itself. The same Docker image can run on different operating systems (Windows, macOS, different distributions of Linux), making it easy to share, migrate, and deploy services across platforms.
- Compared to virtual machines, Docker containers do not ship their own kernel (they share the host's kernel) and only contain the application layer, making them smaller, faster to start, and more lightweight.
Of course, Docker containers are less flexible than full operating systems or virtual machines: a VM can run any guest operating system and cover more specialized needs.
Basic Docker Operations#
Installing Docker#
Installing Docker is straightforward. Download the installation package corresponding to your operating system from the official website and follow the instructions to install it.
macOS#
I initially installed Docker Desktop on my macOS system, which allows for graphical management of images and containers. It's very convenient but consumes more resources and is power-hungry.
Later, I tried Colima, a more lightweight container runtime environment that is very convenient for local debugging on macOS. I recommend using it; just install and configure it according to the project's official documentation. I installed it directly using the brew package manager:
brew install colima
After installation, run colima start to start the runtime, and colima stop to stop it. More commands can be viewed with colima --help.
I started my commonly used development environment with the following command, and you can configure it according to your needs:
colima start -c 8 -m 16 -a x86_64 -p docker-amd
CentOS#
Compared to local development, Docker is more commonly used for deploying applications on servers. The operating system I use most often is CentOS 7, where Docker can be installed with the yum package manager:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce
After installation, start the Docker service and configure it to start on boot:
systemctl enable docker
systemctl start docker
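To confirm the installation succeeded, a quick sanity check can be run (the hello-world image is pulled from Docker Hub, so this assumes network access):
# Print client/server version information
docker version
# Run a throwaway test container
docker run hello-world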
Docker Images#
Docker primarily has two concepts: images and containers. An image can be thought of as a template built from a Dockerfile, while a container is a running instance of an image.
Dockerfile#
We specify the required environment and dependencies for the application using a Dockerfile, which has the following basic format:
FROM <image>
ENV USERNAME=admin \
PASSWORD=123456
RUN mkdir -p <app-directory>
COPY . /<app-directory>
CMD ["<command>", "<entrypoint file>"]
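As a concrete illustration, below is a minimal sketch of a Dockerfile for a hypothetical Go service; the base image tag, directory layout, and build command are assumptions for the example, not a prescription:
# Base image with the Go toolchain (tag is an example)
FROM golang:1.21-alpine
# Example environment variable for the app
ENV APP_ENV=production
# Create and switch to the application directory
RUN mkdir -p /app
WORKDIR /app
# Copy the source code and build the binary
COPY . /app
RUN go build -o server .
# Start the compiled binary
CMD ["./server"]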
After completing the Dockerfile, we can build the image using the docker build command in the same directory (or point it at the Dockerfile explicitly):
# Build the image
docker build -t <image:tag> .
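For example, assuming the Dockerfile sits in the project root and we call the image myserver (a placeholder name), the build and a quick verification might look like this:
# Build the image with a placeholder name and tag
docker build -t myserver:1.0 .
# Confirm the image now exists locally
docker images myserver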
Saving and Loading Images#
We can save the locally built image as a tar package for sharing:
docker save -o <image-name>.tar <image-name>
When we need to use the image, we can load the tar package using the docker load command:
docker load -i <image-name>.tar
Pushing and Pulling Images#
Of course, sharing images via tar packages is not very convenient, especially if the images are large. Therefore, we can use the docker push command to push the image to the official image repository or a private repository (the project I work on uses Harbor to manage images), and use the docker pull command to pull images.
# Pull official image (short form)
docker pull <image:tag>
# Pull official image (full command)
docker pull docker.io/library/<image:tag>
# Push image to official image repository Docker Hub
docker push <image:tag>
# Push image to private repository (requires authentication configuration)
docker tag <image:tag> <private-repo-path>/<image:tag>
docker push <private-repo-path>/<image:tag>
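For instance, with a hypothetical private Harbor registry at harbor.example.com and a project named backend, the retag-and-push flow might look like this (the registry address and names are placeholders):
# Authenticate against the private registry
docker login harbor.example.com
# Retag the local image with the registry path, then push
docker tag myserver:1.0 harbor.example.com/backend/myserver:1.0
docker push harbor.example.com/backend/myserver:1.0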
Docker Image Operations#
For Docker images, the common operations I use are viewing, deleting, and renaming tags. More commands can be viewed with docker image --help or on the official website.
# View all images
docker images
# Delete an image
docker rmi <image:tag>
# Rename an image
docker tag <old-image:tag> <new-image:tag>
Container Operations#
Viewing Containers#
After starting containers with Docker or Docker Compose commands, we can check the service status with the following commands:
# View running containers
docker ps
# View all containers
docker ps -a
Starting/Stopping Instances from Images#
Once we have built the required image from the Dockerfile, we can start a container from it using the docker run command, adding options to fit our service's needs. My common operations are as follows:
# Run a container
docker run <image:tag>
# Run a container and specify a name
docker run --name <server-name> <image:tag>
# Run a container in detached mode
docker run -d <image:tag>
# Port mapping (host-port:container-port)
docker run -p 6000:6379 <image:tag>
# Configure environment variables
docker run -e USERNAME=admin -e PASSWORD=123456 <image:tag>
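In practice these options are usually combined. Here is a sketch of running a Redis instance in the background with a name, a port mapping, and an environment variable; the container name, port, and variable are only placeholders for illustration:
# Detached container with a name, port mapping, and environment variable
docker run -d --name my-redis -p 6379:6379 -e TZ=Asia/Shanghai redis:7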
Starting/Stopping Container Services#
After creating an instance from an image, we can start/stop the container service with the following commands:
# Start a stopped container
docker start <container-id>
# Restart a container
docker restart <container-id>
# Stop a container
docker stop <container-id>
Viewing Logs#
After starting a service with Docker, we often need to check its running logs for debugging. We can view them using docker logs, with the specific commands as follows:
# View logs
docker logs <container-id>
# View logs in real-time
docker logs -f <container-id>
Entering a Container#
Sometimes we need to enter the Docker container for inspection and debugging. We can enter it using the docker exec command, with the specific command as follows:
# Enter a specific container by id
docker exec -it <container-id> <command>
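For example, to open an interactive shell inside a container, or to run a one-off command without staying inside (the /app path is just an illustrative assumption):
# Open an interactive shell (use /bin/bash if the image provides it)
docker exec -it <container-id> /bin/sh
# Run a single command and return immediately
docker exec <container-id> ls /app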
Docker Networking#
Docker container instances run in a network. The commands mentioned above did not specify a network, so the services will run on the default network. We can view the networks with the following command:
# View all networks
docker network ls
If we do not want to run on the default network, we can create a custom network with the following command:
# Create a custom network
docker network create <network-name>
After creating our custom network, we can specify it when creating container instances using the --network parameter:
docker run --network <network-name> <image:tag>
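Containers on the same user-defined network can reach each other by container name. A minimal sketch with placeholder names, assuming the image in question ships the ping utility:
docker network create my-net
docker run -d --name db --network my-net redis:7
docker run -d --name app --network my-net <image:tag>
# Inside "app", the hostname "db" resolves to the Redis container
docker exec -it app ping db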
Docker Data Persistence#
When services run inside Docker containers, their data is stored in the container; when the container is deleted, the data is deleted with it, which can mean data loss for services that need to run long-term. We therefore need to persist the data. The two approaches I commonly use are bind mounts (mounting a host directory) and volumes.
We can achieve persistence by mounting a specific directory from the host to a directory inside the container:
# Mount a host directory to a container directory
docker run -v <host-file-path>:<container-file-path> <image:tag>
We can also use named volumes to achieve persistence:
# Reference volume by name
# Docker will automatically generate a path
# Windows: C:\ProgramData\docker\volumes
# Linux: /var/lib/docker/volumes
# macOS: /var/lib/docker/volumes
docker run -v <volume-name>:<container-file-path> <image:tag>
If we only need a mount and do not care about managing or browsing the files, we can use an anonymous volume without specifying a volume name, letting Docker use an automatically generated directory:
# Docker will automatically generate a path
# Windows: C:\ProgramData\docker\volumes
# Linux: /var/lib/docker/volumes
# macOS: /var/lib/docker/volumes
docker run -v <container-file-path> <image:tag>
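Named volumes can also be managed directly with the docker volume subcommands, which is useful for checking where the data actually lives (my-data is a placeholder name):
# Create, list, and inspect volumes
docker volume create my-data
docker volume ls
docker volume inspect my-data
# Remove a volume that is no longer needed
docker volume rm my-data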
Docker Compose#
Docker provides a rich set of commands, but long command-line invocations are hard to remember, and if an application depends on multiple environments or services, we have to run and manage several containers separately, which is inconvenient. For these cases we can use the Docker Compose tool.
Docker Compose is a tool for defining and running multi-container Docker applications, configured through a .yaml file. I use Docker Compose heavily in my daily work and fall back to plain docker run only for very simple applications, since Compose allows unified management and makes subsequent configuration changes easy.
Installation#
If Docker Desktop is installed on macOS, Docker Compose is included by default and can be used directly. On Linux it needs to be installed separately. Here, I will again take CentOS 7 as an example:
curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
After installation, you can use the docker-compose command.
Configuration Management#
The configuration file for Docker Compose is a yaml file, with the following basic format:
version: '3'
services:
  container-1:
    image: <image-name>
    ports:
      - <host-port>:<container-port>
    volumes:
      - <host-file-path>:<container-file-path>
    environment:
      - <ENV-KEY>=<ENV-VALUE>
  container-2:
    image: <image-name>
    ports:
      - <host-port>:<container-port>
    volumes:
      - <volume-name-1>:<container-file-path>
    environment:
      - <ENV-KEY>=<ENV-VALUE>
volumes:
  volume-name-1:
    driver: local
Most of the configurations are intuitive, such as service names, image names, port mappings, file mounts, environment variables, etc.
Here, version indicates the version of the configuration file, services lists the services, and volumes lists the named volumes. Within each service, image is the image name, ports defines port mappings, volumes defines file mounts, and environment defines environment variables. More configuration options can be added based on project needs.
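To make the template more concrete, here is a minimal sketch of a compose file for a hypothetical web service backed by Redis; the image names, ports, environment variable, and volume name are all placeholders:
version: '3'
services:
  web:
    image: myserver:1.0
    ports:
      - "8080:8080"
    environment:
      - REDIS_ADDR=redis:6379
    depends_on:
      - redis
  redis:
    image: redis:7
    volumes:
      - redis-data:/data
volumes:
  redis-data:
    driver: local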
Common Commands#
Starting/Stopping Services#
Similar to the docker run command, Docker Compose provides up and down commands to start and stop services.
# Start services
docker-compose -f <name>.yaml up
# Start services in detached mode
docker-compose -f <name>.yaml up -d
# Stop services
docker-compose -f <name>.yaml down
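Two related subcommands I also find handy:
# View the status of the services defined in the compose file
docker-compose -f <name>.yaml ps
# Restart services after adjusting the configuration
docker-compose -f <name>.yaml restart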
Viewing Logs#
We can view the service logs using the logs command.
# View logs
docker-compose logs <container-id>
# View logs in real-time
docker-compose logs -f <container-id>
Practical Operation Commands#
In addition to the basic commands mentioned above, I also commonly use the following commands.
Clearing Unused Containers#
When a container instance exits due to configuration or program runtime errors, it is still retained and can be seen with the docker ps -a command. We can use the following combination of commands to clean these up:
docker rm `docker ps -a | grep Exited | awk '{print $1}'`
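A simpler built-in alternative is the prune subcommand, which removes all stopped containers (it asks for confirmation; add -f to skip the prompt):
docker container prune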
Batch Importing Local Images#
When we need to import a large number of local images into a machine, importing them one by one can be very cumbersome. We can place the images in the same directory and use the following command for batch import:
for i in `ls`; do docker load < $i ; done
Conclusion#
The above is my walkthrough of the basic knowledge and practical operations of Docker container technology; I hope it is helpful. There is in fact much more to Docker, for example in my last project I tried Docker's BuildKit feature, which greatly reduced the final image size, and used buildx to build cross-platform images. This article focuses on the basics and the commands I commonly use in practice; if anyone is interested in these extended topics, I will cover them in a later update.