Introduction to Docker and Basic Operations


A container, in everyday life, is anything that can hold other items so that people can organize, store, and transport them, for example a wardrobe, a suitcase, or a backpack. Besides "container" in that sense, the word also means a shipping container: dock workers load uniform, standardized boxes full of different goods onto ships moored at the quay, and the goods can then be moved around with ease.


A container is essentially a sandbox technology. As the name suggests, a sandbox packages your application like a shipping container, so applications have boundaries and do not interfere with each other; and an application packed in its sandbox can easily be moved around. This is exactly the ideal state PaaS aims for (portability, standardization, isolation).

Containers are the software industry's shipping containers. The standardization of shipping containers cut packaging costs and greatly improved the efficiency of transporting, loading, and unloading goods; it was a major revolution in the transport industry. In early software projects, updates and releases were inefficient and the develop-test-release cycle was long, making agility hard. With container technology, the same kind of standardization can be exploited to dramatically improve productivity.

Container technology is the hot new technology that follows virtualization, cloud computing, and big data. It raises hardware utilization, makes it easy for a business to scale out quickly (down to seconds), and enables self-healing of failed services (achievable together with Kubernetes; OpenStack has no such capability). The coming years will see containers become ever more popular; this is a technology of great influence and value for the IT industry, and mastering it is clearly a promising career opportunity for IT practitioners.


Introduction to Docker

History of containers

Although Docker pushed container technology to its peak, containers were not born with Docker. In fact, containers are hardly even a new technology: they have been around, and in use, for quite a while. You may not have heard of some of the names below, but they are all applications of container technology:

1. Chroot Jail

This is the familiar chroot command. It appeared as early as 1979 and is considered one of the earliest containerization technologies.

It can isolate a process's view of the filesystem.
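A minimal sketch of the idea (an illustration only, not Docker's mechanism: chroot needs root, so the demo skips itself when run unprivileged or when no statically linked busybox shell is available):

```shell
# Build a throwaway root filesystem and, if possible, run a shell jailed inside it.
rootfs=$(mktemp -d)
mkdir -p "$rootfs/bin"
if [ "$(id -u)" -eq 0 ] && command -v busybox >/dev/null 2>&1; then
    cp "$(command -v busybox)" "$rootfs/bin/sh"   # busybox is usually statically linked
    msg=$(chroot "$rootfs" /bin/sh -c 'ls /')     # the jailed shell sees only the tiny rootfs
else
    msg="skipped: chroot requires root (and busybox for a static shell)"
fi
echo "$msg"
rm -rf "$rootfs"
```

The jailed shell cannot see anything outside the temporary directory, which is exactly the filesystem isolation described above.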

2. The FreeBSD Jail

FreeBSD Jail implemented operating-system-level virtualization and was one of the pioneers of that approach. It was released in 2000 together with FreeBSD 4.0.

3. Linux-VServer

A dedicated virtual private server implemented with system-level virtualization features added to the Linux kernel. It allows creating many independent virtual private servers (VPS) that run simultaneously at full speed on a single physical server, effectively sharing the hardware resources.

A VPS provides an operating environment almost identical to a traditional Linux server. All services (such as ssh, mail, web, and database servers) can be started on such a VPS with no modification (or, in special cases, only minimal modification), just as on any real server.

Each VPS has its own user account database and root password and is isolated from the other virtual servers, while they all share the same hardware resources.

VServer 1.0 was released on November 1, 2003.

Official site: http://linux-vserver.org/

4. Solaris Containers

This is also operating-system-level virtualization, designed for x86 and SPARC systems. A Solaris Container is the combination of system resource controls and the boundary isolation provided by "zones".

5. OpenVZ

OpenVZ is an operating-system-level virtualization technology for Linux. It allows creating multiple securely isolated Linux containers, i.e. VPSs.

6. Process Containers

Process Containers were developed by Google engineers and are generally known today as cgroups.

7. LXC

LXC is short for Linux Containers. It provides lightweight virtualization to isolate processes and resources, without the instruction-interpretation machinery or the other complexities of full virtualization. Containers effectively partition the resources managed by a single operating system into isolated groups, to better balance conflicting resource demands between those groups.

Linux Containers provide a mechanism for running multiple mutually isolated server containers simultaneously on a single controllable host node.

A Linux Container is somewhat like chroot in that it provides a virtual environment with its own process and network space, but it differs from a virtual machine: LXC is resource virtualization at the operating-system level.

8. Warden

In its initial phase, Warden used LXC as its container runtime. It has since been superseded within Cloud Foundry.

9. LMCTFY

LMCTFY is short for "Let Me Contain That For You". It is the open-source version of Google's container stack.

Google engineers collaborated with Docker's libcontainer team, abstracting libcontainer's core concepts and porting them into this project. The project's progress is unclear; it has effectively been superseded by libcontainer.

10. Docker

Docker is a tool that packages an application together with its dependencies into a container that can run on almost any server.

11. rkt

rkt (pronounced "Rocket") is an application container engine focused on security and open standards.

As the list above shows, Docker was not the first containerization technology, but it is certainly the best known.

What Is Docker


In 2010, Solomon Hykes (later Docker's CTO) and a few other young engineers founded dotCloud, a PaaS company in San Francisco that provided technical services to developers on top of its PaaS platform.

Docker is an open-source project born on March 27, 2013. It started as an internal side project of the open PaaS (Platform as a Service) offering at dotCloud (which, after Docker's open-source release proved hugely popular, renamed itself Docker Inc. in October 2013; headquarters in San Francisco, California). It is written in Go, the language introduced by Google. The project later joined the Linux Foundation, adopted the Apache 2.0 license, and keeps its code on GitHub.

Docker is built on the Linux kernel. It initially used LXC, the container technology natively supported by Linux that provides lightweight virtualization; Docker essentially grew out of LXC, providing a high-level wrapper and standardized configuration around it and adding a series of more powerful features on top. (By contrast, the virtualization technology KVM, Kernel-based Virtual Machine, is implemented as a kernel module.) Docker later switched to its own open-source runc technology to run containers and dropped LXC entirely.

Docker delivers faster than virtual machines and consumes fewer resources. It uses a client/server architecture with a remote API to manage and create containers, making it easy to create lightweight, portable, self-sufficient containers. Docker's three key ideas are build, ship, and run. Docker is licensed under Apache 2.0 and uses kernel features such as namespaces and cgroups to provide resource isolation and security for containers, so a Docker container does not incur the extra resource overhead of a virtual machine (an idle VM consumes roughly 6-8% of the physical host's capacity), which greatly improves resource utilization. In short, Docker is a lightweight "virtual machine" implemented in a novel way; it resembles a VM but differs greatly from one in principle and usage, and its precise name is an application container.

Docker's Main Goal


"Build, Ship and Run Any App, Anywhere": by managing the lifecycle of application components (packaging, distribution, deployment, and runtime), Docker achieves "package once, run anywhere" at the application-component level. A component here can be a web application, a database service, or even an operating system. Running applications in Docker containers makes them portable across platforms and servers: prepare the application environment once and it runs everywhere, keeping development and production consistent, solving compatibility problems between the application and its runtime environment, greatly speeding up deployment, and reducing the chance of failures.

What containerizing applications with Docker buys you:


  • A uniform infrastructure environment: the Docker environment

    • hardware composition and configuration
    • operating system version
    • heterogeneous runtime environments
  • A uniform way to package (box up) programs: the Docker image

    • Java programs
    • Python programs
    • Node.js programs
  • A uniform way to deploy (run) programs: the Docker container

    • java -jar … → docker run …
    • python manage.py runserver … → docker run …
    • npm run dev … → docker run …

Docker vs. Virtual Machines and Physical Hosts


Containers and virtual machines: the technology compared


  • A traditional virtual machine virtualizes a complete set of hardware and runs a full operating system, on which software is then installed and run
  • Applications inside a container run directly on the host's kernel; the container has no kernel of its own and needs no virtual hardware, making it very lightweight
  • Containers are isolated from each other: each has its own independent filesystem, process space, network space, user space, and so on, so multiple containers on the same host do not affect one another

Containers and virtual machines: behavior compared


  • Higher resource utilization: lower overhead, with no separate guest OS kernel consuming hardware resources, so server performance can be squeezed to the limit. A VM typically loses 5-20%; a container runs with essentially no loss. In production, one physical machine can run only dozens of VMs but commonly hundreds of containers
  • Faster startup: a container starts within seconds
  • Smaller footprint: container disk usage is typically measured in MB, virtual machine usage in GB
  • Better integration: containers combine better with CI/CD (continuous integration/continuous deployment) techniques; building, publishing, and testing an image can be a one-command operation, enabling automated, fast deployment management and an efficient development lifecycle

Virtual machines exist to isolate service runtime environments properly: each VM has its own kernel, and virtualization can run VMs with different operating systems. But usually a VM runs only one service, so resource utilization is clearly low and performance is wasted. We create VMs in order to run applications such as Nginx, PHP, or Tomcat; the VM itself adds unnecessary resource overhead, whereas container technology gains considerable performance by cutting out the intermediate layers.

Experiments show that a KVM virtual machine running CentOS needs 100-200 MB of memory for itself after boot, without any tuning. Moreover, an application running inside the VM has its calls to the host operating system intercepted and processed by the virtualization layer, which is itself a performance cost, especially heavy for CPU, network, and disk I/O.

For example: a physical server with 96 GB of RAM running Java workloads in VMs typically allocates 8 GB RAM / 4 cores per VM and can run only about 13 VMs. Running the same Java programs in Docker containers at 4 GB each, the same server can run about 25 containers, roughly doubling the density. That substantially cuts IT spending; in practice more than half of the physical hardware can usually be saved.
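The density arithmetic behind that example, as a quick sketch (the counts quoted in the text are rough field estimates rather than exact division):

```shell
total=96   # GB of RAM on the physical server
per_vm=8   # GB allocated to each virtual machine
per_ctr=4  # GB allocated to each container
echo "VM capacity: $((total / per_vm)), container capacity: $((total / per_ctr))"
```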

Docker Components

Docker official site: http://www.docker.com

Documentation: https://docs.docker.com/

Docker images: https://hub.docker.com/

Docker Chinese site: http://www.docker.org.cn/


  • Docker host (Host): a physical or virtual machine that runs the Docker daemon and containers; also called the host or node
  • Docker server (Server): the Docker daemon, which runs the containers
  • Docker client (Client): the docker command or other tools that call the Docker API
  • Docker images (Images): an image is the template used to create instances; essentially a collection of program files
  • Docker registry (Registry): the place where images are stored; the official registry is https://hub.docker.com/, and a private registry such as Harbor can also be deployed
  • Docker container (Container): a container is one service (or group of services) created from an image to serve requests; essentially the processes started from the programs inside the image
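A minimal sketch of how these components interact on the command line (hedged: it assumes a working Docker installation with network access, and `alpine` is merely a sample image, so the commands are guarded):

```shell
# client -> daemon -> registry -> image -> container, in four commands
if command -v docker >/dev/null 2>&1; then
    docker pull alpine                    # client asks the daemon to fetch an image from the registry
    id=$(docker run -d alpine sleep 30)   # daemon starts a container (a process) from the image
    docker ps --filter "id=$id"           # the running container shows up in the daemon's list
    docker rm -f "$id"                    # stop and remove it
    status=ok
else
    status="skipped: docker is not installed here"
fi
echo "$status"
```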


Namespace

https://man7.org/linux/man-pages/man7/namespaces.7.html
https://en.wikipedia.org/wiki/Linux_namespaces

When one host runs N containers and those containers all share one OS, the following problems inevitably arise:

  • How do we give every container its own filesystem without them affecting each other?
  • The containers are all child processes of the docker main process; how do we run different kinds of child processes under one parent? Can the container child processes communicate with each other (share in-memory data)?
  • How does each container get its own IP address and ports?
  • Can multiple containers have the same hostname?
  • Does every container need its own root user? How do we handle duplicate account names?


Namespaces are a low-level Linux concept implemented in the kernel: several different types of namespaces are provided at the kernel layer. All Docker containers run under the same Docker main process and share the host's kernel, running in the host's user space. Each container needs a VM-like, mutually isolated running space, but container technology builds the runtime environment for a given service inside a process, while also protecting the host kernel from interference by other processes, covering filesystem space, network space, process space, and so on. Isolation between container runtime spaces is currently achieved mainly with the following techniques:

Isolation type                                  Function                                                                        Clone flag      Kernel version
MNT Namespace (mount)                           isolates mount points and filesystems                                           CLONE_NEWNS     2.4.19
IPC Namespace (Inter-Process Communication)     isolates inter-process communication: semaphores, message queues, shared memory CLONE_NEWIPC    2.6.19
UTS Namespace (UNIX Timesharing System)         isolates kernel identification, hostname, and domain name                       CLONE_NEWUTS    2.6.19
PID Namespace (Process Identification)          isolates process IDs                                                            CLONE_NEWPID    2.6.24
Net Namespace (network)                         isolates networking: devices, network stack, ports, etc.                        CLONE_NEWNET    2.6.29
User Namespace (user)                           isolates users and groups                                                       CLONE_NEWUSER   3.8
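The namespace memberships in the table above can be inspected for any process under /proc, which makes them concrete without needing root:

```shell
# Each link reads "type:[inode]"; two processes sharing a namespace of a
# given type show the same inode number there.
for t in mnt ipc uts pid net user; do
    printf '%-5s -> %s\n' "$t" "$(readlink /proc/$$/ns/$t)"
done
```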

Pid namespace

  • Processes of different users are isolated by PID namespaces, and different namespaces may contain the same PID.
  • With PID namespaces, the PIDs in each namespace are isolated from one another.

net namespace

  • Network isolation is implemented with net namespaces: each net namespace has its own network devices, IP addresses, IP routing tables, and /proc/net directory.
  • By default Docker uses a veth pair to connect a container's virtual NIC to a docker bridge (docker0) on the host.
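Two ordinary processes on the host share one net namespace, which is easy to verify by comparing the namespace inode numbers (a containerized process would show a different inode):

```shell
mine=$(readlink /proc/$$/ns/net)
child=$(sh -c 'readlink /proc/$$/ns/net')   # a child shell inherits the same net namespace
echo "$mine"
echo "$child"
```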

ipc namespace

  • Processes in a container still interact using the usual Linux inter-process communication (IPC) mechanisms, including the familiar semaphores, message queues, and shared memory.
  • Container IPC is really IPC between host processes within the same namespace, so namespace information has to be attached when IPC resources are requested: each IPC resource has a unique 32-bit ID.

mnt namespace

  • mnt namespaces let processes in different namespaces see different filesystem structures, so the files and directories seen by the processes in each namespace are isolated from one another

uts namespace

  • The UTS ("UNIX Time-sharing System") namespace gives each container its own hostname and domain name, so on the network it can be treated as an independent node rather than as a process on the host.

user namespace

  • Each container can have its own user and group IDs, meaning programs inside the container run as the container's internal users rather than as users on the host.
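A quick way to see a user namespace in action with util-linux's unshare (hedged: some distributions disable unprivileged user namespaces, in which case the fallback message appears instead):

```shell
# Inside the new namespace, the unprivileged caller is mapped to uid 0.
mapped=$(unshare --user --map-root-user sh -c 'id -u' 2>/dev/null || echo "disabled")
echo "uid inside the user namespace: $mapped"
```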

Example: namespaces

[root@master1 ~]# lsns --help

用法:
lsns [选项] [<名字空间>]

列出系统名字空间。

选项:
-J, --json 使用 JSON 输出格式
-l, --list 使用列表格式的输出
-n, --noheadings 不打印标题
-o, --output <list> 定义使用哪个输出列
--output-all output all columns
-p, --task <pid> 打印进程名字空间
-r, --raw 使用原生输出格式
-u, --notruncate 不截断列中的文本
-W, --nowrap don't use multi-line representation
-t, --type <name> namespace type (mnt, net, ipc, user, pid, uts, cgroup,time)

-h, --help display this help
-V, --version display version

Available output columns:
NS 名字空间标识符 (inode 号)
TYPE 名字空间类型
PATH 名字空间路径
NPROCS 名字空间中的进程数
PID 名字空间中的最低 PID
PPID PID 的 PPID
COMMAND PID 的命令行
UID PID 的 UID
USER PID 的用户名
NETNSID namespace ID as used by network subsystem
NSFS nsfs mountpoint (usually used network subsystem)
PNS parent namespace identifier (inode number)
ONS owner namespace identifier (inode number)

更多信息请参阅 lsns(8)。

[root@ubuntu2204 ~]# nsenter --help

用法:
nsenter [选项] [<程序> [<参数>...]]

以其他程序的名字空间运行某个程序。

选项:
-a, --all enter all namespaces
-t, --target <pid> 要获取名字空间的目标进程
-m, --mount[=<文件>] 进入 mount 名字空间
-u, --uts[=<文件>] 进入 UTS 名字空间(主机名等)
-i, --ipc[=<文件>] 进入 System V IPC 名字空间
-n, --net[=<文件>] 进入网络名字空间
-p, --pid[=<文件>] 进入 pid 名字空间
-C, --cgroup[=<文件>] 进入 cgroup 名字空间
-U, --user[=<文件>] 进入用户名字空间
-T, --time[=<file>] enter time namespace
-S, --setuid <uid> 设置进入空间中的 uid
-G, --setgid <gid> 设置进入名字空间中的 gid
--preserve-credentials 不干涉 uid 或 gid
-r, --root[=<目录>] 设置根目录
-w, --wd[=<dir>] 设置工作目录
-F, --no-fork 执行 <程序> 前不 fork
-Z, --follow-context 根据 --target PID 设置 SELinux 环境
-h, --help display this help
-V, --version display version

更多信息请参阅 nsenter(1)。

[root@ubuntu2204 ~]# lsns -t net
NS TYPE NPROCS PID USER NETNSID NSFS COMMAND
4026531840 net 229 1 root unassigned /run/docker/netns/default /sbin/init
4026532691 net 2 4136 65535 1 /run/docker/netns/5090da825e77 /pause
4026532770 net 2 4140 65535 0 /run/docker/netns/cb903a9d63e0 /pause


[root@ubuntu2204 ~]# ls -l /proc/4140/ns
总用量 0
lrwxrwxrwx 1 65535 65535 0 11月 12 18:47 cgroup -> 'cgroup:[4026532838]'
lrwxrwxrwx 1 65535 65535 0 11月 12 18:35 ipc -> 'ipc:[4026532768]'
lrwxrwxrwx 1 65535 65535 0 11月 12 18:47 mnt -> 'mnt:[4026532766]'
lrwxrwxrwx 1 65535 65535 0 11月 12 18:35 net -> 'net:[4026532770]'
lrwxrwxrwx 1 65535 65535 0 11月 12 18:35 pid -> 'pid:[4026532769]'
lrwxrwxrwx 1 65535 65535 0 11月 12 18:47 pid_for_children -> 'pid:[4026532769]'
lrwxrwxrwx 1 65535 65535 0 11月 12 18:47 time -> 'time:[4026531834]'
lrwxrwxrwx 1 65535 65535 0 11月 12 18:47 time_for_children -> 'time:[4026531834]'
lrwxrwxrwx 1 65535 65535 0 11月 12 18:47 user -> 'user:[4026531837]'
lrwxrwxrwx 1 65535 65535 0 11月 12 18:47 uts -> 'uts:[4026532767]'

#4136 is the container's PID on the host; below we enter that container's network namespace and run a command
[root@ubuntu2204 ~]# nsenter -t 4136 -n ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
group default
link/ether 42:73:2e:34:e3:91 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.0.11/24 brd 10.244.0.255 scope global eth0
valid_lft forever preferred_lft forever

[root@ubuntu2204 ~]# nsenter -t 4140 -n ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
3: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
group default
link/ether f6:20:4e:63:e9:0a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.244.0.10/24 brd 10.244.0.255 scope global eth0
valid_lft forever preferred_lft forever

Control groups

Linux Cgroups, short for Linux Control Groups, is a feature of the Linux kernel. It was started in 2006 by Google engineers (chiefly Paul Menage and Rohit Seth) under the original name "process containers". In 2007, because the word "container" already had many different meanings inside the Linux kernel, it was renamed cgroup to avoid confusion and merged into kernel 2.6.24. Many features have been added since.

If a container is given no resource limits at all, the host will let it occupy unbounded memory; sometimes a buggy program keeps allocating memory until the host's memory is exhausted. To avoid such problems, the host must place resource limits on containers, such as CPU and memory.

The chief purpose of cgroups is to cap the resources a group of processes can use, including CPU, memory, disk, network bandwidth, and so on. Beyond that, cgroups can set process priorities, meter resource usage, and control processes (for example, freezing and resuming them).
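A process's cgroup membership is visible directly in /proc, and on a cgroup-v2 host each group's limits are plain files under /sys/fs/cgroup (the paths shown are the standard kernel interfaces; the v2 layout exists only where cgroup v2 is mounted):

```shell
# Which cgroup(s) does this shell belong to?
cat /proc/self/cgroup
# On cgroup v2 all controllers share one hierarchy; list a few entries:
ls /sys/fs/cgroup 2>/dev/null | head -n 5 || true
```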

Cgroups are enabled by default at the kernel layer. Comparing different CentOS and Ubuntu releases shows, unsurprisingly, that newer kernels support more cgroup features.

CentOS 8.1 cgroups:

[root@centos8 ~]# cat /etc/redhat-release
CentOS Linux release 8.1.1911 (Core)

[root@centos8 ~]# grep CGROUP /boot/config-4.18.0-147.el8.x86_64
CONFIG_CGROUPS=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_RDMA=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
# CONFIG_BLK_CGROUP_IOLATENCY is not set
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y

CentOS 7.6 cgroups:

[root@centos7 ~]# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core)

[root@centos7 ~]# grep CGROUP /boot/config-3.10.0-957.el7.x86_64
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_NETPRIO_CGROUP=y

Ubuntu cgroups:

[root@ubuntu1804 ~]# grep CGROUP /boot/config-4.15.0-29-generic
CONFIG_CGROUPS=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_CGROUP_PIDS=y
CONFIG_CGROUP_RDMA=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NET_CLS_CGROUP=m
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y

The memory module in cgroups:

[root@ubuntu1804 ~]#grep MEMCG /boot/config-4.15.0-29-generic
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
# CONFIG_MEMCG_SWAP_ENABLED is not set
CONFIG_SLUB_MEMCG_SYSFS_ON=y

Container Management Tools

With chroot, namespaces, and cgroups we have the basis of a container runtime environment, but we still need tools for creating and deleting containers, and solutions for how containers are actually started, how their data is handled, how they are started and stopped, and so on; hence container management technologies appeared. Docker is the main one today; LXC was used early on.

LXC

LXC: Linux Containers. Provides lightweight virtualization to isolate processes and resources, and ships a set of container management tools such as lxc-create, lxc-start, and lxc-attach. The technology is not feature-complete, however, and sees little use today.

Official site: https://linuxcontainers.org/

Example: installing and using LXC on Ubuntu

[root@ubuntu1804 ~]# apt install lxc lxd
Reading package lists... Done
Building dependency tree
Reading state information... Done
lxd is already the newest version (3.0.3-0ubuntu1~18.04.1).
lxc is already the newest version (3.0.3-0ubuntu1~18.04.1).
......

[root@ubuntu1804 ~]# lxc-checkconfig #check kernel support for LXC; every item must show enabled
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-4.15.0-29-generic
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
......

[root@ubuntu1804 ~]# lxc-create -t download --name alpine1 -- --dist alpine --release 3.9 --arch amd64
Setting up the GPG keyring
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs
---
You just created an Alpinelinux 3.9 x86_64 (20200121_13:00) container.

[root@ubuntu1804 ~]# lxc-start alpine1 #start the LXC container
[root@ubuntu1804 ~]# lxc-attach alpine1 #enter the LXC container
~ # ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:DF:9E:
inet addr:10.0.1.51 Bcast:10.0.1.255 Mask:255.255.255.
inet6 addr: fe80::216:3eff:fedf:9e45/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:
RX packets:23 errors:0 dropped:0 overruns:0 frame:
TX packets:12 errors:0 dropped:0 overruns:0 carrier:
collisions:0 txqueuelen:
RX bytes:2484 (2.4 KiB) TX bytes:1726 (1.6 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:
RX packets:0 errors:0 dropped:0 overruns:0 frame:
TX packets:0 errors:0 dropped:0 overruns:0 carrier:
collisions:0 txqueuelen:
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

~ # uname -r
4.15.0-29-generic

~ # uname -a
Linux alpine12 4.15.0-29-generic #31-Ubuntu SMP Tue Jul 17 15:39:52 UTC 2018x86_64 Linux

~ # cat /etc/issue
Welcome to Alpine Linux 9
Kernel \r on an \m (\l)

~ # exit

Option descriptions:

-t template: -t is followed by a template. A template can be thought of as a prototype describing what kind of container is wanted (for example, whether it should contain vim, apache, and so on). A template is really a script (under /usr/share/lxc/templates); specifying the download template here (lxc-create invokes the lxc-download script in that directory) means we have no template of our own and need to download an official one

--name container-name: names the container being created
-- : marks everything after it as arguments passed to the download script, telling it which template to fetch
--dist distribution: selects the operating system
--release release: selects the OS release, which can be any of the Linux variants
--arch architecture: selects the architecture: x86 or ARM, 32-bit or 64-bit

LXC containers depend on templates to start (Tsinghua template mirror: https://mirrors.tuna.tsinghua.edu.cn/help/lxc-images/), but building templates is relatively hard: the filesystem, base directories, and executables have to be constructed step by step by hand; templates are hard to scale out in large container deployments; and later code upgrades require rebuilding the template from scratch. For all of these reasons, Docker came about.

Docker

Docker is essentially an enhanced LXC: more powerful and easier to use, and currently the most popular container front-end management tool.

Docker also needs an external template, called an image, to start a container. Docker images can be kept in a shared public place and used after a simple download; best of all, you can customize on top of an image and commit the result as a new image, and one image can be started as many containers.

Docker images are layered. The lower layers of an image hold library files and are read-only: nothing can be written to or deleted from them. When an image is started as a container, a writable layer is added on top; data written there is stored on the host under the directory belonging to that container, but a container's data is deleted together with the container.

Pouch

Project site: https://github.com/alibaba/pouch

Pouch ("small bag") originated in 2011, and on the morning of November 19, 2017, at the China Open Source Conference, Alibaba formally open-sourced its Apache 2.0-licensed container technology Pouch. Pouch is a lightweight container technology that is fast and efficient, highly portable, and light on resources; it mainly helps Alibaba deliver internal business faster while raising physical resource utilization across very large data centers.

Most current container solutions build isolation on the kernel's cgroups and namespaces, and such lightweight schemes have drawbacks:

  • Containers share a single kernel, with each other and with the host
  • The resource isolation the kernel implements falls short in some dimensions

Facing this kernel reality, Alibaba took three lines of work to address container security:

  • Strengthen container isolation in user space, e.g. network bandwidth and disk usage
  • Submit kernel patches to fix container resource-visibility problems and cgroup bugs
  • Implement hypervisor-based containers, isolating each container by creating a new kernel

Podman

Although Docker is today the best tool for managing Linux containers, bar none, the arrival of Podman is about to change that.

What is Podman?

Podman, the Pod Manager tool, signals by its very name a close relationship with Kubernetes pods. Functionally, in short: alias docker=podman. It is newly integrated in CentOS 8 and may well replace Docker in the near future.

Podman is an open-source container management tool born for Kubernetes. It was originally part of the CRI-O project (CRI being the Kubernetes Container Runtime Interface, OCI the Open Container Initiative) and was later split out into a separate project called libpod. It runs on most Linux platforms and is a daemonless container engine for developing, managing, and running any OCI-compliant containers and container images on Linux systems.

Podman provides a Docker-compatible command-line front end; 87% of Podman's commands are identical to the Docker CLI, so you can simply alias the Docker CLI, i.e. "alias docker=podman". In fact, some of the libraries Podman uses are also part of Docker.

CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to
enable using OCI (Open Container Initiative) compatible runtimes

Official site: https://podman.io/

Project: https://github.com/containers/libpod

How Podman differs from Docker

  • Docker requires a daemon (docker daemon) running on the system, which carries some overhead; Podman does not
  • They start containers differently:
    The docker CLI talks over the API to the Docker Engine to request a container, and the Engine in turn calls the OCI container runtime (runc) to start it. The container process is therefore a child process of the Docker Engine, not of the Docker CLI.
    Podman talks to the OCI container runtime (runc) directly to create the container, so the container process is a direct child process of podman
  • Because Docker has its daemon, containers started by Docker support --restart policies; Podman does not
  • Docker needs the root user to create containers, which can be a security risk, especially once users discover docker run's --privileged option; Podman can be run either by root or by an unprivileged user
  • Docker running as a daemon on Linux stifled innovation in the container community: to change how containers work, you had to change the docker daemon and push those changes upstream. Without a daemon, the container infrastructure is more modular and easier to change. Podman's daemonless architecture is more flexible and more secure.

Advantages of Docker

  • Fast deployment: hundreds or thousands of applications can be deployed and delivered to production in a short time
  • Efficient virtualization: no extra hypervisor needed; applications are virtualized on the Linux kernel itself, with far better performance and efficiency than virtual machines
  • Cost savings: higher server utilization, lower IT spend
  • Simple configuration: the runtime environment is packaged into the container and simply started when needed
  • Consistent environments: development, test, and production environments are standardized and unified, reducing the problems caused by environment differences
  • Fast migration and scaling: runs across physical machines, virtual machines, public clouds, and more; good compatibility makes it easy to move an application from host A to host B, or even from platform A to platform B
  • Better support for service-oriented architecture: one application per container is recommended, enabling a distributed application model that scales out easily, matching the high-cohesion/low-coupling goal of development and reducing interference between services

Disadvantages of Docker

  • Containers share the host's kernel, so isolation between applications is less complete than with virtual machines
  • Because container processes are isolated from the host, entering a container to inspect and debug its processes and other resources is comparatively awkward
  • If processes inside containers need inspection and debugging, the corresponding tools must be installed in every container, wasting storage through duplication

Container-Related Technologies

Container specifications

OCI site: https://opencontainers.org/

Besides Docker there are other container technologies, such as CoreOS's rkt and Alibaba's Pouch. To keep the container ecosystem standard, healthy, and sustainable, the Linux Foundation together with Docker, Microsoft, Red Hat, Google, IBM, and other companies founded the Open Container Initiative (OCI) in June 2015 to define open, standard container specifications. The OCI has so far published two specifications, the runtime-spec and the image-spec; as long as containers built by different vendors comply with these two specs, their portability and interoperability are guaranteed.

Container runtimes

The runtime is where containers actually run. To run different containers, the runtime must cooperate closely with the operating-system kernel so as to provide each container with its running environment.

Runtime types:

  • LXC: the early runtime on Linux. When Docker was first released in 2013 it used LXC as its runtime, simplifying LXC's complex container creation and usage into Docker's own command set. As Docker evolved, LXC could no longer meet its needs, for example cross-platform support
  • libcontainer: as Docker developed, it redefined the container implementation standard, abstracting the low-level implementation behind the libcontainer interface. The underlying container implementation thus became pluggable: whether built on namespaces and cgroups, on systemd, or on some other mechanism, anything implementing the libcontainer interfaces can be driven by Docker. This opened the way to fully cross-platform Docker.
  • runc: libcontainer was originally an open-source project controlled by Docker Inc. After the OCI was founded, Docker handed the libcontainer project over to the OCI; runC evolved from libcontainer, conforms to the OCI specification, and is Docker's default runtime today
  • rkt: a container runtime developed by CoreOS, also OCI-compliant, so the rkt runtime can run Docker containers as well

Example: checking Docker's runtime

[root@ubuntu1804 ~]# docker info
Client:
Debug Mode: false

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 19.03.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc #Runtimes
Default Runtime: runc #runtime
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-29-generic
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 962MiB
Name: ubuntu1804.wang.org
ID: G2JQ:M4DG:CW74:EETR:GU5U:OROC:ZN2F:RKSA:YQY2:XJYX:OHG7:SSVE
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Container management tools

Management tools connect the runtime with the user, offering a graphical or command-line interface; the tool passes the user's operations on to the runtime for execution.

  • LXD is the management tool for the LXC runtime
  • runc's management tool is the Docker Engine, which consists of a background daemon and a CLI; what people usually call "Docker" is the Docker Engine
  • rkt's management tool is the rkt CLI

Example: checking the Docker Engine

[root@ubuntu1804 ~]# docker version
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:29:52 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:28:22 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683

Image Registries

A place that stores images, in multiple different versions, is called an image registry

  • Docker Hub: Docker's official public registry, already holding a large number of common images ready for direct use
  • Public registries with third-party images, e.g. Alibaba Cloud and NetEase
  • Docker Registry: Docker's official tool for deploying a private registry; no web management UI, little used now
  • Harbor: VMware's private image registry with a built-in web UI and authentication, used by many companies today

Example: image reference formats

docker.io/library/alpine
harbor.wang.org/project/centos:7.2.1511
registry.cn-hangzhou.aliyuncs.com/wangxiaochun/magedu:v1
172.18.200.101/project/centos:latest
172.18.200.101/project/java-7.0.59:v1
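The general shape is registry-host[:port]/project/repository[:tag]. A small sketch of splitting such a reference apart with plain shell parameter expansion (the sample value is one of the references above):

```shell
ref="harbor.wang.org/project/centos:7.2.1511"
registry=${ref%%/*}                  # up to the first "/"          -> harbor.wang.org
repo=${ref#*/}; repo=${repo%%:*}     # between the "/" and the ":"  -> project/centos
tag=${ref##*:}                       # after the last ":"           -> 7.2.1511
echo "registry=$registry repo=$repo tag=$tag"
```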

Container Orchestration Tools

When many containers run across many hosts, managing each container individually is complex and error-prone; there is no way to migrate containers automatically off a failed host for high availability, and no dynamic scaling. A tool is therefore needed that provides unified management, dynamic scaling, self-healing, batch execution, and the like: the container orchestration engine

Container orchestration typically covers container management, scheduling, cluster definition, and service discovery

  • Docker Compose: Docker's official tool for orchestrating containers on a single machine
  • Docker Swarm: Docker's official container orchestration engine, with overlay network support
  • Mesos + Marathon: Mesos is an open-source distributed resource-management framework under Apache, often called the kernel of distributed systems. It was originally developed at UC Berkeley's AMPLab and later widely used at Twitter. As a general-purpose cluster scheduling platform, Mesos (resource allocation) together with Marathon (container orchestration platform) provides the functionality of an orchestration engine
  • Kubernetes: the container orchestration engine led by Google, descended from its internal Borg project, supporting both Docker and CoreOS; it has become the de-facto standard among container orchestration tools

Docker Installation and Basic Commands

Preparing to install Docker

Official site: https://www.docker.com/

Choosing an OS version:

Docker can be installed and run on many operating systems, such as Ubuntu, CentOS, Red Hat, Debian, and Fedora, and it even supports macOS and Windows; on Linux it requires kernel 3.10 or later

Choosing a Docker version:

Docker version numbers used to be 0.x or 1.x, from the first release 0.1.1-1 on March 13, 2013, through 1.13.1 on February 8, 2017

Since March 1, 2017, a stable version has been released every quarter, and the version scheme changed to YY.MM.xx; the first such version was 17.03.0, so for example 17.09 means released in September 2017
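The YY.MM scheme can be decoded mechanically; a tiny sketch with a sample version string (no real release lookup involved):

```shell
v="17.09.0"
yy=${v%%.*}                    # "17" -> year 2017
rest=${v#*.}; mm=${rest%%.*}   # "09" -> September
echo "version $v was released in 20$yy-$mm"
```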

Docker previously had no edition split, but in 2017 the project was renamed Moby (GitHub: https://github.com/moby/moby); Moby is the new upstream of the Docker project, and Docker became a product built from Moby. From then on, releases are split into CE (Docker Community Edition) and EE (Docker Enterprise Edition, the paid edition). Both CE and EE release a new version every quarter, but EE releases receive security maintenance for one year, CE releases for four months. The official wording:

https://blog.docker.com/2017/03/docker-enterprise-edition/

Docker CE and EE are released quarterly, and CE also has a monthly “Edge” option.
Each Docker EE release is supported and maintained for one year and receives
security and critical bugfixes during that period. We are also improving Docker
CE maintainability by maintaining each quarterly CE release for 4 months. That
gets Docker CE users a new 1-month window to update from one version to the next.

If you are deploying onto Kubernetes, check the Kubernetes release notes for its Docker version requirements, for example:

https://github.com/kubernetes/kubernetes/blob/v1.17.2/CHANGELOG-1.17.md

Installing and removing Docker

Official documentation: https://docs.docker.com/engine/install/

Alibaba Cloud documentation: https://developer.aliyun.com/mirror/docker-ce?spm=a2c6h.13651102.0.0.3e221b11guHCWE

Installing and removing Docker on Ubuntu

Official documentation: https://docs.docker.com/install/linux/docker-ce/ubuntu/

Installing Docker on Ubuntu 14.04/16.04/18.04/20.04

# Step 1: install some required system tools
sudo apt-get update
sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

# Step 2: install the GPG key
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

# Step 3: add the repository
sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

# Step 4: update the index and install Docker CE
sudo apt-get -y update
sudo apt-get -y install docker-ce

# To install a specific version of Docker CE:
# Step 1: list the available versions:
apt-cache madison docker-ce
docker-ce | 5:19.03.5~3-0~ubuntu-bionic | https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic/stable amd64 Packages

# Step 2: install the chosen version (VERSION as printed above, e.g. 5:19.03.5~3-0~ubuntu-bionic)
sudo apt-get -y install docker-ce=[VERSION] docker-ce-cli=[VERSION]
# Example: pin a specific version
apt-get -y install docker-ce=5:18.09.9~3-0~ubuntu-bionic docker-ce-cli=5:18.09.9~3-0~ubuntu-bionic

Removing Docker

[root@ubuntu ~]# apt purge docker-ce
[root@ubuntu ~]# rm -rf /var/lib/docker

Example: installing Docker from the distribution's built-in repository

[root@ubuntu2004 ~]# apt -y install docker.io
[root@ubuntu2004 ~]# docker version
Client:
Version: 20.10.12
API version: 1.41
Go version: go1.16.2
Git commit: 20.10.12-0ubuntu2~20.04.1
Built: Wed Apr 6 02:14:38 2022
OS/Arch: linux/amd64
Context: default
Experimental: true

Server:
Engine:
Version: 20.10.12
API version: 1.41 (minimum version 1.12)
Go version: go1.16.2
Git commit: 20.10.12-0ubuntu2~20.04.1
Built: Thu Feb 10 15:03:35 2022
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.5.9-0ubuntu1~20.04.4
GitCommit:
runc:
Version: 1.1.0-0ubuntu1~20.04.1
GitCommit:
docker-init:
Version: 0.19.0
GitCommit:

[root@ubuntu2004 ~]# docker info
Client:
Context: default
Debug Mode: false

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 20.10.12
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk
syslog
Swarm: inactive
Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
Default Runtime: runc
Init Binary: docker-init
containerd version:
runc version:
init version:
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.4.0-89-generic
Operating System: Ubuntu 20.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.913GiB
Name: ubuntu2004.wang.org
ID: UF2A:GX7G:OIWE:W35O:EFSB:5WBH:AYCZ:N37P:YCIF:4AXD:D3IL:NCI4
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Example: installing a specific version

# Step 1: install some required system tools
[root@ubuntu2004 ~]# sudo apt-get update
[root@ubuntu2004 ~]# sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

# Step 2: install the GPG key
[root@ubuntu2004 ~]# curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

# Step 3: add the repository
[root@ubuntu2004 ~]# sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

# Step 4: update the package index
[root@ubuntu2004 ~]# sudo apt-get -y update

# To install a specific version of Docker CE:
# Step 1: list the available versions:
[root@ubuntu2004 ~]# apt-cache madison docker-ce
# docker-ce | 17.03.1~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages
# docker-ce | 17.03.0~ce-0~ubuntu-xenial | https://mirrors.aliyun.com/docker-ce/linux/ubuntu xenial/stable amd64 Packages

# Step 2: install the chosen version (VERSION as printed above, e.g. 17.03.1~ce-0~ubuntu-xenial)
# sudo apt-get -y install docker-ce=[VERSION]
[root@ubuntu2004 ~]# apt-cache madison docker-ce
docker-ce | 5:20.10.17~3-0~ubuntu-focal | https://mirrors.aliyun.com/docker-ce/linux/ubuntu focal/stable amd64 Packages
docker-ce | 5:20.10.16~3-0~ubuntu-focal | https://mirrors.aliyun.com/docker-ce/linux/ubuntu focal/stable amd64 Packages
docker-ce | 5:20.10.15~3-0~ubuntu-focal | https://mirrors.aliyun.com/docker-ce/linux/ubuntu focal/stable amd64 Packages

[root@ubuntu2004 ~]# apt install docker-ce=5:20.10.10~3-0~ubuntu-focal docker-ce-cli=5:20.10.10~3-0~ubuntu-focal

[root@ubuntu2004 ~]# docker version
Client: Docker Engine - Community
Version: 20.10.10
API version: 1.41
Go version: go1.16.9
Git commit: b485636
Built: Mon Oct 25 07:42:59 2021
OS/Arch: linux/amd64
Context: default
Experimental: true

Server: Docker Engine - Community
Engine:
Version: 20.10.10
API version: 1.41 (minimum version 1.12)
Goversion: go1.16.9
Git commit: e2f740d
Built: Mon Oct 25 07:41:08 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.6
GitCommit: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc:
Version: 1.1.2
GitCommit: v1.1.2-0-ga916309
docker-init:
Version: 0.19.0
GitCommit: de40ad0

Installing and removing Docker on CentOS

Official documentation: https://docs.docker.com/install/linux/docker-ce/centos/

CentOS 6's kernel is too old: even where Docker can be installed it suffers all kinds of problems, so installing it there is not recommended

CentOS 7's extras repository can install Docker, but the package is old; downloading Docker from the official repository or a mirror site is recommended

CentOS 8 ships the newer podman as a replacement for Docker

Docker is therefore best installed on CentOS 7

#in the extras repo the package is named docker
[root@centos7 ~]# yum list docker
Loaded plugins: fastestmirror
Repository base is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Loading mirror speeds from cached hostfile
* base: mirrors.tuna.tsinghua.edu.cn
* extras: mirrors.tuna.tsinghua.edu.cn
* updates: mirrors.tuna.tsinghua.edu.cn
Available Packages
docker.x86_64 2:1.13.1-103.git7f2769b.el7.centos
extras

Installing from downloaded rpm packages:

Official rpm download address:

https://download.docker.com/linux/centos/7/x86_64/stable/Packages/

Alibaba mirror download address:

https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/

Installing via a yum repository:

The official yum repository is too slow, so the Alibaba Cloud yum repository is used for the installation below

rm -rf /etc/yum.repos.d/*

#Installing docker on CentOS 7 depends on three yum repos: Base, Extras, and docker-ce
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum clean all
yum -y install docker-ce
systemctl enable --now docker

Removing docker

[root@centos7 ~]# yum remove docker-ce

#Remove the files where docker stores its data
[root@centos7 ~]# rm -rf /var/lib/docker

Example: install a specific version

[root@rocky8 ~]# cat /etc/yum.repos.d/docker.repo
[docker-ce]
name=docker-ce
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable/
gpgcheck=0

[root@rocky8 ~]# yum list docker-ce --showduplicates
Last metadata expiration check: 0:00:13 ago on Mon 07 Apr 2025 08:41:43 AM CST.
Available Packages
docker-ce.x86_64 3:19.03.13-3.el8 docker-ce
docker-ce.x86_64 3:19.03.14-3.el8 docker-ce
docker-ce.x86_64 3:19.03.15-3.el8 docker-ce
docker-ce.x86_64 3:20.10.0-3.el8 docker-ce
docker-ce.x86_64 3:20.10.1-3.el8 docker-ce
docker-ce.x86_64 3:20.10.2-3.el8 docker-ce
docker-ce.x86_64 3:20.10.3-3.el8 docker-ce
docker-ce.x86_64 3:20.10.4-3.el8 docker-ce
docker-ce.x86_64 3:20.10.5-3.el8 docker-ce
docker-ce.x86_64 3:20.10.6-3.el8 docker-ce
docker-ce.x86_64 3:20.10.7-3.el8 docker-ce
docker-ce.x86_64 3:20.10.8-3.el8 docker-ce
docker-ce.x86_64 3:20.10.9-3.el8 docker-ce
docker-ce.x86_64 3:20.10.10-3.el8 docker-ce
docker-ce.x86_64 3:20.10.11-3.el8 docker-ce
docker-ce.x86_64 3:20.10.12-3.el8 docker-ce
docker-ce.x86_64 3:20.10.13-3.el8 docker-ce
docker-ce.x86_64 3:20.10.14-3.el8 docker-ce
docker-ce.x86_64 3:20.10.15-3.el8 docker-ce
docker-ce.x86_64 3:20.10.16-3.el8 docker-ce
docker-ce.x86_64 3:20.10.17-3.el8 docker-ce
docker-ce.x86_64 3:20.10.18-3.el8 docker-ce
docker-ce.x86_64 3:20.10.19-3.el8 docker-ce
docker-ce.x86_64 3:20.10.20-3.el8 docker-ce
docker-ce.x86_64 3:20.10.21-3.el8 docker-ce
docker-ce.x86_64 3:20.10.22-3.el8 docker-ce
docker-ce.x86_64 3:20.10.23-3.el8 docker-ce
docker-ce.x86_64 3:20.10.24-3.el8 docker-ce
docker-ce.x86_64 3:23.0.0-1.el8 docker-ce
docker-ce.x86_64 3:23.0.1-1.el8 docker-ce
docker-ce.x86_64 3:23.0.2-1.el8 docker-ce
docker-ce.x86_64 3:23.0.3-1.el8 docker-ce
docker-ce.x86_64 3:23.0.4-1.el8 docker-ce
docker-ce.x86_64 3:23.0.5-1.el8 docker-ce
docker-ce.x86_64 3:23.0.6-1.el8 docker-ce
docker-ce.x86_64 3:24.0.0-1.el8 docker-ce
docker-ce.x86_64 3:24.0.1-1.el8 docker-ce
docker-ce.x86_64 3:24.0.2-1.el8 docker-ce
docker-ce.x86_64 3:24.0.3-1.el8 docker-ce
docker-ce.x86_64 3:24.0.4-1.el8 docker-ce
docker-ce.x86_64 3:24.0.5-1.el8 docker-ce
docker-ce.x86_64 3:24.0.6-1.el8 docker-ce
docker-ce.x86_64 3:24.0.7-1.el8 docker-ce
docker-ce.x86_64 3:24.0.8-1.el8 docker-ce
docker-ce.x86_64 3:24.0.9-1.el8 docker-ce
docker-ce.x86_64 3:25.0.0-1.el8 docker-ce
docker-ce.x86_64 3:25.0.1-1.el8 docker-ce
docker-ce.x86_64 3:25.0.2-1.el8 docker-ce
docker-ce.x86_64 3:25.0.3-1.el8 docker-ce
docker-ce.x86_64 3:25.0.4-1.el8 docker-ce
docker-ce.x86_64 3:25.0.5-1.el8 docker-ce
docker-ce.x86_64 3:26.0.0-1.el8 docker-ce
docker-ce.x86_64 3:26.0.1-1.el8 docker-ce
docker-ce.x86_64 3:26.0.2-1.el8 docker-ce
docker-ce.x86_64 3:26.1.0-1.el8 docker-ce
docker-ce.x86_64 3:26.1.1-1.el8 docker-ce
docker-ce.x86_64 3:26.1.2-1.el8 docker-ce
docker-ce.x86_64 3:26.1.3-1.el8 docker-ce

[root@rocky8 ~]# yum install docker-ce-3:26.1.3-1.el8 docker-ce-cli-1:26.1.3-1.el8 -y

[root@rocky8 ~]# docker version
Client: Docker Engine - Community
Version: 26.1.3
API version: 1.45
Go version: go1.21.10
Git commit: b72abbb
Built: Thu May 16 08:34:39 2024
OS/Arch: linux/amd64
Context: default
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

[root@rocky8 ~]# systemctl enable --now docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
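With several dozen builds in the repo, picking "the newest build in a given branch" by eye is error-prone. A sketch of doing it with `sort -V` follows; the here-document stands in for real `yum list docker-ce --showduplicates` output, and `latest_2010` is a name chosen here for illustration.

```shell
# Extract the newest 20.10.x build from a yum-style listing; sort -V compares
# version numbers numerically, so 20.10.24 sorts after 20.10.9.
latest_2010() {
    awk '{print $2}' | grep -oE '20\.10\.[0-9]+-[0-9]+\.el8' | sort -V | tail -n1
}

latest_2010 <<'EOF'
docker-ce.x86_64 3:20.10.9-3.el8 docker-ce
docker-ce.x86_64 3:20.10.24-3.el8 docker-ce
docker-ce.x86_64 3:20.10.10-3.el8 docker-ce
EOF
```

On a live system the function's input would come from `yum list docker-ce --showduplicates`.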

Example: installing docker on CentOS 7 via the Aliyun mirror

Aliyun instructions: https://developer.aliyun.com/mirror/docker-ce?spm=a2c6h.13651102.0.0.3e221b11sUMKNV

# Step 1: install some required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repo definition
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: refresh the cache and install Docker CE
yum makecache fast
yum -y install docker-ce
# Step 4: start the Docker service
service docker start

# Note:
# The repo enables only the latest stable packages by default; other channels can be enabled by editing the repo file. For example, the test channel is disabled by default and can be enabled as follows (other channels work the same way):
# vim /etc/yum.repos.d/docker-ce.repo
# Under [docker-ce-test], change enabled=0 to enabled=1
#
# To install a specific version of Docker CE:
# Step 1: list the available Docker CE versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
# Loading mirror speeds from cached hostfile
# Loaded plugins: branch, fastestmirror, langpacks
# docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
# docker-ce.x86_64 17.03.1.ce-1.el7.centos @docker-ce-stable
# docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
# Available Packages
# Step 2: install the chosen version (VERSION is e.g. 17.03.0.ce-1.el7.centos from the list above)
yum -y install docker-ce-[VERSION]

#Example
[root@centos7 ~]# yum -y install docker-ce-19.03.12-3.el7
[root@centos7 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)

[root@centos7 ~]# ls /etc/yum.repos.d/

Example: installing a specific docker version on CentOS 7

[root@centos7 ~]# cat /etc/redhat-release 
CentOS Linux release 7.6.1810 (Core)

[root@centos7 ~]# ls /etc/yum.repos.d/
backup base.repo

[root@centos7 ~]# wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Saving to: ‘/etc/yum.repos.d/docker-ce.repo’
100%[====================================================================>]
2,640 --.-K/s in 0s

2020-01-23 21:56:21 (505 MB/s) - ‘/etc/yum.repos.d/docker-ce.repo’ saved [2640/2640]

[root@centos7 ~]# ls /etc/yum.repos.d/
backup base.repo docker-ce.repo

[root@centos7 ~]# yum clean all
Loaded plugins: fastestmirror
Cleaning repos: base docker-ce-stable epel extras
Cleaning up list of fastest mirrors

[root@centos7 ~]# yum repolist
[root@centos7 ~]# yum list docker-ce* --showduplicates | sort -r
Loading mirror speeds from cached hostfile
Loaded plugins: fastestmirror
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
......

[root@centos7 ~]# yum -y install docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9-3.el7

[root@centos7 ~]# docker version
Client:
Version: 18.09.9
API version: 1.39
Go version: go1.11.13
Git commit: 039a7df9ba
Built: Wed Sep 4 16:51:21 2019
OS/Arch: linux/amd64
Experimental: false
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

[root@centos7 ~]# systemctl enable --now docker
[root@centos7 ~]# docker version
Client:
Version: 18.09.9
API version: 1.39
Go version: go1.11.13
Git commit: 039a7df9ba
Built: Wed Sep 4 16:51:21 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.9
API version: 1.39 (minimum version 1.12)
Go version: go1.11.13
Git commit: 039a7df
Built: Wed Sep 4 16:22:32 2019
OS/Arch: linux/amd64
Experimental: false

Example: installing docker on CentOS 8

[root@centos8 ~]# tee /etc/yum.repos.d/docker.repo <<EOF
[docker]
name=docker
gpgcheck=0
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable/
EOF

[root@centos8 ~]# dnf -y install docker-ce

Linux binary installation

This method is suitable for installing docker on hosts that cannot reach the Internet or cannot use package-based installation.

Installation documentation: https://docs.docker.com/install/linux/docker-ce/binaries/

Binary download locations

https://download.docker.com/linux/

https://mirrors.aliyun.com/docker-ce/linux/static/stable/x86_64/
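To see which static tarballs a mirror offers without opening a browser, the index page can be scraped. In this sketch, the here-document stands in for the HTML that `curl -fsSL` would return from one of the index URLs above, and `list_tarballs` is a name chosen here for illustration.

```shell
# Pull docker-<version>.tgz names out of a mirror index page, oldest first.
# sort -Vu version-sorts the names and drops the duplicate matches from
# each <a href> line.
list_tarballs() {
    grep -oE 'docker-[0-9]+\.[0-9]+\.[0-9]+\.tgz' | sort -Vu
}

list_tarballs <<'EOF'
<a href="docker-19.03.5.tgz">docker-19.03.5.tgz</a>
<a href="docker-20.10.10.tgz">docker-20.10.10.tgz</a>
<a href="docker-20.10.9.tgz">docker-20.10.9.tgz</a>
EOF
```

On a connected host, `curl -fsSL <index URL> | list_tarballs` would do the same against the live mirror.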

Example: binary installation of docker on CentOS 8

[root@centos8 ~]# wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.5.tgz

[root@centos8 ~]# tar xvf docker-19.03.5.tgz
docker/
docker/docker-init
docker/docker
docker/dockerd
docker/runc
docker/ctr
docker/docker-proxy
docker/containerd
docker/containerd-shim

[root@centos8 ~]# cp docker/* /usr/bin/

#Start the dockerd daemon
[root@centos8 ~]# dockerd &>/dev/null &

[root@centos8 ~]# docker version
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:22:05 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:28:45 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683

[root@centos8 ~]# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:9572f7cdcee8591948c2963463447a53466950b3fc15a247fcad1917ca215a2f
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:

1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

[root@centos8 ~]# pstree -p
systemd(1)─┬─NetworkManager(829)─┬─{NetworkManager}(846)
│ └─{NetworkManager}(847)
├─agetty(855)
├─atd(854)
├─auditd(792)───{auditd}(793)
├─chronyd(838)
├─containerd(2589)─┬─{containerd}(2590)
│ ├─{containerd}(2591)
│ ├─{containerd}(2592)
│ ├─{containerd}(2593)
│ ├─{containerd}(2594)
│ ├─{containerd}(2595)
│ ├─{containerd}(2596)
│ ├─{containerd}(2597)
│ └─{containerd}(2599)
├─crond(859)
├─dbus-daemon(823)
├─dockerd(2600)─┬─{dockerd}(2601)
│ ├─{dockerd}(2602)
│ ├─{dockerd}(2603)
│ ├─{dockerd}(2604)
│ ├─{dockerd}(2605)
│ ├─{dockerd}(2606)
│ ├─{dockerd}(2607)
│ ├─{dockerd}(2608)
│ └─{dockerd}(2609)
├─irqbalance(816)───{irqbalance}(819)
├─lsmd(815)
├─polkitd(1102)─┬─{polkitd}(1109)
│ ├─{polkitd}(1110)
│ ├─{polkitd}(1112)
│ ├─{polkitd}(1113)
│ ├─{polkitd}(1114)
│ ├─{polkitd}(1116)
│ └─{polkitd}(1122)
├─smartd(837)
├─sshd(850)───sshd(1125)───sshd(1137)───bash(1138)───pstree(2768)
├─systemd(1129)───(sd-pam)(1131)
├─systemd-journal(649)
├─systemd-logind(817)
├─systemd-udevd(677)
└─tuned(848)─┬─{tuned}(1101)
├─{tuned}(1115)
└─{tuned}(1118)
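
After copying the static binaries into /usr/bin as above, a quick sanity check confirms each tool landed on PATH. This is a sketch; `check_tools` is a name chosen here, and the tool list simply mirrors the tarball contents shown earlier.

```shell
# Report which of the tools shipped in the tarball are now on PATH.
check_tools() {
    local b
    for b in docker dockerd containerd containerd-shim ctr runc docker-init docker-proxy; do
        if command -v "$b" >/dev/null 2>&1; then
            echo "$b: $(command -v "$b")"
        else
            echo "$b: MISSING from PATH"
        fi
    done
}
check_tools
```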

Example: create the service file

[root@centos8 ~]# cat > /lib/systemd/system/docker.service <<-EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

[root@centos8 ~]# systemctl daemon-reload
[root@centos8 ~]# systemctl enable --now docker

Example: create the related service files by copying them (this method has problems with newer versions)

#Create the related service files; this method has problems with newer versions
[root@centos8 ~]# groupadd -r docker

#Copy the files installed by the package-based installs on Ubuntu 18.04 or CentOS 7 into the corresponding directories
[root@ubuntu1804 ~]# ll /lib/systemd/system/docker.*
-rw-r--r-- 1 root root 1683 Jun 22 23:44 /lib/systemd/system/docker.service
-rw-r--r-- 1 root root 197 Jun 22 23:44 /lib/systemd/system/docker.socket

[root@ubuntu1804 ~]# ll /lib/systemd/system/containerd.service
-rw-r--r-- 1 root root 1085 May 2 2020 /lib/systemd/system/containerd.service

[root@ubuntu1804 ~]# cat /lib/systemd/system/docker.socket
[Unit]
Description=Docker Socket for the API
PartOf=docker.service

[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target

[root@ubuntu1804 ~]# cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
#for containers run by docker

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always


# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

[root@ubuntu1804 ~]# cat /lib/systemd/system/containerd.service

# Copyright 2018-2020 Docker Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/bin/containerd
KillMode=process
Delegate=yes
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity

[Install]
WantedBy=multi-user.target

[root@ubuntu1804 ~]# scp /lib/systemd/system/docker.* /lib/systemd/system/containerd.service 10.0.0.8:/lib/systemd/system/

[root@centos8 ~]# systemctl daemon-reload
[root@centos8 ~]# systemctl enable --now docker

Example: one-step offline binary install of docker

#!/bin/bash

DOCKER_VERSION=20.10.10
URL=https://mirrors.aliyun.com

prepare () {
if [ ! -e docker-${DOCKER_VERSION}.tgz ];then
wget ${URL}/docker-ce/linux/static/stable/x86_64/docker-${DOCKER_VERSION}.tgz
fi
[ $? -ne 0 ] && { echo "file download failed"; exit; }
}

install_docker () {
tar xf docker-${DOCKER_VERSION}.tgz -C /usr/local/
cp /usr/local/docker/* /usr/bin/
cat > /lib/systemd/system/docker.service <<-EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
}
start_docker (){
systemctl enable --now docker
docker version
}

prepare
install_docker
start_docker

Installing podman

Example: installing podman on CentOS 8

#On CentOS 8, installing docker actually installs podman; the docker command is just a script that calls podman
[root@centos8 ~]# dnf install docker
[root@centos8 ~]# rpm -ql podman-docker
/usr/bin/docker

[root@centos8 ~]# cat /usr/bin/docker
#!/bin/sh
[ -f /etc/containers/nodocker ] || \
echo "Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg." >&2
exec /usr/bin/podman "$@"

[root@centos8 ~]# podman version
Version: 1.4.2-stable2
RemoteAPI Version: 1
Go Version: go1.12.8
OS/Arch: linux/amd64

#Adjust the registry search order to speed up image pulls
[root@centos8 ~]# vim /etc/containers/registries.conf
[registries.search]
registries = ['docker.io', 'quay.io', 'registry.redhat.io', 'registry.access.redhat.com']

One-step docker install scripts for different systems

One-step docker install script for Ubuntu 18.04 and 20.04
[root@ubuntu1804 ~]# cat install_docker_ubuntu.sh
#!/bin/bash
#Description: Install docker on Ubuntu18.04 and 20.04
#Version:1.0
#Date:2020-01-22

COLOR="echo -e \\033[1;31m"
END="\033[m"
DOCKER_VERSION="5:19.03.5~3-0~ubuntu-bionic"

install_docker(){
dpkg -s docker-ce &> /dev/null && ${COLOR}"Docker is already installed, exiting"${END} && exit
apt update
apt -y install apt-transport-https ca-certificates curl software-properties-common

#curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
#add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

curl -fsSL https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu $(lsb_release -cs) stable"

apt update
${COLOR}"The following Docker versions are available"${END}
apt-cache madison docker-ce
${COLOR}"Installing docker-"${DOCKER_VERSION}" in 5 seconds....."${END}
${COLOR}"To install a different Docker version, press ctrl+c to quit, change the version, and rerun"${END}
sleep 5

apt -y install docker-ce=${DOCKER_VERSION} docker-ce-cli=${DOCKER_VERSION}

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}
EOF

systemctl daemon-reload
systemctl enable --now docker
docker version && ${COLOR}"Docker installed successfully"${END} || ${COLOR}"Docker install failed"${END}
}

install_docker
One-step docker install script for CentOS 8
Script 1

Implemented using Aliyun's docker yum repo for CentOS 8

#!/bin/bash

. /etc/init.d/functions
COLOR="echo -e \\E[1;32m"
END="\\E[0m"
DOCKER_VERSION="-19.03.13-3.el8"

install_docker() {
rpm -q docker-ce &> /dev/null && action "Docker is already installed" && exit
${COLOR}"Starting Docker install....."${END}
sleep 1
cat > /etc/yum.repos.d/docker.repo <<EOF
[docker]
name=docker
gpgcheck=0
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/8/x86_64/stable/
#baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable/
EOF

yum clean all
yum -y install docker-ce$DOCKER_VERSION docker-ce-cli$DOCKER_VERSION \
|| { ${COLOR}"Install failed; check the Base and Extras yum repo configuration"${END};exit; }

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}
EOF

systemctl enable --now docker
docker version && ${COLOR}"Docker installed successfully"${END} || ${COLOR}"Docker install failed"${END}

}

install_docker
Script 2

Early CentOS 8 had no docker yum repo; the script below can be used to install docker

#!/bin/bash

. /etc/init.d/functions
COLOR="echo -e \\E[1;32m"
END="\\E[0m"
DOCKER_VERSION="-19.03.8-3.el7"

install_docker() {

${COLOR}"Starting Docker install....."${END}
sleep 1

#wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo || { ${COLOR}"Internet connection failed; check your network configuration!"${END};exit; }
wget -P /etc/yum.repos.d/ https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo || { ${COLOR}"Internet connection failed; check your network configuration!"${END};exit; }
yum clean all

dnf -y install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
yum -y install docker-ce$DOCKER_VERSION docker-ce-cli$DOCKER_VERSION \
|| { ${COLOR}"Install failed; check the Base and Extras yum repo configuration"${END};exit; }

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}
EOF

systemctl enable --now docker
docker version && ${COLOR}"Docker installed successfully"${END} || ${COLOR}"Docker install failed"${END}
}

rpm -q docker &> /dev/null && action "Docker is already installed" || install_docker
One-step docker install script for CentOS 7
[root@centos7 ~]# cat install_docker_for_centos7.sh
#!/bin/bash

. /etc/init.d/functions
COLOR="echo -e \\033[1;31m"
END="\033[m"
VERSION="19.03.5-3.el7"


rpm -q docker-ce &> /dev/null && action "Docker is already installed" && exit

wget -P /etc/yum.repos.d/ https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo || { ${COLOR}"Internet connection failed; check your network configuration!"${END};exit; }

#wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo || { ${COLOR}"Internet connection failed; check your network configuration!"${END};exit; }

yum clean all
yum -y install docker-ce-$VERSION docker-ce-cli-$VERSION || { ${COLOR}"Install failed; check the Base and Extras yum repo configuration"${END};exit; }

#Use the Aliyun registry mirror to speed up image pulls
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}
EOF

systemctl enable --now docker
docker version && ${COLOR}"Docker installed successfully"${END} || ${COLOR}"Docker install failed"${END}
Universal Docker install script

Download the universal install script from the Docker official site

[root@ubuntu1804 ~]# curl -fsSL get.docker.com -o get-docker.sh
[root@ubuntu1804 ~]# sh get-docker.sh --mirror Aliyun

Docker program environment

Environment configuration files:

/etc/sysconfig/docker-network
/etc/sysconfig/docker-storage
/etc/sysconfig/docker

Unit File:

/usr/lib/systemd/system/docker.service

docker-ce configuration file:

/etc/docker/daemon.json
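dockerd reads this file at startup. A sketch of assembling one with a few common options and syntax-checking it before it goes live follows; the mirror URL is the one used elsewhere in this document, and the option values are examples, not recommendations.

```shell
# Stage a daemon.json in a temp file, validate the JSON, then (as root)
# it could be moved into place and the daemon restarted.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"],
  "data-root": "/var/lib/docker",
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
# python3 -m json.tool is used here purely as a JSON syntax checker
python3 -m json.tool "$conf" > /dev/null && echo "daemon.json OK"
# install -D -m 644 "$conf" /etc/docker/daemon.json && systemctl restart docker
```

A syntax error in daemon.json prevents dockerd from starting at all, which is why validating before restarting is worthwhile.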

Docker Registry configuration file:

/etc/containers/registries.conf

Example: viewing docker-related files on Ubuntu

##Server-side files

[root@ubuntu1804 ~]# dpkg -L docker-ce
/.
/etc
/etc/default
/etc/default/docker
/etc/init
/etc/init/docker.conf
/etc/init.d
/etc/init.d/docker
/lib
/lib/systemd
/lib/systemd/system
/lib/systemd/system/docker.service
/lib/systemd/system/docker.socket
/usr
/usr/bin
/usr/bin/docker-init
/usr/bin/docker-proxy
/usr/bin/dockerd
/usr/share
/usr/share/doc
/usr/share/doc/docker-ce
/usr/share/doc/docker-ce/README.md
/usr/share/doc/docker-ce/changelog.Debian.gz
/var
/var/lib
/var/lib/docker-engine
/var/lib/docker-engine/distribution_based_engine.json

##Client-side files

[root@ubuntu1804 ~]# dpkg -L docker-ce-cli
/.
/usr
/usr/bin
/usr/bin/docker
/usr/libexec
/usr/libexec/docker
/usr/libexec/docker/cli-plugins
/usr/libexec/docker/cli-plugins/docker-app
/usr/libexec/docker/cli-plugins/docker-buildx
/usr/share
/usr/share/bash-completion
/usr/share/bash-completion/completions
/usr/share/bash-completion/completions/docker
/usr/share/doc
/usr/share/doc/docker-ce-cli
/usr/share/doc/docker-ce-cli/changelog.Debian.gz
/usr/share/fish
/usr/share/fish/vendor_completions.d
/usr/share/fish/vendor_completions.d/docker.fish
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/docker-attach.1.gz
/usr/share/man/man1/docker-build.1.gz
/usr/share/man/man1/docker-builder-build.1.gz
/usr/share/man/man1/docker-builder-prune.1.gz
/usr/share/man/man1/docker-builder.1.gz
/usr/share/man/man1/docker-checkpoint-create.1.gz
/usr/share/man/man1/docker-checkpoint-ls.1.gz
/usr/share/man/man1/docker-checkpoint-rm.1.gz
/usr/share/man/man1/docker-checkpoint.1.gz
/usr/share/man/man1/docker-commit.1.gz
/usr/share/man/man1/docker-config-create.1.gz
/usr/share/man/man1/docker-config-inspect.1.gz
/usr/share/man/man1/docker-config-ls.1.gz
/usr/share/man/man1/docker-config-rm.1.gz
/usr/share/man/man1/docker-config.1.gz
/usr/share/man/man1/docker-container-attach.1.gz
/usr/share/man/man1/docker-container-commit.1.gz
/usr/share/man/man1/docker-container-cp.1.gz
/usr/share/man/man1/docker-container-create.1.gz
/usr/share/man/man1/docker-container-diff.1.gz
/usr/share/man/man1/docker-container-exec.1.gz
/usr/share/man/man1/docker-container-export.1.gz
/usr/share/man/man1/docker-container-inspect.1.gz
/usr/share/man/man1/docker-container-kill.1.gz
/usr/share/man/man1/docker-container-logs.1.gz
/usr/share/man/man1/docker-container-ls.1.gz
/usr/share/man/man1/docker-container-pause.1.gz
/usr/share/man/man1/docker-container-port.1.gz
/usr/share/man/man1/docker-container-prune.1.gz
/usr/share/man/man1/docker-container-rename.1.gz
/usr/share/man/man1/docker-container-restart.1.gz
/usr/share/man/man1/docker-container-rm.1.gz
/usr/share/man/man1/docker-container-run.1.gz
/usr/share/man/man1/docker-container-start.1.gz
/usr/share/man/man1/docker-container-stats.1.gz
/usr/share/man/man1/docker-container-stop.1.gz
/usr/share/man/man1/docker-container-top.1.gz
/usr/share/man/man1/docker-container-unpause.1.gz
/usr/share/man/man1/docker-container-update.1.gz
/usr/share/man/man1/docker-container-wait.1.gz
/usr/share/man/man1/docker-container.1.gz
/usr/share/man/man1/docker-context-create.1.gz
/usr/share/man/man1/docker-context-export.1.gz
/usr/share/man/man1/docker-context-import.1.gz
/usr/share/man/man1/docker-context-inspect.1.gz
/usr/share/man/man1/docker-context-ls.1.gz
/usr/share/man/man1/docker-context-rm.1.gz
/usr/share/man/man1/docker-context-update.1.gz
/usr/share/man/man1/docker-context-use.1.gz
/usr/share/man/man1/docker-context.1.gz
/usr/share/man/man1/docker-cp.1.gz
/usr/share/man/man1/docker-create.1.gz
/usr/share/man/man1/docker-deploy.1.gz
/usr/share/man/man1/docker-diff.1.gz
/usr/share/man/man1/docker-engine-activate.1.gz
/usr/share/man/man1/docker-engine-check.1.gz
/usr/share/man/man1/docker-engine-update.1.gz
/usr/share/man/man1/docker-engine.1.gz
/usr/share/man/man1/docker-events.1.gz
/usr/share/man/man1/docker-exec.1.gz
/usr/share/man/man1/docker-export.1.gz
/usr/share/man/man1/docker-history.1.gz
/usr/share/man/man1/docker-image-build.1.gz
/usr/share/man/man1/docker-image-history.1.gz
/usr/share/man/man1/docker-image-import.1.gz
/usr/share/man/man1/docker-image-inspect.1.gz
/usr/share/man/man1/docker-image-load.1.gz
/usr/share/man/man1/docker-image-ls.1.gz
/usr/share/man/man1/docker-image-prune.1.gz
/usr/share/man/man1/docker-image-pull.1.gz
/usr/share/man/man1/docker-image-push.1.gz
/usr/share/man/man1/docker-image-rm.1.gz
/usr/share/man/man1/docker-image-save.1.gz
/usr/share/man/man1/docker-image-tag.1.gz
/usr/share/man/man1/docker-image.1.gz
/usr/share/man/man1/docker-images.1.gz
/usr/share/man/man1/docker-import.1.gz
/usr/share/man/man1/docker-info.1.gz
/usr/share/man/man1/docker-inspect.1.gz
/usr/share/man/man1/docker-kill.1.gz
/usr/share/man/man1/docker-load.1.gz
/usr/share/man/man1/docker-login.1.gz
/usr/share/man/man1/docker-logout.1.gz
/usr/share/man/man1/docker-logs.1.gz
/usr/share/man/man1/docker-manifest-annotate.1.gz
/usr/share/man/man1/docker-manifest-create.1.gz
/usr/share/man/man1/docker-manifest-inspect.1.gz
/usr/share/man/man1/docker-manifest-push.1.gz
/usr/share/man/man1/docker-manifest.1.gz
/usr/share/man/man1/docker-network-connect.1.gz
/usr/share/man/man1/docker-network-create.1.gz
/usr/share/man/man1/docker-network-disconnect.1.gz
/usr/share/man/man1/docker-network-inspect.1.gz
/usr/share/man/man1/docker-network-ls.1.gz
/usr/share/man/man1/docker-network-prune.1.gz
/usr/share/man/man1/docker-network-rm.1.gz
/usr/share/man/man1/docker-network.1.gz
/usr/share/man/man1/docker-node-demote.1.gz
/usr/share/man/man1/docker-node-inspect.1.gz
/usr/share/man/man1/docker-node-ls.1.gz
/usr/share/man/man1/docker-node-promote.1.gz
/usr/share/man/man1/docker-node-ps.1.gz
/usr/share/man/man1/docker-node-rm.1.gz
/usr/share/man/man1/docker-node-update.1.gz
/usr/share/man/man1/docker-node.1.gz
/usr/share/man/man1/docker-pause.1.gz
/usr/share/man/man1/docker-plugin-create.1.gz
/usr/share/man/man1/docker-plugin-disable.1.gz
/usr/share/man/man1/docker-plugin-enable.1.gz
/usr/share/man/man1/docker-plugin-inspect.1.gz
/usr/share/man/man1/docker-plugin-install.1.gz
/usr/share/man/man1/docker-plugin-ls.1.gz
/usr/share/man/man1/docker-plugin-push.1.gz
/usr/share/man/man1/docker-plugin-rm.1.gz
/usr/share/man/man1/docker-plugin-set.1.gz
/usr/share/man/man1/docker-plugin-upgrade.1.gz
/usr/share/man/man1/docker-plugin.1.gz
/usr/share/man/man1/docker-port.1.gz
/usr/share/man/man1/docker-ps.1.gz
/usr/share/man/man1/docker-pull.1.gz
/usr/share/man/man1/docker-push.1.gz
/usr/share/man/man1/docker-rename.1.gz
/usr/share/man/man1/docker-restart.1.gz
/usr/share/man/man1/docker-rm.1.gz
/usr/share/man/man1/docker-rmi.1.gz
/usr/share/man/man1/docker-run.1.gz
/usr/share/man/man1/docker-save.1.gz
/usr/share/man/man1/docker-search.1.gz
/usr/share/man/man1/docker-secret-create.1.gz
/usr/share/man/man1/docker-secret-inspect.1.gz
/usr/share/man/man1/docker-secret-ls.1.gz
/usr/share/man/man1/docker-secret-rm.1.gz
/usr/share/man/man1/docker-secret.1.gz
/usr/share/man/man1/docker-service-create.1.gz
/usr/share/man/man1/docker-service-inspect.1.gz
/usr/share/man/man1/docker-service-logs.1.gz
/usr/share/man/man1/docker-service-ls.1.gz
/usr/share/man/man1/docker-service-ps.1.gz
/usr/share/man/man1/docker-service-rm.1.gz
/usr/share/man/man1/docker-service-rollback.1.gz
/usr/share/man/man1/docker-service-scale.1.gz
/usr/share/man/man1/docker-service-update.1.gz
/usr/share/man/man1/docker-service.1.gz
/usr/share/man/man1/docker-stack-deploy.1.gz
/usr/share/man/man1/docker-stack-ls.1.gz
/usr/share/man/man1/docker-stack-ps.1.gz
/usr/share/man/man1/docker-stack-rm.1.gz
/usr/share/man/man1/docker-stack-services.1.gz
/usr/share/man/man1/docker-stack.1.gz
/usr/share/man/man1/docker-start.1.gz
/usr/share/man/man1/docker-stats.1.gz
/usr/share/man/man1/docker-stop.1.gz
/usr/share/man/man1/docker-swarm-ca.1.gz
/usr/share/man/man1/docker-swarm-init.1.gz
/usr/share/man/man1/docker-swarm-join-token.1.gz
/usr/share/man/man1/docker-swarm-join.1.gz
/usr/share/man/man1/docker-swarm-leave.1.gz
/usr/share/man/man1/docker-swarm-unlock-key.1.gz
/usr/share/man/man1/docker-swarm-unlock.1.gz
/usr/share/man/man1/docker-swarm-update.1.gz
/usr/share/man/man1/docker-swarm.1.gz
/usr/share/man/man1/docker-system-df.1.gz
/usr/share/man/man1/docker-system-events.1.gz
/usr/share/man/man1/docker-system-info.1.gz
/usr/share/man/man1/docker-system-prune.1.gz
/usr/share/man/man1/docker-system.1.gz
/usr/share/man/man1/docker-tag.1.gz
/usr/share/man/man1/docker-top.1.gz
/usr/share/man/man1/docker-trust-inspect.1.gz
/usr/share/man/man1/docker-trust-key-generate.1.gz
/usr/share/man/man1/docker-trust-key-load.1.gz
/usr/share/man/man1/docker-trust-key.1.gz
/usr/share/man/man1/docker-trust-revoke.1.gz
/usr/share/man/man1/docker-trust-sign.1.gz
/usr/share/man/man1/docker-trust-signer-add.1.gz
/usr/share/man/man1/docker-trust-signer-remove.1.gz
/usr/share/man/man1/docker-trust-signer.1.gz
/usr/share/man/man1/docker-trust.1.gz
/usr/share/man/man1/docker-unpause.1.gz
/usr/share/man/man1/docker-update.1.gz
/usr/share/man/man1/docker-version.1.gz
/usr/share/man/man1/docker-volume-create.1.gz
/usr/share/man/man1/docker-volume-inspect.1.gz
/usr/share/man/man1/docker-volume-ls.1.gz
/usr/share/man/man1/docker-volume-prune.1.gz
/usr/share/man/man1/docker-volume-rm.1.gz
/usr/share/man/man1/docker-volume.1.gz
/usr/share/man/man1/docker-wait.1.gz
/usr/share/man/man1/docker.1.gz
/usr/share/man/man5
/usr/share/man/man5/Dockerfile.5.gz
/usr/share/man/man5/docker-config-json.5.gz
/usr/share/man/man8
/usr/share/man/man8/dockerd.8.gz
/usr/share/zsh
/usr/share/zsh/vendor-completions
/usr/share/zsh/vendor-completions/_docker

Example: view Docker-related files on CentOS 7

[root@centos7 ~]# rpm -ql docker-ce
/usr/bin/docker-init
/usr/bin/docker-proxy
/usr/bin/dockerd
/usr/lib/systemd/system/docker.service
/usr/lib/systemd/system/docker.socket

[root@centos7 ~]# rpm -ql docker-ce-cli
/usr/bin/docker
/usr/libexec/docker/cli-plugins/docker-app
/usr/libexec/docker/cli-plugins/docker-buildx
/usr/share/bash-completion/completions/docker
/usr/share/doc/docker-ce-cli-19.03.12
/usr/share/doc/docker-ce-cli-19.03.12/LICENSE
/usr/share/doc/docker-ce-cli-19.03.12/MAINTAINERS
/usr/share/doc/docker-ce-cli-19.03.12/NOTICE
/usr/share/doc/docker-ce-cli-19.03.12/README.md
/usr/share/fish/vendor_completions.d/docker.fish
/usr/share/man/man1/docker-attach.1.gz
/usr/share/man/man1/docker-build.1.gz
/usr/share/man/man1/docker-builder-build.1.gz
/usr/share/man/man1/docker-builder-prune.1.gz
/usr/share/man/man1/docker-builder.1.gz
/usr/share/man/man1/docker-checkpoint-create.1.gz
/usr/share/man/man1/docker-checkpoint-ls.1.gz
/usr/share/man/man1/docker-checkpoint-rm.1.gz
/usr/share/man/man1/docker-checkpoint.1.gz
/usr/share/man/man1/docker-commit.1.gz
/usr/share/man/man1/docker-config-create.1.gz
/usr/share/man/man1/docker-config-inspect.1.gz
/usr/share/man/man1/docker-config-ls.1.gz
/usr/share/man/man1/docker-config-rm.1.gz
/usr/share/man/man1/docker-config.1.gz
/usr/share/man/man1/docker-container-attach.1.gz
/usr/share/man/man1/docker-container-commit.1.gz
/usr/share/man/man1/docker-container-cp.1.gz
/usr/share/man/man1/docker-container-create.1.gz
/usr/share/man/man1/docker-container-diff.1.gz
/usr/share/man/man1/docker-container-exec.1.gz
/usr/share/man/man1/docker-container-export.1.gz
/usr/share/man/man1/docker-container-inspect.1.gz
/usr/share/man/man1/docker-container-kill.1.gz
/usr/share/man/man1/docker-container-logs.1.gz
/usr/share/man/man1/docker-container-ls.1.gz
/usr/share/man/man1/docker-container-pause.1.gz
/usr/share/man/man1/docker-container-port.1.gz
/usr/share/man/man1/docker-container-prune.1.gz
/usr/share/man/man1/docker-container-rename.1.gz
/usr/share/man/man1/docker-container-restart.1.gz
/usr/share/man/man1/docker-container-rm.1.gz
/usr/share/man/man1/docker-container-run.1.gz
/usr/share/man/man1/docker-container-start.1.gz
/usr/share/man/man1/docker-container-stats.1.gz
/usr/share/man/man1/docker-container-stop.1.gz
/usr/share/man/man1/docker-container-top.1.gz
/usr/share/man/man1/docker-container-unpause.1.gz
/usr/share/man/man1/docker-container-update.1.gz
/usr/share/man/man1/docker-container-wait.1.gz
/usr/share/man/man1/docker-container.1.gz
/usr/share/man/man1/docker-context-create.1.gz
/usr/share/man/man1/docker-context-export.1.gz
/usr/share/man/man1/docker-context-import.1.gz
/usr/share/man/man1/docker-context-inspect.1.gz
/usr/share/man/man1/docker-context-ls.1.gz
/usr/share/man/man1/docker-context-rm.1.gz
/usr/share/man/man1/docker-context-update.1.gz
/usr/share/man/man1/docker-context-use.1.gz
/usr/share/man/man1/docker-context.1.gz
/usr/share/man/man1/docker-cp.1.gz
/usr/share/man/man1/docker-create.1.gz
/usr/share/man/man1/docker-deploy.1.gz
/usr/share/man/man1/docker-diff.1.gz
/usr/share/man/man1/docker-engine-activate.1.gz
/usr/share/man/man1/docker-engine-check.1.gz
/usr/share/man/man1/docker-engine-update.1.gz
/usr/share/man/man1/docker-engine.1.gz
/usr/share/man/man1/docker-events.1.gz
/usr/share/man/man1/docker-exec.1.gz
/usr/share/man/man1/docker-export.1.gz
/usr/share/man/man1/docker-history.1.gz
/usr/share/man/man1/docker-image-build.1.gz
/usr/share/man/man1/docker-image-history.1.gz
/usr/share/man/man1/docker-image-import.1.gz
/usr/share/man/man1/docker-image-inspect.1.gz
/usr/share/man/man1/docker-image-load.1.gz
/usr/share/man/man1/docker-image-ls.1.gz
/usr/share/man/man1/docker-image-prune.1.gz
/usr/share/man/man1/docker-image-pull.1.gz
/usr/share/man/man1/docker-image-push.1.gz
/usr/share/man/man1/docker-image-rm.1.gz
/usr/share/man/man1/docker-image-save.1.gz
/usr/share/man/man1/docker-image-tag.1.gz
/usr/share/man/man1/docker-image.1.gz
/usr/share/man/man1/docker-images.1.gz
/usr/share/man/man1/docker-import.1.gz
/usr/share/man/man1/docker-info.1.gz
/usr/share/man/man1/docker-inspect.1.gz
/usr/share/man/man1/docker-kill.1.gz
/usr/share/man/man1/docker-load.1.gz
/usr/share/man/man1/docker-login.1.gz
/usr/share/man/man1/docker-logout.1.gz
/usr/share/man/man1/docker-logs.1.gz
/usr/share/man/man1/docker-manifest-annotate.1.gz
/usr/share/man/man1/docker-manifest-create.1.gz
/usr/share/man/man1/docker-manifest-inspect.1.gz
/usr/share/man/man1/docker-manifest-push.1.gz
/usr/share/man/man1/docker-manifest.1.gz
/usr/share/man/man1/docker-network-connect.1.gz
/usr/share/man/man1/docker-network-create.1.gz
/usr/share/man/man1/docker-network-disconnect.1.gz
/usr/share/man/man1/docker-network-inspect.1.gz
/usr/share/man/man1/docker-network-ls.1.gz
/usr/share/man/man1/docker-network-prune.1.gz
/usr/share/man/man1/docker-network-rm.1.gz
/usr/share/man/man1/docker-network.1.gz
/usr/share/man/man1/docker-node-demote.1.gz
/usr/share/man/man1/docker-node-inspect.1.gz
/usr/share/man/man1/docker-node-ls.1.gz
/usr/share/man/man1/docker-node-promote.1.gz
/usr/share/man/man1/docker-node-ps.1.gz
/usr/share/man/man1/docker-node-rm.1.gz
/usr/share/man/man1/docker-node-update.1.gz
/usr/share/man/man1/docker-node.1.gz
/usr/share/man/man1/docker-pause.1.gz
/usr/share/man/man1/docker-plugin-create.1.gz
/usr/share/man/man1/docker-plugin-disable.1.gz
/usr/share/man/man1/docker-plugin-enable.1.gz
/usr/share/man/man1/docker-plugin-inspect.1.gz
/usr/share/man/man1/docker-plugin-install.1.gz
/usr/share/man/man1/docker-plugin-ls.1.gz
/usr/share/man/man1/docker-plugin-push.1.gz
/usr/share/man/man1/docker-plugin-rm.1.gz
/usr/share/man/man1/docker-plugin-set.1.gz
/usr/share/man/man1/docker-plugin-upgrade.1.gz
/usr/share/man/man1/docker-plugin.1.gz
/usr/share/man/man1/docker-port.1.gz
/usr/share/man/man1/docker-ps.1.gz
/usr/share/man/man1/docker-pull.1.gz
/usr/share/man/man1/docker-push.1.gz
/usr/share/man/man1/docker-rename.1.gz
/usr/share/man/man1/docker-restart.1.gz
/usr/share/man/man1/docker-rm.1.gz
/usr/share/man/man1/docker-rmi.1.gz
/usr/share/man/man1/docker-run.1.gz
/usr/share/man/man1/docker-save.1.gz
/usr/share/man/man1/docker-search.1.gz
/usr/share/man/man1/docker-secret-create.1.gz
/usr/share/man/man1/docker-secret-inspect.1.gz
/usr/share/man/man1/docker-secret-ls.1.gz
/usr/share/man/man1/docker-secret-rm.1.gz
/usr/share/man/man1/docker-secret.1.gz
/usr/share/man/man1/docker-service-create.1.gz
/usr/share/man/man1/docker-service-inspect.1.gz
/usr/share/man/man1/docker-service-logs.1.gz
/usr/share/man/man1/docker-service-ls.1.gz
/usr/share/man/man1/docker-service-ps.1.gz
/usr/share/man/man1/docker-service-rm.1.gz
/usr/share/man/man1/docker-service-rollback.1.gz
/usr/share/man/man1/docker-service-scale.1.gz
/usr/share/man/man1/docker-service-update.1.gz
/usr/share/man/man1/docker-service.1.gz
/usr/share/man/man1/docker-stack-deploy.1.gz
/usr/share/man/man1/docker-stack-ls.1.gz
/usr/share/man/man1/docker-stack-ps.1.gz
/usr/share/man/man1/docker-stack-rm.1.gz
/usr/share/man/man1/docker-stack-services.1.gz
/usr/share/man/man1/docker-stack.1.gz
/usr/share/man/man1/docker-start.1.gz
/usr/share/man/man1/docker-stats.1.gz

Docker Command Help

docker is the most commonly used Docker client command; different options and subcommands can be appended to it to perform different functions.

docker command format

docker [OPTIONS] COMMAND

COMMAND falls into two groups:
Management Commands #the newer usage, which groups subcommands by the type of resource they manage and is easier to work with
Commands #the older flat commands, which operate on different resources without classification and can easily cause confusion

The docker command has many subcommands; their help can be viewed as follows:

#Help for the docker command itself
man docker
docker
docker --help

#Help for a docker subcommand
man docker-COMMAND
docker COMMAND --help
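
To contrast the two command styles, the following sketch (assuming Docker is installed; it skips itself otherwise) performs the same listing both ways:

```shell
# Sketch: the same operation in Management Command style and legacy style.
# Guarded so it degrades gracefully when Docker is not installed.
if command -v docker >/dev/null 2>&1; then
    docker container ls -a    # Management Command: resource type, then verb
    docker ps -a              # legacy flat command, same result
else
    echo "docker not installed; skipping"
fi
```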

Official documentation:

https://docs.docker.com/reference/
https://docs.docker.com/engine/reference/commandline/cli/


Example: view docker command help

[root@ubuntu1804 ~]# docker --help

Viewing Docker Information

View the docker version

[root@ubuntu1804 ~]# docker version
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:29:52 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea838
Built: Wed Nov 13 07:28:22 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683

View detailed docker information

[root@ubuntu1804 ~]# docker info
Client:
Debug Mode: false #whether debug is enabled on the client
Server:
Containers: 2 #total number of containers on this host
Running: 0 #containers currently running
Paused: 0 #containers currently paused
Stopped: 2 #containers currently stopped
Images: 4 #number of images on this server
Server Version: 19.03.5 #server version
Storage Driver: overlay2 #storage engine in use
Backing Filesystem: extfs #backing filesystem, i.e. the filesystem of the server's disk
Supports d_type: true #whether d_type is supported
Native Overlay Diff: true #whether native diff storage is supported
Logging Driver: json-file #log driver; each container's stdout is stored as a log in /var/lib/docker/containers/<CONTAINER ID>/<CONTAINER ID>-json.log
Cgroup Driver: cgroupfs #cgroups driver
Plugins: #plugins
Volume: local #volumes
Network: bridge host ipvlan macvlan null overlay # overlay enables cross-host communication
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog # log drivers
Swarm: inactive #whether swarm is enabled
Runtimes: runc #installed container runtimes
Default Runtime: runc #default container runtime
Init Binary: docker-init #init daemon for containers, i.e. the process with PID 1
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339 #containerd version
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657 #runc version
init version: fec3683 #init version
Security Options: #security options
apparmor #security module, https://docs.docker.com/engine/security/apparmor/
seccomp #secure computing module, which restricts container operations, https://docs.docker.com/engine/security/seccomp/
Profile: default #default profile
Kernel Version: 4.15.0-29-generic #host kernel version
Operating System: Ubuntu 18.04.1 LTS #host operating system
OSType: linux #host operating system type
Architecture: x86_64 #host architecture
CPUs: 1 #number of host CPUs
Total Memory: 962MiB #total host memory
Name: ubuntu1804.wang.org #host hostname
ID: IZHJ:WPIN:BRMC:XQUI:VVVR:UVGK:NZBM:YQXT:JDWB:33RS:45V7:SQWJ #host ID
Docker Root Dir: /var/lib/docker #directory where Docker stores its data; a dedicated SSD disk is recommended for performance and capacity
Debug Mode: false #whether debug is enabled on the server
Registry: https://index.docker.io/v1/ #registry path
Labels:
Experimental: false #whether this is an experimental build
Insecure Registries:
127.0.0.0/8 #insecure image registries
Registry Mirrors:
https://si7y70hh.mirror.aliyuncs.com/ #registry mirror
Live Restore Enabled: false #whether live restore is enabled (containers keep running while the docker daemon restarts)

WARNING: No swap limit support #system warning (swap resource limits not enabled)
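
Individual fields of this report can be queried directly with a Go template instead of grepping the full output; a minimal sketch, assuming Docker is installed (it skips itself otherwise):

```shell
# Sketch: query single docker info fields via Go templates instead of grep.
if command -v docker >/dev/null 2>&1; then
    docker info --format '{{.Driver}}'          # storage driver, e.g. overlay2
    docker info --format '{{.ServerVersion}}'   # server version
else
    echo "docker not installed; skipping"
fi
```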

Example: resolving the swap warning above

Official documentation: https://docs.docker.com/install/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities

[root@ubuntu1804 ~]# docker info
......
WARNING: No swap limit support

[root@ubuntu1804 ~]# vim /etc/default/grub
GRUB_DEFAULT=0
GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=2
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 swapaccount=1" #modify this line

[root@ubuntu1804 ~]# update-grub
[root@ubuntu1804 ~]# reboot

Example: Docker tuning

[root@ubuntu2004 ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": [
"https://registry.docker-cn.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn"
],
"insecure-registries": ["harbor.wang.org"],
"exec-opts": ["native.cgroupdriver=systemd"],
"data-root": "/data/docker",
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 5,
"log-opts": {
"max-size": "300m",
"max-file": "2"
},
"live-restore": true
}

#Note: JSON does not allow comments, so keep remarks out of the file itself. Docker 23.0+ deprecated the graph option; use data-root instead.

[root@ubuntu2004 ~]# systemctl daemon-reload ;systemctl restart docker.service
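
dockerd refuses to start when daemon.json is malformed (JSON allows no comments, for example), so validating the file before restarting is worthwhile; a minimal sketch using python3's built-in JSON checker against an illustrative sample file:

```shell
# Sketch: validate a daemon.json-style file before restarting dockerd.
# /tmp/daemon-sample.json is an illustrative copy; on a real host, point
# the check at /etc/docker/daemon.json instead.
cat > /tmp/daemon-sample.json <<'EOF'
{
  "data-root": "/data/docker",
  "live-restore": true
}
EOF
if python3 -m json.tool /tmp/daemon-sample.json >/dev/null 2>&1; then
    echo "valid JSON"
else
    echo "invalid JSON: fix it before restarting docker"
fi
```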

View the docker0 interface

After Docker is installed and started, it creates a bridge interface named docker0 with the default IP address 172.17.0.1.

#network configuration after installing docker on Ubuntu 18.04
[root@ubuntu1804 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
group default qlen 1000
link/ether 00:0c:29:34:df:91 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe34:df91/64 scope link
valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
group default
link/ether 02:42:d3:26:ed:4e brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:d3ff:fe26:ed4e/64 scope link
valid_lft forever preferred_lft forever

#network configuration after installing docker on CentOS 7.6
[root@centos7 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
group default qlen 1000
link/ether 00:0c:29:ca:00:e4 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.7/24 brd 10.0.0.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:feca:e4/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:d2:81:c2:e0 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever

#network configuration after installing docker on CentOS 8.1
[root@centos8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:71:6e:af brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe71:6eaf/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ee:76:de:00 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever


[root@centos8 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.2 0.0.0.0 UG 100 0 0 ens160
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens160

Docker storage engines

Official documentation on storage drivers:

https://docs.docker.com/storage/storagedriver/

https://docs.docker.com/storage/storagedriver/select-storage-driver/

  • AUFS: (Advanced Multi-Layered Unification Filesystem; before version 2 it was called AnotherUnionFS) is a union filesystem and a file-level storage driver. AUFS is a re-implementation of the earlier UnionFS, developed by Junjiro Okajima in 2006.

    A union filesystem merges directories from different physical locations into a single mount point, i.e. it supports mounting several directories under one virtual filesystem and layering modifications on top of each other. No matter how many read-only layers sit below, the topmost layer is writable. When a file needs to be modified, AUFS creates a copy of it, using copy-on-write (CoW) to copy the file from a read-only layer into the writable layer for modification, where the result is also saved. In Docker terms, the read-only layers underneath are the image and the writable layer is the container.

    AUFS was rejected for merging into mainline Linux; its code was criticized as "dense, unreadable, uncommented". OverlayFS was merged into the Linux kernel instead, and after several failed attempts to get AUFS into the mainline kernel, its author gave up.

    AUFS was the preferred storage driver for Docker 18.06 and earlier, for example on Ubuntu 14.04 with kernel 3.13, which did not support overlay2.

  • Overlay: a union filesystem supported by Linux kernels 3.18 and later.

  • Overlay2: an improved version of Overlay. It is the storage type recommended for all current Linux distributions and Docker's default storage engine. It requires the disk partition to support the d_type feature, so the system disk needs this extra support. Compared with AUFS, Overlay2 has a simpler design, has been in the mainline Linux kernel since 3.18, and consumes fewer resources.

  • devicemapper: the default storage driver on CentOS 7.2 / RHEL 7.2 and earlier, whose kernels do not support overlay2. It supports at most 100GB of data and performs poorly. Newer CentOS releases support overlay2, which is therefore recommended; devicemapper was deprecated in Docker Engine 18.09.

  • ZFS (Sun, 2005) / btrfs (Oracle, 2007): not widely used at present.

  • vfs: intended for test environments and for cases where copy-on-write cannot be used. Its performance is poor and it is generally not recommended for production.
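
The lowerdir/upperdir mechanics that overlay2 relies on can be demonstrated with a raw OverlayFS mount; a minimal sketch (requires root and kernel overlay support, so it skips itself otherwise; paths under /tmp are illustrative):

```shell
# Sketch: a minimal OverlayFS mount showing read-only lower layers,
# a writable upper layer, and copy-on-write behavior.
mkdir -p /tmp/ovl/lower /tmp/ovl/upper /tmp/ovl/work /tmp/ovl/merged
echo "from lower" > /tmp/ovl/lower/a.txt
if [ "$(id -u)" -eq 0 ] && mount -t overlay overlay \
    -o lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work \
    /tmp/ovl/merged 2>/dev/null; then
    echo "changed" > /tmp/ovl/merged/a.txt   # CoW: the copy lands in upperdir
    cat /tmp/ovl/upper/a.txt                 # the lower-layer file is untouched
    umount /tmp/ovl/merged
else
    echo "skipping: need root and overlay support"
fi
```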

Reference documentation for changing the storage engine:

https://docs.docker.com/storage/storagedriver/overlayfs-driver/

Example: changing the storage engine on CentOS 7.2

[root@centos7 ~]# vim /lib/systemd/system/docker.service
.....
ExecStart=/usr/bin/dockerd -s overlay2 -H fd:// --containerd=/run/containerd/containerd.sock
......

#create a new xfs partition with the ftype feature enabled; otherwise the docker service will fail to start by default
[root@centos7 ~]# mkfs.xfs -n ftype=1 /dev/sdb
[root@centos7 ~]# mount /dev/sdb /var/lib/docker

[root@centos7 ~]# systemctl daemon-reload
[root@centos7 ~]# systemctl restart docker

Note: changing the storage engine causes all containers to be lost, so back everything up before changing it.
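
Since switching drivers loses existing containers and images, archiving the data directory first is prudent; a minimal sketch (default paths assumed; /tmp/docker-backup is an illustrative destination):

```shell
# Sketch: stop docker and archive its data directory before changing the
# storage driver. Skips quietly when the directory does not exist.
BACKUP=/tmp/docker-backup
mkdir -p "$BACKUP"
systemctl stop docker 2>/dev/null || true
if [ -d /var/lib/docker ]; then
    tar czf "$BACKUP/docker-$(date +%F).tar.gz" -C /var/lib docker 2>/dev/null \
        && echo "backup written to $BACKUP" \
        || echo "tar failed (permissions?)"
else
    echo "/var/lib/docker not found; nothing to back up"
fi
```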

#view the default storage engine on Ubuntu 18.04
[root@ubuntu1804 ~]# docker info |grep Storage
WARNING: No swap limit support
Storage Driver: overlay2

#view the default storage engine on CentOS 7.6
[root@centos7 ~]# docker info |grep Storage
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Storage Driver: overlay2

Docker officially recommends overlay2 as the first-choice storage engine, with devicemapper second. devicemapper has some storage-space limitations that can be worked around with extra configuration, but overlay2 remains the official recommendation.

Example: union filesystem mount with aufs

[root@ubuntu1804 ~]# cat /proc/filesystems 
nodev sysfs
nodev rootfs
nodev ramfs
nodev bdev
nodev proc
nodev cpuset
nodev cgroup
nodev cgroup2
nodev tmpfs
nodev devtmpfs
nodev configfs
nodev debugfs
nodev tracefs
nodev securityfs
nodev sockfs
nodev dax
nodev bpf
nodev pipefs
nodev hugetlbfs
nodev devpts

[root@ubuntu1804 ~]# grep -i aufs /boot/config-4.15.0-29-generic
CONFIG_AUFS_FS=m
CONFIG_AUFS_BRANCH_MAX_127=y
# CONFIG_AUFS_BRANCH_MAX_511 is not set
# CONFIG_AUFS_BRANCH_MAX_1023 is not set
# CONFIG_AUFS_BRANCH_MAX_32767 is not set
CONFIG_AUFS_SBILIST=y
# CONFIG_AUFS_HNOTIFY is not set
CONFIG_AUFS_EXPORT=y
CONFIG_AUFS_INO_T_64=y
CONFIG_AUFS_XATTR=y
# CONFIG_AUFS_FHSM is not set
# CONFIG_AUFS_RDU is not set
CONFIG_AUFS_DIRREN=y
# CONFIG_AUFS_SHWH is not set
# CONFIG_AUFS_BR_RAMFS is not set
# CONFIG_AUFS_BR_FUSE is not set
CONFIG_AUFS_BR_HFSPLUS=y
CONFIG_AUFS_BDEV_LOOP=y
# CONFIG_AUFS_DEBUG is not set

[root@ubuntu1804 ~]# mkdir dir{1,2}
[root@ubuntu1804 ~]# echo here is dir1 > dir1/file1
[root@ubuntu1804 ~]# echo here is dir2 > dir2/file2
[root@ubuntu1804 ~]# mkdir /data/aufs
[root@ubuntu1804 ~]# mount -t aufs -o br=/root/dir1=ro:/root/dir2=rw none /data/aufs
[root@ubuntu1804 ~]# ll /data/aufs/
total 16
drwxr-xr-x 4 root root 4096 Jan 25 16:22 ./
drwxr-xr-x 4 root root 4096 Jan 25 16:22 ../
-rw-r--r-- 1 root root 13 Jan 25 16:22 file1
-rw-r--r-- 1 root root 13 Jan 25 16:22 file2

[root@ubuntu1804 ~]# cat /data/aufs/file1
here is dir1

[root@ubuntu1804 ~]# cat /data/aufs/file2
here is dir2

[root@ubuntu1804 ~]# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 462560 0 462560 0% /dev
tmpfs tmpfs 98512 10296 88216 11% /run
/dev/sda2 ext4 47799020 2770244 42570972 7% /
tmpfs tmpfs 492552 0 492552 0% /dev/shm
tmpfs tmpfs 5120 0 5120 0% /run/lock
tmpfs tmpfs 492552 0 492552 0% /sys/fs/cgroup
/dev/sda3 ext4 19091540 45084 18053588 1% /data
/dev/sda1 ext4 944120 77112 801832 9% /boot
tmpfs tmpfs 98508 0 98508 0% /run/user/0
none aufs 47799020 2770244 42570972 7% /data/aufs
[root@ubuntu1804 ~]# echo write to file1 >> /data/aufs/file1
-bash: /data/aufs/file1: Read-only file system

[root@ubuntu1804 ~]# echo write to file2 >> /data/aufs/file2
[root@ubuntu1804 ~]# cat /data/aufs/file1
here is dir1

[root@ubuntu1804 ~]# cat /data/aufs/file2
here is dir2
write to file2

[root@ubuntu1804 ~]# umount /data/aufs
[root@ubuntu1804 ~]# mv dir1/file1 dir1/file2
[root@ubuntu1804 ~]# cat dir1/file2
here is dir1

[root@ubuntu1804 ~]# cat dir2/file2
here is dir2
write to file2

[root@ubuntu1804 ~]# mount -t aufs -o br=/root/dir1=ro:/root/dir2=rw none /data/aufs
[root@ubuntu1804 ~]# ls /data/aufs -l
total 4
-rw-r--r-- 1 root root 13 Jan 25 16:22 file2

[root@ubuntu1804 ~]# cat /data/aufs/file2
here is dir1

Image Management

Image structure and principles

An image is a template for creating containers: it contains the filesystem and content needed to start a container, so its main purpose is to make creating and starting containers convenient and fast.

Inside an image are stacked filesystem layers, managed by a union filesystem (Union FS). A union filesystem can mount several directory layers together (like a layered pastry, an onion, or Russian nesting dolls) into one virtual filesystem whose directory tree looks just like an ordinary Linux directory tree; these files, plus the host's kernel, provide a virtual Linux environment. Each filesystem layer is called a layer. A union filesystem can give each layer one of three permissions: read-only, read-write, and whiteout-able, but every layer inside an image is read-only. When building an image, you start from a basic operating system, and every committed build step makes a set of changes and adds one filesystem layer, stacking upward layer by layer. Changes in an upper layer hide the corresponding paths in lower layers, just as a top sheet covers what lies beneath it. When using the image we only see a complete whole; we do not know, and do not need to know, how many layers it contains. The structure looks like this:

(figure: layered structure of a container image)
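
The shadowing behavior described above, where an upper layer hides the same path in a lower layer, can be imitated with plain directories; a toy sketch (paths under /tmp are illustrative, not real Docker code):

```shell
# Toy simulation: resolve a file through stacked layers, top layer first,
# the way a union filesystem presents its merged view.
mkdir -p /tmp/layers/base /tmp/layers/mid /tmp/layers/top
echo "v1" > /tmp/layers/base/app.conf
echo "v2" > /tmp/layers/top/app.conf      # upper layer shadows the lower copy
lookup() {
    for layer in top mid base; do
        if [ -f "/tmp/layers/$layer/$1" ]; then
            cat "/tmp/layers/$layer/$1"
            return 0
        fi
    done
    return 1
}
lookup app.conf    # prints "v2": the top layer hides the base copy
```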

A typical Linux filesystem consists of two parts, bootfs and rootfs.

bootfs (boot file system) mainly contains the bootloader and the kernel. The bootloader loads the kernel: when Linux starts, the bootfs filesystem is loaded first, and once boot is complete and the kernel has been loaded into memory and taken control of the system, bootfs is unmounted.

rootfs (root file system) contains the standard directories and files of a typical Linux system, such as /dev, /proc, /bin and /etc. Different Linux distributions (such as Ubuntu and CentOS) differ mainly in this rootfs layer.

Images are usually quite small: the official Ubuntu image is only a bit over 60MB, the CentOS base image only around 200MB, and some images are only a few MB — busybox is just 1.22MB and the alpine image only about 5MB. Images call the host's kernel directly and provide only a rootfs, i.e. only the most basic commands, configuration files, program libraries and related files are needed.

The figure below shows two different images sharing one host kernel while implementing different rootfs trees.

(figure: two images with different rootfs on one host kernel)

Relationship between containers, images and parent images:

(figure: container, image and parent-image layers)

Example: examining the layered structure of an image

[root@ubuntu1804 ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
6e909acdb790: Pull complete
5eaa34f5b9c2: Pull complete
417c4bccf534: Pull complete
e7e0ca015e55: Pull complete
373fe654e984: Pull complete
97f5c0f51d43: Pull complete
c22eb46e871a: Pull complete
Digest: sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest


#view the layer history of an image
[root@ubuntu1804 ~]# docker image history nginx
IMAGE CREATED CREATED BY SIZE COMMENT
53a18edff809 2 months ago CMD ["nginx" "-g" "daemon off;"] 0B buildkit.dockerfile.v0
<missing> 2 months ago STOPSIGNAL SIGQUIT 0B buildkit.dockerfile.v0
<missing> 2 months ago EXPOSE map[80/tcp:{}] 0B buildkit.dockerfile.v0
<missing> 2 months ago ENTRYPOINT ["/docker-entrypoint.sh"] 0B buildkit.dockerfile.v0
<missing> 2 months ago COPY 30-tune-worker-processes.sh /docker-ent… 4.62kB buildkit.dockerfile.v0
<missing> 2 months ago COPY 20-envsubst-on-templates.sh /docker-ent… 3.02kB buildkit.dockerfile.v0
<missing> 2 months ago COPY 15-local-resolvers.envsh /docker-entryp… 389B buildkit.dockerfile.v0
<missing> 2 months ago COPY 10-listen-on-ipv6-by-default.sh /docker… 2.12kB buildkit.dockerfile.v0
<missing> 2 months ago COPY docker-entrypoint.sh / # buildkit 1.62kB buildkit.dockerfile.v0
<missing> 2 months ago RUN /bin/sh -c set -x && groupadd --syst… 117MB buildkit.dockerfile.v0
<missing> 2 months ago ENV DYNPKG_RELEASE=1~bookworm 0B buildkit.dockerfile.v0
<missing> 2 months ago ENV PKG_RELEASE=1~bookworm 0B buildkit.dockerfile.v0
<missing> 2 months ago ENV NJS_RELEASE=1~bookworm 0B buildkit.dockerfile.v0
<missing> 2 months ago ENV NJS_VERSION=0.8.9 0B buildkit.dockerfile.v0
<missing> 2 months ago ENV NGINX_VERSION=1.27.4 0B buildkit.dockerfile.v0
<missing> 2 months ago LABEL maintainer=NGINX Docker Maintainers <d… 0B buildkit.dockerfile.v0
<missing> 2 months ago # debian.sh --arch 'amd64' out/ 'bookworm' '… 74.8MB debuerreotype 0.15

[root@ubuntu1804 ~]# docker inspect nginx
[
{
"Id": "sha256:53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0",
"RepoTags": [
"nginx:latest"
],
"RepoDigests": [
"nginx@sha256:124b44bfc9ccd1f3cedf4b592d4d1e8bddb78b51ec2ed5056c52d3692baebc19"
],
"Parent": "",
"Comment": "buildkit.dockerfile.v0",
"Created": "2025-02-05T21:27:16Z",
"DockerVersion": "",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.27.4",
"NJS_VERSION=0.8.9",
"NJS_RELEASE=1~bookworm",
"PKG_RELEASE=1~bookworm",
"DYNPKG_RELEASE=1~bookworm"
],
"Cmd": [
"nginx",
"-g",
"daemon off;"
],
"Image": "",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"maintainer": "NGINX Docker Maintainers <docker-maint@nginx.com>"
},
"StopSignal": "SIGQUIT"
},
"Architecture": "amd64",
"Os": "linux",
"Size": 192004242,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/7aa5619fb13336c1c67cd7588653a68b6412bf4d73c3909af5ff7be664e7a95d/diff:/var/lib/docker/overlay2/81ed2cec722ec310e32e8032a03ea7072b5773b4f7cafb34407170f8a3122cf6/diff:/var/lib/docker/overlay2/ebc78063335a2b83f75456cf7dfe02e77ef1f99004adeee38d7f265574f88616/diff:/var/lib/docker/overlay2/b8ad0e4caf8900050d73217dd80fcec67140d5d4de83e36d89437800d0d4947f/diff:/var/lib/docker/overlay2/6c43d1009f3e35653277132cde4ab4fe92e18cf7951223722c4ccc846bc6cbe3/diff:/var/lib/docker/overlay2/c8e2efbf3fc876783f1bcc80fb5cf73d1af097a4ab09484b6c45527b74c7861f/diff",
"MergedDir": "/var/lib/docker/overlay2/3cc13c6e69f8dab680ad2d891cc6e4c59110921c51b478bdb9e2eccbff9069b9/merged",
"UpperDir": "/var/lib/docker/overlay2/3cc13c6e69f8dab680ad2d891cc6e4c59110921c51b478bdb9e2eccbff9069b9/diff",
"WorkDir": "/var/lib/docker/overlay2/3cc13c6e69f8dab680ad2d891cc6e4c59110921c51b478bdb9e2eccbff9069b9/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:1287fbecdfcce6ee8cf2436e5b9e9d86a4648db2d91080377d499737f1b307f3",
"sha256:135f786ad04647c6e58d9a2d4f6f87bd677ef6144ab24c81a6f5be7acc63fbc9",
"sha256:ad2f08e39a9de1e12157c800bd31ba86f8cc222eedec11e8e072c3ba608d26fb",
"sha256:d98dcc720ae098efb91563f0a9abe03de50b403f7aa6c6f0e1dfb8297aedb61f",
"sha256:aa82c57cd9fe730130e35d42c6b26a4a9d3c858f61c23f63d53b703abf30adf8",
"sha256:d26dc06ef910f67b1b2bcbcc6318e2e08881011abc7ad40fd859f38641ab105c",
"sha256:03d9365bc5dc9ec8b2f032927d3d3ae10b840252c86cf245a63b713d50eaa2fd"
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
]

[root@ubuntu1804 ~]# docker save nginx -o nginx.tar
[root@ubuntu1804 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest aded1e1a5b37 7 weeks ago 7.83MB
nginx latest 53a18edff809 2 months ago 192MB

[root@ubuntu1804 ~]# ll -h nginx.tar
-rw------- 1 root root 131M Jul 20 22:33 nginx.tar

[root@ubuntu1804 ~]# tar xf nginx.tar -C /data
[root@ubuntu1804 ~]# ll /data
total 60
drwxr-xr-x 8 root root 4096 Jul 20 22:34 ./
drwxr-xr-x 24 root root 4096 Jul 20 16:23 ../
-rw-r--r-- 1 root root 7510 Jul 11 04:26 0901fa9da894a8e9de5cb26d6749eaffb67b373dc1ff8a26c46b23b1175c913a.json
drwxr-xr-x 2 root root 4096 Jul 11 04:26 0bb74fcd4b686412f7993916e58c26abd155fa10b10a4dc09a778e7c324c39a2/
drwxr-xr-x 2 root root 4096 Jul 11 04:26 517e3239147277447b60191907bc66168963e0ce8707a6a33532f7c63a0d2f12/
drwxr-xr-x 2 root root 4096 Jul 11 04:26 68c9e9da52d5a57ee196829ce4a461cc9425b0b920689da9ad547f1da13dbc9d/
drwxr-xr-x 2 root root 4096 Jul 11 04:26 d2cf0fc540bb3be33ee7340498c41fd4fc82c6bb02b9955fca2109e599301dbd/
drwxr-xr-x 2 root root 4096 Jul 11 04:26 f4bf863ecdbb8bddb4b3bb271bdd97b067dcb6c95c56f720018abec6af190c6e/
drwx------ 2 root root 16384 Mar 18 09:49 lost+found/
-rw-r--r-- 1 root root 509 Jan 1 1970 manifest.json
-rw-r--r-- 1 root root 88 Jan 1 1970 repositories

[root@ubuntu1804 ~]# cat /data/manifest.json
[{"Config":"0901fa9da894a8e9de5cb26d6749eaffb67b373dc1ff8a26c46b23b1175c913a.json","RepoTags":["nginx:latest"],"Layers":
["d2cf0fc540bb3be33ee7340498c41fd4fc82c6bb02b9955fca2109e599301dbd/layer.tar","f
4bf863ecdbb8bddb4b3bb271bdd97b067dcb6c95c56f720018abec6af190c6e/layer.tar","517e
3239147277447b60191907bc66168963e0ce8707a6a33532f7c63a0d2f12/layer.tar","0bb74fc
d4b686412f7993916e58c26abd155fa10b10a4dc09a778e7c324c39a2/layer.tar","68c9e9da52
d5a57ee196829ce4a461cc9425b0b920689da9ad547f1da13dbc9d/layer.tar"]}]
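The manifest.json shown above can be inspected with plain shell tools, no Docker needed. A minimal sketch against a trimmed, hand-made sample (the hashes are shortened stand-ins, not the real ones):

```shell
# A trimmed stand-in for the manifest.json listed above
cat > /tmp/manifest.json <<'EOF'
[{"Config":"0901fa9da894.json","RepoTags":["nginx:latest"],"Layers":["d2cf0fc540bb/layer.tar","f4bf863ecdbb/layer.tar"]}]
EOF

# Count the layers: each entry in "Layers" ends in layer.tar
grep -o 'layer.tar' /tmp/manifest.json | wc -l
```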

[root@ubuntu1804 ~]# du -sh /data/*
8.0K /data/0901fa9da894a8e9de5cb26d6749eaffb67b373dc1ff8a26c46b23b1175c913a.json
16K /data/0bb74fcd4b686412f7993916e58c26abd155fa10b10a4dc09a778e7c324c39a2
16K /data/517e3239147277447b60191907bc66168963e0ce8707a6a33532f7c63a0d2f12
16K /data/68c9e9da52d5a57ee196829ce4a461cc9425b0b920689da9ad547f1da13dbc9d
70M /data/d2cf0fc540bb3be33ee7340498c41fd4fc82c6bb02b9955fca2109e599301dbd
62M /data/f4bf863ecdbb8bddb4b3bb271bdd97b067dcb6c95c56f720018abec6af190c6e
16K /data/lost+found
4.0K /data/manifest.json
4.0K /data/repositories

[root@ubuntu1804 ~]# cd /data/d2cf0fc540bb3be33ee7340498c41fd4fc82c6bb02b9955fca2109e599301dbd/
[root@ubuntu1804 d2cf0fc540bb3be33ee7340498c41fd4fc82c6bb02b9955fca2109e599301dbd]# ls
json layer.tar VERSION

[root@ubuntu1804 d2cf0fc540bb3be33ee7340498c41fd4fc82c6bb02b9955fca2109e599301dbd]# tar xf layer.tar
[root@ubuntu1804 d2cf0fc540bb3be33ee7340498c41fd4fc82c6bb02b9955fca2109e599301dbd]# ls
bin dev home layer.tar lib64 mnt proc run srv tmp var
boot etc json lib media opt root sbin sys usr VERSION

[root@ubuntu1804 d2cf0fc540bb3be33ee7340498c41fd4fc82c6bb02b9955fca2109e599301dbd]# cat etc/i
init.d/ issue issue.net

[root@ubuntu1804 d2cf0fc540bb3be33ee7340498c41fd4fc82c6bb02b9955fca2109e599301dbd]# cat etc/issue
Debian GNU/Linux 10 \n \l

Searching for Images

Images can be searched on the official website.

Official site:

http://hub.docker.com
http://dockerhub.com


Searching the official Docker registry for an image name also returns many third-party images.

Use the docker search command to search.

Format:

Usage: docker search [OPTIONS] TERM
Options:
-f, --filter filter Filter output based on conditions provided
--format string Pretty-print search using a Go template
--limit int Max number of search results (default 25)
--no-trunc Don't truncate output

Notes:
OFFICIAL: an official image
AUTOMATED: built automatically by a third-party build service; such images can be pulled directly from the internet, avoiding a tedious manual build

Example:

[root@ubuntu1804 ~]# docker search centos
......

Example: filtering search results

#Search for images with 100+ stars

#Old syntax

[root@ubuntu1804 ~]# docker search -s 100 centos
Flag --stars has been deprecated, use --filter=stars=3 instead
......

#New syntax
[root@ubuntu1804 ~]# docker search --filter=stars=100 centos

Introduction to Alpine


Alpine is a security-oriented, lightweight Linux distribution. Unlike typical distributions, Alpine is built on musl libc and busybox to minimize system size and runtime resource usage, while offering far more functionality than busybox alone, which has earned it growing popularity in the open-source community. While staying slim, Alpine ships its own package manager, apk; package information can be looked up at https://pkgs.alpinelinux.org/packages, or packages can be queried and installed directly with the apk command.

Alpine is maintained by a non-commercial organization and supports a wide range of use cases. It is tuned for experienced/heavy Linux users and focuses on security, performance, and resource efficiency. The Alpine image fits most common scenarios and makes an excellent production-grade base system/environment.

The Alpine Docker image inherits these strengths of the Alpine Linux distribution. Compared with other Docker images it is tiny, only about 5 MB (versus nearly 200 MB for the Ubuntu images), and it has a very friendly package-management mechanism. The official image comes from the docker-alpine project.

Docker now officially recommends Alpine over Ubuntu as the base image environment. This brings several benefits: faster image downloads, better image security, easier switching between hosts, and lower disk usage.

The table below compares official image sizes:

REPOSITORY  TAG     IMAGE ID      VIRTUAL SIZE
alpine      latest  4e38e38c8ce0  4.799 MB
debian      latest  4d6ce913b130  84.98 MB
ubuntu      latest  b39b81afc8ca  188.3 MB
centos      latest  8efe422e6104  210 MB

Example: managing packages on Alpine

#Switch to the Aliyun mirror: replace dl-cdn.alpinelinux.org with mirrors.aliyun.com
vi /etc/apk/repositories
http://mirrors.aliyun.com/alpine/v3.8/main/
http://mirrors.aliyun.com/alpine/v3.8/community/

#Update the package index
apk update

#Install a package
apk add vim

#Remove packages
apk del openssh openntp vim
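The mirror switch above can also be done non-interactively with sed instead of vi. A minimal sketch, run against a throwaway copy so it is safe anywhere (the sample repositories content is assumed):

```shell
# Work on a disposable copy of the repositories file
cat > /tmp/repositories <<'EOF'
http://dl-cdn.alpinelinux.org/alpine/v3.8/main/
http://dl-cdn.alpinelinux.org/alpine/v3.8/community/
EOF

# Replace the CDN host with the Aliyun mirror in place
sed -i 's#dl-cdn.alpinelinux.org#mirrors.aliyun.com#g' /tmp/repositories

cat /tmp/repositories
```

On a real Alpine container the target would be /etc/apk/repositories, followed by apk update.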

Example:

/ # apk --help
/ # apk add nginx
/ # apk info nginx
nginx-1.16.1-r6 description:
HTTP and reverse proxy server (stable version)

nginx-1.16.1-r6 webpage:
https://www.nginx.org/

nginx-1.16.1-r6 installed size:
1126400

~ # apk manifest nginx
sha1:d21a96358a10b731f8847e6d32799efdc2a7f421 etc/logrotate.d/nginx
sha1:50bd6d3b4f3e6b577d852f12cd6939719d2c2db5 etc/init.d/nginx
sha1:379c1e2a2a5ffb8c91a07328d4c9be2bc58799fd etc/nginx/scgi_params
sha1:da38e2a0dded838afbe0eade6cb837ac30fd8046 etc/nginx/fastcgi_params
sha1:cc2fcdb4605dcac23d59f667889ccbdfdc6e3668 etc/nginx/uwsgi_params
sha1:cbf596ddb3433a8e0d325f3c188bec9c1bb746b3 etc/nginx/fastcgi.conf
sha1:e39dbc36680b717ec902fadc805a302f1cf62245 etc/nginx/mime.types
sha1:e9dddf20f1196bb67eef28107438b60c4060f4d3 etc/nginx/nginx.conf
sha1:7b2a4da1a14166442c10cbf9e357fa9fb53542ca etc/nginx/conf.d/default.conf
sha1:cd7f5dc8ccdc838a2d0107511c90adfe318a81e7 etc/conf.d/nginx
sha1:05f050f6ed86c5e6b48c2d2328e81583315431be usr/sbin/nginx
sha1:c3f02ca81f7f2c6bde3f878b3176f225c7781c7d var/lib/nginx/modules
sha1:0510312d465b86769136983657df98c1854f0b60 var/lib/nginx/run
sha1:35db17c18ce0b9f84a3cc113c8a9e94e19f632b1 var/lib/nginx/logs
sha1:7dd71afcfb14e105e80b0c0d7fce370a28a41f0a var/lib/nginx/html/index.html
sha1:95de71d58b37f9f74bede0e91bc381d6059fc2d7 var/lib/nginx/html/50x.html

~ # ls -l /bin
total 824
lrwxrwxrwx 1 root root 12 Jan 16 21:52 arch -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 ash -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 base64 -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 bbconfig -> /bin/busybox
-rwxr-xr-x 1 root root 841288 Jan 15 10:36 busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 cat -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 chgrp -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 chmod -> /bin/busybox
lrwxrwxrwx 1 root root 12 Jan 16 21:52 chown -> /bin/busybox

Recommended base packages for Debian (Ubuntu) based images

Many official software images, such as nginx, tomcat, mysql, and httpd, are based on Debian (Ubuntu), but they lack many common debugging tools. When you need to enter a container for debugging and management, the following common packages can be installed:

# apt update #update the package index before installing
# apt install procps #provides top, ps, free, etc.
# apt install psmisc #provides pstree, killall, etc.
# apt install iputils-ping #provides the ping command
# apt install net-tools #provides netstat and other network tools

Pulling Images

Download an image from a Docker registry to the local host. Command format:

docker pull [OPTIONS] NAME[:TAG|@DIGEST]
Options:
-a, --all-tags Download all tagged images in the repository
--disable-content-trust Skip image verification (default true)
--platform string Set platform if server is multi-platform capable
-q, --quiet Suppress verbose output

NAME: the image name, generally of the form registry-server:port/project/image-name
:TAG: the version; if :TAG is omitted, the latest version is pulled
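The default-tag rule above can be illustrated with a small parsing sketch (the helper name and sample references are made up for illustration; it handles simple references only, not registry hosts with ports):

```shell
# parse_ref: split a simple image reference into repository and tag;
# a missing tag defaults to "latest", mirroring docker pull behavior
parse_ref() {
    ref="$1"
    case "$ref" in
        *:*) repo="${ref%:*}"; tag="${ref##*:}" ;;
        *)   repo="$ref";      tag="latest"     ;;
    esac
    echo "repo=$repo tag=$tag"
}

parse_ref nginx          # repo=nginx tag=latest
parse_ref mysql:5.7.30   # repo=mysql tag=5.7.30
```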

Pull output explained

[root@ubuntu1804 ~]# docker pull hello-world
Using default tag: latest #the latest version is pulled by default
latest: Pulling from library/hello-world
1b930d010525: Pull complete #layers are downloaded separately
Digest: sha256:9572f7cdcee8591948c2963463447a53466950b3fc15a247fcad1917ca215a2f
#digest
Status: Downloaded newer image for hello-world:latest
docker.io/library/hello-world:latest #the full image address

Where pulled images are stored:

/var/lib/docker/overlay2/<image ID>

Note: after download, an image is automatically decompressed, so it may be much larger than the website shows. For example, centos8.1.1911 is only 70MB to download but shows as 237MB once pulled.

Example: pull images from the official Docker registry

docker pull hello-world
docker pull alpine
docker pull busybox
docker pull nginx
docker pull centos
docker pull centos:centos7.7.1908
docker pull docker.io/library/mysql:5.7.30
docker pull mysql:5.6.47

Example: pull the alpine and busybox images and check the storage directory

[root@ubuntu1804 ~]# ls /var/lib/docker/overlay2/
1
[root@ubuntu1804 ~]# du -sh /var/lib/docker/overlay2
8.0K /var/lib/docker/overlay2

[root@ubuntu1804 ~]# ls /var/lib/docker/overlay2/1
[root@ubuntu1804 ~]# docker pull hello-world
[root@ubuntu1804 ~]# docker pull alpine:3.11.3
[root@ubuntu1804 ~]# docker pull busybox
[root@ubuntu1804 ~]# docker pull centos:centos8.1.1911
[root@ubuntu1804 ~]# du -sh /var/lib/docker/overlay2/*
5.9M /var/lib/docker/overlay2/1802616f4c8e0a0b52c839431b6faa8ac21f4bd831548dcbd46943d3f60061fa
16K /var/lib/docker/overlay2/5773b92e1351da5e589d0573d9f22d1ec3be1e0e98edbfcddba4b830f12c7be2
1.3M /var/lib/docker/overlay2/de31641b8d2207de7f08eabb5240474a1aaccfef08b6034dcee02b9623f8d9dc
252M /var/lib/docker/overlay2/f41df336075611f9e358e5eaf2ebd5089920a90ba68760cdec8da03edff362f7
20K /var/lib/docker/overlay2/l

[root@ubuntu1804 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine 3.11.3 e7d92cdc71fe 7 days ago 5.59MB
centos centos8.1.1911 470671670cac 7 days ago 237MB
busybox latest 6d5fcfe5ff17 4 weeks ago 1.22MB
hello-world latest fce289e99eb9 12 months ago 1.84kB

[root@ubuntu1804 ~]# ls -l /var/lib/docker/overlay2/l
total 16
lrwxrwxrwx 1 root root 72 Jan 25 19:51 C5ZTDYHYDTO7BQG6HX36MU6X5K -> ../de31641b8d2207de7f08eabb5240474a1aaccfef08b6034dcee02b9623f8d9dc/diff
lrwxrwxrwx 1 root root 72 Jan 25 19:57 DEXHVNUGFLFJCSJAKISOHQG7JY -> ../f41df336075611f9e358e5eaf2ebd5089920a90ba68760cdec8da03edff362f7/diff
lrwxrwxrwx 1 root root 72 Jan 25 19:51 KJ5IA5AUHFUEQXFKJA7UDUIA7A -> ../1802616f4c8e0a0b52c839431b6faa8ac21f4bd831548dcbd46943d3f60061fa/diff
lrwxrwxrwx 1 root root 72 Jan 25 19:51 ZM3U4WDNHGJJX5DXHA5M4ZWAIW -> ../5773b92e1351da5e589d0573d9f22d1ec3be1e0e98edbfcddba4b830f12c7be2/diff

Example: pull a specific version by TAG

[root@ubuntu1804 ~]# docker pull docker.io/library/mysql:5.7.30

[root@ubuntu1804 ~]# docker pull mysql:5.6.47

Example: pull a specific version by DIGEST

First look up the DIGEST of the desired version on hub.docker.com

[root@ubuntu1804 ~]# docker pull alpine@sha256:156f59dc1cbe233827642e09ed06e259ef6fa1ca9b2e29d52ae14d5e7b79d7f0
sha256:156f59dc1cbe233827642e09ed06e259ef6fa1ca9b2e29d52ae14d5e7b79d7f0: Pulling
from library/alpine
5d2415897100: Pull complete
Digest: sha256:156f59dc1cbe233827642e09ed06e259ef6fa1ca9b2e29d52ae14d5e7b79d7f0
Status: Downloaded newer image for
alpine@sha256:156f59dc1cbe233827642e09ed06e259ef6fa1ca9b2e29d52ae14d5e7b79d7f0
docker.io/library/alpine@sha256:156f59dc1cbe233827642e09ed06e259ef6fa1ca9b2e29d5
2ae14d5e7b79d7f0

[root@ubuntu1804 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine <none> 3c791e92a856 3 weeks ago 5.57MB

Configuring and Optimizing Registry Mirrors

The official Docker image download site is: https://hub.docker.com/


Downloading from the official site can be slow from inside China, so the Docker configuration can be changed to add a registry mirror (accelerator) and speed up image pulls.

Many Chinese companies provide Docker registry mirrors, e.g. Alibaba Cloud, Tencent Cloud, and NetEase Cloud. The following uses Alibaba Cloud as an example.

Getting an accelerator address from Alibaba Cloud

Open http://cr.console.aliyun.com in a browser, register or log in to an Alibaba Cloud account, and click the registry-mirror entry in the left menu; you will get a dedicated accelerator address along with configuration instructions:


Docker registry-mirror configuration

1. Install/upgrade the Docker client
Docker client 1.10.0 or later is recommended; see the docker-ce documentation

2. Configure the registry mirror
Edit the daemon configuration file /etc/docker/daemon.json to use the mirror

mkdir -p /etc/docker

#Docker Hub is blocked in mainland China for policy reasons; the mirror addresses below are usable
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": [
"https://docker.m.daocloud.io",
"https://docker.imgdb.de",
"https://docker-0.unsee.tech",
"https://docker.hlmirror.com",
"https://docker.1ms.run",
"https://func.ink",
"https://lispy.org",
"https://docker.xiaogenban1993.com"
]
}
EOF

#NetEase: http://hub-mirror.c.163.com/
#USTC: https://docker.mirrors.ustc.edu.cn
#Tencent Cloud: https://mirror.ccs.tencentyun.com
#Qiniu: https://reg-mirror.qiniu.com

systemctl daemon-reload
systemctl restart docker

Example: using Alibaba Cloud as a registry mirror

[root@ubuntu1804 ~]# docker info |tail
WARNING: the overlay storage-driver is deprecated, and will be removed in a future release.
ID: IZHJ:WPIN:BRMC:XQUI:VVVR:UVGK:NZBM:YQXT:JDWB:33RS:45V7:SQWJ
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

[root@ubuntu1804 ~]# vim /etc/docker/daemon.json
[root@ubuntu1804 ~]# cat /etc/docker/daemon.json
{
"storage-driver": "overlay",
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}

[root@ubuntu1804 ~]# systemctl daemon-reload
[root@ubuntu1804 ~]# systemctl restart docker
[root@ubuntu1804 ~]# docker info |tail
WARNING: the overlay storage-driver is deprecated, and will be removed in a future release.
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://si7y70hh.mirror.aliyuncs.com/
Live Restore Enabled: false

Example: multiple registry mirrors

[root@ubuntu1804 ~]# vim /etc/docker/daemon.json
{
"registry-mirrors" : [
"http://registry.docker-cn.com",
"http://docker.mirrors.ustc.edu.cn",
"http://hub-mirror.c.163.com"
],
"insecure-registries" : [
"registry.docker-cn.com",
"docker.mirrors.ustc.edu.cn"
],
"debug" : true,
"experimental" : true
}

Listing Local Images

docker images lists the images downloaded to the local host

Format:

docker images [OPTIONS] [REPOSITORY[:TAG]]
docker image ls [OPTIONS] [REPOSITORY[:TAG]]


#Common options:
-q, --quiet Only show numeric IDs
-a, --all Show all images (default hides intermediate images)
--digests Show digests
--no-trunc Don't truncate output
-f, --filter filter Filter output based on conditions provided
--format string Pretty-print images using a Go template

Fields in the output:

REPOSITORY      #the repository the image belongs to
TAG #image version (identifier); defaults to latest
IMAGE ID #unique image ID; identical IDs mean one image with multiple names
CREATED #when the image was created in the repository
VIRTUAL SIZE #image size

Repository

  • A repository holds all iterations of a particular Docker image
  • One Registry can contain multiple Repositories
  • Repositories fall into "top-level repositories" and "user repositories"
  • A user repository name generally has the form "username/repository"
  • Each repository can hold multiple Tags, each tag corresponding to one image

Example:

[root@ubuntu1804 ~]# docker images
[root@ubuntu1804 ~]# docker images -q
e7d92cdc71fe
470671670cac
6d5fcfe5ff17
fce289e99eb9

#Show the full image ID
[root@ubuntu1804 ~]# docker images --no-trunc

#Show only images from a given REPOSITORY
[root@ubuntu1804 ~]# docker images tomcat
REPOSITORY TAG IMAGE ID CREATED SIZE
tomcat 9.0.37-v1 b8d669ebf99e 47 hours ago 652MB
tomcat latest df72227b40e1 5 days ago 647MB

Example: view detailed information about an image

[root@centos8 ~]# podman image inspect alpine

Exporting Images

docker save exports local images into a tar archive, which can then be copied to another server and imported there

Format:

docker save [OPTIONS] IMAGE [IMAGE...]
Options:
-o, --output string Write to a file, instead of STDOUT


#Note:
When docker save exports by IMAGE ID, the imported image has no REPOSITORY or TAG and shows as <none>

Common usage:

#Export as a tar archive
docker save -o /path/file.tar IMAGE1 IMAGE2 ...
docker save IMAGE1 IMAGE2 ... > /path/file.tar

#Export in compressed format
docker save IMAGE1 IMAGE2 ... | gzip > /path/file.tar.gz
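A saved archive can be sanity-checked without importing it, because docker save always places a manifest.json at the top level of the tar. A minimal sketch (the sample archive here is constructed by hand for illustration, since no daemon is assumed):

```shell
# Build a stand-in archive with the same top-level layout docker save produces
mkdir -p /tmp/imgdemo
echo '[{"Config":"abc.json","RepoTags":["demo:latest"]}]' > /tmp/imgdemo/manifest.json
tar -cf /tmp/imgdemo.tar -C /tmp/imgdemo manifest.json

# A valid save archive lists manifest.json at the top level
tar -tf /tmp/imgdemo.tar | grep -x manifest.json
```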

Example: export specific images

[root@ubuntu1804 ~]# docker images

[root@ubuntu1804 ~]# docker save mysql:5.7.30 alpine:3.11.3 -o /data/myimages.tar

#Or
[root@ubuntu1804 ~]# docker save mysql:5.7.30 alpine:3.11.3 > /data/myimages.tar
[root@ubuntu1804 ~]# scp /data/myimages.tar 10.0.0.7:/data

Example: export each image to its own file

[root@centos8 ~]# docker images | awk 'NR!=1{print $1,$2}' | while read repo tag ;do docker save   $repo:$tag -o /opt/$repo-$tag.tar ;done
[root@centos8 ~]# ls /opt/*.tar
/opt/alpine-3.21.3.tar /opt/centos-centos7.7.1908.tar /opt/nginx-latest.tar
/opt/alpine-latest.tar /opt/hello-world-latest.tar
/opt/busybox-latest.tar /opt/my-alpine-latest.tar
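Note that repository names carrying a registry prefix (e.g. docker.io/library/nginx) contain slashes, which would break the output filename in the loop above. A sketch of a sanitizing variant, run against simulated docker images output since no daemon is assumed here:

```shell
# Simulated `docker images` output (header + rows), standing in for a live daemon
docker_images_output='REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/library/nginx latest 53a18edff809 2 months ago 192MB
alpine 3.21.3 60733ce3f702 30 minutes ago 7.83MB'

# Map / and : to - so repo:tag becomes a safe filename
echo "$docker_images_output" | awk 'NR!=1{print $1,$2}' | while read repo tag; do
    file=$(echo "$repo-$tag" | tr '/:' '--')
    echo "would run: docker save $repo:$tag -o /opt/$file.tar"
done
```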

Example: export all images into one archive

#Method 1: export by image ID; the imported images have no REPOSITORY or TAG and show as <none>
[root@ubuntu1804 ~]# docker save `docker images -qa` -o all.tar

#Method 2: export all images into one file; REPOSITORY and TAG are preserved on import
[root@ubuntu1804 ~]# docker save `docker images | awk 'NR!=1{print $1":"$2}'` -o all.tar

#Method 3: export all images into one file; REPOSITORY and TAG are preserved on import
[root@centos8 ~]# docker image save `docker image ls --format "{{.Repository}}:{{.Tag}}"` -o all.tar

Importing Images

docker load imports an image archive (plain or compressed) created by docker save

Format:

docker load [OPTIONS]

#Options
-i, --input string Read from tar archive file, instead of STDIN
-q, --quiet Suppress the load output

Common usage:

docker load -i /path/file.tar
docker load < /path/file.tar

Example: import images

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

[root@rocky8 ~]# docker load -i /opt/alpine-3.21.3.tar
Loaded image: alpine:3.21.3

#Or
[root@rocky8 ~]# docker load < /opt/nginx-latest.tar
1287fbecdfcc: Loading layer 77.84MB/77.84MB
135f786ad046: Loading layer 118.3MB/118.3MB
ad2f08e39a9d: Loading layer 3.584kB/3.584kB
d98dcc720ae0: Loading layer 4.608kB/4.608kB
aa82c57cd9fe: Loading layer 2.56kB/2.56kB
d26dc06ef910: Loading layer 5.12kB/5.12kB
03d9365bc5dc: Loading layer 7.168kB/7.168kB
Loaded image: nginx:latest

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine 3.21.3 60733ce3f702 30 minutes ago 7.83MB
nginx latest 53a18edff809 2 months ago 192MB

Example: export multiple images at once

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine 3.21.3 60733ce3f702 34 minutes ago 7.83MB
nginx latest 53a18edff809 2 months ago 192MB

[root@rocky8 ~]# docker save alpine:3.21.3 nginx -o test.tar
[root@rocky8 ~]# ll -h test.tar
-rw------- 1 root root 195M Apr 7 16:01 test.tar

[root@rocky8 ~]# docker rmi -f `docker images -q`
Untagged: alpine:3.21.3
Deleted: sha256:60733ce3f702b79c9026de89d902b80386827dd39800802081c630ed15a0b1e2
Untagged: nginx:latest
Deleted: sha256:53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0
Deleted: sha256:9624c14fde1debdc1256228b54278fec5e576a42dcbf73f420762a91f4a06c87
Deleted: sha256:75cef3a8c4e762e0d3d0c01fbe5cf9407478057005f945fa78edef29a2bc6e33
Deleted: sha256:bf22610f6a6c90cb4a456617b926c87cb1c50efd3f90b1d96d9c88e5f4b75a6e
Deleted: sha256:8e41d2be566aeafda18718a8a4b8c515c50b06f82cd7a92420ae91010773e15c
Deleted: sha256:da2d6794d8696a98178b6882353953c9f410dcffff428cfa3caa5759036d24bd
Deleted: sha256:e9228041e2928859e124edaf5a456926097605092e1855d51aa9e43f984f770e
Deleted: sha256:1287fbecdfcce6ee8cf2436e5b9e9d86a4648db2d91080377d499737f1b307f3

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

[root@rocky8 ~]# docker load -i test.tar
Loaded image: alpine:3.21.3
1287fbecdfcc: Loading layer 77.84MB/77.84MB
135f786ad046: Loading layer 118.3MB/118.3MB
ad2f08e39a9d: Loading layer 3.584kB/3.584kB
d98dcc720ae0: Loading layer 4.608kB/4.608kB
aa82c57cd9fe: Loading layer 2.56kB/2.56kB
d26dc06ef910: Loading layer 5.12kB/5.12kB
03d9365bc5dc: Loading layer 7.168kB/7.168kB
Loaded image: nginx:latest

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine 3.21.3 60733ce3f702 35 minutes ago 7.83MB
nginx latest 53a18edff809 2 months ago 192MB

Interview question: transfer all images from one host to another

#Method 1: export by image ID; the imported images have no REPOSITORY or TAG and show as <none>
[root@rocky8 ~]# docker save `docker images -qa` -o all.tar
[root@rocky8 ~]# scp all.tar 192.168.1.12:
[root@rocky8 ~]# docker load -i all.tar
08000c18d16d: Loading layer 8.121MB/8.121MB
068f50152bbc: Loading layer 4.516MB/4.516MB
Loaded image ID: sha256:aded1e1a5b3705116fa0a92ba074a5e0b0031647d9c315983ccba2ee5428ec8b
Loaded image ID: sha256:ff7a7936e9306ce4a789cf5523922da5e585dc1216e400efb3b6872a5137ee6b

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> aded1e1a5b37 7 weeks ago 7.83MB
<none> <none> ff7a7936e930 6 months ago 4.28MB

#Method 2: export all images into one file; REPOSITORY and TAG are preserved on import
[root@rocky8 ~]# docker save `docker images | awk 'NR!=1{print $1":"$2}'` -o backup.tar

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

[root@rocky8 ~]# docker load -i backup.tar
08000c18d16d: Loading layer 8.121MB/8.121MB
Loaded image: alpine:latest
068f50152bbc: Loading layer 4.516MB/4.516MB
Loaded image: busybox:latest

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest aded1e1a5b37 7 weeks ago 7.83MB
busybox latest ff7a7936e930 6 months ago 4.28MB

#Method 3: export all images into one file; REPOSITORY and TAG are preserved on import
[root@rocky8 ~]# docker image save `docker images --format "{{.Repository}}:{{.Tag}}"` -o test.tar

[root@rocky8 ~]# docker load -i test.tar
08000c18d16d: Loading layer 8.121MB/8.121MB
Loaded image: alpine:latest
068f50152bbc: Loading layer 4.516MB/4.516MB
Loaded image: busybox:latest

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest aded1e1a5b37 7 weeks ago 7.83MB
busybox latest ff7a7936e930 6 months ago 4.28MB

Deleting Images

docker rmi deletes local images

Format

docker rmi [OPTIONS] IMAGE [IMAGE...]
docker image rm [OPTIONS] IMAGE [IMAGE...]

#Options:
-f, --force Force removal of the image
--no-prune Do not delete untagged parents

Example:

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest aded1e1a5b37 7 weeks ago 7.83MB
busybox latest ff7a7936e930 6 months ago 4.28MB

[root@rocky8 ~]# docker rmi aded
Untagged: alpine:latest
Deleted: sha256:aded1e1a5b3705116fa0a92ba074a5e0b0031647d9c315983ccba2ee5428ec8b
Deleted: sha256:08000c18d16dadf9553d747a58cf44023423a9ab010aab96cf263d2216b8b350

[root@rocky8 ~]# docker rmi busybox
Untagged: busybox:latest
Deleted: sha256:ff7a7936e9306ce4a789cf5523922da5e585dc1216e400efb3b6872a5137ee6b
Deleted: sha256:068f50152bbc6e10c9d223150c9fbd30d11bcfd7789c432152aa0a99703bd03a

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

Example: delete multiple images

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest aded1e1a5b37 7 weeks ago 7.83MB
busybox latest ff7a7936e930 6 months ago 4.28MB

[root@rocky8 ~]# docker rmi alpine:latest busybox:latest
Untagged: alpine:latest
Deleted: sha256:aded1e1a5b3705116fa0a92ba074a5e0b0031647d9c315983ccba2ee5428ec8b
Deleted: sha256:08000c18d16dadf9553d747a58cf44023423a9ab010aab96cf263d2216b8b350
Untagged: busybox:latest
Deleted: sha256:ff7a7936e9306ce4a789cf5523922da5e585dc1216e400efb3b6872a5137ee6b
Deleted: sha256:068f50152bbc6e10c9d223150c9fbd30d11bcfd7789c432152aa0a99703bd03a

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

Example: force-delete an image that is in use; this would also remove the corresponding container (newer versions refuse to delete it)

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ed697ade69d6 centos:centos7.7.1908 "ping 8.8.8.8" 25 seconds ago Up 24 seconds centos7

[root@rocky8 ~]# docker rmi centos:centos7.7.1908
Error response from daemon: conflict: unable to remove repository reference "centos:centos7.7.1908" (must force) - container ed697ade69d6 is using its referenced image 08d05d1d5859

[root@rocky8 ~]# docker rmi -f centos:centos7.7.1908
Untagged: centos:centos7.7.1908

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ed697ade69d6 08d05d1d5859 "ping 8.8.8.8" 50 seconds ago Up 49 seconds centos7

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 08d05d1d5859 5 years ago 204MB

[root@rocky8 ~]# docker rmi -f 08d0
Error response from daemon: conflict: unable to delete 08d05d1d5859 (cannot be forced) - image is being used by running container ed697ade69d6

Example: delete all images

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest aded1e1a5b37 7 weeks ago 7.83MB
busybox latest ff7a7936e930 6 months ago 4.28MB

[root@rocky8 ~]# docker rmi -f `docker images -q`
Untagged: alpine:latest
Deleted: sha256:aded1e1a5b3705116fa0a92ba074a5e0b0031647d9c315983ccba2ee5428ec8b
Deleted: sha256:08000c18d16dadf9553d747a58cf44023423a9ab010aab96cf263d2216b8b350
Untagged: busybox:latest
Deleted: sha256:ff7a7936e9306ce4a789cf5523922da5e585dc1216e400efb3b6872a5137ee6b
Deleted: sha256:068f50152bbc6e10c9d223150c9fbd30d11bcfd7789c432152aa0a99703bd03a

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

Tagging Images

docker tag tags an image, much like giving it an alias; to push to a given registry, the tag usually must follow that registry's naming convention

Format

docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

#General form of TARGET_IMAGE[:TAG]
registry-host-FQDN-or-IP[:port]/project-(or user)-name/image-name:version

TAG defaults to latest

Example:

[root@rocky8 ~]# docker images 
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest aded1e1a5b37 7 weeks ago 7.83MB
busybox latest ff7a7936e930 6 months ago 4.28MB

[root@rocky8 ~]# docker tag alpine alpine:3.11
[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine 3.11 aded1e1a5b37 7 weeks ago 7.83MB
alpine latest aded1e1a5b37 7 weeks ago 7.83MB
busybox latest ff7a7936e930 6 months ago 4.28MB

Summary: common image operations in the enterprise: search, pull, export, import, delete

Command summary:

docker search centos
docker pull alpine
docker images
docker save centos > /opt/centos.tar #export an image
docker load -i /opt/centos.tar #import a local image
docker rmi <image ID or name> #delete the given image; an image with running containers cannot be deleted until all its containers are stopped and removed

Basic Container Commands

Container lifecycle

Container-related commands

[root@rocky8 ~]# docker container 

Usage: docker container COMMAND

Manage containers

Commands:
attach Attach local standard input, output, and error streams to a running container
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
exec Execute a command in a running container
export Export a container's filesystem as a tar archive
inspect Display detailed information on one or more containers
kill Kill one or more running containers
logs Fetch the logs of a container
ls List containers
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
prune Remove all stopped containers
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
run Create and run a new container from an image
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
wait Block until one or more containers stop, then print their exit codes

Run 'docker container COMMAND --help' for more information on a command.

Starting Containers

docker run starts a container, optionally attaches to it, and generates a random container ID and name

Starting the first container

Example: run Docker's hello world

[root@rocky8 ~]# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

For more examples and ideas, visit:
https://docs.docker.com/get-started/

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest 74cc54e27dc4 2 months ago 10.1kB

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
baeee841d085 hello-world "/hello" 24 seconds ago Exited (0) 24 seconds ago quizzical_curran

How container startup works

Usage of docker run

Help: man docker-run

Command format:

docker run [OPTIONS] [IMAGE] [COMMAND] [ARG...]

#Options:
-i, --interactive Keep STDIN open even if not attached; usually used together with -t
-t, --tty Allocate a pseudo-TTY; usually used with -i. Note the container must run a shell to support attaching
-d, --detach Run container in background and print container ID; the default is foreground
--name string Assign a name to the container
-h, --hostname string Container host name
--rm Automatically remove the container when it exits
-p, --publish list Publish a container's port(s) to the host
-P, --publish-all Publish all exposed ports to random ports
--dns list Set custom DNS servers
--entrypoint string Overwrite the default ENTRYPOINT of the image
--restart policy
--privileged Give extended privileges to container
-e, --env=[] Set environment variables
--env-file=[] Read in a line delimited file of environment variables

--restart accepts four different policies

policy Description
no Default. Do not automatically restart the container when it exits.
on-failure[:max-retries] Restart only if the container exits with a non-zero exit status. Optionally, limit the number of restart retries the Docker daemon attempts.
always Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container. This can be used to start containers automatically at boot.
unless-stopped Always restart the container regardless of the exit status, but do not start it on daemon startup if the container has been put to a stopped state before.

Note: after startup, if there is no foreground process running inside the container, it will exit and stop automatically

Exit the container and stop it:

exit

Exit the container without stopping it:

Press the three keys ctrl+p+q together

Example: run a container

#A random string is automatically generated as the container name
[root@rocky8 ~]# docker run alpine
[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1510db110e72 alpine "/bin/sh" 5 seconds ago Exited (0) 4 seconds ago quizzical_blackwell

Example: run a one-off command in a container

#The container exits as soon as the shell command finishes; useful for testing
[root@rocky8 ~]# docker run busybox echo "Hello WANG"
Hello WANG

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4bcd7fa5126a busybox "echo 'Hello WANG'" 12 seconds ago Exited (0) 10 seconds ago frosty_matsumoto
1510db110e72 alpine "/bin/sh" About a minute ago Exited (0) About a minute ago quizzical_blackwell

Example: specify a container name

#Note: each container name must be unique
[root@rocky8 ~]# docker run --name a1 alpine
[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fabb2001c932 alpine "/bin/sh" 5 seconds ago Exited (0) 4 seconds ago a1
4bcd7fa5126a busybox "echo 'Hello WANG'" About a minute ago Exited (0) About a minute ago frosty_matsumoto
1510db110e72 alpine "/bin/sh" 2 minutes ago Exited (0) 2 minutes ago quizzical_blackwell

Example: run an interactive container and exit

[root@rocky8 ~]# docker run -it busybox sh
/ # exit

#After exiting with exit, the container also stops
[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
901cc5325c3d busybox "sh" 32 seconds ago Exited (0) 26 seconds ago

[root@rocky8 ~]# docker run -it busybox sh
/ # press the three keys ctrl+p+q together

#After detaching with ctrl+p+q, the container does not stop
[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
13bdfadb18b5 busybox "sh" 42 seconds ago Up 41 seconds friendly_wozniak

Example: set the hostname inside the container

[root@rocky8 ~]# docker run -it --name a1 -h a1.wang.org alpine
/ # hostname
a1.wang.org

/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 a1.wang.org a1

/ # cat /etc/resolv.conf
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 223.5.5.5
nameserver 223.6.6.6

# Based on host file: '/etc/resolv.conf' (legacy)
# Overrides: []

Example: run a throwaway container that is removed as soon as it exits; useful for testing

[root@rocky8 ~]# docker run --rm alpine cat /etc/issue
Welcome to Alpine Linux 3.21
Kernel \r on an \m (\l)

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

Example: create a container, enter it directly, and exit

Two ways to exit:

  • exit — the container stops as well
  • press ctrl+p+q — the container keeps running
#After exiting with exit, the container shuts down
[root@rocky8 ~]# docker run -it --name alpine2 alpine
/ # cat /etc/issue
Welcome to Alpine Linux 3.21
Kernel \r on an \m (\l)

/ # exit #exit the container; it stops running as well

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
46c789f613b3 alpine "/bin/sh" 37 seconds ago Exited (0) 5 seconds ago alpine2

[root@rocky8 ~]# docker run -it --name alpine3 alpine
/ # #detach by pressing ctrl+p+q; the container does not stop

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e2d2b939be82 alpine "/bin/sh" 17 seconds ago Up 16 seconds alpine3
46c789f613b3 alpine "/bin/sh" About a minute ago Exited (0) 33 seconds ago alpine2

什么是守护式容器:

  • 能够长期运行
  • 无需交互式会话
  • 适合运行应用程序和服务
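后文范例中的守护式容器,其容器内主进程通常是一个不退出的前台循环。该核心逻辑用本机 shell 即可演示(以下为示意片段,仅循环 3 次以便观察,实际容器中通常写成 while true):

```shell
# 守护式进程的最小示意:前台循环输出
# 实际容器中通常写成 while true,这里只循环 3 次便于演示
i=1
while [ $i -le 3 ]; do
    echo "hello$i"
    i=$((i+1))
done
```

后文 docker run -d alpine /bin/sh -c '...' 中传入的就是这类循环命令:只要该前台主进程不退出,容器就保持运行。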

范例: 启动前台守护式容器

[root@rocky8 ~]# docker run nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2025/04/07 08:55:01 [notice] 1#1: using the "epoll" event method
2025/04/07 08:55:01 [notice] 1#1: nginx/1.27.4
2025/04/07 08:55:01 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2025/04/07 08:55:01 [notice] 1#1: OS: Linux 4.18.0-553.el8_10.x86_64
2025/04/07 08:55:01 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2025/04/07 08:55:01 [notice] 1#1: start worker processes
2025/04/07 08:55:01 [notice] 1#1: start worker process 28
2025/04/07 08:55:01 [notice] 1#1: start worker process 29
2025/04/07 08:55:01 [notice] 1#1: start worker process 30
2025/04/07 08:55:01 [notice] 1#1: start worker process 31

#另一个终端进入nginx容器
[root@rocky8 ~]# docker exec -it f35ebf8bb84f sh
# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 f35ebf8bb84f #IP地址
# ctrl+p+q

[root@rocky8 ~]# docker run --rm --name b1 busybox wget -qO - 172.17.0.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

范例: 启动后台守护式容器

[root@rocky8 ~]# docker run -d nginx
10047369f7c26ba8b0d0a32705bbd77a96fc4ac729db979f33c50570e4c1648e

[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
10047369f7c2 nginx "/docker-entrypoint.…" 4 seconds ago Up 3 seconds 80/tcp competent_visvesvaraya

#有些容器后台启动不会持续运行
[root@rocky8 ~]# docker run -d --name alpine4 alpine
d3762c34560b5e1855e8b492ba0d0972769ea38aefce0d495b27850d51e0f175

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3762c34560b alpine "/bin/sh" 9 seconds ago Exited (0) 8 seconds ago alpine4
10047369f7c2 nginx "/docker-entrypoint.…" 53 seconds ago Up 51 seconds 80/tcp competent_visvesvaraya
f35ebf8bb84f nginx "/docker-entrypoint.…" 5 minutes ago Up 5 minutes 80/tcp musing_cray
e2d2b939be82 alpine "/bin/sh" 7 minutes ago Up 7 minutes alpine3
46c789f613b3 alpine "/bin/sh" 7 minutes ago Exited (0) 7 minutes ago alpine2

[root@rocky8 ~]# docker run -td --name alpine5 alpine
185a22f886147340c5207585ae578b13ab32b843ee3616e03fd67602d690f44b

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
185a22f88614 alpine "/bin/sh" 3 seconds ago Up 2 seconds alpine5
d3762c34560b alpine "/bin/sh" 58 seconds ago Exited (0) 57 seconds ago alpine4
10047369f7c2 nginx "/docker-entrypoint.…" About a minute ago Up About a minute 80/tcp competent_visvesvaraya
f35ebf8bb84f nginx "/docker-entrypoint.…" 5 minutes ago Up 5 minutes 80/tcp musing_cray
e2d2b939be82 alpine "/bin/sh" 7 minutes ago Up 7 minutes alpine3
46c789f613b3 alpine "/bin/sh" 8 minutes ago Exited (0) 8 minutes ago alpine2

范例: 开机自动运行容器

#默认容器不会自动启动
[root@rocky8 ~]# docker run -d --name nginx -p 80:80 nginx
f3816fb172d300183ada1aa0e31a3561f66dcc162389533293d480abe48210a9

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3816fb172d3 nginx "/docker-entrypoint.…" 2 seconds ago Up 1 second 0.0.0.0:80->80/tcp, :::80->80/tcp nginx

[root@rocky8 ~]# reboot
[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

#设置容器总是运行
[root@rocky8 ~]# docker run -d --name ngnix --restart=always -p 80:80 nginx
047e2e36e07cabe62bd3538507ce8da45d45a0ee50171a868fb3106d2f29af33

[root@rocky8 ~]# reboot
[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
047e2e36e07c nginx "/docker-entrypoint.…" 35 seconds ago Up 12 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp ngnix

--privileged 选项

大约在 0.6 版,--privileged 选项被引入 docker。使用该选项时,容器内的 root 拥有真正的 root 权限;否则,容器内的 root 只相当于宿主机上的一个普通用户权限。以 --privileged 启动的容器可以看到宿主机上的很多设备,并且可以执行 mount,甚至允许在 docker 容器中再启动 docker 容器。

范例: 使用 --privileged 让容器获取 root 权限

[root@rocky8 ~]# docker run -it centos:centos7.7.1908
[root@939a9ec34aef /]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

[root@939a9ec34aef /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
|-sda1 8:1 0 1G 0 part
|-sda2 8:2 0 100G 0 part
|-sda3 8:3 0 50G 0 part
|-sda4 8:4 0 1K 0 part
`-sda5 8:5 0 2G 0 part [SWAP]
sr0 11:0 1 7G 0 rom
[root@382ab09932a7 /]# mount /dev/sda3 /mnt
mount: /mnt: permission denied.

[root@939a9ec34aef /]# exit
exit

#利用 --privileged 选项运行容器
[root@rocky8 ~]# docker run -it --privileged centos:centos7.7.1908
#可以看到宿主机的设备
[root@a6391a8f82e3 /]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
|-sda1 8:1 0 1G 0 part
|-sda2 8:2 0 100G 0 part
|-sda3 8:3 0 50G 0 part
|-sda4 8:4 0 1K 0 part
`-sda5 8:5 0 2G 0 part [SWAP]
sr0 11:0 1 7G 0 rom

[root@a6391a8f82e3 /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 104806400 2754832 102051568 3% /
tmpfs 65536 0 65536 0% /dev
tmpfs 408092 5892 402200 2% /etc/hosts
shm 64000 0 64000 0% /dev/shm
tmpfs 408092 0 408092 0% /sys/fs/cgroup
[root@a6391a8f82e3 /]# mount /dev/sda3 /mnt

[root@a6391a8f82e3 /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 104806400 2754632 102051768 3% /
tmpfs 65536 0 65536 0% /dev
tmpfs 408092 5892 402200 2% /etc/hosts
shm 64000 0 64000 0% /dev/shm
tmpfs 408092 0 408092 0% /sys/fs/cgroup
/dev/sda3 52403200 619068 51784132 2% /mnt

范例: 运行docker官方文档容器

[root@centos8 ~]# docker run -it -d -p 4000:4000 docs/docker.github.io:latest
[root@centos8 ~]# docker images docs/docker.github.io
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/docs/docker.github.io latest ffd9131eeee7 2 days ago 1.99 GB
#用浏览器访问http://localhost:4000/可以看到下面docker文档资料


查看容器信息

显示当前存在容器

格式

docker ps [OPTIONS]
docker container ls [OPTIONS]

选项:
-a, --all Show all containers (default shows just running)
-q, --quiet Only display numeric IDs
-s, --size Display total file sizes
-f, --filter filter Filter output based on conditions provided
-l, --latest Show the latest created container (includes all states)
-n, --last int Show n last created containers (includes all states) (default -1)

范例:

#显示运行的容器
[root@rocky8 ~]# docker ps

#显示全部容器,包括退出状态的容器
[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
76818e032218 centos:centos7.7.1908 "/bin/bash" 29 minutes ago Exited (0) 23 minutes ago magical_kapitsa
1458fb8f2e1c centos:centos7.7.1908 "/bin/bash" 29 minutes ago Exited (0) 29 minutes ago admiring_diffie
939a9ec34aef centos:centos7.7.1908 "/bin/bash" 32 minutes ago Exited (127) 29 minutes ago priceless_jepsen

#只显示容器ID
[root@rocky8 ~]# docker ps -aq
76818e032218
1458fb8f2e1c
939a9ec34aef

#显示容器大小
[root@rocky8 ~]# docker ps -as
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES SIZE
76818e032218 centos:centos7.7.1908 "/bin/bash" 29 minutes ago Exited (0) 24 minutes ago magical_kapitsa 23B (virtual 204MB)
1458fb8f2e1c centos:centos7.7.1908 "/bin/bash" 30 minutes ago Exited (0) 29 minutes ago admiring_diffie 14B (virtual 204MB)
939a9ec34aef centos:centos7.7.1908 "/bin/bash" 32 minutes ago Exited (127) 30 minutes ago priceless_jepsen 44B (virtual 204MB)

#显示最新创建的容器(停止的容器也能显示)
[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
76818e032218 centos:centos7.7.1908 "/bin/bash" 30 minutes ago Exited (0) 24 minutes ago magical_kapitsa

范例: 显示指定状态的容器

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7916dfca4670 nginx "/docker-entrypoint.…" 16 seconds ago Up 14 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp ngnix
76818e032218 centos:centos7.7.1908 "/bin/bash" 30 minutes ago Exited (0) 25 minutes ago magical_kapitsa
1458fb8f2e1c centos:centos7.7.1908 "/bin/bash" 31 minutes ago Exited (0) 30 minutes ago admiring_diffie
939a9ec34aef centos:centos7.7.1908 "/bin/bash" 33 minutes ago Exited (127) 31 minutes ago priceless_jepsen

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7916dfca4670 nginx "/docker-entrypoint.…" 20 seconds ago Up 19 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp ngnix

#查看退出状态的容器
[root@rocky8 ~]# docker ps -f 'status=exited'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
76818e032218 centos:centos7.7.1908 "/bin/bash" 31 minutes ago Exited (0) 25 minutes ago magical_kapitsa
1458fb8f2e1c centos:centos7.7.1908 "/bin/bash" 31 minutes ago Exited (0) 31 minutes ago admiring_diffie
939a9ec34aef centos:centos7.7.1908 "/bin/bash" 34 minutes ago Exited (127) 32 minutes ago priceless_jepsen
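除 -f 过滤器外,也可以用文本工具对 docker ps 的输出做筛选。下面用两行样例文本代替真实的 docker ps 输出,演示用 awk 按状态关键字提取容器 ID(示意用法):

```shell
# 用 awk 按状态关键字筛选容器 ID(此处用样例文本代替 docker ps 输出)
cat <<'EOF' | awk '/Exited/ {print $1}'
76818e032218 centos:centos7.7.1908 "/bin/bash" Exited (0) 25 minutes ago
7916dfca4670 nginx "/docker-entrypoint" Up 20 seconds
EOF
```

实际使用时可写成 docker ps -a | awk '/Exited/ {print $1}',效果与 docker ps -qf status=exited 类似。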

查看容器内的进程

docker top CONTAINER [ps OPTIONS]

范例:

[root@rocky8 ~]# docker run -d httpd
0186ab10e78a7b1a42e4ff8a96efc41e83e9070d9ff536a8f12fc56f95bcf9f3

[root@rocky8 ~]# docker top 0186ab
UID PID PPID C STIME TTY TIME CMD
root 2631 2610 0 17:45 ? 00:00:00 httpd -DFOREGROUND
33 2655 2631 0 17:45 ? 00:00:00 httpd -DFOREGROUND
33 2656 2631 0 17:45 ? 00:00:00 httpd -DFOREGROUND
33 2657 2631 0 17:45 ? 00:00:00 httpd -DFOREGROUND

[root@rocky8 ~]# docker run -d alpine /bin/sh -c 'i=1;while true;do echo hello$i;let i++;sleep 1;done'
75ac4bb8b6fa2417fef6aa09dbe33ed112bbcd6fa900433b2b29de85fef40efe

[root@rocky8 ~]# docker top 75ac4b
UID PID PPID C STIME TTY TIME CMD
root 2813 2791 0 17:46 ? 00:00:00 /bin/sh -c i=1;while true;do echo hello$i;let i++;sleep 1;done
root 2848 2813 0 17:46 ? 00:00:00 sleep 1

查看容器资源使用情况

docker stats [OPTIONS] [CONTAINER...]

Display a live stream of container(s) resource usage statistics

Options:
-a, --all Show all containers (default shows just running)
--format string Pretty-print images using a Go template
--no-stream Disable streaming stats and only pull the first result
--no-trunc Do not truncate output

范例:

[root@rocky8 ~]# docker stats 75ac4b

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
75ac4bb8b6fa elegant_leavitt 0.01% 2.32MiB / 3.799GiB 0.06% 866B / 0B 3.33MB / 0B 2

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
75ac4bb8b6fa elegant_leavitt 0.11% 2.336MiB / 3.799GiB 0.06% 866B / 0B 3.33MB / 0B 2

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
75ac4bb8b6fa elegant_leavitt 0.11% 2.336MiB / 3.799GiB 0.06% 866B / 0B 3.33MB / 0B 2

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
75ac4bb8b6fa elegant_leavitt 0.11% 2.336MiB / 3.799GiB 0.06% 866B / 0B 3.33MB / 0B 2


#默认启动elasticsearch会使用较多的内存
[root@rocky8 ~]# docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.6.2
55debcbd3f0fcd123096478d923d6a85a30a8c0a65c1f979e861fd42dee192c4

[root@rocky8 ~]# curl 192.168.1.11:9200
{
"name" : "55debcbd3f0f",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "jei5FPBRT9-p0kcDIuYqlw",
"version" : {
"number" : "7.6.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
"build_date" : "2020-03-26T06:34:37.794943Z",
"build_snapshot" : false,
"lucene_version" : "8.4.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}

#查看所有容器
[root@rocky8 ~]# docker stats

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
55debcbd3f0f elasticsearch 1.19% 1.265GiB / 3.799GiB 33.29% 1.29kB / 984B 0B / 1.89MB 47
75ac4bb8b6fa elegant_leavitt 0.16% 2.457MiB / 3.799GiB 0.06% 1.02kB / 0B 3.33MB / 0B 2
0186ab10e78a elegant_cannon 0.00% 25.16MiB / 3.799GiB 0.65% 1.02kB / 0B 483kB / 0B 82
7916dfca4670 ngnix 0.00% 5.18MiB / 3.799GiB 0.13% 1.31kB / 0B 1.36MB / 23.6kB 5

#限制内存使用大小
[root@rocky8 ~]# docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -e ES_JAVA_OPTS="-Xms64m -Xmx128m" elasticsearch:7.6.2
8544b4374f6d6534f15be516274805a993cc65f62af0ada370370b23b2470748

[root@rocky8 ~]# docker stats

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
8544b4374f6d elasticsearch 1.17% 385.7MiB / 3.799GiB 9.92% 806B / 0B 0B / 1.69MB 48

查看容器的详细信息

docker inspect 可以查看docker各种对象的详细信息,包括:镜像,容器,网络等

docker inspect [OPTIONS] NAME|ID [NAME|ID...]
Options:
-f, --format string Format the output using the given Go template
-s, --size Display total file sizes if the type is container

范例:

[root@rocky8 ~]# docker run -d alpine /bin/sh -c 'i=1;while true;do echo hello$i;let i++;sleep 1;done'
08a0f574a380ba09be166e93f1fc16338d304c9c2ce22b075761137c7f784c1f

[root@rocky8 ~]# docker inspect 08a0f5
[
{
"Id": "08a0f574a380ba09be166e93f1fc16338d304c9c2ce22b075761137c7f784c1f",
"Created": "2025-04-07T10:12:39.096014483Z",
"Path": "/bin/sh",
"Args": [
"-c",
"i=1;while true;do echo hello$i;let i++;sleep 1;done"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 4206,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-04-07T10:12:39.473043374Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:aded1e1a5b3705116fa0a92ba074a5e0b0031647d9c315983ccba2ee5428ec8b",
"ResolvConfPath": "/var/lib/docker/containers/08a0f574a380ba09be166e93f1fc16338d304c9c2ce22b075761137c7f784c1f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/08a0f574a380ba09be166e93f1fc16338d304c9c2ce22b075761137c7f784c1f/hostname",
"HostsPath": "/var/lib/docker/containers/08a0f574a380ba09be166e93f1fc16338d304c9c2ce22b075761137c7f784c1f/hosts",
"LogPath": "/var/lib/docker/containers/08a0f574a380ba09be166e93f1fc16338d304c9c2ce22b075761137c7f784c1f/08a0f574a380ba09be166e93f1fc16338d304c9c2ce22b075761137c7f784c1f-json.log",
"Name": "/ecstatic_goodall",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "bridge",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
24,
105
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware",
"/sys/devices/virtual/powercap"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/d3d1532ed4844473a09b14337e529c428c0264b52ae86642ce90da8415721c09-init/diff:/var/lib/docker/overlay2/1f2e711c2a688ce87a4484c61dd45092a07dff03f1af2802a01dcfc9f89f9947/diff",
"MergedDir": "/var/lib/docker/overlay2/d3d1532ed4844473a09b14337e529c428c0264b52ae86642ce90da8415721c09/merged",
"UpperDir": "/var/lib/docker/overlay2/d3d1532ed4844473a09b14337e529c428c0264b52ae86642ce90da8415721c09/diff",
"WorkDir": "/var/lib/docker/overlay2/d3d1532ed4844473a09b14337e529c428c0264b52ae86642ce90da8415721c09/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "08a0f574a380",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"i=1;while true;do echo hello$i;let i++;sleep 1;done"
],
"Image": "alpine",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "eb790be35916ad9a95540dbdf5de9d9d3046d7e1a43cd9add0e02c9073b8fc7c",
"SandboxKey": "/var/run/docker/netns/eb790be35916",
"Ports": {},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "c42012ade215635feb2b1b63f41050340079d912dc528205997a1e389808893e",
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:03",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"MacAddress": "02:42:ac:11:00:03",
"NetworkID": "f2c0f57eee5abd7e3329ea9b7363e06af0230327e5f9fbc8d384b8b572c5ca59",
"EndpointID": "c42012ade215635feb2b1b63f41050340079d912dc528205997a1e389808893e",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DriverOpts": null,
"DNSNames": null
}
}
}
}
]

#选择性查看
[root@rocky8 ~]# docker inspect -f "{{.Metadata}}" 08a0f5

template parsing error: template: :1:2: executing "" at <.Metadata>: map has no entry for key "Metadata"

[root@rocky8 ~]# docker inspect -f "{{.Created}}" 08a0f5
2025-04-07T10:12:39.096014483Z

[root@rocky8 ~]# docker inspect --format "{{.Created}}" 08a0f5
2025-04-07T10:12:39.096014483Z

删除容器

docker rm 可以删除容器;加 -f 选项后,即使容器正在运行当中,也可以被强制删除

格式

docker rm [OPTIONS] CONTAINER [CONTAINER...]
docker container rm [OPTIONS] CONTAINER [CONTAINER...]

#选项:
-f, --force Force the removal of a running container (uses SIGKILL)
-v, --volumes Remove the volumes associated with the container

#删除停止的容器
docker container prune [OPTIONS]
Options:
--filter filter Provide filter values (e.g. 'until=<timestamp>')
-f, --force Do not prompt for confirmation

范例:

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
08a0f574a380 alpine "/bin/sh -c 'i=1;whi…" 14 hours ago Exited (137) 12 hours ago ecstatic_goodall
8544b4374f6d elasticsearch:7.6.2 "/usr/local/bin/dock…" 14 hours ago Exited (143) 12 hours ago elasticsearch

[root@rocky8 ~]# docker rm 08a0f574a380
08a0f574a380

[root@rocky8 ~]# docker rm elasticsearch
elasticsearch

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

范例: 删除所有容器

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
85840eb8ad82 alpine "ping 8.8.8.8" 4 seconds ago Up 3 seconds a1
6c8aafb81343 nginx "/docker-entrypoint.…" 36 seconds ago Up 35 seconds 80/tcp relaxed_tu
8a54f8d25ecd httpd "httpd-foreground" 41 seconds ago Up 40 seconds 80/tcp sad_montalcini

[root@rocky8 ~]# docker rm -f `docker ps -aq`
85840eb8ad82
6c8aafb81343
8a54f8d25ecd

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

[root@rocky8 ~]# docker ps -aq | xargs docker rm -f
2f088e3f3827
7b6de60ca4de
daf51f6b687b
5866c2002463
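上面两种批量删除写法在没有任何容器时行为不同:反引号展开为空后,docker rm 会因缺少参数而报错;而 GNU xargs 加上 -r 选项可以在输入为空时不执行命令。下面用 echo 代替 docker rm 演示(示意,-r 为 GNU xargs 的扩展选项):

```shell
# 输入非空时,xargs 把 ID 列表拼接到命令末尾
printf '%s\n' id1 id2 | xargs -r echo docker rm -f
# 输出: docker rm -f id1 id2

# 输入为空时,-r 使 xargs 不执行命令,也就不会报错
printf '%s' '' | xargs -r echo docker rm -f
# 无输出
```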

范例: 删除指定状态的容器

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
736f59e27fe4 alpine "/bin/sh" 39 seconds ago Exited (0) 38 seconds ago reverent_burnell
0227d775fbe9 alpine "ping 8.8.8.8" 39 seconds ago Up 38 seconds gracious_austin
0f5ae325ed84 nginx "/docker-entrypoint.…" 40 seconds ago Up 38 seconds 80/tcp intelligent_chandrasekhar
7b03de1b8d6b httpd "httpd-foreground" 40 seconds ago Up 39 seconds 80/tcp elated_elgamal

[root@rocky8 ~]# docker rm `docker ps -qf status=exited`
736f59e27fe4

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0227d775fbe9 alpine "ping 8.8.8.8" 47 seconds ago Up 46 seconds gracious_austin
0f5ae325ed84 nginx "/docker-entrypoint.…" 48 seconds ago Up 47 seconds 80/tcp intelligent_chandrasekhar
7b03de1b8d6b httpd "httpd-foreground" 48 seconds ago Up 47 seconds 80/tcp elated_elgamal

范例: 删除所有停止的容器

[root@rocky8 ~]# docker stop `docker ps -qa`
0227d775fbe9
0f5ae325ed84
7b03de1b8d6b

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0227d775fbe9 alpine "ping 8.8.8.8" 3 minutes ago Exited (137) 10 seconds ago gracious_austin
0f5ae325ed84 nginx "/docker-entrypoint.…" 3 minutes ago Exited (0) 20 seconds ago intelligent_chandrasekhar
7b03de1b8d6b httpd "httpd-foreground" 3 minutes ago Exited (0) 19 seconds ago elated_elgamal

[root@rocky8 ~]# docker container prune -f
Deleted Containers:
0227d775fbe91cb4845f27d2339534e3ffc95f0dff099bed13443be7ba0cbc34
0f5ae325ed84d686ded23fbbac5fcfbf46cd814cadbbacbd0025f683398647fd
7b03de1b8d6ba6ae243fa9a7a15a5525cd55c8d9dc72f18fdbb5bd2a7615da7d

Total reclaimed space: 1.093kB

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

容器的启动和停止

格式

docker start|stop|restart|pause|unpause 容器ID

批量正常启动或关闭所有容器

docker start $(docker ps -a -q)  
docker stop $(docker ps -a -q)

范例:

[root@rocky8 ~]# docker run -d --name nginx1 nginx
9da772bc04c74891bbf755d39f48018dd0be69ffb7f618c554a3cebcc128a7ba

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9da772bc04c7 nginx "/docker-entrypoint.…" 5 seconds ago Up 5 seconds 80/tcp nginx1

[root@rocky8 ~]# docker stop nginx1
nginx1

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9da772bc04c7 nginx "/docker-entrypoint.…" 19 seconds ago Exited (0) 2 seconds ago nginx1

[root@rocky8 ~]# docker start nginx1
nginx1

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9da772bc04c7 nginx "/docker-entrypoint.…" 32 seconds ago Up 2 seconds 80/tcp nginx1

[root@rocky8 ~]# docker restart nginx1
nginx1

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9da772bc04c7 nginx "/docker-entrypoint.…" 39 seconds ago Up 1 second 80/tcp nginx1

范例: 启动并进入容器

[root@rocky8 ~]# docker run --name c1 -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
5a7813e071bf: Pull complete
Digest: sha256:72297848456d5d37d1262630108ab308d3e9ec7ed1c3286a32fe09856619a782
Status: Downloaded newer image for ubuntu:latest
root@087d1e8eb24c:/# exit
exit

[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
087d1e8eb24c ubuntu "bash" 13 seconds ago Exited (0) 3 seconds ago c1

[root@rocky8 ~]# docker start c1
c1

[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
087d1e8eb24c ubuntu "bash" 21 seconds ago Up 1 second c1

[root@rocky8 ~]# docker stop c1
c1

[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
087d1e8eb24c ubuntu "bash" 51 seconds ago Exited (137) 5 seconds ago c1

#启动并进入容器
[root@rocky8 ~]# docker start -i c1
root@087d1e8eb24c:/# exit
exit

[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
087d1e8eb24c ubuntu "bash" 2 minutes ago Exited (0) 2 seconds ago c1

范例: 启动和停止所有容器

[root@rocky8 ~]# docker rm -f `docker ps -aq`
087d1e8eb24c
9da772bc04c7

[root@rocky8 ~]# docker run -d --name nginx1 nginx
e091ef51076092115a362c2e7b31a44fef580efc5fbbd681e49fb8c926ce7e4d

[root@rocky8 ~]# docker run -d --name nginx2 nginx
1385dac196bfd5ee5fcb33ee124b29ea6837eab60a7f6b869149974b7df7a1f7

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1385dac196bf nginx "/docker-entrypoint.…" 3 seconds ago Up 2 seconds 80/tcp nginx2
e091ef510760 nginx "/docker-entrypoint.…" 7 seconds ago Up 6 seconds 80/tcp nginx1

[root@rocky8 ~]# docker stop `docker ps -aq`
1385dac196bf
e091ef510760

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1385dac196bf nginx "/docker-entrypoint.…" 16 seconds ago Exited (0) 3 seconds ago nginx2
e091ef510760 nginx "/docker-entrypoint.…" 20 seconds ago Exited (0) 3 seconds ago nginx1

[root@rocky8 ~]# docker start `docker ps -aq`
1385dac196bf
e091ef510760

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1385dac196bf nginx "/docker-entrypoint.…" 26 seconds ago Up 4 seconds 80/tcp nginx2
e091ef510760 nginx "/docker-entrypoint.…" 30 seconds ago Up 3 seconds 80/tcp nginx1

范例: 暂停和恢复容器

[root@rocky8 ~]# docker run -d --name n1 nginx
3ea6c5504968a12dbe9c3d1bb274888b8157a3b05ca664e164695e07e5516bff

[root@rocky8 ~]# docker top n1
UID PID PPID C STIME TTY TIME CMD
root 6206 6185 0 08:30 ? 00:00:00 nginx: master process nginx -g daemon off;
101 6250 6206 0 08:30 ? 00:00:00 nginx: worker process
101 6251 6206 0 08:30 ? 00:00:00 nginx: worker process
101 6252 6206 0 08:30 ? 00:00:00 nginx: worker process
101 6253 6206 0 08:30 ? 00:00:00 nginx: worker process

[root@rocky8 ~]# ps aux | grep nginx
root 6206 0.1 0.1 11456 7672 ? Ss 08:30 0:00 nginx: master process nginx -g daemon off;
101 6250 0.0 0.0 11952 2788 ? S 08:30 0:00 nginx: worker process
101 6251 0.0 0.0 11952 2788 ? S 08:30 0:00 nginx: worker process
101 6252 0.0 0.0 11952 2788 ? S 08:30 0:00 nginx: worker process
101 6253 0.0 0.0 11952 2792 ? S 08:30 0:00 nginx: worker process
root 6268 0.0 0.0 222012 1100 pts/0 S+ 08:31 0:00 grep --color=auto nginx

[root@rocky8 ~]# docker pause n1
n1

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3ea6c5504968 nginx "/docker-entrypoint.…" 29 seconds ago Up 28 seconds (Paused) 80/tcp n1

[root@rocky8 ~]# ps aux | grep nginx
root 6206 0.0 0.1 11456 7672 ? Ds 08:30 0:00 nginx: master process nginx -g daemon off;
101 6250 0.0 0.0 11952 2788 ? D 08:30 0:00 nginx: worker process
101 6251 0.0 0.0 11952 2788 ? D 08:30 0:00 nginx: worker process
101 6252 0.0 0.0 11952 2788 ? D 08:30 0:00 nginx: worker process
101 6253 0.0 0.0 11952 2792 ? D 08:30 0:00 nginx: worker process
root 6289 0.0 0.0 222012 1172 pts/0 S+ 08:31 0:00 grep --color=auto nginx

[root@rocky8 ~]# docker unpause n1
n1

[root@rocky8 ~]# ps aux | grep nginx
root 6206 0.0 0.1 11456 7672 ? Ss 08:30 0:00 nginx: master process nginx -g daemon off;
101 6250 0.0 0.0 11952 2788 ? S 08:30 0:00 nginx: worker process
101 6251 0.0 0.0 11952 2788 ? S 08:30 0:00 nginx: worker process
101 6252 0.0 0.0 11952 2788 ? S 08:30 0:00 nginx: worker process
101 6253 0.0 0.0 11952 2792 ? S 08:30 0:00 nginx: worker process
root 6353 0.0 0.0 222012 1200 pts/0 S+ 08:32 0:00 grep --color=auto nginx

范例: 容器的暂停和恢复

[root@rocky8 ~]# docker run -itd centos:8
b96b24033d05fa8de1ce9a79305a78fd34047c38bad4ecbcb187c1e7e33137f3

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b96b24033d05 centos:8 "/bin/bash" 4 seconds ago Up 3 seconds upbeat_khayyam
3ea6c5504968 nginx "/docker-entrypoint.…" 5 minutes ago Up 5 minutes 80/tcp n1

[root@rocky8 ~]# docker pause upbeat_khayyam
upbeat_khayyam

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b96b24033d05 centos:8 "/bin/bash" 26 seconds ago Up 25 seconds (Paused) upbeat_khayyam
3ea6c5504968 nginx "/docker-entrypoint.…" 5 minutes ago Up 5 minutes 80/tcp n1

[root@rocky8 ~]# docker unpause upbeat_khayyam
upbeat_khayyam

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b96b24033d05 centos:8 "/bin/bash" 48 seconds ago Up 47 seconds upbeat_khayyam
3ea6c5504968 nginx "/docker-entrypoint.…" 6 minutes ago Up 6 minutes 80/tcp n1

给正在运行的容器发信号

docker kill 可以给容器内的主进程发信号,默认发送 SIGKILL,即 9 号信号

格式

docker kill [OPTIONS] CONTAINER [CONTAINER...]

#选项:
-s, --signal string Signal to send to the container (default "KILL")
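SIGKILL 即 9 号信号,无法被进程捕获或忽略。信号名与编号的对应关系可以用 bash 内置的 kill -l 验证(本机即可演示,不依赖 docker):

```shell
# 查询信号编号对应的名称(bash 内置命令)
kill -l 9     # 输出: KILL
kill -l 15    # 输出: TERM

# docker kill 默认发送 KILL;如需让进程优雅退出,可改发 TERM,例如(示意):
# docker kill -s SIGTERM 容器名
```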
[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b96b24033d05 centos:8 "/bin/bash" 2 minutes ago Up 2 minutes upbeat_khayyam
3ea6c5504968 nginx "/docker-entrypoint.…" 7 minutes ago Up 7 minutes 80/tcp n1

[root@rocky8 ~]# docker kill n1
n1

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b96b24033d05 centos:8 "/bin/bash" 2 minutes ago Up 2 minutes upbeat_khayyam
3ea6c5504968 nginx "/docker-entrypoint.…" 8 minutes ago Exited (137) 2 seconds ago n1

范例: 关闭所有容器

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b96b24033d05 centos:8 "/bin/bash" 3 minutes ago Up 3 minutes upbeat_khayyam
3ea6c5504968 nginx "/docker-entrypoint.…" 9 minutes ago Up 1 second 80/tcp n1

#强制关闭所有运行中的容器
[root@rocky8 ~]# docker kill `docker ps -aq`
b96b24033d05
3ea6c5504968

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b96b24033d05 centos:8 "/bin/bash" 4 minutes ago Exited (137) 5 seconds ago upbeat_khayyam
3ea6c5504968 nginx "/docker-entrypoint.…" 9 minutes ago Exited (137) 5 seconds ago n1

进入正在运行的容器

使用attach命令

docker attach 容器名,attach 类似于 vnc,操作会在同一个容器的多个会话界面同步显示,即所有使用此方式进入容器的会话,其操作都是同步显示的;使用 exit 退出后容器会自动关闭。此方式需要进入有 shell 环境的容器,不推荐使用

格式:

docker attach [OPTIONS] CONTAINER

范例:

[root@rocky8 ~]# docker run -it centos:8
[root@a3d06b403f2d /]# cat /etc/redhat-release
CentOS Linux release 8.4.2105
#ctrl+p+q 退出

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3d06b403f2d centos:8 "/bin/bash" 23 seconds ago Up 22 seconds blissful_kare

[root@rocky8 ~]# docker attach a3d06b
[root@a3d06b403f2d /]# cat /etc/redhat-release
CentOS Linux release 8.4.2105

#同时在第二个终端attach到同一个容器,执行命令,可以在前一终端看到显示界面是同步的
[root@rocky8 /]# docker attach a3d06b
[root@a3d06b403f2d /]# cat /etc/redhat-release
CentOS Linux release 8.4.2105

#两个终端都同时退出
[root@a3d06b403f2d /]# exit
exit

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a3d06b403f2d centos:8 "/bin/bash" 2 minutes ago Exited (0) 25 seconds ago blissful_kare

使用exec命令

在运行中的容器启动新进程,可以执行单次命令,以及进入容器

测试环境中常使用此方式进入容器,使用exit退出后容器仍在运行,此为推荐方式

格式:

docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
常用选项:
-d, --detach Detached mode: run command in the background
-e, --env list Set environment variables
-i, --interactive Keep STDIN open even if not attached
-t, --tty Allocate a pseudo-TTY

#常见用法
docker exec -it 容器ID sh|bash

范例:

[root@rocky8 ~]# docker run -itd centos:8
363bd34686630dbfd71ba719d14b9d905347fbb98d72fd9c987679c3203d6885

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
363bd3468663 centos:8 "/bin/bash" 4 seconds ago Up 3 seconds vibrant_rosalind

#执行一次性命令
[root@rocky8 ~]# docker exec 363bd3 cat /etc/redhat-release
CentOS Linux release 8.4.2105

#进入容器,执行命令,exit退出但容器不停止
[root@rocky8 ~]# docker exec -it 363bd3 bash
[root@363bd3468663 /]# cat /etc/redhat-release
CentOS Linux release 8.4.2105

[root@363bd3468663 /]# exit
exit

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
363bd3468663 centos:8 "/bin/bash" About a minute ago Up About a minute vibrant_rosalind

暴露所有容器端口

容器启动后,默认处于预定义的NAT网络中,所以外部网络的主机无法直接访问容器中的网络服务

docker run -P 可以将容器内预定义的所有端口映射到宿主机网卡的随机端口,默认从32768开始

使用随机端口时,停止容器后再启动可能会导致端口发生变化

-P, --publish-all=true|false    默认为false

#示例:
docker run -d -P --name nginx-c1 nginx #映射容器所有暴露端口至随机本地端口

docker port 可以查看容器的端口映射关系

格式

docker port CONTAINER [PRIVATE_PORT[/PROTO]]

范例:

[root@rocky8 ~]# docker port nginx-c1 
80/tcp -> 0.0.0.0:32769
80/tcp -> [::]:32769

[root@rocky8 ~]# docker port nginx-c1 80/tcp
0.0.0.0:32769
[::]:32769

范例:

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

[root@rocky8 ~]# ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*

#前台启动的会话窗口无法进行其他操作,除非退出,但是退出后容器也会退出
[root@rocky8 ~]# docker run -P nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2025/04/08 00:56:05 [notice] 1#1: using the "epoll" event method
2025/04/08 00:56:05 [notice] 1#1: nginx/1.27.4
2025/04/08 00:56:05 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
2025/04/08 00:56:05 [notice] 1#1: OS: Linux 4.18.0-553.el8_10.x86_64
2025/04/08 00:56:05 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2025/04/08 00:56:05 [notice] 1#1: start worker processes
2025/04/08 00:56:05 [notice] 1#1: start worker process 28
2025/04/08 00:56:05 [notice] 1#1: start worker process 29
2025/04/08 00:56:05 [notice] 1#1: start worker process 30
2025/04/08 00:56:05 [notice] 1#1: start worker process 31


#另开一个窗口执行下面命令
[root@rocky8 /]# ss -lnt
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 2048 0.0.0.0:32770 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
LISTEN 0 2048 [::]:32770 [::]:*

[root@rocky8 /]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33e4f9e80b99 nginx "/docker-entrypoint.…" 56 seconds ago Up 56 seconds 0.0.0.0:32770->80/tcp, :::32770->80/tcp nice_ganguly

[root@rocky8 /]# curl 127.0.0.1:32770
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

#自动生成Iptables规则
[root@rocky8 /]# iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
2 104 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
3 252 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:32770 to:172.17.0.2:80

#回到之前的会话窗口,同时按两个键 ctrl+c 退出容器
[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
33e4f9e80b99 nginx "/docker-entrypoint.…" 3 minutes ago Exited (0) 14 seconds ago nice_ganguly

端口映射的本质就是利用NAT技术实现的

范例: 端口映射和iptables

#端口映射前的iptables规则
[root@rocky8 ~]# iptables -S
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN

[root@rocky8 ~]# iptables -S -t nat
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P POSTROUTING ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN

[root@rocky8 ~]# iptables -S > pre.filter
[root@rocky8 ~]# iptables -S -t nat > pre.nat

#实现端口映射
[root@rocky8 ~]# docker run -d -P --name nginx1 nginx
c20e59dcc412733425ce8dd4726711c8ac6ebd34a40440c06c103af418a8ccb9

[root@rocky8 ~]# docker exec -it nginx1 hostname -i
172.17.0.2

[root@rocky8 ~]# docker port nginx1
80/tcp -> 0.0.0.0:32771
80/tcp -> [::]:32771

#端口映射后的iptables规则
[root@rocky8 ~]# iptables -S
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN

[root@rocky8 ~]# iptables -S -t nat
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P POSTROUTING ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32771 -j DNAT --to-destination 172.17.0.2:80

#对比端口映射前后的变化
[root@rocky8 ~]# diff pre.filter post.filter
13a14
> -A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT

[root@rocky8 ~]# diff pre.nat post.nat
7a8
> -A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
9a11
> -A DOCKER ! -i docker0 -p tcp -m tcp --dport 32771 -j DNAT --to-destination 172.17.0.2:80

#本地和远程都可以访问
[root@rocky8 ~]# curl 127.0.0.1:32771
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>


[root@rocky8 ~]# curl 192.168.1.11:32771
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

#利用 iptables 阻止同一宿主机上其它容器(CentOS8)对此容器的访问
[root@rocky8 ~]# iptables -I DOCKER -s 192.168.1.11 -d 172.17.0.2 -p tcp --dport 80 -j REJECT

[root@rocky8 ~]# iptables -S
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -s 192.168.1.11/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN

#测试访问
[root@rocky8 ~]# docker run -it centos:8
[root@4ff6133ab467 /]# curl 172.17.0.2
curl: (7) Failed to connect to 172.17.0.2 port 80: Connection timed out

指定端口映射

docker run -p 可以将容器预定义的指定端口映射到宿主机的相应端口

注意: 多个容器映射到宿主机的端口不能冲突,但容器内使用的端口可以相同

方式1: 容器80端口映射宿主机本地随机端口

docker run  -p 80 --name nginx-test-port1 nginx

方式2: 容器80端口映射到宿主机本地端口81

docker run  -p 81:80 --name nginx-test-port2 nginx

方式3: 宿主机本地IP:宿主机本地端口:容器端口

docker run  -p 10.0.0.100:82:80 --name nginx-test-port3 docker.io/nginx

方式4: 宿主机本地IP:宿主机本地随机端口:容器端口,默认从32768开始

docker run -p 10.0.0.100::80 --name nginx-test-port4 docker.io/nginx

方式5: 宿主机本机ip:宿主机本地端口:容器端口/协议,默认为tcp协议

docker run  -p 10.0.0.100:83:80/udp --name nginx-test-port5 docker.io/nginx

方式6: 一次性映射多个端口+协议

docker run  -p 8080:80/tcp -p 8443:443/tcp -p 53:53/udp --name nginx-test-port6 nginx

范例:

[root@rocky8 ~]# docker run -d -p 8080:80 -p 8443:443 -p 8053:53/udp nginx
e9cda15aaa0194a12f64991aa787a185a6ddbcd6264352a3dd55cdd6daa66d40
[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e9cda15aaa01 nginx "/docker-entrypoint.…" 7 seconds ago Up 6 seconds 0.0.0.0:8053->53/udp, :::8053->53/udp, 0.0.0.0:8080->80/tcp, :::8080->80/tcp, 0.0.0.0:8443->443/tcp, :::8443->443/tcp dazzling_goldberg


[root@rocky8 ~]# ss -luntp
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:8053 0.0.0.0:* users:(("docker-proxy",pid=11020,fd=4))
udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:* users:(("chronyd",pid=834,fd=5))
udp UNCONN 0 0 [::]:8053 [::]:* users:(("docker-proxy",pid=11026,fd=4))
udp UNCONN 0 0 [::1]:323 [::]:* users:(("chronyd",pid=834,fd=6))
tcp LISTEN 0 2048 0.0.0.0:8080 0.0.0.0:* users:(("docker-proxy",pid=10998,fd=4))
tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=852,fd=3))
tcp LISTEN 0 2048 0.0.0.0:8443 0.0.0.0:* users:(("docker-proxy",pid=10976,fd=4))
tcp LISTEN 0 2048 [::]:8080 [::]:* users:(("docker-proxy",pid=11006,fd=4))
tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=852,fd=4))
tcp LISTEN 0 2048 [::]:8443 [::]:* users:(("docker-proxy",pid=10984,fd=4))


[root@rocky8 ~]# iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
69 3804 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
3 252 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:443
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:80
0 0 MASQUERADE udp -- * * 172.17.0.2 172.17.0.2 udp dpt:53

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
14 840 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
27 1620 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8443 to:172.17.0.2:443
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:172.17.0.2:80
0 0 DNAT udp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:8053 to:172.17.0.2:53


#杀死nginx进程,nginx将关闭,相应端口也会关闭
[root@rocky8 ~]# kill <NGINXPID>

实战案例: 修改已经创建的容器的端口映射关系

[root@rocky8 ~]# docker run -d -p 80:80 --name nginx01 nginx
87f4d74f1c261085d3545c884a87ff9f39b962da568e5b01c7caa7379606da60

[root@rocky8 ~]# docker port nginx01
80/tcp -> 0.0.0.0:80
80/tcp -> [::]:80

[root@rocky8 ~]# lsof -i:80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 11593 root 4u IPv4 70300 0t0 TCP *:http (LISTEN)
docker-pr 11600 root 4u IPv6 70303 0t0 TCP *:http (LISTEN)

[root@rocky8 ~]# ls /var/lib/docker/containers/87f4d74f1c261085d3545c884a87ff9f39b962da568e5b01c7caa7379606da60/
87f4d74f1c261085d3545c884a87ff9f39b962da568e5b01c7caa7379606da60-json.log
checkpoints
config.v2.json
hostconfig.json
hostname
hosts
mounts
resolv.conf
resolv.conf.hash

[root@rocky8 ~]# systemctl stop docker
[root@rocky8 ~]# vim /var/lib/docker/containers/87f4d74f1c261085d3545c884a87ff9f39b962da568e5b01c7caa7379606da60/hostconfig.json
"PortBindings":{"80/tcp":[{"HostIp":"","HostPort":"80"}]}
#PortBindings后80/tcp对应的是容器内部的80端口,HostPort对应的是映射到宿主机的端口80 修改此处为8000
"PortBindings":{"80/tcp":[{"HostIp":"","HostPort":"8000"}]}

[root@rocky8 ~]# systemctl start docker.service
[root@rocky8 ~]# docker start nginx01
nginx01

[root@rocky8 ~]# docker port nginx01
80/tcp -> 0.0.0.0:8000
80/tcp -> [::]:8000
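
上面直接编辑 hostconfig.json 的步骤,也可以先在一个示例文件上用 sed 演练(下面的 /tmp/hostconfig-demo.json 是为演示构造的示例文件,并非真实容器配置):

```shell
# 构造一个与 hostconfig.json 中 PortBindings 结构相同的示例文件
cat > /tmp/hostconfig-demo.json <<'EOF'
{"PortBindings":{"80/tcp":[{"HostIp":"","HostPort":"80"}]}}
EOF

# 将映射到宿主机的端口 80 改为 8000(修改真实配置前须先停止 docker 服务)
sed -i 's/"HostPort":"80"/"HostPort":"8000"/' /tmp/hostconfig-demo.json

cat /tmp/hostconfig-demo.json
# 输出: {"PortBindings":{"80/tcp":[{"HostIp":"","HostPort":"8000"}]}}
```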

范例:实现 wordpress 应用

[root@rocky8 ~]# docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=000000 \
-e MYSQL_DATABASE=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=000000 \
--name mysql -d --restart=always mysql:8.0.29-oracle

[root@rocky8 ~]# docker run -d -p 8080:80 --name wordpress -v /data/wordpress:/var/www/html/ \
--restart=always wordpress:php7.4-apache

查看容器的日志

docker logs 可以查看容器中运行的进程在控制台输出的日志信息

格式

docker logs [OPTIONS] CONTAINER

选项:
--details Show extra details provided to logs
-f, --follow Follow log output
--since string Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)
--tail string Number of lines to show from the end of the logs (default "all")
-t, --timestamps Show timestamps
--until string Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)

范例: 查看容器日志

[root@rocky8 ~]# docker run -d alpine /bin/sh -c 'i=1;while true;do echo hello$i;let i++;sleep 2;done'
718a94fe85e607cf7ece3d408862faa34771b997dc479ed8c420a80163b1fa2c

[root@rocky8 ~]# docker logs 718a
hello1
hello2
hello3
hello4
hello5
hello6
hello7

[root@rocky8 ~]# docker logs 718a
hello1
hello2
hello3
hello4
hello5
hello6
hello7
hello8
hello9
hello10
hello11

[root@rocky8 ~]# docker logs --tail 3 718a
hello13
hello14
hello15

#显示时间
[root@rocky8 ~]# docker logs --tail 1 -t 718a
2025-04-08T02:23:03.178747914Z hello40

#持续跟踪
[root@rocky8 ~]# docker logs -f 718a
hello1
hello2
hello3
hello4
......

范例: 查看httpd服务日志

[root@rocky8 ~]# docker run -d -p 80:80 --name web1 httpd
f12a28ff05a7323d0081618d4ab8b5056a6a6113ff644e2d3604128739005ee1

[root@rocky8 ~]# docker logs web1
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
[Tue Apr 08 02:25:04.722339 2025] [mpm_event:notice] [pid 1:tid 1] AH00489: Apache/2.4.63 (Unix) configured -- resuming normal operations
[Tue Apr 08 02:25:04.722480 2025] [core:notice] [pid 1:tid 1] AH00094: Command line: 'httpd -D FOREGROUND'

[root@rocky8 ~]# docker logs -f web1
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName' directive globally to suppress this message
[Tue Apr 08 02:25:04.722339 2025] [mpm_event:notice] [pid 1:tid 1] AH00489: Apache/2.4.63 (Unix) configured -- resuming normal operations
[Tue Apr 08 02:25:04.722480 2025] [core:notice] [pid 1:tid 1] AH00094: Command line: 'httpd -D FOREGROUND'

范例: 查看nginx服务访问日志

#查看一次
[root@rocky8 ~]# docker logs nginx1
172.17.0.1 - - [08/Apr/2025:07:47:38 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [08/Apr/2025:07:47:39 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"

#持续查看
[root@rocky8 ~]# docker logs -f nginx1
172.17.0.1 - - [08/Apr/2025:07:47:38 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [08/Apr/2025:07:47:39 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [08/Apr/2025:07:48:35 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [08/Apr/2025:07:48:52 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [08/Apr/2025:07:48:58 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [08/Apr/2025:07:49:04 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.61.1" "-"
172.17.0.1 - - [08/Apr/2025:07:49:08 +0000] "GET /test HTTP/1.1" 404 153 "-" "curl/7.61.1" "-"
2025/04/08 07:49:08 [error] 29#29: *7 open() "/usr/share/nginx/html/test" failed (2: No such file or directory), client: 172.17.0.1, server: localhost, request: "GET /test HTTP/1.1", host: "172.17.0.2"

传递运行命令

容器需要有一个前台运行的进程才能保持容器的运行。通过传递运行命令是一种方式,另外也可以在构建镜像时指定容器启动时运行的前台命令

容器里的PID为1的守护进程的实现方式

  • 服务类: 如 Nginx、Tomcat、Apache 等,但服务不能停止
  • 命令类: 如 tail -f /etc/hosts,主要用于测试环境。注意: 不要 tail -f <服务访问日志>,以免产生不必要的磁盘IO

范例:

[root@rocky8 ~]# docker run -d alpine
6b0764be17d18ae532f048a9f9c70c05e2e0dccdc5577e5024da8e5872ba9507

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6b0764be17d1 alpine "/bin/sh" 3 seconds ago Exited (0) 2 seconds ago brave_pasteur

[root@rocky8 ~]# docker run -d alpine tail -f /etc/hosts
78138ac0a2f3871f2e6021facdd6f19a4d2054866da308d1a6dcf637bcb1dda9

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78138ac0a2f3 alpine "tail -f /etc/hosts" 5 seconds ago Up 4 seconds pensive_volhard
6b0764be17d1 alpine "/bin/sh" 27 seconds ago Exited (0) 26 seconds ago brave_pasteur

[root@rocky8 ~]# docker exec -it 78138a sh
/ # ps aux
PID USER TIME COMMAND
1 root 0:00 tail -f /etc/hosts
7 root 0:00 sh
13 root 0:00 ps aux

/ # exit

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78138ac0a2f3 alpine "tail -f /etc/hosts" 58 seconds ago Up 56 seconds pensive_volhard
6b0764be17d1 alpine "/bin/sh" About a minute ago Exited (0) About a minute ago brave_pasteur

容器内部的hosts文件

容器会自动将容器的ID加入自己的/etc/hosts文件中,并解析成容器的IP

[root@rocky8 ~]# docker run -it centos /bin/bash
[root@aae98e2610ba /]# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 aae98e2610ba #默认会将实例的ID 添加到自己的hosts文件

[root@aae98e2610ba /]# hostname
aae98e2610ba

[root@aae98e2610ba /]# ping aae98e2610ba
PING aae98e2610ba (172.17.0.2) 56(84) bytes of data.
64 bytes from aae98e2610ba (172.17.0.2): icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from aae98e2610ba (172.17.0.2): icmp_seq=2 ttl=64 time=0.037 ms
64 bytes from aae98e2610ba (172.17.0.2): icmp_seq=3 ttl=64 time=0.057 ms
^C
--- aae98e2610ba ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2060ms
rtt min/avg/max/mdev = 0.033/0.042/0.057/0.011 ms


#在另一个会话执行
[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
aae98e2610ba centos "/bin/bash" 57 seconds ago Up 56 seconds awesome_leakey

范例: 修改容器的 hosts文件

[root@rocky8 ~]# docker run -it --rm --add-host www.wang.org:6.6.6.6 \
--add-host www.wang.com:8.8.8.8 busybox

/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
6.6.6.6 www.wang.org
8.8.8.8 www.wang.com
172.17.0.2 6ac4986f2cb6

指定容器DNS

容器的DNS服务器默认采用宿主机的DNS地址,可以用下面方式指定其它的DNS地址

  • 将DNS地址配置在宿主机
  • 在容器启动时加选项 --dns=x.x.x.x
  • 在 /etc/docker/daemon.json 文件中指定

范例: 容器的DNS默认从宿主机的DNS获取

[root@rocky8 ~]# cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 223.5.5.5
nameserver 223.6.6.6

[root@ubuntu1804 ~]#systemd-resolve --status|grep -A1 -i "DNS Servers"
DNS Servers: 180.76.76.76
223.6.6.6

[root@rocky8 ~]# docker run -it --rm centos bash
[root@1f89c014042c /]# cat /etc/resolv.conf
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 223.5.5.5
nameserver 223.6.6.6

# Based on host file: '/etc/resolv.conf' (legacy)
# Overrides: []
[root@1f89c014042c /]# exit
exit

范例: 指定DNS地址

[root@rocky8 ~]# docker run -it --rm --dns 1.1.1.1 --dns 8.8.8.8 centos bash

[root@01037b0422ff /]# cat /etc/resolv.conf
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 1.1.1.1
nameserver 8.8.8.8

# Based on host file: '/etc/resolv.conf' (legacy)
# Overrides: [nameservers]
[root@01037b0422ff /]# exit
exit

范例: 指定domain名

[root@rocky8 ~]# docker run -it --rm --dns 1.1.1.1 --dns 8.8.8.8 --dns-search a.com --dns-search b.com busybox

/ # cat /etc/resolv.conf
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 1.1.1.1
nameserver 8.8.8.8
search a.com b.com

# Based on host file: '/etc/resolv.conf' (legacy)
# Overrides: [nameservers search]
/ # exit

范例: 配置文件指定DNS和搜索domain名

[root@rocky8 ~]# vim /etc/docker/daemon.json 
[root@rocky8 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": [
"https://docker.m.daocloud.io",
"https://docker.imgdb.de",
"https://docker-0.unsee.tech",
"https://docker.hlmirror.com",
"https://docker.1ms.run",
"https://func.ink",
"https://lispy.org",
"https://docker.xiaogenban1993.com"
],
"storage-driver": "overlay2",
"dns": [ "114.114.114.114", "119.29.29.29" ],
"dns-search": [ "wang.com", "wang.org" ]
}

[root@rocky8 ~]# systemctl restart docker.service
[root@rocky8 ~]# docker run -it --rm centos bash
[root@b3299209b405 /]# cat /etc/resolv.conf
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 114.114.114.114
nameserver 119.29.29.29
search wang.com wang.org

# Based on host file: '/etc/resolv.conf' (legacy)
# Overrides: [nameservers search]
[root@b3299209b405 /]# exit
exit

#用--dns指定优先级更高
[root@rocky8 ~]# docker run -it --rm --dns 1.1.1.1 --dns 8.8.8.8 centos bash
[root@39ab9050332b /]# cat /etc/resolv.conf
# Generated by Docker Engine.
# This file can be edited; Docker Engine will not make further changes once it
# has been modified.

nameserver 1.1.1.1
nameserver 8.8.8.8
search wang.com wang.org

# Based on host file: '/etc/resolv.conf' (legacy)
# Overrides: [nameservers search]
[root@39ab9050332b /]# exit
exit

容器内和宿主机之间复制文件

docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
Options:
-a, --archive Archive mode (copy all uid/gid information)
-L, --follow-link Always follow symbol link in SRC_PATH

范例: 复制容器的文件至宿主机

[root@rocky8 ~]# docker run -it --name b1 --rm busybox sh
[root@rocky8 ~]# docker cp b1:/bin/busybox /usr/local/bin/
[root@rocky8 ~]# busybox ls

范例:

[root@rocky8 ~]# docker run -itd centos
ea5987185dbc7e3993fe57e3df198704dd10e7d97db6e63cacef1ae7567f3eb9

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ea5987185dbc centos "/bin/bash" 4 seconds ago Up 3 seconds stupefied_merkle

#将容器内文件复制到宿主机
[root@rocky8 ~]# docker cp -a ea5987:/etc/centos-release .
Successfully copied 2.05kB to /root/.

[root@rocky8 ~]# cat centos-release
CentOS Linux release 8.4.2105

#将宿主机文件复制到容器内
[root@rocky8 ~]# docker cp /etc/issue ea5987:/root/
Successfully copied 2.05kB to ea5987:/root/

[root@rocky8 ~]# docker exec ea5987 cat /root/issue
\S
Kernel \r on an \m

使用 systemd 控制容器运行

[root@rocky8 ~]# vim /lib/systemd/system/hello.service
[root@rocky8 ~]# cat /lib/systemd/system/hello.service
[Unit]
Description=Hello World
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill busybox-hello
ExecStartPre=-/usr/bin/docker rm busybox-hello
ExecStartPre=/usr/bin/docker pull busybox
ExecStart=/usr/bin/docker run --name busybox-hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker kill busybox-hello

[Install]
WantedBy=multi-user.target


[root@rocky8 ~]# systemctl daemon-reload
[root@rocky8 ~]# systemctl enable --now hello.service
[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2ec8b0636e22 busybox "/bin/sh -c 'while t…" 6 seconds ago Up 5 seconds busybox-hello

传递环境变量

有些容器运行时需要传递变量,可以使用 -e <变量>=<值> 或 --env-file <变量文件> 实现

范例: 传递变量创建MySQL

变量参考链接: https://hub.docker.com/_/mysql

#MySQL容器运行时需要指定root的口令
[root@rocky8 ~]# docker run --name mysql mysql:8.0.29-oracle
You need to specify one of the following:
- MYSQL_ROOT_PASSWORD
- MYSQL_ALLOW_EMPTY_PASSWORD
- MYSQL_RANDOM_ROOT_PASSWORD

[root@rocky8 ~]# docker run --name mysql-test1 -v /data/mysql:/var/lib/mysql/ \
-e MYSQL_ROOT_PASSWORD=000000 -e MYSQL_DATABASE=wordpress -e MYSQL_USER=wpuser \
-e MYSQL_PASSWORD=000000 -d -p 3306:3306 mysql:8.0.29-oracle
df13cf06cc56ccb3f286bd3fcdf68e0b60c312a057b922defff975bfbde24995


[root@rocky8 ~]# cat mysql/mysql-test.cnf
[mysqld]
server-id=100
log-bin=mysql-bin

[root@rocky8 ~]# cat env.list
MYSQL_ROOT_PASSWORD=000000
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=000000

[root@rocky8 ~]# docker run --name mysql-test2 -v /root/mysql/:/etc/mysql/conf.d -v /data/mysql2:/var/lib/mysql --env-file=env.list -d -p 3307:3306 mysql:8.0.29-oracle
1de4c0ab06265c3b95f4ebddc5395ccd8b14f31233b95007b75eb97bce94a2eb
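
--env-file 指定的文件格式为每行一个 KEY=VALUE。其效果可以在宿主机 shell 中简单模拟(仅演示文件格式的解析,并非 docker 的实际实现):

```shell
# 生成变量文件,格式与上面的 env.list 相同
cat > /tmp/env.list <<'EOF'
MYSQL_ROOT_PASSWORD=000000
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
EOF

# 模拟 --env-file 的效果:将文件中的每行导入为环境变量
set -a
. /tmp/env.list
set +a

echo "$MYSQL_DATABASE $MYSQL_USER"     # 输出: wordpress wpuser
```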

实战案例: 利用 docker 快速部署自动化运维平台


项目说明

Spug 是面向中小型企业设计的轻量级无 Agent 的自动化运维平台,整合了主机管理、主机批量执行、主机在线终端、文件在线上传下载、应用发布部署、在线任务计划、配置中心、监控、报警等一系列功能

特性

  • 批量执行: 主机命令在线批量执行
  • 在线终端: 主机支持浏览器在线终端登录
  • 文件管理: 主机文件在线上传下载
  • 任务计划: 灵活的在线任务计划
  • 发布部署: 支持自定义发布部署流程
  • 配置中心: 支持 KV、文本、json 等格式的配置
  • 监控中心: 支持站点、端口、进程、自定义等监控
  • 报警中心: 支持短信、邮件、钉钉、微信等报警方式
  • 优雅美观: 基于 Ant Design 的 UI 界面
  • 开源免费: 前后端代码完全开源

官网地址: https://www.spug.dev/

使用文档:https://www.spug.dev/docs/about-spug/

gitee链接: https://gitee.com/openspug/spug

部署过程

官方说明:

https://www.spug.dev/docs/install-docker/

安装docker

略

拉取镜像

[root@rocky8 ~]# docker pull registry.aliyuncs.com/openspug/spug

启动容器

[root@rocky8 ~]# docker run -d --restart always --name spug -p 80:80 registry.aliyuncs.com/openspug/spug

# 持久化存储启动命令:
# mydata指的是本地磁盘路径,也可以是其他目录,但需要保证映射的本地磁盘路径已经存在,/data是容器内代码和数据初始化存储的路径
$ docker run -d --restart=always --name=spug -p 80:80 -v /mydata/spug:/data registry.aliyuncs.com/openspug/spug

初始化

以下操作会创建一个用户名为 admin 密码为 000000的管理员账户,可自行替换管理员账户。

[root@rocky8 ~]# docker exec spug init_spug admin 000000

Running migrations:
Applying account.0001_initial... OK
Applying alarm.0001_initial... OK
Applying config.0001_initial... OK
Applying app.0001_initial... OK
Applying repository.0001_initial... OK
Applying deploy.0001_initial... OK
Applying exec.0001_initial... OK
Applying home.0001_initial... OK
Applying host.0001_initial... OK
Applying monitor.0001_initial... OK
Applying notify.0001_initial... OK
Applying schedule.0001_initial... OK
Applying setting.0001_initial... OK
初始化/更新成功
/usr/local/lib/python3.6/site-packages/OpenSSL/_util.py:6: CryptographyDeprecationWarning: Python 3.6 is no longer supported by the Python core team. Therefore, support for it is deprecated in cryptography. The next release of cryptography will remove support for Python 3.6.
from cryptography.hazmat.bindings.openssl.binding import Binding
创建用户成功

访问测试

在浏览器中输入 http://localhost:80 访问。

用户名: admin 密码: 000000


Docker 镜像制作和管理

Docker 镜像说明

Docker 镜像中有没有内核

从镜像大小上面来说,一个比较小的镜像只有1MB多点或几MB,而内核文件需要几十MB, 因此镜像里面是没有内核的,镜像在被启动为容器后将直接使用宿主机的内核,而镜像本身则只提供相应的rootfs,即系统正常运行所必须的用户空间的文件系统,比如: /dev/,/proc,/bin,/etc等目录,容器当中/boot目录是空的,而/boot当中保存的就是与内核相关的文件和目录。

为什么没有内核

由于容器启动和运行过程中是直接使用了宿主机的内核,不会直接调用物理硬件,所以也不会涉及到硬件驱动,因此也无需容器内拥有自己的内核和驱动。而如果使用虚拟机技术,对应每个虚拟机都有自己独立的内核

容器中的程序后台运行会导致此容器启动后立即退出

Docker容器如果希望启动后能持续运行,就必须有一个能在前台持续运行的进程。如果在容器中以传统方式启动服务,如:httpd,php-fpm 等均为后台进程模式运行,就会导致 docker 在前台没有运行的应用,这样的容器启动后会立即退出。所以一般会将服务程序以前台方式运行;对于有一些可能不知道怎么实现前台运行的程序,只需要在启动该程序之后再运行类似于 tail、top 这种可以前台运行的程序即可。比较常用的方法,如 tail -f /etc/hosts

范例:

#httpd
ENTRYPOINT [ "/usr/sbin/apache2" ]
CMD ["-D", "FOREGROUND"]

#nginx
ENTRYPOINT [ "/usr/sbin/nginx", "-g", "daemon off;" ]

#用脚本运行容器
cat run_haproxy.sh
#!/bin/bash
haproxy -f /etc/haproxy/haproxy.cfg
tail -f /etc/hosts
tail -n1 Dockerfile
CMD ["run_haproxy.sh"]

docker 镜像生命周期


制作镜像方式

Docker 镜像制作类似于虚拟机的镜像(模板)制作,即按照公司的实际业务需求将需要安装的软件、相关配置等基础环境配置完成,然后将其做成镜像,最后再从镜像批量生成容器实例,这样可以极大地简化相同环境的部署工作。

Docker的镜像制作分为手动制作(基于容器)和自动制作(基于 Dockerfile),企业通常都是基于 Dockerfile 制作镜像

docker commit   #通过修改现有容器,将之手动构建为镜像
docker build #通过Dockerfile文件,批量构建为镜像

将现有容器通过 docker commit 手动构建镜像

基于容器手动制作镜像步骤

docker commit 格式

docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

#选项
-a, --author string Author (e.g., "John Hannibal Smith <hannibal@a-team.com>")
-c, --change list Apply Dockerfile instruction to the created image
-m, --message string Commit message
-p, --pause Pause container during commit (default true)

#说明:
制作镜像和CONTAINER状态无关,停止状态也可以制作镜像
如果没有指定[REPOSITORY[:TAG]],REPOSITORY和TAG都为<none>
提交的时候标记TAG号: 生产当中常用,后期可以根据TAG标记创建不同版本的镜像以及创建不同版本的容器

基于容器手动制作镜像步骤具体如下:

  1. 下载一个系统的官方基础镜像,如: CentOS 或 Ubuntu

  2. 基于基础镜像启动一个容器,并进入到容器

  3. 在容器里面做配置操作

    • 安装基础命令
    • 配置运行环境
    • 安装服务和配置服务
    • 放业务程序代码
  4. 提交为一个新镜像 docker commit

  5. 基于自己的镜像创建容器并测试访问

实战案例: 基于 busybox 制作 httpd 镜像

[root@rocky8 ~]# docker run -it --name b1 busybox
/ # mkdir /data/html -p
/ # echo "httpd website in busybox" > /data/html/index.html
/ # httpd --help
BusyBox v1.37.0 (2024-09-26 21:31:42 UTC) multi-call binary.

Usage: httpd [-ifv[v]] [-c CONFFILE] [-p [IP:]PORT] [-u USER[:GRP]] [-r REALM] [-h HOME]
or httpd -d/-e/-m STRING

Listen for incoming HTTP requests

-i Inetd mode
-f Run in foreground
-v[v] Verbose
-p [IP:]PORT Bind to IP:PORT (default *:80)
-u USER[:GRP] Set uid/gid after binding to port
-r REALM Authentication Realm for Basic Authentication
-h HOME Home directory (default .)
-c FILE Configuration file (default {/etc,HOME}/httpd.conf)
-m STRING MD5 crypt STRING
-e STRING HTML encode STRING
-d STRING URL decode STRING
/ # exit

#格式1
[root@rocky8 ~]# docker commit -a "wang<root@wshuaiqing.cn>" -c "CMD /bin/httpd -fv -h /data/html" -c "EXPOSE 80" b1 httpd-busybox:v1.0


#格式2
[root@rocky8 ~]# docker commit -a "wang<root@wshuaiqing.cn>" -c 'CMD ["/bin/httpd", "-f", "-v", "-h", "/data/html"]' -c "EXPOSE 80" b1 httpd-busybox:v2.0

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
httpd-busybox v2.0 171d8747fd7f 24 seconds ago 4.28MB
httpd-busybox v1.0 f9b8dfc20bc0 2 minutes ago 4.28MB

[root@rocky8 ~]# docker run -d -P --name httpd01 httpd-busybox:v1.0
ed184063880e560573a587e62ef866fd04d654588a1028c96a1c2fa5e1f8ff05

[root@rocky8 ~]# docker run -d -P --name httpd02 httpd-busybox:v2.0
c5e31570973b418e157ef2a638fbfacd33226b0df7b029fb08f201b919a4063f

[root@rocky8 ~]# docker inspect -f "{{.NetworkSettings.Networks.bridge.IPAddress}}" httpd01
172.17.0.2

[root@rocky8 ~]# docker inspect -f "{{.NetworkSettings.Networks.bridge.IPAddress}}" httpd02
172.17.0.3

#对应格式1
[root@rocky8 ~]# docker inspect -f "{{.Config.Cmd}}" httpd01
[/bin/sh -c /bin/httpd -fv -h /data/html]

#对应格式2
[root@rocky8 ~]# docker run -d -P --name httpd02 httpd-busybox:v2.0
c5e31570973b418e157ef2a638fbfacd33226b0df7b029fb08f201b919a4063f

[root@rocky8 ~]# docker inspect -f "{{.Config.Cmd}}" httpd02
[/bin/httpd -f -v -h /data/html]


[root@rocky8 ~]# docker exec -it httpd01 sh
/ # pstree -p
httpd(1)

/ # ps aux
PID USER TIME COMMAND
1 root 0:00 /bin/httpd -fv -h /data/html
7 root 0:00 sh
14 root 0:00 ps aux

[root@rocky8 ~]# curl 172.17.0.2
httpd website in busybox

[root@rocky8 ~]# curl 192.168.1.11:32768
httpd website in busybox

实战案例: 基于官方镜像生成的容器制作 tomcat 镜像

下载官方的tomcat镜像并运行

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE

[root@rocky8 ~]# docker run -d -p 8080:8080 tomcat
Unable to find image 'tomcat:latest' locally
latest: Pulling from library/tomcat
5a7813e071bf: Pull complete
8dbbbc6af9dc: Pull complete
a10b6847b9f1: Pull complete
dcc1c5ea3c7d: Pull complete
91e6cc55403a: Pull complete
5d4660d0a9e9: Pull complete
4f4fb700ef54: Pull complete
e231914ca483: Pull complete
Digest: sha256:1374a565d5122fdb42807f3a5f2d4fcc245a5e15420ff5bb5123afedc8ef769d
Status: Downloaded newer image for tomcat:latest
871ddf4c5611ee11afee3bb62cfdab10b6a9b5eb069173e45152583c7a0a2bc4

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
871ddf4c5611 tomcat "catalina.sh run" 42 seconds ago Up 41 seconds 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp heuristic_poitras

[root@rocky8 ~]# curl -I 127.0.0.1:8080
HTTP/1.1 404
Content-Type: text/html;charset=utf-8
Content-Language: en
Content-Length: 682
Date: Tue, 08 Apr 2025 10:12:04 GMT

修改容器

[root@rocky8 ~]# docker exec -it 871ddf bash
root@871ddf4c5611:/usr/local/tomcat# ls
bin lib README.md webapps
BUILDING.txt LICENSE RELEASE-NOTES webapps.dist
conf logs RUNNING.txt work
CONTRIBUTING.md native-jni-lib temp
filtered-KEYS NOTICE upstream-KEYS

root@871ddf4c5611:/usr/local/tomcat# ls webapps
root@871ddf4c5611:/usr/local/tomcat# ls webapps.dist/
docs examples host-manager manager ROOT

root@871ddf4c5611:/usr/local/tomcat# cp -a webapps.dist/* webapps/
root@871ddf4c5611:/usr/local/tomcat# ls webapps/
docs examples host-manager manager ROOT

root@871ddf4c5611:/usr/local/tomcat# exit
exit

[root@rocky8 ~]# curl -I 127.0.0.1:8080
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Date: Tue, 08 Apr 2025 10:18:57 GMT

提交新镜像

[root@rocky8 ~]# docker commit 871ddf tomcat:11.0.5
sha256:e25d1667ae52b796c9e64d56b981e8aeec16d80bdd712a13cb491313bb84a1e6

[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
tomcat 11.0.5 e25d1667ae52 4 seconds ago 524MB
tomcat latest 88b0f1cee84c 4 weeks ago 519MB

[root@rocky8 ~]# docker inspect tomcat:11.0.5
[
{
"Id": "sha256:e25d1667ae52b796c9e64d56b981e8aeec16d80bdd712a13cb491313bb84a1e6",
"RepoTags": [
"tomcat:11.0.5"
],
"RepoDigests": [],
"Parent": "sha256:88b0f1cee84c76bb84a450edacdc37fb3ee00a8706be9298dfe8ec69e5040cdb",
"Comment": "",
"Created": "2025-04-08T10:26:01.404563755Z",
"DockerVersion": "26.1.3",
"Author": "",
"Config": {
"Hostname": "871ddf4c5611",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/tomcat/bin:/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"JAVA_HOME=/opt/java/openjdk",
"LANG=en_US.UTF-8",
"LANGUAGE=en_US:en",
"LC_ALL=en_US.UTF-8",
"JAVA_VERSION=jdk-21.0.6+7",
"CATALINA_HOME=/usr/local/tomcat",
"TOMCAT_NATIVE_LIBDIR=/usr/local/tomcat/native-jni-lib",
"LD_LIBRARY_PATH=/usr/local/tomcat/native-jni-lib",
"TOMCAT_MAJOR=11",
"TOMCAT_VERSION=11.0.5",
"TOMCAT_SHA512=99c4b3acafd5bd1a10c15b52b97ed7ff3ac7b943bf324aba0645d9894aa6f2868ebb746571332f4fa826209aa4d48b70a66e96998cecb3eac93b74f3f29170f2"
],
"Cmd": [
"catalina.sh",
"run"
],
"Image": "tomcat",
"Volumes": null,
"WorkingDir": "/usr/local/tomcat",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"org.opencontainers.image.ref.name": "ubuntu",
"org.opencontainers.image.version": "24.04"
}
},
"Architecture": "amd64",
"Os": "linux",
"Size": 524063690,
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/4c598f6c21c79adcf8f1d312fed49f68e5db5b0afb8fcda9d332303a8dea8692/diff:/var/lib/docker/overlay2/797aad065a1d507081a17332e3a2c01d5bf21c2dfb554bc09c1bbac3731d87b4/diff:/var/lib/docker/overlay2/2cc3ec97291901f9e15385d34b28a4ac16439676afa5eae573d9e0e636afd7eb/diff:/var/lib/docker/overlay2/2765f5d286d88192ec9002c5520768204d36a0e505b9bcad07cf080cc6a347c3/diff:/var/lib/docker/overlay2/ed720e91c6c2a38da16b553482ac194287a8bdcb3998cd34e8e802807dc18c2c/diff:/var/lib/docker/overlay2/5003434228b4449ab4a73897f0790b3e92456e1d28fe58c81bc077ec55bbb66f/diff:/var/lib/docker/overlay2/34bbbc437d8c55a50d7fbaa7b6b2cf7a40b19e167b3847a9949afeb799643bc4/diff:/var/lib/docker/overlay2/fb1889a11761e13b8225d853aacaaf21849f474b19152aa6a262ca7ff1a5fde3/diff:/var/lib/docker/overlay2/3a277976be1007eb2f7d79cbadca141937db61eb38a7ab5c8e358a72e8ca60fa/diff",
"MergedDir": "/var/lib/docker/overlay2/b2643d3dcd360f19e02339bf76708020390b9f5c1f27552ea5808971092c78c7/merged",
"UpperDir": "/var/lib/docker/overlay2/b2643d3dcd360f19e02339bf76708020390b9f5c1f27552ea5808971092c78c7/diff",
"WorkDir": "/var/lib/docker/overlay2/b2643d3dcd360f19e02339bf76708020390b9f5c1f27552ea5808971092c78c7/work"
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:4b7c01ed0534d4f9be9cf97d068da1598c6c20b26cb6134fad066defdb6d541d",
"sha256:3359bc3d7a6a1f94c063d743f3ebd025e299dfbbbb1d48afe18a90e4d5e1f36f",
"sha256:f844dcf94898d99c5a27de863a79e15d5353a6802f1804d01475be0e7b23221f",
"sha256:39cf0ac89a5a18bb69e6cc51b9f37eb9025b0bc85a7433d2ef85256810804361",
"sha256:4e5b554b734518d308942fd75da104b3dc27a25676fa51ce8d36a40e4a5f2491",
"sha256:49cb1bc2daeb9c8543094a01a8a7e261040e7a3cbbc9e58ffae279dde71ac65b",
"sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
"sha256:6fbdf02a6a33fb7e6564c9d0d4f879d3845c91f60805babfe73104e1e0969def",
"sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
"sha256:bb6d85a0bf949d520975c55f10dd895eeb849e980e631b5ca83aae8ef5f29f81"
]
},
"Metadata": {
"LastTagTime": "2025-04-08T18:26:01.407969774+08:00"
}
}
]

#删除当前的容器
[root@rocky8 ~]# docker rm -f 871ddf4c5611
[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

利用新镜像启动容器

[root@rocky8 ~]# docker run -d --name tomcat -p 8080:8080 tomcat:11.0.5
bd112dd8a3040a91b06d76fbdac96964b75f4a8ea3413b591d9c7206c7be7a96

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bd112dd8a304 tomcat:11.0.5 "catalina.sh run" 4 seconds ago Up 3 seconds 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp tomcat

测试新镜像启动的容器

浏览器访问 http://192.168.1.11:8080/ 可以看到下面显示


实战案例: 基于Ubuntu的基础镜像利用 apt 安装手动制作 nginx 的镜像

启动Ubuntu基础镜像并实现相关的配置

[root@rocky8 ~]# docker run -it -p 80 --name nginx_ubuntu ubuntu bash
root@ecb03c42d1f0:/# cat /etc/os-release
PRETTY_NAME="Ubuntu 24.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.1 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo

root@ecb03c42d1f0:/# ll /etc/apt/sources.list
-rw-r--r-- 1 root root 270 Jan 27 02:09 /etc/apt/sources.list

#如果时间不对需要同步一下时间,如果不同步,下面配置阿里源时会报错
root@ecb03c42d1f0:/# apt install -y ntpdate
------------------

Please select the geographic area in which you live. Subsequent configuration
questions will narrow this down by presenting a list of cities, representing the time
zones in which they are located.

1. Africa 3. Antarctica 5. Asia 7. Australia 9. Indian 11. Etc
2. America 4. Arctic 6. Atlantic 8. Europe 10. Pacific 12. Legacy
Geographic area: 5

Please select the city or region corresponding to your time zone.

1. Aden 23. Dili 45. Krasnoyarsk 67. Samarkand
2. Almaty 24. Dubai 46. Kuala_Lumpur 68. Seoul
3. Amman 25. Dushanbe 47. Kuching 69. Shanghai
4. Anadyr 26. Famagusta 48. Kuwait 70. Singapore
5. Aqtau 27. Gaza 49. Macau 71. Srednekolymsk
6. Aqtobe 28. Harbin 50. Magadan 72. Taipei
7. Ashgabat 29. Hebron 51. Makassar 73. Tashkent
8. Atyrau 30. Ho_Chi_Minh 52. Manila 74. Tbilisi
9. Baghdad 31. Hong_Kong 53. Muscat 75. Tehran
10. Bahrain 32. Hovd 54. Nicosia 76. Tel_Aviv
11. Baku 33. Irkutsk 55. Novokuznetsk 77. Thimphu
12. Bangkok 34. Istanbul 56. Novosibirsk 78. Tokyo
13. Barnaul 35. Jakarta 57. Omsk 79. Tomsk
14. Beirut 36. Jayapura 58. Oral 80. Ulaanbaatar
15. Bishkek 37. Jerusalem 59. Phnom_Penh 81. Urumqi
16. Brunei 38. Kabul 60. Pontianak 82. Ust-Nera
17. Chita 39. Kamchatka 61. Pyongyang 83. Vientiane
18. Choibalsan 40. Karachi 62. Qatar 84. Vladivostok
19. Chongqing 41. Kashgar 63. Qostanay 85. Yakutsk
20. Colombo 42. Kathmandu 64. Qyzylorda 86. Yangon
21. Damascus 43. Khandyga 65. Riyadh 87. Yekaterinburg
22. Dhaka 44. Kolkata 66. Sakhalin 88. Yerevan
Time zone: 69

root@ecb03c42d1f0:/# date
Wed Apr 9 09:03:22 CST 2025

root@ecb03c42d1f0:/# cat > /etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-security main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-updates main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-proposed main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ focal-backports main restricted universe multiverse
^C

root@ecb03c42d1f0:/# apt update
root@ecb03c42d1f0:/# apt install -y nginx
root@ecb03c42d1f0:/# nginx -v
nginx version: nginx/1.24.0 (Ubuntu)

root@ecb03c42d1f0:/# grep include /etc/nginx/nginx.conf
include /etc/nginx/modules-enabled/*.conf;
include /etc/nginx/mime.types;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

root@ecb03c42d1f0:/# grep root /etc/nginx/sites-enabled/default
root /var/www/html;
# deny access to .htaccess files, if Apache's document root
# root /var/www/example.com;

root@ecb03c42d1f0:/# echo Nginx Website in Docker > /var/www/html/index.html

root@ecb03c42d1f0:/# exit
exit

提交为镜像

[root@rocky8 ~]# docker commit -a "wshuaiqing.cn" -m 'nginx-ubuntu:24.04' nginx_ubuntu nginx_ubuntu24.04:v1.18.0
sha256:e41ea02f6d243d55402d0934905b56e9a1605b8162ffaa4a611f5f226c8e4a39

[root@rocky8 ~]# docker images nginx_ubuntu24.04:v1.18.0
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx_ubuntu24.04 v1.18.0 e41ea02f6d24 12 seconds ago 266MB

从制作的新镜像启动容器并测试访问

[root@rocky8 ~]# docker run -d -p 80 --name nginx-web nginx_ubuntu24.04:v1.18.0 nginx -g 'daemon off;'
6a433a6c11cbb44c95c4980e74f0178381f76b10bd7151d62e13ec0982adf413

[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6a433a6c11cb nginx_ubuntu24.04:v1.18.0 "nginx -g 'daemon of…" 6 seconds ago Up 5 seconds 0.0.0.0:32772->80/tcp, :::32772->80/tcp nginx-web

[root@rocky8 ~]# docker port nginx-web
80/tcp -> 0.0.0.0:32772
80/tcp -> [::]:32772

[root@rocky8 ~]# curl 127.0.0.1:32772
Nginx Website in Docker

实战案例: 基于CentOS的基础镜像利用 yum 安装手动制作 nginx 的镜像

下载基础镜像并初始化系统

基于某个基础镜像之上重新制作,因此需要先有一个基础镜像,本次使用官方提供的centos镜像为基础

[root@rocky8 ~]# docker run -it --name nginx_centos centos bash

#修改时区
[root@37bb54287e87 /]# rm -rf /etc/localtime
[root@37bb54287e87 /]# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

#更改yum 源
[root@37bb54287e87 /]# rm -rf /etc/yum.repos.d/*
[root@37bb54287e87 /]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2495 100 2495 0 0 6160 0 --:--:-- --:--:-- --:--:-- 6175

[root@37bb54287e87 /]# yum repolist
Failed to set locale, defaulting to C.UTF-8
repo id repo name
AppStream CentOS-8.5.2111 - AppStream - mirrors.aliyun.com
base CentOS-8.5.2111 - Base - mirrors.aliyun.com
extras CentOS-8.5.2111 - Extras - mirrors.aliyun.com

安装相关软件和工具

#yum安装nginx
[root@37bb54287e87 /]# yum install -y nginx

#安装常用命令
[root@37bb54287e87 /]# yum install -y vim curl iproute net-tools wget

#清理yum缓存
[root@37bb54287e87 /]# rm -rf /var/cache/dnf/*

修改服务的配置信息关闭服务后台运行

#关闭nginx后台运行
[root@37bb54287e87 /]# vim /etc/nginx/nginx.conf
user nginx;
daemon off; #关闭后台运行

准备程序和数据

#自定义web界面
[root@37bb54287e87 /]# rm -f /usr/share/nginx/html/index.html
[root@37bb54287e87 /]# echo "Nginx Page in Docker" > /usr/share/nginx/html/index.html

提交为镜像

docker commit 命令在宿主机基于容器ID 提交为镜像

#不关闭容器的情况,将容器提交为镜像
[root@rocky8 /]# docker commit -a "wshuaiqing.cn" -m "nginx yum v1" -c "EXPOSE 80 443" nginx_centos centos8-nginx:1.16.1.v1
sha256:049d292d07454f71532acfcaaeffc221bf2e8a2ba1d91f7314c910e33cd493f1

[root@rocky8 /]# docker images centos8-nginx:1.16.1.v1
REPOSITORY TAG IMAGE ID CREATED SIZE
centos8-nginx 1.16.1.v1 049d292d0745 16 seconds ago 348MB

从制作的镜像启动容器

[root@rocky8 /]# docker run -d -p 8080:80 --name nginx_centos centos8-nginx:1.16.1.v1 /usr/sbin/nginx
c0c74b99406df23273a79838db9360c76c874740feb497eda8bf1cc92d59b430

[root@rocky8 /]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c0c74b99406d centos8-nginx:1.16.1.v1 "/usr/sbin/nginx" 5 seconds ago Up 4 seconds 443/tcp, 0.0.0.0:8080->80/tcp, :::8080->80/tcp nginx_centos

访问测试镜像

[root@rocky8 /]# curl 127.0.0.1:8080
Nginx Page in Docker

实战案例: 基于CentOS 基础镜像手动制作编译版本 nginx 镜像

在CentOS 基础镜像的容器之上手动编译安装nginx,然后再将此容器提交为镜像

下载镜像并初始化系统

[root@rocky8 ~]# docker run -it centos /bin/bash

#生成yum源配置
[root@d4641b86e4d3 /]# rm -rf /etc/yum.repos.d/*
[root@d4641b86e4d3 /]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2495 100 2495 0 0 6911 0 --:--:-- --:--:-- --:--:-- 6911

[root@d4641b86e4d3 /]# yum install wget -y

编译安装 nginx

[root@d4641b86e4d3 /]# useradd -r -s /sbin/nologin nginx

#安装基础包
[root@d4641b86e4d3 /]# yum install -y gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel make

[root@d4641b86e4d3 src]# wget https://nginx.org/download/nginx-1.26.3.tar.gz
[root@d4641b86e4d3 src]# tar xf nginx-1.26.3.tar.gz
[root@d4641b86e4d3 src]# cd nginx-1.26.3
[root@d4641b86e4d3 nginx-1.26.3]# ./configure --prefix=/apps/nginx
[root@d4641b86e4d3 nginx-1.26.3]# make && make install
[root@d4641b86e4d3 nginx-1.26.3]# rm -rf /var/cache/dnf/*

关闭 nginx 后台运行

[root@d4641b86e4d3 nginx-1.26.3]# cd /apps/nginx/conf/
[root@d4641b86e4d3 conf]# vi nginx.conf
user nginx;
daemon off;

[root@d4641b86e4d3 conf]# ln -s /apps/nginx/sbin/nginx /usr/sbin/
[root@d4641b86e4d3 conf]# ls -l /usr/sbin/nginx
lrwxrwxrwx 1 root root 22 Apr 9 02:24 /usr/sbin/nginx -> /apps/nginx/sbin/nginx

准备相关数据自定义web界面

[root@d4641b86e4d3 conf]# echo "Nginx Test Page in Docker" > /apps/nginx/html/index.html

提交为镜像

#不要退出容器,在另一个终端窗口执行以下命令
[root@rocky8 /]# docker commit -c "CMD nginx" d4641b86e4d3 centos8-nginx:1.26.3
sha256:a5f1a17e45cfbd889a1db5d1924f49bafa0d4d8dcdbdab627c18dc8c53004b67

[root@rocky8 /]# docker images centos8-nginx:1.26.3
REPOSITORY TAG IMAGE ID CREATED SIZE
centos8-nginx 1.26.3 a5f1a17e45cf 11 seconds ago 530MB

从自己的镜像启动容器

[root@rocky8 /]# docker run -d -p 80:80 centos8-nginx:1.26.3 nginx
f07cbcf3fdfb0638520fcc30bdf79d9f8e3f9b0ab993b5839c85dd82b7058118

[root@rocky8 /]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f07cbcf3fdfb centos8-nginx:1.26.3 "nginx" 5 seconds ago Up 4 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp optimistic_cori

备注: 最后面的nginx是运行的命令,即镜像里面要运行一个nginx命令,所以前面软链接到/usr/sbin/nginx,目的为了让系统不需要指定路径就可以执行此命令

访问测试

[root@rocky8 /]# curl 127.0.0.1
Nginx Test Page in Docker

查看Nginx访问日志和进程

[root@rocky8 /]# docker exec -it f07cbcf3fdfb bash
[root@f07cbcf3fdfb /]# cat /apps/nginx/logs/access.log
172.17.0.1 - - [09/Apr/2025:02:32:39 +0000] "GET / HTTP/1.1" 200 25 "-" "curl/7.61.1"
172.17.0.1 - - [09/Apr/2025:02:33:16 +0000] "GET / HTTP/1.1" 200 25 "-" "curl/7.61.1"
172.17.0.1 - - [09/Apr/2025:02:33:17 +0000] "GET / HTTP/1.1" 200 25 "-" "curl/7.61.1"
172.17.0.1 - - [09/Apr/2025:02:33:17 +0000] "GET / HTTP/1.1" 200 25 "-" "curl/7.61.1"
172.17.0.1 - - [09/Apr/2025:02:33:18 +0000] "GET / HTTP/1.1" 200 25 "-" "curl/7.61.1"
172.17.0.1 - - [09/Apr/2025:02:33:18 +0000] "GET / HTTP/1.1" 200 25 "-" "curl/7.61.1"
172.17.0.1 - - [09/Apr/2025:02:33:18 +0000] "GET / HTTP/1.1" 200 25 "-" "curl/7.61.1"
172.17.0.1 - - [09/Apr/2025:02:33:19 +0000] "GET / HTTP/1.1" 200 25 "-" "curl/7.61.1"

[root@f07cbcf3fdfb /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18660 2628 ? Ss 02:31 0:00 nginx: master proce
nginx 6 0.0 0.0 37408 4280 ? S 02:31 0:00 nginx: worker proce
root 7 0.1 0.0 15096 3640 pts/0 Ss 02:33 0:00 bash
root 22 0.0 0.0 47596 3764 pts/0 R+ 02:33 0:00 ps aux

利用 DockerFile 文件执行 docker build 自动构建镜像

Dockerfile 使用详解

Dockerfile 介绍

Dockerfile 是一种被Docker程序解释执行的脚本,由一条条的指令组成,每条指令对应Linux下面的一条命令,Docker程序将这些Dockerfile指令翻译成真正的Linux命令。其有自己的书写格式和支持的命令,Docker程序读取Dockerfile并根据指令生成Docker镜像。相比手动制作镜像的方式,Dockerfile更能直观地展示镜像是怎么产生的;有了Dockerfile,当后期有额外的需求时,只要在之前的Dockerfile中添加或者修改相应的指令即可重新生成新的Docker镜像,避免了重复手动制作镜像的麻烦,类似于shell脚本一样,可以方便高效地制作镜像

Docker守护程序逐一运行 Dockerfile 中的指令,如有必要,将每个指令的结果提交到新镜像,然后最终输出新镜像的ID。Docker守护程序将自动清理之前发送的上下文

请注意,每条指令都是独立运行的,并会导致创建新镜像,比如 RUN cd /tmp 对下一条指令不会有任何影响。

Docker将尽可能重用中间镜像层(缓存),以显著加速 docker build 命令的执行过程,这由 Using cache 控制台输出中的消息指示

Dockerfile 镜像制作和使用流程


Dockerfile文件的制作镜像的分层结构


范例:

#按照业务类型或系统类型等方式划分创建目录环境,方便后期镜像比较多的时候进行分类
[root@rocky8 ~]# mkdir /data/dockerfile/{web/{nginx,apache,tomcat,jdk},system/{centos,ubuntu,alpine,debian}} -p
[root@rocky8 ~]# tree /data/dockerfile/
/data/dockerfile/
├── system
│   ├── alpine
│   ├── centos
│   ├── debian
│   └── ubuntu
└── web
├── apache
├── jdk
├── nginx
└── tomcat

10 directories, 0 files

Dockerfile 文件格式

Dockerfile 是一个有特定语法格式的文本文件

dockerfile 官方说明: https://docs.docker.com/engine/reference/builder/

帮助: man 5 dockerfile

Dockerfile 文件说明

  • 每一行以Dockerfile的指令开头,指令不区分大小写,但是惯例使用大写
  • 使用 # 开头的行作为注释
  • 每一行只支持一条指令,每条指令可以携带多个参数
  • 指令按文件的顺序从上至下进行执行
  • 每个指令的执行会生成一个新的镜像层,为了减少分层和镜像大小,尽可能将多条指令合并成一条指令
  • 制作镜像一般可能需要反复多次,每次执行Dockerfile都按顺序从头开始执行,已经执行过的指令会被缓存,不需要再执行;如果某一行是没执行过的新指令,其往后的指令都将重新执行。所以为加速镜像制作,应将最常变化的内容放在Dockerfile文件的后面
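
基于上面的缓存原则,下面给出一个示意性的 Dockerfile(其中的软件包和文件名仅为假设),把不常变化的依赖安装放在前面、最常变化的业务文件放在最后,以尽量命中构建缓存:

```dockerfile
FROM ubuntu:22.04

# 基础依赖很少变化,放在前面,后续构建可复用此层缓存
RUN apt-get update && apt-get install -y nginx \
    && rm -rf /var/lib/apt/lists/*

# 业务文件最常变化,放在最后,改动时只需重建此层及之后的层
COPY index.html /var/www/html/

CMD ["nginx", "-g", "daemon off;"]
```

这样修改 index.html 重新构建时,apt 安装层会直接显示 Using cache,而不会重新下载安装。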

Dockerfile 相关指令

dockerfile 文件中的常见指令:

ADD
COPY
ENV
EXPOSE
FROM
LABEL
STOPSIGNAL
USER
VOLUME
WORKDIR


FROM: 指定基础镜像

定制镜像,需要先有一个基础镜像,在这个基础镜像上进行定制。

FROM 就是指定基础镜像,此指令通常必需放在Dockerfile文件第一个非注释行。后续的指令都是运行于此基准镜像所提供的运行环境

基础镜像可以是任何可用镜像文件,默认情况下,docker build会在docker主机上查找指定的镜像文件,在其不存在时,则会从Docker Hub Registry上拉取所需的镜像文件.如果找不到指定的镜像文件,docker build会返回一个错误信息

如何选择合适的镜像呢?

对于不同的软件官方都提供了相关的docker镜像,比如: nginx、redis、mysql、httpd、tomcat等服务类的镜像,也有操作系统类,如: centos、ubuntu、debian等。建议使用官方镜像,比较安全。

格式:

FROM [--platform=<platform>] <image> [AS <name>]
FROM [--platform=<platform>] <image>[:<tag>] [AS <name>]
FROM [--platform=<platform>] <image>[@<digest>] [AS <name>]

#说明:
--platform 指定镜像的平台,比如: linux/amd64, linux/arm64, or windows/amd64
tag 和 digest是可选项,如果不指定,默认为latest

说明: 关于scratch 镜像

FROM scratch
参考链接:
https://hub.docker.com/_/scratch?tab=description
https://docs.docker.com/develop/develop-images/baseimages/
该镜像是一个空的镜像,可以用于构建busybox等超小镜像,可以说是真正的从零开始构建属于自己的镜像
该镜像在构建基础镜像(例如debian和busybox)或超小镜像(仅包含一个二进制文件及其所需内容,例如:hello-world)的场景中最有用。

范例:

FROM scratch #所有镜像的起源镜像,相当于Object类
FROM ubuntu
FROM ubuntu:bionic
FROM debian:buster-slim
LABEL: 指定镜像元数据

可以指定镜像元数据,如: 镜像作者等

LABEL <key>=<value> <key>=<value> <key>=<value> ...

范例:

LABEL "com.example.vendor"="ACME Incorporated"
LABEL com.example.label-with-value="foo"
LABEL version="1.0"
LABEL description="This text illustrates \
that label-values can span multiple lines."

一个镜像可以有多个label,还可以写在一行中,即多标签写法,可以减少镜像的大小

范例: 多标签写法

#一行格式
LABEL multi.label1="value1" multi.label2="value2" other="value3"

#多行格式
LABEL multi.label1="value1" \
multi.label2="value2" \
other="value3"

docker inspect 命令可以查看LABEL

范例:

"Labels": {
"com.example.vendor": "ACME Incorporated"
"com.example.label-with-value": "foo",
"version": "1.0",
"description": "This text illustrates that label-values can span multiple lines.",
"multi.label1": "value1",
"multi.label2": "value2",
"other": "value3"
},

MAINTAINER: 指定维护者信息

此指令已过时,用LABEL代替

MAINTAINER <name>

范例:

MAINTAINER wangxiaochun <root@wangxiaochun.com>
#用LABEL代替
LABEL maintainer="wangxiaochun <root@wangxiaochun.com>"
RUN: 执行 shell命令

RUN 指令用来在构建镜像阶段需要执行 FROM 指定镜像所支持的Shell命令。

通常各种基础镜像一般都支持丰富的shell命令

注意: RUN 可以写多个,每一个RUN指令都会建立一个镜像层,所以尽可能合并成一条指令,比如将多个shell命令通过 && 连接合并成一条RUN指令

每个RUN都是独立运行的,和前一个RUN无关

#shell 格式: 相当于 /bin/sh -c <命令> 此种形式支持环境变量
RUN <命令>

#exec 格式: 此种形式不支持环境变量,注意:是双引号,不能是单引号
RUN ["executable","param1","param2"...]

#exec格式可以指定其它shell
RUN ["/bin/bash","-c","echo hello wang"]

说明:

shell格式中,<command>通常是一个shell命令,且以 "/bin/sh -c" 来运行它,这意味着此进程在容器中的PID不为1,不能接收Unix信号,因此,当使用docker stop <container>命令停止容器时,此进程接收不到SIGTERM信号

exec格式中的参数是一个JSON格式的数组,其中<executable>为要运行的命令,后面的<paramN>为传递给命令的选项或参数;然而,此种格式指定的命令不会以 "/bin/sh -c" 来发起,因此常见的shell操作如变量替换以及通配符(?,*等)替换将不会进行;不过,如果要运行的命令依赖于此shell特性的话,可以将其替换为类似下面的格式。
RUN ["/bin/bash", "-c", "<executable>", "<param1>"]

范例:

RUN echo '<h1>Hello, Docker!</h1>' > /usr/share/nginx/html/index.html
RUN ["/bin/bash", "-c", "echo hello world"]
RUN yum -y install epel-release \
&& yum -y install nginx \
&& rm -rf /usr/share/nginx/html/*
&& echo "<h1> docker test nginx </h1>" > /usr/share/nginx/html/index.html

范例: 多个前后RUN 命令独立无关和shell命令不同

#world.txt并不存放在/app内
RUN cd /app
RUN echo "hello" > world.txt
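
如果确实希望在 /app 目录下生成文件,可以将命令用 && 合并到同一条 RUN 中,或改用 WORKDIR 指令(示意写法):

```dockerfile
# 方式1: 在同一条 RUN 内用 && 串联, cd 的效果在本条指令内有效
RUN mkdir -p /app && cd /app && echo "hello" > world.txt

# 方式2: WORKDIR 会影响其后所有指令的工作目录
WORKDIR /app
RUN echo "hello" > world.txt
```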
ENV: 设置环境变量

ENV 可以定义环境变量和值,会被后续指令(如:ENV,ADD,COPY,RUN等)通过$KEY或${KEY}进行引用,并在容器运行时保持

#变量赋值格式1
ENV <key> <value> #此格式只能对一个key赋值,<key>之后的所有内容均会被视作其<value>的组成部分

#变量赋值格式2
ENV <key1>=<value1> <key2>=<value2> \ #此格式可以支持多个key赋值,定义多个变量建议使用,减少镜像层
<key3>=<value3> ...

#如果<value>中包含空格,可以以反斜线\进行转义,也可通过对<value>加引号进行标识;另外,反斜线也可用于续行

#只使用一次变量
RUN <key>=<value> <command>

#引用变量
RUN $key .....

#变量支持高级赋值格式
${key:-word}
${key:+word}
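
上面 ${key:-word} 和 ${key:+word} 的语义与 shell 的参数展开一致,可以直接在宿主机 shell 中验证:

```shell
# 变量高级赋值格式的行为演示
unset name
echo "${name:-default}"    # name 未设置或为空时, 取默认值, 输出 default
name=wang
echo "${name:-default}"    # name 已设置, 输出 wang
echo "${name:+set}"        # name 已设置时取 word, 输出 set; 未设置时输出空
```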

如果运行容器时需要修改变量,可以通过下面基于 exec 机制的方式实现

注意: 下面方式只影响容器运行时环境,而不影响构建镜像的过程,即只能覆盖docker run时的环境变量,而不会影响docker build时环境变量的值

docker run -e|--env <key>=<value>

#说明
-e, --env list #Set environment variables
--env-file filename #Read in a file of environment variables

示例: 两种格式功能相同

#格式1
ENV myName="John Doe" myDog=Rex\ The\ Dog \
myCat=fluffy

#格式2
ENV myName John Doe
ENV myDog Rex The Dog
ENV myCat fluffy

范例:

ENV VERSION=1.0 DEBUG=on NAME="Happy Feet"
ENV PG_MAJOR 9.3
ENV PG_VERSION 9.3.4
RUN curl -SL http://example.com/postgres-$PG_VERSION.tar.xz | tar -xJC /usr/src/postgress && …
ENV PATH /usr/local/postgres-$PG_MAJOR/bin:$PATH

范例:

[root@rocky8 dockerfile]# cat Dockerfile 
FROM busybox
LABEL maintainer="wsq <wshuaiqing.cn>"
ENV NAME wang shuai qing
RUN touch $NAME.txt

[root@rocky8 dockerfile]# docker build -t test:v5.0 .
[+] Building 0.5s (6/6) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 134B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/2] FROM docker.io/library/busybox:latest 0.0s
=> [2/2] RUN touch wang shuai qing.txt 0.4s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:554ed570553dad4f1bff217c761345377f7429e6c0d190c5bd1 0.0s
=> => naming to docker.io/library/test:v5.0

[root@rocky8 dockerfile]# docker run --rm --name c1 test:v5.0 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=35ad9638af4a
NAME=wang shuai qing
HOME=/root

[root@rocky8 dockerfile]# docker run --rm -e NAME=mage --name c1 test:v5.0 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=1e2a1905450e
NAME=mage
HOME=/root

[root@rocky8 dockerfile]# docker run --rm -e NAME=mage --name c1 test:v5.0 ls -l
total 20
drwxr-xr-x 2 root root 12288 Sep 26 2024 bin
drwxr-xr-x 5 root root 340 Apr 9 03:22 dev
drwxr-xr-x 1 root root 66 Apr 9 03:22 etc
drwxr-xr-x 2 nobody nobody 6 Sep 26 2024 home
drwxr-xr-x 2 root root 4096 Sep 26 2024 lib
lrwxrwxrwx 1 root root 3 Sep 26 2024 lib64 -> lib
dr-xr-xr-x 206 root root 0 Apr 9 03:22 proc
-rw-r--r-- 1 root root 0 Apr 9 03:20 qing.txt
drwx------ 2 root root 6 Sep 26 2024 root
-rw-r--r-- 1 root root 0 Apr 9 03:20 shuai
dr-xr-xr-x 13 root root 0 Apr 9 03:22 sys
drwxrwxrwt 2 root root 6 Sep 26 2024 tmp
drwxr-xr-x 4 root root 29 Sep 26 2024 usr
drwxr-xr-x 4 root root 30 Sep 26 2024 var
-rw-r--r-- 1 root root 0 Apr 9 03:20 wang

[root@rocky8 dockerfile]# docker run --rm --env-file env.txt --name c1 test:v5.0 env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=64cbb34a5896
NAME=wang
TITLE=cto
HOME=/root
COPY: 复制文本

复制本地宿主机的 <src> 到容器中的 <dest>。

COPY [--chown=<user>:<group>] <src>... <dest>
COPY [--chown=<user>:<group>] ["<src>",... "<dest>"] #路径中有空白字符时,建议使用此格式

说明:

  • <src> 可以是多个,可以使用通配符,通配符规则满足Go的filepath.Match 规则

    filepath.Match 参考链接: https://golang.org/pkg/path/filepath/#Match

  • <src> 必须是build上下文中的路径(为 Dockerfile 所在目录的相对路径),不能是其父目录中的文件

  • 如果 <src> 是目录,则其内部文件或子目录会被递归复制,但 <src> 目录自身不会被复制

  • 如果指定了多个 <src>,或在 <src> 中使用了通配符,则 <dest> 必须是一个目录,且必须以 / 结尾

  • <dest> 可以是绝对路径或者是 WORKDIR 指定的相对路径

  • 使用 COPY 指令,源文件的各种元数据都会保留。比如读、写、执行权限、文件变更时间等

  • 如果 <dest> 事先不存在,它将会被自动创建,这包括其父目录路径,即递归创建目录

范例:

COPY hom* /mydir/    
COPY hom?.txt /mydir/
ADD: 复制和解包文件

该命令可认为是增强版的COPY,不仅支持COPY的功能,还支持自动解压缩,可以将指定的 <src> 复制到容器中的 <dest>

ADD [--chown=<user>:<group>] <src>... <dest>
ADD [--chown=<user>:<group>] ["<src>",... "<dest>"]

说明:

  • <src> 可以是Dockerfile所在目录的一个相对路径;也可是一个 URL;还可是一个 tar 文件(自动解压)
  • <dest> 可以是绝对路径或者是 WORKDIR 指定的相对路径
  • 如果 <src> 是目录,只复制目录中的内容,而非目录本身
  • 如果 <src> 是一个 URL ,下载后的文件权限自动设置为 600
  • 如果 <src> 为 URL 且 <dest> 不以 / 结尾,则 <src> 指定的文件将被下载并直接被创建为 <dest>;如果 <dest> 以 / 结尾,则 URL 指定的文件将被下载并保存为 <dest>/<filename>
  • 如果 <src> 是一个本地文件系统上的打包文件,如: gz, bz2, xz,它将被解包,其行为类似于"tar -x"命令;但是通过URL获取到的tar文件将不会自动解包
  • 如果 <src> 有多个,或其间接或直接使用了通配符,则 <dest> 必须是一个以 / 结尾的目录路径;如果 <dest> 不以 / 结尾,则其被视作一个普通文件,<src> 的内容将被直接写入到 <dest>

范例:

ADD test relativeDir/          # adds "test" to `WORKDIR`/relativeDir/
ADD test /absoluteDir/ # adds "test" to /absoluteDir/
ADD --chown=55:mygroup files* /somedir/
ADD --chown=bin files* /somedir/
ADD --chown=1 files* /somedir/
ADD --chown=10:11 files* /somedir/
ADD ubuntu-xenial-core-cloudimg-amd64-root.tar.gz /
CMD: 容器启动命令


一个容器中需要持续运行的进程一般只有一个,CMD 用来指定启动容器时默认执行的一个命令,且其运行结束后,容器也会停止,所以一般CMD 指定的命令为持续运行且为前台命令.

  • 如果docker run没有指定任何的执行命令,且dockerfile里面也没有ENTRYPOINT命令,那么启动容器时就会执行CMD指定的默认命令
  • 前面介绍过的 RUN 命令是在构建镜像时执行的命令,注意二者的不同之处
  • 每个 Dockerfile 只能有一条 CMD 命令。如指定了多条,只有最后一条被执行
  • 如果用户启动容器时用 docker run xxx 指定运行的命令,则会覆盖 CMD 指定的命令
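
其中"多条 CMD 只有最后一条被执行"可以用一个极简草例验证(示例,镜像名 cmd-demo 为假定名称):

```dockerfile
FROM busybox
CMD ["echo", "first"]
#每个 Dockerfile 只有最后一条 CMD 生效
CMD ["echo", "second"]
```

#docker build -t cmd-demo . 后执行 docker run --rm cmd-demo,输出 second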
# 使用 exec 执行,推荐方式,第一个参数必须是命令的全路径,此种形式不支持环境变量
CMD ["executable","param1","param2"]

# 在 /bin/sh 中执行,提供给需要交互的应用;此种形式支持环境变量
CMD command param1 param2

# 提供给 ENTRYPOINT 命令的默认参数
CMD ["param1","param2"]

范例:

CMD ["nginx", "-g", "daemon off;"]

范例:

[root@rocky8 dockerfile]# cat Dockerfile 
FROM ubuntu
LABEL maintainer="wsq <wshuaiqing.cn>"
RUN apt update \
&& apt -y install curl \
&& rm -rf /var/lib/apt/lists/*
CMD ["curl","-s","https://cip.cc"]


[root@rocky8 dockerfile]# docker build -t test:v1.0 .
[+] Building 0.1s (6/6) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 198B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/2] FROM docker.io/library/ubuntu:latest 0.0s
=> CACHED [2/2] RUN apt update && apt -y install curl && rm -rf /var/lib/apt/l 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:aedaa0715b9ce6ad6b981ff182c3b883358fc9fc5ec7a728674 0.0s
=> => naming to docker.io/library/test:v1.0

[root@rocky8 dockerfile]# docker run test:v1.0
IP : 106.34.172.94
地址 : 中国 河南
运营商 : 电信

数据二 : 中国河南郑州 | 电信

数据三 : 中国河南省郑州市 | 电信

URL : http://www.cip.cc/106.34.172.94

#cat /etc/issue 覆盖了CMD指定的curl命令
[root@rocky8 dockerfile]# docker run test:v1.0 cat /etc/issue
Ubuntu 24.04.1 LTS \n \l

范例:

[root@rocky8 dockerfile]# cat Dockerfile 
FROM busybox
LABEL maintainer="wsq <wshuaiqing.cn>"
ENV ROOT /data/website
COPY index.html ${ROOT}/index.html
CMD /bin/httpd -f -h ${ROOT}
EXPOSE 80

[root@rocky8 dockerfile]# cat index.html
website in Dockerfile

[root@rocky8 dockerfile]# docker build -t test:v2.0 .
[+] Building 0.1s (7/7) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 188B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 59B 0.0s
=> CACHED [1/2] FROM docker.io/library/busybox:latest 0.0s
=> [2/2] COPY index.html /data/website/index.html 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:e6cebb60862b142410c25e440e656b56bd111743938ecf50fd4 0.0s
=> => naming to docker.io/library/test:v2.0

[root@rocky8 dockerfile]# docker run -d --rm -P --name c1 test:v2.0
fbcc56f0dd2e93c7667d01773708c272913cab3d72dfd954e6dbdb150d007edc

[root@rocky8 dockerfile]# docker port c1
80/tcp -> 0.0.0.0:32773
80/tcp -> [::]:32773

[root@rocky8 dockerfile]# curl 127.0.0.1:32773
website in Dockerfile

范例:

[root@rocky8 dockerfile]# cat Dockerfile 
FROM busybox
LABEL maintainer="wsq <wshuaiqing.cn>"
ENV ROOT /data/website
RUN mkdir -p ${ROOT} && echo '<h1>Busybox httpd server in Dockerfile</h1>' > ${ROOT}/index.html
CMD [ "/bin/sh", "-c", "/bin/httpd -f -h ${ROOT}" ]
EXPOSE 80


[root@rocky8 dockerfile]# docker build -t test:v3.0 .
[+] Building 0.4s (6/6) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 272B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> CACHED [1/2] FROM docker.io/library/busybox:latest 0.0s
=> [2/2] RUN mkdir -p /data/website && echo '<h1>Busybox httpd server in Docke 0.4s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:95db5b4e1a69931ca64d07064224183cfa85f5114057fb0be49 0.0s
=> => naming to docker.io/library/test:v3.0


[root@rocky8 dockerfile]# docker run -d --rm -P --name c3 test:v3.0
0edfa6ceb89bb92b8c4e078ed6e9d0e02c7be995742744ca8f13d832d391dd22

[root@rocky8 dockerfile]# docker port c3
80/tcp -> 0.0.0.0:32774
80/tcp -> [::]:32774

[root@rocky8 dockerfile]# curl 127.0.0.1:32774
<h1>Busybox httpd server in Dockerfile</h1>

[root@rocky8 dockerfile]# docker inspect -f "{{.Config}}" test:v3.0
{ false false false map[80/tcp:{}] false false false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ROOT=/data/website] [/bin/sh -c /bin/httpd -f -h ${ROOT}] <nil> true map[] [] false [] map[maintainer:wsq <wshuaiqing.cn>] <nil> []}


#查看进程关系
[root@rocky8 dockerfile]# docker exec -it c3 sh
/ # ps
PID USER TIME COMMAND
1 root 0:00 /bin/httpd -f -h /data/website
8 root 0:00 sh
14 root 0:00 ps
ENTRYPOINT: 入口点

功能类似于CMD,配置容器启动后执行的命令及参数

# 使用 exec 执行
ENTRYPOINT ["executable", "param1", "param2"...]

# shell中执行
ENTRYPOINT command param1 param2
  • ENTRYPOINT 不能被 docker run 提供的参数覆盖,而是追加,即如果docker run 命令有参数,那么参数全部都会作为ENTRYPOINT的参数
  • 如果docker run 后面没有额外参数,但是dockerfile中有CMD命令(即上面CMD的第三种用法),即Dockerfile中即有CMD也有ENTRYPOINT,那么CMD的全部内容会作为ENTRYPOINT的参数
  • 如果docker run 后面有额外参数,同时Dockerfile中即有CMD也有ENTRYPOINT,那么docker run 后面的参数覆盖掉CMD参数内容,最终作为ENTRYPOINT的参数
  • 可以通过docker run --entrypoint string 参数在运行时替换,注意string不要加空格
  • 使用CMD要在运行时重新写命令本身,然后在后面才能追加运行参数,ENTRYPOINT则可以运行时无需重写命令就可以直接接受新参数
  • 每个 Dockerfile 中只能有一个 ENTRYPOINT,当指定多个时,只有最后一个生效
  • 通常会利用ENTRYPOINT指令配合脚本,可以为CMD指令提供环境配置
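
上述 CMD 与 ENTRYPOINT 的组合关系可以用一个极简 Dockerfile 草例验证(示例,镜像名 demo-ep 为假定名称):

```dockerfile
FROM busybox
#ENTRYPOINT 固定要执行的命令
ENTRYPOINT ["echo"]
#CMD 的内容作为 ENTRYPOINT 的默认参数
CMD ["hello"]
```

#docker run --rm demo-ep          输出 hello(CMD 为默认参数)
#docker run --rm demo-ep world    输出 world(run 的参数覆盖 CMD,作为 ENTRYPOINT 的参数)
#docker run --rm --entrypoint hostname demo-ep    替换 ENTRYPOINT,输出容器主机名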

范例:

FROM centos:centos7.9-v10.0
LABEL maintainer="wangxiaochun <root@wangxiaochun.com>"
ENV version=1.18.0
ADD nginx-$version.tar.gz /usr/local/
RUN cd /usr/local/nginx-$version && ./configure --prefix=/apps/nginx && make \
    && make install && rm -rf /usr/local/nginx* \
    && sed -i 's/.*nobody.*/user nginx;/' /apps/nginx/conf/nginx.conf \
    && useradd -r nginx
COPY index.html /apps/nginx/html
VOLUME ["/apps/nginx/html"]
EXPOSE 80 443
CMD ["-g","daemon off;"]
ENTRYPOINT ["/apps/nginx/sbin/nginx"]
#上面两条指令相当于ENTRYPOINT ["/apps/nginx/sbin/nginx","-g","daemon off;"]
HEALTHCHECK --interval=5s --timeout=3s CMD curl -fs http://127.0.0.1/

范例:

[root@ubuntu1804 ~]#docker run -it --entrypoint cat alpine /etc/issue
Welcome to Alpine Linux 3.12
Kernel \r on an \m (\l)

范例:

[root@rocky8 dockerfile]# cat Dockerfile 
FROM ubuntu
RUN apt update \
&& apt -y install curl \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT [ "curl", "-s", "https://cip.cc" ]


[root@rocky8 dockerfile]# docker build -t test:v4.0 .
[+] Building 0.1s (6/6) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 170B 0.0s
=> [internal] load metadata for docker.io/library/ubuntu:latest 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/2] FROM docker.io/library/ubuntu:latest 0.0s
=> CACHED [2/2] RUN apt update && apt -y install curl && rm -rf /var/lib/apt/l 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:b750d685d2be39f30f640eb49d55c9faf1d7f628e30161f21e6 0.0s
=> => naming to docker.io/library/test:v4.0 0.0s


[root@rocky8 dockerfile]# docker run -it --rm test:v4.0
IP : 106.34.172.94
地址 : 中国 河南
运营商 : 电信

数据二 : 中国河南郑州 | 电信

数据三 : 中国河南省郑州市 | 电信

URL : http://www.cip.cc/106.34.172.94


#追加-i参数
[root@rocky8 dockerfile]# docker run -it --rm test:v4.0 -i
HTTP/1.1 200 OK
Server: openresty
Date: Wed, 09 Apr 2025 07:51:32 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 188
Connection: keep-alive
x-content-type-options: nosniff
permissions-policy: interest-cohort=()
x-frame-options: SAMEORIGIN

IP : 106.34.172.94
地址 : 中国 河南
运营商 : 电信

数据二 : 中国河南郑州 | 电信

数据三 : 中国河南省郑州市 | 电信

URL : http://www.cip.cc/106.34.172.94

范例: 利用脚本实现指定环境变量动态生成配置文件内容

[root@rocky8 dockerfile]# cat entrypoint.sh 
#!/bin/bash
cat > /etc/nginx/conf.d/www.conf <<EOF
server {
server_name ${HOSTNAME};
listen ${IP:-0.0.0.0}:${PORT:-80};
root ${DOC_ROOT:-/usr/share/nginx/html};
}
EOF

exec "$@"  #最后以exec执行CMD传入的命令(这里是nginx),否则脚本生成配置后即退出,容器随之停止

[root@rocky8 dockerfile]# cat Dockerfile
FROM nginx
ENV DOC_ROOT='/data/website/'
ADD index.html ${DOC_ROOT}
ADD entrypoint.sh /bin/
EXPOSE 80/tcp 8080
CMD [ "/usr/sbin/nginx", "-g", "daemon off;" ] #CMD指令的内容都成为了ENTRYPOINT的参数
ENTRYPOINT [ "/bin/entrypoint.sh" ]

[root@rocky8 dockerfile]# chmod +x entrypoint.sh
[root@rocky8 dockerfile]# docker build -t nginx:v1.0 .
[root@rocky8 dockerfile]# docker run --name n1 --rm -P -e "PORT=8080" -e "HOSTNAME=www.wang.org" nginx:v1.0
ARG: 构建参数

ARG指令在build 阶段指定变量,和ENV不同的是,容器运行时不会存在这些环境变量

ARG <name>[=<default value>]

如果和ENV同名,ENV覆盖ARG变量

可以用 docker build --build-arg <参数名>=<值> 来覆盖

范例:

FROM busybox
ARG author="wang <root@wangxiaochun.com>"
LABEL maintainer="${author}"


[root@ubuntu1804 ~]# docker build --build-arg author="29308620@qq.com" -t busybox:v1.0 .
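
范例: 验证"同名时 ENV 覆盖 ARG"的最小草例(示例,镜像名 argdemo 为假定名称):

```dockerfile
FROM busybox
ARG VER=1.0
#与 ARG 同名的 ENV 会覆盖 ARG,后续指令中 $VER 为 2.0
ENV VER=2.0
RUN echo "version=$VER"
```

#即使用 docker build --no-cache --build-arg VER=3.0 构建,RUN 一步仍输出 version=2.0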

说明: ARG 和 FROM

#FROM指令支持ARG指令放在第一个FROM之前声明变量

#示例:
ARG CODE_VERSION=latest
FROM base:${CODE_VERSION}
CMD /code/run-app


FROM extras:${CODE_VERSION}
CMD /code/run-extras


#在FROM之前声明的ARG在构建阶段之外,所以它不能在FROM之后的任何指令中使用。 要使用在第一个FROM之前声明的ARG的默认值,请在构建阶段内使用没有值的ARG指令

#示例:
ARG VERSION=latest
FROM busybox:$VERSION
ARG VERSION
RUN echo $VERSION > image_version
VOLUME: 匿名卷

在容器中创建一个可以从本地主机或其他容器挂载的挂载点,一般用来存放数据库文件和其它需要持久保存的数据等,默认会将宿主机上的目录挂载至VOLUME 指令指定的容器目录。即使容器后期被删除,此宿主机的目录仍会保留,从而实现容器数据的持久保存。

宿主机目录为

/var/lib/docker/volumes/<volume_id>/_data

语法:

VOLUME <容器内路径>
VOLUME ["<容器内路径1>", "<容器内路径2>"...]

注意:
<容器内路径>如果在容器内不存在,在创建容器时会自动创建
<容器内路径>如果是存在的,同时目录内有内容,将会把此目录的内容复制到宿主机的实际目录

注意:

  • Dockerfile中的VOLUME实现的是匿名数据卷,无法指定宿主机路径和容器目录的挂载关系
  • 通过docker rm -fv <容器ID> 可以删除容器的同时删除VOLUME指定的卷

范例: 在容器创建两个/data1 ,/data2的挂载点

VOLUME [ "/data1","/data2" ]

范例:

[root@rocky8 dockerfile]# cat Dockerfile 
FROM alpine
VOLUME [ "/testdata1","/testdata2" ]

[root@rocky8 dockerfile]# docker build -t test:v7.0 .
[root@rocky8 dockerfile]# docker run -it --rm test:v7.0 sh
/ # df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 52403200 12414100 39989100 24% /
tmpfs 65536 0 65536 0% /dev
tmpfs 2845540 0 2845540 0% /sys/fs/cgroup
shm 65536 0 65536 0% /dev/shm
/dev/mapper/rl-root 52403200 12414100 39989100 24% /testdata2
/dev/mapper/rl-root 52403200 12414100 39989100 24% /testdata1

/ # cp /etc/issue /testdata1/f1.txt
/ # cp /etc/issue /testdata2/f2.txt
/ # exit

[root@rocky8 /]# tree /var/lib/docker/volumes/
/var/lib/docker/volumes/
├── 5bec1388af1d923f77220700dd7f458068f2ed956513ec8e9dfb336e9f5529c4
│   └── _data
│   └── f1.txt
├── 8e2e3802d6c512d9f94a5912e5cdca326cb1cca0212607e2da2abdb964532415
│   └── _data
│   └── f2.txt
EXPOSE: 暴露端口

指定服务端的容器需要对外暴露(监听)的端口号,以实现容器与外部通信。

EXPOSE 仅仅是声明容器打算使用什么端口而已,并不会真正暴露端口,即不会自动在宿主进行端口映射

因此,在启动容器时需要通过 -P 或 -p ,Docker 主机才会真正分配一个端口转发到指定暴露的端口才可使用

注意: 即使 Dockerfile 没有 EXPOSE 端口指令,也可以通过docker run -p 临时暴露容器内程序真正监听的端口,所以EXPOSE 相当于指定默认的暴露端口,可以通过docker run -P 进行真正暴露

EXPOSE <port>[/<protocol>] [<port>[/<protocol>] ...]

#说明
<protocol>用于指定传输层协议,可为tcp或udp二者之一,默认为TCP协议

范例:

EXPOSE 80 443
EXPOSE 11211/udp 11211/tcp

范例:

[root@rocky8 dockerfile]# echo website in Dockerfile > index.html
[root@rocky8 dockerfile]# cat Dockerfile
FROM busybox
LABEL maintainer="wshuaiqing.cn"
COPY index.html /data/website/
EXPOSE 80


[root@rocky8 dockerfile]# docker build -t test:v1.0 .
[root@rocky8 dockerfile]# docker run --rm -P --name c1 test:v1.0 /bin/httpd -f -h /data/website


#新终端
[root@rocky8 /]# docker port c1
80/tcp -> 0.0.0.0:32778
80/tcp -> [::]:32778

[root@rocky8 /]# curl 127.0.0.1:32778
website in Dockerfile

[root@rocky8 /]# docker kill c1
c1
WORKDIR: 指定工作目录

为后续的 RUN、CMD、ENTRYPOINT 指令配置工作目录,当容器运行后,进入容器内WORKDIR指定的默认目录

WORKDIR 指定工作目录(或称当前目录),以后各层的当前目录就被改为指定的目录,如该目录不存在,WORKDIR 会自行创建

WORKDIR /path/to/workdir

范例:

#两次RUN独立运行,不在同一个目录
RUN cd /app
RUN echo "hello" > world.txt

#如果想实现相同目录可以使用WORKDIR
WORKDIR /app
RUN echo "hello" > world.txt

可以使用多个 WORKDIR 指令,后续命令如果参数是相对路径,则会基于之前命令指定的路径。例如

WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd
#则最终路径为 /a/b/c
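
上面的片段可以补全为可构建的最小镜像加以验证(示例,镜像名 wd-demo 为假定名称):

```dockerfile
FROM busybox
#目录不存在时 WORKDIR 会自动创建
WORKDIR /a
WORKDIR b
WORKDIR c
#容器启动后的当前目录即为 /a/b/c
CMD ["pwd"]
```

#docker build -t wd-demo . && docker run --rm wd-demo 输出 /a/b/c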
ONBUILD: 子镜像引用父镜像的指令

可以用来配置当构建当前镜像的子镜像时,会自动触发执行的指令,但在当前镜像构建时,并不会执行,即延迟到子镜像构建时才执行

ONBUILD [INSTRUCTION]

例如,Dockerfile 使用如下的内容创建了镜像 image-A。

...
ONBUILD ADD http://www.magedu.com/wp-content/uploads/2017/09/logo.png /data/
ONBUILD RUN rm -rf /*
ONBUILD RUN /usr/local/bin/python-build --dir /app/src...

如果基于 image-A 创建新的镜像image-B时,新的Dockerfile中使用 FROM image-A指定基础镜像时,会自动执行ONBUILD 指令内容,等价于在后面添加了两条指令。

FROM image-A

#Automatically run the following
ADD http://www.magedu.com/wp-content/uploads/2017/09/logo.png /data
RUN /usr/local/bin/python-build --dir /app/src

说明:

  • 尽管任何指令都可注册成为触发器指令,但ONBUILD不能自我嵌套,且不会触发FROM和MAINTAINER指令
  • 使用 ONBUILD 指令的镜像,推荐在标签中注明,例如 ruby:1.9-onbuild
USER: 指定当前用户

指定运行容器的用户名或 UID,在后续dockerfile中的 RUN ,CMD和ENTRYPOINT指令时使用此用户

当服务不需要管理员权限时,可以通过该命令指定运行用户

这个用户必须是事先建立好的,否则无法切换

如果没有指定 USER,默认是 root 身份执行

USER <user>[:<group>] 
USER <UID>[:<GID>]

范例:

RUN groupadd -r mysql && useradd -r -g mysql mysql
USER mysql
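
范例: 以 busybox 为例验证 USER 之后的指令以普通用户身份运行(示例,用户名 app 为假定名称):

```dockerfile
FROM busybox
#busybox 中使用 addgroup/adduser 创建系统用户
RUN addgroup -S app && adduser -S -G app app
USER app
#容器以 app 用户运行,id 输出的 uid/gid 不再是 root
CMD ["id"]
```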
HEALTHCHECK: 健康检查

检查容器的健康性

HEALTHCHECK [选项] CMD <命令> #设置检查容器健康状况的命令,如果命令执行失败,则返回1,即unhealthy
HEALTHCHECK NONE #如果基础镜像有健康检查指令,使用这行可以屏蔽掉其健康检查指令


HEALTHCHECK 支持下列选项:
--interval=<间隔> #两次健康检查的间隔,默认为 30 秒
--timeout=<时长> #健康检查命令运行超时时间,如果超过这个时间,本次健康检查就被视为失败,默认 30 秒
--retries=<次数> #当连续失败指定次数后,则将容器状态视为 unhealthy,默认3次
--start-period=<DURATION> #容器启动后的初始化时间,此期间的检查失败不计入重试次数,默认 0s


#检查结果返回值:
0 #success the container is healthy and ready for use
1 #unhealthy the container is not working correctly
2 #reserved do not use this exit code

范例

FROM nginx
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
HEALTHCHECK --interval=5s --timeout=3s CMD curl -fs http://localhost/

范例:

#如果健康性检查成功,STATUS会显示 (healthy)
[root@rocky8 dockerfile]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
56060ebe7bca test:v2.0 "/docker-entrypoint.…" 12 seconds ago Up 11 seconds (healthy) 80/tcp happy_goldberg
2b7b33437449 nginx "/docker-entrypoint.…" 15 minutes ago Up 15 minutes 80/tcp quizzical_tharp

#如果健康性检查不通过,STATUS会显示(unhealthy)
[root@rocky8 dockerfile]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
56060ebe7bca test:v2.0 "/docker-entrypoint.…" 12 seconds ago Up 11 seconds (unhealthy) 80/tcp happy_goldberg

#查看健康状态
[root@rocky8 dockerfile]# docker inspect -f '{{.State.Health.Status}}' 56060ebe7bca
healthy

#以json格式显示
[root@rocky8 dockerfile]# docker inspect -f '{{json .State.Health}}' 56060ebe7bca


[root@rocky8 dockerfile]# docker inspect -f '{{json .State.Health}}' 56060ebe7bca | python3 -m json.tool
{
"Status": "healthy",
"FailingStreak": 0,
......
}
.dockerignore文件

官方文档: https://docs.docker.com/engine/reference/builder/#dockerignore-file

与.gitignore文件类似,.dockerignore 用于指定生成构建上下文时Docker客户端应忽略的文件和文件夹的匹配模式

.dockerignore 使用 Go 的文件路径规则 filepath.Match

参考链接: https://golang.org/pkg/path/filepath/#Match

完整的语法

#     #以#开头的行为注释
* #匹配任何非分隔符字符序列
? #匹配任何单个非分隔符
\\ #表示 \
** #匹配任意数量的目录(包括零)例如,**/*.go将排除在所有目录中以.go结尾的所有文件,包括构建上下文的根。
! #表示取反,可用于排除例外情况

范例:

#排除 test 目录下的所有文件
test/*

#排除 md 目录下的 xttblog.md 文件
md/xttblog.md

#排除 xttblog 目录下的所有 .md 的文件
xttblog/*.md

#排除以 xttblog 为前缀的文件和文件夹
xttblog?

#排除所有目录下的 .sql 文件
**/*.sql

范例:

#排除所有md文件,但不排除以README开头的md文件;README-secret.md在最后被再次排除
*.md
!README*.md
README-secret.md

#排除所有md文件,由于 !README*.md 写在最后,以README开头的md文件(包括README-secret.md)都不会被排除
*.md
README-secret.md
!README*.md
Dockerfile 构建过程和指令总结

Dockerfile 构建过程

  • 基础镜像运行一个容器
  • 执行一条指令,对容器做出修改
  • 执行类似docker commit的操作,提交一个新的中间镜像层(可以利用中间层镜像创建容器进行调试和排错)
  • 再基于刚提交的镜像运行一个新容器
  • 执行Dockerfile中的下一条指令,直至所有指令执行完毕

Dockerfile 指令总结


构建镜像docker build 命令

docker build命令使用Dockerfile文件创建镜像

docker build [OPTIONS] PATH | URL | -

说明:
PATH | URL | - #可以是本地路径,也可以是URL路径。若设置为 - ,则从标准输入获取Dockerfile的内容
-f, --file string #Dockerfile文件名,默认为 PATH/Dockerfile
--force-rm #总是删除中间层容器,创建镜像失败时,删除临时容器
--no-cache #不使用之前构建中创建的缓存
-q --quiet=false #不显示Dockerfile的RUN运行的输出结果
--rm=true #创建镜像成功时,删除临时容器
-t --tag list #设置注册名称、镜像名称、标签。格式为 <注册名称>/<镜像名称>:<标签>(标签默认为latest)

范例:

docker build .
docker build /usr/local/src/nginx
docker build -f /path/to/a/Dockerfile .
docker build -t shykes/myapp .
docker build -t shykes/myapp:1.0.2 -t shykes/myapp:latest .
docker build -t test/myapp .
docker build -t nginx:v1 /usr/local/src/nginx

查看镜像的构建历史:

docker history 镜像ID

范例:

[root@rocky8 dockerfile]# docker history nginx:latest 
IMAGE CREATED CREATED BY SIZE COMMENT
4cad75abc83d 2 months ago CMD ["nginx" "-g" "daemon off;"] 0B buildkit.dockerfile.v0
<missing> 2 months ago STOPSIGNAL SIGQUIT 0B buildkit.dockerfile.v0
<missing> 2 months ago EXPOSE map[80/tcp:{}] 0B buildkit.dockerfile.v0
<missing> 2 months ago ENTRYPOINT ["/docker-entrypoint.sh"] 0B buildkit.dockerfile.v0
<missing> 2 months ago COPY 30-tune-worker-processes.sh /docker-ent… 4.62kB buildkit.dockerfile.v0
<missing> 2 months ago COPY 20-envsubst-on-templates.sh /docker-ent… 3.02kB buildkit.dockerfile.v0
<missing> 2 months ago COPY 15-local-resolvers.envsh /docker-entryp… 389B buildkit.dockerfile.v0
<missing> 2 months ago COPY 10-listen-on-ipv6-by-default.sh /docker… 2.12kB buildkit.dockerfile.v0
<missing> 2 months ago COPY docker-entrypoint.sh / # buildkit 1.62kB buildkit.dockerfile.v0
<missing> 2 months ago RUN /bin/sh -c set -x && groupadd --syst… 117MB buildkit.dockerfile.v0
<missing> 2 months ago ENV DYNPKG_RELEASE=1~bookworm 0B buildkit.dockerfile.v0
<missing> 2 months ago ENV PKG_RELEASE=1~bookworm 0B buildkit.dockerfile.v0
<missing> 2 months ago ENV NJS_RELEASE=1~bookworm 0B buildkit.dockerfile.v0
<missing> 2 months ago ENV NJS_VERSION=0.8.9 0B buildkit.dockerfile.v0
<missing> 2 months ago ENV NGINX_VERSION=1.27.4 0B buildkit.dockerfile.v0
<missing> 2 months ago LABEL maintainer=NGINX Docker Maintainers <d… 0B buildkit.dockerfile.v0
<missing> 2 months ago # debian.sh --arch 'amd64' out/ 'bookworm' '… 74.8MB debuerreotype 0.15

范例: 利用Dockerfile构建基于CentOS的nginx镜像

[root@rocky8 dockerfile]# cat Dockerfile 
FROM centos:8
LABEL maintainer="wshuaiqing.cn"
RUN rm -rf /etc/yum.repos.d/* && curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
RUN yum install -y nginx && echo Nginx Website in Docker > /usr/share/nginx/html/index.html
EXPOSE 80
CMD [ "nginx","-g","daemon off;" ]

[root@rocky8 dockerfile]# docker build -t nginx_centos8:v1.26.3 .
[root@rocky8 dockerfile]# docker run -d -P --name nginx-web nginx_centos8:v1.26.3
4bb1b8095dde2fce2ced8c92cb69506be03ea256533eca8ffe92e2c0837a7e44

[root@rocky8 dockerfile]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4bb1b8095dde nginx_centos8:v1.26.3 "nginx -g 'daemon of…" 11 seconds ago Up 10 seconds 0.0.0.0:32779->80/tcp, :::32779->80/tcp nginx-web


[root@rocky8 dockerfile]# curl http://127.0.0.1:32779
Nginx Website in Docker

[root@rocky8 dockerfile]# curl -I http://127.0.0.1:32779
HTTP/1.1 200 OK
Server: nginx/1.14.1
Date: Wed, 09 Apr 2025 11:38:05 GMT
Content-Type: text/html
Content-Length: 24
Last-Modified: Wed, 09 Apr 2025 11:35:14 GMT
Connection: keep-alive
ETag: "67f65b72-18"
Accept-Ranges: bytes

范例: 刷新镜像缓存重新构建新镜像

[root@ubuntu1804 ~]# cat /data/Dockerfile
FROM centos
LABEL maintainer="wangxiaochun <root@wangxiaochun.com>"
RUN yum install -y nginx
RUN echo Nginx Website in Docker > /usr/share/nginx/html/index.html
#修改下面行,从下面行开始不再使用缓存
ENV REFRESH_DATA 2020-01-01
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

[root@ubuntu1804 ~]# docker build -t nginx_centos8.2:v1.14.1 /data/

#全部不利用缓存重新构建镜像
[root@ubuntu1804 ~]# docker build --no-cache -t nginx_centos8.2:v1.14.1 /data/

实战案例: Dockerfile 制作基于基础镜像的Base镜像

准备目录结构,下载镜像并初始化系统

#按照业务类型或系统类型等方式划分创建目录环境,方便后期镜像比较多的时候进行分类
[root@rocky8 dockerfile]# mkdir /data/dockerfile/{web/{nginx,apache,tomcat,jdk},system/{centos,ubuntu,alpine,debian}} -p
[root@rocky8 dockerfile]# tree /data/dockerfile/
/data/dockerfile/
├── system
│   ├── alpine
│   ├── centos
│   ├── debian
│   └── ubuntu
└── web
├── apache
├── jdk
├── nginx
└── tomcat

10 directories, 0 files

#下载基础镜像
[root@rocky8 ~]# docker pull centos:centos7.7.1908
[root@rocky8 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
centos centos7.7.1908 08d05d1d5859 5 years ago 204MB

先制作基于基础镜像的系统Base镜像

#先制作基于基础镜像的系统base镜像
[root@rocky8 ~]# cd /data/dockerfile/system/centos/

#创建Dockerfile,注意文件名也可以写作小写的dockerfile,但编辑器将无语法着色功能
[root@rocky8 centos]# vim Dockerfile
[root@rocky8 centos]# cat Dockerfile
FROM centos:centos7.7.1908
LABEL maintainer="wshuaiqing.cn"
RUN rm -rf /etc/yum.repos.d/* && curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo \
&& curl -o /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo \
&& yum install -y vim-enhanced tcpdump lrzsz tree telnet bash-completion net-tools wget curl bzip2 lsof zip unzip nfs-utils gcc make gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel zlib-devel vim \
&& yum clean all \
&& rm -rf /etc/localtime \
&& ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

[root@rocky8 centos]# docker build -t centos7-base:v1 .
[root@rocky8 centos]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
centos7-base v1 4215b0f03391 2 minutes ago 435MB
centos centos7.7.1908 08d05d1d5859 5 years ago 204MB

[root@rocky8 centos]# docker history centos7-base:v1
IMAGE CREATED CREATED BY SIZE COMMENT
4215b0f03391 2 minutes ago RUN /bin/sh -c rm -rf /etc/yum.repos.d/* && … 231MB buildkit.dockerfile.v0
<missing> 2 minutes ago LABEL maintainer=wshuaiqing.cn 0B buildkit.dockerfile.v0
<missing> 5 years ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 5 years ago /bin/sh -c #(nop) LABEL org.label-schema.sc… 0B
<missing> 5 years ago /bin/sh -c #(nop) ADD file:3e2a127b44ed01afc… 204MB

实战案例: Dockerfile 制作基于Base镜像的 nginx 镜像

在Dockerfile目录下准备编译安装的相关文件

[root@rocky8 ~]# mkdir /data/dockerfile/web/nginx/1.26
[root@rocky8 ~]# cd /data/dockerfile/web/nginx/1.26
[root@rocky8 1.26]# wget https://nginx.org/download/nginx-1.26.3.tar.gz
[root@rocky8 1.26]# mkdir app
[root@rocky8 1.26]# echo "Test page in app" > app/index.html
[root@rocky8 1.26]# tar zcf app.tar.gz app
[root@rocky8 1.26]# ls
app app.tar.gz nginx-1.26.3.tar.gz

在一台测试机进行编译安装同一版本的nginx 生成模版配置文件

[root@rocky8 src]# yum install -y vim-enhanced tcpdump lrzsz tree telnet bash-completion net-tools wget curl bzip2 lsof zip unzip nfs-utils gcc make gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel zlib-devel

[root@rocky8 nginx-1.26.3]# ./configure --prefix=/apps/nginx && make -j 4 && make install


#将配置文件复制到nginx镜像的服务器相应目录下
[root@rocky8 ~]# scp /apps/nginx/conf/nginx.conf /data/dockerfile/web/nginx/1.26/

[root@rocky8 ~]# vim /data/dockerfile/web/nginx/1.26/nginx.conf
user nginx;
worker_processes 1;
daemon off; #增加此行,前台运行nginx

编写Dockerfile文件

[root@rocky8 ~]# cd /data/dockerfile/web/nginx/1.26
[root@rocky8 1.26]# vim Dockerfile
[root@rocky8 1.26]# cat Dockerfile
FROM centos7-base:v1
LABEL maintainers="wshuaiqing.cn"
ADD nginx-1.26.3.tar.gz /usr/local/src
RUN cd /usr/local/src/nginx-1.26.3 \
&& ./configure --prefix=/apps/nginx \
&& make && make install \
&& rm -rf /usr/local/src/nginx* \
&& useradd -r nginx
COPY nginx.conf /apps/nginx/conf/
ADD app.tar.gz /apps/nginx/html/
EXPOSE 80 443
CMD ["/apps/nginx/sbin/nginx"]

生成nginx镜像

[root@rocky8 1.26]# ls
app app.tar.gz Dockerfile nginx-1.26.3.tar.gz nginx.conf

[root@rocky8 1.26]# docker build -t nginx-centos7:1.26.1 .
[root@rocky8 1.26]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx-centos7 1.26.1 e3157f879258 10 seconds ago 446MB
centos7-base v1 4215b0f03391 37 minutes ago 435MB
centos centos7.7.1908 08d05d1d5859 5 years ago 204MB

生成的容器测试镜像

[root@rocky8 ~]# docker run -d -p 80:80 nginx-centos7:1.26.1 
7afa7df01d49a11f2ba22582ed5e90df89c0de35e6aff907cf6513ef072eb17b

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7afa7df01d49 nginx-centos7:1.26.1 "/apps/nginx/sbin/ng…" 3 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 443/tcp focused_golick

[root@rocky8 ~]# docker exec -it 7afa7df01d49 bash
[root@7afa7df01d49 /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 20604 2404 ? Ss 20:33 0:00 nginx
nginx 7 0.0 0.0 21048 2300 ? S 20:33 0:00 nginx
root 14 0.5 0.0 12368 3608 pts/0 Ss 20:33 0:00 bash
root 34 0.0 0.0 51772 3536 pts/0 R+ 20:33 0:00 ps au
[root@7afa7df01d49 /]# exit
exit

[root@rocky8 ~]# curl 127.0.0.1/app/
Test page in app

实战案例: Dockerfile 直接制作 nginx 镜像

在Dockerfile目录下准备编译安装的相关文件

[root@rocky8 ~]# cd /data/dockerfile/web/nginx/1.26/

[root@rocky8 1.26]# wget https://nginx.org/download/nginx-1.26.3.tar.gz
[root@rocky8 1.26]# ls
Dockerfile index.html nginx-1.26.3.tar.gz nginx.conf

[root@rocky8 1.26]# vim nginx.conf
user nginx;
worker_processes 1;
#daemon off;

编写Dockerfile文件

[root@rocky8 1.26]# cat Dockerfile 
FROM centos:centos7.7.1908
LABEL maintainers="wshuaiqing.cn"
RUN rm -rf /etc/yum.repos.d/* && curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo \
&& curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
RUN yum install -y vim make gcc gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel \
&& useradd -r -s /sbin/nologin nginx \
&& yum clean all
ADD nginx-1.26.3.tar.gz /usr/local/src
RUN cd /usr/local/src/nginx-1.26.3 \
&& ./configure --prefix=/apps/nginx \
&& make && make install \
&& rm -rf /usr/local/src/nginx*
ADD nginx.conf /apps/nginx/conf/nginx.conf
COPY index.html /apps/nginx/html/index.html
RUN ln -s /apps/nginx/sbin/nginx /usr/sbin/nginx
EXPOSE 80 443
CMD ["nginx","-g","daemon off;"]

生成nginx镜像

[root@rocky8 1.26]# vim build.sh
[root@rocky8 1.26]# cat build.sh
#!/bin/bash
docker build -t nginx-centos7:1.26.3-v2 .

[root@rocky8 1.26]# chmod +x build.sh
[root@rocky8 1.26]# ./build.sh
[root@rocky8 1.26]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx-centos7 1.26.3-v2 08c66ca1868e 13 seconds ago 405MB
nginx-centos7 1.26.1 e3157f879258 4 hours ago 446MB
centos7-base v1 4215b0f03391 4 hours ago 435MB
centos centos7.7.1908 08d05d1d5859 5 years ago 204MB

生成容器测试镜像

[root@rocky8 1.26]# docker run -d -p 80:80 nginx-centos7:1.26.3-v2 
600c8cb0a99fb7d35a0074f42a3cf42ea664b6d9eabbd58b54e7cc449eb4cb21

[root@rocky8 1.26]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
600c8cb0a99f nginx-centos7:1.26.3-v2 "nginx -g 'daemon of…" 4 seconds ago Up 3 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 443/tcp affectionate_hypatia

[root@rocky8 1.26]# docker exec -it 600c8cb0a99f bash
[root@600c8cb0a99f /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 20604 2432 ? Ss 16:04 0:00 nginx
nginx 7 0.0 0.0 21048 2428 ? S 16:04 0:00 nginx
root 8 0.2 0.0 11844 3064 pts/0 Ss 16:05 0:00 bash
root 24 0.0 0.0 51772 3500 pts/0 R+ 16:05 0:00 ps au

[root@600c8cb0a99f /]# exit
exit

实战案例: 多阶段构建

[root@rocky8 go-hello]# cat Dockerfile 
FROM golang:1.18-alpine
COPY hello.go /opt
WORKDIR /opt
RUN go build hello.go
CMD "./hello"


[root@rocky8 go-hello]# cat hello.go
package main

import "fmt"

func main() {
fmt.Println("hello,world")
}


[root@rocky8 go-hello]# cat build.sh
#!/bin/bash

docker build -t go-hello:$1 .


[root@rocky8 go-hello]# bash build.sh v1.0
[root@rocky8 go-hello]# docker run --name hello go-hello:v1.0
hello,world


[root@rocky8 go-hello]# cp Dockerfile Dockerfile-v1.0
[root@rocky8 go-hello]# vim Dockerfile
[root@rocky8 go-hello]# cat Dockerfile
FROM golang:1.18-alpine as builder
COPY hello.go /opt
WORKDIR /opt
RUN go build hello.go

FROM alpine:3.15.0
COPY --from=builder /opt/hello /opt/hello
CMD ["/opt/hello"]


[root@rocky8 go-hello]# bash build.sh v2.0
[root@rocky8 go-hello]# docker run --name hello2 go-hello:v2.0
hello,world

[root@rocky8 go-hello]# docker images go-hello
REPOSITORY TAG IMAGE ID CREATED SIZE
go-hello v2.0 f94585d3013e About a minute ago 7.35MB
go-hello v1.0 e127f2c277ec 4 minutes ago 331MB

生产案例: 制作自定义tomcat业务镜像

[root@rocky8 ~]# cd /data/dockerfile/system/centos/
[root@rocky8 centos]# cat Dockerfile
FROM centos:centos7.7.1908
LABEL maintainer="wshuaiqing.cn"
RUN rm -rf /etc/yum.repos.d/* && curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo \
&& curl -o /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo \
&& yum install -y vim-enhanced tcpdump lrzsz tree telnet bash-completion net-tools wget curl bzip2 lsof zip unzip nfs-utils gcc make gcc-c++ glibc glibc-devel pcre pcre-devel openssl openssl-devel systemd-devel zlib-devel vim \
&& yum clean all \
&& rm -rf /etc/localtime \
&& ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# 添加系统用户
RUN groupadd www -g 2019 && useradd www -u 2019 -g www

[root@rocky8 centos]# cat build.sh
#!/bin/bash
docker build -t centos7-base:v2 .

[root@rocky8 centos]# bash build.sh
[root@rocky8 centos]# docker images centos7-base:v2
REPOSITORY TAG IMAGE ID CREATED SIZE
centos7-base v2 356779b33302 About a minute ago 435MB

构建JDK 镜像

上传JDK压缩包和profile文件上传到Dockerfile当前目录
#将CentOS7主机上的/etc/profile文件传到 Dockerfile 所在目录下
[root@rocky8 ~]# scp 192.168.1.100:/etc/profile /data/dockerfile/web/jdk/

#修改profile文件,加下面四行相关变量
[root@rocky8 ~]# vim /data/dockerfile/web/jdk/profile
[root@rocky8 ~]# tail -5 /data/dockerfile/web/jdk/profile

export JAVA_HOME=/usr/local/jdk
export TOMCAT_HOME=/apps/tomcat
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$TOMCAT_HOME/bin:$PATH
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$JAVA_HOME/lib/tools.jar

#下载jdk文件传到Dockfile目录下
#https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[root@rocky8 ~]# tree /data/dockerfile/web/jdk/
/data/dockerfile/web/jdk/
├── jdk-8u441-linux-x64.tar.gz
└── profile

0 directories, 2 files
准备Dockerfile文件
[root@rocky8 ~]# vim /data/dockerfile/web/jdk/Dockerfile
[root@rocky8 ~]# cat /data/dockerfile/web/jdk/Dockerfile
FROM centos7-base:v2
LABEL maintainer="wshuaiqing.cn"
ADD jdk-8u441-linux-x64.tar.gz /usr/local/src/
RUN ln -s /usr/local/src/jdk1.8.0_441 /usr/local/jdk
ADD profile /etc/profile
ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin

执行构建脚本制作镜像

[root@rocky8 ~]# vim /data/dockerfile/web/jdk/build.sh
[root@rocky8 ~]# cat /data/dockerfile/web/jdk/build.sh
#!/bin/bash
docker build -t centos7-jdk:8u441 .


[root@rocky8 ~]# tree /data/dockerfile/web/jdk/
/data/dockerfile/web/jdk/
├── build.sh
├── Dockerfile
├── jdk-8u441-linux-x64.tar.gz
└── profile

0 directories, 4 files


[root@rocky8 ~]# cd /data/dockerfile/web/jdk/
[root@rocky8 jdk]# bash build.sh
[root@rocky8 jdk]# docker images centos7-jdk:8u441
REPOSITORY TAG IMAGE ID CREATED SIZE
centos7-jdk 8u441 050430ff2ca1 23 seconds ago 805MB

从镜像启动容器测试

[root@rocky8 jdk]# docker run -it --rm centos7-jdk:8u441 bash
[root@fabb4583c19b /]# java -version
java version "1.8.0_441"
Java(TM) SE Runtime Environment (build 1.8.0_441-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.441-b07, mixed mode)

从JDK镜像构建tomcat 8 Base镜像

基于自定义的 JDK 基础镜像,构建出通用的自定义 Tomcat 基础镜像,此镜像后期会被多个业务的多个服务共同引用(相同的JDK 版本和Tomcat 版本)

上传tomcat 压缩包
[root@rocky8 ~]# mkdir /data/dockerfile/web/tomcat/tomcat-base-8.5.50
[root@rocky8 ~]# cd /data/dockerfile/web/tomcat/tomcat-base-8.5.50
[root@rocky8 tomcat-base-8.5.50]# wget https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.50/bin/apache-tomcat-8.5.50.tar.gz
编辑Dockerfile
[root@rocky8 tomcat-base-8.5.50]# vim Dockerfile
[root@rocky8 tomcat-base-8.5.50]# cat Dockerfile
FROM centos7-jdk:8u441
LABEL maintainer="wshuaiqing.cn"
#env
ENV TZ "Asia/Shanghai"
ENV LANG en_US.UTF-8
ENV TERM xterm
ENV TOMCAT_MAJOR_VERSION 8
ENV TOMCAT_MINOR_VERSION 8.5.50
ENV CATALINA_HOME /apps/tomcat
ENV APP_DIR ${CATALINA_HOME}/webapps
RUN mkdir /apps
ADD apache-tomcat-8.5.50.tar.gz /apps
RUN ln -s /apps/apache-tomcat-8.5.50 /apps/tomcat
通过脚本构建tomcat 基础镜像
[root@rocky8 tomcat-base-8.5.50]# vim build.sh
[root@rocky8 tomcat-base-8.5.50]# cat build.sh
#!/bin/bash
docker build -t tomcat-base:v8.5.50 .

[root@rocky8 tomcat-base-8.5.50]# tree
.
├── apache-tomcat-8.5.50.tar.gz
├── build.sh
└── Dockerfile

0 directories, 3 files

[root@rocky8 tomcat-base-8.5.50]# bash build.sh
[root@rocky8 tomcat-base-8.5.50]# docker images tomcat-base:v8.5.50
REPOSITORY TAG IMAGE ID CREATED SIZE
tomcat-base v8.5.50 49991f088aa0 About a minute ago 819MB
验证镜像构建完成
[root@rocky8 tomcat-base-8.5.50]# docker run -it --rm -p 8080:8080 tomcat-base:v8.5.50 bash

[root@c42f13fbb34d /]# /apps/tomcat/bin/catalina.sh start
Using CATALINA_BASE: /apps/tomcat
Using CATALINA_HOME: /apps/tomcat
Using CATALINA_TMPDIR: /apps/tomcat/temp
Using JRE_HOME: /usr/local/jdk/jre
Using CLASSPATH: /apps/tomcat/bin/bootstrap.jar:/apps/tomcat/bin/tomcat-juli.jar
Tomcat started.

[root@c42f13fbb34d /]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 0 127.0.0.1:8005 :::* LISTEN
tcp6 0 0 :::8009 :::* LISTEN
tcp6 0 0 :::8080 :::* LISTEN

构建业务镜像1

创建tomcat-app1和tomcat-app2两个目录,代表不同的两个基于tomcat的业务。

准备tomcat的配置文件
[root@rocky8 ~]# mkdir -p /data/dockerfile/web/tomcat/tomcat-app{1,2}
[root@rocky8 ~]# tree /data/dockerfile/web/tomcat/
/data/dockerfile/web/tomcat/
├── tomcat-app1
├── tomcat-app2
└── tomcat-base-8.5.50
├── apache-tomcat-8.5.50.tar.gz
├── build.sh
└── Dockerfile

3 directories, 3 files

#上传和修改server.xml
[root@rocky8 ~]# cd /data/dockerfile/web/tomcat/tomcat-base-8.5.50/
[root@rocky8 tomcat-base-8.5.50]# tar xf apache-tomcat-8.5.50.tar.gz
[root@rocky8 tomcat-base-8.5.50]# cp apache-tomcat-8.5.50/conf/server.xml /data/dockerfile/web/tomcat/tomcat-app1/
[root@rocky8 tomcat-base-8.5.50]# cd /data/dockerfile/web/tomcat/tomcat-app1/
[root@rocky8 tomcat-app1]# vim server.xml
......
<Host name="localhost" appBase="/data/tomcat/webapps"
unpackWARs="true" autoDeploy="true">
......


准备自定义页面
[root@rocky8 tomcat-app1]# mkdir app
[root@rocky8 tomcat-app1]# echo "Tomcat Page in app1" > app/index.jsp
[root@rocky8 tomcat-app1]# tar zcf app.tar.gz app
准备容器启动执行脚本
[root@rocky8 tomcat-app1]# vim run_tomcat.sh
[root@rocky8 tomcat-app1]# cat run_tomcat.sh
#!/bin/bash
echo "nameserver 180.76.76.76" > /etc/resolv.conf
su - www -c "/apps/tomcat/bin/catalina.sh start"
su - www -c "tail -f /etc/hosts"


[root@rocky8 tomcat-app1]# chmod +x run_tomcat.sh
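容器的生命周期跟随 CMD 启动的 1 号进程:catalina.sh start 把 Tomcat 放到后台后会立即返回,所以脚本末尾用 tail -f 占住前台,防止容器退出。可以在宿主机上用 timeout 验证 tail -f 确实会一直阻塞(示意):

```shell
# tail -f 不会自行退出;用 timeout 在 1 秒后强制终止它,以验证其前台阻塞特性
rc=0
timeout 1 tail -f /etc/hosts >/dev/null || rc=$?
echo "exit code: $rc"    # GNU timeout 在超时杀死进程后返回 124
```

正因为 tail -f 永不退出,它才适合充当容器里"兜底"的前台进程。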
准备Dockerfile
[root@rocky8 tomcat-app1]# vim Dockerfile
[root@rocky8 tomcat-app1]# cat Dockerfile
FROM tomcat-base:v8.5.50
ADD server.xml /apps/tomcat/conf/server.xml
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD app.tar.gz /data/tomcat/webapps/
RUN chown -R www.www /apps/ /data/tomcat/
EXPOSE 8080 8009
CMD ["/apps/tomcat/bin/run_tomcat.sh"]
执行构建脚本制作镜像
[root@rocky8 tomcat-app1]# vim build.sh
[root@rocky8 tomcat-app1]# cat build.sh
#!/bin/bash
docker build -t tomcat-web:app1 .
[root@rocky8 tomcat-app1]# tree
.
├── app
│   └── index.jsp
├── app.tar.gz
├── build.sh
├── Dockerfile
├── run_tomcat.sh
└── server.xml

1 directory, 6 files

[root@rocky8 tomcat-app1]# bash build.sh
[root@rocky8 tomcat-app1]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
tomcat-web app1 ec2c46689170 11 seconds ago 834MB
tomcat-base v8.5.50 49991f088aa0 3 hours ago 819MB
tomcat-bash v8.5.50 49991f088aa0 3 hours ago 819MB
centos7-jdk 8u441 050430ff2ca1 3 hours ago 805MB
centos7-base v2 356779b33302 4 hours ago 435MB
go-hello v2.0 f94585d3013e 4 hours ago 7.35MB
go-hello v1.0 e127f2c277ec 4 hours ago 331MB
nginx-centos7 1.26.3-v2 08c66ca1868e 4 hours ago 405MB
nginx-centos7 1.26.1 e3157f879258 7 hours ago 446MB
centos7-base v1 4215b0f03391 8 hours ago 435MB
centos centos7.7.1908 08d05d1d5859 5 years ago 204MB
从镜像启动测试容器
[root@rocky8 tomcat-app1]# docker run -d -p 8080:8080 tomcat-web:app1 
c45493cae1a6bfa4999b55b37576cfa294dcbdad27a821c19c24f1561c6aa80a
访问测试
[root@rocky8 tomcat-app1]# docker run -d -p 8080:8080 tomcat-web:app1 
a21fcd34f711bc27379fea90137805dfc8b79c55123b1919c1e7b6154d4f52a4

[root@rocky8 tomcat-app1]# curl 127.0.0.1:8080/app/
Tomcat Page in app1

[root@rocky8 tomcat-app1]# docker exec -it a21fcd bash
[root@a21fcd34f711 /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 13308 3108 ? Ss 04:02 0:00 /bin/
www 26 5.3 2.3 5460176 136224 ? Sl 04:02 0:02 /usr/
root 28 0.0 0.0 83600 4528 ? S 04:02 0:00 su -
www 29 0.0 0.0 4420 684 ? Ss 04:02 0:00 tail
root 91 0.4 0.0 13972 4068 pts/0 Ss 04:03 0:00 bash
root 111 0.0 0.0 53372 3940 pts/0 R+ 04:03 0:00 ps au

[root@a21fcd34f711 /]# vim /data/tomcat/webapps/app/index.jsp
[root@a21fcd34f711 /]# cat /data/tomcat/webapps/app/index.jsp
Tomcat Page in app1 v2

[root@a21fcd34f711 /]# /apps/tomcat/bin/catalina.sh stop
Using CATALINA_BASE: /apps/tomcat
Using CATALINA_HOME: /apps/tomcat
Using CATALINA_TMPDIR: /apps/tomcat/temp
Using JRE_HOME: /usr/local/jdk/jre
Using CLASSPATH: /apps/tomcat/bin/bootstrap.jar:/apps/tomcat/bin/tomcat-juli.jar

[root@a21fcd34f711 /]# /apps/tomcat/bin/catalina.sh start
Using CATALINA_BASE: /apps/tomcat
Using CATALINA_HOME: /apps/tomcat
Using CATALINA_TMPDIR: /apps/tomcat/temp
Using JRE_HOME: /usr/local/jdk/jre
Using CLASSPATH: /apps/tomcat/bin/bootstrap.jar:/apps/tomcat/bin/tomcat-juli.jar
Tomcat started.

[root@a21fcd34f711 /]# exit
exit

[root@rocky8 tomcat-app1]# curl 127.0.0.1:8080/app/
Tomcat Page in app1 v2

构建业务镜像2

准备自定义页面和其它数据
[root@rocky8 tomcat]# pwd
/data/dockerfile/web/tomcat

[root@rocky8 tomcat]# cp -a tomcat-app1/* tomcat-app2/
[root@rocky8 tomcat]# tree tomcat-app2/
tomcat-app2/
├── app
│   └── index.jsp
├── app.tar.gz
├── build.sh
├── Dockerfile
├── run_tomcat.sh
└── server.xml

1 directory, 6 files

[root@rocky8 tomcat]# cd tomcat-app2/
[root@rocky8 tomcat-app2]# vim app/index.jsp
[root@rocky8 tomcat-app2]# cat app/index.jsp
Tomcat Page in app2

[root@rocky8 tomcat-app2]# rm -rf app.tar.gz
[root@rocky8 tomcat-app2]# tar zcf app.tar.gz app
准备容器启动脚本run_tomcat.sh

和业务1一样不变

准备Dockerfile

和业务1一样不变

执行构建脚本制作镜像
[root@rocky8 tomcat-app2]# vim build.sh 
[root@rocky8 tomcat-app2]# cat build.sh
#!/bin/bash
docker build -t tomcat-web:app2 .


[root@rocky8 tomcat-app2]# bash build.sh
[root@rocky8 tomcat-app2]# docker images tomcat-web:app2
REPOSITORY TAG IMAGE ID CREATED SIZE
tomcat-web app2 5f2fe69f2115 23 seconds ago 834MB
从镜像启动容器测试
[root@rocky8 tomcat-app2]# docker run -d -p 8082:8080 tomcat-web:app2 
65714811a3c2fff043aa44948ad16efc350a35da4192edb54c2a9200283db3e5
访问测试
[root@rocky8 tomcat-app2]# curl 127.0.0.1:8082/app/
Tomcat Page in app2

生产案例: 构建haproxy镜像

准备相关文件

#准备haproxy源码文件
[root@rocky8 ~]# mkdir -p /data/dockerfile/web/haproxy/2.1.2-centos7
[root@rocky8 ~]# cd /data/dockerfile/web/haproxy/2.1.2-centos7
[root@rocky8 2.1.2-centos7]# wget http://www.haproxy.org/download/2.1/src/haproxy-2.1.2.tar.gz

#准备haproxy启动脚本
[root@rocky8 2.1.2-centos7]# vim run_haproxy.sh
[root@rocky8 2.1.2-centos7]# cat run_haproxy.sh
#!/bin/bash
haproxy -f /etc/haproxy/haproxy.cfg
tail -f /etc/hosts

[root@rocky8 2.1.2-centos7]# chmod +x run_haproxy.sh

准备haproxy配置文件

[root@rocky8 2.1.2-centos7]# vim haproxy.cfg
[root@rocky8 2.1.2-centos7]# cat haproxy.cfg
global
chroot /apps/haproxy
#stats socket /var/lib/haproxy/haproxy.sock mode 600 level admin
uid 99
gid 99
daemon
nbproc 1
pidfile /apps/haproxy/run/haproxy.pid
log 127.0.0.1 local3 info
defaults
option http-keep-alive
option forwardfor
mode http
timeout connect 300000ms
timeout client 300000ms
timeout server 300000ms

listen stats
mode http
bind 0.0.0.0:9999
stats enable
log global
stats uri /haproxy-status
stats auth haadmin:123456

listen web_port
bind 0.0.0.0:80
mode http
log global
balance roundrobin
server web1 192.168.1.12:8080 check inter 3000 fall 2 rise 5
server web2 192.168.1.13:8080 check inter 3000 fall 2 rise 5

准备Dockerfile

[root@rocky8 2.1.2-centos7]# vim Dockerfile
[root@rocky8 2.1.2-centos7]# cat Dockerfile
FROM centos7-base:v1
LABEL maintainer="wshuaiqing.cn"
ADD haproxy-2.1.2.tar.gz /usr/local/src/
RUN cd /usr/local/src/haproxy-2.1.2 \
&& make ARCH=x86_64 TARGET=linux-glibc USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1 USE_CPU_AFFINITY=1 PREFIX=/apps/haproxy \
&& make install PREFIX=/apps/haproxy \
&& ln -s /apps/haproxy/sbin/haproxy /usr/sbin/ \
&& mkdir /apps/haproxy/run \
&& rm -rf /usr/local/src/haproxy*
ADD haproxy.cfg /etc/haproxy/
ADD run_haproxy.sh /usr/bin
EXPOSE 80 9999
CMD ["run_haproxy.sh"]

准备构建脚本构建haproxy镜像

[root@rocky8 2.1.2-centos7]# vim build.sh
[root@rocky8 2.1.2-centos7]# cat build.sh
#!/bin/bash
docker build -t haproxy-centos7:2.1.2 .

[root@rocky8 2.1.2-centos7]# ls
build.sh Dockerfile haproxy-2.1.2.tar.gz haproxy.cfg run_haproxy.sh

[root@rocky8 2.1.2-centos7]# bash build.sh
[root@rocky8 2.1.2-centos7]# docker images haproxy-centos7:2.1.2
REPOSITORY TAG IMAGE ID CREATED SIZE
haproxy-centos7 2.1.2 02fcdab2f4db 10 minutes ago 460MB

从镜像启动容器

[root@rocky8 2.1.2-centos7]# docker run -d -p 80:80 -p 9999:9999 haproxy-centos7:2.1.2 
7032ef4a9005920386b9cc7e6ccd053e10b46f38aef72c42861fc98b2b594e37

[root@rocky8 2.1.2-centos7]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7032ef4a9005 haproxy-centos7:2.1.2 "run_haproxy.sh" 13 seconds ago Up 13 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:9999->9999/tcp, :::9999->9999/tcp affectionate_shannon

在另外两台主机启动容器

#导出本地相关镜像
[root@rocky8 ~]# docker save centos7-base:v2 -o /data/centos7-base.tar.gz
[root@rocky8 ~]# docker save centos7-jdk:8u441 -o /data/centos7-jdk.tar.gz
[root@rocky8 ~]# docker save tomcat-base:v8.5.50 -o /data/tomcat-base.tar.gz
[root@rocky8 ~]# docker save tomcat-web:app1 -o /data/tomcat-web-app1.tar.gz
[root@rocky8 ~]# docker save tomcat-web:app2 -o /data/tomcat-web-app2.tar.gz
[root@rocky8 ~]# ls /data/
centos7-base.tar.gz dockerfile tomcat-web-app1.tar.gz
centos7-jdk.tar.gz tomcat-base.tar.gz tomcat-web-app2.tar.gz

#将镜像复制到另外两台主机
[root@rocky8 ~]# scp /data/*.gz 192.168.1.12:/data/
[root@rocky8 ~]# scp /data/*.gz 192.168.1.13:/data/

#在另外两台主机上执行下面操作导入镜像
[root@rocky8 ~]# ls /data/
centos7-base.tar.gz tomcat-base.tar.gz tomcat-web-app2.tar.gz
centos7-jdk.tar.gz tomcat-web-app1.tar.gz

[root@rocky8 ~]# for i in /data/*.gz;do docker load -i $i;done

#在另外两台主机上创建相关容器
[root@rocky8 ~]# docker run -d -p 8080:8080 tomcat-web:app1
1ce807b8bde21115cacec3a3bede4233a584666b587b62779a78ec19107a53b0

[root@rocky8 ~]# docker run -d -p 8080:8080 tomcat-web:app2
8d37533a827e502df3f5725d5e92ce6729d96a38426aae8fa44eccccba951314

web访问验证

[root@rocky8 ~]# curl 192.168.1.11/app/
Tomcat Page in app1
[root@rocky8 ~]# curl 192.168.1.11/app/
Tomcat Page in app2
[root@rocky8 ~]# curl 192.168.1.11/app/
Tomcat Page in app1
[root@rocky8 ~]# curl 192.168.1.11/app/
Tomcat Page in app2

[root@rocky8 ~]# docker exec -it 01e717 bash
[root@01e717049173 /]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:9999 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN

[root@01e717049173 /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 11704 2664 ? Ss 05:30 0:00 /bin/bash /usr/bin/run_haproxy.sh
nobody 8 0.0 1.2 492416 73004 ? Ssl 05:30 0:00 haproxy -f /etc/haproxy/haproxy.cfg
root 9 0.0 0.0 4420 760 ? S 05:30 0:00 tail -f /etc/hosts
root 14 0.0 0.0 12368 3596 pts/0 Ss 05:31 0:00 bash
root 36 0.0 0.0 51772 3564 pts/0 R+ 05:32 0:00 ps aux


#在第二台主机上停止容器
[root@rocky8 ~]# docker stop 8d37533a827e
8d37533a827e

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8d37533a827e tomcat-web:app2 "/apps/tomcat/bin/ru…" 11 minutes ago Exited (137) 3 seconds ago eager_hermann

#观察状态页,发现后端服务器down


#在第二台主机上恢复启动容器
[root@rocky8 ~]# docker start 8d37533a827e
8d37533a827e

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8d37533a827e tomcat-web:app2 "/apps/tomcat/bin/ru…" 12 minutes ago Up 3 seconds 8009/tcp, 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp eager_hermann

#再次观察状态页,发现后端服务器上线


生产案例: 基于 Alpine 基础镜像制作 Nginx 镜像

制作 Alpine 的自定义系统镜像

#下载alpine镜像,打新标签
[root@rocky8 ~]# docker pull alpine
[root@rocky8 ~]# docker history alpine:latest
IMAGE CREATED CREATED BY SIZE COMMENT
aded1e1a5b37 7 weeks ago CMD ["/bin/sh"] 0B buildkit.dockerfile.v0
<missing> 7 weeks ago ADD alpine-minirootfs-3.21.3-x86_64.tar.gz /… 7.83MB buildkit.dockerfile.v0

[root@rocky8 ~]# docker tag alpine:latest alpine:3.21
[root@rocky8 ~]# docker images alpine
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine 3.21 aded1e1a5b37 7 weeks ago 7.83MB
alpine latest aded1e1a5b37 7 weeks ago 7.83MB

#准备相关文件
[root@rocky8 ~]# cd /data/dockerfile/system/alpine/
[root@rocky8 alpine]# vim repositories
[root@rocky8 alpine]# cat repositories
http://mirrors.aliyun.com/alpine/v3.21/main
http://mirrors.aliyun.com/alpine/v3.21/community


#准备Dockerfile文件
[root@rocky8 alpine]# vim Dockerfile
[root@rocky8 alpine]# cat Dockerfile
FROM alpine:3.21
LABEL maintainer="wshuaiqing.cn"
COPY repositories /etc/apk/repositories
RUN apk update && apk --no-cache add \
iotop net-tools pstree wget zip unzip iproute2 \
gcc libgcc musl-dev make \
curl-dev libevent libevent-dev \
libnfs zlib-dev pcre-dev pcre pcre2


#准备构建脚本
[root@rocky8 alpine]# vim build.sh
[root@rocky8 alpine]# cat build.sh
#!/bin/bash
docker build -t alpine-base:3.21 .


[root@rocky8 alpine]# bash build.sh
[root@rocky8 alpine]# docker images alpine*
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine-base 3.21 4cbc214985ab 40 seconds ago 238MB
alpine 3.21 aded1e1a5b37 7 weeks ago 7.83MB
alpine latest aded1e1a5b37 7 weeks ago 7.83MB

制作基于 Alpine 自定义镜像的 Nginx 镜像

#准备相关文件
[root@rocky8 alpine]# mkdir /data/dockerfile/web/nginx/1.16.1-alpine/
[root@rocky8 alpine]# cd /data/dockerfile/web/nginx/1.16.1-alpine/
[root@rocky8 1.16.1-alpine]# wget http://nginx.org/download/nginx-1.16.1.tar.gz
[root@rocky8 1.16.1-alpine]# echo Test Page based nginx-alpine > index.html
[root@rocky8 1.16.1-alpine]# tar xf nginx-1.16.1.tar.gz
[root@rocky8 1.16.1-alpine]# cp nginx-1.16.1/conf/nginx.conf .
[root@rocky8 1.16.1-alpine]# rm -rf nginx-1.16.1
[root@rocky8 1.16.1-alpine]# vim nginx.conf
user nginx;
worker_processes 1;
daemon off;

#编写Dockerfile文件
[root@rocky8 1.16.1-alpine]# vim Dockerfile
[root@rocky8 1.16.1-alpine]# cat Dockerfile
FROM alpine-base:3.21
LABEL maintainer="wshuaiqing.cn"
ADD nginx-1.16.1.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.16.1 && ./configure --prefix=/apps/nginx && make \
&& make install && ln -s /apps/nginx/sbin/nginx /usr/bin/
RUN addgroup -g 2019 -S nginx && adduser -s /sbin/nologin -S -D -u 2019 -G nginx nginx
COPY nginx.conf /apps/nginx/conf/nginx.conf
ADD index.html /apps/nginx/html/index.html
RUN chown -R nginx:nginx /apps/nginx/
EXPOSE 80 443
CMD ["nginx"]

#构建镜像
[root@rocky8 1.16.1-alpine]# cat build.sh
#!/bin/bash
docker build -t nginx-alpine:1.16.1 .

[root@rocky8 1.16.1-alpine]# bash build.sh
[root@rocky8 1.16.1-alpine]# docker images nginx-alpine
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx-alpine 1.16.1 e750a0c3b172 27 seconds ago 262MB

#生成容器测试镜像
[root@rocky8 1.16.1-alpine]# docker run -d -p 80:80 nginx-alpine:1.16.1
0338dec04d3327718af273dd4ae29497f8e714da76c7c0a5eb052be513bf6968
[root@rocky8 1.16.1-alpine]# curl 127.0.0.1
Test Page based nginx-alpine

生产案例: 基于 Ubuntu 基础镜像制作 Nginx 镜像

#下载ubuntu1804镜像
[root@rocky8 ~]# docker pull ubuntu:18.04
[root@rocky8 ~]# docker images ubuntu*
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu 18.04 f9a80a55f492 22 months ago 63.2MB

#准备相关文件
[root@rocky8 ~]# mkdir /data/dockerfile/web/nginx/1.16.1-ubuntu1804
[root@rocky8 ~]# cd /data/dockerfile/web/nginx/1.16.1-ubuntu1804
[root@rocky8 1.16.1-ubuntu1804]# vim sources.list
[root@rocky8 1.16.1-ubuntu1804]# cat sources.list
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse

[root@rocky8 1.16.1-ubuntu1804]# wget http://nginx.org/download/nginx-1.16.1.tar.gz
[root@rocky8 1.16.1-ubuntu1804]# cp ../1.16.1-alpine/nginx.conf .
[root@rocky8 1.16.1-ubuntu1804]# cat nginx.conf
user nginx;
worker_processes 1;
daemon off;

[root@rocky8 1.16.1-ubuntu1804]# echo Test Page based nginx-ubuntu1804 > index.html

#编写Dockerfile文件
[root@rocky8 1.16.1-ubuntu1804]# vim Dockerfile
[root@rocky8 1.16.1-ubuntu1804]# cat Dockerfile
FROM ubuntu:18.04
LABEL maintainer="wshuaiqing.cn"
COPY sources.list /etc/apt/sources.list
RUN apt update && apt install -y \
ca-certificates lrzsz tree unzip zip \
gcc make \
openssh-server nfs-kernel-server nfs-common \
openssl libssl-dev zlib1g-dev \
libpcre3 libpcre3-dev
ADD nginx-1.16.1.tar.gz /usr/local/src
RUN cd /usr/local/src/nginx-1.16.1 && ./configure --prefix=/apps/nginx && make \
&& make install && ln -s /apps/nginx/sbin/nginx /usr/bin && rm -rf /usr/local/src/nginx-1.16.1*
ADD nginx.conf /apps/nginx/conf/nginx.conf
ADD index.html /apps/nginx/html/index.html
RUN groupadd -g 2019 nginx && useradd -g nginx -s /usr/sbin/nologin -u 2019 nginx \
&& chown -R nginx.nginx /apps/nginx
EXPOSE 80 443
CMD ["nginx"]


#构建镜像
[root@rocky8 1.16.1-ubuntu1804]# vim build.sh
[root@rocky8 1.16.1-ubuntu1804]# cat build.sh
#!/bin/bash
docker build -t nginx-ubuntu1804:1.16.1 .

[root@rocky8 1.16.1-ubuntu1804]# ls
build.sh index.html nginx.conf
Dockerfile nginx-1.16.1.tar.gz sources.list

[root@rocky8 1.16.1-ubuntu1804]# bash build.sh
[root@rocky8 1.16.1-ubuntu1804]# docker images nginx-ubuntu1804
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx-ubuntu1804 1.16.1 7d7aa6c8e4ef 3 minutes ago 394MB

#启动容器测试镜像
[root@rocky8 1.16.1-ubuntu1804]# docker run -d -p 80:80 nginx-ubuntu1804:1.16.1
a0e60f7059745ab55e9f724f38ad7b65f540d368bd9b9851fef2e901b54037d0

[root@rocky8 1.16.1-ubuntu1804]# curl 127.0.0.1
Test Page based nginx-ubuntu1804

Docker 数据管理


Docker镜像由多个只读层叠加而成,启动容器时,Docker会加载只读镜像层并在镜像栈顶部添加一个读写层

如果运行中的容器修改了现有的一个已经存在的文件,那该文件将会从读写层下面的只读层复制到读写层,该文件的只读版本仍然存在,只是已经被读写层中该文件的副本所隐藏,此即“写时复制(COW copy on write)”机制

如果正在运行中的容器产生了新的数据,那么新数据将会被写入读写层进行持久保存,这个读写层也就是容器的工作目录,此过程同样遵循写时复制(COW)机制。

COW机制节约空间,但会导致性能低下。虽然关闭、重启容器数据不受影响,但随着容器的删除,其对应的可写层也会随之删除,即数据也会丢失。如果容器需要持久保存数据且不影响性能,可以用数据卷技术实现。

如下图是将对根的数据写入到了容器的可写层,但是把/data 中的数据写入到了一个另外的volume 中用于数据持久化


容器的数据管理介绍

Docker镜像是分层设计的,镜像层是只读的,通过镜像启动的容器添加了一层可读写的文件系统,用户写入的数据都保存在这一层中。

Docker容器的分层

容器的数据分层目录

  • LowerDir: image 镜像层,即镜像本身,只读
  • UpperDir: 容器的上层,可读写 ,容器变化的数据存放在此处
  • MergedDir: 容器的文件系统,使用Union FS(联合文件系统)将lowerdir 和 upperdir 合并完成后给容器使用,最终呈现给用户的统一视图
  • WorkDir: 容器在宿主机的工作目录,挂载后内容会被清空,且在使用过程中其内容用户不可见
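上述分层结构可以用纯 shell 粗略模拟(示意,目录名均为演示假设,并非真实的 overlay 挂载):upper 层中的同名文件会遮盖 lower 层,合并视图优先返回 upper 的内容,这正是写时复制后读到的是副本的原因。

```shell
# 用两个普通目录模拟 overlayfs 的 lower(镜像层)与 upper(容器可写层)
demo=$(mktemp -d)
mkdir -p "$demo/lower" "$demo/upper"
echo "from image layer"      > "$demo/lower/a.txt"
echo "from image layer"      > "$demo/lower/b.txt"
echo "modified in container" > "$demo/upper/b.txt"   # 写时复制产生的副本

# merged 视图的查找顺序:先 upper,后 lower
lookup() {
    if [ -f "$demo/upper/$1" ]; then
        cat "$demo/upper/$1"
    else
        cat "$demo/lower/$1"
    fi
}
lookup a.txt    # from image layer
lookup b.txt    # modified in container
```

真实的 overlayfs 由内核在挂载时完成这种合并,上面的 lookup 只是把"upper 优先"的语义显式写了出来。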

范例: 查看指定容器数据分层

[root@rocky8 ~]# docker inspect aded1e1a5b37
"MergedDir": "/var/lib/docker/overlay2/7f4c0529efeac793e40f797caf266129cd2a47e7353421e123db0e413c4f7304/merged",
"UpperDir": "/var/lib/docker/overlay2/7f4c0529efeac793e40f797caf266129cd2a47e7353421e123db0e413c4f7304/diff",
"WorkDir": "/var/lib/docker/overlay2/7f4c0529efeac793e40f797caf266129cd2a47e7353421e123db0e413c4f7304/work"


[root@rocky8 ~]# ll -i /var/lib/docker/overlay2/7f4c0529efeac793e40f797caf266129cd2a47e7353421e123db0e413c4f7304
total 8
69311616 -rw------- 1 root root 0 Apr 10 05:52 committed
101962085 drwxr-xr-x 19 root root 4096 Apr 8 19:49 diff
68334207 -rw-r--r-- 1 root root 26 Apr 8 19:49 link



[root@rocky8 ~]# mount
overlay on /var/lib/docker/overlay2/262df7accd0cfc5a9419c286311d5aa2227036d46f53c90fae3f335a8035bd92/merged type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/XSKSD5QI45KMGQLHL2SPMZZWE5:/var/lib/docker/overlay2/l/R3QW44B3Y7JFMBAW7OQ2SAHXAI,upperdir=/var/lib/docker/overlay2/262df7accd0cfc5a9419c286311d5aa2227036d46f53c90fae3f335a8035bd92/diff,workdir=/var/lib/docker/overlay2/262df7accd0cfc5a9419c286311d5aa2227036d46f53c90fae3f335a8035bd92/work)
nsfs on /run/docker/netns/9d7642d20c01 type nsfs (rw)

哪些数据需要持久化

有状态的协议

有状态协议就是通信双方要记住对方,并且共享一些信息;而无状态协议的每次通信都是独立的,与上一次的通信没有关系。
“状态”可以理解为“记忆”,有状态对应有记忆,无状态对应无记忆。


  • 下层是无状态的http请求服务,上层为有状态
  • 左侧为不需要存储的服务,右侧为需要存储的部分服务

容器数据持久保存方式

如果要将写入到容器的数据永久保存,则需要将容器中的数据保存到宿主机的指定目录

Docker的数据类型分为两种:

  • 数据卷(Data Volume): 直接将宿主机目录挂载至容器的指定的目录 ,推荐使用此种方式,此方式较常用
  • 数据卷容器(Data Volume Container): 间接使用宿主机空间,数据卷容器是将宿主机的目录挂载至一个专门的数据卷容器,然后让其他容器通过数据卷容器读写宿主机的数据 ,此方式不常用


数据卷(data volume)

数据卷特点和使用

数据卷实际上就是宿主机上的目录或者是文件,可以被直接mount到容器当中使用

实际生产环境中,需要针对不同类型的服务、不同类型的数据存储要求做相应的规划,最终保证服务的可扩展性、稳定性以及数据的安全性

数据卷使用场景

  • 数据库
  • 日志输出
  • 静态web页面
  • 应用配置文件
  • 多容器间目录或文件共享

数据卷的特点

  • 数据卷是目录或者文件,并且可以在多个容器之间共同使用,实现容器之间共享和重用
  • 对数据卷的数据更改,会在所有使用该数据卷的容器中立即生效
  • 数据卷的数据可以持久保存,即使删除使用该数据卷的容器,数据也不受影响
  • 在容器里面的写入数据不会影响到镜像本身,即数据卷的变化不会影响镜像的更新
  • 依赖于宿主机目录,宿主机出问题,上面容器会受影响,当宿主机较多时,不方便统一管理
  • 匿名和命名数据卷在容器启动时初始化,如果容器使用的镜像在挂载点包含了数据,会拷贝到新初始化的数据卷中
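最后一条特点可以用纯 shell 模拟(示意,目录名均为演示假设):只有当卷是"新初始化"(即为空)时,镜像挂载点里的已有文件才会被拷贝进卷;卷里已有数据则保持不变。

```shell
# 模拟:镜像挂载点自带数据,首次挂载空卷时用它初始化卷
demo=$(mktemp -d)
mkdir -p "$demo/image_mountpoint" "$demo/volume"
echo "default config" > "$demo/image_mountpoint/app.conf"

init_volume() {   # 仅当卷为空时,才用镜像内容初始化
    if [ -z "$(ls -A "$demo/volume")" ]; then
        cp -a "$demo/image_mountpoint/." "$demo/volume/"
    fi
}

init_volume
cat "$demo/volume/app.conf"            # default config(空卷被初始化)

echo "user modified" > "$demo/volume/app.conf"
init_volume                            # 卷非空,不再覆盖
cat "$demo/volume/app.conf"            # user modified
```

注意这只适用于匿名卷和命名卷;绑定挂载宿主机目录(-v /host/dir:/container/dir)不会触发这种初始化拷贝。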

数据卷分类

启动容器时,可以指定使用数据卷实现容器数据的持久化,数据卷有三种

  • 指定宿主机目录或文件: 指定宿主机的具体路径和容器路径的挂载关系,此方式不会创建数据卷
  • 匿名卷: 不指定数据名称,只指定容器内目录路径充当挂载点,docker自动指定宿主机的路径进行挂载,此方式会创建匿名数据卷,Dockerfile中VOLUME指定的卷即为此种
  • 命名卷: 指定数据卷的名称和容器路径的挂载关系,此方式会创建命名数据卷

数据卷使用方法

docker run 命令的以下格式可以实现数据卷

-v, --volume=[host-src:]container-dest[:<options>]

<options>
ro 从容器内对此数据卷是只读,不写此项默认为可读可写
rw 从容器内对此数据卷可读可写,此为默认值

方式1

#指定宿主机目录或文件格式: 
-v <宿主机绝对路径的目录或文件>:<容器目录或文件>[:ro] #将宿主机目录挂载容器目录,两个目录都可自动创建

方式2

#匿名卷,只指定容器内路径,没有指定宿主机路径信息,宿主机自动生成/var/lib/docker/volumes/<卷ID>/_data目录,并挂载至容器指定路径
-v <容器内路径>

#示例:
docker run --name nginx -v /etc/nginx nginx

方式3

#命名卷将固定的存放在/var/lib/docker/volumes/<卷名>/_data
-v <卷名>:<容器目录路径>
#可以通过以下命令事先创建,如可没有事先创建卷名,docker run时也会自动创建卷
docker volume create <卷名>

#示例:
docker volume create vol1 #也可以事先不创建
docker run -d -p 80:80 --name nginx01 -v vol1:/usr/share/nginx/html nginx

docker rm 的 -v 选项可以删除容器时,同时删除相关联的匿名卷

-v, --volumes   Remove the volumes associated with the container

管理数据卷命令

docker volume COMMAND

Commands:
create Create a volume
inspect Display detailed information on one or more volumes
ls List volumes
prune Remove all unused local volumes
rm Remove one or more volumes

查看数据卷的挂载关系

docker inspect --format="{{.Mounts}}" <容器ID>

范例: 删除所有数据卷

[root@ubuntu1804 ~]# docker volume rm `docker volume ls -q`

范例:创建命名卷并删除

[root@ubuntu1804 ~]#docker volume create mysql-vol
mysql-vol

[root@ubuntu1804 ~]#docker volume ls
DRIVER VOLUME NAME
local mysql-vol

[root@ubuntu1804 ~]#tree /var/lib/docker/volumes/
/var/lib/docker/volumes/
├── metadata.db
└── mysql-vol
└── _data

2 directories, 1 file

[root@ubuntu1804 ~]#docker volume rm mysql-vol
mysql-vol

[root@ubuntu1804 ~]#docker volume ls
DRIVER VOLUME NAME

范例:删除不再使用的数据卷

[root@ubuntu1804 ~]#docker volume ls
DRIVER VOLUME NAME
local 897bd48c5c5e2067627d5c6d10dad17d4793132a638986c16f36820663728ee1

[root@ubuntu1804 ~]#docker volume prune -f
Deleted Volumes:
897bd48c5c5e2067627d5c6d10dad17d4793132a638986c16f36820663728ee1
Total reclaimed space: 219.5MB


[root@ubuntu1804 ~]#docker volume ls
DRIVER VOLUME NAME

关于匿名数据卷和命名数据卷

命名卷就是有名字的卷,使用 docker volume create <卷名> 形式创建并命名的卷;而匿名卷就是没名
字的卷,一般是 docker run -v /data 这种不指定卷名的时候所产生,或者 Dockerfile 里面的定义直接使用的。

有名字的卷,在用过一次后,以后挂载容器的时候还可以使用,因为有名字可以指定。所以一般需要保存的数据使用命名卷保存。
而匿名卷则是随着容器建立而建立,随着容器消亡而淹没于卷列表中(对于 docker run 匿名卷不会被自动
删除)。 因此匿名卷只存放无关紧要的临时数据,随着容器消亡,这些数据将失去存在的意义。

Dockerfile中指定VOLUME为匿名数据卷,其目的只是为了将某个路径确定为卷。

按照最佳实践的要求,不应该在容器存储层内进行数据写入操作,所有写入应该使用卷。如果定制镜像的时
候,就可以确定某些目录会发生频繁大量的读写操作,那么为了避免在运行时由于用户疏忽而忘记指定卷,导
致容器发生存储层写入的问题,就可以在 Dockerfile 中使用 VOLUME 来指定某些目录为匿名卷。这样即使用户忘记了指定卷,也不会产生不良的后果。
这个设置可以在运行时覆盖。通过 docker run 的 -v 参数或者 docker-compose.yml 的 volumes
指定。使用命名卷的好处是可以复用,其它容器可以通过这个命名数据卷的名字来指定挂载,共享其内容(不过要注意并发访问的竞争问题)。

比如,Dockerfile 中说 VOLUME /data,那么如果直接 docker run,其 /data 就会被挂载为匿名
卷,向 /data 写入的操作不会写入到容器存储层,而是写入到了匿名卷中。但是如果运行时 docker run
-v mydata:/data,这就覆盖了 /data 的挂载设置,要求将 /data 挂载到名为 mydata 的命名卷中。
所以说 Dockerfile 中的 VOLUME 实际上是一层保险,确保镜像运行可以更好的遵循最佳实践,不向容器存储层内进行写入操作。

数据卷默认可能会保存于 /var/lib/docker/volumes,不过一般不需要、也不应该访问这个位置。

实战案例: 目录数据卷

在宿主机创建容器所使用的目录

[root@rocky8 ~]# mkdir /data/testdir
[root@rocky8 ~]# echo Test page on host > /data/testdir/index.html

查看容器相关目录路径

[root@rocky8 ~]# docker images nginx*
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx-ubuntu1804 1.16.1 7d7aa6c8e4ef About an hour ago 394MB
nginx-alpine 1.16.1 e7fc09b41ec4 2 hours ago 262MB
nginx-centos7 1.26.3-v2 08c66ca1868e 8 hours ago 405MB
nginx-centos7 1.26.1 e3157f879258 12 hours ago 446MB
nginx latest 53a18edff809 2 months ago 192MB

[root@rocky8 ~]# docker run -it --rm nginx-alpine:1.16.1 sh
/ # cat /apps/nginx/conf/nginx.conf
location / {
root html;
index index.html index.htm;
}

/ # cat apps/nginx/html/index.html
Test Page based nginx-alpine

/ # exit

引用宿主机的数据卷启动容器

引用同一个数据卷目录,开启多个容器,实现多个容器共享数据

[root@rocky8 ~]# docker run -d -v /data/testdir/:/apps/nginx/html/ -p 8081:80 nginx-alpine:1.16.1 
3c4f3b870399b42199e7a79c4f74403ccf05ff515dec5b3da906c95b122a851d

[root@rocky8 ~]# docker run -d -v /data/testdir/:/apps/nginx/html/ -p 8082:80 nginx-alpine:1.16.1
53e62af7dc575b5a9b1d5146579a5e3c41abcad947bdcfe29ca7eb8d3a40333d

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
53e62af7dc57 nginx-alpine:1.16.1 "nginx" 6 seconds ago Up 5 seconds 443/tcp, 0.0.0.0:8082->80/tcp, :::8082->80/tcp hungry_kilby
3c4f3b870399 nginx-alpine:1.16.1 "nginx" 13 seconds ago Up 12 seconds 443/tcp, 0.0.0.0:8081->80/tcp, :::8081->80/tcp blissful_black

[root@rocky8 ~]# curl 127.0.0.1:8081
Test page on host

[root@rocky8 ~]# curl 127.0.0.1:8082
Test page on host

进入到容器内测试写入数据

进入其中一个容器写入数据,可以看到其它容器中的数据也随之变化

[root@rocky8 ~]# docker exec -it 53e62af7dc57 sh
/ # df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 52403200 16849844 35553356 32% /
tmpfs 65536 0 65536 0% /dev
tmpfs 2845540 0 2845540 0% /sys/fs/cgroup
shm 65536 0 65536 0% /dev/shm
/dev/mapper/rl-root 52403200 16849844 35553356 32% /etc/resolv.conf
/dev/mapper/rl-root 52403200 16849844 35553356 32% /etc/hostname
/dev/mapper/rl-root 52403200 16849844 35553356 32% /etc/hosts
/dev/mapper/rl-root 52403200 16849844 35553356 32% /apps/nginx/html
tmpfs 2845540 0 2845540 0% /proc/acpi
tmpfs 65536 0 65536 0% /proc/kcore
tmpfs 65536 0 65536 0% /proc/keys
tmpfs 65536 0 65536 0% /proc/timer_list
tmpfs 65536 0 65536 0% /proc/sched_debug
tmpfs 2845540 0 2845540 0% /proc/scsi
tmpfs 2845540 0 2845540 0% /sys/firmware

/ # cat /apps/nginx/html/index.html
Test page on host

/ # echo Test page v2 on host > /apps/nginx/html/index.html

#进入另一个容器看到数据变化
[root@rocky8 ~]# docker exec -it 3c4f3b870399 sh
/ # cat /apps/nginx/html/index.html
Test page v2 on host

[root@rocky8 ~]# curl 127.0.0.1:8081
Test page v2 on host

[root@rocky8 ~]# curl 127.0.0.1:8082
Test page v2 on host

在宿主机修改数据

[root@rocky8 ~]# echo Test page v3 on host > /data/testdir/index.html 
[root@rocky8 ~]# cat /data/testdir/index.html
Test page v3 on host

[root@rocky8 ~]# curl 127.0.0.1:8081
Test page v3 on host

[root@rocky8 ~]# curl 127.0.0.1:8082
Test page v3 on host


[root@rocky8 ~]# docker exec -it 3c4f3b870399 sh
/ # cat /apps/nginx/html/index.html
Test page v3 on host

只读方法挂载数据卷

默认数据卷为可读可写,加 ro 选项可以实现只读挂载。对于不希望容器修改的数据,比如:配置文件、脚本等,可以用此方式挂载

[root@rocky8 ~]# docker run -d -v /data/testdir/:/apps/nginx/html/:ro -p 8004:80 nginx-alpine:1.16.1 
d4b616f58917b5516b280b442f4bd71e03e4ef244e67fa4b698a401935e2e169

[root@rocky8 ~]# docker exec -it d4b616 sh
/ # cat /apps/nginx/html/index.html
Test page v3 on host

/ # echo test > /apps/nginx/html/index.html
sh: can't create /apps/nginx/html/index.html: Read-only file system

删除容器

删除容器后,宿主机的数据卷还存在,可继续给新的容器使用

[root@rocky8 ~]# docker rm -f `docker ps -aq`
[root@rocky8 ~]# cat /data/testdir/index.html
Test page v3 on host

实战案例: MySQL使用的数据卷

[root@rocky8 ~]# docker pull mysql:5.7.30
[root@rocky8 ~]# docker images mysql
REPOSITORY TAG IMAGE ID CREATED SIZE
mysql 8.0.29-oracle 33037edcac9b 2 years ago 444MB
mysql 5.7.30 9cfcce23593a 4 years ago 448MB

[root@rocky8 ~]# docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=000000 mysql:5.7.30
61ad12a974dcaeb49688862c20746cd81da7a8dd62fde4b7dac67836a99b9dd0

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
61ad12a974dc mysql:5.7.30 "docker-entrypoint.s…" 16 seconds ago Up 15 seconds 0.0.0.0:3306->3306/tcp, :::3306->3306/tcp, 33060/tcp hungry_williams

[root@rocky8 ~]# docker exec -it 61ad12a974dc bash
root@61ad12a974dc:/# cat /etc/issue
Debian GNU/Linux 10 \n \l

root@61ad12a974dc:/# cat /etc/mysql/my.cnf
......
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/


root@61ad12a974dc:/# cat /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql #数据存放路径
#log-error = /var/log/mysql/error.log
# By default we only accept connections from localhost
#bind-address = 127.0.0.1
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0


root@61ad12a974dc:/# pstree -p
mysqld(1)-+-{mysqld}(128)
|-{mysqld}(129)
|-{mysqld}(130)
|-{mysqld}(131)
|-{mysqld}(132)
|-{mysqld}(133)
|-{mysqld}(134)
|-{mysqld}(135)
|-{mysqld}(136)
|-{mysqld}(137)
|-{mysqld}(138)
|-{mysqld}(139)
|-{mysqld}(141)
|-{mysqld}(142)
|-{mysqld}(143)
|-{mysqld}(144)
|-{mysqld}(145)
|-{mysqld}(146)
|-{mysqld}(147)
|-{mysqld}(148)
|-{mysqld}(149)
|-{mysqld}(150)
|-{mysqld}(151)
|-{mysqld}(152)
|-{mysqld}(153)
`-{mysqld}(154)

#另一个终端
[root@rocky8 /]# mysql -uroot -p000000 -h127.0.0.1
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)

mysql> create database dockerdb;
Query OK, 1 row affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| dockerdb |
| mysql |
| performance_schema |
| sys |
+--------------------+
5 rows in set (0.01 sec)

#删除容器后,再创建新的容器,数据库信息丢失
[root@rocky8 ~]# docker rm -f 61ad12a974dc
61ad12a974dc

[root@rocky8 ~]# docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=000000 mysql:5.7.30
a5bf1eb53a617f46507980646497794b59acfb3312bf21f9468e4846ad8b17ee

[root@rocky8 /]# mysql -uroot -p000000 -h127.0.0.1
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)

#利用数据卷创建容器
[root@rocky8 ~]# docker run -d --name mysql -p 3306:3306 -v \
/data/mysql/:/var/lib/mysql/ -e MYSQL_ROOT_PASSWORD=000000 mysql:5.7.30
a791bfede6c626a74824f9dd7cdbe86372849ee34e290d3deea9f8714f43f1ae

[root@rocky8 ~]# mysql -uroot -p000000 -h127.0.0.1 -e "create database dockerdb;show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| dockerdb |
| mysql |
| performance_schema |
| sys |
+--------------------+

#删除容器,数据存放在挂载数据卷中,不会删除
[root@rocky8 ~]# docker rm -f mysql
mysql

[root@rocky8 ~]# ls /data/mysql/
auto.cnf dockerdb ibtmp1 server-cert.pem
ca-key.pem ib_buffer_pool mysql server-key.pem
ca.pem ibdata1 performance_schema sys
client-cert.pem ib_logfile0 private_key.pem
client-key.pem ib_logfile1 public_key.pem

#重新创建新容器,之前数据还在
[root@rocky8 ~]# docker run -d --name mysql -p 3306:3306 -v /data/mysql/:/var/lib/mysql/ -e MYSQL_ROOT_PASSWORD=000000 mysql:5.7.30
705ff0ea8fcb7d882a19ed4e3e1a980842d2ecef27eeb24960684945cfca74f8

[root@rocky8 ~]# mysql -uroot -p000000 -h127.0.0.1 -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| dockerdb |
| mysql |
| performance_schema |
| sys |
+--------------------+


#指定多个数据卷,创建MySQL
[root@rocky8 ~]# docker run --name mysql-test1 \
-v /data/mysql/:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=000000 \
-e MYSQL_DATABASE=wordpress -e MYSQL_USER=wpuser -e MYSQL_PASSWORD=000000 \
-d -p 3306:3306 mysql:5.7.30


[root@rocky8 ~]# cat env.list
MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=wppass

[root@rocky8 ~]# docker run --name mysql-test2 \
-v /root/mysql/:/etc/mysql/conf.d -v /data/mysql2:/var/lib/mysql \
--env-file=env.list -d -p 3307:3306 mysql:5.7.30
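--env-file 引用的 env.list 就是普通的 KEY=VALUE 文本文件,每行一个变量。可以用 heredoc 快速生成并校验,以下为示意(/tmp 下的文件路径为假设):

```shell
# 生成 env.list(KEY=VALUE 格式,每行一个变量)
cat > /tmp/env.list <<'EOF'
MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=wppass
EOF

# 校验: 每行都应是 KEY=VALUE,且共 4 个变量
grep -c '=' /tmp/env.list
```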

实战案例: 文件数据卷

文件挂载用于很少更改文件内容的场景,比如: nginx 的配置文件、tomcat的配置文件等。

准备相关文件

[root@rocky8 ~]# mkdir /data/{bin,testapp,logs}
[root@rocky8 ~]# echo testapp v1 > /data/testapp/index.html
[root@rocky8 ~]# cat /data/testapp/index.html
testapp v1

[root@rocky8 ~]# cp /data/dockerfile/web/tomcat/tomcat-base-8.5.50/apache-tomcat-8.5.50/bin/catalina.sh /data/bin/
[root@rocky8 ~]# vim /data/bin/catalina.sh
#加下面tomcat的优化参数行(一行)
# -----------------------------------------------------------------------------
JAVA_OPTS="-server -Xms4g -Xmx4g -Xss512k -Xmn1g -XX:CMSInitiatingOccupancyFraction=65 -XX:+UseFastAccessorMethods -XX:+AggressiveOpts -XX:+UseBiasedLocking -XX:+DisableExplicitGC -XX:MaxTenuringThreshold=10 -XX:NewSize=2048M -XX:MaxNewSize=2048M -XX:NewRatio=2 -XX:PermSize=128m -XX:MaxPermSize=512m -XX:CMSFullGCsBeforeCompaction=5 -XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods"

# OS specific support. $var _must_ be set to either true or false.

[root@rocky8 ~]# chown 2019:2019 /data/bin/catalina.sh
[root@rocky8 ~]# chown 2019:2019 /data/logs/
[root@rocky8 ~]# ll /data/bin/catalina.sh
-rwxr-x--- 1 2019 2019 24323 Apr 10 14:31 /data/bin/catalina.sh

引用文件数据卷启动容器

同时挂载可读可写方式的目录数据卷和只读方式的文件数据卷,实现数据、日志和启动脚本三个数据卷的挂载

[root@rocky8 ~]# docker run -d -v /data/bin/catalina.sh:/apps/tomcat/bin/catalina.sh:ro \
-v /data/testapp:/data/tomcat/webapps/testapp \
-v /data/logs:/apps/tomcat/logs \
-p 8080:8080 tomcat-web:app1
d83a282d18ddb7abd8aedb33c5049db6ea477e1788be1e48c90b630f0493f4d1

验证容器可以访问

[root@rocky8 ~]# curl 127.0.0.1:8080/testapp/
testapp v1

[root@rocky8 ~]# ls -l /data/logs/
total 40
-rw-r----- 1 2019 2019 16304 Apr 10 14:47 catalina.2025-04-10.log
-rw-r----- 1 2019 2019 17248 Apr 10 14:47 catalina.out
-rw-r----- 1 2019 2019 0 Apr 10 14:43 host-manager.2025-04-10.log
-rw-r----- 1 2019 2019 0 Apr 10 14:43 localhost.2025-04-10.log
-rw-r----- 1 2019 2019 230 Apr 10 14:47 localhost_access_log.2025-04-10.txt
-rw-r----- 1 2019 2019 0 Apr 10 14:43 manager.2025-04-10.log

直接修改宿主机的数据

#宿主机修改目录数据卷
[root@rocky8 ~]# echo testapp v2 > /data/testapp/index.html
[root@rocky8 ~]# curl 127.0.0.1:8080/testapp/
testapp v2

[root@rocky8 ~]# ll /data/bin/catalina.sh
-rwxr-x--- 1 2019 2019 24323 Apr 10 14:31 /data/bin/catalina.sh

[root@rocky8 ~]# ll /data/bin/catalina.sh
-rwxr-x--- 1 2019 2019 24323 Apr 10 14:31 /data/bin/catalina.sh

[root@rocky8 ~]# echo >> /data/bin/catalina.sh
[root@rocky8 ~]# ll /data/bin/catalina.sh
-rwxr-x--- 1 2019 2019 24324 Apr 10 14:53 /data/bin/catalina.sh

进入容器修改数据

[root@rocky8 ~]# docker exec -it d83a282d18dd bash
[root@d83a282d18dd /]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp6 0 0 127.0.0.1:8005 :::* LISTEN
tcp6 0 0 :::8009 :::* LISTEN
tcp6 0 0 :::8080 :::* LISTEN

#文件数据卷上的文件为只读
[root@d83a282d18dd /]# echo >> /apps/tomcat/bin/catalina.sh
bash: /apps/tomcat/bin/catalina.sh: Read-only file system

#目录数据卷可读可写
[root@d83a282d18dd /]# cat /data/tomcat/webapps/testapp/index.html
testapp v2

[root@d83a282d18dd /]# echo testapp v3 > /data/tomcat/webapps/testapp/index.html
[root@d83a282d18dd /]# cat /data/tomcat/webapps/testapp/index.html
testapp v3

[root@rocky8 ~]# curl 127.0.0.1:8080/testapp/
testapp v3

查看容器中挂载和进程信息

[root@d83a282d18dd /]# mount
......
/dev/mapper/rl-root on /etc/resolv.conf type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/rl-root on /etc/hostname type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/rl-root on /etc/hosts type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/rl-root on /apps/apache-tomcat-8.5.50/logs type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/rl-root on /apps/apache-tomcat-8.5.50/bin/catalina.sh type xfs (ro,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/rl-root on /data/tomcat/webapps/testapp type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
......

[root@d83a282d18dd /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
overlay 52403200 22521664 29881536 43% /
tmpfs 65536 0 65536 0% /dev
tmpfs 2845540 0 2845540 0% /sys/fs/cgroup
shm 65536 0 65536 0% /dev/shm
/dev/mapper/rl-root 52403200 22521664 29881536 43% /etc/hosts
tmpfs 2845540 0 2845540 0% /proc/acpi
tmpfs 2845540 0 2845540 0% /proc/scsi
tmpfs 2845540 0 2845540 0% /sys/firmware

[root@d83a282d18dd /]# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 13308 3060 ? Ss 14:47 0:00 /bin/bash /apps/tomcat/bin/run_tomcat.sh
www 26 0.4 3.1 8234820 178880 ? Sl 14:47 0:03 /usr/local/jdk/bin/java -Djava.util.logging.config.file=/apps/tomca
root 27 0.0 0.0 83600 4580 ? S 14:47 0:00 su - www -c tail -f /etc/hosts
www 29 0.0 0.0 4420 672 ? Ss 14:47 0:00 tail -f /etc/hosts
root 145 0.0 0.0 13972 4088 pts/0 Ss 14:57 0:00 bash
root 170 0.0 0.0 53372 3880 pts/0 R+ 14:59 0:00 ps aux

[root@d83a282d18dd /]# ps aux | grep java
www 26 0.4 3.1 8234820 178880 ? Sl 14:47 0:03 /usr/local/jdk/bin/java -Djava.util.logging.config.file=/apps/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -server -Xms4g -Xmx4g -Xss512k -Xmn1g -XX:CMSInitiatingOccupancyFraction=65 -XX:+UseFastAccessorMethods -XX:+AggressiveOpts -XX:+UseBiasedLocking -XX:+DisableExplicitGC -XX:MaxTenuringThreshold=10 -XX:NewSize=2048M -XX:MaxNewSize=2048M -XX:NewRatio=2 -XX:PermSize=128m -XX:MaxPermSize=512m -XX:CMSFullGCsBeforeCompaction=5 -XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:LargePageSizeInBytes=128m -XX:+UseFastAccessorMethods -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Dignore.endorsed.dirs= -classpath /apps/tomcat/bin/bootstrap.jar:/apps/tomcat/bin/tomcat-juli.jar -Dcatalina.base=/apps/tomcat -Dcatalina.home=/apps/tomcat -Djava.io.tmpdir=/apps/tomcat/temp org.apache.catalina.startup.Bootstrap start
root 174 0.0 0.0 10708 2232 pts/0 S+ 15:00 0:00 grep --color=auto java

实战案例: 匿名数据卷

[root@rocky8 ~]# docker volume ls
DRIVER VOLUME NAME

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

#利用匿名数据卷创建容器
[root@rocky8 ~]# docker run -d -p 80:80 --name nginx01 -v /usr/share/nginx/html nginx
b51e5babee75887b6579837ad79a548e71774c2e2bc6497ddeac7972c5f055df

[root@rocky8 ~]# curl 127.0.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

#查看自动生成的匿名数据卷
[root@rocky8 ~]# docker volume ls
DRIVER VOLUME NAME
local ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964

#查看匿名数据卷的详细信息
[root@rocky8 ~]# docker volume inspect ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964
[
{
"CreatedAt": "2025-04-10T15:05:00+08:00",
"Driver": "local",
"Labels": {
"com.docker.volume.anonymous": ""
},
"Mountpoint": "/var/lib/docker/volumes/ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964/_data",
"Name": "ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964",
"Options": null,
"Scope": "local"
}
]


[root@rocky8 ~]# docker inspect --format="{{.Mounts}}" nginx01
[{volume ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964 /var/lib/docker/volumes/ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964/_data /usr/share/nginx/html local true }]

#查看匿名数据卷的文件
[root@rocky8 ~]# ls /var/lib/docker/volumes/ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964/_data
50x.html index.html

#修改宿主机中匿名数据卷的文件
[root@rocky8 ~]# echo Anonymous Volume > /var/lib/docker/volumes/ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964/_data/index.html
[root@rocky8 ~]# curl 127.0.0.1
Anonymous Volume

#删除容器不会删除匿名数据卷
[root@rocky8 ~]# docker rm -f nginx01
nginx01

[root@rocky8 ~]# docker volume ls
DRIVER VOLUME NAME
local ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964

[root@rocky8 ~]# docker run -d -p 80:80 --name nginx01 -v /usr/share/nginx/html nginx
e9d657fa9cc59c027201779a36cc87a1dd5ec419f138ab7c32ced9fe75a2b29e

[root@rocky8 ~]# cat /var/lib/docker/volumes/ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964/_data/index.html
Anonymous Volume

#删除匿名数据卷
[root@rocky8 ~]# docker volume rm ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964
ef90c2af7491adf2f522a7a312e939d2fb5339f5e7561ba17d954693432d0964

实战案例: 命名数据卷

创建命名数据卷

[root@rocky8 ~]# docker volume create vol1
vol1
[root@rocky8 ~]# docker volume ls
DRIVER VOLUME NAME
local vol1
[root@rocky8 ~]# docker inspect vol1
[
{
"CreatedAt": "2025-04-10T15:13:44+08:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/vol1/_data",
"Name": "vol1",
"Options": null,
"Scope": "local"
}
]

使用命名数据卷创建容器

[root@rocky8 ~]# docker run -d -p 8001:80 --name nginx01 -v vol1:/usr/share/nginx/html nginx
67e8b53447bd457d3010f395a5667a9c15a4f583788a87bdec0d283dcb90b6f4

[root@rocky8 ~]# curl 127.0.0.1:8001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

#显示命名数据卷
[root@rocky8 ~]# docker volume ls
DRIVER VOLUME NAME
local vol1

#查看命名数据卷详细信息
[root@rocky8 ~]# docker volume inspect vol1
[
{
"CreatedAt": "2025-04-10T15:13:44+08:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/vol1/_data",
"Name": "vol1",
"Options": null,
"Scope": "local"
}
]

[root@rocky8 ~]# docker inspect --format="{{.Mounts}}" nginx01
[{volume vol1 /var/lib/docker/volumes/vol1/_data /usr/share/nginx/html local z true }]

#查看命名数据卷的文件
[root@rocky8 ~]# ls /var/lib/docker/volumes/vol1/_data/
50x.html index.html

#修改宿主机命名数据卷的文件
[root@rocky8 ~]# echo nginx vol1 website > /var/lib/docker/volumes/vol1/_data/index.html
[root@rocky8 ~]# curl 127.0.0.1:8001
nginx vol1 website

#利用现在的命名数据卷再创建新容器,可以和原有容器共享同一个命名数据卷的数据
[root@rocky8 ~]# docker run -d -p 8002:80 --name nginx02 -v vol1:/usr/share/nginx/html nginx
abc2c0b606f505b0ac6e41ac8a627714e66c66c2ac6cade24a32c6a84d85ed01

[root@rocky8 ~]# curl 127.0.0.1:8002
nginx vol1 website

创建容器时自动创建命名数据卷

#创建容器自动创建命名数据卷
[root@rocky8 ~]# docker run -d -p 8003:80 --name nginx03 -v vol2:/usr/share/nginx/html nginx
136c337d758f92dd92a42c19850be9ebc72d35d697d832a34a8fb6da523d2450

[root@rocky8 ~]# docker volume ls
DRIVER VOLUME NAME
local vol1
local vol2

删除数据卷

#删除指定的命名数据卷
[root@rocky8 ~]# docker volume rm vol1
vol1

#清理全部不再使用的卷
[root@rocky8 ~]# docker volume prune -f

实战案例:实现 wordpress 持久化

[root@rocky8 ~]# docker run -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 \
-e MYSQL_DATABASE=wordpress -e MYSQL_USER=wordpress -e MYSQL_PASSWORD=123456 \
--name mysql -d -v /data/mysql/:/var/lib/mysql --restart=always mysql:8.0.29-oracle


[root@rocky8 ~]# docker run -d -p 80:80 --name wordpress \
-v /data/wordpress:/var/www/html --restart=always wordpress:php7.4-apache

数据卷容器

数据卷容器介绍


在Dockerfile中创建的是匿名数据卷,无法直接实现多个容器之间共享数据

数据卷容器最大的功能是可以让数据在多个docker容器之间共享

如下图所示: B 容器可以访问 A 容器的内容,容器 C 也可以访问 A 容器的内容,从而实现 A、B、C 三个容器之间的数据读写共享。


相当于先创建一个后台运行的容器作为 Server 来提供数据卷,这个卷可以为其他容器提供数据存储服务,其他使用此卷的容器作为 Client 端,但此方法并不常用

缺点: 因为依赖一个 Server 的容器,所以此 Server 容器出了问题,其它 Client容器都会受影响

使用数据卷容器

启动容器时,指定使用数据卷容器

docker run 命令的以下选项可以实现数据卷容器,格式如下:

--volumes-from <数据卷容器>     Mount volumes from the specified container(s)

实战案例: 数据卷容器

创建一个数据卷容器 Server

先创建一个挂载宿主机数据目录的容器,此容器可以无需启动

范例: 使用之前的镜像创建数据卷容器

#数据卷容器一般无需映射端口
[root@rocky8 ~]# docker run -d --name volume-server \
-v /data/bin/catalina.sh:/apps/tomcat/bin/catalina.sh:ro \
-v /data/testapp:/data/tomcat/webapps/testapp tomcat-web:app1
1c9e7ccc82c8f777b387cc83789bb5334174e6c872ec2e7ca52b9393973b27a3

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1c9e7ccc82c8 tomcat-web:app1 "/apps/tomcat/bin/ru…" 15 seconds ago Up 14 seconds 8009/tcp, 8080/tcp volume-server

启动多个数据卷容器 Client

[root@rocky8 ~]# docker run -d --name client1 --volumes-from volume-server -p 8081:8080 tomcat-web:app1 
2fb21a5681ca662f7113d3cb27dc2ed8b69a332821e884ecb2c544521c00c527

[root@rocky8 ~]# docker run -d --name client2 --volumes-from volume-server -p 8082:8080 tomcat-web:app1
7d0cf30474af6cf379049927ba3b4a36fc612f586b35cadb1adcef130391c1a1

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7d0cf30474af tomcat-web:app1 "/apps/tomcat/bin/ru…" 23 seconds ago Up 22 seconds 8009/tcp, 0.0.0.0:8082->8080/tcp, :::8082->8080/tcp client2
2fb21a5681ca tomcat-web:app1 "/apps/tomcat/bin/ru…" 33 seconds ago Up 32 seconds 8009/tcp, 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp client1
1c9e7ccc82c8 tomcat-web:app1 "/apps/tomcat/bin/ru…" 2 minutes ago Up 2 minutes 8009/tcp, 8080/tcp volume-server

验证访问

[root@rocky8 ~]# curl 127.0.0.1:8081/testapp/
testapp v3

[root@rocky8 ~]# curl 127.0.0.1:8082/testapp/
testapp v3

进入容器测试读写

读写权限依赖于源数据卷Server容器

#进入 Server 容器修改数据
[root@rocky8 ~]# docker exec -it volume-server bash
[root@1c9e7ccc82c8 /]# cat /data/tomcat/webapps/testapp/index.html
testapp v3

[root@1c9e7ccc82c8 /]# echo testapp v4 > /data/tomcat/webapps/testapp/index.html

[root@rocky8 ~]# curl 127.0.0.1:8081/testapp/
testapp v4

[root@rocky8 ~]# curl 127.0.0.1:8082/testapp/
testapp v4

#进入 Client 容器修改数据
[root@rocky8 ~]# docker exec -it client1 bash
[root@2fb21a5681ca /]# cat /data/tomcat/webapps/testapp/index.html
testapp v4

[root@2fb21a5681ca /]# echo testapp v5 > /data/tomcat/webapps/testapp/index.html
[root@2fb21a5681ca /]# cat /data/tomcat/webapps/testapp/index.html
testapp v5

[root@rocky8 ~]# curl 127.0.0.1:8081/testapp/
testapp v5

[root@rocky8 ~]# curl 127.0.0.1:8082/testapp/
testapp v5

在宿主机直接修改

[root@rocky8 ~]# cat /data/testapp/index.html 
testapp v5

[root@rocky8 ~]# echo testapp v6 > /data/testapp/index.html
[root@rocky8 ~]# cat /data/testapp/index.html
testapp v6

[root@rocky8 ~]# curl 127.0.0.1:8081/testapp/
testapp v6

[root@rocky8 ~]# curl 127.0.0.1:8082/testapp/
testapp v6

[root@rocky8 ~]# docker exec -it volume-server cat /data/tomcat/webapps/testapp/index.html
testapp v6

关闭卷容器Server测试能否启动新容器

关闭卷容器Server,仍然可以创建新的client容器及访问旧的client容器

[root@rocky8 ~]# docker stop volume-server 
volume-server

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7d0cf30474af tomcat-web:app1 "/apps/tomcat/bin/ru…" 11 minutes ago Up 11 minutes 8009/tcp, 0.0.0.0:8082->8080/tcp, :::8082->8080/tcp client2
2fb21a5681ca tomcat-web:app1 "/apps/tomcat/bin/ru…" 11 minutes ago Up 11 minutes 8009/tcp, 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp client1

[root@rocky8 ~]# docker run -d --name client3 --volumes-from volume-server -p 8083:8080 tomcat-web:app1
d33c495361824608003a0543bef493bc99cf7b85d74597df3ac15a5ea0d4128b

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d33c49536182 tomcat-web:app1 "/apps/tomcat/bin/ru…" 15 seconds ago Up 14 seconds 8009/tcp, 0.0.0.0:8083->8080/tcp, :::8083->8080/tcp client3
7d0cf30474af tomcat-web:app1 "/apps/tomcat/bin/ru…" 12 minutes ago Up 12 minutes 8009/tcp, 0.0.0.0:8082->8080/tcp, :::8082->8080/tcp client2
2fb21a5681ca tomcat-web:app1 "/apps/tomcat/bin/ru…" 12 minutes ago Up 12 minutes 8009/tcp, 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp client1
1c9e7ccc82c8 tomcat-web:app1 "/apps/tomcat/bin/ru…" 14 minutes ago Exited (137) About a minute ago volume-server

[root@rocky8 ~]# curl 127.0.0.1:8081/testapp/
testapp v6

[root@rocky8 ~]# curl 127.0.0.1:8082/testapp/
testapp v6

[root@rocky8 ~]# curl 127.0.0.1:8083/testapp/
testapp v6

删除源卷容器Server,访问client和创建新的client容器

删除数据卷容器后,旧的 client 容器仍能访问,但无法再基于该数据卷容器创建新的 client 容器;不过可以基于已创建的 client 容器再创建新的 client 容器

[root@rocky8 ~]# docker rm -f volume-server 
volume-server

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d33c49536182 tomcat-web:app1 "/apps/tomcat/bin/ru…" 3 minutes ago Up 3 minutes 8009/tcp, 0.0.0.0:8083->8080/tcp, :::8083->8080/tcp client3
7d0cf30474af tomcat-web:app1 "/apps/tomcat/bin/ru…" 15 minutes ago Up 15 minutes 8009/tcp, 0.0.0.0:8082->8080/tcp, :::8082->8080/tcp client2
2fb21a5681ca tomcat-web:app1 "/apps/tomcat/bin/ru…" 15 minutes ago Up 15 minutes 8009/tcp, 0.0.0.0:8081->8080/tcp, :::8081->8080/tcp client1

[root@rocky8 ~]# docker run -d --name client4 --volumes-from volume-server -p 8084:8080 tomcat-web:app1
docker: Error response from daemon: No such container: volume-server.
See 'docker run --help'.

[root@rocky8 ~]# curl 127.0.0.1:8081/testapp/
testapp v6

[root@rocky8 ~]# curl 127.0.0.1:8082/testapp/
testapp v6

[root@rocky8 ~]# curl 127.0.0.1:8083/testapp/
testapp v6

重新创建数据卷容器 Server

重新创建数据卷容器 Server 后,还可继续创建新的 client 容器

[root@rocky8 ~]# docker run -d --name volume-server \
-v /data/bin/catalina.sh:/apps/tomcat/bin/catalina.sh:ro \
-v /data/testapp:/data/tomcat/webapps/testapp tomcat-web:app1
b60540d21cce2f61413cb230455bcb7a9534f0e847ef9f459d095ae6ef8f1ce9

[root@rocky8 ~]# docker run -d --name client4 --volumes-from volume-server -p 8084:8080 tomcat-web:app1
e8813afeaeaeb0682bd02b3b0c46fff35bbc8a6bb790310bc1d76922fe004a44

[root@rocky8 ~]# curl 127.0.0.1:8081/testapp/
testapp v6

[root@rocky8 ~]# curl 127.0.0.1:8082/testapp/
testapp v6

[root@rocky8 ~]# curl 127.0.0.1:8083/testapp/
testapp v6

[root@rocky8 ~]# curl 127.0.0.1:8084/testapp/
testapp v6

利用数据卷容器实现指定容器数据卷的备份

由于匿名数据卷在宿主机中的存储位置不确定,所以为了方便的备份匿名数据卷,可以利用数据卷容器实现数据卷的备份


#在执行备份命令的容器上进行备份的方式
docker run -it --rm --volumes-from [container name] -v $(pwd):/backup ubuntu
root@ca5bb2c1f877:/# tar cvf /backup/backup.tar [container data volume]


#说明
[container name] #表示需要备份的容器
[container data volume] #表示容器内的需要备份的数据卷对应的目录


#还原方式
docker run -it --rm --volumes-from [container name] -v $(pwd):/backup ubuntu
root@ca5bb2c1f877:/# tar xvf /backup/backup.tar -C [container data volume]
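上面备份/还原流程的核心其实只是 tar 的打包与解包,可以先脱离容器,在宿主机上用临时目录模拟一遍,以下为示意(/tmp 下的路径均为假设,分别对应容器内的数据卷目录和 /backup 目录):

```shell
# 模拟数据卷目录与备份目录
mkdir -p /tmp/datavolume1 /tmp/backup
echo centos > /tmp/datavolume1/centos.txt

# 备份: 相当于在备份容器内执行 tar cvf /backup/backup.tar .
tar cf /tmp/backup/backup.tar -C /tmp/datavolume1 .

# 模拟数据丢失
rm -rf /tmp/datavolume1/*

# 还原: 相当于 tar xvf /backup/backup.tar -C /datavolume1
tar xf /tmp/backup/backup.tar -C /tmp/datavolume1
cat /tmp/datavolume1/centos.txt
```

真实环境中唯一的区别是: tar 在一个临时容器内执行,数据卷目录通过 --volumes-from 挂载进来,备份目录通过 -v 挂载宿主机目录。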

范例:

#创建需要备份的匿名数据卷容器
[root@rocky8 ~]# docker run -it -v /datavolume1 --name volume-server centos:8 bash

[root@74d45f88e427 /]# ls
bin dev home lib64 media opt root sbin sys usr
datavolume1 etc lib lost+found mnt proc run srv tmp var

[root@74d45f88e427 /]# touch /datavolume1/centos.txt
[root@74d45f88e427 /]# exit
exit

[root@rocky8 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
74d45f88e427 centos:8 "bash" 34 seconds ago Exited (0) 9 seconds ago volume-server


#基于前面的匿名数据卷容器创建执行备份操作的容器
[root@rocky8 ~]# docker run -it --rm --volumes-from volume-server -v ~/backup:/backup --name backup-server ubuntu
root@c02524bf5791:/# ls backup/
root@c02524bf5791:/# ls
backup boot dev home lib64 mnt proc run srv tmp var
bin datavolume1 etc lib media opt root sbin sys usr

root@c02524bf5791:/# ls backup/
root@c02524bf5791:/# ls /datavolume1/
centos.txt

root@c02524bf5791:/# cd /datavolume1/
root@c02524bf5791:/datavolume1# tar cvf /backup/data.tar .
./
./centos.txt

root@c02524bf5791:/datavolume1# exit
exit

[root@rocky8 ~]# ls backup/
data.tar

#删除容器的数据
[root@rocky8 ~]# docker start -i volume-server
[root@74d45f88e427 /]# rm -rf /datavolume1/*
[root@74d45f88e427 /]# ls /datavolume1/
[root@74d45f88e427 /]# exit
exit

#进行还原
[root@rocky8 ~]# docker run --rm --volumes-from volume-server -v ~/backup:/backup --name backup-server ubuntu tar xvf /backup/data.tar -C /datavolume1/
./
./centos.txt

#验证是否还原
[root@rocky8 ~]# docker start -i volume-server
[root@74d45f88e427 /]# ls /datavolume1/
centos.txt

范例: 利用数据卷容器备份MySQL数据库

#MySQL容器默认使用了匿名卷
[root@rocky8 ~]# docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 mysql:5.7.30
4a2d92bfb3e1f1202fd589ce4811d20a4252648118e3b70003f3eac65d112abf

[root@rocky8 ~]# docker volume ls
DRIVER VOLUME NAME
local 3598d975cc6a8b2f621e1e19e4109fcd1ae1815bf160b2306f80d2f2810aaa85

#备份数据库
[root@rocky8 ~]# docker run -it --rm --volumes-from mysql -v $(pwd):/backup centos:8 tar cf /backup/mysql.tar /var/lib/mysql

#删除数据库文件
[root@rocky8 ~]# rm -rf /var/lib/docker/volumes/3598d975cc6a8b2f621e1e19e4109fcd1ae1815bf160b2306f80d2f2810aaa85/_data/*

#还原数据库
[root@rocky8 ~]# docker run -it --rm --volumes-from mysql -v $(pwd):/backup centos:8 tar xf /backup/mysql.tar -C /var/lib/mysql

数据卷容器总结

将提供卷的容器 Server 删除后,已经运行的容器 Client 依然可以使用挂载的卷,因为容器是通过挂载访问数据的;但无法再创建新的卷容器 Client。重新创建卷容器 Server 后,即可正常创建卷容器 Client。此方式可以用于线上共享数据目录等环境,因为即使数据卷容器被删除了,其他已经运行的容器依然可以挂载使用

由此可知, 数据卷容器的功能只是将数据挂载信息传递给了其它使用数据卷容器的容器,而数据卷容器本身并不提供数据存储功能

数据卷容器可以作为共享的方式为其他容器提供文件共享,类似于NFS共享,可以在生产中启动一个实例挂载本地的目录,然后其他的容器分别挂载此容器的目录,即可保证各容器之间的数据一致性

数据卷容器的 Server 和 Client 可以不使用同一个镜像生成

当创建Client容器时,会复制Server容器的数据卷信息,后续Server容器状态和存在与否,都不会影响Client容器使用的数据卷

当Server容器删除后,不能再基于Server容器创建新的Client容器,但可以基于已存在的Client容器来创建新的Client容器

最终实现了多个客户端容器共享宿主机上同一持久化存储的方案

Docker 网络管理


docker 容器创建后,必然要和其它主机或容器进行网络通信

官方文档:

https://docs.docker.com/network/

Docker的默认的网络通信

Docker安装后默认的网络设置

Docker 服务安装完成之后,默认会在每个宿主机生成一个名称为 docker0 的网卡,其 IP 地址默认为 172.17.0.1/16

范例: 安装Docker的默认的网络配置

[root@rocky8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:71:6e:af brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe71:6eaf/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:59:31:5b:67 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:59ff:fe31:5b67/64 scope link
valid_lft forever preferred_lft forever


[root@rocky8 ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.024259315b67 no

创建容器后的网络配置

每次新建容器后

  • 宿主机会多出一个虚拟网卡,它和容器内的网卡组成一对 veth 设备,比如宿主机上的 137: veth8ca6d43@if136 对应容器内编号为 136 的网卡,由此可以看出两者之间的关联
  • 容器会自动获取一个172.17.0.0/16网段的随机地址,默认从172.17.0.2开始分配给第1个容器使用,第2个容器为172.17.0.3,以此类推
  • 容器获取的地址并不固定,每次容器重启,可能会发生地址变化

创建第一个容器后的网络状态

范例: 创建容器,容器自动获取IP地址

[root@rocky8 ~]# docker run -it --rm alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
366: eth0@if367: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 7f5ae721e596

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7f5ae721e596 alpine "sh" 30 seconds ago Up 30 seconds epic_antonelli

范例: 新建第一个容器,宿主机的网卡多了一个新网卡

[root@rocky8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:71:6e:af brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe71:6eaf/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:59:31:5b:67 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:59ff:fe31:5b67/64 scope link
valid_lft forever preferred_lft forever
367: veth753221b@if366: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 1e:17:3c:42:c4:15 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::1c17:3cff:fe42:c415/64 scope link
valid_lft forever preferred_lft forever

范例: 查看新建容器后桥接状态

[root@rocky8 ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.024259315b67 no veth753221b

创建第二个容器后面的网络状态

范例: 再次创建第二个容器

[root@rocky8 ~]# docker run -it --rm alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
368: eth0@if369: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 7e2db63c597b

/ # ping 7f5ae721e596
PING 7f5ae721e596 (205.178.189.129): 56 data bytes
64 bytes from 205.178.189.129: seq=0 ttl=127 time=252.690 ms
64 bytes from 205.178.189.129: seq=1 ttl=127 time=278.889 ms

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7e2db63c597b alpine "sh" 2 minutes ago Up 2 minutes hardcore_easley
7f5ae721e596 alpine "sh" 4 minutes ago Up 4 minutes epic_antonelli

范例: 新建第二个容器后宿主机又多了一个虚拟网卡

[root@rocky8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:71:6e:af brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe71:6eaf/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:59:31:5b:67 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:59ff:fe31:5b67/64 scope link
valid_lft forever preferred_lft forever
367: veth753221b@if366: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 1e:17:3c:42:c4:15 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::1c17:3cff:fe42:c415/64 scope link
valid_lft forever preferred_lft forever
369: veth721158f@if368: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 5e:5d:fc:33:c1:93 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::5c5d:fcff:fe33:c193/64 scope link
valid_lft forever preferred_lft forever

范例: 查看新建第二个容器后桥接状态

[root@rocky8 ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.024259315b67 no veth721158f
veth753221b

容器间的通信

同一个宿主机的不同容器可相互通信

默认情况下

  • 同一个宿主机的不同容器之间可以相互通信
dockerd   --icc   Enable inter-container communication (default true)
--icc=false #此配置可以禁止同一个宿主机的容器之间通信
  • 不同宿主机之间的容器IP地址重复,默认不能相互通信

范例: 同一个宿主机的容器之间访问

[root@rocky8 ~]# docker run -it --rm alpine sh
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 7f5ae721e596


[root@rocky8 ~]# docker run -it --rm alpine sh
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 7e2db63c597b

/ # ping 7f5ae721e596
PING 7f5ae721e596 (205.178.189.129): 56 data bytes
64 bytes from 205.178.189.129: seq=0 ttl=127 time=252.690 ms
64 bytes from 205.178.189.129: seq=1 ttl=127 time=278.889 ms

禁止同一个宿主机的不同容器间通信

范例: 同一个宿主机不同容器间禁止通信

#dockerd 的 --icc=false 选项可以禁止同一个宿主机的不同容器间通信
[root@rocky8 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --icc=false

[root@rocky8 ~]# systemctl daemon-reload
[root@rocky8 ~]# systemctl restart docker.service

#创建两个容器,测试无法通信
[root@rocky8 ~]# docker run -it --rm alpine sh
/ # hostname -i
172.17.0.2

[root@rocky8 ~]# docker run -it --rm alpine sh
/ # hostname -i
172.17.0.3

/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
13 packets transmitted, 0 packets received, 100% packet loss

范例: 在第二个宿主机上创建容器,跨宿主机的容器之间默认不能通信

[root@rocky8 /]# docker run -it --rm alpine  sh
/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
^C
--- 172.17.0.3 ping statistics ---
7 packets transmitted, 0 packets received, 100% packet loss

修改默认docker0网桥的网络配置

安装docker后默认会自动生成一个docker0的网桥,使用的IP是172.17.0.1/16,可能和宿主机所在的网段发生冲突,可以将其修改为其它网段的地址,避免冲突
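修改前可以先粗略判断默认网段是否和现有网络重叠,下面是一个假设性的 shell 示意脚本(ip2int、cidr_overlap 均为虚构的演示函数,仅处理 IPv4,不代表 docker 的实际检测逻辑):

```shell
# 把点分十进制 IP 转成 32 位整数
ip2int() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# 判断两个 CIDR 网段是否重叠: 只要任一网段在对方掩码下网络地址相同即重叠
cidr_overlap() {    # 用法: cidr_overlap 172.17.0.0/16 172.17.3.0/24
    local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/}
    local i1=$(ip2int "$n1") i2=$(ip2int "$n2")
    local m1=$(( 0xFFFFFFFF << (32 - p1) & 0xFFFFFFFF ))
    local m2=$(( 0xFFFFFFFF << (32 - p2) & 0xFFFFFFFF ))
    if [ $(( i1 & m2 )) -eq $(( i2 & m2 )) ] || [ $(( i1 & m1 )) -eq $(( i2 & m1 )) ]; then
        echo overlap
    else
        echo ok
    fi
}

cidr_overlap 172.17.0.0/16 172.17.3.0/24       # 重叠: overlap
cidr_overlap 172.17.0.0/16 192.168.100.0/24    # 不重叠: ok
```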

范例: 将docker0的IP修改为指定IP

#方法1
[root@ubuntu1804 ~]# vim /etc/docker/daemon.json
[root@ubuntu1804 ~]# cat /etc/docker/daemon.json
{
"bip": "192.168.100.1/24",
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}

[root@ubuntu1804 ~]# systemctl restart docker.service

#方法2
[root@ubuntu1804 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=192.168.100.1/24

[root@ubuntu1804 ~]#systemctl daemon-reload
[root@ubuntu1804 ~]#systemctl restart docker.service

#注意两种方法不可混用,否则将无法启动docker服务

#验证结果
[root@rocky8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:71:6e:af brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe71:6eaf/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:59:31:5b:67 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 brd 192.168.100.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:59ff:fe31:5b67/64 scope link
valid_lft forever preferred_lft forever

修改默认网络设置使用自定义网桥

新建容器默认使用docker0的网络配置,可以修改默认指向自定义的网桥网络

范例: 用自定义的网桥代替默认的docker0

#查看默认网络
[root@rocky8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:71:6e:af brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe71:6eaf/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:59:31:5b:67 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:59ff:fe31:5b67/64 scope link
valid_lft forever preferred_lft forever

[root@rocky8 ~]# yum install -y bridge-utils
[root@rocky8 ~]# brctl addbr br0
[root@rocky8 ~]# ip a a 192.168.100.1/24 dev br0
[root@rocky8 ~]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.000000000000 no
docker0 8000.024259315b67 no

[root@rocky8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:71:6e:af brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe71:6eaf/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:59:31:5b:67 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:59ff:fe31:5b67/64 scope link
valid_lft forever preferred_lft forever
378: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether d6:75:ba:11:b8:f2 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 scope global br0
valid_lft forever preferred_lft forever


[root@rocky8 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -b br0
[root@rocky8 ~]# systemctl daemon-reload
[root@rocky8 ~]# systemctl restart docker.service

# 注意: 如果 daemon.json 中也定义了 bip 等网桥相关配置,和此处的 -b 选项冲突,会导致启动报错
[root@rocky8 ~]# systemctl restart docker.service
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.

[root@rocky8 ~]# ps -ef | grep dockerd
root 155119 1 0 18:10 ? 00:00:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -b br0
root 155289 133677 0 18:12 pts/1 00:00:00 grep --color=auto dockerd

[root@rocky8 ~]# docker run --rm alpine hostname -i
192.168.100.2

容器名称互联

新建容器时,docker会自动分配容器名称、容器ID和IP地址,导致容器名称、容器ID和IP都不固定,那么如何区分不同的容器,实现和确定目标容器的通信呢?解决方案是给容器起个固定的名称,容器之间通过固定名称实现确定目标的通信

有两种固定名称:

  • 容器名称
  • 容器名称的别名

注意: 两种方式都最少需要两个容器才能实现

通过容器名称互联

容器名称介绍

即在同一个宿主机上的容器之间可以通过自定义的容器名称相互访问。比如: 一个业务的前端静态页面使用nginx,动态页面使用tomcat,另外还需要负载均衡调度器(如haproxy)将请求调度至nginx和tomcat的容器。由于容器启动时其内部IP地址是DHCP随机分配的,而给容器起的固定名称则相对稳定,因此比较适用于此场景

注意: 如果被引用的容器地址变化,必须重启当前容器才能生效

容器名称实现

docker run 创建容器时,可使用 --link 选项实现容器名称的引用,其本质就是在容器内的/etc/hosts中添加 --link 后指定的容器的IP和主机名的对应关系,从而实现名称解析

--link list                     #Add link to another container

格式:
docker run --name <容器名称> #先创建指定名称的容器
docker run --link <目标通信的容器ID或容器名称> #再创建容器时引用上面容器的名称
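这种基于 /etc/hosts 的名称解析过程可以用下面的脚本模拟(resolve_name 为虚构的演示函数,示例中的 IP 和容器名仅为演示用假设值,实际记录由 docker 写入容器内的 /etc/hosts):

```shell
# --link 的本质: 向容器的 /etc/hosts 追加一行 "IP 主机名 [别名...]" 记录,
# 这里用临时文件模拟该文件及其查找过程
hosts=$(mktemp)
printf '127.0.0.1 localhost\n' > "$hosts"
printf '172.17.0.2 server1 4be083330070\n' >> "$hosts"   # 模拟 --link server1 写入的记录

resolve_name() {    # 按 /etc/hosts 的格式查找主机名对应的 IP(演示函数)
    awk -v n="$2" '$0 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) { print $1; exit } }' "$1"
}

resolve_name "$hosts" server1        # 输出 172.17.0.2
```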

实战案例1: 使用容器名称进行容器间通信

1. 先创建第一个指定容器名称的容器

[root@rocky8 ~]# docker run -it --name server1 --rm alpine sh
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.100.2 4be083330070

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
381: eth0@if382: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:c0:a8:64:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.2/24 brd 192.168.100.255 scope global eth0
valid_lft forever preferred_lft forever

/ # ping 192.168.100.2
PING 192.168.100.2 (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: seq=0 ttl=64 time=0.082 ms
64 bytes from 192.168.100.2: seq=1 ttl=64 time=0.082 ms
^C
--- 192.168.100.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.082/0.082/0.082 ms

/ # ping server1
ping: bad address 'server1'

/ # ping 4be083330070
PING 4be083330070 (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: seq=0 ttl=64 time=0.042 ms
64 bytes from 192.168.100.2: seq=1 ttl=64 time=0.065 ms
^C
--- 4be083330070 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.042/0.053/0.065 ms

2. 新建第二个容器时引用第一个容器的名称

会自动将第一个容器的名称和IP加入第二个容器的/etc/hosts文件,从而可以利用第一个容器的名称进行访问

[root@rocky8 ~]# docker run -it --name server2 --rm  --link server1 alpine sh
/ # env
HOSTNAME=a10a01eee894
SHLVL=1
HOME=/root
SERVER1_NAME=/server2/server1
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/

/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.100.2 server1 4be083330070
192.168.100.3 a10a01eee894

/ # ping server1
PING server1 (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: seq=0 ttl=64 time=0.229 ms
64 bytes from 192.168.100.2: seq=1 ttl=64 time=0.133 ms
^C
--- server1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.133/0.181/0.229 ms

/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.100.2 server1 4be083330070
192.168.100.3 a10a01eee894

/ # ping server2
ping: bad address 'server2'

/ # ping 4be083330070
PING 4be083330070 (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: seq=0 ttl=64 time=0.118 ms
^C
--- 4be083330070 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.118/0.118/0.118 ms

/ # ping a10a01eee894
PING a10a01eee894 (192.168.100.3): 56 data bytes
64 bytes from 192.168.100.3: seq=0 ttl=64 time=0.100 ms
64 bytes from 192.168.100.3: seq=1 ttl=64 time=0.077 ms
^C
--- a10a01eee894 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.077/0.088/0.100 ms

[root@rocky8 /]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a10a01eee894 alpine "sh" 2 minutes ago Up 2 minutes server2
4be083330070 alpine "sh" 5 minutes ago Up 5 minutes server1

实战案例2: 实现 wordpress 和 MySQL 两个容器互连

[root@rocky8 ~]# mkdir -p lamp_docker/mysql
[root@rocky8 ~]# vim lamp_docker/env_mysql.list
[root@rocky8 ~]# vim lamp_docker/env_wordpress.list
[root@rocky8 ~]# vim lamp_docker/mysql/mysql_test.cnf

[root@rocky8 ~]# tree lamp_docker/
lamp_docker/
├── env_mysql.list
├── env_wordpress.list
└── mysql
└── mysql_test.cnf

1 directory, 3 files

[root@rocky8 ~]# cat lamp_docker/env_mysql.list
MYSQL_ROOT_PASSWORD=123456
MYSQL_DATABASE=wordpress
MYSQL_USER=wpuser
MYSQL_PASSWORD=wppass

[root@rocky8 ~]# cat lamp_docker/env_wordpress.list
WORDPRESS_DB_HOST=mysql:3306
WORDPRESS_DB_NAME=wordpress
WORDPRESS_DB_USER=wpuser
WORDPRESS_DB_PASSWORD=wppass
WORDPRESS_TABLE_PREFIX=wp_

[root@rocky8 ~]# cat lamp_docker/mysql/mysql_test.cnf
[mysqld]
server-id=100
log-bin=mysql-bin

[root@rocky8 ~]# docker run --name mysql \
-v /root/lamp_docker/mysql/:/etc/mysql/conf.d \
-v /data/mysql:/var/lib/mysql \
--env-file=/root/lamp_docker/env_mysql.list \
-d -p 3306:3306 mysql:5.7.30

[root@rocky8 ~]# docker run -d --name wordpress \
--link mysql -v /data/wordpress:/var/www/html/wp-content \
--env-file=/root/lamp_docker/env_wordpress.list \
-p 80:80 wordpress:php7.4-apache

通过自定义容器别名互联

容器别名介绍

自定义的容器名称后期可能会发生变化,一旦名称变化,容器内程序之间的调用也必须随之修改。比如: 程序通过固定的容器名称进行服务调用,容器名称变化之后,再使用之前的名称就无法成功调用,每次都修改又比较麻烦。因此可以使用自定义别名的方式解决: 容器名称可以随意更改,只要不更改别名即可

容器别名实现

命令格式:

docker run --name <容器名称> 
#先创建指定名称的容器

docker run --name <容器名称> --link <目标容器名称>:"<容器别名1> <容器别名2> ..."
#给上面创建的容器起别名,来创建新容器

实战案例: 使用容器别名

范例: 创建第三个容器,引用前面创建的容器,并起别名

[root@rocky8 ~]# docker run -it --name server1 --rm alpine sh
[root@rocky8 ~]# docker run -it --rm --name server3 --link server1:server1-alias alpine
/ # env
HOSTNAME=b593c2b99a62
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SERVER1_ALIAS_NAME=/server3/server1-alias

/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.4 server1-alias 0c6696f75e1d server1
172.17.0.5 b593c2b99a62

/ # ping server1
PING server1 (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: seq=0 ttl=64 time=0.118 ms

/ # ping server1-alias
PING server1-alias (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: seq=0 ttl=64 time=0.129 ms

范例: 创建第四个容器,引用前面创建的容器,并起多个别名

[root@rocky8 ~]# docker run -it --rm --name server4 --link server1:"server1-alias1 server1-alias2" alpine
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.4 server1-alias1 server1-alias2 0c6696f75e1d server1
172.17.0.6 d969ab602d04

/ # ping server1-alias2
PING server1-alias2 (172.17.0.4): 56 data bytes
64 bytes from 172.17.0.4: seq=0 ttl=64 time=0.151 ms
64 bytes from 172.17.0.4: seq=1 ttl=64 time=0.128 ms

Docker 网络连接模式

网络模式介绍

Docker 的网络支持5种网络模式:

  • none
  • bridge
  • host
  • container
  • network-name

范例: 查看默认的网络模式有三个

[root@rocky8 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
4048bc584ca0 bridge bridge local
aef2f5228637 host host local
3709f184390d none null local

网络模式指定

默认新建的容器使用Bridge模式,创建容器时,docker run 命令使用以下选项指定网络模式

格式

docker run --network <mode>
docker run --net=<mode>

<mode>: 可以是以下值
none
bridge
host
container:<容器名或容器ID>
<自定义网络名称>

bridge网络模式

bridge 网络模式架构

本模式是docker的默认模式,即不指定任何模式就是bridge模式,也是使用比较多的模式。此模式会为每一个容器分配独立的网络命名空间、IP地址等信息,并将容器连接到一个虚拟网桥(docker0)与外界通信

可以和外部网络之间进行通信,通过SNAT访问外网,使用DNAT可以让容器被外部主机访问,所以此模式也称为NAT模式

此模式下宿主机需要开启 ip_forward 功能
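可以用类似下面的脚本确认内核转发是否已开启(check_ip_forward 为虚构的演示函数,安装 docker 后通常已自动开启):

```shell
# 检查宿主机是否开启了内核转发功能(bridge 模式的 SNAT/DNAT 依赖它)
check_ip_forward() {
    if [ "$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null)" = "1" ]; then
        echo "ip_forward enabled"
    else
        echo "ip_forward disabled"
        # 未开启时可执行: sysctl -w net.ipv4.ip_forward=1  (需 root 权限,重启后失效)
    fi
}

check_ip_forward
```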

bridge网络模式特点

  • 网络资源隔离: 不同宿主机的容器无法直接通信,各自使用独立网络
  • 无需手动配置: 容器默认自动获取172.17.0.0/16的IP地址,此地址可以修改
  • 可访问外网: 利用宿主机的物理网卡,SNAT连接外网
  • 外部主机无法直接访问容器: 可以通过配置DNAT接受外网的访问
  • 性能较低: 因为需通过NAT转换,带来额外的性能损耗
  • 端口管理繁琐: 每个容器必须手动指定唯一的端口,容易产生端口冲突

bridge 模式的默认设置

范例: 查看bridge模式信息

[root@rocky8 ~]# docker network inspect bridge 
[
{
"Name": "bridge",
"Id": "4048bc584ca02c83479e862bd60c153886afb02987e57b4c04a2d73e916ad516",
"Created": "2025-04-11T14:19:27.916624164+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"8cc94ae40dd2e69ca1aee9e7ed9ff756445c28f35d5bc41dc59de16770175d14": {
"Name": "mysql",
"EndpointID": "b5fdf2a3dda5ea2e746ab9eb6794f59d53bddfcca7a85b92069e40b6bd236c12",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"db145a4d599893bd37609ba258bbe336a06257d31ee2f70535ae095a505d4974": {
"Name": "wordpress",
"EndpointID": "2e3b69ecb7b72b4aa935fe28e7373bc91b760ee4d47dec67de5e0190c074c1c1",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

范例: 宿主机的网络状态

#安装docker后,默认启用ip_forward
[root@rocky8 ~]# cat /proc/sys/net/ipv4/ip_forward
1

[root@rocky8 ~]# iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
11 572 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
11 711 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:3306
0 0 MASQUERADE tcp -- * * 172.17.0.3 172.17.0.3 tcp dpt:80

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL

Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306 to:172.17.0.2:3306
9 468 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.17.0.3:80

范例: 通过宿主机的物理网卡利用SNAT访问外部网络

#在另一台主机上建立httpd服务器
[root@centos7 ~]#systemctl is-active httpd
active
#启动容器,默认是bridge网络模式
[root@ubuntu1804 ~]#docker run -it --rm alpine:3.11 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
166: eth0@if167: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever

#可以访问其它宿主机
/ # ping 10.0.0.7
PING 10.0.0.7 (10.0.0.7): 56 data bytes
64 bytes from 10.0.0.7: seq=0 ttl=63 time=0.764 ms

/ # ping www.baidu.com
PING www.baidu.com (61.135.169.125): 56 data bytes
64 bytes from 61.135.169.125: seq=0 ttl=127 time=5.182 ms

/ # traceroute 10.0.0.7
traceroute to 10.0.0.7 (10.0.0.7), 30 hops max, 46 byte packets
1 172.17.0.1 (172.17.0.1) 0.008 ms 0.008 ms 0.007 ms
2 10.0.0.7 (10.0.0.7) 0.255 ms 0.510 ms 0.798 ms

/ # wget -qO - 10.0.0.7
Website on 10.0.0.7

/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0

[root@centos7 ~]# curl 127.0.0.1
Website on 10.0.0.7

[root@centos7 ~]# tail /var/log/httpd/access_log
127.0.0.1 - - [01/Feb/2020:19:31:16 +0800] "GET / HTTP/1.1" 200 20 "-" "curl/7.29.0"
10.0.0.100 - - [01/Feb/2020:19:31:21 +0800] "GET / HTTP/1.1" 200 20 "-" "Wget"

修改默认的 bridge 模式网络配置

有两种方法修改默认的bridge模式的网络配置,但两种方式只能选一种,否则会导致冲突,docker服务无法启动

范例: 修改bridge模式默认的网段方法1

[root@ubuntu1804 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=10.100.0.1/24

[root@ubuntu1804 ~]# systemctl daemon-reload
[root@ubuntu1804 ~]# systemctl restart docker

范例: 修改bridge网络配置方法2

[root@ubuntu1804 ~]#vim /etc/docker/daemon.json
{
"hosts": ["tcp://0.0.0.0:2375", "fd://"],
"bip": "192.168.100.100/24", #分配docker0网卡的IP,24是容器IP的netmask
"fixed-cidr": "192.168.100.128/26", #分配容器IP范围,26不是容器IP的子网掩码,只表示地址范围
"fixed-cidr-v6": "2001:db8::/64",
"mtu": 1500,
"default-gateway": "192.168.100.200", #网关必须和bip在同一个网段
"default-gateway-v6": "2001:db8:abcd::89",
"dns": [ "1.1.1.1", "8.8.8.8"]
}

[root@ubuntu1804 ~]#systemctl restart docker
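注意: 实际的 /etc/docker/daemon.json 必须是合法 JSON,不能包含上面示例里用于讲解的 # 注释。修改后可以先校验语法再重启服务,示意如下(/tmp/daemon.json.test 为演示用的临时路径):

```shell
# 写入一份不含注释的配置到临时文件(路径为演示假设,实际应为 /etc/docker/daemon.json)
cat > /tmp/daemon.json.test <<'EOF'
{
  "bip": "192.168.100.100/24",
  "fixed-cidr": "192.168.100.128/26",
  "dns": ["1.1.1.1", "8.8.8.8"]
}
EOF

# 用 python3 自带的 json.tool 校验语法,合法则输出 JSON OK
python3 -m json.tool /tmp/daemon.json.test > /dev/null && echo "JSON OK"
```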

Host 模式

如果指定host模式启动的容器,那么新创建的容器不会创建自己的虚拟网卡,而是直接使用宿主机的网卡和IP地址,因此在容器里面查看到的IP信息就是宿主机的信息,访问容器的时候直接使用宿主机IP+容器端口即可,不过容器内除网络以外的其它资源,如: 文件系统、系统进程等仍然和宿主机保持隔离

此模式由于直接使用宿主机的网络无需转换,网络性能最高,但是各容器内使用的端口不能相同,适用于运行容器端口比较固定的业务

Host 网络模式特点:

  • 使用参数 --network host 指定
  • 共享宿主机网络
  • 网络性能无损耗
  • 网络故障排除相对简单
  • 各容器网络无隔离
  • 网络资源无法分别统计
  • 端口管理困难: 容易产生端口冲突
  • 不支持端口映射

范例:

#查看宿主机的网络设置
[root@rocky8 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:6dff:fe39:e862 prefixlen 64 scopeid 0x20<link>
ether 02:42:6d:39:e8:62 txqueuelen 0 (Ethernet)
RX packets 221 bytes 460817 (450.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 218 bytes 827111 (807.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.11 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::20c:29ff:fe71:6eaf prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:71:6e:af txqueuelen 1000 (Ethernet)
RX packets 14534 bytes 16576014 (15.8 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 3734 bytes 769276 (751.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0


[root@rocky8 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.2 0.0.0.0 UG 100 0 0 ens160
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens160

#打开容器前确认宿主机的80/tcp端口没有打开
[root@rocky8 ~]# ss -lnt | grep 80

#创建host模式的容器
[root@rocky8 ~]# docker run -d --network host --name web1 nginx
f63085341930c4ca47d765a7029c730c10eb6a517d06fed86422dd0aa9b7e52a

#创建容器后,宿主机的80/tcp端口打开
[root@rocky8 ~]# ss -lnt | grep 80
LISTEN 0 511 0.0.0.0:80 0.0.0.0:*
LISTEN 0 511 [::]:80 [::]:*

#进入容器
[root@rocky8 ~]# docker exec -it web1 bash

#进入容器后仍显示宿主机的主机名提示符信息
root@rocky8:/# hostname
rocky8


#从容器访问远程主机
[root@rocky8 ~]# curl 192.168.1.11
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

范例: host模式下端口映射无法实现

[root@rocky8 ~]# docker run -d --network host --name web2 -p 81:80 nginx
WARNING: Published ports are discarded when using host network mode
2036d93321c622cf004bf38ff6f78c64c327eb6d6446266f3b34e6faddaf3aab

#host模式下端口映射不生效,且因宿主机80端口已被web1占用,容器启动后随即退出
[root@rocky8 ~]# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2036d93321c6 nginx "/docker-entrypoint.…" 25 seconds ago Exited (1) 22 seconds ago web2

范例: 对比前面host模式的容器和bridge模式的端口映射

[root@rocky8 ~]# docker port web1
[root@rocky8 ~]# docker port web2
[root@rocky8 ~]# docker run -d --network bridge -p 8001:80 --name web3 nginx
2a97a0f7764c336c45fb6302c66fc6ee5af647d50fea68f5b2957a9f5d820bd1

[root@rocky8 ~]# docker port web3
80/tcp -> 0.0.0.0:8001
80/tcp -> [::]:8001

none 模式

在使用none 模式后,Docker 容器不会进行任何网络配置,没有网卡、没有IP也没有路由,因此默认无法与外界通信,需要手动添加网卡配置IP等,所以极少使用

none模式特点

  • 使用参数 --network none 指定
  • 默认无网络功能,无法和外部通信
  • 无法实现端口映射
  • 适用于测试环境

范例: 启动none模式的容器

[root@rocky8 ~]# docker run -d --network none -p 8001:80 --name web1-none nginx
1e06466d3bf508c5e630909e741365339f4877095f0092fd267b79c335766698

[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1e06466d3bf5 nginx "/docker-entrypoint.…" 4 seconds ago Up 3 seconds web1-none

[root@rocky8 ~]# docker exec -it web1-none bash
[root@5207dcbd0aee /]# ifconfig -a
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

[root@5207dcbd0aee /]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface

[root@5207dcbd0aee /]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN

[root@5207dcbd0aee /]# ping www.baidu.com
ping: www.baidu.com: Name or service not known

[root@5207dcbd0aee /]# ping 172.17.0.1
connect: Network is unreachable

Container 模式

使用此模式创建的容器需指定和一个已经存在的容器共享一个网络,而不是和宿主机共享网络。新创建的容器不会创建自己的网卡,也不会配置自己的IP,而是和一个被指定的已存在容器共享IP和端口范围,因此这个容器的端口不能和被指定容器的端口冲突。除了网络之外的文件系统、进程信息等仍然保持相互隔离,两个容器的进程可以通过lo网卡进行通信

Container 模式特点

  • 使用参数 --network container:<名称或ID> 指定
  • 与宿主机网络空间隔离
  • 容器间共享网络空间
  • 适合频繁的容器间的网络通信
  • 直接使用对方的网络,较少使用

范例: 通过容器模式实现 wordpress

[root@rocky8 ~]# docker run -d -p 80:80 --name wordpress \
-v /data/wordpress:/var/www/html --restart=always wordpress:php7.4-apache
82c534f135fb2340632b450142b712ad762f4e62e69662e334324b24ad15b3e4

[root@rocky8 ~]# docker run --network container:wordpress \
-e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_DATABASE=wordpress \
-e MYSQL_USER=wordpress -e MYSQL_PASSWORD=123456 \
-v /data/mysql:/var/lib/mysql --name mysql -d --restart=always mysql:8.0.29-oracle
515083b8ced2fe1bfff6ab8c541f36b814d250980f7f4c69f5d6ba81390bac96

范例:

#创建第一个容器
[root@rocky8 ~]# docker run -it --name server1 -p 80:80 alpine sh
/ # ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:AC:11:00:02
inet addr:172.17.0.2 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:736 (736.0 B) TX bytes:0 (0.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)


/ # netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State

#在另一个终端执行下面操作
[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
92287d1cc7fe alpine "sh" 40 seconds ago Up 39 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp server1

[root@rocky8 ~]# docker port server1
80/tcp -> 0.0.0.0:80
80/tcp -> [::]:80


#无法访问web服务
[root@rocky8 ~]# curl 127.0.0.1
curl: (56) Recv failure: Connection reset by peer

#创建第二个容器,基于第一个容器的container的网络模式
[root@rocky8 ~]# docker run -d --name server2 --network container:server1 nginx
1ddae1258f466d5043dcf25dc97084bad747261c22dc21bc1dcbd70cf07a5aef

#可以访问web服务
[root@rocky8 ~]# curl 127.0.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

#和第一个容器共享相同的网络
[root@rocky8 ~]# docker exec -it server2 sh
# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 92287d1cc7fe

#可访问外网
ping www.baidu.com
64 bytes from 61.135.169.121 (61.135.169.121): icmp_seq=1 ttl=127 time=3.99 ms

范例: 第一个容器使用host网络模式,第二个容器与之共享网络

[root@ubuntu1804 ~]# docker run -d --name c1 --network host nginx-centos7.8:v5.0-1.18.0
5a60804f3917d82dfe32db140411cf475f20acce0fe4674d94e4557e1003d8e0

[root@ubuntu1804 ~]# docker run -it --name c2 --network container:c1 centos7.8:v1.0

[root@ubuntu1804 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
group default qlen 1000
link/ether 00:0c:29:63:8b:ac brd ff:ff:ff:ff:ff:ff
inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe63:8bac/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:24:86:98:fb brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:24ff:fe86:98fb/64 scope link
valid_lft forever preferred_lft forever

[root@ubuntu1804 ~]# docker exec -it c1 bash
[root@ubuntu1804 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
group default qlen 1000
link/ether 00:0c:29:63:8b:ac brd ff:ff:ff:ff:ff:ff
inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe63:8bac/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:24:86:98:fb brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:24ff:fe86:98fb/64 scope link
valid_lft forever preferred_lft forever

范例:第一个容器使用none网络模式,第二个容器与之共享网络

[root@ubuntu1804 ~]#docker run -d --name c1 --network none nginx-centos7.8:v5.0-1.18.0
caf5b57299c8359f21f30b8894c5f8496ff39b44ead6a732056000689cb0c91c

[root@ubuntu1804 ~]#docker run -it --name c2 --network container:c1 centos7.8:v1.0

[root@caf5b57299c8 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever

自定义网络模式

除了以上的网络模式,也可以自定义网络,使用自定义的网段地址,网关等信息

注意: 自定义网络内的容器可以直接通过容器名相互访问(由 Docker 内置的 DNS 服务 127.0.0.11 解析),而无需使用 --link

可以使用自定义网络模式,实现不同集群应用的独立网络管理而互不影响;而且在同一个网络内,可以直接利用容器名相互访问,非常便利

自定义网络实现

[root@ubuntu1804 ~]# docker network --help

Usage: docker network COMMAND

Manage networks

Commands:
connect Connect a container to a network
create Create a network
disconnect Disconnect a container from a network
inspect Display detailed information on one or more networks
ls List networks
prune Remove all unused networks
rm Remove one or more networks

创建自定义网络:

docker network create -d <mode> --subnet <CIDR> --gateway <网关> <自定义网络名称>

#注意mode不支持host和none,默认是bridge模式

查看自定义网络信息

docker network inspect <自定义网络名称或网络ID>

引用自定义网络

docker run --network <自定义网络名称> <镜像名称>
docker run --net <自定义网络名称> --ip <指定静态IP> <镜像名称>
#注意:静态IP只支持自定义网络模型
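引用自定义网络并用 --ip 指定静态 IP 时,该 IP 必须落在自定义网络的网段内。下面是一段纯 bash 的自检脚本示意(函数 ip_to_int、ip_in_subnet 为本文自拟,仅作演示,不是 docker 自带命令):

```shell
#!/bin/bash
# 判断某 IP 是否属于指定 CIDR 网段: ip_in_subnet <IP> <CIDR>

ip_to_int() {
    # 把点分十进制 IP 转成 32 位整数
    local IFS=.
    read -r a b c d <<< "$1"
    echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

ip_in_subnet() {
    local ip=$1 net=${2%/*} bits=${2#*/}
    # 由前缀长度计算网络掩码,再比较两者的网络号是否一致
    local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$ip") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

ip_in_subnet 172.27.0.100 172.27.0.0/16 && echo "172.27.0.100 在网段内"
ip_in_subnet 172.17.0.100 172.27.0.0/16 || echo "172.17.0.100 不在网段内"
```

自检通过后再执行类似 docker run --net test-net --ip 172.27.0.100 <镜像名称> 的命令,可避免因 IP 不在网段内导致容器启动失败。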

删除自定义网络

docker network rm <自定义网络名称或网络ID>

范例: 自定义网络可以删除,但内置的三个网络无法删除

[root@ubuntu1804 ~]# docker network rm test-net
test-net

[root@ubuntu1804 ~]# docker network rm none
Error response from daemon: none is a pre-defined network and cannot be removed

[root@ubuntu1804 ~]# docker network rm bridge
Error response from daemon: bridge is a pre-defined network and cannot be
removed

[root@ubuntu1804 ~]# docker network rm host
Error response from daemon: host is a pre-defined network and cannot be removed

实战案例: 自定义网络

创建自定义的网络
[root@rocky8 ~]# docker network create -d bridge --subnet 172.27.0.0/16 --gateway 172.27.0.1 test-net
26dc4700293ee569474a78798d7ff862fbd4648e18539f349824130a914294ad

[root@rocky8 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
4048bc584ca0 bridge bridge local
aef2f5228637 host host local
3709f184390d none null local
26dc4700293e test-net bridge local

[root@rocky8 ~]# docker inspect test-net
[
{
"Name": "test-net",
"Id": "26dc4700293ee569474a78798d7ff862fbd4648e18539f349824130a914294ad",
"Created": "2025-04-11T16:18:45.152295655+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.27.0.0/16",
"Gateway": "172.27.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]

[root@rocky8 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:0c:29:71:6e:af brd ff:ff:ff:ff:ff:ff
altname enp3s0
inet 192.168.1.11/24 brd 192.168.1.255 scope global noprefixroute ens160
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe71:6eaf/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:6d:39:e8:62 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:6dff:fe39:e862/64 scope link
valid_lft forever preferred_lft forever
#新添加了一个虚拟网卡
28: br-26dc4700293e: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b8:53:5e:8e brd ff:ff:ff:ff:ff:ff
inet 172.27.0.1/16 brd 172.27.255.255 scope global br-26dc4700293e
valid_lft forever preferred_lft forever

#新加了一个网桥
[root@rocky8 ~]# brctl show
bridge name bridge id STP enabled interfaces
br-26dc4700293e 8000.0242b8535e8e no
docker0 8000.02426d39e862 no

[root@rocky8 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.1.2 0.0.0.0 UG 100 0 0 ens160
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.27.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-26dc4700293e
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens160
利用自定义的网络创建容器
[root@rocky8 ~]# docker run -it --rm --network test-net alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
29: eth0@if30: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.27.0.1 0.0.0.0 UG 0 0 0 eth0
172.27.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0

/ # cat /etc/resolv.conf
nameserver 127.0.0.11
options ndots:0

/ # ping -c1 www.baidu.com
PING www.baidu.com (220.181.111.1): 56 data bytes
64 bytes from 220.181.111.1: seq=0 ttl=127 time=38.511 ms

#再开一个新终端窗口查看网络
[root@rocky8 ~]# docker inspect test-net
[
{
"Name": "test-net",
"Id": "26dc4700293ee569474a78798d7ff862fbd4648e18539f349824130a914294ad",
"Created": "2025-04-11T16:18:45.152295655+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.27.0.0/16",
"Gateway": "172.27.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
#出现此网络中容器的网络信息
"Containers": {
"d5da2c9ec06cddc34d4dfa8b67af3aa13e0d58f97d007a25193f7788525287b9": {
"Name": "brave_pasteur",
"EndpointID": "d27c8cc7f6afaaeab19043a01dc6de766d08b5213b1ce3b9910902f25000ca8d",
"MacAddress": "02:42:ac:1b:00:02",
"IPv4Address": "172.27.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]

实战案例: 自定义网络中的容器之间通信

[root@rocky8 ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
4048bc584ca0 bridge bridge local
aef2f5228637 host host local
3709f184390d none null local
26dc4700293e test-net bridge local

[root@rocky8 ~]# docker run -it --rm --network test-net --name test1 alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
31: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.27.0.2 2625d628d599

[root@rocky8 ~]# docker run -it --rm --network test-net --name test2 alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
33: eth0@if34: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:1b:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.27.0.3/16 brd 172.27.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.27.0.3 153409970a1c

/ # ping -c1 test1
PING test1 (172.27.0.2): 56 data bytes
64 bytes from 172.27.0.2: seq=0 ttl=64 time=0.102 ms

结论: 自定义网络中的容器之间可以直接利用容器名进行通信

实战案例: 利用自定义网络实现 wordpress

[root@rocky8 ~]# docker network create -d bridge --subnet 172.27.0.0/16 --gateway 172.27.0.1 bridge2
63c2670c48f7f16f33a6c0dce4a30a8fd5f9756f2fc5ecec7e0f1f4858417ea6

[root@rocky8 ~]# docker run -d -p 8080:80 --network bridge2 --name wordpress2 \
-v /data/wordpress2:/var/www/html --restart=always wordpress:php7.4-apache

d2ca13c0a23ebca999811ad8e140c03f1b085c025119fec34afde3d0d83460fd


[root@rocky8 ~]# docker run --network container:wordpress2 \
-e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_DATABASE=wordpress \
-e MYSQL_USER=wordpress -e MYSQL_PASSWORD=123456 \
--name mysql2 -d -v /data/mysql2:/var/lib/mysql \
--restart=always mysql:8.0.29-oracle

5cf68ca5de3bc93b3bced8c1b8ec388e8724e35d85c9fa4fac27efb4682ad705


实战案例: 利用自定义网络实现 Redis Cluster


创建自定义网络
[root@ubuntu1804 ~]# docker network create net-redis --subnet 172.18.0.0/16
09b9dded99787835dccc029e16fa2782292d22c3e258f60a1db15d44e7a3bd93
创建6个redis容器配置
# 通过脚本创建六个redis容器配置
[root@ubuntu1804 ~]# for port in {1..6};do
mkdir -p /data/redis/node-${port}/conf
cat > /data/redis/node-${port}/conf/redis.conf << EOF   #用 > 覆盖写入,避免重复执行脚本时配置被追加重复
port 6379
bind 0.0.0.0
masterauth 123456
requirepass 123456
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.18.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done


[root@ubuntu1804 ~]# tree /data/redis/
/data/redis/
├── node-1
│ └── conf
│ └── redis.conf
├── node-2
│ └── conf
│ └── redis.conf
├── node-3
│ └── conf
│ └── redis.conf
├── node-4
│ └── conf
│ └── redis.conf
├── node-5
│ └── conf
│ └── redis.conf
└── node-6
└── conf
└── redis.conf
12 directories, 6 files


[root@ubuntu1804 ~]# cat /data/redis/node-1/conf/redis.conf
port 6379
bind 0.0.0.0
masterauth 123456
requirepass 123456
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.18.0.11
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
创建6个 redis 容器
# 通过脚本运行六个redis容器
[root@ubuntu1804 ~]# for port in {1..6};do
docker run -p 637${port}:6379 -p 1667${port}:16379 --name redis-${port} \
-v /data/redis/node-${port}/data:/data \
-v /data/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net net-redis --ip 172.18.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done
创建 redis cluster
#连接redis cluster
[root@ubuntu1804 ~]# docker exec -it redis-1 /bin/sh
/data # redis-cli -a 123456
127.0.0.1:6379> exit

#不支持 { } 扩展
/data # echo {1..10}
{1..10}
/data # echo $-
smi

# 创建集群
/data # redis-cli -a 123456 --cluster create 172.18.0.11:6379 172.18.0.12:6379 172.18.0.13:6379 172.18.0.14:6379 172.18.0.15:6379 172.18.0.16:6379 --cluster-replicas 1
Can I set the above configuration? (type 'yes' to accept): #输入yes
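上面在容器内 echo {1..10} 原样输出,是因为 alpine 镜像的 /bin/sh 是 busybox ash,不支持花括号扩展;该特性属于 bash。可在装有 bash 的主机上对比验证(示意):

```shell
# bash 会做花括号扩展
bash -c 'echo {1..5}'        # 输出: 1 2 3 4 5

# POSIX sh(如 busybox ash、dash)不做花括号扩展,原样输出 {1..5}
# 此处用 dash 演示;若本机未安装 dash,可在 alpine 容器内用 sh 验证
command -v dash > /dev/null && dash -c 'echo {1..5}'
```

因此前面 for port in {1..6} 的循环脚本需要在宿主机的 bash 中执行,不能照搬到 redis 容器的 sh 里。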
测试访问 redis cluster
#连接redis cluster
/data # redis-cli -a 123456 -c
127.0.0.1:6379> cluster info
cluster_known_nodes:6

127.0.0.1:6379> cluster nodes
#看到172.18.0.{11,12,13}为master,172.18.0.{14,15,16}为slave
#以下为master/slave关系
#172.18.0.11<--->172.18.0.15
#172.18.0.12<--->172.18.0.16
#172.18.0.13<--->172.18.0.14

#添加key到redis-2上
127.0.0.1:6379> set name wang
-> Redirected to slot [5798] located at 172.18.0.12:6379
OK

#添加key到redis-1上
172.18.0.12:6379> set title cto
-> Redirected to slot [2217] located at 172.18.0.11:6379
OK

172.18.0.11:6379> get name
-> Redirected to slot [5798] located at 172.18.0.12:6379
"wang"

172.18.0.12:6379> get title
-> Redirected to slot [2217] located at 172.18.0.11:6379
"cto"
测试故障实现 redis cluster 高可用性
#模拟redis-2故障
[root@ubuntu1804 ~]# docker stop redis-2
redis-2

#再次查看cluster状态,可以看到redis-2出错
[root@ubuntu1804 ~]# docker exec -it redis-1 /bin/sh
/data # redis-cli -a 123456 --cluster check 127.0.0.1:6379
Could not connect to Redis at 172.18.0.12:6379: Host is unreachable

#查看到 172.18.0.16提升为新的master
172.18.0.16:6379 (06295ce4...) -> 1 keys | 5462 slots | 0 slaves.
172.18.0.13:6379 (599f69b4...) -> 0 keys | 5461 slots | 1 slaves.
172.18.0.15:6379 (2f69287f...) -> 1 keys | 5461 slots | 1 slaves.

/data # redis-cli -a 123456 -c
127.0.0.1:6379> cluster nodes
9b6ab0b8f75516d6acd9d566d0d349f1fdd29540 172.18.0.12:6379@16379 master,fail -
1595404533839 1595404532528 2 connected

127.0.0.1:6379> get name
-> Redirected to slot [5798] located at 172.18.0.16:6379
"wang"

172.18.0.16:6379> get title
-> Redirected to slot [2217] located at 172.18.0.15:6379
"cto"

同一个宿主机之间不同网络的容器通信

启动两个容器,一个使用自定义网络,另一个使用默认bridge网络,默认因iptables的隔离规则导致二者无法通信


[root@ubuntu1804 ~]# docker run -it --rm --name test1 alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # ping 172.27.0.2 #无法ping通自定义网络容器
PING 172.27.0.2 (172.27.0.2): 56 data bytes

[root@ubuntu1804 ~]# docker run -it --rm --network test-net --name test2 alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # ping 172.17.0.2 #无法ping 通默认的网络容器
PING 172.17.0.2 (172.17.0.2): 56 data bytes

实战案例 1: 修改iptables实现同一宿主机上的不同网络的容器间通信

#确认开启ip_forward
[root@rocky8 ~]# cat /proc/sys/net/ipv4/ip_forward
1

#默认网络和自定义网络是两个不同的网桥
[root@rocky8 ~]# brctl show
bridge name bridge id STP enabled interfaces
br-63c2670c48f7 8000.024244685368 no vethe6fc5e8
docker0 8000.02426d39e862 no vethe659da8

[root@rocky8 ~]# iptables -vnL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
884 1423K DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
884 1423K DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
427 846K ACCEPT all -- * br-63c2670c48f7 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
20 1040 DOCKER all -- * br-63c2670c48f7 0.0.0.0/0 0.0.0.0/0
437 576K ACCEPT all -- br-63c2670c48f7 !br-63c2670c48f7 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- br-63c2670c48f7 br-63c2670c48f7 0.0.0.0/0 0.0.0.0/0
1399 1817K ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
27 1532 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
542 526K ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
7 492 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
20 1040 ACCEPT tcp -- !br-63c2670c48f7 br-63c2670c48f7 0.0.0.0/0 172.27.0.2 tcp dpt:80

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
437 576K DOCKER-ISOLATION-STAGE-2 all -- br-63c2670c48f7 !br-63c2670c48f7 0.0.0.0/0 0.0.0.0/0
542 526K DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
2860 3768K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * br-63c2670c48f7 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
982 1101K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
2860 3768K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0


[root@rocky8 ~]# iptables-save
# Generated by iptables-save v1.8.5 on Fri Apr 11 17:23:55 2025
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-63c2670c48f7 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-63c2670c48f7 -j DOCKER
-A FORWARD -i br-63c2670c48f7 ! -o br-63c2670c48f7 -j ACCEPT
-A FORWARD -i br-63c2670c48f7 -o br-63c2670c48f7 -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.27.0.2/32 ! -i br-63c2670c48f7 -o br-63c2670c48f7 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-63c2670c48f7 ! -o br-63c2670c48f7 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-63c2670c48f7 -j DROP #注意此行规则
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP #注意此行规则
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Fri Apr 11 17:23:55 2025
# Generated by iptables-save v1.8.5 on Fri Apr 11 17:23:55 2025
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.27.0.0/16 ! -o br-63c2670c48f7 -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.27.0.2/32 -d 172.27.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i br-63c2670c48f7 -j RETURN
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i br-63c2670c48f7 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.27.0.2:80
COMMIT
# Completed on Fri Apr 11 17:23:55 2025

[root@rocky8 ~]# iptables-save > iptables.rule
[root@rocky8 ~]# vim iptables.rule
#修改下面两行的规则
-A DOCKER-ISOLATION-STAGE-2 -o br-63c2670c48f7 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j ACCEPT
#或者执行下面命令
[root@ubuntu1804 ~]# iptables -I DOCKER-ISOLATION-STAGE-2 -j ACCEPT


[root@rocky8 ~]# iptables-restore < iptables.rule


#再次测试,两个容器之间可以相互通信

/ # ping 172.27.0.2
PING 172.27.0.2 (172.27.0.2): 56 data bytes
64 bytes from 172.27.0.2: seq=0 ttl=63 time=0.163 ms
64 bytes from 172.27.0.2: seq=1 ttl=63 time=0.128 ms

实战案例 2: 通过 docker network connect 实现同一个宿主机不同网络的容器间通信

可以使用docker network connect命令实现同一个宿主机不同网络的容器间相互通信

#将CONTAINER连入指定的NETWORK中,使此CONTAINER可以与NETWORK中的其它容器进行通信
docker network connect [OPTIONS] NETWORK CONTAINER

Connect a container to a network
Options:
--alias strings Add network-scoped alias for the container
--driver-opt strings driver options for the network
--ip string IPv4 address (e.g., 172.30.100.104)
--ip6 string IPv6 address (e.g., 2001:db8::33)
--link list Add link to another container
--link-local-ip strings Add a link-local address for the container

#将CONTAINER与指定的NETWORK断开连接,使此CONTAINER无法与该NETWORK中的其它容器进行通信
#如果将容器从自定义的网络删除,将加入默认的网络,即docker0网桥中,获取172.17.0.0/16
#如果将容器从默认的网络docker0删除,将加入none网络

docker network disconnect [OPTIONS] NETWORK CONTAINER

Disconnect a container from a network

Options:
-f, --force Force the container to disconnect from a network
上面案例中test1和test2的容器间默认无法通信
#每个网络中有属于此网络的容器信息
[root@ubuntu1804 ~]# docker network inspect bridge
[
{
"Name": "bridge",
"Id":
"c2f770f19400aa482054a92f2ff6ce54cae2ed45a15c7d98e0959c64dfefd58d",
"Created": "2020-07-22T09:23:20.265208248+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"9bc707b1a810c4bab39a4c0ed3ff5867cc45b21fe8ae6737f2a9d0163ed2c7a9":
{
"Name": "test1",
"EndpointID":
"475bba6925c426158b3c523e07b6773c884d404d82e6c19d5e4a41f54f8856c2",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

#每个网络中有属于此网络的容器信息
[root@ubuntu1804 ~]# docker network inspect test-net
[
{
"Name": "test-net",
"Id":
"00ab0f2d29e82d387755e1bea19532dc279fa134a565e496d308ec62f7edf434",
"Created": "2020-07-22T09:59:09.431393706+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.27.0.0/16",
"Gateway": "172.27.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"c3446876a38b3d7e70ca35429051dea7373643a95689d22f252faedc31f3c427":
{
"Name": "test2",
"EndpointID":
"13fb11baeca7e90abdc9183334315e95df4a55367d3add1472d741a556cb662c",
"MacAddress": "02:42:ac:1b:00:02",
"IPv4Address": "172.27.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
让默认网络中容器test1可以连通自定义网络test-net的容器test2
[root@ubuntu1804 ~]# docker network connect test-net test1
[root@ubuntu1804 ~]# docker network inspect test-net
[
{
"Name": "test-net",
"Id":
"00ab0f2d29e82d387755e1bea19532dc279fa134a565e496d308ec62f7edf434",
"Created": "2020-07-22T09:59:09.431393706+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.27.0.0/16",
"Gateway": "172.27.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"9bc707b1a810c4bab39a4c0ed3ff5867cc45b21fe8ae6737f2a9d0163ed2c7a9":
{
"Name": "test1",
"EndpointID":
"600891a1f0727f0fddcb9c123540d02963a30a54d011554e0dfd1c108ecabdd2",
"MacAddress": "02:42:ac:1b:00:03",
"IPv4Address": "172.27.0.3/16",
"IPv6Address": ""
},
"c3446876a38b3d7e70ca35429051dea7373643a95689d22f252faedc31f3c427":
{
"Name": "test2",
"EndpointID":
"13fb11baeca7e90abdc9183334315e95df4a55367d3add1472d741a556cb662c",
"MacAddress": "02:42:ac:1b:00:02",
"IPv4Address": "172.27.0.2/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]

#在test1容器中可以看到新添加了一个网卡,并且分配了test-net网络的IP信息
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
29: eth1@if30: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:1b:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.27.0.3/16 brd 172.27.255.255 scope global eth1
valid_lft forever preferred_lft forever

#test1可以连接test2容器
/ # ping -c1 172.27.0.2
PING 172.27.0.2 (172.27.0.2): 56 data bytes
64 bytes from 172.27.0.2: seq=0 ttl=64 time=0.100 ms
--- 172.27.0.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.100/0.100/0.100 ms

#在test2容器中没有变化,仍然无法连接test1
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # ping -c1 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
^C
--- 172.17.0.2 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
让自定义网络中容器test2可以连通默认网络的容器test1
#将自定义网络中的容器test2也加入到默认网络中,使之和默认网络中的容器test1通信
[root@ubuntu1804 ~]# docker network connect bridge test2
[root@ubuntu1804 ~]# docker network inspect bridge
[
{
"Name": "bridge",
"Id":
"c2f770f19400aa482054a92f2ff6ce54cae2ed45a15c7d98e0959c64dfefd58d",
"Created": "2020-07-22T09:23:20.265208248+08:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"9bc707b1a810c4bab39a4c0ed3ff5867cc45b21fe8ae6737f2a9d0163ed2c7a9":
{
"Name": "test1",
"EndpointID":
"475bba6925c426158b3c523e07b6773c884d404d82e6c19d5e4a41f54f8856c2",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"c3446876a38b3d7e70ca35429051dea7373643a95689d22f252faedc31f3c427":
{
"Name": "test2",
"EndpointID":
"a049010b37dd5a40c1ff8e8d0b327b70727316dd86b2c69c05231bfd6c985af6",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

#Confirm that container test2 now has an additional interface configured with an IP on the default network
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
valid_lft forever preferred_lft forever
31: eth1@if32: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth1
valid_lft forever preferred_lft forever

#test2 can now reach container test1
/ # ping -c1 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.128 ms
--- 172.17.0.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.128/0.128/0.128 ms

#Inside test1, the container name test2 can be used for communication
/ # ping -c1 test2
PING test2 (172.27.0.2): 56 data bytes
64 bytes from 172.27.0.2: seq=0 ttl=64 time=0.076 ms

#Inside test2, the container name test1 can be used for communication
/ # ping -c1 test1
PING test1 (172.27.0.3): 56 data bytes
64 bytes from 172.27.0.3: seq=0 ttl=64 time=0.075 ms
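The JSON that `docker network inspect` prints can also be processed by scripts. A minimal illustrative Python sketch, with a trimmed-down version of the inspect output above embedded as a string (IDs shortened for readability; field names match the real output):

```python
import json

# Trimmed-down `docker network inspect bridge` output (illustrative)
inspect_output = """
[
  {
    "Name": "bridge",
    "Containers": {
      "9bc707b1a810": {"Name": "test1", "IPv4Address": "172.17.0.2/16"},
      "c3446876a38b": {"Name": "test2", "IPv4Address": "172.17.0.3/16"}
    }
  }
]
"""

networks = json.loads(inspect_output)
# Build a container-name -> IP map, stripping the CIDR prefix length
ip_map = {c["Name"]: c["IPv4Address"].split("/")[0]
          for c in networks[0]["Containers"].values()}
print(ip_map)  # {'test1': '172.17.0.2', 'test2': '172.17.0.3'}
```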
Disconnecting containers in different networks
#Disconnect test1 from the other containers in the test-net network
[root@ubuntu1804 ~]# docker network disconnect test-net test1

#Container test1 can no longer reach test2
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
27: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # ping -c1 172.27.0.2
PING 172.27.0.2 (172.27.0.2): 56 data bytes
--- 172.27.0.2 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

#Container test2 can still reach test1
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
valid_lft forever preferred_lft forever
31: eth1@if32: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 brd 172.17.255.255 scope global eth1
valid_lft forever preferred_lft forever

/ # ping -c1 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.085 ms

#Disconnect test2 from the other containers in the default network
[root@ubuntu1804 ~]# docker network disconnect bridge test2

#Container test2 can no longer reach test1
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
valid_lft forever preferred_lft forever

/ # ping -c1 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
--- 172.17.0.2 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

Cross-Host Container Interconnection

Containers on the same host can communicate with each other directly, but how can a container reach a container on another host?

Method 1: interconnect containers across hosts via bridging

#Run the following on both hosts
[root@ubuntu1804 ~]# apt -y install bridge-utils
[root@ubuntu1804 ~]# brctl addif docker0 eth0

#Start one container on each host; make sure their IPs differ, then test access between them
#Container on the first host
[root@ubuntu1804 ~]# docker run -it --name b1 busybox
/ # hostname -i
172.17.0.2

/ # httpd -h /data/html/ -f -v
[::ffff:172.17.0.3]:42488:response:200


#Container on the second host
[root@ubuntu1804 ~]# docker run -it --name b2 busybox
/ # hostname -i
172.17.0.3

/ # wget -qO - http://172.17.0.2
httpd website in busybox
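Bridging works here because both hosts' containers end up on the same L2 segment, 172.17.0.0/16, which is also why their IPs must be kept distinct. A quick illustrative check with Python's standard `ipaddress` module:

```python
import ipaddress

# The shared bridged subnet and the two container addresses from the example
subnet = ipaddress.ip_network("172.17.0.0/16")
b1 = ipaddress.ip_address("172.17.0.2")
b2 = ipaddress.ip_address("172.17.0.3")

# Same subnet -> direct L2 reachability; distinct IPs -> no address collision
print(b1 in subnet, b2 in subnet, b1 != b2)  # True True True
```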

Method 2: interconnect containers across hosts via NAT

How Docker cross-host interconnection works

Cross-host interconnection means that containers on host A can reach containers on host B. The prerequisite is that the hosts themselves can communicate over the network; the containers then reach each other through their hosts.

Implementation principle: adding a network route on each host is enough to let containers on host A reach containers on host B.

Note: this approach only suits small environments; for complex or large networks, consider Google's open-source k8s for interconnection.

Change the container subnet on each host

Docker's default subnet is 172.17.0.0/16, and it is identical on every host. Routing therefore requires first making the container subnets differ between hosts.
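This non-overlap requirement can be checked with Python's standard `ipaddress` module; an illustrative sketch using the subnets configured below (192.168.100.0/24 and 192.168.200.0/24):

```python
import ipaddress

host_a = ipaddress.ip_network("192.168.100.0/24")   # host A's container subnet
host_b = ipaddress.ip_network("192.168.200.0/24")   # host B's container subnet
default = ipaddress.ip_network("172.17.0.0/16")     # Docker's default subnet

# Routing between hosts requires disjoint container subnets
print(host_a.overlaps(host_b))    # False: safe to route between them
print(default.overlaps(default))  # True: identical default subnets cannot be routed
```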

Change the subnet on the first host, A
[root@ubuntu1804 ~]# vim /etc/docker/daemon.json 
[root@ubuntu1804 ~]# cat /etc/docker/daemon.json
{
"bip": "192.168.100.1/24",
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}

[root@ubuntu1804 ~]# systemctl restart docker
[root@ubuntu1804 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
group default qlen 1000
link/ether 00:0c:29:6b:54:d3 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.101/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe6b:54d3/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:e0:ef:72:05 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.1/24 brd 192.168.100.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:e0ff:feef:7205/64 scope link
valid_lft forever preferred_lft forever


[root@ubuntu1804 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
Change the subnet on the second host, B
[root@ubuntu1804 ~]# vim /etc/docker/daemon.json 
{
"bip": "192.168.200.1/24",
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}

[root@ubuntu1804 ~]# systemctl restart docker
[root@ubuntu1804 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP
group default qlen 1000
link/ether 00:0c:29:01:f3:0c brd ff:ff:ff:ff:ff:ff
inet 10.0.0.102/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe01:f30c/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:e8:c0:a4:d8 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.1/24 brd 192.168.200.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:e8ff:fec0:a4d8/64 scope link
valid_lft forever preferred_lft forever


[root@ubuntu1804 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.200.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0

Start one container on each host

Start container server1 on the first host

[root@ubuntu1804 ~]# docker run -it --name server1 --rm alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:c0:a8:64:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.100.2/24 brd 192.168.100.255 scope global eth0
valid_lft forever preferred_lft forever


/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.100.1 0.0.0.0 UG 0 0 0 eth0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

Start container server2 on the second host

[root@ubuntu1804 ~]# docker run -it --name server2 --rm alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:c0:a8:c8:02 brd ff:ff:ff:ff:ff:ff
inet 192.168.200.2/24 brd 192.168.200.255 scope global eth0
valid_lft forever preferred_lft forever


/ # route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.200.1 0.0.0.0 UG 0 0 0 eth0
192.168.200.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

Container server1 on the first host cannot yet reach server2 on the second host

[root@ubuntu1804 ~]# docker run -it --name server1 --rm alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:0a:64:00:02 brd ff:ff:ff:ff:ff:ff
inet 10.100.0.2/16 brd 10.100.255.255 scope global eth0
valid_lft forever preferred_lft forever


/ # ping -c1 192.168.200.2
PING 192.168.200.2 (192.168.200.2): 56 data bytes
--- 192.168.200.2 ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss

Add static routes and iptables rules

On each host, add a static route whose gateway points to the other host's IP

Add the static route and iptables rule on the first host
#Add the route
[root@ubuntu1804 ~]# route add -net 192.168.200.0/24 gw 10.0.0.102

#Modify the iptables rules
[root@ubuntu1804 ~]# iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT
#Or change the default FORWARD policy
[root@ubuntu1804 ~]# iptables -P FORWARD ACCEPT
Add the static route and iptables rule on the second host
#Add the route
[root@ubuntu1804 ~]#route add -net 192.168.100.0/24 gw 10.0.0.101

#Modify the iptables rules
[root@ubuntu1804 ~]# iptables -A FORWARD -s 10.0.0.0/24 -j ACCEPT
#Or change the default FORWARD policy
[root@ubuntu1804 ~]#iptables -P FORWARD ACCEPT
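The static routes added above take effect through ordinary longest-prefix matching. The lookup the kernel performs can be sketched with Python's standard `ipaddress` module (the route table below is a hand-written stand-in for host A's real table, not read from the system):

```python
import ipaddress

# Simplified routing table of host A after `route add` (destination -> gateway)
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "10.0.0.2",           # default route
    ipaddress.ip_network("10.0.0.0/24"): None,                # directly connected
    ipaddress.ip_network("192.168.200.0/24"): "10.0.0.102",   # added static route
}

def lookup(ip):
    """Return the gateway of the longest matching prefix."""
    dest = ipaddress.ip_address(ip)
    matches = [net for net in routes if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("192.168.200.2"))  # 10.0.0.102 -> via host B
print(lookup("8.8.8.8"))        # 10.0.0.2   -> default gateway
```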

Test container interconnection across hosts

Container server1 on host A accesses container server2 on host B, while capturing packets with tcpdump on host B

/ # ping -c1 192.168.200.2
PING 192.168.200.2 (192.168.200.2): 56 data bytes
64 bytes from 192.168.200.2: seq=0 ttl=62 time=1.022 ms
--- 192.168.200.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.022/1.022/1.022 ms

#The capture on host B shows
[root@ubuntu1804 ~]# tcpdump -i eth0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
16:57:37.912925 IP 10.0.0.101 > 192.168.200.2: ICMP echo request, id 2560, seq 0, length 64
16:57:37.913208 IP 192.168.200.2 > 10.0.0.101: ICMP echo reply, id 2560, seq 0, length 64

Container server2 on host B accesses container server1 on host A, while capturing packets with tcpdump on host A

/ # ping -c1 192.168.100.2
PING 192.168.100.2 (192.168.100.2): 56 data bytes
64 bytes from 192.168.100.2: seq=0 ttl=62 time=1.041 ms
--- 192.168.100.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 1.041/1.041/1.041 ms


#The capture on host A shows
[root@ubuntu1804 ~]# tcpdump -i eth0 -nn icmp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
16:59:11.775784 IP 10.0.0.102 > 192.168.100.2: ICMP echo request, id 2560, seq 0, length 64
16:59:11.776113 IP 192.168.100.2 > 10.0.0.102: ICMP echo reply, id 2560, seq 0, length 64

Create a third container to test

#On the second host B, start a web-serving nginx container, server3
#Note: no port mapping is needed
[root@ubuntu1804 ~]# docker run -d --name server3 centos7-nginx:1.6.1
69fc554fd00e4f7880c139283b64f2701feafb91047b217906b188c1f461b699

[root@ubuntu1804 ~]# docker exec -it server3 bash
[root@69fc554fd00e /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.200.3 netmask 255.255.255.0 broadcast 192.168.200.255
ether 02:42:c0:a8:c8:03 txqueuelen 0 (Ethernet)
RX packets 8 bytes 656 (656.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

#Accessing server3's page from server1 succeeds
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
state UP
link/ether 02:42:0a:64:00:02 brd ff:ff:ff:ff:ff:ff
inet 10.100.0.2/16 brd 10.100.255.255 scope global eth0
valid_lft forever preferred_lft forever


/ # wget -qO - http://192.168.200.3/app
Test Page in app

#The access log in container server3 shows the request coming from the first host, not from container server1
[root@69fc554fd00e /]# tail -f /apps/nginx/logs/access.log
10.0.0.101 - - [02/Feb/2020:09:02:00 +0000] "GET /app HTTP/1.1" 301 169 "-" "Wget"


#Capturing 80/tcp packets with tcpdump shows the following
[root@ubuntu1804 ~]# tcpdump -i eth0 -nn port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode

17:03:35.885768 IP 192.168.200.3.80 > 10.0.0.101.43578: Flags [S.], seq
2298407060, ack 3672256869, win 28960, options [mss 1460,sackOK,TS val
3131173298 ecr 4161963574,nop,wscale 7], length 0

Docker Compose: the Single-Host Container Orchestration Tool

Introduction to Docker Compose


When starting many containers on a host, doing everything by hand is tedious and error-prone. In that case the single-host orchestration tool docker-compose is recommended.

docker-compose is a single-host orchestration service for Docker: a tool for managing multiple containers. For example, it can resolve dependencies between containers: starting an nginx front end that calls a back-end tomcat requires starting tomcat first, and the tomcat container in turn depends on a database that must be started before it. docker-compose can resolve such nested dependencies, and it can replace manual docker commands for creating, starting, and stopping containers.

By analogy: if the docker command is like a Linux command, then docker compose is like a shell script that batches container operations automatically for automated container management. Or, if the docker command corresponds to an ansible command, then a docker compose file corresponds to an ansible-playbook YAML file.

docker-compose is an official open-source Docker project for quickly orchestrating groups of Docker containers. It organizes what it manages into three layers: project, service, and container.

GitHub: https://github.com/docker/compose

Official docs: https://docs.docker.com/compose/

Installation and Preparation

Install Docker Compose

Method 1: online installation via pip

The python-pip package installs the pip command, a Python package installer similar to Ubuntu's apt or Red Hat's yum, except that pip only installs Python packages; pip is available on many operating systems.

This method currently installs a fairly recent version, docker_compose-1.25.3, and is recommended.

Ubuntu:   
# apt update
# apt install -y python-pip

CentOS:
# yum install epel-release
# yum install -y python-pip
# pip install --upgrade pip

Example: install docker-compose with python3

#Configure a pip mirror for faster downloads
[root@ubuntu2004 ~]# mkdir ~/.pip
[root@ubuntu2004 ~]# cat > ~/.pip/pip.conf <<-EOF
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
EOF

[root@ubuntu2004 ~]# apt -y install python3-pip
[root@ubuntu2004 ~]# pip3 install --upgrade pip
[root@ubuntu2004 ~]# pip3 install docker-compose
[root@ubuntu2004 ~]# docker-compose --version
docker-compose version 1.27.4, build unknown

#Install docker-compose with python2
[root@ubuntu1804 ~]# apt -y install python-pip
[root@ubuntu1804 ~]# pip install docker-compose
[root@ubuntu1804 ~]# docker-compose --version
docker-compose version 1.25.3, build unknown

Method 2: online installation from the distribution's package repository

This method installs an older version and is not recommended

#Install on Ubuntu; this is the default version
[root@ubuntu1804 ~]# apt -y install docker-compose
[root@ubuntu1804 ~]# docker-compose --version
docker-compose version 1.17.1, build unknown


#Install on CentOS 7; requires the EPEL repository
[root@centos7 ~]# yum -y install docker-compose
[root@centos7 ~]# docker-compose --version
docker-compose version 1.18.0, build 8dd22a9

Method 3: offline installation, downloading the matching release from GitHub or a domestic mirror site

See: https://github.com/docker/compose/releases

This method makes it easy to pin a specific version and is recommended, though downloads can be slow

[root@ubuntu1804 ~]# curl -L https://github.com/docker/compose/releases/download/1.25.3/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

[root@rocky8 ~]# curl -L https://github.com/docker/compose/releases/download/v2.34.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

#Download from a domestic mirror site
[root@ubuntu1804 ~]# curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.3/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

[root@rocky8 ~]# curl -L https://mirrors.aliyun.com/docker-toolbox/linux/compose/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

[root@ubuntu1804 ~]# chmod +x /usr/local/bin/docker-compose

Check the command syntax

Official docs: https://docs.docker.com/compose/reference/

docker-compose --help
Define and run multi-container applications with Docker.
Usage:
docker-compose [-f <arg>...] [options] [COMMAND] [ARGS...]
docker-compose -h|--help


#Option descriptions:
-f, --file FILE #Specify an alternate Compose template file (default: docker-compose.yml)
-p, --project-name NAME #Specify the project name (default: the name of the current directory)
--verbose #Show more output
--log-level LEVEL #Set the log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--no-ansi #Do not print ANSI control characters
-v, --version #Show version and exit


#The following are subcommands; run them from the directory containing docker-compose.yml|yaml
config -q #Validate the current configuration; prints nothing when there are no errors
up #Create and start containers
build #Build images
bundle #Generate a JSON-format Docker Bundle backup file, named after the current directory, from the current docker compose file
create #Create services
down #Stop and remove containers and networks (images and volumes only with extra options)
events #Receive real-time events from containers; a JSON log format can be specified
exec #Run a command inside a specified container
help #Show help
images #Show image information
kill #Forcibly terminate running containers
logs #View container logs
pause #Pause services
port #Show port mappings
ps #List containers
pull #Re-pull images; needed after an image has changed
push #Push images
restart #Restart services
rm #Remove stopped service containers
run #Run a one-off container
scale #Set the number of containers for a service (deprecated in newer versions)
start #Start services
stop #Stop services
top #Show the running processes of containers
unpause #Unpause services

Example:

[root@rocky8 ~]# docker-compose -v
Docker Compose version v2.34.0

The docker compose file format

Official docs: https://docs.docker.com/compose/compose-file/

A docker compose file is in YAML format, so the leading indentation is strictly significant.

By default the docker-compose command looks for a file named docker-compose.yml in the current directory, so it is usual to cd into the directory containing docker-compose.yml before running docker-compose commands.

The docker compose file format has gone through many versions, each with differences in syntax and format; see the table below

Compose file format Docker Engine release
3.7 18.06.0+
3.6 18.02.0+
3.5 17.12.0+
3.4 17.09.0+
3.3 17.06.0+
3.2 17.04.0+
3.1 1.13.1+
3.0 1.13.0+
2.4 17.12.0+
2.3 17.06.0+
2.2 1.13.0+
2.1 1.12.0+
2.0 1.10.0+
1.0 1.9.1+

Given the many docker compose versions, the following concrete examples demonstrate how to use docker compose.

Starting a single container with docker compose

Note: docker must be installed before using Docker compose

Create the docker compose file

A docker compose file can live in any directory; create a configuration file named docker-compose.yml, and mind the indentation

[root@rocky8 ~]# docker-compose --version
Docker Compose version v2.34.0

[root@rocky8 ~]# mkdir /data/docker-compose
[root@rocky8 ~]# cd /data/docker-compose
[root@rocky8 docker-compose]# vim docker-compose.yml
[root@rocky8 docker-compose]# cat docker-compose.yml
services:
  nginx-web:
    image: nginx-centos7:1.26.3
    container_name: nginx-web
    expose:
      - 80
      - 443
    ports:
      - "80:80"
      - "443:443"
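The short `ports` syntax "host:container" used above can be parsed mechanically; a minimal illustrative sketch (the helper name `parse_ports` is ours, not part of compose):

```python
def parse_ports(entries):
    """Parse compose short-syntax port mappings like "80:80"
    into (host_port, container_port) tuples."""
    mappings = []
    for entry in entries:
        host, container = str(entry).split(":", 1)
        mappings.append((int(host), int(container)))
    return mappings

print(parse_ports(["80:80", "443:443"]))  # [(80, 80), (443, 443)]
```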

Check the configuration and validate the format

[root@rocky8 docker-compose]# docker-compose config 
name: docker-compose
services:
  nginx-web:
    container_name: nginx-web
    expose:
      - "80"
      - "443"
    image: nginx-centos7:1.26.3
    networks:
      default: null
    ports:
      - mode: ingress
        target: 80
        published: "80"
        protocol: tcp
      - mode: ingress
        target: 443
        published: "443"
        protocol: tcp
networks:
  default:
    name: docker-compose_default

[root@rocky8 docker-compose]# docker-compose config -q

#Deliberately break the docker-compose file format
[root@rocky8 docker-compose]# cat docker-compose.yml
service: #missing the trailing s
  nginx-web:
    image: nginx-centos7:1.26.3
    container_name: nginx-web
    expose:
      - 80
      - 443
    ports:
      - "80:80"
      - "443:443"

[root@rocky8 docker-compose]# docker-compose config
validating /data/docker-compose/docker-compose.yml: (root) Additional property service is not allowed

Start the containers

**Note: this must be executed in the directory containing the docker compose file**

#Start in the foreground

##########################################################################
[root@ubuntu1804 docker-compose]# docker-compose up
Pulling service-nginx-web (harbor.wang.org/example/nginx-centos7-base:1.6.1)...
ERROR: Get https://harbor.wang.org/v2/: dial tcp 10.0.0.102:443: connect:
connection refused

[root@ubuntu1804 docker-compose]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry harbor.wang.org

[root@ubuntu1804 docker-compose]# systemctl daemon-reload
[root@ubuntu1804 docker-compose]# systemctl restart docker
##########################################################################

[root@rocky8 docker-compose]# docker-compose up
[+] Running 2/2
✔ Network docker-compose_default Created 0.1s
✔ Container nginx-web Create... 0.0s
Attaching to nginx-web

#The above runs in the foreground and does not return

Verify the result of docker compose

#The command above runs in the foreground, so open another terminal to check the result
[root@rocky8 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e2629c116bc4 nginx-centos7:1.26.3 "nginx -g 'daemon of…" 3 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp nginx-web

[root@rocky8 ~]# docker-compose ps
no configuration file provided: not found
[root@rocky8 ~]# cd /data/docker-compose/

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 4 minutes ago Up 4 minutes 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp

[root@rocky8 docker-compose]# curl 127.0.0.1
Test page in app

[root@rocky8 docker-compose]# docker-compose images
CONTAINER REPOSITORY TAG IMAGE ID SIZE
nginx-web nginx-centos7 1.26.3 08c66ca1868e 405MB

[root@rocky8 docker-compose]# docker-compose exec nginx-web bash
[root@e2629c116bc4 /]# tail /apps/nginx/logs/access.log
172.18.0.1 - - [12/Apr/2025:03:11:22 +0000] "GET / HTTP/1.1" 200 17 "-" "curl/7.61.1"
172.18.0.1 - - [12/Apr/2025:03:12:55 +0000] "GET /app/ HTTP/1.1" 404 153 "-" "curl/7.61.1"
172.18.0.1 - - [12/Apr/2025:03:12:58 +0000] "GET / HTTP/1.1" 200 17 "-" "curl/7.61.1"
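Access-log lines like those above can be dissected with a simple regular expression; an illustrative Python sketch extracting client IP, method, path, and status (the pattern is ours and simplified, not nginx's official log format definition):

```python
import re

# Simplified pattern for: IP - - [time] "METHOD /path HTTP/x.y" status ...
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3})')

line = ('172.18.0.1 - - [12/Apr/2025:03:11:22 +0000] '
        '"GET / HTTP/1.1" 200 17 "-" "curl/7.61.1"')
m = LOG_RE.match(line)
ip, method, path, status = m.group(1), m.group(2), m.group(3), int(m.group(4))
print(ip, method, path, status)  # 172.18.0.1 GET / 200
```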

Stop the foreground run

#Press Ctrl+C to stop the containers
Gracefully stopping... (press Ctrl+C again to force)
[+] Stopping 1/1
✔ Container nginx-web Stopped 0.3s

[root@rocky8 docker-compose]# docker-compose ps -a
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 7 minutes ago Exited (0) 23 seconds ago

[root@rocky8 docker-compose]# docker-compose start
[+] Running 1/1
✔ Container nginx-web Started 0.4s


[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 8 minutes ago Up 5 seconds 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp


#Kill the containers
[root@rocky8 docker-compose]# docker-compose kill
[+] Killing 1/1
✔ Container nginx-web Killed 0.1s

[root@rocky8 docker-compose]# docker-compose ps -a
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 8 minutes ago Exited (137) 6 seconds ago

Remove containers

[root@rocky8 docker-compose]# docker-compose ps -a
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 8 minutes ago Exited (137) 6 seconds ago

#Remove only stopped containers
[root@rocky8 docker-compose]# docker-compose rm
? Going to remove nginx-web Yes
[+] Removing 1/1
✔ Container nginx-web Removed 0.0s

[root@rocky8 docker-compose]# docker-compose up -d
[+] Running 1/1
✔ Container nginx-web Started 0.4s

[root@rocky8 docker-compose]# docker-compose rm
No stopped containers


#Stop and remove the containers and the network
[root@rocky8 docker-compose]# docker-compose down
[+] Running 2/2
✔ Container nginx-web Remove... 0.2s
✔ Network docker-compose_default Removed 0.1s

[root@rocky8 docker-compose]# docker-compose ps -a
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS

#docker-compose images now lists nothing, since the project no longer has any containers
[root@rocky8 docker-compose]# docker-compose images
CONTAINER REPOSITORY TAG IMAGE ID SIZE

Run in the background

[root@rocky8 docker-compose]# docker-compose up -d
[+] Running 2/2
✔ Network docker-compose_default Created 0.1s
✔ Container nginx-web Starte... 0.5s

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 9 seconds ago Up 8 seconds 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp

[root@rocky8 docker-compose]# curl 127.0.0.1
Test page in app

Stop, start, and view events

[root@rocky8 docker-compose]# docker-compose stop
[+] Stopping 1/1
✔ Container nginx-web Stopped 0.2s

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS

[root@rocky8 docker-compose]# docker-compose ps -a
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web About a minute ago Exited (0) 8 seconds ago

[root@rocky8 docker-compose]# docker-compose start
[+] Running 1/1
✔ Container nginx-web Started 0.4s

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web About a minute ago Up 3 seconds 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp

[root@rocky8 docker-compose]# docker-compose restart
[+] Restarting 1/1
✔ Container nginx-web Started 0.5s

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 2 minutes ago Up 4 seconds 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp

#While running the operations above, open another terminal at the same time to watch the events
[root@rocky8 docker-compose]# docker-compose events
2025-04-12 11:23:46.851655 container kill d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde (name=nginx-web, image=nginx-centos7:1.26.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainers=wshuaiqing.cn, org.label-schema.build-date=20191024, signal=15, org.label-schema.name=CentOS Base Image)

2025-04-12 11:23:46.991651 container stop d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde (org.label-schema.name=CentOS Base Image, org.label-schema.vendor=CentOS, maintainers=wshuaiqing.cn, image=nginx-centos7:1.26.3, name=nginx-web, org.label-schema.build-date=20191024, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)

2025-04-12 11:23:46.993807 container die d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde (org.label-schema.schema-version=1.0, image=nginx-centos7:1.26.3, org.label-schema.build-date=20191024, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainers=wshuaiqing.cn, execDuration=51, exitCode=0, name=nginx-web, org.label-schema.name=CentOS Base Image)

2025-04-12 11:23:47.386300 container start d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde (org.label-schema.build-date=20191024, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainers=wshuaiqing.cn, org.label-schema.name=CentOS Base Image, image=nginx-centos7:1.26.3, name=nginx-web)

2025-04-12 11:23:47.386409 container restart d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde (maintainers=wshuaiqing.cn, org.label-schema.vendor=CentOS, name=nginx-web, org.label-schema.name=CentOS Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, image=nginx-centos7:1.26.3, org.label-schema.build-date=20191024)

#Show events in JSON format
[root@rocky8 docker-compose]# docker-compose events --json
{"action":"kill","attributes":{"image":"nginx-centos7:1.26.3","maintainers":"wshuaiqing.cn","name":"nginx-web","org.label-schema.build-date":"20191024","org.label-schema.license":"GPLv2","org.label-schema.name":"CentOS Base Image","org.label-schema.schema-version":"1.0","org.label-schema.vendor":"CentOS","signal":"15"},"id":"d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde","service":"nginx-web","time":"2025-04-12T11:24:20.453557731+08:00","type":"container"}
{"action":"stop","attributes":{"image":"nginx-centos7:1.26.3","maintainers":"wshuaiqing.cn","name":"nginx-web","org.label-schema.build-date":"20191024","org.label-schema.license":"GPLv2","org.label-schema.name":"CentOS Base Image","org.label-schema.schema-version":"1.0","org.label-schema.vendor":"CentOS"},"id":"d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde","service":"nginx-web","time":"2025-04-12T11:24:20.590896139+08:00","type":"container"}
{"action":"die","attributes":{"execDuration":"33","exitCode":"0","image":"nginx-centos7:1.26.3","maintainers":"wshuaiqing.cn","name":"nginx-web","org.label-schema.build-date":"20191024","org.label-schema.license":"GPLv2","org.label-schema.name":"CentOS Base Image","org.label-schema.schema-version":"1.0","org.label-schema.vendor":"CentOS"},"id":"d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde","service":"nginx-web","time":"2025-04-12T11:24:20.592716819+08:00","type":"container"}
{"action":"start","attributes":{"image":"nginx-centos7:1.26.3","maintainers":"wshuaiqing.cn","name":"nginx-web","org.label-schema.build-date":"20191024","org.label-schema.license":"GPLv2","org.label-schema.name":"CentOS Base Image","org.label-schema.schema-version":"1.0","org.label-schema.vendor":"CentOS"},"id":"d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde","service":"nginx-web","time":"2025-04-12T11:24:20.95955644+08:00","type":"container"}
{"action":"restart","attributes":{"image":"nginx-centos7:1.26.3","maintainers":"wshuaiqing.cn","name":"nginx-web","org.label-schema.build-date":"20191024","org.label-schema.license":"GPLv2","org.label-schema.name":"CentOS Base Image","org.label-schema.schema-version":"1.0","org.label-schema.vendor":"CentOS"},"id":"d16880d07a9f9781d0c1afe8e0344983e81e8d89a5715aee302e4e9e4ff18dde","service":"nginx-web","time":"2025-04-12T11:24:20.959621363+08:00","type":"container"}
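Each `--json` line is a standalone JSON object, so event streams can be consumed line by line; a minimal illustrative sketch (attributes truncated from the real output above):

```python
import json

# One line of `docker-compose events --json` output (attributes truncated)
line = ('{"action":"restart","attributes":{"image":"nginx-centos7:1.26.3",'
        '"name":"nginx-web"},"id":"d16880d07a9f","service":"nginx-web",'
        '"time":"2025-04-12T11:24:20.959621363+08:00","type":"container"}')

event = json.loads(line)
print(event["action"], event["service"], event["attributes"]["name"])
```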

Pause and unpause

[root@rocky8 docker-compose]# docker-compose pause
[+] Pausing 1/1
✔ Container nginx-web Paused 0.0s

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 5 minutes ago Up 2 minutes (Paused) 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp

[root@rocky8 docker-compose]# curl -m 1 127.0.0.1
curl: (28) Operation timed out after 1002 milliseconds with 0 bytes received

[root@rocky8 docker-compose]# docker-compose unpause
[+] Running 1/1
✔ Container nginx-web Unpaused 0.0s

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 6 minutes ago Up 3 minutes 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp

[root@rocky8 docker-compose]# curl -m 1 127.0.0.1
Test page in app

从 docker compose 启动多个容器

编辑 docker-compose 文件并使用数据卷

注意: 对于同一个文件,数据卷中文件的优先级比镜像内文件的优先级高

[root@rocky8 docker-compose]# cat docker-compose.yml 
services:
  nginx-web:
    image: nginx-centos7:1.26.3
    container_name: nginx-web
    volumes:
      - /data/nginx:/apps/nginx/html   #指定数据卷,将宿主机/data/nginx挂载到容器/apps/nginx/html
    expose:
      - 80
      - 443
    ports:
      - "80:80"
      - "443:443"

  tomcat-app1:
    image: tomcat-web:app1
    container_name: tomcat-app1
    expose:
      - 8080
    ports:
      - "8081:8080"

  tomcat-app2:
    image: tomcat-web:app2
    container_name: tomcat-app2
    expose:
      - 8080
    ports:
      - "8082:8080"

[root@rocky8 docker-compose]# docker-compose config -q

#在宿主机准备nginx测试页面文件
[root@rocky8 docker-compose]# mkdir /data/nginx
[root@rocky8 docker-compose]# echo Docker compose test page > /data/nginx/index.html

启动容器并验证结果

[root@rocky8 docker-compose]# docker-compose up -d
[+] Running 3/3
✔ Container tomcat-app2 Started 1.2s
✔ Container nginx-web Started 1.1s
✔ Container tomcat-app1 Started 1.1s

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
nginx-web nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 19 seconds ago Up 17 seconds 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp
tomcat-app1 tomcat-web:app1 "/apps/tomcat/bin/ru…" tomcat-app1 19 seconds ago Up 17 seconds 8009/tcp, 0.0.0.0:8081->8080/tcp, [::]:8081->8080/tcp
tomcat-app2 tomcat-web:app2 "/apps/tomcat/bin/ru…" tomcat-app2 19 seconds ago Up 17 seconds 8009/tcp, 0.0.0.0:8082->8080/tcp, [::]:8082->8080/tcp


[root@rocky8 ~]# curl 127.0.0.1
Docker compose test page

[root@rocky8 ~]# curl 127.0.0.1:8081/app/
Tomcat Page in app1

[root@rocky8 ~]# curl 127.0.0.1:8082/app/
Tomcat Page in app2

指定同时启动容器的数量

[root@rocky8 docker-compose]# cat docker-compose.yml
services:
  nginx-web:
    image: nginx-centos7:1.26.3
    #container_name: nginx-web   #同时启动多个同一镜像的容器,不要指定容器名称,否则会冲突
    expose:
      - 80
      - 443
    #ports:   #同时启动多个同一镜像的容器,不要指定端口号,否则会冲突
    #  - "80:80"
    #  - "443:443"

  #再加一个service
  tomcat-app1:
    image: tomcat-web:app1


[root@rocky8 docker-compose]# docker-compose ps -a
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS

[root@rocky8 docker-compose]# docker-compose up -d --scale nginx-web=2
[+] Running 4/4
✔ Network docker-compose_default Created 0.1s
✔ Container docker-compose-tomcat-app1-1 Started 0.6s
✔ Container docker-compose-nginx-web-2 Started 1.3s
✔ Container docker-compose-nginx-web-1 Started 0.6s

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
docker-compose-nginx-web-1 nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 9 seconds ago Up 7 seconds 80/tcp, 443/tcp
docker-compose-nginx-web-2 nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 9 seconds ago Up 7 seconds 80/tcp, 443/tcp
docker-compose-tomcat-app1-1 tomcat-web:app1 "/apps/tomcat/bin/ru…" tomcat-app1 9 seconds ago Up 7 seconds 8009/tcp, 8080/tcp

[root@rocky8 docker-compose]# docker-compose up -d --scale nginx-web=3 --scale tomcat-app1=2
[+] Running 5/5
✔ Container docker-compose-tomcat-app1-1 Running 0.0s
✔ Container docker-compose-nginx-web-1 Running 0.0s
✔ Container docker-compose-nginx-web-2 Running 0.0s
✔ Container docker-compose-nginx-web-3 Started 0.7s
✔ Container docker-compose-tomcat-app1-2 Started 0.8s

[root@rocky8 docker-compose]# docker-compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
docker-compose-nginx-web-1 nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web About a minute ago Up About a minute 80/tcp, 443/tcp
docker-compose-nginx-web-2 nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web About a minute ago Up About a minute 80/tcp, 443/tcp
docker-compose-nginx-web-3 nginx-centos7:1.26.3 "nginx -g 'daemon of…" nginx-web 7 seconds ago Up 6 seconds 80/tcp, 443/tcp
docker-compose-tomcat-app1-1 tomcat-web:app1 "/apps/tomcat/bin/ru…" tomcat-app1 About a minute ago Up About a minute 8009/tcp, 8080/tcp
docker-compose-tomcat-app1-2 tomcat-web:app1 "/apps/tomcat/bin/ru…" tomcat-app1 7 seconds ago Up 6 seconds 8009/tcp, 8080/tcp
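上面输出中的容器名遵循 compose v2 扩容时的默认命名规则: 项目名(默认为 compose 文件所在目录名)-服务名-序号。下面用一个 Python 小示例演示该命名规则(仅为示意,并非 docker-compose 的源码实现):

```python
# 示意: docker compose v2 扩容后的默认容器命名规则 <项目名>-<服务名>-<序号>
# 项目名默认取 compose 文件所在目录名,此处以 docker-compose 目录为例
def container_names(project: str, service: str, replicas: int) -> list[str]:
    """按 compose 的默认规则生成 service 扩容后的容器名列表"""
    return [f"{project}-{service}-{i}" for i in range(1, replicas + 1)]

print(container_names("docker-compose", "nginx-web", 3))
# ['docker-compose-nginx-web-1', 'docker-compose-nginx-web-2', 'docker-compose-nginx-web-3']
```

由此也可以理解: 扩容时不能指定 container_name,因为 compose 需要按序号自动生成多个不重名的容器名。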

扩容和缩容

注意: 新版中scale 命令已废弃

推荐使用docker-compose up --scale

#扩容
[root@rocky8 docker-compose]# docker-compose scale nginx-web=5

[root@rocky8 docker-compose]# docker-compose up --scale nginx-web=5


#缩容为0,即删除容器
[root@rocky8 docker-compose]# docker-compose scale nginx-web=0

[root@rocky8 docker-compose]# docker-compose up --scale nginx-web=0
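除了命令行的 --scale 选项,较新版本的 docker compose 也支持在 compose 文件中通过 deploy.replicas 指定副本数(该写法是否生效取决于所用 compose 版本,以下仅为示意):

```yaml
services:
  nginx-web:
    image: nginx-centos7:1.26.3
    deploy:
      replicas: 3   #启动3个副本,同样不要指定container_name和宿主机端口,否则会冲突
```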

实战案例: 实现 Wordpress 应用

services:

  db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    environment:
      - MYSQL_DATABASE=wordpress
      - MYSQL_ROOT_PASSWORD=123456
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=123456
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - wordpress-network

  wordpress:
    depends_on:
      - db
    image: wordpress:5.8.3-apache
    container_name: wordpress
    restart: unless-stopped
    ports:
      - "80:80"
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=123456
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - wordpress:/var/www/html
    networks:
      - wordpress-network

volumes:
  wordpress:
  dbdata:

networks:
  wordpress-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.30.0.0/16

实战案例: 搭建运维平台 Spug

Spug是面向中小型企业设计的轻量级无Agent的自动化运维平台,整合了主机管理、主机批量执行、主机在线终端、文件在线上传下载、应用发布部署、在线任务计划、配置中心、监控、报警等一系列功 能。

Spug是上海时巴克科技有限公司旗下的开源运维项目,公司旗下现有产品「Spug开源运维平台」「Spug推送助手」,公司专注为中小企业服务。

参考链接

https://spug.cc/docs/install-docker

范例: 基于 docker-compose 部署 Spug

#安装docker和docker-compose


#创建docker-compose.yml文件
#vi docker-compose.yml

version: "3.9"
services:
  db:
    image: mariadb:10.8
    container_name: spug-db
    restart: always
    command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - /data/spug/mysql:/var/lib/mysql
    environment:
      - MYSQL_DATABASE=spug
      - MYSQL_USER=spug
      - MYSQL_PASSWORD=spug.cc
      - MYSQL_ROOT_PASSWORD=spug.cc
  spug:
    image: openspug/spug-service
    container_name: spug
    privileged: true
    restart: always
    volumes:
      - /data/spug/service:/data/spug
      - /data/spug/repos:/data/repos
    ports:
      # 如果80端口被占用可替换为其他端口,例如: - "8000:80"
      - "80:80"
    environment:
      - SPUG_DOCKER_VERSION=v3.2.1
      - MYSQL_DATABASE=spug
      - MYSQL_USER=spug
      - MYSQL_PASSWORD=spug.cc
      - MYSQL_HOST=db
      - MYSQL_PORT=3306
    depends_on:
      - db


#启动项目
#docker compose up -d

#初始化:以下操作会创建一个用户名为 admin 密码为 spug.dev 的管理员账户,可自行替换管理员账户/密码。
docker exec spug init_spug admin spug.dev

#访问测试:在浏览器中输入 http://localhost:80 访问(默认账户密码在上面一步初始化时设置)。

案例: 一键生成 Docker Compose

利用网站将 docker 命令自动生成 Docker Compose

https://www.composerize.com/


Docker 仓库管理

Docker仓库,类似于yum仓库,是用来保存镜像的仓库。为了方便的管理和使用docker镜像,可以将镜像集中保存至Docker仓库中,将制作好的镜像push到仓库集中保存,在需要镜像时,从仓库中pull镜像即可。

Docker 仓库分为公有云仓库和私有云仓库

公有云仓库: 由互联网公司对外公开的仓库

  • 官方
  • 阿里云等第三方仓库

私有云仓库: 组织内部搭建的仓库,一般只为组织内部使用,常使用下面软件搭建仓库

  • docker registry
  • docker harbor

官方 Docker 仓库

将自制的镜像上传至 Docker 官方仓库: https://hub.docker.com/

注册账户

访问hub.docker.com注册账户,并登录



使用用户仓库管理镜像

每个注册用户都可以上传和管理自己的镜像

用户登录

上传镜像前需要执行docker login命令登录,登录后生成~/.docker/config.json文件保存验证信息

格式

docker login [OPTIONS] [SERVER]

选项:
-p, --password string Password
--password-stdin Take the password from stdin
-u, --username string Username

范例:

#登录docker官方仓库方法1
[root@ubuntu1804 ~]# docker login -u wangxiaochun -pP@ssw0rd! docker.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded


#登录docker官方仓库方法2
[root@ubuntu1804 ~]# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't
have a Docker ID, head over to https://hub.docker.com to create one.
Username: wangxiaochun
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded


#登录成功后,自动生成验证信息,下次会自动登录,而无需手动登录
[root@ubuntu1804 ~]#cat .docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "d2FuZ3hpYW9jaHVuOmxidG9vdGgwNjE4"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.5 (linux)"
  }
}
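前面 WARNING 之所以提示密码以非加密方式保存,是因为 config.json 中的 auth 字段只是 "用户名:密码" 的 base64 编码,并非加密,任何拿到该文件的人都能还原出明文,下面用 Python 演示解码过程:

```python
import base64

# config.json 中的 auth 字段只是 "用户名:密码" 的 base64 编码,并非加密
auth = "d2FuZ3hpYW9jaHVuOmxidG9vdGgwNjE4"
username, password = base64.b64decode(auth).decode().split(":", 1)
print(username)   # wangxiaochun
```

因此生产环境建议按提示配置 credential helper,避免明文凭据落盘。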

给本地镜像打标签

上传本地镜像前必须先给上传的镜像用docker tag 命令打标签

标签格式: docker.io/用户帐号/镜像名:TAG
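按此格式,一个完整的镜像引用可以拆分为仓库地址、名称空间(用户帐号)、镜像名和 TAG 四部分,下面用 Python 示意这个拆分逻辑(仅为示意,不是 docker 的官方解析实现):

```python
def parse_image_ref(ref: str) -> dict:
    """示意: 将 registry/用户帐号/镜像名:TAG 拆分为各组成部分"""
    name, _, tag = ref.rpartition(":")          # 从右侧分离 TAG
    registry, account, image = name.split("/")  # 剩余部分按 / 拆成三段
    return {"registry": registry, "account": account, "image": image, "tag": tag}

print(parse_image_ref("docker.io/wangxiaochun/alpine:3.11-v1"))
# {'registry': 'docker.io', 'account': 'wangxiaochun', 'image': 'alpine', 'tag': '3.11-v1'}
```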

范例:

[root@ubuntu1804 ~]# docker tag alpine:3.11 docker.io/wangxiaochun/alpine:3.11-v1
[root@ubuntu1804 ~]# docker images
wangxiaochun/alpine 3.11-v1 e7d92cdc71fe 12 days ago 5.59MB

上传本地镜像至官网

#如tag省略,将上传指定REPOSITORY的所有版本,如下示例
#[root@ubuntu1804 ~]# docker push docker.io/wangxiaochun/alpine

[root@ubuntu1804 ~]# docker push docker.io/wangxiaochun/alpine:3.11-v1
The push refers to repository [docker.io/wangxiaochun/alpine]
5216338b40a7: Mounted from wanglinux/alpine-base
3.11-v1: digest:
sha256:ddba4d27a7ffc3f86dd6c2f92041af252a1f23a8e742c90e6e1297bfa1bc0c45 size: 528

在官网验证上传的镜像


下载上传的镜像并创建容器

在另一台主机上下载镜像

[root@centos7 ~]# docker pull wangxiaochun/alpine:3.11-v1

[root@centos7 ~]# docker run -it --rm wangxiaochun/alpine:3.11-v1 sh
/ # cat /etc/issue
Welcome to Alpine Linux 3.11
Kernel \r on an \m (\l)

/ # du -sh /
5.6M /

/ # exit

使用组织管理镜像

组织类似于名称空间,每个组织的名称全站唯一。一个组织可供多个用户帐户使用,并且可以为不同用户指定对组织内仓库的不同权限

三种不同权限

  • Read-only: Pull and view repository details and builds
  • Read &Write: Pull, push, and view a repository; view, cancel, retry or trigger builds
  • Admin: Pull, push, view, edit, and delete a repository; edit build settings; update the repository description

创建组织


创建组织内的团队,并分配权限


上传镜像前登录帐号

[root@ubuntu1804 ~]# docker login docker.io 
Login Succeeded

[root@ubuntu1804 ~]# cat .docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "d2FuZ3hpYW9jaHVuOmxidG9vdGgwNjE4"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.5 (linux)"
  }
}

给本地镜像打标签

[root@ubuntu1804 ~]# docker tag alpine-base:3.11 docker.io/wanglinux/alpine-base:3.11

上传镜像到指定的组织

[root@ubuntu1804 ~]# docker push docker.io/wanglinux/alpine-base:3.11

在网站看查看上传的镜像


下载上传的镜像并运行容器

在另一台主机上下载镜像

[root@centos7 ~]# docker pull wanglinux/alpine-base:3.11
[root@centos7 ~]# docker run -it --rm wanglinux/alpine-base:3.11 sh
/ # cat /etc/issue
Welcome to Alpine Linux 3.11
Kernel \r on an \m (\l)

/ # du -sh /
190.1M /

/ # exit

阿里云Docker仓库


注册和登录阿里云仓库

用浏览器访问http://cr.console.aliyun.com,输入注册的用户信息登录网站


设置仓库专用管理密码


创建仓库

此步可不事先执行,docker push 时可以自动创建私有仓库


查看仓库的路径用于上传镜像使用


上传镜像前先登录阿里云

#用前面设置的专用仓库管理密码登录
[root@ubuntu1804 ~]#docker login --username=29308620@qq.com registry.cn-beijing.aliyuncs.com

#登录密码保存在~/.docker/config.json文件中,下次将不会需要再输入密码登录
[root@ubuntu1804 ~]#cat .docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "d2FuZ3hpYW9jaHVuOmxidG9vdGgwNjE4"
    },
    "registry.cn-beijing.aliyuncs.com": {
      "auth": "MjkzMDg2MjBAcXEuY29tOmxidG9vdGgwNjE4"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/19.03.5 (linux)"
  }
}

给上传的镜像打标签

[root@ubuntu1804 ~]# docker tag alpine-base:3.11 registry.cn-beijing.aliyuncs.com/wangxiaochun/alpine:3.11-v1
[root@ubuntu1804 ~]# docker tag centos7-base:v1 registry.cn-beijing.aliyuncs.com/wangxiaochun/centos7-base:v1

上传镜像至阿里云

[root@ubuntu1804 ~]# docker push registry.cn-beijing.aliyuncs.com/wangxiaochun/alpine:3.11-v1
[root@ubuntu1804 ~]#docker push registry.cn-beijing.aliyuncs.com/wangxiaochun/centos7-base:v1

在网站查看上传的镜像


从另一台主机上下载刚上传的镜像并运行容器

[root@centos7 ~]# docker pull registry.cn-beijing.aliyuncs.com/wangxiaochun/alpine:3.11-v1

[root@centos7 ~]#docker run -it --rm b162eecf4da9 sh
/ # cat /etc/issue
Welcome to Alpine Linux 3.11
Kernel \r on an \m (\l)

/ # du -sh /
190.1M /

/ # exit

#上传的centos7-base:v1为私有镜像,需要登录才能下载
[root@centos7 ~]# docker pull registry.cn-beijing.aliyuncs.com/wangxiaochun/centos7-base:v1
Error response from daemon: pull access denied for registry.cn-beijing.aliyuncs.com/wangxiaochun/centos7-base, repository does not exist or may
require 'docker login': denied: requested access to the resource is denied

[root@centos7 ~]# docker login registry.cn-beijing.aliyuncs.com

[root@centos7 ~]# docker pull registry.cn-beijing.aliyuncs.com/wangxiaochun/centos7-base:v1

私有云单机仓库Docker Registry

Docker Registry 介绍

Docker Registry 作为 Docker 的核心组件之一,负责单主机的镜像内容的存储与分发,客户端的 docker pull 以及 push 命令都将直接与 registry 进行交互。最初版本的 registry 由 Python 实现,由于设计初期在安全性、性能以及 API 设计上有诸多缺陷,该版本在 0.9 之后停止了开发,改由新项目 distribution(新的 docker registry 被称为 Distribution)来重新设计并开发下一代 registry。新项目由 Go 语言开发,所有的 API、底层存储方式、系统架构都进行了全面的重新设计,以解决上一代 registry 中存在的问题。2016 年 4 月 registry 2.0 正式发布,docker 1.6 版本开始支持 registry 2.0;同年 8 月随着 docker 1.8 发布,docker hub 正式启用 2.1 版本 registry,全面替代之前版本的 registry。新版 registry 对镜像存储格式进行了重新设计,并和旧版不兼容,docker 1.5 和之前的版本无法读取 2.0 的镜像。另外,Registry 2.4 版本之后支持了回收站机制,也就是可以删除镜像了;2.4 之前的版本无法删除镜像,所以如果要使用,最好选择大于 2.4 的 Registry 版本

官方文档地址: https://docs.docker.com/registry/

官方github 地址: https://github.com/docker/distribution

官方部署文档: https://github.com/docker/docker.github.io/blob/master/registry/deploying.md


以下介绍通过官方提供的docker registry 镜像来简单搭建本地私有仓库环境

环境: 三台主机

10.0.0.100: 充当registry仓库服务器

10.0.0.101: 上传镜像

10.0.0.102: 下载镜像

下载 docker registry 镜像

[root@ubuntu1804 ~]# docker pull registry:2.7.1
[root@ubuntu1804 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry 2.7.1 708bc6af7e5e 6 days ago 25.8MB

搭建单机仓库

创建授权用户密码使用目录

[root@ubuntu1804 ~]# mkdir -p /etc/docker/auth

创建授权的registry用户和密码

创建registry用户,用于上传和下载镜像

[root@ubuntu1804 ~]# apt -y install apache2-utils
[root@ubuntu1804 ~]# htpasswd -Bbn wang 123456 > /etc/docker/auth/registry
[root@ubuntu1804 ~]# cat /etc/docker/auth/registry
wang:$2y$05$nlRIIYEUBTSLdN2PkzodUue4ry7X/UyscpkkEufTDhEdI8nsyJMR6


#旧版本可以按下面方法生成用户和密码文件
[root@ubuntu1804 ~]# docker run --entrypoint htpasswd registry:2.7.1 -Bbn wang 123456 > /etc/docker/auth/registry

启动docker registry 容器

[root@ubuntu1804 ~]# docker run -d -p 5000:5000 --restart=always --name registry \
-v /etc/docker/auth:/auth -e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/registry registry:2.7.1

998f970dd8ca6b98002f20ae27330fe607ca78f35bedcc8a6180688e48a907a7

验证端口和容器

[root@ubuntu1804 ~]#docker ps 
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
998f970dd8ca registry:2.7.1 "/entrypoint.sh /etc…" About a minute ago Up About a minute 0.0.0.0:5000->5000/tcp registry

[root@ubuntu1804 ~]# ss -ntl
LISTEN 0 128 *:5000 *:*

登录仓库

直接登录报错

#docker login 默认使用https登录,而docker registry为http,所以默认登录失败
[root@ubuntu1804 ~]# docker login 10.0.0.100:5000
Username: wang
Password:
Error response from daemon: Get https://10.0.0.100:5000/v2/: http: server gave
HTTP response to HTTPS client

将registry仓库服务器地址加入service 单元文件

官方文档:

https://docs.docker.com/registry/insecure/

范例:

#修改配置让docker login支持http协议
[root@ubuntu1804 ~]# vim /lib/systemd/system/docker.service
[root@ubuntu1804 ~]# grep ExecStart /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry 10.0.0.100:5000

#或者修改下面文件
[root@ubuntu1804 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"],
  "insecure-registries": ["10.0.0.100:5000"]
}

[root@ubuntu1804 ~]# systemctl daemon-reload
[root@ubuntu1804 ~]# systemctl restart docker
[root@ubuntu1804 ~]# ps aux|grep dockerd
root 2092 1.3 8.4 757088 83056 ? Ssl 19:19 0:00
/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry 10.0.0.100:5000
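daemon.json 中写错一个逗号就会导致 docker 无法启动,重启服务前可以先校验 JSON 语法。以下为示意,用临时文件演示校验方法,实际操作时把路径换成 /etc/docker/daemon.json 即可:

```shell
# 示意: 重启 docker 前先校验 daemon.json 的 JSON 语法
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"],
  "insecure-registries": ["10.0.0.100:5000"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "JSON OK"
```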

再次登录验证成功

在10.0.0.101主机上执行下面登录

[root@ubuntu1804 ~]# docker login 10.0.0.100:5000
Username: wang
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

打标签并上传镜像

在10.0.0.101主机上执行打标签上传

[root@ubuntu1804 ~]# docker tag centos7-base:v1 10.0.0.100:5000/centos7-base:v1
[root@ubuntu1804 ~]# docker push 10.0.0.100:5000/centos7-base:v1

下载镜像并启动容器

在10.0.0.102主机上下载镜像并启动容器

先修改docker的service 文件

[root@ubuntu1804 ~]# vim /lib/systemd/system/docker.service
[root@ubuntu1804 ~]# grep ExecStart /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry 10.0.0.100:5000

[root@ubuntu1804 ~]# systemctl daemon-reload
[root@ubuntu1804 ~]# systemctl restart docker

登录registry仓库服务器

[root@ubuntu1804 ~]# docker login 10.0.0.100:5000
Username: wang
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

下载镜像并启动容器

[root@ubuntu1804 ~]# docker pull 10.0.0.100:5000/centos7-base:v1
[root@ubuntu1804 ~]# docker run -it --rm 10.0.0.100:5000/centos7-base:v1 bash
[root@2bcb26b1b568 /]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)

[root@2bcb26b1b568 /]# exit
exit

Docker 之分布式仓库 Harbor

Harbor 介绍和架构

Harbor 介绍


Harbor是一个用于存储和分发Docker镜像的企业级Registry服务器,由VMware开源,其通过添加一些企业必需的功能特性,例如安全、标识和管理等,扩展了开源 Docker Distribution。作为一个企业级私有Registry服务器,Harbor 提供了更好的性能和安全。提升用户使用Registry构建和运行环境传输镜像的效率。Harbor支持安装在多个Registry节点的镜像资源复制,镜像全部保存在私有 Registry 中, 确保数据和知识产权在公司内部网络中管控,另外,Harbor也提供了高级的安全特性,诸如用户管理,访问控制和活动审计等

vmware 官方开源服务: https://vmware.github.io/

harbor 官方github 地址: https://github.com/vmware/harbor

harbor 官方网址: https://goharbor.io/

harbor 官方文档: https://goharbor.io/docs/

github文档: https://github.com/goharbor/harbor/tree/master/docs

Harbor功能官方介绍

  • 基于角色的访问控制: 用户与Docker镜像仓库通过“项目”进行组织管理,一个用户可以对多个镜像仓库在同一命名空间(project)里有不同的权限
  • 镜像复制: 镜像可在多个Registry实例中复制(同步)。尤其适合于负载均衡,高可用,混合云和多云的场景
  • 图形化用户界面: 用户可以通过浏览器来浏览,检索当前Docker镜像仓库,管理项目和命名空间
  • AD/LDAP 支持: Harbor可以集成企业内部已有的AD/LDAP,用于鉴权认证管理
  • 审计管理: 所有针对镜像仓库的操作都可以被记录追溯,用于审计管理
  • 国际化: 已拥有英文、中文、德文、日文和俄文的本地化版本。更多的语言将会添加进来
  • RESTful API: 提供给管理员对于Harbor更多的操控, 使得与其它管理软件集成变得更容易
  • 部署简单: 提供在线和离线两种安装工具, 也可以安装到vSphere平台(OVA方式)虚拟设备

Harbor 组成


#harbor是由很多容器组成实现完整功能
[root@ubuntu1804 ~]#docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ec3c3885407 goharbor/nginx-photon:v1.7.6 "nginx -g 'daemon of…" About a minute ago Up About a minute (healthy) 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp nginx
5707b4ac41d8 goharbor/harbor-portal:v1.7.6 "nginx -g 'daemon of…" About a minute ago Up About a minute (healthy) 80/tcp harbor-portal
0ed230b9b714 goharbor/harbor-jobservice:v1.7.6 "/harbor/start.sh" About a minute ago Up About a minute harbor-jobservice
fec659188349 goharbor/harbor-core:v1.7.6 "/harbor/start.sh" About a minute ago Up About a minute (healthy) harbor-core
910d14c1d7f7 goharbor/harbor-adminserver:v1.7.6 "/harbor/start.sh" 2 minutes ago Up About a minute (healthy) harbor-adminserver
4348f503aa0e goharbor/harbor-db:v1.7.6 "/entrypoint.sh post…" 2 minutes ago Up About a minute (healthy) 5432/tcp harbor-db
beff6886f0f1 goharbor/harbor-registryctl:v1.7.6 "/harbor/start.sh" 2 minutes ago Up About a minute (healthy) registryctl
428c99d274bf goharbor/registry-photon:v2.6.2-v1.7.6 "/entrypoint.sh /etc…" 2 minutes ago Up About a minute (healthy) 5000/tcp registry
775b4026fa4e goharbor/redis-photon:v1.7.6 "dockerentrypoint.s…" 2 minutes ago Up About a minute 6379/tcp redis

  • Proxy: 对应启动组件nginx。它是一个nginx反向代理,代理Notary client(镜像认证)、Docker client(镜像上传下载等)和浏览器的访问请求(Core Service)给后端的各服务
  • UI(Core Service): 对应启动组件harbor-ui。底层数据存储使用mysql数据库,主要提供了四个子功能:
    • UI: 一个web管理页面ui
    • API: Harbor暴露的API服务
    • Auth: 用户认证服务,decode后的token中的用户信息在这里进行认证;auth后端可以接db、ldap、uaa三种认证实现
    • Token服务(上图中未体现): 负责根据用户在每个project中的role来为每一个docker push/pull命令发布一个token,如果从docker client发送给registry的请求没有带token, registry会重定向请求到token服务创建token
  • Registry: 对应启动组件registry。负责存储镜像文件,和处理镜像的pull/push命令。Harbor对镜像进行强制的访问控制,Registry会将客户端的每个pull、push请求转发到token服务来获取有效的token
  • Admin Service: 对应启动组件harbor-adminserver。是系统的配置管理中心附带检查存储用量,ui和jobserver启动时候需要加载adminserver的配置
  • Job Service: 对应启动组件harbor-jobservice。负责镜像复制工作,它和registry通信,从一个registry pull镜像然后push到另一个registry,并记录job_log
  • Log Collector: 对应启动组件harbor-log。日志汇总组件,通过docker的log-driver把日志汇总到一起
  • DB: 对应启动组件harbor-db,负责存储project、 user、 role、replication、image_scan、access等的metadata数据

安装 Harbor

下载地址: https://github.com/vmware/harbor/releases

安装文档: https://github.com/goharbor/harbor/blob/master/docs/install-config/_index.md

环境准备: 共四台主机

  • 两台主机harbor服务器,地址: 10.0.0.101|102
  • 两台主机harbor客户端上传和下载镜像


下载Harbor安装包并解压缩

必须先安装 docker再安装 docker compose

否则会报以下错误

[root@ubuntu1804 ~]# /apps/harbor/install.sh 
[Step 0]: checking installation environment ...

Note: docker version: 19.03.5
✖ Need to install docker-compose(1.7.1+) by yourself first and run this script again

以下使用 harbor 稳定版本1.7.6 安装包

方法1: 下载离线完整安装包,推荐使用

[root@ubuntu1804 ~]# wget https://storage.googleapis.com/harbor-releases/release-1.7.0/harbor-offline-installer-v1.7.6.tgz

[root@ubuntu1804 ~]# wget https://github.com/goharbor/harbor/releases/download/v2.5.2/harbor-offline-installer-v2.5.2.tgz

方法2: 下载在线安装包 ,比较慢,不是很推荐

[root@ubuntu1804 ~]# wget https://storage.googleapis.com/harbor-releases/release-1.7.0/harbor-online-installer-v1.7.6.tgz

解压缩离线包

[root@ubuntu1804 ~]# mkdir /apps
[root@ubuntu1804 ~]# tar xvf harbor-offline-installer-v1.7.6.tgz -C /apps/

编辑 harbor 配置文件

最新文档: https://github.com/goharbor/harbor/blob/master/docs/install-config/configure-yml-file.md

#新版配置文件为yml格式
[root@ubuntu2004 ~]# mv /apps/harbor/harbor.yml.tmpl /apps/harbor/harbor.yml
[root@ubuntu2004 ~]# vim /apps/harbor/harbor.yml

#旧版配置文件为文本格式
[root@ubuntu1804 ~]# vim /apps/harbor/harbor.cfg

#只需要修改下面两行
hostname = 10.0.0.101 #修改此行,指向当前主机IP 或 FQDN,建议配置IP
harbor_admin_password = 123456 #修改此行指定harbor登录用户admin的密码,默认用户/密码:admin/Harbor12345

#可选项
ui_url_protocol = http #默认即可,如果修改为https,需要指定下面证书路径
ssl_cert = /data/cert/server.crt #默认即可,https时,需指定下面证书文件路径
ssl_cert_key = /data/cert/server.key #默认即可,https时,需指定下面私钥文件路径

运行 harbor 安装脚本

#先安装python
[root@ubuntu1804 ~]# apt -y install python

#安装docker harbor
[root@ubuntu1804 ~]# /apps/harbor/install.sh

#安装harbor后会自动开启很多相关容器
[root@ubuntu1804 ~]# docker ps

实现开机自动启动 harbor

方法1: 通过service文件实现
[root@harbor ~]# vim /lib/systemd/system/harbor.service
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target

[root@harbor ~]# systemctl daemon-reload
[root@harbor ~]# systemctl enable harbor
方法2: 通过 rc.local实现
[root@harbor ~]# cat /etc/rc.local 
#!/bin/bash
cd /apps/harbor
/usr/bin/docker-compose up

[root@harbor ~]# chmod +x /etc/rc.local

登录 harbor 主机网站

用浏览器访问: http://10.0.0.101/

  • 用户名: admin
  • 密码: 即前面harbor.cfg中指定的密码


实战案例: 一键安装Harbor脚本

安装harbor 1.7.6
#!/bin/bash

HARBOR_VERSION=2.7.0
#HARBOR_VERSION=2.6.1
#HARBOR_VERSION=2.6.0
HARBOR_BASE=/apps
HARBOR_NAME=10.0.0.202
#HARBOR_NAME=`hostname -I|awk '{print $1}'`

DOCKER_VERSION="20.10.20"
#DOCKER_VERSION="19.03.14"
DOCKER_URL="http://mirrors.ustc.edu.cn"
#DOCKER_URL="https://mirrors.tuna.tsinghua.edu.cn"

DOCKER_COMPOSE_VERSION=2.6.1
#DOCKER_COMPOSE_VERSION=1.29.2
DOCKER_COMPOSE_FILE=docker-compose-Linux-x86_64


HARBOR_ADMIN_PASSWORD=123456

HARBOR_IP=`hostname -I|awk '{print $1}'`


COLOR_SUCCESS="echo -e \\033[1;32m"
COLOR_FAILURE="echo -e \\033[1;31m"
END="\033[m"

. /etc/os-release
UBUNTU_DOCKER_VERSION="5:${DOCKER_VERSION}~3-0~${ID}-${UBUNTU_CODENAME}"

color () {
RES_COL=60
MOVE_TO_COL="echo -en \\033[${RES_COL}G"
SETCOLOR_SUCCESS="echo -en \\033[1;32m"
SETCOLOR_FAILURE="echo -en \\033[1;31m"
SETCOLOR_WARNING="echo -en \\033[1;33m"
SETCOLOR_NORMAL="echo -en \E[0m"
echo -n "$1" && $MOVE_TO_COL
echo -n "["
if [ $2 = "success" -o $2 = "0" ] ;then
${SETCOLOR_SUCCESS}
echo -n $" OK "
elif [ $2 = "failure" -o $2 = "1" ] ;then
${SETCOLOR_FAILURE}
echo -n $"FAILED"
else
${SETCOLOR_WARNING}
echo -n $"WARNING"
fi
${SETCOLOR_NORMAL}
echo -n "]"
echo
}


install_docker(){
if [ $ID = "centos" -o $ID = "rocky" ];then
if [ $VERSION_ID = "7" ];then
cat > /etc/yum.repos.d/docker.repo <<EOF
[docker]
name=docker
gpgcheck=0
#baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/
baseurl=${DOCKER_URL}/docker-ce/linux/centos/7/x86_64/stable/
EOF
else
cat > /etc/yum.repos.d/docker.repo <<EOF
[docker]
name=docker
gpgcheck=0
#baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable/
baseurl=${DOCKER_URL}/docker-ce/linux/centos/8/x86_64/stable/
EOF
fi
yum clean all
${COLOR_FAILURE} "Docker有以下版本"${END}
yum list docker-ce --showduplicates
${COLOR_FAILURE}"5秒后即将安装: docker-"${DOCKER_VERSION}" 版本....."${END}
${COLOR_FAILURE}"如果想安装其它Docker版本,请按ctrl+c键退出,修改版本再执行"${END}
sleep 5
yum -y install docker-ce-$DOCKER_VERSION docker-ce-cli-$DOCKER_VERSION \
|| { color "Base,Extras的yum源失败,请检查yum源配置" 1;exit; }
else
dpkg -s docker-ce &> /dev/null && color "Docker已安装,退出" 1 && exit
apt update || { color "更新包索引失败" 1 ; exit 1; }
apt -y install apt-transport-https ca-certificates curl software-properties-common || \
{ color "安装相关包失败" 1 ; exit 2; }
curl -fsSL ${DOCKER_URL}/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] ${DOCKER_URL}/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
apt update
${COLOR_FAILURE} "Docker有以下版本"${END}
apt-cache madison docker-ce
${COLOR_FAILURE}"5秒后即将安装: docker-"${UBUNTU_DOCKER_VERSION}" 版本....."${END}
${COLOR_FAILURE}"如果想安装其它Docker版本,请按ctrl+c键退出,修改版本再执行"${END}
sleep 5
apt -y install docker-ce=${UBUNTU_DOCKER_VERSION} docker-ce-cli=${UBUNTU_DOCKER_VERSION}
fi
if [ $? -eq 0 ];then
color "安装软件包成功" 0
else
color "安装软件包失败,请检查网络配置" 1
exit
fi

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"],
"insecure-registries": ["harbor.wang.org"]
}
EOF
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
docker version && color "Docker 安装成功" 0 || color "Docker 安装失败" 1
echo 'alias rmi="docker images -qa|xargs docker rmi -f"' >> ~/.bashrc
echo 'alias rmc="docker ps -qa|xargs docker rm -f"' >> ~/.bashrc
}



install_docker_compose(){
if [ $ID = "centos" -o $ID = "rocky" ];then
${COLOR_SUCCESS}"开始安装 Docker compose....."${END}
sleep 1
if [ ! -e ${DOCKER_COMPOSE_FILE} ];then
#curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/${DOCKER_COMPOSE_FILE} -o /usr/bin/docker-compose
curl -L https://get.daocloud.io/docker/compose/releases/download/v${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m) -o /usr/bin/docker-compose
else
mv ${DOCKER_COMPOSE_FILE} /usr/bin/docker-compose
fi
chmod +x /usr/bin/docker-compose
else
apt -y install docker-compose
fi
if docker-compose --version ;then
${COLOR_SUCCESS}"Docker Compose 安装完成"${END}
else
${COLOR_FAILURE}"Docker compose 安装失败"${END}
exit
fi
}

install_harbor(){
${COLOR_SUCCESS}"开始安装 Harbor....."${END}
sleep 1
if [ ! -e harbor-offline-installer-v${HARBOR_VERSION}.tgz ] ;then
wget https://github.com/goharbor/harbor/releases/download/v${HARBOR_VERSION}/harbor-offline-installer-v${HARBOR_VERSION}.tgz || ${COLOR_FAILURE} "下载失败!" ${END}
fi
[ -d ${HARBOR_BASE} ] || mkdir ${HARBOR_BASE}
tar xvf harbor-offline-installer-v${HARBOR_VERSION}.tgz -C ${HARBOR_BASE}
cd ${HARBOR_BASE}/harbor
cp harbor.yml.tmpl harbor.yml
sed -ri "/^hostname/s/reg.mydomain.com/${HARBOR_NAME}/" harbor.yml
sed -ri "/^https/s/(https:)/#\1/" harbor.yml
sed -ri "s/(port: 443)/#\1/" harbor.yml
sed -ri "/certificate:/s/(.*)/#\1/" harbor.yml
sed -ri "/private_key:/s/(.*)/#\1/" harbor.yml
sed -ri "s/Harbor12345/${HARBOR_ADMIN_PASSWORD}/" harbor.yml
sed -i 's#^data_volume: /data#data_volume: /data/harbor#' harbor.yml
#mkdir -p /data/harbor
${HARBOR_BASE}/harbor/install.sh && ${COLOR_SUCCESS}"Harbor 安装完成"${END} || ${COLOR_FAILURE}"Harbor 安装失败"${END}
cat > /lib/systemd/system/harbor.service <<EOF
[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f ${HARBOR_BASE}/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f ${HARBOR_BASE}/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable harbor &>/dev/null || ${COLOR_SUCCESS}"Harbor已配置为开机自动启动"${END}
if [ $? -eq 0 ];then
echo
color "Harbor安装完成!" 0
echo "-------------------------------------------------------------------"
echo -e "请访问链接: \E[32;1mhttp://${HARBOR_IP}/\E[0m"
echo -e "用户和密码: \E[32;1madmin/${HARBOR_ADMIN_PASSWORD}\E[0m"
else
color "Harbor安装失败!" 1
exit
fi
echo "$HARBOR_IP $HARBOR_NAME" >> /etc/hosts
}



docker info &> /dev/null && ${COLOR_FAILURE}"Docker已安装"${END} || install_docker

docker-compose --version &> /dev/null && ${COLOR_FAILURE}"Docker Compose已安装"${END} || install_docker_compose

install_harbor

使用单主机 Harbor

建立项目

harbor上必须先建立项目,才能上传镜像


命令行登录 harbor

#方法1
[root@ubuntu1804 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry 10.0.0.101 --insecure-registry 10.0.0.102

#方法2
[root@ubuntu1804 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"],
  "insecure-registries": ["10.0.0.101:80", "10.0.0.102:80"]   #说明: ":80"可省略
}

[root@ubuntu1804 ~]# systemctl daemon-reload
[root@ubuntu1804 ~]# systemctl restart docker
[root@ubuntu1804 ~]# docker login 10.0.0.101

给本地镜像打标签并上传到 Harbor

修改 image 的名称,如果不修改成指定格式,将无法把镜像上传到 harbor 仓库

格式为:

Harbor主机IP/项目名/image名:版本

范例:

#上传镜像前,必须先登录harbor
[root@ubuntu1804 ~]# docker login 10.0.0.101

[root@ubuntu1804 ~]# docker tag alpine-base:3.11 10.0.0.101/example/alpine-base:3.11
[root@ubuntu1804 ~]# docker push 10.0.0.101/example/alpine-base:3.11

访问harbor网站验证上传镜像成功


范例: 如果不事先建立项目,上传镜像失败

[root@ubuntu1804 ~]# docker tag centos7-base:v1 10.0.0.101/example2/centos7-base:v1
[root@ubuntu1804 ~]# docker push 10.0.0.101/example2/centos7-base:v1
The push refers to repository [10.0.0.101/example2/centos7-base]
2073413aebd6: Preparing
6ec9af97c369: Preparing
034f282942cd: Preparing
denied: requested access to the resource is denied


[root@ubuntu1804 ~]# docker tag centos7-base:v1 10.0.0.101/example/centos7-base:v1
[root@ubuntu1804 ~]# docker push 10.0.0.101/example/centos7-base:v1
The push refers to repository [10.0.0.101/example/centos7-base]
2073413aebd6: Pushed
6ec9af97c369: Pushed
034f282942cd: Pushed
v1: digest:
sha256:02cd943f2569c7c55f08a979fd9661f1fd7893c424bca7b343188654ba63d98d size: 949


可以看到操作的日志记录


下载 Harbor 的镜像

在10.0.0.103的CentOS 7 的主机上无需登录,即可下载镜像

下载前必须修改docker的service 文件,加入harbor服务器的地址才可以下载

范例: 修改docker的service文件

[root@centos7 ~]# docker pull 10.0.0.101/example/centos7-base:v1
Error response from daemon: Get https://10.0.0.101/v2/: dial tcp 10.0.0.101:443: connect: connection refused

#方法1
[root@ubuntu1804 ~]# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry 10.0.0.101 --insecure-registry 10.0.0.102

#方法2
[root@ubuntu1804 ~]# vim /etc/docker/daemon.json
{
"insecure-registries": ["10.0.0.101", "10.0.0.102"]
}

[root@centos7 ~]# systemctl daemon-reload
[root@centos7 ~]# systemctl restart docker

范例: 从harbor下载镜像

[root@centos7 ~]# docker pull 10.0.0.101/example/centos7-base:v1

创建自动打标签上传镜像脚本

#在10.0.0.100上修改以前的build.sh脚本
[root@ubuntu1804 ~]# cd /data/dockerfile/web/nginx/1.16.1-alpine/
[root@ubuntu1804 1.16.1-alpine]# vim build.sh
[root@ubuntu1804 1.16.1-alpine]# cat build.sh
#!/bin/bash
TAG=$1
docker build -t 10.0.0.101/example/nginx-alpine:1.16.1-${TAG} .
docker push 10.0.0.101/example/nginx-alpine:1.16.1-${TAG}
docker rmi -f 10.0.0.101/example/nginx-alpine:1.16.1-${TAG}

[root@ubuntu1804 1.16.1-alpine]# bash build.sh v1
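上面的 build.sh 在未传入 TAG 时会生成以 1.16.1- 结尾的残缺标签,可以加一个参数检查。以下为示意(写成函数便于演示,docker 命令只打印不执行,仓库地址沿用上文):

```shell
# build.sh 增加参数检查的示意版本
build() {
    TAG=$1
    if [ -z "$TAG" ]; then
        echo "Usage: build <tag>" >&2
        return 1
    fi
    IMAGE="10.0.0.101/example/nginx-alpine:1.16.1-${TAG}"
    # 实际脚本中去掉 echo 即可真正执行 build/push/rmi
    echo "docker build -t $IMAGE ."
    echo "docker push $IMAGE"
    echo "docker rmi -f $IMAGE"
}

build v1
```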

登录harbor网站验证脚本上传镜像成功


修改 Harbor 配置

后期如果修改harbor配置,比如: 修改IP地址等,可执行以下步骤生效

方法1:

[root@ubuntu1804 ~]# cd /apps/harbor/
[root@ubuntu1804 harbor]# docker-compose stop

#所有相关容器都退出
[root@ubuntu1804 harbor]# docker ps -a
......

#修改harbor配置
[root@ubuntu1804 harbor]# vim harbor.cfg

#更新配置
[root@ubuntu1804 ~]# /apps/harbor/prepare

#重新启动docker compose
[root@ubuntu1804 harbor]# docker-compose start

#相关容器自动启动
[root@ubuntu1804 harbor]# docker ps
......

方法2:

[root@ubuntu1804 ~]# /apps/harbor/install.sh


Harbor 支持基于策略的 Docker 镜像复制功能,类似于 MySQL 的主从同步,可以实现不同数据中心、不同运行环境之间的镜像同步,并提供友好的管理界面,大大简化了实际运维中的镜像管理工作。已经有很多互联网公司使用 harbor 搭建内网 docker 仓库,并且还实现了双向复制功能

安装第二台 harbor主机

参考6.4.2的过程,在第二台主机上安装部署好harbor,并登录系统

注意: harbor.cfg中配置 hostname = 10.0.0.102


第二台harbor上新建项目

参考第一台harbor服务器的项目名称,在第二台harbor服务器上新建与之同名的项目


第二台harbor上仓库管理中新建目标

参考第一台主机信息,新建复制(同步)目标信息,将第一台主机设为复制的目标


输入第一台harbor服务器上的主机10.0.0.101,目标名(即项目名)example和用户信息及密码admin


第二台harbor上新建复制规则实现到第一台harbor的单向复制

在第二台harbor上建立复制的目标主机,将第二台harbor上面的镜像复制到第一台harbor上


在第一台harbor主机上重复上面操作

以上操作,只是实现了从第二台harbor主机10.0.0.102到第一台harbor主机10.0.0.101的单向同步

在第一台harbor上再执行下面操作,才实现双向同步


确认同步成功

在第二台harbor主机上可以查看到从第一台主机同步过来的镜像


也可以查看到同步日志


上传镜像观察是否可以双向同步

[root@ubuntu1804 ~]# docker tag tomcat-web:app1 10.0.0.101/example/tomcat-web:app1
[root@ubuntu1804 ~]# docker push 10.0.0.101/example/tomcat-web:app1
[root@ubuntu1804 ~]# docker tag tomcat-web:app2 10.0.0.102/example/tomcat-web:app2
[root@ubuntu1804 ~]# docker push 10.0.0.102/example/tomcat-web:app2


删除镜像观察是否可自动同步


配置 Nginx 做为反向代理

#配置Nginx反向代理
[root@ubuntu2004 ~]# cat /etc/nginx/conf.d/harbor.wang.org.conf
upstream harbor {
ip_hash;
server harbor1.wang.org:80;
server harbor2.wang.org:80;
}
server {
listen 80;
server_name harbor.wang.org;
client_max_body_size 10g;
location / {
proxy_pass http://harbor;
}
}

#客户端docker配置
[root@rocky8 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"],
"insecure-registries": ["harbor.wang.org"]
}


[root@rocky8 ~]# systemctl restart docker


#客户端docker配置名称解析
[root@rocky8 ~]# vim /etc/hosts
10.0.0.100 harbor.wang.org

#如果harbor配置中的 hostname 指定为 harbor1.wang.org 和 harbor2.wang.org,还需要加下面的解析
10.0.0.101 harbor1.wang.org
10.0.0.102 harbor2.wang.org

Harbor 安全 Https 配置

基于安全考虑,生产建议采用 https 代替 http

新版实现 Harbor 的 Https 认证

新版2.5.0的Https实现方法出现了一些变化

官方文档:

https://goharbor.io/docs/2.5.0/install-config/configure-https/
生成 Harbor 服务器证书
#生成ca的私钥
openssl genrsa -out ca.key 4096

#生成ca的自签名证书
openssl req -x509 -new -nodes -sha512 -days 3650 \
-subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=wang.org" \
-key ca.key \
-out ca.crt

#生成harbor主机的私钥
openssl genrsa -out harbor.wang.org.key 4096

#生成harbor主机的证书申请
openssl req -sha512 -new \
-subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.wang.org" \
-key harbor.wang.org.key \
-out harbor.wang.org.csr


#创建x509 v3 扩展文件(新版新增加的要求)
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=wang.org
DNS.2=wang
DNS.3=harbor.wang.org
EOF


#给 harbor主机颁发证书
openssl x509 -req -sha512 -days 3650 \
-extfile v3.ext \
-CA ca.crt -CAkey ca.key -CAcreateserial \
-in harbor.wang.org.csr \
-out harbor.wang.org.crt


#最终文件列表如下
ca.crt ca.key ca.srl harbor.wang.org.crt harbor.wang.org.csr
harbor.wang.org.key v3.ext

注意: 如果不生成创建x509 v3 扩展文件,会出现下面提示错误

docker login harbor.wang.org
Username: admin
Password:
Error response from daemon: Get "https://harbor.wang.org/v2/": x509: certificate
relies on legacy Common Name field, use SANs or temporarily enable Common Name
matching with GODEBUG=x509ignoreCN=0
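签发完成后,可以先用 openssl 验证证书链和 SAN,确认无误再交给 harbor 使用。下面在临时目录里用缩短的参数完整演示一遍签发和验证(域名沿用上文;密钥长度、有效期为演示用取值,-ext 选项需要 OpenSSL 1.1.1 及以上版本):

```shell
# 在临时目录演示,避免污染正式证书目录
cd "$(mktemp -d)"

# 自签 CA(演示用 2048 位密钥、1 年有效期)
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -nodes -sha256 -days 365 -subj "/CN=wang.org" \
    -key ca.key -out ca.crt

# harbor 主机私钥与证书申请
openssl genrsa -out harbor.wang.org.key 2048 2>/dev/null
openssl req -sha256 -new -subj "/CN=harbor.wang.org" \
    -key harbor.wang.org.key -out harbor.wang.org.csr

# 带 SAN 的 x509 v3 扩展
cat > v3.ext <<EOF
subjectAltName = DNS:harbor.wang.org
extendedKeyUsage = serverAuth
EOF

# 签发证书
openssl x509 -req -sha256 -days 365 -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in harbor.wang.org.csr -out harbor.wang.org.crt 2>/dev/null

# 验证证书链,输出 "harbor.wang.org.crt: OK" 即签发成功
openssl verify -CAfile ca.crt harbor.wang.org.crt

# 确认 SAN 已写入证书,避免出现上面的 legacy Common Name 报错
openssl x509 -noout -ext subjectAltName -in harbor.wang.org.crt
```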
配置 Harbor 服务器使用证书
mkdir /data/harbor/certs/
cp harbor.wang.org.crt harbor.wang.org.key /data/harbor/certs/

vim /apps/harbor/harbor.yml
......
# https related config
https:
# https port for harbor, default is 443
port: 443
# The path of cert and key files for nginx
certificate: /data/harbor/certs/harbor.wang.org.crt
private_key: /data/harbor/certs/harbor.wang.org.key


#使上面的配置生效
cd /apps/harbor/
./prepare
docker-compose down -v
docker-compose up -d

输入下面 http 链接自动跳转到 https

http://harbor.wang.org


配置 Docker 客户端使用证书文件
#转换harbor的crt证书文件为cert后缀,docker将crt文件识别为CA证书,将cert文件识别为客户端证书
openssl x509 -inform PEM -in harbor.wang.org.crt -out harbor.wang.org.cert

#或者
cp -a harbor.wang.org.crt harbor.wang.org.cert
#比较两个文件的不同
md5sum harbor.wang.org.crt harbor.wang.org.cert

#在docker客户端使用上面的证书文件
mkdir -pv /etc/docker/certs.d/harbor.wang.org/
cp harbor.wang.org.cert harbor.wang.org.key ca.crt /etc/docker/certs.d/harbor.wang.org/

#上面证书配置无需重启服务即生效
#在docker客户端登录harbor服务器,注意:此时无需再配置insecure-registries项即可登录
docker login harbor.wang.org

Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
Docker 客户端测试推送和拉取镜像

登录harbor 查看推送命令


docker push wangxiaochun/busybox:1.30.0
docker tag wangxiaochun/busybox:1.30.0 harbor.wang.org/library/busybox:1.30.0
docker push harbor.wang.org/library/busybox:1.30.0

验证推送是否成功


验证拉取

docker pull harbor.wang.org/library/busybox:1.30.0

旧版实现 Harbor 的 Https 认证

旧版harbor默认使用http,为了安全,可以使用https

实现Harbor 的 https 认证
#安装docker
[root@ubuntu1804 ~]# bash install_docker_for_ubuntu1804.sh

#安装docker compose
[root@ubuntu1804 ~]# curl -L https://github.com/docker/compose/releases/download/1.25.3/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

[root@ubuntu1804 ~]# chmod +x /usr/local/bin/docker-compose
[root@ubuntu1804 ~]# docker-compose --version
docker-compose version 1.25.3, build d4d1b42b

#下载harbor离线安装包且解压缩
[root@ubuntu1804 ~]# wget https://storage.googleapis.com/harbor-releases/release-1.7.0/harbor-offline-installer-v1.7.6.tgz
[root@ubuntu1804 ~]# mkdir /apps
[root@ubuntu1804 ~]# tar xvf harbor-offline-installer-v1.7.6.tgz -C /apps/


#生成私钥和证书
[root@ubuntu1804 ~]# touch /root/.rnd
[root@ubuntu1804 ~]# mkdir /apps/harbor/certs/
[root@ubuntu1804 ~]# cd /apps/harbor/certs/

#生成CA证书
[root@ubuntu1804 certs]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout ca.key -x509 -subj "/CN=ca.wang.org" -days 365 -out ca.crt

#生成harbor主机的证书申请
[root@ubuntu1804 certs]# openssl req -newkey rsa:4096 -nodes -sha256 -subj "/CN=harbor.wang.org" -keyout harbor.wang.org.key -out harbor.wang.org.csr

#给harbor主机颁发证书
[root@ubuntu1804 certs]# openssl x509 -req -in harbor.wang.org.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out harbor.wang.org.crt

[root@ubuntu1804 ~]# tree /apps/harbor/certs
/apps/harbor/certs
├── ca.crt
├── ca.key
├── ca.srl
├── harbor.wang.org.crt
├── harbor.wang.org.csr
└── harbor.wang.org.key
0 directories, 6 files

[root@ubuntu1804 ~]# vim /apps/harbor/harbor.cfg
hostname = harbor.wang.org
ui_url_protocol = https
ssl_cert = /apps/harbor/certs/harbor.wang.org.crt
ssl_cert_key = /apps/harbor/certs/harbor.wang.org.key
harbor_admin_password = 123456

[root@ubuntu1804 ~]# apt -y install python
[root@ubuntu1804 ~]# /apps/harbor/install.sh
用https方式访问harbor网站

修改/etc/hosts文件

10.0.0.103 harbor.wang.org

打开浏览器,访问http://harbor.wang.org ,可以看到以下界面


在harbor网站新建项目


在客户端下载CA的证书

直接登录和上传下载镜像会报错

[root@ubuntu1804 ~]# vim /etc/hosts
10.0.0.103 harbor.wang.org

#没有证书验证,直接登录失败
[root@ubuntu1804 ~]# docker login harbor.wang.org
Username: admin
Password:
Error response from daemon: Get https://harbor.wang.org/v2/: x509: certificate
signed by unknown authority

在客户端下载ca的证书

[root@ubuntu1804 ~]# mkdir -pv /etc/docker/certs.d/harbor.wang.org/
[root@ubuntu1804 ~]# scp -r harbor.wang.org:/apps/harbor/certs/ca.crt /etc/docker/certs.d/harbor.wang.org/
[root@ubuntu1804 ~]# tree /etc/docker/certs.d/
/etc/docker/certs.d/
└── harbor.wang.org
└── ca.crt

1 directory, 1 file
#上面证书配置无需重启服务即生效
从客户端上传镜像
#先登录系统
[root@ubuntu1804 ~]# docker login harbor.wang.org

#上传镜像
[root@ubuntu1804 ~]# docker tag alpine:3.11 harbor.wang.org/example/alpine:3.11
[root@ubuntu1804 ~]# docker push harbor.wang.org/example/alpine:3.11

在harbor网站上验证上传的镜像


在客户端下载镜像
[root@centos7 ~]# vim /etc/hosts
10.0.0.103 harbor.wang.org

[root@centos7 ~]# docker pull harbor.wang.org/example/alpine:3.11
Error response from daemon: Get https://harbor.wang.org/v2/: x509: certificate
signed by unknown authority

[root@centos7 ~]# mkdir -pv /etc/docker/certs.d/harbor.wang.org/
[root@centos7 ~]# scp -r harbor.wang.org:/apps/harbor/certs/ca.crt /etc/docker/certs.d/harbor.wang.org/
[root@centos7 ~]# tree /etc/docker/certs.d/
/etc/docker/certs.d/
└── harbor.wang.org
└── ca.crt

1 directory, 1 file

[root@centos7 ~]# docker pull harbor.wang.org/example/alpine:3.11

Docker 的资源限制

Docker 资源限制

容器资源限制介绍

官方文档: https://docs.docker.com/config/containers/resource_constraints/

默认情况下,容器没有资源的使用限制,可以使用主机内核调度程序允许的尽可能多的资源

Docker 提供了控制容器使用资源的方法,可以限制容器使用多少内存或 CPU 等,通过 docker run 命令的运行时配置标志即可实现资源限制功能。

其中许多功能都要求宿主机的内核支持,要检查是否支持这些功能,可以使用 docker info 命令。如果内核不支持其中的某项功能,会在输出结尾处看到警告,如下所示:

WARNING: No swap limit support #没有启用 swap 限制功能会出现此提示警报

可通过修改内核参数消除以上警告

官方文档: https://docs.docker.com/install/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities

范例: 修改内核参数消除以上警告

[root@ubuntu1804 ~]# docker info 
......
WARNING: No swap limit support

#修改内核参数
[root@ubuntu1804 ~]# vim /etc/default/grub
GRUB_CMDLINE_LINUX="cgroup_enable=memory net.ifnames=0 swapaccount=1"

[root@ubuntu1804 ~]# update-grub
[root@ubuntu1804 ~]# reboot
[root@ubuntu1804 ~]# docker info
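重启后可以直接检查 /proc/cmdline 确认参数是否生效。下面的函数对给定的内核命令行字符串做判断,便于在脚本中复用(swapaccount_enabled 为示意用的函数名):

```shell
# 判断内核命令行中是否启用了 swap 记账
swapaccount_enabled() {
    case " $1 " in
        *" swapaccount=1 "*) echo enabled ;;
        *)                   echo disabled ;;
    esac
}

# 实际检查时传入 "$(cat /proc/cmdline)"
swapaccount_enabled "root=/dev/sda1 cgroup_enable=memory swapaccount=1"   # enabled
swapaccount_enabled "root=/dev/sda1 net.ifnames=0"                        # disabled
```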

OOM (Out of Memory Exception)

对于 Linux 主机,如果没有足够的内存来执行重要的系统任务,就会触发 OOM(Out of Memory,内存不足)异常,随后系统开始杀死进程以释放内存。凡是运行在宿主机上的进程都有可能被 kill,包括 Dockerd 和其它应用程序;如果重要的系统进程被 kill,会导致与该进程相关的服务全部宕机。通常内存消耗越大的应用越容易被 kill,比如 MySQL 数据库、Java 程序等

范例: OOM发生后的日志信息


产生 OOM 异常时,Dockerd 会尝试通过调整 Docker 守护进程的 OOM 优先级来降低风险,使它比系统上的其他进程更不容易被杀死;但每个容器的 OOM 优先级并未调整,这使得单个容器被杀死的可能性比 Docker 守护进程或其他系统进程被杀死的可能性更大。不推荐通过在守护进程或容器上手动把 --oom-score-adj 设置为极端负值,或通过在容器上设置 --oom-kill-disable 来绕过这些安全措施

OOM 优先级机制:

linux会为每个进程计算一个分数,最终将分数最高的kill

/proc/PID/oom_score_adj 
#范围为 -1000 到 1000,值越高越容易被宿主机 kill 掉,如果将该值设置为 -1000,则进程永远不会被宿主机 kernel kill

/proc/PID/oom_adj
#范围为 -17 到+15 ,取值越高越容易被干掉,如果是 -17 , 则表示不能被 kill ,该设置参数的存在是为了和旧版本的 Linux 内核兼容。

/proc/PID/oom_score
#这个值是系统综合进程的内存消耗量、CPU 时间 (utime + stime)、存活时间 (uptime - start time) 和 oom_adj 计算出的进程得分,消耗内存越多得分越高,越容易被宿主机 kernel 强制杀死

范例: 查看OOM相关值

#按内存排序
top - 18:50:03 up 8:49, 3 users, load average: 0.00, 0.00, 0.00
Tasks: 200 total, 2 running, 198 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.1 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.1 hi, 0.1 si, 0.0
MiB Mem : 5557.6 total, 3371.1 free, 899.1 used, 1287.5 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 4370.2 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
891 root 20 0 618376 29368 15168 S 0.3 0.5 1:28.19
1547 systemd+ 20 0 2157152 410584 32848 S 0.3 7.2 1:11.16
1755 root 20 0 1236464 13816 8412 S 0.3 0.2 0:06.77
39814 root 20 0 269208 4684 3848 R 0.3 0.1 0:00.02
1 root 20 0 238236 10724 8056 S 0.0 0.2 0:02.77
2 root 20 0 0 0 0 S 0.0 0.0 0:00.03
3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00
4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00
5 root 0 -20 0 0 0 I 0.0 0.0 0:00.00
7 root 0 -20 0 0 0 I 0.0 0.0 0:00.00
10 root 0 -20 0 0 0 I 0.0 0.0 0:00.00
11 root 20 0 0 0 0 S 0.0 0.0 0:00.00
12 root 20 0 0 0 0 S 0.0 0.0 0:00.00
13 root 20 0 0 0 0 S 0.0 0.0 0:00.06
14 root 20 0 0 0 0 I 0.0 0.0 0:03.88
15 root rt 0 0 0 0 S 0.0 0.0 0:00.02
16 root rt 0 0 0 0 S 0.0 0.0 0:00.00


[root@rocky8 ~]# cat /proc/1547/oom_adj
0

[root@rocky8 ~]# cat /proc/1547/oom_score
714

[root@rocky8 ~]# cat /proc/1547/oom_score_adj
0

[root@rocky8 ~]# cat /proc/891/oom_adj
0

[root@rocky8 ~]# cat /proc/891/oom_score
670

[root@rocky8 ~]# cat /proc/891/oom_score_adj
0

#docker服务进程的OOM默认值
[root@rocky8 ~]# cat /proc/`pidof dockerd`/oom_adj
-8

[root@rocky8 ~]# cat /proc/`pidof dockerd`/oom_score
348

[root@rocky8 ~]# cat /proc/`pidof dockerd`/oom_score_adj
-500
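oom_score_adj 可以直接写入对应的 /proc 文件来调整:调高(更容易被 kill)不需要特权,调低则需要 root 或 CAP_SYS_RESOURCE 能力。下面以当前 shell 进程自身为例演示:

```shell
# 查看当前 shell 进程的初始 oom_score_adj
cat /proc/$$/oom_score_adj

# 把自己调成更容易被 OOM kill(写入正值无需特权)
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj   # 500

# oom_score 会随之升高,得分越高越先被 kill
cat /proc/$$/oom_score
```

docker run 也提供了 --oom-score-adj 选项,可在启动容器时直接指定该值。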

Stress-ng 压力测试工具

Stress-ng 工具介绍


stress-ng是一个压力测试工具,可以通过软件仓库进行安装,也提供了docker版本的容器

官方链接:https://kernel.ubuntu.com/~cking/stress-ng/

官方文档:https://wiki.ubuntu.com/Kernel/Reference/stress-ng


stress-ng 安装

范例: 软件包方式安装

[root@centos7 ~]# yum -y install stress-ng
[root@ubuntu1804 ~]# apt -y install stress-ng

范例: 容器方式安装

[root@ubuntu1804 ~]# docker pull lorel/docker-stress-ng

Stress-ng 使用

范例: 查看帮助

[root@rocky8 ~]# stress-ng --help
stress-ng, version 0.15.00 (gcc 8.5, x86_64 Linux 4.18.0-553.el8_10.x86_64) 💻🔥

Usage: stress-ng [OPTION [ARG]]

General control options:
--abort abort all stressors if any stressor fails
--aggressive enable all aggressive options
-a N, --all N start N workers of each stress test
-b N, --backoff N wait of N microseconds before work starts
--class name specify a class of stressors, use with
--sequential
-n, --dry-run do not run
--ftrace enable kernel function call tracing
-h, --help show help
--ignite-cpu alter kernel controls to make CPU run hot
--ionice-class C specify ionice class (idle, besteffort, realtime)
--ionice-level L specify ionice level (0 max, 7 min)
-j, --job jobfile run the named jobfile
-k, --keep-name keep stress worker names to be 'stress-ng'
--keep-files do not remove files or directories
--klog-check check kernel message log for errors
--log-brief less verbose log messages
--log-file filename log messages to a log file
--maximize enable maximum stress options
--max-fd set maximum file descriptor limit
--mbind set NUMA memory binding to specific nodes
-M, --metrics print pseudo metrics of activity
--metrics-brief enable metrics and only show non-zero results
--minimize enable minimal stress options
--no-madvise don't use random madvise options for each mmap
--no-rand-seed seed random numbers with the same constant
--oomable Do not respawn a stressor if it gets OOM'd
--oom-avoid Try to avoid stressors from being OOM'd
--page-in touch allocated pages that are not in core
--parallel N synonym for 'all N'
--pathological enable stressors that are known to hang a machine
--perf display perf statistics
-q, --quiet quiet output
-r, --random N start N random workers
--sched type set scheduler type
--sched-prio N set scheduler priority level N
--sched-period N set period for SCHED_DEADLINE to N nanosecs
(Linux only)
--sched-runtime N set runtime for SCHED_DEADLINE to N nanosecs
(Linux only)
--sched-deadline N set deadline for SCHED_DEADLINE to N nanosecs
(Linux only)
--sched-reclaim set reclaim cpu bandwidth for deadline scheduler
(Linux only)
--seed N set the random number generator seed with a 64
bit value
--sequential N run all stressors one by one, invoking N of them
--skip-silent silently skip unimplemented stressors
--stressors show available stress tests
--smart show changes in S.M.A.R.T. data
--syslog log messages to the syslog
--taskset use specific CPUs (set CPU affinity)
--temp-path path specify path for temporary directories and files
--thrash force all pages in causing swap thrashing
-t N, --timeout T timeout after T seconds
--timer-slack enable timer slack mode
--times show run time summary at end of the run
--timestamp timestamp log output
--tz collect temperatures from thermal zones (Linux
only)
-v, --verbose verbose output
--verify verify results (not available on all tests)
--verifiable show stressors that enable verification via
--verify
-V, --version show version
-Y, --yaml file output results to YAML formatted file
-x, --exclude list of stressors to exclude (not run)

Stressor specific options:
--access N start N workers that stress file access
permissions
--access-ops N stop after N file access bogo operations
--af-alg N start N workers that stress AF_ALG socket domain
--af-alg-dump dump internal list from /proc/crypto to stdout
--af-alg-ops N stop after N af-alg bogo operations
--affinity N start N workers that rapidly change CPU affinity
--affinity-delay D delay in nanoseconds between affinity changes
--affinity-ops N stop after N affinity bogo operations
--affinity-pin keep per stressor threads pinned to same CPU
--affinity-rand change affinity randomly rather than sequentially
--affinity-sleep sleep in nanoseconds between affinity changes
--aio N start N workers that issue async I/O requests
--aio-ops N stop after N bogo async I/O requests
--aio-requests N number of async I/O requests per worker
--aiol N start N workers that exercise Linux async I/O
--aiol-ops N stop after N bogo Linux aio async I/O requests
--aiol-requests N number of Linux aio async I/O requests per worker
--apparmor start N workers exercising AppArmor interfaces
--apparmor-ops N stop after N bogo AppArmor worker bogo operations
--alarm N start N workers exercising alarm timers
--alarm-ops N stop after N alarm bogo operations
--atomic start N workers exercising GCC atomic operations
--atomic-ops stop after N bogo atomic bogo operations
--bad-altstack N start N workers exercising bad signal stacks
--bad-altstack-ops N stop after N bogo signal stack SIGSEGVs
--bad-ioctl N start N stressors that perform illegal read
ioctls on devices
--bad-ioctl-ops N stop after N bad ioctl bogo operations
-B N, --bigheap N start N workers that grow the heap using
realloc()
--bigheap-growth N grow heap by N bytes per iteration
--bigheap-ops N stop after N bogo bigheap operations
--bind-mount N start N workers exercising bind mounts
--bind-mount-ops N stop after N bogo bind mount operations
--binderfs N start N workers exercising binderfs
--binderfs-ops N stop after N bogo binderfs operations
--branch N start N workers that force branch misprediction
--branch-ops N stop after N branch misprediction branches
--brk N start N workers performing rapid brk calls
--brk-mlock attempt to mlock newly mapped brk pages
--brk-notouch don't touch (page in) new data segment page
--brk-ops N stop after N brk bogo operations
--bsearch N start N workers that exercise a binary search
--bsearch-ops N stop after N binary search bogo operations
--bsearch-size N number of 32 bit integers to bsearch
-C N, --cache N start N CPU cache thrashing workers
--cache-cldemote cache line demote (x86 only)
--cache-clflushopt optimized cache line flush (x86 only)
--cache-enable-all enable all cache options
(fence,flush,sfence,etc..)
--cache-fence serialize stores
--cache-flush flush cache after every memory write (x86 only)
--cache-level N only exercise specified cache
--cache-no-affinity do not change CPU affinity
--cache-ops N stop after N cache bogo operations
--cache-prefetch prefetch on memory reads/writes
--cache-sfence serialize stores with sfence
--cache-ways N only fill specified number of cache ways
--cache-wb cache line writeback (x86 only)
--cacheline N start N workers that exercise cachelines
--cacheline-affinity modify CPU affinity
--cacheline-method M use cacheline stressing method M
--cacheline-ops N stop after N cacheline bogo operations
--cap N start N workers exercising capget
--cap-ops N stop cap workers after N bogo capget operations
--chattr N start N workers thrashing chattr file mode bits
--chattr-ops N stop chattr workers after N bogo operations
--chdir N start N workers thrashing chdir on many paths
--chdir-dirs N select number of directories to exercise chdir on
--chdir-ops N stop chdir workers after N bogo chdir operations
--chmod N start N workers thrashing chmod file mode bits
--chmod-ops N stop chmod workers after N bogo operations
--chown N start N workers thrashing chown file ownership
--chown-ops N stop chown workers after N bogo operations
--chroot N start N workers thrashing chroot
--chroot-ops N stop chroot workers after N bogo operations
--clock N start N workers thrashing clocks and POSIX timers
--clock-ops N stop clock workers after N bogo operations
--clone N start N workers that rapidly create and reap
clones
--clone-max N set upper limit of N clones per worker
--clone-ops N stop after N bogo clone operations
--close N start N workers that exercise races on close
--close-ops N stop after N bogo close operations
--context N start N workers exercising user context
--context-ops N stop context workers after N bogo operations
--copy-file N start N workers that copy file data
--copy-file-bytes N specify size of file to be copied
--copy-file-ops N stop after N copy bogo operations
-c N, --cpu N start N workers that perform CPU only loading
-l P, --cpu-load P load CPU by P %, 0=sleep, 100=full load (see -c)
--cpu-load-slice S specify time slice during busy load
--cpu-method M specify stress cpu method M, default is all
--cpu-old-metrics use old CPU metrics instead of normalized metrics
--cpu-ops N stop after N cpu bogo operations
--cpu-online N start N workers offlining/onlining the CPUs
--cpu-online-ops N stop after N offline/online operations
--crypt N start N workers performing password encryption
--crypt-ops N stop after N bogo crypt operations
--cyclic N start N cyclic real time benchmark stressors
--cyclic-dist N calculate distribution of interval N nanosecs
--cyclic-method M specify cyclic method M, default is clock_ns
--cyclic-ops N stop after N cyclic timing cycles
--cyclic-policy P used rr or fifo scheduling policy
--cyclic-prio N real time scheduling priority 1..100
--cyclic-samples N number of latency samples to take
--cyclic-sleep N sleep time of real time timer in nanosecs
--daemon N start N workers creating multiple daemons
--daemon-ops N stop when N daemons have been created
--dccp N start N workers exercising network DCCP I/O
--dccp-domain D specify DCCP domain, default is ipv4
--dccp-if I use network interface I, e.g. lo, eth0, etc.
--dccp-ops N stop after N DCCP bogo operations
--dccp-opts option DCCP data send options [send|sendmsg|sendmmsg]
--dccp-port P use DCCP ports P to P + number of workers - 1
--dekker N start N workers that exercise ther Dekker
algorithm
--dekker-ops N stop after N dekker mutex bogo operations
-D N, --dentry N start N dentry thrashing stressors
--dentry-ops N stop after N dentry bogo operations
--dentry-order O specify unlink order (reverse, forward, stride)
--dentries N create N dentries per iteration
--dev N start N device entry thrashing stressors
--dev-file name specify the /dev/ file to exercise
--dev-ops N stop after N device thrashing bogo ops
--dev-shm N start N /dev/shm file and mmap stressors
--dev-shm-ops N stop after N /dev/shm bogo ops
--dir N start N directory thrashing stressors
--dir-dirs N select number of directories to exercise dir on
--dir-ops N stop after N directory bogo operations
--dirdeep N start N directory depth stressors
--dirdeep-bytes N size of files to create per level (see
--dirdeep-files)
--dirdeep-dirs N create N directories per level
--dirdeep-files N create N files per level (see --dirdeep-bytes)
--dirdeep-inodes N create a maximum N inodes (N can also be %)
--dirdeep-ops N stop after N directory depth bogo operations
--dirmany N start N directory file populating stressors
--dirmany-filsize specify size of files (default 0
--dirmany-ops N stop after N directory file bogo operations
--dnotify N start N workers exercising dnotify events
--dnotify-ops N stop dnotify workers after N bogo operations
--dup N start N workers exercising dup/close
--dup-ops N stop after N dup/close bogo operations
--dynlib N start N workers exercising dlopen/dlclose
--dynlib-ops N stop after N dlopen/dlclose bogo operations
--efivar N start N workers that read EFI variables
--efivar-ops N stop after N EFI variable bogo read operations
--enosys N start N workers that call non-existent system
calls
--enosys-ops N stop after N enosys bogo operations
--env N start N workers setting environment vars
--env-ops N stop after N env bogo operations
--epoll N start N workers doing epoll handled socket
activity
--epoll-domain D specify socket domain, default is unix
--epoll-ops N stop after N epoll bogo operations
--epoll-port P use socket ports P upwards
--epoll-sockets N specify maximum number of open sockets
--eventfd N start N workers stressing eventfd read/writes
--eventfs-nonblock poll with non-blocking I/O on eventfd fd
--eventfd-ops N stop eventfd workers after N bogo operations
--exec N start N workers spinning on fork() and exec()
--exec-fork-method M select exec fork method: clone fork spawn vfork
--exec-max P create P workers per iteration, default is 4096
--exec-method M select exec method: all, execve, execveat
--exec-no-pthread do not use pthread_create
--exec-ops N stop after N exec bogo operations
--exit-group N start N workers that exercise exit_group
--exit-group-ops N stop exit_group workers after N bogo exit_group
loops
--fallocate N start N workers fallocating 16MB files
--fallocate-bytes N specify size of file to allocate
--fallocate-ops N stop after N fallocate bogo operations
--fanotify N start N workers exercising fanotify events
--fanotify-ops N stop fanotify workers after N bogo operations
--far-branch N start N far branching workers
--far-branch-ops N stop after N far branching bogo operations
--fault N start N workers producing page faults
--fault-ops N stop after N page fault bogo operations
--fcntl N start N workers exercising fcntl commands
--fcntl-ops N stop after N fcntl bogo operations
--fiemap N start N workers exercising the FIEMAP ioctl
--fiemap-bytes N specify size of file to fiemap
--fiemap-ops N stop after N FIEMAP ioctl bogo operations
--fifo N start N workers exercising fifo I/O
--fifo-ops N stop after N fifo bogo operations
--fifo-readers N number of fifo reader stressors to start
--file-ioctl N start N workers exercising file specific ioctls
--file-ioctl-ops N stop after N file ioctl bogo operations
--filename N start N workers exercising filenames
--filename-ops N stop after N filename bogo operations
--filename-opts opt specify allowed filename options
--flock N start N workers locking a single file
--flock-ops N stop after N flock bogo operations
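The flock stressors contend for an advisory lock on a single file. A hedged Python sketch of what that contention looks like (not stress-ng's code; `flock` locks are per open file description, so a second open of the same file cannot take the lock without blocking):

```python
import fcntl
import os
import tempfile

def flock_demo():
    # Take an exclusive flock on one open file description, then show
    # that a second description on the same file cannot acquire it
    # without blocking -- the contention --flock workers generate.
    fd1, path = tempfile.mkstemp()
    fd2 = os.open(path, os.O_RDWR)
    try:
        fcntl.flock(fd1, fcntl.LOCK_EX)
        try:
            fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
            contended = False
        except BlockingIOError:
            contended = True             # lock already held by fd1
        fcntl.flock(fd1, fcntl.LOCK_UN)  # release the lock
        return contended
    finally:
        os.close(fd1)
        os.close(fd2)
        os.unlink(path)
```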
--flushcache N start N CPU instruction + data cache flush
workers
--flushcache-ops N stop after N flush cache bogo operations
-f N, --fork N start N workers spinning on fork() and exit()
--fork-max P create P workers per iteration, default is 1
--fork-ops N stop after N fork bogo operations
--fork-vm enable extra virtual memory pressure
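Each `--fork` worker spins on fork() followed by exit() in the child and a reap in the parent. A minimal Python sketch of one such cycle (illustrative, POSIX-only; stress-ng itself does this in C at much higher rates):

```python
import os

def fork_demo():
    # One fork()/exit()/wait() cycle -- the unit of work a --fork
    # worker repeats, --fork-max P children at a time.
    pid = os.fork()
    if pid == 0:
        os._exit(7)                    # child exits immediately
    _, status = os.waitpid(pid, 0)     # parent reaps the child
    return os.WEXITSTATUS(status)
```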
--fp-error N start N workers exercising floating point errors
--fp-error-ops N stop after N fp-error bogo operations
--fpunch N start N workers punching holes in a 16MB file
--fpunch-ops N stop after N punch bogo operations
--fsize N start N workers exercising file size limits
--fsize-ops N stop after N fsize bogo operations
--fstat N start N workers exercising fstat on files
--fstat-dir path fstat files in the specified directory
--fstat-ops N stop after N fstat bogo operations
--full N start N workers exercising /dev/full
--full-ops N stop after N /dev/full bogo I/O operations
--funccall N start N workers exercising 1 to 9 arg functions
--funccall-method M select function call method M
--funccall-ops N stop after N function call bogo operations
--funcret N start N workers exercising function return
copying
--funcret-method M select method of exercising a function return
type
--funcret-ops N stop after N function return bogo operations
--futex N start N workers exercising a fast mutex
--futex-ops N stop after N fast mutex bogo operations
--get N start N workers exercising the get*() system
calls
--get-ops N stop after N get bogo operations
--getdent N start N workers reading directories using
getdents
--getdent-ops N stop after N getdents bogo operations
--getrandom N start N workers fetching random data via
getrandom()
--getrandom-ops N stop after N getrandom bogo operations
--goto N start N workers that exercise heavy branching
--goto-direction D select goto direction forward, backward, random
--goto-ops N stop after 1024 x N goto bogo operations
--gpu N start N GPU workers
--gpu-devnode name specify GPU device node name
--gpu-frag N specify shader core usage per pixel
--gpu-ops N stop after N gpu render bogo operations
--gpu-tex-size N specify upload texture NxN
--gpu-upload N specify upload texture N times per frame
--gpu-xsize X specify framebuffer size x
--gpu-ysize Y specify framebuffer size y
--handle N start N workers exercising name_to_handle_at
--handle-ops N stop after N handle bogo operations
--hash N start N workers that exercise various hash
functions
--hash-method M specify stress hash method M, default is all
--hash-ops N stop after N hash bogo operations
-d N, --hdd N start N workers spinning on write()/unlink()
--hdd-bytes N write N bytes per hdd worker (default is 1GB)
--hdd-ops N stop after N hdd bogo operations
--hdd-opts list specify list of various stressor options
--hdd-write-size N set the default write size to N bytes
--heapsort N start N workers heap sorting 32 bit random
integers
--heapsort-ops N stop after N heap sort bogo operations
--heapsort-size N number of 32 bit integers to sort
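The heapsort stressor repeatedly heap-sorts arrays of 32 bit random integers. A short sketch of the technique using Python's stdlib binary heap (stress-ng uses the C library `heapsort` where available; this is just an illustration of the algorithm):

```python
import heapq

def heap_sort(values):
    # Heap sort: build a binary min-heap, then pop elements in order.
    heap = list(values)
    heapq.heapify(heap)                       # O(n) heap construction
    return [heapq.heappop(heap) for _ in range(len(heap))]
```

`--heapsort-size N` corresponds to the length of `values`.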
--hrtimers N start N workers that exercise high resolution
timers
--hrtimers-adjust adjust rate to try and maximize the timer rate
--hrtimers-ops N stop after N high-res timer bogo operations
--hsearch N start N workers that exercise a hash table search
--hsearch-ops N stop after N hash search bogo operations
--hsearch-size N number of integers to insert into hash table
--icache N start N CPU instruction cache thrashing workers
--icache-ops N stop after N icache bogo operations
--icmp-flood N start N ICMP packet flood workers
--icmp-flood-ops N stop after N ICMP bogo operations (ICMP packets)
--idle-page N start N idle page scanning workers
--idle-page-ops N stop after N idle page scan bogo operations
--inode-flags N start N workers exercising various inode flags
--inode-flags-ops N stop inode-flags workers after N bogo operations
--inotify N start N workers exercising inotify events
--inotify-ops N stop inotify workers after N bogo operations
-i N, --io N start N workers spinning on sync()
--io-ops N stop sync I/O after N io bogo operations
--iomix N start N workers that have a mix of I/O operations
--iomix-bytes N write N bytes per iomix worker (default is 1GB)
--iomix-ops N stop iomix workers after N iomix bogo operations
--ioport N start N workers exercising port I/O
--ioport-ops N stop ioport workers after N port bogo operations
--ioprio N start N workers exercising set/get iopriority
--ioprio-ops N stop after N io bogo iopriority operations
--io-uring N start N workers that issue io-uring I/O requests
--io-uring-ops N stop after N bogo io-uring I/O requests
--ipsec-mb N start N workers exercising the IPSec MB encoding
--ipsec-mb-feature F specify CPU feature F
--ipsec-mb-jobs N specify number of jobs to run per round (default
1)
--ipsec-mb-ops N stop after N ipsec bogo encoding operations
--itimer N start N workers exercising interval timers
--itimer-ops N stop after N interval timer bogo operations
--itimer-rand enable random interval timer frequency
--jpeg N start N workers that exercise JPEG compression
--jpeg-height N image height in pixels
--jpeg-image type image type: one of brown, flat, gradient, noise,
plasma or xstripes
--jpeg-ops N stop after N jpeg bogo compression operations
--jpeg-quality Q compression quality 1 (low) .. 100 (high)
--jpeg-width N image width in pixels
--judy N start N workers that exercise a judy array search
--judy-ops N stop after N judy array search bogo operations
--judy-size N number of 32 bit integers to insert into judy
array
--kcmp N start N workers exercising kcmp
--kcmp-ops N stop after N kcmp bogo operations
--key N start N workers exercising key operations
--key-ops N stop after N key bogo operations
--kill N start N workers killing with SIGUSR1
--kill-ops N stop after N kill bogo operations
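The kill stressor loops on delivering SIGUSR1. A hedged sketch of one delivery in Python, sending the signal to the current process and observing it in a handler (illustrative only; assumes a POSIX platform):

```python
import os
import signal

def sigusr1_demo():
    # Install a SIGUSR1 handler, deliver the signal to ourselves with
    # kill(), and record its arrival -- one --kill bogo operation.
    hits = []
    old = signal.signal(signal.SIGUSR1,
                        lambda signum, frame: hits.append(signum))
    try:
        os.kill(os.getpid(), signal.SIGUSR1)
    finally:
        signal.signal(signal.SIGUSR1, old)   # restore prior handler
    return hits
```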
--klog N start N workers exercising kernel syslog
interface
--klog-ops N stop after N klog bogo operations
--kvm N start N workers exercising /dev/kvm
--kvm-ops N stop after N kvm create/run/destroy operations
--l1cache N start N CPU level 1 cache thrashing workers
--l1cache-line-size N specify level 1 cache line size
--l1cache-sets N specify level 1 cache sets
--l1cache-size N specify level 1 cache size
--l1cache-ways N only fill specified number of cache ways
--landlock N start N workers stressing landlock file
operations
--landlock-ops N stop after N landlock bogo operations
--lease N start N workers holding and breaking a lease
--lease-breakers N number of lease breaking workers to start
--lease-ops N stop after N lease bogo operations
--link N start N workers creating hard links
--link-ops N stop after N link bogo operations
--link-sync enable sync'ing after linking/unlinking
--list N start N workers that exercise list structures
--list-method M select list method: all, circleq, list, slist,
slistt, stailq, tailq
--list-ops N stop after N bogo list operations
--list-size N N is the number of items in the list
--llc-affinity N start N workers exercising low level cache over
all CPUs
--llc-affinity-ops N stop after N low-level-cache bogo operations
--loadavg N start N workers that create a large load average
--loadavg-ops N stop load average workers after N bogo operations
--loadavg-max N set upper limit on number of pthreads to create
--locka N start N workers locking a file via advisory locks
--locka-ops N stop after N locka bogo operations
--lockbus N start N workers locking a memory increment
--lockbus-nosplit disable split locks
--lockbus-ops N stop after N lockbus bogo operations
--lockf N start N workers locking a single file via lockf
--lockf-nonblock don't block if lock cannot be obtained, re-try
--lockf-ops N stop after N lockf bogo operations
--lockofd N start N workers using open file description
locking
--lockofd-ops N stop after N lockofd bogo operations
--longjmp N start N workers exercising setjmp/longjmp
--longjmp-ops N stop after N longjmp bogo operations
--loop N start N workers exercising loopback devices
--loop-ops N stop after N bogo loopback operations
--lsearch N start N workers that exercise a linear search
--lsearch-ops N stop after N linear search bogo operations
--lsearch-size N number of 32 bit integers to lsearch
--madvise N start N workers exercising madvise on memory
--madvise-ops N stop after N bogo madvise operations
--malloc N start N workers exercising malloc/realloc/free
--malloc-bytes N allocate up to N bytes per allocation
--malloc-max N keep up to N allocations at a time
--malloc-ops N stop after N malloc bogo operations
--malloc-pthreads N number of pthreads to run concurrently
--malloc-thresh N threshold where malloc uses mmap instead of sbrk
--malloc-touch touch pages to force them to be populated
--malloc-zerofree zero free'd memory
--matrix N start N workers exercising matrix operations
--matrix-method M specify matrix stress method M, default is all
--matrix-ops N stop after N matrix bogo operations
--matrix-size N specify the size of the N x N matrix
--matrix-yx matrix operation is y by x instead of x by y
--matrix-3d N start N workers exercising 3D matrix operations
--matrix-3d-method M specify 3D matrix stress method M, default is all
--matrix-3d-ops N stop after N 3D matrix bogo operations
--matrix-3d-size N specify the size of the N x N x N matrix
--matrix-3d-zyx matrix operation is z by y by x instead of x by y
by z
--mcontend N start N workers that produce memory contention
--mcontend-ops N stop memory contention workers after N bogo-ops
--membarrier N start N workers performing membarrier system
calls
--membarrier-ops N stop after N membarrier bogo operations
--memcpy N start N workers performing memory copies
--memcpy-method M set memcpy method (M = all, libc, builtin,
naive..)
--memcpy-ops N stop after N memcpy bogo operations
--memfd N start N workers allocating memory with
memfd_create
--memfd-bytes N allocate N bytes for each stress iteration
--memfd-fds N number of memory fds to open per stressor
--memfd-ops N stop after N memfd bogo operations
--memhotplug N start N workers that exercise memory hotplug
--memhotplug-ops N stop after N memory hotplug operations
--memrate N start N workers exercising memory read/writes
--memrate-bytes N size of memory buffer being exercised
--memrate-ops N stop after N memrate bogo operations
--memrate-rd-mbs N read rate from buffer in megabytes per second
--memrate-wr-mbs N write rate to buffer in megabytes per second
--memthrash N start N workers thrashing a 16MB memory buffer
--memthrash-method M specify memthrash method M, default is all
--memthrash-ops N stop after N memthrash bogo operations
--mergesort N start N workers merge sorting 32 bit random
integers
--mergesort-ops N stop after N merge sort bogo operations
--mergesort-size N number of 32 bit integers to sort
--mincore N start N workers exercising mincore
--mincore-ops N stop after N mincore bogo operations
--mincore-random randomly select pages rather than linear scan
--misaligned N start N workers performing misaligned read/writes
--misaligned-method M use misaligned memory read/write method
--misaligned-ops N stop after N misaligned bogo operations
--mknod N start N workers that exercise mknod
--mknod-ops N stop after N mknod bogo operations
--mlock N start N workers exercising mlock/munlock
--mlock-ops N stop after N mlock bogo operations
--mlockmany N start N workers exercising many mlock/munlock
processes
--mlockmany-ops N stop after N mlockmany bogo operations
--mlockmany-procs N use N child processes to mlock regions
--mmap N start N workers stressing mmap and munmap
--mmap-async using asynchronous msyncs for file based mmap
--mmap-bytes N mmap and munmap N bytes for each stress iteration
--mmap-file mmap onto a file using synchronous msyncs
--mmap-mprotect enable mmap mprotect stressing
--mmap-odirect enable O_DIRECT on file
--mmap-ops N stop after N mmap bogo operations
--mmap-osync enable O_SYNC on file
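The mmap stressors repeatedly map, touch, and unmap memory. A hedged Python sketch of one anonymous-mapping iteration using the stdlib `mmap` module (illustrative only; the real stressor also exercises file-backed mappings, mprotect, and msync variants per the options above):

```python
import mmap

def mmap_demo():
    # Map one anonymous page, write into it, read it back, then unmap
    # -- roughly one --mmap iteration before munmap.
    m = mmap.mmap(-1, mmap.PAGESIZE)   # fd -1 -> anonymous mapping
    try:
        m[:5] = b"hello"
        return bytes(m[:5])
    finally:
        m.close()                      # unmaps the region
```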
--mmapaddr N start N workers stressing mmap with random
addresses
--mmapaddr-ops N stop after N mmapaddr bogo operations
--mmapfixed N start N workers stressing mmap with fixed
mappings
--mmapfixed-ops N stop after N mmapfixed bogo operations
--mmapfork N start N workers stressing many forked
mmaps/munmaps
--mmapfork-ops N stop after N mmapfork bogo operations
--mmaphuge N start N workers stressing mmap with huge mappings
--mmaphuge-mmaps N select number of memory mappings per iteration
--mmaphuge-ops N stop after N mmaphuge bogo operations
--mmapmany N start N workers stressing many mmaps and munmaps
--mmapmany-ops N stop after N mmapmany bogo operations
--mprotect N start N workers exercising mprotect on memory
--mprotect-ops N stop after N bogo mprotect operations
--mq N start N workers passing messages using POSIX
messages
--mq-ops N stop mq workers after N bogo messages
--mq-size N specify the size of the POSIX message queue
--mremap N start N workers stressing mremap
--mremap-bytes N mremap N bytes maximum for each stress iteration
--mremap-lock mlock remap pages, force pages to be unswappable
--mremap-ops N stop after N mremap bogo operations
--msg N start N workers stressing System V messages
--msg-ops N stop msg workers after N bogo messages
--msg-types N enable N different message types
--msync N start N workers syncing mmap'd data with msync
--msync-bytes N size of file and memory mapped region to msync
--msync-ops N stop msync workers after N bogo msyncs
--msyncmany N start N workers stressing msync on many mapped
pages
--msyncmany-ops N stop after N msyncmany bogo operations
--munmap N start N workers stressing munmap
--munmap-ops N stop after N munmap bogo operations
--mutex N start N workers exercising mutex operations
--mutex-affinity change CPU affinity randomly across locks
--mutex-ops N stop after N mutex bogo operations
--mutex-procs N select the number of concurrent processes
--nanosleep N start N workers performing short sleeps
--nanosleep-ops N stop after N bogo sleep operations
--netdev N start N workers exercising netdevice ioctls
--netdev-ops N stop netdev workers after N bogo operations
--netlink-proc N start N workers exercising netlink process events
--netlink-proc-ops N stop netlink-proc workers after N bogo events
--netlink-task N start N workers exercising netlink tasks events
--netlink-task-ops N stop netlink-task workers after N bogo events
--nice N start N workers that randomly re-adjust nice
levels
--nice-ops N stop after N nice bogo operations
--nop N start N workers that burn cycles with no-ops
--nop-instr INSTR specify nop instruction to use
--nop-ops N stop after N nop bogo no-op operations
--null N start N workers writing to /dev/null
--null-ops N stop after N /dev/null bogo write operations
--numa N start N workers stressing NUMA interfaces
--numa-ops N stop after N NUMA bogo operations
--oom-pipe N start N workers exercising large pipes
--oom-pipe-ops N stop after N oom-pipe bogo operations
--opcode N start N workers exercising random opcodes
--opcode-method M set opcode stress method (M = random, inc, mixed,
text)
--opcode-ops N stop after N opcode bogo operations
-o N, --open N start N workers exercising open/close
--open-fd open files in /proc/$pid/fd
--open-max N specify maximum number of files to open
--open-ops N stop after N open/close bogo operations
--pagemove N start N workers that shuffle move pages
--pagemove-bytes N size of mmap'd region to exercise page moving in
bytes
--pagemove-ops N stop after N page move bogo operations
--pageswap N start N workers that swap pages out and in
--pageswap-ops N stop after N page swap bogo operations
--pci N start N workers that read and mmap PCI regions
--pci-ops N stop after N PCI bogo operations
--personality N start N workers that change their personality
--personality-ops N stop after N bogo personality calls
--peterson N start N workers that exercise Peterson's
algorithm
--peterson-ops N stop after N peterson mutex bogo operations
--physpage N start N workers performing physical page lookup
--physpage-ops N stop after N physical page bogo operations
--pidfd N start N workers exercising pidfd system call
--pidfd-ops N stop after N pidfd bogo operations
--ping-sock N start N workers that exercise a ping socket
--ping-sock-ops N stop after N ping sendto messages
-p N, --pipe N start N workers exercising pipe I/O
--pipe-data-size N set pipe size of each pipe write to N bytes
--pipe-ops N stop after N pipe I/O bogo operations
--pipe-size N set pipe size to N bytes
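The pipe stressor loops on pipe write/read round trips. A minimal sketch of one such round trip in Python (illustrative; `--pipe-data-size N` corresponds to varying the payload length here):

```python
import os

def pipe_demo(data=b"ping"):
    # One write()/read() round trip over a pipe -- the unit of
    # --pipe I/O.
    r, w = os.pipe()
    try:
        os.write(w, data)
        return os.read(r, len(data))
    finally:
        os.close(r)
        os.close(w)
```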
--pipeherd N start N multi-process workers exercising pipe I/O
--pipeherd-ops N stop after N pipeherd I/O bogo operations
--pipeherd-yield force processes to yield after each write
--pkey N start N workers exercising pkey_mprotect
--pkey-ops N stop after N bogo pkey_mprotect bogo operations
--plugin N start N workers exercising random plugins
--plugin-method M set plugin stress method
--plugin-ops N stop after N plugin bogo operations
--plugin-so file specify plugin shared object file
-P N, --poll N start N workers exercising zero timeout polling
--poll-fds N use N file descriptors
--poll-ops N stop after N poll bogo operations
--procfs N start N workers reading portions of /proc
--procfs-ops N stop procfs workers after N bogo read operations
--prefetch N start N workers exercising memory prefetching
--prefetch-l3-size N specify the L3 cache size of the CPU
--prefetch-ops N stop after N bogo prefetching operations
--pthread N start N workers that create multiple threads
--pthread-max P create P threads at a time by each worker
--pthread-ops N stop pthread workers after N bogo threads created
--ptrace N start N workers that trace a child using ptrace
--ptrace-ops N stop ptrace workers after N system calls are
traced
--pty N start N workers that exercise pseudoterminals
--pty-max N attempt to open a maximum of N ptys
--pty-ops N stop pty workers after N pty bogo operations
-Q N, --qsort N start N workers qsorting 32 bit random integers
--qsort-ops N stop after N qsort bogo operations
--qsort-size N number of 32 bit integers to sort
--quota N start N workers exercising quotactl commands
--quota-ops N stop after N quotactl bogo operations
--race-sched N start N workers that race cpu affinity
--race-sched-ops N stop after N bogo race operations
--race-sched-method M method M: all, rand, next, prev, yoyo, randinc
--radixsort N start N workers radix sorting random strings
--radixsort-ops N stop after N radixsort bogo operations
--radixsort-size N number of strings to sort
--randlist N start N workers that exercise random ordered list
--randlist-compact reduce mmap and malloc overheads
--randlist-items N number of items in the random ordered list
--randlist-ops N stop after N randlist bogo no-op operations
--randlist-size N size of data in each item in the list
--ramfs N start N workers exercising ramfs mounts
--ramfs-size N set the ramfs size in bytes, e.g. 2M is 2MB
--ramfs-fill attempt to fill ramfs
--ramfs-ops N stop after N bogo ramfs mount operations
--rawdev N start N workers that read a raw device
--rawdev-method M specify the rawdev read method to use
--rawdev-ops N stop after N rawdev read operations
--rawpkt N start N workers exercising raw packets
--rawpkt-ops N stop after N raw packet bogo operations
--rawpkt-port P use raw packet ports P to P + number of workers -
1
--rawsock N start N workers performing raw socket
send/receives
--rawsock-ops N stop after N raw socket bogo operations
--rawsock-port P use socket P to P + number of workers - 1
--rawudp N start N workers exercising raw UDP socket I/O
--rawudp-if I use network interface I, e.g. lo, eth0, etc.
--rawudp-ops N stop after N raw socket UDP bogo operations
--rawudp-port P use raw socket ports P to P + number of workers -
1
--rdrand N start N workers exercising rdrand (x86 only)
--rdrand-ops N stop after N rdrand bogo operations
--rdrand-seed use rdseed instead of rdrand
--readahead N start N workers exercising file readahead
--readahead-bytes N size of file to readahead on (default is 1GB)
--readahead-ops N stop after N readahead bogo operations
--reboot N start N workers that exercise bad reboot calls
--reboot-ops N stop after N bogo reboot operations
--regs N start N workers exercising CPU generic registers
--regs-ops N stop after N x 1000 rounds of register shuffling
--remap N start N workers exercising page remappings
--remap-ops N stop after N remapping bogo operations
-R, --rename N start N workers exercising file renames
--rename-ops N stop after N rename bogo operations
--resched N start N workers that spawn renicing child
processes
--resched-ops N stop after N bogo nice'd yield operations
--resources N start N workers consuming system resources
--resources-ops N stop after N resource bogo operations
--revio N start N workers performing reverse I/O
--revio-ops N stop after N revio bogo operations
--ring-pipe N start N workers exercising a ring of pipes
--ring-pipe-num number of pipes to use
--ring-pipe-ops N stop after N ring pipe I/O bogo operations
--ring-pipe-size size of data to be written and read
--ring-pipe-splice use splice instead of read+write
--rmap N start N workers that stress reverse mappings
--rmap-ops N stop after N rmap bogo operations
--rseq N start N workers that exercise restartable
sequences
--rseq-ops N stop after N bogo restartable sequence operations
--rtc N start N workers that exercise the RTC interfaces
--rtc-ops N stop after N RTC bogo operations
--schedpolicy N start N workers that exercise scheduling policy
--schedpolicy-ops N stop after N scheduling policy bogo operations
--sctp N start N workers performing SCTP send/receives
--sctp-domain D specify sctp domain, default is ipv4
--sctp-if I use network interface I, e.g. lo, eth0, etc.
--sctp-ops N stop after N SCTP bogo operations
--sctp-port P use SCTP ports P to P + number of workers - 1
--sctp-sched S specify sctp scheduler
--seal N start N workers performing fcntl SEAL commands
--seal-ops N stop after N SEAL bogo operations
--seccomp N start N workers performing seccomp call filtering
--seccomp-ops N stop after N seccomp bogo operations
--secretmem N start N workers that use secretmem mappings
--secretmem-ops N stop after N secretmem bogo operations
--seek N start N workers performing random seek r/w IO
--seek-ops N stop after N seek bogo operations
--seek-punch punch random holes in file to stress extents
--seek-size N length of file to do random I/O upon
--sem N start N workers doing semaphore operations
--sem-ops N stop after N semaphore bogo operations
--sem-procs N number of processes to start per worker
--sem-sysv N start N workers doing System V semaphore
operations
--sem-sysv-ops N stop after N System V sem bogo operations
--sem-sysv-procs N number of processes to start per worker
--sendfile N start N workers exercising sendfile
--sendfile-ops N stop after N bogo sendfile operations
--sendfile-size N size of data to be sent with sendfile
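The sendfile stressor copies data between file descriptors in-kernel. A hedged sketch via the stdlib `os.sendfile` wrapper (Linux allows a regular file as the output fd; stress-ng's own worker sends to /dev/null repeatedly):

```python
import os
import tempfile

def sendfile_demo(payload=b"zero copy"):
    # Copy file contents in-kernel with sendfile(2), then verify the
    # destination received them -- the call --sendfile exercises.
    src_fd, src = tempfile.mkstemp()
    dst_fd, dst = tempfile.mkstemp()
    try:
        os.write(src_fd, payload)
        sent = os.sendfile(dst_fd, src_fd, 0, len(payload))
        os.lseek(dst_fd, 0, os.SEEK_SET)
        return sent, os.read(dst_fd, len(payload))
    finally:
        for fd, path in ((src_fd, src), (dst_fd, dst)):
            os.close(fd)
            os.unlink(path)
```

`--sendfile-size N` corresponds to the payload length here.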
--session N start N workers that exercise new sessions
--session-ops N stop after N session bogo operations
--set N start N workers exercising the set*() system
calls
--set-ops N stop after N set bogo operations
--shellsort N start N workers shell sorting 32 bit random
integers
--shellsort-ops N stop after N shell sort bogo operations
--shellsort-size N number of 32 bit integers to sort
--shm N start N workers that exercise POSIX shared memory
--shm-bytes N allocate/free N bytes of POSIX shared memory
--shm-ops N stop after N POSIX shared memory bogo operations
--shm-segs N allocate N POSIX shared memory segments per
iteration
--shm-sysv N start N workers that exercise System V shared
memory
--shm-sysv-bytes N allocate and free N bytes of shared memory per
loop
--shm-sysv-ops N stop after N shared memory bogo operations
--shm-sysv-segs N allocate N shared memory segments per iteration
--sigabrt N start N workers generating SIGABRT signals
--sigabrt-ops N stop after N bogo SIGABRT signals
--sigchld N start N workers that handle SIGCHLD
--sigchld-ops N stop after N bogo SIGCHLD signals
--sigfd N start N workers reading signals via signalfd
reads
--sigfd-ops N stop after N bogo signalfd reads
--sigfpe N start N workers generating floating point math
faults
--sigfpe-ops N stop after N bogo floating point math faults
--sigio N start N workers that exercise SIGIO signals
--sigio-ops N stop after N bogo sigio signals
--signal N start N workers that exercise signal
--signal-ops N stop after N bogo signals
--signest N start N workers generating nested signals
--signest-ops N stop after N bogo nested signals
--sigpending N start N workers exercising sigpending
--sigpending-ops N stop after N sigpending bogo operations
--sigpipe N start N workers exercising SIGPIPE
--sigpipe-ops N stop after N SIGPIPE bogo operations
--sigq N start N workers sending sigqueue signals
--sigq-ops N stop after N sigqueue bogo operations
--sigrt N start N workers sending real time signals
--sigrt-ops N stop after N real time signal bogo operations
--sigsegv N start N workers generating segmentation faults
--sigsegv-ops N stop after N bogo segmentation faults
--sigsuspend N start N workers exercising sigsuspend
--sigsuspend-ops N stop after N bogo sigsuspend wakes
--sigtrap N start N workers generating SIGTRAP signals
--sigtrap-ops N stop after N bogo SIGTRAP signals
--skiplist N start N workers that exercise a skiplist search
--skiplist-ops N stop after N skiplist search bogo operations
--skiplist-size N number of 32 bit integers to add to skiplist
--sleep N start N workers performing various duration
sleeps
--sleep-max P create P threads at a time by each worker
--sleep-ops N stop after N bogo sleep operations
--smi N start N workers that trigger SMIs
--smi-ops N stop after N SMIs have been triggered
-S N, --sock N start N workers exercising socket I/O
--sock-domain D specify socket domain, default is ipv4
--sock-if I use network interface I, e.g. lo, eth0, etc.
--sock-nodelay disable Nagle algorithm, send data immediately
--sock-ops N stop after N socket bogo operations
--sock-opts option socket options [send|sendmsg|sendmmsg]
--sock-port P use socket ports P to P + number of workers - 1
--sock-protocol use socket protocol P, default is tcp, can be
mptcp
--sock-type T socket type (stream, seqpacket)
--sock-zerocopy enable zero copy sends
--sockabuse N start N workers abusing socket I/O
--sockabuse-ops N stop after N socket abusing bogo operations
--sockabuse-port P use socket ports P to P + number of workers - 1
--sockdiag N start N workers exercising sockdiag netlink
--sockdiag-ops N stop sockdiag workers after N bogo messages
--sockfd N start N workers sending file descriptors over
sockets
--sockfd-ops N stop after N sockfd bogo operations
--sockfd-port P use socket fd ports P to P + number of workers -
1
--sockpair N start N workers exercising socket pair I/O
activity
--sockpair-ops N stop after N socket pair bogo operations
--sockmany N start N workers exercising many socket
connections
--sockmany-if I use network interface I, e.g. lo, eth0, etc.
--sockmany-ops N stop after N sockmany bogo operations
--sockmany-port P use socket ports P to P + number of workers - 1
--softlockup N start N workers that cause softlockups
--softlockup-ops N stop after N softlockup bogo operations
--spawn N start N workers spawning stress-ng using
posix_spawn
--spawn-ops N stop after N spawn bogo operations
--sparsematrix N start N workers that exercise a sparse matrix
--sparsematrix-items N N is the number of items in the sparse matrix
--sparsematrix-method M select storage method: all, hash, judy, list or rb
--sparsematrix-ops N stop after N bogo sparse matrix operations
--sparsematrix-size N N is the width and height X x Y of the matrix
--splice N start N workers reading/writing using splice
--splice-bytes N number of bytes to transfer per splice call
--splice-ops N stop after N bogo splice operations
--stack N start N workers generating stack overflows
--stack-fill fill stack, touches all new pages
--stack-mlock mlock stack, force pages to be unswappable
--stack-ops N stop after N bogo stack overflows
--stack-pageout use madvise to try to swap out stack
--stackmmap N start N workers exercising a filebacked stack
--stackmmap-ops N stop after N bogo stackmmap operations
--str N start N workers exercising lib C string functions
--str-method func specify the string function to stress
--str-ops N stop after N bogo string operations
--stream N start N workers exercising memory bandwidth
--stream-index specify number of indices into the data (0..3)
--stream-l3-size N specify the L3 cache size of the CPU
--stream-madvise M specify mmap'd stream buffer madvise advice
--stream-ops N stop after N bogo stream operations
--swap N start N workers exercising swapon/swapoff
--swap-ops N stop after N swapon/swapoff operations
-s N, --switch N start N workers doing rapid context switches
--switch-freq N set frequency of context switches
--switch-method M mq | pipe | sem-sysv
--switch-ops N stop after N context switch bogo operations
--symlink N start N workers creating symbolic links
--symlink-ops N stop after N symbolic link bogo operations
--symlink-sync enable sync'ing after symlinking/unsymlinking
--sync-file N start N workers exercising sync_file_range
--sync-file-bytes N size of file to be sync'd
--sync-file-ops N stop after N sync_file_range bogo operations
--syncload N start N workers that synchronize load spikes
--syncload-msbusy M maximum busy duration in milliseconds
--syncload-mssleep M maximum sleep duration in milliseconds
--syncload-ops N stop after N syncload bogo operations
--sysbadaddr N start N workers that pass bad addresses to
syscalls
--sysbadaddr-ops N stop after N sysbadaddr bogo syscalls
--syscall N start N workers that exercise a wide range of
system calls
--syscall-ops N stop after N syscall bogo operations
--sysinfo N start N workers reading system information
--sysinfo-ops N stop after N sysinfo bogo operations
--sysinval N start N workers that pass invalid args to
syscalls
--sysinval-ops N stop after N sysinval bogo syscalls
--sysfs N start N workers reading files from /sys
--sysfs-ops N stop after N sysfs bogo operations
--tee N start N workers exercising the tee system call
--tee-ops N stop after N tee bogo operations
-T N, --timer N start N workers producing timer events
--timer-freq F run timer(s) at F Hz, range 1 to 1000000000
--timer-ops N stop after N timer bogo events
--timer-rand enable random timer frequency
--timerfd N start N workers producing timerfd events
--timerfd-fds N number of timerfd file descriptors to open
--timerfd-freq F run timer(s) at F Hz, range 1 to 1000000000
--timerfd-ops N stop after N timerfd bogo events
--timerfd-rand enable random timerfd frequency
--tlb-shootdown N start N workers that force TLB shootdowns
--tlb-shootdown-ops N stop after N TLB shootdown bogo ops
--tmpfs N start N workers mmap'ing a file on tmpfs
--tmpfs-mmap-async using asynchronous msyncs for tmpfs file based
mmap
--tmpfs-mmap-file mmap onto a tmpfs file using synchronous msyncs
--tmpfs-ops N stop after N tmpfs bogo ops
--touch N start N stressors that touch and remove files
--touch-method specify method to touch the file, open | create
--touch-ops N stop after N touch bogo operations
--touch-opts touch open options
all,direct,dsync,excl,noatime,sync,trunc
--tree N start N workers that exercise tree structures
--tree-method M select tree method: all,avl,binary,btree,rb,splay
--tree-ops N stop after N bogo tree operations
--tree-size N N is the number of items in the tree
--tsc N start N workers reading the time stamp counter
--tsc-ops N stop after N TSC bogo operations
--tsearch N start N workers that exercise a tree search
--tsearch-ops N stop after N tree search bogo operations
--tsearch-size N number of 32 bit integers to tsearch
--tun N start N workers exercising tun interface
--tun-ops N stop after N tun bogo operations
--tun-tap use TAP interface instead of TUN
--udp N start N workers performing UDP send/receives
--udp-domain D specify domain, default is ipv4
--udp-gro enable UDP-GRO
--udp-if I use network interface I, e.g. lo, eth0, etc.
--udp-lite use the UDP-Lite (RFC 3828) protocol
--udp-ops N stop after N udp bogo operations
--udp-port P use ports P to P + number of workers - 1
--udp-flood N start N workers that performs a UDP flood attack
--udp-flood-domain D specify domain, default is ipv4
--udp-flood-if I use network interface I, e.g. lo, eth0, etc.
--udp-flood-ops N stop after N udp flood bogo operations
--unshare N start N workers exercising resource unsharing
--unshare-ops N stop after N bogo unshare operations
--uprobe N start N workers that generate uprobe events
--uprobe-ops N stop after N uprobe events
-u N, --urandom N start N workers reading /dev/urandom
--urandom-ops N stop after N urandom bogo read operations
--userfaultfd N start N page faulting workers with userspace
handling
--userfaultfd-ops N stop after N page faults have been handled
--usersyscall N start N workers exercising a userspace system
call handler
--usersyscall-ops N stop after N successful SIGSYS system callls
--utime N start N workers updating file timestamps
--utime-fsync force utime meta data sync to the file system
--utime-ops N stop after N utime bogo operations
--vdso N start N workers exercising functions in the VDSO
--vdso-func F use just vDSO function F
--vdso-ops N stop after N vDSO function calls
--vecfp N start N workers performing vector math ops
--vecfp-ops N stop after N vector math bogo operations
--vecmath N start N workers performing vector math ops
--vecmath-ops N stop after N vector math bogo operations
--vecshuf N start N workers performing vector shuffle ops
--vecshuf-method M select vector shuffling method
--vecshuf-ops N stop after N vector shuffle bogo operations
--vecwide N start N workers performing vector math ops
--vecwide-ops N stop after N vector math bogo operations
--verity N start N workers exercising file verity ioctls
--verity-ops N stop after N file verity bogo operations
--vfork N start N workers spinning on vfork() and exit()
--vfork-ops N stop after N vfork bogo operations
--vfork-max P create P processes per iteration, default is 1
--vforkmany N start N workers spawning many vfork children
--vforkmany-ops N stop after spawning N vfork children
--vforkmany-vm enable extra virtual memory pressure
-m N, --vm N start N workers spinning on anonymous mmap
--vm-bytes N allocate N bytes per vm worker (default 256MB)
--vm-hang N sleep N seconds before freeing memory
--vm-keep redirty memory instead of reallocating
--vm-locked lock the pages of the mapped region into memory
--vm-madvise M specify mmap'd vm buffer madvise advice
--vm-method M specify stress vm method M, default is all
--vm-ops N stop after N vm bogo operations
--vm-populate populate (prefault) page tables for a mapping
--vm-addr N start N vm address exercising workers
--vm-addr-ops N stop after N vm address bogo operations
--vm-rw N start N vm read/write process_vm* copy workers
--vm-rw-bytes N transfer N bytes of memory per bogo operation
--vm-rw-ops N stop after N vm process_vm* copy bogo operations
--vm-segv N start N workers that unmap their address space
--vm-segv-ops N stop after N vm-segv unmap'd SEGV faults
--vm-splice N start N workers reading/writing using vmsplice
--vm-splice-bytes N number of bytes to transfer per vmsplice call
--vm-splice-ops N stop after N bogo splice operations
--wait N start N workers waiting on child being
stop/resumed
--wait-ops N stop after N bogo wait operations
--watchdog N start N workers that exercise /dev/watchdog
--watchdog-ops N stop after N bogo watchdog operations
--wcs N start N workers on lib C wide char string
functions
--wcs-method func specify the wide character string function to
stress
--wcs-ops N stop after N bogo wide character string
operations
--x86syscall N start N workers exercising functions using
syscall
--x86syscall-func F use just syscall function F
--x86syscall-ops N stop after N syscall function calls
--xattr N start N workers stressing file extended
attributes
--xattr-ops N stop after N bogo xattr operations
-y N, --yield N start N workers doing sched_yield() calls
--yield-ops N stop after N bogo yield operations
--zero N start N workers reading /dev/zero
--zero-ops N stop after N /dev/zero bogo read operations
--zlib N start N workers compressing data with zlib
--zlib-level L specify zlib compression level 0=fast, 9=best
--zlib-mem-level L specify zlib compression state memory usage
1=minimum, 9=maximum
--zlib-method M specify zlib random data generation method M
--zlib-ops N stop after N zlib bogo compression operations
--zlib-strategy S specify zlib strategy 0=default, 1=filtered,
2=huffman only, 3=rle, 4=fixed
--zlib-stream-bytes S specify the number of bytes to deflate until the
current stream will be closed
--zlib-window-bits W specify zlib window bits -8-(-15) | 8-15 | 24-31
| 40-47
--zombie N start N workers that rapidly create and reap
zombies
--zombie-max N set upper limit of N zombies per worker
--zombie-ops N stop after N bogo zombie fork operations

Example: stress-ng --cpu 8 --io 4 --vm 2 --vm-bytes 128M --fork 4 --timeout 10s

Note: Sizes can be suffixed with B,K,M,G and times with s,m,h,d,y

Container Memory Limits

Docker can enforce a hard memory limit, allowing the container to use no more than a given amount of memory.

Docker can also apply a non-hard (soft) memory limit: the container may use as much memory as it wants, unless the kernel detects that memory is running low on the host.

Memory-related options

Official documentation: https://docs.docker.com/config/containers/resource_constraints/

Most of the options below take a positive integer, optionally followed by a suffix b, k, m, or g, meaning bytes, kilobytes, megabytes, or gigabytes.

Option    Description
-m, --memory=    Maximum amount of physical memory the container may use. A hard limit; the minimum allowed value is 4m (4 MB). Commonly used.
--memory-swap    Amount of memory this container is allowed to swap to disk. Only usable when -m also sets a memory limit; see the detailed notes below.
--memory-swappiness    The container's tendency to use swap, in the range 0-100. Higher values favor swapping: 0 means avoid swap whenever possible, 100 means swap freely; a value of N means swap space starts being used once memory usage reaches N%.
--memory-reservation    A soft limit smaller than --memory, activated when Docker detects contention or low memory on the host. When --memory-reservation is used, it must be set lower than --memory for it to take precedence. Because it is a soft limit, there is no guarantee the container stays below it.
--kernel-memory    Maximum kernel memory the container may use; minimum 4m. Kernel memory is isolated from user-space memory and cannot be swapped, so a container starved of kernel memory may block host resources, affecting the host and other containers or service processes. Setting a kernel memory limit is therefore not recommended.
--oom-kill-disable    By default, when an out-of-memory (OOM) error occurs, the kernel kills processes in the container. Use --oom-kill-disable to change this behavior, and only on containers that also set -m/--memory. If -m is not set, the host can run out of memory and the kernel may have to kill host system processes to free memory.
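
The size suffixes these options accept can be sketched as a small conversion helper (`to_bytes` is a hypothetical function for illustration, not part of Docker):

```shell
# to_bytes: expand a Docker-style size string (4m, 1g, 512k, ...) to bytes.
# A hypothetical helper mirroring the b/k/m/g suffix rule above, not Docker code.
to_bytes() {
  v=$1
  num=${v%[bkmgBKMG]}          # numeric part
  suf=${v#"$num"}              # suffix, possibly empty
  case $suf in
    [bB]|"") mul=1 ;;
    [kK])    mul=1024 ;;
    [mM])    mul=$((1024*1024)) ;;
    [gG])    mul=$((1024*1024*1024)) ;;
  esac
  echo $(( num * mul ))
}
to_bytes 4m   # 4194304 -- the smallest value -m/--memory accepts
to_bytes 1g   # 1073741824
```

These byte values are what later shows up in the cgroup file memory.limit_in_bytes.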
[root@ubuntu1804 ~]# docker run -e MYSQL_ROOT_PASSWORD=123456 -it --rm -m 1g --oom-kill-disable mysql:5.7.30
2020-02-04 13:11:54+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.29-1debian9 started.
2020-02-04 13:11:54+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2020-02-04 13:11:54+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.29-1debian9 started.
2020-02-04 13:11:54+00:00 [Note] [Entrypoint]: Initializing database files
......
Version: '5.7.29' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL
Community Server (GPL)

Example:

[root@ubuntu1804 ~]# sysctl -a |grep swappiness
sysctl: reading key "net.ipv6.conf.all.stable_secret"
sysctl: reading key "net.ipv6.conf.default.stable_secret"
sysctl: reading key "net.ipv6.conf.docker0.stable_secret"
sysctl: reading key "net.ipv6.conf.eth0.stable_secret"
sysctl: reading key "net.ipv6.conf.lo.stable_secret"
vm.swappiness = 60

Swap limits

Kubernetes' requirements for swap

Kubernetes 1.8.3 changelog: if swap is enabled on the host, the pre-installation preflight check reports an error: https://github.com/kubernetes/kubernetes/blob/release-1.8/CHANGELOG-1.8.md


The docker run command controls swap usage with the --memory-swap option

--memory-swap #Only meaningful when --memory is also set. With swap, the container can page memory beyond its limit out to disk. WARNING: applications that frequently swap memory to disk will see degraded performance

**Different --memory-swap settings produce different effects:**

--memory-swap    --memory    Effect
positive S    positive M    The container's total memory space is S: M of RAM and S-M of swap; if S=M, no swap is available
0    positive M    Equivalent to --memory-swap being unset
unset    positive M    If the Docker host has swap enabled, the container's available swap is 2*M
-1    positive M    If the Docker host has swap enabled, the container may use up to all of the host's swap space
--memory-swap     #With a positive value, both --memory and --memory-swap must be set; --memory-swap is
the total of memory plus swap the container may use. For example, with --memory=300m and
--memory-swap=1g, the container can use 300m of physical memory and 700m of swap; --memory remains
the physical memory size, and usable swap is (--memory-swap)-(--memory)
--memory-swap #If set to 0, the setting is ignored and treated as unset, i.e. swap is not configured
--memory-swap #If equal to --memory, with --memory set to a positive integer, the container has no access to swap
--memory-swap #If unset while the host has swap enabled, the container's swap is capped at 2x(--memory),
i.e. twice the physical memory; for example, with --memory=300m and --memory-swap unset, the
container can use 300m of total memory and 600m of swap. This is not precise: the swap size shown
by free inside a container is inaccurate, since every container sees the host's swap, whose total
is bounded and is not the sum of what all containers see
--memory-swap #If set to -1 while the host has swap enabled, the container may use up to all of the host's swap space

Note: running free inside a container shows the host's memory and swap usage, not the container's own swap usage
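
The rules above can be condensed into a small sketch. `container_swap` is a hypothetical helper that mirrors this document's table, working in MiB for readability (it follows the text's claim that an unset --memory-swap allows up to 2x --memory of swap):

```shell
# container_swap MEM MEMSWAP: usable swap (in MiB) for a container, per the
# --memory-swap rules above. MEMSWAP may be a size, 0, -1, or "unset".
# Illustrative only -- this is not a Docker API, just the table as code.
container_swap() {
  mem=$1; memswap=$2
  case $memswap in
    -1)      echo "all host swap" ;;       # cap at the host's entire swap space
    0|unset) echo $(( mem * 2 )) ;;        # up to 2x --memory if the host has swap
    *)       echo $(( memswap - mem )) ;;  # total minus RAM; 0 when equal
  esac
}
container_swap 300 1000   # --memory=300m --memory-swap=1000m -> 700
container_swap 300 unset  # host swap enabled -> 600
container_swap 300 300    # equal values -> 0, no swap access
```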

Example: viewing memory inside a container

[root@ubuntu1804 ~]# free 
total used free shared buff/cache available
Mem: 3049484 278484 1352932 10384 1418068 2598932
Swap: 1951740 0 1951740

[root@ubuntu1804 ~]# docker run -it --rm -m 2G centos:centos7.7.1908 bash
[root@f5d387b5022f /]# free
total used free shared buff/cache available
Mem: 3049484 310312 1320884 10544 1418288 2566872
Swap: 1951740 0 1951740

Testing memory limits with stress-ng

If no memory limit is set on a container, it can consume up to the system's entire memory; by default, containers are created without any memory limit.

Example: by default each worker allocates 256M, so two workers use about 512M of memory

[root@ubuntu1804 ~]# docker run --name c1 -it --rm lorel/docker-stress-ng --vm 2 

#The previous command runs in the foreground; run the following in another terminal -- about 512M of memory is in use
[root@ubuntu1804 ~]#docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
fd184869ff7e c1 91.00% 524.3MiB / 962MiB
54.50% 766B / 0B 860kB / 0B 5

Example: setting a maximum memory

[root@ubuntu1804 ~]# docker run --name c1 -it --rm -m 300m lorel/docker-stress-ng --vm 2 

[root@ubuntu1804 ~]# vim /etc/default/grub
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1 net.ifnames=0"

[root@ubuntu1804 ~]# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-29-generic
Found initrd image: /boot/initrd.img-4.15.0-29-generic
done

[root@ubuntu1804 ~]# reboot
[root@ubuntu1804 ~]# docker run --name c1 -it --rm -m 300m lorel/docker-stress-ng --vm 2

#Run in another terminal
[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
6a93f6b22034 c1 27.06% 297.2MiB / 300MiB
99.07% 1.45kB / 0B 4.98GB / 5.44GB 5

Example:

[root@ubuntu1804 ~]# docker run --name c2 -it --rm lorel/docker-stress-ng --vm 4 
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 vm

#Check resource usage in one shot
[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
fd5fff3c04f7 c2 21.20% 591.1MiB / 962MiB
61.45% 1.31kB / 0B 1.07GB / 46.6MB 9

Example: container memory consumption triggering OOM

[root@ubuntu1804 ~]# docker run -it --rm --name c1 lorel/docker-stress-ng --vm 6 
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 6 vm

#Run the following in another terminal at the same time
[root@ubuntu1804 ~]# docker run -it --rm --name c2 lorel/docker-stress-ng --vm 6
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 6 vm

[root@ubuntu1804 ~]# docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
f33cebf5b55d c2 -- -- / --
-- -- -- --
b14b597c5a4f cool_banach -- -- / --
-- -- -- --

#The log shows the OOM event
[root@ubuntu1804 ~]# tail /var/log/syslog
Feb 4 22:59:40 ubuntu1804 kernel: [ 785.928842] Out of memory: Kill process 2570 (stress-ng-vm) score 1090 or sacrifice child
Feb 4 22:59:40 ubuntu1804 kernel: [ 785.929493] Killed process 2570 (stress-ng-vm) total-vm:268416kB, anon-rss:170352kB, file-rss:632kB, shmem-rss:28kB
Feb 4 22:59:40 ubuntu1804 kernel: [ 786.018319] oom_reaper: reaped process 2570 (stress-ng-vm), now anon-rss:0kB, file-rss:0kB, shmem-rss:28kB

Example: viewing the memory limit

#Start two workers, each allowed up to 256M of memory, with no memory limit on the container itself
[root@ubuntu1804 ~]# docker run -it --name c1 --rm lorel/docker-stress-ng --vm 2
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm

[root@ubuntu1804 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
13e46172e1ae lorel/docker-stress-ng "/usr/bin/stress-ng …" 24 seconds
ago Up 22 seconds gallant_moore

[root@ubuntu1804 ~]# ls /sys/fs/cgroup/memory/docker/
13e46172e1ae8593569f05a3bebc7b41b7839da44369d43b29102661364ac2cd
memory.kmem.tcp.limit_in_bytes memory.numa_stat
cgroup.clone_children
memory.kmem.tcp.max_usage_in_bytes memory.oom_control
cgroup.event_control
memory.kmem.tcp.usage_in_bytes memory.pressure_level
cgroup.procs
memory.kmem.usage_in_bytes memory.soft_limit_in_bytes
memory.failcnt
memory.limit_in_bytes memory.stat
memory.force_empty
memory.max_usage_in_bytes memory.swappiness
memory.kmem.failcnt
memory.memsw.failcnt memory.usage_in_bytes
memory.kmem.limit_in_bytes
memory.memsw.limit_in_bytes memory.use_hierarchy
memory.kmem.max_usage_in_bytes
memory.memsw.max_usage_in_bytes notify_on_release
memory.kmem.slabinfo
memory.memsw.usage_in_bytes tasks
memory.kmem.tcp.failcnt
memory.move_charge_at_immigrate

[root@ubuntu1804 ~]# cat /sys/fs/cgroup/memory/docker/13e46172e1ae8593569f05a3bebc7b41b7839da44369d43b29102661364ac2cd/memory.limit_in_bytes
9223372036854771712

[root@ubuntu1804 ~]# echo 2^63|bc
9223372036854775808
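
The small difference between the two values is the kernel storing "no limit" as 2^63 rounded down to the 4 KiB page size. This can be verified with shell arithmetic (2^63 itself would overflow a signed 64-bit word, so it is derived from 2^63-1):

```shell
# memory.limit_in_bytes for an unlimited container is 2^63 rounded down to
# the 4096-byte page size: (2^63 - 1) - 4095 = 2^63 - 4096
echo $(( 9223372036854775807 - 4095 ))   # 9223372036854771712
```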

Example: a 200m memory limit

#Limit the container's maximum memory from the host:
[root@ubuntu1804 ~]# docker run -it --rm --name c1 -m 200M lorel/docker-stress-ng --vm 2 --vm-bytes 256M

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
f69729b2acc1 sleepy_haibt 85.71% 198MiB / 200MiB
98.98% 1.05kB / 0B 697MB / 60.4GB 5

#Check the cgroup-based memory limit the host imposes on the container
[root@ubuntu1804 ~]# cat /sys/fs/cgroup/memory/docker/f69729b2acc16e032658a4efdab64d21ff97dcb6746d1cef451ed82d5c98a81f/memory.limit_in_bytes
209715200

[root@ubuntu1804 ~]# echo 209715200/1024/1024|bc
200

#Dynamically change the memory limit
[root@ubuntu1804 ~]# echo 300*1024*1024|bc
314572800

[root@ubuntu1804 ~]# echo 314572800 > /sys/fs/cgroup/memory/docker/f69729b2acc16e032658a4efdab64d21ff97dcb6746d1cef451ed82d5c98a81f/memory.limit_in_bytes
[root@ubuntu1804 ~]# cat /sys/fs/cgroup/memory/docker/f69729b2acc16e032658a4efdab64d21ff97dcb6746d1cef451ed82d5c98a81f/memory.limit_in_bytes
314572800

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
f69729b2acc1 sleepy_haibt 76.69% 297.9MiB / 300MiB
99.31% 1.05kB / 0B 1.11GB / 89.1GB 5

#The limit can be changed with echo, but only increased from its current setting; shrinking it fails with write
error: Device or resource busy

[root@ubuntu1804 ~]# echo 209715200 > /sys/fs/cgroup/memory/docker/f69729b2acc16e032658a4efdab64d21ff97dcb6746d1cef451ed82d5c98a81f/memory.limit_in_bytes
-bash: echo: write error: Device or resource busy

[root@ubuntu1804 ~]# cat /sys/fs/cgroup/memory/docker/f69729b2acc16e032658a4efdab64d21ff97dcb6746d1cef451ed82d5c98a81f/memory.limit_in_bytes
314572800

Example: a soft memory limit

[root@ubuntu1804 ~]# docker run -it --rm -m 256m --memory-reservation 128m --name c1 lorel/docker-stress-ng --vm 2 --vm-bytes 256M 
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
aeb38acde581 c1 72.45% 253.9MiB / 256MiB 99.20%
976B / 0B 9.47GB / 39.4GB 5

#View the hard limit
[root@ubuntu1804 ~]# cat /sys/fs/cgroup/memory/docker/aeb38acde58155d421f998a54e9a99ab60635fe00c9070da050cc49a2f62d274/memory.limit_in_bytes
268435456

#View the soft limit
[root@ubuntu1804 ~]# cat /sys/fs/cgroup/memory/docker/aeb38acde58155d421f998a54e9a99ab60635fe00c9070da050cc49a2f62d274/memory.soft_limit_in_bytes
134217728

#The soft limit cannot exceed the hard limit
[root@ubuntu1804 ~]# docker run -it --rm --name c1 -m 256m --memory-reservation 257m --name c1 lorel/docker-stress-ng --vm 2 --vm-bytes 256M
docker: Error response from daemon: Minimum memory limit can not be less than memory reservation limit, see usage.
See 'docker run --help'.

Disabling the OOM killer:

# docker run -it --rm -m 256m --oom-kill-disable --name c1 lorel/docker-stress-ng --vm 2 --vm-bytes 256M 
# cat /sys/fs/cgroup/memory/docker/<container ID>/memory.oom_control
oom_kill_disable 1
under_oom 1
oom_kill 0

Example: disabling the OOM killer

#View Docker's default OOM-killer setting
[root@ubuntu1804 ~]# cat /sys/fs/cgroup/memory/docker/memory.oom_control
oom_kill_disable 0
under_oom 0
oom_kill 0

#Disable the OOM killer when starting the container
[root@ubuntu1804 ~]# docker run -it --rm -m 200m --name c1 --oom-kill-disable lorel/docker-stress-ng --vm 2 --vm-bytes 256M
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
b655d88228c0 silly_borg 0.00% 197.2MiB / 200MiB
98.58% 1.31kB / 0B 1.84MB / 484MB 5

[root@ubuntu1804 ~]# cat /sys/fs/cgroup/memory/docker/b655d88228c04d7db6a6ad833ed3d05d4cd596ef09834382e17942db0295dc0c/memory.oom_control
oom_kill_disable 1
under_oom 1
oom_kill 0

**Swap limit:**

# docker run -it --rm -m 256m --memory-swap 512m --name c1 centos bash 

# cat /sys/fs/cgroup/memory/docker/<container ID>/memory.memsw.limit_in_bytes
536870912 #returned value

Example:

[root@ubuntu1804 ~]#docker run -it --rm --name c1 -m 200m --memory-swap 512m lorel/docker-stress-ng --vm 2 
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 2 vm

#Verify via the host's cgroup:
[root@ubuntu1804 ~]#cat /sys/fs/cgroup/memory/docker/23733a0cafa21f3e94ca8c96110978b12e53076261f1b92fd2052bafe659c8ab/memory.memsw.limit_in_bytes
536870912

Container CPU Limits

Introduction to container CPU limits

Official documentation: https://docs.docker.com/config/containers/resource_constraints/

A host with a few dozen CPU cores can still run hundreds or thousands of processes handling different tasks. CPU cores shared by multiple processes are a compressible resource: through scheduling, one core can run many processes, but only one process actually runs on a core at any instant. So how are all these processes executed and scheduled on the CPU?

Process scheduling in the Linux kernel is based on CFS (Completely Fair Scheduler).

Resource-intensive server workloads

  • CPU-bound scenarios: compute-intensive tasks perform large amounts of computation and consume CPU, e.g. calculating pi, data processing, or HD video decoding; they rely entirely on CPU power.
  • IO-bound scenarios: tasks involving network or disk IO. They consume little CPU; most of their time is spent waiting for IO operations to complete, since IO is far slower than CPU and memory. For web applications -- high-concurrency, data-heavy dynamic sites -- the database is typically IO-bound.

How CFS works

CFS defines a new process-scheduling model: it assigns every process in the cfs_rq (the CFS run queue) a virtual clock, vruntime. While a process runs, its vruntime keeps growing over time; a process that does not get to run keeps its vruntime unchanged. The scheduler always picks the process whose vruntime has advanced the least -- this is what "completely fair" means. To distinguish priorities, a higher-priority process's vruntime grows more slowly, so it tends to get more chances to run. The value of CFS is that, on a system mixing many compute-bound processes with interactive IO processes, it treats the IO-interactive processes more fairly and favorably than other schedulers do.
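
The pick-the-smallest-vruntime idea can be sketched in a few lines of shell. This is a toy model, not the kernel algorithm; the two task names, the weights, and the time slice of 10 are made-up numbers for illustration:

```shell
# Toy CFS: always run the task with the smallest vruntime; the higher-weight
# task's vruntime grows more slowly, so it gets picked more often.
vr_hi=0; vr_lo=0     # virtual runtimes of two tasks
w_hi=2;  w_lo=1      # "hi" has twice the weight of "lo"
order=""
for _ in 1 2 3 4 5 6; do
  if [ "$vr_lo" -lt "$vr_hi" ]; then
    order="$order lo"; vr_lo=$(( vr_lo + 10 / w_lo ))
  else
    order="$order hi"; vr_hi=$(( vr_hi + 10 / w_hi ))
  fi
done
echo "$order"        # "hi" is scheduled twice as often as "lo"
```

Over the six rounds, "hi" runs four times and "lo" twice -- exactly the 2:1 weight ratio, which is the same mechanism --cpu-shares relies on below.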

Configuring the default CFS scheduler

By default, each container's access to the host's CPU cycles is unlimited. Various constraints can be set to limit a given container's access to them. Most users use and configure the default CFS scheduler; in Docker 1.13 and later, the realtime scheduler can also be configured.

CFS is the Linux kernel CPU scheduler for normal Linux processes. Several runtime flags configure how much CPU a container may use; when these settings are used, Docker modifies the container's cgroup settings on the host.

Option    Description
--cpus=    How many of the available CPU cores a container may use. For example, on a two-CPU host, --cpus="1.5" guarantees the container at most 1.5 CPUs' worth (on a 4-core host the usage may be spread a little across every core, but still totals 1.5 cores). Equivalent to --cpu-period="100000" with --cpu-quota="150000". Available in Docker 1.13 and later; intended to replace --cpu-period and --cpu-quota for simpler configuration. It cannot exceed the host's total core count (as the OS sees it, after hyper-threading). Commonly used.
--cpu-period=    Legacy option: the CPU CFS scheduler period, which must be used together with --cpu-quota. Defaults to 100000 microseconds (100 ms); most users keep the default. On Docker 1.13 or later, use --cpus instead.
--cpu-quota=    Legacy option: adds a CPU CFS quota to the container; the effective limit is cpu-quota / cpu-period. On Docker 1.13 and later, --cpus is usually used to set this.
--cpuset-cpus    Pins the container to specific CPU numbers (CPU affinity). Accepts a comma-separated list or hyphen-separated ranges of CPUs; the first CPU is numbered 0. Valid values look like 0-3 (use the first through fourth CPUs) or 1,3 (use the second and fourth).
--cpu-shares    The relative maximum weight used in CFS scheduling; containers with a higher cpu-shares value get more time slices (with the host's cores totalling 100%, if container A has 1024 and container B has 2048, B's ceiling is twice A's available CPU). The default weight is 1024, the maximum 262144. This is a soft limit. Note: the effect is only visible when there are more processes than CPU cores; do not set this value too small.
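
Two of the relationships in this table reduce to simple arithmetic; the sketch below checks them (the 6-core host and the 1024/2048 share values are illustrative numbers, not measurements):

```shell
# --cpus is sugar for a quota over a period: 150000/100000 -> 1.5 CPUs (150%).
period=100000; quota=150000
echo "cpu cap: $(( quota * 100 / period ))%"

# --cpu-shares is relative: if containers A (1024) and B (2048) both saturate
# a 6-core host, their ceilings split the 600% of CPU in proportion to shares.
a=1024; b=2048; cores=6
echo "A: $(( a * cores * 100 / (a + b) ))%  B: $(( b * cores * 100 / (a + b) ))%"
```

This matches the docker stats output later in this section, where containers contending with a 2:1 share ratio settle at roughly a 2:1 CPU% ratio.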

Testing CPU settings with stress-ng

Example: view stress-ng's CPU-related help

[root@ubuntu1804 ~]#docker run -it --rm --name c1 lorel/docker-stress-ng |grep cpu
-c N, --cpu N start N workers spinning on sqrt(rand())
--cpu-ops N stop when N cpu bogo operations completed
-l P, --cpu-load P load CPU by P %%, 0=sleep, 100=full load (see -c)
--cpu-method m specify stress cpu method m, default is all
Example: stress-ng --cpu 8 --io 4 --vm 2 --vm-bytes 128M --fork 4 --timeout 10s

Example: no CPU limit on the container

[root@ubuntu1804 ~]# lscpu |grep CPU
CPU op-mode(s): 32-bit, 64-bit
CPU(s): 6
On-line CPU(s) list: 0-5
CPU family: 6
Model name: Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz
CPU MHz: 2494.236
NUMA node0 CPU(s): 0-5

#Consumes 4 CPUs' worth of resources, spread evenly across the cores
[root@ubuntu1804 ~]# docker run -it --rm --name c1 lorel/docker-stress-ng --cpu 4
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 cpu, 4 vm

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
818a85e1da2f frosty_taussig 595.57% 1.037GiB / 2.908GiB
35.64% 1.12kB / 0B 0B / 0B 13

[root@ubuntu1804 ~]# cat /sys/fs/cgroup/cpuset/docker/818a85e1da2f9a4ef297178a9dc09b338b2308108195ad8d4197a1c47febcbff/cpuset.cpus
0-5

[root@ubuntu1804 ~]# top


Example: limiting CPU usage

[root@ubuntu1804 ~]# docker run -it --rm --name c1 --cpus 1.5 lorel/docker-stress-ng --cpu 4 
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 cpu, 4 vm

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
9f8b2e693113 busy_hodgkin 147.71% 786.8MiB / 2.908GiB
26.42% 836B / 0B 0B / 0B 13

[root@ubuntu1804 ~]# top


Example: limiting CPU via quota and period

[root@ubuntu1804 ~]# docker run -it --rm --name c1 --cpu-quota 2000 --cpu-period 1000 lorel/docker-stress-ng --cpu 4 
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 cpu, 4 vm

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE /
LIMIT MEM % NET I/O BLOCK I/O PIDS
bd949bb6698e affectionate_chebyshev 185.03% 1.037GiB /
2.908GiB 35.64% 836B / 0B 0B / 0B 13

Example: pinning CPUs

#Binding to CPU 0 is generally not recommended, since CPU 0 tends to be busy
[root@ubuntu1804 ~]# docker run -it --rm --name c1 --cpus 1.5 --cpuset-cpus 2,4-5 lorel/docker-stress-ng --cpu 4
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 cpu, 4 vm

[root@ubuntu1804 ~]# ps axo pid,cmd,psr |grep stress
1964 /usr/bin/stress-ng --cpu 4 2
1996 /usr/bin/stress-ng --cpu 4 5
1997 /usr/bin/stress-ng --cpu 4 2
1998 /usr/bin/stress-ng --cpu 4 4
1999 /usr/bin/stress-ng --cpu 4 2
2002 grep --color=auto stress 1

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
585879094e73 hungry_albattani 154.35% 1.099GiB / 2.908GiB
37.79% 906B / 0B 0B / 0B 13

[root@ubuntu1804 ~]# cat /sys/fs/cgroup/cpuset/docker/585879094e7382d2ef700947b4454426eee7f943f8d1438fe42ce34df789227b/cpuset.cpus
2,4-5

[root@ubuntu1804 ~]# top


Example: CPU utilization ratios across multiple containers

#Start two containers at the same time
[root@ubuntu1804 ~]# docker run -it --rm --name c1 --cpu-shares 1000 lorel/docker-stress-ng --cpu 4
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 cpu, 4 vm

[root@ubuntu1804 ~]# docker run -it --rm --name c2 --cpu-shares 500 lorel/docker-stress-ng --cpu 4
stress-ng: info: [1] defaulting to a 86400 second run per stressor
stress-ng: info: [1] dispatching hogs: 4 cpu, 4 vm

#Note: the effect only shows when there are more processes than CPU cores; if the two containers together need no more than the actual core count, both show 400%
[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
a1d4c6e6802d c2 195.88% 925.3MiB / 2.908GiB
31.07% 726B / 0B 0B / 0B 13
d5944104aff4 c1 398.20% 1.036GiB / 2.908GiB
35.64% 906B / 0B 0B / 0B 13

#View container c1's cpu share weight
[root@ubuntu1804 ~]# cat /sys/fs/cgroup/cpu,cpuacct/docker/d5944104aff40b7b76f536c45a68cd4b98ce466a73416b68819b9643e3f49da7/cpu.shares
1000

#View container c2's cpu share weight
[root@ubuntu1804 ~]# cat /sys/fs/cgroup/cpu,cpuacct/docker/a1d4c6e6802d1b846b33075f3c1e1696376009e85d9ff8756f9a8d93d3da3ca6/cpu.shares
500

#Start another container; the CPU allocation ratios adjust dynamically
[root@ubuntu1804 ~]# docker run -it --rm --name c3 --cpu-shares 2000 lorel/docker-stress-ng --cpu 4

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
c2d54818e1fe c3 360.15% 664.5MiB / 2.908GiB
22.31% 726B / 0B 1.64GB / 150MB 13
a1d4c6e6802d c2 82.94% 845.2MiB / 2.908GiB
28.38% 936B / 0B 103MB / 4.54MB 13
d5944104aff4 c1 181.18% 930.1MiB / 2.908GiB
31.23% 1.12kB / 0B 303MB / 19.8MB 13

Example: dynamically adjusting the cpu shares value

[root@ubuntu1804 ~]# echo 2000 > /sys/fs/cgroup/cpu,cpuacct/docker/a1d4c6e6802d1b846b33075f3c1e1696376009e85d9ff8756f9a8d93d3da3ca6/cpu.shares

[root@ubuntu1804 ~]# docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT
MEM % NET I/O BLOCK I/O PIDS
a1d4c6e6802d c2 389.31% 1.037GiB / 2.908GiB
35.64% 1.01kB / 0B 1.16GB / 14MB 13
d5944104aff4 c1 200.28% 1.036GiB / 2.908GiB
35.63% 1.19kB / 0B 2.66GB / 26.7MB 13