Deploying Spring Cloud Microservices in Docker Swarm Mode

[TOC]

This post walks through deploying Spring Cloud services on Docker Swarm.

Service registration was originally handled by Eureka, but the Eureka team has stopped developing the project further, so we switched to Consul.

After some hands-on testing, the stack we settled on for the new microservice deployment is:

Docker + Consul + Registrator

Test environment

Three machines:

  • node1:192.168.99.100

  • node2:192.168.99.101

  • node3:192.168.99.102

The layout is as follows:

node1: Consul server, Docker Swarm manager
node2: Consul server, Docker Swarm worker
node3: Consul client, Docker Swarm worker

For high availability, 3 to 5 Consul servers are generally recommended, but since this is only a demo on fairly weak machines, I will not run that many here.

Create an overlay network

Usage:	docker network create [OPTIONS] NETWORK

Create a network

Options:
--attachable Enable manual container attachment
--aux-address map Auxiliary IPv4 or IPv6 addresses used by Network driver (default map[])
--config-from string The network from which copying the configuration
--config-only Create a configuration only network
-d, --driver string Driver to manage the Network (default "bridge")
--gateway strings IPv4 or IPv6 Gateway for the master subnet
--ingress Create swarm routing-mesh network
--internal Restrict external access to the network
--ip-range strings Allocate container ip from a sub-range
--ipam-driver string IP Address Management Driver (default "default")
--ipam-opt map Set IPAM driver specific options (default map[])
--ipv6 Enable IPv6 networking
--label list Set metadata on a network
-o, --opt map Set driver specific options (default map[])
--scope string Control the network's scope
--subnet strings Subnet in CIDR format that represents a network segment

Run this on node1:

docker network create --attachable --driver overlay microservice

This creates an attachable overlay network named microservice, which plain docker run containers can also join.

The swarm's default ingress network does not support this kind of service discovery, and services end up unreachable over it, so a separate network is created instead.

The networks on node1 now look like this:

docker@node1:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
64c3669ce14f bridge bridge local
176814588fd1 docker_gwbridge bridge local
52b2424b81ec host host local
qmkx8kk9j9ic ingress overlay swarm
5kjkaxmaw2ei microservice overlay swarm
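
To confirm that plain docker run containers really can attach to this network (which is the point of --attachable), a quick throwaway test should do; the alpine image here is just an assumption, any small image works:

```
# run a disposable container on the overlay network and show its addresses;
# eth0 should get an IP from the microservice subnet
docker run --rm --network microservice alpine ip addr
```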

Set up the Docker Swarm mode cluster

To make it easy to scale services in and out, I decided to run them on Docker Swarm.
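
That choice pays off later: once a microservice runs as a swarm service, scaling it in or out is a single command (the service name below is just a placeholder):

```
# scale a hypothetical service named user-service to 3 replicas
docker service scale user-service=3
```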

Let's set up the swarm mode cluster.

On the manager node, initialize the swarm:

docker swarm init --advertise-addr 192.168.99.100

Optional parameters:

Usage:	docker swarm init [OPTIONS]

Initialize a swarm

Options:
--advertise-addr string Advertised address (format: <ip|interface>[:port])
--autolock Enable manager autolocking (requiring an unlock key to start a stopped manager)
--availability string Availability of the node ("active"|"pause"|"drain") (default "active")
--cert-expiry duration Validity period for node certificates (ns|us|ms|s|m|h) (default 2160h0m0s)
--data-path-addr string Address or interface to use for data path traffic (format: <ip|interface>)
--dispatcher-heartbeat duration Dispatcher heartbeat period (ns|us|ms|s|m|h) (default 5s)
--external-ca external-ca Specifications of one or more certificate signing endpoints
--force-new-cluster Force create a new cluster from current state
--listen-addr node-addr Listen address (format: <ip|interface>[:port]) (default 0.0.0.0:2377)
--max-snapshots uint Number of additional Raft snapshots to retain
--snapshot-interval uint Number of log entries between Raft snapshots (default 10000)
--task-history-limit int Task history retention limit (default 5)

Add swarm nodes

Run the following command on the other machines (node2, node3):

docker swarm join --token SWMTKN-1-4oz1xnoqi9psb9aty5zfjkeyw2wkq2ziyursfas1eo563dxwpp-1nmwsqsm9l3wsyqkyus8075y0 192.168.99.100:2377

This command is printed when you run docker swarm init. If you forgot to save it, no problem: run docker swarm join-token on the manager node to print it again.

Usage:  docker swarm join-token [OPTIONS] (worker|manager)

For example:

docker@node1:~$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-4oz1xnoqi9psb9aty5zfjkeyw2wkq2ziyursfas1eo563dxwpp-8vdgujjb6dp1i7jxv6nlm4r70 192.168.99.100:2377

Once that is done, node1 (the manager node) should show three nodes:

docker@node1:~$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
qx9075p5x2ipteiikkccgqfqp * node1 Ready Active Leader 18.06.0-ce
hwkh4iajbtuv6gxto9gpdpe42 node2 Ready Active 18.06.0-ce
k709bftfox5vyannyj3py0hya node3 Ready Active 18.06.0-ce

Because I did not wipe the test environment (a swarm had been set up on it before), the default ingress network created by docker swarm init was already visible in the network listing earlier.

Set up the Consul cluster with highly available servers

The official recommendation for running Consul in Docker containers is to use host networking, because the consensus and gossip protocols are sensitive to latency.

But then load balancing for calls to Consul would have to be handled by something like nginx. Instead, I plan to join the routing mesh network and call Consul by service name, letting the routing mesh handle the load balancing.

Run on node1 to install a Consul server:

docker run -d -p 8300:8300 -p 8301:8301/tcp -p 8302:8302/tcp -p 8301:8301/udp \
-p 8302:8302/udp -p 8500:8500 -p 8600:8600/udp -p 8600:8600/tcp \
--restart=always \
-h node1 \
--name consul \
--network microservice \
-v /data/consul:/consul/data \
-v /etc/localtime:/etc/localtime:ro \
consul agent \
-server \
-bootstrap-expect=2 \
-node=node1 \
-rejoin \
-client 0.0.0.0 \
-advertise 192.168.99.100 \
-ui

Run on node2 to install another Consul server; it differs slightly from the first command in that the UI is not enabled:

docker run -d -p 8300:8300 -p 8301:8301/tcp -p 8302:8302/tcp -p 8301:8301/udp \
-p 8302:8302/udp -p 8500:8500 -p 8600:8600/udp -p 8600:8600/tcp \
--restart=always \
-h node2 \
--name consul \
--network microservice \
-v /etc/localtime:/etc/localtime:ro \
consul agent \
-server \
-node=node2 \
-rejoin \
-client 0.0.0.0 \
-join 192.168.99.100 \
-advertise 192.168.99.101
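
At this point the two servers should have been able to elect a leader (node1 was started with -bootstrap-expect=2). A quick way to check, run on node1 for example:

```
# list the Raft peers and their roles; one of the two servers should be the leader
docker exec consul consul operator raft list-peers
```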

Run on node3 to install a Consul client:

docker run -d -p 8300:8300 -p 8301:8301/tcp -p 8302:8302/tcp -p 8301:8301/udp \
-p 8302:8302/udp -p 8500:8500 -p 8600:8600/udp -p 8600:8600/tcp \
--restart=always \
-h node3 \
--name consul \
--network microservice \
-v /etc/localtime:/etc/localtime:ro \
consul agent \
-node=node3 \
-rejoin \
-client 0.0.0.0 \
-join 192.168.99.100 \
-advertise 192.168.99.102

Now, running the consul command inside the container on node1 shows three members, two servers and one client:

docker@node1:~$ docker exec consul consul members
Node Address Status Type Build Protocol DC Segment
node1 192.168.99.100:8301 alive server 1.2.2 2 dc1 <all>
node2 192.168.99.101:8301 alive server 1.2.2 2 dc1 <all>
node3 192.168.99.102:8301 alive client 1.2.2 2 dc1 <default>
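
Since port 8500 is published on every node, the cluster can also be sanity-checked through Consul's HTTP API from any of the hosts (assuming curl is available):

```
# list the Raft peers (should show the two servers)
curl http://192.168.99.100:8500/v1/status/peers

# list all nodes known to the datacenter
curl http://192.168.99.100:8500/v1/catalog/nodes
```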

This is only a test, so many flags were left off, some of them fairly important, such as the config and data directories and the WAN advertise address. Have a look at the official Consul documentation, or at the notes I put together below.

Optional parameters of the consul agent command

Usage: consul agent [options]

Starts the Consul agent and runs until an interrupt is received. The
agent represents a single node in a cluster.

HTTP API Options

-datacenter=<value>
Datacenter of the agent.

Command Options

-advertise=<value>
The address to advertise; if it is not set and Consul runs in a container, the container's internal IP is advertised.

-advertise-wan=<value>
Sets address to advertise on WAN instead of -advertise address.
Sets the WAN advertise address; usually needed when joining nodes across LANs.

-bind=<value>
Sets the bind address for cluster communication.
The address bound for cluster communication, defaults to 0.0.0.0.

-bootstrap
Sets server to bootstrap mode.
Bootstrap mode: the node elects itself Raft leader. Setting this on more than one node breaks the consistency guarantees. A legacy flag.

-bootstrap-expect=<value>
Sets server to expect bootstrap mode.
The expected number of servers; once that many servers have joined, a leader is elected automatically.

-client=<value>
Sets the address to bind for client access. This includes RPC, DNS,
HTTP and HTTPS (if configured).
The client bind address; the agent is only reachable on addresses bound here.

-config-dir=<value>
Path to a directory to read configuration files from. This
will read every file ending in '.json' as configuration in this
directory in alphabetical order. Can be specified multiple times.
Configuration directory.

-config-file=<value>
Path to a JSON file to read configuration from. Can be specified
multiple times.
Configuration file.

-config-format=<value>
Config files are in this format irrespective of their extension.
Must be 'hcl' or 'json'
Configuration format.

-data-dir=<value>
Path to a data directory to store agent state.
Data directory.

-dev
Starts the agent in development mode.
Development mode; state is not persisted to disk.

-disable-host-node-id
Setting this to true will prevent Consul from using information
from the host to generate a node ID, and will cause Consul to
generate a random node ID instead.
When set, Consul no longer derives a deterministic node ID from host information and generates a random one instead. Quite useful for avoiding node-ID conflicts when running several instances on one machine; with Docker Swarm this matters less, since each container already gets a random ID as its hostname.

-disable-keyring-file
Disables the backing up of the keyring to a file.

-dns-port=<value>
DNS port to use.
DNS port to listen on, defaults to 8600.

-domain=<value>
Domain to use for DNS interface.
Consul answers queries for this domain itself instead of recursing.

-enable-script-checks
Enables health check scripts.
Enables script-based health checks.

-encrypt=<value>
Provides the gossip encryption key.
Key used to encrypt Consul gossip traffic: 16 bytes, base64-encoded; generate one with consul keygen.

-hcl=<value>
hcl config fragment. Can be specified multiple times.

-http-port=<value>
Sets the HTTP API port to listen on.
HTTP API port to bind, defaults to 8500.

-join=<value>
Address of an agent to join at start time. Can be specified
multiple times.
Join other nodes at startup.

-join-wan=<value>
Address of an agent to join -wan at start time. Can be specified
multiple times.
Use this when joining nodes in another LAN (WAN join).

-log-level=<value>
Log level of the agent.
Log level; consul monitor shows more detail.

-node=<value>
Name of this node. Must be unique in the cluster.
Node name; must be unique within the cluster.

-node-id=<value>
A unique ID for this node across space and time. Defaults to a
randomly-generated ID that persists in the data-dir.
If not specified, an ID is generated automatically and persisted in the data directory, so it is kept across restarts.

-node-meta=<key:value>
An arbitrary metadata key/value pair for this node, of the format
`key:value`. Can be specified multiple times.

-non-voting-server
(Enterprise-only) This flag is used to make the server not
participate in the Raft quorum, and have it only receive the data
replication stream. This can be used to add read scalability to
a cluster in cases where a high volume of reads to servers are
needed.
Enterprise-only; the server does not take part in the Raft quorum.

-pid-file=<value>
Path to file to store agent PID.
Path of the agent PID file; specifying it makes the agent's PID easy to find for process management.

-protocol=<value>
Sets the protocol version. Defaults to latest.
Sets the Consul protocol version, defaults to the latest; check it with consul -v.

-raft-protocol=<value>
Sets the Raft protocol version. Defaults to latest.
Sets the Raft protocol version.

-recursor=<value>
Address of an upstream DNS server. Can be specified multiple times.
Upstream DNS server, used for recursive queries.

-rejoin
Ignores a previous leave and attempts to rejoin the cluster.
After a temporary disconnect, the agent ignores the previous leave and rejoins the cluster.

-retry-interval=<value>
Time to wait between join attempts.
Interval between join retries.

-retry-interval-wan=<value>
Time to wait between join -wan attempts.

-retry-join=<value>
Address of an agent to join at start time with retries enabled. Can
be specified multiple times.

-retry-join-wan=<value>
Address of an agent to join -wan at start time with retries
enabled. Can be specified multiple times.

-retry-max=<value>
Maximum number of join attempts. Defaults to 0, which will retry
indefinitely.

-retry-max-wan=<value>
Maximum number of join -wan attempts. Defaults to 0, which will
retry indefinitely.

-segment=<value>
(Enterprise-only) Sets the network segment to join.

-serf-lan-bind=<value>
Address to bind Serf LAN listeners to.

-serf-lan-port=<value>
Sets the Serf LAN port to listen on.

-serf-wan-bind=<value>
Address to bind Serf WAN listeners to.

-serf-wan-port=<value>
Sets the Serf WAN port to listen on.

-server
Switches agent to server mode.
Runs the agent in server mode.

-server-port=<value>
Sets the server port to listen on.
Sets the server RPC port.

-syslog
Enables logging to syslog.
Sends logs to syslog; not available on Windows.

-ui
Enables the built-in static web UI server.
Enables the built-in web UI; cannot be combined with -ui-dir below.

-ui-dir=<value>
Path to directory containing the web UI resources.
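
Putting the more important flags from the list above together, a fuller server agent invocation might look roughly like the sketch below (for the recommended three-server setup). It is only a sketch: the config directory, WAN address and gossip key are placeholders, and the same -encrypt key has to be shared by every agent.

```
# generate one gossip encryption key and reuse it on every agent
consul keygen

consul agent -server \
  -node=node1 \
  -datacenter=dc1 \
  -data-dir=/consul/data \
  -config-dir=/consul/config \
  -client 0.0.0.0 \
  -advertise 192.168.99.100 \
  -advertise-wan <PUBLIC_IP> \
  -encrypt <KEY_FROM_CONSUL_KEYGEN> \
  -bootstrap-expect=3 \
  -retry-join 192.168.99.101 \
  -retry-join 192.168.99.102 \
  -ui
```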

Consul service registration and discovery

For service registration and discovery with Consul I chose Registrator; as for why, see my earlier post.

Run this on every machine:

docker run -d \
--restart=always \
--name=registrator \
--net=host \
--volume=/var/run/docker.sock:/tmp/docker.sock \
-v /etc/localtime:/etc/localtime:ro \
gliderlabs/registrator:latest \
-resync 60 \
-cleanup \
-internal \
-ip <NODE_IP> \
consul://<NODE_IP>:8500

With that, the basic setup is in place.

The key flags are -internal and -ip.

At first I used the default behaviour, which registers services by their externally published ports, but I quickly found that this does not work for services running under Docker Swarm.

Registrator cannot see ports published through the routing mesh. After switching to registration by the internally exposed ports, services deployed with Docker Swarm register with Consul as well.

Many of the articles you find through Baidu or Google never mention this; I nearly gave up on Registrator before going back to the official documentation and finding the flag.
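
To see what Registrator actually wrote into Consul, query the catalog. The service name below is a placeholder; by default Registrator derives it from the image name, and it can be overridden with a SERVICE_NAME environment variable on the container.

```
# list every service currently registered in Consul
curl http://192.168.99.100:8500/v1/catalog/services

# show the registered instances of one (hypothetical) service
curl http://192.168.99.100:8500/v1/catalog/service/user-service
```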

On top of that, you will most likely also run into the question of which IP gets registered when a container sits on several networks and has several IPs.

I suggest using my modified Registrator build; details are in this post:

Docker Swarm Mode: which IP Registrator registers when a container has multiple networks and IPs

docker run -d \
--restart=always \
--name=registrator \
--net=host \
--volume=/var/run/docker.sock:/tmp/docker.sock \
-v /etc/localtime:/etc/localtime:ro \
doubleshit/registrator:v7 \
-resync 60 \
-cleanup \
-internal \
-ip <NODE_IP> \
-useIpFromNetworkName=microservice \
consul://<NODE_IP>:8500

Because registration is based on the internal port, the two sides of -p <external_port>:<internal_port> must be the same port when you run the service, otherwise it cannot be reached correctly!

Deploy the microservices

Two main points to watch:

  1. The published port and the container's internal port must be identical.

  2. The service must join the same network we created earlier, i.e. --network microservice.

Other than that there is nothing special; create the service with docker service create. The full options are listed below for reference, followed by a concrete example.


Usage: docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]

Create a new service

Options:
--config config Specify configurations to expose to the service
--constraint list Placement constraints
--container-label list Container labels
--credential-spec credential-spec Credential spec for managed service account (Windows only)
-d, --detach Exit immediately instead of waiting for the service to converge
--dns list Set custom DNS servers
--dns-option list Set DNS options
--dns-search list Set custom DNS search domains
--endpoint-mode string Endpoint mode (vip or dnsrr) (default "vip")
--entrypoint command Overwrite the default ENTRYPOINT of the image
-e, --env list Set environment variables
--env-file list Read in a file of environment variables
--generic-resource list User defined resources
--group list Set one or more supplementary user groups for the container
--health-cmd string Command to run to check health
--health-interval duration Time between running the check (ms|s|m|h)
--health-retries int Consecutive failures needed to report unhealthy
--health-start-period duration Start period for the container to initialize before counting retries towards unstable (ms|s|m|h)
--health-timeout duration Maximum time to allow one check to run (ms|s|m|h)
--host list Set one or more custom host-to-IP mappings (host:ip)
--hostname string Container hostname
--init Use an init inside each service container to forward signals and reap processes
--isolation string Service container isolation mode
-l, --label list Service labels
--limit-cpu decimal Limit CPUs
--limit-memory bytes Limit Memory
--log-driver string Logging driver for service
--log-opt list Logging driver options
--mode string Service mode (replicated or global) (default "replicated")
--mount mount Attach a filesystem mount to the service
--name string Service name
--network network Network attachments
--no-healthcheck Disable any container-specified HEALTHCHECK
--no-resolve-image Do not query the registry to resolve image digest and supported platforms
--placement-pref pref Add a placement preference
-p, --publish port Publish a port as a node port
-q, --quiet Suppress progress output
--read-only Mount the container's root filesystem as read only
--replicas uint Number of tasks
--reserve-cpu decimal Reserve CPUs
--reserve-memory bytes Reserve Memory
--restart-condition string Restart when condition is met ("none"|"on-failure"|"any") (default "any")
--restart-delay duration Delay between restart attempts (ns|us|ms|s|m|h) (default 5s)
--restart-max-attempts uint Maximum number of restarts before giving up
--restart-window duration Window used to evaluate the restart policy (ns|us|ms|s|m|h)
--rollback-delay duration Delay between task rollbacks (ns|us|ms|s|m|h) (default 0s)
--rollback-failure-action string Action on rollback failure ("pause"|"continue") (default "pause")
--rollback-max-failure-ratio float Failure rate to tolerate during a rollback (default 0)
--rollback-monitor duration Duration after each task rollback to monitor for failure (ns|us|ms|s|m|h) (default 5s)
--rollback-order string Rollback order ("start-first"|"stop-first") (default "stop-first")
--rollback-parallelism uint Maximum number of tasks rolled back simultaneously (0 to roll back all at once) (default 1)
--secret secret Specify secrets to expose to the service
--stop-grace-period duration Time to wait before force killing a container (ns|us|ms|s|m|h) (default 10s)
--stop-signal string Signal to stop the container
-t, --tty Allocate a pseudo-TTY
--update-delay duration Delay between updates (ns|us|ms|s|m|h) (default 0s)
--update-failure-action string Action on update failure ("pause"|"continue"|"rollback") (default "pause")
--update-max-failure-ratio float Failure rate to tolerate during an update (default 0)
--update-monitor duration Duration after each task update to monitor for failure (ns|us|ms|s|m|h) (default 5s)
--update-order string Update order ("start-first"|"stop-first") (default "stop-first")
--update-parallelism uint Maximum number of tasks updated simultaneously (0 to update all at once) (default 1)
-u, --user string Username or UID (format: <name|uid>[:<group|gid>])
--with-registry-auth Send registry authentication details to swarm agents
-w, --workdir string Working directory inside the container
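
As a concrete sketch (the image name, service name and port are placeholders, not taken from the original setup), a Spring Cloud service that follows the two rules above could be deployed like this:

```
docker service create \
  --name user-service \
  --replicas 2 \
  --network microservice \
  -p 8080:8080 \
  -e SERVICE_NAME=user-service \
  myregistry.example.com/user-service:latest
```

Once the tasks are running, Registrator should register each instance under user-service in Consul, and scaling with docker service scale adds or removes entries automatically (the -resync and -cleanup flags keep the catalog in step).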
