- Original article: https://www.yangdx.com/2020/11/172.html
- Please credit the source when reposting
1. Introduction
This post walks through quickly standing up a Redis cluster orchestrated with docker-compose.
The plan:
- Build 6 containers, named redis1 through redis6
- Assign internal IPs 172.10.1.1 through 172.10.1.6
- Use the same redis.conf configuration file for every instance
- Join the 6 instances into a cluster
- Demonstrate the cluster's automatic failure detection and recovery
2. Configuration files
Here is the docker-compose.yml:
version: "2"
services:
  redis1:
    image: redis:6
    container_name: redis1
    volumes:
      - ./conf/redis.conf:/etc/redis.conf
      - ./data/data1:/data
    networks:
      backends-redis:
        ipv4_address: 172.10.1.1
    command: redis-server /etc/redis.conf
  redis2:
    image: redis:6
    container_name: redis2
    volumes:
      - ./conf/redis.conf:/etc/redis.conf
      - ./data/data2:/data
    networks:
      backends-redis:
        ipv4_address: 172.10.1.2
    command: redis-server /etc/redis.conf
  redis3:
    image: redis:6
    container_name: redis3
    volumes:
      - ./conf/redis.conf:/etc/redis.conf
      - ./data/data3:/data
    networks:
      backends-redis:
        ipv4_address: 172.10.1.3
    command: redis-server /etc/redis.conf
  redis4:
    image: redis:6
    container_name: redis4
    volumes:
      - ./conf/redis.conf:/etc/redis.conf
      - ./data/data4:/data
    networks:
      backends-redis:
        ipv4_address: 172.10.1.4
    command: redis-server /etc/redis.conf
  redis5:
    image: redis:6
    container_name: redis5
    volumes:
      - ./conf/redis.conf:/etc/redis.conf
      - ./data/data5:/data
    networks:
      backends-redis:
        ipv4_address: 172.10.1.5
    command: redis-server /etc/redis.conf
  redis6:
    image: redis:6
    container_name: redis6
    volumes:
      - ./conf/redis.conf:/etc/redis.conf
      - ./data/data6:/data
    networks:
      backends-redis:
        ipv4_address: 172.10.1.6
    command: redis-server /etc/redis.conf
networks:
  backends-redis:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.10.0.0/16
          gateway: 172.10.0.1
The configuration file ./conf/redis.conf is fetched from https://raw.githubusercontent.com/redis/redis/6.0/redis.conf
and then modified as follows:
- Comment out bind 127.0.0.1 (prefix the line with #)
- Comment out protected-mode yes (prefix the line with #)
- Uncomment cluster-enabled yes (remove the leading #)
3. Running the instances
In the directory containing docker-compose.yml, start the containers with docker-compose up, then use docker ps to list the running container instances:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e94b128a89c9 redis:6 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:6385->6379/tcp redis5
e3b48a4f4be1 redis:6 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:6382->6379/tcp redis2
37796d865681 redis:6 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:6383->6379/tcp redis3
3efd30a6cf33 redis:6 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:6386->6379/tcp redis6
acd6dde4d061 redis:6 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:6384->6379/tcp redis4
a5e3002c4f8e redis:6 "docker-entrypoint.s…" 5 minutes ago Up 5 minutes 0.0.0.0:6381->6379/tcp redis1
As shown, all 6 Redis instances started successfully.
A Redis cluster needs at least 3 master nodes, because the fault-tolerance voting mechanism only declares a node failed when more than half of the nodes agree it has failed, so 2 nodes cannot form a cluster. For high availability, every master also needs a replica (backup) node, which is why a minimal Redis cluster takes 6 instances.
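The majority rule behind this can be sketched as follows (a simplified model, not Redis's actual implementation; marked_failed is a hypothetical helper):

```python
# Simplified sketch of majority-vote failure detection: a node is only
# declared failed when MORE THAN HALF of the masters report it as down.
def marked_failed(total_masters: int, failure_reports: int) -> bool:
    """Return True if enough masters agree that a node is down."""
    return failure_reports > total_masters // 2

# With 3 masters, the 2 surviving ones form a majority after one fails:
print(marked_failed(3, 2))  # True
# With only 2 masters, the single survivor can never form a majority,
# which is why 2 nodes cannot make a working cluster:
print(marked_failed(2, 1))  # False
```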
4. Creating the cluster
The command redis-cli --cluster create <host1:port1> ... <hostN:portN> --cluster-replicas 1 joins multiple instances into a cluster; --cluster-replicas 1 means each master is created together with 1 replica.
Enter any container, for example instance 1 with docker exec -it redis1 bash, then run this command inside the container:
redis-cli --cluster create 172.10.1.1:6379 172.10.1.2:6379 172.10.1.3:6379 172.10.1.4:6379 172.10.1.5:6379 172.10.1.6:6379 --cluster-replicas 1
Press Enter and the output is:
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.10.1.5:6379 to 172.10.1.1:6379
Adding replica 172.10.1.6:6379 to 172.10.1.2:6379
Adding replica 172.10.1.4:6379 to 172.10.1.3:6379
M: 6728a6eb85d867689f8822f29e1f34bf983ffc9e 172.10.1.1:6379
slots:[0-5460] (5461 slots) master
M: c2dea8329847238f3f450500153ecef1eefb1dae 172.10.1.2:6379
slots:[5461-10922] (5462 slots) master
M: 775b9f8ae557c4a7543680a25c46910d37a62b2d 172.10.1.3:6379
slots:[10923-16383] (5461 slots) master
S: d0afdd6513bd8be9382759dfa635f1566f26ca99 172.10.1.4:6379
replicates 775b9f8ae557c4a7543680a25c46910d37a62b2d
S: 1c79fb6e06b78d1a629dbfc0d64e2faf9ac33856 172.10.1.5:6379
replicates 6728a6eb85d867689f8822f29e1f34bf983ffc9e
S: 2d67bd9cb3952bec942f468fe9214afdcf4419f4 172.10.1.6:6379
replicates c2dea8329847238f3f450500153ecef1eefb1dae
Can I set the above configuration? (type 'yes' to accept):
Instances 1, 2, and 3 will be set up as masters and instances 4, 5, and 6 as replicas. The final line, Can I set the above configuration?, asks whether we accept this layout; type yes and press Enter, and the output continues:
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.10.1.1:6379)
M: 6728a6eb85d867689f8822f29e1f34bf983ffc9e 172.10.1.1:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 1c79fb6e06b78d1a629dbfc0d64e2faf9ac33856 172.10.1.5:6379
slots: (0 slots) slave
replicates 6728a6eb85d867689f8822f29e1f34bf983ffc9e
S: 2d67bd9cb3952bec942f468fe9214afdcf4419f4 172.10.1.6:6379
slots: (0 slots) slave
replicates c2dea8329847238f3f450500153ecef1eefb1dae
M: 775b9f8ae557c4a7543680a25c46910d37a62b2d 172.10.1.3:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
S: d0afdd6513bd8be9382759dfa635f1566f26ca99 172.10.1.4:6379
slots: (0 slots) slave
replicates 775b9f8ae557c4a7543680a25c46910d37a62b2d
M: c2dea8329847238f3f450500153ecef1eefb1dae 172.10.1.2:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The Redis cluster is up! The slots (hash slots) assigned to the 3 masters map the ranges 0-5460, 5461-10922, and 10923-16383 respectively.
The cluster defines 16384 slots (hash slots) and maps all physical nodes onto these 16384 [0-16383] slots; in other words, the slots are divided roughly evenly among the nodes. When a key-value pair is stored in the cluster, Redis first runs the CRC16 algorithm over the key, then takes the result modulo 16384; the remainder corresponds to one of the slots [0-16383], which in turn determines the node where the key-value pair is stored.
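This key-to-slot rule can be sketched in Python (an illustrative reimplementation, not code from this post; Redis uses the CRC16-CCITT/XMODEM variant: polynomial 0x1021, initial value 0):

```python
# Sketch of Redis Cluster's key-to-slot mapping: slot = CRC16(key) mod 16384.
# Bit-by-bit CRC16-CCITT (XMODEM) for clarity; real implementations use a table.

def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): poly 0x1021, init 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots.

    If the key contains a non-empty {...} hash tag, only the tagged part
    is hashed, so related keys can be forced into the same slot.
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("k1"))  # 12706, matching the redirect shown later in this post
```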
Looking at each instance's data directory, besides dump.rdb there is now a nodes.conf: an automatically generated cluster configuration cache. When a Redis instance restarts, it uses this file to reconnect to the cluster.
5. Connecting to cluster nodes
A Redis cluster has no single entry point: a client connects to any node in the cluster, the nodes communicate with each other internally (a PING-PONG gossip mechanism), and every node is a full Redis instance.
When connecting to a cluster node with redis-cli, add the -c flag; get, set, and similar commands can then be run on any node, and the cluster redirects between nodes automatically:
$ redis-cli -c -h 172.10.1.2
172.10.1.2:6379> set k1 123
-> Redirected to slot [12706] located at 172.10.1.3:6379
OK
172.10.1.3:6379> set k2 abc
-> Redirected to slot [449] located at 172.10.1.1:6379
OK
172.10.1.1:6379> set k3 2333
OK
172.10.1.1:6379>
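The redirect behavior above can be sketched as a toy cluster-aware client (illustrative only; FakeNode and cluster_set are made-up names standing in for real connections, not a real client library):

```python
# Sketch of what redis-cli -c does under the hood: when a node replies
# "MOVED <slot> <host:port>", the client retries against the indicated owner.

class FakeNode:
    """Stand-in for a Redis node owning an inclusive range of hash slots."""
    def __init__(self, addr, slot_range, topology):
        self.addr = addr
        self.slot_range = slot_range      # (low, high) inclusive
        self.topology = topology          # shared addr -> node map
        self.store = {}
        topology[addr] = self

    def execute_set(self, slot, key, value):
        low, high = self.slot_range
        if not (low <= slot <= high):
            owner = next(n.addr for n in self.topology.values()
                         if n.slot_range[0] <= slot <= n.slot_range[1])
            return f"MOVED {slot} {owner}"  # redirect, like a real node
        self.store[key] = value
        return "OK"

def cluster_set(node, slot, key, value):
    """Follow at most one MOVED redirect, as a -c style client would."""
    reply = node.execute_set(slot, key, value)
    if reply.startswith("MOVED"):
        _, _, addr = reply.split()
        reply = node.topology[addr].execute_set(slot, key, value)
    return reply

topology = {}
FakeNode("172.10.1.1:6379", (0, 5460), topology)
FakeNode("172.10.1.2:6379", (5461, 10922), topology)
FakeNode("172.10.1.3:6379", (10923, 16383), topology)
# k1 hashes to slot 12706 (see the session above), owned by 172.10.1.3:
print(cluster_set(topology["172.10.1.2:6379"], 12706, "k1", "123"))  # OK
```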
Use cluster info to check the cluster state:
172.10.1.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:2558
cluster_stats_messages_pong_sent:2599
cluster_stats_messages_sent:5157
cluster_stats_messages_ping_received:2594
cluster_stats_messages_pong_received:2558
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:5157
172.10.1.1:6379>
Use cluster nodes to list all nodes:
172.10.1.1:6379> cluster nodes
1c79fb6e06b78d1a629dbfc0d64e2faf9ac33856 172.10.1.5:6379@16379 slave 6728a6eb85d867689f8822f29e1f34bf983ffc9e 0 1604387011310 1 connected
2d67bd9cb3952bec942f468fe9214afdcf4419f4 172.10.1.6:6379@16379 slave c2dea8329847238f3f450500153ecef1eefb1dae 0 1604387010305 2 connected
6728a6eb85d867689f8822f29e1f34bf983ffc9e 172.10.1.1:6379@16379 myself,master - 0 1604387009000 1 connected 0-5460
775b9f8ae557c4a7543680a25c46910d37a62b2d 172.10.1.3:6379@16379 master - 0 1604387009000 3 connected 10923-16383
d0afdd6513bd8be9382759dfa635f1566f26ca99 172.10.1.4:6379@16379 slave 775b9f8ae557c4a7543680a25c46910d37a62b2d 0 1604387008000 3 connected
c2dea8329847238f3f450500153ecef1eefb1dae 172.10.1.2:6379@16379 master - 0 1604387010000 2 connected 5461-10922
172.10.1.1:6379>
From the node information above, the master-to-replica pairings are: 1→5, 2→6, 3→4.
6. Node failover
To simulate the cluster's automatic failure detection and recovery, manually stop the container of master node 2: docker stop redis2
Meanwhile, connect to another instance and run some get/set commands; the cluster detects that master 2 has failed, and the client connection to it breaks:
172.10.1.1:6379> set k4 2352
-> Redirected to slot [8455] located at 172.10.1.2:6379
Could not connect to Redis at 172.10.1.2:6379: No route to host
Could not connect to Redis at 172.10.1.2:6379: No route to host
(37.83s)
not connected>
Check the node information again with cluster nodes:
$ redis-cli -c -h 172.10.1.1
172.10.1.1:6379> cluster nodes
1c79fb6e06b78d1a629dbfc0d64e2faf9ac33856 172.10.1.5:6379@16379 slave 6728a6eb85d867689f8822f29e1f34bf983ffc9e 0 1604387728655 1 connected
2d67bd9cb3952bec942f468fe9214afdcf4419f4 172.10.1.6:6379@16379 master - 0 1604387728000 7 connected 5461-10922
6728a6eb85d867689f8822f29e1f34bf983ffc9e 172.10.1.1:6379@16379 myself,master - 0 1604387729000 1 connected 0-5460
775b9f8ae557c4a7543680a25c46910d37a62b2d 172.10.1.3:6379@16379 master - 0 1604387730663 3 connected 10923-16383
d0afdd6513bd8be9382759dfa635f1566f26ca99 172.10.1.4:6379@16379 slave 775b9f8ae557c4a7543680a25c46910d37a62b2d 0 1604387729658 3 connected
c2dea8329847238f3f450500153ecef1eefb1dae 172.10.1.2:6379@16379 master,fail - 1604387653408 1604387649000 2 connected
172.10.1.1:6379>
Master node 2 is now in the fail state and no longer serves requests, while replica node 6 has been promoted to master, taking over node 2's role.
7. Cluster down
If any node in the cluster fails and has no replica (backup) node, the whole cluster goes down.
At this point master node 6 has no replica of its own, so if node 6 also fails, the entire cluster goes down (hash slots 5461-10922 would have no node left to serve them).
Manually stop node 6 as well with docker stop redis6, then check the node information; running a set command reports that the cluster is down:
$ redis-cli -c -h 172.10.1.1
172.10.1.1:6379> cluster nodes
1c79fb6e06b78d1a629dbfc0d64e2faf9ac33856 172.10.1.5:6379@16379 slave 6728a6eb85d867689f8822f29e1f34bf983ffc9e 0 1604388600000 1 connected
2d67bd9cb3952bec942f468fe9214afdcf4419f4 172.10.1.6:6379@16379 master,fail? - 1604388585545 1604388582000 7 connected 5461-10922
6728a6eb85d867689f8822f29e1f34bf983ffc9e 172.10.1.1:6379@16379 myself,master - 0 1604388597000 1 connected 0-5460
775b9f8ae557c4a7543680a25c46910d37a62b2d 172.10.1.3:6379@16379 master - 0 1604388598590 3 connected 10923-16383
d0afdd6513bd8be9382759dfa635f1566f26ca99 172.10.1.4:6379@16379 slave 775b9f8ae557c4a7543680a25c46910d37a62b2d 0 1604388600598 3 connected
c2dea8329847238f3f450500153ecef1eefb1dae 172.10.1.2:6379@16379 master,fail - 1604387653408 1604387649000 2 connected
172.10.1.1:6379> set str1 32352
(error) CLUSTERDOWN The cluster is down
172.10.1.1:6379>
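The CLUSTERDOWN condition can be sketched as a slot-coverage check (a simplified assumption about the mechanism; with the default setting cluster-require-full-coverage yes, the cluster stops serving as soon as any slot has no live master):

```python
# Sketch: the cluster is "ok" only while every one of the 16384 hash slots
# is served by a live master; losing any slot range takes the cluster down.

TOTAL_SLOTS = 16384

def covered_slots(masters):
    """masters: list of (low, high) inclusive slot ranges of live masters."""
    covered = set()
    for low, high in masters:
        covered.update(range(low, high + 1))
    return covered

def cluster_state(masters):
    return "ok" if len(covered_slots(masters)) == TOTAL_SLOTS else "down"

# All three ranges served -> ok
print(cluster_state([(0, 5460), (5461, 10922), (10923, 16383)]))  # ok
# Node 6 (serving 5461-10922) also fails with no replica left -> down
print(cluster_state([(0, 5460), (10923, 16383)]))                 # down
```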
8. Automatic recovery
Restart instance 2 with docker start redis2 and instance 6 with docker start redis6, then check the node information:
172.10.1.1:6379> cluster nodes
1c79fb6e06b78d1a629dbfc0d64e2faf9ac33856 172.10.1.5:6379@16379 slave 6728a6eb85d867689f8822f29e1f34bf983ffc9e 0 1604388880000 1 connected
2d67bd9cb3952bec942f468fe9214afdcf4419f4 172.10.1.6:6379@16379 master - 0 1604388878877 7 connected 5461-10922
6728a6eb85d867689f8822f29e1f34bf983ffc9e 172.10.1.1:6379@16379 myself,master - 0 1604388879000 1 connected 0-5460
775b9f8ae557c4a7543680a25c46910d37a62b2d 172.10.1.3:6379@16379 master - 0 1604388878000 3 connected 10923-16383
d0afdd6513bd8be9382759dfa635f1566f26ca99 172.10.1.4:6379@16379 slave 775b9f8ae557c4a7543680a25c46910d37a62b2d 0 1604388880637 3 connected
c2dea8329847238f3f450500153ecef1eefb1dae 172.10.1.2:6379@16379 slave 2d67bd9cb3952bec942f468fe9214afdcf4419f4 0 1604388879635 7 connected
172.10.1.1:6379>
The cluster is healthy again, but after the failover the master-replica relationship between nodes 2 and 6 is reversed.