Redis Cluster and High Availability


Introduction

A standalone Redis server is a single point of failure for both its data and its service, and a single machine also puts a ceiling on performance. Redis clustering technologies can be used to solve these problems.


Implementing Master-Slave Replication

Master-Slave Command Configuration


When configuring Redis replication, it is strongly recommended to enable persistence on the master. If persistence must stay disabled, the master's Redis service should at least be configured not to restart automatically, because of failure scenarios like the one below.

Reference case: how the data on both master and slaves can be completely lost

1. Suppose node A is the master with persistence disabled, and nodes B and C replicate from node A.
2. Node A crashes and is restarted automatically by a supervisor. Because persistence was disabled on A, it comes back up with no data at all.
3. Nodes B and C then resynchronize from node A; since A is now empty, they delete their own copies of the data.

Running the master with persistence disabled while also restarting it automatically is very dangerous even when Sentinel provides high availability: the master may come back so quickly that Sentinel does not detect the failure within its configured heartbeat interval, and the data-loss sequence above still plays out. Data safety always matters, so a master with persistence disabled should never be configured to restart automatically.

Enabling Master-Slave Synchronization

A Redis server is a master node by default. To turn it into a slave, it must be pointed at the master's IP, port and connection password: running REPLICAOF MASTER_IP PORT on the slave enables master-slave replication. Earlier versions used the SLAVEOF command.

127.0.0.1:6379> REPLICAOF MASTER_IP PORT #recommended on newer versions
127.0.0.1:6379> SLAVEOF MasterIP Port #legacy command, will eventually be removed
127.0.0.1:6379> CONFIG SET masterauth <masterpass>
  • On the master
127.0.0.1:6379> AUTH 123456
OK
127.0.0.1:6379> INFO replication #check the current role; the default is master
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:f945fd1714d8d3b78a149c8b2e0d57567ee6cb77
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1361
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1361
127.0.0.1:6379> 
  • On the slave
#on the slave, set the master's IP and port; before Redis 5.0 the command was SLAVEOF
127.0.0.1:6380> REPLICAOF 127.0.0.1 6379  #SLAVEOF MasterIP Port still works
OK
127.0.0.1:6380> 

#set the master's password on the slave
127.0.0.1:6380> CONFIG SET masterauth 123456

#the role has now changed to slave
127.0.0.1:6380> INFO replication
# Replication
role:slave
master_host:127.0.0.1  #points to the master
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_read_repl_offset:1515
slave_repl_offset:1515
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:f945fd1714d8d3b78a149c8b2e0d57567ee6cb77
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1515
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1362
repl_backlog_histlen:154
127.0.0.1:6380> 

  • On the master
# add a value
127.0.0.1:6379> set class m48
OK
127.0.0.1:6379>
  • Verify on the slave that the value has been replicated
# run on the slave
127.0.0.1:6380> get class
"m48"
127.0.0.1:6380> 

# the value has been replicated successfully
  • On the master
#the master lists all of its slaves
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6380,state=online,offset=1907,lag=0
master_failover_state:no-failover
master_replid:f945fd1714d8d3b78a149c8b2e0d57567ee6cb77
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1907
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1907
127.0.0.1:6379> 
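
The steps above can also be scripted with a client library. Below is a minimal sketch using the third-party redis-py package (an assumed client choice; any client that can send raw commands works), with the addresses and password taken from the example above:

import redis

#connect to the node that should become the slave (127.0.0.1:6380 in the example above)
slave = redis.Redis(host="127.0.0.1", port=6380, password="123456")

#tell the future slave how to authenticate against the master, then start replication
slave.config_set("masterauth", "123456")
slave.execute_command("REPLICAOF", "127.0.0.1", "6379")

#verify: role should become "slave" and master_link_status "up" once the initial sync finishes
info = slave.info("replication")
print(info["role"], info.get("master_link_status"))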

Removing Master-Slave Synchronization

  • On the slave
# running REPLICAOF NO ONE on the slave cancels master-slave replication
#this disconnects the slave from the master and stops replication, but it does not remove the data already on the slave
127.0.0.1:6380> REPLICAOF no one
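
The same demotion can be issued programmatically; a minimal redis-py sketch (an assumed client choice), reusing the slave from the example above:

import redis

slave = redis.Redis(host="127.0.0.1", port=6380, password="123456")
#stop replicating; the node becomes a standalone master again but keeps the data it already has
slave.execute_command("REPLICAOF", "NO", "ONE")
print(slave.info("replication")["role"])  #expected: "master"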

Verifying Synchronization

  • Check the master log
[root@centos7-master ~]# tail -f /usr/local/src/redis/log/redis_6379.log 
945:M 13 Dec 2022 22:27:17.550 * Synchronization with replica 127.0.0.1:6380 succeeded
945:M 13 Dec 2022 22:42:59.410 # Connection with replica 127.0.0.1:6380 lost.
945:M 13 Dec 2022 22:46:34.373 * Replica 127.0.0.1:6380 asks for synchronization
945:M 13 Dec 2022 22:46:34.373 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for '1200062e6b1065421dec8531ca2d96776029ab3d', my replication IDs are 'f945fd1714d8d3b78a149c8b2e0d57567ee6cb77' and '0000000000000000000000000000000000000000')
945:M 13 Dec 2022 22:46:34.373 * Starting BGSAVE for SYNC with target: disk
945:M 13 Dec 2022 22:46:34.374 * Background saving started by pid 1690
1690:C 13 Dec 2022 22:46:34.376 * DB saved on disk
1690:C 13 Dec 2022 22:46:34.377 * RDB: 2 MB of memory used by copy-on-write
945:M 13 Dec 2022 22:46:34.435 * Background saving terminated with success
945:M 13 Dec 2022 22:46:34.435 * Synchronization with replica 127.0.0.1:6380 succeeded

  • Check the slave log
[root@centos7-master ~]# tail -f /usr/local/src/redis/log/redis_6380.log
946:S 13 Dec 2022 22:46:34.436 * MASTER <-> REPLICA sync: Finished with success
946:S 13 Dec 2022 22:46:34.437 * Background append only file rewriting started by pid 1691
946:S 13 Dec 2022 22:46:34.470 * AOF rewrite child asks to stop sending diffs.
1691:C 13 Dec 2022 22:46:34.470 * Parent agreed to stop sending diffs. Finalizing AOF...
1691:C 13 Dec 2022 22:46:34.470 * Concatenating 0.00 MB of AOF diff received from parent.
1691:C 13 Dec 2022 22:46:34.470 * SYNC append only file rewrite performed
1691:C 13 Dec 2022 22:46:34.471 * AOF rewrite: 2 MB of memory used by copy-on-write
946:S 13 Dec 2022 22:46:34.536 * Background AOF rewrite terminated with success
946:S 13 Dec 2022 22:46:34.536 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
946:S 13 Dec 2022 22:46:34.536 * Background AOF rewrite finished successfully

Configuring Replication in the Slave's Config File

[root@centos7-master ~]# vim /usr/local/src/redis/etc/redis6380.conf 
# replicaof <masterip> <masterport>
replicaof 127.0.0.1 6379 #the master's IP and port; in this example multiple instances run on the same machine


# masterauth <master-password>
masterauth 123456 #required if the master has a password set

systemctl restart redis

#check the status on the master
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=3307,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=3307,lag=1
master_failover_state:no-failover
master_replid:f945fd1714d8d3b78a149c8b2e0d57567ee6cb77
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:3307
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:3307
127.0.0.1:6379> 

#stop the redis service on the master (systemctl stop redis); the slave then shows the following
127.0.0.1:6381> info replication
# Replication
role:slave
master_host:192.168.1.104
master_port:6379
master_link_status:down   #down means the master cannot be reached
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_read_repl_offset:84
slave_repl_offset:84
master_link_down_since_seconds:14
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:f6eefc841166e73282b4bab58527081653ddb0d1
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:84
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:15
repl_backlog_histlen:70
127.0.0.1:6381> 

  • Slave read-only state
    Verify that a slave node is read-only and does not accept writes
127.0.0.1:6381> set ll aa
(error) READONLY You can't write against a read only replica.
127.0.0.1:6381> 
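
The same read-only behavior can be checked from a client library. A minimal sketch with redis-py (an assumed client choice; host, port and password taken from this example):

import redis
from redis.exceptions import ReadOnlyError

replica = redis.Redis(host="127.0.0.1", port=6381, password="123456")
try:
    replica.set("ll", "aa")           #writes are rejected on a read-only replica
except ReadOnlyError as e:
    print("write refused:", e)
print(replica.get("ll"))              #reads still work (returns the replicated value, if any)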

Redis Sentinel Architecture

The following example builds a highly available Redis setup with one master, two slaves and Sentinel.


  • First build the master-slave replication

Sentinel requires that Redis master-slave replication is already in place.
Note: the masterauth setting on the master and on all slaves must be identical.
Key settings in redis.conf on all master and slave nodes.
Example: preparing the replication configuration

#run on all master and slave nodes
vim redis.conf
bind 0.0.0.0
masterauth "123456"
requirepass "123456"
#or apply the same changes non-interactively
[root@centos8 ~]#sed -i -e 's/bind 127.0.0.1/bind 0.0.0.0/' -e 's/^# masterauth .*/masterauth 123456/' -e 's/^# requirepass .*/requirepass 123456/' /etc/redis.conf
#run on all slave nodes
[root@centos8 ~]#echo "replicaof 192.168.32.133 6379" >> /etc/redis.conf
#run on all master and slave nodes
[root@centos8 ~]#systemctl enable --now redis
  • Configure slave1
[root@redis-slave1 ~]#redis-cli -a 123456
Warning: Using a password with '-a' or '-u' option on the command line interface
may not be safe.
127.0.0.1:6379> REPLICAOF 192.168.32.133 6379
OK
127.0.0.1:6379> CONFIG SET masterauth "123456"
OK
  • Configure slave2
[root@redis-slave2 ~]#redis-cli -a 123456
Warning: Using a password with '-a' or '-u' option on the command line interface
may not be safe.
127.0.0.1:6379> REPLICAOF 192.168.32.133 6379
OK
127.0.0.1:6379> CONFIG SET masterauth "123456"
OK
  • Edit the Sentinel configuration
    Sentinel configuration:
    Sentinel is effectively a special Redis server; it supports some Redis commands but not many others. By default it listens on port 26379/tcp.
    The sentinel service can be deployed on hosts separate from the Redis servers, but to save cost it is usually co-located with them.
    All redis nodes use the same configuration file, as in the example below.
#for a source (compiled) install, sentinel.conf is in the source directory; just copy it into the install directory, e.g. /usr/local/src/redis/etc/sentinel.conf
[root@centos8 ~]#cp redis-6.2.5/sentinel.conf /usr/local/src/redis/etc/
[root@centos8 ~]#chown redis.redis /usr/local/src/redis/etc/sentinel.conf
[root@centos8 ~]#vim /etc/redis-sentinel.conf
bind 0.0.0.0
port 26379
daemonize yes
pidfile "redis-sentinel.pid"
logfile "sentinel_26379.log"
dir "/tmp" #工作目录
sentinel monitor mymaster 10.0.0.8 6379 2
#mymaster是集群的名称,此行指定当前mymaster集群中master服务器的地址和端口
#2为法定人数限制(quorum),即有几个sentinel认为master down了就进行故障转移,一般此值是所有
sentinel节点(一般总数是>=3的 奇数,如:3,5,7等)的一半以上的整数值,比如,总数是3,即3/2=1.5,
取整为2,是master的ODOWN客观下线的依据
sentinel auth-pass mymaster 123456

#mymaster集群中master的密码,注意此行要在上面行的下面
sentinel down-after-milliseconds mymaster 30000
#判断mymaster集群中所有节点的主观下线(SDOWN)的时间,单位:毫秒,建议3000
sentinel parallel-syncs mymaster 1

#发生故障转移后,可以同时向新master同步数据的slave的数量,数字越小总同步时间越长,但可以减轻新
master的负载压力
sentinel failover-timeout mymaster 180000
#所有slaves指向新的master所需的超时时间,单位:毫秒
sentinel deny-scripts-reconfig yes #禁止修改脚本
logfile /var/log/redis/sentinel.log
  • All three sentinel servers use the following configuration
port 26379
daemonize no
pidfile "/var/run/redis-sentinel.pid"
logfile "/var/log/redis/sentinel.log"
dir "/tmp"
sentinel monitor mymaster 192.168.32.133 6379 2 #modify this line
sentinel auth-pass mymaster 123456 #add this line
sentinel down-after-milliseconds mymaster 3000 #modify this line
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
sentinel deny-scripts-reconfig yes
#note: this line is generated automatically and must be unique; it normally needs no change, but if two sentinels end up with the same myid, change it and restart the redis and sentinel services
sentinel myid 50547f34ed71fd48c197924969937e738a39975b
.....
# Generated by CONFIG REWRITE
protected-mode no
supervised systemd
sentinel leader-epoch mymaster 0
sentinel known-replica mymaster 10.0.0.28 6379
sentinel known-replica mymaster 10.0.0.18 6379
sentinel current-epoch 0
  • Start the Sentinel service
    Start the sentinel service on all sentinel servers
/usr/local/src/redis/bin/redis-sentinel /usr/local/src/redis/etc/sentinel.conf
  • Write a systemd service unit for it
vim /lib/systemd/system/redis-sentinel.service

[Unit]
Description=Redis Sentinel
After=network.target
[Service]
ExecStart=/usr/local/src/redis/bin/redis-sentinel /usr/local/src/redis/etc/sentinel.conf --supervised systemd
ExecStop=/bin/kill -s QUIT $MAINPID
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
[Install]
WantedBy=multi-user.target

#check the directory ownership and permissions on all nodes, otherwise the service will not start
[root@redis-master ~]#chown -R redis.redis /usr/local/src/redis/
  • Verify the Sentinel service

Check that the sentinel service is listening on its port, 26379

[root@centos8 log]# ss -ntl
State            Recv-Q           Send-Q                     Local Address:Port                       Peer Address:Port           Process           
LISTEN           0                511                              0.0.0.0:26379                           0.0.0.0:*                                
LISTEN           0                511                              0.0.0.0:6379                            0.0.0.0:*                                
LISTEN           0                128                              0.0.0.0:22                              0.0.0.0:*                                
LISTEN           0                511                                [::1]:6379                               [::]:*                                
LISTEN           0                128                                 [::]:22                                 [::]:*                                
[root@centos8 log]# 
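
Beyond checking the listening port, you can ask a Sentinel what it is monitoring. A minimal redis-py sketch (an assumed client choice; the address is one of the sentinel hosts in this deployment):

import redis

#Sentinel speaks the Redis protocol on port 26379
sentinel = redis.Redis(host="192.168.32.133", port=26379)

#address that this Sentinel currently considers to be the master of mymaster
print(sentinel.execute_command("SENTINEL", "GET-MASTER-ADDR-BY-NAME", "mymaster"))

#full state of the monitored master (flags, num-slaves, quorum, ...)
print(sentinel.execute_command("SENTINEL", "MASTER", "mymaster"))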
  • Sentinel operations

Manually take the current master offline (trigger a failover)

127.0.0.1:26379> sentinel failover <masterName>

Example: manual failover

vim redis.conf
replica-priority 10 #replica priority; the lower the value, the more likely sentinel is to promote this replica to master; the default is 100
systemctl restart redis

#or change it dynamically
[root@centos8 ~]#redis-cli -a 123456
Warning: Using a password with '-a' or '-u' option on the command line interface
may not be safe.
127.0.0.1:6379> CONFIG GET replica-priority
1) "replica-priority"
2) "100"
127.0.0.1:6379> CONFIG SET replica-priority 99
OK
127.0.0.1:6379> CONFIG GET replica-priority
1) "replica-priority"
2) "99"
[root@centos8 ~]#redis-cli -p 26379
127.0.0.1:26379> sentinel failover mymaster
OK
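
The same manual failover can be triggered from a script. A minimal redis-py sketch (an assumed client choice; sentinel address taken from the example above) that also prints the master address before and after:

import time
import redis

sentinel = redis.Redis(host="127.0.0.1", port=26379)

print("before:", sentinel.execute_command("SENTINEL", "GET-MASTER-ADDR-BY-NAME", "mymaster"))
#ask this sentinel to start a failover of the mymaster group (a manual failover does not require quorum)
sentinel.execute_command("SENTINEL", "FAILOVER", "mymaster")
time.sleep(5)  #give sentinel a moment to promote a replica
print("after:", sentinel.execute_command("SENTINEL", "GET-MASTER-ADDR-BY-NAME", "mymaster"))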

Connecting Applications to Sentinel

Redis officially lists client libraries for many programming languages:

https://redis.io/clients

How a Client Works with Sentinel

  1. The client obtains the set of Sentinel nodes and picks one Sentinel


  2. Through that sentinel, the client looks the master up by name: it calls sentinel get-master-addr-by-name <master-name> to obtain the current master's address


  3. The client sends the role command to that address to confirm it really is the master; this guards against the master changing during a failover


  4. The client keeps in contact with the set of Sentinel nodes by subscribing to the relevant Sentinel channels, so it is told about changes to the master and automatically reconnects to the new master (see the sketch below)
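
Step 4 can be implemented with Sentinel's pub/sub interface: Sentinel publishes a +switch-master event whenever the master changes. A minimal redis-py sketch (an assumed client choice; sentinel address from this deployment):

import redis

sentinel = redis.Redis(host="192.168.32.135", port=26379)
pubsub = sentinel.pubsub()
pubsub.subscribe("+switch-master")   #message data: "<master-name> <old-ip> <old-port> <new-ip> <new-port>"

for msg in pubsub.listen():
    if msg["type"] != "message":
        continue
    name, old_ip, old_port, new_ip, new_port = msg["data"].decode().split()
    if name == "mymaster":
        print(f"master moved from {old_ip}:{old_port} to {new_ip}:{new_port}; reconnect there")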


Connecting to Sentinel from Java

Java client for Redis:

https://github.com/xetorthio/jedis/blob/master/pom.xml

Connecting to Sentinel from Python

[root@centos8 ~]#yum -y install python3 python3-redis
[root@centos8 ~]#vim sentinel_test.py
#!/usr/bin/python3
import redis
from redis.sentinel import Sentinel
#connect to the sentinel servers (hostnames or domain names also work)
sentinel = Sentinel([('192.168.32.135', 26379),
                   ('192.168.32.133', 26379),
                   ('192.168.32.132', 26379)],
socket_timeout=0.5)
redis_auth_pass = '123456'
#mymaster is the cluster name configured for sentinel; this is the default, adjust it to match your own deployment
#get the master's address
master = sentinel.discover_master('mymaster')
print("master:",master)
#get the slave addresses
slave = sentinel.discover_slaves('mymaster')
print("slave:",slave)
#get a connection to the master for writing
master = sentinel.master_for('mymaster', socket_timeout=0.5,
password=redis_auth_pass, db=0)
w_ret = master.set('name', 'xy')
#output: True
#get a connection to a slave for reading (round-robin by default)
slave = sentinel.slave_for('mymaster', socket_timeout=0.5,
password=redis_auth_pass, db=0)
r_ret = slave.get('name')
print(r_ret)
#output: xy

chmod +x sentinel_test.py

./sentinel_test.py
master: ('192.168.32.135', 6379)
slave: [('192.168.32.133', 6379)]
b'xy'

Redis Cluster


Introduction to Redis Cluster

Sentinel only solves the high-availability problem by providing automatic failover for Redis; it does nothing about the performance ceiling of a single Redis master node.
To overcome that single-machine bottleneck and raise the overall performance of the Redis service, a distributed cluster solution can be used.
Early Redis distributed deployment approaches:

  • Client-side partitioning: the client program itself implements write distribution, high-availability management and failover, which makes the client fairly complex to develop.
  • Proxy service: clients do not connect to Redis directly but to a proxy that distributes reads and writes; such proxies are third-party projects. The clients need no special development, but the proxy nodes themselves remain a single point of failure and a performance bottleneck. Example: Codis, developed by Wandoujia (豌豆荚).

Redis 3.0 introduced Redis Cluster, a decentralized architecture that supports parallel writes across multiple master nodes and automatic failover.

Redis Cluster Architecture


A Redis Cluster needs at least 3 master nodes; the number of slave nodes is not limited, although normally each master has at least one slave.
With three master nodes, the 16384 hash slots are divided among them; for example, the slot ranges could be assigned as follows (a sketch of the slot computation follows the list):

Node M1: slots 0-5460
Node M2: slots 5461-10922
Node M3: slots 10923-16383
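
A key is mapped to a slot as HASH_SLOT = CRC16(key) mod 16384, where CRC16 is the CCITT/XModem variant. A minimal Python sketch of that computation (hash tags such as {user1000} are ignored here for brevity):

def crc16_xmodem(data: bytes) -> int:
    #CRC16-CCITT (XModem), the variant used by Redis Cluster
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    return crc16_xmodem(key.encode()) % 16384

#with the three-master layout above: 0-5460 -> M1, 5461-10922 -> M2, 10923-16383 -> M3
print("k1 ->", key_slot("k1"))   #12706, which is why k1 lands on the third master in the write test later on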


How Redis Cluster Works


Data Partitioning

With single-machine storage, the data simply lives on that one Redis instance. With cluster storage, the data has to be partitioned across the nodes.


Cluster Communication

Locating a slot is not always a single hop. For example, a key mapping to slot 14396 ends up on node3, but the client does not necessarily hit node3 directly; it might ask node1 first and only then reach node3.
Communication between the cluster nodes guarantees that at most two hops are needed to reach the node that owns a slot: every node keeps a record of which node is responsible for which slots, so even if the first request misses, that node tells the client which node owns the slot, and the follow-up request hits it precisely.
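
A client that talks to a single node sees this redirect as a MOVED error. A minimal redis-py sketch (an assumed client choice; password from the deployment below) that follows one redirect by hand; cluster-aware clients do this automatically:

import redis
from redis.exceptions import ResponseError

def get_following_moved(start_host, key, password="123456", port=6379):
    #ask any cluster node for a key and follow a single MOVED redirect if needed
    node = redis.Redis(host=start_host, port=port, password=password)
    try:
        return node.get(key)
    except ResponseError as e:
        #the server reply looks like: MOVED 12706 192.168.32.140:6379
        #(some client versions strip the leading word, so parse defensively)
        parts = str(e).replace("MOVED", "").split()
        if len(parts) == 2 and ":" in parts[1]:
            ip, owner_port = parts[1].rsplit(":", 1)
            owner = redis.Redis(host=ip, port=int(owner_port), password=password)
            return owner.get(key)
        raise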


Cluster Scaling

The number of nodes is not fixed once the cluster is created: new nodes can join and existing nodes can be taken offline, which is cluster expansion and contraction. Because cluster nodes and slots are tightly coupled, scaling the cluster always implies migrating slots and their data.


Cluster Expansion

When a new node is ready to join, it starts out as an isolated node. There are two ways to add it: either run the handshake command from a node that is already in the cluster, or use the cluster tooling/script to add the node (a minimal sketch of the handshake follows).
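
The handshake path boils down to the CLUSTER MEET command. A minimal redis-py sketch (an assumed client choice; addresses taken from the deployment later in this post), while the redis-cli --cluster tooling used later wraps this handshake plus the slot assignment for you:

import redis

#run the handshake from any node that is already a cluster member
member = redis.Redis(host="192.168.32.132", port=6379, password="123456")

#introduce the isolated node to the cluster; gossip then spreads the new node to all other members
member.execute_command("CLUSTER", "MEET", "192.168.32.133", "6379")

#the new node joins as an empty master; slots still have to be resharded to it afterwards
print(member.execute_command("CLUSTER", "INFO"))   #cluster_known_nodes should grow by one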


Cluster Contraction


Failover


Once a slave has been promoted, it takes over from the failed master as follows:

  1. The slave runs slaveof no one and becomes a master node.
  2. The slots that the failed node was responsible for are assigned to it.
  3. It broadcasts a Pong message to the other cluster nodes, announcing that the failover is complete.
  4. When the failed node comes back up, it becomes a slave of the new master.

Hands-On Deployment

Deploying Redis Cluster on Redis 5 and later

Official documentation:

https://redis.io/topics/cluster-tutorial

Preparing the Environment for Creating the Redis Cluster


  • Every Redis node runs the same Redis version, with the same password and comparable hardware
  • All Redis servers must contain no data at all
  • Prepare six hosts with the following addresses:
192.168.32.132
192.168.32.137
192.168.32.140
192.168.32.129
192.168.32.136
192.168.32.138

Enabling the Redis Cluster Configuration

Install the same Redis version on every node, then edit every node's Redis configuration; the cluster parameters must be enabled

vim /etc/redis.conf
bind 0.0.0.0
masterauth 123456 #recommended; without it, master-slave replication inside the cluster will fail and would have to be configured later
requirepass 123456
cluster-enabled yes #uncomment this line; the cluster feature must be enabled, and the redis process then carries a [cluster] flag
cluster-config-file nodes-6379.conf #uncomment this line; this cluster state file records master/slave relationships and slot ranges and is created and maintained automatically by redis cluster
cluster-require-full-coverage no #defaults to yes; setting it to no prevents a single unavailable node from taking the whole cluster down

Choose one of the following two approaches

  • Run the following command to modify the configuration in bulk

sed -i.bak -e 's/bind 127.0.0.1/bind 0.0.0.0/' -e '/masterauth/a masterauth 123456' -e '/# requirepass/a requirepass 123456' -e '/# cluster-enabled yes/a cluster-enabled yes' -e '/# cluster-config-file nodes-6379.conf/a cluster-config-file nodes-6379.conf' -e '/cluster-require-full-coverage yes/c cluster-require-full-coverage no' /etc/redis.conf
  • For a source (compiled) install, run the following instead
sed -i.bak -e '/masterauth/a masterauth 123456' -e '/# cluster-enabled yes/a cluster-enabled yes' -e '/# cluster-config-file nodes-6379.conf/a cluster-config-file nodes-6379.conf' -e '/cluster-require-full-coverage yes/c cluster-require-full-coverage no' /usr/local/src/redis/etc/redis.conf

Starting Redis at Boot

systemctl enable --now redis
# restart redis after modifying the configuration file
systemctl restart redis

Verify the current Redis service status:

#the cluster bus port 16379 is now open; the bus port = redis port + 10000
[root@centos7 ~]# ss -ntl
State      Recv-Q Send-Q                       Local Address:Port                                      Peer Address:Port              
LISTEN     0      128                                      *:22                                                   *:*                  
LISTEN     0      100                              127.0.0.1:25                                                   *:*                  
LISTEN     0      511                                      *:16379                                                *:*                  
LISTEN     0      511                                      *:6379                                                 *:*                  
LISTEN     0      128                                   [::]:22                                                [::]:*                  
LISTEN     0      100                                  [::1]:25                                                [::]:*                  
LISTEN     0      511                                  [::1]:16379                                             [::]:*                  
LISTEN     0      511                                  [::1]:6379                                              [::]:*                  
[root@centos7 ~]# 

Creating the Cluster

#the redis-cli option --cluster-replicas 1 means each master gets one slave node
# by default the first three addresses become the masters
[root@centos8 etc]# redis-cli -a 123456 --cluster create 192.168.32.132:6379 192.168.32.137:6379 192.168.32.140:6379 192.168.32.129:6379 192.168.32.136:6379 192.168.32.138:6379 --cluster-replicas 1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.32.136:6379 to 192.168.32.132:6379
Adding replica 192.168.32.138:6379 to 192.168.32.137:6379
Adding replica 192.168.32.129:6379 to 192.168.32.140:6379
M: 658dd91e4b51bf06b161e6903d4084c77abd195d 192.168.32.132:6379
   slots:[0-5460] (5461 slots) master
M: 46b54e8298e11e77450e232c9a0ee057b362191a 192.168.32.137:6379
   slots:[5461-10922] (5462 slots) master
M: f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 192.168.32.140:6379
   slots:[10923-16383] (5461 slots) master
S: ba4bb2dc1f4550e8602f500f1e0021896e78bf54 192.168.32.129:6379
   replicates f49ca2e55dae53fa0a069ea9e1d35a31ee62731e
S: f720a02fee9c4826d08258b740de008040cf80c5 192.168.32.136:6379
   replicates 658dd91e4b51bf06b161e6903d4084c77abd195d
S: eec71072ab8ad4068b4604f7196d881f9b5363e0 192.168.32.138:6379
   replicates 46b54e8298e11e77450e232c9a0ee057b362191a
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.32.132:6379)
M: 658dd91e4b51bf06b161e6903d4084c77abd195d 192.168.32.132:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: ba4bb2dc1f4550e8602f500f1e0021896e78bf54 192.168.32.129:6379
   slots: (0 slots) slave
   replicates f49ca2e55dae53fa0a069ea9e1d35a31ee62731e
M: f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 192.168.32.140:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 46b54e8298e11e77450e232c9a0ee057b362191a 192.168.32.137:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: f720a02fee9c4826d08258b740de008040cf80c5 192.168.32.136:6379
   slots: (0 slots) slave
   replicates 658dd91e4b51bf06b161e6903d4084c77abd195d
S: eec71072ab8ad4068b4604f7196d881f9b5363e0 192.168.32.138:6379
   slots: (0 slots) slave
   replicates 46b54e8298e11e77450e232c9a0ee057b362191a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@centos8 ~]# 

Verifying the Cluster

  • Check the master/slave status
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.32.136,port=6379,state=online,offset=98,lag=1
master_failover_state:no-failover
master_replid:b1bd51213722f38a83c8bb525e8a74e62392a161
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:98
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:98
127.0.0.1:6379> 
  • Check the cluster state

    127.0.0.1:6379> CLUSTER INFO
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:6    # number of nodes
    cluster_size:3           # number of masters serving slots
    cluster_current_epoch:6
    cluster_my_epoch:1
    cluster_stats_messages_ping_sent:210
    cluster_stats_messages_pong_sent:210
    cluster_stats_messages_sent:420
    cluster_stats_messages_ping_received:205
    cluster_stats_messages_pong_received:210
    cluster_stats_messages_meet_received:5
    cluster_stats_messages_received:420
    127.0.0.1:6379> 
    
    
    #check the cluster status from any node
    [root@centos8 ~]# redis-cli -a 123456 --cluster info 192.168.32.137:6379
    Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
    192.168.32.137:6379 (46b54e82...) -> 0 keys | 5462 slots | 1 slaves.
    192.168.32.140:6379 (f49ca2e5...) -> 0 keys | 5461 slots | 1 slaves.
    192.168.32.132:6379 (658dd91e...) -> 0 keys | 5461 slots | 1 slaves.
    [OK] 0 keys in 3 masters.
    0.00 keys per slot on average.
    [root@centos8 ~]# 
    

Viewing the Node Relationships

[root@centos8 ~]# redis-cli -a 123456 CLUSTER NODES
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
ba4bb2dc1f4550e8602f500f1e0021896e78bf54 192.168.32.129:6379@16379 slave f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 0 1671364792207 3 connected
658dd91e4b51bf06b161e6903d4084c77abd195d 192.168.32.132:6379@16379 myself,master - 0 1671364792000 1 connected 0-5460
f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 192.168.32.140:6379@16379 master - 0 1671364792000 3 connected 10923-16383
46b54e8298e11e77450e232c9a0ee057b362191a 192.168.32.137:6379@16379 master - 0 1671364793216 2 connected 5461-10922
f720a02fee9c4826d08258b740de008040cf80c5 192.168.32.136:6379@16379 slave 658dd91e4b51bf06b161e6903d4084c77abd195d 0 1671364793000 1 connected
eec71072ab8ad4068b4604f7196d881f9b5363e0 192.168.32.138:6379@16379 slave 46b54e8298e11e77450e232c9a0ee057b362191a 0 1671364792000 2 connected
[root@centos8 ~]# 

Testing Writes to the Cluster


  • Writing a key to the redis cluster
#the hash algorithm maps the key to a slot, which must be written on the node that owns it
[root@centos8 ~]# redis-cli -a 123456 set k1 v1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
(error) MOVED 12706 192.168.32.140:6379  #the slot does not belong to this node, so the write is refused here
[root@centos8 ~]# 

#writing on the node that owns the slot succeeds
[root@centos8 ~]# redis-cli -h 192.168.32.140 -a 123456 set k1 v1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
OK
[root@centos8 ~]# 

#on the corresponding slave node KEYS * works, but GET k1 fails; run GET k1 on the master instead
[root@centos8 ~]# redis-cli -h 192.168.32.129 -a 123456 get k1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
(error) MOVED 12706 192.168.32.140:6379
[root@centos8 ~]# redis-cli -h 192.168.32.129 -a 123456 keys "*"
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
1) "k1"
[root@centos8 ~]# 
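
Applications normally do not chase MOVED redirects by hand; they use a cluster-aware client. A minimal sketch with redis-py's RedisCluster class (assuming redis-py >= 4.1; address and password from this deployment):

from redis.cluster import RedisCluster

#any reachable cluster node works as the entry point; the client discovers the rest of the topology
rc = RedisCluster(host="192.168.32.132", port=6379, password="123456")

#the client computes the key's slot, picks the owning master and follows MOVED redirects,
#so this works no matter which node actually owns slot 12706
rc.set("k1", "v1")
print(rc.get("k1"))   #b'v1'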


Redis Cluster Management

Cluster Expansion

Typical expansion scenario:
The number of customers has surged and the existing Redis Cluster can no longer handle the ever-growing volume of concurrent requests. To solve this, two new servers have been purchased and must be added to the existing cluster dynamically, without affecting normal business access.
Note: in production an odd number of master nodes (e.g. 3, 5, 7) is generally recommended to help prevent split-brain.

  • Prepare the new nodes

The new Redis nodes must use the same Redis version and configuration as the existing nodes. Start the two new Redis nodes; they will serve as one master and one slave.

192.168.32.133  master
192.168.32.139  slave
# edit the configuration file on both new nodes
sed -i.bak -e '/masterauth/a masterauth 123456' -e '/# cluster-enabled yes/a cluster-enabled yes' -e '/# cluster-config-file nodes-6379.conf/a cluster-config-file nodes-6379.conf' -e '/cluster-require-full-coverage yes/c cluster-require-full-coverage no' /usr/local/src/redis/etc/redis.conf

systemctl restart redis
  • Add the new master node to the cluster
    Use the following command to add a new node: specify the new redis node's IP and port, followed by the IP:port of any node already in the cluster
add-node new_host:new_port existing_host:existing_port [--slave --master-id <arg>]
#explanation:
new_host:new_port #IP and port of the host being added
existing_host:existing_port #IP and port of any node already in the cluster
  • Add command for Redis 3/4:
#add the new Redis node 192.168.32.133 to the current Redis cluster
[root@redis-node1 ~]#redis-trib.rb add-node 192.168.32.133:6379 192.168.32.132:6379
  • Add command for Redis 5 and later:
#join a new host to the cluster
[root@redis-node1 ~]#redis-cli -a 123456 --cluster add-node 192.168.32.133:6379 <any-existing-cluster-node>:6379
[root@centos8 data]# redis-cli -a 123456 --cluster add-node 192.168.32.133:6379 192.168.32.132:6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 192.168.32.133:6379 to cluster 192.168.32.132:6379
>>> Performing Cluster Check (using node 192.168.32.132:6379)
M: 658dd91e4b51bf06b161e6903d4084c77abd195d 192.168.32.132:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: ba4bb2dc1f4550e8602f500f1e0021896e78bf54 192.168.32.129:6379
   slots: (0 slots) slave
   replicates f49ca2e55dae53fa0a069ea9e1d35a31ee62731e
M: f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 192.168.32.140:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 46b54e8298e11e77450e232c9a0ee057b362191a 192.168.32.137:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: f720a02fee9c4826d08258b740de008040cf80c5 192.168.32.136:6379
   slots: (0 slots) slave
   replicates 658dd91e4b51bf06b161e6903d4084c77abd195d
S: eec71072ab8ad4068b4604f7196d881f9b5363e0 192.168.32.138:6379
   slots: (0 slots) slave
   replicates 46b54e8298e11e77450e232c9a0ee057b362191a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.32.133:6379 to make it join the cluster.
[OK] New node added correctly.
[root@centos8 data]# 

  • Reassign slots to the new master
    A newly added node is a master by default but owns no slots; slots must be reassigned to it, otherwise it has nothing to serve and cannot be used.
    Note: back up the data before resharding so it can be restored after the expansion if the slot migration goes wrong.
    Redis 3/4 commands:
[root@redis-node1 ~]# redis-trib.rb check 10.0.0.67:6379 #current status
[root@redis-node1 ~]# redis-trib.rb reshard <any-node>:6379 #reshard
[root@redis-node1 ~]# redis-trib.rb fix 10.0.0.67:6379 #if the migration fails, use this command to repair the cluster
  • Redis 5 and later commands:
[root@redis-node1 ~]#redis-cli -a 123456 --cluster reshard <any-current-cluster-node>:6379
[root@centos8 data]# redis-cli -a 123456 --cluster reshard 192.168.32.133:6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing Cluster Check (using node 192.168.32.133:6379)
M: 77cfc3429c8b470331520074faea7c3a21f77d1f 192.168.32.133:6379
   slots: (0 slots) master
S: eec71072ab8ad4068b4604f7196d881f9b5363e0 192.168.32.138:6379
   slots: (0 slots) slave
   replicates 46b54e8298e11e77450e232c9a0ee057b362191a
M: 658dd91e4b51bf06b161e6903d4084c77abd195d 192.168.32.132:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 46b54e8298e11e77450e232c9a0ee057b362191a 192.168.32.137:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: ba4bb2dc1f4550e8602f500f1e0021896e78bf54 192.168.32.129:6379
   slots: (0 slots) slave
   replicates f49ca2e55dae53fa0a069ea9e1d35a31ee62731e
S: f720a02fee9c4826d08258b740de008040cf80c5 192.168.32.136:6379
   slots: (0 slots) slave
   replicates 658dd91e4b51bf06b161e6903d4084c77abd195d
M: f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 192.168.32.140:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
# use the ID of the newly added node, i.e. the node ID of 192.168.32.133
What is the receiving node ID? 77cfc3429c8b470331520074faea7c3a21f77d1f
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all     # choose all
Do you want to proceed with the proposed reshard plan (yes/no)? yes
  • Assign a slave to the new master

The new master in the cluster is currently a single point of failure; it still needs a slave of its own to provide high availability.
There are two approaches:
Method 1: when adding the node to the cluster, set it up directly as a slave.
Redis 3/4 add command:

redis-trib.rb add-node --slave --master-id 750cab050bc81f2655ed53900fd43d2e64423333 192.168.32.139:6379 <any-cluster-node>:6379

Redis 5 and later add command:

redis-cli -a 123456 --cluster add-node 192.168.32.139:6379 <any-cluster-node>:6379 --cluster-slave --cluster-master-id d6e2eca6b338b717923f64866bd31d42e52edc98

Example:

# check the current status
[root@centos8 ~]# redis-cli -a 123456 --cluster check 192.168.32.132:6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.32.132:6379 (658dd91e...) -> 0 keys | 4096 slots | 1 slaves.
192.168.32.133:6379 (77cfc342...) -> 0 keys | 4096 slots | 0 slaves.
192.168.32.140:6379 (f49ca2e5...) -> 1 keys | 4096 slots | 1 slaves.
192.168.32.137:6379 (46b54e82...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.32.132:6379)
M: 658dd91e4b51bf06b161e6903d4084c77abd195d 192.168.32.132:6379
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 77cfc3429c8b470331520074faea7c3a21f77d1f 192.168.32.133:6379
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: ba4bb2dc1f4550e8602f500f1e0021896e78bf54 192.168.32.129:6379
   slots: (0 slots) slave
   replicates f49ca2e55dae53fa0a069ea9e1d35a31ee62731e
M: f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 192.168.32.140:6379
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 46b54e8298e11e77450e232c9a0ee057b362191a 192.168.32.137:6379
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: f720a02fee9c4826d08258b740de008040cf80c5 192.168.32.136:6379
   slots: (0 slots) slave
   replicates 658dd91e4b51bf06b161e6903d4084c77abd195d
S: eec71072ab8ad4068b4604f7196d881f9b5363e0 192.168.32.138:6379
   slots: (0 slots) slave
   replicates 46b54e8298e11e77450e232c9a0ee057b362191a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@centos8 ~]# 

#add it directly as a slave node
[root@centos8 ~]# redis-cli -a 123456 --cluster add-node 192.168.32.139:6379 192.168.32.132:6379 --cluster-slave --cluster-master-id 77cfc3429c8b470331520074faea7c3a21f77d1f


# verify
[root@centos8 ~]# redis-cli -a 123456 --cluster check 192.168.32.132:6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.32.132:6379 (658dd91e...) -> 0 keys | 4096 slots | 1 slaves.
192.168.32.133:6379 (77cfc342...) -> 0 keys | 4096 slots | 1 slaves.
192.168.32.140:6379 (f49ca2e5...) -> 1 keys | 4096 slots | 1 slaves.
192.168.32.137:6379 (46b54e82...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.

Cluster Contraction

Typical contraction scenario:
Business has shrunk and the user base has dropped noticeably. It has been decided to take two of the eight hosts in the current Redis cluster out of service for other use; after the contraction, performance must still satisfy current business needs.
Node removal process:
Expansion first adds the node to the cluster and then assigns slots to it; contraction works the other way around: first migrate the slots on the node to be removed to other nodes in the cluster, and only then remove the node. If a node's slots have not been completely migrated away, deleting that node reports a data error and fails.

Migrate the slots on the master being removed to the other masters.
Note: the source Redis master being migrated must hold no data, otherwise the migration reports an error and is forcibly interrupted.
Redis 3/4 commands:

[root@redis-node1 ~]# redis-trib.rb reshard 10.0.0.8:6379
[root@redis-node1 ~]# redis-trib.rb fix 10.0.0.8:6379 #if the migration fails, use this command to repair the cluster

Redis 5 and later commands:

# check the current status
[root@centos8 ~]# redis-cli -a 123456 --cluster check 192.168.32.132:6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.32.132:6379 (658dd91e...) -> 0 keys | 4096 slots | 1 slaves.
192.168.32.133:6379 (77cfc342...) -> 0 keys | 4096 slots | 1 slaves.
192.168.32.140:6379 (f49ca2e5...) -> 1 keys | 4096 slots | 1 slaves.
192.168.32.137:6379 (46b54e82...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.32.132:6379)
M: 658dd91e4b51bf06b161e6903d4084c77abd195d 192.168.32.132:6379
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 77cfc3429c8b470331520074faea7c3a21f77d1f 192.168.32.133:6379
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
S: ba4bb2dc1f4550e8602f500f1e0021896e78bf54 192.168.32.129:6379
   slots: (0 slots) slave
   replicates f49ca2e55dae53fa0a069ea9e1d35a31ee62731e
M: f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 192.168.32.140:6379
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 46b54e8298e11e77450e232c9a0ee057b362191a 192.168.32.137:6379
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: f720a02fee9c4826d08258b740de008040cf80c5 192.168.32.136:6379
   slots: (0 slots) slave
   replicates 658dd91e4b51bf06b161e6903d4084c77abd195d
S: a44914056fd3a170850ad572c0e238b499455897 192.168.32.139:6379
   slots: (0 slots) slave
   replicates 77cfc3429c8b470331520074faea7c3a21f77d1f
S: eec71072ab8ad4068b4604f7196d881f9b5363e0 192.168.32.138:6379
   slots: (0 slots) slave
   replicates 46b54e8298e11e77450e232c9a0ee057b362191a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@centos8 ~]# 


#connect to any cluster node; this run moves slots from 192.168.32.133 to the first master node, 192.168.32.132
[root@centos8 ~]# redis-cli -a 123456 --cluster reshard 192.168.32.132:6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing Cluster Check (using node 192.168.32.132:6379)
M: 658dd91e4b51bf06b161e6903d4084c77abd195d 192.168.32.132:6379
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 77cfc3429c8b470331520074faea7c3a21f77d1f 192.168.32.133:6379
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
S: ba4bb2dc1f4550e8602f500f1e0021896e78bf54 192.168.32.129:6379
   slots: (0 slots) slave
   replicates f49ca2e55dae53fa0a069ea9e1d35a31ee62731e
M: f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 192.168.32.140:6379
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 46b54e8298e11e77450e232c9a0ee057b362191a 192.168.32.137:6379
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: f720a02fee9c4826d08258b740de008040cf80c5 192.168.32.136:6379
   slots: (0 slots) slave
   replicates 658dd91e4b51bf06b161e6903d4084c77abd195d
S: a44914056fd3a170850ad572c0e238b499455897 192.168.32.139:6379
   slots: (0 slots) slave
   replicates 77cfc3429c8b470331520074faea7c3a21f77d1f
S: eec71072ab8ad4068b4604f7196d881f9b5363e0 192.168.32.138:6379
   slots: (0 slots) slave
   replicates 46b54e8298e11e77450e232c9a0ee057b362191a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 1356 # roughly 4096/3, one batch for each of the other three masters
What is the receiving node ID? 658dd91e4b51bf06b161e6903d4084c77abd195d # ID of the receiving master
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 77cfc3429c8b470331520074faea7c3a21f77d1f  # ID of the node being removed, i.e. 192.168.32.133
Source node #2: done
Do you want to proceed with the proposed reshard plan (yes/no)? yes

# run redis-cli -a 123456 --cluster reshard 192.168.32.132:6379 twice more, sending a batch to each of the other two masters
  • Remove the server from the cluster

After the steps above, the slots have been migrated away, but the node is still a member of the cluster, so it also has to be removed from the cluster.
Note: a node's slots must be emptied before it is deleted, otherwise the deletion fails.
Redis 3/4 commands:

[root@s~]#redis-trib.rb del-node <any-cluster-node-IP>:6379 dfffc371085859f2858730e1f350e9167e287073
#dfffc371085859f2858730e1f350e9167e287073 is the ID of the node being removed
>>> Removing node dfffc371085859f2858730e1f350e9167e287073 from cluster
192.168.7.102:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Redis 5 and later commands:

[root@redis-node1 ~]#redis-cli -a 123456 --cluster del-node <any-cluster-node-IP>:6379 cb028b83f9dc463d732f6e76ca6bbcd469d948a7
#cb028b83f9dc463d732f6e76ca6bbcd469d948a7 is the ID of the node being removed

Example

# view node information
[root@centos8 ~]# redis-cli -a 123456 --cluster check 192.168.32.132:6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.32.132:6379 (658dd91e...) -> 0 keys | 8164 slots | 1 slaves.
192.168.32.133:6379 (77cfc342...) -> 0 keys | 28 slots | 1 slaves.
192.168.32.140:6379 (f49ca2e5...) -> 1 keys | 4096 slots | 1 slaves.
192.168.32.137:6379 (46b54e82...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 1 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.32.132:6379)
M: 658dd91e4b51bf06b161e6903d4084c77abd195d 192.168.32.132:6379
   slots:[0-6826],[10923-12259] (8164 slots) master
   1 additional replica(s)
M: 77cfc3429c8b470331520074faea7c3a21f77d1f 192.168.32.133:6379
   slots:[12260-12287] (28 slots) master
   1 additional replica(s)
S: ba4bb2dc1f4550e8602f500f1e0021896e78bf54 192.168.32.129:6379
   slots: (0 slots) slave
   replicates f49ca2e55dae53fa0a069ea9e1d35a31ee62731e
M: f49ca2e55dae53fa0a069ea9e1d35a31ee62731e 192.168.32.140:6379
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
M: 46b54e8298e11e77450e232c9a0ee057b362191a 192.168.32.137:6379
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: f720a02fee9c4826d08258b740de008040cf80c5 192.168.32.136:6379
   slots: (0 slots) slave
   replicates 658dd91e4b51bf06b161e6903d4084c77abd195d
S: a44914056fd3a170850ad572c0e238b499455897 192.168.32.139:6379
   slots: (0 slots) slave
   replicates 77cfc3429c8b470331520074faea7c3a21f77d1f
S: eec71072ab8ad4068b4604f7196d881f9b5363e0 192.168.32.138:6379
   slots: (0 slots) slave
   replicates 46b54e8298e11e77450e232c9a0ee057b362191a
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@centos8 ~]# 

# remove the 192.168.32.133 node
[root@centos8 ~]# redis-cli -a 123456 --cluster del-node 192.168.32.132:6379 77cfc3429c8b470331520074faea7c3a21f77d1f
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Removing node 77cfc3429c8b470331520074faea7c3a21f77d1f from cluster 192.168.32.132:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
[root@centos8 ~]# 

Common Interview Questions

  • What is Redis used for, i.e. in which scenarios is it used?
  • How do you monitor whether Redis has failed?
  • Redis client timeout errors suddenly increase: what is your troubleshooting approach?
  • Briefly describe the pipeline feature; why does pipelining improve Redis performance?
  • A local redis-client fails to access a remote Redis service: name some common errors.
  • If a key-value pair is extremely large, or a single key has extremely high QPS, what impact does that have on Redis itself and on the other clients accessing Redis?
  • Which items would you monitor for Redis with Zabbix?
  • What is the difference between RDB and AOF persistence?
  • After pulling a Redis image with Docker, how do you persist its data?
  • Which data types does Redis support?
  • How can Redis implement a message queue?
  • Describe the common Redis cluster architectures and compare their advantages and disadvantages.
  • How does master-slave replication work?
  • How does Redis achieve high availability?
  • How does Sentinel work?
  • How does Redis Cluster work?
  • How does a Redis cluster avoid split-brain?
  • What is the minimum number of nodes in a Redis cluster, and why?
  • How many hash slots does a Redis cluster have?
  • If one slot is missing from a node, can the Redis cluster still be used?
  • When data is written to Redis Cluster, how is it distributed across the slots on the different nodes?
  • In what form does Redis store its data?
  • What is the relationship between each node's slots and the total number of slots in the cluster?