Redis Deployment

Redis installation and connection

Official installation instructions:

https://redis.io/docs/getting-started/installation/
Installing Redis from packages

Ubuntu:

[root@ubuntu2204 ~]
[root@ubuntu2004 ~]
[root@ubuntu2004 ~]
|-redis-server(1330)-+-{redis-server}(1331)
|                    |-{redis-server}(1332)
|                    `-{redis-server}(1333)
[root@ubuntu2004 ~]
LISTEN 0 511 127.0.0.1:6379 0.0.0.0:*

CentOS:

[root@centos8 ~]
[root@centos8 ~]
[root@centos8 ~]
LISTEN 0 128 127.0.0.1:6379 0.0.0.0:*
[root@centos8 ~]
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> info
redis_version:5.0.3
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:8c0bf22bfba82c8f
redis_mode:standalone
os:Linux 4.18.0-147.el8.x86_64 x86_64
Compiling and Installing Redis from Source

Official download link for the Redis source packages:

http://download.redis.io/releases/

Compile and install — official instructions:

https://redis.io/docs/getting-started/installation/install-redis-from-source/
https://redis.io/docs/latest/operate/oss_and_stack/install/build-stack/almalinux-rocky-8/

Example: compiling and installing (the exact commands were lost from the transcript below; a sketch of the typical steps follows it)
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 [root@centos8~] [root@centos8~] [root@ubuntu2004 ~] [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] [root@centos8 redis-6.2.4] [root@centos8 redis-6.2.4] [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] /apps/redis/ └── bin ├── redis-benchmark ├── redis-check-aof ├── redis-check-rdb ├── redis-cli ├── redis-sentinel -> redis-server └── redis-server 1 directory, 6 files [root@centos8 ~] [root@centos8 redis-6.2.4]
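The prompts in the transcript above lost their commands during extraction. As a rough sketch only (not the author's exact commands), the usual sequence for redis-6.2.4 with PREFIX=/apps/redis, matching the directory tree shown and the dependency list used by the one-click script later in this section, would be:

# build dependencies (CentOS/Rocky; on Ubuntu: apt -y install gcc make libjemalloc-dev libsystemd-dev)
yum -y install gcc make jemalloc-devel systemd-devel wget tree
wget http://download.redis.io/releases/redis-6.2.4.tar.gz
tar xf redis-6.2.4.tar.gz
cd redis-6.2.4
# USE_SYSTEMD=yes enables Type=notify support; PREFIX controls the install location
make -j 2 USE_SYSTEMD=yes PREFIX=/apps/redis install
tree /apps/redis/        # should show the bin/ directory listed above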
Starting Redis in the foreground

redis-server is the main server-side program of Redis.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [root@centos8 ~] Usage: ./redis-server [/path/to/redis.conf] [options] ./redis-server - (read config from stdin) ./redis-server -v or --version ./redis-server -h or --help ./redis-server --test-memory <megabytes> Examples: ./redis-server (run the server with default conf) ./redis-server /etc/redis/6379.conf ./redis-server --port 7777 ./redis-server --port 7777 --slaveof 127.0.0.1 8888 ./redis-server /etc/myredis.conf --loglevel verbose Sentinel mode: ./redis-server /etc/sentinel.conf --sentinel
Start redis in the foreground:

[root@centos8 ~]
[root@centos8 ~]
LISTEN 0 511 127.0.0.1:6379 0.0.0.0:*

Example: running multiple Redis instances (the commands were lost in extraction; see the sketch after the output)
1 2 3 4 5 6 7 8 9 10 11 12 13 [root@centos8 ~] [root@centos8 ~] State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 511 *:6379 *:* LISTEN 0 511 *:6380 *:* [root@centos8 ~] redis 4407 1 0 10:56 ? 00:00:01 /apps/redis/bin/redis-server 0.0.0.0:6379 root 4451 963 0 11:05 pts/0 00:00:00 redis-server *:6380 root 4484 4455 0 11:09 pts/1 00:00:00 grep --color=auto redis [root@centos8 ~] 127.0.0.1:6380>
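A minimal sketch of starting a second instance on port 6380 and connecting to it, consistent with the ps output above (it assumes the compiled binaries in /apps/redis/bin and that the first instance on 6379 is already running; options can also be placed in a dedicated config file as shown in the multi-instance example later):

/apps/redis/bin/redis-server --port 6380 &      # extra instance, options given on the command line
ss -tnl | grep -E '6379|6380'                   # both listeners should appear
redis-cli -p 6380                               # connect to the second instance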
Eliminating the three startup warnings (optional)

Starting Redis directly as shown above prints three warning messages. They can be silenced with the methods below, although doing so is not mandatory.

TCP backlog

WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

The TCP backlog is the queue that holds connections which have completed the TCP three-way handshake (the server has received the client's ACK) but have not yet been handed to the application by accept(), i.e. the full-connection (accept) queue.

net.core.somaxconn = 1024
overcommit_memory

WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.

Kernel parameter description:

The kernel parameter overcommit_memory controls the memory allocation policy and accepts three values: 0, 1 and 2.
0: the kernel checks whether enough free memory is available for the application; if so, the allocation succeeds, otherwise it fails and an error is returned to the application process.
1: the kernel allows allocating all of the physical memory regardless of the current memory state.
2: the kernel refuses to overcommit; total allocations may not exceed swap space plus a configurable share (overcommit_ratio) of physical RAM.
Example:

vm.overcommit_memory = 1
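A minimal sketch of applying both kernel settings persistently (this mirrors what the one-click installation script later in this section does):

cat >> /etc/sysctl.conf <<EOF
net.core.somaxconn = 1024
vm.overcommit_memory = 1
EOF
sysctl -p        # load the new values immediately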
Transparent hugepages

WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.

(THP uses 2 MB pages instead of the usual 4 KB pages; as the warning says, disable it with 'echo never > /sys/kernel/mm/transparent_hugepage/enabled', persist the setting in /etc/rc.local, and restart Redis afterwards.)

Example:
[root@ubuntu2004 ~]
always [madvise] never
[root@rocky8 ~]
[always] madvise never
[root@centos7 ~]
[always] madvise never
[root@ubuntu2004 ~]
echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@ubuntu2004 ~]
[root@centos8 ~]
[root@centos8 ~]
touch /var/lock/subsys/local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@centos8 ~]
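A sketch of disabling THP immediately and persisting the setting across reboots via rc.local, as the warning text suggests (paths assume a CentOS/Rocky layout; on Ubuntu use /etc/rc.local and make sure it starts with #!/bin/bash):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
cat >> /etc/rc.d/rc.local <<'EOF'
echo never > /sys/kernel/mm/transparent_hugepage/enabled
EOF
chmod +x /etc/rc.d/rc.local
cat /sys/kernel/mm/transparent_hugepage/enabled   # expect: always madvise [never]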
Verify the warnings are gone: after restarting the redis service, the three warnings no longer appear.

Create the Redis user and set data directory permissions (a sketch of the typical commands follows)

[root@centos8 ~]
[root@centos8 ~]
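The commands behind the two prompts above are not shown; a sketch of what they typically are, consistent with the one-click script below (a system account without a login shell, then ownership of the installation tree):

useradd -r -s /sbin/nologin redis
chown -R redis.redis /apps/redis/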
Create the Redis service unit file

[root@centos8 ~]
[root@centos8 ~]
[root@centos8 ~]
[root@centos8 ~]
[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
ExecStart=/apps/redis/bin/redis-server /apps/redis/etc/redis.conf --supervised systemd
ExecStop=/bin/kill -s QUIT $MAINPID
Type=notify
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
LimitNOFILE=1000000

[Install]
WantedBy=multi-user.target
Starting Redis via the service unit (the management commands are sketched below)

[root@centos8 ~]
[root@centos8 ~]
[root@centos8 ~]
[root@centos8 ~]
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 *:6379 *:*

Verify client connections to Redis

Example:
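A sketch of the service management commands implied by the output above (assuming the unit file created in the previous step):

systemctl daemon-reload
systemctl enable --now redis
systemctl status redis --no-pager
ss -tnl | grep 6379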
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 [root@centos8 ~] 127.0.0.1:6379> ping PONG 127.0.0.1:6379> info redis_version:5.0.7 redis_git_sha1:00000000 redis_git_dirty:0 redis_build_id:673d8c0ee1a8872 redis_mode:standalone os:Linux 3.10.0-1062.el7.x86_64 x86_64 arch_bits:64 multiplexing_api:epoll atomicvar_api:atomic-builtin gcc_version:4.8.5 process_id:1669 run_id:5e0420e92e35ad1d740e9431bc655bfd0044a5d1 tcp_port:6379 uptime_in_seconds:140 uptime_in_days:0 hz:10 configured_hz:10 lru_clock:4807524 executable:/apps/redis/bin/redis-server config_file:/apps/redis/etc/redis.conf connected_clients:1 client_recent_max_input_buffer:2 client_recent_max_output_buffer:0 blocked_clients:0 used_memory:575792 used_memory_human:562.30K used_memory_rss:3506176 used_memory_rss_human:3.34M used_memory_peak:575792 used_memory_peak_human:562.30K used_memory_peak_perc:100.18% used_memory_overhead:562590 used_memory_startup:512896 used_memory_dataset:13202 used_memory_dataset_perc:20.99% allocator_allocated:1201392 allocator_active:1531904 allocator_resident:8310784 total_system_memory:1019645952 total_system_memory_human:972.41M used_memory_lua:37888 used_memory_lua_human:37.00K used_memory_scripts:0 used_memory_scripts_human:0B number_of_cached_scripts:0 maxmemory:0 maxmemory_human:0B maxmemory_policy:noeviction allocator_frag_ratio:1.28 allocator_frag_bytes:330512 allocator_rss_ratio:5.43 allocator_rss_bytes:6778880 rss_overhead_ratio:0.42 rss_overhead_bytes:-4804608 mem_fragmentation_ratio:6.57 mem_fragmentation_bytes:2972384 mem_not_counted_for_evict:0 mem_replication_backlog:0 mem_clients_slaves:0 mem_clients_normal:49694 mem_aof_buffer:0 mem_allocator:jemalloc-5.1.0 active_defrag_running:0 lazyfree_pending_objects:0 loading:0 rdb_changes_since_last_save:0 rdb_bgsave_in_progress:0 rdb_last_save_time:1581865688 rdb_last_bgsave_status:ok rdb_last_bgsave_time_sec:-1 rdb_current_bgsave_time_sec:-1 rdb_last_cow_size:0 aof_enabled:0 aof_rewrite_in_progress:0 aof_rewrite_scheduled:0 aof_last_rewrite_time_sec:-1 aof_current_rewrite_time_sec:-1 aof_last_bgrewrite_status:ok aof_last_write_status:ok aof_last_cow_size:0 total_connections_received:1 total_commands_processed:2 instantaneous_ops_per_sec:0 total_net_input_bytes:45 total_net_output_bytes:11475 instantaneous_input_kbps:0.00 instantaneous_output_kbps:0.00 rejected_connections:0 sync_full:0 sync_partial_ok:0 sync_partial_err:0 expired_keys:0 expired_stale_perc:0.00 expired_time_cap_reached_count:0 evicted_keys:0 keyspace_hits:0 keyspace_misses:0 pubsub_channels:0 pubsub_patterns:0 latest_fork_usec:0 migrate_cached_sockets:0 slave_expires_tracked_keys:0 active_defrag_hits:0 active_defrag_misses:0 active_defrag_key_hits:0 active_defrag_key_misses:0 role:master connected_slaves:0 master_replid:f7228f0b6203183004fae8db00568f9f73422dc4 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:0 second_repl_offset:-1 repl_backlog_active:0 repl_backlog_size:1048576 repl_backlog_first_byte_offset:0 repl_backlog_histlen:0 used_cpu_sys:0.132821 used_cpu_user:0.124317 used_cpu_sys_children:0.000000 used_cpu_user_children:0.000000 
cluster_enabled:0 127.0.0.1:6379> exit
Hands-on case: one-click Redis compile-and-install script

#!/bin/bash
REDIS_VERSION=redis-6.2.5
PASSWORD=123456
INSTALL_DIR=/apps/redis
CPUS=`lscpu | awk '/^CPU\(s\)/{print $2}'`
. /etc/os-release

color () {
    RES_COL=60
    MOVE_TO_COL="echo -en \\033[${RES_COL}G"
    SETCOLOR_SUCCESS="echo -en \\033[1;32m"
    SETCOLOR_FAILURE="echo -en \\033[1;31m"
    SETCOLOR_WARNING="echo -en \\033[1;33m"
    SETCOLOR_NORMAL="echo -en \E[0m"
    echo -n "$1" && $MOVE_TO_COL
    echo -n "["
    if [ $2 = "success" -o $2 = "0" ] ;then
        ${SETCOLOR_SUCCESS}
        echo -n $"  OK  "
    elif [ $2 = "failure" -o $2 = "1" ] ;then
        ${SETCOLOR_FAILURE}
        echo -n $"FAILED"
    else
        ${SETCOLOR_WARNING}
        echo -n $"WARNING"
    fi
    ${SETCOLOR_NORMAL}
    echo -n "]"
    echo
}

prepare (){
    if [ $ID = "centos" ];then
        yum -y install gcc make jemalloc-devel systemd-devel
    else
        apt update
        apt -y install gcc make libjemalloc-dev libsystemd-dev
    fi
    if [ $? -eq 0 ];then
        color "Packages installed successfully" 0
    else
        color "Package installation failed, check the network configuration" 1
        exit
    fi
}

install () {
    if [ ! -f ${REDIS_VERSION}.tar.gz ];then
        wget http://download.redis.io/releases/${REDIS_VERSION}.tar.gz || { color "Redis source download failed" 1 ; exit ; }
    fi
    tar xf ${REDIS_VERSION}.tar.gz
    cd ${REDIS_VERSION}
    make -j $CPUS USE_SYSTEMD=yes PREFIX=${INSTALL_DIR} install && color "Redis compiled and installed" 0 || { color "Redis compile/install failed" 1 ; exit ; }
    ln -s ${INSTALL_DIR}/bin/redis-* /usr/bin/
    mkdir -p ${INSTALL_DIR}/{etc,log,data,run}
    cp redis.conf ${INSTALL_DIR}/etc/
    sed -i -e 's/bind 127.0.0.1/bind 0.0.0.0/' -e "/# requirepass/a requirepass $PASSWORD" -e "/^dir .*/c dir ${INSTALL_DIR}/data/" -e "/logfile .*/c logfile ${INSTALL_DIR}/log/redis-6379.log" -e "/^pidfile .*/c pidfile ${INSTALL_DIR}/run/redis_6379.pid" ${INSTALL_DIR}/etc/redis.conf
    if id redis &> /dev/null ;then
        color "Redis user already exists" 1
    else
        useradd -r -s /sbin/nologin redis
        color "Redis user created" 0
    fi
    chown -R redis.redis ${INSTALL_DIR}
    cat >> /etc/sysctl.conf <<EOF
net.core.somaxconn = 1024
vm.overcommit_memory = 1
EOF
    sysctl -p
    if [ $ID = "centos" ];then
        echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.d/rc.local
        chmod +x /etc/rc.d/rc.local
        /etc/rc.d/rc.local
    else
        echo -e '#!/bin/bash\necho never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
        chmod +x /etc/rc.local
        /etc/rc.local
    fi
    cat > /lib/systemd/system/redis.service <<EOF
[Unit]
Description=Redis persistent key-value database
After=network.target

[Service]
ExecStart=${INSTALL_DIR}/bin/redis-server ${INSTALL_DIR}/etc/redis.conf --supervised systemd
ExecStop=/bin/kill -s QUIT \$MAINPID
Type=notify
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755
LimitNOFILE=1000000

[Install]
WantedBy=multi-user.target
EOF
    systemctl daemon-reload
    systemctl enable --now redis &> /dev/null
    if [ $? -eq 0 ];then
        color "Redis service started successfully; Redis info follows:" 0
    else
        color "Redis failed to start" 1
        exit
    fi
    sleep 2
    redis-cli -a $PASSWORD INFO Server 2> /dev/null
}

prepare
install
Multiple Redis Instances

Test environments often run multiple instances, which requires a dedicated port, configuration file, log file and related settings for each instance.

Example: implementing multiple redis instances with a compiled installation
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 [root@centos8 ~] total 0 drwxr-xr-x 2 redis redis 134 Oct 15 22:13 bin drwxr-xr-x 2 redis redis 69 Oct 15 23:25 data drwxr-xr-x 2 redis redis 75 Oct 15 22:42 etc drwxr-xr-x 2 redis redis 72 Oct 15 23:25 log drwxr-xr-x 2 redis redis 72 Oct 15 22:47 run [root@centos8 ~] /apps/redis/ ├── bin │ ├── redis-benchmark │ ├── redis-check-aof │ ├── redis-check-rdb │ ├── redis-cli │ ├── redis-sentinel -> redis-server │ └── redis-server ├── data │ ├── dump_6379.rdb │ ├── dump_6380.rdb │ └── dump_6381.rdb ├── etc │ ├── redis_6379.conf │ ├── redis_6380.conf │ └── redis_6381.conf ├── log │ ├── redis_6379.log │ ├── redis_6380.log │ └── redis_6381.log └── run ├── redis_6379.pid ├── redis_6380.pid └── redis_6381.pid 5 directories, 18 files [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] bind 0.0.0.0protected-mode yes port 6379 tcp-backlog 511 timeout 0tcp-keepalive 300 daemonize yes supervised no pidfile /apps/redis/run/redis_6379.pid loglevel notice logfile "/apps/redis/log/redis_6379.log" databases 16 always-show-logo yes save 900 1 save 300 10 save 60 10000 stop-writes-on-bgsave-error yes rdbcompression yes rdbchecksum yes dbfilename dump_6379.rdb dir /apps/redis/data/replica-serve-stale-data yes replica-read-only yes repl-diskless-sync no repl-diskless-sync-delay 5 repl-disable-tcp-nodelay no replica-priority 100 lazyfree-lazy-eviction no lazyfree-lazy-expire no lazyfree-lazy-server-del no replica-lazy-flush no appendonly no appendfilename "appendonly_6379.aof" appendfsync everysec no-appendfsync-on-rewrite no auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb aof-load-truncated yes aof-use-rdb-preamble yes lua-time-limit 5000 slowlog-log-slower-than 10000 slowlog-max-len 128 latency-monitor-threshold 0 notify-keyspace-events "" hash-max-ziplist-entries 512 hash-max-ziplist-value 64 list-max-ziplist-size -2 list-compress-depth 0 set-max-intset-entries 512 zset-max-ziplist-entries 128 zset-max-ziplist-value 64 hll-sparse-max-bytes 3000 stream-node-max-bytes 4096 stream-node-max-entries 100 activerehashing yes client-output-buffer-limit normal 0 0 0 client-output-buffer-limit replica 256mb 64mb 60 client-output-buffer-limit pubsub 32mb 8mb 60 hz 10 dynamic-hz yes aof-rewrite-incremental-fsync yes rdb-save-incremental-fsync yes [root@centos8 ~] port 6380 pidfile /apps/redis/run/redis_6380.pid logfile "/apps/redis/log/redis_6380.log" dbfilename dump_6380.rdb appendfilename "appendonly_6380.aof" [root@centos7 ~] port 6381 pidfile /apps/redis/run/redis_6381.pid logfile "/apps/redis/log/redis_6381.log" dbfilename dump_6381.rdb appendfilename "appendonly_6381.aof" [root@centos8 ~] [Unit] Description=Redis persistent key-value database After=network.target [Service] ExecStart=/apps/redis/bin/redis-server /apps/redis/etc/redis_6379.conf --supervised systemd ExecStop=/bin/kill -s QUIT $MAINPID Type=notify User=redis Group=redis RuntimeDirectory=redis RuntimeDirectoryMode=0755 [Install] 
WantedBy=multi-user.target [root@centos8 ~] [Unit] Description=Redis persistent key-value database After=network.target [Service] ExecStart=/apps/redis/bin/redis-server /apps/redis/etc/redis_6380.conf --supervised systemd ExecStop=/bin/kill -s QUIT $MAINPID Type=notify User=redis Group=redis RuntimeDirectory=redis RuntimeDirectoryMode=0755 [Install] WantedBy=multi-user.target [root@centos8 ~] [Unit] Description=Redis persistent key-value database After=network.target [Service] ExecStart=/apps/redis/bin/redis-server /apps/redis/etc/redis_6381.conf --supervised systemd ExecStop=/bin/kill -s QUIT $MAINPID Type=notify User=redis Group=redis RuntimeDirectory=redis RuntimeDirectoryMode=0755 [Install] WantedBy=multi-user.target [root@centos8 ~] [root@centos8 ~] [root@centos8 ~]
Redis Tools and Client Connections

Programs installed with Redis:

[root@ubuntu2204 ~]
total 32772
-rwxr-xr-x 1 root root 4366792 Feb 16 21:12 redis-benchmark
-rwxr-xr-x 1 root root 8125184 Feb 16 21:12 redis-check-aof -> redis-server
-rwxr-xr-x 1 root root 8125184 Feb 16 21:12 redis-check-rdb -> redis-server
-rwxr-xr-x 1 root root 4807856 Feb 16 21:12 redis-cli
lrwxrwxrwx 1 root root 12 Feb 16 21:12 redis-sentinel -> redis-server
-rwxr-xr-x 1 root root 8125184 Feb 16 21:12 redis-server
[root@centos8 ~]
total 32772
-rwxr-xr-x 1 root root 4366792 Feb 16 21:12 redis-benchmark
-rwxr-xr-x 1 root root 8125184 Feb 16 21:12 redis-check-aof
-rwxr-xr-x 1 root root 8125184 Feb 16 21:12 redis-check-rdb
-rwxr-xr-x 1 root root 4807856 Feb 16 21:12 redis-cli
lrwxrwxrwx 1 root root 12 Feb 16 21:12 redis-sentinel -> redis-server
-rwxr-xr-x 1 root root 8125184 Feb 16 21:12 redis-server
Client program: redis-cli

redis-cli
redis-cli -h <Redis server IP> -p <PORT> -a <PASSWORD> --no-auth-warning

Connecting to Redis from programs

Redis can be accessed from many programming languages:

https://redis.io/clients
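A few typical invocations as a sketch (host, port and password values below are placeholders, not from the original text):

redis-cli                                                          # connect to 127.0.0.1:6379 without a password
redis-cli -h 10.0.0.8 -p 6379 -a 123456 --no-auth-warning ping     # expect PONG
redis-cli -h 10.0.0.8 -a 123456 --no-auth-warning -n 1 keys '*'    # -n selects database 1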
Accessing Redis from a shell script

[root@centos8 ~]
NUM=100
PASS=123456
for i in `seq $NUM`;do
    redis-cli -h 127.0.0.1 -a "$PASS" --no-auth-warning set key${i} value${i}
    echo "key${i} value${i} written"
done
echo "all $NUM keys written"
Connecting to Redis from Python

Python offers several client libraries that can connect to Redis:

https://redis.io/clients

Below, the redis-py library is used to connect to Redis.

redis-py library on GitHub:

https://github.com/andymccurdy/redis-py

Example:
[root@ubuntu2204 ~]
[root@ubuntu2004 ~]
[root@centos8 ~]
[root@centos8 ~]
[root@centos8 ~]
import redis

pool = redis.ConnectionPool(host="127.0.0.1", port=6379, password="123456", decode_responses=True)
c = redis.Redis(connection_pool=pool)
for i in range(100):
    c.set("k%d" % i, "v%d" % i)
    data = c.get("k%d" % i)
    print(data)

[root@centos8 ~]
......
'v94'
'v95'
'v96'
'v97'
'v98'
'v99'
[root@centos8 ~]
127.0.0.1:6379> get k10
"v10"
GUI tools

Some third-party graphical tools can also connect to redis, for example RedisDesktopManager.

Deploying Redis as a Docker container

[root@ubuntu2204 ~]
[root@ubuntu2204 ~]
[root@ubuntu2204 ~]
[root@ubuntu2204 ~]
OK
[root@ubuntu2204 ~]
wang
[root@ubuntu2204 ~]
18
[root@ubuntu2204 ~]
OK
[root@ubuntu2204 ~]
total 4
-rw------- 1 lxd 999 111 Jan 16 14:07 dump.rdb
[root@ubuntu2204 ~]
10.0.0.202:6379> keys *
1) "age"
2) "name"
10.0.0.202:6379> exit

Redis Configuration Management

Notes on the Redis configuration file
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 bind 0.0.0.0 protected-mode yes port 6379 tcp-backlog 511 timeout 0 tcp-keepalive 300 daemonize no supervised no pidfile /var/run/redis_6379.pid loglevel notice logfile "/path/redis.log" databases 16 always-show-logo yes save 900 1 save 300 10 save 60 10000 stop-writes-on-bgsave-error yes rdbcompression yes rdbchecksum yes dbfilename dump.rdb dir ./ replica-serve-stale-data yes 1、设置为yes (默认设置),从库会继续响应客户端的读请求,此为建议值 2、设置为no,除去特定命令外的任何请求都会返回一个错误"SYNC with master in progress" 。 replica-read-only yes repl-diskless-sync no 1、基于硬盘(disk-backed):为no时,master创建一个新进程dump生成RDB磁盘文件,RDB完成之后由 父进程(即主进程)将RDB文件发送给slaves,此为默认值 2、基于socket(diskless):master创建一个新进程直接dump RDB至slave的网络socket,不经过主进程和硬盘 repl-diskless-sync-delay 5 repl-ping-replica-period 10 repl-timeout 60 repl-disable-tcp-nodelay no repl-backlog-size 512mb repl-backlog-ttl 3600 replica-priority 100 requirepass foobared rename-command maxclients 10000 maxmemory <bytes> appendonly no appendfilename "appendonly.aof" appendfsync everysec no-appendfsync-on-rewrite no auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb aof-load-truncated yes aof-use-rdb-preamble no lua-time-limit 5000 cluster-enabled yes cluster-config-file nodes-6379.conf cluster-node-timeout 15000 cluster-replica-validity-factor 10 cluster-migration-barrier 1 cluster-require-full-coverage yes cluster-replica-no-failover no slowlog-log-slower-than 10000 slowlog-max-len 128
Dynamic configuration with the CONFIG command

The CONFIG command is used to inspect the current redis configuration and to change it dynamically without restarting the redis service.

Note: not every setting can be changed dynamically, and changes made this way are not persisted.

CONFIG SET parameter value
Time complexity: O(1)
CONFIG SET dynamically adjusts the configuration of the Redis server without a restart.
It can be used to change configuration parameters or to switch the persistence method.
The parameters that CONFIG SET can modify can be listed with CONFIG GET *, and every parameter changed by CONFIG SET takes effect immediately.

CONFIG GET parameter
Time complexity: O(N), where N is the number of configuration options returned.
CONFIG GET retrieves the configuration parameters of a running Redis server. In Redis 2.4 some parameters could not be read with CONFIG GET, but since Redis 2.6 all configuration parameters are accessible.
CONFIG GET takes a single argument, parameter, as a search pattern and returns all matching configuration parameters and their values as key-value pairs.
For example, CONFIG GET s* returns all configuration parameters whose names start with s, together with their values.
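Because CONFIG SET changes are lost on restart, they are usually mirrored into redis.conf by hand; Redis also provides CONFIG REWRITE, which writes the running configuration back to the config file the server was started with. A brief sketch (password and parameter values are placeholders):

redis-cli -a 123456 --no-auth-warning config set maxmemory-policy allkeys-lru
redis-cli -a 123456 --no-auth-warning config get maxmemory-policy
# persist the running configuration back to redis.conf
# (only works when the server was started with a config file)
redis-cli -a 123456 --no-auth-warning config rewrite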
Set the client connection password

127.0.0.1:6379> CONFIG SET requirepass 123456
OK
127.0.0.1:6379> CONFIG GET requirepass
1) "requirepass"
2) "123456"
Get the current configuration

127.0.0.1:6379> CONFIG GET *
1) "dbfilename"
2) "dump.rdb"
3) "requirepass"
4) ""
5) "masterauth"
6) ""
7) "cluster-announce-ip"
8) ""
9) "unixsocket"
10) ""
11) "logfile"
12) "/var/log/redis/redis.log"
13) "pidfile"
14) "/var/run/redis_6379.pid"
15) "slave-announce-ip"
16) ""
17) "replica-announce-ip"
18) ""
19) "maxmemory"
20) "0"
......
127.0.0.1:6379> CONFIG GET bind
1) "bind"
2) "0.0.0.0"
127.0.0.1:6379> CONFIG SET bind 127.0.0.1
(error) ERR Unsupported CONFIG parameter: bind
Set the maximum memory Redis may use

127.0.0.1:6379> CONFIG SET maxmemory 8589934592      (a value such as 1g or 1G also works)
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "8589934592"

Slow query log

Example: SLOWLOG
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 [root@centos8 ~] slowlog-log-slower-than 1 slowlog-max-len 1024 127.0.0.1:6379> SLOWLOG LEN (integer ) 14 127.0.0.1:6379> SLOWLOG GET [n] 1) 1) (integer ) 14 2) (integer ) 1544690617 3) (integer ) 4 4) 1) "slowlog" 127.0.0.1:6379> SLOWLOG GET 3 1) 1) (integer ) 7 2) (integer ) 1602901545 3) (integer ) 26 4) 1) "SLOWLOG" 2) "get" 5) "127.0.0.1:38258" 6) "" 2) 1) (integer ) 6 2) (integer ) 1602901540 3) (integer ) 22 4) 1) "SLOWLOG" 2) "get" 3) "2" 5) "127.0.0.1:38258" 6) "" 3) 1) (integer ) 5 2) (integer ) 1602901497 3) (integer ) 22 4) 1) "SLOWLOG" 2) "GET" 5) "127.0.0.1:38258" 6) "" 127.0.0.1:6379> SLOWLOG RESET OK
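The slow-log thresholds shown above can also be adjusted at runtime with CONFIG SET; a sketch (the password and the threshold values are examples only):

# log commands slower than 1 ms and keep up to 1024 entries
redis-cli -a 123456 --no-auth-warning config set slowlog-log-slower-than 1000
redis-cli -a 123456 --no-auth-warning config set slowlog-max-len 1024
redis-cli -a 123456 --no-auth-warning slowlog len      # number of recorded slow commands
redis-cli -a 123456 --no-auth-warning slowlog get 3    # show the three most recent entries
redis-cli -a 123456 --no-auth-warning slowlog reset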
Redis Persistence

Redis is a memory-based NoSQL database; unlike MySQL, it keeps its data in memory.

To persist data, Redis can also save the in-memory data to files on disk.

Redis supports two persistence methods:

RDB: Redis DataBase

AOF: Append Only File
RDB

How RDB works

RDB (Redis DataBase) is a point-in-time snapshot; note that RDB keeps only the single most recent snapshot.

It is comparable to a full backup in MySQL. The RDB file produced by RDB persistence is a compressed binary file from which the database state at the time the file was generated can be restored. Because the RDB file lives on disk, data can be recovered as long as the file exists, even if the Redis process or the whole server goes down.

RDB persistence can be triggered with either the SAVE or the BGSAVE command.

How a bgsave snapshot works:

Note: the SAVE command performs the backup in the main process and does not fork a child process.

The Redis main process forks a child process. The child writes the in-memory data to a temporary file tmp-<child pid>.rdb; once the data has been saved, the temporary file is renamed to the RDB file, replacing any previous RDB file, and the child process exits.

Because Redis keeps only the latest RDB file, retaining multiple versions of the data has to be arranged manually.

Example: SAVE takes the snapshot in the main process
1 2 3 4 5 6 7 8 9 10 11 12 13 14 [root@centos7 data] [1] 28684 [root@centos7 data] |-redis-server(28650)-+-{redis-server}(28651) | |-{redis-server}(28652) | |-{redis-server}(28653) | `-{redis-server}(28654) | | `-redis-cli(28684) | `-sshd(23494)---bash(23496)---redis-cli(28601) total 251016 -rw-r--r-- 1 redis redis 189855682 Nov 17 15:02 dump.rdb -rw-r--r-- 1 redis redis 45674498 Nov 17 15:02 temp-28650.rdb
RDB-related configuration

save 900 1
save 300 10
save 60 10000

dbfilename dump.rdb
dir ./
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes

Example: RDB-related configuration

[root@ubuntu2004 ~]
[root@ubuntu2004 ~]
1) "save"
2) "3600 1 300 100 60 10000"
[root@ubuntu2004 ~]
save ""
Ways to trigger RDB (a short sketch follows this list)

save: synchronous; not recommended — the snapshot is taken by the main process, so other commands are blocked

bgsave: asynchronous, runs in the background; a dedicated child process is forked, so other commands are not blocked

Automatic saving via the configuration file: define save rules in the configuration file and bgsave is executed automatically
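A sketch of triggering a background snapshot and waiting for it to finish, based on the same INFO Persistence field used by the backup script shown later (the password is a placeholder):

redis-cli -a 123456 --no-auth-warning bgsave
# poll until the background save has completed
until [ "$(redis-cli -a 123456 --no-auth-warning info Persistence | awk -F: '/rdb_bgsave_in_progress/{print $2}' | tr -d '\r')" -eq 0 ]; do
    sleep 1
done
redis-cli -a 123456 --no-auth-warning lastsave   # unix timestamp of the last successful save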
Pros and cons of RDB mode

Drawbacks of RDB

It cannot save data in real time; data written since the last RDB backup may be lost.

If losing data on server failure must be kept to a minimum, RDB is not a good fit. Although Redis allows different save points to be configured to control how often the RDB file is written, RDB has to dump the state of the entire data set, so it is not a fast operation and is typically run no more often than every five minutes or so. If the server crashes, a fairly long stretch of data can therefore be lost.

With a large data set, fork() can be very time-consuming and the server may stop serving client requests for a while; if the data set is huge and CPU time is tight, this pause can even last a full second or more. The child process also needs more time to finish writing the RDB file.

Example: performing an RDB backup manually
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 [root@rocky8 ~] 127.0.0.1:6379> debug populate 5000000 OK (3.96s) 127.0.0.1:6379> dbsize (integer ) 5000000 127.0.0.1:6379> get key:0 "value:0" 127.0.0.1:6379> get key:1 "value:1" 127.0.0.1:6379> get key:2 "value:2" 127.0.0.1:6379> get key:499999 "value:499999" 127.0.0.1:6379> get key:5000000 (nil) 127.0.0.1:6379> bgsave Background saving started [root@rocky8 ~] total 127M -rw-r--r-- 1 redis redis 127M Jun 13 23:07 dump.rdb
Example: a script to manually back up the RDB file
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 [root@centos7 ~] save "" dbfilename dump_6379.rdb dir "/data/redis" appendonly no [root@centos8 ~] BACKUP=/backup/redis-rdb DIR=/data/redis FILE=dump_6379.rdb PASS=123456 color () { RES_COL=60 MOVE_TO_COL="echo -en \\033[${RES_COL} G" SETCOLOR_SUCCESS="echo -en \\033[1;32m" SETCOLOR_FAILURE="echo -en \\033[1;31m" SETCOLOR_WARNING="echo -en \\033[1;33m" SETCOLOR_NORMAL="echo -en \E[0m" echo -n "$1 " && $MOVE_TO_COL echo -n "[" if [ $2 = "success" -o $2 = "0" ] ;then ${SETCOLOR_SUCCESS} echo -n $" OK " elif [ $2 = "failure" -o $2 = "1" ] ;then ${SETCOLOR_FAILURE} echo -n $"FAILED" else ${SETCOLOR_WARNING} echo -n $"WARNING" fi ${SETCOLOR_NORMAL} echo -n "]" echo } redis-cli -h 127.0.0.1 -a $PASS --no-auth-warning bgsave result=`redis-cli -a $PASS --no-auth-warning info Persistence |grep rdb_bgsave_in_progress| sed -rn 's/.*:([0-9]+).*/\1/p' ` until [ $result -eq 0 ] ;do sleep 1 result=`redis-cli -a $PASS --no-auth-warning info Persistence |awk -F: '/rdb_bgsave_in_progress/{print $2}' ` done DATE=`date +%F_%H-%M-%S` [ -e $BACKUP ] || { mkdir -p $BACKUP ; chown -R redis.redis $BACKUP ; } cp $DIR /$FILE $BACKUP /dump_6379-${DATE} .rdbcolor "Backup redis RDB" 0 [root@centos8 ~] Background saving started Backup redis RDB [ OK ] [root@centos8 ~] total 143M -rw-r--r-- 1 redis redis 143M Oct 21 11:08 dump_6379-2020-10-21_11-08-47.rdb
Example: observing the execution of save and bgsave

Example: automatic saving

[root@centos7 ~]
save 60 3
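The save rule can also be changed at runtime; a sketch (the rule here means "save if at least 3 keys changed within 60 seconds", matching the example above; the password is a placeholder):

redis-cli -a 123456 --no-auth-warning config set save "60 3"
redis-cli -a 123456 --no-auth-warning config get save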
AOF

How AOF works
…….
Choosing between RDB and AOF: if Redis mainly serves as a cache, or losing a few minutes of data is acceptable, enabling only RDB is usually sufficient in production; this is also the default.

If no data loss at all can be tolerated, enable both RDB and AOF.

Enabling only AOF is generally not recommended.
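If AOF is wanted in addition to RDB, it can be enabled in redis.conf or switched on at runtime; a sketch (the password is a placeholder; enabling appendonly at runtime starts an initial AOF rewrite in the background):

redis-cli -a 123456 --no-auth-warning config set appendonly yes
redis-cli -a 123456 --no-auth-warning config get appendonly
# to make it permanent, also set it in the config file:
#   appendonly yes
#   appendfsync everysec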
Common Redis Commands

Official documentation:

https://redis.io/commands

Reference links:

http://redisdoc.com/
http://doc.redisfans.com/
https://www.php.cn/manual/view/36359.html

INFO — shows the runtime status of the current Redis node
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 127.0.0.1:6379> INFO redis_version:5.0.3 redis_git_sha1:00000000 redis_git_dirty:0 redis_build_id:8c0bf22bfba82c8f redis_mode:standalone os:Linux 4.18.0-147.el8.x86_64 x86_64 arch_bits:64 multiplexing_api:epoll atomicvar_api:atomic-builtin gcc_version:8.2.1 process_id:725 run_id:8af0d3fba2b7c5520e0981b125cc49c3ce4d2a2f tcp_port:6379 uptime_in_seconds:18552 ...... [root@ubuntu2004 ~] redis_version:6.2.6 redis_git_sha1:00000000 redis_git_dirty:0 redis_build_id:7559afb376c61733 [root@ubuntu2004 ~] cluster_enabled:0
SELECT — switches databases, similar to USE DBNAME in MySQL
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [root@centos8 ~] 127.0.0.1:6379> info cluster cluster_enabled:0 127.0.0.1:6379[15]> SELECT 0 OK 127.0.0.1:6379> SELECT 1 OK 127.0.0.1:6379[1]> SELECT 15 OK 127.0.0.1:6379[15]> SELECT 16 (error) ERR DB index is out of range
Note: multiple databases are not supported in Redis Cluster mode; the following error appears
1 2 3 4 5 6 7 8 [root@centos8 ~] 127.0.0.1:6379> info cluster cluster_enabled:1 127.0.0.1:6379> select 0 OK 127.0.0.1:6379> select 1 (error) ERR SELECT is not allowed in cluster mode
KEYS — lists all keys in the current database; use this command with caution!
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 127.0.0.1:6379[15]> SELECT 0 OK 127.0.0.1:6379> KEYS * 1) "9527" 2) "9526" 3) "course" 4) "list1" 127.0.0.1:6379> SELECT 1 OK 127.0.0.1:6379[1]> KEYS * (empty list or set ) redis>MSET one 1 two 2 three 3 four 4 OK redis> KEYS *o* 1) "four" 2) "two" 3) "one" redis> KEYS t?? 1) "two" redis> KEYS t[w]* 1) "two" redis> KEYS * 1) "four" 2) "three" 3) "two" 4) "one"
BGSAVE — manually performs RDB persistence in the background
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 127.0.0.1:6379[1]> BGSAVE Background saving started [root@centos8 ~] total 4 -rw-r--r-- 1 redis redis 326 Feb 18 22:45 dump.rdb [root@centos8 ~] Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe. Background saving started [root@centos8 ~] total 4 -rw-r--r-- 1 redis redis 92 Feb 18 22:54 dump.rdb
DBSIZE — returns the number of keys in the current database
1 2 3 4 5 6 7 8 127.0.0.1:6379> DBSIZE (integer ) 4 127.0.0.1:6379> SELECT 1 OK 127.0.0.1:6379[1]> DBSIZE (integer ) 0
FLUSHDB — removes all keys from the current database; use with caution
1 2 3 4 5 6 7 8 9 10 11 12 13 127.0.0.1:6379[1]> SELECT 0 OK 127.0.0.1:6379> DBSIZE (integer ) 4 127.0.0.1:6379> FLUSHDB OK 127.0.0.1:6379> DBSIZE (integer ) 0 127.0.0.1:6379>
FLUSHALL — removes all keys from every database on the current Redis server, i.e. deletes all data; use with caution!

127.0.0.1:6379> FLUSHALL
OK

vim /etc/redis.conf
rename-command FLUSHALL ""   # renaming flushdb/flushall conflicts with AOF; requires appendonly no

SHUTDOWN

Available since: >= 1.0.0
Time complexity: O(N), where N is the number of database keys that need to be saved at shutdown.
The SHUTDOWN command performs the following actions:
Shuts down the Redis service and stops all client connections
If at least one save point is configured, runs SAVE
If AOF is enabled, flushes the AOF file
Shuts down the redis server
If persistence is enabled, SHUTDOWN guarantees a clean shutdown with no data loss.
Simply running SAVE followed by QUIT gives no such guarantee: between the SAVE and the QUIT other clients may still be talking to the server, and data written in that window would be lost.

vim /etc/redis.conf
rename-command shutdown ""
Redis Data Types

Reference: http://www.redis.cn/topics/data-types.html

Command reference: http://redisdoc.com/

Strings

The string is the most basic Redis value type. Redis strings are binary safe, which means a Redis string can hold any kind of data, for example a JPEG image or a serialized Ruby object. A single string value can store at most 512 MB of content. All Redis keys are strings. This is the most commonly used data type.

Creating a key

The SET command creates a key and assigns it a value. Syntax:

SET key value [EX seconds] [PX milliseconds] [NX|XX]
Time complexity: O(1)
Associates the string value with key.
If key already holds a value, SET overwrites it regardless of its type.
When SET is applied to a key that has a time to live (TTL), the existing TTL is cleared.
Since Redis 2.6.12 the behaviour of SET can be modified with options:
EX seconds: set the key to expire after seconds seconds. SET key value EX seconds is equivalent to SETEX key seconds value.
PX milliseconds: set the key to expire after milliseconds milliseconds. SET key value PX milliseconds is equivalent to PSETEX key milliseconds value.
NX: only set the key if it does not already exist. SET key value NX is equivalent to SETNX key value.
XX: only set the key if it already exists.

Example:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 127.0.0.1:6379> set key1 value1 OK 127.0.0.1:6379> get key1 "value1" 127.0.0.1:6379> TYPE key1 string 127.0.0.1:6379> SET title ceo ex 3 OK 127.0.0.1:6379> set NAME wang OK 127.0.0.1:6379> get NAME "wang" 127.0.0.1:6379> get name (nil) 127.0.0.1:6379> set name mage OK 127.0.0.1:6379> get name "mage" 127.0.0.1:6379> get NAME "wang" 127.0.0.1:6379> get title "ceo" 127.0.0.1:6379> setnx title coo (integer ) 0 127.0.0.1:6379> get title "ceo" 127.0.0.1:6379> get title "ceo" 127.0.0.1:6379> set title coo xx OK 127.0.0.1:6379> get title "coo" 127.0.0.1:6379> get age (nil) 127.0.0.1:6379> set age 20 xx (nil) 127.0.0.1:6379> get age (nil)
Get the value of a key

127.0.0.1:6379> get key1
"value1"
127.0.0.1:6379> get name age
(error) ERR wrong number of arguments for 'get' command

Delete a key

127.0.0.1:6379> DEL key1
(integer) 1
127.0.0.1:6379> DEL key1 key2
(integer) 2

Set multiple keys at once

127.0.0.1:6379> MSET key1 value1 key2 value2
OK

Get multiple keys at once

127.0.0.1:6379> MGET key1 key2
1) "value1"
2) "value2"
127.0.0.1:6379> KEYS n*
1) "n1"
2) "name"
127.0.0.1:6379> KEYS *
1) "k2"
2) "k1"
3) "key1"
4) "key2"
5) "n1"
6) "name"
7) "k3"
8) "title"

Append data to a key

127.0.0.1:6379> APPEND key1 " append new value"
(integer) 12
127.0.0.1:6379> get key1
"value1 append new value"

Set a new value and return the old one

127.0.0.1:6379> set name wang
OK
127.0.0.1:6379> getset name wange
"wang"
127.0.0.1:6379> get name
"wange"
Return the length in bytes of the value stored at a string key

127.0.0.1:6379> SET name wang
OK
127.0.0.1:6379> STRLEN name
(integer) 4
127.0.0.1:6379> APPEND name " xiaochun"
(integer) 13
127.0.0.1:6379> GET name
"wang xiaochun"
127.0.0.1:6379> STRLEN name
(integer) 13
127.0.0.1:6379> set name 马哥教育
OK
127.0.0.1:6379> get name
"\xe9\xa9\xac\xe5\x93\xa5\xe6\x95\x99\xe8\x82\xb2"
127.0.0.1:6379> strlen name
(integer) 12
127.0.0.1:6379>

Check whether a key exists

127.0.0.1:6379> SET name wang ex 10
OK
127.0.0.1:6379> set age 20
OK
127.0.0.1:6379> EXISTS NAME
(integer) 0
127.0.0.1:6379> EXISTS name age
(integer) 2
127.0.0.1:6379> EXISTS name
(integer) 0

Get a key's remaining time to live

TTL key    # returns -1 if the key never expires, -2 if the key does not exist, otherwise the remaining seconds

127.0.0.1:6379> TTL key1
(integer) -1
127.0.0.1:6379> SET name wang EX 100
OK
127.0.0.1:6379> TTL name
(integer) 96
127.0.0.1:6379> TTL name
(integer) 93
127.0.0.1:6379> SET name mage
OK
127.0.0.1:6379> TTL name
(integer) -1
127.0.0.1:6379> SET name wang EX 200
OK
127.0.0.1:6379> TTL name
(integer) 198
127.0.0.1:6379> GET name
"wang"

Reset a key's expiration time

127.0.0.1:6379> TTL name
(integer) 148
127.0.0.1:6379> EXPIRE name 1000
(integer) 1
127.0.0.1:6379> TTL name
(integer) 999

Remove a key's expiration, i.e. make it never expire

127.0.0.1:6379> TTL name
(integer) 999
127.0.0.1:6379> PERSIST name
(integer) 1
127.0.0.1:6379> TTL name
(integer) -1
Increment a number

The INCR command family (INCR, DECR, INCRBY, DECRBY) allows a string to be used as an atomic counter.
1 2 3 4 5 6 7 8 127.0.0.1:6379> set num 10 OK 127.0.0.1:6379> INCR num (integer ) 11 127.0.0.1:6379> get num "11"
Decrement a number

127.0.0.1:6379> set num 10
OK
127.0.0.1:6379> DECR num
(integer) 9
127.0.0.1:6379> get num
"9"

Add to a number

INCRBY adds the given increment (which may be negative) to the number stored at key. If the key does not exist, it is set to 0 before the operation. If the value has the wrong type or cannot be represented as a number, an error is returned. The operation is limited to 64-bit signed integers.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 redis> SET mykey 10 OK redis> INCRBY mykey 5 (integer ) 15 127.0.0.1:6379> get mykey "15" 127.0.0.1:6379> INCRBY mykey -10 (integer ) 5 127.0.0.1:6379> get mykey "5" 127.0.0.1:6379> INCRBY nokey 5 (integer ) 5 127.0.0.1:6379> get nokey "5"
Subtract from a number

DECRBY decreases the value (a negative argument increases it).
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 127.0.0.1:6379> SET mykey 10 OK 127.0.0.1:6379> DECRBY mykey 8 (integer ) 2 127.0.0.1:6379> get mykey "2" 127.0.0.1:6379> DECRBY mykey -20 (integer ) 22 127.0.0.1:6379> get mykey "22" 127.0.0.1:6379> DECRBY nokey 3 (integer ) -3 127.0.0.1:6379> get nokey "-3"
Lists

Characteristics of lists

Creating a list and adding data

Both LPUSH and RPUSH insert elements into a list.

LPUSH key value [value …]
Time complexity: O(1)
Inserts one or more value items at the head of the list key.
With multiple value arguments, they are pushed onto the head one by one from left to right: for an empty list mylist, LPUSH mylist a b c leaves the list as c b a, which is equivalent to atomically executing LPUSH mylist a, LPUSH mylist b and LPUSH mylist c.
If key does not exist, an empty list is created before the LPUSH is performed.
If key exists but is not a list, an error is returned.

RPUSH key value [value …]
Time complexity: O(1)
Inserts one or more value items at the tail (rightmost end) of the list key.
With multiple value arguments, they are appended to the tail from left to right: for an empty list mylist, RPUSH mylist a b c produces the list a b c, equivalent to executing RPUSH mylist a, RPUSH mylist b and RPUSH mylist c.
If key does not exist, an empty list is created before the RPUSH is performed.
If key exists but is not a list, an error is returned.

Example:
1 2 3 4 5 6 7 8 9 10 11 12 13 127.0.0.1:6379> LPUSH name mage wang zhang (integer ) 3 127.0.0.1:6379> TYPE name list 127.0.0.1:6379> RPUSH course linux python go (integer ) 3 127.0.0.1:6379> type course list
Append new data to a list

127.0.0.1:6379> LPUSH list1 tom
(integer) 2
127.0.0.1:6379> RPUSH list1 jack
(integer) 3

Get the list length (number of elements)

127.0.0.1:6379> LLEN list1
(integer) 3

Get the elements at specific positions in the list
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 127.0.0.1:6379> LPUSH list1 a b c d (integer ) 4 127.0.0.1:6379> LINDEX list1 0 "d" 127.0.0.1:6379> LINDEX list1 3 "a" 127.0.0.1:6379> LINDEX list1 -1 "a" 127.0.0.1:6379> LPUSH list1 a b c d (integer ) 4 127.0.0.1:6379> LRANGE list1 1 2 1) "c" 2) "b" 127.0.0.1:6379> LRANGE list1 0 3 1) "d" 2) "c" 3) "b" 4) "a" 127.0.0.1:6379> LRANGE list1 0 -1 1) "d" 2) "c" 3) "b" 4) "a" 127.0.0.1:6379> RPUSH list2 zhang wang li zhao (integer ) 4 127.0.0.1:6379> LRANGE list2 1 2 1) "wang" 2) "li" 127.0.0.1:6379> LRANGE list2 2 2 1) "li" 127.0.0.1:6379> LRANGE list2 0 -1 1) "zhang" 2) "wang" 3) "li" 4) "zhao"
Modify the element at a given index of the list
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 127.0.0.1:6379> RPUSH listkey a b c d e f (integer ) 6 127.0.0.1:6379> lrange listkey 0 -1 1) "a" 2) "b" 3) "c" 4) "d" 5) "e" 6) "f" 127.0.0.1:6379> lset listkey 2 java OK 127.0.0.1:6379> lrange listkey 0 -1 1) "a" 2) "b" 3) "java" 4) "d" 5) "e" 6) "f"
Remove data from a list
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 127.0.0.1:6379> LPUSH list1 a b c d (integer ) 4 127.0.0.1:6379> LRANGE list1 0 3 1) "d" 2) "c" 3) "b" 4) "a" 127.0.0.1:6379> LPOP list1 "d" 127.0.0.1:6379> LLEN list1 (integer ) 3 127.0.0.1:6379> LRANGE list1 0 2 1) "c" 2) "b" 3) "a" 127.0.0.1:6379> RPOP list1 "a" 127.0.0.1:6379> LLEN list1 (integer ) 2 127.0.0.1:6379> LRANGE list1 0 1 1) "c" 2) "b" 127.0.0.1:6379> LLEN list1 (integer ) 4 127.0.0.1:6379> LRANGE list1 0 3 1) "d" 2) "c" 3) "b" 4) "a" 127.0.0.1:6379> LTRIM list1 1 2 OK 127.0.0.1:6379> LLEN list1 (integer ) 2 127.0.0.1:6379> LRANGE list1 0 1 1) "c" 2) "b" 127.0.0.1:6379> DEL list1 (integer ) 1 127.0.0.1:6379> EXISTS list1 (integer ) 0
Sets

A set is an unordered collection of strings in which every element is unique. Sets support logical operations on the data of two different sets and are commonly used for intersections, unions and statistics, for example finding mutual friends.

Characteristics of sets
创建集合 1 2 3 4 5 6 7 8 127.0.0.1:6379> SADD set1 v1 (integer ) 1 127.0.0.1:6379> SADD set2 v2 v4 (integer ) 2 127.0.0.1:6379> TYPE set1 set 127.0.0.1:6379> TYPE set2 set
集合中追加数据 1 2 3 4 5 6 7 8 9 127.0.0.1:6379> SADD set1 v2 v3 v4 (integer ) 3 127.0.0.1:6379> SADD set1 v2 (integer ) 0 127.0.0.1:6379> TYPE set1 set 127.0.0.1:6379> TYPE set2 set
获取集合的所有数据 1 2 3 4 5 6 7 8 127.0.0.1:6379> SMEMBERS set1 1) "v4" 2) "v1" 3) "v3" 4) "v2" 127.0.0.1:6379> SMEMBERS set2 1) "v4" 2) "v2"
删除集合中的元素 1 2 3 4 5 6 7 127.0.0.1:6379> sadd goods mobile laptop car (integer ) 3 127.0.0.1:6379> srem goods car (integer ) 1 127.0.0.1:6379> SMEMBERS goods 1) "mobile" 2) "laptop"
Operations between sets

Intersection of sets: the elements that belong to both set A and set B.

This can be used to find mutual friends.
1 2 3 127.0.0.1:6379> SINTER set1 set2 1) "v4" 2) "v2"
Union of sets: the elements that belong to set A or to set B.
1 2 3 4 5 127.0.0.1:6379> SUNION set1 set2 1) "v2" 2) "v4" 3) "v1" 4) "v3"
Difference of sets: the elements that belong to set A but not to set B.

This can be used to find friends of my friends.
1 2 3 127.0.0.1:6379> SDIFF set1 set2 1) "v1" 2) "v3"
Sorted sets (zset)

Redis sorted sets are, like Redis sets, collections of non-repeating strings. The difference is that every member of a sorted set is associated with a double-precision floating-point score, which is used to order the members from the lowest to the highest score. Members must be unique, but scores may repeat. A sorted set can hold at most 2^32 - 1 = 4294967295 members; the type is frequently used for leaderboards.

Characteristics of sorted sets

Ordered

No duplicate elements

Each element consists of a score and a value

Scores may repeat

Values may not repeat
创建有序集合 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 127.0.0.1:6379> ZADD zset1 1 v1 (integer ) 1 127.0.0.1:6379> ZADD zset1 2 v2 (integer ) 1 127.0.0.1:6379> ZADD zset1 2 v3 (integer ) 1 127.0.0.1:6379> ZADD zset1 3 v4 (integer ) 1 127.0.0.1:6379> TYPE zset1 zset 127.0.0.1:6379> TYPE zset2 zset 127.0.0.1:6379> ZADD zset2 1 v1 2 v2 3 v3 4 v4 5 v5 (integer ) 5
实现排名 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 127.0.0.1:6379> ZADD course 90 linux 99 go 60 python 50 cloud (integer ) 4 127.0.0.1:6379> ZRANGE course 0 -1 1) "cloud" 2) "python" 3) "linux" 4) "go" 127.0.0.1:6379> ZREVRANGE course 0 -1 1) "go" 2) "linux" 3) "python" 4) "cloud" 127.0.0.1:6379> ZRANGE course 0 -1 WITHSCORES 1) "cloud" 2) "50" 3) "python" 4) "60" 5) "linux" 6) "90" 7) "go" 8) "99" 127.0.0.1:6379> ZREVRANGE course 0 -1 WITHSCORES 1) "go" 2) "99" 3) "linux" 4) "90" 5) "python" 6) "60" 7) "cloud" 8) "50" 127.0.0.1:6379>
查看集合的成员个数 1 2 3 4 5 6 127.0.0.1:6379> ZCARD course (integer ) 4 127.0.0.1:6379> ZCARD zset1 (integer ) 4 127.0.0.1:6379> ZCARD zset2 (integer ) 4
基于索引查找数据 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 127.0.0.1:6379> ZRANGE course 0 2 1) "cloud" 2) "python" 3) "linux" 127.0.0.1:6379> ZRANGE course 0 10 1) "cloud" 2) "python" 3) "linux" 4) "go" 127.0.0.1:6379> ZRANGE zset1 1 3 1) "v2" 2) "v3" 3) "v4" 127.0.0.1:6379> ZRANGE zset1 0 2 1) "v1" 2) "v2" 3) "v3" 127.0.0.1:6379> ZRANGE zset1 2 2 1) "v3"
查询指定数据的排名 1 2 3 4 5 6 127.0.0.1:6379> ZADD course 90 linux 99 go 60 python 50 cloud (integer ) 4 127.0.0.1:6379> ZRANK course go (integer ) 3 127.0.0.1:6379> ZRANK course python (integer ) 1
获取分数 1 2 127.0.0.1:6379> zscore course cloud "30"
删除元素 1 2 3 4 5 6 7 8 9 10 11 12 127.0.0.1:6379> ZADD course 90 linux 199 go 60 python 30 cloud (integer ) 4 127.0.0.1:6379> ZRANGE course 0 -1 1) "cloud" 2) "python" 3) "linux" 4) "go" 127.0.0.1:6379> ZREM course python go (integer ) 2 127.0.0.1:6379> ZRANGE course 0 -1 1) "cloud" 2) "linux"
Hashes

A hash is a dictionary that maps string fields to string values, i.e. the data part is itself a set of key/value pairs. Hashes are particularly suitable for storing objects.

A single hash can hold up to 2^32 - 1 field/value pairs.

Characteristics of hashes

Creating a hash — syntax:

HSET hash field value
Time complexity: O(1)
Sets the value of field in the hash to value.
If the hash does not exist, a new hash is created before the HSET is performed.
If field already exists in the hash, its old value is overwritten with value.

Example:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 127.0.0.1:6379> HSET 9527 name zhouxingxing age 20 (integer ) 2 127.0.0.1:6379> TYPE 9527 hash 127.0.0.1:6379> hgetall 9527 1) "name" 2) "zhouxingxing" 3) "age" 4) "20" 127.0.0.1:6379> HSET 9527 gender male (integer ) 1 127.0.0.1:6379> hgetall 9527 1) "name" 2) "zhouxingxing" 3) "age" 4) "20" 5) "gender" 6) "male"
查看hash的指定field的value 1 2 3 4 5 6 7 8 127.0.0.1:6379> HGET 9527 name "zhouxingxing" 127.0.0.1:6379> HGET 9527 age "20" 127.0.0.1:6379> HMGET 9527 name age 1) "zhouxingxing" 2) "20" 127.0.0.1:6379>
删除hash 的指定的 field/value 1 2 3 4 5 6 7 8 9 127.0.0.1:6379> HDEL 9527 age (integer ) 1 127.0.0.1:6379> HGET 9527 age (nil) 127.0.0.1:6379> hgetall 9527 1) "name" 2) "zhouxingxing" 127.0.0.1:6379> HGET 9527 name "zhouxingxing"
批量设置hash key的多个field和value 1 2 3 4 5 6 7 8 9 127.0.0.1:6379> HMSET 9527 name zhouxingxing age 50 city hongkong OK 127.0.0.1:6379> HGETALL 9527 1) "name" 2) "zhouxingxing" 3) "age" 4) "50" 5) "city" 6) "hongkong"
查看hash指定field的value 1 2 3 4 5 6 127.0.0.1:6379> HMSET 9527 name zhouxingxing age 50 city hongkong OK 127.0.0.1:6379> HMGET 9527 name age 1) "zhouxingxing" 2) "50" 127.0.0.1:6379>
查看hash的所有field 1 2 3 4 5 6 127.0.0.1:6379> HMSET 9527 name zhouxingxing age 50 city hongkong OK 127.0.0.1:6379> HKEYS 9527 1) "name" 2) "age" 3) "city"
查看hash 所有value 1 2 3 4 5 6 127.0.0.1:6379> HMSET 9527 name zhouxingxing age 50 city hongkong OK 127.0.0.1:6379> HVALS 9527 1) "zhouxingxing" 2) "50" 3) "hongkong"
查看指定 hash的所有field及value 1 2 3 4 5 6 7 8 127.0.0.1:6379> HGETALL 9527 1) "name" 2) "zhouxingxing" 3) "age" 4) "50" 5) "city" 6) "hongkong" 127.0.0.1:6379>
删除 hash 1 2 3 4 5 6 7 127.0.0.1:6379> DEL 9527 (integer ) 1 127.0.0.1:6379> HMGET 9527 name city 1) (nil) 2) (nil) 127.0.0.1:6379> EXISTS 9527 (integer ) 0
Message Queues

A message queue puts the data to be transferred into a queue, enabling data exchange between applications.

Typical uses: decoupling multiple application systems, asynchronous processing, and peak shaving / rate limiting.

Commonly used message queue systems: Kafka, RabbitMQ, Redis.

Message queues come in two flavours:

Producer/Consumer model

Publisher/Subscriber model

Producer/Consumer model

How it works: in the producer/consumer model, several consumers listen on the same channel (implemented in Redis with a list), but each message produced by a producer is consumed exactly once, by whichever consumer grabs it first. Messages in the queue can be written by multiple producers and taken out and processed by different consumers. This model is widely used.
生产者生成消息 1 2 3 4 5 6 7 8 9 10 11 12 13 [root@redis ~] 127.0.0.1:6379> AUTH 123456 OK 127.0.0.1:6379> LPUSH channel1 message1 (integer ) 1 127.0.0.1:6379> LPUSH channel1 message2 (integer ) 2 127.0.0.1:6379> LPUSH channel1 message3 (integer ) 3 127.0.0.1:6379> LPUSH channel1 message4 (integer ) 4 127.0.0.1:6379> LPUSH channel1 message5 (integer ) 5
获取所有消息 1 2 3 4 5 6 127.0.0.1:6379> LRANGE channel1 0 -1 1) "message5" 2) "message4" 3) "message3" 4) "message2" 5) "message1"
消费者消费消息 1 2 3 4 5 6 7 8 9 10 11 12 127.0.0.1:6379> RPOP channel1 "message1" 127.0.0.1:6379> RPOP channel1 "message2" 127.0.0.1:6379> RPOP channel1 "message3" 127.0.0.1:6379> RPOP channel1 "message4" 127.0.0.1:6379> RPOP channel1 "message5" 127.0.0.1:6379> RPOP channel1 (nil)
验证队列消息消费完成 1 2 127.0.0.1:6379> LRANGE channel1 0 -1 (empty list or set )
Publisher/Subscriber model

How it works: in the Publisher/Subscriber model, a publisher publishes messages to a given channel, and every subscriber already listening on that channel receives the same message; that is, one message can be received by multiple subscribers. This model suits scenarios such as group chat, bulk messaging and group announcements in social applications.
订阅者订阅频道 1 2 3 4 5 6 7 8 [root@redis ~] 127.0.0.1:6379> AUTH 123456 OK 127.0.0.1:6379> SUBSCRIBE channel01 Reading messages... (press Ctrl-C to quit) 1) "subscribe" 2) "channel01" 3) (integer ) 1
发布者发布消息 1 2 3 4 127.0.0.1:6379> PUBLISH channel01 message1 (integer ) 2 127.0.0.1:6379> PUBLISH channel01 message2 (integer ) 2
各个订阅者都能收到消息 1 2 3 4 5 6 7 8 9 10 11 12 13 14 [root@redis ~] 127.0.0.1:6379> AUTH 123456 OK 127.0.0.1:6379> SUBSCRIBE channel01 Reading messages... (press Ctrl-C to quit) 1) "subscribe" 2) "channel01" 3) (integer ) 1 1) "message" 2) "channel01" 3) "message1" 1) "message" 2) "channel01" 3) "message2"
订阅多个频道 1 2 127.0.0.1:6379> SUBSCRIBE channel01 channel02
订阅所有频道 1 127.0.0.1:6379> PSUBSCRIBE *
订阅匹配的频道 1 127.0.0.1:6379> PSUBSCRIBE chann*
取消订阅频道 1 2 3 4 127.0.0.1:6379> unsubscribe channel01 1) "unsubscribe" 2) "channel01" 3) (integer ) 0
Redis Clustering and High Availability

A standalone Redis server is a single point of failure for both data and service, and the performance of a single machine has an upper limit; Redis clustering technologies can be used to solve these problems.

Redis Master-Slave Replication

Redis master/slave replication architecture: the master/slave model, similar to MySQL replication, provides remote, cross-host backup of Redis data.

A common client-facing architecture for master/slave setups:

The application first connects to the virtual IP provided by a highly available load-balancing cluster; the LB then forwards the user's request to a back-end Redis server, which actually serves it.
主从复制特点
一个master可以有多个slave
一个slave只能有一个master
数据流向是从master到slave单向的
master 可读可写
slave 只读
主从复制实现 当master出现故障后,可以自动提升一个slave节点变成新的Mster,因此Redis Slave 需要设置和master相同的连接密码,此外当一个Slave提升为新的master 通过持久化实现数据的恢复
当配置Redis复制功能时,强烈建议打开主服务器的持久化功能。否则的话,由于延迟等问题,部署的主节点Redis服务应该要避免自动启动。
参考案例: 导致主从服务器数据全部丢失
1 2 3 1.假设节点A为主服务器,并且关闭了持久化。并且节点B和节点C从节点A复制数据 2.节点A崩溃,然后由自动拉起服务重启了节点A.由于节点A的持久化被关闭了,所以重启之后没有任何数据 3.节点B和节点C将从节点A复制数据,但是A的数据是空的,于是就把自身保存的数据副本删除。
在关闭主服务器上的持久化,并同时开启自动拉起进程的情况下,即便使用Sentinel来实现Redis的高可用性,也是非常危险的。因为主服务器可能拉起得非常快,以至于Sentinel在配置的心跳时间间隔内没有检测到主服务器已被重启,然后还是会执行上面的数据丢失的流程。无论何时,数据安全都是极其重要的,所以应该禁止主服务器关闭持久化的同时自动启动。
Master/slave commands and configuration

Enabling master/slave synchronization: a Redis server is a master node by default; to configure it as a slave, the master's IP, port and connection password must be specified.

Running REPLICAOF MASTER_IP PORT on the slave node enables master/slave replication; earlier versions used the SLAVEOF command.

127.0.0.1:6379> REPLICAOF MASTER_IP PORT      # recommended in newer versions
127.0.0.1:6379> SLAVEOF MasterIP Port         # legacy form, to be deprecated
127.0.0.1:6379> CONFIG SET masterauth <masterpass>
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 [root@centos8 ~] 127.0.0.1:6379> AUTH 123456 OK 127.0.0.1:6379> INFO replication role:master connected_slaves:0 master_replid:a3504cab4d33e9723a7bc988ff8e022f6d9325bf master_replid2:0000000000000000000000000000000000000000 master_repl_offset:0 second_repl_offset:-1 repl_backlog_active:0 repl_backlog_size:1048576 repl_backlog_first_byte_offset:0 repl_backlog_histlen:0 127.0.0.1:6379> SET key1 v1-master OK 127.0.0.1:6379> KEYS * 1) "key1" 127.0.0.1:6379> GET key1 "v1-master" 127.0.0.1:6379> [root@centos8 ~] 127.0.0.1:6379> info NOAUTH Authentication required. 127.0.0.1:6379> AUTH 123456 OK 127.0.0.1:6379> INFO replication role:master connected_slaves:0 master_replid:a3504cab4d33e9723a7bc988ff8e022f6d9325bf master_replid2:0000000000000000000000000000000000000000 master_repl_offset:0 second_repl_offset:-1 repl_backlog_active:0 repl_backlog_size:1048576 repl_backlog_first_byte_offset:0 repl_backlog_histlen:0 127.0.0.1:6379> SET key1 v1-slave-18 OK 127.0.0.1:6379> KEYS * 1) "key1" 127.0.0.1:6379> GET key1 "v1-slave-18" 127.0.0.1:6379> 127.0.0.1:6379> KEYS * 1) "key1" 127.0.0.1:6379> GET key1 "v1-slave-28" 127.0.0.1:6379> 127.0.0.1:6379> INFO replication role:master connected_slaves:0 master_replid:a3504cab4d33e9723a7bc988ff8e022f6d9325bf master_replid2:0000000000000000000000000000000000000000 master_repl_offset:0 second_repl_offset:-1 repl_backlog_active:0 repl_backlog_size:1048576 repl_backlog_first_byte_offset:0 repl_backlog_histlen:0 127.0.0.1:6379> 127.0.0.1:6379> REPLICAOF 10.0.0.8 6379 OK 127.0.0.1:6379> CONFIG SET masterauth 123456 OK 127.0.0.1:6379> INFO replication role:slave master_host:10.0.0.8 master_port:6379 master_link_status:up master_last_io_seconds_ago:8 master_sync_in_progress:0 slave_repl_offset:42 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:b69908f23236fb20b810d198f7f4539f795e0ee5 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:42 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:42 127.0.0.1:6379> GET key1 "v1-master" 127.0.0.1:6379> INFO replication role:master connected_slaves:2 slave0:ip=10.0.0.18,port=6379,state=online,offset=112,lag=1 slave1:ip=10.0.0.28,port=6379,state=online,offset=112,lag=1 master_replid:dc30f86c2d3c9029b6d07831ae3f27f8dbacac62 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:112 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:112 127.0.0.1:6379>
Removing master/slave synchronization: running REPLICAOF NO ONE on the slave node cancels replication.

127.0.0.1:6379> REPLICAOF no one
验证同步 在 master 上观察日志 1 2 3 4 5 6 7 8 9 [root@centos8 ~] 24402:M 06 Oct 2020 09:09:16.448 * Replica 10.0.0.18:6379 asks for synchronization 24402:M 06 Oct 2020 09:09:16.448 * Full resync requested by replica 10.0.0.18:6379 24402:M 06 Oct 2020 09:09:16.448 * Starting BGSAVE for SYNC with target: disk 24402:M 06 Oct 2020 09:09:16.453 * Background saving started by pid 24507 24507:C 06 Oct 2020 09:09:16.454 * DB saved on disk 24507:C 06 Oct 2020 09:09:16.455 * RDB: 2 MB of memory used by copy-on-write 24402:M 06 Oct 2020 09:09:16.489 * Background saving terminated with success 24402:M 06 Oct 2020 09:09:16.490 * Synchronization with replica 10.0.0.18:6379 succeeded
在 slave 节点观察日志 1 2 3 4 5 6 7 8 9 10 11 12 [root@centos8 ~] 24395:S 06 Oct 2020 09:09:16.411 * Connecting to MASTER 10.0.0.8:6379 24395:S 06 Oct 2020 09:09:16.412 * MASTER <-> REPLICA sync started 24395:S 06 Oct 2020 09:09:16.412 * Non blocking connect for SYNC fired the event. 24395:S 06 Oct 2020 09:09:16.412 * Master replied to PING, replication can continue ... 24395:S 06 Oct 2020 09:09:16.414 * Partial resynchronization not possible (no cached master) 24395:S 06 Oct 2020 09:09:16.419 * Full resync from master: 20ec2450b850782b6eeaed4a29a61a25b9a7f4da:0 24395:S 06 Oct 2020 09:09:16.456 * MASTER <-> REPLICA sync : receiving 196 bytes from master 24395:S 06 Oct 2020 09:09:16.456 * MASTER <-> REPLICA sync : Flushing old data 24395:S 06 Oct 2020 09:09:16.456 * MASTER <-> REPLICA sync : Loading DB in memory 24395:S 06 Oct 2020 09:09:16.457 * MASTER <-> REPLICA sync : Finished with success
Configure replication in the slave node's configuration file. Example:
1 2 3 4 5 6 7 8 9 10 [root@centos8 ~] ....... replicaof 10.0.0.8 6379 ...... masterauth 123456 requirepass 123456 ....... [root@centos8 ~]
master和slave查看状态 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 127.0.0.1:6379> info replication role:master connected_slaves:1 slave0:ip=10.0.0.18,port=6379,state=online,offset=1104403,lag=0 master_replid:b2517cd6cb3ad1508c516a38caed5b9d2d9a3e73 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:1104403 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:55828 repl_backlog_histlen:1048576 127.0.0.1:6379> 127.0.0.1:6379> get key1 "v1-master" 127.0.0.1:6379> INFO replication role:slave master_host:10.0.0.8 master_port:6379 master_link_status:up master_last_io_seconds_ago:6 master_sync_in_progress:0 slave_repl_offset:1104431 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:b2517cd6cb3ad1508c516a38caed5b9d2d9a3e73 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:1104431 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:55856 repl_backlog_histlen:1048576 127.0.0.1:6379> 127.0.0.1:6379> INFO replication role:slave master_host:10.0.0.8 master_port:6379 master_link_status:down master_last_io_seconds_ago:-1 master_sync_in_progress:0 slave_repl_offset:1104529 master_link_down_since_seconds:4 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:b2517cd6cb3ad1508c516a38caed5b9d2d9a3e73 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:1104529 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:55954 repl_backlog_histlen:1048576
Slave 日志 1 2 3 4 5 6 7 8 9 10 11 12 [root@centos8 ~] 24592:S 20 Feb 2020 12:03:58.792 * Connecting to MASTER 10.0.0.8:6379 24592:S 20 Feb 2020 12:03:58.792 * MASTER <-> REPLICA sync started 24592:S 20 Feb 2020 12:03:58.797 * Non blocking connect for SYNC fired the event. 24592:S 20 Feb 2020 12:03:58.797 * Master replied to PING, replication can continue ... 24592:S 20 Feb 2020 12:03:58.798 * Partial resynchronization not possible (no cached master) 24592:S 20 Feb 2020 12:03:58.801 * Full resync from master: b69908f23236fb20b810d198f7f4539f795e0ee5:2440 24592:S 20 Feb 2020 12:03:58.863 * MASTER <-> REPLICA sync : receiving 213 bytes from master 24592:S 20 Feb 2020 12:03:58.863 * MASTER <-> REPLICA sync : Flushing old data 24592:S 20 Feb 2020 12:03:58.863 * MASTER <-> REPLICA sync : Loading DB in memory 24592:S 20 Feb 2020 12:03:58.863 * MASTER <-> REPLICA sync : Finished with success
Master日志 1 2 3 4 5 6 7 8 9 10 11 [root@centos8 ~] 11846:M 20 Feb 2020 12:11:35.171 * DB loaded from disk: 0.000 seconds 11846:M 20 Feb 2020 12:11:35.171 * Ready to accept connections 11846:M 20 Feb 2020 12:11:36.086 * Replica 10.0.0.18:6379 asks for synchronization 11846:M 20 Feb 2020 12:11:36.086 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for 'b69908f23236fb20b810d198f7f4539f795e0ee5' , my replication IDs are '4bff970970c073c1f3d8e8ad20b1c1f126a5f31c' and '0000000000000000000000000000000000000000' ) 11846:M 20 Feb 2020 12:11:36.086 * Starting BGSAVE for SYNC with target: disk 11846:M 20 Feb 2020 12:11:36.095 * Background saving started by pid 11850 11850:C 20 Feb 2020 12:11:36.121 * DB saved on disk 11850:C 20 Feb 2020 12:11:36.121 * RDB: 4 MB of memory used by copy-on-write 11846:M 20 Feb 2020 12:11:36.180 * Background saving terminated with success 11846:M 20 Feb 2020 12:11:36.180 * Synchronization with replica 10.0.0.18:6379 succeeded
Slave read-only state

Verify that the slave node is read-only and does not accept writes:

127.0.0.1:6379> set key1 v1-slave
(error) READONLY You can't write against a read only replica.
Recovering from master/slave replication failures

Overview of the recovery process

Slave node failure and recovery: when a slave fails, simply point the Redis client at another slave node, and repair the failed slave promptly.

Master node failure and recovery: when the master fails, a slave must be promoted to become the new master.

After a master failure, a slave can currently only be promoted manually; there is no automatic switchover here.

Afterwards, the remaining slaves must be re-pointed at the new master.

Switching the master changes the master_replid; because the slaves' previous master_replid no longer matches the current master, all slaves perform a full resynchronization.

Implementing the failover: assume the current master 10.0.0.8 has failed and 10.0.0.18 is to be promoted to the new master.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 127.0.0.1:6379> INFO replication role:slave master_host:10.0.0.8 master_port:6379 master_link_status:up master_last_io_seconds_ago:1 master_sync_in_progress:0 slave_repl_offset:3794 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:8e8279e461fdf0f1a3464ef768675149ad4b54a3 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:3794 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:3781 repl_backlog_histlen:14
Stop replication on the slave and promote it to the new master
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 127.0.0.1:6379> REPLICAOF NO ONE OK (5.04s) 127.0.0.1:6379> info replication role:master connected_slaves:0 master_replid:94901d6b8ff812ec4a4b3ac6bb33faa11e55c274 master_replid2:0083e5a9c96aa4f2196934e10b910937d82b4e19 master_repl_offset:3514 second_repl_offset:3515 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:3431 repl_backlog_histlen:84 127.0.0.1:6379>
Test whether data can now be written:

127.0.0.1:6379> set keytest1 vtest1
OK

Point all remaining slaves at the new master node
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 127.0.0.1:6379> SLAVEOF 10.0.0.18 6379 OK 127.0.0.1:6379> set key100 v100 (error) READONLY You can't write against a read only replica. #查看日志 [root@centos8 ~]# tail -f /var/log/redis/redis.log 1762:S 20 Feb 2020 13:28:21.943 # Connection with master lost. 1762:S 20 Feb 2020 13:28:21.943 * Caching the disconnected master state. 1762:S 20 Feb 2020 13:28:21.943 * REPLICAOF 10.0.0.18:6379 enabled (user request from ' id =5 addr=127.0.0.1:59668 fd=9 name= age=149 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=41 qbuf-free=32727 obl=0 oll=0 omem=0 events=r cmd=slaveof') 1762:S 20 Feb 2020 13:28:21.966 * Connecting to MASTER 10.0.0.18:6379 1762:S 20 Feb 2020 13:28:21.966 * MASTER <-> REPLICA sync started 1762:S 20 Feb 2020 13:28:21.967 * Non blocking connect for SYNC fired the event. 1762:S 20 Feb 2020 13:28:21.968 * Master replied to PING, replication can continue... 1762:S 20 Feb 2020 13:28:21.968 * Trying a partial resynchronization (request 8e8279e461fdf0f1a3464ef768675149ad4b54a3:3991). 1762:S 20 Feb 2020 13:28:21.969 * Successful partial resynchronization with master. 1762:S 20 Feb 2020 13:28:21.969 * MASTER <-> REPLICA sync: Master accepted a Partial Resynchronization.
在新master可看到slave
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 127.0.0.1:6379> INFO replication role:master connected_slaves:1 slave0:ip=10.0.0.28,port=6379,state=online,offset=4606,lag=0 master_replid:8e8279e461fdf0f1a3464ef768675149ad4b54a3 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:4606 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:4606 127.0.0.1:6379>
实现 Redis 的级联复制 即实现基于Slave节点的Slave
The master and slave1 need no changes; only slave2 and slave3 have to be re-pointed at slave1 as their master.
1 2 3 4 127.0.0.1:6379> REPLICAOF 10.0.0.18 6379 OK 127.0.0.1:6379> CONFIG SET masterauth 123456
在 master 设置key,观察是否同步
1 2 3 4 5 6 7 8 9 10 11 12 13 127.0.0.1:6379> set key2 v2 OK 127.0.0.1:6379> get key2 "v2" 127.0.0.1:6379> get key2 "v2" 127.0.0.1:6379> set key3 v3 (error) READONLY You can't write against a read only replica.
在中间那个slave1查看状态
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 127.0.0.1:6379> INFO replication role:slave master_host:10.0.0.8 master_port:6379 master_link_status:up master_last_io_seconds_ago:8 master_sync_in_progress:0 slave_repl_offset:4312 slave_priority:100 slave_read_only:1 connected_slaves:1 slave0:ip=10.0.0.28,port=6379,state=online,offset=4312,lag=0 master_replid:8e8279e461fdf0f1a3464ef768675149ad4b54a3 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:4312 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:4312
主从复制优化 主从复制过程 Redis主从复制分为全量同步和增量同步
Redis 的主从同步是非阻塞的,即同步过程不会影响主服务器的正常访问.
全量复制过程 Full resync
The master and slave establish a connection; after authentication succeeds, the slave sends the PSYNC command to the master (SYNC before version 2.8)
The master replies with FULLRESYNC, which carries its runID and current replication offset
The slave saves the master's information
The master runs BGSAVE to dump its dataset to an RDB file, while recording all new write commands in a replication buffer
The master sends the RDB file to the slave
The master then sends the write commands accumulated in the buffer to the slave
The slave discards all of its old local data
The slave loads the RDB file
The slave applies the buffered write commands received from the master
范例:查看RUNID
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 127.0.0.1:6379> info server redis_version:7.0.5 redis_git_sha1:00000000 redis_git_dirty:0 redis_build_id:77bd58d092d1d003 redis_mode:standalone os:Linux 5.4.0-124-generic x86_64 arch_bits:64 monotonic_clock:POSIX clock_gettime multiplexing_api:epoll atomicvar_api:c11-builtin gcc_version:9.4.0 process_id:16407 process_supervised:systemd run_id:9e954950c255644ef291f6be0c579ae893c16aad tcp_port:6379 server_time_usec:1667276559043301 uptime_in_seconds:3463 uptime_in_days:0 hz:10 configured_hz:10 lru_clock:6332175 executable:/apps/redis/bin/redis-server config_file:/apps/redis/etc/redis.conf io_threads_active:0
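The same run_id and replication fields can also be read programmatically. Below is a minimal sketch assuming the redis-py package and a node at 127.0.0.1:6379 protected with password 123456 (both are assumptions, adjust to your environment); a restarted master gets a new run_id, which is one of the triggers for a full resynchronization.

import redis

r = redis.Redis(host="127.0.0.1", port=6379, password="123456", decode_responses=True)

server = r.info("server")          # same data as INFO server above
repl = r.info("replication")       # same data as INFO replication

print("run_id             :", server["run_id"])
print("role               :", repl["role"])
print("master_replid      :", repl.get("master_replid"))
print("master_repl_offset :", repl.get("master_repl_offset"))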
增量复制过程 partial resynchronization
After the first full synchronization has completed, any later synchronization only requires the slave to send its current offset (similar to a MySQL binlog position) to the master; the master then sends everything after that position, including the writes still held in the replication backlog, and the slave simply applies it to its in-memory dataset.
In other words, the first copy is a full synchronization, and subsequent synchronization is essentially incremental.
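The master's choice between a full and a partial resynchronization can be modeled roughly as follows. This is a simplified illustration only, not the actual Redis source; the function and field names are made up for clarity.

# Simplified model of the PSYNC decision on the master side (illustration only).
def can_partial_resync(master_replid, master_offset, backlog_histlen,
                       replica_replid, replica_offset):
    # The replication ID the replica asks for must match the master's current one.
    if replica_replid != master_replid:
        return False
    # Every byte after the replica's offset must still be present in the backlog.
    backlog_start = master_offset - backlog_histlen
    return replica_offset >= backlog_start

# A replica that fell 2000 bytes behind with a 1 MB backlog can catch up incrementally;
# if the gap were larger than the backlog, a full resynchronization would be required.
print(can_partial_resync("8e8279e4...", 10000, 1048576, "8e8279e4...", 8000))   # True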
主从同步完整过程 主从同步完整过程如下:
slave发起连接master,验证通过后,发送PSYNC命令
master接收到PSYNC命令后,执行BGSAVE命令将全部数据保存至RDB文件中,并将后续发生的写操作记录至buffer中
master向所有slave发送RDB文件
master向所有slave发送后续记录在buffer中写操作
slave收到快照文件后丢弃所有旧数据
slave加载收到的RDB到内存
slave 执行来自master接收到的buffer写操作
Once the slave has completed the full synchronization, the master afterwards only needs to send its latest replication offset together with the new write commands
From then on the slave compares its own offset with the master's and replicates only the differing data incrementally
复制缓冲区(环形队列)配置参数:
1 2 3 4 5 repl-backlog-size 1mb repl-backlog-ttl 3600
避免全量复制
The first full synchronization cannot be avoided; later full synchronizations should be scheduled during business off-peak hours, and using master nodes with a small memory footprint keeps the cost of a full copy low
Mismatched run ID: restarting the master changes its RUNID and can trigger a full resynchronization on every replica; rely on failover (Sentinel or Cluster) instead of restarting the master directly. Restarting a slave does not cause a full resynchronization
Insufficient replication backlog: if the master produces more new data than the backlog can hold while a slave is disconnected, the slave will fall back to a full resynchronization after it reconnects; the fix is to increase repl-backlog-size (see the sizing sketch after this list)
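A rough way to size repl-backlog-size is peak write throughput multiplied by the longest disconnect you want to survive. The numbers below are assumptions for illustration; measure your own write traffic before choosing a value.

write_bytes_per_second = 5 * 1024 * 1024      # assumed peak write traffic: 5 MB/s
max_disconnect_seconds = 60                   # longest network interruption to tolerate

backlog_bytes = write_bytes_per_second * max_disconnect_seconds
print("repl-backlog-size >= %dmb" % (backlog_bytes // (1024 * 1024)))   # 300mb in this example

# The value can also be applied at runtime (assuming redis-py):
# r.config_set("repl-backlog-size", backlog_bytes)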
避免复制风暴
单主节点复制风暴
当主节点重启,多从节点复制
解决方法:更换复制拓扑
单机器多实例复制风暴
机器宕机后,大量全量复制
解决方法:主节点分散多机器
Tuning master-slave synchronization: before version 2.8 Redis had no partial (incremental) resynchronization, so a brief network interruption or a slave restart forced a full resynchronization between master and slave; partial resynchronization was added starting with version 2.8.
性能相关配置
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 repl-diskless-sync no repl-diskless-sync-delay 5 repl-ping-slave-period 10 repl-timeout 60 repl-disable-tcp-nodelay no repl-backlog-size 1mb repl-backlog-ttl 3600 slave-priority 100 min-replicas-to-write 1 min-slaves-max-lag 20
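The current values of these replication settings can be dumped with CONFIG GET. A small sketch, assuming redis-py and a local node with password 123456:

import redis

r = redis.Redis(host="127.0.0.1", port=6379, password="123456", decode_responses=True)

# CONFIG GET accepts glob patterns, so "repl-*" returns all repl-related settings.
for name, value in sorted(r.config_get("repl-*").items()):
    print(name, "=", value)

# The min-replicas (min-slaves on older versions) settings use a different prefix.
print(r.config_get("min-replicas-*"))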
常见主从复制故障 主从硬件和软件配置不一致 主从节点的maxmemory不一致,主节点内存大于从节点内存,主从复制可能丢失数据
rename-command 命令不一致,如在主节点启用flushdb,从节点禁用此命令,结果在master节点执行 flushdb后,导致slave节点不同步
1 2 3 4 5 6 10822:S 16 Oct 2020 20:03:45.291 3181:S 21 Oct 2020 17:34:50.581
Master 节点密码错误 如果slave节点配置的master密码错误,导致验证不通过,自然将无法建立主从同步关系。
1 2 3 4 5 6 [root@centos8 ~] 24930:S 20 Feb 2020 13:53:57.029 * Connecting to MASTER 10.0.0.8:6379 24930:S 20 Feb 2020 13:53:57.030 * MASTER <-> REPLICA sync started 24930:S 20 Feb 2020 13:53:57.030 * Non blocking connect for SYNC fired the event. 24930:S 20 Feb 2020 13:53:57.030 * Master replied to PING, replication can continue ... 24930:S 20 Feb 2020 13:53:57.031
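If the replica's masterauth is wrong, it can be corrected at runtime without a restart. A minimal recovery sketch, assuming redis-py and the addresses/password from the examples above:

import time
import redis

replica = redis.Redis(host="10.0.0.18", port=6379, password="123456", decode_responses=True)

replica.config_set("masterauth", "123456")    # must match the master's requirepass
time.sleep(2)                                 # give the replica a moment to reconnect
print(replica.info("replication")["master_link_status"])   # expect "up" once auth succeeds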
Redis 版本不一致 不同的redis 版本之间尤其是大版本间可能会存在兼容性问题,如:Redis 3,4,5,6之间
因此主从复制的所有节点应该使用相同的版本
安全模式下无法远程连接 如果开启了安全模式,并且没有设置bind地址和密码,会导致无法远程连接
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] LISTEN 0 128 0.0.0.0:6379 0.0.0.0:* [root@centos8 ~] 10.0.0.8:6379> KEYS * (error) DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no' , and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside. 10.0.0.38:6379> exit [root@centos8 ~] 127.0.0.1:6379> KEYS * (empty list or set )
Redis Sentinel Introduction to Redis clustering Like MySQL replication, a plain master/slave architecture cannot switch the master and slave roles automatically: when the master fails, no slave is promoted to the new master on its own, so failover requires manually changing the configuration to promote a slave. In addition, only the single master node accepts writes, so under heavy traffic the Redis service can reach a performance bottleneck.
需要解决的主从复制以下存在的问题:
master和slave角色的自动切换,且不能影响业务
提升Redis服务整体性能,支持更高并发访问
How Sentinel works Sentinel was introduced in Redis 2.6 and has been stable since Redis 2.8; if you want to use it in production, Redis 2.8 or later is recommended.
Sentinel 架构和故障转移机制
Sentinel 架构
Sentinel 故障转移
专门的Sentinel 服务进程是用于监控redis集群中Master工作的状态,当Master主服务器发生故障的时候,可以实现Master和Slave的角色的自动切换,从而实现系统的高可用性
Sentinel is a distributed system: a sentinel process runs on each of several nodes; the processes exchange information about whether the master is down through a gossip protocol, use an agreement (voting) protocol to decide whether to perform an automatic failover, and select a suitable slave as the new master (a Pub/Sub observation sketch follows this overview).
每个Sentinel进程会向其它Sentinel、Master、Slave定时发送消息,来确认对方是否存活,如果发现某个节点在指定配置时间内未得到响应,则会认为此节点已离线,即为主观宕机Subjective Down,简称为 SDOWN
如果哨兵集群中的多数Sentinel进程认为Master存在SDOWN,共同利用 is-master-down-by-addr 命令互相通知后,则认为客观宕机Objectively Down, 简称 ODOWN
接下来利用投票算法,从所有slave节点中,选一台合适的slave将之提升为新Master节点,然后自动修改其它slave相关配置,指向新的master节点,最终实现故障转移failover
Redis Sentinel中的Sentinel节点个数应该为大于等于3且最好为奇数
客户端初始化时连接的是Sentinel节点集合,不再是具体的Redis节点,即 Sentinel只是配置中心不是代理。
Redis Sentinel 节点与普通 Redis 没有区别,要实现读写分离依赖于客户端程序
Sentinel 机制类似于MySQL中的MHA功能,只解决master和slave角色的自动故障转移问题,但单个 Master 的性能瓶颈问题并没有解决
Redis 3.0 之前版本中,生产环境一般使用哨兵模式较多,Redis 3.0后推出Redis cluster功能,可以支持更大规模的高并发环境
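The SDOWN / ODOWN / failover sequence described above can be observed from any client, because Sentinel publishes its state changes over Pub/Sub. A small observation sketch, assuming redis-py and a Sentinel listening on 10.0.0.8:26379 as in the examples below:

import redis

sentinel = redis.Redis(host="10.0.0.8", port=26379, decode_responses=True)
p = sentinel.pubsub()
p.subscribe("+sdown", "+odown", "+switch-master")   # Sentinel event channels

for message in p.listen():
    if message["type"] == "message":
        print(message["channel"], "->", message["data"])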
Sentinel中的三个定时任务
每10 秒每个sentinel 对master和slave执行info
发现slave节点
确认主从关系
每2秒每个sentinel通过master节点的channel交换信息(pub/sub)
The exchange happens on the __sentinel__:hello channel (see the sketch after this list)
Each sentinel shares its own information and its "view" of the monitored nodes
每1秒每个sentinel对其他sentinel和redis执行ping
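The 2-second hello exchange can be watched directly: every sentinel publishes a hello message on the __sentinel__:hello channel of the instances it monitors. A sketch assuming redis-py and the master 10.0.0.8:6379 with password 123456 used in the examples below:

import redis

master = redis.Redis(host="10.0.0.8", port=6379, password="123456", decode_responses=True)
p = master.pubsub()
p.subscribe("__sentinel__:hello")

for message in p.listen():
    if message["type"] == "message":
        # Each payload contains the announcing sentinel's address and run id plus
        # the master name and address it currently believes in.
        print(message["data"])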
实现哨兵架构 以下案例实现一主两从的基于哨兵的高可用Redis架构
哨兵需要先实现主从复制 哨兵的前提是已经实现了Redis的主从复制
Note: masterauth must be configured identically on the master and on all slaves (and should match requirepass), because after a failover the old master becomes a slave of the new one.
Key settings in redis.conf on all master and slave nodes
范例: 准备主从环境配置
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [root@centos8 ~] [root@ubuntu2004 ~] [root@centos8 ~] bind 0.0.0.0masterauth "123456" requirepass "123456" [root@centos8 ~] [root@centos8 ~] [root@centos8 ~]
master 服务器状态
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [root@redis-master ~] 127.0.0.1:6379> INFO replication role:master connected_slaves:2 slave0:ip=10.0.0.28,port=6379,state=online,offset=112,lag=1 slave1:ip=10.0.0.18,port=6379,state=online,offset=112,lag=0 master_replid:8fdca730a2ae48fb9c8b7e739dcd2efcc76794f3 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:112 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:112 127.0.0.1:6379>
配置 slave1
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 [root@redis-slave1 ~] 127.0.0.1:6379> REPLICAOF 10.0.0.8 6379 OK 127.0.0.1:6379> CONFIG SET masterauth "123456" OK 127.0.0.1:6379> INFO replication role:slave master_host:10.0.0.8 master_port:6379 master_link_status:up master_last_io_seconds_ago:4 master_sync_in_progress:0 slave_repl_offset:140 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:8fdca730a2ae48fb9c8b7e739dcd2efcc76794f3 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:140 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:99 repl_backlog_histlen:42
配置 slave2
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 [root@redis-slave2 ~] 127.0.0.1:6379> REPLICAOF 10.0.0.8 6379 OK 127.0.0.1:6379> CONFIG SET masterauth "123456" OK 127.0.0.1:6379> INFO replication role:slave master_host:10.0.0.8 master_port:6379 master_link_status:up master_last_io_seconds_ago:3 master_sync_in_progress:0 slave_repl_offset:182 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:8fdca730a2ae48fb9c8b7e739dcd2efcc76794f3 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:182 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:15 repl_backlog_histlen:168
编辑哨兵配置 sentinel配置
Sentinel实际上是一个特殊的redis服务器,有些redis指令支持,但很多指令并不支持.默认监听在26379/tcp端口.
哨兵服务可以和Redis服务器分开部署在不同主机,但为了节约成本一般会部署在一起
所有redis节点使用相同的以下示例的配置文件
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 [root@ubuntu2204 ~] [root@centos8 ~] [root@ubuntu2204 ~] [root@centos8 ~] bind 0.0.0.0port 26379 daemonize yes pidfile "redis-sentinel.pid" logfile "sentinel_26379.log" dir "/tmp" sentinel monitor mymaster 10.0.0.8 6379 2 sentinel auth-pass mymaster 123456 sentinel down-after-milliseconds mymaster 30000 sentinel parallel-syncs mymaster 1 sentinel failover-timeout mymaster 180000 sentinel deny-scripts-reconfig yes logfile /var/log/redis/sentinel.log [root@ubuntu2204 ~] [root@ubuntu2204 ~] protected-mode no port 26379 daemonize no pidfile "/apps/redis/run/redis-sentinel.pid" logfile "/apps/redis/log/redis-sentinel.log" dir "/tmp" sentinel monitor mymaster 10.0.0.102 6379 2 sentinel auth-pass mymaster 123456 sentinel down-after-milliseconds mymaster 3000 acllog-max-len 128 sentinel deny-scripts-reconfig yes sentinel resolve-hostnames no sentinel announce-hostnames no
三个哨兵服务器的配置都如下
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 [root@redis-master ~] port 26379 daemonize no pidfile "/var/run/redis-sentinel.pid" logfile "/var/log/redis/sentinel.log" dir "/tmp" sentinel monitor mymaster 10.0.0.8 6379 2 sentinel auth-pass mymaster 123456 sentinel down-after-milliseconds mymaster 3000 sentinel parallel-syncs mymaster 1 sentinel failover-timeout mymaster 180000 sentinel deny-scripts-reconfig yes sentinel myid 50547f34ed71fd48c197924969937e738a39975b ..... protected-mode no supervised systemd sentinel leader-epoch mymaster 0 sentinel known-replica mymaster 10.0.0.28 6379 sentinel known-replica mymaster 10.0.0.18 6379 sentinel current-epoch 0 [root@redis-master ~] [root@redis-master ~]
启动哨兵服务 将所有哨兵服务器都启动起来
1 2 3 4 5 6 7 8 9 10 [root@redis-slave1 ~] sentinel myid 50547f34ed71fd48c197924969937e738a39975c [root@redis-slave2 ~] sentinel myid 50547f34ed71fd48c197924969937e738a39975d [root@redis-master ~] [root@redis-slave1 ~] [root@redis-slave2 ~]
如果是编译安装,在所有哨兵服务器执行下面操作启动哨兵
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 [root@redis-master ~] bind 0.0.0.0port 26379 daemonize yes pidfile "redis-sentinel.pid" Logfile "sentinel_26379.log" dir "/apps/redis/data" sentinel monitor mymaster 10.0.0.8 6379 2 sentinel auth-pass mymaster 123456 sentinel down-after-milliseconds mymaster 15000 sentinel parallel-syncs mymaster 1 sentinel failover-timeout mymaster 180000 sentinel deny-scripts-reconfig yes [root@redis-master ~] [root@redis-master ~] [Unit] Description=Redis Sentinel After=network.target [Service] ExecStart=/apps/redis/bin/redis-sentinel /apps/redis/etc/sentinel.conf --supervised systemd ExecStop=/bin/kill -s QUIT $MAINPID User=redis Group=redis RuntimeDirectory=redis RuntimeDirectory Mode=0755 [Install] WantedBy=multi-user.target [root@redis-master ~] [root@redis-master ~] [root@redis-master ~]
验证哨兵服务 查看哨兵服务端口状态 1 2 3 4 5 6 7 8 [root@redis-master ~] State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 0.0.0.0:22 0.0.0.0:* LISTEN 0 128 0.0.0.0:26379 0.0.0.0:* LISTEN 0 128 0.0.0.0:6379 0.0.0.0:* LISTEN 0 128 [::]:22 [::]:* LISTEN 0 128 [::]:26379 [::]:* LISTEN 0 128 [::]:6379 [::]:*
查看哨兵日志 master的哨兵日志
1 2 3 4 5 6 7 8 9 10 11 [root@redis-master ~] 38028:X 20 Feb 2020 17:13:08.702 38028:X 20 Feb 2020 17:13:08.702 38028:X 20 Feb 2020 17:13:08.702 38028:X 20 Feb 2020 17:13:08.702 * supervised by systemd, will signal readiness 38028:X 20 Feb 2020 17:13:08.703 * Running mode=sentinel, port=26379. 38028:X 20 Feb 2020 17:13:08.703 38028:X 20 Feb 2020 17:13:08.704 38028:X 20 Feb 2020 17:13:08.704 38028:X 20 Feb 2020 17:13:08.709 * +slave slave 10.0.0.28:6379 10.0.0.28 6379 @ mymaster 10.0.0.8 6379 38028:X 20 Feb 2020 17:13:08.709 * +slave slave 10.0.0.18:6379 10.0.0.18 6379 @ mymaster 10.0.0.8 6379
slave的哨兵日志
1 2 3 4 5 6 7 8 9 10 11 [root@redis-slave1 ~] 25509:X 20 Feb 2020 17:13:27.435 * Removing the pid file. 25509:X 20 Feb 2020 17:13:27.435 25572:X 20 Feb 2020 17:13:27.448 25572:X 20 Feb 2020 17:13:27.448 25572:X 20 Feb 2020 17:13:27.448 25572:X 20 Feb 2020 17:13:27.448 * supervised by systemd, will signal readiness 25572:X 20 Feb 2020 17:13:27.449 * Running mode=sentinel, port=26379. 25572:X 20 Feb 2020 17:13:27.449 25572:X 20 Feb 2020 17:13:27.449 25572:X 20 Feb 2020 17:13:27.449
Current sentinel state In the sentinel status output the last line matters most: the master IP, the number of slaves, and the number of sentinels must match the actual number of servers in the deployment
1 2 3 4 5 6 7 8 9 [root@redis-master ~] 127.0.0.1:26379> INFO sentinel sentinel_masters:1 sentinel_tilt:0 sentinel_running_scripts:0 sentinel_scripts_queue_length:0 sentinel_simulate_failure_flags:0 master0:name=mymaster,status=ok,address=10.0.0.8:6379,slaves=2,sentinels=3
停止 Master 节点实现故障转移 停止 Master 节点
查看各节点上哨兵信息:
1 2 3 4 5 6 7 8 9 [root@redis-master ~] 127.0.0.1:26379> INFO sentinel sentinel_masters:1 sentinel_tilt:0 sentinel_running_scripts:0 sentinel_scripts_queue_length:0 sentinel_simulate_failure_flags:0 master0:name=mymaster,status=ok,address=10.0.0.18:6379,slaves=2,sentinels=3
故障转移时sentinel的信息:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 [root@redis-master ~] 38028:X 20 Feb 2020 17:42:27.362 38028:X 20 Feb 2020 17:42:27.418 38028:X 20 Feb 2020 17:42:27.418 38028:X 20 Feb 2020 17:42:27.418 38028:X 20 Feb 2020 17:42:27.419 38028:X 20 Feb 2020 17:42:27.422 38028:X 20 Feb 2020 17:42:27.475 38028:X 20 Feb 2020 17:42:27.475 38028:X 20 Feb 2020 17:42:27.529 38028:X 20 Feb 2020 17:42:27.529 * +failover-state-send-slaveof-noone slave 10.0.0.18:6379 10.0.0.18 6379 @ mymaster 10.0.0.8 6379 38028:X 20 Feb 2020 17:42:27.613 * +failover-state-wait-promotion slave 10.0.0.18:6379 10.0.0.18 6379 @ mymaster 10.0.0.8 6379 38028:X 20 Feb 2020 17:42:28.506 38028:X 20 Feb 2020 17:42:28.506 38028:X 20 Feb 2020 17:42:28.582 * +slave-reconf-sent slave 10.0.0.28:6379 10.0.0.28 6379 @ mymaster 10.0.0.8 6379 38028:X 20 Feb 2020 17:42:28.736 * +slave-reconf-inprog slave 10.0.0.28:6379 10.0.0.28 6379 @ mymaster 10.0.0.8 6379 38028:X 20 Feb 2020 17:42:28.736 * +slave-reconf-done slave 10.0.0.28:6379 10.0.0.28 6379 @ mymaster 10.0.0.8 6379 38028:X 20 Feb 2020 17:42:28.799 38028:X 20 Feb 2020 17:42:28.799 38028:X 20 Feb 2020 17:42:28.799 * +slave slave 10.0.0.28:6379 10.0.0.28 6379 @ mymaster 10.0.0.18 6379 38028:X 20 Feb 2020 17:42:28.799 * +slave slave 10.0.0.8:6379 10.0.0.8 6379 @ mymaster 10.0.0.18 6379 38028:X 20 Feb 2020 17:42:31.809
验证故障转移
故障转移后redis.conf中的replicaof行的master IP会被修改
1 2 [root@redis-slave2 ~] replicaof 10.0.0.18 6379
哨兵配置文件的sentinel monitor IP 同样也会被修改
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 [root@redis-slave1 ~] port 26379 daemonize no pidfile "/var/run/redis-sentinel.pid" logfile "/var/log/redis/sentinel.log" dir "/tmp" sentinel myid 50547f34ed71fd48c197924969937e738a39975b sentinel deny-scripts-reconfig yes sentinel monitor mymaster 10.0.0.18 6379 2 sentinel down-after-milliseconds mymaster 3000 sentinel auth-pass mymaster 123456 sentinel config-epoch mymaster 1 protected-mode no supervised systemd sentinel leader-epoch mymaster 1 sentinel known-replica mymaster 10.0.0.8 6379 sentinel known-replica mymaster 10.0.0.28 6379 sentinel known-sentinel mymaster 10.0.0.28 26379 50547f34ed71fd48c197924969937e738a39975d sentinel current-epoch 1 [root@redis-slave2 ~] port 26379 daemonize no pidfile "/var/run/redis-sentinel.pid" logfile "/var/log/redis/sentinel.log" dir "/tmp" sentinel myid 50547f34ed71fd48c197924969937e738a39975d sentinel deny-scripts-reconfig yes sentinel monitor mymaster 10.0.0.18 6379 2 sentinel down-after-milliseconds mymaster 3000 sentinel auth-pass mymaster 123456 sentinel config-epoch mymaster 1 protected-mode no supervised systemd sentinel leader-epoch mymaster 1 sentinel known-replica mymaster 10.0.0.28 6379 sentinel known-replica mymaster 10.0.0.8 6379 sentinel known-sentinel mymaster 10.0.0.8 26379 50547f34ed71fd48c197924969937e738a39975b sentinel current-epoch 1
验证 Redis 各节点状态 新的master 状态
1 2 3 4 5 6 7 8 9 10 11 12 13 14 [root@redis-slave1 ~] 127.0.0.1:6379> INFO replication role:master connected_slaves:1 slave0:ip=10.0.0.28,port=6379,state=online,offset=56225,lag=1 master_replid:75e3f205082c5a10824fbe6580b6ad4437140b94 master_replid2:b2fb4653bdf498691e5f88519ded65b6c000e25c master_repl_offset:56490 second_repl_offset:46451 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:287 repl_backlog_histlen:56204
另一个slave指向新的master
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 [root@redis-slave2 ~] 127.0.0.1:6379> INFO replication role:slave master_host:10.0.0.18 master_port:6379 master_link_status:up master_last_io_seconds_ago:0 master_sync_in_progress:0 slave_repl_offset:61029 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:75e3f205082c5a10824fbe6580b6ad4437140b94 master_replid2:b2fb4653bdf498691e5f88519ded65b6c000e25c master_repl_offset:61029 second_repl_offset:46451 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:61029
原 Master 重新加入 Redis 集群 1 2 3 [root@redis-master ~] replicaof 10.0.0.18 6379
在原 master上观察状态
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 [root@redis-master ~] 127.0.0.1:6379> INFO replication role:slave master_host:10.0.0.18 master_port:6379 master_link_status:up master_last_io_seconds_ago:0 master_sync_in_progress:0 slave_repl_offset:764754 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:75e3f205082c5a10824fbe6580b6ad4437140b94 master_replid2:b2fb4653bdf498691e5f88519ded65b6c000e25c master_repl_offset:764754 second_repl_offset:46451 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:46451 repl_backlog_histlen:718304 [root@redis-master ~] 127.0.0.1:26379> INFO sentinel sentinel_masters:1 sentinel_tilt:0 sentinel_running_scripts:0 sentinel_scripts_queue_length:0 sentinel_simulate_failure_flags:0 master0:name=mymaster,status=ok,address=10.0.0.18:6379,slaves=2,sentinels=3 127.0.0.1:26379>
观察新master上状态和日志
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 [root@redis-slave1 ~] 127.0.0.1:6379> INFO replication role:master connected_slaves:2 slave0:ip=10.0.0.28,port=6379,state=online,offset=769027,lag=0 slave1:ip=10.0.0.8,port=6379,state=online,offset=769027,lag=0 master_replid:75e3f205082c5a10824fbe6580b6ad4437140b94 master_replid2:b2fb4653bdf498691e5f88519ded65b6c000e25c master_repl_offset:769160 second_repl_offset:46451 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:287 repl_backlog_histlen:768874 127.0.0.1:6379> [root@redis-slave1 ~] 25717:X 20 Feb 2020 17:42:33.757 25717:X 20 Feb 2020 18:41:29.566
Sentinel 运维 在Sentinel主机手动触发故障切换
1 2 127.0.0.1:26379> sentinel failover <masterName>
范例: 手动故障转移
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [root@centos8 ~] replica-priority 10 [root@centos8 ~] [root@centos8 ~] 127.0.0.1:6379> CONFIG GET replica-priority 1) "replica-priority" 2) "100" 127.0.0.1:6379> CONFIG SET replica-priority 99 OK 127.0.0.1:6379> CONFIG GET replica-priority 1) "replica-priority" 2) "99" [root@centos8 ~] 127.0.0.1:26379> sentinel failover mymaster OK
应用程序连接 Sentinel Redis 官方支持多种开发语言的客户端:https://redis.io/clients
客户端连接 Sentinel 工作原理
客户端获取 Sentinel 节点集合,选举出一个 Sentinel
That sentinel resolves the masterName to the current master; concretely, the client calls the SENTINEL get-master-addr-by-name <master-name> API to obtain the master's address (a minimal sketch follows this list)
客户端发送role指令确认master的信息,验证当前获取的“主节点”是真正的主节点,这样的目的是为了防止故障转移期间主节点的变化
客户端保持和Sentinel节点集合的联系,即订阅Sentinel节点相关频道,时刻获取关于主节点的相关信息,获取新的master 信息变化,并自动连接新的master
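A minimal sketch of this handshake, assuming redis-py, a Sentinel on 10.0.0.8:26379 and data nodes protected with password 123456 (the high-level Sentinel class used in the Python example below wraps exactly these steps):

import redis

sentinel = redis.Redis(host="10.0.0.8", port=26379, decode_responses=True)

# 1. Ask Sentinel which node is currently the master of "mymaster".
ip, port = sentinel.execute_command("SENTINEL", "GET-MASTER-ADDR-BY-NAME", "mymaster")

# 2. Connect to that node and confirm with ROLE that it really is a master,
#    guarding against a failover that happened in between.
node = redis.Redis(host=ip, port=int(port), password="123456", decode_responses=True)
role = node.execute_command("ROLE")
assert role[0] == "master", "%s:%s is no longer a master: %s" % (ip, port, role[0])
print("current master:", ip, port)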
Java 连接Sentinel哨兵 Java 客户端连接Redis:https://github.com/xetorthio/jedis/blob/master/pom.xml
1 2 3 4 5 6 7 #jedis/pom.xml 配置连接redis <properties > <redis-hosts > localhost:6379,localhost:6380,localhost:6381,localhost:6382,localhost:6383,localhost:6384,localhost:6385</redis-hosts > <sentinel-hosts > localhost:26379,localhost:26380,localhost:26381</sentinel-hosts > <cluster-hosts > localhost:7379,localhost:7380,localhost:7381,localhost:7382,localhost:7383,localhost:7384,localhost:7385</cluster-hosts > <github.global.server > github</github.global.server > </properties >
A Java client talks to a standalone Redis through Jedis: the application creates Jedis objects (usually from a Jedis connection pool) and uses them directly. For high availability, however, Redis is normally deployed with Sentinel, and when the master fails a slave is promoted in its place; a plain Jedis pool would keep pointing at the dead master. To solve this, Jedis also provides a Sentinel-aware implementation that notifies the application when the master changes and reconnects it to the new master.
Using it is straightforward: JedisSentinelPool only adds the Sentinel addresses and the masterName parameter to the usual JedisPool configuration. Under the hood Jedis subscribes to Sentinel's Pub/Sub notifications, so when a master/slave switch happens Sentinel actively informs Jedis; in addition, every time a connection is taken from JedisSentinelPool it is checked against the master currently reported by Sentinel, and if it points elsewhere the connection is closed and a fresh Jedis connection to the new master is obtained.
Python 连接 Sentinel 哨兵 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 [root@ubuntu2204 ~] [root@centos8 ~] [root@centos8 ~] import redis from redis.sentinel import Sentinel sentinel = Sentinel([('10.0.0.8' , 26379), ('10.0.0.18' , 26379), ('10.0.0.28' , 26379)], socket_timeout=0.5) redis_auth_pass = '123456' master = sentinel.discover_master('mymaster' ) print (master)slave = sentinel.discover_slaves('mymaster' ) print (slave)master = sentinel.master_for('mymaster' , socket_timeout=0.5, password=redis_auth_pass, db=0) w_ret = master.set('name' , 'wang' ) slave = sentinel.slave_for('mymaster' , socket_timeout=0.5, password=redis_auth_pass, db=0) r_ret = slave.get('name' ) print (r_ret)[root@centos8 ~] [root@centos8 ~] ('10.0.0.8' , 6379) [('10.0.0.18' , 6379), ('10.0.0.28' , 6379)] b'wang'
Redis Cluster Redis Cluster 介绍 使用哨兵sentinel 只能解决Redis高可用问题,实现Redis的自动故障转移,但仍然无法解决Redis Master 单节点的性能瓶颈问题
为了解决单机性能的瓶颈,提高Redis 服务整体性能,可以使用分布式集群的解决方案
早期 Redis 分布式集群部署方案:
客户端分区:由客户端程序自己实现写入分配、高可用管理和故障转移等,对客户端的开发实现较为复杂
代理服务:客户端不直接连接Redis,而先连接到代理服务,由代理服务实现相应读写分配,当前代理服务都是第三方实现.此方案中客户端实现无需特殊开发,实现容易,但是代理服务节点仍存有单点故障和性能瓶颈问题。比如:Twitter开源Twemproxy,豌豆荚开发的 codis
Starting with version 3.0, Redis offers Redis Cluster, a decentralized architecture that supports parallel writes across multiple master nodes and automatic failover.
Redis Cluster 架构 Redis Cluster 架构
Redis Cluster requires at least 3 master nodes; the number of slave nodes is not limited, but in practice each master usually has at least one slave.
With three master nodes, the 16384 slots are distributed among them using hash slots
For example, the slot ranges handled by the three nodes could be assigned as follows (a key-to-slot computation sketch follows the table)
1 2 3 节点M1 0-5460 节点M2 5461-10922 节点M3 10923-16383
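How a key maps onto one of the 16384 slots is easy to reproduce: the slot is the CRC16 of the key (XMODEM variant) modulo 16384, and a non-empty {hash tag} restricts the hash to that part of the key. A self-contained sketch in Python; the results can be cross-checked against the CLUSTER KEYSLOT output shown later in this section.

def crc16(data: bytes) -> int:
    # CRC16/XMODEM (polynomial 0x1021, initial value 0), the variant used by Redis Cluster.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # If the key contains a non-empty {...} section, only that part is hashed,
    # which lets related keys land in the same slot (hash tags).
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

for k in ("key1", "linux", "user:{1000}:name", "user:{1000}:age"):
    print(k, "->", key_slot(k))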
实战案例:基于 Redis 5 以上版本的 Redis Cluster 部署 官方文档:https://redis.io/topics/cluster-tutorial
redis cluster 相关命令
范例: 查看 –cluster 选项帮助
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 [root@centos8 ~] Cluster Manager Commands: create host1:port1 ... hostN:portN --cluster-replicas <arg> check host:port --cluster-search-multiple-owners info host:port fix host:port --cluster-search-multiple-owners reshard host:port --cluster-from <arg> --cluster-to <arg> --cluster-slots <arg> --cluster-yes --cluster-timeout <arg> --cluster-pipeline <arg> --cluster-replace rebalance host:port --cluster-weight <node1=w1...nodeN=wN> --cluster-use-empty-masters --cluster-timeout <arg> --cluster-simulate --cluster-pipeline <arg> --cluster-threshold <arg> --cluster-replace add-node new_host:new_port existing_host:existing_port --cluster-slave --cluster-master-id <arg> del-node host:port node_id call host:port command arg arg .. arg set-timeout host:port milliseconds import host:port --cluster-from <arg> --cluster-copy --cluster-replace help For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
范例: 查看CLUSTER 指令的帮助
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 [root@centos8 ~] 1) CLUSTER <subcommand> arg arg ... arg. Subcommands are: 2) ADDSLOTS <slot> [slot ...] -- Assign slots to current node. 3) BUMPEPOCH -- Advance the cluster config epoch. 4) COUNT-failure-reports <node-id> -- Return number of failure reports for <node-id>. 5) COUNTKEYSINSLOT <slot> - Return the number of keys in <slot>. 6) DELSLOTS <slot> [slot ...] -- Delete slots information from current node. 7) FAILOVER [force|takeover] -- Promote current replica node to being a master. 8) FORGET <node-id> -- Remove a node from the cluster. 9) GETKEYSINSLOT <slot> <count> -- Return key names stored by current node in a slot. 10) FLUSHSLOTS -- Delete current node own slots information. 11) INFO - Return onformation about the cluster. 12) KEYSLOT <key> -- Return the hash slot for <key>. 13) MEET <ip> <port> [bus-port] -- Connect nodes into a working cluster. 14) MYID -- Return the node id . 15) NODES -- Return cluster configuration seen by node. Output format: 16) <id > <ip:port> <flags> <master> <pings> <pongs> <epoch> <link > <slot> ... <slot> 17) REPLICATE <node-id> -- Configure current node as replica to <node-id>. 18) RESET [hard|soft] -- Reset current node (default: soft). 19) SET-config-epoch <epoch> - Set config epoch of current node. 20) SETSLOT <slot> (importing|migrating|stable|node <node-id>) -- Set slot state. 21) REPLICAS <node-id> -- Return <node-id> replicas. 22) SLOTS -- Return information about slots range mappings. Each range is made of: 23) start, end, master and replicas IP addresses, ports and ids
创建 Redis Cluster 集群的环境准备
Every Redis node must run the same Redis version, use the same password, and have comparable hardware
所有Redis服务器必须没有任何数据
准备六台主机,地址如下:
1 2 3 4 5 6 10.0.0.8 10.0.0.18 10.0.0.28 10.0.0.38 10.0.0.48 10.0.0.58
启用 Redis Cluster 配置 所有6台主机都执行以下配置
每个节点修改redis配置,必须开启cluster功能的参数
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [root@redis-node1 ~]vim /etc/redis.conf bind 0.0.0.0masterauth 123456 requirepass 123456 cluster-enabled yes cluster-config-file nodes-6379.conf cluster-require-full-coverage no [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~]
1 2 3 4 5 6 7 8 9 [root@centos8 ~] LISTEN 0 128 0.0.0.0:16379 0.0.0.0:* LISTEN 0 128 0.0.0.0:6379 0.0.0.0:* [root@centos8 ~] redis 1939 1 0 10:54 ? 00:00:00 /usr/bin/redis-server 0.0.0.0:6379 [cluster] root 1955 1335 0 10:57 pts/0 00:00:00 grep --color=auto redis
创建集群 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 [root@redis-node1 ~] >>> Performing hash slots allocation on 6 nodes... Master[0] -> Slots 0 - 5460 Master[1] -> Slots 5461 - 10922 Master[2] -> Slots 10923 - 16383 Adding replica 10.0.0.38:6379 to 10.0.0.8:6379 Adding replica 10.0.0.48:6379 to 10.0.0.18:6379 Adding replica 10.0.0.58:6379 to 10.0.0.28:6379 M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots:[0-5460] (5461 slots) master M: 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379 slots:[5461-10922] (5462 slots) master M: d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379 slots:[10923-16383] (5461 slots) master S: f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379 replicates cb028b83f9dc463d732f6e76ca6bbcd469d948a7 S: d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379 replicates 99720241248ff0e4c6fa65c2385e92468b3b5993 S: 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379 replicates d34da8666a6f587283a1c2fca5d13691407f9462 Can I set the above configuration? (type 'yes' to accept): yes >>> Nodes configuration updated >>> Assign a different config epoch to each node >>> Sending CLUSTER MEET messages to join the cluster Waiting for the cluster to join .... >>> Performing Cluster Check (using node 10.0.0.8:6379) M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots:[0-5460] (5461 slots) master 1 additional replica(s) S: 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379 slots: (0 slots) slave replicates d34da8666a6f587283a1c2fca5d13691407f9462 S: f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379 slots: (0 slots) slave replicates cb028b83f9dc463d732f6e76ca6bbcd469d948a7 S: d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379 slots: (0 slots) slave replicates 99720241248ff0e4c6fa65c2385e92468b3b5993 M: 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379 slots:[5461-10922] (5462 slots) master 1 additional replica(s) M: d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379 slots:[10923-16383] (5461 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered. master:10.0.0.8---slave:10.0.0.38 master:10.0.0.18---slave:10.0.0.48 master:10.0.0.28---slave:10.0.0.58 [root@node1 ~] *** ERROR: Invalid configuration for cluster creation. *** Redis Cluster requires at least 3 master nodes. *** This is not possible with 2 nodes and 0 replicas per node. *** At least 3 nodes are required.
验证集群 查看主从状态 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 [root@redis-node1 ~] role:master connected_slaves:1 slave0:ip=10.0.0.38,port=6379,state=online,offset=896,lag=1 master_replid:3a388865080d779180ff240cb75766e7e57877da master_replid2:0000000000000000000000000000000000000000 master_repl_offset:896 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:896 [root@redis-node2 ~] role:master connected_slaves:1 slave0:ip=10.0.0.48,port=6379,state=online,offset=980,lag=1 master_replid:b9066d3cbf0c5fecc7f4d1d5cb2433999783fa3f master_replid2:0000000000000000000000000000000000000000 master_repl_offset:980 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:980 [root@redis-node3 ~] role:master connected_slaves:1 slave0:ip=10.0.0.58,port=6379,state=online,offset=980,lag=0 master_replid:53208e0ed9305d721e2fb4b3180f75c689217902 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:980 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:980 [root@redis-node4 ~] role:slave master_host:10.0.0.8 master_port:6379 master_link_status:up master_last_io_seconds_ago:1 master_sync_in_progress:0 slave_repl_offset:1036 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:3a388865080d779180ff240cb75766e7e57877da master_replid2:0000000000000000000000000000000000000000 master_repl_offset:1036 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:1036 [root@redis-node5 ~] role:slave master_host:10.0.0.18 master_port:6379 master_link_status:up master_last_io_seconds_ago:2 master_sync_in_progress:0 slave_repl_offset:1064 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:b9066d3cbf0c5fecc7f4d1d5cb2433999783fa3f master_replid2:0000000000000000000000000000000000000000 master_repl_offset:1064 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:1064 [root@redis-node6 ~] role:slave master_host:10.0.0.28 master_port:6379 master_link_status:up master_last_io_seconds_ago:7 master_sync_in_progress:0 slave_repl_offset:1078 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:53208e0ed9305d721e2fb4b3180f75c689217902 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:1078 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:1078
范例: 查看指定master节点的slave节点信息
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [root@centos8 ~] 4f146b1ac51549469036a272c60ea97f065ef832 10.0.0.28:6379@16379 master - 0 1602571565772 12 connected 10923-16383 779a24884dbe1ceb848a685c669ec5326e6c8944 10.0.0.48:6379@16379 slave 97c5dcc3f33c2fc75c7fdded25d05d2930a312c0 0 1602571565000 11 connected 97c5dcc3f33c2fc75c7fdded25d05d2930a312c0 10.0.0.18:6379@16379 master - 0 1602571564000 11 connected 5462-10922 07231a50043d010426c83f3b0788e6b92e62050f 10.0.0.58:6379@16379 slave 4f146b1ac51549469036a272c60ea97f065ef832 0 1602571565000 12 connected a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 10.0.0.8:6379@16379 myself,master - 0 1602571566000 10 connected 0-5461 cb20d58870fe05de8462787cf9947239f4bc5629 10.0.0.38:6379@16379 slave a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 0 1602571566780 10 connected [root@centos8 ~] 1) "cb20d58870fe05de8462787cf9947239f4bc5629 10.0.0.38:6379@16379 slave a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 0 1602571574844 10 connected"
验证集群状态 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 [root@redis-node1 ~] cluster_state:ok cluster_slots_assigned:16384 cluster_slots_ok:16384 cluster_slots_pfail:0 cluster_slots_fail:0 cluster_known_nodes:6 cluster_size:3 cluster_current_epoch:6 cluster_my_epoch:1 cluster_stats_messages_ping_sent:837 cluster_stats_messages_pong_sent:811 cluster_stats_messages_sent:1648 cluster_stats_messages_ping_received:806 cluster_stats_messages_pong_received:837 cluster_stats_messages_meet_received:5 cluster_stats_messages_received:1648 [root@redis-node1 ~] 10.0.0.18:6379 (99720241...) -> 0 keys | 5462 slots | 1 slaves. 10.0.0.28:6379 (d34da866...) -> 0 keys | 5461 slots | 1 slaves. 10.0.0.8:6379 (cb028b83...) -> 0 keys | 5461 slots | 1 slaves. [OK] 0 keys in 3 masters. 0.00 keys per slot on average.
查看对应关系 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 [root@redis-node1 ~] Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe. 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379@16379 slave d34da8666a6f587283a1c2fca5d13691407f9462 0 1582344815790 6 connected f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379@16379 slave cb028b83f9dc463d732f6e76ca6bbcd469d948a7 0 1582344811000 4 connected d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379@16379 slave 99720241248ff0e4c6fa65c2385e92468b3b5993 0 1582344815000 5 connected 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379@16379 master - 0 1582344813000 2 connected 5461-10922 d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379@16379 master - 0 1582344814780 3 connected 10923-16383 cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379@16379 myself,master - 0 1582344813000 1 connected 0-5460 [root@redis-node1 ~] 10.0.0.18:6379 (99720241...) -> 0 keys | 5462 slots | 1 slaves. 10.0.0.28:6379 (d34da866...) -> 0 keys | 5461 slots | 1 slaves. 10.0.0.8:6379 (cb028b83...) -> 0 keys | 5461 slots | 1 slaves. [OK] 0 keys in 3 masters. 0.00 keys per slot on average. >>> Performing Cluster Check (using node 10.0.0.38:6379) S: f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379 slots: (0 slots) slave replicates cb028b83f9dc463d732f6e76ca6bbcd469d948a7 S: d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379 slots: (0 slots) slave replicates 99720241248ff0e4c6fa65c2385e92468b3b5993 M: 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379 slots:[5461-10922] (5462 slots) master 1 additional replica(s) S: 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379 slots: (0 slots) slave replicates d34da8666a6f587283a1c2fca5d13691407f9462 M: d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379 slots:[10923-16383] (5461 slots) master 1 additional replica(s) M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots:[0-5460] (5461 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered.
测试集群写入数据
redis cluster 写入key 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [root@redis-node1 ~] (error) MOVED 9189 10.0.0.18:6379 [root@redis-node1 ~] OK [root@redis-node1 ~] "values1" [root@redis-node1 ~] 1) "key1" [root@redis-node1 ~] (error) MOVED 9189 10.0.0.18:6379
redis cluster 计算key所属的slot 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 [root@centos8 ~] 4f146b1ac51549469036a272c60ea97f065ef832 10.0.0.28:6379@16379 master - 0 1602561649000 12 connected 10923-16383 779a24884dbe1ceb848a685c669ec5326e6c8944 10.0.0.48:6379@16379 slave 97c5dcc3f33c2fc75c7fdded25d05d2930a312c0 0 1602561648000 11 connected 97c5dcc3f33c2fc75c7fdded25d05d2930a312c0 10.0.0.18:6379@16379 master - 0 1602561650000 11 connected 5462-10922 07231a50043d010426c83f3b0788e6b92e62050f 10.0.0.58:6379@16379 slave 4f146b1ac51549469036a272c60ea97f065ef832 0 1602561650229 12 connected a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 10.0.0.8:6379@16379 myself,master - 0 1602561650000 10 connected 0-5461 cb20d58870fe05de8462787cf9947239f4bc5629 10.0.0.38:6379@16379 slave a177c5cbc2407ebb6230ea7e2a7de914bf8c2dab 0 1602561651238 10 connected [root@centos8 ~] (integer ) 866 [root@centos8 ~] OK [root@centos8 ~] (integer ) 5798 [root@centos8 ~] (error) MOVED 5798 10.0.0.18:6379 [root@centos8 ~] OK [root@centos8 ~] "wang" [root@centos8 ~] 10.0.0.8:6379> cluster keyslot linux (integer ) 12299 10.0.0.8:6379> set linux love -> Redirected to slot [12299] located at 10.0.0.28:6379 OK 10.0.0.28:6379> get linux "love" 10.0.0.28:6379> exit [root@centos8 ~] "love"
Python 程序实现 Redis Cluster 访问 官网:
1 https://github.com/Grokzen/redis-py-cluster
范例
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 [root@ubuntu2204 ~] [root@ubuntu2204 ~] [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] from rediscluster import RedisCluster startup_nodes = [ {"host" :"10.0.0.8" , "port" :6379}, {"host" :"10.0.0.18" , "port" :6379}, {"host" :"10.0.0.28" , "port" :6379}, {"host" :"10.0.0.38" , "port" :6379}, {"host" :"10.0.0.48" , "port" :6379}, {"host" :"10.0.0.58" , "port" :6379} ] redis_conn= RedisCluster(startup_nodes=startup_nodes,password='123456' , decode_responses=True) for i in range(0, 10000): redis_conn.set('key' +str(i),'value' +str(i)) print ('key' +str(i)+':' ,redis_conn.get('key' +str(i))) [root@redis-node1 ~] [root@redis-node1 ~] ...... key9998: value9998 key9999: value9999 [root@redis-node1 ~] 10.0.0.8:6379> DBSIZE (integer ) 3331 10.0.0.8:6379> GET key1 (error) MOVED 9189 10.0.0.18:6379 10.0.0.8:6379> GET key2 "value2" 10.0.0.8:6379> GET key3 "value3" 10.0.0.8:6379> KEYS * ...... 3329) "key7832" 3330) "key2325" 3331) "key2880" 10.0.0.8:6379> [root@redis-node1 ~] (integer ) 3340 [root@redis-node1 ~] "value1" [root@redis-node1 ~] (integer ) 3329 [root@redis-node1 ~] "value5"
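Note that redis-py-cluster targets older redis-py releases; since redis-py 4.1 a cluster client is built into redis-py itself, so an equivalent sketch (same assumptions about node address and password as the script above) looks like this:

from redis.cluster import RedisCluster

rc = RedisCluster(host="10.0.0.8", port=6379, password="123456", decode_responses=True)
rc.set("key10001", "value10001")
print(rc.get("key10001"))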
模拟故障实现故障转移 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 [root@redis-node2 ~] [root@redis-node2 ~] 127.0.0.1:6379> shutdown not connected> exit [root@redis-node2 ~] State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 0.0.0.0:22 0.0.0.0:* LISTEN 0 100 127.0.0.1:25 0.0.0.0:* LISTEN 0 128 [::]:22 [::]:* LISTEN 0 100 [::1]:25 [::]:* [root@redis-node2 ~] Could not connect to Redis at 10.0.0.18:6379: Connection refused 10.0.0.8:6379 (cb028b83...) -> 3331 keys | 5461 slots | 1 slaves. 10.0.0.48:6379 (d04e524d...) -> 3340 keys | 5462 slots | 0 slaves. 10.0.0.28:6379 (d34da866...) -> 3329 keys | 5461 slots | 1 slaves. [OK] 10000 keys in 3 masters. 0.61 keys per slot on average. [root@redis-node2 ~] Could not connect to Redis at 10.0.0.18:6379: Connection refused 10.0.0.8:6379 (cb028b83...) -> 3331 keys | 5461 slots | 1 slaves. 10.0.0.48:6379 (d04e524d...) -> 3340 keys | 5462 slots | 0 slaves. 10.0.0.28:6379 (d34da866...) -> 3329 keys | 5461 slots | 1 slaves. [OK] 10000 keys in 3 masters. 0.61 keys per slot on average. >>> Performing Cluster Check (using node 10.0.0.8:6379) M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots:[0-5460] (5461 slots) master 1 additional replica(s) S: 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379 slots: (0 slots) slave replicates d34da8666a6f587283a1c2fca5d13691407f9462 S: f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379 slots: (0 slots) slave replicates cb028b83f9dc463d732f6e76ca6bbcd469d948a7 M: d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379 slots:[5461-10922] (5462 slots) master M: d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379 slots:[10923-16383] (5461 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered. 
[root@redis-node2 ~] 10.0.0.48:6379> INFO replication role:master connected_slaves:0 master_replid:0000698bc2c6452d8bfba68246350662ae41d8fd master_replid2:b9066d3cbf0c5fecc7f4d1d5cb2433999783fa3f master_repl_offset:2912424 second_repl_offset:2912425 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1863849 repl_backlog_histlen:1048576 10.0.0.48:6379> [root@redis-node2 ~] [root@redis-node2 ~] 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379@16379 myself,slave d04e524daec4d8e22bdada7f21a9487c2d3e1057 0 1582352081847 2 connected f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379@16379 slave cb028b83f9dc463d732f6e76ca6bbcd469d948a7 1582352081868 1582352081847 4 connected cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379@16379 master - 1582352081868 1582352081847 1 connected 0-5460 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379@16379 slave d34da8666a6f587283a1c2fca5d13691407f9462 1582352081869 1582352081847 3 connected d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379@16379 master - 1582352081869 1582352081847 7 connected 5461-10922 d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379@16379 master - 1582352081869 1582352081847 3 connected 10923-16383 vars currentEpoch 7 lastVoteEpoch 0 [root@redis-node2 ~] 10.0.0.48:6379> INFO replication role:master connected_slaves:1 slave0:ip=10.0.0.18,port=6379,state=online,offset=2912564,lag=1 master_replid:0000698bc2c6452d8bfba68246350662ae41d8fd master_replid2:b9066d3cbf0c5fecc7f4d1d5cb2433999783fa3f master_repl_offset:2912564 second_repl_offset:2912425 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1863989 repl_backlog_histlen:1048576 10.0.0.48:6379>
实战案例:基于 Redis 4 的 Redis Cluster 部署 准备 Redis Cluster 基本环境配置
准备三台 CentOS 7 主机,已编译安装好Redis,各启动两个Redis实例,分别使用6379和6380端口,从而模拟实现6台Redis实例
1 2 3 10.0.0.7:6379|6380 10.0.0.17:6379|6380 10.0.0.27:6379|6380
准备6个实例:在三个主机上重复下面的操作
范例: 3个节点6个实例
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 etc] [root@redis-node1 etc] [root@redis-node1 etc] [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 100 127.0.0.1:25 *:* LISTEN 0 128 *:16379 *:* LISTEN 0 128 *:16380 *:* LISTEN 0 128 *:6379 *:* LISTEN 0 128 *:6380 *:* LISTEN 0 128 *:22 *:* LISTEN 0 100 [::1]:25 [::]:* LISTEN 0 128 [::]:22 [::]:* [root@redis-node1 ~] redis 71539 1 0 22:13 ? 00:00:00 /apps/redis/bin/redis-server 0.0.0.0:6379 [cluster] redis 71543 1 0 22:13 ? 00:00:00 /apps/redis/bin/redis-server 0.0.0.0:6380 [cluster] root 71553 31781 0 22:15 pts/0 00:00:00 grep --color=auto redis [root@redis-node1 ~] /apps/redis ├── bin │ ├── redis-benchmark │ ├── redis-check-aof │ ├── redis-check-rdb │ ├── redis-cli │ ├── redis-sentinel -> redis-server │ └── redis-server ├── data │ ├── appendonly6380.aof │ ├── appendonly.aof │ ├── nodes-6379.conf │ └── nodes-6380.conf ├── etc │ ├── redis6380.conf │ └── redis.conf ├── log │ ├── redis-6379.log │ └── redis-6380.log └── run ├── redis_6379.pid └── redis_6380.pid 5 directories, 16 files
Preparing the redis-trib.rb tool Redis 3 and 4 are managed with the official redis-trib.rb tool; it is written in Ruby and therefore needs the ruby redis client module, but the ruby version shipped in CentOS 7's yum repositories is too old to run redis-trib.rb.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 [root@redis-node1 ~] /usr/local/src/redis-4.0.14/src/redis-trib.rb [root@redis-node1 ~] [root@redis-node1 ~] /usr/bin/env: ruby: No such file or directory [root@redis-node1 ~] [root@redis-node1 ~] Fetching: redis-4.1.3.gem (100%) ERROR: Error installing redis: redis requires Ruby version >= 2.3.0.
解决ruby版本问题:
1 2 3 4 5 6 7 8 9 10 11 12 [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ruby-2.5.5] [root@redis-node1 ruby-2.5.5] [root@redis-node1 ruby-2.5.5] /usr/local/bin/ruby [root@redis-node1 ruby-2.5.5] ruby 2.5.5p157 (2019-03-15 revision 67260) [x86_64-linux] [root@redis-node1 ruby-2.5.5]
编译安装高版本的ruby后,运行redis-trib.rb 时仍然还会出错
1 2 3 4 5 [root@redis-node1 ~] Traceback (most recent call last): 2: from /usr/bin/redis-trib.rb:25:in `<main>' 1: from /usr/local/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require' /usr/local/lib/ruby/2.5.0/rubygems/core_ext/kernel_require.rb:59:in `require': cannot load such file -- redis LoadError)
解决上述错误:
1 2 3 4 5 6 7 8 9 [root@redis-node1 ~] Fetching: redis-4.1.3.gem (100%) Successfully installed redis-4.1.3 Parsing documentation for redis-4.1.3 Installing ri documentation for redis-4.1.3 Done installing documentation for redis after 1 seconds 1 gem installed
如果无法在线安装,可以下载redis模块安装包离线安装
redis-trib.rb 命令用法 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 [root@redis-node1 ~] Usage: redis-trib <command > <options> <arguments ...> create host1:port1 ... hostN:portN --replicas <arg> check host:port info host:port fix host:port --timeout <arg> reshard host:port --from <arg> --to <arg> --slots <arg> --yes --timeout <arg> --pipeline <arg> rebalance host:port --weight <arg> --auto-weights --use-empty-masters --timeout <arg> --simulate --pipeline <arg> --threshold <arg> add-node new_host:new_port existing_host:existing_port --slave --master-id <arg> del-node host:port node_id set-timeout host:port milliseconds call host:port command arg arg .. arg import host:port --from <arg> --copy --replace help (show this help )
修改密码 Redis 登录密码 1 2 3 4 [root@redis ~] :password => 123456,
创建 Redis Cluster 集群 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 [root@redis-node1 ~] active active [root@redis-node2 ~] active active [root@redis-node3 ~] active active [root@redis-node1 ~] >>> Creating cluster >>> Performing hash slots allocation on 6 nodes... Using 3 masters: 10.0.0.7:6379 10.0.0.17:6379 10.0.0.27:6379 Adding replica 10.0.0.17:6380 to 10.0.0.7:6379 Adding replica 10.0.0.27:6380 to 10.0.0.17:6379 Adding replica 10.0.0.7:6380 to 10.0.0.27:6379 M: 739cb4c9895592131de418b8bc65990f81b75f3a 10.0.0.7:6379 slots:0-5460 (5461 slots) master S: 0e0beba04cc98da02ebdb5225a11b84aa8062e10 10.0.0.7:6380 replicates a01fd3d81922d6752f7c960f1a75b6e8f28d911b M: dddabb4e19235ec02ae96ab2ce67e295ce0274d7 10.0.0.17:6379 slots:5461-10922 (5462 slots) master S: 34708909088ba562decbc1525a9606e088bdddf1 10.0.0.17:6380 replicates 739cb4c9895592131de418b8bc65990f81b75f3a M: a01fd3d81922d6752f7c960f1a75b6e8f28d911b 10.0.0.27:6379 slots:10923-16383 (5461 slots) master S: aefc6203958859024b8383b2fdb87b9e09411ccd 10.0.0.27:6380 replicates dddabb4e19235ec02ae96ab2ce67e295ce0274d7 Can I set the above configuration? (type 'yes' to accept): yes >>> Nodes configuration updated >>> Assign a different config epoch to each node >>> Sending CLUSTER MEET messages to join the cluster Waiting for the cluster to join .... >>> Performing Cluster Check (using node 10.0.0.7:6379) M: 739cb4c9895592131de418b8bc65990f81b75f3a 10.0.0.7:6379 slots:0-5460 (5461 slots) master 1 additional replica(s) S: 0e0beba04cc98da02ebdb5225a11b84aa8062e10 10.0.0.7:6380 slots: (0 slots) slave replicates a01fd3d81922d6752f7c960f1a75b6e8f28d911b S: 34708909088ba562decbc1525a9606e088bdddf1 10.0.0.17:6380 slots: (0 slots) slave replicates 739cb4c9895592131de418b8bc65990f81b75f3a S: aefc6203958859024b8383b2fdb87b9e09411ccd 10.0.0.27:6380 slots: (0 slots) slave replicates dddabb4e19235ec02ae96ab2ce67e295ce0274d7 M: a01fd3d81922d6752f7c960f1a75b6e8f28d911b 10.0.0.27:6379 slots:10923-16383 (5461 slots) master 1 additional replica(s) M: dddabb4e19235ec02ae96ab2ce67e295ce0274d7 10.0.0.17:6379 slots:5461-10922 (5462 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered.
如果有之前的操作导致Redis集群创建报错,则执行清空数据和集群命令:
1 2 3 4 127.0.0.1:6379> FLUSHALL OK 127.0.0.1:6379> cluster reset OK
查看 Redis Cluster 集群状态 自动生成配置文件记录master/slave对应关系
1 2 3 4 5 6 7 8 9 10 11 [root@redis-node1 ~] 0e0beba04cc98da02ebdb5225a11b84aa8062e10 10.0.0.7:6380@16380 slave a01fd3d81922d6752f7c960f1a75b6e8f28d911b 0 1582383256000 5 connected 34708909088ba562decbc1525a9606e088bdddf1 10.0.0.17:6380@16380 slave 739cb4c9895592131de418b8bc65990f81b75f3a 0 1582383256216 4 connected aefc6203958859024b8383b2fdb87b9e09411ccd 10.0.0.27:6380@16380 slave dddabb4e19235ec02ae96ab2ce67e295ce0274d7 0 1582383257000 6 connected 739cb4c9895592131de418b8bc65990f81b75f3a 10.0.0.7:6379@16379 myself,master - 0 1582383256000 1 connected 0-5460 a01fd3d81922d6752f7c960f1a75b6e8f28d911b 10.0.0.27:6379@16379 master - 0 1582383258230 5 connected 10923-16383 dddabb4e19235ec02ae96ab2ce67e295ce0274d7 10.0.0.17:6379@16379 master - 0 1582383257223 3 connected 5461-10922 vars currentEpoch 6 lastVoteEpoch 0
查看状态
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 [root@redis-node1 ~] 10.0.0.7:6379 (739cb4c9...) -> 0 keys | 5461 slots | 1 slaves. 10.0.0.27:6379 (a01fd3d8...) -> 0 keys | 5461 slots | 1 slaves. 10.0.0.17:6379 (dddabb4e...) -> 0 keys | 5462 slots | 1 slaves. [OK] 0 keys in 3 masters. 0.00 keys per slot on average. [root@redis-node1 ~] >>> Performing Cluster Check (using node 10.0.0.7:6379) M: 739cb4c9895592131de418b8bc65990f81b75f3a 10.0.0.7:6379 slots:0-5460 (5461 slots) master 1 additional replica(s) S: 0e0beba04cc98da02ebdb5225a11b84aa8062e10 10.0.0.7:6380 slots: (0 slots) slave replicates a01fd3d81922d6752f7c960f1a75b6e8f28d911b S: 34708909088ba562decbc1525a9606e088bdddf1 10.0.0.17:6380 slots: (0 slots) slave replicates 739cb4c9895592131de418b8bc65990f81b75f3a S: aefc6203958859024b8383b2fdb87b9e09411ccd 10.0.0.27:6380 slots: (0 slots) slave replicates dddabb4e19235ec02ae96ab2ce67e295ce0274d7 M: a01fd3d81922d6752f7c960f1a75b6e8f28d911b 10.0.0.27:6379 slots:10923-16383 (5461 slots) master 1 additional replica(s) M: dddabb4e19235ec02ae96ab2ce67e295ce0274d7 10.0.0.17:6379 slots:5461-10922 (5462 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered. [root@redis-node1 ~] 127.0.0.1:6379> CLUSTER INFO cluster_state:ok cluster_slots_assigned:16384 cluster_slots_ok:16384 cluster_slots_pfail:0 cluster_slots_fail:0 cluster_known_nodes:6 cluster_size:3 cluster_current_epoch:6 cluster_my_epoch:1 cluster_stats_messages_ping_sent:252 cluster_stats_messages_pong_sent:277 cluster_stats_messages_sent:529 cluster_stats_messages_ping_received:272 cluster_stats_messages_pong_received:252 cluster_stats_messages_meet_received:5 cluster_stats_messages_received:529 127.0.0.1:6379> [root@redis-node1 ~] 29a83275db60f1c8f9f6d39b66cbc6c3d5cf20f1 10.0.0.7:6379@16379 myself,master - 0 1601985995000 1 connected 0-5460 3e607de412a8a240e8214c2d7a663cf1523412eb 10.0.0.17:6380@16380 slave 29a83275db60f1c8f9f6d39b66cbc6c3d5cf20f1 0 1601985997092 4 connected 17d0b29d2f50ea9c89d4e6e0cf3ee3ee4f7c4179 10.0.0.7:6380@16380 slave 90b206131d89b0812c626677343df9a11ff1d211 0 1601985995075 5 connected 90b206131d89b0812c626677343df9a11ff1d211 10.0.0.27:6379@16379 master - 0 1601985996084 5 connected 10923-16383 fb34c3a704aefb1e1ef2317b20598d6e1e51c010 10.0.0.17:6379@16379 master - 0 1601985995000 3 connected 5461-10922 c9ea6113a1992695fb86f5368fe6320349b0f8a6 10.0.0.27:6380@16380 slave fb34c3a704aefb1e1ef2317b20598d6e1e51c010 0 1601985996000 6 connected [root@redis-node1 ~] role:master connected_slaves:1 slave0:ip=10.0.0.17,port=6380,state=online,offset=196,lag=0 master_replid:4ee36f9374c796ca4c65a0f0cb2c39304bb2e9c9 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:196 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:196 [root@redis-node1 ~] role:slave master_host:10.0.0.27 master_port:6379 master_link_status:up master_last_io_seconds_ago:2 master_sync_in_progress:0 slave_repl_offset:224 slave_priority:100 slave_read_only:1 connected_slaves:0 master_replid:dba41cb31c14de7569e597a3d8debc1f0f114c1e master_replid2:0000000000000000000000000000000000000000 master_repl_offset:224 
second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:224
Python 实现 Redis Cluster 集群访问 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] from rediscluster import RedisCluster startup_nodes = [ {"host" :"10.0.0.7" , "port" :6379}, {"host" :"10.0.0.7" , "port" :6380}, {"host" :"10.0.0.17" , "port" :6379}, {"host" :"10.0.0.17" , "port" :6380}, {"host" :"10.0.0.27" , "port" :6379}, {"host" :"10.0.0.27" , "port" :6380} ] redis_conn= RedisCluster(startup_nodes=startup_nodes,password='123456' , decode_responses=True) for i in range(0, 10000): redis_conn.set('key' +str(i),'value' +str(i)) print ('key' +str(i)+':' ,redis_conn.get('key' +str(i))) [root@redis-node1 ~] [root@redis-node1 ~] ...... key9998: value9998 key9999: value9999
验证脚本写入的状态
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 [root@redis-node1 ~] (integer ) 3331 [root@redis-node1 ~] (integer ) 3340 [root@redis-node1 ~] (integer ) 3329 [root@redis-node1 ~] (error) MOVED 9189 10.0.0.17:6379 [root@redis-node1 ~] "value2" [root@redis-node1 ~] "value1" [root@redis-node1 ~] 10.0.0.7:6379 (739cb4c9...) -> 3331 keys | 5461 slots | 1 slaves. 10.0.0.27:6379 (a01fd3d8...) -> 3329 keys | 5461 slots | 1 slaves. 10.0.0.17:6379 (dddabb4e...) -> 3340 keys | 5462 slots | 1 slaves. [OK] 10000 keys in 3 masters. 0.61 keys per slot on average.
模拟故障实现自动的故障转移 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 [root@redis-node1 ~] [root@redis-node1 ~] >>> Performing Cluster Check (using node 10.0.0.27:6379) M: a01fd3d81922d6752f7c960f1a75b6e8f28d911b 10.0.0.27:6379 slots:10923-16383 (5461 slots) master 1 additional replica(s) S: aefc6203958859024b8383b2fdb87b9e09411ccd 10.0.0.27:6380 slots: (0 slots) slave replicates dddabb4e19235ec02ae96ab2ce67e295ce0274d7 S: 0e0beba04cc98da02ebdb5225a11b84aa8062e10 10.0.0.7:6380 slots: (0 slots) slave replicates a01fd3d81922d6752f7c960f1a75b6e8f28d911b M: 34708909088ba562decbc1525a9606e088bdddf1 10.0.0.17:6380 slots:0-5460 (5461 slots) master 0 additional replica(s) M: dddabb4e19235ec02ae96ab2ce67e295ce0274d7 10.0.0.17:6379 slots:5461-10922 (5462 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered. [root@redis-node1 ~] Feb 22 23:23:13 centos7 redis-server: 71887:M 22 Feb 23:23:13.656 * Saving the final RDB snapshot before exiting. Feb 22 23:23:13 centos7 systemd: Stopped Redis persistent key-value database. Feb 22 23:23:13 centos7 redis-server: 71887:M 22 Feb 23:23:13.660 * DB saved on disk Feb 22 23:23:13 centos7 redis-server: 71887:M 22 Feb 23:23:13.660 * Removing the pid file. Feb 22 23:23:13 centos7 redis-server: 71887:M 22 Feb 23:23:13.660 Feb 22 23:23:13 centos7 systemd: Unit redis.service entered failed state. Feb 22 23:23:13 centos7 systemd: redis.service failed. Feb 22 23:23:30 centos7 redis-server: 72046:S 22 Feb 23:23:30.077 * FAIL message received from dddabb4e19235ec02ae96ab2ce67e295ce0274d7 about 739cb4c9895592131de418b8bc65990f81b75f3a Feb 22 23:23:30 centos7 redis-server: 72046:S 22 Feb 23:23:30.077 Feb 22 23:23:30 centos7 redis-server: 72046:S 22 Feb 23:23:30.701 [root@redis-node1 ~] 10.0.0.27:6379 (a01fd3d8...) -> 3329 keys | 5461 slots | 1 slaves. 10.0.0.17:6380 (34708909...) -> 3331 keys | 5461 slots | 0 slaves. 10.0.0.17:6379 (dddabb4e...) -> 3340 keys | 5462 slots | 1 slaves. [OK] 10000 keys in 3 masters. 0.61 keys per slot on average.
After the failed master is recovered, the node automatically rejoins the cluster as a new slave
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 [root@redis-node1 ~] [root@redis-node1 ~] >>> Performing Cluster Check (using node 10.0.0.27:6379) M: a01fd3d81922d6752f7c960f1a75b6e8f28d911b 10.0.0.27:6379 slots:10923-16383 (5461 slots) master 1 additional replica(s) S: aefc6203958859024b8383b2fdb87b9e09411ccd 10.0.0.27:6380 slots: (0 slots) slave replicates dddabb4e19235ec02ae96ab2ce67e295ce0274d7 S: 739cb4c9895592131de418b8bc65990f81b75f3a 10.0.0.7:6379 slots: (0 slots) slave replicates 34708909088ba562decbc1525a9606e088bdddf1 S: 0e0beba04cc98da02ebdb5225a11b84aa8062e10 10.0.0.7:6380 slots: (0 slots) slave replicates a01fd3d81922d6752f7c960f1a75b6e8f28d911b M: 34708909088ba562decbc1525a9606e088bdddf1 10.0.0.17:6380 slots:0-5460 (5461 slots) master 1 additional replica(s) M: dddabb4e19235ec02ae96ab2ce67e295ce0274d7 10.0.0.17:6379 slots:5461-10922 (5462 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered.
Redis Cluster Management
Cluster Scale-Out
Scale-out scenario: the number of clients has surged and the existing Redis Cluster can no longer keep up with the growing volume of concurrent requests. To solve this, two new servers have been purchased and must be added to the existing cluster dynamically, without affecting normal business access.
Note: in production an odd number of master nodes (e.g. 3, 5, 7) is generally recommended to help prevent split-brain.
Preparing the nodes to be added
The new Redis nodes must use the same Redis version and configuration as the existing nodes. Then start the two new Redis nodes; they will serve as one master and one slave.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [root@redis-node7 ~] [root@redis-node7 ~] [root@redis-node7 ~] [root@redis-node7 ~] [root@redis-node8 ~] [root@redis-node8 ~] [root@redis-node8 ~] [root@redis-node8 ~]
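A sketch of the preparation typically done on each new host (redis-node7 and redis-node8), assuming a packaged Redis, the default config path /etc/redis.conf and the password 123456 used elsewhere in this section; adjust paths and the install method to match the existing nodes:

# install the same Redis version as the existing cluster nodes (package or compiled build)
dnf -y install redis
# enable cluster mode and set the same passwords as the rest of the cluster
sed -i -e 's/^bind .*/bind 0.0.0.0/' \
       -e 's/^# cluster-enabled yes/cluster-enabled yes/' \
       -e 's/^# masterauth .*/masterauth 123456/' \
       -e 's/^# requirepass .*/requirepass 123456/' /etc/redis.conf
systemctl enable --now redis
ss -tnl | grep 6379        # confirm the node is listening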
Adding the new master node to the cluster
Use the following command to add the new node, giving the IP and port of the new Redis node followed by the IP:port of any node already in the existing cluster.
1 2 3 4 5 6 add-node new_host:new_port existing_host:existing_port [--slave --master-id <arg>] new_host:new_port existing_host:existing_port
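Concretely, with the addresses used in the captured sessions that follow (10.0.0.37 joining via 10.0.0.7 on Redis 3/4, and 10.0.0.68 joining via 10.0.0.58 on Redis 5), the invocations look roughly like this; a sketch, so substitute your own IPs and password:

# Redis 3/4: redis-trib.rb ships with the Redis source
redis-trib.rb add-node 10.0.0.37:6379 10.0.0.7:6379
# Redis 5 and later: the cluster tooling is built into redis-cli
redis-cli -a 123456 --cluster add-node 10.0.0.68:6379 10.0.0.58:6379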
Add command for Redis 3/4:
1 2 3 4 5 6 7 8 9 [root@redis-node1 ~] [root@redis-node1 ~] 10.0.0.7:6379 (29a83275...) -> 3331 keys | 5461 slots | 1 slaves. 10.0.0.37:6379 (12ca273a...) -> 0 keys | 0 slots | 0 slaves. 10.0.0.27:6379 (90b20613...) -> 3329 keys | 5461 slots | 1 slaves. 10.0.0.17:6379 (fb34c3a7...) -> 3340 keys | 5462 slots | 1 slaves. [OK] 10000 keys in 4 masters. 0.61 keys per slot on average.
Add command for Redis 5 and later:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 [root@redis-node1 ~] >>> Adding node 10.0.0.68:6379 to cluster 10.0.0.58:6379 >>> Performing Cluster Check (using node 10.0.0.58:6379) S: 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379 slots: (0 slots) slave replicates d34da8666a6f587283a1c2fca5d13691407f9462 M: d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379 slots:[5461-10922] (5462 slots) master 1 additional replica(s) M: d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379 slots:[10923-16383] (5461 slots) master 1 additional replica(s) S: 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379 slots: (0 slots) slave replicates d04e524daec4d8e22bdada7f21a9487c2d3e1057 S: f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379 slots: (0 slots) slave replicates cb028b83f9dc463d732f6e76ca6bbcd469d948a7 M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots:[0-5460] (5461 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered. >>> Send CLUSTER MEET to node 10.0.0.68:6379 to make it join the cluster. [OK] New node added correctly. [root@redis-node1 ~] 10.0.0.8:6379 (cb028b83...) -> 6672 keys | 5461 slots | 1 slaves. 10.0.0.68:6379 (d6e2eca6...) -> 0 keys | 0 slots | 0 slaves. 10.0.0.48:6379 (d04e524d...) -> 6679 keys | 5462 slots | 1 slaves. 10.0.0.28:6379 (d34da866...) -> 6649 keys | 5461 slots | 1 slaves. [OK] 20000 keys in 5 masters. 1.22 keys per slot on average. [root@redis-node1 ~] 10.0.0.8:6379 (cb028b83...) -> 6672 keys | 5461 slots | 1 slaves. 10.0.0.68:6379 (d6e2eca6...) -> 0 keys | 0 slots | 0 slaves. 10.0.0.48:6379 (d04e524d...) -> 6679 keys | 5462 slots | 1 slaves. 10.0.0.28:6379 (d34da866...) -> 6649 keys | 5461 slots | 1 slaves. [OK] 20000 keys in 5 masters. 1.22 keys per slot on average. >>> Performing Cluster Check (using node 10.0.0.8:6379) M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots:[0-5460] (5461 slots) master 1 additional replica(s) M: d6e2eca6b338b717923f64866bd31d42e52edc98 10.0.0.68:6379 slots: (0 slots) master S: 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379 slots: (0 slots) slave replicates d34da8666a6f587283a1c2fca5d13691407f9462 S: f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379 slots: (0 slots) slave replicates cb028b83f9dc463d732f6e76ca6bbcd469d948a7 M: d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379 slots:[5461-10922] (5462 slots) master 1 additional replica(s) S: 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379 slots: (0 slots) slave replicates d04e524daec4d8e22bdada7f21a9487c2d3e1057 M: d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379 slots:[10923-16383] (5461 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered. 
[root@redis-node1 ~] d6e2eca6b338b717923f64866bd31d42e52edc98 10.0.0.68:6379@16379 master - 0 1582356107260 8 connected 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379@16379 slave d34da8666a6f587283a1c2fca5d13691407f9462 0 1582356110286 6 connected f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379@16379 slave cb028b83f9dc463d732f6e76ca6bbcd469d948a7 0 1582356108268 4 connected d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379@16379 master - 0 1582356105000 7 connected 5461-10922 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379@16379 slave d04e524daec4d8e22bdada7f21a9487c2d3e1057 0 1582356108000 7 connected d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379@16379 master - 0 1582356107000 3 connected 10923-16383 cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379@16379 myself,master - 0 1582356106000 1 connected 0-5460 vars currentEpoch 8 lastVoteEpoch 7 [root@redis-node1 ~] d6e2eca6b338b717923f64866bd31d42e52edc98 10.0.0.68:6379@16379 master - 0 1582356313200 8 connected 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379@16379 slave d34da8666a6f587283a1c2fca5d13691407f9462 0 1582356311000 6 connected f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379@16379 slave cb028b83f9dc463d732f6e76ca6bbcd469d948a7 0 1582356314208 4 connected d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379@16379 master - 0 1582356311182 7 connected 5461-10922 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379@16379 slave d04e524daec4d8e22bdada7f21a9487c2d3e1057 0 1582356312000 7 connected d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379@16379 master - 0 1582356312190 3 connected 10923-16383 cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379@16379 myself,master - 0 1582356310000 1 connected 0-5460 [root@redis-node1 ~] cluster_state:ok cluster_slots_assigned:16384 cluster_slots_ok:16384 cluster_slots_pfail:0 cluster_slots_fail:0 cluster_known_nodes:7 cluster_size:3 cluster_current_epoch:8 cluster_my_epoch:1 cluster_stats_messages_ping_sent:17442 cluster_stats_messages_pong_sent:13318 cluster_stats_messages_fail_sent:4 cluster_stats_messages_auth-ack_sent:1 cluster_stats_messages_sent:30765 cluster_stats_messages_ping_received:13311 cluster_stats_messages_pong_received:13367 cluster_stats_messages_meet_received:7 cluster_stats_messages_fail_received:1 cluster_stats_messages_auth-req_received:1 cluster_stats_messages_received:26687
Reassigning slots on the new master
After the new node joins the cluster it is a master by default, but it owns no slots. Slots must be reassigned to it; otherwise, with no slots, it cannot be accessed.
Note: reassigning slots requires the data to be cleared, so back up the data first and restore it after the scale-out.
Redis 3/4 commands:
1 2 3 [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~]
Commands for Redis 5 and later:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 [root@redis-node1 ~] >>> Performing Cluster Check (using node 10.0.0.68:6379) M: d6e2eca6b338b717923f64866bd31d42e52edc98 10.0.0.68:6379 slots: (0 slots) master M: d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379 slots:[10923-16383] (5461 slots) master 1 additional replica(s) M: d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379 slots:[5461-10922] (5462 slots) master 1 additional replica(s) M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots:[0-5460] (5461 slots) master 1 additional replica(s) S: 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379 slots: (0 slots) slave replicates d04e524daec4d8e22bdada7f21a9487c2d3e1057 M: f67f1c02c742cd48d3f48d8c362f9f1b9aa31549 10.0.0.78:6379 slots: (0 slots) master S: f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379 slots: (0 slots) slave replicates cb028b83f9dc463d732f6e76ca6bbcd469d948a7 S: 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379 slots: (0 slots) slave replicates d34da8666a6f587283a1c2fca5d13691407f9462 [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered. How many slots do you want to move (from 1 to 16384)?4096 What is the receiving node ID? d6e2eca6b338b717923f64866bd31d42e52edc98 Please enter all the source node IDs. Type 'all' to use all the nodes as source nodes for the hash slots. Type 'done' once you entered all the source nodes IDs. Source node ...... Do you want to proceed with the proposed reshard plan (yes /no)? yes ...... Moving slot 12280 from 10.0.0.28:6379 to 10.0.0.68:6379: . Moving slot 12281 from 10.0.0.28:6379 to 10.0.0.68:6379: . Moving slot 12282 from 10.0.0.28:6379 to 10.0.0.68:6379: Moving slot 12283 from 10.0.0.28:6379 to 10.0.0.68:6379: .. Moving slot 12284 from 10.0.0.28:6379 to 10.0.0.68:6379: Moving slot 12285 from 10.0.0.28:6379 to 10.0.0.68:6379: . Moving slot 12286 from 10.0.0.28:6379 to 10.0.0.68:6379: Moving slot 12287 from 10.0.0.28:6379 to 10.0.0.68:6379: .. [root@redis-node1 ~] 10.0.0.8:6379 (cb028b83...) -> 5019 keys | 4096 slots | 1 slaves. 10.0.0.68:6379 (d6e2eca6...) -> 4948 keys | 4096 slots | 0 slaves. 10.0.0.48:6379 (d04e524d...) -> 5033 keys | 4096 slots | 1 slaves. 10.0.0.28:6379 (d34da866...) -> 5000 keys | 4096 slots | 1 slaves. [OK] 20000 keys in 5 masters. 1.22 keys per slot on average. 
>>> Performing Cluster Check (using node 10.0.0.8:6379) M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots:[1365-5460] (4096 slots) master 1 additional replica(s) M: d6e2eca6b338b717923f64866bd31d42e52edc98 10.0.0.68:6379 slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master S: 9875b50925b4e4f29598e6072e5937f90df9fc71 10.0.0.58:6379 slots: (0 slots) slave replicates d34da8666a6f587283a1c2fca5d13691407f9462 S: f9adcfb8f5a037b257af35fa548a26ffbadc852d 10.0.0.38:6379 slots: (0 slots) slave replicates cb028b83f9dc463d732f6e76ca6bbcd469d948a7 M: d04e524daec4d8e22bdada7f21a9487c2d3e1057 10.0.0.48:6379 slots:[6827-10922] (4096 slots) master 1 additional replica(s) S: 99720241248ff0e4c6fa65c2385e92468b3b5993 10.0.0.18:6379 slots: (0 slots) slave replicates d04e524daec4d8e22bdada7f21a9487c2d3e1057 M: d34da8666a6f587283a1c2fca5d13691407f9462 10.0.0.28:6379 slots:[12288-16383] (4096 slots) master 1 additional replica(s) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered.
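The reshard session above answers the prompts interactively. The same operation can be run non-interactively, which is handy for scripting; a sketch using the receiving node ID shown above:

# move 4096 slots from all existing masters to the new master, with no prompts
redis-cli -a 123456 --cluster reshard 10.0.0.68:6379 \
    --cluster-from all \
    --cluster-to d6e2eca6b338b717923f64866bd31d42e52edc98 \
    --cluster-slots 4096 \
    --cluster-yes
# verify the resulting slot layout
redis-cli -a 123456 --cluster check 10.0.0.68:6379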
Assigning a slave node to the new master
The new master in the current Redis cluster is a single point of failure, so a corresponding slave node must still be added to it to provide high availability.
There are two ways to do this:
**Method 1: set the node as a slave directly while adding it to the cluster**
Redis 3/4 add command:
1 redis-trib.rb add-node --slave --master-id 750cab050bc81f2655ed53900fd43d2e64423333 10.0.0.77:6379 <any-cluster-node>:6379
Add command for Redis 5 and later:
1 redis-cli -a 123456 --cluster add-node 10.0.0.78:6379 <any-cluster-node>:6379 --cluster-slave --cluster-master-id d6e2eca6b338b717923f64866bd31d42e52edc98
Example:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] [root@centos8 ~] cluster_state:ok cluster_slots_assigned:16384 cluster_slots_ok:16384 cluster_slots_pfail:0 cluster_slots_fail:0 cluster_known_nodes:8 cluster_size:4
Method 2: add the new node to the cluster first, then convert it to a slave
Redis 3/4 commands:
Commands for Redis 5 and later:
The node must be manually designated as the slave of a specific master; otherwise its default role is master.
1 2 3 4 [root@redis-node1 ~] 10.0.0.78:6380> CLUSTER NODES 10.0.0.78:6380> CLUSTER REPLICATE 886338acd50c3015be68a760502b239f4509881c 10.0.0.78:6380> CLUSTER NODES
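Putting method 2 together, a sketch of the whole sequence, assuming the new node is 10.0.0.78:6380, that 10.0.0.8:6379 is any existing cluster member, and that the target master ID is the one shown in the session above:

# 1. join the new node to the cluster; it comes in as a master with no slots
redis-cli -a 123456 --cluster add-node 10.0.0.78:6380 10.0.0.8:6379
# 2. make the new node a replica of the intended master
redis-cli -a 123456 --no-auth-warning -h 10.0.0.78 -p 6380 \
    CLUSTER REPLICATE 886338acd50c3015be68a760502b239f4509881c
# 3. confirm its new role
redis-cli -a 123456 --no-auth-warning -h 10.0.0.78 -p 6380 CLUSTER NODES | grep myself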
Cluster Scale-In
Scale-in scenario:
Business has shrunk and the user base has dropped noticeably. It has been decided that two of the eight hosts in the current Redis cluster will be taken offline and repurposed, and performance after the scale-in must still meet current business needs.
Node removal process:
Scale-out adds the node to the cluster first and then assigns slots; scale-in works in the opposite order: the slots on the node to be removed are first migrated to other nodes in the cluster, and only then can the node be removed from the cluster. If a node's slots have not been migrated away completely, deleting it fails with an error indicating that it still holds data.
Migrating the slots of the master to be removed to other masters
Note: the source Redis master being migrated must end up with no data on it; otherwise the migration reports an error and is forcibly interrupted.
Redis 3/4 commands
1 2 [root@redis-node1 ~] [root@redis-node1 ~]
Commands for Redis 5 and later
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 [root@redis-node1 ~] M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots:[1365-5460] (4096 slots) master 1 additional replica(s) [root@redis-node1 ~] How many slots do you want to move (from 1 to 16384)? 1356 What is the receiving node ID? d34da8666a6f587283a1c2fca5d13691407f9462 Please enter all the source node IDs. Type 'all' to use all the nodes as source nodes for the hash slots. Type 'done' once you entered all the source nodes IDs. Source node Source node [root@redis-node1 ~] [root@redis-node1 ~] [root@redis-node1 ~] M: cb028b83f9dc463d732f6e76ca6bbcd469d948a7 10.0.0.8:6379 slots: (0 slots) master [root@redis-node1 ~] [root@centos8 ~] cluster_state:ok cluster_slots_assigned:16384 cluster_slots_ok:16384 cluster_slots_pfail:0 cluster_slots_fail:0 cluster_known_nodes:8 cluster_size:3
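Draining the departing master can also be done non-interactively by naming it as the only source node; a sketch using the node IDs shown above (the captured session instead moves the slots in several interactive rounds):

# move all remaining slots off the node being removed to one surviving master
redis-cli -a 123456 --cluster reshard 10.0.0.8:6379 \
    --cluster-from cb028b83f9dc463d732f6e76ca6bbcd469d948a7 \
    --cluster-to d34da8666a6f587283a1c2fca5d13691407f9462 \
    --cluster-slots 4096 \
    --cluster-yes
# the drained node should now report 0 slots
redis-cli -a 123456 --cluster check 10.0.0.8:6379 | grep cb028b83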
Removing the server from the cluster
After the steps above the slots have been migrated away, but the node is still a member of the cluster, so it must also be removed from the cluster.
Note: before deleting the server, the slots on it must be cleared; otherwise deleting the host will fail.
Redis 3/4 commands:
1 2 3 4 5 [root@s~] >>> Removing node dfffc371085859f2858730e1f350e9167e287073 from cluster 192.168.7.102:6379 >>> Sending CLUSTER FORGET messages to the cluster... >>> SHUTDOWN the node.
Commands for Redis 5 and later:
1 2 3 4 5 6 7 8 9 [root@redis-node1 ~] >>> Removing node cb028b83f9dc463d732f6e76ca6bbcd469d948a7 from cluster 10.0.0.8:6379 >>> Sending CLUSTER FORGET messages to the cluster... >>> SHUTDOWN the node. [root@redis-node1 ~]
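The del-node invocation has the following general form; a sketch using the node ID shown above:

# del-node takes any reachable cluster member plus the ID of the node to remove;
# the removed node's redis process is shut down automatically
redis-cli -a 123456 --cluster del-node 10.0.0.8:6379 cb028b83f9dc463d732f6e76ca6bbcd469d948a7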
Deleting the now-redundant slave node and verifying the result 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 [root@redis-node1 ~] State Recv-Q Send-Q Local Address:Port Peer Address:Port LISTEN 0 128 0.0.0.0:22 0.0.0.0:* LISTEN 0 128 [::]:22 [::]:* [root@redis-node1 ~] [root@redis-node1 ~] >>> Removing node f9adcfb8f5a037b257af35fa548a26ffbadc852d from cluster 10.0.0.18:6379 >>> Sending CLUSTER FORGET messages to the cluster... >>> SHUTDOWN the node. [root@redis-node4 ~] [root@redis-node1 ~] [root@redis-node1 ~] cluster_known_nodes:6
Importing existing Redis data into the cluster
Redis officially provides a tool for migrating the data of a single Redis node into a cluster; some companies have also developed offline migration tools.
Official tool: redis-cli --cluster import
Third-party online migration tools (implemented by emulating a slave node), for example Vipshop's redis-migrate-tool and Wandoujia's redis-port
Import scenario:
The business data originally lived on a single-node host. As traffic grew, a Redis cluster was built, and the old data now needs to be imported into the newly built Redis cluster.
Note: the Redis cluster must not contain keys with the same names as the data being imported; otherwise the import fails or is interrupted.
Preparing the environment
Because the import cannot specify an authentication password, the passwords on all Redis nodes must be disabled before importing the data.
1 2 3 4 5 6 [root@ubuntu2204 ~] [root@redis ~] OK
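A sketch of clearing the passwords, assuming the cluster members are the 10.0.0.8/18/28/38/48/58 nodes used in this section, the source host is 10.0.0.78, and the current password is 123456:

# clear requirepass on every cluster node and on the source node
for node in 10.0.0.8 10.0.0.18 10.0.0.28 10.0.0.38 10.0.0.48 10.0.0.58 10.0.0.78; do
    redis-cli -h $node -a 123456 --no-auth-warning CONFIG SET requirepass ""
done
# as a precaution also clear masterauth on the cluster nodes so replication keeps
# working while requirepass is off; re-enable both after the import
for node in 10.0.0.8 10.0.0.18 10.0.0.28 10.0.0.38 10.0.0.48 10.0.0.58; do
    redis-cli -h $node CONFIG SET masterauth ""
done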
Running the data import
This imports the data of the source Redis node directly into the Redis cluster; use this approach with caution!
Redis 3/4 commands:
Commands for Redis 5 and later:
Example: import data from a non-cluster node into the Redis cluster
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 [root@centos8 ~] 10.0.0.78 [root@centos8 ~] NUM=10 PASS=123456 for i in `seq $NUM `;do redis-cli -h 127.0.0.1 -a "$PASS " --no-auth-warning set testkey${i} testvalue${i} echo "testkey${i} testvalue${i} 写入完成" done echo "$NUM 个key写入到Redis完成" [root@centos8 ~] OK testkey1 testvalue1 写入完成 OK testkey2 testvalue2 写入完成 OK testkey3 testvalue3 写入完成 OK testkey4 testvalue4 写入完成 OK testkey5 testvalue5 写入完成 OK testkey6 testvalue6 写入完成 OK testkey7 testvalue7 写入完成 OK testkey8 testvalue8 写入完成 OK testkey9 testvalue9 写入完成 OK testkey10 testvalue10 写入完成 10个key写入到Redis完成 [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] [root@centos8 ~] *** Importing 10 keys from DB 0 Migrating testkey4 to 10.0.0.18:6379: OK Migrating testkey8 to 10.0.0.18:6379: OK Migrating testkey6 to 10.0.0.28:6379: OK Migrating testkey1 to 10.0.0.8:6379: OK Migrating testkey5 to 10.0.0.8:6379: OK Migrating testkey10 to 10.0.0.28:6379: OK Migrating testkey7 to 10.0.0.18:6379: OK Migrating testkey9 to 10.0.0.8:6379: OK Migrating testkey2 to 10.0.0.28:6379: OK Migrating testkey3 to 10.0.0.18:6379: OK [root@centos8 ~] 1) "testkey5" 2) "testkey1" 3) "testkey9" [root@centos8 ~] 1) "testkey8" 2) "testkey4" 3) "testkey3" 4) "testkey7" [root@centos8 ~] 1) "testkey6" 2) "testkey10" 3) "testkey2"
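The typical form of the import command, assuming 10.0.0.78:6379 is the source single-instance node and 10.0.0.8:6379 is any member of the target cluster, is sketched below; the copy/replace options are optional:

# --cluster-copy keeps the keys on the source (COPY instead of MOVE);
# --cluster-replace overwrites keys that already exist in the cluster
redis-cli --cluster import 10.0.0.8:6379 \
    --cluster-from 10.0.0.78:6379 \
    --cluster-copy --cluster-replace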
Cluster skew
After the nodes of a Redis cluster have been running for a while, skew may appear: one node holds noticeably more data, consumes more memory, or receives more client requests than the others.
Possible causes of skew include:
Uneven distribution of nodes and slots
Large differences in the number of keys mapped to different slots
Presence of bigkeys (their use should be minimized)
Inconsistent memory-related configuration across nodes
Unbalanced hot keys: when strong consistency is not required, a local cache and a message queue can be used to absorb them
Get the number of keys in a specified slot
Example: get the key count for a specified slot
1 2 3 4 5 6 [root@centos8 ~] (integer ) 0 [root@centos8 ~] (integer ) 0 [root@centos8 ~] (integer ) 1
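The queries above follow this pattern; a sketch:

# CLUSTER COUNTKEYSINSLOT only counts keys in slots served by the node you query
redis-cli -a 123456 --no-auth-warning CLUSTER COUNTKEYSINSLOT 1
redis-cli -a 123456 --no-auth-warning CLUSTER COUNTKEYSINSLOT 2
redis-cli -a 123456 --no-auth-warning CLUSTER COUNTKEYSINSLOT 3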
Run an automatic rebalancing of the slot distribution; this affects client access, so use it with caution
Example: run an automatic slot rebalance
1 2 3 4 5 6 7 [root@centos8 ~] >>> Performing Cluster Check (using node 10.0.0.8:6379) [OK] All nodes agree about slots configuration. >>> Check for open slots... >>> Check slots coverage... [OK] All 16384 slots covered. *** No rebalancing needed! All nodes are within the 2.00% threshold.
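A sketch of the typical rebalance invocation, including a dry run first:

# dry run: report what would move without migrating anything
redis-cli -a 123456 --cluster rebalance 10.0.0.8:6379 --cluster-simulate
# actually rebalance (slots and keys move; clients may see MOVED/ASK redirects)
redis-cli -a 123456 --cluster rebalance 10.0.0.8:6379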
Find bigkeys; it is recommended to run this on a slave node
Example: find bigkeys
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 [root@centos8 ~] [00.00%] Biggest string found so far 'key8811' with 9 bytes [26.42%] Biggest string found so far 'testkey1' with 10 bytes -------- summary ------- Sampled 3335 keys in the keyspace! Total key length in bytes is 22979 (avg len 6.89) Biggest string found 'testkey1' has 10 bytes 3335 strings with 29649 bytes (100.00% of keys, avg size 8.89) 0 lists with 0 items (00.00% of keys, avg size 0.00) 0 sets with 0 members (00.00% of keys, avg size 0.00) 0 hashs with 0 fields (00.00% of keys, avg size 0.00) 0 zsets with 0 members (00.00% of keys, avg size 0.00) 0 streams with 0 entries (00.00% of keys, avg size 0.00)
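A sketch of running the built-in bigkeys report against a slave, assuming 10.0.0.38 is a slave and the password is 123456:

# --bigkeys SCANs the whole keyspace, so point it at a slave to spare the master
redis-cli -h 10.0.0.38 -a 123456 --no-auth-warning --bigkeys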
Limitations of Redis Cluster
Read/write splitting in the cluster
In cluster mode, slave nodes accept only read-only connections; in fact, by default a slave rejects read and write requests alike: when a command tries to fetch data from a slave, the slave redirects it to the node responsible for the slot holding that data.
Why call it a read-only connection? Because a slave can execute the READONLY command, after which it will serve read requests, but this only takes effect for the current connection. Once the client disconnects and reconnects, requests are redirected again.
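A short illustration of this per-connection behaviour; the slave address is an assumption and the replies are paraphrased:

redis-cli -a 123456 --no-auth-warning -h 10.0.0.38 -p 6379   # connect to a slave
# 10.0.0.38:6379> GET key1
# (error) MOVED ...              reads are redirected by default
# 10.0.0.38:6379> READONLY
# OK
# 10.0.0.38:6379> GET key1       now served locally (assuming key1's slot belongs to
#                                this slave's master), for this connection only
# after reconnecting, requests are redirected again until READONLY is re-issued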
Read/write splitting in cluster mode is more complex: the client has to track the slaves of each master and their mapping to slots.
It is therefore usually not recommended to build read/write splitting on top of cluster mode; instead, add nodes to meet the demand. That said, because of the bandwidth consumed by inter-node communication, the official recommendation is to keep the number of nodes below 1000.
Choosing between standalone, Sentinel and Cluster
In most cases client performance is somewhat "reduced"
Commands cannot span nodes: mget, keys, scan, flush, sinter and so on
Client maintenance is more complex: extra overhead in the SDK and the application itself (for example, more connection pools)
Multiple databases are not supported: cluster mode has only db 0
Replication supports a single level only: tree-shaped replication topologies and cascading replication are not supported
Key transactions and Lua support are limited: all keys in an operation must reside on one node, and transactions and Lua scripts cannot span nodes
So before building a cluster, consider whether a standalone Redis truly can no longer handle the business's concurrency. If Redis Sentinel already provides the required high availability and concurrency is nowhere near saturation, building a cluster is overkill.
Example: the cross-slot limitation
1 2 [root@centos8 ~] (error) CROSSSLOT Keys in request don't hash to the same slot
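The standard workaround is a hash tag: only the part of the key inside {} is hashed, so keys that share a tag always land in the same slot and can be combined in one command. A short sketch (the key names are illustrative):

# keys that share the {user:1} hash tag are guaranteed to map to one slot,
# so multi-key commands (and MULTI/EXEC or Lua scripts) on them succeed
redis-cli -c -a 123456 --no-auth-warning MSET '{user:1}:name' tom '{user:1}:age' 30
redis-cli -c -a 123456 --no-auth-warning MGET '{user:1}:name' '{user:1}:age'
# keys without a common tag may hash to different slots and then fail with CROSSSLOT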