Mallux - Tranquility Carries You Far

Redis

Storage Systems

  • RDBMS: relational databases

    • Oracle, DB2, PostgreSQL, MySQL, SQL Server …
  • NoSQL: "Not only SQL", non-relational databases

    • Key-value: Memcached, Redis …
    • Column family: Cassandra, HBase …
    • Document: MongoDB …
    • Graph: Neo4j …
  • NewSQL: natively distributed, with transaction support

    • Aerospike, FoundationDB, RethinkDB

Redis Overview

Redis (REmote DIctionary Server) is an open-source, high-performance, key-value based cache and storage system. It offers multiple key-value data types to suit the caching and storage needs of different scenarios, and its many higher-level features let it also serve as a message queue, task queue, and other roles. Other applications read and write the dictionary's contents over TCP. As in the dictionaries of most scripting languages, the values in a Redis dictionary are not limited to strings; other data types are supported as well. The key-value data types Redis supports so far are:

  • Strings (SET …)
  • Hashes (HSET …)
  • Lists (LPUSH …)
  • Sets (SADD …)
  • Sorted sets (ZADD …)
  • Bitmaps, HyperLogLog: set-like structures used mainly for statistics; Bitmaps target massive exact counts, while HyperLogLog is a lightweight approximate counter.
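Each type has its own command family. As a quick sketch of what that looks like from redis-cli (key names here are made up for illustration; assumes a local server on the default port 6379 with an empty database, so the replies shown hold):

```
# redis-cli SET page:title "hello"      # string      -> OK
# redis-cli HSET user:1 name "alice"    # hash        -> (integer) 1
# redis-cli LPUSH tasks "send-email"    # list        -> (integer) 1
# redis-cli SADD tags "redis" "cache"   # set         -> (integer) 2
# redis-cli ZADD scores 42 "alice"      # sorted set  -> (integer) 1
# redis-cli PFADD visitors "10.0.0.1"   # HyperLogLog -> (integer) 1
# redis-cli SETBIT online:today 7 1     # bitmap      -> (integer) 0
```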

Features

  • All data lives in memory (in-memory), with optional persistence to disk (RDB and AOF)

    On an ordinary laptop, Redis can read and write more than 100,000 keys per second. One million string key-value pairs take approximately 100MB of memory.

  • Single-threaded

    Redis uses a single-threaded model while Memcached supports multiple threads, so on a multi-core server the latter theoretically performs better. In practice Redis is fast enough that it rarely becomes the bottleneck (it can serve on the order of 500,000 requests per second), so the functional differences between the two matter more. With the release of Redis 3.0, nearly all of Memcached's features became a subset of Redis's, and Redis's cluster support means Memcached's third-party clustering tools are no longer an advantage. For new projects, Redis is therefore usually the better choice over Memcached.

  • Rich functionality

    • Although Redis was developed as a database, its rich feature set sees it used more and more as a cache, a message queue system, and so on.
    • Lua scripting support
    • Replication: master/slave (Sentinel monitors the master/slave nodes and the other sentinels to provide high availability) and clustering (decentralized)
    • Persistence
      • RDB (snapshotting): asynchronous; Redis forks, the parent keeps answering requests while the child writes the in-memory data to a temporary file on disk; once the write completes, the temporary file replaces the old RDB file
      • AOF (append-only file): every executed write command is appended to a file on disk
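Both mechanisms are driven from redis.conf. A minimal sketch of the relevant directives (the save rules and fsync policy shown are the Redis 3.x defaults; AOF is off by default and is enabled here for illustration):

```
# RDB: snapshot if 1 key changed in 900s, 10 keys in 300s, or 10000 keys in 60s
save 900 1
save 300 10
save 60 10000
dbfilename dump.rdb

# AOF: log every write command, fsync once per second
appendonly yes
appendfsync everysec
```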

Redis vs. Memcached

  • Redis advantages

    • Single-threaded: commands execute atomically, with no lock contention
    • Rich data-type operations: Sets, Hashes, Lists, Sorted sets, HyperLogLog, etc.
    • Built-in replication and cluster support
    • In-place updates
    • Disk persistence: helps avoid the cache-avalanche effect after a restart
  • Memcached advantages

    • Multi-threaded: makes good use of multi-core CPUs, with fewer blocking operations
    • Lower memory overhead
    • Less memory-allocation pressure
    • Potentially less memory fragmentation (slab allocator)

Companies Using Redis

  • Twitter
  • Pinterest
  • Tumblr
  • GitHub
  • Stack Overflow
  • Digg
  • Blizzard
  • Flickr
  • Weibo

Installing Redis

Installation methods (Redis depends on jemalloc)

  • From source
# wget http://download.redis.io/redis-stable.tar.gz
# tar -zxvf redis-stable.tar.gz
# cd redis-stable
# make
# make install

The bundled init script lives in the source tree at utils/redis_init_script.

  • RPM
# wget http://www6.atomicorp.com/channels/atomic/centos/7/x86_64/RPMS/redis-3.0.7-4.el7.art.x86_64.rpm
# yum localinstall redis-3.0.7-4.el7.art.x86_64.rpm
# rpm -ql redis
/etc/logrotate.d/redis
/etc/redis-sentinel.conf
/etc/redis.conf
/etc/systemd/system/redis-sentinel.service.d
/etc/systemd/system/redis-sentinel.service.d/limit.conf
/etc/systemd/system/redis.service.d
/etc/systemd/system/redis.service.d/limit.conf
/usr/bin/redis-benchmark
/usr/bin/redis-check-aof
/usr/bin/redis-check-dump
/usr/bin/redis-cli
/usr/bin/redis-sentinel
/usr/bin/redis-server
/usr/bin/redis-shutdown
/usr/lib/systemd/system/redis-sentinel.service
/usr/lib/systemd/system/redis.service
/usr/lib/tmpfiles.d/redis.conf
/usr/share/doc/redis-3.0.7
/usr/share/doc/redis-3.0.7/00-RELEASENOTES
/usr/share/doc/redis-3.0.7/BUGS
/usr/share/doc/redis-3.0.7/CONTRIBUTING
/usr/share/doc/redis-3.0.7/MANIFESTO
/usr/share/doc/redis-3.0.7/README
/usr/share/licenses/redis-3.0.7
/usr/share/licenses/redis-3.0.7/COPYING
/var/lib/redis
/var/log/redis
/var/run/redis

Redis Components

  • redis-server: the Redis server
  • redis-cli: the Redis command-line client
  • redis-benchmark: performance benchmarking tool
  • redis-check-dump: RDB file check tool
  • redis-check-aof: AOF file check tool
  • redis-sentinel: the Sentinel server (since version 2.8)
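A quick smoke test that exercises most of these tools (paths follow the RPM layout shown above; the benchmark size is illustrative):

```
# redis-server /etc/redis.conf                     # start the server
# redis-cli PING                                   # replies PONG once it is up
# redis-benchmark -q -n 10000 -t set,get           # quick throughput check
# redis-check-dump /var/lib/redis/dump.rdb         # verify the RDB snapshot
# redis-check-aof /var/lib/redis/appendonly.aof    # verify the AOF, if enabled
```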

Redis Main Configuration File (/etc/redis.conf)

# cat /etc/redis.conf
# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
################################## INCLUDES ###################################
# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
################################ GENERAL #####################################
# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
### Even with this set to no, starting Redis via the init script still runs it as a daemon
daemonize no
# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
pidfile /var/run/redis/redis.pid
# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
### Length of the queue of pending connections once the TCP accept queue is full
tcp-backlog 511
# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
bind 127.0.0.1 172.16.0.11
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
### For local-only clients, enabling the Unix socket is strongly recommended
# unixsocket /tmp/redis.sock
# unixsocketperm 700
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/redis.log
# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no
# Specify the syslog identity.
# syslog-ident redis
# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
### Redis keeps 16 databases by default, switched with SELECT; Redis Cluster supports only database 0
databases 16
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
### RDB snapshot rules: a forked child asynchronously writes the in-memory data to a
### temporary file while the parent keeps serving requests; when the child finishes,
### the temporary file is renamed over the old RDB file (dump.rdb).
### To disable snapshotting entirely, use: save ""
save 900 1
save 300 10
save 60 10000
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
### The RDB snapshot file name
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
### Directory where the persistence files are stored
dir /var/lib/redis/
################################# REPLICATION #################################
# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
# stop accepting writes if it appears to be not connected with at least
# a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
# master if the replication link is lost for a relatively small amount of
# time. You may want to configure the replication backlog size (see the next
# sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
# network partition slaves automatically try to reconnect to masters
# and resynchronize with them.
#
### Redis acts as a master by default; for replication, configure only the slave and point slaveof at the master
# slaveof <masterip> <masterport>
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
### Password the slave uses to AUTH against the master
# masterauth <master-password>
# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
### While syncing with the master, keep serving (possibly stale) data; with no, reply "SYNC with master in progress"
slave-serve-stale-data yes
# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
### yes makes the slave accept read operations only
slave-read-only yes
# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
# file on disk. Later the file is transferred by the parent
# process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
# RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
### Whether to enable diskless replication: the master streams the RDB to slaves directly over TCP, skipping the disk
repl-diskless-sync no
# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5
# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10
# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60
# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no
# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb
# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600
# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
### Slave priority, used by Sentinel when choosing which slave to promote to master.
### The slave with the lowest priority number wins; on a tie, the one with the smaller
### run ID is chosen (a new run ID is generated every time a Redis instance restarts)
slave-priority 100
# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
### Minimum number of connected slaves required before the master accepts writes
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
################################## SECURITY ###################################
# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
################################### LIMITS ####################################
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000
# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# maxmemory <bytes>
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
# maxmemory-samples 5
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
### Set to yes to enable AOF persistence
appendonly no
# The name of the append only file (default: "appendonly.aof")
### Default AOF file name
appendfilename "appendonly.aof"
# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
### always fsyncs the AOF after every command; everysec fsyncs once per second
# appendfsync always
appendfsync everysec
# appendfsync no
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
### trigger an automatic AOF rewrite when the file grows by this percentage
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes
################################ LUA SCRIPTING ###############################
# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000
################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
### Redis Cluster
# cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have a exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
# in order to try to give an advantage to the slave with the best
# replication offset (more data from the master processed).
# Slaves will try to get their rank by offset, and apply to the start
# of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
# its master. This can be the last ping or command received (if the master
# is still in the "connected" state), or the time that elapsed since the
# disconnection with the master (if the replication link is currently down).
# If the last interaction is too old, the slave will not try to failover
# at all.
#
# The point "2" can be tuned by user. Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
# (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means, that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10
# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth. It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1
# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes
# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.
################################## SLOW LOG ###################################
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000
# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
################################ LATENCY MONITOR ##############################
# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0
############################# EVENT NOTIFICATION ##############################
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""
############################### ADVANCED CONFIG ###############################
# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64
# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512
# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
### during an AOF rewrite, fsync the file incrementally every 32 MB of generated data to avoid large latency spikes
aof-rewrite-incremental-fsync yes

Starting Redis

# systemctl enable redis.service
# systemctl start redis.service
# lsof -i:6379
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 112173 redis 4u IPv4 1329743 0t0 TCP localhost:6379 (LISTEN)

The redis-cli command

# redis-cli -h
redis-cli 3.0.7
Usage: redis-cli [OPTIONS] [cmd [arg [arg ...]]]
-h <hostname> Server hostname (default: 127.0.0.1).
-p <port> Server port (default: 6379).
-s <socket> Server socket (overrides hostname and port).
-a <password> Password to use when connecting to the server.
-r <repeat> Execute specified command N times.
-i <interval> When -r is used, waits <interval> seconds per command.
It is possible to specify sub-second times like -i 0.1.
-n <db> Database number.
-x Read last argument from STDIN.
-d <delimiter> Multi-bulk delimiter in for raw formatting (default: \n).
-c Enable cluster mode (follow -ASK and -MOVED redirections).
--raw Use raw formatting for replies (default when STDOUT is
not a tty).
--no-raw Force formatted output even when STDOUT is not a tty.
--csv Output in CSV format.
--stat Print rolling stats about server: mem, clients, ...
--latency Enter a special mode continuously sampling latency.
--latency-history Like --latency but tracking latency changes over time.
Default time interval is 15 sec. Change it using -i.
--latency-dist Shows latency as a spectrum, requires xterm 256 colors.
Default time interval is 1 sec. Change it using -i.
--lru-test <keys> Simulate a cache workload with an 80-20 distribution.
--slave Simulate a slave showing commands received from the master.
--rdb <filename> Transfer an RDB dump from remote server to local file.
--pipe Transfer raw Redis protocol from stdin to server.
--pipe-timeout <n> In --pipe mode, abort with error if after sending all data
no reply is received within <n> seconds.
Default timeout: 30. Use 0 to wait forever.
--bigkeys Sample Redis keys looking for big keys.
--scan List all keys using the SCAN command.
--pattern <pat> Useful with --scan to specify a SCAN pattern.
--intrinsic-latency <sec> Run a test to measure intrinsic system latency.
The test will run for the specified amount of seconds.
--eval <file> Send an EVAL command using the Lua script at <file>.
--help Output this help and exit.
--version Output version and exit.
Examples:
cat /etc/passwd | redis-cli -x set mypasswd
redis-cli get mypasswd
redis-cli -r 100 lpush mylist x
redis-cli -r 100 -i 1 info | grep used_memory_human:
redis-cli --eval myscript.lua key1 key2 , arg1 arg2 arg3
redis-cli --scan --pattern '*:12345*'
(Note: when using --eval the comma separates KEYS[] from ARGV[] items)
When no command is given, redis-cli starts in interactive mode.
Type "help" in interactive mode for information on available commands.

Using redis-cli

# redis-cli -h 172.16.0.11 PING
PONG
# redis-cli -h 172.16.0.11
172.16.0.11:6379> help
redis-cli 3.0.7
Type: "help @<group>" to get a list of commands in <group>
"help <command>" for help on <command>
"help <tab>" to get a list of possible help topics
"quit" to exit
172.16.0.11:6379> help set
SET key value [EX seconds] [PX milliseconds] [NX|XX]
summary: Set the string value of a key
since: 1.0.0
group: string
172.16.0.11:6379> help @STRING ### pressing <tab> after "help " cycles through the group names
APPEND key value
summary: Append a value to a key
since: 2.0.0
BITCOUNT key [start end]
summary: Count set bits in a string
since: 2.6.0
BITOP operation destkey key [key ...]
summary: Perform bitwise operations between strings
since: 2.6.0
172.16.0.11:6379> help APPEND
APPEND key value
summary: Append a value to a key
since: 2.0.0
group: string

The 5 types of Redis command replies

  • Status reply

    # redis-cli -h 172.16.0.11 PING
    PONG

  • Error reply

    172.16.0.11:6379> ERROECOMMEND
    (error) ERR unknown command 'ERROECOMMEND'

  • Integer reply

    172.16.0.11:6379> INCR foo
    (integer) 1

  • Bulk (string) reply

    172.16.0.11:6379> GET noexists
    (nil)

  • Multi-bulk reply

    172.16.0.11:6379> KEYS *
    1) "bar"
    2) "foo"

[Redis command reference] http://www.lvtao.net/content/book/redis.htm

Connection-related commands

> HELP @connection
AUTH password
summary: Authenticate to the server
since: 1.0.0
ECHO message
summary: Echo the given string
since: 1.0.0
PING -
summary: Ping the server
since: 1.0.0
QUIT -
summary: Close the connection
since: 1.0.0
SELECT index
summary: Change the selected database for the current connection
since: 1.0.0

Server-related commands

> HELP @server
### asynchronous AOF persistence
BGREWRITEAOF -
summary: Asynchronously rewrite the append-only file
since: 1.0.0
### asynchronous RDB persistence
BGSAVE -
summary: Asynchronously save the dataset to disk
since: 1.0.0
CLIENT GETNAME -
summary: Get the current connection name
since: 2.6.9
CLIENT KILL [ip:port] [ID client-id] [TYPE normal|slave|pubsub] [ADDR ip:port] [SKIPME yes/no]
summary: Kill the connection of a client
since: 2.4.0
CLIENT LIST -
summary: Get the list of client connections
since: 2.4.0
CLIENT PAUSE timeout
summary: Stop processing commands from clients for some time
since: 2.9.50
CLIENT SETNAME connection-name
summary: Set the current connection name
since: 2.6.9
COMMAND -
summary: Get array of Redis command details
since: 2.8.13
COMMAND COUNT -
summary: Get total number of Redis commands
since: 2.8.13
COMMAND GETKEYS -
summary: Extract keys given a full Redis command
since: 2.8.13
COMMAND INFO command-name [command-name ...]
summary: Get array of specific Redis command details
since: 2.8.13
### read the value of a configuration parameter (as in /etc/redis.conf)
CONFIG GET parameter
summary: Get the value of a configuration parameter
since: 2.0.0
CONFIG RESETSTAT -
summary: Reset the stats returned by INFO
since: 2.0.0
### write the in-memory configuration values back to /etc/redis.conf
CONFIG REWRITE -
summary: Rewrite the configuration file with the in memory configuration
since: 2.8.0
### set a configuration parameter at runtime
CONFIG SET parameter value
summary: Set a configuration parameter to the given value
since: 2.0.0
DBSIZE -
summary: Return the number of keys in the selected database
since: 1.0.0
DEBUG OBJECT key
summary: Get debugging information about a key
since: 1.0.0
DEBUG SEGFAULT -
summary: Make the server crash
since: 1.0.0
### remove the keys of all databases
FLUSHALL -
summary: Remove all keys from all databases
since: 1.0.0
### remove the keys of the current database
FLUSHDB -
summary: Remove all keys from the current database
since: 1.0.0
### server status and statistics; a section is a line beginning with "#", e.g. INFO CPU, INFO memory
INFO [section]
summary: Get information and statistics about the server
since: 1.0.0
### get the UNIX time of the last successful save to disk
LASTSAVE -
summary: Get the UNIX time stamp of the last successful save to disk
since: 1.0.0
MONITOR -
summary: Listen for all requests received by the server in real time
since: 1.0.0
ROLE -
summary: Return the role of the instance in the context of replication
since: 2.8.12
### synchronous RDB persistence
SAVE -
summary: Synchronously save the dataset to disk
since: 1.0.0
### flush all data to disk, then shut down Redis safely
SHUTDOWN [NOSAVE] [SAVE]
summary: Synchronously save the dataset to disk and then shut down the server
since: 1.0.0
### replication: make this instance a slave of the given master
SLAVEOF host port
summary: Make the server a slave of another instance, or promote it as master
since: 1.0.0
### inspect the in-memory slow query log
SLOWLOG subcommand [argument]
summary: Manages the Redis slow queries log
since: 2.2.12
SYNC -
summary: Internal command used for replication
since: 1.0.0
TIME -
summary: Return the current server time
since: 2.6.0

Redis data types

Rules for keys

  • Any ASCII character may be used

    • Keep key names short to save space
  • Multiple namespaces

    • 16 databases (SELECT [0-15]); it is recommended to use different databases for different environments (e.g. production and test), not for different applications of the same environment. An empty Redis instance occupies only about 1 MB of memory.
    • Within a single namespace, keys cannot repeat
  • Keys can expire automatically

Common data types

  • String
  • Hash
  • List
  • Set
  • Sorted set

Strings

The string is the most basic data type in Redis. It can store any form of string, including binary data: a user's e-mail address, a JSON-serialized object, even an image. A single string key can hold up to 512 MB of data.

Strings are the foundation of the other four data types; from a certain point of view, the other types differ only in how they organize strings. For example, the list type organizes strings as a list, while the set type organizes them as a set.

Set and get

  • SET key value [NX|XX]
  • GET key

Increment numbers (default value 0)

  • INCR key: increment by 1
  • INCRBY key increment: increment by a given amount

Decrement by a given integer

  • DECR key
  • DECRBY key decrement

Increment by a floating-point number

  • INCRBYFLOAT key increment

Append to the end of a value

  • APPEND key value

Get the string length

  • STRLEN key

Get and set multiple key values at once

  • MGET key [key ...]
  • MSET key value [key value ...]

Bit operations

  • GETBIT key offset: get the binary value (0 or 1) at the given position of a string
  • SETBIT key offset value: set the bit at the given position of a string key; the return value is the old bit at that position.
  • BITCOUNT key [start] [end]: count the bits set to 1 in a string key
  • BITOP operation destkey key [key] ...: perform a bitwise operation across several string keys and store the result in the key named by destkey
  • BITPOS key bit [start] [end]: find the position of the first bit set to 0 or 1 in a string key
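As an illustration of the bit-command semantics above, here is a pure-Python sketch (not the server implementation); note that Redis addresses bit 0 as the most significant bit of the first byte:

```python
# Pure-Python sketch of GETBIT / SETBIT / BITCOUNT semantics.
# Redis numbers bits from the most significant bit of the first byte.

def getbit(value: bytes, offset: int) -> int:
    byte, bit = divmod(offset, 8)
    if byte >= len(value):
        return 0                                      # out-of-range bits read as 0
    return (value[byte] >> (7 - bit)) & 1

def setbit(value: bytes, offset: int, bit: int) -> bytes:
    byte, pos = divmod(offset, 8)
    buf = bytearray(value)
    if byte >= len(buf):
        buf.extend(b"\x00" * (byte - len(buf) + 1))   # zero-extend, like Redis
    mask = 1 << (7 - pos)
    buf[byte] = (buf[byte] | mask) if bit else (buf[byte] & ~mask)
    return bytes(buf)

def bitcount(value: bytes) -> int:
    return sum(bin(b).count("1") for b in value)

print(bitcount(b"foobar"))    # BITCOUNT of "foobar" -> 26
print(getbit(b"foobar", 1))   # second bit of 'f' (0x66) -> 1
```

Counting users who logged in on a given day by setting one bit per user id is the classic use of these commands.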

Hashes

The value of a hash key is itself a dictionary-like structure that stores a mapping from fields to field values. Field values can only be strings; no other data type is supported — in other words, hashes cannot nest other data types. A single hash key can contain up to 2^32 − 1 fields.

Like hashes, the other Redis data types do not support nesting either. For example, every element of a set can only be a string, never another set or a hash.

Hashes are well suited to storing objects: use the object category plus an ID as the key name, fields for the object's attribute names, and field values for the attribute values.

Typical use case: storing an article's data and its slug.

Set and get

  • HSET key field value
  • HGET key field
  • HMSET key field value [field value ...]
  • HMGET key field [field ...]
  • HGETALL key
  • HSETNX key field value: assign only when the field does not yet exist; NX stands for "if Not eXists"

Increment numbers

  • HINCRBY key field increment: hashes have no HINCR command

Delete fields

  • HDEL key field [field ...]

Get field names, field values and the field count

  • HKEYS key
  • HVALS key
  • HLEN key
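A minimal sketch of the HSETNX and HINCRBY semantics above, modeling a hash as a Python dict of dicts (the key name "article:12" is a hypothetical example):

```python
# Sketch of hash-command semantics: a database mapping keys to
# field -> value dicts, where all stored values are strings.

db: dict[str, dict[str, str]] = {}

def hset(key, field, value):
    db.setdefault(key, {})[field] = str(value)

def hsetnx(key, field, value) -> int:
    h = db.setdefault(key, {})
    if field in h:
        return 0                                   # field exists: do nothing
    h[field] = str(value)
    return 1

def hincrby(key, field, increment) -> int:
    h = db.setdefault(key, {})
    new = int(h.get(field, "0")) + increment       # missing fields count as 0
    h[field] = str(new)
    return new

hset("article:12", "title", "Redis notes")
hincrby("article:12", "views", 1)
```

This mirrors the object-storage pattern: `article:12` is the object key, `title` and `views` are its attributes.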

Lists

A list key stores an ordered list of strings; the common operations are pushing elements onto either end of the list and fetching a slice of it.

Internally a list is implemented as a doubly linked list, so pushing to either end is O(1), and access is faster the closer an element is to either end. Even for a list with tens of millions of elements, fetching the first or last 10 records is extremely fast (as fast as fetching them from a 20-element list).

Typical use case: a social network's news feed.

Push elements onto either end

  • LPUSH key value [value ...]
  • RPUSH key value [value ...]

Pop elements from either end

  • LPOP key
  • RPOP key

Get the element count and slices of a list

  • LLEN key
  • LRANGE key start stop

Remove specified values from a list

  • LREM key count value: remove the first count elements whose value is value
    • count > 0: remove from the head of the list
    • count < 0: remove from the tail of the list
    • count = 0: remove all elements whose value is value
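The three count cases of LREM can be illustrated with a small pure-Python model:

```python
# Sketch of LREM semantics: remove up to |count| occurrences of `value`,
# scanning from the head (count > 0), the tail (count < 0),
# or removing every occurrence (count == 0).

def lrem(lst, count, value):
    if count == 0:
        return [x for x in lst if x != value]
    out = list(lst)
    remaining = abs(count)
    indices = range(len(out)) if count > 0 else range(len(out) - 1, -1, -1)
    to_drop = []
    for i in indices:
        if out[i] == value and remaining:
            to_drop.append(i)
            remaining -= 1
    for i in sorted(to_drop, reverse=True):        # delete from the back
        del out[i]
    return out

print(lrem(["a", "b", "a", "c", "a"], 2, "a"))    # ['b', 'c', 'a']
print(lrem(["a", "b", "a", "c", "a"], -1, "a"))   # ['a', 'b', 'a', 'c']
```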

Get and set an element by index

  • LINDEX key index
  • LSET key index value

Keep only a given slice of the list

  • LTRIM key start end

Insert an element into the list

  • LINSERT key BEFORE|AFTER pivot value: scan from left to right for an element equal to pivot, then insert value before or after it, depending on the second argument

Move an element from one list to another

  • RPOPLPUSH source destination: RPOP followed by LPUSH — pop an element from the right end of source, push it onto the left end of destination, and return its value; the whole operation is atomic.

Sets

Every element of a set is distinct, and elements have no order. A set key can store up to 2^32 − 1 strings, each unique.

The common set operations are adding and removing elements and testing whether an element is present; since sets are implemented inside Redis as hash tables with empty values, these operations are all O(1). Most conveniently, several set keys can also be combined with union, intersection and difference operations.

Typical use case: storing article tags

Add and remove elements

  • SADD key member [member ...]
  • SREM key member [member ...]

Get all elements of a set

  • SMEMBERS key

Test whether an element is in a set

  • SISMEMBER key member

Operations between sets

  • SDIFF key [key ...]: difference
  • SINTER key [key ...]: intersection
  • SUNION key [key ...]: union

Get the number of elements

  • SCARD key

Store the result of a set operation

  • SDIFFSTORE destination key [key ...]: difference
  • SINTERSTORE destination key [key ...]: intersection
  • SUNIONSTORE destination key [key ...]: union

Get random elements from a set

  • SRANDMEMBER key [count]

  • count > 0: return count distinct random elements; if count is larger than the set's size, the whole set is returned

  • count < 0: return |count| random elements, possibly with duplicates

Pop an element from a set

  • SPOP key
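The SRANDMEMBER count rules and the SPOP difference (it removes the element) can be sketched in pure Python:

```python
import random

# Sketch of SRANDMEMBER / SPOP semantics on a Python set.

def srandmember(s, count=None):
    if count is None:
        return random.choice(list(s))
    if count >= 0:
        # distinct elements, capped at the set size
        return random.sample(list(s), min(count, len(s)))
    # negative count: |count| elements, duplicates allowed
    return [random.choice(list(s)) for _ in range(-count)]

def spop(s):
    member = random.choice(list(s))
    s.remove(member)                  # unlike SRANDMEMBER, SPOP removes it
    return member

tags = {"redis", "nosql", "cache"}
picked = srandmember(tags, 2)         # two distinct random tags
popped = spop(tags)                   # one tag, removed from the set
```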

Sorted sets

On top of the set type, a sorted set associates a score with every element. Besides the operations sets support — insert, remove, membership test — this makes score-based operations possible: fetching the top N elements by score, fetching the elements within a score range, and so on. Although every element in the set is distinct, scores may repeat.

Typical use case: ranking by click count

Sorted sets resemble lists in some ways:

  1. Both are ordered
  2. Both can return a range of elements

But the two differ substantially, so their use cases differ too:

  1. Lists are implemented with linked lists: access near either end is extremely fast, but access to the middle slows down as the list grows, so lists suit applications such as "news feeds" or "logs" that rarely touch the middle elements.

  2. Sorted sets are implemented with a hash table plus a skip list, so even reads in the middle are fast (O(log N)).

  3. A list cannot easily reposition an element, but a sorted set can (by changing the element's score).

  4. Sorted sets cost more memory than lists.

The sorted set is arguably the most advanced of Redis's five data types; it is best studied by contrast with lists and sets.

Add elements

  • ZADD key score member [score member ...]

Get an element's score

  • ZSCORE key member

Get the elements in a rank range

  • ZRANGE key start stop [WITHSCORES]
  • ZREVRANGE key start stop [WITHSCORES]

Get the elements in a score range

  • ZRANGEBYSCORE key min max [WITHSCORES] [LIMIT offset count]

Increment an element's score

  • ZINCRBY key increment member

Get the number of elements

  • ZCARD key

Count the elements in a score range

  • ZCOUNT key min max

Remove one or more elements

  • ZREM key member [member ...]

Remove elements by rank range

  • ZREMRANGEBYRANK key start stop

Remove elements by score range

  • ZREMRANGEBYSCORE key min max

Get an element's rank

  • ZRANK key member
  • ZREVRANK key member

Compute the intersection of sorted sets

  • ZINTERSTORE destination numkeys key [key ...] [WEIGHTS weight [weight ...]] [AGGREGATE SUM|MIN|MAX]
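A sketch of the ZINTERSTORE semantics, modeling each sorted set as a dict from member to score (the "page:*" members are hypothetical examples):

```python
# Sketch of ZINTERSTORE: intersect the member sets, combining the
# weighted scores with SUM, MIN or MAX.

def zinterstore(keys, weights=None, aggregate="SUM"):
    weights = weights or [1] * len(keys)
    common = set(keys[0])
    for zset in keys[1:]:
        common &= set(zset)                       # only members present everywhere
    agg = {"SUM": sum, "MIN": min, "MAX": max}[aggregate]
    return {m: agg(z[m] * w for z, w in zip(keys, weights)) for m in common}

daily  = {"page:a": 10, "page:b": 5}
weekly = {"page:a": 70, "page:c": 3}
print(zinterstore([daily, weekly]))                  # {'page:a': 80}
print(zinterstore([daily, weekly], weights=[1, 0]))  # {'page:a': 10}
```

The WEIGHTS option lets one input dominate the combined ranking, as the second call shows.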

Redis in depth

Redis authentication

Enable authentication in redis.conf

# grep -iE '^(# masterauth|requirepass)' /etc/redis.conf
# masterauth <master-password> ### for replication: authenticate against the master
requirepass redispass ### enable authentication
# systemctl restart redis

Connecting with authentication

# redis-cli -h 172.16.0.11 -a redispass
# redis-cli -h 172.16.0.11
172.16.0.11:6379> KEYS *
(error) NOAUTH Authentication required.
172.16.0.11:6379> AUTH redispass
OK

Redis transactions

A transaction in Redis is a group of commands. Like a single command, a transaction is a minimal execution unit: the commands in a transaction either all execute or none do, and they run in sequence without other commands being interleaved.

Redis transactions are built from the MULTI, EXEC and WATCH commands. They provide no rollback, unlike the transactions of relational databases.

  • MULTI: start a transaction. It tells Redis: the commands I send next belong to the same transaction — do not execute them yet, queue them.
  • EXEC: execute the transaction. Redis runs the queued commands (all those that returned QUEUED) in the order they were sent. EXEC returns the list of those commands' return values, in the same order.
  • WATCH: optimistic locking. Watch one or more keys before EXEC; if any watched key is modified (or deleted), the subsequent transaction will not execute. The watch lasts until EXEC (the transaction's commands run only at EXEC time, so a watched key may still be modified after MULTI).
redis A> SET ip 172.16.0.11
redis A> WATCH ip
redis A> MULTI
### after WATCH/MULTI, client "redis B" modifies the watched key with "SET ip 172.16.0.12"
redis A> SET ip 172.16.0.13
QUEUED
redis A> EXEC
(nil)

Transaction error handling

  • Syntax errors: the command does not exist or takes the wrong number of arguments; in that case none of the transaction's commands execute.
  • Runtime errors: errors that occur while a command executes, e.g. using a hash command on a set key. Redis cannot detect these before execution, so such commands are accepted and run inside the transaction; if one command fails at runtime, the other commands of the transaction (including those after the failing one) still execute.
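The WATCH/MULTI/EXEC flow above can be modeled as a compare-and-set check (a simulation only — a real client sends these commands to the server):

```python
# Simulation of WATCH-based optimistic locking: EXEC aborts (returns
# None, like the (nil) reply) if a watched key changed since WATCH.

class MiniRedis:
    def __init__(self):
        self.data = {}
        self.version = {}                 # bumped on every write

    def set(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

    def watch(self, key):
        return key, self.version.get(key, 0)

    def exec_multi(self, watched, queued):
        key, seen = watched
        if self.version.get(key, 0) != seen:
            return None                   # watched key changed: abort
        for func, args in queued:         # otherwise run the queue in order
            func(*args)
        return "OK"

r = MiniRedis()
r.set("ip", "172.16.0.11")
w = r.watch("ip")
r.set("ip", "172.16.0.12")                # another client modifies the key
result = r.exec_multi(w, [(r.set, ("ip", "172.16.0.13"))])
print(result)                             # None -> the transaction did not run
```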

Redis as a message queue

Task queues

A task queue is, as the name says, a queue that passes tasks around. Two kinds of actors interact with it: producers, which put tasks that need processing into the queue, and consumers, which continually read tasks out of the queue and execute them.

Commands used to implement a task queue:

  • LPUSH key value [value ...]: the producer adds tasks to the given key
  • RPOP key: the consumer repeatedly takes tasks from that key
  • BRPOP key [key ...] timeout: blocking pop; a timeout of 0 waits without limit
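The LPUSH/RPOP pairing above gives overall FIFO behavior, which a deque makes easy to see (the task names are hypothetical; BRPOP would block instead of returning None):

```python
from collections import deque

# Sketch of a task queue over LPUSH/RPOP semantics:
# producers push on the left, consumers pop on the right.

queue = deque()

def lpush(q, value):
    q.appendleft(value)

def rpop(q):
    return q.pop() if q else None     # BRPOP blocks here instead

for task in ("send-mail:1", "send-mail:2"):
    lpush(queue, task)

print(rpop(queue))   # send-mail:1  (first task in, first task out)
print(rpop(queue))   # send-mail:2
print(rpop(queue))   # None         (empty queue)
```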

“发布与订阅” 模式

除了实现任务队列外,Redis 还提供了一组命令可以让开发者实现“发布/订阅”(publish/subscribe)模式。“发布/订阅”模式同样可以实现进程间的消息传递,其原理为:“发布/订阅”模式中包含两种角色,分别是发布者和订阅者。订阅者可以订阅一个或若干个频道(channel),而发布者可以向指定的频道发送消息,所有订阅此频道的订阅者都会收到此消息。

  • PUBLISH channel message
  • SUBSCRIBE channel [channel ...]

执行 SUBSCRIBE 命令后客户端会进入订阅状态,处于此状态下客户端不能使用除 SUBSCRIBE 、 UNSUBSCRIBE 、 PSUBSCRIBE 和 PUNSUBSCRIBE 这 4 个属于“发布/订阅”模式的命令之外的命令,否则会报错。

进入订阅状态后客户端可能收到 3 种类型的回复。每种类型的回复都包含 3 个值,第一个值是消息的类型,根据消息类型的不同,第二、三个值的含义也不同。消息类型可能的取值有以下 3 个。

  1. subscribe:表示订阅成功的反馈信息。第二个值是订阅成功的频道名称,第三个值是当前客户端订阅的频道数量。

  2. message:这个类型的回复是我们最关心的,它表示接收到的消息。第二个值表示产生消息的频道名称,第三个值是消息的内容。

  3. unsubscribe:表示成功取消订阅某个频道。第二个值是对应的频道名称,第三个值是当前客户端订阅的频道数量,当此值为0时客户端会退出订阅状态,之后就可以执行其他非“发布/订阅”模式的命令了。

除了可以使用SUBSCRIBE命令订阅指定名称的频道外,还可以使用PSUBSCRIBE命令订阅指定的规则。

使用 PUNSUBSCRIBE 命令只能退订通过 PSUBSCRIBE 命令订阅的规则,不会影响直接通过 SUBSCRIBE 命令订阅的频道;同样 UNSUBSCRIBE 命令也不会影响通过 PSUBSCRIBE 命令订阅的规则。另外容易出错的一点是使用 PUNSUBSCRIBE 命令退订某个规则时不会将其中的通配符展开,而是进行严格的字符串匹配,所以 PUNSUBSCRIBE * 无法退订 channel.* 规则,而必须使用 PUNSUBSCRIBE channel.* 才能退订。
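
频道规则使用类似 glob 的通配符匹配,可以借助 Python 标准库的 fnmatch 近似演示“消息投递按通配符匹配、退订按字符串严格比较”的差别(示意,Redis 实际的 glob 实现与 fnmatch 并不完全等价):

```python
from fnmatch import fnmatchcase

# 消息投递:规则按 glob 通配符匹配频道名
print(fnmatchcase("channel.1", "channel.*"))   # True:通配符展开后匹配
print(fnmatchcase("news.cn", "channel.*"))     # False

# 退订:PUNSUBSCRIBE 对“规则名”做严格的字符串比较,不展开通配符
subscribed_patterns = {"channel.*"}
subscribed_patterns.discard("channel.1")       # 无效:字符串与规则不相等
print("channel.*" in subscribed_patterns)      # True:仍处于订阅状态
subscribed_patterns.discard("channel.*")       # 必须给出完全相同的规则才能退订
print(subscribed_patterns)                     # set()
```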

Redis 持久化

持久化方式

  • RDB:根据指定的规则“定时”将内存中的数据存储在硬盘上
  • AOF方式:将每次执行命令后将命令本身记录下来

两种持久化方式可以单独使用其中一种,但更多情况下是将二者结合使用。

注意:持久本身不能取代备份;还应该制定备份策略,对 Redis 数据库进行定期备份。

RDB 持久化

RDB方式的持久化是通过快照(snapshotting)完成的,当符合一定条件时 Redis 会自动将内存中的所有数据生成一份副本并存储在硬盘上,这个过程即为“快照”。

RDB 快照实现

Redis 会在以下几种情况下对数据进行快照:

  • 根据配置规则进行自动快照
  • 用户执行 SAVE 或 BGSAVE 命令
  • 执行 FLUSHALL 命令
  • 执行复制(replication)

根据配置规则进行自动快照

进行快照的条件可以由用户在配置文件中自定义,由两个参数构成:时间窗口 M(单位秒)和改动的键的个数 N。每当时间 M 内被更改的键的个数大于 N 时,即符合自动快照条件。如下所示,每条快照条件占一行,并且以save参数开头。同时可以存在多个条件,条件之间是“或”的关系。

# grep ^save /etc/redis.conf
save 900 1
save 300 10
save 60 1000
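
多条 save 规则之间“或”的判断逻辑,可以用一小段 Python 示意(参数对应上面的三条配置,并非 Redis 内部实现):

```python
# 每条 save 规则:(时间窗口 M 秒, 改动键个数 N)
SAVE_RULES = [(900, 1), (300, 10), (60, 1000)]

def should_snapshot(seconds_since_last_save, dirty_keys):
    """任意一条规则满足(窗口已到且改动数达到阈值)即触发快照。"""
    return any(seconds_since_last_save >= m and dirty_keys >= n
               for m, n in SAVE_RULES)

print(should_snapshot(seconds_since_last_save=70, dirty_keys=1200))  # True:命中 save 60 1000
print(should_snapshot(seconds_since_last_save=70, dirty_keys=5))     # False:所有规则均未满足
```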

用户执行 SAVE 或 BGSAVE 命令

  • SAVE 命令

当执行 SAVE 命令时,Redis 同步地进行快照操作,在快照执行的过程中会阻塞所有来自客户端的请求。当数据库中的数据比较多时,这一过程会导致 Redis 较长时间不响应,所以要尽量避免在生产环境中使用这一命令。

  • BGSAVE 命令

需要手动执行快照时推荐使用 BGSAVE 命令。BGSAVE 命令可以在后台异步地进行快照操作,快照的同时服务器还可以继续响应来自客户端的请求。执行 BGSAVE 后 Redis 会立即返回 OK 表示开始执行快照操作,如果想知道快照是否完成,可以通过 LASTSAVE 命令获取最近一次成功执行快照的时间,返回结果是一个 Unix 时间戳。

执行 FLUSHALL 命令

当执行 FLUSHALL 命令时,Redis 会清除数据库中的所有数据。需要注意的是,不论清空数据库的过程是否触发了自动快照条件,只要自动快照条件不为空,Redis 就会执行一次快照操作。例如,当定义的快照条件为当 1 秒内修改 10000 个键时进行自动快照,而当数据库里只有一个键时,执行 FLUSHALL 命令也会触发快照,即使这一过程实际上只有一个键被修改了。当没有定义自动快照条件时,执行 FLUSHALL 则不会进行快照。

执行复制(replication)

当设置了主从模式时,Redis 会在复制初始化时进行自动快照。使用复制操作时,即使没有定义自动快照条件,并且没有手动执行过快照操作,也会生成 RDB 快照文件。

RDB 快照原理

Redis 默认会将快照文件存储在 Redis 当前进程的工作目录中的dump.rdb文件中,可以通过配置dir和dbfilename两个参数分别指定快照文件的存储路径和文件名。快照的过程如下。

  1. Redis 使用 fork 函数复制一份当前进程(父进程)的副本(子进程);

  2. 父进程继续接收并处理客户端发来的命令,而子进程开始将内存中的数据写入硬盘中的临时文件;

  3. 当子进程写入完所有数据后会用该临时文件替换旧的 RDB 文件,至此一次快照操作完成。

在执行 fork 的时候操作系统(类 Unix 操作系统)会使用写时复制(copy-on-write)策略,即 fork 函数发生的一刻父子进程共享同一内存数据,当父进程要更改其中某片数据时(如执行一个写命令),操作系统会将该片数据复制一份以保证子进程的数据不受影响,所以新的 RDB 文件存储的是执行 fork 一刻的内存数据。

写时复制策略也保证了在 fork 的时刻虽然看上去生成了两份内存副本,但实际上内存的占用量并不会增加一倍。这就意味着当系统内存只有 2 GB,而 Redis 数据库的内存有 1.5 GB 时,执行 fork 后内存使用量并不会增加到 3 GB(超出物理内存)。为此需要确保 Linux 系统允许应用程序申请超过可用内存(物理内存和交换分区)的空间,方法是在/etc/sysctl.conf文件加入vm.overcommit_memory = 1,然后重启系统或者执行sysctl vm.overcommit_memory=1确保设置生效。

另外需要注意的是,当进行快照的过程中,如果写入操作较多,造成 fork 前后数据差异较大,是会使得内存使用量显著超过实际数据大小的,因为内存中不仅保存了当前的数据库数据,而且还保存着 fork 时刻的内存数据。进行内存用量估算时很容易忽略这一问题,造成内存用量超限。

通过上述过程可以发现 Redis 在进行快照的过程中不会修改 RDB 文件,只有快照结束后才会将旧的文件替换成新的,也就是说任何时候 RDB 文件都是完整的。这使得我们可以通过定时备份 RDB 文件来实现 Redis 数据库备份。RDB 文件是经过压缩(可以配置rdbcompression参数以禁用压缩节省 CPU 占用)的二进制格式,所以占用的空间会小于内存中的数据大小,更加利于传输。

Redis 启动后会读取 RDB 快照文件,将数据从硬盘载入到内存。根据数据量大小与结构和服务器性能不同,这个时间也不同。通常将一个记录 1000 万个字符串类型键、大小为 1 GB 的快照文件载入到内存中需要花费 20~30 秒。

通过 RDB 方式实现持久化,一旦 Redis 异常退出,就会丢失最后一次快照以后更改的所有数据。这就需要开发者根据具体的应用场合,通过组合设置自动快照条件的方式来将可能发生的数据损失控制在能够接受的范围。例如,使用 Redis 存储缓存数据时,丢失最近几秒的数据或者丢失最近更新的几十个键并不会有很大的影响。如果数据相对重要,希望将损失降到最小,则可以使用 AOF 方式进行持久化。

RDB 相关参数

# grep -iE '^(stop-write|rdb|db|dir)' /etc/redis.conf
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis/

RDB 快照案例

一台拥有 68G 内存的 Xen 虚拟机,对一个占用 50G 内存的 Redis 执行 BGSAVE 命令的话,光是创建子进程就需要花费 15s 以上,而生成快照需要花费 15~20 分钟;但 SAVE 只要 3~5 分钟就可以完成快照的生成工作。

AOF 持久化

当使用 Redis 存储非临时数据时,一般需要打开 AOF 持久化来降低进程中止导致的数据丢失。AOF 可以将 Redis 执行的每一条写命令追加到硬盘文件中,这一过程显然会降低 Redis 的性能,但是大部分情况下这个影响是可以接受的,另外使用较快的硬盘可以提高 AOF 的性能。

AOF 文件以纯文本的形式记录了 Redis 执行的命令。

开启 AOF(默认关闭)

默认情况下 Redis 没有开启 AOF(append only file)方式的持久化,可以通过 appendonly 参数启用,默认文件名为appendonly.aof

# grep ^append /etc/redis.conf
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec

appendonly 参数用于开启或关闭 AOF 持久化(取值 yes/no)。appendfsync 参数则控制同步时机,取值如下:

  • no:不主动同步,让操作系统自己决定应该何时进行同步
  • always:每个 Redis 写命令都要同步写入硬盘
  • everysec:每秒执行一次同步,显式地将多个命令同步到硬盘

每当达到一定条件时 Redis 就会自动重写 AOF 文件(优化,删除无用的记录),这个条件可以在配置文件中设置:

# grep ^auto-aof /etc/redis.conf
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

auto-aof-rewrite-percentage参数的意义是当目前的 AOF 文件大小超过上一次重写时的 AOF 文件大小的百分之多少时会再次进行重写,如果之前没有重写过,则以启动时的 AOF 文件大小为依据。auto-aof-rewrite-min-size参数限制了允许重写的最小 AOF 文件大小,通常在 AOF 文件很小的情况下即使其中有很多冗余的命令我们也并不太关心。除了让 Redis 自动执行重写外,我们还可以主动使用BGREWRITEAOF命令手动执行 AOF 重写。
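
这两个参数组合起来的触发判断,大致可以写成如下 Python 函数(示意,与 Redis 内部实现不保证逐行一致):

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """current_size: 当前 AOF 文件大小;base_size: 上次重写后(或启动时)的 AOF 文件大小。"""
    if current_size < min_size:                    # auto-aof-rewrite-min-size:太小不值得重写
        return False
    growth = (current_size - base_size) * 100 // base_size
    return growth >= percentage                    # auto-aof-rewrite-percentage:增长比例达标

MB = 1024 * 1024
print(should_rewrite_aof(130 * MB, 64 * MB))   # True:增长超过 100% 且大于 64mb
print(should_rewrite_aof(32 * MB, 16 * MB))    # False:未达到最小体积限制
```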

重写过程

  1. redis 主进程通过 fork 创建子进程
  2. 子进程根据 redis 内存中的数据创建数据库重建命令序列于临时文件中
  3. 父进程继续响应 client 的请求,并会把这些请求中的写操作继续追加至原来的 AOF 文件;额外地,这些新的写命令还会放置于一个缓冲队列中
  4. 子进程重写完成,会通知父进程,父进程把缓冲中的命令写到临时文件中
  5. 父进程用临时文件替换老的 AOF 文件

同步硬盘数据

虽然每次执行更改数据库内容的操作时,AOF 都会将命令记录在 AOF 文件中,但是事实上,由于操作系统的缓存机制,数据并没有真正地写入硬盘,而是进入了系统的硬盘缓存。在默认情况下系统每 30 秒会执行一次同步操作,以便将硬盘缓存中的内容真正地写入硬盘,在这 30 秒的过程中如果系统异常退出则会导致硬盘缓存中的数据丢失。一般来讲启用 AOF 持久化的应用都无法容忍这样的损失,这就需要 Redis 在写入 AOF 文件后主动要求系统将缓存内容同步到硬盘中。在 Redis 中我们可以通过appendfsync参数设置同步的时机:

# grep ^appendfsync /etc/redis.conf
appendfsync everysec

默认情况下 Redis 采用everysec规则,即每秒执行一次同步操作,显式地将多个命令同步至硬盘上。always表示每次执行写入都会执行同步,这是最安全也是最慢的方式。no表示不主动进行同步操作,而是完全交由操作系统来做(即每 30 秒一次),这是最快但最不安全的方式。一般情况下使用默认值 everysec 就足够了,既兼顾了性能又保证了安全。

Redis 允许同时开启AOF 和 RDB,既保证了数据安全又使得进行备份等操作十分容易。此时重新启动 Redis后 Redis 会使用 AOF 文件来恢复数据,因为 AOF 方式的持久化可能丢失的数据更少。

AOF 相关参数

# grep -iE '^(append|no-append|auto-aof)' /etc/redis.conf
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

Redis 高可用

通过持久化功能,Redis 保证了即使在服务器重启的情况下也不会丢失(或只少量丢失)数据。但持久化无法避免单点故障:通常的做法是将数据复制多个副本并部署在不同的服务器上,这样即使有一台服务器出现故障,其他服务器依然可以继续提供服务。Redis 3.0 提供如下跨主机的数据可用性方案:

  • 复制(replication):master / slave 同步
  • 哨兵(sentinel):监控复制,自动切换主从,Redis 2.8 引入。
  • 集群(cluster):键值数据分布式

Redis 复制(replication)

复制(replication)功能,可以实现当一台数据库中的数据更新后,自动将更新的数据同步到其他数据库上。

特点:

  • 一个 Master 可以有多个 Slave
  • 支持链式复制
  • Master 以非阻塞方式同步数据至 slave

配置复制

配置方式

仅需在slave redis上配置即可,master redis无需任何配置:

  • 命令行:redis-server、redis-cli

    # redis-server --port <port> --slaveof <master-redis-ip> <master-redis-port>
  • 配置文件:/etc/redis.conf

    # slaveof <master redis-ip> <master redis-port>

使用INFO replication可查看当前 redis 服务器 role。

注意:如果 master 使用requirepass开启了认证功能,从服务器需要使用masterauth <password>来连入服务请求密码认证。

DEMO

Slave Redis

$ redis-server --port 6380 --slaveof 172.16.0.11 6379
...
50600:S 06 Aug 09:25:15.141 * The server is now ready to accept connections on port 6380
50600:S 06 Aug 09:25:16.142 * Connecting to MASTER 172.16.0.11:6379
50600:S 06 Aug 09:25:16.142 * MASTER <-> SLAVE sync started
50600:S 06 Aug 09:25:16.142 * Non blocking connect for SYNC fired the event.
50600:S 06 Aug 09:25:16.157 * Master replied to PING, replication can continue...
50600:S 06 Aug 09:25:16.172 * Partial resynchronization not possible (no cached master)
50600:S 06 Aug 09:25:16.275 * Full resync from master: a96f078b69fe61dc96ad75d0bc496ecc56a15280:1
50600:S 06 Aug 09:25:16.376 * MASTER <-> SLAVE sync: receiving 18 bytes from master
50600:S 06 Aug 09:25:16.376 * MASTER <-> SLAVE sync: Flushing old data
50600:S 06 Aug 09:25:16.377 * MASTER <-> SLAVE sync: Loading DB in memory
50600:S 06 Aug 09:25:16.377 * MASTER <-> SLAVE sync: Finished with success

验证

### Master Redis
$ redis-cli INFO replication
# Replication
role:master
connected_slaves:1
slave0:ip=172.16.0.11,port=6380,state=online,offset=463,lag=0
master_repl_offset:463
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:462
### Slave Redis
$ redis-cli -p 6380 INFO replication
# Replication
role:slave
master_host:172.16.0.11
master_port:6379
master_link_status:up
master_last_io_seconds_ago:8
master_sync_in_progress:0
slave_repl_offset:491
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

telnet 测试

$ telnet 172.16.0.11 6379
Trying 172.16.0.11...
Connected to 172.16.0.11.
Escape character is '^]'.
PING
+PONG
REPLCONF listening-port 6381
+OK
SYNC
$31
REDIS0006þkeyhello:V1엳hell
-ERR unknown command 'Xshell'
*1
$4
PING
GET key
$5
hello
*1
$4
PING

复制初始化

当一个从数据库启动后,会向主数据库发送SYNC命令。主数据库接收到 SYNC 命令后,会开始在后台保存快照(即RDB 持久化的过程),并将保存快照期间接收到的命令缓存起来。当快照完成后,Redis 会将快照文件和所有缓存的命令发送给从数据库。从数据库收到后,会载入快照文件并执行收到的缓存命令。以上过程称为复制初始化。

复制初始化结束后,主数据库每当收到写命令时就会将命令同步给从数据库,从而保证主从数据库数据一致。这一过程为复制同步阶段

当主从数据库之间的连接断开重连后,Redis 2.6 以及之前的版本会重新进行复制初始化(即主数据库重新保存快照并传送给从数据库),即使从数据库可能仅有几条命令没有收到,主数据库也必须要将数据库里的所有数据重新传送给从数据库。这使得主从数据库断线重连后的数据恢复过程效率很低下,在网络环境不好的时候这一问题尤其明显。Redis 2.8 版的一个重要改进就是断线重连能够支持有条件的增量数据传输(从数据库向主数据库发送PSYNC命令):当从数据库重新连接上主数据库后,主数据库只需要将断线期间执行的命令传送给从数据库,从而大大提高 Redis 复制的实用性。

从数据库会将收到的内容写入到硬盘上的临时文件中,当写入完成后从数据库会用该临时文件替换 RDB 快照文件(RDB 快照文件的位置就是持久化时配置的位置,由 dir 和 dbfilename 两个参数确定),之后的操作就和 RDB 持久化时启动恢复的过程一样了。需要注意的是在同步的过程中从数据库并不会阻塞,而是可以继续处理客户端发来的命令。默认情况下,从数据库会用同步前的数据对命令进行响应。可以配置slave-serve-stale-data参数为 no 来使从数据库在同步完成前对所有命令(除了 INFO 和 SLAVEOF)都回复错误:“SYNC with master in progress. ”

乐观复制

Redis 采用了乐观复制(optimistic replication)的复制策略,容忍在一定时间内主从数据库的内容是不同的,但是两者的数据会最终同步。具体来说,Redis 在主从数据库之间复制数据的过程本身是异步的,这意味着,主数据库执行完客户端请求的命令后会立即将命令在主数据库的执行结果返回给客户端,并异步地将命令同步给从数据库,而不会等待从数据库接收到该命令后再返回给客户端。这一特性保证了启用复制后主数据库的性能不会受到影响,但另一方面也会产生一个主从数据库数据不一致的时间窗口:当主数据库执行了一条写命令后,主数据库的数据已经发生了变动,然而在主数据库将该命令传送给从数据库之前,如果两个数据库之间的网络连接断开了,此时二者之间的数据就会是不一致的。

从这个角度来看,主数据库是无法得知某个命令最终同步给了多少个从数据库的,不过 Redis 提供了两个配置选项来限制:只有当数据至少同步给指定数量的从数据库时,主数据库才是可写的:

# grep -iE 'min-slaves' -B2 /etc/redis.conf
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
min-slaves-to-write 3
min-slaves-max-lag 10
--
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
  • min-slaves-to-write:多少个以上的从数据库连接到主数据库时,主数据库才是可写的
  • min-slaves-max-lag:允许从数据库最长失去连接的时间(单位秒),如果从数据库最后与主数据库联系(即发送 REPLCONF ACK 命令)的时间小于这个值,则认为从数据库还在保持与主数据库的连接。
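
这两个选项组合后“主库是否可写”的判断,大致如下(示意代码,函数名与参数名为笔者虚构):

```python
def master_writable(slave_lags, min_slaves_to_write=3, min_slaves_max_lag=10):
    """slave_lags: 各从库距最近一次 REPLCONF ACK 的秒数列表;
    滞后不超过 max-lag 的“健康”从库数量达到 min-slaves-to-write 时,主库才可写。"""
    if min_slaves_to_write == 0:          # 设为 0 表示关闭该特性
        return True
    healthy = sum(1 for lag in slave_lags if lag <= min_slaves_max_lag)
    return healthy >= min_slaves_to_write

print(master_writable([1, 2, 3]))        # True:3 个从库都在 10 秒内与主库有联系
print(master_writable([1, 30, 40]))      # False:只剩 1 个健康从库,主库拒绝写入
```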

slave 数据库持久化

为了提高性能,可以通过复制功能建立一个(或若干个)从数据库,并在从数据库中启用持久化,同时在主数据库禁用持久化(不推荐)。当从数据库崩溃重启后主数据库会自动将数据同步过来,所以无需担心数据丢失。

然而当主数据库崩溃时,情况就稍显复杂了。手工通过从数据库数据恢复主数据库数据时,需要严格按照以下两步进行。

  1. 在从数据库中使用SLAVEOF NO ONE命令将从数据库提升成主数据库继续服务。

  2. 启动之前崩溃的主数据库,然后使用SLAVEOF命令将其设置成新的主数据库的从数据库,即可将数据同步回来。

注意:当开启复制且主数据库关闭持久化功能时,一定不要使用 Supervisor 以及类似的进程管理工具令主数据库崩溃后自动重启。同样当主数据库所在的服务器因故关闭时,也要避免直接重新启动。这是因为当主数据库重新启动后,因为没有开启持久化功能,所以数据库中所有数据都被清空,这时从数据库依然会从主数据库中接收数据,使得所有从数据库也被清空,导致从数据库的持久化失去意义。

在实现复制同步时,为了避免主数据库未持久化、意外重启导致主从数据库数据被清空的情况,Redis 2.8 开始引入了sentinel哨兵机制来监控主从服务器的运行状态,自动实现主从切换,而无需人工干预。

无硬盘复制

Redis 的复制默认是基于RDB 持久化实现的,即主数据库端在后台保存 RDB 快照,从数据库端则接收并载入快照文件。这样实现的优点是可以显著地简化逻辑,复用已有的代码,但是缺点也很明显。

  1. 当主数据库禁用 RDB 快照时(即删除了所有的配置文件中的 save 语句),如果执行了复制初始化操作, Redis 依然会生成 RDB 快照,所以下次启动后主数据库会以该快照恢复数据。因为复制发生的时间不能确定,这使得恢复的数据可能是任何时间点的。

  2. 因为复制初始化时需要在硬盘中创建 RDB 快照文件,所以如果硬盘性能很慢(如网络硬盘)时这一过程会对性能产生影响。举例来说,当使用 Redis 做缓存系统时,因为不需要持久化,所以服务器的硬盘读写速度可能较差。但是当该缓存系统使用一主多从的集群架构时,每次和从数据库同步,Redis 都会执行一次快照,同时对硬盘进行读写,导致性能降低。

因此从 2.8.18 版本开始,Redis 引入了无硬盘复制选项,开启该选项时,Redis 在与从数据库进行复制初始化时将不会将快照内容存储到硬盘上,而是直接通过网络发送给从数据库,避免了硬盘的性能瓶颈。

# grep -iE 'repl-diskless-sync' /etc/redis.conf
repl-diskless-sync no
repl-diskless-sync-delay 5

增量复制

Redis 复制在2.6 版本以前,当主从数据库连接断开后,从数据库会发送 SYNC 命令来重新进行一次完整复制操作。这样即使断开期间数据库的变化很小(甚至没有),也需要将数据库中的所有数据重新快照并传送一次。Redis 2.8 版本相对 2.6 版本的最重要的更新之一就是实现了主从断线重连的情况下的增量复制。

增量复制是基于如下 3 点实现的:

  1. 从数据库会存储主数据库的运行 ID(run id)。每个 Redis 运行实例均会拥有一个唯一的运行 ID,每当实例重启后,就会自动生成一个新的运行 ID。

  2. 在复制同步阶段,主数据库每将一个命令传送给从数据库时,都会同时把该命令存放到一个积压队列(backlog)中,并记录下当前积压队列中存放的命令的偏移量范围。

  3. 同时,从数据库接收到主数据库传来的命令时,会记录下该命令的偏移量。

这 3 点是实现增量复制的基础。当主从连接准备就绪后,从数据库会发送一条 SYNC 命令来告诉主数据库可以开始把所有数据同步过来了。而 2.8 版本之后,不再发送 SYNC 命令,取而代之的是发送 PSYNC,格式为“PSYNC <主数据库的运行ID> <断开前最新的命令偏移量>”

主数据库收到 PSYNC 命令后,会执行以下判断来决定此次重连是否可以执行增量复制。

  1. 首先主数据库会判断从数据库传送来的运行 ID 是否和自己的运行 ID 相同。这一步骤的意义在于确保从数据库之前确实是和自己同步的,以免从数据库拿到错误的数据(比如主数据库在断线期间重启过,会造成数据的不一致)。

  2. 然后判断从数据库最后同步成功的命令偏移量是否在积压队列中,如果在则可以执行增量复制,并将积压队列中相应的命令发送给从数据库。

如果此次重连不满足增量复制的条件,主数据库会进行一次全部同步(即与 Redis 2.6 的过程相同)。
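
上述两步判断可以概括为如下的小函数(示意,积压队列偏移量的边界处理与真实实现可能略有出入):

```python
def psync_decision(master_runid, backlog_start, backlog_end,
                   slave_runid, slave_offset):
    """返回 'partial'(增量复制)或 'full'(全部同步)。
    backlog_start/backlog_end 为积压队列当前覆盖的命令偏移量范围。"""
    if slave_runid != master_runid:                    # 主库断线期间重启过,数据可能不一致
        return "full"
    if backlog_start <= slave_offset <= backlog_end:   # 断线期间的命令仍在积压队列里
        return "partial"
    return "full"

print(psync_decision("abc123", 1000, 2000, "abc123", 1500))  # partial
print(psync_decision("abc123", 1000, 2000, "old-id", 1500))  # full:运行 ID 不匹配
print(psync_decision("abc123", 1000, 2000, "abc123", 500))   # full:偏移量已被挤出队列
```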

大部分情况下,增量复制的过程对开发者来说是完全透明的,开发者不需要关心增量复制的具体细节。2.8 版本的主数据库也可以正常地和旧版本的从数据库同步(通过接收 SYNC 命令),同样 2.8 版本的从数据库也可以与旧版本的主数据库同步(通过发送 SYNC 命令)。唯一需要开发者设置的就是积压队列的大小了。

积压队列在本质上是一个固定长度的循环队列,默认情况下积压队列的大小为 1 MB,可以通过配置文件的repl-backlog-size选项来调整。很容易理解的是,积压队列越大,其允许的主从数据库断线的时间就越长。根据主从数据库之间的网络状态,设置一个合理的积压队列很重要。因为积压队列存储的内容是命令本身,如 SET foo bar,所以估算积压队列的大小只需要估计主从数据库断线的时间中主数据库可能执行的命令的大小即可。
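
按这一思路,积压队列大小可以这样粗略估算(估算公式为笔者假设的经验做法,仅供参考):

```python
def estimate_backlog_size(write_bytes_per_second, expected_disconnect_seconds,
                          safety_factor=2):
    """积压队列应能容纳断线期间主库产生的写命令,再乘一个保险系数。"""
    return write_bytes_per_second * expected_disconnect_seconds * safety_factor

# 例如:平均每秒产生 100 KB 写命令,希望容忍 60 秒断线
size = estimate_backlog_size(100 * 1024, 60)
print(size // (1024 * 1024), "MB")   # 约 11 MB,可对应 repl-backlog-size 12mb 左右
```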

与积压队列相关的另一个配置选项是repl-backlog-ttl,即当所有从数据库与主数据库断开连接后,经过多久时间可以释放积压队列的内存空间。默认时间是 1 小时。

复制相关参数

slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
# repl-timeout 60
repl-disable-tcp-nodelay no
# repl-backlog-size 1mb
# repl-backlog-ttl 3600
slave-priority 100
# min-slaves-to-write 3
# min-slaves-max-lag 10

Redis 哨兵(sentinel)

哨兵的作用就是监控 Redis 系统的运行状况。它的主要功能有:

  • 用于管理多个 redis 服务实现 HA
    • 监控主数据库和从数据库是否正常运行
    • 主数据库出现故障时自动将从数据库转换为主数据库

启动过程

启动方式

  • redis-sentinel /path/to/redis-sentinel.conf
  • redis-server /path/to/redis-sentinel.conf --sentinel

启动步骤

  1. 服务器自身初始化,运行 redis-server 中专用于 sentinel 的代码
  2. 初始化 sentinel 状态,根据给定的配置文件,初始化 master 的服务器列表
  3. 创建连向 master 的连接

专用配置文件:/etc/redis-sentinel.conf

# Example sentinel.conf
# port <sentinel-port>
# The port that this sentinel instance will run on
port 26379
### 使用指定的 ip 和 port 来向 __sentinel__:hello 发送自己的信息
# sentinel announce-ip <ip>
# sentinel announce-port <port>
#
# The above two configuration directives are useful in environments where,
# because of NAT, Sentinel is reachable from outside via a non-local address.
#
# When announce-ip is provided, the Sentinel will claim the specified IP address
# in HELLO messages used to gossip its presence, instead of auto-detecting the
# local address as it usually does.
#
# Similarly when announce-port is provided and is valid and non-zero, Sentinel
# will announce the specified TCP port.
#
# The two options don't need to be used together, if only announce-ip is
# provided, the Sentinel will announce the specified IP and the server port
# as specified by the "port" option. If only announce-port is provided, the
# Sentinel will announce the auto-detected local IP and the specified port.
#
# Example:
#
# sentinel announce-ip 1.2.3.4
# dir <working-directory>
# Every long running process should have a well-defined working directory.
# For Redis Sentinel to chdir to /tmp at startup is the simplest thing
# for the process to don't interfere with administrative tasks such as
# unmounting filesystems.
dir /tmp
### 监控 master 服务器,指定成为领头哨兵所需的法定票数
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
#
# Tells Sentinel to monitor this master, and to consider it in O_DOWN
# (Objectively Down) state only if at least <quorum> sentinels agree.
#
# Note that whatever is the ODOWN quorum, a Sentinel will require to
# be elected by the majority of the known Sentinels in order to
# start a failover, so no failover can be performed in minority.
#
# Slaves are auto-discovered, so you don't need to specify slaves in
# any way. Sentinel itself will rewrite this configuration file adding
# the slaves using additional configuration options.
# Also note that the configuration file is rewritten when a
# slave is promoted to master.
#
# Note: master name should not include special characters or spaces.
# The valid charset is A-z 0-9 and the three characters ".-_".
sentinel monitor mymaster 127.0.0.1 6379 2
### master 服务器认证请求
# sentinel auth-pass <master-name> <password>
#
# Set the password to use to authenticate with the master and slaves.
# Useful if there is a password set in the Redis instances to monitor.
#
# Note that the master password is also used for slaves, so it is not
# possible to set a different password in masters and slaves instances
# if you want to be able to monitor these instances with Sentinel.
#
# However you can have Redis instances without the authentication enabled
# mixed with Redis instances requiring the authentication (as long as the
# password set is the same for all the instances requiring the password) as
# the AUTH command will have no effect in Redis instances with authentication
# switched off.
#
# Example:
#
# sentinel auth-pass mymaster MySUPER--secret-0123passw0rd
### 主观认为 master 服务器离线的超时时长
# sentinel down-after-milliseconds <master-name> <milliseconds>
#
# Number of milliseconds the master (or any attached slave or sentinel) should
# be unreachable (as in, not acceptable reply to PING, continuously, for the
# specified period) in order to consider it in S_DOWN state (Subjectively
# Down).
#
# Default is 30 seconds.
sentinel down-after-milliseconds mymaster 30000
### 指定多少个 slave 服务器,在故障转移期间可以跟 master 服务器进行同步
# sentinel parallel-syncs <master-name> <numslaves>
#
# How many slaves we can reconfigure to point to the new slave simultaneously
# during the failover. Use a low number if you use the slaves to serve query
# to avoid that all the slaves will be unreachable at about the same
# time while performing the synchronization with the master.
sentinel parallel-syncs mymaster 1
### 故障转移的超时时长,slave 服务器提升为 master 失败的允许时间,进而重新选举其它的 slave 为 master
# sentinel failover-timeout <master-name> <milliseconds>
#
# Specifies the failover timeout in milliseconds. It is used in many ways:
#
# - The time needed to re-start a failover after a previous failover was
# already tried against the same master by a given Sentinel, is two
# times the failover timeout.
#
# - The time needed for a slave replicating to a wrong master according
# to a Sentinel current configuration, to be forced to replicate
# with the right master, is exactly the failover timeout (counting since
# the moment a Sentinel detected the misconfiguration).
#
# - The time needed to cancel a failover that is already in progress but
# did not produced any configuration change (SLAVEOF NO ONE yet not
# acknowledged by the promoted slave).
#
# - The maximum time a failover in progress waits for all the slaves to be
# reconfigured as slaves of the new master. However even after this time
# the slaves will be reconfigured by the Sentinels anyway, but not with
# the exact parallel-syncs progression as specified.
#
# Default is 3 minutes.
sentinel failover-timeout mymaster 180000
# SCRIPTS EXECUTION
#
# sentinel notification-script and sentinel reconfig-script are used in order
# to configure scripts that are called to notify the system administrator
# or to reconfigure clients after a failover. The scripts are executed
# with the following rules for error handling:
#
# If script exits with "1" the execution is retried later (up to a maximum
# number of times currently set to 10).
#
# If script exits with "2" (or an higher value) the script execution is
# not retried.
#
# If script terminates because it receives a signal the behavior is the same
# as exit code 1.
#
# A script has a maximum running time of 60 seconds. After this limit is
# reached the script is terminated with a SIGKILL and the execution retried.
### 故障转移期间,master role 转变时,调用的异常报警脚本
# NOTIFICATION SCRIPT
#
# sentinel notification-script <master-name> <script-path>
#
# Call the specified notification script for any sentinel event that is
# generated in the WARNING level (for instance -sdown, -odown, and so forth).
# This script should notify the system administrator via email, SMS, or any
# other messaging system, that there is something wrong with the monitored
# Redis systems.
#
# The script is called with just two arguments: the first is the event type
# and the second the event description.
#
# The script must exist and be executable in order for sentinel to start if
# this option is provided.
#
# Example:
#
# sentinel notification-script mymaster /var/redis/notify.sh
### 故障转移期之后,master 发生 role 变化,配置通知客户端的脚本
# CLIENTS RECONFIGURATION SCRIPT
#
# sentinel client-reconfig-script <master-name> <script-path>
#
# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
#
# The following arguments are passed to the script:
#
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
#
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# (now a master).
#
# This script should be resistant to multiple invocations.
#
# Example:
#
# sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
#
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile /var/log/redis/sentinel.log

专用命令

  • SENTINEL masters:列出所有被监控的 master 服务器
  • SENTINEL slaves <master-name>:获取指定 master 服务器的所有 slave 服务器
  • SENTINEL get-master-addr-by-name <master-name>:获取指定名称的 master 服务器的 ip 地址和端口
  • SENTINEL reset <pattern>:重置名称匹配指定模式的 master 的状态
  • SENTINEL failover <master-name>:强制对指定 master 进行故障转移

sentinel 原理

一个哨兵进程启动时会读取配置文件的内容,通过如下的配置找出需要监控的主数据库:

sentinel monitor <master-name> <ip> <redis-port> <quorum>
  • master-name:是一个由大小写字母、数字和“.-_”组成的主数据库的名字,因为考虑到故障恢复后当前监控的系统的主数据库的地址和端口会产生变化,所以哨兵提供了命令可以通过主数据库的名字获取当前系统的主数据库的地址和端口号。

  • quorum:当有多个 sentinel 节点时,至少需要多少个 sentinel 同意。

一个哨兵节点可以同时监控多个 Redis 主从系统,只需要提供多个 sentinel monitor 配置即可。同时多个哨兵节点也可以同时监控同一个 Redis 主从系统,从而形成网状结构。

哨兵启动后,会与要监控的主数据库建立两条连接,这两个连接的建立方式与普通的 Redis 客户端无异。其中一条连接用来订阅该主数据的__sentinel__:hello频道以获取其他同样监控该数据库的哨兵节点的信息,另外哨兵也需要定期向主数据库发送INFO等命令来获取主数据库本身的信息,当客户端的连接进入订阅模式时就不能再执行其他命令了,所以这时哨兵会使用另外一条连接来发送这些命令。

和主数据库的连接建立完成后,哨兵会定时执行下面 3 个操作。

  1. 每 10 秒哨兵会向主数据库和从数据库发送INFO命令。

  2. 每 2 秒哨兵会向主数据库和从数据库的__sentinel__:hello频道发送自己的信息。

  3. 每 1 秒哨兵会向主数据库、从数据库和其他哨兵节点发送PING命令。

这 3 个操作贯穿哨兵进程的整个生命周期中,非常重要。

具体实现

首先,发送 INFO 命令使得哨兵可以获得当前数据库的相关信息(包括运行 ID、复制信息等)从而实现新节点的自动发现。配置哨兵监控 Redis 主从系统时只需要指定主数据库的信息即可,因为哨兵正是借助 INFO 命令来获取所有复制该主数据库的从数据库信息的。启动后,哨兵向主数据库发送 INFO 命令,通过解析返回结果来得知从数据库列表,而后对每个从数据库同样建立两个连接,两个连接的作用和前文介绍的与主数据库建立的两个连接完全一致。在此之后,哨兵会每 10 秒定时向已知的所有主从数据库发送 INFO 命令来获取信息更新并进行相应操作,比如对新增的从数据库建立连接并加入监控列表,对主从数据库的角色变化(由故障恢复操作引起)进行信息更新等。

接下来哨兵向主从数据库的 __sentinel__:hello 频道发送信息来与同样监控该数据库的哨兵分享自己的信息。发送的消息内容为:

<哨兵的地址>, <哨兵的端口>, <哨兵的运行ID>, <哨兵的配置版本>, <主数据库的名字>, <主数据库的地址>, <主数据库的端口>, <主数据库的配置版本>

发送的消息包括哨兵的基本信息,以及其监控的主数据库的信息。哨兵会订阅每个其监控的数据库的 __sentinel__:hello 频道,所以当其他哨兵收到消息后,会判断发消息的哨兵是不是新发现的哨兵。如果是则将其加入已发现的哨兵列表中并创建一个到其的连接(与数据库不同,哨兵与哨兵之间只会创建一条连接用来发送 PING 命令,而不需要创建另外一条连接来订阅频道,因为哨兵只需要订阅数据库的频道即可实现自动发现其他哨兵)。同时哨兵会判断信息中主数据库的配置版本,如果该版本比当前记录的主数据库的版本高,则更新主数据库的数据。

实现了自动发现从数据库和其他哨兵节点后,哨兵要做的就是定时监控这些数据库和节点有没有停止服务。这是通过每隔一定时间向这些节点发送 PING 命令实现的。时间间隔与 down-after-milliseconds 选项有关,当 down-after-milliseconds 的值小于 1 秒时,哨兵会每隔 down-after-milliseconds 指定的时间发送一次 PING 命令,当 down-after-milliseconds 的值大于 1 秒时,哨兵会每隔 1 秒发送一次 PING 命令。例如:

// 每隔1秒发送一次PING命令
sentinel down-after-milliseconds mymaster 60000
// 每隔600毫秒发送一次PING命令
sentinel down-after-milliseconds othermaster 600

Raft 算法

当超过 down-after-milliseconds 选项指定时间后,如果被 PING 的数据库或节点仍然未进行回复,则哨兵认为其主观下线(subjectively down)。主观下线表示从当前的哨兵进程看来,该节点已经下线。如果该节点是主数据库,则哨兵会进一步判断是否需要对其进行故障恢复:哨兵发送 SENTINEL is-master-down-by-addr 命令询问其他哨兵节点以了解他们是否也认为该主数据库主观下线,如果达到指定数量时,哨兵会认为其客观下线(objectively down),并选举领头的哨兵节点对主从系统发起故障恢复。这个指定数量即为 quorum 参数。

选举领头哨兵的过程使用了 Raft 算法,具体过程如下:

  1. 发现主数据库客观下线的哨兵节点(下面称作 A)向每个哨兵节点发送命令,要求对方选自己成为领头哨兵。
  2. 如果目标哨兵节点没有选过其他人,则会同意将 A 设置成领头哨兵。
  3. 如果 A 发现有超过半数且超过 quorum 参数值的哨兵节点同意选自己成为领头哨兵,则 A 成功成为领头哨兵。
  4. 当有多个哨兵节点同时参选领头哨兵,则会出现没有任何节点当选的可能。此时每个参选节点将等待一个随机时间重新发起参选请求,进行下一轮选举,直到选举成功。
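
第 3 步“超过半数且不少于 quorum”的当选条件可以概括为如下示意代码(函数名为笔者虚构):

```python
def is_leader(votes_received, total_sentinels, quorum):
    """同时满足“超过半数”和“不少于 quorum”两个条件才能当选领头哨兵。"""
    return votes_received > total_sentinels // 2 and votes_received >= quorum

print(is_leader(votes_received=3, total_sentinels=5, quorum=3))  # True
print(is_leader(votes_received=2, total_sentinels=5, quorum=2))  # False:未超过半数
```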

选出领头哨兵后,领头哨兵将会开始对主数据库进行故障恢复。故障恢复的过程相对简单,具体如下:

首先领头哨兵将从停止服务的主数据库的从数据库中挑选一个来充当新的主数据库。挑选的依据如下。

  1. 所有在线的从数据库中,选择优先级最高的从数据库。优先级可以通过 slave-priority 选项来设置。

  2. 如果有多个最高优先级的从数据库,则复制的命令偏移量越大越优先。

  3. 如果以上条件都一样,则选择运行 ID 较小的从数据库。
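
上面的挑选依据相当于一次多键排序,可以用 Python 的 min 示意(此处假设 slave-priority 数值越小优先级越高,即 Redis 中该选项的实际语义;运行 ID 按字典序比较):

```python
# 每个候选从库:(slave-priority, 复制偏移量, 运行 ID)
candidates = [
    (100, 2000, "runid-b"),
    (100, 3000, "runid-c"),
    (90,  1000, "runid-a"),
]

def pick_new_master(slaves):
    # 依次比较:priority 越小越优先 -> 偏移量越大越优先 -> 运行 ID 越小越优先
    return min(slaves, key=lambda s: (s[0], -s[1], s[2]))

print(pick_new_master(candidates))   # (90, 1000, 'runid-a'):priority 优先于偏移量
```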

选出一个从数据库后,领头哨兵将向从数据库发送 SLAVEOF NO ONE 命令使其升格为主数据库。而后领头哨兵向其他从数据库发送 SLAVEOF 命令来使其成为新主数据库的从数据库。最后一步则是更新内部的记录,将已经停止服务的旧的主数据库更新为新的主数据库的从数据库,使得当其恢复服务时自动以从数据库的身份继续服务。

哨兵的部署

哨兵以独立进程的方式对一个主从系统进行监控,监控的效果好坏与否取决于哨兵的视角是否有代表性。如果一个主从系统中配置的哨兵较少,哨兵对整个系统的判断的可靠性就会降低。极端情况下,当只有一个哨兵时,哨兵本身就可能会发生单点故障。整体来讲,相对稳妥的哨兵部署方案是使得哨兵的视角尽可能地与每个节点的视角一致,即:

  1. 每个节点(无论是主数据库还是从数据库)部署一个哨兵
  2. 使每个哨兵与其对应的节点的网络环境相同或相近

这样的部署方案可以保证哨兵的视角拥有较高的代表性和可靠性。举个例子:当网络分区后,如果哨兵认为某个分区是主要分区,即意味着从每个节点观察,该分区均为主分区。

同时设置quorum的值为 N/2 + 1(其中 N 为哨兵节点数量),这样使得只有当大部分哨兵节点同意后才会进行故障恢复。

当系统中的节点较多时,考虑到每个哨兵都会和系统中的所有节点建立连接,为每个节点分配一个哨兵会产生较多连接,尤其是当进行客户端分片时使用多个哨兵节点监控多个主数据库会因为 Redis 不支持连接复用而产生大量冗余连接,具体可以见此issue:https://github.com/antirez/redis/issues/2257;同时如果 Redis 节点负载较高,会在一定程度上影响其对哨兵的回复以及与其同机的哨兵与其他节点的通信。所以配置哨兵时还需要根据实际的生产环境情况进行选择。

示例配置

port 26379
dir "/data/redis-sentinel"
daemonize yes
protected-mode no
loglevel notice
logfile "/data/redis-sentinel/redis-sentinel.log"
sentinel myid 7e4a8c7fd306393115958b83a94c9ee5f1497cb1
sentinel monitor master 192.168.112.102 6379 1
sentinel down-after-milliseconds master 5000
sentinel failover-timeout master 15000
sentinel auth-pass master redispass
# Generated by CONFIG REWRITE
sentinel config-epoch master 10
sentinel leader-epoch master 10
sentinel known-slave master 192.168.112.101 6379
sentinel known-slave master 192.168.112.103 6379
sentinel known-sentinel master 192.168.112.103 26379 47852940dd93a39643bbc8a74b1f3b469cc4d7f2
sentinel known-sentinel master 192.168.112.101 26379 40748f243b9da42128eb5af0a8ba2a455fd583f4
sentinel current-epoch 10

Redis 集群(cluster)

Redis 3.0 版的一大特性就是支持集群(Cluster,去中心化)功能。集群的特点在于拥有和单机实例同样的性能,同时在网络分区后能够提供一定的可访问性以及对主数据库故障恢复的支持。另外集群支持几乎所有的单机实例支持的命令,对于涉及多键的命令(如 MGET),如果每个键都位于同一个节点中,则可以正常支持,否则会提示错误。除此之外集群还有一个限制是只能使用默认的0号数据库,如果执行 SELECT 切换数据库则会提示错误。

Cluster 提供分布式数据库,通过分片机制进行数据分布,cluster 内的每个节点仅保存数据库的一部分数据。每个节点都持有全局元数据,但只持有其中一部分键值数据。客户端请求数据时,并不一定由当前接收请求的节点来响应该数据请求。
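
Redis Cluster 的分片方式是将键按 CRC16 校验值对 16384 取模映射到插槽(slot)。下面是该映射的一个简化实现(CRC-16/XMODEM;为简化起见忽略了 {hash tag} 的处理):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM(多项式 0x1021,初始值 0),Redis Cluster 用它计算键的插槽。"""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    return crc16_xmodem(key.encode()) % 16384   # 共 16384 个插槽

# CRC-16/XMODEM 的标准校验值:"123456789" -> 0x31C3
print(hex(crc16_xmodem(b"123456789")))
print(key_slot("foo"), key_slot("bar"))   # 不同的键落在 0~16383 的不同插槽上
```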

哨兵与集群是两个独立的功能,但从特性来看哨兵可以视为集群的子集,当不需要数据分片或者已经在客户端进行分片的场景下哨兵就足够使用了,但如果需要进行水平扩容,则集群是一个非常好的选择。

Redis 集群方案:

  • Twemproxy(Twitter):代理分布机制
  • Codis(豌豆荚):代理分布机制
  • Redis Cluster(官方)
  • Cerberus(芒果TV)

配置集群

使用集群,只需要将每个数据库节点的cluster-enabled配置选项打开即可。每个集群中至少需要 3 个主数据库才能正常运行。考虑到主从复制,一个集群至少需要 6 个 Redis 实例,即可配置成一个 3 主 3 从的集群系统。

redis.conf 配置文件

bind 0.0.0.0
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 0
daemonize no
supervised no
pidfile /data/database/redis/redis_6379.pid
loglevel notice
logfile "/data/database/redis/redis_6379.log"
databases 16
#requirepass devappwsx
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dir ./
slave-serve-stale-data yes
slave-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
slave-priority 100
maxmemory 1gb
appendonly yes
appendfilename "appendonly_6379.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
lua-time-limit 5000
cluster-enabled no
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
aof-rewrite-incremental-fsync yes

初始化集群

# cat cluster-init
#!/bin/bash
# start 8001
cd /data/database/redis_cluster/r8001 && \
redis-server redis.conf
# start 8002
cd /data/database/redis_cluster/r8002 && \
redis-server redis.conf
# start 8003
cd /data/database/redis_cluster/r8003 && \
redis-server redis.conf
IPADDR=$(ifconfig eth0 | grep -oP '\d.+(?= B)')
/tmp/redis-3.2.1/src/redis-trib.rb create ${IPADDR}:8001 ${IPADDR}:8002 ${IPADDR}:8003
ps -ef | grep redis-server | grep -v grep | awk '{printf ("%s %s\n","kill",$2)}' | bash
sed -i 's/^#masterauth devappwsx/masterauth devappwsx/' /data/database/redis_cluster/r8001/redis.conf
sed -i 's/^#requirepass devappwsx/requirepass devappwsx/' /data/database/redis_cluster/r8001/redis.conf
sed -i 's/^#masterauth devappwsx/masterauth devappwsx/' /data/database/redis_cluster/r8002/redis.conf
sed -i 's/^#requirepass devappwsx/requirepass devappwsx/' /data/database/redis_cluster/r8002/redis.conf
sed -i 's/^#masterauth devappwsx/masterauth devappwsx/' /data/database/redis_cluster/r8003/redis.conf
sed -i 's/^#requirepass devappwsx/requirepass devappwsx/' /data/database/redis_cluster/r8003/redis.conf

Redis 命令行客户端提供了集群模式来支持自动重定向,使用 -c 参数来启用。加入了 -c 参数后,如果当前节点并不负责要处理的键,Redis命令行客户端会进行自动命令重定向。而这一过程正是每个支持集群的客户端应该实现的。

redis-cli -h localhost -c -a devappwsx

然而相比单机实例,集群的命令重定向也增加了命令的请求次数:原先只需要执行一次的命令现在有可能需要依次发向两个节点,算上往返时延,可以说请求重定向对性能还是有一些影响的。

为了解决这一问题,当发现新的重定向请求时,客户端应该在重新向正确节点发送命令的同时,缓存插槽的路由信息,即记录下当前插槽是由哪个节点负责的。这样每次发起命令时,客户端首先计算相关键是属于哪个插槽的,然后根据缓存的路由判断插槽由哪个节点负责。考虑到插槽总数相对较少(16384个),缓存所有插槽的路由信息后,每次命令将均只发向正确的节点,从而达到和单机实例同样的性能。

Redis Docker

Dockerfile

Single Redis

FROM gd2a-harbor.service/library/ubuntu:14.04.5
LABEL maintainer "hj.mallux@gmail.com"
ENV LANG en_US.UTF-8
ENV TZ Asia/Shanghai
RUN echo 'LANG="en_US.UTF-8"' > /etc/default/locale && \
echo 'LANGUAGE="en_US:en"' >> /etc/default/locale && \
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && \
echo $TZ > /etc/timezone && \
cd /usr/share/i18n/charmaps && \
gunzip UTF-8.gz && \
localedef -f UTF-8 -i zh_CN /usr/lib/locale/en_US.utf8
COPY ./archives/sources.list /etc/apt/sources.list
COPY ./archives/vimrc /root/.vimrc
RUN apt-get update && \
apt-get install -y apt-transport-https curl wget vim lrzsz net-tools \
ca-certificates \
gcc \
make \
libjemalloc1 && \
apt-get clean && rm -rf /var/lib/apt/lists/*
RUN sed -i '/^#force_color_prompt=yes/ s/#//' /root/.bashrc && \
sed -i "/^if \[ \"\$color_prompt\" = yes \]/ { N; s/\(.*PS1\).*/\1='\${debian_chroot:+(\$debian_chroot)}[\\\[\\\e[0;32;1m\\\]\\\u\\\[\\\e[0m\\\]@\\\[\\\e[0;36;1m\\\]\\\h\\\[\\\e[0m\\\] \\\[\\\e[0;33;1m\\\]\\\W\\\[\\\e[0m\\\]]\\\\$ '/}" /root/.bashrc
ENV TINI_VERSION 0.14.0
ENV TINI_SHA 6c41ec7d33e857d4779f14d9c74924cab0c7973485d2972419a3b7c7620ff5fd
ARG TINI_DOWNURL=http://192.168.251.4:88/archives/tini-static-amd64
## Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL ${TINI_DOWNURL} -o /usr/local/bin/tini && \
chmod +x /usr/local/bin/tini && \
echo "${TINI_SHA} /usr/local/bin/tini" | sha256sum -c -
ENV REDIS_USER redis
ENV REDIS_GROUP redis
ENV REDIS_VERSION 3.2.6
ENV REDIS_HOME /opt/redis
ENV REDIS_DATA /data/redis
ENV REDIS_PORT 6379
ARG REDIS_DOWNURL=http://192.168.251.4:88/archives/redis-${REDIS_VERSION}.tar.gz
RUN mkdir -p /opt && \
curl --fail --silent --location --retry 3 \
${REDIS_DOWNURL} | \
gunzip | \
tar -x -C /opt && \
ln -sf /opt/redis-${REDIS_VERSION} ${REDIS_HOME} && \
cd ${REDIS_HOME} && make && make install
RUN groupadd ${REDIS_USER} && \
useradd -g ${REDIS_GROUP} ${REDIS_USER} && \
mkdir -p ${REDIS_DATA}/${REDIS_PORT} /etc/redis /var/run/redis
VOLUME ${REDIS_DATA}
COPY ./archives/gosu-amd64 /usr/local/bin/gosu
COPY ./redis/redis /etc/init.d/redis
COPY ./redis/redis.conf /etc/redis/${REDIS_PORT}.conf
COPY ./redis/entrypoint.sh /
RUN chown ${REDIS_USER}:${REDIS_GROUP} -R ${REDIS_DATA} /etc/redis /var/run/redis && \
chmod +x /usr/local/bin/gosu && \
chmod +x /entrypoint.sh /etc/init.d/redis
EXPOSE ${REDIS_PORT}
HEALTHCHECK --interval=1m --timeout=10s \
CMD nc -w 1 -v localhost ${SERVICE_PORT:-${REDIS_PORT}} 1>/dev/null 2>&1 || exit 1
ENTRYPOINT ["/usr/local/bin/tini", "--", "/entrypoint.sh"]
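The tini download step above pins the binary with a SHA-256 checksum before marking it executable. A minimal sketch of the same verify pattern, using a throwaway file in place of the real binary (the payload here is made up):

```shell
# Stand-in file; in the Dockerfile this is the downloaded tini binary.
printf 'tini-placeholder\n' > /tmp/tini_demo
# Compute its checksum, then verify it the way the Dockerfile does:
# a "<sha256>  <path>" line piped into `sha256sum -c -` exits non-zero on mismatch.
SHA=$(sha256sum /tmp/tini_demo | awk '{ print $1 }')
echo "${SHA}  /tmp/tini_demo" | sha256sum -c -
# → /tmp/tini_demo: OK
```

Because `sha256sum -c` fails the whole `RUN` (and thus the build) on a mismatch, a corrupted or tampered download can never end up in the image.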

Cluster Redis

FROM gd2a-harbor.service/library/ubuntu:14.04.5
LABEL maintainer "hj.mallux@gmail.com"
ENV LANG en_US.UTF-8
ENV TZ Asia/Shanghai
RUN echo 'LANG="en_US.UTF-8"' > /etc/default/locale && \
echo 'LANGUAGE="en_US:en"' >> /etc/default/locale && \
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && \
echo $TZ > /etc/timezone && \
cd /usr/share/i18n/charmaps && \
gunzip UTF-8.gz && \
localedef -f UTF-8 -i zh_CN /usr/lib/locale/en_US.utf8
COPY ./archives/sources.list /etc/apt/sources.list
COPY ./archives/vimrc /root/.vimrc
RUN apt-get update && \
apt-get install -y apt-transport-https curl wget vim lrzsz net-tools \
ca-certificates \
gcc \
make \
libjemalloc1 \
ruby && \
apt-get clean && rm -rf /var/lib/apt/lists/* && \
gem sources --add https://gems.ruby-china.org/ --remove https://rubygems.org/ && \
gem install redis
RUN sed -i '/^#force_color_prompt=yes/ s/#//' /root/.bashrc && \
sed -i "/^if \[ \"\$color_prompt\" = yes \]/ { N; s/\(.*PS1\).*/\1='\${debian_chroot:+(\$debian_chroot)}[\\\[\\\e[0;32;1m\\\]\\\u\\\[\\\e[0m\\\]@\\\[\\\e[0;36;1m\\\]\\\h\\\[\\\e[0m\\\] \\\[\\\e[0;33;1m\\\]\\\W\\\[\\\e[0m\\\]]\\\\$ '/}" /root/.bashrc
ENV TINI_VERSION 0.14.0
ENV TINI_SHA 6c41ec7d33e857d4779f14d9c74924cab0c7973485d2972419a3b7c7620ff5fd
ARG TINI_DOWNURL=http://192.168.251.4:88/archives/tini-static-amd64
## Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL ${TINI_DOWNURL} -o /usr/local/bin/tini && \
chmod +x /usr/local/bin/tini && \
echo "${TINI_SHA} /usr/local/bin/tini" | sha256sum -c -
ENV REDIS_USER redis
ENV REDIS_GROUP redis
ENV REDIS_VERSION 3.2.6
ENV REDIS_HOME /opt/redis
ENV REDIS_DATA /data/redis_cluster
ENV REDIS_PORT 8001
ARG REDIS_DOWNURL=http://192.168.251.4:88/archives/redis-${REDIS_VERSION}.tar.gz
RUN mkdir -p /opt && \
curl --fail --silent --location --retry 3 \
${REDIS_DOWNURL} | \
gunzip | \
tar -x -C /opt && \
ln -sf /opt/redis-${REDIS_VERSION} ${REDIS_HOME} && \
cd ${REDIS_HOME} && make && make install
RUN groupadd ${REDIS_GROUP} && \
useradd -g ${REDIS_GROUP} ${REDIS_USER} && \
mkdir -p ${REDIS_DATA}/${REDIS_PORT} /etc/redis /var/run/redis
VOLUME ${REDIS_DATA}
COPY ./archives/gosu-amd64 /usr/local/bin/gosu
COPY ./redis/redis /etc/init.d/redis
COPY ./redis/redis.conf /etc/redis/${REDIS_PORT}.conf
COPY ./redis/entrypoint.sh /
COPY ./redis/create-cluster /
RUN chown ${REDIS_USER}:${REDIS_GROUP} -R ${REDIS_DATA} /etc/redis /var/run/redis && \
chmod +x /usr/local/bin/gosu && \
chmod +x /entrypoint.sh /create-cluster /etc/init.d/redis && \
sed -i "s;/data/redis;/data/redis_cluster;" /entrypoint.sh && \
sed -i "s;/data/redis;/data/redis_cluster;" /create-cluster && \
sed -i "/^#.*default is 6379/! s;6379;${REDIS_PORT};" /etc/redis/${REDIS_PORT}.conf && \
sed -i "s;/data/redis;/data/redis_cluster;" /etc/redis/${REDIS_PORT}.conf && \
sed -i "/^# cluster-enabled yes$/ s;^# ;;" /etc/redis/${REDIS_PORT}.conf && \
sed -i "/^# cluster-config-file nodes-${REDIS_PORT}.conf$/ s;^# ;;" /etc/redis/${REDIS_PORT}.conf
EXPOSE 8001-8006
HEALTHCHECK --interval=1m --timeout=10s \
CMD nc -w 1 -v localhost ${SERVICE_PORT:-${REDIS_PORT}} 1>/dev/null 2>&1 || exit 1
ENTRYPOINT ["/usr/local/bin/tini", "--", "/entrypoint.sh"]
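The chain of `sed` edits near the end of this Dockerfile is what turns the stock config into a cluster one: rewrite the port everywhere except the explanatory comment, then uncomment the two cluster directives. Those substitutions in isolation, run against a hypothetical four-line stand-in for redis.conf (the real file's wording is assumed to match the patterns):

```shell
# Minimal stand-in for the stock redis.conf (assumed wording).
cat > /tmp/demo.conf <<'EOF'
# Accept connections on the specified port, default is 6379.
port 6379
# cluster-enabled yes
# cluster-config-file nodes-6379.conf
EOF
REDIS_PORT=8001
# Rewrite every 6379 except on the "default is 6379" comment line ...
sed -i "/^#.*default is 6379/! s;6379;${REDIS_PORT};" /tmp/demo.conf
# ... then strip the leading "# " from the two cluster directives.
sed -i "/^# cluster-enabled yes$/ s;^# ;;" /tmp/demo.conf
sed -i "/^# cluster-config-file nodes-${REDIS_PORT}.conf$/ s;^# ;;" /tmp/demo.conf
cat /tmp/demo.conf
```

The result keeps the comment intact but activates `port 8001`, `cluster-enabled yes` and `cluster-config-file nodes-8001.conf`.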

Redis scripts

entrypoint.sh

#!/usr/bin/env bash
## --------------------------------------------------
## Filename: entrypoint.sh
## Revision: latest stable
## Author: Mallux
## E-mail: hj.mallux@gmail.com
## Blog: blog.mallux.me
## Description:
## --------------------------------------------------
## Copyright © 2014-2018 Mallux
#set -x #-e
## Exit
trap "__shutdown" EXIT
redis_nodefile="/.redis_node_done"
__shutdown() {
for node in $(cat ${redis_nodefile}) ; do
HOST=$(echo $node | awk -F':' '{ print $1}')
PORT=$(echo $node | awk -F':' '{ print $2}')
REDIS_PORT_CONF="/etc/redis/${PORT}.conf"
REDIS_PASS=$(sed -n 's/^requirepass \(.*\)/\1/p' ${REDIS_PORT_CONF})
echo "Shutdown Redis server ..."
if [ x"${REDIS_PASS}" != x"" ] ; then
redis-cli -h $HOST -p $PORT -a ${REDIS_PASS} shutdown
else
redis-cli -h $HOST -p $PORT shutdown
fi
echo "=> Done!"
done
[ -e ${redis_nodefile} ] && rm -f ${redis_nodefile}
}
PORT=${REDIS_PORT:="6379"} ; NODES=${CLUSTER_NODES:=1} ; ENDPORT=$((PORT+NODES))
IPADDR=$(ifconfig eth0 | grep -oP '\d.+(?= Bcast:|netmask)')
## Redis template and cluster config file
REDIS_TEMPLATE_CONF="/etc/redis/${PORT}.conf"
REDIS_CLUSTER_CONF="/data/redis/${PORT}/nodes-${PORT}.conf"
## starting up Redis cluster service
[ x"$NODES" != x"1" -a ! -e ${REDIS_CLUSTER_CONF} ] && {
/create-cluster start
/create-cluster create
/create-cluster stop
}
set -- "$@" "${ENTRYPOINT_OPTS}" ; args_array=( $@ )
for arg in ${args_array[@]} ; do
case "$arg" in
--auth)
require_pass="true"
;;
--max-memory=*)
max_memory=$(echo $arg | awk -F'=' '{ print $2 }')
;;
esac
shift
done
## starting up Redis service
function gosu_redis {
while [ $((PORT < ENDPORT)) != "0" ] ; do
REDIS_PORT_CONF="/etc/redis/${PORT}.conf"
REDIS_DATA_DIR="/data/redis/${PORT}"
[ ! -e ${REDIS_DATA_DIR} ] && mkdir -p ${REDIS_DATA_DIR}
[ ! -e ${REDIS_PORT_CONF} ] && {
cp -af ${REDIS_TEMPLATE_CONF} ${REDIS_PORT_CONF}
sed -i "/^#.*default is 8001/! s;8001;$PORT;" ${REDIS_PORT_CONF}
}
## Redis cluster config file
REDIS_CLUSTER_CONF="/data/redis/${PORT}/nodes-${PORT}.conf"
[ -e ${REDIS_CLUSTER_CONF} ] && {
oldIP=$(sed -n "s/.* \(.*\):${PORT}.*/\1/p" ${REDIS_CLUSTER_CONF})
[ x"$oldIP" != x"$IPADDR" ] && sed -i "s/$oldIP/$IPADDR/" ${REDIS_CLUSTER_CONF}
}
[ x"$NODES" != x"1" ] && {
sed -i "s/^# \(masterauth\) <master-password>$/\1 ${REDIS_PASS}/" ${REDIS_PORT_CONF}
}
[ x"${require_pass}" == x"true" ] && sed -i "s/^# \(requirepass\) .*/\1 ${REDIS_PASS}/" ${REDIS_PORT_CONF}
[ x"${max_memory}" != x"" ] && sed -i "s/^\(maxmemory\) .*/\1 ${max_memory}/" ${REDIS_PORT_CONF}
## Redis PID file
PIDFILE=/var/run/redis/${PORT}.pid
[ -e $PIDFILE ] && rm -rf $PIDFILE
chown ${REDIS_USER}:${REDIS_GROUP} -R /data/redis/${PORT}
gosu ${REDIS_USER} bash -c "cd /data/redis/${PORT} ;
/etc/init.d/redis -p ${PORT} start"
echo "${IPADDR}:${PORT}" >> ${redis_nodefile}
echo "=> Done!"
PORT=$((PORT+1))
done
}
gosu_redis "$@"
while : ; do
## wait for 60 seconds
sleep 60
## check that the Redis server is still running
nc -w 1 -v localhost ${SERVICE_PORT:-${REDIS_PORT}} 1>/dev/null 2>&1
exit_code=$?
[ x"$exit_code" != x"0" ] && {
echo "your Redis server has stopped, please restart it"
#/etc/init.d/redis -p ${PORT} restart
}
done
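entrypoint.sh records every started instance as an `ip:port` line in `/.redis_node_done`, and `__shutdown` later splits those lines back apart with `awk -F':'` to issue `redis-cli ... shutdown` per node. That bookkeeping in isolation, with a made-up address and port range:

```shell
# Build a node file the way gosu_redis does (addresses here are hypothetical).
redis_nodefile=$(mktemp)
IPADDR=10.0.0.2
for PORT in 8001 8002 8003 ; do
    echo "${IPADDR}:${PORT}" >> ${redis_nodefile}
done
# Parse it back the way __shutdown does, splitting each line on ':'.
: > /tmp/nodes_parsed
for node in $(cat ${redis_nodefile}) ; do
    HOST=$(echo $node | awk -F':' '{ print $1 }')
    PORT=$(echo $node | awk -F':' '{ print $2 }')
    # __shutdown would run redis-cli -h $HOST -p $PORT shutdown here.
    echo "$HOST $PORT" >> /tmp/nodes_parsed
done
rm -f ${redis_nodefile}
cat /tmp/nodes_parsed
```

Keeping the registry in a file (rather than a variable) is what lets the EXIT trap see every node even though the instances were started inside a function.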

create-cluster script

#!/usr/bin/env bash
## --------------------------------------------------
## Filename: create-cluster
## Revision: latest stable
## Author: Mallux
## E-mail: hj.mallux@gmail.com
## Blog: blog.mallux.me
## Description:
## --------------------------------------------------
## Copyright © 2014-2018 Mallux
cmd=${BASH_SOURCE##*/}
usage() {
echo "Usage: $cmd <start|create|stop|watch|tail|call|clean>"
echo
echo -e " start \e[21G Launch Redis Cluster instances."
echo -e " create \e[21G Create a cluster using redis-trib create."
echo -e " stop \e[21G Stop Redis Cluster instances."
echo -e " watch \e[21G Show CLUSTER NODES output (first 30 lines) of first node."
echo -e " tail <id> \e[21G Run tail -f of instance at base port + ID."
echo -e " call <cmd> \e[21G Run a redis-cli command against every instance."
echo -e " clean \e[21G Remove all instances data, logs, configs."
echo
}
[ $# == 0 ] && { usage ; exit 1 ; }
## Settings
PORT=${REDIS_PORT:=8001}
NODES=${CLUSTER_NODES:=3}
REPLICAS=${CLUSTER_NODE_REPLICAS:=0}
TIMEOUT=2000
## You may want to put the above config parameters into config.sh in order to
## override the defaults without modifying this script.
[ -a config.sh ] && source "config.sh"
## Computed vars
ENDPORT=$((PORT+NODES)) ; IPADDR=$(ifconfig eth0 | grep -oP '\d.+(?= Bcast:|netmask)')
## Redis template config file
REDIS_TEMPLATE_CONF="/etc/redis/${PORT}.conf"
if [ "$1" == "start" ] ; then
while [ $((PORT < ENDPORT)) != "0" ] ; do
echo "Starting $PORT"
REDIS_PORT_CONF="/etc/redis/${PORT}.conf"
REDIS_DATA_DIR="/data/redis/${PORT}"
[ ! -e $REDIS_DATA_DIR ] && mkdir -p $REDIS_DATA_DIR
[ ! -e $REDIS_PORT_CONF ] && {
cp -af $REDIS_TEMPLATE_CONF $REDIS_PORT_CONF
sed -i "/^#.*default is 8001/! s;8001;$PORT;" $REDIS_PORT_CONF
}
redis-server $REDIS_PORT_CONF --daemonize yes
echo "=> Done!"
PORT=$((PORT+1))
done
exit 0
fi
if [ "$1" == "create" ] ; then
HOSTS=""
while [ $((PORT < ENDPORT)) != "0" ] ; do
HOSTS="$HOSTS $IPADDR:$PORT"
PORT=$((PORT+1))
done
if [ x"$REPLICAS" != x"0" ] ; then
/opt/redis/src/redis-trib.rb create --replicas $REPLICAS $HOSTS <<-EOF
yes
EOF
else
/opt/redis/src/redis-trib.rb create $HOSTS <<-EOF
yes
EOF
fi
echo "=> Done!"
exit 0
fi
if [ "$1" == "stop" ] ; then
while [ $((PORT < ENDPORT)) != "0" ] ; do
echo "Stopping $PORT"
redis-cli -p $PORT shutdown nosave
echo "=> Done!"
PORT=$((PORT+1))
done
exit 0
fi
if [ "$1" == "watch" ] ; then
while [ 1 ] ; do
clear
date
redis-cli -p $PORT cluster nodes | head -30
sleep 1
done
exit 0
fi
if [ "$1" == "tail" ] ; then
INSTANCE=$2
PORT=$((PORT+INSTANCE))
tail -f /data/redis/${PORT}/redis.log
exit 0
fi
if [ "$1" == "call" ] ; then
while [ $((PORT < ENDPORT)) != "0" ] ; do
redis-cli -p $PORT $2 $3 $4 $5 $6 $7 $8 $9
PORT=$((PORT+1))
done
exit 0
fi
if [ "$1" == "clean" ] ; then
while [ $((PORT < ENDPORT)) != "0" ] ; do
rm -rf /data/redis/${PORT}/*.log
rm -rf /data/redis/${PORT}/appendonly-*.aof
rm -rf /data/redis/${PORT}/dump-*.rdb
rm -rf /data/redis/${PORT}/nodes-*.conf
PORT=$((PORT+1))
done
exit 0
fi
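The `create` branch builds the node list for `redis-trib.rb` with the same base-port arithmetic used everywhere else in the script (`ENDPORT=$((PORT+NODES))`). Sketched standalone with a made-up address, since the real script derives `IPADDR` from `ifconfig eth0`:

```shell
PORT=8001 ; NODES=3 ; ENDPORT=$((PORT+NODES))
IPADDR=192.168.0.10    # hypothetical; create-cluster reads this from eth0
HOSTS=""
while [ $((PORT < ENDPORT)) != "0" ] ; do
    HOSTS="$HOSTS $IPADDR:$PORT"
    PORT=$((PORT+1))
done
# This space-separated list is the argument handed to redis-trib.rb create.
echo "$HOSTS" > /tmp/hosts_demo
cat /tmp/hosts_demo
```

With `CLUSTER_NODES=3` and `CLUSTER_NODE_REPLICAS=0` this yields three master addresses on ports 8001-8003; raising `NODES` and `REPLICAS` lets redis-trib pair masters with slaves automatically.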

redis init script

#!/usr/bin/env bash
## --------------------------------------------------
## Filename: redis
## Revision: latest stable
## Author: Mallux
## E-mail: hj.mallux@gmail.com
## Blog: blog.mallux.me
## Description:
## --------------------------------------------------
## Copyright © 2014-2018 Mallux
## Simple Redis init.d script conceived to work on Linux systems
## as it does use of the /proc filesystem.
cmd=${BASH_SOURCE##*/}
while getopts :p:a: opt
do
case $opt in
p)
[ ${OPTARG:0:1} == '-' ] && {
echo -e "\e[0;33;1mFatal:\e[0m $cmd: option requires an argument -- $opt\n"
exit 1
}
REDIS_PORT=$OPTARG
;;
a)
[ ${OPTARG:0:1} == '-' ] && {
echo -e "\e[0;33;1mFatal:\e[0m $cmd: option requires an argument -- $opt\n"
exit 1
}
REDIS_OPTS="$REDIS_OPTS -$opt $OPTARG"
;;
esac
done
shift $((OPTIND-1))
PORT=${REDIS_PORT:=6379}
EXEC=/usr/local/bin/redis-server
CLIEXEC=/usr/local/bin/redis-cli
PIDFILE=/var/run/redis/${PORT}.pid
CONF=/etc/redis/${PORT}.conf
case "$1" in
start)
if [ -f $PIDFILE ]
then
echo "$PIDFILE exists, process is already running or crashed"
else
echo "Starting Redis server..."
$EXEC $CONF --daemonize yes
fi
;;
stop)
if [ ! -f $PIDFILE ]
then
echo "$PIDFILE does not exist, process is not running"
else
PID=$(cat $PIDFILE)
echo "Stopping ..."
$CLIEXEC -p $PORT $REDIS_OPTS shutdown
while [ -x /proc/${PID} ]
do
echo "Waiting for Redis to shutdown ..."
sleep 1
done
echo "Redis stopped"
fi
;;
status)
if [ ! -f $PIDFILE ] || [ ! -x /proc/$(cat $PIDFILE) ]
then
echo 'Redis is not running'
else
echo "Redis is running ($(cat $PIDFILE))"
fi
;;
restart)
$0 stop
$0 start
;;
*)
echo -e "\e[0;33;1mFatal:\e[0m Please use start, stop, status or restart as first argument.\n"
;;
esac
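The getopts loop at the top of this script only recognizes `-p <port>` and `-a <password>`; anything else falls through silently. Its behavior extracted into a small function (the function name and test values are hypothetical):

```shell
parse_opts() {
    # Reset OPTIND so the function can be called repeatedly.
    local opt OPTIND=1
    local REDIS_PORT="" REDIS_OPTS=""
    # Leading ':' enables getopts silent error mode, as in the init script.
    while getopts :p:a: opt ; do
        case $opt in
            p) REDIS_PORT=$OPTARG ;;                      # listen port
            a) REDIS_OPTS="$REDIS_OPTS -$opt $OPTARG" ;;  # forwarded to redis-cli
        esac
    done
    echo "port=${REDIS_PORT:-6379} opts=${REDIS_OPTS}"
}
parse_opts -p 6380 -a secret > /tmp/parse_demo
cat /tmp/parse_demo   # → port=6380 opts= -a secret
```

Collecting `-a` into `REDIS_OPTS` rather than a dedicated variable is what lets the `stop` branch pass the password straight through to `redis-cli -p $PORT $REDIS_OPTS shutdown`.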
