A summary of problems encountered when creating a Redis cluster on Windows

2020-12-20 22:23 郑子胜 Redis

This article summarizes the problems encountered when creating a Redis cluster on Windows. The steps are described in detail and should be a useful reference for study or work; readers who need it can follow along.

I. Preparation

1. Prepare a Redis instance, delete the dump.rdb file from its directory, and edit its configuration file, redis.windows.conf:

1. Change the port; here it is set to port 7001
2. Uncomment the cluster settings:
 cluster-enabled yes
 cluster-config-file nodes-7001.conf // the file name can be changed
 cluster-node-timeout 15000

 


Make five more copies of this Redis instance, adjust each copy's configuration accordingly, and set the ports to 7001, 7002, 7003, 7004, 7005 and 7006.
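Editing six copies of redis.windows.conf by hand is easy to get wrong, so the copying can also be scripted. Below is a minimal sketch (not part of the original article); it assumes the instance prepared in step 1 lives in a directory named redis-7001 next to the script, and the directory names are assumptions to adjust to your own layout. It copies the directory five times, removes any leftover dump.rdb, and rewrites the port and cluster-config-file lines; cluster-enabled and cluster-node-timeout are simply inherited from the copied file.

require 'fileutils'

base = 'redis-7001' # instance prepared in step 1 (assumed directory name)
[7002, 7003, 7004, 7005, 7006].each do |port|
  dir = "redis-#{port}"
  FileUtils.cp_r(base, dir)                  # copy the whole instance directory
  FileUtils.rm_f(File.join(dir, 'dump.rdb')) # do not carry an old RDB file over
  conf_path = File.join(dir, 'redis.windows.conf')
  conf = File.read(conf_path)
  conf = conf.gsub(/^port .*/, "port #{port}")
  conf = conf.gsub(/^\s*#?\s*cluster-config-file .*/, "cluster-config-file nodes-#{port}.conf")
  File.write(conf_path, conf)
end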

2. Set up the Ruby environment, the Ruby driver for Redis (the redis gem), and redis-trib.rb, the tool used to create the Redis cluster.

(1) Download the Redis package from http://download.redis.io/releases/; the zip of redis-x64-3.2.1 is used here.


(2) Download the Ruby installer: http://dl.bintray.com/oneclick/rubyinstaller/rubyinstaller-2.2.4-x64.exe

(3) Download the Redis driver for the Ruby environment: https://rubygems.org/gems/redis/versions/3.2.2. For compatibility, version 3.2.2 is used here.

Download: https://rubygems.org/downloads/redis-3.2.2.gem
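If the machine cannot reach rubygems.org directly, the downloaded file can also be installed offline from the directory it was saved to; a hedged example (not from the original article): gem install --local redis-3.2.2.gem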


(4) Download redis-trib.rb, the Ruby script that Redis provides for creating clusters. (The copy I downloaded for my own use is included at the end of this article.)

3. Put all the Redis instance directories and redis-trib.rb under a single redis directory.


II. Starting the Redis cluster

1. In the directory containing redis-trib.rb, run the command below; when the prompt appears, answer yes:

redis-trib.rb create --replicas 0 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006
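A note on this command: --replicas 0 asks redis-trib for zero slaves per master, so all six nodes become masters and the 16384 hash slots are split among them. Each of the six servers must already be started (for example by running redis-server.exe redis.windows.conf from each instance directory) with cluster-enabled yes. If the .rb extension is not associated with Ruby on your machine, invoking the interpreter explicitly should also work, for example: ruby redis-trib.rb create --replicas 0 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 127.0.0.1:7006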

III. Problems encountered

1. The redis library is missing:

 Solution: gem install redis (installs the redis library for Ruby).
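If the plain gem install redis pulls a version that is too new for this old redis-trib.rb, pinning the version used in this article should help, e.g. gem install redis -v 3.2.2 (a hedged suggestion; -v selects which gem version to install).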


2. "Slot 15 is already busy": a slot is still assigned from a previous attempt. Solution: open every server and run the flushall, flushdb and cluster reset commands; do this for each Redis instance.
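One way to clear all six nodes before re-running the create command is to script it with the redis gem from the preparation steps. A minimal sketch (not from the original article), assuming the nodes run locally on ports 7001 to 7006:

require 'redis'

(7001..7006).each do |port|
  r = Redis.new(:host => '127.0.0.1', :port => port)
  r.flushall                 # drop every key so the node is empty again
  r.cluster('reset', 'soft') # forget the previous cluster state and slot assignments
end

The same can be done by hand with redis-cli against each port (flushall, then cluster reset); afterwards the redis-trib.rb create command from section II can be run again.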


IV. Resources

redis-trib.rb: copy the script below into a text file and rename the file to redis-trib.rb:

#!/usr/bin/env ruby
 
# todo (temporary here, we'll move this into the github issues once
# redis-trib initial implementation is completed).
#
# - make sure that if the rehashing fails in the middle redis-trib will try
# to recover.
# - when redis-trib performs a cluster check, if it detects a slot move in
# progress it should prompt the user to continue the move from where it
# stopped.
# - gracefully handle ctrl+c in move_slot to prompt the user if really stop
# while rehashing, and performing the best cleanup possible if the user
# forces the quit.
# - when doing "fix" set a global fix to true, and prompt the user to
# fix the problem if automatically fixable every time there is something
# to fix. for instance:
# 1) if there is a node that pretend to receive a slot, or to migrate a
# slot, but has no entries in that slot, fix it.
# 2) if there is a node having keys in slots that are not owned by it
# fix this condition moving the entries in the same node.
# 3) perform more possibly slow tests about the state of the cluster.
# 4) when aborted slot migration is detected, fix it.
 
require 'rubygems'
require 'redis'
 
ClusterHashSlots = 16384
MigrateDefaultTimeout = 60000
MigrateDefaultPipeline = 10
RebalanceDefaultThreshold = 2
 
$verbose = false
 
def xputs(s)
 case s[0..2]
 when ">>>"
 color="29;1"
 when "[er"
 color="31;1"
 when "[wa"
 color="31;1"
 when "[ok"
 color="32"
 when "[fa","***"
 color="33"
 else
 color=nil
 end
 
 color = nil if ENV['TERM'] != "xterm"
 print "\033[#{color}m" if color
 print s
 print "\033[0m" if color
 print "\n"
end
 
class ClusterNode
 def initialize(addr)
 s = addr.split(":")
 if s.length < 2
  puts "invalid ip or port (given as #{addr}) - use ip:port format"
  exit 1
 end
 port = s.pop # removes port from split array
 ip = s.join(":") # if s.length > 1 here, it's ipv6, so restore address
 @r = nil
 @info = {}
 @info[:host] = ip
 @info[:port] = port
 @info[:slots] = {}
 @info[:migrating] = {}
 @info[:importing] = {}
 @info[:replicate] = false
 @dirty = false # true if we need to flush slots info into node.
 @friends = []
 end
 
 def friends
 @friends
 end
 
 def slots
 @info[:slots]
 end
 
 def has_flag?(flag)
 @info[:flags].index(flag)
 end
 
 def to_s
 "#{@info[:host]}:#{@info[:port]}"
 end
 
 def connect(o={})
 return if @r
 print "connecting to node #{self}: " if $verbose
 STDOUT.flush
 begin
  @r = Redis.new(:host => @info[:host], :port => @info[:port], :timeout => 60)
  @r.ping
 rescue
  xputs "[err] sorry, can't connect to node #{self}"
  exit 1 if o[:abort]
  @r = nil
 end
 xputs "ok" if $verbose
 end
 
 def assert_cluster
 info = @r.info
 if !info["cluster_enabled"] || info["cluster_enabled"].to_i == 0
  xputs "[err] node #{self} is not configured as a cluster node."
  exit 1
 end
 end
 
 def assert_empty
 if !(@r.cluster("info").split("\r\n").index("cluster_known_nodes:1")) ||
  (@r.info['db0'])
  xputs "[err] node #{self} is not empty. either the node already knows other nodes (check with cluster nodes) or contains some key in database 0."
  exit 1
 end
 end
 
 def load_info(o={})
 self.connect
 nodes = @r.cluster("nodes").split("\n")
 nodes.each{|n|
  # name addr flags role ping_sent ping_recv link_status slots
  split = n.split
  name,addr,flags,master_id,ping_sent,ping_recv,config_epoch,link_status = split[0..6]
  slots = split[8..-1]
  info = {
  :name => name,
  :addr => addr,
  :flags => flags.split(","),
  :replicate => master_id,
  :ping_sent => ping_sent.to_i,
  :ping_recv => ping_recv.to_i,
  :link_status => link_status
  }
  info[:replicate] = false if master_id == "-"
 
  if info[:flags].index("myself")
  @info = @info.merge(info)
  @info[:slots] = {}
  slots.each{|s|
   if s[0..0] == '['
   if s.index("->-") # migrating
    slot,dst = s[1..-1].split("->-")
    @info[:migrating][slot.to_i] = dst
   elsif s.index("-<-") # importing
    slot,src = s[1..-1].split("-<-")
    @info[:importing][slot.to_i] = src
   end
   elsif s.index("-")
   start,stop = s.split("-")
   self.add_slots((start.to_i)..(stop.to_i))
   else
   self.add_slots((s.to_i)..(s.to_i))
   end
  } if slots
  @dirty = false
  @r.cluster("info").split("\n").each{|e|
   k,v=e.split(":")
   k = k.to_sym
   v.chop!
   if k != :cluster_state
   @info[k] = v.to_i
   else
   @info[k] = v
   end
  }
  elsif o[:getfriends]
  @friends << info
  end
 }
 end
 
 def add_slots(slots)
 slots.each{|s|
  @info[:slots][s] = :new
 }
 @dirty = true
 end
 
 def set_as_replica(node_id)
 @info[:replicate] = node_id
 @dirty = true
 end
 
 def flush_node_config
 return if !@dirty
 if @info[:replicate]
  begin
  @r.cluster("replicate",@info[:replicate])
  rescue
  # if the cluster did not already joined it is possible that
  # the slave does not know the master node yet. so on errors
  # we return asap leaving the dirty flag set, to flush the
  # config later.
  return
  end
 else
  new = []
  @info[:slots].each{|s,val|
  if val == :new
   new << s
   @info[:slots][s] = true
  end
  }
  @r.cluster("addslots",*new)
 end
 @dirty = false
 end
 
 def info_string
 # we want to display the hash slots assigned to this node
 # as ranges, like in: "1-5,8-9,20-25,30"
 #
 # note: this could be easily written without side effects,
 # we use 'slots' just to split the computation into steps.
 
 # first step: we want an increasing array of integers
 # for instance: [1,2,3,4,5,8,9,20,21,22,23,24,25,30]
 slots = @info[:slots].keys.sort
 
 # as we want to aggregate adjacent slots we convert all the
 # slot integers into ranges (with just one element)
 # so we have something like [1..1,2..2, ... and so forth.
 slots.map!{|x| x..x}
 
 # finally we group ranges with adjacent elements.
 slots = slots.reduce([]) {|a,b|
  if !a.empty? && b.first == (a[-1].last)+1
  a[0..-2] + [(a[-1].first)..(b.last)]
  else
  a + [b]
  end
 }
 
 # now our task is easy, we just convert ranges with just one
 # element into a number, and a real range into a start-end format.
 # finally we join the array using the comma as separator.
 slots = slots.map{|x|
  x.count == 1 ? x.first.to_s : "#{x.first}-#{x.last}"
 }.join(",")
 
 role = self.has_flag?("master") ? "m" : "s"
 
 if self.info[:replicate] and @dirty
  is = "s: #{self.info[:name]} #{self.to_s}"
 else
  is = "#{role}: #{self.info[:name]} #{self.to_s}\n"+
  " slots:#{slots} (#{self.slots.length} slots) "+
  "#{(self.info[:flags]-["myself"]).join(",")}"
 end
 if self.info[:replicate]
  is += "\n replicates #{info[:replicate]}"
 elsif self.has_flag?("master") && self.info[:replicas]
  is += "\n #{info[:replicas].length} additional replica(s)"
 end
 is
 end
 
 # return a single string representing nodes and associated slots.
 # todo: remove slaves from config when slaves will be handled
 # by redis cluster.
 def get_config_signature
 config = []
 @r.cluster("nodes").each_line{|l|
  s = l.split
  slots = s[8..-1].select {|x| x[0..0] != "["}
  next if slots.length == 0
  config << s[0]+":"+(slots.sort.join(","))
 }
 config.sort.join("|")
 end
 
 def info
 @info
 end
 
 def is_dirty?
 @dirty
 end
 
 def r
 @r
 end
end
 
class RedisTrib
 def initialize
 @nodes = []
 @fix = false
 @errors = []
 @timeout = MigrateDefaultTimeout
 end
 
 def check_arity(req_args, num_args)
 if ((req_args > 0 and num_args != req_args) ||
  (req_args < 0 and num_args < req_args.abs))
  xputs "[err] wrong number of arguments for specified sub command"
  exit 1
 end
 end
 
 def add_node(node)
 @nodes << node
 end
 
 def reset_nodes
 @nodes = []
 end
 
 def cluster_error(msg)
 @errors << msg
 xputs msg
 end
 
 # return the node with the specified id or nil.
 def get_node_by_name(name)
 @nodes.each{|n|
  return n if n.info[:name] == name.downcase
 }
 return nil
 end
 
 # like get_node_by_name but the specified name can be just the first
 # part of the node id as long as the prefix in unique across the
 # cluster.
 def get_node_by_abbreviated_name(name)
 l = name.length
 candidates = []
 @nodes.each{|n|
  if n.info[:name][0...l] == name.downcase
  candidates << n
  end
 }
 return nil if candidates.length != 1
 candidates[0]
 end
 
 # this function returns the master that has the least number of replicas
 # in the cluster. if there are multiple masters with the same smaller
 # number of replicas, one at random is returned.
 def get_master_with_least_replicas
 masters = @nodes.select{|n| n.has_flag? "master"}
 sorted = masters.sort{|a,b|
  a.info[:replicas].length <=> b.info[:replicas].length
 }
 sorted[0]
 end
 
 def check_cluster(opt={})
 xputs ">>> performing cluster check (using node #{@nodes[0]})"
 show_nodes if !opt[:quiet]
 check_config_consistency
 check_open_slots
 check_slots_coverage
 end
 
 def show_cluster_info
 masters = 0
 keys = 0
 @nodes.each{|n|
  if n.has_flag?("master")
  puts "#{n} (#{n.info[:name][0...8]}...) -> #{n.r.dbsize} keys | #{n.slots.length} slots | "+
   "#{n.info[:replicas].length} slaves."
  masters += 1
  keys += n.r.dbsize
  end
 }
 xputs "[ok] #{keys} keys in #{masters} masters."
 keys_per_slot = sprintf("%.2f",keys/16384.0)
 puts "#{keys_per_slot} keys per slot on average."
 end
 
 # merge slots of every known node. if the resulting slots are equal
 # to clusterhashslots, then all slots are served.
 def covered_slots
 slots = {}
 @nodes.each{|n|
  slots = slots.merge(n.slots)
 }
 slots
 end
 
 def check_slots_coverage
 xputs ">>> check slots coverage..."
 slots = covered_slots
 if slots.length == ClusterHashSlots
  xputs "[ok] all #{ClusterHashSlots} slots covered."
 else
  cluster_error \
  "[err] not all #{ClusterHashSlots} slots are covered by nodes."
  fix_slots_coverage if @fix
 end
 end
 
 def check_open_slots
 xputs ">>> check for open slots..."
 open_slots = []
 @nodes.each{|n|
  if n.info[:migrating].size > 0
  cluster_error \
   "[warning] node #{n} has slots in migrating state (#{n.info[:migrating].keys.join(",")})."
  open_slots += n.info[:migrating].keys
  end
  if n.info[:importing].size > 0
  cluster_error \
   "[warning] node #{n} has slots in importing state (#{n.info[:importing].keys.join(",")})."
  open_slots += n.info[:importing].keys
  end
 }
 open_slots.uniq!
 if open_slots.length > 0
  xputs "[warning] the following slots are open: #{open_slots.join(",")}"
 end
 if @fix
  open_slots.each{|slot| fix_open_slot slot}
 end
 end
 
 def nodes_with_keys_in_slot(slot)
 nodes = []
 @nodes.each{|n|
  next if n.has_flag?("slave")
  nodes << n if n.r.cluster("getkeysinslot",slot,1).length > 0
 }
 nodes
 end
 
 def fix_slots_coverage
 not_covered = (0...ClusterHashSlots).to_a - covered_slots.keys
 xputs ">>> fixing slots coverage..."
 xputs "list of not covered slots: " + not_covered.join(",")
 
 # for every slot, take action depending on the actual condition:
 # 1) no node has keys for this slot.
 # 2) a single node has keys for this slot.
 # 3) multiple nodes have keys for this slot.
 slots = {}
 not_covered.each{|slot|
  nodes = nodes_with_keys_in_slot(slot)
  slots[slot] = nodes
  xputs "slot #{slot} has keys in #{nodes.length} nodes: #{nodes.join(", ")}"
 }
 
 none = slots.select {|k,v| v.length == 0}
 single = slots.select {|k,v| v.length == 1}
 multi = slots.select {|k,v| v.length > 1}
 
 # handle case "1": keys in no node.
 if none.length > 0
  xputs "the folowing uncovered slots have no keys across the cluster:"
  xputs none.keys.join(",")
  yes_or_die "fix these slots by covering with a random node?"
  none.each{|slot,nodes|
  node = @nodes.sample
  xputs ">>> covering slot #{slot} with #{node}"
  node.r.cluster("addslots",slot)
  }
 end
 
 # handle case "2": keys only in one node.
 if single.length > 0
  xputs "the folowing uncovered slots have keys in just one node:"
  puts single.keys.join(",")
  yes_or_die "fix these slots by covering with those nodes?"
  single.each{|slot,nodes|
  xputs ">>> covering slot #{slot} with #{nodes[0]}"
  nodes[0].r.cluster("addslots",slot)
  }
 end
 
 # handle case "3": keys in multiple nodes.
 if multi.length > 0
  xputs "the folowing uncovered slots have keys in multiple nodes:"
  xputs multi.keys.join(",")
  yes_or_die "fix these slots by moving keys into a single node?"
  multi.each{|slot,nodes|
  target = get_node_with_most_keys_in_slot(nodes,slot)
  xputs ">>> covering slot #{slot} moving keys to #{target}"
 
  target.r.cluster('addslots',slot)
  target.r.cluster('setslot',slot,'stable')
  nodes.each{|src|
   next if src == target
   # set the source node in 'importing' state (even if we will
   # actually migrate keys away) in order to avoid receiving
   # redirections for migrate.
   src.r.cluster('setslot',slot,'importing',target.info[:name])
   move_slot(src,target,slot,:dots=>true,:fix=>true,:cold=>true)
   src.r.cluster('setslot',slot,'stable')
  }
  }
 end
 end
 
 # return the owner of the specified slot
 def get_slot_owners(slot)
 owners = []
 @nodes.each{|n|
  next if n.has_flag?("slave")
  n.slots.each{|s,_|
  owners << n if s == slot
  }
 }
 owners
 end
 
 # return the node, among 'nodes' with the greatest number of keys
 # in the specified slot.
 def get_node_with_most_keys_in_slot(nodes,slot)
 best = nil
 best_numkeys = 0
 @nodes.each{|n|
  next if n.has_flag?("slave")
  numkeys = n.r.cluster("countkeysinslot",slot)
  if numkeys > best_numkeys || best == nil
  best = n
  best_numkeys = numkeys
  end
 }
 return best
 end
 
 # slot 'slot' was found to be in importing or migrating state in one or
 # more nodes. this function fixes this condition by migrating keys where
 # it seems more sensible.
 def fix_open_slot(slot)
 puts ">>> fixing open slot #{slot}"
 
 # try to obtain the current slot owner, according to the current
 # nodes configuration.
 owners = get_slot_owners(slot)
 owner = owners[0] if owners.length == 1
 
 migrating = []
 importing = []
 @nodes.each{|n|
  next if n.has_flag? "slave"
  if n.info[:migrating][slot]
  migrating << n
  elsif n.info[:importing][slot]
  importing << n
  elsif n.r.cluster("countkeysinslot",slot) > 0 && n != owner
  xputs "*** found keys about slot #{slot} in node #{n}!"
  importing << n
  end
 }
 puts "set as migrating in: #{migrating.join(",")}"
 puts "set as importing in: #{importing.join(",")}"
 
 # if there is no slot owner, set as owner the slot with the biggest
 # number of keys, among the set of migrating / importing nodes.
 if !owner
  xputs ">>> nobody claims ownership, selecting an owner..."
  owner = get_node_with_most_keys_in_slot(@nodes,slot)
 
  # if we still don't have an owner, we can't fix it.
  if !owner
  xputs "[err] can't select a slot owner. impossible to fix."
  exit 1
  end
 
  # use addslots to assign the slot.
  puts "*** configuring #{owner} as the slot owner"
  owner.r.cluster("setslot",slot,"stable")
  owner.r.cluster("addslots",slot)
  # make sure this information will propagate. not strictly needed
  # since there is no past owner, so all the other nodes will accept
  # whatever epoch this node will claim the slot with.
  owner.r.cluster("bumpepoch")
 
  # remove the owner from the list of migrating/importing
  # nodes.
  migrating.delete(owner)
  importing.delete(owner)
 end
 
 # if there are multiple owners of the slot, we need to fix it
 # so that a single node is the owner and all the other nodes
 # are in importing state. later the fix can be handled by one
 # of the base cases above.
 #
 # note that this case also covers multiple nodes having the slot
 # in migrating state, since migrating is a valid state only for
 # slot owners.
 if owners.length > 1
  owner = get_node_with_most_keys_in_slot(owners,slot)
  owners.each{|n|
  next if n == owner
  n.r.cluster('delslots',slot)
  n.r.cluster('setslot',slot,'importing',owner.info[:name])
  importing.delete(n) # avoid duplciates
  importing << n
  }
  owner.r.cluster('bumpepoch')
 end
 
 # case 1: the slot is in migrating state in one slot, and in
 #  importing state in 1 slot. that's trivial to address.
 if migrating.length == 1 && importing.length == 1
  move_slot(migrating[0],importing[0],slot,:dots=>true,:fix=>true)
 # case 2: there are multiple nodes that claim the slot as importing,
 # they probably got keys about the slot after a restart so opened
 # the slot. in this case we just move all the keys to the owner
 # according to the configuration.
 elsif migrating.length == 0 && importing.length > 0
  xputs ">>> moving all the #{slot} slot keys to its owner #{owner}"
  importing.each {|node|
  next if node == owner
  move_slot(node,owner,slot,:dots=>true,:fix=>true,:cold=>true)
  xputs ">>> setting #{slot} as stable in #{node}"
  node.r.cluster("setslot",slot,"stable")
  }
 # case 3: there are no slots claiming to be in importing state, but
 # there is a migrating node that actually don't have any key. we
 # can just close the slot, probably a reshard interrupted in the middle.
 elsif importing.length == 0 && migrating.length == 1 &&
  migrating[0].r.cluster("getkeysinslot",slot,10).length == 0
  migrating[0].r.cluster("setslot",slot,"stable")
 else
  xputs "[err] sorry, redis-trib can't fix this slot yet (work in progress). slot is set as migrating in #{migrating.join(",")}, as importing in #{importing.join(",")}, owner is #{owner}"
 end
 end
 
 # check if all the nodes agree about the cluster configuration
 def check_config_consistency
 if !is_config_consistent?
  cluster_error "[err] nodes don't agree about configuration!"
 else
  xputs "[ok] all nodes agree about slots configuration."
 end
 end
 
 def is_config_consistent?
 signatures=[]
 @nodes.each{|n|
  signatures << n.get_config_signature
 }
 return signatures.uniq.length == 1
 end
 
 def wait_cluster_join
 print "waiting for the cluster to join"
 while !is_config_consistent?
  print "."
  STDOUT.flush
  sleep 1
 end
 print "\n"
 end
 
 def alloc_slots
 nodes_count = @nodes.length
 masters_count = @nodes.length / (@replicas+1)
 masters = []
 
 # the first step is to split instances by ip. this is useful as
 # we'll try to allocate master nodes in different physical machines
 # (as much as possible) and to allocate slaves of a given master in
 # different physical machines as well.
 #
 # this code assumes just that if the ip is different, than it is more
 # likely that the instance is running in a different physical host
 # or at least a different virtual machine.
 ips = {}
 @nodes.each{|n|
  ips[n.info[:host]] = [] if !ips[n.info[:host]]
  ips[n.info[:host]] << n
 }
 
 # select master instances
 puts "using #{masters_count} masters:"
 interleaved = []
 stop = false
 while not stop do
  # take one node from each ip until we run out of nodes
  # across every ip.
  ips.each do |ip,nodes|
  if nodes.empty?
   # if this ip has no remaining nodes, check for termination
   if interleaved.length == nodes_count
   # stop when 'interleaved' has accumulated all nodes
   stop = true
   next
   end
  else
   # else, move one node from this ip to 'interleaved'
   interleaved.push nodes.shift
  end
  end
 end
 
 masters = interleaved.slice!(0, masters_count)
 nodes_count -= masters.length
 
 masters.each{|m| puts m}
 
 # alloc slots on masters
  slots_per_node = ClusterHashSlots.to_f / masters_count
 first = 0
 cursor = 0.0
 masters.each_with_index{|n,masternum|
  last = (cursor+slots_per_node-1).round
   if last > ClusterHashSlots || masternum == masters.length-1
   last = ClusterHashSlots-1
  end
  last = first if last < first # min step is 1.
  n.add_slots first..last
  first = last+1
  cursor += slots_per_node
 }
 
 # select n replicas for every master.
 # we try to split the replicas among all the ips with spare nodes
 # trying to avoid the host where the master is running, if possible.
 #
 # note we loop two times. the first loop assigns the requested
 # number of replicas to each master. the second loop assigns any
 # remaining instances as extra replicas to masters. some masters
 # may end up with more than their requested number of replicas, but
 # all nodes will be used.
 assignment_verbose = false
 
 [:requested,:unused].each do |assign|
  masters.each do |m|
  assigned_replicas = 0
  while assigned_replicas < @replicas
   break if nodes_count == 0
   if assignment_verbose
   if assign == :requested
    puts "requesting total of #{@replicas} replicas " \
     "(#{assigned_replicas} replicas assigned " \
     "so far with #{nodes_count} total remaining)."
   elsif assign == :unused
    puts "assigning extra instance to replication " \
     "role too (#{nodes_count} remaining)."
   end
   end
 
   # return the first node not matching our current master
   node = interleaved.find{|n| n.info[:host] != m.info[:host]}
 
   # if we found a node, use it as a best-first match.
   # otherwise, we didn't find a node on a different ip, so we
   # go ahead and use a same-ip replica.
   if node
   slave = node
   interleaved.delete node
   else
   slave = interleaved.shift
   end
   slave.set_as_replica(m.info[:name])
   nodes_count -= 1
   assigned_replicas += 1
   puts "adding replica #{slave} to #{m}"
 
   # if we are in the "assign extra nodes" loop,
   # we want to assign one extra replica to each
   # master before repeating masters.
   # this break lets us assign extra replicas to masters
   # in a round-robin way.
   break if assign == :unused
  end
  end
 end
 end
 
 def flush_nodes_config
 @nodes.each{|n|
  n.flush_node_config
 }
 end
 
 def show_nodes
 @nodes.each{|n|
  xputs n.info_string
 }
 end
 
 # redis cluster config epoch collision resolution code is able to eventually
 # set a different epoch to each node after a new cluster is created, but
 # it is slow compared to assign a progressive config epoch to each node
 # before joining the cluster. however we do just a best-effort try here
 # since if we fail is not a problem.
 def assign_config_epoch
 config_epoch = 1
 @nodes.each{|n|
  begin
  n.r.cluster("set-config-epoch",config_epoch)
  rescue
  end
  config_epoch += 1
 }
 end
 
 def join_cluster
 # we use a brute force approach to make sure the node will meet
 # each other, that is, sending cluster meet messages to all the nodes
 # about the very same node.
 # thanks to gossip this information should propagate across all the
 # cluster in a matter of seconds.
 first = false
 @nodes.each{|n|
  if !first then first = n.info; next; end # skip the first node
  n.r.cluster("meet",first[:host],first[:port])
 }
 end
 
 def yes_or_die(msg)
 print "#{msg} (type 'yes' to accept): "
  STDOUT.flush
  if !(STDIN.gets.chomp.downcase == "yes")
  xputs "*** aborting..."
  exit 1
 end
 end
 
 def load_cluster_info_from_node(nodeaddr)
  node = ClusterNode.new(nodeaddr)
 node.connect(:abort => true)
 node.assert_cluster
 node.load_info(:getfriends => true)
 add_node(node)
 node.friends.each{|f|
  next if f[:flags].index("noaddr") ||
   f[:flags].index("disconnected") ||
   f[:flags].index("fail")
   fnode = ClusterNode.new(f[:addr])
  fnode.connect()
  next if !fnode.r
  begin
  fnode.load_info()
  add_node(fnode)
  rescue => e
  xputs "[err] unable to load info for node #{fnode}"
  end
 }
 populate_nodes_replicas_info
 end
 
 # this function is called by load_cluster_info_from_node in order to
 # add additional information to every node as a list of replicas.
 def populate_nodes_replicas_info
 # start adding the new field to every node.
 @nodes.each{|n|
  n.info[:replicas] = []
 }
 
 # populate the replicas field using the replicate field of slave
 # nodes.
 @nodes.each{|n|
  if n.info[:replicate]
  master = get_node_by_name(n.info[:replicate])
  if !master
   xputs "*** warning: #{n} claims to be slave of unknown node id #{n.info[:replicate]}."
  else
   master.info[:replicas] << n
  end
  end
 }
 end
 
 # given a list of source nodes return a "resharding plan"
 # with what slots to move in order to move "numslots" slots to another
 # instance.
 def compute_reshard_table(sources,numslots)
 moved = []
 # sort from bigger to smaller instance, for two reasons:
 # 1) if we take less slots than instances it is better to start
 # getting from the biggest instances.
 # 2) we take one slot more from the first instance in the case of not
 # perfect divisibility. like we have 3 nodes and need to get 10
 # slots, we take 4 from the first, and 3 from the rest. so the
 # biggest is always the first.
 sources = sources.sort{|a,b| b.slots.length <=> a.slots.length}
 source_tot_slots = sources.inject(0) {|sum,source|
  sum+source.slots.length
 }
 sources.each_with_index{|s,i|
  # every node will provide a number of slots proportional to the
  # slots it has assigned.
  n = (numslots.to_f/source_tot_slots*s.slots.length)
  if i == 0
  n = n.ceil
  else
  n = n.floor
  end
  s.slots.keys.sort[(0...n)].each{|slot|
  if moved.length < numslots
   moved << {:source => s, :slot => slot}
  end
  }
 }
 return moved
 end
 
 def show_reshard_table(table)
 table.each{|e|
  puts " moving slot #{e[:slot]} from #{e[:source].info[:name]}"
 }
 end
 
 # move slots between source and target nodes using migrate.
 #
 # options:
 # :verbose -- print a dot for every moved key.
 # :fix -- we are moving in the context of a fix. use replace.
 # :cold -- move keys without opening slots / reconfiguring the nodes.
 # :update -- update nodes.info[:slots] for source/target nodes.
 # :quiet -- don't print info messages.
 def move_slot(source,target,slot,o={})
  o = {:pipeline => MigrateDefaultPipeline}.merge(o)
 
 # we start marking the slot as importing in the destination node,
 # and the slot as migrating in the target host. note that the order of
 # the operations is important, as otherwise a client may be redirected
 # to the target node that does not yet know it is importing this slot.
 if !o[:quiet]
  print "moving slot #{slot} from #{source} to #{target}: "
   STDOUT.flush
 end
 
 if !o[:cold]
  target.r.cluster("setslot",slot,"importing",source.info[:name])
  source.r.cluster("setslot",slot,"migrating",target.info[:name])
 end
 # migrate all the keys from source to target using the migrate command
 while true
  keys = source.r.cluster("getkeysinslot",slot,o[:pipeline])
  break if keys.length == 0
  begin
  source.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:keys,*keys])
  rescue => e
  if o[:fix] && e.to_s =~ /busykey/
   xputs "*** target key exists. replacing it for fix."
   source.r.client.call(["migrate",target.info[:host],target.info[:port],"",0,@timeout,:replace,:keys,*keys])
  else
   puts ""
   xputs "[err] calling migrate: #{e}"
   exit 1
  end
  end
  print "."*keys.length if o[:dots]
   STDOUT.flush
 end
 
 puts if !o[:quiet]
 # set the new node as the owner of the slot in all the known nodes.
 if !o[:cold]
  @nodes.each{|n|
  next if n.has_flag?("slave")
  n.r.cluster("setslot",slot,"node",target.info[:name])
  }
 end
 
 # update the node logical config
 if o[:update] then
  source.info[:slots].delete(slot)
  target.info[:slots][slot] = true
 end
 end
 
 # redis-trib subcommands implementations.
 
 def check_cluster_cmd(argv,opt)
 load_cluster_info_from_node(argv[0])
 check_cluster
 end
 
 def info_cluster_cmd(argv,opt)
 load_cluster_info_from_node(argv[0])
 show_cluster_info
 end
 
 def rebalance_cluster_cmd(argv,opt)
 opt = {
   'pipeline' => MigrateDefaultPipeline,
   'threshold' => RebalanceDefaultThreshold
 }.merge(opt)
 
 # load nodes info before parsing options, otherwise we can't
 # handle --weight.
 load_cluster_info_from_node(argv[0])
 
 # options parsing
 threshold = opt['threshold'].to_i
 autoweights = opt['auto-weights']
 weights = {}
 opt['weight'].each{|w|
  fields = w.split("=")
  node = get_node_by_abbreviated_name(fields[0])
  if !node || !node.has_flag?("master")
  puts "*** no such master node #{fields[0]}"
  exit 1
  end
  weights[node.info[:name]] = fields[1].to_f
 } if opt['weight']
 useempty = opt['use-empty-masters']
 
 # assign a weight to each node, and compute the total cluster weight.
 total_weight = 0
 nodes_involved = 0
 @nodes.each{|n|
  if n.has_flag?("master")
  next if !useempty && n.slots.length == 0
  n.info[:w] = weights[n.info[:name]] ? weights[n.info[:name]] : 1
  total_weight += n.info[:w]
  nodes_involved += 1
  end
 }
 
 # check cluster, only proceed if it looks sane.
 check_cluster(:quiet => true)
 if @errors.length != 0
  puts "*** please fix your cluster problems before rebalancing"
  exit 1
 end
 
 # calculate the slots balance for each node. it's the number of
 # slots the node should lose (if positive) or gain (if negative)
 # in order to be balanced.
 threshold = opt['threshold'].to_f
 threshold_reached = false
 @nodes.each{|n|
  if n.has_flag?("master")
  next if !n.info[:w]
   expected = ((ClusterHashSlots.to_f / total_weight) *
    n.info[:w]).to_i
  n.info[:balance] = n.slots.length - expected
  # compute the percentage of difference between the
  # expected number of slots and the real one, to see
  # if it's over the threshold specified by the user.
  over_threshold = false
  if threshold > 0
   if n.slots.length > 0
   err_perc = (100-(100.0*expected/n.slots.length)).abs
   over_threshold = true if err_perc > threshold
   elsif expected > 0
   over_threshold = true
   end
  end
  threshold_reached = true if over_threshold
  end
 }
 if !threshold_reached
  xputs "*** no rebalancing needed! all nodes are within the #{threshold}% threshold."
  return
 end
 
 # only consider nodes we want to change
 sn = @nodes.select{|n|
  n.has_flag?("master") && n.info[:w]
 }
 
 # because of rounding, it is possible that the balance of all nodes
 # summed does not give 0. make sure that nodes that have to provide
 # slots are always matched by nodes receiving slots.
 total_balance = sn.map{|x| x.info[:balance]}.reduce{|a,b| a+b}
 while total_balance > 0
  sn.each{|n|
  if n.info[:balance] < 0 && total_balance > 0
   n.info[:balance] -= 1
   total_balance -= 1
  end
  }
 end
 
 # sort nodes by their slots balance.
 sn = sn.sort{|a,b|
  a.info[:balance] <=> b.info[:balance]
 }
 
 xputs ">>> rebalancing across #{nodes_involved} nodes. total weight = #{total_weight}"
 
 if $verbose
  sn.each{|n|
  puts "#{n} balance is #{n.info[:balance]} slots"
  }
 end
 
 # now we have at the start of the 'sn' array nodes that should get
 # slots, at the end nodes that must give slots.
 # we take two indexes, one at the start, and one at the end,
 # incrementing or decrementing the indexes accordingly til we
 # find nodes that need to get/provide slots.
 dst_idx = 0
 src_idx = sn.length - 1
 
 while dst_idx < src_idx
  dst = sn[dst_idx]
  src = sn[src_idx]
  numslots = [dst.info[:balance],src.info[:balance]].map{|n|
  n.abs
  }.min
 
  if numslots > 0
  puts "moving #{numslots} slots from #{src} to #{dst}"
 
  # actaully move the slots.
  reshard_table = compute_reshard_table([src],numslots)
  if reshard_table.length != numslots
   xputs "*** assertio failed: reshard table != number of slots"
   exit 1
  end
  if opt['simulate']
   print "#"*reshard_table.length
  else
   reshard_table.each{|e|
   move_slot(e[:source],dst,e[:slot],
    :quiet=>true,
    :dots=>false,
    :update=>true,
    :pipeline=>opt['pipeline'])
   print "#"
    STDOUT.flush
   }
  end
  puts
  end
 
  # update nodes balance.
  dst.info[:balance] += numslots
  src.info[:balance] -= numslots
  dst_idx += 1 if dst.info[:balance] == 0
  src_idx -= 1 if src.info[:balance] == 0
 end
 end
 
 def fix_cluster_cmd(argv,opt)
 @fix = true
 @timeout = opt['timeout'].to_i if opt['timeout']
 
 load_cluster_info_from_node(argv[0])
 check_cluster
 end
 
 def reshard_cluster_cmd(argv,opt)
  opt = {'pipeline' => MigrateDefaultPipeline}.merge(opt)
 
 load_cluster_info_from_node(argv[0])
 check_cluster
 if @errors.length != 0
  puts "*** please fix your cluster problems before resharding"
  exit 1
 end
 
 @timeout = opt['timeout'].to_i if opt['timeout'].to_i
 
 # get number of slots
 if opt['slots']
  numslots = opt['slots'].to_i
 else
  numslots = 0
   while numslots <= 0 or numslots > ClusterHashSlots
   print "how many slots do you want to move (from 1 to #{ClusterHashSlots})? "
   numslots = STDIN.gets.to_i
  end
 end
 
 # get the target instance
 if opt['to']
  target = get_node_by_name(opt['to'])
  if !target || target.has_flag?("slave")
  xputs "*** the specified node is not known or not a master, please retry."
  exit 1
  end
 else
  target = nil
  while not target
  print "what is the receiving node id? "
   target = get_node_by_name(STDIN.gets.chop)
  if !target || target.has_flag?("slave")
   xputs "*** the specified node is not known or not a master, please retry."
   target = nil
  end
  end
 end
 
 # get the source instances
 sources = []
 if opt['from']
  opt['from'].split(',').each{|node_id|
  if node_id == "all"
   sources = "all"
   break
  end
  src = get_node_by_name(node_id)
  if !src || src.has_flag?("slave")
   xputs "*** the specified node is not known or is not a master, please retry."
   exit 1
  end
  sources << src
  }
 else
  xputs "please enter all the source node ids."
  xputs " type 'all' to use all the nodes as source nodes for the hash slots."
  xputs " type 'done' once you entered all the source nodes ids."
  while true
  print "source node ##{sources.length+1}:"
   line = STDIN.gets.chop
  src = get_node_by_name(line)
  if line == "done"
   break
  elsif line == "all"
   sources = "all"
   break
  elsif !src || src.has_flag?("slave")
   xputs "*** the specified node is not known or is not a master, please retry."
  elsif src.info[:name] == target.info[:name]
   xputs "*** it is not possible to use the target node as source node."
  else
   sources << src
  end
  end
 end
 
 if sources.length == 0
  puts "*** no source nodes given, operation aborted"
  exit 1
 end
 
 # handle soures == all.
 if sources == "all"
  sources = []
  @nodes.each{|n|
  next if n.info[:name] == target.info[:name]
  next if n.has_flag?("slave")
  sources << n
  }
 end
 
 # check if the destination node is the same of any source nodes.
 if sources.index(target)
  xputs "*** target node is also listed among the source nodes!"
  exit 1
 end
 
 puts "\nready to move #{numslots} slots."
 puts " source nodes:"
 sources.each{|s| puts " "+s.info_string}
 puts " destination node:"
 puts " #{target.info_string}"
 reshard_table = compute_reshard_table(sources,numslots)
 puts " resharding plan:"
 show_reshard_table(reshard_table)
 if !opt['yes']
  print "do you want to proceed with the proposed reshard plan (yes/no)? "
   yesno = STDIN.gets.chop
  exit(1) if (yesno != "yes")
 end
 reshard_table.each{|e|
  move_slot(e[:source],target,e[:slot],
  :dots=>true,
  :pipeline=>opt['pipeline'])
 }
 end
 
 # this is an helper function for create_cluster_cmd that verifies if
 # the number of nodes and the specified replicas have a valid configuration
 # where there are at least three master nodes and enough replicas per node.
 def check_create_parameters
 masters = @nodes.length/(@replicas+1)
 if masters < 3
  puts "*** error: invalid configuration for cluster creation."
  puts "*** redis cluster requires at least 3 master nodes."
  puts "*** this is not possible with #{@nodes.length} nodes and #{@replicas} replicas per node."
  puts "*** at least #{3*(@replicas+1)} nodes are required."
  exit 1
 end
 end
 
 def create_cluster_cmd(argv,opt)
 opt = {'replicas' => 0}.merge(opt)
 @replicas = opt['replicas'].to_i
 
 xputs ">>> creating cluster"
 argv[0..-1].each{|n|
   node = ClusterNode.new(n)
  node.connect(:abort => true)
  node.assert_cluster
  node.load_info
  node.assert_empty
  add_node(node)
 }
 check_create_parameters
 xputs ">>> performing hash slots allocation on #{@nodes.length} nodes..."
 alloc_slots
 show_nodes
 yes_or_die "can i set the above configuration?"
 flush_nodes_config
 xputs ">>> nodes configuration updated"
 xputs ">>> assign a different config epoch to each node"
 assign_config_epoch
 xputs ">>> sending cluster meet messages to join the cluster"
 join_cluster
 # give one second for the join to start, in order to avoid that
 # wait_cluster_join will find all the nodes agree about the config as
 # they are still empty with unassigned slots.
 sleep 1
 wait_cluster_join
 flush_nodes_config # useful for the replicas
 check_cluster
 end
 
 def addnode_cluster_cmd(argv,opt)
 xputs ">>> adding node #{argv[0]} to cluster #{argv[1]}"
 
 # check the existing cluster
 load_cluster_info_from_node(argv[1])
 check_cluster
 
 # if --master-id was specified, try to resolve it now so that we
 # abort before starting with the node configuration.
 if opt['slave']
  if opt['master-id']
  master = get_node_by_name(opt['master-id'])
  if !master
   xputs "[err] no such master id #{opt['master-id']}"
  end
  else
  master = get_master_with_least_replicas
  xputs "automatically selected master #{master}"
  end
 end
 
 # add the new node
  new = ClusterNode.new(argv[0])
 new.connect(:abort => true)
 new.assert_cluster
 new.load_info
 new.assert_empty
 first = @nodes.first.info
 add_node(new)
 
 # send cluster meet command to the new node
 xputs ">>> send cluster meet to node #{new} to make it join the cluster."
 new.r.cluster("meet",first[:host],first[:port])
 
 # additional configuration is needed if the node is added as
 # a slave.
 if opt['slave']
  wait_cluster_join
  xputs ">>> configure node as replica of #{master}."
  new.r.cluster("replicate",master.info[:name])
 end
 xputs "[ok] new node added correctly."
 end
 
 def delnode_cluster_cmd(argv,opt)
 id = argv[1].downcase
 xputs ">>> removing node #{id} from cluster #{argv[0]}"
 
 # load cluster information
 load_cluster_info_from_node(argv[0])
 
 # check if the node exists and is not empty
 node = get_node_by_name(id)
 
 if !node
  xputs "[err] no such node id #{id}"
  exit 1
 end
 
 if node.slots.length != 0
  xputs "[err] node #{node} is not empty! reshard data away and try again."
  exit 1
 end
 
 # send cluster forget to all the nodes but the node to remove
 xputs ">>> sending cluster forget messages to the cluster..."
 @nodes.each{|n|
  next if n == node
  if n.info[:replicate] && n.info[:replicate].downcase == id
  # reconfigure the slave to replicate with some other node
  master = get_master_with_least_replicas
  xputs ">>> #{n} as replica of #{master}"
  n.r.cluster("replicate",master.info[:name])
  end
  n.r.cluster("forget",argv[1])
 }
 
 # finally shutdown the node
 xputs ">>> shutdown the node."
 node.r.shutdown
 end
 
 def set_timeout_cluster_cmd(argv,opt)
 timeout = argv[1].to_i
 if timeout < 100
  puts "setting a node timeout of less than 100 milliseconds is a bad idea."
  exit 1
 end
 
 # load cluster information
 load_cluster_info_from_node(argv[0])
 ok_count = 0
 err_count = 0
 
 # send cluster forget to all the nodes but the node to remove
 xputs ">>> reconfiguring node timeout in every cluster node..."
 @nodes.each{|n|
  begin
  n.r.config("set","cluster-node-timeout",timeout)
  n.r.config("rewrite")
  ok_count += 1
  xputs "*** new timeout set for #{n}"
  rescue => e
  puts "err setting node-timeot for #{n}: #{e}"
  err_count += 1
  end
 }
 xputs ">>> new node timeout set. #{ok_count} ok, #{err_count} err."
 end
 
 def call_cluster_cmd(argv,opt)
 cmd = argv[1..-1]
 cmd[0] = cmd[0].upcase
 
 # load cluster information
 load_cluster_info_from_node(argv[0])
 xputs ">>> calling #{cmd.join(" ")}"
 @nodes.each{|n|
  begin
  res = n.r.send(*cmd)
  puts "#{n}: #{res}"
  rescue => e
  puts "#{n}: #{e}"
  end
 }
 end
 
 def import_cluster_cmd(argv,opt)
 source_addr = opt['from']
 xputs ">>> importing data from #{source_addr} to cluster #{argv[1]}"
 use_copy = opt['copy']
 use_replace = opt['replace']
  
 # check the existing cluster.
 load_cluster_info_from_node(argv[0])
 check_cluster
 
 # connect to the source node.
 xputs ">>> connecting to the source redis instance"
 src_host,src_port = source_addr.split(":")
  source = Redis.new(:host =>src_host, :port =>src_port)
 if source.info['cluster_enabled'].to_i == 1
  xputs "[err] the source node should not be a cluster node."
 end
 xputs "*** importing #{source.dbsize} keys from db 0"
 
 # build a slot -> node map
 slots = {}
 @nodes.each{|n|
  n.slots.each{|s,_|
  slots[s] = n
  }
 }
 
 # use scan to iterate over the keys, migrating to the
 # right node as needed.
 cursor = nil
 while cursor != 0
  cursor,keys = source.scan(cursor, :count => 1000)
  cursor = cursor.to_i
  keys.each{|k|
  # migrate keys using the migrate command.
  slot = key_to_slot(k)
  target = slots[slot]
  print "migrating #{k} to #{target}: "
   STDOUT.flush
  begin
   cmd = ["migrate",target.info[:host],target.info[:port],k,0,@timeout]
   cmd << :copy if use_copy
   cmd << :replace if use_replace
   source.client.call(cmd)
  rescue => e
   puts e
  else
   puts "ok"
  end
  }
 end
 end
 
 def help_cluster_cmd(argv,opt)
 show_help
 exit 0
 end
 
 # parse the options for the specific command "cmd".
 # returns an hash populate with option => value pairs, and the index of
 # the first non-option argument in argv.
 def parse_options(cmd)
 idx = 1 ; # current index into argv
 options={}
 while idx < ARGV.length && ARGV[idx][0..1] == '--'
  if ARGV[idx][0..1] == "--"
  option = ARGV[idx][2..-1]
  idx += 1
 
  # --verbose is a global option
  if option == "verbose"
   $verbose = true
   next
  end
 
  if ALLOWED_OPTIONS[cmd] == nil || ALLOWED_OPTIONS[cmd][option] == nil
   puts "unknown option '#{option}' for command '#{cmd}'"
   exit 1
  end
  if ALLOWED_OPTIONS[cmd][option] != false
   value = ARGV[idx]
   idx += 1
  else
   value = true
  end
 
  # if the option is set to [], it's a multiple arguments
  # option. we just queue every new value into an array.
  if ALLOWED_OPTIONS[cmd][option] == []
   options[option] = [] if !options[option]
   options[option] << value
  else
   options[option] = value
  end
  else
  # remaining arguments are not options.
  break
  end
 end
 
 # enforce mandatory options
 if ALLOWED_OPTIONS[cmd]
  ALLOWED_OPTIONS[cmd].each {|option,val|
  if !options[option] && val == :required
   puts "option '--#{option}' is required "+ \
    "for subcommand '#{cmd}'"
   exit 1
  end
  }
 end
 return options,idx
 end
end
 
#################################################################################
# libraries
#
# we try to don't depend on external libs since this is a critical part
# of redis cluster.
#################################################################################
 
# this is the crc16 algorithm used by redis cluster to hash keys.
# implementation according to ccitt standards.
#
# this is actually the xmodem crc 16 algorithm, using the
# following parameters:
#
# name   : "xmodem", also known as "zmodem", "crc-16/acorn"
# width   : 16 bit
# poly   : 1021 (that is actually x^16 + x^12 + x^5 + 1)
# initialization  : 0000
# reflect input byte  : false
# reflect output crc  : false
# xor constant to output crc : 0000
# output for "123456789" : 31c3
 
module RedisClusterCRC16
 def RedisClusterCRC16.crc16(bytes)
 crc = 0
 bytes.each_byte{|b|
  crc = ((crc<<8) & 0xffff) ^ XMODEMCRC16Lookup[((crc>>8)^b) & 0xff]
 }
 crc
 end
 
private
 XMODEMCRC16Lookup = [
 0x0000,0x1021,0x2042,0x3063,0x4084,0x50a5,0x60c6,0x70e7,
 0x8108,0x9129,0xa14a,0xb16b,0xc18c,0xd1ad,0xe1ce,0xf1ef,
 0x1231,0x0210,0x3273,0x2252,0x52b5,0x4294,0x72f7,0x62d6,
 0x9339,0x8318,0xb37b,0xa35a,0xd3bd,0xc39c,0xf3ff,0xe3de,
 0x2462,0x3443,0x0420,0x1401,0x64e6,0x74c7,0x44a4,0x5485,
 0xa56a,0xb54b,0x8528,0x9509,0xe5ee,0xf5cf,0xc5ac,0xd58d,
 0x3653,0x2672,0x1611,0x0630,0x76d7,0x66f6,0x5695,0x46b4,
 0xb75b,0xa77a,0x9719,0x8738,0xf7df,0xe7fe,0xd79d,0xc7bc,
 0x48c4,0x58e5,0x6886,0x78a7,0x0840,0x1861,0x2802,0x3823,
 0xc9cc,0xd9ed,0xe98e,0xf9af,0x8948,0x9969,0xa90a,0xb92b,
 0x5af5,0x4ad4,0x7ab7,0x6a96,0x1a71,0x0a50,0x3a33,0x2a12,
 0xdbfd,0xcbdc,0xfbbf,0xeb9e,0x9b79,0x8b58,0xbb3b,0xab1a,
 0x6ca6,0x7c87,0x4ce4,0x5cc5,0x2c22,0x3c03,0x0c60,0x1c41,
 0xedae,0xfd8f,0xcdec,0xddcd,0xad2a,0xbd0b,0x8d68,0x9d49,
 0x7e97,0x6eb6,0x5ed5,0x4ef4,0x3e13,0x2e32,0x1e51,0x0e70,
 0xff9f,0xefbe,0xdfdd,0xcffc,0xbf1b,0xaf3a,0x9f59,0x8f78,
 0x9188,0x81a9,0xb1ca,0xa1eb,0xd10c,0xc12d,0xf14e,0xe16f,
 0x1080,0x00a1,0x30c2,0x20e3,0x5004,0x4025,0x7046,0x6067,
 0x83b9,0x9398,0xa3fb,0xb3da,0xc33d,0xd31c,0xe37f,0xf35e,
 0x02b1,0x1290,0x22f3,0x32d2,0x4235,0x5214,0x6277,0x7256,
 0xb5ea,0xa5cb,0x95a8,0x8589,0xf56e,0xe54f,0xd52c,0xc50d,
 0x34e2,0x24c3,0x14a0,0x0481,0x7466,0x6447,0x5424,0x4405,
 0xa7db,0xb7fa,0x8799,0x97b8,0xe75f,0xf77e,0xc71d,0xd73c,
 0x26d3,0x36f2,0x0691,0x16b0,0x6657,0x7676,0x4615,0x5634,
 0xd94c,0xc96d,0xf90e,0xe92f,0x99c8,0x89e9,0xb98a,0xa9ab,
 0x5844,0x4865,0x7806,0x6827,0x18c0,0x08e1,0x3882,0x28a3,
 0xcb7d,0xdb5c,0xeb3f,0xfb1e,0x8bf9,0x9bd8,0xabbb,0xbb9a,
 0x4a75,0x5a54,0x6a37,0x7a16,0x0af1,0x1ad0,0x2ab3,0x3a92,
 0xfd2e,0xed0f,0xdd6c,0xcd4d,0xbdaa,0xad8b,0x9de8,0x8dc9,
 0x7c26,0x6c07,0x5c64,0x4c45,0x3ca2,0x2c83,0x1ce0,0x0cc1,
 0xef1f,0xff3e,0xcf5d,0xdf7c,0xaf9b,0xbfba,0x8fd9,0x9ff8,
 0x6e17,0x7e36,0x4e55,0x5e74,0x2e93,0x3eb2,0x0ed1,0x1ef0
 ]
end
 
# turn a key name into the corrisponding redis cluster slot.
def key_to_slot(key)
 # only hash what is inside {...} if there is such a pattern in the key.
 # note that the specification requires the content that is between
 # the first { and the first } after the first {. if we found {} without
 # nothing in the middle, the whole key is hashed as usually.
 s = key.index "{"
 if s
 e = key.index "}",s+1
 if e && e != s+1
  key = key[s+1..e-1]
 end
 end
 RedisClusterCRC16.crc16(key) % 16384
end
 
#################################################################################
# definition of commands
#################################################################################
 
COMMANDS={
 "create" => ["create_cluster_cmd", -2, "host1:port1 ... hostn:portn"],
 "check" => ["check_cluster_cmd", 2, "host:port"],
 "info" => ["info_cluster_cmd", 2, "host:port"],
 "fix" => ["fix_cluster_cmd", 2, "host:port"],
 "reshard" => ["reshard_cluster_cmd", 2, "host:port"],
 "rebalance" => ["rebalance_cluster_cmd", -2, "host:port"],
 "add-node" => ["addnode_cluster_cmd", 3, "new_host:new_port existing_host:existing_port"],
 "del-node" => ["delnode_cluster_cmd", 3, "host:port node_id"],
 "set-timeout" => ["set_timeout_cluster_cmd", 3, "host:port milliseconds"],
 "call" => ["call_cluster_cmd", -3, "host:port command arg arg .. arg"],
 "import" => ["import_cluster_cmd", 2, "host:port"],
 "help" => ["help_cluster_cmd", 1, "(show this help)"]
}
 
ALLOWED_OPTIONS={
 "create" => {"replicas" => true},
 "add-node" => {"slave" => false, "master-id" => true},
 "import" => {"from" => :required, "copy" => false, "replace" => false},
 "reshard" => {"from" => true, "to" => true, "slots" => true, "yes" => false, "timeout" => true, "pipeline" => true},
 "rebalance" => {"weight" => [], "auto-weights" => false, "use-empty-masters" => false, "timeout" => true, "simulate" => false, "pipeline" => true, "threshold" => true},
 "fix" => {"timeout" => migratedefaulttimeout},
}
 
def show_help
 puts "usage: redis-trib <command> <options> <arguments ...>\n\n"
 COMMANDS.each{|k,v|
 o = ""
 puts " #{k.ljust(15)} #{v[2]}"
 if ALLOWED_OPTIONS[k]
  ALLOWED_OPTIONS[k].each{|optname,has_arg|
  puts "   --#{optname}" + (has_arg ? " <arg>" : "")
  }
 end
 }
 puts "\nfor check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.\n"
end
 
# sanity check
if ARGV.length == 0
 show_help
 exit 1
end
 
rt = RedisTrib.new
cmd_spec = COMMANDS[ARGV[0].downcase]
if !cmd_spec
 puts "unknown redis-trib subcommand '#{argv[0]}'"
 exit 1
end
 
# parse options
cmd_options,first_non_option = rt.parse_options(ARGV[0].downcase)
rt.check_arity(cmd_spec[1],ARGV.length-(first_non_option-1))
 
# dispatch
rt.send(cmd_spec[0],ARGV[first_non_option..-1],cmd_options)
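Once the cluster has been created, the same script can verify it, for example: ruby redis-trib.rb check 127.0.0.1:7001. As the help text above notes, the check subcommand accepts the host and port of any working node in the cluster.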

Summary

That concludes this article on problems encountered when creating a Redis cluster on Windows. For more on the topic, please search 服务器之家's earlier articles, and thank you for your continued support of 服务器之家!

Original article: https://blog.csdn.net/zys15256630193/article/details/109091746
