├── README.md ├── README_cn.md ├── go.mod ├── go.sum └── redis_find_big_key.go /README.md: -------------------------------------------------------------------------------- 1 | 📖 [中文文档](./README_cn.md) 2 | 3 | # Redis Large Key Analysis Tool: Supports TOP N, Batch Analysis, and Slave Node Priority 4 | 5 | # Background 6 | 7 | Redis large key analysis tools fall into two main categories: 8 | 9 | 1. Offline Analysis 10 | 11 | Parsing is based on the RDB file; the most commonly used tool is redis-rdb-tools (https://github.com/sripathikrishnan/redis-rdb-tools). 12 | However, this tool has not been updated for nearly 5 years, does not support Redis 7, and, because it is written in Python, parses slowly. 13 | A more actively maintained alternative is https://github.com/HDT3213/rdb, which supports Redis 7 and is written in Go. 14 | 15 | 2. Online Analysis 16 | 17 | The commonly used tool is redis-cli, which provides two analysis methods: 18 | 19 | - --bigkeys: Introduced in Redis 3.0.0, it counts the number of elements in each key. 20 | - --memkeys: Introduced in Redis 6.0.0, it uses the `MEMORY USAGE` command to measure the memory occupied by each key. 21 | 22 | The advantages and disadvantages of these two approaches are as follows: 23 | 24 | - Offline analysis: Parsing is based on the RDB file, so it does not affect the performance of the online instance. The disadvantage is that the operation is relatively complex. In particular, many Redis cloud services disable the SYNC command, so the RDB file cannot be downloaded directly via `redis-cli --rdb <file>` and must be downloaded manually from the console. 25 | - Online analysis: The operation is simple; as long as you have access to the instance, you can analyze it directly. The disadvantage is that the analysis may have some impact on the performance of the online instance.
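To make the two redis-cli approaches concrete: `--bigkeys` asks each key for an element count using a type-specific command, while `--memkeys` sends `MEMORY USAGE` for every key regardless of type. A minimal Go sketch of the type-to-command mapping (the command names are the ones redis-cli itself uses, as shown later in this article; the helper function is illustrative, not part of any tool):

```go
package main

import "fmt"

// countCommand returns the per-type element-count command used by the
// --bigkeys style of analysis; --memkeys instead sends
// "MEMORY USAGE <key> SAMPLES <n>" for every key regardless of type.
func countCommand(keyType string) string {
	commands := map[string]string{
		"string": "STRLEN",
		"list":   "LLEN",
		"set":    "SCARD",
		"hash":   "HLEN",
		"zset":   "ZCARD",
		"stream": "XLEN",
	}
	if cmd, ok := commands[keyType]; ok {
		return cmd
	}
	// Unknown type: skipped, like keys that vanish between SCAN and TYPE.
	return ""
}

func main() {
	fmt.Println(countCommand("zset"))   // ZCARD
	fmt.Println(countCommand("stream")) // XLEN
}
```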
26 | 27 | The tool introduced in this article (`redis-find-big-key`) is also an online analysis tool. Its approach is similar to `redis-cli --memkeys`, but it is more powerful and practical, mainly in three respects: 28 | 29 | 1. Supports the TOP N function 30 | The tool can output the top N keys that occupy the most memory, whereas redis-cli can only output the single largest key of each type. 31 | 2. Supports batch analysis 32 | The tool can analyze multiple Redis nodes at the same time. In particular, for a Redis Cluster, after enabling cluster mode (`-cluster-mode`) it automatically analyzes every shard, whereas redis-cli can only analyze a single node. 33 | 3. Automatically selects the slave node for analysis 34 | To reduce the impact on instance performance, the tool automatically selects a slave node for analysis; only when no slave node exists will the master node be analyzed. redis-cli, by contrast, can only analyze the master node. 35 | 36 | # Test Time Comparison 37 | 38 | Test environment: Redis 6.2.17, single instance, used_memory_human is 9.75G, the number of keys is 1 million, and the RDB file size is 3GB. 39 | The following shows the time taken by the four tools above to obtain the 100 keys occupying the most memory: 40 | 41 | 42 | | Tool | Time taken | 43 | | ------------------------------ | --------- | 44 | | redis-rdb-tools | 25m38.68s | 45 | | https://github.com/HDT3213/rdb | 50.68s | 46 | | redis-cli --memkeys | 40.22s | 47 | | redis-find-big-key | 29.12s | 48 | 49 | 50 | 51 | # Tool Effect 52 | 53 | ```bash 54 | # ./redis-find-big-key -addr 10.0.1.76:6379 -cluster-mode 55 | Log file not specified, using default: /tmp/10.0.1.76:6379_20250222_043832.txt 56 | Scanning keys from node: 10.0.1.76:6380 (slave) 57 | Node: 10.0.1.76:6380 58 | -------- Summary -------- 59 | Sampled 8 keys in the keyspace!
60 | Total key length in bytes is 2.96 MB (avg len 379.43 KB) 61 | Top biggest keys: 62 | +------------------------------+--------+-----------+---------------------+ 63 | | Key | Type | Size | Number of elements | 64 | +------------------------------+--------+-----------+---------------------+ 65 | | mysortedset_20250222043729:1 | zset | 739.6 KB | 8027 members | 66 | | myhash_20250222043741:2 | hash | 648.12 KB | 9490 fields | 67 | | mysortedset_20250222043741:1 | zset | 536.44 KB | 5608 members | 68 | | myset_20250222043729:1 | set | 399.66 KB | 8027 members | 69 | | myset_20250222043741:1 | set | 328.36 KB | 5608 members | 70 | | myhash_20250222043729:2 | hash | 222.65 KB | 3917 fields | 71 | | mylist_20250222043729:1 | list | 160.54 KB | 8027 items | 72 | | mykey_20250222043729:2 | string | 73 bytes | 7 bytes (value len) | 73 | +------------------------------+--------+-----------+---------------------+ 74 | Scanning keys from node: 10.0.1.202:6380 (slave) 75 | Node: 10.0.1.202:6380 76 | -------- Summary -------- 77 | Sampled 8 keys in the keyspace! 
78 | Total key length in bytes is 3.11 MB (avg len 398.23 KB) 79 | Top biggest keys: 80 | +------------------------------+--------+------------+---------------------+ 81 | | Key | Type | Size | Number of elements | 82 | +------------------------------+--------+------------+---------------------+ 83 | | mysortedset_20250222043741:2 | zset | 1020.13 KB | 9490 members | 84 | | myset_20250222043741:2 | set | 588.81 KB | 9490 members | 85 | | myhash_20250222043729:1 | hash | 456.1 KB | 8027 fields | 86 | | mysortedset_20250222043729:2 | zset | 404.5 KB | 3917 members | 87 | | myhash_20250222043741:1 | hash | 335.79 KB | 5608 fields | 88 | | myset_20250222043729:2 | set | 195.87 KB | 3917 members | 89 | | mylist_20250222043741:2 | list | 184.55 KB | 9490 items | 90 | | mykey_20250222043741:1 | string | 73 bytes | 7 bytes (value len) | 91 | +------------------------------+--------+------------+---------------------+ 92 | Scanning keys from node: 10.0.1.147:6380 (slave) 93 | Node: 10.0.1.147:6380 94 | -------- Summary -------- 95 | Sampled 4 keys in the keyspace! 96 | Total key length in bytes is 192.9 KB (avg len 48.22 KB) 97 | Top biggest keys: 98 | +-------------------------+--------+-----------+---------------------+ 99 | | Key | Type | Size | Number of elements | 100 | +-------------------------+--------+-----------+---------------------+ 101 | | mylist_20250222043741:1 | list | 112.45 KB | 5608 items | 102 | | mylist_20250222043729:2 | list | 80.31 KB | 3917 items | 103 | | mykey_20250222043729:1 | string | 73 bytes | 7 bytes (value len) | 104 | | mykey_20250222043741:2 | string | 73 bytes | 7 bytes (value len) | 105 | +-------------------------+--------+-----------+---------------------+ 106 | ``` 107 | 108 | # Tool Address 109 | 110 | Project address: https://github.com/slowtech/redis-find-big-key 111 | You can directly download the binary package or compile the source code. 
112 | 113 | ## Directly Download the Binary Package 114 | 115 | ```bash 116 | # wget https://github.com/slowtech/redis-find-big-key/releases/download/v1.0.0/redis-find-big-key-linux-amd64.tar.gz 117 | # tar xvf redis-find-big-key-linux-amd64.tar.gz 118 | ``` 119 | 120 | After decompression, an executable file named `redis-find-big-key` will be generated in the current directory. 121 | 122 | ## Build from Source Code 123 | 124 | ```bash 125 | # wget https://github.com/slowtech/redis-find-big-key/archive/refs/tags/v1.0.0.tar.gz 126 | # tar xvf v1.0.0.tar.gz 127 | # cd redis-find-big-key-1.0.0 128 | # go build 129 | ``` 130 | 131 | After compilation, an executable file named `redis-find-big-key` will be generated in the current directory. 132 | 133 | # Parameter Parsing 134 | 135 | ```bash 136 | # ./redis-find-big-key --help 137 | Usage of ./redis-find-big-key: 138 | -addr string 139 | Redis server address in the format <ip>:<port> 140 | -cluster-mode 141 | Enable cluster mode to get keys from all shards in the Redis cluster 142 | -concurrency int 143 | Maximum number of nodes to process concurrently (default 1) 144 | -direct 145 | Perform operation on the specified node. If not specified, the operation will default to executing on the slave node 146 | -log-file string 147 | Log file for saving progress and intermediate result 148 | -master-yes 149 | Execute even if the Redis role is master 150 | -password string 151 | Redis password 152 | -samples uint 153 | Samples for memory usage (default 5) 154 | -skip-lazyfree-check 155 | Skip check lazyfree-lazy-expire 156 | -sleep float 157 | Sleep duration (in seconds) after processing each batch 158 | -tls 159 | Enable TLS for Redis connection 160 | -top int 161 | Maximum number of biggest keys to display (default 100) 162 | ``` 163 | 164 | The specific meanings of each parameter are as follows: 165 | 166 | - -addr: Specify the address of the Redis instance in the format `<ip>:<port>`, for example, 10.0.0.108:6379.
Note: if cluster mode (`-cluster-mode`) is not enabled, multiple addresses can be specified, separated by commas, for example, 10.0.0.108:6379,10.0.0.108:6380. If cluster mode is enabled, only one address can be specified, and the tool will automatically discover the other nodes in the cluster. 167 | 168 | - -cluster-mode: Enable cluster mode. The tool automatically analyzes each shard in the Redis Cluster, preferring the slave node; only when a shard has no slave node will its master node be analyzed. 169 | 170 | - -concurrency: Set the concurrency, with a default value of 1, i.e., nodes are analyzed one by one. If there are many nodes to analyze, increasing the concurrency can speed up the analysis. 171 | 172 | - -direct: Perform the analysis directly on the node specified by -addr, skipping the default logic of automatically selecting the slave node. 173 | 174 | - -log-file: Specify the path of the log file used to record progress and intermediate results during the analysis. If not specified, the default is `/tmp/<addr>_<timestamp>.txt`, for example, /tmp/10.0.0.108:6379_20250218_125955.txt. 175 | 176 | - -master-yes: If a master node is among the nodes to be analyzed (common reasons: no slave node exists, or analysis on the master was requested via the -direct parameter), the tool reports the following error: 177 | 178 | ``` 179 | Error: nodes 10.0.1.76:6379 are master. To execute, you must specify -master-yes 180 | ``` 181 | 182 | If you are sure the analysis can be run on the master node, specify -master-yes to skip this check. 183 | 184 | - -password: Specify the password of the Redis instance. 185 | 186 | - -samples: Set the sampling count in the `MEMORY USAGE key [SAMPLES count]` command.
For data structures with multiple elements (such as LIST, SET, ZSET, HASH, STREAM), too low a sample count can make the memory estimate inaccurate, while too high a count increases computation time and resource consumption. If SAMPLES is not specified, the default value is 5. 187 | 188 | - -skip-lazyfree-check: If the analysis is run on the master node, pay special attention to large expired keys, because scanning triggers the deletion of expired keys. If lazy deletion (`lazyfree-lazy-expire`) is not enabled, the deletion runs in the main thread, and deleting a large key there can block normal business requests. 189 | Therefore, when analyzing a master node, the tool automatically checks whether lazy deletion is enabled. If it is not, the tool reports the following error and terminates, to avoid affecting online traffic: 190 | 191 | ```bash 192 | Error: nodes 10.0.1.76:6379 are master and lazyfree-lazy-expire is set to 'no'. Scanning might trigger large key expiration, which could block the main thread. Please set lazyfree-lazy-expire to 'yes' for better performance. To skip this check, you must specify --skip-lazyfree-check 193 | ``` 194 | 195 | In this case, it is recommended to enable lazy deletion with `CONFIG SET lazyfree-lazy-expire yes`. 196 | If it is confirmed that there are no large expired keys, you can specify -skip-lazyfree-check to skip this check. 197 | 198 | - -sleep: Set the sleep time (in seconds) after each batch of keys is scanned. 199 | 200 | - -tls: Enable the TLS connection. 201 | 202 | - -top: Display the top N keys occupying the most memory. The default is 100.
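The `-top` option only ever needs the N largest keys, so results can be maintained incrementally while scanning instead of sorting every sampled key at the end. A minimal Go sketch of that idea using a min-heap, assuming key sizes arrive as `(name, bytes)` pairs (this illustrates the technique only; it is not the tool's actual code):

```go
package main

import (
	"container/heap"
	"fmt"
)

// keyInfo records one sampled key and its MEMORY USAGE result in bytes.
type keyInfo struct {
	Name string
	Size int64
}

// topNHeap is a min-heap ordered by Size, so the smallest of the current
// top-N candidates sits at the root and is evicted first.
type topNHeap []keyInfo

func (h topNHeap) Len() int            { return len(h) }
func (h topNHeap) Less(i, j int) bool  { return h[i].Size < h[j].Size }
func (h topNHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *topNHeap) Push(x interface{}) { *h = append(*h, x.(keyInfo)) }
func (h *topNHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// topN keeps only the n largest keys out of an arbitrary stream of samples.
func topN(samples []keyInfo, n int) []keyInfo {
	h := &topNHeap{}
	heap.Init(h)
	for _, k := range samples {
		if h.Len() < n {
			heap.Push(h, k)
		} else if k.Size > (*h)[0].Size {
			// Bigger than the smallest candidate: evict and replace.
			heap.Pop(h)
			heap.Push(h, k)
		}
	}
	// Drain from smallest to largest, filling the result back to front
	// so the output is ordered largest first, as in the tables above.
	out := make([]keyInfo, h.Len())
	for i := len(out) - 1; i >= 0; i-- {
		out[i] = heap.Pop(h).(keyInfo)
	}
	return out
}

func main() {
	samples := []keyInfo{
		{"mylist:1", 164000}, {"myzset:1", 757000},
		{"myhash:2", 663000}, {"mykey:2", 73},
	}
	for _, k := range topN(samples, 2) {
		fmt.Println(k.Name, k.Size)
	}
}
```

This keeps memory bounded at N entries per node no matter how many keys are scanned, which matters when a keyspace holds millions of keys.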
203 | 204 | # Common Usage 205 | 206 | ## Analyze a Single Node 207 | 208 | ```bash 209 | ./redis-find-big-key -addr 10.0.1.76:6379 210 | Scanning keys from node: 10.0.1.202:6380 (slave) 211 | ``` 212 | 213 | Note that in the example above, the node actually scanned differs from the one specified. This is because 10.0.1.76:6379 is a master node, and the tool selects a slave node for analysis by default. Only when the specified master has no slave will the tool scan the master directly. 214 | 215 | ## Analyze a Single Redis Cluster 216 | 217 | ```bash 218 | ./redis-find-big-key -addr 10.0.1.76:6379 -cluster-mode 219 | ``` 220 | 221 | Just provide the address of any node in the cluster, and the tool will automatically discover the other nodes. It preferentially selects the slave node in each shard; only when a shard has no slave node will that shard's master be analyzed. 222 | 223 | ## Analyze Multiple Nodes 224 | 225 | ```bash 226 | ./redis-find-big-key -addr 10.0.1.76:6379,10.0.1.202:6379,10.0.1.147:6379 227 | ``` 228 | 229 | The nodes are independent of each other and can come from the same cluster or different clusters. Note that if multiple node addresses are specified in -addr, the -cluster-mode parameter cannot be used. 230 | 231 | ## Analyze the Master Node 232 | 233 | If you need to analyze a master node, specify it explicitly and use the `-direct` parameter: 234 | 235 | ```bash 236 | ./redis-find-big-key -addr 10.0.1.76:6379 -direct -master-yes 237 | ``` 238 | 239 | # Notes 240 | 241 | 1. This tool only works with Redis 4.0 and above, because `MEMORY USAGE` and `lazyfree-lazy-expire` were introduced in Redis 4.0. 242 | 243 | 2. The size of the same key displayed by `redis-find-big-key` and `redis-cli` may differ.
This is normal because `redis-find-big-key` defaults to analyzing slave nodes, showing the key size in the slave, while `redis-cli` can only analyze the master node, displaying the key size in the master. Consider the following example: 244 | 245 | ```bash 246 | # ./redis-find-big-key -addr 10.0.1.76:6379 -top 1 247 | Scanning keys from node: 10.0.1.202:6380 (slave) 248 | ... 249 | Top biggest keys: 250 | +------------------------------+------+------------+--------------------+ 251 | | Key | Type | Size | Number of elements | 252 | +------------------------------+------+------------+--------------------+ 253 | | mysortedset_20250222043741:2 | zset | 1020.13 KB | 9490 members | 254 | +------------------------------+------+------------+--------------------+ 255 | 256 | # redis-cli -h 10.0.1.76 -p 6379 -c MEMORY USAGE mysortedset_20250222043741:2 257 | (integer) 1014242 258 | # echo "scale=2; 1014242 / 1024" | bc 259 | 990.47 260 | ``` 261 | 262 | One shows 1020.13 KB, and the other 990.47 KB. 263 | 264 | If you directly check the key size in the master using `redis-find-big-key`, the result will match `redis-cli`: 265 | 266 | ```bash 267 | # ./redis-find-big-key -addr 10.0.1.76:6379 -direct --master-yes -top 1 --skip-lazyfree-check 268 | Scanning keys from node: 10.0.1.76:6379 (master) 269 | ... 270 | Top biggest keys: 271 | +------------------------------+------+-----------+--------------------+ 272 | | Key | Type | Size | Number of elements | 273 | +------------------------------+------+-----------+--------------------+ 274 | | mysortedset_20250222043741:2 | zset | 990.47 KB | 9490 members | 275 | +------------------------------+------+-----------+--------------------+ 276 | ``` 277 | 278 | 279 | 280 | # Implementation Principle 281 | 282 | This tool is implemented by referring to `redis-cli --memkeys`. 
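As an aside, `MEMORY USAGE` returns a raw byte count; the human-readable sizes in the tables above (e.g. 1014242 bytes shown as 990.47 KB) come from a simple unit conversion, sketched here in Go (assumed formatting, not necessarily the tool's exact code):

```go
package main

import "fmt"

// humanSize formats a MEMORY USAGE byte count with two decimals and
// KB/MB/GB units at 1024 boundaries, like the tables in this article.
func humanSize(bytes int64) string {
	const (
		kb = 1 << 10
		mb = 1 << 20
		gb = 1 << 30
	)
	switch {
	case bytes >= gb:
		return fmt.Sprintf("%.2f GB", float64(bytes)/gb)
	case bytes >= mb:
		return fmt.Sprintf("%.2f MB", float64(bytes)/mb)
	case bytes >= kb:
		return fmt.Sprintf("%.2f KB", float64(bytes)/kb)
	default:
		return fmt.Sprintf("%d bytes", bytes)
	}
}

func main() {
	// The raw value returned by MEMORY USAGE on the master in the example above:
	fmt.Println(humanSize(1014242)) // 990.47 KB
}
```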
283 | 284 | In fact, both `redis-cli --bigkeys` and `redis-cli --memkeys` call the `findBigKeys` function with different parameters: 285 | 286 | ```c++ 287 | /* Find big keys */ 288 | if (config.bigkeys) { 289 | if (cliConnect(0) == REDIS_ERR) exit(1); 290 | findBigKeys(0, 0); 291 | } 292 | 293 | /* Find large keys */ 294 | if (config.memkeys) { 295 | if (cliConnect(0) == REDIS_ERR) exit(1); 296 | findBigKeys(1, config.memkeys_samples); 297 | } 298 | ``` 299 | 300 | Next, let’s look at the specific implementation logic of this function: 301 | 302 | ```c++ 303 | static void findBigKeys(int memkeys, unsigned memkeys_samples) { 304 | ... 305 | // Get the total number of keys via the DBSIZE command 306 | total_keys = getDbSize(); 307 | 308 | /* Status message */ 309 | printf("\n# Scanning the entire keyspace to find biggest keys as well as\n"); 310 | printf("# average sizes per key type. You can use -i 0.1 to sleep 0.1 sec\n"); 311 | printf("# per 100 SCAN commands (not usually needed).\n\n"); 312 | 313 | /* SCAN loop */ 314 | do { 315 | /* Calculate approximate percentage completion */ 316 | pct = 100 * (double)sampled/total_keys; 317 | 318 | // Scan keys via the SCAN command 319 | reply = sendScan(&it); 320 | scan_loops++; 321 | // Get the key names of the current batch 322 | keys = reply->element[1]; 323 | ... 
324 | // Use pipeline to batch send TYPE commands to get the type of each key 325 | getKeyTypes(types_dict, keys, types); 326 | // Use pipeline to batch send corresponding commands to get the size of each key 327 | getKeySizes(keys, types, sizes, memkeys, memkeys_samples); 328 | 329 | // Process each key and update statistics 330 | for(i=0;i<keys->elements;i++) { 331 | typeinfo *type = types[i]; 332 | /* Skip keys that disappeared between SCAN and TYPE */ 333 | if(!type) 334 | continue; 335 | 336 | type->totalsize += sizes[i]; // Accumulate the total size of keys of this type 337 | type->count++; // Count the number of keys of this type 338 | totlen += keys->element[i]->len; // Accumulate the key length 339 | sampled++; // Count the number of scanned keys 340 | // If the current key size exceeds the maximum of this type, update the maximum key size and print statistics 341 | if(type->biggest<sizes[i]) { 342 | if(type->biggest_key) 343 | sdsfree(type->biggest_key); 344 | type->biggest_key = sdscatrepr(sdsempty(), keys->element[i]->str, keys->element[i]->len); 345 | ... 346 | printf( 347 | "[%05.2f%%] Biggest %-6s found so far '%s' with %llu %s\n", 348 | pct, type->name, type->biggest_key, sizes[i], 349 | !memkeys? type->sizeunit: "bytes"); 350 | 351 | type->biggest = sizes[i]; 352 | } 353 | 354 | // Every 1 million keys scanned, output the current progress and the number of scanned keys 355 | if(sampled % 1000000 == 0) { 356 | printf("[%05.2f%%] Sampled %llu keys so far\n", pct, sampled); 357 | } 358 | } 359 | 360 | // If interval is set, sleep for a while every 100 SCAN commands 361 | if (config.interval && (scan_loops % 100) == 0) { 362 | usleep(config.interval); 363 | } 364 | 365 | freeReplyObject(reply); 366 | } while(force_cancel_loop == 0 && it != 0); 367 | ..
368 | // Output overall statistics 369 | printf("\n-------- summary -------\n\n"); 370 | if (force_cancel_loop) printf("[%05.2f%%] ", pct); // Show progress percentage if the loop was cancelled 371 | printf("Sampled %llu keys in the keyspace!\n", sampled); // Print the number of scanned keys 372 | printf("Total key length in bytes is %llu (avg len %.2f)\n\n", 373 | totlen, totlen ? (double)totlen/sampled : 0); // Print the total and average key name length 374 | 375 | // Output information about the largest key of each type 376 | di = dictGetIterator(types_dict); 377 | while ((de = dictNext(di))) { 378 | typeinfo *type = dictGetVal(de); 379 | if(type->biggest_key) { 380 | printf("Biggest %6s found '%s' has %llu %s\n", type->name, type->biggest_key, 381 | type->biggest, !memkeys? type->sizeunit: "bytes"); 382 | } // type->name is the key type, type->biggest_key is the largest key name 383 | } // type->biggest is the size of the largest key, !memkeys? type->sizeunit: "bytes" is the size unit 384 | 385 | .. 386 | // Output statistics for each type 387 | di = dictGetIterator(types_dict); 388 | while ((de = dictNext(di))) { 389 | typeinfo *type = dictGetVal(de); 390 | printf("%llu %ss with %llu %s (%05.2f%% of keys, avg size %.2f)\n", 391 | type->count, type->name, type->totalsize, !memkeys? type->sizeunit: "bytes", 392 | sampled ? 100 * (double)type->count/sampled : 0, 393 | type->count ? (double)type->totalsize/type->count : 0); 394 | } // sampled ? 100 * (double)type->count/sampled : 0 is the percentage of keys of this type among all scanned keys 395 | .. 396 | exit(0); 397 | } 398 | ``` 399 | 400 | The implementation logic of this function is as follows: 401 | 402 | 1. Use the DBSIZE command to get the total number of keys in the Redis database. 403 | 2. Use the SCAN command to batch scan keys and get the key names of the current batch. 404 | 3. Use pipeline to batch send TYPE commands to get the type of each key. 405 | 4. 
Use pipeline to batch send corresponding commands to get the size of each key: If `--bigkeys` is specified, use corresponding commands based on key type: STRLEN (string), LLEN (list), SCARD (set), HLEN (hash), ZCARD (zset), XLEN (stream). If `--memkeys` is specified, use the MEMORY USAGE command to get the memory usage of the key. 406 | 5. Process each key and update statistics: If a key’s size exceeds the maximum of its type, update the maximum and print relevant statistics. 407 | 6. Output summary information showing the largest key of each type and related statistics. 408 | -------------------------------------------------------------------------------- /README_cn.md: -------------------------------------------------------------------------------- 1 | # redis-find-big-key 2 | Redis 大 key 分析工具主要分为两类: 3 | 4 | **1. 离线分析** 5 | 6 | 基于 RDB 文件进行解析,常用工具是 redis-rdb-tools(https://github.com/sripathikrishnan/redis-rdb-tools)。 7 | 8 | 不过这个工具已近 5 年未更新,不支持 Redis 7,而且由于使用 Python 开发,解析速度较慢。 9 | 10 | 目前较为活跃的替代工具是 https://github.com/HDT3213/rdb ,该工具支持 Redis 7,并使用 Go 开发。 11 | 12 | **2. 在线分析** 13 | 14 | 常用工具是 redis-cli,提供两种分析方式: 15 | 16 | 1. --bigkeys:Redis 3.0.0 引入,统计的是 key 中元素的数量。 17 | 2. --memkeys:Redis 6.0.0 引入,通过`MEMORY USAGE`命令统计 key 的内存占用。 18 | 19 | 这两种方式的优缺点如下: 20 | 21 | - 离线分析:基于 RDB 文件进行解析,不会对线上实例产生影响,不足的是操作相对复杂,尤其是对于很多 Redis 云服务,由于禁用了 SYNC 命令,无法直接通过 `redis-cli --rdb ` 下载 RDB 文件,只能手动从控制台下载。 22 | - 在线分析:操作简单,只要有实例的访问权限,即可直接进行分析,不足的是分析过程中可能会对线上实例的性能产生一定影响。 23 | 24 | 本文要介绍的工具(`redis-find-big-key`)也是一个在线分析工具,其实现思路与`redis-cli --memkeys`类似,但功能更为强大实用。主要体现在: 25 | 26 | 1. 支持 TOP N 功能 27 | 28 | 该工具能够输出内存占用最多的前 N 个 key,而 redis-cli 只能输出每种类型中占用最多的单个 key。 29 | 30 | 2. 支持批量分析 31 | 32 | 该工具能够同时分析多个 Redis 节点,特别是对于 Redis Cluster,启用集群模式(`-cluster-mode`)后,会自动分析每个分片。而 redis-cli 只能针对单个节点进行分析。 33 | 34 | 3. 
自动选择从节点进行分析 35 | 36 | 为了减少对实例性能的影响,工具会自动选择从节点进行分析,即使指定的是主节点的地址。只有在没有从节点时,才会选择主节点进行分析。而 redis-cli 只能分析主节点。 37 | 38 | ## 测试时间对比 39 | 40 | 测试环境:Redis 6.2.17,单实例,used_memory_human 为 9.75G,key 数量 100w,RDB 文件大小 3GB。 41 | 42 | 以下是上述工具在获取内存占用最多的 100 个大 key 时的耗时情况: 43 | 44 | | 工具 | 耗时 | 45 | | ------------------------------ | --------- | 46 | | redis-rdb-tools | 25m38.68s | 47 | | https://github.com/HDT3213/rdb | 50.68s | 48 | | redis-cli --memkeys | 40.22s | 49 | | redis-find-big-key | 29.12s | 50 | 51 | ## 工具效果 52 | 53 | ```bash 54 | # ./redis-find-big-key -addr 10.0.1.76:6379 -cluster-mode 55 | Log file not specified, using default: /tmp/10.0.1.76:6379_20250222_043832.txt 56 | Scanning keys from node: 10.0.1.76:6380 (slave) 57 | 58 | Node: 10.0.1.76:6380 59 | -------- Summary -------- 60 | Sampled 8 keys in the keyspace! 61 | Total key length in bytes is 2.96 MB (avg len 379.43 KB) 62 | 63 | Top biggest keys: 64 | +------------------------------+--------+-----------+---------------------+ 65 | | Key | Type | Size | Number of elements | 66 | +------------------------------+--------+-----------+---------------------+ 67 | | mysortedset_20250222043729:1 | zset | 739.6 KB | 8027 members | 68 | | myhash_20250222043741:2 | hash | 648.12 KB | 9490 fields | 69 | | mysortedset_20250222043741:1 | zset | 536.44 KB | 5608 members | 70 | | myset_20250222043729:1 | set | 399.66 KB | 8027 members | 71 | | myset_20250222043741:1 | set | 328.36 KB | 5608 members | 72 | | myhash_20250222043729:2 | hash | 222.65 KB | 3917 fields | 73 | | mylist_20250222043729:1 | list | 160.54 KB | 8027 items | 74 | | mykey_20250222043729:2 | string | 73 bytes | 7 bytes (value len) | 75 | +------------------------------+--------+-----------+---------------------+ 76 | Scanning keys from node: 10.0.1.202:6380 (slave) 77 | 78 | Node: 10.0.1.202:6380 79 | -------- Summary -------- 80 | Sampled 8 keys in the keyspace! 
81 | Total key length in bytes is 3.11 MB (avg len 398.23 KB) 82 | 83 | Top biggest keys: 84 | +------------------------------+--------+------------+---------------------+ 85 | | Key | Type | Size | Number of elements | 86 | +------------------------------+--------+------------+---------------------+ 87 | | mysortedset_20250222043741:2 | zset | 1020.13 KB | 9490 members | 88 | | myset_20250222043741:2 | set | 588.81 KB | 9490 members | 89 | | myhash_20250222043729:1 | hash | 456.1 KB | 8027 fields | 90 | | mysortedset_20250222043729:2 | zset | 404.5 KB | 3917 members | 91 | | myhash_20250222043741:1 | hash | 335.79 KB | 5608 fields | 92 | | myset_20250222043729:2 | set | 195.87 KB | 3917 members | 93 | | mylist_20250222043741:2 | list | 184.55 KB | 9490 items | 94 | | mykey_20250222043741:1 | string | 73 bytes | 7 bytes (value len) | 95 | +------------------------------+--------+------------+---------------------+ 96 | Scanning keys from node: 10.0.1.147:6380 (slave) 97 | 98 | Node: 10.0.1.147:6380 99 | -------- Summary -------- 100 | Sampled 4 keys in the keyspace! 
101 | Total key length in bytes is 192.9 KB (avg len 48.22 KB) 102 | 103 | Top biggest keys: 104 | +-------------------------+--------+-----------+---------------------+ 105 | | Key | Type | Size | Number of elements | 106 | +-------------------------+--------+-----------+---------------------+ 107 | | mylist_20250222043741:1 | list | 112.45 KB | 5608 items | 108 | | mylist_20250222043729:2 | list | 80.31 KB | 3917 items | 109 | | mykey_20250222043729:1 | string | 73 bytes | 7 bytes (value len) | 110 | | mykey_20250222043741:2 | string | 73 bytes | 7 bytes (value len) | 111 | +-------------------------+--------+-----------+---------------------+ 112 | ``` 113 | 114 | ## 工具地址 115 | 116 | 项目地址:https://github.com/slowtech/redis-find-big-key 117 | 118 | 可直接下载二进制包,也可进行源码编译。 119 | 120 | ### 直接下载二进制包 121 | 122 | ```bash 123 | # wget https://github.com/slowtech/redis-find-big-key/releases/download/v1.0.0/redis-find-big-key-linux-amd64.tar.gz 124 | # tar xvf redis-find-big-key-linux-amd64.tar.gz 125 | ``` 126 | 127 | 解压后,会在当前目录生成一个名为`redis-find-big-key`的可执行文件。 128 | 129 | ### 源码编译 130 | 131 | ```bash 132 | # wget https://github.com/slowtech/redis-find-big-key/archive/refs/tags/v1.0.0.tar.gz 133 | # tar xvf v1.0.0.tar.gz 134 | # cd redis-find-big-key-1.0.0 135 | # go build 136 | ``` 137 | 138 | 编译完成后,会在当前目录生成一个名为`redis-find-big-key`的可执行文件。 139 | 140 | ## 参数解析 141 | 142 | ```bash 143 | # ./redis-find-big-key --help 144 | Usage of ./redis-find-big-key: 145 | -addr string 146 | Redis server address in the format : 147 | -cluster-mode 148 | Enable cluster mode to get keys from all shards in the Redis cluster 149 | -concurrency int 150 | Maximum number of nodes to process concurrently (default 1) 151 | -direct 152 | Perform operation on the specified node. 
If not specified, the operation will default to executing on the slave node 153 | -log-file string 154 | Log file for saving progress and intermediate result 155 | -master-yes 156 | Execute even if the Redis role is master 157 | -password string 158 | Redis password 159 | -samples uint 160 | Samples for memory usage (default 5) 161 | -skip-lazyfree-check 162 | Skip check lazyfree-lazy-expire 163 | -sleep float 164 | Sleep duration (in seconds) after processing each batch 165 | -tls 166 | Enable TLS for Redis connection 167 | -top int 168 | Maximum number of biggest keys to display (default 100) 169 | ``` 170 | 171 | 各个参数的具体含义如下: 172 | 173 | - -addr:指定 Redis 实例的地址,格式为`:`,例如 10.0.0.108:6379。注意, 174 | 175 | - 如果不启用集群模式(-cluster-mode),可以指定多个地址,地址之间用逗号分隔,例如 10.0.0.108:6379,10.0.0.108:6380。 176 | - 如果启用集群模式,只能指定一个地址,工具会自动发现集群中的其它节点。 177 | 178 | - -cluster-mode:开启集群模式。工具会自动分析 Redis Cluster 中的每个分片,并优先选择从节点,只有在对应分片没有从节点时,才会选择主节点进行分析。 179 | 180 | - -concurrency:设置并发度,默认值为 1,即逐个节点进行分析。如果要分析的节点比较多,可以增加并发度来提升分析速度。 181 | 182 | - -direct:在 -addr 指定的节点上直接进行分析,这样会跳过自动选择从节点这个默认逻辑。 183 | 184 | - -log-file:指定日志文件路径,用于记录分析过程中的进度信息和中间过程信息。不指定则默认是`/tmp/_.txt`,例如 /tmp/10.0.0.108:6379_20250218_125955.txt。 185 | 186 | - -master-yes:如果待分析的节点中存在主节点(常见原因:从节点不存在;通过 -direct 参数指定要在主节点上分析),工具会提示以下错误: 187 | 188 | > Error: nodes 10.0.1.76:6379 are master. To execute, you must specify --master-yes 189 | 190 | 如果确定可以在主节点上进行分析,可指定 -master-yes 跳过检测。 191 | 192 | - -password:指定 Redis 实例的密码。 193 | 194 | - -samples:设置`MEMORY USAGE key [SAMPLES count]`命令中的采样数量。对于包含多个元素的数据结构(如 LIST、SET、ZSET、HASH、STREAM 等),采样数量过低可能导致内存占用估算不准确,而过高则会增加计算时间和资源消耗。SAMPLES 不指定的话,默认为 5。 195 | 196 | - -skip-lazyfree-check:如果是在主节点上进行分析,需要特别注意过期大 key。因为扫描操作会触发过期 key 的删除,如果未开启惰性删除(`lazyfree-lazy-expire`),删除操作将在主线程中执行,删除大 key 时可能会导致阻塞,从而影响正常的业务请求。 197 | 198 | 因此,当工具在主节点上进行分析时,会自动检查节点是否启用了惰性删除。如果未启用,工具将提示以下错误并终止操作,以避免对线上业务造成影响: 199 | 200 | > Error: nodes 10.0.1.76:6379 are master and lazyfree-lazy-expire is set to 'no'. 
Scanning might trigger large key expiration, which could block the main thread. Please set lazyfree-lazy-expire to 'yes' for better performance. To skip this check, you must specify --skip-lazyfree-check 201 | 202 | 在这种情况下,建议通过`CONFIG SET lazyfree-lazy-expire yes`命令开启惰性删除。 203 | 204 | 如果确认没有过期大 key,想跳过检测,可指定 -skip-lazyfree-check。 205 | 206 | - -sleep:设置每扫描完一批数据后的休眠时间。 207 | 208 | - -tls:启用 TLS 连接。 209 | 210 | - -top: 显示占用内存最多的 N 个 key。默认是 100。 211 | 212 | 213 | 214 | ## 常见用法 215 | 216 | ### 分析单个节点 217 | 218 | ```bash 219 | ./redis-find-big-key -addr 10.0.1.76:6379 220 | Scanning keys from node: 10.0.1.202:6380 (slave) 221 | ``` 222 | 223 | 注意,在上面的示例中,指定的节点和实际扫描的节点并不相同。这是因为 10.0.1.76:6379 是主节点,而该工具默认会选择从库进行分析。只有当指定的主节点没有从库时,工具才会直接扫描该主节点。 224 | 225 | 226 | 227 | ### 分析单个 Redis 集群 228 | 229 | ```bash 230 | ./redis-find-big-key -addr 10.0.1.76:6379 -cluster-mode 231 | ``` 232 | 233 | 只需提供集群中任意一个节点的地址,工具会自动获取集群中其它节点的地址。同时,工具会优先选择从节点进行分析,只有在某个分片没有从节点时,才会选择该分片的主节点进行分析。 234 | 235 | 236 | 237 | ### 分析多个节点 238 | 239 | ```bash 240 | ./redis-find-big-key -addr 10.0.1.76:6379,10.0.1.202:6379,10.0.1.147:6379 241 | ``` 242 | 243 | 节点之间是相互独立的,可以来自同一个集群,也可以来自不同的集群。注意,如果 -addr 参数指定了多个节点地址,则不能再使用 -cluster-mode 参数。 244 | 245 | 246 | 247 | ### 对主节点进行分析 248 | 249 | 如果需要对主节点进行分析,可指定主节点并使用`-direct`参数。 250 | 251 | ```bash 252 | ./redis-find-big-key -addr 10.0.1.76:6379 -direct -master-yes 253 | ``` 254 | 255 | 256 | 257 | ## 注意事项 258 | 259 | 1.该工具仅适用于 Redis 4.0 及以上版本,因为`MEMORY USAGE`和`lazyfree-lazy-expire`是从 Redis 4.0 开始支持的。 260 | 261 | 2.同一个 key 在 redis-find-big-key 和 redis-cli 中显示的大小可能不一致,这是正常现象。原因在于,redis-find-big-key 默认选择从库进行分析,因此通常显示的是从库中的 key 大小,而 redis-cli 只能对主库进行分析,显示的是主库中的 key 大小。看下面这个示例。 262 | 263 | ```bash 264 | # ./redis-find-big-key -addr 10.0.1.76:6379 -top 1 265 | Scanning keys from node: 10.0.1.202:6380 (slave) 266 | ... 
267 | Top biggest keys: 268 | +------------------------------+------+------------+--------------------+ 269 | | Key | Type | Size | Number of elements | 270 | +------------------------------+------+------------+--------------------+ 271 | | mysortedset_20250222043741:2 | zset | 1020.13 KB | 9490 members | 272 | +------------------------------+------+------------+--------------------+ 273 | 274 | # redis-cli -h 10.0.1.76 -p 6379 -c MEMORY USAGE mysortedset_20250222043741:2 275 | (integer) 1014242 276 | 277 | # echo "scale=2; 1014242 / 1024" | bc 278 | 990.47 279 | ``` 280 | 281 | 一个是 1020.13 KB,一个是 990.47 KB。 282 | 283 | 如果直接通过 redis-find-big-key 查看主库中该 key 的大小,结果与 redis-cli 完全一致: 284 | 285 | ```bash 286 | # ./redis-find-big-key -addr 10.0.1.76:6379 -direct --master-yes -top 1 --skip-lazyfree-check 287 | Scanning keys from node: 10.0.1.76:6379 (master) 288 | ... 289 | Top biggest keys: 290 | +------------------------------+------+-----------+--------------------+ 291 | | Key | Type | Size | Number of elements | 292 | +------------------------------+------+-----------+--------------------+ 293 | | mysortedset_20250222043741:2 | zset | 990.47 KB | 9490 members | 294 | +------------------------------+------+-----------+--------------------+ 295 | ``` 296 | 297 | ## 实现原理 298 | 299 | 该工具是参考`redis-cli --memkeys`实现的。 300 | 301 | 实际上,无论是`redis-cli --bigkeys`还是`redis-cli --memkeys`,调用的都是`findBigKeys`函数,只不过传入的参数不一样。 302 | 303 | ```c 304 | /* Find big keys */ 305 | if (config.bigkeys) { 306 | if (cliConnect(0) == REDIS_ERR) exit(1); 307 | findBigKeys(0, 0); 308 | } 309 | 310 | /* Find large keys */ 311 | if (config.memkeys) { 312 | if (cliConnect(0) == REDIS_ERR) exit(1); 313 | findBigKeys(1, config.memkeys_samples); 314 | } 315 | ``` 316 | 317 | 接下来,我们看一下这个函数的具体实现逻辑。 318 | 319 | ```c 320 | static void findBigKeys(int memkeys, unsigned memkeys_samples) { 321 | ... 
322 |     // Get the total number of keys via the DBSIZE command
323 |     total_keys = getDbSize();
324 | 
325 |     /* Status message */
326 |     printf("\n# Scanning the entire keyspace to find biggest keys as well as\n");
327 |     printf("# average sizes per key type. You can use -i 0.1 to sleep 0.1 sec\n");
328 |     printf("# per 100 SCAN commands (not usually needed).\n\n");
329 | 
330 |     /* SCAN loop */
331 |     do {
332 |         /* Calculate approximate percentage completion */
333 |         pct = 100 * (double)sampled/total_keys;
334 | 
335 |         // Scan keys with the SCAN command
336 |         reply = sendScan(&it);
337 |         scan_loops++;
338 |         // Get the key names of the current batch
339 |         keys = reply->element[1];
340 |         ...
341 |         // Pipeline TYPE commands to get the type of each key
342 |         getKeyTypes(types_dict, keys, types);
343 |         // Pipeline the corresponding commands to get the size of each key
344 |         getKeySizes(keys, types, sizes, memkeys, memkeys_samples);
345 | 
346 |         // Process keys one by one and update the statistics
347 |         for(i=0;i<keys->elements;i++) {
348 |             typeinfo *type = types[i];
349 |             /* Skip keys that disappeared between SCAN and TYPE */
350 |             if(!type)
351 |                 continue;
352 | 
353 |             type->totalsize += sizes[i]; // Accumulate the total size per key type
354 |             type->count++; // Accumulate the key count per type
355 |             totlen += keys->element[i]->len; // Accumulate the total key name length
356 |             sampled++; // Accumulate the number of sampled keys
357 |             // If the key is bigger than the current maximum for its type, update the maximum and print progress.
358 |             if(sizes[i] > type->biggest) {
359 |                 if(type->biggest_key)
360 |                     sdsfree(type->biggest_key);
361 |                 type->biggest_key = sdscatrepr(sdsempty(), keys->element[i]->str, keys->element[i]->len);
362 |                 ...
363 |                 printf(
364 |                     "[%05.2f%%] Biggest %-6s found so far '%s' with %llu %s\n",
365 |                     pct, type->name, type->biggest_key, sizes[i],
366 |                     !memkeys?
type->sizeunit: "bytes");
367 | 
368 |                 type->biggest = sizes[i];
369 |             }
370 | 
371 |             // Every 1,000,000 sampled keys, print the current progress and sample count.
372 |             if(sampled % 1000000 == 0) {
373 |                 printf("[%05.2f%%] Sampled %llu keys so far\n", pct, sampled);
374 |             }
375 |         }
376 | 
377 |         // If an interval is configured, sleep after every 100 SCAN commands.
378 |         if (config.interval && (scan_loops % 100) == 0) {
379 |             usleep(config.interval);
380 |         }
381 | 
382 |         freeReplyObject(reply);
383 |     } while(force_cancel_loop == 0 && it != 0);
384 |     ...
385 |     // Print the overall summary
386 |     printf("\n-------- summary -------\n\n");
387 |     if (force_cancel_loop) printf("[%05.2f%%] ", pct); // If the loop was cancelled, show the progress percentage
388 |     printf("Sampled %llu keys in the keyspace!\n", sampled); // Print the number of sampled keys
389 |     printf("Total key length in bytes is %llu (avg len %.2f)\n\n",
390 |         totlen, totlen ? (double)totlen/sampled : 0); // Print the total and average key name length
391 | 
392 |     // Print the biggest key of each type
393 |     di = dictGetIterator(types_dict);
394 |     while ((de = dictNext(di))) {
395 |         typeinfo *type = dictGetVal(de);
396 |         if(type->biggest_key) {
397 |             printf("Biggest %6s found '%s' has %llu %s\n", type->name, type->biggest_key,
398 |                 type->biggest, !memkeys? type->sizeunit: "bytes");
399 |         } // type->name is the type name and type->biggest_key is the name of the biggest key;
400 |     } // type->biggest is its size, and !memkeys? type->sizeunit: "bytes" is the unit.
401 |     ...
402 |     // Print the statistics for each type
403 |     di = dictGetIterator(types_dict);
404 |     while ((de = dictNext(di))) {
405 |         typeinfo *type = dictGetVal(de);
406 |         printf("%llu %ss with %llu %s (%05.2f%% of keys, avg size %.2f)\n",
407 |             type->count, type->name, type->totalsize, !memkeys? type->sizeunit: "bytes",
408 |             sampled ? 100 * (double)type->count/sampled : 0,
409 |             type->count ? (double)type->totalsize/type->count : 0);
410 |     } // sampled ? 100 * (double)type->count/sampled : 0 is the percentage of this type among all sampled keys.
411 |     ...
412 |     exit(0);
413 | }
414 | ```
415 | 
416 | The function works as follows:
417 | 
418 | 1. Get the total number of keys in the Redis database with the DBSIZE command.
419 | 2. Scan keys in batches with the SCAN command and collect the key names of each batch.
420 | 3.
Pipeline TYPE commands to get the type of each key.
421 | 4. Pipeline the corresponding commands to get the size of each key:
422 |    - With --bigkeys, a type-specific command is used: STRLEN (string), LLEN (list), SCARD (set), HLEN (hash), ZCARD (zset), XLEN (stream).
423 |    - With --memkeys, the MEMORY USAGE command is used to get the key's memory footprint.
424 | 5. Process keys one by one and update the statistics: if a key is bigger than the current maximum for its type, update the maximum and print the related statistics.
425 | 
426 | 6. Print the summary, showing the biggest key of each type and its related statistics.
427 | 
--------------------------------------------------------------------------------
/go.mod:
--------------------------------------------------------------------------------
1 | module redis-find-big-key
2 | 
3 | go 1.21.13
4 | 
5 | require (
6 | 	github.com/redis/go-redis/v9 v9.6.1
7 | 	golang.org/x/net v0.30.0
8 | )
9 | 
10 | require (
11 | 	github.com/cespare/xxhash/v2 v2.2.0 // indirect
12 | 	github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
13 | 	github.com/mattn/go-runewidth v0.0.9 // indirect
14 | 	github.com/olekukonko/tablewriter v0.0.5 // indirect
15 | )
16 | 
--------------------------------------------------------------------------------
/go.sum:
--------------------------------------------------------------------------------
1 | github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
2 | github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
3 | github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
4 | github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
5 | github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
6 | github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
7 | github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
8 | github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
9 | github.com/mattn/go-runewidth v0.0.9
h1:Lm995f3rfxdpd6TSmuVCHVb/QhupuXlYr8sCI/QdE+0=
10 | github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
11 | github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=
12 | github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
13 | github.com/redis/go-redis/v9 v9.6.1 h1:HHDteefn6ZkTtY5fGUE8tj8uy85AHk6zP7CpzIAM0y4=
14 | github.com/redis/go-redis/v9 v9.6.1/go.mod h1:0C0c6ycQsdpVNQpxb1njEQIqkx5UcsM8FJCQLgE9+RA=
15 | golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4=
16 | golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU=
17 | 
--------------------------------------------------------------------------------
/redis_find_big_key.go:
--------------------------------------------------------------------------------
1 | package main
2 | 
3 | import (
4 | 	"crypto/tls"
5 | 	"flag"
6 | 	"fmt"
7 | 	"log"
8 | 	"os"
9 | 	"regexp"
10 | 	"sort"
11 | 	"strconv"
12 | 	"strings"
13 | 	"sync"
14 | 	"time"
15 | 
16 | 	"github.com/olekukonko/tablewriter"
17 | 	"github.com/redis/go-redis/v9"
18 | 	"golang.org/x/net/context"
19 | )
20 | 
21 | var (
22 | 	ctx        = context.Background()
23 | 	printMutex sync.Mutex
24 | 	logWriter  *os.File
25 | )
26 | 
27 | var (
28 | 	// Redis connection configuration
29 | 	addr     = flag.String("addr", "", "Redis server address in the format <host>:<port>")
30 | 	password = flag.String("password", "", "Redis password")
31 | 	tlsFlag  = flag.Bool("tls", false, "Enable TLS for Redis connection")
32 | 
33 | 	// Key scanning and memory usage configuration
34 | 	topN          = flag.Int("top", 100, "Maximum number of biggest keys to display")
35 | 	samples       = flag.Uint("samples", 5, "Samples for memory usage")
36 | 	sleepDuration = flag.Float64("sleep", 0, "Sleep duration (in seconds) after processing each batch")
37 | 
38 | 	// Additional flags
39 | 	masterYes = flag.Bool("master-yes", false, "Execute even if the Redis role is master")
40 | 	concurrency =
flag.Int("concurrency", 1, "Maximum number of nodes to process concurrently")
41 | 	clusterFlag       = flag.Bool("cluster-mode", false, "Enable cluster mode to get keys from all shards in the Redis cluster")
42 | 	directFlag        = flag.Bool("direct", false, "Perform operation on the specified node. If not specified, the operation will default to executing on the slave node")
43 | 	skipLazyfreeCheck = flag.Bool("skip-lazyfree-check", false, "Skip the lazyfree-lazy-expire check")
44 | 	logFile           = flag.String("log-file", "", "Log file for saving progress and intermediate results")
45 | )
46 | 
47 | // TypeInfo holds the command for checking key size and the unit of measurement
48 | type TypeInfo struct {
49 | 	SizeCmd  string
50 | 	SizeUnit string
51 | }
52 | 
53 | // KeyTypeSize holds the type of a key and the number of elements in it
54 | type KeyTypeSize struct {
55 | 	Type       string
56 | 	ElementNum uint64
57 | }
58 | 
59 | // KeySize holds the name of the key and its size
60 | type KeySize struct {
61 | 	Key  string
62 | 	Size uint64
63 | }
64 | 
65 | // BySize implements sort.Interface for []KeySize based on the Size field
66 | type BySize []KeySize
67 | 
68 | func (a BySize) Len() int           { return len(a) }
69 | func (a BySize) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
70 | func (a BySize) Less(i, j int) bool { return a[i].Size > a[j].Size } // Descending order
71 | 
72 | type NodeInfo struct {
73 | 	Address  string // Address of the Redis node
74 | 	Role     string // Role of the node (master or slave)
75 | 	MasterID string // For slaves, the ID of the master node
76 | 	ID       string // Unique ID for the node (used to link slave to master)
77 | }
78 | 
79 | // TypeInfo map for Redis key types
80 | var typeInfoMap = map[string]TypeInfo{
81 | 	"string": {"STRLEN", "bytes (value len)"},
82 | 	"list":   {"LLEN", "items"},
83 | 	"set":    {"SCARD", "members"},
84 | 	"hash":   {"HLEN", "fields"},
85 | 	"zset":   {"ZCARD", "members"},
86 | 	"stream": {"XLEN", "entries"},
87 | 	"none":   {"", ""},
88 | }
89 | 
90 | func InitLogFile()
(*os.File, error) { 91 | if *logFile == "" { 92 | nodes := strings.Split(*addr, ",") 93 | firstNode := nodes[0] 94 | 95 | timestamp := time.Now().Format("20060102_150405") 96 | *logFile = fmt.Sprintf("/tmp/%s_%s.txt", firstNode, timestamp) 97 | fmt.Printf("Log file not specified, using default: %s\n", *logFile) 98 | } 99 | 100 | var err error 101 | logWriter, err = os.OpenFile(*logFile, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644) 102 | if err != nil { 103 | return nil, fmt.Errorf("%v", err) 104 | } 105 | 106 | return logWriter, nil 107 | } 108 | 109 | func writeToLogFile(logWriter *os.File, logMessage string) { 110 | if logWriter != nil { 111 | printMutex.Lock() 112 | defer printMutex.Unlock() 113 | timestamp := time.Now().Format("2006/01/02 15:04:05") 114 | logMessage = fmt.Sprintf("%s %s", timestamp, logMessage) 115 | _, err := logWriter.WriteString(logMessage) 116 | if err != nil { 117 | log.Printf("Failed to write to log file: %v", err) 118 | } 119 | } 120 | } 121 | 122 | // bytesToHuman converts bytes to a human-readable string 123 | func bytesToHuman(n uint64) string { 124 | units := []string{"bytes", "KB", "MB", "GB"} 125 | unit := units[0] 126 | nFloat := float64(n) 127 | 128 | for _, u := range units[1:] { 129 | if nFloat < 1024 { 130 | break 131 | } 132 | nFloat /= 1024 133 | unit = u 134 | } 135 | 136 | result := fmt.Sprintf("%.2f", nFloat) 137 | // Remove trailing zeros and decimal point 138 | if strings.Contains(result, ".") { 139 | result = strings.TrimRight(result, "0") // Remove trailing zeros 140 | result = strings.TrimRight(result, ".") // Remove decimal point if it's the last character 141 | } 142 | return result + " " + unit 143 | } 144 | 145 | func newRedisClient(addr string) (*redis.Client, error) { 146 | options := &redis.Options{ 147 | Addr: addr, 148 | Password: *password, 149 | DialTimeout: 2 * time.Second, 150 | } 151 | 152 | if *tlsFlag { 153 | options.TLSConfig = &tls.Config{InsecureSkipVerify: true} 154 | } 155 | 156 | rdb := 
redis.NewClient(options) 157 | 158 | // Test the connection by sending a Ping 159 | _, err := rdb.Ping(ctx).Result() 160 | if err != nil { 161 | return nil, fmt.Errorf("Error connecting to Redis: %v", err) 162 | } 163 | _, err = rdb.Do(ctx, "READONLY").Result() 164 | if err != nil { 165 | if err.Error() == "ERR This instance has cluster support disabled" { 166 | // Non-cluster mode, skip READONLY command 167 | return rdb, nil 168 | } 169 | return nil, fmt.Errorf("failed to send READONLY command: %v", err) 170 | } 171 | return rdb, nil 172 | } 173 | 174 | // getKeyInfoMap gathers key type and element counts for the biggest keys 175 | func getKeyInfoMap(rdb *redis.Client, biggestKeys []KeySize) (map[string]KeyTypeSize, error) { 176 | keyNames := make([]string, len(biggestKeys)) 177 | for i, keySize := range biggestKeys { 178 | keyNames[i] = keySize.Key 179 | } 180 | 181 | types, err := getKeyTypes(rdb, keyNames) 182 | if err != nil { 183 | return nil, err 184 | } 185 | elementNums, err := getKeySizes(rdb, keyNames, types, false, 0) 186 | if err != nil { 187 | return nil, err 188 | } 189 | 190 | keyInfoMap := make(map[string]KeyTypeSize, len(biggestKeys)) 191 | 192 | for i, key := range keyNames { 193 | keyInfoMap[key] = KeyTypeSize{ 194 | Type: types[i], 195 | ElementNum: elementNums[i], 196 | } 197 | } 198 | 199 | return keyInfoMap, nil 200 | } 201 | 202 | // scanKeys scans Redis keys with the SCAN command 203 | func scanKeys(rdb *redis.Client, cursor uint64) ([]string, uint64, error) { 204 | keys, newCursor, err := rdb.Scan(ctx, cursor, "*", 10).Result() 205 | if err != nil { 206 | return nil, 0, fmt.Errorf("error scanning keys: %v", err) 207 | } 208 | return keys, newCursor, nil 209 | } 210 | 211 | // getRedisVersion retrieves the Redis version 212 | func getRedisVersion(rdb *redis.Client) (string, int, error) { 213 | info, err := rdb.Info(ctx, "server").Result() 214 | if err != nil { 215 | return "", 0, fmt.Errorf("%v", err) 216 | } 217 | 218 | for _, line := 
range strings.Split(info, "\n") { 219 | if strings.HasPrefix(line, "redis_version:") { 220 | version := strings.TrimSpace(strings.Split(line, ":")[1]) 221 | majorVersion, _ := strconv.Atoi(strings.Split(version, ".")[0]) 222 | return version, majorVersion, nil 223 | } 224 | } 225 | return "", 0, fmt.Errorf("redis_version not found in server info") 226 | } 227 | 228 | // getKeyTypes retrieves the types of the specified keys 229 | func getKeyTypes(rdb *redis.Client, keys []string) ([]string, error) { 230 | pipe := rdb.Pipeline() 231 | typeCmds := make([]*redis.StatusCmd, len(keys)) 232 | for i, key := range keys { 233 | typeCmds[i] = pipe.Type(ctx, key) 234 | } 235 | // Execute the pipeline 236 | _, err := pipe.Exec(ctx) 237 | if err != nil { 238 | return nil, fmt.Errorf("pipeline execution failed: %v", err) 239 | } 240 | types := make([]string, len(keys)) 241 | 242 | for i, cmd := range typeCmds { 243 | if cmd.Err() != nil { 244 | return nil, fmt.Errorf("error getting type for key '%s': %v", keys[i], cmd.Err()) 245 | } 246 | types[i] = cmd.Val() 247 | } 248 | 249 | return types, nil 250 | } 251 | 252 | // getKeySizes retrieves the sizes of the specified keys 253 | func getKeySizes(rdb *redis.Client, keys []string, types []string, memkeys bool, samples uint) ([]uint64, error) { 254 | sizes := make([]uint64, len(keys)) 255 | pipeline := rdb.Pipeline() 256 | commands := make([]*redis.Cmd, len(keys)) 257 | 258 | for i, key := range keys { 259 | if !memkeys && typeInfoMap[types[i]].SizeCmd == "" { 260 | continue 261 | } 262 | 263 | if memkeys { 264 | commands[i] = pipeline.Do(ctx, "MEMORY", "USAGE", key, "SAMPLES", samples) 265 | } else { 266 | commands[i] = pipeline.Do(ctx, typeInfoMap[types[i]].SizeCmd, key) 267 | } 268 | } 269 | 270 | // Execute the pipeline 271 | _, err := pipeline.Exec(ctx) 272 | if err != nil { 273 | return nil, fmt.Errorf("error executing pipeline: %v", err) 274 | } 275 | 276 | // Collect the results 277 | for i, cmd := range commands { 278 | if 
!memkeys && typeInfoMap[types[i]].SizeCmd == "" { 279 | sizes[i] = 0 280 | continue 281 | } 282 | size, err := cmd.Uint64() 283 | if err != nil { 284 | return nil, fmt.Errorf("error getting size for key '%s': %v", keys[i], err) 285 | } 286 | sizes[i] = size 287 | } 288 | 289 | return sizes, nil 290 | } 291 | 292 | // printSummary prints the summary of the largest keys 293 | func printSummary(nodeAddr string, sampled, totalKeyLen, totalKeys uint64, biggestKeys []KeySize, keyInfoMap map[string]KeyTypeSize) { 294 | fmt.Printf("\nNode: %s\n-------- Summary --------\n", nodeAddr) 295 | fmt.Printf("Sampled %d keys in the keyspace!\n", sampled) 296 | var avgLen uint64 = 0 297 | if sampled != 0 { 298 | avgLen = uint64(float64(totalKeyLen) / float64(sampled)) 299 | } 300 | fmt.Printf("Total key length in bytes is %s (avg len %s)\n\n", bytesToHuman(totalKeyLen), bytesToHuman(avgLen)) 301 | 302 | // Print biggest keys 303 | if sampled > 0 { 304 | table := tablewriter.NewWriter(os.Stdout) 305 | table.SetAutoFormatHeaders(false) 306 | table.SetHeader([]string{"Key", "Type", "Size", "Number of elements"}) 307 | table.SetAlignment(tablewriter.ALIGN_CENTER) 308 | fmt.Println("Top biggest keys:") 309 | for _, ks := range biggestKeys { 310 | if ks.Size > 0 { 311 | keyName := ks.Key 312 | keyType := keyInfoMap[keyName].Type 313 | keyElementNum := keyInfoMap[keyName].ElementNum 314 | table.Append([]string{keyName, keyType, bytesToHuman(ks.Size), fmt.Sprintf("%d %s", keyElementNum, typeInfoMap[keyType].SizeUnit)}) 315 | } 316 | } 317 | table.Render() 318 | } 319 | } 320 | 321 | /* 322 | func compareKeySizes(rdb *redis.Client, keys []string, sizes []uint64, samples uint) error { 323 | for i, key := range keys { 324 | cmd := rdb.Do(ctx, "MEMORY", "USAGE", key, "SAMPLES", samples) 325 | memoryUsage, err := cmd.Uint64() 326 | if err != nil { 327 | return fmt.Errorf("error getting memory usage for key '%s': %v", key, err) 328 | } 329 | if memoryUsage != sizes[i] { 330 | fmt.Printf("Key '%s' 
- Size mismatch! MEMORY USAGE: %d, Expected Size: %d\n", key, memoryUsage, sizes[i]) 331 | } 332 | } 333 | 334 | return nil 335 | } 336 | */ 337 | // scanKeysFromNode scans keys from a specific node in the cluster 338 | func scanKeysFromNode(nodeAddr, role string) error { 339 | parts := strings.Split(nodeAddr, "@") 340 | nodeAddr = parts[0] 341 | fmt.Printf("Scanning keys from node: %s (%s)\n", nodeAddr, role) 342 | rdb, err := newRedisClient(nodeAddr) 343 | if err != nil { 344 | return fmt.Errorf("%s", err) 345 | } 346 | var sampled, totalKeys, totalKeyLen uint64 347 | var keys []string 348 | var cursor uint64 = 0 349 | var biggestKey uint64 350 | biggestKeys := make([]KeySize, 0, *topN) 351 | totalKeys, err = rdb.DBSize(ctx).Uint64() 352 | if err != nil { 353 | return fmt.Errorf("Error getting DB size: %v", err) 354 | } 355 | // Repeat scan process for each node 356 | for { 357 | pct := 100 * float64(sampled) / float64(totalKeys) 358 | keys, cursor, err = scanKeys(rdb, cursor) 359 | if err != nil { 360 | return err 361 | } 362 | if len(keys) == 0 { 363 | break 364 | } 365 | 366 | sizes, err := getKeySizes(rdb, keys, []string{}, true, *samples) 367 | 368 | if err != nil { 369 | return err 370 | } 371 | // compareKeySizes(rdb, keys, sizes, 5) 372 | for i, key := range keys { 373 | sampled++ 374 | totalKeyLen += sizes[i] 375 | if sizes[i] > biggestKey { 376 | biggestKey = sizes[i] 377 | types, err := getKeyTypes(rdb, []string{key}) 378 | if err != nil { 379 | return err 380 | } 381 | elementNum, err := getKeySizes(rdb, []string{key}, types, false, 0) 382 | if err != nil { 383 | return err 384 | } 385 | logMessage := fmt.Sprintf("%s [%05.2f%%] Biggest key found so far '%s' with type: %s, size: %s, %d %s\n", 386 | nodeAddr, pct, key, types[0], bytesToHuman(sizes[i]), elementNum[0], typeInfoMap[types[0]].SizeUnit) 387 | writeToLogFile(logWriter, logMessage) 388 | } 389 | if sampled%100000 == 0 { 390 | logMessage := fmt.Sprintf("%s [%05.2f%%] Sampled %d keys so far\n", 
nodeAddr, pct, sampled) 391 | writeToLogFile(logWriter, logMessage) 392 | } 393 | biggestKeys = append(biggestKeys, KeySize{Key: key, 394 | Size: sizes[i]}) 395 | } 396 | 397 | if len(biggestKeys) > *topN { 398 | sort.Sort(BySize(biggestKeys)) 399 | biggestKeys = biggestKeys[:*topN] 400 | } 401 | 402 | if *sleepDuration > 0 { 403 | time.Sleep(time.Duration(*sleepDuration * float64(time.Second))) 404 | } 405 | 406 | if cursor == 0 { 407 | logMessage := fmt.Sprintf("%s [%05.2f%%] Sampled a total of %d keys\n", nodeAddr, 100.00, sampled) 408 | writeToLogFile(logWriter, logMessage) 409 | break 410 | } 411 | } 412 | // Final sort and print summary 413 | sort.Sort(BySize(biggestKeys)) // Final sorting 414 | keyInfoMap, err := getKeyInfoMap(rdb, biggestKeys) 415 | if err != nil { 416 | return err 417 | } 418 | printMutex.Lock() 419 | defer printMutex.Unlock() 420 | printSummary(nodeAddr, sampled, totalKeyLen, totalKeys, biggestKeys, keyInfoMap) 421 | 422 | return nil 423 | } 424 | 425 | // getNonClusterNodes retrieves nodes in non-cluster mode 426 | func getNonClusterNodes(addrs []string) (map[string]string, error) { 427 | nodes := make(map[string]string) 428 | 429 | for _, addr := range addrs { 430 | rdb, err := newRedisClient(addr) 431 | if err != nil { 432 | return nil, fmt.Errorf("%v", err) 433 | } 434 | 435 | _, majorVersion, err := getRedisVersion(rdb) 436 | if err != nil { 437 | return nil, fmt.Errorf("failed to get Redis version for node %s: %v", addr, err) 438 | } 439 | if majorVersion < 4 { 440 | return nil, fmt.Errorf("node %s has Redis version < 4.0, which is not supported", addr) 441 | } 442 | 443 | nodeInfo, err := rdb.Info(ctx, "replication").Result() 444 | if err != nil { 445 | return nil, fmt.Errorf("failed to fetch replication info for node %s: %v", addr, err) 446 | } 447 | 448 | var masterNode, slaveNode string 449 | slaveInfoRegexp := regexp.MustCompile(`slave\d+:ip=.*`) 450 | for _, line := range strings.Split(nodeInfo, "\n") { 451 | if 
strings.HasPrefix(line, "role:master") { 452 | masterNode = addr 453 | if *directFlag { 454 | nodes[masterNode] = "master" 455 | break 456 | } 457 | } else if strings.HasPrefix(line, "role:slave") { 458 | slaveNode = addr 459 | if *directFlag { 460 | nodes[slaveNode] = "slave" 461 | break 462 | } 463 | } else if slaveInfoRegexp.MatchString(line) && strings.Contains(line, "online") { 464 | slaveInfo := strings.Split(line, ":") 465 | s1 := slaveInfo[1] 466 | slaveInfo = strings.Split(s1, ",") 467 | var host, port string 468 | for _, item := range slaveInfo { 469 | if strings.HasPrefix(item, "ip=") { 470 | host = strings.Split(item, "=")[1] 471 | } 472 | if strings.HasPrefix(item, "port=") { 473 | port = strings.Split(item, "=")[1] 474 | } 475 | } 476 | slaveNode = host + ":" + port 477 | break 478 | } 479 | } 480 | if !*directFlag { 481 | if slaveNode != "" { 482 | nodes[slaveNode] = "slave" 483 | } else if masterNode != "" { 484 | nodes[masterNode] = "master" 485 | } 486 | } 487 | } 488 | 489 | return nodes, nil 490 | } 491 | 492 | // getClusterNodes retrieves nodes in cluster mode 493 | func getClusterNodes(addr string) (map[string]string, error) { 494 | rdb, err := newRedisClient(addr) 495 | if err != nil { 496 | return nil, fmt.Errorf("failed to connect to cluster node %s: %v", addr, err) 497 | } 498 | 499 | _, majorVersion, err := getRedisVersion(rdb) 500 | if err != nil { 501 | return nil, fmt.Errorf("failed to get Redis version for cluster node %s: %v", addr, err) 502 | } 503 | if majorVersion < 4 { 504 | return nil, fmt.Errorf("cluster node %s has Redis version < 4.0, which is not supported", addr) 505 | } 506 | 507 | nodes := make(map[string]string) 508 | nodesInfo, err := rdb.ClusterNodes(ctx).Result() 509 | if err != nil { 510 | if err.Error() == "ERR This instance has cluster support disabled" { 511 | addrs := strings.Split(addr, ",") 512 | nodes, err = getNonClusterNodes(addrs) 513 | return nodes, err 514 | 515 | } 516 | return nil, fmt.Errorf("failed to 
fetch cluster nodes: %v", err) 517 | } 518 | 519 | var slaveNodes []NodeInfo 520 | var masterNodes []NodeInfo 521 | 522 | for _, line := range strings.Split(nodesInfo, "\n") { 523 | parts := strings.Fields(line) 524 | if len(parts) >= 8 { 525 | nodeID := parts[0] 526 | address := parts[1] 527 | role := parts[2] 528 | masterID := parts[3] // master ID of the slave node 529 | if strings.Contains(role, "slave") { 530 | slaveNodes = append(slaveNodes, NodeInfo{Address: address, MasterID: masterID}) 531 | } else if strings.Contains(role, "master") { 532 | masterNodes = append(masterNodes, NodeInfo{Address: address, ID: nodeID}) 533 | } 534 | } 535 | } 536 | 537 | for _, master := range masterNodes { 538 | foundSlave := false 539 | for _, slave := range slaveNodes { 540 | if slave.MasterID == master.ID { 541 | foundSlave = true 542 | nodes[slave.Address] = "slave" 543 | break 544 | } 545 | } 546 | if !foundSlave { 547 | nodes[master.Address] = "master" 548 | } 549 | } 550 | 551 | return nodes, nil 552 | } 553 | 554 | // findBigKeys scans the cluster nodes in parallel for the biggest keys 555 | func findBigKeys(addr string) error { 556 | var nodes map[string]string 557 | var err error 558 | if *clusterFlag { 559 | nodes, err = getClusterNodes(addr) 560 | } else { 561 | addrs := strings.Split(addr, ",") 562 | nodes, err = getNonClusterNodes(addrs) 563 | } 564 | 565 | if err != nil { 566 | return fmt.Errorf("%v", err) 567 | } 568 | 569 | var masterNodes []string 570 | for nodeAddr, role := range nodes { 571 | if role == "master" { 572 | masterNodes = append(masterNodes, nodeAddr) 573 | } 574 | } 575 | 576 | if len(masterNodes) > 0 { 577 | masterNodesStr := strings.Join(masterNodes, ", ") 578 | 579 | if !*masterYes { 580 | return fmt.Errorf("Error: nodes %s are master. 
To execute, you must specify --master-yes", masterNodesStr) 581 | } 582 | 583 | var lazyfreeNoNodes []string 584 | for _, nodeAddr := range masterNodes { 585 | rdb, err := newRedisClient(nodeAddr) 586 | if err != nil { 587 | return fmt.Errorf("failed to connect to master node %s: %v", nodeAddr, err) 588 | } 589 | 590 | lazyfreeConfig, err := rdb.ConfigGet(ctx, "lazyfree-lazy-expire").Result() 591 | if err != nil { 592 | return fmt.Errorf("failed to get lazyfree-lazy-expire config for master node %s: %v", nodeAddr, err) 593 | } 594 | 595 | lazyfreeValue := lazyfreeConfig["lazyfree-lazy-expire"] 596 | if lazyfreeValue == "no" { 597 | lazyfreeNoNodes = append(lazyfreeNoNodes, nodeAddr) 598 | } 599 | } 600 | 601 | if len(lazyfreeNoNodes) > 0 && !*skipLazyfreeCheck { 602 | lazyfreeNoNodesStr := strings.Join(lazyfreeNoNodes, ", ") 603 | return fmt.Errorf("Error: nodes %s are master and lazyfree-lazy-expire is set to 'no'. "+ 604 | "Scanning might trigger large key expiration, which could block the main thread. "+ 605 | "Please set lazyfree-lazy-expire to 'yes' for better performance. 
"+
606 | 			"To skip this check, you must specify --skip-lazyfree-check", lazyfreeNoNodesStr)
607 | 		}
608 | 	}
609 | 
610 | 	logWriter, err := InitLogFile()
611 | 	if err != nil {
612 | 		return fmt.Errorf("Error initializing log file: %v\n", err)
613 | 	}
614 | 	defer logWriter.Close()
615 | 
616 | 	var wg sync.WaitGroup
617 | 	sem := make(chan struct{}, *concurrency) // Semaphore for controlling concurrency
618 | 	errs := make(chan error, len(nodes))
619 | 
620 | 	// Parallel scan across nodes
621 | 	for nodeAddr, role := range nodes {
622 | 		sem <- struct{}{} // Acquire a slot
623 | 		wg.Add(1)
624 | 		go func(nodeAddr string, role string) {
625 | 			defer func() {
626 | 				<-sem
627 | 				wg.Done()
628 | 			}()
629 | 			if err := scanKeysFromNode(nodeAddr, role); err != nil {
630 | 				errs <- err
631 | 			}
632 | 		}(nodeAddr, role)
633 | 	}
634 | 
635 | 	// Wait for all goroutines to finish
636 | 	go func() {
637 | 		wg.Wait()
638 | 		close(errs)
639 | 	}()
640 | 
641 | 	var combinedErr error
642 | 	for err := range errs {
643 | 		if combinedErr == nil {
644 | 			combinedErr = err
645 | 		} else {
646 | 			combinedErr = fmt.Errorf("%v; %w", combinedErr, err)
647 | 		}
648 | 	}
649 | 
650 | 	return combinedErr
651 | }
652 | 
653 | func main() {
654 | 	flag.Parse()
655 | 
656 | 	// Check if addr is provided
657 | 	if *addr == "" {
658 | 		log.Fatalf("Error: Redis server address must be provided.")
659 | 	}
660 | 
661 | 	if *clusterFlag && *directFlag {
662 | 		log.Fatalf("-cluster-mode and -direct cannot be specified at the same time")
663 | 	}
664 | 
665 | 	if *clusterFlag {
666 | 		if strings.Contains(*addr, ",") {
667 | 			log.Fatalf("when -cluster-mode is specified, addr must be a single address")
668 | 		}
669 | 	}
670 | 
671 | 	addresses := strings.Split(*addr, ",")
672 | 	for _, address := range addresses {
673 | 		parts := strings.Split(address, ":")
674 | 		if len(parts) != 2 || parts[1] == "" {
675 | 			log.Fatal("Error: Redis server address must be in the format <host>:<port>")
676 | 		}
677 | 	}
678 | 
679 | 	if err := findBigKeys(*addr); err != nil {
680 | 		//
log.Fatalf("%v\n", err) 681 | for _, e := range strings.Split(err.Error(), ";") { 682 | log.Println(" -", strings.TrimSpace(e)) 683 | } 684 | } 685 | } 686 | --------------------------------------------------------------------------------
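Appendix: the top-N bookkeeping that both `findBigKeys` (redis-cli) and `scanKeysFromNode` (redis_find_big_key.go) rely on — append every sampled key, sort by size descending, truncate to N after each batch so memory stays bounded — can be isolated as a minimal, self-contained Go sketch. The `KeySize` name follows the source above; the sample keys and sizes are made up for illustration.

```go
package main

import (
	"fmt"
	"sort"
)

// KeySize mirrors the struct used in redis_find_big_key.go.
type KeySize struct {
	Key  string
	Size uint64
}

// topN sorts keys by size in descending order and keeps only the n biggest.
// Calling this after every SCAN batch (as the tool does) bounds the slice
// length regardless of how many keys the keyspace contains.
func topN(keys []KeySize, n int) []KeySize {
	sort.Slice(keys, func(i, j int) bool { return keys[i].Size > keys[j].Size })
	if len(keys) > n {
		keys = keys[:n]
	}
	return keys
}

func main() {
	// Hypothetical sampled batch: key names and sizes are illustrative only.
	sampled := []KeySize{
		{"user:1", 120}, {"queue:jobs", 1 << 20}, {"cache:a", 900}, {"session:42", 64},
	}
	for _, ks := range topN(sampled, 2) {
		fmt.Printf("%s\t%d\n", ks.Key, ks.Size)
	}
}
```

Sorting the whole slice on every batch is O(k log k) per batch; a min-heap of size N would be asymptotically cheaper, but for batch sizes of a few dozen keys the simple sort-and-truncate approach is more than fast enough.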