├── README.md
├── TODO
├── algorithm
├── setup.sh
├── thin_shrink.py
├── thin_shrink_splitranges.py
└── unit_tests

/README.md:
--------------------------------------------------------------------------------
lvm thin pools cannot be shrunk today.

lvreduce -L1g thinvg/p1

  Thin pool volumes thinvg/p1_tdata cannot be reduced in size yet.

This is because lvm doesn't support reduction of the thin pool data lv (tdata):
the pool may not have written data linearly, so there might be data at the end,
i.e data chunks may be allocated at the end of the pool device, and dm-thin
provides no defrag or other mechanism to free them up and place them linearly.


This tool makes thin pool shrink possible. It examines the thin pool metadata
mappings and moves single and range mappings that lie beyond the new size into
free space within the new limit. Once the mappings are moved to free ranges or
blocks inside the new limit, the pool can be safely reduced.

Run this script on an inactive pool with all of its thin LVs unmounted.

Usage:
./thin_shrink.py -L new_size -t vgname/poolname

At the end of the run, you will have a deactivated thin pool reduced to the
size you specified, if the reduction was possible [1].

[1] The pool will not be reduced if the new size is less than the number of
mapped blocks in the pool. The pool will also not be reduced if there are not
enough contiguous free extents of matching size to accommodate the range
mappings that need moving. (Range mappings will be split in the future - TODO.)

---------------------

This work has been used as a reference for a rust based implementation of
thin_shrink: https://github.com/jthornber/thin-provisioning-tools/blob/main/src/commands/thin_shrink.rs

(upstream commit mentioning this project - https://github.com/jthornber/thin-provisioning-tools/commit/b67b587a109ccdab49f9d5ca1ed90a5a8fcc9467 )
--------------------------------------------------------------------------------
/TODO:
--------------------------------------------------------------------------------
TODO:

1) check returns of each cmd executed, sanity checks

2) round off a passed smaller size to a multiple of the extent size

3) allow decimals in size

4) Make use of lvm2 support for lvreduce of tdata, when available.

5) Split ranges into individual blocks if shrinking is not possible without breaking range mappings.

6) Suggest the smallest size the pool can be shrunk to. (only possible once 5 is implemented)

7) prevent shrinking different pools at the same time

8) ensure the pool is inactive when thin_shrink is run
--------------------------------------------------------------------------------
/algorithm:
--------------------------------------------------------------------------------
Rough logic: (not totally in sync with code)

assumptions:
1) thin_rmap produces mappings sorted by data_block numbers.

usage: thin_shrink.py -L new_size -t vgname/poolname


0) verify args, sanity checks, etc. (a minimal sketch of the argument handling
follows below)
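As a point of reference, the argument handling of step 0 boils down to the
following, mirroring what thin_shrink.py's main() does (the value checks
themselves happen in the later steps):

    import argparse

    # -L/--size takes a size string such as "10G"; -t/--thinpool takes
    # "vgname/poolname". Both are required, as in thin_shrink.py.
    ap = argparse.ArgumentParser()
    ap.add_argument("-L", "--size", required=True, help="size to shrink to")
    ap.add_argument("-t", "--thinpool", required=True, help="vgname/poolname")
    args = vars(ap.parse_args())

    pool_to_shrink = args['thinpool']   # e.g. "thinvg/p1"
    size_to_shrink = args['size']       # e.g. "1g"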
1) activate the pool if it is not already active, parse the dmsetup table of
the pool, and create a shrink_tdata device with the same mapping, i.e parse
the poolname_tdata mappings from the dmsetup table,

for eg: thinvg-p1_tdata: 0 2097152 linear 252:32 10240

and create a new dm device like this:
dmsetup create shrink_tdata --table '0 2097152 linear 252:32 10240'

This is because the pool is going to be deactivated to do the shrinking, and we
need tdata access in r/w mode. (activating it using lvm activates it only in
readonly mode)

2) run lvs -o+chunksize and get the chunksize of the pool we need to shrink.
3) Deactivate the pool, activate the metadata readonly.
4) thin_dump the metadata into a file, thin_rmap the metadata into a file.

5) convert the new size to bytes, then divide by the chunksize we got in step 2.
Now we have the size_to_shrink_to, in chunks.

6) parse the thin_dump'd file (example below, values illustrative) and add up
all the mapped_blocks of each thinlv. If this total is larger than new_size in
chunks, reject the shrink straight away and exit.

<superblock uuid="" time="1" transaction="2" flags="0" version="2" data_block_size="128" nr_data_blocks="16384">
  <device dev_id="1" mapped_blocks="100" transaction="0" creation_time="0" snap_time="0">
    <range_mapping origin_begin="0" data_begin="0" length="26" time="0"/>
    <single_mapping origin_block="26" data_block="26" time="0"/>
...

7) Check if the last block in the allocated_ranges is less than the new_size in
chunks. If yes, the pool can be shrunk straight away, no need to move anything!
Change nr_data_blocks in the thin_dump'd file to new_size and change the VG
metadata too.


------------------------------------------------------------------------------------------------------------------------------------------------------------

logical_volumes {

p1 {
id = "8U93uO-eQ2L-JwDD-niZ4-gaJl-lweB-SCPh3O"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_time = 1591794117 # 2020-06-10 18:31:57 +0530
creation_host = "localhost.localdomain"
segment_count = 1

segment1 {
start_extent = 0
extent_count = 256 # 1024 Megabytes <------


------------------------------------------------------------------------------------------------------------------------------------------------------------


8) parse the thin_rmap output and create the data structures mentioned below.


data structures:

a)
list: allocated_ranges: each element is of type (block,length)
Eg - [(0,2) , (4,160) , (164,1) , (165,1) etc ... ]
note - A single_mapping is treated as a range with length 1, to eliminate a
separate data structure for storing single_mappings. This list will be created
ready-sorted in ascending order of block.

b)
list: free_ranges, generated using the gaps in the rmap figures line by line.
This is a python list of lists. each element is of type (block,length)
[(171,49), (221,76), (298,7) , etc ... ]
This list includes free single blocks, treated as free ranges of length 1.
(like 1 free block stuck between 2 allocated ranges. I don't expect many of
these.)
Reverse sort this on length.
free_ranges.sort(key=lambda x: x[1], reverse=True)

Also create

list: ranges_to_move
(These contain all the elements of allocated_ranges whose starting block+len is
larger than size_to_shrink_to)

list: changed_list
The changed_list can be treated as a useful output to tools that can better
optimise the actual block copying. (thin_shrink uses dd)

9) Sort the free_ranges list by length. (a condensed sketch of steps 8-12
follows below)
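A condensed, lightly simplified sketch of steps 8 through 12 (names follow
thin_shrink.py; rmap_lines and size_to_shrink_to are assumed to hold the
thin_rmap output lines and the new size in chunks):

    # Build allocated_ranges, free_ranges and ranges_to_move from the rmap
    # lines. The second field of each rmap line looks like "start..end",
    # in data-block numbers.
    allocated_ranges = []
    free_ranges = []
    ranges_to_move = []
    prev = None
    for line in rmap_lines:
        begin, _, end = line.split()[1].partition("..")
        start, length = int(begin), int(end) - int(begin)
        allocated_ranges.append([start, length])
        if start + length > size_to_shrink_to:        # lies beyond the new end
            ranges_to_move.append([start, length])
        gap_start = prev[0] + prev[1] if prev else 0   # gap between mappings
        if prev and start > gap_start and gap_start < size_to_shrink_to:
            gap_len = min(start, size_to_shrink_to) - gap_start
            free_ranges.append([gap_start, gap_len])
        prev = [start, length]

    free_ranges.sort(key=lambda x: x[1])                   # shortest first
    ranges_to_move.sort(key=lambda x: x[1], reverse=True)  # largest moved first

    # First-fit each range into the smallest free range that can hold it,
    # then put the trimmed remainder of that free range back.
    changed_list = {}                        # {old_start: [new_start, length]}
    for old_start, length in ranges_to_move:
        for i, (free_start, free_len) in enumerate(free_ranges):
            if free_len > length:
                changed_list[old_start] = [free_start, length]
                free_ranges[i] = [free_start + length, free_len - length]
                free_ranges.sort(key=lambda x: x[1])
                break

    # Step 12: the shrink is possible only if every range found a home.
    can_shrink = len(changed_list) == len(ranges_to_move)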
10) Reverse sort ranges_to_move according to length (we will begin by moving
the largest range mappings).

11) loop through ranges_to_move and check if there is a suitable (closest in
length) free range in the free_ranges list. If so, add an entry to
changed_list, and change the free_ranges list accordingly.

12) if changed_list is the same length as ranges_to_move, we can proceed with
the block copy using dd. Else report that the pool cannot be shrunk and exit,
removing all extra devices we created.

13) change nr_data_blocks in the thin_dump'd file to new_size. Change the xml
block mappings according to changed_list, i.e each occurrence of the block
number in either data_begin or data_block must be changed to the new numbers.
Write out a new xml mapping for thin_restore. Backup the VG metadata, then
change it for the vgcfgrestore with the new size. (change the pool device
since it has just 1 segment, instead of trying to change tdata which could
potentially have many segments, and it may get complex deciding which segment
needs reduction by how much to end up with the tdata size as the reduced
size.)

14) thin_restore the changed xml into a new lv created in the same VG,
vgcfgrestore the metadata.

15) The user can then activate the pool and thinlvs and mount the thin lvs.
--------------------------------------------------------------------------------
/setup.sh:
--------------------------------------------------------------------------------
#!/bin/bash
lvcreate -T -L 1100G -V1000G -n t1 --thinpool p1 thinvg
lvcreate -T -V1000G -n t2 thinvg/p1

mkfs.xfs /dev/thinvg/t2
mkfs.xfs /dev/thinvg/t1

mount /dev/thinvg/t1 /home/nkshirsa/formt/t1
mount /dev/thinvg/t2 /home/nkshirsa/formt/t2/

mkdir /home/nkshirsa/formt/t1/folder1
mkdir /home/nkshirsa/formt/t1/folder2
mkdir /home/nkshirsa/formt/t2/folder2
mkdir /home/nkshirsa/formt/t2/folder1

fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=4m --group_reporting --nrfiles=1000 --directory=/home/nkshirsa/formt/t2/folder1/ --size=10G
fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=4m --group_reporting --nrfiles=1000 --directory=/home/nkshirsa/formt/t2/folder2/ --size=10G
fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=4m --group_reporting --nrfiles=1000 --directory=/home/nkshirsa/formt/t1/folder2/ --size=10G
fio --name=randwrite --rw=randwrite --direct=1 --ioengine=libaio --bs=4m --group_reporting --nrfiles=1000 --directory=/home/nkshirsa/formt/t1/folder1/ --size=10G

dd if=/dev/urandom of=/home/nkshirsa/formt/t2/somefile bs=1M count=5000
dd if=/dev/urandom of=/home/nkshirsa/formt/t1/somefile bs=1M count=5000

rm -rf /home/nkshirsa/formt/t2/somefile
rm -rf /home/nkshirsa/formt/t2/folder1
rm -rf /home/nkshirsa/formt/t1/folder1
fstrim /home/nkshirsa/formt/t1
fstrim /home/nkshirsa/formt/t2

umount /home/nkshirsa/formt/t1
umount /home/nkshirsa/formt/t2

vgchange -an thinvg
--------------------------------------------------------------------------------
/thin_shrink.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
import argparse
import os
import sys
import time
import subprocess


def calculate_size_in_bytes(size):
    units = 
size[-1] 11 | if (units == 'M') or (units == "m"): 12 | size_without_units = size[:-1] 13 | size_in_bytes = long(size_without_units) * 1024 * 1024 14 | return long(size_in_bytes) 15 | 16 | if (units == 'G') or (units == "g"): 17 | size_without_units = size[:-1] 18 | size_in_bytes = long(size_without_units) * 1024 * 1024 * 1024 19 | return long(size_in_bytes) 20 | 21 | if (units == 'T') or (units == "t"): 22 | size_without_units = size[:-1] 23 | size_in_bytes = long(size_without_units) * 1024 * 1024 * 1024 * 1024 24 | return long(size_in_bytes) 25 | 26 | if units == 'k': 27 | size_without_units = size[:-1] 28 | size_in_bytes = long(size_without_units) * 1024 29 | return long(size_in_bytes) 30 | 31 | def activate_pool(pool_name): 32 | #print pool_name 33 | cmd_to_run = "lvchange -ay " + pool_name 34 | #os.system(cmd_to_run) 35 | result = subprocess.call(cmd_to_run, shell=True) 36 | if(result != 0): 37 | print ("could not run cmd %s" % (cmd_to_run)) 38 | 39 | def deactivate_pool(pool_name): 40 | #print pool_name 41 | cmd_to_run = "lvchange -an " + pool_name 42 | #print cmd_to_run 43 | #os.system(cmd_to_run) 44 | result = subprocess.call(cmd_to_run, shell=True) 45 | if(result != 0): 46 | print ("could not run cmd %s" % (cmd_to_run)) 47 | 48 | def activate_metadata_readonly(pool_name): 49 | #print pool_name 50 | cmd_to_run = "lvchange -ay " + pool_name + "_tmeta -y >/dev/null 2>&1" 51 | #print cmd_to_run 52 | #os.system(cmd_to_run) 53 | result = subprocess.call(cmd_to_run, shell=True) 54 | if(result != 0): 55 | print ("could not run cmd %s" % (cmd_to_run)) 56 | 57 | 58 | def deactivate_metadata(pool_name): 59 | #print pool_name 60 | cmd_to_run = "lvchange -an " + pool_name + "_tmeta " 61 | #print cmd_to_run 62 | #os.system(cmd_to_run) 63 | result = subprocess.call(cmd_to_run, shell=True) 64 | if(result != 0): 65 | print ("could not run cmd %s" % (cmd_to_run)) 66 | 67 | def thin_dump_metadata(pool_name): 68 | cmd_to_run = "thin_dump /dev/" + pool_name + "_tmeta" + " > /tmp/dump" 69 | #print cmd_to_run 70 | #os.system(cmd_to_run) 71 | result = subprocess.call(cmd_to_run, shell=True) 72 | if(result != 0): 73 | print ("could not run cmd %s" % (cmd_to_run)) 74 | 75 | def thin_rmap_metadata(pool_name, nr_chunks_str): 76 | cmd_to_run = "thin_rmap --region 0.." 
+ nr_chunks_str + " /dev/" + pool_name + "_tmeta" + " > /tmp/rmap" 77 | #print cmd_to_run 78 | #os.system(cmd_to_run) 79 | result = subprocess.call(cmd_to_run, shell=True) 80 | if(result != 0): 81 | print ("could not run cmd %s" % (cmd_to_run)) 82 | 83 | def get_nr_chunks(): 84 | with open('/tmp/dump') as f: 85 | first_line = f.readline() 86 | #print first_line 87 | nr_blocks_string = first_line.rpartition("=")[-1] 88 | #print nr_blocks_string 89 | nr_blocks_str = nr_blocks_string.rstrip()[1:-2] 90 | return nr_blocks_str 91 | 92 | 93 | def create_shrink_device(pool_name): 94 | split_vg_and_pool = pool_name.split('/') 95 | vgname = split_vg_and_pool[0] 96 | poolname = split_vg_and_pool[1] 97 | #print vgname 98 | #print poolname 99 | search_in_dmsetup = vgname + "-" + poolname + "_tdata" 100 | cmd = "dmsetup table | grep " + search_in_dmsetup 101 | #print cmd 102 | #os.system(cmd) 103 | result = subprocess.check_output(cmd, shell=True) 104 | 105 | #with open('/tmp/dmsetup_table_grepped', 'r') as myfile: 106 | #print dmsetup_lines 107 | #myfile.close() 108 | dmsetup_lines = result.splitlines() 109 | dmsetup_cmd = "echo -e " 110 | for line_iter in range(0, len(dmsetup_lines)): 111 | split_dmsetup_line = dmsetup_lines[line_iter].split(':' , 1) 112 | #print split_dmsetup_line[1] 113 | dmsetup_table_entry_of_tdata = split_dmsetup_line[1].lstrip() 114 | #print dmsetup_table_entry_of_tdata 115 | if (line_iter > 0): 116 | dmsetup_cmd = dmsetup_cmd + "\\" + "\\" + "n" 117 | dmsetup_cmd = dmsetup_cmd + "\'" + dmsetup_table_entry_of_tdata.rstrip() + "\'" 118 | 119 | dmsetup_cmd = dmsetup_cmd + " |" + " dmsetup create shrink_" + poolname.rstrip() 120 | #print "running this command.. " 121 | #print dmsetup_cmd 122 | #os.system(dmsetup_cmd) 123 | result = subprocess.call(dmsetup_cmd, shell=True) 124 | if(result != 0): 125 | print ("could not run cmd %s" % (dmsetup_cmd)) 126 | 127 | name_of_device = "shrink_" + poolname.rstrip() 128 | #print "also running dmsetup table" 129 | #cmd2 = "dmsetup table" 130 | #os.system(cmd2) 131 | return name_of_device 132 | 133 | 134 | def get_chunksize(pool_name): 135 | cmd = "lvs -o +chunksize " + pool_name + " | grep -v Chunk" 136 | #print "running this cmd now... \n" 137 | #print cmd 138 | #os.system(cmd) 139 | 140 | result = subprocess.check_output(cmd, shell=True) 141 | 142 | #with open('/tmp/chunksize', 'r') as myfile: 143 | chunk_line = result 144 | #print chunk_line 145 | chunksz_string = chunk_line.lstrip().rpartition(" ")[-1].rstrip() 146 | units = chunksz_string[-1] 147 | chunksz = chunksz_string[:-1] 148 | chunksz = chunksz[:chunksz.index('.')] 149 | # now that we removed the decimal part, add back the units 150 | chunksz_string = chunksz + units 151 | return chunksz_string 152 | 153 | 154 | def get_total_mapped_blocks(pool_name): 155 | split_vg_and_pool = pool_name.split('/') 156 | vgname = split_vg_and_pool[0] 157 | poolname = split_vg_and_pool[1] 158 | search_in_dmsetup_silently = vgname + "-" + poolname + "-tpool >/dev/null 2>&1" 159 | search_in_dmsetup = vgname + "-" + poolname + "-tpool" 160 | cmd = "dmsetup status " + search_in_dmsetup_silently 161 | #print cmd 162 | #os.system(cmd) 163 | 164 | #first test if command will run and not throw CalledProcessError exception 165 | 166 | result = subprocess.call(cmd, shell=True) 167 | 168 | if(result != 0): #no vgname-poolname-tpool in dmsetup status 169 | print "Warning: No tpool device found, perhaps pool has no thins?" 
170 | search_in_dmsetup = vgname + "-" + poolname 171 | search_in_dmsetup_quietly = vgname + "-" + poolname + " >/dev/null 2>&1" 172 | cmd = "dmsetup status " + search_in_dmsetup_quietly 173 | #print cmd 174 | #os.system(cmd) 175 | result = subprocess.call(cmd, shell=True) 176 | if(result==0): 177 | #print "found pool" 178 | cmd = "dmsetup status " + search_in_dmsetup 179 | result = subprocess.check_output(cmd, shell=True) 180 | else: 181 | print "did not find pool in dmsetup status" 182 | exit() 183 | 184 | dmsetup_line = result.splitlines() 185 | if(len(dmsetup_line)>1): #this should never happen anyway 186 | print "More than 1 device found in dmsetup status" 187 | exit() 188 | 189 | # eg: RHELCSB-test_pool: 0 20971520 thin-pool 0 4356/3932160 0/163840 - rw no_discard_passdown queue_if_no_space - 1024 190 | split_dmsetup_line = dmsetup_line[0].split(' ') 191 | dmsetup_status_entry = split_dmsetup_line[5].lstrip() 192 | used_blocks = dmsetup_status_entry.split('/')[0] 193 | 194 | 195 | else: # there is tpool 196 | 197 | cmd = "dmsetup status " + search_in_dmsetup 198 | result = subprocess.check_output(cmd, shell=True) 199 | 200 | dmsetup_line = result.splitlines() 201 | if(len(dmsetup_line)>1): #this should never happen anyway 202 | print "More than 1 device found in dmsetup status" 203 | exit() 204 | split_dmsetup_line = dmsetup_line[0].split(' ') 205 | dmsetup_status_entry = split_dmsetup_line[5].lstrip() 206 | used_blocks = dmsetup_status_entry.split('/')[0] 207 | 208 | #print "used blocks are.." 209 | #print used_blocks 210 | return long(used_blocks) 211 | 212 | def replace_chunk_numbers_in_xml(chunks_to_shrink_to, changed_list): 213 | count = 0 214 | #logfile.write("length of list of changes required is..\n") 215 | #logfile.write(len(changed_list)) 216 | new_xml = open('/tmp/changed.xml', 'w') 217 | 218 | with open('/tmp/dump') as f: 219 | for line in f: 220 | if (count == 0): # only do this for the first line, change nr_chunks 221 | count=1 222 | first_line = line 223 | first_line_fields = first_line.split() 224 | #print first_line_fields 225 | nr_blocks_field = first_line_fields[7] 226 | #print nr_blocks_field 227 | new_line_first_part="" 228 | for element in first_line_fields[0:-1]: 229 | new_line_first_part = new_line_first_part + " " + element 230 | #print new_line_first_part 231 | complete_first_line = new_line_first_part + " " + "nr_data_blocks=" + "\"" + str(chunks_to_shrink_to) + "\"" + ">" + "\n" 232 | #print complete_first_line 233 | complete_first_line = complete_first_line.lstrip() 234 | new_xml.write(complete_first_line) 235 | 236 | else: 237 | data_found = line.find("data_") 238 | 239 | if (data_found > 0): 240 | split_line = line[data_found:] 241 | last_quotes = split_line.index(" ") 242 | blocknum = split_line[12:last_quotes-1] 243 | int_block = int(blocknum) 244 | 245 | if(changed_list.get(int_block,0) == 0): 246 | # write the unmodified line as it is 247 | new_xml.write(line) 248 | else: 249 | to_change = changed_list[int_block] 250 | first_part_string = line[0:data_found+12] 251 | last_part_string = split_line[last_quotes+1:] 252 | new_string = first_part_string + str(to_change[0]) + "\" " + last_part_string 253 | #print new_string 254 | new_xml.write(new_string) 255 | else: 256 | new_xml.write(line) 257 | 258 | 259 | new_xml.close() 260 | f.close() 261 | 262 | 263 | def change_xml(chunks_to_shrink_to, chunksize_in_bytes, needs_dd=0): 264 | if (needs_dd == 0): 265 | # we only need to change the nr_blocks in the xml 266 | with open('/tmp/dump') as f: 267 | 
first_line = f.readline() 268 | first_line_fields = first_line.split() 269 | #print first_line_fields 270 | nr_blocks_field = first_line_fields[7] 271 | #print nr_blocks_field 272 | new_line_first_part="" 273 | for element in first_line_fields[0:-1]: 274 | new_line_first_part = new_line_first_part + " " + element 275 | #print new_line_first_part 276 | complete_first_line = new_line_first_part + " " + "nr_data_blocks=" + "\"" + str(chunks_to_shrink_to) + "\"" + ">" + "\n" 277 | #print complete_first_line 278 | complete_first_line = complete_first_line.lstrip() 279 | new_xml = open('/tmp/changed.xml', 'w') 280 | new_xml.write(complete_first_line) 281 | remaining = f.readlines() 282 | type(remaining) 283 | for i in range(0, len(remaining)): 284 | new_xml.write(remaining[i]) 285 | new_xml.close() 286 | else: 287 | # we need to dd blocks, change the numbers in the xml, etc 288 | print "Checking if blocks can be copied" 289 | allocated_ranges = [] 290 | free_ranges = [] 291 | earlier_element=[] 292 | ranges_requiring_move = [] 293 | 294 | #changed_list = [] # [(old, new, length) , (old, new, length), ... ] 295 | 296 | changed_list = {} # {old: [new,len] , old: [new,len], ... } 297 | total_blocks_requiring_copy = 0 298 | 299 | with open('/tmp/rmap') as f: 300 | entire_file = f.readlines() 301 | type(entire_file) 302 | for i in range(0,len(entire_file)): 303 | mapping = entire_file[i].split()[1] 304 | split_mapping = mapping.split(".") 305 | start_block = split_mapping[0] 306 | end_block = split_mapping[-1] 307 | length_of_mapping = int(end_block) - int(start_block) 308 | range_to_add = [] 309 | range_to_add.append(int(start_block)) 310 | range_to_add.append(length_of_mapping) 311 | allocated_ranges.append(range_to_add) 312 | if(int(start_block) + length_of_mapping > chunks_to_shrink_to): 313 | ranges_requiring_move.append(range_to_add) 314 | 315 | if (i == 0): # first iteration 316 | earlier_element = range_to_add 317 | 318 | else: 319 | #print start_block 320 | #print end_block 321 | #print "\n printing earlier element" 322 | #print earlier_element 323 | 324 | #iteration 1 onwards, start creating free_ranges list also 325 | 326 | if (int(start_block) > (earlier_element[0] + earlier_element[1]) ): 327 | if(int(start_block) < chunks_to_shrink_to): #if starting block is within the new size 328 | # we have a free range, so add it to the free_ranges list 329 | free_range_element = [] 330 | free_range_element.append(earlier_element[0] + earlier_element[1]) #start of free range 331 | free_range_element.append(int(start_block) - (earlier_element[0] + earlier_element[1])) #length of free range 332 | 333 | if((free_range_element[0] + free_range_element[1]) < chunks_to_shrink_to): #if entire free range is within new size 334 | free_ranges.append(free_range_element) 335 | #earlier_element = range_to_add 336 | else: 337 | free_range_element.pop(1) #get rid of older length, needs trimming 338 | free_range_element.append(chunks_to_shrink_to - (earlier_element[0])) #length of free range that will fit within new size 339 | free_ranges.append(free_range_element) 340 | #earlier_element = range_to_add 341 | #else: 342 | #earlier_element = range_to_add 343 | earlier_element = range_to_add 344 | 345 | #print "\nallocated ranges are.." 346 | #print allocated_ranges 347 | #print "\nfree ranges are.." 
348 | #print free_ranges 349 | free_ranges.sort(key=lambda x: x[1]) 350 | #print "\nsorted free ranges are" 351 | #print free_ranges 352 | 353 | ranges_requiring_move.sort(key=lambda x: x[1], reverse=True) 354 | #print "\nranges requiring move are" 355 | #print ranges_requiring_move 356 | 357 | #print "length of list of free ranges is..\n" 358 | #print len(free_ranges) 359 | 360 | for each_range in ranges_requiring_move: 361 | #find closest fitting free range I can move this to 362 | len_requiring_move = each_range[1] 363 | #print len_requiring_move 364 | for i in range(len(free_ranges)): 365 | if free_ranges[i][1] > len_requiring_move: 366 | #found free range to move this range to 367 | #print "range mapping of size" 368 | #remove that entry from the free ranges list, we will add it back with the reduced length later 369 | changed_element = [] 370 | #changed_element.append(each_range[0]) 371 | changed_element.append(free_ranges[i][0]) 372 | changed_element.append(len_requiring_move) 373 | total_blocks_requiring_copy = total_blocks_requiring_copy + len_requiring_move 374 | changed_list[each_range[0]] = changed_element 375 | #changed_list.append(changed_element) 376 | 377 | if((free_ranges[i][1] - len_requiring_move) > 0): 378 | new_free_range = [] 379 | new_free_range_block = free_ranges[i][0]+len_requiring_move 380 | new_range_length = free_ranges[i][1] - len_requiring_move 381 | new_free_range.append(new_free_range_block) 382 | new_free_range.append(new_range_length) 383 | free_ranges.pop(i) 384 | free_ranges.append(new_free_range) 385 | #sort it again, so this element is put in proper place 386 | free_ranges.sort(key=lambda x: x[1]) 387 | break 388 | 389 | #logfile.write("\nchange list is..") 390 | #logfile.write(changed_list) 391 | print "\nlength of change list is.." 392 | print len(changed_list) 393 | 394 | if(len(changed_list) == len(ranges_requiring_move)): 395 | print "This pool can be shrunk, but blocks will need to be moved." 396 | total_gb = ( float(total_blocks_requiring_copy) * float(chunksize_in_bytes) ) / 1024 / 1024 /1024 397 | print ("Total amount of data requiring move is %.2f GB. Proceed ? Y/N" % (total_gb)) 398 | if sys.version_info[0]==2: 399 | inp = raw_input() 400 | else: # assume python 3 onward 401 | inp = input() 402 | 403 | if(inp.lower() == "y"): 404 | 405 | replace_chunk_numbers_in_xml(chunks_to_shrink_to ,changed_list) 406 | return changed_list 407 | else: 408 | print "Aborting.." 409 | changed_list = {} 410 | return changed_list 411 | else: 412 | print "Cannot fit every range requiring move to free ranges. Cannot shrink pool." 413 | changed_list = {} 414 | return changed_list 415 | 416 | def check_pool_shrink_without_dd(chunks_to_shrink_to): 417 | if(os.path.exists('/tmp/rmap')): 418 | if(os.path.getsize('/tmp/rmap') > 0): 419 | with open('/tmp/rmap') as f: 420 | for line in f: 421 | pass 422 | last_line = line 423 | #print last_line 424 | last_range = last_line.split()[1] 425 | #print last_range 426 | last_block = last_range.split(".")[2] 427 | last_block_long = long(last_block) 428 | if ((last_block_long - 1) < chunks_to_shrink_to): 429 | print "Pool can be shrunk without moving blocks. Last mapped block is %d and new size in chunks is %d\n" % ((last_block_long - 1), chunks_to_shrink_to ) 430 | return 1 431 | else: 432 | print "Last mapped block is %d and new size in chunks is %d\n" % ((last_block_long - 1), chunks_to_shrink_to ) 433 | return 0 434 | 435 | print "no valid /tmp/rmap file found. Perhaps this pool has no data mappings ?" 
436 | return 1 437 | 438 | def restore_xml_and_swap_metadata(pool_to_shrink): 439 | #need to create a new lv as large as the metadata 440 | vg_and_lv = pool_to_shrink.split("/") 441 | vgname = vg_and_lv[0] 442 | lvname = vg_and_lv[1] 443 | #print vgname 444 | #print lvname 445 | #search for the tmeta size in lvs -a 446 | cmd = "lvs -a | grep " + "\"" + " " + vgname + " \" " + "|" + " grep " + "\"" + "\\" + "[" + lvname + "_tmeta]\"" 447 | #print cmd 448 | #os.system(cmd) 449 | result = subprocess.check_output(cmd, shell=True) 450 | tmeta_line = result 451 | #print tmeta_line 452 | size_of_metadata = tmeta_line.split()[-1] 453 | #print size_of_metadata 454 | units = size_of_metadata[-1] 455 | meta_size = size_of_metadata[:-1] 456 | meta_size = meta_size[:meta_size.index('.')] 457 | meta_size_str = meta_size + units 458 | #print meta_size_str 459 | cmd = "lvcreate -n shrink_restore_lv -L" + meta_size_str + " " + vgname + " >/dev/null 2>&1" 460 | #print cmd 461 | #os.system(cmd) 462 | 463 | result = subprocess.call(cmd, shell=True) 464 | if(result != 0): 465 | print ("could not run cmd %s" % (cmd)) 466 | 467 | cmd = "thin_restore -i /tmp/changed.xml -o " + "/dev/" + vgname + "/" + "shrink_restore_lv" 468 | #print cmd 469 | result = subprocess.call(cmd, shell=True) 470 | if(result != 0): 471 | print ("could not run cmd %s" % (cmd)) 472 | 473 | #os.system(cmd) 474 | cmd = "lvconvert --thinpool " + pool_to_shrink + " --poolmetadata " + "/dev/" + vgname + "/shrink_restore_lv -y" 475 | #print cmd 476 | #os.system(cmd) 477 | result = subprocess.call(cmd, shell=True) 478 | if(result != 0): 479 | print ("could not run cmd %s" % (cmd)) 480 | 481 | def change_vg_metadata(pool_to_shrink, chunks_to_shrink_to,nr_chunks,chunksize_in_bytes): 482 | vg_and_lv = pool_to_shrink.split("/") 483 | vgname = vg_and_lv[0] 484 | lvname = vg_and_lv[1] 485 | #print vgname 486 | #print lvname 487 | cmd = "vgcfgbackup -f /tmp/vgmeta_backup " + vgname + " >/dev/null 2>&1" 488 | #print cmd 489 | #os.system(cmd) 490 | result = subprocess.call(cmd, shell=True) 491 | if(result != 0): 492 | print ("could not run cmd %s" % (cmd)) 493 | 494 | with open('/tmp/vgmeta_backup') as f: 495 | new_vgmeta = open('/tmp/changed_vgmeta', 'w') 496 | 497 | remaining = f.readlines() 498 | type(remaining) 499 | search_string = " " + lvname + " {" 500 | #print search_string 501 | #print "***" 502 | extent_size_string = "extent_size = " 503 | extent_size_in_bytes=0 504 | dont_look_any_more = 0 505 | found_search_string = 0 506 | found_logical_volumes = 0 507 | for i in range(0, len(remaining)): 508 | if dont_look_any_more == 0: 509 | if ( remaining[i].find(extent_size_string) != -1 ): 510 | extent_elements = remaining[i].split() 511 | #print extent_elements 512 | 513 | extent_size_figure = extent_elements[-2] 514 | extent_size_units = extent_elements[-1][0] 515 | extent_size = extent_size_figure+extent_size_units 516 | #print "extent size is.. " 517 | #print extent_size 518 | #print "\n and in bytes .. 
" 519 | extent_size_in_bytes = calculate_size_in_bytes(extent_size) 520 | #print extent_size_in_bytes 521 | 522 | if (remaining[i].find("logical_volumes {") != -1): 523 | found_logical_volumes = 1 524 | #print "found the logical volumes" 525 | 526 | if ((" " + remaining[i].lstrip()).find(search_string) != -1): 527 | if(found_logical_volumes == 1): 528 | found_search_string = 1 529 | #print "found the search string" 530 | 531 | if (remaining[i].find("extent_count") != -1): 532 | if(found_search_string == 1): 533 | #print "found the extent count" 534 | num_tabs = remaining[i].count('\t') 535 | #print "number of tabs is " 536 | #print num_tabs 537 | 538 | num_whitespaces = len(remaining[i]) - len(remaining[i].lstrip()) 539 | #print num_whitespaces 540 | elements = remaining[i].split() 541 | new_size = (chunks_to_shrink_to * chunksize_in_bytes) / extent_size_in_bytes 542 | #print "number of extents to shrink to is " 543 | #print new_size 544 | new_string = "extent_count = " + str(new_size) + "\n" 545 | #new_string_len = len(new_string) + num_whitespaces 546 | #new_string_with_trailing_spaces = new_string.rjust(new_string_len) 547 | new_string_with_tabs=new_string 548 | for x in range(0,num_tabs-1): 549 | new_string_with_tabs = "\t" + new_string_with_tabs 550 | new_vgmeta.write(new_string_with_tabs) 551 | dont_look_any_more = 1 552 | continue 553 | 554 | new_vgmeta.write(remaining[i]) 555 | new_vgmeta.close() 556 | 557 | 558 | 559 | def restore_vg_metadata(pool_to_shrink): 560 | vg_and_lv = pool_to_shrink.split("/") 561 | vgname = vg_and_lv[0] 562 | lvname = vg_and_lv[1] 563 | cmd = "vgcfgrestore -f /tmp/changed_vgmeta " + vgname + " --force -y >/dev/null 2>&1" 564 | #os.system(cmd) 565 | result = subprocess.call(cmd, shell=True) 566 | if(result != 0): 567 | print ("could not run cmd %s" % (cmd)) 568 | 569 | 570 | def move_blocks(changed_list,shrink_device,chunksize_string): 571 | progress=0 572 | percent_done = 0 573 | previous_percent = 0 574 | counter = 0 575 | print "Generated new metadata map. Now copying blocks to match the changed metadata." 
576 | 577 | for changed_entry in changed_list: 578 | 579 | progress=progress+1 580 | counter = counter + 1 581 | 582 | old_block = changed_entry 583 | new_block = changed_list[changed_entry][0] 584 | length = changed_list[changed_entry][1] 585 | bs = chunksize_string[0:-1] 586 | units = chunksize_string[-1].upper() 587 | bs_with_units = bs + units 588 | #if(length>1): 589 | # print ("moving %d blocks at %d to %d" % (length , old_block , new_block) ) 590 | #else: 591 | # print ("moving %d block at %d to %d" % (length , old_block , new_block) ) 592 | 593 | cmd = "dd if=/dev/mapper/" + shrink_device + " of=/dev/mapper/" + shrink_device + " bs=" + bs_with_units + " skip=" + str(old_block) + " seek=" + str(new_block) + " count=" + str(length) + " conv=notrunc >/dev/null 2>&1" 594 | #print cmd 595 | #os.system(cmd) 596 | result = subprocess.call(cmd, shell=True) 597 | if(result != 0): 598 | print ("could not run cmd %s" % (cmd)) 599 | 600 | one_tenth = len(changed_list) / 10 601 | if(progress == one_tenth): 602 | print("%d moved of %d elements" % (counter, len(changed_list))) 603 | progress=0 604 | 605 | print "Done with data copying" 606 | 607 | def cleanup(shrink_device, pool_to_shrink): 608 | vg_and_lv = pool_to_shrink.split("/") 609 | vgname = vg_and_lv[0] 610 | cmd = "dmsetup remove " + shrink_device 611 | #os.system(cmd) 612 | result = subprocess.call(cmd, shell=True) 613 | if(result != 0): 614 | print ("could not run cmd %s" % (cmd)) 615 | 616 | cmd = "lvremove " + vgname + "/shrink_restore_lv >/dev/null 2>&1" 617 | result = subprocess.call(cmd, shell=True) 618 | if(result != 0): 619 | print ("could not run cmd %s" % (cmd)) 620 | 621 | #os.system(cmd) 622 | cmd = "lvchange -an " + pool_to_shrink 623 | #os.system(cmd) 624 | result = subprocess.call(cmd, shell=True) 625 | if(result != 0): 626 | print ("could not run cmd %s" % (cmd)) 627 | 628 | 629 | def delete_restore_lv(pool_to_shrink): 630 | vg_and_lv = pool_to_shrink.split("/") 631 | vgname = vg_and_lv[0] 632 | cmd = "lvremove " + vgname + "/shrink_restore_lv" 633 | result = subprocess.call(cmd, shell=True) 634 | if(result != 0): 635 | print ("could not run cmd %s" % (cmd)) 636 | #os.system(cmd) 637 | 638 | #TODO close opened files 639 | 640 | 641 | def main(): 642 | #logfile = open("/tmp/shrink_logs", "w") 643 | #logfile.write("starting logs") 644 | ap = argparse.ArgumentParser() 645 | ap.add_argument("-L", "--size", required=True, help="size to shrink to") 646 | ap.add_argument("-t", "--thinpool", required=True, help="vgname/poolname") 647 | args = vars(ap.parse_args()) 648 | 649 | size_in_chunks=0L 650 | pool_to_shrink = args['thinpool'] 651 | 652 | #delete_restore_lv(pool_to_shrink) 653 | activate_pool(pool_to_shrink) 654 | total_mapped_blocks = get_total_mapped_blocks(pool_to_shrink) 655 | 656 | shrink_device = create_shrink_device(pool_to_shrink) 657 | 658 | chunksz_string = get_chunksize(pool_to_shrink) 659 | chunksize_in_bytes = calculate_size_in_bytes(chunksz_string) 660 | 661 | deactivate_pool(pool_to_shrink) 662 | activate_metadata_readonly(pool_to_shrink) 663 | thin_dump_metadata(pool_to_shrink) 664 | 665 | nr_chunks = get_nr_chunks() 666 | #print nr_chunks 667 | 668 | thin_rmap_metadata(pool_to_shrink, nr_chunks) 669 | deactivate_metadata(pool_to_shrink) 670 | 671 | size_to_shrink = args['size'] 672 | size_to_shrink_to_in_bytes = 0L 673 | size_to_shrink_to_in_bytes = calculate_size_in_bytes(size_to_shrink) 674 | #print size_to_shrink_to_in_bytes 675 | chunks_to_shrink_to = size_to_shrink_to_in_bytes/chunksize_in_bytes 676 | 
print "Need to shrink pool to number of chunks - " + str(chunks_to_shrink_to) 677 | 678 | if(chunks_to_shrink_to >= int(nr_chunks)): 679 | print "This thin pool cannot be shrunk. The pool is already smaller than the size provided." 680 | cleanup(shrink_device,pool_to_shrink) 681 | exit() 682 | 683 | if (total_mapped_blocks >= chunks_to_shrink_to): 684 | print "This thin pool cannot be shrunk. The mapped chunks are more than the lower size provided. Discarding allocated blocks from the pool may help." 685 | cleanup(shrink_device,pool_to_shrink) 686 | exit() 687 | 688 | if( check_pool_shrink_without_dd(chunks_to_shrink_to) == 1): 689 | change_xml(chunks_to_shrink_to, chunksize_in_bytes) 690 | restore_xml_and_swap_metadata(pool_to_shrink) 691 | change_vg_metadata(pool_to_shrink, chunks_to_shrink_to,nr_chunks,chunksize_in_bytes) 692 | restore_vg_metadata(pool_to_shrink) 693 | cleanup(shrink_device,pool_to_shrink) 694 | print("\nThis pool has been shrunk to the specified size of %s" % (size_to_shrink)) 695 | 696 | else: 697 | 698 | changed_list = change_xml(chunks_to_shrink_to, chunksize_in_bytes, 1) 699 | if(len(changed_list) > 0): 700 | move_blocks(changed_list,shrink_device,chunksz_string) 701 | restore_xml_and_swap_metadata(pool_to_shrink) 702 | change_vg_metadata(pool_to_shrink, chunks_to_shrink_to,nr_chunks,chunksize_in_bytes) 703 | restore_vg_metadata(pool_to_shrink) 704 | print("\nThis pool has been shrunk to the specified size of %s" % (size_to_shrink)) 705 | cleanup(shrink_device,pool_to_shrink) 706 | 707 | if __name__=="__main__": 708 | main() 709 | -------------------------------------------------------------------------------- /thin_shrink_splitranges.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import argparse 3 | import os 4 | import sys 5 | import time 6 | import subprocess 7 | 8 | 9 | def calculate_size_in_bytes(size): 10 | units = size[-1] 11 | if (units == 'M') or (units == "m"): 12 | size_without_units = size[:-1] 13 | size_in_bytes = long(size_without_units) * 1024 * 1024 14 | return long(size_in_bytes) 15 | 16 | if (units == 'G') or (units == "g"): 17 | size_without_units = size[:-1] 18 | size_in_bytes = long(size_without_units) * 1024 * 1024 * 1024 19 | return long(size_in_bytes) 20 | 21 | if (units == 'T') or (units == "t"): 22 | size_without_units = size[:-1] 23 | size_in_bytes = long(size_without_units) * 1024 * 1024 * 1024 * 1024 24 | return long(size_in_bytes) 25 | 26 | if units == 'k': 27 | size_without_units = size[:-1] 28 | size_in_bytes = long(size_without_units) * 1024 29 | return long(size_in_bytes) 30 | 31 | def activate_pool(pool_name): 32 | #print pool_name 33 | cmd_to_run = "lvchange -ay " + pool_name 34 | #os.system(cmd_to_run) 35 | result = subprocess.call(cmd_to_run, shell=True) 36 | if(result != 0): 37 | print ("could not run cmd %s" % (cmd_to_run)) 38 | 39 | def deactivate_pool(pool_name): 40 | #print pool_name 41 | cmd_to_run = "lvchange -an " + pool_name 42 | #print cmd_to_run 43 | #os.system(cmd_to_run) 44 | result = subprocess.call(cmd_to_run, shell=True) 45 | if(result != 0): 46 | print ("could not run cmd %s" % (cmd_to_run)) 47 | 48 | def activate_metadata_readonly(pool_name): 49 | #print pool_name 50 | cmd_to_run = "lvchange -ay " + pool_name + "_tmeta -y >/dev/null 2>&1" 51 | #print cmd_to_run 52 | #os.system(cmd_to_run) 53 | result = subprocess.call(cmd_to_run, shell=True) 54 | if(result != 0): 55 | print ("could not run cmd %s" % (cmd_to_run)) 56 | 57 | 58 | def 
deactivate_metadata(pool_name): 59 | #print pool_name 60 | cmd_to_run = "lvchange -an " + pool_name + "_tmeta " 61 | #print cmd_to_run 62 | #os.system(cmd_to_run) 63 | result = subprocess.call(cmd_to_run, shell=True) 64 | if(result != 0): 65 | print ("could not run cmd %s" % (cmd_to_run)) 66 | 67 | def thin_dump_metadata(pool_name): 68 | cmd_to_run = "thin_dump /dev/" + pool_name + "_tmeta" + " > /tmp/dump" 69 | #print cmd_to_run 70 | #os.system(cmd_to_run) 71 | result = subprocess.call(cmd_to_run, shell=True) 72 | if(result != 0): 73 | print ("could not run cmd %s" % (cmd_to_run)) 74 | 75 | def thin_rmap_metadata(pool_name, nr_chunks_str): 76 | cmd_to_run = "thin_rmap --region 0.." + nr_chunks_str + " /dev/" + pool_name + "_tmeta" + " > /tmp/rmap" 77 | #print cmd_to_run 78 | #os.system(cmd_to_run) 79 | result = subprocess.call(cmd_to_run, shell=True) 80 | if(result != 0): 81 | print ("could not run cmd %s" % (cmd_to_run)) 82 | 83 | def get_nr_chunks(): 84 | with open('/tmp/dump') as f: 85 | first_line = f.readline() 86 | #print first_line 87 | nr_blocks_string = first_line.rpartition("=")[-1] 88 | #print nr_blocks_string 89 | nr_blocks_str = nr_blocks_string.rstrip()[1:-2] 90 | return nr_blocks_str 91 | 92 | 93 | def create_shrink_device(pool_name): 94 | split_vg_and_pool = pool_name.split('/') 95 | vgname = split_vg_and_pool[0] 96 | poolname = split_vg_and_pool[1] 97 | #print vgname 98 | #print poolname 99 | search_in_dmsetup = vgname + "-" + poolname + "_tdata" 100 | cmd = "dmsetup table | grep " + search_in_dmsetup 101 | #print cmd 102 | #os.system(cmd) 103 | result = subprocess.check_output(cmd, shell=True) 104 | 105 | #with open('/tmp/dmsetup_table_grepped', 'r') as myfile: 106 | #print dmsetup_lines 107 | #myfile.close() 108 | dmsetup_lines = result.splitlines() 109 | dmsetup_cmd = "echo -e " 110 | for line_iter in range(0, len(dmsetup_lines)): 111 | split_dmsetup_line = dmsetup_lines[line_iter].split(':' , 1) 112 | #print split_dmsetup_line[1] 113 | dmsetup_table_entry_of_tdata = split_dmsetup_line[1].lstrip() 114 | #print dmsetup_table_entry_of_tdata 115 | if (line_iter > 0): 116 | dmsetup_cmd = dmsetup_cmd + "\\" + "\\" + "n" 117 | dmsetup_cmd = dmsetup_cmd + "\'" + dmsetup_table_entry_of_tdata.rstrip() + "\'" 118 | 119 | dmsetup_cmd = dmsetup_cmd + " |" + " dmsetup create shrink_" + poolname.rstrip() 120 | #print "running this command.. " 121 | #print dmsetup_cmd 122 | #os.system(dmsetup_cmd) 123 | result = subprocess.call(dmsetup_cmd, shell=True) 124 | if(result != 0): 125 | print ("could not run cmd %s" % (dmsetup_cmd)) 126 | 127 | name_of_device = "shrink_" + poolname.rstrip() 128 | #print "also running dmsetup table" 129 | #cmd2 = "dmsetup table" 130 | #os.system(cmd2) 131 | return name_of_device 132 | 133 | 134 | def get_chunksize(pool_name): 135 | cmd = "lvs -o +chunksize " + pool_name + " | grep -v Chunk" 136 | #print "running this cmd now... 
\n" 137 | #print cmd 138 | #os.system(cmd) 139 | 140 | result = subprocess.check_output(cmd, shell=True) 141 | 142 | #with open('/tmp/chunksize', 'r') as myfile: 143 | chunk_line = result 144 | #print chunk_line 145 | chunksz_string = chunk_line.lstrip().rpartition(" ")[-1].rstrip() 146 | units = chunksz_string[-1] 147 | chunksz = chunksz_string[:-1] 148 | chunksz = chunksz[:chunksz.index('.')] 149 | # now that we removed the decimal part, add back the units 150 | chunksz_string = chunksz + units 151 | return chunksz_string 152 | 153 | 154 | def get_total_mapped_blocks(pool_name): 155 | split_vg_and_pool = pool_name.split('/') 156 | vgname = split_vg_and_pool[0] 157 | poolname = split_vg_and_pool[1] 158 | search_in_dmsetup_silently = vgname + "-" + poolname + "-tpool >/dev/null 2>&1" 159 | search_in_dmsetup = vgname + "-" + poolname + "-tpool" 160 | cmd = "dmsetup status " + search_in_dmsetup_silently 161 | #print cmd 162 | #os.system(cmd) 163 | 164 | #first test if command will run and not throw CalledProcessError exception 165 | 166 | result = subprocess.call(cmd, shell=True) 167 | 168 | if(result != 0): #no vgname-poolname-tpool in dmsetup status 169 | print "Warning: No tpool device found, perhaps pool has no thins?" 170 | search_in_dmsetup = vgname + "-" + poolname 171 | search_in_dmsetup_quietly = vgname + "-" + poolname + " >/dev/null 2>&1" 172 | cmd = "dmsetup status " + search_in_dmsetup_quietly 173 | #print cmd 174 | #os.system(cmd) 175 | result = subprocess.call(cmd, shell=True) 176 | if(result==0): 177 | #print "found pool" 178 | cmd = "dmsetup status " + search_in_dmsetup 179 | result = subprocess.check_output(cmd, shell=True) 180 | else: 181 | print "did not find pool in dmsetup status" 182 | exit() 183 | 184 | dmsetup_line = result.splitlines() 185 | if(len(dmsetup_line)>1): #this should never happen anyway 186 | print "More than 1 device found in dmsetup status" 187 | exit() 188 | 189 | # eg: RHELCSB-test_pool: 0 20971520 thin-pool 0 4356/3932160 0/163840 - rw no_discard_passdown queue_if_no_space - 1024 190 | split_dmsetup_line = dmsetup_line[0].split(' ') 191 | dmsetup_status_entry = split_dmsetup_line[5].lstrip() 192 | used_blocks = dmsetup_status_entry.split('/')[0] 193 | 194 | 195 | else: # there is tpool 196 | 197 | cmd = "dmsetup status " + search_in_dmsetup 198 | result = subprocess.check_output(cmd, shell=True) 199 | 200 | dmsetup_line = result.splitlines() 201 | if(len(dmsetup_line)>1): #this should never happen anyway 202 | print "More than 1 device found in dmsetup status" 203 | exit() 204 | split_dmsetup_line = dmsetup_line[0].split(' ') 205 | dmsetup_status_entry = split_dmsetup_line[5].lstrip() 206 | used_blocks = dmsetup_status_entry.split('/')[0] 207 | 208 | #print "used blocks are.." 
209 | #print used_blocks 210 | return long(used_blocks) 211 | 212 | def replace_chunk_numbers_in_xml(chunks_to_shrink_to, all_changes): 213 | search_snapshots = 0 214 | count = 0 215 | #logfile.write("length of list of changes required is..\n") 216 | #logfile.write(len(changed_list)) 217 | new_xml = open('/tmp/changed.xml', 'w') 218 | split_ranges_changed_list = all_changes[0] 219 | 220 | changed_list = all_changes[1] 221 | wroteline = 0 222 | 223 | with open('/tmp/dump') as f: 224 | for line in f: 225 | if (count == 0): # only do this for the first line, change nr_chunks 226 | count=1 227 | first_line = line 228 | first_line_fields = first_line.split() 229 | #print first_line_fields 230 | nr_blocks_field = first_line_fields[7] 231 | #print nr_blocks_field 232 | new_line_first_part="" 233 | for element in first_line_fields[0:-1]: 234 | new_line_first_part = new_line_first_part + " " + element 235 | #print new_line_first_part 236 | complete_first_line = new_line_first_part + " " + "nr_data_blocks=" + "\"" + str(chunks_to_shrink_to) + "\"" + ">" + "\n" 237 | #print complete_first_line 238 | complete_first_line = complete_first_line.lstrip() 239 | new_xml.write(complete_first_line) 240 | 241 | else: 242 | data_found = line.find("data_") 243 | 244 | if (data_found > 0): 245 | split_line = line[data_found:] 246 | last_quotes = split_line.index(" ") 247 | blocknum = split_line[12:last_quotes-1] 248 | int_block = int(blocknum) 249 | 250 | if(split_ranges_changed_list.get(int_block,0) == 0): # is this block number not a key in split ranges ? 251 | if(changed_list.get(int_block,0) == 0): #is this block number not a key in changed list ? 252 | 253 | if(search_snapshots==1): 254 | # it may also be a snapshot mapping pointing to inside a range that we may be moving! 255 | if(len(changed_list)>0): 256 | #print changed_list 257 | 258 | candidates = [] 259 | for keys in changed_list: 260 | #print "integer value of keys is" 261 | #print int(keys) 262 | #print "and in_block is" 263 | #print int_block 264 | if( int(keys) < int_block): 265 | candidates.append(keys) 266 | 267 | if(len(candidates)>0): 268 | closest_earlier = max(candidates) 269 | closest_smaller_key = changed_list[closest_earlier] 270 | #print closest_smaller_key 271 | 272 | #return d[str(max(key for key in map(int, d.keys()) if key <= k))] 273 | #return sample[str(max(x for x in sample.keys() if int(x) < int(key)))] 274 | 275 | closest_range = changed_list[closest_smaller_key] 276 | 277 | # if this block lies inside the range we are moving to another range 278 | if( (int_block > closest_range[0]) and ( int_block < ( int(closest_range[0]) + int(closest_range[1]) ) )): 279 | #if(int_block < (int(closest_range[0]) + int(closest_range[1])): 280 | # yes this is a snapshot mapping pointing to inside a range we are moving 281 | first_part_string = line[0:data_found+12] 282 | last_part_string = split_line[last_quotes+1:] 283 | changed_snap_blocknum = int(closest_range[0]) + (int_block - int(closest_smaller_key)) 284 | new_string = first_part_string + str(changed_snap_blocknum) + "\" " + last_part_string 285 | new_xml.write(line) 286 | wroteline = 1 287 | 288 | # it may also be a snapshot mapping pointing to inside a SPLIT range we are moving !! 
289 | if( (len(split_ranges_changed_list)>0) and (wroteline == 0)): 290 | candidates = [] 291 | for keys in split_ranges_changed_list: 292 | if( int(keys) < int_block): 293 | candidates.append(keys) 294 | if(len(candidates)>0): 295 | closest_smaller_key = split_ranges_changed_list[max(candidates)] 296 | closest_range = split_ranges_changed_list[closest_smaller_key] 297 | if((int_block > closest_range[0]) and (int_block < (int(closest_range[0]) + int(closest_range[1])))): 298 | 299 | # yes this is a snapshot mapping pointing to inside a split range we are moving 300 | first_part_string = line[0:data_found+12] 301 | last_part_string = split_line[last_quotes+1:] 302 | changed_snap_blocknum = int(closest_range[0]) + (int_block - int(closest_smaller_key)) 303 | new_string = first_part_string + str(changed_snap_blocknum) + "\" " + last_part_string 304 | new_xml.write(line) 305 | wroteline = 1 306 | 307 | if(wroteline == 0): 308 | # write the unmodified line as it is 309 | new_xml.write(line) 310 | 311 | else: 312 | # dont bother with snapshots, assume none exist 313 | # write the unmodified line as it is 314 | new_xml.write(line) 315 | 316 | else: # its in changed list 317 | to_change = changed_list[int_block] 318 | first_part_string = line[0:data_found+12] 319 | last_part_string = split_line[last_quotes+1:] 320 | new_string = first_part_string + str(to_change[0]) + "\" " + last_part_string 321 | #print new_string 322 | new_xml.write(new_string) 323 | 324 | else: #its in the split ranges 325 | # change line of xml for first and generates lines for all lookaheads 326 | 327 | # 328 | 329 | 330 | to_change = split_ranges_changed_list[int_block] 331 | first_part_string = line[0:data_found+12] 332 | last_part_string = split_line[last_quotes+1:] 333 | # last_part_string is now looking like 334 | # length="26" time="0"/> 335 | 336 | #length needs reducing 337 | split_last_line = last_part_string.split(" ") 338 | 339 | new_string = first_part_string + str(to_change[0]) + "\" " + "length=\"" + str(to_change[1]) + "\" " + split_last_line[1] 340 | #print new_string 341 | new_xml.write(new_string) 342 | 343 | # now do the lookaheads 344 | new_block = int_block 345 | origin_block =0 346 | split_first_part = first_part_string.split("=") 347 | 348 | origin_string = split_first_part[1].split(" ")[0] # "63962" 349 | origin = origin_string[1:-1] 350 | int_origin = int(origin) 351 | #print int_origin 352 | time_string = line.split(" ")[-1] 353 | new_origin = int_origin 354 | spaces_before_string = line[0:line.index('<')] 355 | #print to_change 356 | while to_change[2]==1: #while lookahead 357 | new_block = new_block + to_change[1] 358 | new_origin = new_origin + int(to_change[1]) 359 | to_change = split_ranges_changed_list[new_block] 360 | 361 | # 362 | # 363 | 364 | if(to_change[1]>1): #range mapping 365 | new_string = spaces_before_string + "" + "\n" 403 | #print complete_first_line 404 | complete_first_line = complete_first_line.lstrip() 405 | new_xml = open('/tmp/changed.xml', 'w') 406 | new_xml.write(complete_first_line) 407 | remaining = f.readlines() 408 | type(remaining) 409 | for i in range(0, len(remaining)): 410 | new_xml.write(remaining[i]) 411 | new_xml.close() 412 | else: 413 | # we need to dd blocks, change the numbers in the xml, etc 414 | print "Checking if blocks can be copied" 415 | allocated_ranges = [] 416 | free_ranges = [] 417 | earlier_element=[] 418 | ranges_requiring_move = [] 419 | 420 | #changed_list = [] # [(old, new, length) , (old, new, length), ... 
] 421 | 422 | split_ranges_changed_list= {} # {old:[new,len,lookahead} note that lookahead is bool i.e 1 or 0 423 | changed_list = {} # {old: [new,len] , old: [new,len], ... } 424 | total_blocks_requiring_copy = 0 425 | 426 | with open('/tmp/rmap') as f: 427 | entire_file = f.readlines() 428 | type(entire_file) 429 | for i in range(0,len(entire_file)): 430 | mapping = entire_file[i].split()[1] 431 | split_mapping = mapping.split(".") 432 | start_block = split_mapping[0] 433 | end_block = split_mapping[-1] 434 | length_of_mapping = int(end_block) - int(start_block) 435 | range_to_add = [] 436 | range_to_add.append(int(start_block)) 437 | range_to_add.append(length_of_mapping) 438 | allocated_ranges.append(range_to_add) 439 | if(int(start_block) + length_of_mapping > chunks_to_shrink_to): 440 | ranges_requiring_move.append(range_to_add) 441 | 442 | if (i == 0): # first iteration 443 | earlier_element = range_to_add 444 | 445 | else: 446 | 447 | #iteration 1 onwards, start creating free_ranges list also 448 | 449 | if (int(start_block) > (earlier_element[0] + earlier_element[1]) ): 450 | if(int(start_block) < chunks_to_shrink_to): #if starting block is within the new size 451 | # we have a free range, so add it to the free_ranges list 452 | free_range_element = [] 453 | free_range_element.append(earlier_element[0] + earlier_element[1]) #start of free range 454 | free_range_element.append(int(start_block) - (earlier_element[0] + earlier_element[1])) #length of free range 455 | 456 | if((free_range_element[0] + free_range_element[1]) < chunks_to_shrink_to): #if entire free range is within new size 457 | free_ranges.append(free_range_element) 458 | #earlier_element = range_to_add 459 | else: 460 | free_range_element.pop(1) #get rid of older length, needs trimming 461 | free_range_element.append(chunks_to_shrink_to - (earlier_element[0])) #length of free range that will fit within new size 462 | free_ranges.append(free_range_element) 463 | #earlier_element = range_to_add 464 | #else: 465 | #earlier_element = range_to_add 466 | earlier_element = range_to_add 467 | 468 | #print "\nallocated ranges are.." 
469 | #print allocated_ranges 470 | 471 | ########################################## 472 | # used for testing split ranges 473 | #free_ranges = [[1,1],[200,300],[700,400],[1200,500]] 474 | #ranges_requiring_move = [[3000,600], [5000,550]] 475 | ########################################## 476 | 477 | free_ranges.sort(key=lambda x: x[1]) 478 | #print "\nsorted free ranges are" 479 | #print free_ranges 480 | 481 | ranges_requiring_move.sort(key=lambda x: x[1], reverse=True) 482 | #print "\nranges requiring move are" 483 | #print ranges_requiring_move 484 | 485 | #print "length of list of free ranges is..\n" 486 | #print len(free_ranges) 487 | 488 | 489 | total_needing_copy = sum(need_copy[1] for need_copy in ranges_requiring_move) 490 | total_free = sum(available_free[1] for available_free in free_ranges) 491 | #print "total needing copy is" 492 | #print total_needing_copy 493 | #print "total free is" 494 | #print total_free 495 | 496 | if(total_needing_copy > total_free): 497 | print "this pool cannot be shrunk because not enough free blocks available" 498 | print "total needing copy is" 499 | print total_needing_copy 500 | print "total free is" 501 | print total_free 502 | changed_list = {} 503 | return changed_list 504 | 505 | 506 | # at this point, we know that we can shrink the pool 507 | 508 | #print "free ranges are" 509 | #print free_ranges 510 | #print "ranges requiring move are" 511 | #print ranges_requiring_move 512 | 513 | for each_range in ranges_requiring_move: 514 | len_requiring_move = each_range[1] 515 | #print len_requiring_move 516 | could_split=0 517 | this_range_has_fit=0 518 | # find out if it needs splitting 519 | 520 | if(len_requiring_move > free_ranges[-1][1]): 521 | print "This one will need splitting. the range to move is" 522 | print each_range 523 | print "the free ranges " 524 | print free_ranges 525 | #old_block = each_range[0] 526 | reversed_free_ranges = reversed(free_ranges) 527 | moved = 0 528 | split_range = [] 529 | for iterate_free_backwards in reversed_free_ranges: 530 | if(iterate_free_backwards[1] < len_requiring_move): #if we must use this entire range 531 | split_range.append(iterate_free_backwards[0]) 532 | split_range.append(iterate_free_backwards[1]) 533 | split_range.append(1) 534 | 535 | index = each_range[0] + moved 536 | split_ranges_changed_list[index]=split_range 537 | 538 | # split_ranges_changed_list looks like this 539 | # [old: new,len,lookahead] , lookahead is 0 or 1 and specifies to look ahead to generate xml for next 540 | # element in split_ranges_changed_list too because you wont find that one in the xml in blocks to change since its 541 | # from the middle of a range that you split. 
542 | 543 | total_blocks_requiring_copy = total_blocks_requiring_copy + split_range[1] 544 | moved = moved + split_range[1] 545 | 546 | free_ranges.pop() 547 | len_requiring_move = len_requiring_move - split_range[1] 548 | 549 | else: 550 | this_range_has_fit = 1 551 | # partial free range to accomodate last remaining part of large range 552 | split_range = [] 553 | split_range.append(iterate_free_backwards[0]) 554 | split_range.append(len_requiring_move) 555 | split_range.append(0) 556 | index = each_range[0] + moved 557 | split_ranges_changed_list[index]=split_range 558 | 559 | #adjust the free ranges 560 | #remove first from free ranges 561 | last_one = free_ranges.pop() 562 | #add back with changed free map unless it was completely consumed 563 | 564 | if(len_requiring_move < last_one[1]): 565 | temp_free_range = [] 566 | blknum = last_one[0] 567 | new_blknum = blknum + len_requiring_move 568 | print new_blknum 569 | 570 | temp_free_range.append(new_blknum) 571 | len_changed = last_one[1] 572 | changed_len = len_changed - len_requiring_move 573 | #print changed_len 574 | temp_free_range.append(changed_len) 575 | print "temp free range is is." 576 | print temp_free_range 577 | print free_ranges 578 | free_ranges.append(temp_free_range) 579 | print free_ranges 580 | #sort it again, so this element is put in proper place 581 | free_ranges.sort(key=lambda x: x[1]) 582 | total_blocks_requiring_copy = total_blocks_requiring_copy + len_requiring_move 583 | print "done splitting this range" 584 | print "free ranges are now" 585 | print free_ranges 586 | print "and split ranges changed list is " 587 | print split_ranges_changed_list 588 | break 589 | 590 | else: 591 | #find closest fitting free range I can move this to 592 | for i in range(len(free_ranges)): 593 | if free_ranges[i][1] > len_requiring_move: 594 | #found free range to move this range to 595 | #print "range mapping of size" 596 | #remove that entry from the free ranges list, we will add it back with the reduced length later 597 | changed_element = [] 598 | #changed_element.append(each_range[0]) 599 | changed_element.append(free_ranges[i][0]) 600 | changed_element.append(len_requiring_move) 601 | total_blocks_requiring_copy = total_blocks_requiring_copy + len_requiring_move 602 | changed_list[each_range[0]] = changed_element 603 | #changed_list.append(changed_element) 604 | 605 | if((free_ranges[i][1] - len_requiring_move) > 0): 606 | new_free_range = [] 607 | new_free_range_block = free_ranges[i][0]+len_requiring_move 608 | new_range_length = free_ranges[i][1] - len_requiring_move 609 | new_free_range.append(new_free_range_block) 610 | new_free_range.append(new_range_length) 611 | free_ranges.pop(i) 612 | free_ranges.append(new_free_range) 613 | #sort it again, so this element is put in proper place 614 | free_ranges.sort(key=lambda x: x[1]) 615 | break 616 | 617 | print "\nlength of change list is.." 618 | print len(changed_list) 619 | print "\nlength of split range list is.." 620 | print len(split_ranges_changed_list) 621 | 622 | #make a list that stores both the lists, split_ranges_changed_list and changed_list 623 | all_changes = [] 624 | all_changes.append(split_ranges_changed_list) 625 | all_changes.append(changed_list) 626 | 627 | #if(len(changed_list) == len(ranges_requiring_move)): 628 | print "This pool can be shrunk, but blocks will need to be moved." 629 | total_gb = ( float(total_blocks_requiring_copy) * float(chunksize_in_bytes) ) / 1024 / 1024 /1024 630 | print ("Total amount of data requiring move is %.2f GB. 
Proceed ? Y/N" % (total_gb)) 631 | if sys.version_info[0]==2: 632 | inp = raw_input() 633 | else: # assume python 3 onward 634 | inp = input() 635 | 636 | if(inp.lower() == "y"): 637 | 638 | replace_chunk_numbers_in_xml(chunks_to_shrink_to ,all_changes) 639 | return all_changes 640 | else: 641 | print "Aborting.." 642 | zero_list = [] 643 | return zero_list 644 | 645 | def check_pool_shrink_without_dd(chunks_to_shrink_to): 646 | if(os.path.exists('/tmp/rmap')): 647 | if(os.path.getsize('/tmp/rmap') > 0): 648 | with open('/tmp/rmap') as f: 649 | for line in f: 650 | pass 651 | last_line = line 652 | #print last_line 653 | last_range = last_line.split()[1] 654 | #print last_range 655 | last_block = last_range.split(".")[2] 656 | last_block_long = long(last_block) 657 | if ((last_block_long - 1) < chunks_to_shrink_to): 658 | print "Pool can be shrunk without moving blocks. Last mapped block is %d and new size in chunks is %d\n" % ((last_block_long - 1), chunks_to_shrink_to ) 659 | return 1 660 | else: 661 | print "Last mapped block is %d and new size in chunks is %d\n" % ((last_block_long - 1), chunks_to_shrink_to ) 662 | return 0 663 | 664 | print "no valid /tmp/rmap file found. Perhaps this pool has no data mappings ?" 665 | return 1 666 | 667 | def restore_xml_and_swap_metadata(pool_to_shrink): 668 | #need to create a new lv as large as the metadata 669 | vg_and_lv = pool_to_shrink.split("/") 670 | vgname = vg_and_lv[0] 671 | lvname = vg_and_lv[1] 672 | #print vgname 673 | #print lvname 674 | #search for the tmeta size in lvs -a 675 | cmd = "lvs -a | grep " + "\"" + " " + vgname + " \" " + "|" + " grep " + "\"" + "\\" + "[" + lvname + "_tmeta]\"" 676 | #print cmd 677 | #os.system(cmd) 678 | result = subprocess.check_output(cmd, shell=True) 679 | tmeta_line = result 680 | #print tmeta_line 681 | size_of_metadata = tmeta_line.split()[-1] 682 | #print size_of_metadata 683 | units = size_of_metadata[-1] 684 | meta_size = size_of_metadata[:-1] 685 | meta_size = meta_size[:meta_size.index('.')] 686 | meta_size_str = meta_size + units 687 | #print meta_size_str 688 | cmd = "lvcreate -n shrink_restore_lv -L" + meta_size_str + " " + vgname + " >/dev/null 2>&1" 689 | print cmd 690 | #os.system(cmd) 691 | 692 | result = subprocess.call(cmd, shell=True) 693 | if(result != 0): 694 | print ("could not run cmd %s" % (cmd)) 695 | else: 696 | print "ran the command" 697 | 698 | cmd = "thin_restore -i /tmp/changed.xml -o " + "/dev/" + vgname + "/" + "shrink_restore_lv" 699 | print cmd 700 | result = subprocess.call(cmd, shell=True) 701 | if(result != 0): 702 | print ("could not run cmd %s" % (cmd)) 703 | 704 | #os.system(cmd) 705 | cmd = "lvconvert --thinpool " + pool_to_shrink + " --poolmetadata " + "/dev/" + vgname + "/shrink_restore_lv -y" 706 | #print cmd 707 | #os.system(cmd) 708 | result = subprocess.call(cmd, shell=True) 709 | if(result != 0): 710 | print ("could not run cmd %s" % (cmd)) 711 | 712 | def change_vg_metadata(pool_to_shrink, chunks_to_shrink_to,nr_chunks,chunksize_in_bytes): 713 | vg_and_lv = pool_to_shrink.split("/") 714 | vgname = vg_and_lv[0] 715 | lvname = vg_and_lv[1] 716 | #print vgname 717 | #print lvname 718 | cmd = "vgcfgbackup -f /tmp/vgmeta_backup " + vgname + " >/dev/null 2>&1" 719 | #print cmd 720 | #os.system(cmd) 721 | result = subprocess.call(cmd, shell=True) 722 | if(result != 0): 723 | print ("could not run cmd %s" % (cmd)) 724 | 725 | with open('/tmp/vgmeta_backup') as f: 726 | new_vgmeta = open('/tmp/changed_vgmeta', 'w') 727 | 728 | remaining = f.readlines() 729 | 
730 |     search_string = " " + lvname + " {"
731 |     #print search_string
732 |     #print "***"
733 |     extent_size_string = "extent_size = "
734 |     extent_size_in_bytes=0
735 |     dont_look_any_more = 0
736 |     found_search_string = 0
737 |     found_logical_volumes = 0
738 |     for i in range(0, len(remaining)):
739 |         if dont_look_any_more == 0:
740 |             if ( remaining[i].find(extent_size_string) != -1 ):
741 |                 extent_elements = remaining[i].split()
742 |                 #print extent_elements
743 | 
744 |                 extent_size_figure = extent_elements[-2]
745 |                 extent_size_units = extent_elements[-1][0]
746 |                 extent_size = extent_size_figure+extent_size_units
747 |                 #print "extent size is.. "
748 |                 #print extent_size
749 |                 #print "\n and in bytes .. "
750 |                 extent_size_in_bytes = calculate_size_in_bytes(extent_size)
751 |                 #print extent_size_in_bytes
752 | 
753 |             if (remaining[i].find("logical_volumes {") != -1):
754 |                 found_logical_volumes = 1
755 |                 #print "found the logical volumes"
756 | 
757 |             if ((" " + remaining[i].lstrip()).find(search_string) != -1):
758 |                 if(found_logical_volumes == 1):
759 |                     found_search_string = 1
760 |                     #print "found the search string"
761 | 
762 |             if (remaining[i].find("extent_count") != -1):
763 |                 if(found_search_string == 1):
764 |                     #print "found the extent count"
765 |                     num_tabs = remaining[i].count('\t')
766 |                     #print "number of tabs is "
767 |                     #print num_tabs
768 | 
769 |                     num_whitespaces = len(remaining[i]) - len(remaining[i].lstrip())
770 |                     #print num_whitespaces
771 |                     elements = remaining[i].split()
772 |                     new_size = (chunks_to_shrink_to * chunksize_in_bytes) / extent_size_in_bytes
773 |                     #print "number of extents to shrink to is "
774 |                     #print new_size
775 |                     new_string = "extent_count = " + str(new_size) + "\n"
776 |                     #new_string_len = len(new_string) + num_whitespaces
777 |                     #new_string_with_trailing_spaces = new_string.rjust(new_string_len)
778 |                     new_string_with_tabs=new_string
779 |                     for x in range(0,num_tabs-1): # count('\t') counts every tab on the line, not just the leading indent, hence the -1
780 |                         new_string_with_tabs = "\t" + new_string_with_tabs
781 |                     new_vgmeta.write(new_string_with_tabs)
782 |                     dont_look_any_more = 1
783 |                     continue
784 | 
785 |         new_vgmeta.write(remaining[i])
786 |     new_vgmeta.close()
787 | 
788 | 
789 | 
790 | def restore_vg_metadata(pool_to_shrink):
791 |     vg_and_lv = pool_to_shrink.split("/")
792 |     vgname = vg_and_lv[0]
793 |     lvname = vg_and_lv[1]
794 |     cmd = "vgcfgrestore -f /tmp/changed_vgmeta " + vgname + " --force -y >/dev/null 2>&1"
795 |     #os.system(cmd)
796 |     result = subprocess.call(cmd, shell=True)
797 |     if(result != 0):
798 |         print ("could not run cmd %s" % (cmd))
799 | 
800 | 
801 | def move_blocks(combined_changed_list,shrink_device,chunksize_string):
802 |     progress=0
803 |     percent_done = 0
804 |     previous_percent = 0
805 |     counter = 0
806 |     print "Generated new metadata map. Now copying blocks to match the changed metadata."
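    # Each change-list entry maps an old chunk number to [new chunk number, length in chunks].
    # The copy itself is a plain dd over the pool's data device onto itself: bs is one pool
    # chunk, skip/seek are the source/destination chunk numbers, count is the length, and
    # conv=notrunc tells dd not to truncate the output. For example (illustrative values,
    # taken from the sample run in unit_tests), the entry {35664: [546, 3808]} with a 64k
    # chunk size becomes:
    #   dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35664 seek=546 count=3808 conv=notrunc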
807 | split_ranges_list = combined_changed_list[0] 808 | changed_list = combined_changed_list[1] 809 | 810 | for changed_entry in split_ranges_list: # move split ranges first 811 | progress=progress+1 812 | counter = counter + 1 813 | 814 | old_block = changed_entry 815 | new_block = split_ranges_list[changed_entry][0] 816 | length = split_ranges_list[changed_entry][1] 817 | bs = chunksize_string[0:-1] 818 | units = chunksize_string[-1].upper() 819 | bs_with_units = bs + units 820 | #if(length>1): 821 | # print ("moving %d blocks at %d to %d" % (length , old_block , new_block) ) 822 | #else: 823 | # print ("moving %d block at %d to %d" % (length , old_block , new_block) ) 824 | 825 | cmd = "dd if=/dev/mapper/" + shrink_device + " of=/dev/mapper/" + shrink_device + " bs=" + bs_with_units + " skip=" + str(old_block) + " seek=" + str(new_block) + " count=" + str(length) + " conv=notrunc >/dev/null 2>&1" 826 | #print cmd 827 | #os.system(cmd) 828 | result = subprocess.call(cmd, shell=True) 829 | if(result != 0): 830 | print ("could not run cmd %s" % (cmd)) 831 | 832 | one_tenth = len(split_ranges_list) / 10 833 | if(progress == one_tenth): 834 | print("%d moved of %d elements" % (counter, len(split_ranges_list))) 835 | progress=0 836 | 837 | 838 | for changed_entry in changed_list: 839 | 840 | progress=progress+1 841 | counter = counter + 1 842 | 843 | old_block = changed_entry 844 | new_block = changed_list[changed_entry][0] 845 | length = changed_list[changed_entry][1] 846 | bs = chunksize_string[0:-1] 847 | units = chunksize_string[-1].upper() 848 | bs_with_units = bs + units 849 | #if(length>1): 850 | # print ("moving %d blocks at %d to %d" % (length , old_block , new_block) ) 851 | #else: 852 | # print ("moving %d block at %d to %d" % (length , old_block , new_block) ) 853 | 854 | cmd = "dd if=/dev/mapper/" + shrink_device + " of=/dev/mapper/" + shrink_device + " bs=" + bs_with_units + " skip=" + str(old_block) + " seek=" + str(new_block) + " count=" + str(length) + " conv=notrunc >/dev/null 2>&1" 855 | #print cmd 856 | #os.system(cmd) 857 | result = subprocess.call(cmd, shell=True) 858 | if(result != 0): 859 | print ("could not run cmd %s" % (cmd)) 860 | 861 | one_tenth = len(changed_list) / 10 862 | if(progress == one_tenth): 863 | print("%d moved of %d elements" % (counter, len(changed_list))) 864 | progress=0 865 | 866 | print "Done with data copying" 867 | 868 | def cleanup(shrink_device, pool_to_shrink): 869 | vg_and_lv = pool_to_shrink.split("/") 870 | vgname = vg_and_lv[0] 871 | cmd = "dmsetup remove " + shrink_device 872 | #os.system(cmd) 873 | result = subprocess.call(cmd, shell=True) 874 | if(result != 0): 875 | print ("could not run cmd %s" % (cmd)) 876 | 877 | cmd = "lvremove " + vgname + "/shrink_restore_lv >/dev/null 2>&1" 878 | result = subprocess.call(cmd, shell=True) 879 | if(result != 0): 880 | print ("could not run cmd %s" % (cmd)) 881 | 882 | #os.system(cmd) 883 | cmd = "lvchange -an " + pool_to_shrink 884 | #os.system(cmd) 885 | result = subprocess.call(cmd, shell=True) 886 | if(result != 0): 887 | print ("could not run cmd %s" % (cmd)) 888 | 889 | 890 | def delete_restore_lv(pool_to_shrink): 891 | vg_and_lv = pool_to_shrink.split("/") 892 | vgname = vg_and_lv[0] 893 | cmd = "lvremove " + vgname + "/shrink_restore_lv" 894 | result = subprocess.call(cmd, shell=True) 895 | if(result != 0): 896 | print ("could not run cmd %s" % (cmd)) 897 | #os.system(cmd) 898 | 899 | #TODO close opened files 900 | 901 | 902 | def main(): 903 | 904 | #logfile = open("/tmp/shrink_logs", 
"w") 905 | #logfile.write("starting logs") 906 | ap = argparse.ArgumentParser() 907 | ap.add_argument("-L", "--size", required=True, help="size to shrink to") 908 | ap.add_argument("-t", "--thinpool", required=True, help="vgname/poolname") 909 | args = vars(ap.parse_args()) 910 | 911 | size_in_chunks=0L 912 | pool_to_shrink = args['thinpool'] 913 | 914 | #delete_restore_lv(pool_to_shrink) 915 | activate_pool(pool_to_shrink) 916 | total_mapped_blocks = get_total_mapped_blocks(pool_to_shrink) 917 | 918 | shrink_device = create_shrink_device(pool_to_shrink) 919 | 920 | chunksz_string = get_chunksize(pool_to_shrink) 921 | chunksize_in_bytes = calculate_size_in_bytes(chunksz_string) 922 | 923 | deactivate_pool(pool_to_shrink) 924 | activate_metadata_readonly(pool_to_shrink) 925 | thin_dump_metadata(pool_to_shrink) 926 | 927 | nr_chunks = get_nr_chunks() 928 | #print nr_chunks 929 | 930 | thin_rmap_metadata(pool_to_shrink, nr_chunks) 931 | deactivate_metadata(pool_to_shrink) 932 | 933 | size_to_shrink = args['size'] 934 | size_to_shrink_to_in_bytes = 0L 935 | size_to_shrink_to_in_bytes = calculate_size_in_bytes(size_to_shrink) 936 | #print size_to_shrink_to_in_bytes 937 | chunks_to_shrink_to = size_to_shrink_to_in_bytes/chunksize_in_bytes 938 | print "Need to shrink pool to number of chunks - " + str(chunks_to_shrink_to) 939 | 940 | if(chunks_to_shrink_to >= int(nr_chunks)): 941 | print "This thin pool cannot be shrunk. The pool is already smaller than the size provided." 942 | cleanup(shrink_device,pool_to_shrink) 943 | exit() 944 | 945 | if (total_mapped_blocks >= chunks_to_shrink_to): 946 | print "This thin pool cannot be shrunk. The mapped chunks are more than the lower size provided. Discarding allocated blocks from the pool may help." 947 | cleanup(shrink_device,pool_to_shrink) 948 | exit() 949 | 950 | if( check_pool_shrink_without_dd(chunks_to_shrink_to) == 1): 951 | change_xml(chunks_to_shrink_to, chunksize_in_bytes) 952 | restore_xml_and_swap_metadata(pool_to_shrink) 953 | change_vg_metadata(pool_to_shrink, chunks_to_shrink_to,nr_chunks,chunksize_in_bytes) 954 | restore_vg_metadata(pool_to_shrink) 955 | cleanup(shrink_device,pool_to_shrink) 956 | print("\nThis pool has been shrunk to the specified size of %s" % (size_to_shrink)) 957 | 958 | else: 959 | 960 | changed_list = change_xml(chunks_to_shrink_to, chunksize_in_bytes, 1) 961 | if(len(changed_list) > 0): 962 | move_blocks(changed_list,shrink_device,chunksz_string) 963 | restore_xml_and_swap_metadata(pool_to_shrink) 964 | change_vg_metadata(pool_to_shrink, chunks_to_shrink_to,nr_chunks,chunksize_in_bytes) 965 | restore_vg_metadata(pool_to_shrink) 966 | print("\nThis pool has been shrunk to the specified size of %s" % (size_to_shrink)) 967 | cleanup(shrink_device,pool_to_shrink) 968 | 969 | #change_xml(5000,1000,1) 970 | 971 | if __name__=="__main__": 972 | main() 973 | -------------------------------------------------------------------------------- /unit_tests: -------------------------------------------------------------------------------- 1 | Testing: 2 | 3 | Scenario 1) 4 | No moves required here, Pool can be shrunk because last mapped block is less than the reduced size specified. 5 | 6 | [root@localhost thin_shrink]# ./thin_shrink.py -L2600m -tthinvg/p1 7 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048' 8 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize 9 | lvchange -an thinvg/p1 10 | lvchange -ay thinvg/p1_tmeta -y 11 | Allowing activation of component LV. 
12 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump
13 | thin_rmap --region 0..44800 /dev/thinvg/p1_tmeta > /tmp/rmap
14 | lvchange -an thinvg/p1_tmeta
15 | Need to shrink pool to this number of chunks ---- 41600
16 | Yes, this pool can be shrunk. Last mapped block is 40175 and new size in chunks is 41600
17 | 
18 | lvs -a | grep " thinvg " | grep "\[p1_tmeta]" > /tmp/metadata_lv
19 | lvcreate -n restore_lv -L8m thinvg
20 | WARNING: Sum of all thin volume sizes (14.00 GiB) exceeds the size of thin pools (2.73 GiB).
21 | WARNING: You have not turned on protection against thin pools running out of space.
22 | WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
23 | Logical volume "restore_lv" created.
24 | thin_restore -i /tmp/changed.xml -o /dev/thinvg/restore_lv
25 | Restoring: [==================================================] 100%
26 | lvconvert --thinpool thinvg/p1 --poolmetadata /dev/thinvg/restore_lv -y
27 | vgcfgbackup -f /tmp/vgmeta_backup thinvg
28 | Volume group "thinvg" successfully backed up.
29 | WARNING: Forced restore of Volume Group thinvg with thin volumes.
30 | Restored volume group thinvg
31 | Logical volume "restore_lv" successfully removed
32 | 0 logical volume(s) in volume group "thinvg" now active
33 | This pool has been shrunk to the specified size of 2600m
34 | 
35 | ------------------------------------------------------------------------------
36 | Scenario 2) Shrink further to 2.4G; moving blocks around is required (only range mappings in this case).
37 | 
38 | [root@localhost thin_shrink]# ./thin_shrink.py -L2400m -tthinvg/p1
39 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048'
40 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize
41 | lvchange -an thinvg/p1
42 | lvchange -ay thinvg/p1_tmeta -y
43 | Allowing activation of component LV.
44 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump
45 | thin_rmap --region 0..41600 /dev/thinvg/p1_tmeta > /tmp/rmap
46 | lvchange -an thinvg/p1_tmeta
47 | Need to shrink pool to this number of chunks ---- 38400
48 | Changes needed to metadata and blocks will be copied
49 | 
50 | allocated ranges are..
51 | 
52 | [[0, 2], [2, 1], [3, 1], [4, 160], [164, 1], [165, 1], [166, 1], [167, 1], [168, 1], [169, 1], [170, 1], [171, 1], [172, 1], [173, 1], [174, 1], [175, 1], [176, 1], [177, 1], [178, 1], [179, 2], [181, 1], [182, 1], [183, 160], [343, 1], [344, 1], [345, 1], [346, 1], [347, 1], [348, 1], [349, 1], [350, 1], [351, 1], [352, 1], [353, 1], [354, 192], [8350, 992], [9342, 1024], [10366, 1024], [11390, 984], [12374, 40], [12414, 1024], [13438, 1024], [14462, 1024], [15486, 864], [24350, 992], [25342, 1024], [26366, 1024], [27390, 984], [28374, 40], [28414, 1024], [29438, 1024], [30462, 1024], [31486, 864], [32351, 2047], [34398, 1249], [35656, 1], [35657, 1], [35658, 1], [35659, 1], [35660, 1], [35661, 1], [35662, 1], [35663, 1], [35664, 3808], [39472, 704]]
53 | 
54 | sorted free ranges are
55 | 
56 | [[32350, 1], [35647, 9], [546, 7804], [16350, 8000]]
57 | 
58 | reverse sorted ranges requiring move are
59 | 
60 | [[35664, 3808], [39472, 704]]
61 | change list is..
62 | 
63 | [[35664, 546, 3808], [39472, 4354, 704]]
64 | moving 3808 blocks at 35664 to 546
65 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35664 seek=546 count=3808 conv=notrunc
66 | 3808+0 records in
67 | 3808+0 records out
68 | 249561088 bytes (250 MB) copied, 3.86304 s, 64.6 MB/s
69 | moving 704 blocks at 39472 to 4354
70 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=39472 seek=4354 count=704 conv=notrunc
71 | 704+0 records in
72 | 704+0 records out
73 | 46137344 bytes (46 MB) copied, 0.414274 s, 111 MB/s
74 | lvs -a | grep " thinvg " | grep "\[p1_tmeta]" > /tmp/metadata_lv
75 | lvcreate -n restore_lv -L8m thinvg
76 | WARNING: Sum of all thin volume sizes (14.00 GiB) exceeds the size of thin pools (<2.54 GiB).
77 | WARNING: You have not turned on protection against thin pools running out of space.
78 | WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
79 | Logical volume "restore_lv" created.
80 | thin_restore -i /tmp/changed.xml -o /dev/thinvg/restore_lv
81 | Restoring: [==================================================] 100%
82 | lvconvert --thinpool thinvg/p1 --poolmetadata /dev/thinvg/restore_lv -y
83 | vgcfgbackup -f /tmp/vgmeta_backup thinvg
84 | Volume group "thinvg" successfully backed up.
85 | WARNING: Forced restore of Volume Group thinvg with thin volumes.
86 | Restored volume group thinvg
87 | Logical volume "restore_lv" successfully removed
88 | 0 logical volume(s) in volume group "thinvg" now active
89 | This pool has been shrunk to the specified size of 2400m
90 | 
91 | [root@localhost thin_shrink]# diff /tmp/dump /tmp/changed.xml
92 | 1c1
93 | < [XML line lost in this paste]
94 | ---
95 | > [XML line lost in this paste]
96 | 24c24
97 | < [XML line lost in this paste]
98 | ---
99 | > [XML line lost in this paste]
100 | 62c62
101 | < [XML line lost in this paste]
102 | ---
103 | > [XML line lost in this paste]
104 | (the changed lines are the superblock's nr_data_blocks and the data_begin of the two moved range mappings)
105 | 
106 | 
107 | 
108 | [root@localhost thin_shrink]# vgchange -ay
109 | 3 logical volume(s) in volume group "thinvg" now active
110 | 2 logical volume(s) in volume group "rhel_vm253-73" now active
111 | [root@localhost thin_shrink]# mount /dev/thinvg/t1 /home/nkshirsa/formt/t1
112 | [root@localhost thin_shrink]# umount /home/nkshirsa/formt/t1
113 | [root@localhost thin_shrink]# vgchange -an thinvg
114 | 0 logical volume(s) in volume group "thinvg" now active
115 | 
116 | -------------------------------------------------------------------------------
117 | 
118 | Scenario 3)
119 | Another run, which required both range and single mappings to be moved:
120 | Shrink further to 2200MB.
121 | 
122 | [root@localhost thin_shrink]# ./thin_shrink.py -L2200m -tthinvg/p1
123 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048'
124 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize
125 | lvchange -an thinvg/p1
126 | lvchange -ay thinvg/p1_tmeta -y
127 | Allowing activation of component LV.
128 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump
129 | thin_rmap --region 0..38400 /dev/thinvg/p1_tmeta > /tmp/rmap
130 | lvchange -an thinvg/p1_tmeta
131 | Need to shrink pool to this number of chunks ---- 35200
132 | Changes needed to metadata and blocks will be copied
133 | 
134 | allocated ranges are..
135 | 136 | [[0, 2], [2, 1], [3, 1], [4, 160], [164, 1], [165, 1], [166, 1], [167, 1], [168, 1], [169, 1], [170, 1], [171, 1], [172, 1], [173, 1], [174, 1], [175, 1], [176, 1], [177, 1], [178, 1], [179, 2], [181, 1], [182, 1], [183, 160], [343, 1], [344, 1], [345, 1], [346, 1], [347, 1], [348, 1], [349, 1], [350, 1], [351, 1], [352, 1], [353, 1], [354, 192], [546, 3808], [4354, 704], [8350, 992], [9342, 1024], [10366, 1024], [11390, 984], [12374, 40], [12414, 1024], [13438, 1024], [14462, 1024], [15486, 864], [24350, 992], [25342, 1024], [26366, 1024], [27390, 984], [28374, 40], [28414, 1024], [29438, 1024], [30462, 1024], [31486, 864], [32351, 2047], [34398, 1249], [35656, 1], [35657, 1], [35658, 1], [35659, 1], [35660, 1], [35661, 1], [35662, 1], [35663, 1]] 137 | 138 | sorted free ranges are 139 | 140 | [[32350, 1], [5058, 3292], [16350, 8000]] 141 | 142 | reverse sorted ranges requiring move are 143 | 144 | [[34398, 1249], [35656, 1], [35657, 1], [35658, 1], [35659, 1], [35660, 1], [35661, 1], [35662, 1], [35663, 1]] 145 | change list is.. 146 | 147 | [[34398, 5058, 1249], [35656, 6307, 1], [35657, 6308, 1], [35658, 6309, 1], [35659, 6310, 1], [35660, 6311, 1], [35661, 6312, 1], [35662, 6313, 1], [35663, 6314, 1]] 148 | moving 1249 blocks at 34398 to 5058 149 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=34398 seek=5058 count=1249 conv=notrunc 150 | 1249+0 records in 151 | 1249+0 records out 152 | 81854464 bytes (82 MB) copied, 1.26161 s, 64.9 MB/s 153 | moving 1 blocks at 35656 to 6307 154 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35656 seek=6307 count=1 conv=notrunc 155 | 1+0 records in 156 | 1+0 records out 157 | 65536 bytes (66 kB) copied, 0.000855498 s, 76.6 MB/s 158 | moving 1 blocks at 35657 to 6308 159 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35657 seek=6308 count=1 conv=notrunc 160 | 1+0 records in 161 | 1+0 records out 162 | 65536 bytes (66 kB) copied, 0.00157816 s, 41.5 MB/s 163 | moving 1 blocks at 35658 to 6309 164 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35658 seek=6309 count=1 conv=notrunc 165 | 1+0 records in 166 | 1+0 records out 167 | 65536 bytes (66 kB) copied, 0.000324833 s, 202 MB/s 168 | moving 1 blocks at 35659 to 6310 169 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35659 seek=6310 count=1 conv=notrunc 170 | 1+0 records in 171 | 1+0 records out 172 | 65536 bytes (66 kB) copied, 0.000380861 s, 172 MB/s 173 | moving 1 blocks at 35660 to 6311 174 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35660 seek=6311 count=1 conv=notrunc 175 | 1+0 records in 176 | 1+0 records out 177 | 65536 bytes (66 kB) copied, 0.000324412 s, 202 MB/s 178 | moving 1 blocks at 35661 to 6312 179 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35661 seek=6312 count=1 conv=notrunc 180 | 1+0 records in 181 | 1+0 records out 182 | 65536 bytes (66 kB) copied, 0.000530414 s, 124 MB/s 183 | moving 1 blocks at 35662 to 6313 184 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35662 seek=6313 count=1 conv=notrunc 185 | 1+0 records in 186 | 1+0 records out 187 | 65536 bytes (66 kB) copied, 0.000383886 s, 171 MB/s 188 | moving 1 blocks at 35663 to 6314 189 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35663 seek=6314 count=1 conv=notrunc 190 | 1+0 records in 191 | 1+0 records out 192 | 65536 bytes (66 kB) copied, 0.000435793 s, 150 MB/s 193 | lvs -a | grep " thinvg " | grep "\[p1_tmeta]" > /tmp/metadata_lv 
194 | lvcreate -n restore_lv -L8m thinvg
195 | WARNING: Sum of all thin volume sizes (14.00 GiB) exceeds the size of thin pools (2.34 GiB).
196 | WARNING: You have not turned on protection against thin pools running out of space.
197 | WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
198 | Logical volume "restore_lv" created.
199 | thin_restore -i /tmp/changed.xml -o /dev/thinvg/restore_lv
200 | Restoring: [==================================================] 100%
201 | lvconvert --thinpool thinvg/p1 --poolmetadata /dev/thinvg/restore_lv -y
202 | vgcfgbackup -f /tmp/vgmeta_backup thinvg
203 | Volume group "thinvg" successfully backed up.
204 | WARNING: Forced restore of Volume Group thinvg with thin volumes.
205 | Restored volume group thinvg
206 | Logical volume "restore_lv" successfully removed
207 | 0 logical volume(s) in volume group "thinvg" now active
208 | This pool has been shrunk to the specified size of 2200m
209 | [root@localhost thin_shrink]#
210 | 
211 | 
212 | [root@localhost thin_shrink]# diff /tmp/dump /tmp/changed.xml
213 | 1c1
214 | < [XML line lost in this paste]
215 | ---
216 | > [XML line lost in this paste]
217 | 5c5
218 | < [XML line lost in this paste]
219 | ---
220 | > [XML line lost in this paste]
221 | 7c7
222 | < [XML line lost in this paste]
223 | ---
224 | > [XML line lost in this paste]
225 | 9c9
226 | < [XML line lost in this paste]
227 | ---
228 | > [XML line lost in this paste]
229 | 11c11
230 | < [XML line lost in this paste]
231 | ---
232 | > [XML line lost in this paste]
233 | 13c13
234 | < [XML line lost in this paste]
235 | ---
236 | > [XML line lost in this paste]
237 | 15c15
238 | < [XML line lost in this paste]
239 | ---
240 | > [XML line lost in this paste]
241 | 17c17
242 | < [XML line lost in this paste]
243 | ---
244 | > [XML line lost in this paste]
245 | 19c19
246 | < [XML line lost in this paste]
247 | ---
248 | > [XML line lost in this paste]
249 | 23c23
250 | < [XML line lost in this paste]
251 | ---
252 | > [XML line lost in this paste]
253 | [root@localhost thin_shrink]#
254 | (the changed lines are the superblock's nr_data_blocks and the mappings of the nine moved ranges/blocks)
255 | 
256 | [root@localhost thin_shrink]# vgchange -ay thinvg
257 | 3 logical volume(s) in volume group "thinvg" now active
258 | [root@localhost thin_shrink]# mount /dev/thinvg/t1 /home/nkshirsa/formt/t1
259 | [root@localhost thin_shrink]# ls /home/nkshirsa/formt/t1/
260 | folder2/ somefile
261 | [root@localhost thin_shrink]# ls /home/nkshirsa/formt/t1/
262 | folder2 somefile
263 | [root@localhost thin_shrink]#
264 | 
265 | 
266 | [root@localhost thin_shrink]# diff /tmp/vgmeta_backup /tmp/changed_vgmeta
267 | 59c59
268 | < extent_count = 600 # 2.34375 Gigabytes
269 | ---
270 | > extent_count = 550
271 | [root@localhost thin_shrink]#
272 | 
273 | 
274 | [root@localhost thin_shrink]# lvs
275 | LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
276 | root rhel_vm253-73 -wi-ao---- <13.87g
277 | swap rhel_vm253-73 -wi-ao---- 1.60g
278 | p1 thinvg twi-aotz-- <2.15g 69.21 20.02
279 | t1 thinvg Vwi-aotz-- 10.00g p1 7.44
280 | t2 thinvg Vwi-a-tz-- 4.00g p1 18.58
281 | [root@localhost thin_shrink]#
282 | 
283 | 
284 | 
285 | -----------------------
286 | 
287 | 
288 | [root@localhost thin_shrink]# lvs
289 | LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
290 | root rhel_vm253-73 -wi-ao---- <13.87g
291 | swap rhel_vm253-73 -wi-ao---- 1.60g
292 | p1 thinvg twi-aotz-- <4.45g 27.96 18.55
293 | t1 thinvg Vwi-a-tz-- 10.00g p1 7.44
294 | t2 thinvg Vwi-a-tz-- 4.00g p1 12.48
295 | 
296 | [root@localhost thin_shrink]# vgchange -an thinvg
297 | 0 logical volume(s) in volume group "thinvg" now active
298 | 
299 | 
300 | [root@localhost thin_shrink]# ./thin_shrink.py -L4000m -t thinvg/p1
301 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048'
302 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize
303 | lvchange -an thinvg/p1
304 | lvchange -ay thinvg/p1_tmeta -y
305 | Allowing activation of component LV.
306 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump 307 | thin_rmap --region 0..72832 /dev/thinvg/p1_tmeta > /tmp/rmap 308 | lvchange -an thinvg/p1_tmeta 309 | Need to shrink pool to this number of chunks ---- 64000 310 | Yes, this pool can be shrunk. Last mapped block is 40383 and new size in chunks is 64000 311 | 312 | lvs -a | grep " thinvg " | grep "\[p1_tmeta]" > /tmp/metadata_lv 313 | lvcreate -n restore_lv -L8m thinvg 314 | WARNING: Sum of all thin volume sizes (14.00 GiB) exceeds the size of thin pools (<4.45 GiB). 315 | WARNING: You have not turned on protection against thin pools running out of space. 316 | WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. 317 | Logical volume "restore_lv" created. 318 | thin_restore -i /tmp/changed.xml -o /dev/thinvg/restore_lv 319 | Restoring: [==================================================] 100% 320 | lvconvert --thinpool thinvg/p1 --poolmetadata /dev/thinvg/restore_lv -y 321 | vgcfgbackup -f /tmp/vgmeta_backup thinvg 322 | Volume group "thinvg" successfully backed up. 323 | WARNING: Forced restore of Volume Group thinvg with thin volumes. 324 | Restored volume group thinvg 325 | Logical volume "restore_lv" successfully removed 326 | 0 logical volume(s) in volume group "thinvg" now active 327 | This pool has been shrunk to the specified size of 4000m 328 | 329 | 330 | 331 | [root@localhost thin_shrink]# ./thin_shrink.py -L3000m -t thinvg/p1 332 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048' 333 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize 334 | lvchange -an thinvg/p1 335 | lvchange -ay thinvg/p1_tmeta -y 336 | Allowing activation of component LV. 337 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump 338 | thin_rmap --region 0..64000 /dev/thinvg/p1_tmeta > /tmp/rmap 339 | lvchange -an thinvg/p1_tmeta 340 | Need to shrink pool to this number of chunks ---- 48000 341 | Yes, this pool can be shrunk. Last mapped block is 40383 and new size in chunks is 48000 342 | 343 | lvs -a | grep " thinvg " | grep "\[p1_tmeta]" > /tmp/metadata_lv 344 | lvcreate -n restore_lv -L8m thinvg 345 | WARNING: Sum of all thin volume sizes (14.00 GiB) exceeds the size of thin pools (<3.91 GiB). 346 | WARNING: You have not turned on protection against thin pools running out of space. 347 | WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. 348 | Logical volume "restore_lv" created. 349 | thin_restore -i /tmp/changed.xml -o /dev/thinvg/restore_lv 350 | Restoring: [==================================================] 100% 351 | lvconvert --thinpool thinvg/p1 --poolmetadata /dev/thinvg/restore_lv -y 352 | vgcfgbackup -f /tmp/vgmeta_backup thinvg 353 | Volume group "thinvg" successfully backed up. 354 | WARNING: Forced restore of Volume Group thinvg with thin volumes. 355 | Restored volume group thinvg 356 | Logical volume "restore_lv" successfully removed 357 | 0 logical volume(s) in volume group "thinvg" now active 358 | This pool has been shrunk to the specified size of 3000m 359 | [root@localhost thin_shrink]# ./thin_shrink.py -L2200m -t thinvg/p1 360 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048' 361 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize 362 | lvchange -an thinvg/p1 363 | lvchange -ay thinvg/p1_tmeta -y 364 | Allowing activation of component LV. 
365 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump 366 | thin_rmap --region 0..48000 /dev/thinvg/p1_tmeta > /tmp/rmap 367 | lvchange -an thinvg/p1_tmeta 368 | Need to shrink pool to this number of chunks ---- 35200 369 | Changes needed to metadata and blocks will be copied 370 | 371 | allocated ranges are.. 372 | 373 | [[0, 2], [4, 160], [164, 1], [165, 1], [166, 1], [167, 1], [168, 1], [169, 1], [170, 1], [171, 2], [175, 160], [335, 1], [336, 1], [337, 1], [338, 1], [339, 1], [340, 1], [341, 1], [342, 1], [343, 1], [344, 1], [345, 1], [346, 1], [347, 1], [348, 1], [349, 1], [350, 1016], [1366, 1024], [2390, 1024], [3414, 960], [4374, 64], [4438, 1024], [5462, 1024], [6486, 1024], [7510, 840], [16350, 1016], [17366, 1024], [18390, 1024], [19414, 960], [20374, 64], [20438, 1024], [21462, 1024], [22486, 1024], [23510, 840], [35631, 2047], [37678, 1953], [40357, 1], [40358, 1], [40359, 1], [40360, 1], [40361, 1], [40362, 1], [40363, 1], [40364, 1], [40365, 1], [40366, 1], [40375, 1], [40376, 1], [40377, 1], [40378, 1], [40379, 1], [40380, 1], [40381, 1], [40382, 1], [40383, 1]] 374 | 375 | sorted free ranges are 376 | 377 | [[2, 2], [173, 2], [8350, 8000]] 378 | 379 | reverse sorted ranges requiring move are 380 | 381 | [[35631, 2047], [37678, 1953], [40357, 1], [40358, 1], [40359, 1], [40360, 1], [40361, 1], [40362, 1], [40363, 1], [40364, 1], [40365, 1], [40366, 1], [40375, 1], [40376, 1], [40377, 1], [40378, 1], [40379, 1], [40380, 1], [40381, 1], [40382, 1], [40383, 1]] 382 | change list is.. 383 | 384 | [[35631, 8350, 2047], [37678, 10397, 1953], [40357, 2, 1], [40358, 173, 1], [40359, 12350, 1], [40360, 12351, 1], [40361, 12352, 1], [40362, 12353, 1], [40363, 12354, 1], [40364, 12355, 1], [40365, 12356, 1], [40366, 12357, 1], [40375, 12358, 1], [40376, 12359, 1], [40377, 12360, 1], [40378, 12361, 1], [40379, 12362, 1], [40380, 12363, 1], [40381, 12364, 1], [40382, 12365, 1], [40383, 12366, 1]] 385 | moving 2047 blocks at 35631 to 8350 386 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=35631 seek=8350 count=2047 conv=notrunc 387 | 2047+0 records in 388 | 2047+0 records out 389 | 134152192 bytes (134 MB) copied, 1.66689 s, 80.5 MB/s 390 | moving 1953 blocks at 37678 to 10397 391 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=37678 seek=10397 count=1953 conv=notrunc 392 | 1953+0 records in 393 | 1953+0 records out 394 | 127991808 bytes (128 MB) copied, 1.42024 s, 90.1 MB/s 395 | moving 1 blocks at 40357 to 2 396 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40357 seek=2 count=1 conv=notrunc 397 | 1+0 records in 398 | 1+0 records out 399 | 65536 bytes (66 kB) copied, 0.0109875 s, 6.0 MB/s 400 | moving 1 blocks at 40358 to 173 401 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40358 seek=173 count=1 conv=notrunc 402 | 1+0 records in 403 | 1+0 records out 404 | 65536 bytes (66 kB) copied, 0.0152313 s, 4.3 MB/s 405 | moving 1 blocks at 40359 to 12350 406 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40359 seek=12350 count=1 conv=notrunc 407 | 1+0 records in 408 | 1+0 records out 409 | 65536 bytes (66 kB) copied, 0.00566847 s, 11.6 MB/s 410 | moving 1 blocks at 40360 to 12351 411 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40360 seek=12351 count=1 conv=notrunc 412 | 1+0 records in 413 | 1+0 records out 414 | 65536 bytes (66 kB) copied, 0.0101351 s, 6.5 MB/s 415 | moving 1 blocks at 40361 to 12352 416 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40361 
seek=12352 count=1 conv=notrunc 417 | 1+0 records in 418 | 1+0 records out 419 | 65536 bytes (66 kB) copied, 0.000340887 s, 192 MB/s 420 | moving 1 blocks at 40362 to 12353 421 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40362 seek=12353 count=1 conv=notrunc 422 | 1+0 records in 423 | 1+0 records out 424 | 65536 bytes (66 kB) copied, 0.00041912 s, 156 MB/s 425 | moving 1 blocks at 40363 to 12354 426 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40363 seek=12354 count=1 conv=notrunc 427 | 1+0 records in 428 | 1+0 records out 429 | 65536 bytes (66 kB) copied, 0.000262674 s, 249 MB/s 430 | moving 1 blocks at 40364 to 12355 431 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40364 seek=12355 count=1 conv=notrunc 432 | 1+0 records in 433 | 1+0 records out 434 | 65536 bytes (66 kB) copied, 0.00103939 s, 63.1 MB/s 435 | moving 1 blocks at 40365 to 12356 436 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40365 seek=12356 count=1 conv=notrunc 437 | 1+0 records in 438 | 1+0 records out 439 | 65536 bytes (66 kB) copied, 0.000243146 s, 270 MB/s 440 | moving 1 blocks at 40366 to 12357 441 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40366 seek=12357 count=1 conv=notrunc 442 | 1+0 records in 443 | 1+0 records out 444 | 65536 bytes (66 kB) copied, 0.000411183 s, 159 MB/s 445 | moving 1 blocks at 40375 to 12358 446 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40375 seek=12358 count=1 conv=notrunc 447 | 1+0 records in 448 | 1+0 records out 449 | 65536 bytes (66 kB) copied, 0.0125392 s, 5.2 MB/s 450 | moving 1 blocks at 40376 to 12359 451 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40376 seek=12359 count=1 conv=notrunc 452 | 1+0 records in 453 | 1+0 records out 454 | 65536 bytes (66 kB) copied, 0.0257946 s, 2.5 MB/s 455 | moving 1 blocks at 40377 to 12360 456 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40377 seek=12360 count=1 conv=notrunc 457 | 1+0 records in 458 | 1+0 records out 459 | 65536 bytes (66 kB) copied, 0.00115909 s, 56.5 MB/s 460 | moving 1 blocks at 40378 to 12361 461 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40378 seek=12361 count=1 conv=notrunc 462 | 1+0 records in 463 | 1+0 records out 464 | 65536 bytes (66 kB) copied, 0.00588273 s, 11.1 MB/s 465 | moving 1 blocks at 40379 to 12362 466 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40379 seek=12362 count=1 conv=notrunc 467 | 1+0 records in 468 | 1+0 records out 469 | 65536 bytes (66 kB) copied, 0.000306777 s, 214 MB/s 470 | moving 1 blocks at 40380 to 12363 471 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40380 seek=12363 count=1 conv=notrunc 472 | 1+0 records in 473 | 1+0 records out 474 | 65536 bytes (66 kB) copied, 0.000408732 s, 160 MB/s 475 | moving 1 blocks at 40381 to 12364 476 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40381 seek=12364 count=1 conv=notrunc 477 | 1+0 records in 478 | 1+0 records out 479 | 65536 bytes (66 kB) copied, 0.000367871 s, 178 MB/s 480 | moving 1 blocks at 40382 to 12365 481 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40382 seek=12365 count=1 conv=notrunc 482 | 1+0 records in 483 | 1+0 records out 484 | 65536 bytes (66 kB) copied, 0.000457262 s, 143 MB/s 485 | moving 1 blocks at 40383 to 12366 486 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=40383 seek=12366 count=1 conv=notrunc 487 | 1+0 records in 488 | 
1+0 records out 489 | 65536 bytes (66 kB) copied, 0.000309906 s, 211 MB/s 490 | lvs -a | grep " thinvg " | grep "\[p1_tmeta]" > /tmp/metadata_lv 491 | lvcreate -n restore_lv -L8m thinvg 492 | WARNING: Sum of all thin volume sizes (14.00 GiB) exceeds the size of thin pools (<2.93 GiB). 493 | WARNING: You have not turned on protection against thin pools running out of space. 494 | WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. 495 | Logical volume "restore_lv" created. 496 | thin_restore -i /tmp/changed.xml -o /dev/thinvg/restore_lv 497 | Restoring: [==================================================] 100% 498 | lvconvert --thinpool thinvg/p1 --poolmetadata /dev/thinvg/restore_lv -y 499 | vgcfgbackup -f /tmp/vgmeta_backup thinvg 500 | Volume group "thinvg" successfully backed up. 501 | WARNING: Forced restore of Volume Group thinvg with thin volumes. 502 | Restored volume group thinvg 503 | Logical volume "restore_lv" successfully removed 504 | 0 logical volume(s) in volume group "thinvg" now active 505 | This pool has been shrunk to the specified size of 2200m 506 | [root@localhost thin_shrink]# dmsetup info -c 507 | Name Maj Min Stat Open Targ Event UUID 508 | rhel_vm253--73-swap 253 3 L--w 2 1 0 LVM-RdasxZPawsqGgqspLkjPWiVKWoTp8gFwHrvi35ngnFUsF7rwOD0p3GlX74l626n9 509 | rhel_vm253--73-root 253 0 L--w 1 1 0 LVM-RdasxZPawsqGgqspLkjPWiVKWoTp8gFw9ukTcqFwiZgKZkGcY0nsBrMQa4L6b0hB 510 | [root@localhost thin_shrink]# lvs 511 | LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert 512 | root rhel_vm253-73 -wi-ao---- <13.87g 513 | swap rhel_vm253-73 -wi-ao---- 1.60g 514 | p1 thinvg twi---tz-- <2.15g 515 | t1 thinvg Vwi---tz-- 10.00g p1 516 | t2 thinvg Vwi---tz-- 4.00g p1 517 | [root@localhost thin_shrink]# vgchange -ay thinvg 518 | 3 logical volume(s) in volume group "thinvg" now active 519 | [root@localhost thin_shrink]# mount /dev/thinvg/t2 /home/nkshirsa/formt/t2/ 520 | [root@localhost thin_shrink]# mount /dev/thinvg/t1 /home/nkshirsa/formt/t1 521 | [root@localhost thin_shrink]# ls /home/nkshirsa/formt/t1/ 522 | folder2 somefile 523 | 524 | [root@localhost thin_shrink]# ls /home/nkshirsa/formt/t2/ 525 | folder1 526 | 527 | 528 | 529 | 530 | 531 | -------- 532 | 533 | 534 | 535 | [root@localhost thin_shrink]# lvs 536 | LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert 537 | root rhel_vm253-73 -wi-ao---- <13.87g 538 | swap rhel_vm253-73 -wi-ao---- 1.60g 539 | p1 thinvg twi---tz-- <2.15g 540 | t1 thinvg Vwi---tz-- 10.00g p1 541 | t2 thinvg Vwi---tz-- 4.00g p1 542 | [root@localhost thin_shrink]# ./thin_shrink.py -L2000m -t thinvg/p1 543 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048' 544 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize 545 | lvchange -an thinvg/p1 546 | lvchange -ay thinvg/p1_tmeta -y 547 | Allowing activation of component LV. 548 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump 549 | thin_rmap --region 0..35200 /dev/thinvg/p1_tmeta > /tmp/rmap 550 | lvchange -an thinvg/p1_tmeta 551 | Need to shrink pool to this number of chunks ---- 32000 552 | Yes, this pool can be shrunk. Last mapped block is 24349 and new size in chunks is 32000 553 | 554 | lvs -a | grep " thinvg " | grep "\[p1_tmeta]" > /tmp/metadata_lv 555 | lvcreate -n restore_lv -L8m thinvg 556 | WARNING: Sum of all thin volume sizes (14.00 GiB) exceeds the size of thin pools (<2.15 GiB). 557 | WARNING: You have not turned on protection against thin pools running out of space. 
558 | WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. 559 | Logical volume "restore_lv" created. 560 | thin_restore -i /tmp/changed.xml -o /dev/thinvg/restore_lv 561 | Restoring: [==================================================] 100% 562 | lvconvert --thinpool thinvg/p1 --poolmetadata /dev/thinvg/restore_lv -y 563 | vgcfgbackup -f /tmp/vgmeta_backup thinvg 564 | Volume group "thinvg" successfully backed up. 565 | WARNING: Forced restore of Volume Group thinvg with thin volumes. 566 | Restored volume group thinvg 567 | Logical volume "restore_lv" successfully removed 568 | 0 logical volume(s) in volume group "thinvg" now active 569 | This pool has been shrunk to the specified size of 2000m 570 | [root@localhost thin_shrink]# vgchange -ay thinvg 571 | 3 logical volume(s) in volume group "thinvg" now active 572 | [root@localhost thin_shrink]# vgchange -an thinvg 573 | 0 logical volume(s) in volume group "thinvg" now active 574 | [root@localhost thin_shrink]# ./thin_shrink.py -L1800m -t thinvg/p1 575 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048' 576 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize 577 | lvchange -an thinvg/p1 578 | lvchange -ay thinvg/p1_tmeta -y 579 | Allowing activation of component LV. 580 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump 581 | thin_rmap --region 0..32000 /dev/thinvg/p1_tmeta > /tmp/rmap 582 | lvchange -an thinvg/p1_tmeta 583 | Need to shrink pool to this number of chunks ---- 28800 584 | Yes, this pool can be shrunk. Last mapped block is 24349 and new size in chunks is 28800 585 | 586 | lvs -a | grep " thinvg " | grep "\[p1_tmeta]" > /tmp/metadata_lv 587 | lvcreate -n restore_lv -L8m thinvg 588 | WARNING: Sum of all thin volume sizes (14.00 GiB) exceeds the size of thin pools (1.95 GiB). 589 | WARNING: You have not turned on protection against thin pools running out of space. 590 | WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. 591 | Logical volume "restore_lv" created. 592 | thin_restore -i /tmp/changed.xml -o /dev/thinvg/restore_lv 593 | Restoring: [==================================================] 100% 594 | lvconvert --thinpool thinvg/p1 --poolmetadata /dev/thinvg/restore_lv -y 595 | vgcfgbackup -f /tmp/vgmeta_backup thinvg 596 | Volume group "thinvg" successfully backed up. 597 | WARNING: Forced restore of Volume Group thinvg with thin volumes. 598 | Restored volume group thinvg 599 | Logical volume "restore_lv" successfully removed 600 | 0 logical volume(s) in volume group "thinvg" now active 601 | This pool has been shrunk to the specified size of 1800m 602 | 603 | 604 | 605 | 606 | [root@localhost thin_shrink]# ./thin_shrink.py -L1300m -t thinvg/p1 607 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048' 608 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize 609 | lvchange -an thinvg/p1 610 | lvchange -ay thinvg/p1_tmeta -y 611 | Allowing activation of component LV. 612 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump 613 | thin_rmap --region 0..28800 /dev/thinvg/p1_tmeta > /tmp/rmap 614 | lvchange -an thinvg/p1_tmeta 615 | Need to shrink pool to this number of chunks ---- 20800 616 | Changes needed to metadata and blocks will be copied 617 | 618 | allocated ranges are.. 
619 | 620 | [[0, 2], [2, 1], [4, 160], [164, 1], [165, 1], [166, 1], [167, 1], [168, 1], [169, 1], [170, 1], [171, 2], [173, 1], [175, 160], [335, 1], [336, 1], [337, 1], [338, 1], [339, 1], [340, 1], [341, 1], [342, 1], [343, 1], [344, 1], [345, 1], [346, 1], [347, 1], [348, 1], [349, 1], [350, 1016], [1366, 1024], [2390, 1024], [3414, 960], [4374, 64], [4438, 1024], [5462, 1024], [6486, 1024], [7510, 840], [8350, 2047], [10397, 1953], [12350, 1], [12351, 1], [12352, 1], [12353, 1], [12354, 1], [12355, 1], [12356, 1], [12357, 1], [12358, 1], [12359, 1], [12360, 1], [12361, 1], [12362, 1], [12363, 1], [12364, 1], [12365, 1], [12366, 1], [16350, 1016], [17366, 1024], [18390, 1024], [19414, 960], [20374, 64], [20438, 1024], [21462, 1024], [22486, 1024], [23510, 840]] 621 | 622 | sorted free ranges are 623 | 624 | [[3, 1], [174, 1], [12367, 3983]] 625 | 626 | reverse sorted ranges requiring move are 627 | 628 | [[20438, 1024], [21462, 1024], [22486, 1024], [23510, 840]] 629 | change list is.. 630 | 631 | [[20438, 12367, 1024], [21462, 13391, 1024], [22486, 14415, 1024], [23510, 15439, 840]] 632 | moving 1024 blocks at 20438 to 12367 633 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=20438 seek=12367 count=1024 conv=notrunc 634 | 1024+0 records in 635 | 1024+0 records out 636 | 67108864 bytes (67 MB) copied, 2.11415 s, 31.7 MB/s 637 | moving 1024 blocks at 21462 to 13391 638 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=21462 seek=13391 count=1024 conv=notrunc 639 | 1024+0 records in 640 | 1024+0 records out 641 | 67108864 bytes (67 MB) copied, 1.14833 s, 58.4 MB/s 642 | moving 1024 blocks at 22486 to 14415 643 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=22486 seek=14415 count=1024 conv=notrunc 644 | 1024+0 records in 645 | 1024+0 records out 646 | 67108864 bytes (67 MB) copied, 1.22918 s, 54.6 MB/s 647 | moving 840 blocks at 23510 to 15439 648 | dd if=/dev/mapper/shrink_p1 of=/dev/mapper/shrink_p1 bs=64k skip=23510 seek=15439 count=840 conv=notrunc 649 | 840+0 records in 650 | 840+0 records out 651 | 55050240 bytes (55 MB) copied, 0.43219 s, 127 MB/s 652 | lvs -a | grep " thinvg " | grep "\[p1_tmeta]" > /tmp/metadata_lv 653 | lvcreate -n restore_lv -L8m thinvg 654 | WARNING: Sum of all thin volume sizes (14.00 GiB) exceeds the size of thin pools (<1.76 GiB). 655 | WARNING: You have not turned on protection against thin pools running out of space. 656 | WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full. 657 | Logical volume "restore_lv" created. 658 | thin_restore -i /tmp/changed.xml -o /dev/thinvg/restore_lv 659 | Restoring: [==================================================] 100% 660 | lvconvert --thinpool thinvg/p1 --poolmetadata /dev/thinvg/restore_lv -y 661 | vgcfgbackup -f /tmp/vgmeta_backup thinvg 662 | Volume group "thinvg" successfully backed up. 663 | WARNING: Forced restore of Volume Group thinvg with thin volumes. 
664 | Restored volume group thinvg 665 | Logical volume "restore_lv" successfully removed 666 | 0 logical volume(s) in volume group "thinvg" now active 667 | This pool has been shrunk to the specified size of 1300m 668 | [root@localhost thin_shrink]# 669 | 670 | 671 | 672 | 673 | 674 | ----- 675 | 676 | 677 | 678 | [root@localhost thin_shrink]# ./thin_shrink.py -L1200m -t thinvg/p1 679 | dmsetup create shrink_p1 --table '0 10485760 linear 252:48 2048' 680 | lvs -o +chunksize thinvg/p1 | grep -v Chunk > /tmp/chunksize 681 | lvchange -an thinvg/p1 682 | lvchange -ay thinvg/p1_tmeta -y 683 | Allowing activation of component LV. 684 | thin_dump /dev/thinvg/p1_tmeta > /tmp/dump 685 | thin_rmap --region 0..20800 /dev/thinvg/p1_tmeta > /tmp/rmap 686 | lvchange -an thinvg/p1_tmeta 687 | Need to shrink pool to this number of chunks ---- 19200 688 | This thin pool cannot be shrunk. The mapped chunks are more than the lower size provided. Discarding allocated blocks from the pool may help. 689 | Failed to find logical volume "thinvg/restore_lv" 690 | 0 logical volume(s) in volume group "thinvg" now active 691 | [root@localhost thin_shrink]# vgchange -ay thinvg 692 | 3 logical volume(s) in volume group "thinvg" now active 693 | [root@localhost thin_shrink]# 694 | 695 | 696 | 697 | --------------------------------------------------------------------------------
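---------------------

For reference, below is a minimal, self-contained sketch of the placement strategy that change_xml() in thin_shrink.py implements (an editorial illustration, not a file from this repo; the names place_ranges and moves are illustrative). It keeps only the fit/split decision: a range that fits whole gets the smallest free range that can hold it, while a range larger than every free range is split, consuming free ranges largest-first. The XML rewriting and the lookahead flag are omitted. The example input is the commented-out test data inside change_xml().

def place_ranges(free_ranges, ranges_to_move):
    # both lists hold [start_block, length] pairs, in pool chunks
    free_ranges = sorted(free_ranges, key=lambda r: r[1])              # ascending by length
    ranges_to_move = sorted(ranges_to_move, key=lambda r: r[1], reverse=True)
    if sum(r[1] for r in ranges_to_move) > sum(r[1] for r in free_ranges):
        return None                                                    # cannot shrink
    moves = {}                                                         # old_start -> [new_start, length]
    for old_start, length in ranges_to_move:
        remaining, offset = length, 0
        if length > free_ranges[-1][1]:
            # split: consume whole free ranges, largest first, until the rest fits the largest
            while remaining > free_ranges[-1][1]:
                blk, ln = free_ranges.pop()
                moves[old_start + offset] = [blk, ln]
                offset += ln
                remaining -= ln
            blk, ln = free_ranges.pop()            # the tail goes into the largest remaining range
            moves[old_start + offset] = [blk, remaining]
        else:
            # whole range fits: use the smallest free range that can hold it
            for i in range(len(free_ranges)):
                if free_ranges[i][1] >= remaining:
                    blk, ln = free_ranges.pop(i)
                    moves[old_start] = [blk, remaining]
                    break
        if ln > remaining:                          # give back any unused tail as a new free range
            free_ranges.append([blk + remaining, ln - remaining])
            free_ranges.sort(key=lambda r: r[1])
    return moves

print(place_ranges([[1, 1], [200, 300], [700, 400], [1200, 500]],
                   [[3000, 600], [5000, 550]]))
# -> {3000: [1200, 500], 3500: [700, 100], 5000: [800, 300], 5300: [200, 250]}

Each entry of the returned dict corresponds to one dd copy in move_blocks(); the [3000, 600] range is split across two free ranges exactly as the lookahead comments in change_xml() describe.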