├── LICENSE ├── README.md ├── auto_drive ├── auto_drive.py ├── get_drive_uuid.sh └── readme.md ├── chianas ├── __init__.py ├── config_file_updater.py ├── drive_manager.py ├── drive_stats.sh ├── drivemanager_classes.py ├── export │ └── chianas01_export.json ├── move_local_plots.py ├── plot_manager.skel.yaml ├── readme.md ├── requirements.txt ├── system_logging.py ├── templates │ ├── daily_update.html │ ├── index.html │ └── new_plotting_drive.html └── utilities │ ├── auto_drive.py │ ├── get_drive_uuid.sh │ └── kill_nc.sh ├── chiaplot ├── config_file_updater.py ├── export │ └── readme.md ├── logs │ └── readme.md ├── plot_manager.py ├── plot_manager.skel.yaml ├── plotmanager_classes.py ├── readme.md └── system_logging.py ├── chiaplot_drive_temp_report_1.png ├── coin_monitor ├── coin_monitor.py ├── coin_monitor_config ├── logging.yaml ├── logs │ ├── coin_monitor.log │ └── new_coins.log ├── system_info.py ├── system_logging.py └── templates │ └── new_coin.html ├── extras ├── drive_structures │ ├── drive_structures │ ├── drive_structures_36_drives │ ├── drive_structures_45_drives │ ├── drive_structures_60_infinidat_enclosure1 │ ├── drive_structures_60_infinidat_enclosure2 │ ├── drive_structures_60_infinidat_enclosure3 │ ├── drive_structures_60_infinidat_enclosure4 │ ├── drive_structures_60_infinidat_enclosure5 │ ├── drive_structures_60_infinidat_enclosure6 │ ├── drive_structures_60_infinidat_enclosure7 │ ├── drive_structures_60_infinidat_enclosure8 │ ├── drive_structures_81_drives │ └── readme.md ├── install_instructions ├── no-snap.pref ├── nuke_snap.sh ├── readme.md ├── set_cpu_to_performance.sh └── sysctl.conf ├── farmer_health_check ├── farmer_health_check.py ├── farmer_health_config ├── logging.yaml ├── system_info.py └── system_logging.py ├── harvester_health_check ├── harvester_health_check.py ├── harvester_health_config ├── logging.yaml ├── system_info.py └── system_logging.py ├── images ├── chia_auto_drive_output.png ├── chia_new_coin_email.png ├── 
chia_plot_manager.png ├── chia_plot_manager_new.png ├── chia_plot_manager_v2.png ├── chianas_help_menu_0991.png ├── chiaplot_drive_listing.png ├── chiaplot_drive_temp_report_1.png ├── chiaplot_farm_report.png ├── chiaplot_plot_report.png ├── chiaplot_remote_harvester_health_report.png ├── chiaplot_uuid_found.png ├── chiaplot_uuid_not_found.png ├── plot_manager_network.jpg └── readme.md ├── install.sh └── logs └── readme.md /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 rjsears 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /auto_drive/auto_drive.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | 3 | # -*- coding: utf-8 -*- 4 | 5 | __author__ = 'Richard J. 
Sears' 6 | VERSION = "0.92 (2021-06-08)" 7 | 8 | """ 9 | STAND ALONE STAND ALONE STAND ALONE 10 | This script is part of the chia_plot_manager set of scripts. 11 | This is the STAND ALONE version of this script and will work 12 | without needing the rest of the repo. 13 | Script to help automate the addition of new hard drives 14 | to a Chia NAS/Harvester. Please make sure you understand 15 | everything that this script does before using it! 16 | This script is intended to make my life easier. It is ONLY 17 | designed to 1) work with unformatted drives with no existing 18 | partitions, and 2) utilize the directory structure found 19 | in the readme. 20 | This script WILL NOT work if your existing, in-use hard drives 21 | have no partitions, as we would have no way to determine if a drive 22 | is newly added. 23 | It can be modified, of course, to do other things. 24 | 1) Looks for any drive device that is on the system but does not 25 | end in a '1'. For example: /dev/sda1 vs. /dev/sdh - In this case 26 | /dev/sdh has no partition so it likely is not mounted or being 27 | used. 28 | 2) Utilizing the directory structure that I have shared in the 29 | main readme, locates the next available mountpoint to use. 30 | 3) Utilizing sgdisk, creates a new GPT partition on the new 31 | drive. 32 | 4) Formats the drive with the selected filesystem (xfs or ext4). 33 | 5) Verifies that the UUID of the drive does not already exist 34 | in /etc/fstab and if not, adds the correct entry to /etc/fstab. 35 | 6) Mounts the new drive. 36 | 7) Adds the new mountpoint to your chia harvester configuration.
37 | """ 38 | 39 | from glob import glob 40 | from os.path import ismount, abspath, exists 41 | import string 42 | import subprocess 43 | import yaml 44 | from natsort import natsorted 45 | import pathlib 46 | script_path = pathlib.Path(__file__).parent.resolve() 47 | 48 | # Do some housekeeping 49 | # Define some colors for our help message 50 | red='\033[0;31m' 51 | yellow='\033[0;33m' 52 | green='\033[0;32m' 53 | white='\033[0;37m' 54 | blue='\033[0;34m' 55 | nc='\033[0m' 56 | 57 | # Where can we find your Chia Config File? 58 | chia_config_file = '/root/.chia/mainnet/config/config.yaml' 59 | 60 | #Where did we put the get_drive_uuid.sh script: 61 | get_drive_uuid = script_path.joinpath('get_drive_uuid.sh') 62 | 63 | # What filesystem are you using: ext4 or xfs? 64 | file_system = 'xfs' 65 | 66 | 67 | # mark any drives here that you do not want touched at all for any reason, 68 | # formatted or not. 69 | do_not_use_drives = {'/dev/sda'} 70 | 71 | 72 | # Let's get started....... 73 | 74 | def get_next_mountpoint(): 75 | """ 76 | This function looks at the entire directory structure listed 77 | in the path_glob and then checks to see what directories are 78 | mounted and which are not. It then returns that in a sorted 79 | dictionary but only for those directories that are not 80 | mounted. We then return just the first one for use as our 81 | next mountpoint. 82 | abspath(d) = the full path to the directory 83 | ismount(d) = Returns True if abspath(d) is a mountpoint 84 | otherwise returns false. 85 | (d) looks like this: 86 | {'/mnt/enclosure0/front/column0/drive0': True} 87 | Make sure your path already exists and that the 'path_glob' 88 | ends with a `/*`. 89 | Notice that the path_glob does NOT include the actual drive0, 90 | drive1, etc. Your glob needs to be parsed with the *. 
91 | So for this glob: path_glob = '/mnt/enclosure[0-9]/*/column[0-9]/*' 92 | this would be the directory structure: 93 | /mnt 94 | ├── enclosure0 95 | │ ├── front 96 | │ │ ├── column0 97 | │ │ │ ├── drive2 98 | │ │ │ ├── drive3 99 | │ │ │ ├── drive4 100 | │ │ │ └── drive5 101 | ├── enclosure1 102 | │ ├── rear 103 | │ │ ├── column0 104 | │ │ │ ├── drive6 105 | │ │ │ ├── drive7 106 | """ 107 | try: 108 | path_glob = '/mnt/enclosure[0-9]/*/column[0-9]/*' 109 | d = {abspath(d): ismount(d) for d in glob(path_glob)} 110 | return natsorted([p for p in d if not d[p]])[0] 111 | except IndexError: 112 | print(f'\nNo usable {green}Directories{nc} found. Please check your {green}path_glob{nc}, ') 113 | print(f'and verify that your directory structure exists. Also, please make sure your') 114 | print(f'{green}path_glob{nc} ends with a trailing forward slash and an *:') 115 | print(f'Example: /mnt/enclosure[0-9]/*/column[0-9]{green}/*{nc}\n') 116 | 117 | 118 | def get_new_drives(): 119 | """ 120 | This function creates two different lists. "all_drives" which 121 | is a sorted list of all drives as reported by the OS as read from 122 | `/dev/sd*` using glob(). The second is "formatted_drives" which is a list of 123 | all drives that end with a `1` indicating that they have a partition 124 | on them and as such should not be touched. We then strip the `1` off 125 | the back of the "formatted_drives" list and add it to a new set 126 | "formatted_set". We then iterate over the "all_drives" sorted list, 127 | strip any digits off the end, check to see if it exists in the 128 | "formatted_set" and if it does not, then we go ahead and add it 129 | to our final set "unformatted_drives". We then sort that and 130 | return the first drive from that sorted set.
131 | """ 132 | all_drives = sorted(glob('/dev/sd*')) 133 | formatted_drives = list(filter(lambda x: x.endswith('1'), all_drives)) 134 | formatted_set = set() 135 | for drive in formatted_drives: 136 | drive = drive.rstrip(string.digits) 137 | formatted_set.add(drive) 138 | formatted_and_do_not_use = set.union(formatted_set, do_not_use_drives) 139 | unformatted_drives = set() 140 | for drive in all_drives: 141 | drive = drive.rstrip(string.digits) 142 | if drive not in formatted_and_do_not_use: 143 | # if drive not in formatted_set: 144 | unformatted_drives.add(drive) 145 | if unformatted_drives: 146 | return sorted(unformatted_drives)[0] 147 | else: 148 | return False 149 | 150 | 151 | def add_new_drive(): 152 | """ 153 | This is the main function responsible for getting user input and calling all other 154 | functions. I tried to do a lot of error checking, but since we are working with a 155 | drive that `should` have zero data on it, we should not have too much of an issue. 156 | """ 157 | print(f'Welcome to auto_drive.py {blue}Version{nc}:{green}{VERSION}{nc}') 158 | can_we_run() 159 | if not get_new_drives(): 160 | print (f'\nNo new drives found! 
{red}EXITING{nc}\n') 161 | else: 162 | drive = get_new_drives() 163 | mountpoint = get_next_mountpoint() 164 | print (f'\nWe are going to format: {blue}{drive}{nc}, add it to {yellow}/etc/fstab{nc} and mount it at {yellow}{mountpoint}{nc}\n') 165 | format_continue = sanitise_user_input(f'Would you like to {white}CONTINUE{nc}?: {green}YES{nc} or {red}NO{nc} ', range_=('Y', 'y', 'YES', 'yes', 'N', 'n', 'NO', 'no')) 166 | if format_continue in ('Y', 'YES', 'y', 'yes'): 167 | print (f'We will {green}CONTINUE{nc}!') 168 | if sgdisk(drive): 169 | print (f'Drive Partitioning has been completed {green}successfully{nc}!') 170 | else: 171 | print(f'There was a {red}PROBLEM{nc} partitioning your drive, please handle manually!') 172 | exit() 173 | if make_filesystem(drive): 174 | print(f'Drive has been formatted as {file_system} {green}successfully{nc}!') 175 | else: 176 | print(f'There was a {red}PROBLEM{nc} formatting your drive as {green}{file_system}{nc}, please handle manually!') 177 | exit() 178 | if add_uuid_to_fstab(drive): 179 | print(f'Drive added to system {green}successfully{nc}!\n\n') 180 | else: 181 | print(f'There was a {red}PROBLEM{nc} adding your drive to /etc/fstab or mounting, please handle manually!') 182 | exit() 183 | add_to_chia = sanitise_user_input(f'Would you like to add {yellow}{mountpoint}{nc} to your {green}Chia{nc} Config File?: {green}YES{nc} or {red}NO{nc} ', range_=('Y', 'y', 'YES', 'yes', 'N', 'n', 'NO', 'no')) 184 | if add_to_chia in ('Y', 'YES', 'y', 'yes'): 185 | print (f'Adding {yellow}{mountpoint}{nc} to {green}Chia{nc} Configuration File......') 186 | if update_chia_config(mountpoint): 187 | print(f'Mountpoint: {green}{mountpoint}{nc} Successfully added to your Chia Config File') 188 | print(f'\n\nDrive Process Complete - Thank You and have a {red}G{yellow}R{white}E{green}A{blue}T{nc} Day!\n\n') 189 | else: 190 | print(f'\nThere was an {red}ERROR{nc} adding {mountpoint} to {chia_config_file}!') 191 | print(f'You need to 
{yellow}MANUALLY{nc} add or verify that it has been added to your config file,') 192 | print(f'otherwise plots on that drive will {red}NOT{nc} get {green}harvested{nc}!\n') 193 | print(f'\n\nDrive Process Complete - Thank You and have a {red}G{yellow}R{white}E{green}A{blue}T{nc} Day!') 194 | else: 195 | print(f'\n\nDrive Process Complete - Thank You and have a {red}G{yellow}R{white}E{green}A{blue}T{nc} Day!\n\n') 196 | else: 197 | print (f'{yellow}EXITING{nc}!') 198 | exit() 199 | 200 | def sgdisk(drive): 201 | """ 202 | Function to call sgdisk to create the disk partition and set GPT. 203 | """ 204 | try: 205 | print(f'Please wait while we get {blue}{drive}{nc} ready to partition.......') 206 | sgdisk_results = subprocess.run(['sgdisk', '-Z', drive], capture_output=True, text=True) 207 | print (sgdisk_results.stdout) 208 | print (sgdisk_results.stderr) 209 | except subprocess.CalledProcessError as e: 210 | print(f'sgdisk {red}Error{nc}: {e}') 211 | return False 212 | except Exception as e: 213 | print(f'sgdisk: Unknown {red}Error{nc}! Drive not Partitioned') 214 | return False 215 | try: 216 | print(f'Creating partition on {blue}{drive}{nc}.......') 217 | sgdisk_results = subprocess.run(['sgdisk', '-N0', drive], capture_output=True, text=True) 218 | print(sgdisk_results.stdout) 219 | print(sgdisk_results.stderr) 220 | except subprocess.CalledProcessError as e: 221 | print(f'sgdisk {red}Error{nc}: {e}') 222 | return False 223 | except Exception as e: 224 | print(f'sgdisk: Unknown {red}Error{nc}! 
Drive not Partitioned') 225 | return False 226 | try: 227 | print(f'Creating unique {white}UUID{nc} for drive {drive}....') 228 | sgdisk_results = subprocess.run(['sgdisk', '-G', drive], capture_output=True, text=True) 229 | print(sgdisk_results.stdout) 230 | print(sgdisk_results.stderr) 231 | print('') 232 | print(f'Process (sgdisk) {green}COMPLETE{nc}!') 233 | return True 234 | except subprocess.CalledProcessError as e: 235 | print(f'sgdisk {red}Error{nc}: {e}') 236 | return False 237 | except Exception as e: 238 | print(f'sgdisk: Unknown {red}Error{nc}! Drive not Partitioned') 239 | return False 240 | 241 | def make_filesystem(drive): 242 | """ 243 | Formats the new drive to selected filesystem 244 | """ 245 | drive = drive + '1' 246 | try: 247 | print(f'Please wait while we format {blue}{drive}{nc} as {green}{file_system}{nc}.................') 248 | if file_system=='xfs': 249 | mkfs_results = subprocess.run([f'mkfs.xfs', '-q', '-f', drive], capture_output=True, text=True) 250 | elif file_system=='ext4': 251 | mkfs_results = subprocess.run([f'mkfs.ext4', '-F', drive], capture_output=True, text=True) 252 | else: 253 | print(f'{red}ERROR{nc}: Unsupported Filesystem: {file_system}') 254 | return False 255 | print (mkfs_results.stdout) 256 | print (mkfs_results.stderr) 257 | print('') 258 | print(f'Process (mkfs.{file_system}) {green}COMPLETE{nc}!') 259 | return True 260 | except subprocess.CalledProcessError as e: 261 | print(f'mkfs {red}Error{nc}: {e}') 262 | return False 263 | except Exception as e: 264 | print(f'mkfs: Unknown {red}Error{nc}! Drive not Partitioned') 265 | return False 266 | 267 | 268 | def add_uuid_to_fstab(drive): 269 | """ 270 | Uses a little shell script (get_drive_uuid.sh) to get our drive UUID after it have been 271 | formatted. 
272 | """ 273 | drive = drive + '1' 274 | try: 275 | print(f'Please wait while we add {blue}{drive}{nc} to /etc/fstab.......') 276 | uuid_results = subprocess.check_output([get_drive_uuid, drive]).decode('ascii').rstrip() 277 | if uuid_results == '': 278 | print(f'{red}BAD{nc} or {yellow}NO{nc} UUID returned! Please handle this drive manually!') 279 | return False 280 | print(f'Your drive UUID is: {green}{uuid_results}{nc}') 281 | print(f'Verifying that {green}{uuid_results}{nc} does not exist in /etc/fstab') 282 | with open('/etc/fstab') as fstab: 283 | if uuid_results in fstab.read(): 284 | print (f'{red}ERROR!{nc}: {green}{uuid_results}{nc} already exists in /etc/fstab, exiting!') 285 | return False 286 | else: 287 | print (f'UUID: {green}{uuid_results}{nc} does not exist in /etc/fstab, adding it now.') 288 | mountpoint = get_next_mountpoint() 289 | with open ('/etc/fstab', 'a') as fstab_edit: 290 | fstab_edit.write(f'/dev/disk/by-uuid/{uuid_results} {mountpoint} {file_system} defaults,user 0 0 #Added by auto_drive.py\n') 291 | with open('/etc/fstab') as fstab: 292 | if uuid_results in fstab.read(): 293 | print(f'UUID: {green}{uuid_results}{nc} now exists in /etc/fstab, {green}continuing{nc}.....') 294 | else: 295 | print(f'{red}ERROR! {green}{uuid_results}{nc} was not added to /etc/fstab!') 296 | return False 297 | print(f'{blue}{drive}{nc} added to {red}/etc/fstab{nc} and will be mounted at {green}{mountpoint}{nc}') 298 | subprocess.run(['mount', mountpoint], capture_output=True, text=True) 299 | if ismount(mountpoint): 300 | print(f'Drive {blue}{drive}{nc} Successfully Mounted') 301 | else: 302 | print(f'Drive mount {red}FAILED!{nc}.
Please manually check your system.') 303 | return False 304 | return True 305 | except subprocess.CalledProcessError as e: 306 | print(f'uuid error: {e}') 307 | return False 308 | except Exception as e: 309 | print(f'uuid error: {e}') 310 | return False 311 | 312 | 313 | def can_we_run(): 314 | """ 315 | Check to see if the chia configuration file noted above exists, exits if it does not. 316 | Check to see that we are using a supported filesystem. 317 | """ 318 | if exists(chia_config_file): 319 | pass 320 | else: 321 | print(f'\n{red}ERROR{nc} opening {green}{chia_config_file}{nc}\nPlease check your {green}filepath{nc} and try again!\n\n') 322 | exit() 323 | if file_system not in {'ext4', 'xfs'}: 324 | print (f'\n{red}ERROR{nc}: {green}{file_system}{nc} is not a supported filesystem.\nPlease choose {green}ext4{nc} or {green}xfs{nc} and try again!\n\n') 325 | exit() 326 | else: 327 | pass 328 | if exists (get_drive_uuid): 329 | pass 330 | else: 331 | print(f'\n{red}ERROR{nc} opening {green}{get_drive_uuid}{nc}\nPlease check your {green}filepath{nc} and try again!\n\n') 332 | exit() 333 | 334 | 335 | def update_chia_config(mountpoint): 336 | """ 337 | This function adds the new mountpoint to your chia configuration file. 338 | """ 339 | try: 340 | with open(chia_config_file) as f: 341 | chia_config = yaml.safe_load(f) 342 | if mountpoint in chia_config['harvester']['plot_directories']: 343 | print(f'{green}Mountpoint {red}Already{nc} Exists - We will not add it again!\n\n') 344 | return True 345 | else: 346 | chia_config['harvester']['plot_directories'].append(mountpoint) 347 | except IOError: 348 | print(f'{red}ERROR{nc} opening {yellow}{chia_config_file}{nc}! Please check your {yellow}filepath{nc} and try again!\n\n') 349 | return False 350 | try: 351 | with open(chia_config_file, 'w') as f: 352 | yaml.safe_dump(chia_config, f) 353 | return True 354 | except IOError: 355 | print(f'{red}ERROR{nc} opening {yellow}{chia_config_file}{nc}!
Please check your {yellow}filepath{nc} and try again!\n\n') 356 | return False 357 | 358 | def sanitise_user_input(prompt, type_=None, min_=None, max_=None, range_=None): 359 | """ 360 | Quick and simple function to grab user input and make sure it's correct. 361 | """ 362 | if min_ is not None and max_ is not None and max_ < min_: 363 | raise ValueError("min_ must be less than or equal to max_.") 364 | while True: 365 | ui = input(prompt) 366 | if type_ is not None: 367 | try: 368 | ui = type_(ui) 369 | except ValueError: 370 | print("Input type must be {0}.".format(type_.__name__)) 371 | continue 372 | if max_ is not None and ui > max_: 373 | print("Input must be less than or equal to {0}.".format(max_)) 374 | elif min_ is not None and ui < min_: 375 | print("Input must be greater than or equal to {0}.".format(min_)) 376 | elif range_ is not None and ui not in range_: 377 | if isinstance(range_, range): 378 | template = "Input must be between {0.start} and {0.stop}." 379 | print(template.format(range_)) 380 | else: 381 | template = "Input must be {0}." 382 | if len(range_) == 1: 383 | print(template.format(*range_)) 384 | else: 385 | expected = " or ".join(( 386 | ", ".join(str(x) for x in range_[:-1]), 387 | str(range_[-1]) 388 | )) 389 | print(template.format(expected)) 390 | else: 391 | return ui 392 | 393 | 394 | def main(): 395 | add_new_drive() 396 | 397 | 398 | if __name__ == '__main__': 399 | main() 400 | -------------------------------------------------------------------------------- /auto_drive/get_drive_uuid.sh: -------------------------------------------------------------------------------- 1 | #! /bin/bash 2 | blkid -u filesystem $1 | awk -F "[= ]" '{print $3}' |tr -d "\"" 3 | -------------------------------------------------------------------------------- /auto_drive/readme.md: -------------------------------------------------------------------------------- 1 |
5 |
6 |
7 | Multi Server Chia Plot and Drive Management Solution
8 |
9 |
10 | As the title implies, I designed this script to help with adding new hard drives to your Chia Harvester/NAS. This is a command line driven, interactive python script. Please see the top of the script for more information. Also remember that running `auto_drive.py` invokes `sgdisk` and `mkfs`. Make sure the user you are running the
11 | script as has the ability to execute those commands. `sudo` should work in this situation.
12 |
13 | Hopefully, this will be useful for someone!
14 |
15 |
16 |
17 |
18 |
19 | Installation and Usage Instructions:
20 |
21 |
22 | 1) Download a copy of this script and the `get_drive_uuid.sh` shell script and place them in your working directory.
23 |
24 | 2) Edit the auto_drive.py script and alter the line pointing to your chia configuration file:
25 | `chia_config_file = '/root/.chia/mainnet/config/config.yaml'`
26 |
27 | 3) Alter the path_glob line to match your mount point directory structure. This script is specifically set up and tested
28 | with the directory structure that I have laid out in the main readme file for chia_plot_manager. It is possible that
29 | it will work with other structures but it is way beyond my capability to test for every possible directory structure
30 | combination. I would recommend reading and understanding what the `get_next_mountpoint()` function does and then
31 | see if it will work with your directory structure.
32 | `path_glob = '/mnt/enclosure[0-9]/*/column[0-9]/*'`
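A quick way to sanity-check a custom `path_glob` is to exercise the same selection logic that `get_next_mountpoint()` uses, but rooted in a throwaway temp directory. This is only a sketch: the real script globs under `/mnt` and uses the `natsort` module, for which the small `natkey` helper below is a stand-in.

```python
import re
import tempfile
from glob import glob
from os.path import abspath, ismount
from pathlib import Path

def natkey(s):
    # Minimal natural-sort key standing in for natsort.natsorted,
    # so 'drive2' sorts ahead of 'drive10'.
    return [int(t) if t.isdigit() else t for t in re.split(r'(\d+)', s)]

# Build a miniature copy of the expected layout in a temp dir.
root = Path(tempfile.mkdtemp())
for name in ('drive2', 'drive10', 'drive0'):
    (root / 'enclosure0' / 'front' / 'column0' / name).mkdir(parents=True)

# Same shape as the script's glob, just rooted at the temp dir.
path_glob = f'{root}/enclosure[0-9]/*/column[0-9]/*'
d = {abspath(p): ismount(p) for p in glob(path_glob)}
next_mountpoint = sorted((p for p in d if not d[p]), key=natkey)[0]
print(next_mountpoint)  # nothing is mounted, so the lowest-numbered drive wins
```

If the final line raises an `IndexError`, the glob did not match your directory layout, which is exactly the failure mode the script reports.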
33 |
34 | 4) By default the script looks for `get_drive_uuid.sh` in its own directory. If you have installed the shell script somewhere
35 | else, alter the following line to point to it, keeping the script name at the end of the path: `get_drive_uuid = script_path.joinpath('get_drive_uuid.sh')`
36 |
37 | 5) `chmod +x` both the auto_drive.py script and the `get_drive_uuid.sh` script.
38 |
39 | 6) Change to the directory where you have installed this script and run `./auto_drive.py`. If the script finds any new drives it will
40 | then prompt you to accept the drive and mountpoint it plans to use. Once the drive has been readied for the system and mounted
41 | it will ask you if you want it added to your chia configuration file.
42 |
43 |
44 | Requirements........
45 |
46 | The only requirements beyond standard Linux tools are the following:
47 |
48 | 1) Python 3
49 | 2) The `natsort` and `PyYAML` python modules (`pip install natsort pyyaml`)
52 |
53 |
54 |
55 | Notes......
56 |
57 | There are a couple of things to remember with `auto_drive.py`. First, in order to protect any possible data on drives, we will only
58 | look for drives that do not include a partition already on the drive. If you are reusing drives, and they have a partition
59 | already on the drive, `auto_drive.py` will not select any of those drives to work on. The best way to `reuse` a drive is to
60 | manually `fdisk` the drive in question and delete any existing partitions; `auto_drive.py` will then be able to use those
61 | drives.
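The detection rule described above (a device name ending in `1` means a partition exists, so the parent drive is in use) can be sketched in isolation. The device names here are hypothetical stand-ins for what `glob('/dev/sd*')` would return on a live system:

```python
import string

# Hypothetical device listing, as glob('/dev/sd*') might return it:
# sda and sdb carry partitions; sdc is bare and unpartitioned.
all_drives = sorted(['/dev/sda', '/dev/sda1', '/dev/sdb', '/dev/sdb1', '/dev/sdc'])
do_not_use_drives = {'/dev/sda'}  # drives excluded no matter what

# A trailing '1' marks a partition; strip digits to recover the parent device.
formatted_set = {d.rstrip(string.digits) for d in all_drives if d.endswith('1')}

# Whatever is left over is a bare drive the script may claim.
candidates = sorted({d.rstrip(string.digits) for d in all_drives}
                    - formatted_set - do_not_use_drives)
print(candidates[0] if candidates else False)  # /dev/sdc
```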
62 |
63 | Second, I have determined that drives that have previously had VMFS partitions on them require special handling. `SGDisk` does not
64 | appear to have the ability to overwrite that partition information. In fact, `fdisk` seems to have an issue as well. After multiple
65 | attempts to figure this little problem out, I found that this is the best solution:
66 |
67 | 1) Using `fdisk`, access the device in question.
68 | 2) Delete any existing partition and issue the `w` command.
69 | 3) Rerun the `fdisk` command on the disk, utilize the `g` command to create a new GPT signature.
70 | 4) Create a new partition by using the `n` command.
71 | 5) When you go to save the partition with the `w` command, you should get a notice that there is an existing VMFS partition.
72 | 6) Answer `y` to this question and issue the `w` command, exiting `fdisk`.
73 | 7) Once again, rerun `fdisk` and delete this new partition and issue the `w` command.
74 | 8) You should now be ready to use `auto_drive.py` with that drive.
75 |
76 | Finally, `auto_drive.py` will not automatically add your new mountpoint to your chia config file. It will first ask
77 | you if you want it to do this. This is just an added step in the process to make sure you are aware of what the script is
78 | doing before it does it. If you choose not to have `auto_drive.py` add the mountpoint to your chia config, be sure to do it
79 | on your own otherwise your harvester will not see any plots on that mountpoint.
80 |
81 |
82 | Troubleshooting Auto_Drive......
83 |
84 | As I noted above, the main issues I have run into running `auto_drive.py` on my and my friend's systems revolve around the
85 | way a `used` drive may have been partitioned. In the event we cannot correctly manage a drive, `auto_drive.py` may leave
86 | your system in a state that you will need to correct before retrying the operation. For example, if the drive is not partitioned
87 | correctly due to a `VMFS`, `ZFS`, or other type of partition, the `UUID` will be incorrect causing `auto_drive.py` to report an error mounting
88 | the drive due to an incorrect `uuid`. In this case, you would need to:
89 |
90 | 1) Correct the partitioning problem and verify that the partition has been removed with `lsblk`
91 | 2) Remove the incorrect entry from `/etc/fstab`. Any entries added by `auto_drive.py` will be noted as such.
92 | 3) Run `fdisk /dev/xxx` against the drive in question. Remove `all` partitions and `w`.
93 | 4) Run `fdisk /dev/xxx` (yes again), create a new partition and `w`, answer `yes` when it warns you about existing signature.
94 | 5) Rerun `auto_drive.py` again allowing it to reselect the drive in question.
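For step 2, every fstab line written by the script ends with a fixed comment, so the entries are easy to locate and delete (the UUID below is a placeholder):

```
/dev/disk/by-uuid/<uuid> /mnt/enclosure0/front/column0/drive0 xfs defaults,user 0 0 #Added by auto_drive.py
```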
95 |
96 | I am currently working on a solution to this issue, but I am having difficulty finding a suitable tool to whack the drive wholesale.
97 |
98 |
99 | Beyond that, other issues could be related to the person running the script. Make sure you have sufficient user privileges
100 | to run the following commands:
101 |
102 | 1) mkfs.xfs or mkfs.ext4
103 | 2) get_drive_uuid.sh
104 | 3) sgdisk
105 | 4) mount (although we do use the `user` option on our mount point so this should never be an issue).
106 | 5) Make sure the user running the script has write access to your chia configuration file.
107 |
108 | One last thing, if you get an error adding the mountpoint to your chia configuration file and this is the `first`
109 | drive you have added, please verify that your plot_directories entry in your chia configuration file looks like
110 | the following. If it does not have the trailing `[]` it will not work:
111 |
112 | `plot_directories: []`
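After one or more drives have been added, the same section of your chia configuration file should look something like this (mountpoints illustrative):

```yaml
harvester:
  plot_directories:
  - /mnt/enclosure0/front/column0/drive0
  - /mnt/enclosure0/front/column0/drive1
```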
113 |
114 |
115 |
116 |
117 |
118 |
119 | Enjoy!!
120 |
--------------------------------------------------------------------------------
/chianas/__init__.py:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/chianas/config_file_updater.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | Part of drive_manager. Checks to see if our config file needs updating and updated it for us.
6 | """
7 | VERSION = "V0.98 (2021-10-15)"
8 |
9 | import os
10 | import yaml
11 | from pathlib import Path
12 | import logging
13 | from system_logging import setup_logging
14 | from shutil import copyfile
15 | from flatten_dict import flatten
16 | from flatten_dict import unflatten
17 | from datetime import datetime
18 |
19 | script_path = Path(__file__).parent.resolve()
20 |
21 | # Date and Time Stuff
22 | current_military_time = datetime.now().strftime('%Y%m%d%H%M%S')
23 |
24 | config_file = (str(Path.home()) + '/.config/plot_manager/plot_manager.yaml')
25 | skel_config_file = script_path.joinpath('plot_manager.skel.yaml')
26 |
27 | # Define some colors for our help message
28 | red='\033[0;31m'
29 | yellow='\033[0;33m'
30 | green='\033[0;32m'
31 | white='\033[0;37m'
32 | blue='\033[0;34m'
33 | nc='\033[0m'
34 |
35 |
36 | # Setup Module logging. Main logging is configured in system_logging.py
37 | setup_logging()
38 | with open(config_file, 'r') as config:
39 | server = yaml.safe_load(config)
40 | level = server['log_level']
41 | level = logging._checkLevel(level)
42 | log = logging.getLogger('config_file_updater.py')
43 | log.setLevel(level)
44 |
45 |
46 | def config_file_update():
47 | """
48 | Function to determine if we need to update our yaml configuration file after an upgrade.
49 | """
50 | log.debug('config_file_update() Started....')
51 | if os.path.isfile(skel_config_file):
52 | log.debug('New config SKEL file located, checking to see if an update is needed:')
53 | with open(config_file, 'r') as current_config:
54 | current_config = yaml.safe_load(current_config)
55 | with open(skel_config_file, 'r') as temp_config:
56 | temp_config = yaml.safe_load(temp_config)
57 | temp_current_config = flatten(current_config)
58 | temp_temp_config = flatten(temp_config)
59 | updates = (dict((k, v) for k, v in temp_temp_config.items() if k not in temp_current_config))
60 | if updates != {}:
61 | copyfile(skel_config_file, (str(Path.home()) + '/.config/plot_manager/Config_Instructions.yaml'))
62 | copyfile(config_file, (str(Path.home()) + f'/.config/plot_manager/plot_manager.yaml.{current_military_time}'))
63 | temp_current_config.update(updates)
64 | new_config = (dict((k, v) for k, v in temp_current_config.items() if k in temp_temp_config))
65 | else:
66 | new_config = (dict((k, v) for k, v in temp_current_config.items() if k not in temp_temp_config))
67 | if new_config != {}:
68 | new_config = (dict((k, v) for k, v in temp_current_config.items() if k in temp_temp_config))
69 | current_config = unflatten(new_config)
70 | current_config.update({'configured': False})
71 | with open((str(Path.home()) + '/.config/plot_manager/plot_manager.yaml'), 'w') as f:
72 | yaml.safe_dump(current_config, f)
73 | log.debug('Your config files needs updating!')
74 | log.debug(f'Config File: {config_file} updated. Update as necessary to run chia_plot_manager.')
75 | exit()
76 | else:
77 | log.debug('No config file changes necessary! No changes made.')
78 | log.debug(f'{skel_config_file} has been deleted!')
79 | os.remove(skel_config_file)
80 | else:
81 | log.debug('New configuration file not found. No changes made.')
82 |
83 |
84 |
85 | def main():
86 | print(f'Welcome to {green}config_file_updater.py{nc} {blue}VERSION: {nc}{VERSION}')
87 | config_file_update()
88 |
89 |
90 | if __name__ == '__main__':
91 | main()
92 |
--------------------------------------------------------------------------------
/chianas/drive_stats.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 |
3 | # Checks to see if 'move_local_plots.py' is running and is so
4 | # how much drive IO is being used.
5 |
6 | /usr/bin/pidstat -G move_local_plots.py -dlhH --human 5 1 > drive_stats.io
7 |
--------------------------------------------------------------------------------
/chianas/drivemanager_classes.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | Part of drive_manager. These classes are for reading and updating our yaml
6 | config file.
7 | """
8 |
9 | VERSION = "V0.98 (2021-10-15)"
10 |
11 | import os
12 | import yaml
13 | from pathlib import Path
14 | import logging
15 | from system_logging import setup_logging
16 | import psutil
17 | from shutil import copyfile
18 | from flatten_dict import flatten
19 | from flatten_dict import unflatten
20 | from datetime import datetime
21 | from glob import glob
22 | import os
23 | from os.path import abspath
24 | from natsort import natsorted
25 |
26 | script_path = Path(__file__).parent.resolve()
27 |
28 | # Date and Time Stuff
29 | current_military_time = datetime.now().strftime('%Y%m%d%H%M%S')
30 |
31 | config_file = (str(Path.home()) + '/.config/plot_manager/plot_manager.yaml')
32 | skel_config_file = script_path.joinpath('plot_manager.skel.yaml')
33 |
34 | # Setup Module logging. Main logging is configured in system_logging.py
35 | setup_logging()
36 | with open(config_file, 'r') as config:
37 | server = yaml.safe_load(config)
38 | level = server['log_level']
39 | level = logging._checkLevel(level)
40 | log = logging.getLogger('drivemanager_classes.py')
41 | log.setLevel(level)
42 |
43 | class DriveManager:
44 | if not os.path.isfile(config_file):
45 | log.debug(f'Plot_Manager config file does not exist at: {config_file}')
46 | log.debug("Please check file path and try again.")
47 | exit()
48 | else:
49 | def __init__(self, configured, hostname, drive_temperature_limit, pools, replace_non_pool_plots, fill_empty_drives_first, empty_drives_low_water_mark, chia_log_file, chia_config_file,
50 | remote_harvester_reports, remote_harvesters, notifications, pb, email, sms, daily_update, farm_update, new_plot_drive, per_plot,
51 | local_plotter, temp_dirs, temp_dirs_critical, temp_dirs_critical_alert_sent, dst_dirs, dst_dirs_critical,
52 | dst_dirs_critical_alert_sent, warnings, emails, phones, twilio_from, twilio_account, twilio_token, pb_api,
53 | current_internal_drive, current_plotting_drive, total_plot_highwater_warning, total_plots_alert_sent, plot_receive_interface_threshold,
54 | current_total_plots_midnight, current_total_plots_daily, offlined_drives, logging, log_level, plot_receive_interface,
55 | current_portable_plots_midnight, current_portable_plots_daily, current_plot_replacement_drive, local_move_error, local_move_error_alert_sent):
56 | self.configured = configured
57 | self.hostname = hostname
58 | self.drive_temperature_limit = drive_temperature_limit
59 | self.pools = pools
60 | self.replace_non_pool_plots = replace_non_pool_plots
61 | self.fill_empty_drives_first = fill_empty_drives_first
62 | self.empty_drives_low_water_mark = empty_drives_low_water_mark
63 | self.chia_log_file = chia_log_file
64 | self.chia_config_file = chia_config_file
65 | self.remote_harvester_reports = remote_harvester_reports
66 | self.remote_harvesters = remote_harvesters
67 | self.notifications = notifications
68 | self.pb = pb
69 | self.email = email
70 | self.sms = sms
71 | self.daily_update = daily_update
72 | self.farm_update = farm_update
73 | self.new_plot_drive = new_plot_drive
74 | self.per_plot = per_plot
75 | self.warnings = warnings
76 | self.emails = emails
77 | self.phones = phones
78 | self.twilio_from = twilio_from
79 | self.twilio_account = twilio_account
80 | self.twilio_token = twilio_token
81 | self.pb_api = pb_api
82 | self.local_plotter = local_plotter
83 | self.temp_dirs = temp_dirs
84 | self.temp_dirs_critical = temp_dirs_critical
85 | self.temp_dirs_critical_alert_sent = temp_dirs_critical_alert_sent
86 | self.dst_dirs = dst_dirs
87 | self.dst_dirs_critical = dst_dirs_critical
88 | self.dst_dirs_critical_alert_sent = dst_dirs_critical_alert_sent
89 | self.current_internal_drive = current_internal_drive
90 | self.current_plotting_drive = current_plotting_drive
91 | self.total_plot_highwater_warning = total_plot_highwater_warning
92 | self.total_plots_alert_sent = total_plots_alert_sent
93 | self.current_portable_plots_midnight = current_portable_plots_midnight
94 | self.current_portable_plots_daily = current_portable_plots_daily
95 | self.current_total_plots_midnight = current_total_plots_midnight
96 | self.current_total_plots_daily = current_total_plots_daily
97 | self.offlined_drives = offlined_drives
98 | self.logging = logging
99 | self.log_level = log_level
100 | self.plot_receive_interface = plot_receive_interface
101 | self.plot_receive_interface_threshold = plot_receive_interface_threshold
102 | self.current_plot_replacement_drive = current_plot_replacement_drive
103 | self.local_move_error = local_move_error
104 | self.local_move_error_alert_sent = local_move_error_alert_sent
105 |
106 | @classmethod
107 | def read_configs(cls):
108 | with open (config_file, 'r') as config:
109 | server = yaml.safe_load(config)
110 | return cls(
111 | configured=server['configured'],
112 | hostname=server['hostname'],
113 | drive_temperature_limit=server['drive_temperature_limit'],
114 | pools=server['pools']['active'],
115 | replace_non_pool_plots=server['pools']['replace_non_pool_plots'],
116 | fill_empty_drives_first=server['pools']['fill_empty_drives_first'],
117 | empty_drives_low_water_mark=server['pools']['empty_drives_low_water_mark'],
118 | chia_log_file=server['chia_log_file'],
119 | chia_config_file=server['chia_config_file'],
120 | remote_harvester_reports=server['remote_harvester_reports']['active'],
121 | remote_harvesters=server['remote_harvester_reports']['remote_harvesters'],
122 | notifications=server['notifications']['active'],
123 | pb=server['notifications']['methods']['pb'],
124 | email=server['notifications']['methods']['email'],
125 | sms=server['notifications']['methods']['sms'],
126 | daily_update=server['notifications']['types']['daily_update'],
127 | farm_update=server['notifications']['types']['farm_update'],
128 | new_plot_drive=server['notifications']['types']['new_plot_drive'],
129 | per_plot=server['notifications']['types']['per_plot'],
130 | warnings=server['notifications']['types']['warnings'],
131 | emails=server['notifications']['emails'],
132 | phones=server['notifications']['phones'],
133 | twilio_from=server['notifications']['accounts']['twilio']['from'],
134 | twilio_account=server['notifications']['accounts']['twilio']['account'],
135 | twilio_token=server['notifications']['accounts']['twilio']['token'],
136 | pb_api=server['notifications']['accounts']['pushBullet']['api'],
137 | local_plotter=server['local_plotter']['active'],
138 | temp_dirs=server['local_plotter']['temp_dirs']['dirs'],
139 | temp_dirs_critical=server['local_plotter']['temp_dirs']['critical'],
140 | temp_dirs_critical_alert_sent=server['local_plotter']['temp_dirs']['critical_alert_sent'],
141 | dst_dirs=server['local_plotter']['dst_dirs']['dirs'],
142 | dst_dirs_critical=server['local_plotter']['dst_dirs']['critical'],
143 | dst_dirs_critical_alert_sent=server['local_plotter']['dst_dirs']['critical_alert_sent'],
144 | current_internal_drive=server['local_plotter']['current_internal_drive'],
145 | current_plotting_drive=server['harvester']['current_plotting_drive'],
146 | total_plot_highwater_warning=server['harvester']['total_plot_highwater_warning'],
147 | total_plots_alert_sent=server['harvester']['total_plots_alert_sent'],
148 | current_total_plots_midnight=server['harvester']['current_total_plots_midnight'],
149 | current_total_plots_daily=server['harvester']['current_total_plots_daily'],
150 | current_portable_plots_midnight=server['pools']['current_portable_plots_midnight'],
151 | current_portable_plots_daily=server['pools']['current_portable_plots_daily'],
152 | offlined_drives=server['harvester']['offlined_drives'],
153 | logging=server['logging'],
154 | log_level=server['log_level'],
155 | plot_receive_interface=server['plot_receive_interface'],
156 | plot_receive_interface_threshold=server['plot_receive_interface_threshold'],
157 | current_plot_replacement_drive=server['pools']['current_plot_replacement_drive'],
158 | local_move_error=server['local_plotter']['local_move_error'],
159 | local_move_error_alert_sent=server['local_plotter']['local_move_error_alert_sent'])
160 |
161 |
162 | def toggle_notification(self, notification):
163 | if getattr(self, notification):
164 | with open (config_file) as f:
165 | server = yaml.safe_load(f)
166 | server['notifications']['methods'][notification] = False
167 | with open(config_file, 'w') as f:
168 | yaml.safe_dump(server, f)
169 | else:
170 | with open(config_file) as f:
171 | server = yaml.safe_load(f)
172 | server['notifications']['methods'][notification] = True
173 | with open(config_file, 'w') as f:
174 | yaml.safe_dump(server, f)
175 |
176 | def set_local_move_error(self):
177 | if getattr(self, 'local_move_error'):
178 | log.debug('local_move_error already set to True! Nothing to do here.')
179 | pass
180 | else:
181 | with open(config_file) as f:
182 | server = yaml.safe_load(f)
183 | server['local_plotter']['local_move_error'] = True
184 | with open(config_file, 'w') as f:
185 | yaml.safe_dump(server, f)
186 | log.debug('local_move_error toggled to True!')
187 |
188 |
189 |
190 | def set_notification(self, notification, value):
191 | if getattr(self, notification) == value:
192 | pass
193 | else:
194 | with open(config_file) as f:
195 | server = yaml.safe_load(f)
196 | server['notifications']['methods'][notification] = value
197 | with open(config_file, 'w') as f:
198 | yaml.safe_dump(server, f)
199 |
200 | def update_current_plotting_drive(self, new_drive):
201 | with open(config_file) as f:
202 | server = yaml.safe_load(f)
203 | server['harvester']['current_plotting_drive'] = new_drive
204 | with open(config_file, 'w') as f:
205 | yaml.safe_dump(server, f)
206 |
207 | def update_current_internal_drive(self, new_drive):
208 | with open(config_file) as f:
209 | server = yaml.safe_load(f)
210 | server['local_plotter']['current_internal_drive'] = new_drive
211 | with open(config_file, 'w') as f:
212 | yaml.safe_dump(server, f)
213 |
214 | def update_current_plot_replacement_drive(self, new_drive):
215 | with open(config_file) as f:
216 | server = yaml.safe_load(f)
217 | server['pools']['current_plot_replacement_drive'] = new_drive
218 | with open(config_file, 'w') as f:
219 | yaml.safe_dump(server, f)
220 |
221 | def update_current_total_plots_midnight(self, type, plots):
222 | with open(config_file) as f:
223 | server = yaml.safe_load(f)
224 | if type == 'old':
225 | server['harvester']['current_total_plots_midnight'] = plots
226 | else:
227 | server['pools']['current_portable_plots_midnight'] = plots
228 | with open(config_file, 'w') as f:
229 | yaml.safe_dump(server, f)
230 |
231 | def update_current_total_plots_daily(self, type, plots):
232 | with open(config_file) as f:
233 | server = yaml.safe_load(f)
234 | if type == 'old':
235 | server['harvester']['current_total_plots_daily'] = plots
236 | else:
237 | server['pools']['current_portable_plots_daily'] = plots
238 | with open(config_file, 'w') as f:
239 | yaml.safe_dump(server, f)
240 |
241 |
242 | def onoffline_drives(self, onoffline, drive):
243 | if onoffline == 'offline':
244 | with open(config_file) as f:
245 | server = yaml.safe_load(f)
246 | server['harvester']['offlined_drives'].append(drive)
247 | with open(config_file, 'w') as f:
248 | yaml.safe_dump(server, f)
249 | else:
250 | with open(config_file) as f:
251 | server = yaml.safe_load(f)
252 | server['harvester']['offlined_drives'].remove(drive)
253 | with open(config_file, 'w') as f:
254 | yaml.safe_dump(server, f)
255 |
256 | def temp_dir_usage(self):
257 | temp_dir_usage = {}
258 | for dir in self.temp_dirs:
259 | usage = psutil.disk_usage(dir)
260 | temp_dir_usage[dir] = int(usage.percent)
261 | return temp_dir_usage
262 |
263 | def get_critical_temp_dir_usage(self):
264 | paths = self.temp_dir_usage()
265 | return dict((k, v) for k, v in paths.items() if v > self.temp_dirs_critical)
266 |
267 | def dst_dir_usage(self):
268 | dst_dir_usage = {}
269 | for dir in self.dst_dirs:
270 | usage = psutil.disk_usage(dir)
271 | dst_dir_usage[dir] = int(usage.percent)
272 | return dst_dir_usage
273 |
274 | def get_critical_dst_dir_usage(self):
275 | paths = self.dst_dir_usage()
276 | return dict((k, v) for k, v in paths.items() if v > self.dst_dirs_critical)
277 |
278 |
279 | def toggle_alert_sent(self, alert):
280 | if alert == 'temp_dirs_critical_alert_sent':
281 | if getattr(self, alert):
282 | with open (config_file) as f:
283 | server = yaml.safe_load(f)
284 | server['local_plotter']['temp_dirs']['critical_alert_sent'] = False
285 | with open(config_file, 'w') as f:
286 | yaml.safe_dump(server, f)
287 | else:
288 | with open(config_file) as f:
289 | server = yaml.safe_load(f)
290 | server['local_plotter']['temp_dirs']['critical_alert_sent'] = True
291 | with open(config_file, 'w') as f:
292 | yaml.safe_dump(server, f)
293 |
294 | elif alert == 'dst_dirs_critical_alert_sent':
295 | if getattr(self, alert):
296 | with open (config_file) as f:
297 | server = yaml.safe_load(f)
298 | server['local_plotter']['dst_dirs']['critical_alert_sent'] = False
299 | with open(config_file, 'w') as f:
300 | yaml.safe_dump(server, f)
301 | else:
302 | with open(config_file) as f:
303 | server = yaml.safe_load(f)
304 | server['local_plotter']['dst_dirs']['critical_alert_sent'] = True
305 | with open(config_file, 'w') as f:
306 | yaml.safe_dump(server, f)
307 |
308 | elif alert == 'total_plots_alert_sent':
309 | if getattr(self, alert):
310 | with open(config_file) as f:
311 | server = yaml.safe_load(f)
312 | server['harvester']['total_plots_alert_sent'] = False
313 | with open(config_file, 'w') as f:
314 | yaml.safe_dump(server, f)
315 | else:
316 | with open(config_file) as f:
317 | server = yaml.safe_load(f)
318 | server['harvester']['total_plots_alert_sent'] = True
319 | with open(config_file, 'w') as f:
320 | yaml.safe_dump(server, f)
321 |
322 | elif alert == 'local_move_error_alert_sent':
323 | if getattr(self, alert):
324 | with open(config_file) as f:
325 | server = yaml.safe_load(f)
326 | server['local_plotter']['local_move_error_alert_sent'] = False
327 | with open(config_file, 'w') as f:
328 | yaml.safe_dump(server, f)
329 | log.debug('local_move_error_alert_sent set to FALSE')
330 | else:
331 | with open(config_file) as f:
332 | server = yaml.safe_load(f)
333 | server['local_plotter']['local_move_error_alert_sent'] = True
334 | with open(config_file, 'w') as f:
335 | yaml.safe_dump(server, f)
336 | log.debug('local_move_error_alert_sent set to TRUE')
337 |
338 | class PlotManager:
339 | def __init__(self, plots_to_replace, number_of_old_plots, number_of_portable_plots, plot_drive,
340 | next_plot_to_replace, next_local_plot_to_replace, local_plot_drive):
341 | self.plots_to_replace = plots_to_replace
342 | self.number_of_old_plots = number_of_old_plots
343 | self.number_of_portable_plots = number_of_portable_plots
344 | self.plot_drive = plot_drive
345 | self.next_plot_to_replace = next_plot_to_replace
346 | self.next_local_plot_to_replace = next_local_plot_to_replace
347 | self.local_plot_drive = local_plot_drive
348 |
349 | @classmethod
350 | def get_plot_info(cls):
351 | return cls(
352 | plots_to_replace = number_of_plots_in_system('old')[1],
353 | number_of_old_plots = number_of_plots_in_system('old')[0],
354 | number_of_portable_plots = number_of_plots_in_system('portable')[0],
355 | plot_drive = os.path.dirname(str(get_next_plot_replacement('old')[0])),
356 | next_plot_to_replace= str(get_next_plot_replacement('old')[0]),
357 | next_local_plot_to_replace = str(get_next_plot_replacement('local')[0]),
358 | local_plot_drive = os.path.dirname(str(get_next_plot_replacement('local')[0])))
359 |
360 | def get_next_plot_replacement(type):
361 | if type == 'old':
362 | try:
363 | file_path_glob = '/mnt/enclosure[0-9]/*/column[0-9]/*/plot-*'
364 | d = {abspath(d): d for d in glob(file_path_glob)}
365 | old_plot_count = len(d)
366 | return natsorted([p for p in d])[0], old_plot_count
367 | except IndexError:
368 | return False, 0
369 | elif type == 'portable':
370 | try:
371 | file_path_glob = '/mnt/enclosure[0-9]/*/column[0-9]/*/portable*'
372 | d = {abspath(d): d for d in glob(file_path_glob)}
373 | old_plot_count = len(d)
374 | return natsorted([p for p in d])[0], old_plot_count
375 | except IndexError:
376 | return False, 0
377 | else:
378 | try:
379 | file_path_glob = '/mnt/enclosure[0-9]/*/column[0-9]/*/plot-*'
380 | d = {abspath(d): d for d in glob(file_path_glob)}
381 | old_plot_count = len(d)
382 | return natsorted([p for p in d], reverse=True)[0], old_plot_count
383 | except IndexError:
384 | return False, 0
385 |
386 | def number_of_plots_in_system(type):
387 | if type == 'old':
388 | plots_left = get_next_plot_replacement('old')
389 | if not plots_left[0]:
390 | return 0, False
391 | else:
392 | return plots_left[1], True
393 | else:
394 | plots_left = get_next_plot_replacement('portable')
395 | if not plots_left[0]:
396 | return 0, False
397 | else:
398 | return plots_left[1], True
399 |
400 |
401 | def main():
402 | print("Not intended to be run directly.")
403 | print("This is the systemwide DriveManager Class module.")
404 | print("It is called by other modules.")
405 | exit()
406 |
407 | if __name__ == '__main__':
408 | main()
409 |
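`get_next_plot_replacement()` above relies on natural sorting so that, for example, `drive10` sorts after `drive9` rather than right after `drive1`. A minimal stand-in for `natsort.natsorted`, with hypothetical plot paths, shows why that matters when picking the next plot to replace:

```python
import re

def natural_key(s):
    """Split digit runs out so numeric parts compare as integers
    (a simplified stand-in for natsort.natsorted)."""
    return [int(t) if t.isdigit() else t for t in re.split(r'(\d+)', s)]

plots = ['/mnt/enclosure0/drive10/plot-b',
         '/mnt/enclosure0/drive9/plot-a',
         '/mnt/enclosure0/drive2/plot-c']

# Plain lexicographic sort puts drive10 before drive2 and drive9:
assert sorted(plots)[0] == '/mnt/enclosure0/drive10/plot-b'

# Natural sort orders drive2 < drive9 < drive10:
next_plot = sorted(plots, key=natural_key)[0]
# next_plot == '/mnt/enclosure0/drive2/plot-c'
```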
--------------------------------------------------------------------------------
/chianas/export/chianas01_export.json:
--------------------------------------------------------------------------------
1 | #
2 |
--------------------------------------------------------------------------------
/chianas/plot_manager.skel.yaml:
--------------------------------------------------------------------------------
1 | # v0.98 2021-10-15
2 | # Once you have made the necessary modifications to this file, change this to
3 | # True.
4 | configured: False
5 |
6 | # Enter the hostname of this server:
7 | hostname: chianas01
8 |
9 | # Enter the name (as shown by ifconfig) of the interface that you RECEIVE plots on
10 | # from your plotters. This is used to check for network traffic to prevent multiple
11 | # plots from being transferred at the same time.
12 | plot_receive_interface: eth0
13 |
14 | # This is a number that represents at what percentage overall utilization of the above
15 | # interface we will assume that a plot transfer is taking place. You should really TEST
16 | # this to make sure it works for your needs. If you have a dedicated interface to move
17 | # plots, then it can be set very low (1 to 2), however if you have a shared interface,
18 | # you should test while a plot transfer is running and set it to what number makes sense.
19 | # To test simply run the following command and look at the very last number:
20 | # /usr/bin/sar -n DEV 1 50 | egrep eth0
21 | plot_receive_interface_threshold: 2
22 |
23 | # Set the max hard drive temperature limit that you wish to see on your hard drives. This
24 | # is used to format the Drive Temperature Report: ./drive_manager.py -ct
25 | drive_temperature_limit: 30
26 |
27 | # Are we plotting for pools? This has nothing to do with the actual plotting of
28 | # plots but rather just naming of the new plots and eventually the replacing of
29 | # old plots with portable plots.
30 | pools:
31 | active: False
32 | # Do we want to replace non-pool plots with new plots?
33 | replace_non_pool_plots: True
34 | # Should we fill up empty drive space before replacing old non-pool plots?
35 | fill_empty_drives_first: True
36 | # When we get below this number of plots available on the system
37 | # we will switch to replacing plots. Has no effect if active: False is
38 | # set above
39 | empty_drives_low_water_mark: 100
40 | # How many Portable Plots per day are we generating
41 | current_portable_plots_daily: 0
42 | # What is our Midnight portable plot count?
43 | current_portable_plots_midnight: 1
44 | # What drive are we currently using to replace plots?
45 | current_plot_replacement_drive: /mnt/enclosure0/front/column0/drive4
46 |
47 | # Enter Logging Information
48 | logging: True
49 | log_level: DEBUG
50 |
51 |
52 | # Where is your chia log file located? Remember to set the logging level
53 | # in your chia config to INFO. By default, it is set to WARNING.
54 | chia_log_file: not_set
55 | chia_config_file: not_set
56 |
57 | # If you are running multiple remote harvesters, set this to true
58 | # and enter their hostnames below. These hostnames should NOT
59 | # include your local hostname listed above. Also, these hostname
60 | # should be configured for passwordless ssh and should be configured
61 | # such that when you ping the hostname, it goes across the fastest
62 | # interface you have between these harvesters. Set to True if you
63 | # have multiple harvesters and want combined reporting.
64 | remote_harvester_reports:
65 | active: False
66 | remote_harvesters:
67 | - chianas02
68 | - chianas03
69 |
70 | # This is the local drive where we store inbound plots from our
71 | # main plotter. Also stores information about our current plots
72 | # on our server. The total plot high water warning is the number
73 | # of plots left when the alert will be sent. When you have LESS
74 | # than this number of plots, you will get an alert.
75 | harvester:
76 | current_plotting_drive: /mnt/enclosure1/front/column1/drive36
77 | current_total_plots_midnight: 1
78 | current_total_plots_daily: 1
79 | total_plot_highwater_warning: 300
80 | total_plots_alert_sent: False
81 |
82 | # List of 'offlined' drives that we do not want plots written to
83 | # for any reason. In this case maybe 'drive0' and 'drive1' are our
84 | # OS drives, or maybe they are throwing errors and we don't want to
85 | # use them until they are replaced. If you have no offlined drives,
86 | # this line should look like this: offlined_drives: []
87 | offlined_drives:
88 | - drive0
89 | - drive1
90 |
91 | # I use Plotman to do my plotting, but this should work for anything. This
92 | # has NOTHING to do with setting up your plotting configuration and is
93 | # only used for monitoring drive space for notifications. Set to True if
94 | # locally plotting and configure the rest of the settings.
95 | local_plotter:
96 | active: False
97 |
98 | # Make sure to use the mountpoint
99 | temp_dirs:
100 | dirs:
101 | - /mnt/nvme_drive0
102 | - /mnt/nvme_drive1
103 | # What critical usage % should we send an error? Do not make this too low
104 | # or you will get nuisance reports.
105 | critical: 99
106 | critical_alert_sent: False
107 |
108 | # This is the directory that you are using for your plots. If you will be
109 | # utilizing the integrated 'move_local_plots.py' scripts, this is usually
110 | # just a single drive. The plots are then moved out of this directory to
111 | # their final resting place on the harvester. move_local_plots.py is
112 | # currently only written to support a single drive here.
113 | dst_dirs:
114 | dirs:
115 | - /mnt/enclosure1/rear/column3/drive79
116 |
117 | # At what % utilization do we send an error?
118 | critical: 95
119 | critical_alert_sent: False
120 |
121 | # This is the current internal drive we are using to store plots moved off
122 | # of dst_dir above. This is not the same drive we use for storing plots
123 | # coming from an outside plotter. We use a different drive to prevent
124 | # drive IO saturation.
125 | current_internal_drive: /mnt/enclosure1/front/column3/drive59
126 |
127 | # During local moves where we are replacing plots, it is very important that
128 | # we stop all local processing if we detect an error, otherwise we could delete
129 | # a bunch of plots without meaning to, each time our script is run. This error is
130 | # set if we encounter an error and must be MANUALLY unset to continue to process
131 | # local plots if you have chosen to replace old plots:
132 | local_move_error: False
133 | # Once we get a local move error, did we send an alert?
134 | local_move_error_alert_sent: False
135 |
136 | # This is where we set up our notifications
137 | notifications:
138 | active: True
139 | methods:
140 | pb: False
141 | email: True
142 | sms: False
143 | types:
144 | new_plot_drive: True
145 | daily_update: True
146 | farm_update: True
147 | per_plot: False
148 | warnings: True
149 | # What email addresses will get the emails?
150 | emails:
151 | - someone@gmail.com
152 | - someoneelse@gmail.com
153 |
154 | # What phone numbers will receive the SMS text messages? Include '+1'
155 | phones:
156 | - '+18584150987'
157 |
158 | # These are your notification account settings. For email, you must configure
159 | # your local MTA. The installer installs Postfix by default. Twilio (SMS) requires
160 | # a paid account, PushBullet is free.
161 | accounts:
162 | twilio:
163 | from: '+18587491119'
164 | account: your_account_key
165 | token: your_account_token
166 | pushBullet:
167 | api: your_account_api
168 |
--------------------------------------------------------------------------------
/chianas/readme.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | Chia Plot, Drive Manager, Coin Monitor & Auto Drive (V0.98 - October 15th, 2021)
5 |
6 |
11 |
12 |
13 | ### Basic Installation and Configuration Instructions
14 |
15 | It would be virtually impossible to account for every single configuration for each person's server farm setup, but many people are running this code on many different kinds
16 | of configurations. If you have issues, please open an issue and I will try my best to work through them with you.
17 |
18 | In order to run this code, you should start with the main readme one level above this one. It covers things such as the Overview and Theory of operation, how the directory
19 | structures were designed, all of the various command line options and a **lot** of other information about my setup and configuration. This also includes discussions about
20 | network configurations and the like. It really is a **must read** before moving on to this readme.
21 |
22 | So on to the **basic** installation and configuration. In order to install the **NAS** portion of this setup (which can be both a NAS and a Plotter if you wish), start with
23 | a fresh installation of Ubuntu 20.04 (or whatever may work for you) and then run the following:
24 |
25 | ```git clone https://github.com/rjsears/chia_plot_manager.git && mv chia_plot_manager plot_manager && cd plot_manager && chmod +x install.sh && ./install.sh help```
26 |
27 | This will clone the current main branch of my repo, move it into the recommended directory ```(/root/plot_manager)```, set the correct file permissions and then launch the install
28 | script with the `-h` option which will print out the help menu. To run the actual install for the NAS, simply type ```./install.sh nas``` and hit enter. You will be prompted to
29 | answer a series of questions. Read the aforementioned readme to understand these questions, and continue with the install. The install will update your Ubuntu system completely,
30 | install all required dependencies, create the directory to hold the main configuration file ```(plot_manager.yaml)```, create a skel directory structure (if you request it to), and
31 | cleanup after itself. It is **highly** recommended that you do this on a clean system, however it should work on an in-use system as well. I have done a lot of testing on various
32 | production servers and have never had an issue.
33 |
34 | Before going further with this, make sure you have ```Chia``` installed and configured to your liking. You will need the full path to the logfile as well as the config file.
35 |
36 | Remember that you need to configure things like ```postfix``` once you have completed the initial install, otherwise you will not get notifications or the daily emailed reports.
37 | Once complete, test it from the command line to make sure you can receive mail. We use the linux mail command to send out all email notifications. Lastly, you need to edit your
38 | main configuration file prior to running ```./drive_manager.py``` for the first time. Failure to do so will result in an error message telling you to do so!
39 |
40 | Some of the entries in the config file (shown below) you set yourself and some are set by the system. They can be overridden if you know what you are doing, but if not, I would leave them
41 | be. This is a standard YAML file, so leave the formatting as you see it or you will get errors when attempting to run ```drive_manager.py```. Start by setting the following options:
42 |
43 |
44 |
45 |
60 |
61 |
62 | ```
63 | # v0.98 2021-10-15
64 | # Once you have made the necessary modifications to this file, change this to
65 | # True.
66 | configured: False
67 |
68 | # Enter the hostname of this server:
69 | hostname: chianas01
70 |
71 | # Enter the name (as shown by ifconfig) of the interface that you RECEIVE plots on
72 | # from your plotters. This is used to check for network traffic to prevent multiple
73 | # plots from being transferred at the same time.
74 | plot_receive_interface: eth0
75 |
76 | # This is a number that represents at what percentage overall utilization of the above
77 | # interface we will assume that a plot transfer is taking place. You should really TEST
78 | # this to make sure it works for your needs. If you have a dedicated interface to move
79 | # plots, then it can be set very low (1 to 2), however if you have a shared interface,
80 | # you should test while a plot transfer is running and set it to what number makes sense.
81 | # To test simply run the following command and look at the very last number:
82 | # /usr/bin/sar -n DEV 1 50 | egrep eth0
83 | plot_receive_interface_threshold: 2
84 |
85 | # Set the max hard drive temperature limit that you wish to see on your hard drives. This
86 | # is used to format the Drive Temperature Report: ./drive_manager.py -ct
87 | drive_temperature_limit: 30
88 |
89 | # Are we plotting for pools? This has nothing to do with the actual plotting of
90 | # plots but rather just naming of the new plots and eventually the replacing of
91 | # old plots with portable plots.
92 | pools:
93 | active: False
94 | # Do we want to replace non-pool plots with new plots?
95 | replace_non_pool_plots: True
96 | # Should we fill up empty drive space before replacing old non-pool plots?
97 | fill_empty_drives_first: True
98 | # When we get below this number of plots available on the system
99 | # we will switch to replacing plots. Has no effect if active: False is
100 | # set above
101 | empty_drives_low_water_mark: 100
102 | # How many Portable Plots per day are we generating
103 | current_portable_plots_daily: 0
104 | # What is our Midnight portable plot count?
105 | current_portable_plots_midnight: 1
106 | # What drive are we currently using to replace plots?
107 | current_plot_replacement_drive: /mnt/enclosure0/front/column0/drive4
108 |
109 | # Enter Logging Information
110 | logging: True
111 | log_level: DEBUG
112 |
113 |
114 | # Where is your chia log file located? Remember to set the logging level
115 | # in your chia config to INFO. By default, it is set to WARNING.
116 | chia_log_file: not_set
117 | chia_config_file: not_set
118 |
119 | # If you are running multiple remote harvesters, set this to true
120 | # and enter their hostnames below. These hostnames should NOT
121 | # include your local hostname listed above. Also, these hostname
122 | # should be configured for passwordless ssh and should be configured
123 | # such that when you ping the hostname, it goes across the fastest
124 | # interface you have between these harvesters. Set to True if you
125 | # have multiple harvesters and want combined reporting.
126 | remote_harvester_reports:
127 | active: False
128 | remote_harvesters:
129 | - chianas02
130 | - chianas03
131 |
132 | # This is the local drive where we store inbound plots from our
133 | # main plotter. Also stores information about our current plots
134 | # on our server. The total plot high water warning is the number
135 | # of plots left when the alert will be sent. When you have LESS
136 | # than this number of plots, you will get an alert.
137 | harvester:
138 | current_plotting_drive: /mnt/enclosure1/front/column1/drive36
139 | current_total_plots_midnight: 1
140 | current_total_plots_daily: 1
141 | total_plot_highwater_warning: 300
142 | total_plots_alert_sent: False
143 |
144 | # List of 'offlined' drives that we do not want plots written to
145 | # for any reason. In this case maybe 'drive0' and 'drive1' are our
146 | # OS drives, or maybe they are throwing errors and we don't want to
147 | # use them until they are replaced. If you have no offlined drives,
148 | # this line should look like this: offlined_drives: []
149 | offlined_drives:
150 | - drive0
151 | - drive1
152 |
153 | # I use Plotman to do my plotting, but this should work for anything. This
154 | # has NOTHING to do with setting up your plotting configuration and is
155 | # only used for monitoring drive space for notifications. Set to True if
156 | # locally plotting and configure the rest of the settings.
157 | local_plotter:
158 | active: False
159 |
160 | # Make sure to use the mountpoint
161 | temp_dirs:
162 | dirs:
163 | - /mnt/nvme_drive0
164 | - /mnt/nvme_drive1
165 | # What critical usage % should we send an error? Do not make this too low
166 | # or you will get nuisance reports.
167 | critical: 99
168 | critical_alert_sent: False
169 |
170 | # This is the directory that you are using for your plots. If you will be
171 | # utilizing the integrated 'move_local_plots.py' script, this is usually
172 | # just a single drive. The plots are then moved out of this directory to
173 | # their final resting place on the harvester. move_local_plots.py is
174 | # currently only written to support a single drive here.
175 | dst_dirs:
176 | dirs:
177 | - /mnt/enclosure1/rear/column3/drive79
178 |
179 | # At what % utilization do we send an error?
180 | critical: 95
181 | critical_alert_sent: False
182 |
183 | # This is the current internal drive we are using to store plots moved off
184 | # of dst_dir above. This is not the same drive we use for storing plots
185 | # coming from an outside plotter. We use a different drive to prevent
186 | # drive IO saturation.
187 | current_internal_drive: /mnt/enclosure1/front/column3/drive59
188 |
189 | # During local moves where we are replacing plots, it is very important that
190 | # we stop all local processing if we detect an error, otherwise we could delete
191 | # a bunch of plots without meaning to, each time our script is run. This flag is
192 | # set if we encounter an error and must be MANUALLY unset to continue to process
193 | # local plots if you have chosen to replace old plots:
194 | local_move_error: False
195 | # Once we get a local move error, did we send an alert?
196 | local_move_error_alert_sent: False
197 |
198 | # This is where we set up our notifications
199 | notifications:
200 | active: True
201 | methods:
202 | pb: False
203 | email: True
204 | sms: False
205 | types:
206 | new_plot_drive: True
207 | daily_update: True
208 | farm_update: True
209 | per_plot: False
210 | warnings: True
211 | # What email addresses will get the emails?
212 | emails:
213 | - someone@gmail.com
214 | - someoneelse@gmail.com
215 |
216 | # What phone numbers will receive the SMS text messages? Include '+1'
217 | phones:
218 | - '+18584150987'
219 |
220 | # These are your notification account settings. For email, you must configure
221 | # your local MTA. The installer installs Postfix by default. Twilio (SMS) requires
222 | # a paid account, PushBullet is free.
223 | accounts:
224 | twilio:
225 | from: '+18587491119'
226 | account: your_account_key
227 | token: your_account_token
228 | pushBullet:
229 | api: your_account_api
230 | ```
231 |
232 |
233 |
234 | Once you have completed all of the configuration changes, save the file, switch to ```/root/plot_manager``` and run ```./drive_manager.py -h```.
235 | If you get the help screen, it means that we can see you have configured your config file and we are ready to run. Next, run it by itself:
236 | ```./drive_manager.py```. This will initialize everything and create the necessary files to run the system. If you get any error messages at
237 | this point, you should stop and address them before bringing your plotter online. One of the most common errors is **NOT** running it prior
238 | to starting your plotter process. If you do not run ```./drive_manager.py``` initially, it will **NOT** create the necessary receive scripts
239 | based on your system configuration and it will not be able to receive inbound plots from your plotter.
240 |
--------------------------------------------------------------------------------
/chianas/requirements.txt:
--------------------------------------------------------------------------------
1 | pushbullet.py==0.12.0
2 | twilio==6.59.0
3 | Jinja2==3.0.1
4 | natsort==7.1.1
5 | paramiko==2.7.2
6 | PyYAML==5.4.1
7 | psutil==5.8.0
8 | flatten-dict==0.4.0
9 | rich==10.11.0
10 | pySMART==1.2.5
11 | inotify==0.2.10
12 |
--------------------------------------------------------------------------------
/chianas/system_logging.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | Part of drive_manager. This is the logging module.
6 | For use with plot_manager V0.9
7 | """
8 |
9 | VERSION = "V0.94 (2021-08-08)"
10 |
11 | import logging.config
12 | import logging
13 | import logging.handlers
14 | import yaml
15 | import pathlib
16 |
17 | user_home_dir = str(pathlib.Path.home())
18 | config_file = (user_home_dir + '/.config/plot_manager/plot_manager.yaml')
19 | script_path = pathlib.Path(__file__).parent.resolve()
20 |
21 |
22 | def setup_logging(default_level=logging.CRITICAL):
23 | """Module to configure program-wide logging."""
24 | with open(config_file, 'r') as config:
25 | server = yaml.safe_load(config)
26 | log = logging.getLogger(__name__)
27 | level = logging._checkLevel(server['log_level'])
28 | log.setLevel(level)
29 | if server['logging']:
30 | try:
31 | logging.config.dictConfig(log_config)
32 | except Exception as e:
33 | print(e)
34 | print('Error in Logging Configuration. Using default configs. Check File Permissions (for a start)!')
35 | logging.basicConfig(level=default_level)
36 | return log
37 | else:
38 | log.disabled = True
39 |
40 | log_config = {
41 | "version": 1,
42 | "disable_existing_loggers": False,
43 | "formatters": {
44 | "console": {
45 | "format": "%(message)s"
46 | },
47 | "console_expanded": {
48 | "format": "%(module)2s:%(lineno)s - %(funcName)3s: %(levelname)3s: %(message)s"
49 | },
50 | "standard": {
51 | "format": "%(asctime)s - %(module)2s:%(lineno)s - %(funcName)3s: %(levelname)3s %(message)s"
52 | },
53 | "error": {
54 |             "format": "%(levelname)s
--------------------------------------------------------------------------------
/chianas/templates/daily_update.html:
--------------------------------------------------------------------------------
1 |
2 |
3 | NAS Server: {{nas_server}}
4 |
5 | Daily Update Email - Generated at {{current_time}}
6 |
7 | Current Plotting Drive (by mountpoint)...................{{current_plotting_drive_by_mountpoint}}
8 | Current Plotting Drive (by device)...........................{{current_plotting_drive_by_device}}
9 | Drive Size...............................................................{{drive_size}}
10 |
11 |
12 | Environmental & Health
13 | Drive Serial Number................................................{{drive_serial_number}}
14 | Current Drive Temperature......................................{{current_drive_temperature}}°C
15 | Last Smart Test Health Assessment........................{{smart_health_assessment}}
16 |
17 |
18 | Other Information
19 | Total Plots on {{nas_server}}.........................................{{total_serverwide_plots}}
20 | Current Total Number of Plot Drives........................{{total_number_of_drives}}
21 | Number of k32 Plots until full...................................{{total_k32_plots_until_full}}
22 | Max # of Plots with current # of Drives....................{{max_number_of_plots}}
23 | Plots Being Farmed as reported by Chia.................{{total_serverwide_plots_chia}}
24 | Total Plot Space in use as reported by Chia............{{total_serverwide_space_per_chia}} TiB
25 |
26 |
27 | Plotting Speed
28 | Total Plots Last 24 Hours.........................................{{total_plots_last_day}}
29 | Average Plots Per Hour...........................................{{average_plots_per_hour}}
30 | Average Plotting Speed Last 24 Hours....................{{average_plotting_speed}} TiB/Day
31 | Approx. # of Days to fill all Plot Drives.....................{{days_to_fill_drives}}
32 |
33 |
34 |
--------------------------------------------------------------------------------
/chianas/templates/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 | NAS Server: {{nas_server}}
4 |
5 | Plot Report Generated at {{current_time}}
6 |
7 | Current Plotting Drive (by mountpoint)...................{{current_plotting_drive_by_mountpoint}}
8 | Current Plotting Drive (by device)...........................{{current_plotting_drive_by_device}}
9 | Drive Size...............................................................{{drive_size}}
10 |
11 |
12 | Environmental & Health
13 | Drive Serial Number................................................{{drive_serial_number}}
14 | Current Drive Temperature......................................{{current_drive_temperature}}°C
15 | Last Smart Test Health Assessment........................{{smart_health_assessment}}
16 |
17 |
18 | Other Information
19 | Total Plots on {{nas_server}}.........................................{{total_serverwide_plots}}
20 | Plots Being Farmed as reported by Chia.................{{total_serverwide_plots_chia}}
21 | Current Total Number of Plot Drives........................{{total_number_of_drives}}
22 | Number of k32 Plots until full...................................{{total_k32_plots_until_full}}
23 | Max # of Plots with current # of Drives....................{{max_number_of_plots}}
24 | Total Plot Space in use as reported by Chia............{{total_serverwide_space_per_chia}} TiB
25 |
26 |
27 | Plotting Speed
28 | Total Plots Last 24 Hours.........................................{{total_plots_last_day}}
29 | Average Plots Per Hour...........................................{{average_plots_per_hour}}
30 | Average Plotting Speed Last 24 Hours....................{{average_plotting_speed}} TiB/Day
31 | Approx. # of Days to fill all Plot Drives.....................{{days_to_fill_drives}}
32 |
33 |
34 |
--------------------------------------------------------------------------------
/chianas/templates/new_plotting_drive.html:
--------------------------------------------------------------------------------
1 |
2 |
3 | Server: {{nas_server}}
4 |
5 | New Plot Drive Selected at {{current_time}}
6 |
7 | Previous Plotting Drive......................................{{previous_plotting_drive}}
8 | # of Plots on Previous Plotting Drive.................{{plots_on_previous_plotting_drive}}
9 |
10 | New Plotting Drive (by mountpoint)...................{{current_plotting_drive_by_mountpoint}}
11 | New Plotting Drive (by device)...........................{{current_plotting_drive_by_device}}
12 | Drive Size..........................................................{{drive_size}}
13 | # of Plots we can put on this Drive..................{{plots_available}}
14 |
15 |
16 | Environmental & Health
17 | Drive Serial Number..........................................{{drive_serial_number}}
18 | Current Drive Temperature................................{{current_drive_temperature}}°C
19 | Last Smart Test Health Assessment..................{{smart_health_assessment}}
20 |
21 |
22 | Other Information
23 | Total Plots on {{nas_server}}..................................{{total_serverwide_plots}}
24 | Current Total Number of Plot Drives..................{{total_number_of_drives}}
25 | Number of k32 Plots until full.............................{{total_k32_plots_until_full}}
26 | Max # of Plots with current # of Drives..............{{max_number_of_plots}}
27 | Approx. # of Days to fill all Plot Drives..............{{days_to_fill_drives}}
28 | Plots Being Farmed as reported by Chia...............{{total_serverwide_plots_chia}}
29 | Total Plot Space in use as reported by Chia...........{{total_serverwide_space_per_chia}} TiB
30 |
31 |
32 |
--------------------------------------------------------------------------------
/chianas/utilities/auto_drive.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 |
3 | # -*- coding: utf-8 -*-
4 |
5 | __author__ = 'Richard J. Sears'
6 | VERSION = "0.98 (2021-10-15)"
7 |
8 | """
9 | STAND ALONE STAND ALONE STAND ALONE
10 | This script is part of the chia_plot_manager set of scripts.
11 | This is the STAND ALONE version of this script and will work
12 | without needing the rest of the repo.
13 | Script to help automate the addition of new hard drives
14 | to a Chia NAS/Harvester. Please make sure you understand
15 | everything that this script does before using it!
16 | This script is intended to make my life easier. It is ONLY
17 | designed to 1) work with unformatted drives with no existing
18 | partitions, and 2) utilizing the directory structure found
19 | in the readme.
20 | This script WILL NOT work if your existing, in-use hard drives
21 | have no partitions, as we would then have no way to determine
22 | if a drive is newly added.
23 | It can be modified, of course, to do other things.
24 | 1) Looks for any drive device that is on the system but does not
25 | end in a '1'. For example: /dev/sda1 vs. /dev/sdh - In this case
26 | /dev/sdh has no partition so it likely is not mounted or being
27 | used.
28 | 2) Utilizing the directory structure that I have shared in the
29 | main readme, locate the next available mountpoint to use.
30 | 3) Utilizing sgdisk, creates a new GPT partition on the new
31 | drive.
32 | 4) Formats the drive with the xfs filesystem.
33 | 5) Verifies that the UUID of the drive does not already exist
34 | in /etc/fstab and if not, adds the correct entry to /etc/fstab.
35 | 6) Mounts the new drive
36 | 7) Add the new mountpoint to your chia harvester configuration.
37 | """
38 |
39 | from glob import glob
40 | from os.path import ismount, abspath, exists
41 | import string
42 | import subprocess
43 | import yaml
44 | from natsort import natsorted
45 | import pathlib
46 | script_path = pathlib.Path(__file__).parent.resolve()
47 |
48 | # Do some housekeeping
49 | # Define some colors for our help message
50 | red='\033[0;31m'
51 | yellow='\033[0;33m'
52 | green='\033[0;32m'
53 | white='\033[0;37m'
54 | blue='\033[0;34m'
55 | nc='\033[0m'
56 |
57 | # Where can we find your Chia Config File?
58 | chia_config_file = '/root/.chia/mainnet/config/config.yaml'
59 |
60 | #Where did we put the get_drive_uuid.sh script:
61 | get_drive_uuid = script_path.joinpath('get_drive_uuid.sh')
62 |
63 | # What filesystem are you using: ext4 or xfs?
64 | file_system = 'xfs'
65 |
66 |
67 | # mark any drives here that you do not want touched at all for any reason,
68 | # formatted or not.
69 | do_not_use_drives = {'/dev/sda'}
70 |
71 |
72 | # Let's get started.......
73 |
74 | def get_next_mountpoint():
75 | """
76 | This function looks at the entire directory structure listed
77 | in the path_glob and then checks to see what directories are
78 | mounted and which are not. It then returns that in a sorted
79 | dictionary but only for those directories that are not
80 | mounted. We then return just the first one for use as our
81 | next mountpoint.
82 | abspath(d) = the full path to the directory
83 | ismount(d) = Returns True if abspath(d) is a mountpoint
84 | otherwise returns false.
85 | (d) looks like this:
86 | {'/mnt/enclosure0/front/column0/drive0': True}
87 | Make sure your path already exists and that the 'path_glob'
88 | ends with a `/*`.
89 | Notice that the path_glob does NOT include the actual drive0,
90 | drive1, etc. Your glob needs to be parsed with the *.
91 | So for this glob: path_glob = '/mnt/enclosure[0-9]/*/column[0-9]/*'
92 | this would be the directory structure:
93 | /mnt
94 | ├── enclosure0
95 | │ ├── front
96 | │ │ ├── column0
97 | │ │ │ ├── drive2
98 | │ │ │ ├── drive3
99 | │ │ │ ├── drive4
100 | │ │ │ └── drive5
101 | ├── enclosure1
102 | │ ├── rear
103 | │ │ ├── column0
104 | │ │ │ ├── drive6
105 | │ │ │ ├── drive7
106 | """
107 | try:
108 | path_glob = '/mnt/enclosure[0-9]/*/column[0-9]/*'
109 | d = {abspath(d): ismount(d) for d in glob(path_glob)}
110 | return natsorted([p for p in d if not d[p]])[0]
111 | except IndexError:
112 | print(f'\nNo usable {green}Directories{nc} found, Please check your {green}path_glob{nc}, ')
113 | print(f'and verify that your directory structure exists. Also, please make sure your')
114 | print(f'{green}path_glob{nc} ends with a trailing forward slash and an *:')
115 | print(f'Example: /mnt/enclosure[0-9]/*/column[0-9]{green}/*{nc}\n')
116 |
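The natsorted() call matters here: a plain lexicographic sort would place drive10 ahead of drive2 and return the wrong "next" mountpoint. A minimal stdlib sketch of the same natural ordering (illustrative only; the script itself uses the natsort package):

```python
import re

def natural_key(path):
    # Split into digit / non-digit runs so 'drive10' compares numerically after 'drive2'
    return [int(tok) if tok.isdigit() else tok for tok in re.split(r'(\d+)', path)]

dirs = ['/mnt/enclosure0/front/column0/drive10',
        '/mnt/enclosure0/front/column0/drive2']
print(sorted(dirs)[0])                    # lexicographic sort picks drive10 first
print(sorted(dirs, key=natural_key)[0])   # natural sort correctly picks drive2 first
```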
117 |
118 | def get_new_drives():
119 | """
120 | This functions creates two different lists. "all_drives" which
121 | is a sorted list of all drives as reported by the OS as read from
122 | `/dev/sd*` using glob(). The second is "formatted_drives" which is a list of
123 | all drives that end with a `1` indicating that they have a partition
124 | on them and as such we should not touch. We then strip the `1` off
125 | the back of the "formatted_drives" list and add it to a new set
126 | "formatted_set". We then iterate over the "all_drives" sorted list,
127 | strip any digits off the end, check to see if it exists in the
128 | "formatted_set" and if it does not, then we go ahead and add it
129 | to our final set "unformatted_drives". We then sort that and
130 | return the first drive from that sorted set.
131 | """
132 | all_drives = sorted(glob('/dev/sd*'))
133 | formatted_drives = list(filter(lambda x: x.endswith('1'), all_drives))
134 | formatted_set = set()
135 | for drive in formatted_drives:
136 | drive = drive.rstrip(string.digits)
137 | formatted_set.add(drive)
138 | formatted_and_do_not_use = set.union(formatted_set, do_not_use_drives)
139 | unformatted_drives = set()
140 | for drive in all_drives:
141 | drive = drive.rstrip(string.digits)
142 | if drive not in formatted_and_do_not_use:
143 | # if drive not in formatted_set:
144 | unformatted_drives.add(drive)
145 | if unformatted_drives:
146 | return sorted(unformatted_drives)[0]
147 | else:
148 | return False
149 |
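The set arithmetic described in the docstring can be condensed and run against a fabricated device list (no real /dev scanning; the device names below are made up):

```python
import string

do_not_use = {'/dev/sda'}
all_drives = sorted(['/dev/sda', '/dev/sda1', '/dev/sdb', '/dev/sdb1', '/dev/sdc'])

# Base devices that already carry a numbered partition, e.g. /dev/sdb1 -> /dev/sdb
partitioned = {d.rstrip(string.digits) for d in all_drives if d[-1].isdigit()}
candidates = sorted({d.rstrip(string.digits) for d in all_drives}
                    - partitioned - do_not_use)
print(candidates)   # only the bare, unclaimed disk remains
```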
150 |
151 | def add_new_drive():
152 | """
153 | This is the main function responsible for getting user input and calling all other
154 | functions. I tried to do a lot of error checking, but since we are working with a
155 | drive that `should` have zero data on it, we should not have too much of an issue.
156 | """
157 | print(f'Welcome to auto_drive.py {blue}Version{nc}:{green}{VERSION}{nc}')
158 | can_we_run()
159 | if not get_new_drives():
160 | print (f'\nNo new drives found! {red}EXITING{nc}\n')
161 | else:
162 | drive = get_new_drives()
163 | mountpoint = get_next_mountpoint()
164 | print (f'\nWe are going to format: {blue}{drive}{nc}, add it to {yellow}/etc/fstab{nc} and mount it at {yellow}{mountpoint}{nc}\n')
165 | format_continue = sanitise_user_input(f'Would you like to {white}CONTINUE{nc}?: {green}YES{nc} or {red}NO{nc} ', range_=('Y', 'y', 'YES', 'yes', 'N', 'n', 'NO', 'no'))
166 | if format_continue in ('Y', 'YES', 'y', 'yes'):
167 | print (f'We will {green}CONTINUE{nc}!')
168 | if sgdisk(drive):
169 | print (f'Drive Partitioning has been completed {green}successfully{nc}!')
170 | else:
171 | print(f'There was a {red}PROBLEM{nc} partitioning your drive, please handle manually!')
172 | exit()
173 | if make_filesystem(drive):
174 | print(f'Drive has been formatted as {file_system} {green}successfully{nc}!')
175 | else:
176 | print(f'There was a {red}PROBLEM{nc} formatting your drive as {green}{file_system}{nc}, please handle manually!')
177 | exit()
178 | if add_uuid_to_fstab(drive):
179 | print(f'Drive added to system {green}successfully{nc}!\n\n')
180 | else:
181 | print(f'There was a {red}PROBLEM{nc} adding your drive to /etc/fstab or mounting, please handle manually!')
182 | exit()
183 | add_to_chia = sanitise_user_input(f'Would you like to add {yellow}{mountpoint}{nc} to your {green}Chia{nc} Config File?: {green}YES{nc} or {red}NO{nc} ', range_=('Y', 'y', 'YES', 'yes', 'N', 'n', 'NO', 'no'))
184 | if add_to_chia in ('Y', 'YES', 'y', 'yes'):
185 | print (f'Adding {yellow}{mountpoint}{nc} to {green}Chia{nc} Configuration File......')
186 | if update_chia_config(mountpoint):
187 | print(f'Mountpoint: {green}{mountpoint}{nc} Successfully added to your Chia Config File')
188 | print(f'\n\nDrive Process Complete - Thank You and have a {red}G{yellow}R{white}E{green}A{blue}T{nc} Day!\n\n')
189 | else:
190 | print(f'\nThere was an {red}ERROR{nc} adding {mountpoint} to {chia_config_file}!')
191 | print(f'You need to {yellow}MANUALLY{nc} add or verify that it has been added to your config file,')
192 | print(f'otherwise plots on that drive will {red}NOT{nc} get {green}harvested{nc}!\n')
193 | print(f'\n\nDrive Process Complete - Thank You and have a {red}G{yellow}R{white}E{green}A{blue}T{nc} Day!')
194 | else:
195 | print(f'\n\nDrive Process Complete - Thank You and have a {red}G{yellow}R{white}E{green}A{blue}T{nc} Day!\n\n')
196 | else:
197 | print (f'{yellow}EXITING{nc}!')
198 | exit()
199 |
200 | def sgdisk(drive):
201 | """
202 | Function to call sgdisk to create the disk partition and set GPT.
203 | """
204 | try:
205 | print(f'Please wait while we get {blue}{drive}{nc} ready to partition.......')
206 | sgdisk_results = subprocess.run(['sgdisk', '-Z', drive], capture_output=True, text=True, check=True)  # check=True so failures raise CalledProcessError
207 | print (sgdisk_results.stdout)
208 | print (sgdisk_results.stderr)
209 | except subprocess.CalledProcessError as e:
210 | print(f'sgdisk {red}Error{nc}: {e}')
211 | return False
212 | except Exception as e:
213 | print(f'sgdisk: Unknown {red}Error{nc}! Drive not Partitioned')
214 | return False
215 | try:
216 | print(f'Creating partition on {blue}{drive}{nc}.......')
217 | sgdisk_results = subprocess.run(['sgdisk', '-N0', drive], capture_output=True, text=True, check=True)
218 | print(sgdisk_results.stdout)
219 | print(sgdisk_results.stderr)
220 | except subprocess.CalledProcessError as e:
221 | print(f'sgdisk {red}Error{nc}: {e}')
222 | return False
223 | except Exception as e:
224 | print(f'sgdisk: Unknown {red}Error{nc}! Drive not Partitioned')
225 | return False
226 | try:
227 | print(f'Creating unique {white}UUID{nc} for drive {drive}....')
228 | sgdisk_results = subprocess.run(['sgdisk', '-G', drive], capture_output=True, text=True, check=True)
229 | print(sgdisk_results.stdout)
230 | print(sgdisk_results.stderr)
231 | print('')
232 | print(f'Process (sgdisk) {green}COMPLETE{nc}!')
233 | return True
234 | except subprocess.CalledProcessError as e:
235 | print(f'sgdisk {red}Error{nc}: {e}')
236 | return False
237 | except Exception as e:
238 | print(f'sgdisk: Unknown {red}Error{nc}! Drive not Partitioned')
239 | return False
240 |
241 | def make_filesystem(drive):
242 | """
243 | Formats the new drive to selected filesystem
244 | """
245 | drive = drive + '1'
246 | try:
247 | print(f'Please wait while we format {blue}{drive}{nc} as {green}{file_system}{nc}.................')
248 | if file_system=='xfs':
249 | mkfs_results = subprocess.run(['mkfs.xfs', '-q', '-f', drive], capture_output=True, text=True, check=True)
250 | elif file_system=='ext4':
251 | mkfs_results = subprocess.run(['mkfs.ext4', '-F', drive], capture_output=True, text=True, check=True)
252 | else:
253 | print(f'{red}ERROR{nc}: Unsupported Filesystem: {file_system}')
254 | return False
255 | print (mkfs_results.stdout)
256 | print (mkfs_results.stderr)
257 | print('')
258 | print(f'Process (mkfs.{file_system}) {green}COMPLETE{nc}!')
259 | return True
260 | except subprocess.CalledProcessError as e:
261 | print(f'mkfs {red}Error{nc}: {e}')
262 | return False
263 | except Exception as e:
264 | print(f'mkfs: Unknown {red}Error{nc}! Drive not Formatted')
265 | return False
266 |
267 |
268 | def add_uuid_to_fstab(drive):
269 | """
270 | Uses a little shell script (get_drive_uuid.sh) to get our drive UUID after it has been
271 | formatted.
272 | """
273 | drive = drive + '1'
274 | try:
275 | print(f'Please wait while we add {blue}{drive}{nc} to /etc/fstab.......')
276 | uuid_results = subprocess.check_output([get_drive_uuid, drive]).decode('ascii').rstrip()
277 | if uuid_results == '':
278 | print(f'{red}BAD{nc} or {yellow}NO{nc} UUID returned! Please handle this drive manually!')
279 | return False
280 | print(f'Your drive UUID is: {green}{uuid_results}{nc}')
281 | print(f'Verifying that {green}{uuid_results}{nc} does not exist in /etc/fstab')
282 | with open('/etc/fstab') as fstab:
283 | if uuid_results in fstab.read():
284 | print (f'{red}ERROR!{nc}: {green}{uuid_results}{nc} already exists in /etc/fstab, exiting!')
285 | return False
286 | else:
287 | print (f'UUID: {green}{uuid_results}{nc} does not exist in /etc/fstab, adding it now.')
288 | mountpoint = get_next_mountpoint()
289 | with open ('/etc/fstab', 'a') as fstab_edit:
290 | fstab_edit.write(f'/dev/disk/by-uuid/{uuid_results} {mountpoint} {file_system} defaults,user 0 0 #Added by auto_drive.py\n')
291 | with open('/etc/fstab') as fstab:
292 | if uuid_results in fstab.read():
293 | print(f'UUID: {green}{uuid_results}{nc} now exists in /etc/fstab, {green}continuing{nc}.....')
294 | else:
295 | print(f'{red}ERROR! {green}{uuid_results}{nc} was not added to /etc/fstab!')
296 | return False
297 | print(f'{blue}{drive}{nc} added to {red}/etc/fstab{nc} and will be mounted at {green}{mountpoint}{nc}')
298 | subprocess.run(['mount', mountpoint], capture_output=True, text=True)
299 | if ismount(mountpoint):
300 | print(f'Drive {blue}{drive}{nc} Successfully Mounted')
301 | else:
302 | print(f'Drive mount {red}FAILED!{nc}. Please manually check your system.')
303 | return False
304 | return True
305 | except subprocess.CalledProcessError as e:
306 | print(f'uuid error: {e}')
307 | return False
308 | except Exception as e:
309 | print(f'uuid error: {e}')
310 | return False
311 |
312 |
313 | def can_we_run():
314 | """
315 | Check to see if the chia configuration file noted above exists, exits if it does not.
316 | Check to see we are using a supported filesystem.
317 | """
318 | if exists(chia_config_file):
319 | pass
320 | else:
321 | print(f'\n{red}ERROR{nc} opening {green}{chia_config_file}{nc}\nPlease check your {green}filepath{nc} and try again!\n\n')
322 | exit()
323 | if file_system not in {'ext4', 'xfs'}:
324 | print (f'\n{red}ERROR{nc}: {green}{file_system}{nc} is not a supported filesystem.\nPlease choose {green}ext4{nc} or {green}xfs{nc} and try again!\n\n')
325 | exit()
326 | else:
327 | pass
328 | if exists (get_drive_uuid):
329 | pass
330 | else:
331 | print(f'\n{red}ERROR{nc} opening {green}{get_drive_uuid}{nc}\nPlease check your {green}filepath{nc} and try again!\n\n')
332 | exit()
333 |
334 |
335 | def update_chia_config(mountpoint):
336 | """
337 | This function adds the new mountpoint to your chia configuration file.
338 | """
339 | try:
340 | with open(chia_config_file) as f:
341 | chia_config = yaml.safe_load(f)
342 | if mountpoint in chia_config['harvester']['plot_directories']:  # check the parsed config, not the exhausted file object
343 | print(f'{green}Mountpoint {red}Already{nc} Exists - We will not add it again!\n\n')
344 | return True
345 | else:
346 | chia_config['harvester']['plot_directories'].append(mountpoint)
347 | except IOError:
348 | print(f'{red}ERROR{nc} opening {yellow}{chia_config_file}{nc}! Please check your {yellow}filepath{nc} and try again!\n\n')
349 | return False
350 | try:
351 | with open(chia_config_file, 'w') as f:
352 | yaml.safe_dump(chia_config, f)
353 | return True
354 | except IOError:
355 | print(f'{red}ERROR{nc} opening {yellow}{chia_config_file}{nc}! Please check your {yellow}filepath{nc} and try again!\n\n')
356 | return False
357 |
358 | def sanitise_user_input(prompt, type_=None, min_=None, max_=None, range_=None):
359 | """
360 | Quick and simple function to grab user input and make sure it's correct.
361 | """
362 | if min_ is not None and max_ is not None and max_ < min_:
363 | raise ValueError("min_ must be less than or equal to max_.")
364 | while True:
365 | ui = input(prompt)
366 | if type_ is not None:
367 | try:
368 | ui = type_(ui)
369 | except ValueError:
370 | print("Input type must be {0}.".format(type_.__name__))
371 | continue
372 | if max_ is not None and ui > max_:
373 | print("Input must be less than or equal to {0}.".format(max_))
374 | elif min_ is not None and ui < min_:
375 | print("Input must be greater than or equal to {0}.".format(min_))
376 | elif range_ is not None and ui not in range_:
377 | if isinstance(range_, range):
378 | template = "Input must be between {0.start} and {0.stop}."
379 | print(template.format(range_))
380 | else:
381 | template = "Input must be {0}."
382 | if len(range_) == 1:
383 | print(template.format(*range_))
384 | else:
385 | expected = " or ".join((
386 | ", ".join(str(x) for x in range_[:-1]),
387 | str(range_[-1])
388 | ))
389 | print(template.format(expected))
390 | else:
391 | return ui
392 |
393 |
394 | def main():
395 | add_new_drive()
396 |
397 |
398 | if __name__ == '__main__':
399 | main()
400 |
--------------------------------------------------------------------------------
/chianas/utilities/get_drive_uuid.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 | blkid -u filesystem "$1" | awk -F "[= ]" '{print $3}' | tr -d "\""
3 |
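The awk field separator `[= ]` splits a typical blkid line into the device name, the literal `UUID`, and the quoted value; `tr` then strips the quotes. A quick demonstration on a fabricated blkid output line (the UUID below is made up):

```shell
line='/dev/sdh1: UUID="3e6be9de-8139-4c0d-9106-a43f08d823a6" TYPE="xfs"'
uuid=$(echo "$line" | awk -F "[= ]" '{print $3}' | tr -d "\"")
echo "$uuid"
```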
--------------------------------------------------------------------------------
/chianas/utilities/kill_nc.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 | # Script to kill any lingering ncat processes
3 |
4 | /usr/bin/killall -9 ncat >/dev/null 2>&1
5 |
--------------------------------------------------------------------------------
/chiaplot/config_file_updater.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | Part of drive_manager. Checks to see if our config file needs updating and updates it for us.
6 | """
7 | VERSION = "V0.98 (2021-10-15)"
8 |
9 | import os
10 | import yaml
11 | from pathlib import Path
12 | import logging
13 | from system_logging import setup_logging
14 | from shutil import copyfile
15 | from flatten_dict import flatten
16 | from flatten_dict import unflatten
17 | from datetime import datetime
18 |
19 | script_path = Path(__file__).parent.resolve()
20 |
21 | # Date and Time Stuff
22 | current_military_time = datetime.now().strftime('%Y%m%d%H%M%S')
23 |
24 | config_file = (str(Path.home()) + '/.config/plot_manager/plot_manager.yaml')
25 | skel_config_file = script_path.joinpath('plot_manager.skel.yaml')
26 |
27 | # Define some colors for our help message
28 | red='\033[0;31m'
29 | yellow='\033[0;33m'
30 | green='\033[0;32m'
31 | white='\033[0;37m'
32 | blue='\033[0;34m'
33 | nc='\033[0m'
34 |
35 |
36 | # Setup Module logging. Main logging is configured in system_logging.py
37 | setup_logging()
38 | with open(config_file, 'r') as config:
39 | server = yaml.safe_load(config)
40 | level = server['log_level']
41 | level = logging._checkLevel(level)
42 | log = logging.getLogger('config_file_updater.py')
43 | log.setLevel(level)
44 |
45 |
46 | def config_file_update():
47 | """
48 | Function to determine if we need to update our yaml configuration file after an upgrade.
49 | """
50 | log.debug('config_file_update() Started....')
51 | if os.path.isfile(skel_config_file):
52 | log.debug('New config SKEL file located, checking to see if an update is needed:')
53 | with open(config_file, 'r') as current_config:
54 | current_config = yaml.safe_load(current_config)
55 | with open(skel_config_file, 'r') as temp_config:
56 | temp_config = yaml.safe_load(temp_config)
57 | temp_current_config = flatten(current_config)
58 | temp_temp_config = flatten(temp_config)
59 | updates = (dict((k, v) for k, v in temp_temp_config.items() if k not in temp_current_config))
60 | if updates != {}:
61 | copyfile(skel_config_file, (str(Path.home()) + '/.config/plot_manager/Config_Instructions.yaml'))
62 | copyfile(config_file, (str(Path.home()) + f'/.config/plot_manager/plot_manager.yaml.{current_military_time}'))
63 | temp_current_config.update(updates)
64 | new_config = (dict((k, v) for k, v in temp_current_config.items() if k in temp_temp_config))
65 | else:
66 | new_config = (dict((k, v) for k, v in temp_current_config.items() if k not in temp_temp_config))
67 | if new_config != {}:
68 | new_config = (dict((k, v) for k, v in temp_current_config.items() if k in temp_temp_config))
69 | current_config = unflatten(new_config)
70 | current_config.update({'configured': False})
71 | with open((str(Path.home()) + '/.config/plot_manager/plot_manager.yaml'), 'w') as f:
72 | yaml.safe_dump(current_config, f)
73 | log.debug('Your config file needs updating!')
74 | log.debug(f'Config File: {config_file} updated. Update as necessary to run chia_plot_manager.')
75 | exit()
76 | else:
77 | log.debug('No config file changes necessary! No changes made.')
78 | log.debug(f'{skel_config_file} has been deleted!')
79 | os.remove(skel_config_file)
80 | else:
81 | log.debug('New configuration file not found. No changes made.')
82 |
83 |
84 |
85 | def main():
86 | print(f'Welcome to {green}config_file_updater.py{nc} {blue}VERSION: {nc}{VERSION}')
87 | config_file_update()
88 |
89 |
90 | if __name__ == '__main__':
91 | main()
92 |
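The flatten/merge step in `config_file_update()` above can be sketched without the `flatten_dict` dependency. The `flatten()` and `unflatten()` below are hand-rolled stand-ins for the package functions the script imports; the config values are made up for illustration:

```python
# Dependency-free sketch of the skel-merge logic in config_file_update().
# flatten()/unflatten() mimic the flatten_dict package (tuple keys).
def flatten(d, prefix=()):
    out = {}
    for k, v in d.items():
        if isinstance(v, dict):
            out.update(flatten(v, prefix + (k,)))
        else:
            out[prefix + (k,)] = v
    return out

def unflatten(flat):
    out = {}
    for keys, v in flat.items():
        node = out
        for k in keys[:-1]:
            node = node.setdefault(k, {})
        node[keys[-1]] = v
    return out

current = {'hostname': 'chiaplot01', 'notifications': {'email': True}}
skel = {'hostname': 'changeme', 'notifications': {'email': True, 'sms': False}}

flat_current, flat_skel = flatten(current), flatten(skel)
# New skel keys are added, user-set values win, keys gone from the skel drop out.
merged = {k: flat_current.get(k, v) for k, v in flat_skel.items()}
print(unflatten(merged))
# {'hostname': 'chiaplot01', 'notifications': {'email': True, 'sms': False}}
```

This is why an upgrade never overwrites values you already configured: only keys that exist in the new skel but not in your config fall back to the skel defaults.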
--------------------------------------------------------------------------------
/chiaplot/export/readme.md:
--------------------------------------------------------------------------------
1 | Exports
2 |
--------------------------------------------------------------------------------
/chiaplot/logs/readme.md:
--------------------------------------------------------------------------------
1 | Log Files
2 |
--------------------------------------------------------------------------------
/chiaplot/plot_manager.skel.yaml:
--------------------------------------------------------------------------------
1 | # VERSION = "V0.98 (2021-10-15)"
2 |
3 | # Once you have made the necessary modifications to this file, change this to
4 | # True.
5 |
6 | configured: False
7 |
8 | # Enter the hostname of this server:
9 | hostname: chiaplot01
10 | domain_name: mydomain.com
11 |
12 | #Are we plotting for pools (prepends `portable.` to the plot name)
13 | pools: True
14 |
15 | # Enter Logging Information
16 | logging: True
17 | log_level: DEBUG
18 |
19 | # I use Plotman to do my plotting, but this should work for anything. This
20 | # has NOTHING to do with setting up your plotting configuration and is
21 | # only used for monitoring drive space for notifications. Set to True if
22 | # locally plotting and configure the rest of the settings.
23 | local_plotter:
24 | active: True
25 |
26 | # Make sure to use the actual mountpoint not the device.
27 | temp_dirs:
28 | dirs:
29 | - /mnt/nvme/drive0
30 | - /mnt/nvme/drive1
31 | - /mnt/nvme/drive2
32 | - /mnt/nvme/drive3
33 | # At what usage % should we send a critical error? Do not make this too low
34 | # or you will get nuisance reports.
35 | critical: 95
36 | critical_alert_sent: False
37 |
38 | # This is the `-d` directory that you are using for your plots. Currently
39 | # we only support a single drive here for monitoring. MUST have the
40 | # trailing '/'.
41 | dst_dirs:
42 | dirs:
43 | - /mnt/ssdraid/array0/
44 |
45 | # At what % utilization do we send an error?
46 | critical: 95
47 | critical_alert_sent: True
48 |
49 | # If you are running multiple remote harvesters,
50 | # enter their hostnames below. These hostnames
51 | # should be configured for passwordless ssh and should be configured
52 | # such that when you ping the hostname, it goes across the fastest
53 | # interface you have between these harvesters. Set to True if you
54 | # have multiple harvesters and want to automate sending plots to
55 | # the Harvester with the least number of plots. If you are only
56 | # running a single harvester, just list its hostname here.
57 | remote_harvesters:
58 | - chianas01
59 | - chianas02
60 | - chianas03
61 |
62 |
63 | # If you run multiple harvesters, each of those harvesters can be configured to
64 | # either replace non-pool plots or not. If they are configured to replace non-pool
65 | # plots, you can also configure them to fill empty drives first (which one would think
66 | # would be the best course of action in most cases). Each harvester will report back
67 | # to the plotter how many old plots it has to replace as well as how many 'free' plot
68 | # spaces it has to fill. Here is where you tell the plotter which of those to prioritize
69 | # in the event there are both. IN MOST CASES this will be fill if you want to maximize
70 | # the total NUMBER of plots on your harvesters. If you choose 'fill' here, then the
71 | # plotter looks to see which harvester has the most number of EMPTY spaces to fill and
72 | # selects that harvester to send the next plot to, however if you select 'replace' here,
73 | # then the plotter looks at the overall space available that each harvester reports and
74 | # utilizes that to determine where to send the plot. In most cases this will be old_plots +
75 | # free_space = total_plots_space_available.
76 | # fill = fill all empty space first
77 | # replace = replace all old plots first, then fill
78 | remote_harvester_priority: fill
79 |
80 | # Enter the name (as shown by ifconfig) of the interface that you SEND plots on
81 | # to your harvesters. This is used to check for network traffic to prevent multiple
82 | # plots from being transferred at the same time.
83 | network_interface: eth0
84 |
85 | # This is the overall utilization percentage of the above interface at or above
86 | # which we assume a plot transfer is taking place. You should really TEST
87 | # this to make sure it works for your needs. If you have a dedicated interface to move
88 | # plots, it can be set very low (1 to 2); if you have a shared interface,
89 | # test while a plot transfer is running and set it to a number that makes sense.
90 | # To test simply run the following command and look at the very last number:
91 | # /usr/bin/sar -n DEV 1 50 | egrep eth0
92 | network_interface_threshold: 2
93 |
94 |
95 | # This is where we set up our notifications
96 | notifications:
97 | active: True
98 | methods:
99 | pb: False
100 | email: True
101 | sms: False
102 | types:
103 | new_plot_drive: True
104 | daily_update: True
105 | per_plot: True
106 | warnings: True
107 | # What email addresses will get the emails?
108 | emails:
109 | - someone@gmail.com
110 | - someoneelse@gmail.com
111 | # What phone numbers will receive the SMS text messages? Include '+1'
112 | phones:
113 | - '+18584140000'
114 |
115 | # These are your notification account settings. For email, you must configure
116 | # your local MTA. The installer installs Postfix by default. Twilio (SMS) requires
117 | # a paid account; PushBullet is free.
118 | accounts:
119 | twilio:
120 | from: '+18582640000'
121 | account: your_account_id
122 | token: your_account_token
123 | pushBullet:
124 | api: your_pushbullet_api_token
125 |
--------------------------------------------------------------------------------
/chiaplot/plotmanager_classes.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | Part of drive_manager. These classes are for reading and updating our yaml
6 | config file.
7 | """
8 |
9 | VERSION = "V0.98 (2021-10-15)"
10 |
11 | import os
12 | import yaml
13 | from pathlib import Path
14 | import logging
15 | from system_logging import setup_logging
16 | import psutil
17 | from flatten_dict import flatten, unflatten
18 | from shutil import copyfile
19 | from datetime import datetime
20 |
21 | script_path = Path(__file__).parent.resolve()
22 | #user_home_dir = str(Path.home())
23 | config_file = (str(Path.home()) + '/.config/plot_manager/plot_manager.yaml')
24 | #config_file = (user_home_dir + '/.config/plot_manager/plot_manager.yaml')
25 | skel_config_file = script_path.joinpath('plot_manager.skel.yaml')
26 |
27 | # Date and Time Stuff
28 | current_military_time = datetime.now().strftime('%Y%m%d%H%M%S')
29 |
30 | # Setup Module logging. Main logging is configured in system_logging.py
31 | setup_logging()
32 | with open(config_file, 'r') as config:
33 | server = yaml.safe_load(config)
34 | level = server['log_level']
35 | level = logging._checkLevel(level)
36 | log = logging.getLogger(__name__)
37 | log.setLevel(level)
38 |
39 |
40 | def config_file_update():
41 | """
42 | Function to determine if we need to update our yaml configuration file after an upgrade.
43 | """
44 | log.debug('config_file_update() Started....')
45 | if os.path.isfile(skel_config_file):
46 | with open(config_file, 'r') as current_config:
47 | current_config = yaml.safe_load(current_config)
48 | with open(skel_config_file, 'r') as temp_config:
49 | temp_config = yaml.safe_load(temp_config)
50 | temp_current_config = flatten(current_config)
51 | temp_temp_config = flatten(temp_config)
52 | updates = (dict((k, v) for k, v in temp_temp_config.items() if k not in temp_current_config))
53 | if updates != {}:
54 | copyfile(skel_config_file, (str(Path.home()) + '/.config/plot_manager/Config_Instructions.yaml'))
55 | copyfile(config_file, (str(Path.home()) + f'/.config/plot_manager/plot_manager.yaml.{current_military_time}'))
56 | temp_current_config.update(updates)
57 | new_config = (dict((k, v) for k, v in temp_current_config.items() if k in temp_temp_config))
58 | else:
59 | new_config = (dict((k, v) for k, v in temp_current_config.items() if k not in temp_temp_config))
60 | if new_config != {}:
61 | new_config = (dict((k, v) for k, v in temp_current_config.items() if k in temp_temp_config))
62 | current_config = unflatten(new_config)
63 | current_config.update({'configured': False})
64 | with open((str(Path.home()) + '/.config/plot_manager/plot_manager.yaml'), 'w') as f:
65 | yaml.safe_dump(current_config, f)
66 | log.debug(f'Config File: {config_file} updated. Update as necessary to run this script.')
67 | exit()
68 | else:
69 | log.debug('No config file changes necessary! No changes made.')
70 | else:
71 | log.debug('New configuration file not found. No changes made.')
72 |
73 |
74 |
75 | class PlotManager:
76 | if not os.path.isfile(config_file):
77 | log.debug(f'Plot_Manager config file does not exist at: {config_file}')
78 | log.debug("Please check file path and try again.")
79 | exit()
80 | else:
81 | def __init__(self, configured, hostname, pools, remote_harvesters, network_interface_threshold, domain_name,
82 | notifications, pb, email, sms, temp_dirs, temp_dirs_critical, remote_harvester_priority, network_interface,
83 | dst_dirs, dst_dirs_critical, dst_dirs_critical_alert_sent,temp_dirs_critical_alert_sent,
84 | warnings, emails, phones, twilio_from, twilio_account,
85 | twilio_token, pb_api, logging, log_level):
86 | self.configured = configured
87 | self.hostname = hostname
88 | self.domain_name = domain_name
89 | self.pools = pools
90 | self.remote_harvesters = remote_harvesters
91 | self.remote_harvester_priority = remote_harvester_priority
92 | self.network_interface = network_interface
93 | self.network_interface_threshold = network_interface_threshold
94 | self.notifications = notifications
95 | self.pb = pb
96 | self.email = email
97 | self.sms = sms
98 | self.warnings = warnings
99 | self.emails = emails
100 | self.phones = phones
101 | self.twilio_from = twilio_from
102 | self.twilio_account = twilio_account
103 | self.twilio_token = twilio_token
104 | self.pb_api = pb_api
105 | self.temp_dirs = temp_dirs
106 | self.temp_dirs_critical = temp_dirs_critical
107 | self.temp_dirs_critical_alert_sent = temp_dirs_critical_alert_sent
108 | self.dst_dirs = dst_dirs
109 | self.dst_dirs_critical = dst_dirs_critical
110 | self.dst_dirs_critical_alert_sent = dst_dirs_critical_alert_sent
111 | self.logging = logging
112 | self.log_level = log_level
113 |
114 | @classmethod
115 | def read_configs(cls):
116 | with open (config_file, 'r') as config:
117 | server = yaml.safe_load(config)
118 | return cls(
119 | configured=server['configured'],
120 | hostname=server['hostname'],
121 | domain_name=server['domain_name'],
122 | pools=server['pools'],
123 | remote_harvesters=server['remote_harvesters'],
124 | remote_harvester_priority=server['remote_harvester_priority'],
125 | network_interface=server['network_interface'],
126 | network_interface_threshold=server['network_interface_threshold'],
127 | notifications=server['notifications']['active'],
128 | pb=server['notifications']['methods']['pb'],
129 | email=server['notifications']['methods']['email'],
130 | sms=server['notifications']['methods']['sms'],
131 | warnings=server['notifications']['types']['warnings'],
132 | emails=server['notifications']['emails'],
133 | phones=server['notifications']['phones'],
134 | twilio_from=server['notifications']['accounts']['twilio']['from'],
135 | twilio_account=server['notifications']['accounts']['twilio']['account'],
136 | twilio_token=server['notifications']['accounts']['twilio']['token'],
137 | pb_api=server['notifications']['accounts']['pushBullet']['api'],
138 | temp_dirs=server['local_plotter']['temp_dirs']['dirs'],
139 | temp_dirs_critical=server['local_plotter']['temp_dirs']['critical'],
140 | temp_dirs_critical_alert_sent=server['local_plotter']['temp_dirs']['critical_alert_sent'],
141 | dst_dirs=server['local_plotter']['dst_dirs']['dirs'],
142 | dst_dirs_critical=server['local_plotter']['dst_dirs']['critical'],
143 | dst_dirs_critical_alert_sent=server['local_plotter']['dst_dirs']['critical_alert_sent'],
144 | logging=server['logging'],
145 | log_level=server['log_level'])
146 |
147 |
148 | def toggle_notification(self, notification):
149 | if getattr(self, notification):
150 | print('Changing to False')
151 | with open(config_file) as f:
152 | server = yaml.safe_load(f)
153 | server['notifications']['methods'][notification] = False
154 | with open(config_file, 'w') as f:
155 | yaml.safe_dump(server, f)
156 | else:
157 | print ('Changing to True')
158 | with open(config_file) as f:
159 | server = yaml.safe_load(f)
160 | server['notifications']['methods'][notification] = True
161 | with open(config_file, 'w') as f:
162 | yaml.safe_dump(server, f)
163 |
164 | def set_notification(self, notification, value):
165 | if getattr(self, notification) == value:
166 | pass
167 | else:
168 | with open(config_file) as f:
169 | server = yaml.safe_load(f)
170 | server['notifications']['methods'][notification] = value
171 | with open(config_file, 'w') as f:
172 | yaml.safe_dump(server, f)
173 |
174 | def temp_dir_usage(self):
175 | temp_dir_usage = {}
176 | for dir in self.temp_dirs:
177 | usage = psutil.disk_usage(dir)
178 | temp_dir_usage[dir] = int(usage.percent)
179 | return temp_dir_usage
180 |
181 | def get_critical_temp_dir_usage(self):
182 | paths = self.temp_dir_usage()
183 | return dict((k, v) for k, v in paths.items() if v > self.temp_dirs_critical)
184 |
185 |
186 | def dst_dir_usage(self):
187 | dst_dir_usage = {}
188 | for dir in self.dst_dirs:
189 | usage = psutil.disk_usage(dir)
190 | dst_dir_usage[dir] = int(usage.percent)
191 | return dst_dir_usage
192 |
193 | def get_critical_dst_dir_usage(self):
194 | paths = self.dst_dir_usage()
195 | return dict((k, v) for k, v in paths.items() if v > self.dst_dirs_critical)
196 |
197 |
198 | def toggle_alert_sent(self, alert):
199 | if alert == 'temp_dirs_critical_alert_sent':
200 | if getattr(self, alert):
201 | print('Changing to False')
202 | with open(config_file) as f:
203 | server = yaml.safe_load(f)
204 | server['local_plotter']['temp_dirs']['critical_alert_sent'] = False
205 | with open(config_file, 'w') as f:
206 | yaml.safe_dump(server, f)
207 | else:
208 | print ('Changing to True')
209 | with open(config_file) as f:
210 | server = yaml.safe_load(f)
211 | server['local_plotter']['temp_dirs']['critical_alert_sent'] = True
212 | with open(config_file, 'w') as f:
213 | yaml.safe_dump(server, f)
214 | elif alert == 'dst_dirs_critical_alert_sent':
215 | if getattr(self, alert):
216 | print('Changing to False')
217 | with open(config_file) as f:
218 | server = yaml.safe_load(f)
219 | server['local_plotter']['dst_dirs']['critical_alert_sent'] = False
220 | with open(config_file, 'w') as f:
221 | yaml.safe_dump(server, f)
222 | else:
223 | print('Changing to True')
224 | with open(config_file) as f:
225 | server = yaml.safe_load(f)
226 | server['local_plotter']['dst_dirs']['critical_alert_sent'] = True
227 | with open(config_file, 'w') as f:
228 | yaml.safe_dump(server, f)
229 |
230 | def main():
231 | print("Not intended to be run directly.")
232 | print("This is the systemwide DriveManager Class module.")
233 | print("It is called by other modules.")
234 | exit()
235 |
236 | if __name__ == '__main__':
237 | main()
238 |
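The `toggle_notification()` and `toggle_alert_sent()` methods above share one pattern: load the whole YAML document, flip a single flag, and write the whole document back. A minimal sketch of that round trip, run against a throwaway file rather than the real `~/.config/plot_manager/plot_manager.yaml`:

```python
# Read-modify-write round trip mirroring PlotManager.toggle_notification().
# Uses a temp file so nothing in ~/.config is touched.
import os
import tempfile
import yaml

path = os.path.join(tempfile.mkdtemp(), 'plot_manager.yaml')
with open(path, 'w') as f:
    yaml.safe_dump({'notifications': {'methods': {'email': True, 'sms': False}}}, f)

# Load the full document, flip one flag, write the full document back.
with open(path) as f:
    server = yaml.safe_load(f)
server['notifications']['methods']['email'] = not server['notifications']['methods']['email']
with open(path, 'w') as f:
    yaml.safe_dump(server, f)

with open(path) as f:
    print(yaml.safe_load(f)['notifications']['methods']['email'])  # False
```

Because the entire document is rewritten each time, any comments in the YAML file are lost on the first toggle; that is why the project keeps the commented skel file separate from the live config.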
--------------------------------------------------------------------------------
/chiaplot/readme.md:
--------------------------------------------------------------------------------
1 |
2 | Chia Plot, Drive Manager, Coin Monitor & Auto Drive (V0.98 - October 15th, 2021)
3 |
13 | ### Basic Installation and Configuration Instructions
14 |
15 | Please read the notes below from the config file instructions as most of these are self explanatory:
16 |
17 |
18 | ```
19 | # VERSION = "V0.98 (2021-10-15)"
20 |
21 | # Once you have made the necessary modifications to this file, change this to
22 | # True.
23 |
24 | configured: False
25 |
26 | # Enter the hostname of this server:
27 | hostname: chiaplot01
28 | domain_name: mydomain.com
29 |
30 | #Are we plotting for pools (prepends `portable.` to the plot name)
31 | pools: True
32 |
33 | # Enter Logging Information
34 | logging: True
35 | log_level: DEBUG
36 |
37 | # I use Plotman to do my plotting, but this should work for anything. This
38 | # has NOTHING to do with setting up your plotting configuration and is
39 | # only used for monitoring drive space for notifications. Set to True if
40 | # locally plotting and configure the rest of the settings.
41 | local_plotter:
42 | active: True
43 |
44 | # Make sure to use the actual mountpoint not the device.
45 | temp_dirs:
46 | dirs:
47 | - /mnt/nvme/drive0
48 | - /mnt/nvme/drive1
49 | - /mnt/nvme/drive2
50 | - /mnt/nvme/drive3
51 | # At what usage % should we send a critical error? Do not make this too low
52 | # or you will get nuisance reports.
53 | critical: 95
54 | critical_alert_sent: False
55 |
56 | # This is the `-d` directory that you are using for your plots. Currently
57 | # we only support a single drive here for monitoring. MUST have the
58 | # trailing '/'.
59 | dst_dirs:
60 | dirs:
61 | - /mnt/ssdraid/array0/
62 |
63 | # At what % utilization do we send an error?
64 | critical: 95
65 | critical_alert_sent: True
66 |
67 | # If you are running multiple remote harvesters,
68 | # enter their hostnames below. These hostnames
69 | # should be configured for passwordless ssh and should be configured
70 | # such that when you ping the hostname, it goes across the fastest
71 | # interface you have between these harvesters. Set to True if you
72 | # have multiple harvesters and want to automate sending plots to
73 | # the Harvester with the least number of plots. If you are only
74 | # running a single harvester, just list its hostname here.
75 | remote_harvesters:
76 | - chianas01
77 | - chianas02
78 | - chianas03
79 |
80 |
81 | # If you run multiple harvesters, each of those harvesters can be configured to
82 | # either replace non-pool plots or not. If they are configured to replace non-pool
83 | # plots, you can also configure them to fill empty drives first (which one would think
84 | # would be the best course of action in most cases). Each harvester will report back
85 | # to the plotter how many old plots it has to replace as well as how many 'free' plot
86 | # spaces it has to fill. Here is where you tell the plotter which of those to prioritize
87 | # in the event there are both. IN MOST CASES this will be fill if you want to maximize
88 | # the total NUMBER of plots on your harvesters. If you choose 'fill' here, then the
89 | # plotter looks to see which harvester has the most number of EMPTY spaces to fill and
90 | # selects that harvester to send the next plot to, however if you select 'replace' here,
91 | # then the plotter looks at the overall space available that each harvester reports and
92 | # utilizes that to determine where to send the plot. In most cases this will be old_plots +
93 | # free_space = total_plots_space_available.
94 | # fill = fill all empty space first
95 | # replace = replace all old plots first, then fill
96 | remote_harvester_priority: fill
97 |
98 | # Enter the name (as shown by ifconfig) of the interface that you SEND plots on
99 | # to your harvesters. This is used to check for network traffic to prevent multiple
100 | # plots from being transferred at the same time.
101 | network_interface: eth0
102 |
103 | # This is the overall utilization percentage of the above interface at or above
104 | # which we assume a plot transfer is taking place. You should really TEST
105 | # this to make sure it works for your needs. If you have a dedicated interface to move
106 | # plots, it can be set very low (1 to 2); if you have a shared interface,
107 | # test while a plot transfer is running and set it to a number that makes sense.
108 | # To test simply run the following command and look at the very last number:
109 | # /usr/bin/sar -n DEV 1 50 | egrep eth0
110 | network_interface_threshold: 2
111 |
112 |
113 | # This is where we set up our notifications
114 | notifications:
115 | active: True
116 | methods:
117 | pb: False
118 | email: True
119 | sms: False
120 | types:
121 | new_plot_drive: True
122 | daily_update: True
123 | per_plot: True
124 | warnings: True
125 | # What email addresses will get the emails?
126 | emails:
127 | - someone@gmail.com
128 | - someoneelse@gmail.com
129 | # What phone numbers will receive the SMS text messages? Include '+1'
130 | phones:
131 | - '+18584140000'
132 |
133 | # These are your notification account settings. For email, you must configure
134 | # your local MTA. The installer installs Postfix by default. Twilio (SMS) requires
135 | # a paid account; PushBullet is free.
136 | accounts:
137 | twilio:
138 | from: '+18582640000'
139 | account: your_account_id
140 | token: your_account_token
141 | pushBullet:
142 | api: your_pushbullet_api_token
143 | ```
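The `remote_harvester_priority` behavior described in the comments above can be sketched as follows. The report field names (`free_spaces`, `old_plots`) are hypothetical stand-ins, not the exact keys this project's harvester exports use:

```python
# Illustrative selection logic for remote_harvester_priority.
# Field names are hypothetical; the real export format may differ.
def pick_harvester(reports, priority='fill'):
    if priority == 'fill':
        # 'fill': the harvester with the most empty plot spaces wins.
        return max(reports, key=lambda h: reports[h]['free_spaces'])
    # 'replace': the harvester with the most total plot space
    # (old plots to replace + free spaces) wins.
    return max(reports, key=lambda h: reports[h]['old_plots'] + reports[h]['free_spaces'])

reports = {
    'chianas01': {'free_spaces': 12, 'old_plots': 40},
    'chianas02': {'free_spaces': 30, 'old_plots': 2},
}
print(pick_harvester(reports, 'fill'))     # chianas02
print(pick_harvester(reports, 'replace'))  # chianas01
```

Note how the same two harvesters rank differently under each priority: `fill` maximizes the count of plots placed quickly, while `replace` steers plots toward the machine with the most reclaimable space overall.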
144 |
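The `network_interface_threshold` check suggested by the config comments boils down to comparing the last field of a `sar -n DEV` line (the `%ifutil` column) against the threshold. A minimal sketch, using a fabricated sample line rather than live `sar` output:

```python
# Parse the final column (%ifutil) of a `sar -n DEV` output line, as the
# config comment suggests. The sample line below is made up for illustration.
def iface_utilization(sar_line):
    return float(sar_line.split()[-1])

sample = 'Average: eth0 812.00 9120.00 53.10 94500.20 0.00 0.00 0.00 7.56'
threshold = 2  # network_interface_threshold from the config
print(iface_utilization(sample) > threshold)  # True -> assume a transfer is running
```

In a real deployment you would run `sar` via `subprocess`, grep for your interface, and skip starting a new transfer whenever this comparison is true.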
--------------------------------------------------------------------------------
/chiaplot/system_logging.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python3
2 | # -*- coding: utf-8 -*-
3 |
4 | """
5 | Part of drive_manager. This is the logging module.
6 | For use with plot_manager V0.9
7 | """
8 |
9 | VERSION = "V0.92 (2021-05-31)"
10 |
11 | import logging.config
12 | import logging
13 | import logging.handlers
14 | import yaml
15 | import pathlib
16 |
17 | user_home_dir = str(pathlib.Path.home())
18 | config_file = (user_home_dir + '/.config/plot_manager/plot_manager.yaml')
19 | script_path = pathlib.Path(__file__).parent.resolve()
20 |
21 |
22 | def setup_logging(default_level=logging.CRITICAL):
23 | """Module to configure program-wide logging."""
24 | with open(config_file, 'r') as config:
25 | server = yaml.safe_load(config)
26 | log = logging.getLogger(__name__)
27 | level = logging._checkLevel(server['log_level'])
28 | log.setLevel(level)
29 | if server['logging']:
30 | try:
31 | logging.config.dictConfig(log_config)
32 | except Exception as e:
33 | print(e)
34 | print('Error in Logging Configuration. Using default configs. Check File Permissions (for a start)!')
35 | logging.basicConfig(level=default_level)
36 | return log
37 | else:
38 | log.disabled = True
39 |
40 | log_config = {
41 | "version": 1,
42 | "disable_existing_loggers": False,
43 | "formatters": {
44 | "console": {
45 | "format": "%(message)s"
46 | },
47 | "console_expanded": {
48 | "format": "%(module)2s:%(lineno)s - %(funcName)3s: %(levelname)3s: %(message)s"
49 | },
50 | "standard": {
51 | "format": "%(asctime)s - %(module)2s:%(lineno)s - %(funcName)3s: %(levelname)3s %(message)s"
52 | },
53 | "error": {
54 | "format": "%(levelname)s Congratulations!
5 |
At {{current_time}} you received some more Chia!
6 |
7 | You now have {{current_chia_coins}} Chia Coins!
8 |
9 |
10 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures:
--------------------------------------------------------------------------------
1 | drive_structures_36_drives
2 | drive_structures_45_drives
3 | drive_structures_60_infinidat_enclosure1
4 | drive_structures_60_infinidat_enclosure2
5 | drive_structures_60_infinidat_enclosure3
6 | drive_structures_60_infinidat_enclosure4
7 | drive_structures_60_infinidat_enclosure5
8 | drive_structures_60_infinidat_enclosure6
9 | drive_structures_60_infinidat_enclosure7
10 | drive_structures_60_infinidat_enclosure8
11 | drive_structures_81_drives
12 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_36_drives:
--------------------------------------------------------------------------------
1 | /mnt/enclosure0
2 | /mnt/enclosure0/front
3 | /mnt/enclosure0/front/column0
4 | /mnt/enclosure0/front/column0/drive0
5 | /mnt/enclosure0/front/column0/drive1
6 | /mnt/enclosure0/front/column0/drive2
7 | /mnt/enclosure0/front/column0/drive3
8 | /mnt/enclosure0/front/column0/drive4
9 | /mnt/enclosure0/front/column0/drive5
10 | /mnt/enclosure0/front/column1
11 | /mnt/enclosure0/front/column1/drive6
12 | /mnt/enclosure0/front/column1/drive7
13 | /mnt/enclosure0/front/column1/drive8
14 | /mnt/enclosure0/front/column1/drive9
15 | /mnt/enclosure0/front/column1/drive10
16 | /mnt/enclosure0/front/column1/drive11
17 | /mnt/enclosure0/front/column2
18 | /mnt/enclosure0/front/column2/drive12
19 | /mnt/enclosure0/front/column2/drive13
20 | /mnt/enclosure0/front/column2/drive14
21 | /mnt/enclosure0/front/column2/drive15
22 | /mnt/enclosure0/front/column2/drive16
23 | /mnt/enclosure0/front/column2/drive17
24 | /mnt/enclosure0/front/column3
25 | /mnt/enclosure0/front/column3/drive18
26 | /mnt/enclosure0/front/column3/drive19
27 | /mnt/enclosure0/front/column3/drive20
28 | /mnt/enclosure0/front/column3/drive21
29 | /mnt/enclosure0/front/column3/drive22
30 | /mnt/enclosure0/front/column3/drive23
31 | /mnt/enclosure0/rear
32 | /mnt/enclosure0/rear/column0
33 | /mnt/enclosure0/rear/column0/drive24
34 | /mnt/enclosure0/rear/column0/drive25
35 | /mnt/enclosure0/rear/column0/drive26
36 | /mnt/enclosure0/rear/column1
37 | /mnt/enclosure0/rear/column1/drive27
38 | /mnt/enclosure0/rear/column1/drive28
39 | /mnt/enclosure0/rear/column1/drive29
40 | /mnt/enclosure0/rear/column2
41 | /mnt/enclosure0/rear/column2/drive30
42 | /mnt/enclosure0/rear/column2/drive31
43 | /mnt/enclosure0/rear/column2/drive32
44 | /mnt/enclosure0/rear/column3
45 | /mnt/enclosure0/rear/column3/drive33
46 | /mnt/enclosure0/rear/column3/drive34
47 | /mnt/enclosure0/rear/column3/drive35
48 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_45_drives:
--------------------------------------------------------------------------------
1 | /mnt/enclosure0
2 | /mnt/enclosure0/front
3 | /mnt/enclosure0/front/column0
4 | /mnt/enclosure0/front/column0/drive0
5 | /mnt/enclosure0/front/column0/drive1
6 | /mnt/enclosure0/front/column0/drive2
7 | /mnt/enclosure0/front/column0/drive3
8 | /mnt/enclosure0/front/column0/drive4
9 | /mnt/enclosure0/front/column0/drive5
10 | /mnt/enclosure0/front/column1
11 | /mnt/enclosure0/front/column1/drive6
12 | /mnt/enclosure0/front/column1/drive7
13 | /mnt/enclosure0/front/column1/drive8
14 | /mnt/enclosure0/front/column1/drive9
15 | /mnt/enclosure0/front/column1/drive10
16 | /mnt/enclosure0/front/column1/drive11
17 | /mnt/enclosure0/front/column2
18 | /mnt/enclosure0/front/column2/drive12
19 | /mnt/enclosure0/front/column2/drive13
20 | /mnt/enclosure0/front/column2/drive14
21 | /mnt/enclosure0/front/column2/drive15
22 | /mnt/enclosure0/front/column2/drive16
23 | /mnt/enclosure0/front/column2/drive17
24 | /mnt/enclosure0/front/column3
25 | /mnt/enclosure0/front/column3/drive18
26 | /mnt/enclosure0/front/column3/drive19
27 | /mnt/enclosure0/front/column3/drive20
28 | /mnt/enclosure0/front/column3/drive21
29 | /mnt/enclosure0/front/column3/drive22
30 | /mnt/enclosure0/front/column3/drive23
31 | /mnt/enclosure0/rear
32 | /mnt/enclosure0/rear/column0
33 | /mnt/enclosure0/rear/column0/drive24
34 | /mnt/enclosure0/rear/column0/drive25
35 | /mnt/enclosure0/rear/column0/drive26
36 | /mnt/enclosure0/rear/column1
37 | /mnt/enclosure0/rear/column1/drive27
38 | /mnt/enclosure0/rear/column1/drive28
39 | /mnt/enclosure0/rear/column1/drive29
40 | /mnt/enclosure0/rear/column1/drive30
41 | /mnt/enclosure0/rear/column1/drive31
42 | /mnt/enclosure0/rear/column1/drive32
43 | /mnt/enclosure0/rear/column2
44 | /mnt/enclosure0/rear/column2/drive33
45 | /mnt/enclosure0/rear/column2/drive34
46 | /mnt/enclosure0/rear/column2/drive35
47 | /mnt/enclosure0/rear/column2/drive36
48 | /mnt/enclosure0/rear/column2/drive37
49 | /mnt/enclosure0/rear/column2/drive38
50 | /mnt/enclosure0/rear/column3
51 | /mnt/enclosure0/rear/column3/drive39
52 | /mnt/enclosure0/rear/column3/drive40
53 | /mnt/enclosure0/rear/column3/drive41
54 | /mnt/enclosure0/rear/column3/drive42
55 | /mnt/enclosure0/rear/column3/drive43
56 | /mnt/enclosure0/rear/column3/drive44
57 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_60_infinidat_enclosure1:
--------------------------------------------------------------------------------
1 | /mnt/enclosure1
2 | /mnt/enclosure1/front
3 | /mnt/enclosure1/front/column0
4 | /mnt/enclosure1/front/column0/drive101
5 | /mnt/enclosure1/front/column0/drive102
6 | /mnt/enclosure1/front/column0/drive103
7 | /mnt/enclosure1/front/column0/drive104
8 | /mnt/enclosure1/front/column0/drive105
9 | /mnt/enclosure1/front/column0/drive106
10 | /mnt/enclosure1/front/column0/drive107
11 | /mnt/enclosure1/front/column0/drive108
12 | /mnt/enclosure1/front/column0/drive109
13 | /mnt/enclosure1/front/column0/drive110
14 | /mnt/enclosure1/front/column0/drive111
15 | /mnt/enclosure1/front/column0/drive112
16 | /mnt/enclosure1/front/column0/drive113
17 | /mnt/enclosure1/front/column0/drive114
18 | /mnt/enclosure1/front/column0/drive115
19 | /mnt/enclosure1/front/column0/drive116
20 | /mnt/enclosure1/front/column0/drive117
21 | /mnt/enclosure1/front/column0/drive118
22 | /mnt/enclosure1/front/column0/drive119
23 | /mnt/enclosure1/front/column0/drive120
24 | /mnt/enclosure1/front/column0/drive121
25 | /mnt/enclosure1/front/column0/drive122
26 | /mnt/enclosure1/front/column0/drive123
27 | /mnt/enclosure1/front/column0/drive124
28 | /mnt/enclosure1/front/column0/drive125
29 | /mnt/enclosure1/front/column0/drive126
30 | /mnt/enclosure1/front/column0/drive127
31 | /mnt/enclosure1/front/column0/drive128
32 | /mnt/enclosure1/front/column0/drive129
33 | /mnt/enclosure1/front/column0/drive130
34 | /mnt/enclosure1/front/column0/drive131
35 | /mnt/enclosure1/front/column0/drive132
36 | /mnt/enclosure1/front/column0/drive133
37 | /mnt/enclosure1/front/column0/drive134
38 | /mnt/enclosure1/front/column0/drive135
39 | /mnt/enclosure1/front/column0/drive136
40 | /mnt/enclosure1/front/column0/drive137
41 | /mnt/enclosure1/front/column0/drive138
42 | /mnt/enclosure1/front/column0/drive139
43 | /mnt/enclosure1/front/column0/drive140
44 | /mnt/enclosure1/front/column0/drive141
45 | /mnt/enclosure1/front/column0/drive142
46 | /mnt/enclosure1/front/column0/drive143
47 | /mnt/enclosure1/front/column0/drive144
48 | /mnt/enclosure1/front/column0/drive145
49 | /mnt/enclosure1/front/column0/drive146
50 | /mnt/enclosure1/front/column0/drive147
51 | /mnt/enclosure1/front/column0/drive148
52 | /mnt/enclosure1/front/column0/drive149
53 | /mnt/enclosure1/front/column0/drive150
54 | /mnt/enclosure1/front/column0/drive151
55 | /mnt/enclosure1/front/column0/drive152
56 | /mnt/enclosure1/front/column0/drive153
57 | /mnt/enclosure1/front/column0/drive154
58 | /mnt/enclosure1/front/column0/drive155
59 | /mnt/enclosure1/front/column0/drive156
60 | /mnt/enclosure1/front/column0/drive157
61 | /mnt/enclosure1/front/column0/drive158
62 | /mnt/enclosure1/front/column0/drive159
63 | /mnt/enclosure1/front/column0/drive160
64 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_60_infinidat_enclosure2:
--------------------------------------------------------------------------------
1 | /mnt/enclosure2
2 | /mnt/enclosure2/front
3 | /mnt/enclosure2/front/column0
4 | /mnt/enclosure2/front/column0/drive201
5 | /mnt/enclosure2/front/column0/drive202
6 | /mnt/enclosure2/front/column0/drive203
7 | /mnt/enclosure2/front/column0/drive204
8 | /mnt/enclosure2/front/column0/drive205
9 | /mnt/enclosure2/front/column0/drive206
10 | /mnt/enclosure2/front/column0/drive207
11 | /mnt/enclosure2/front/column0/drive208
12 | /mnt/enclosure2/front/column0/drive209
13 | /mnt/enclosure2/front/column0/drive210
14 | /mnt/enclosure2/front/column0/drive211
15 | /mnt/enclosure2/front/column0/drive212
16 | /mnt/enclosure2/front/column0/drive213
17 | /mnt/enclosure2/front/column0/drive214
18 | /mnt/enclosure2/front/column0/drive215
19 | /mnt/enclosure2/front/column0/drive216
20 | /mnt/enclosure2/front/column0/drive217
21 | /mnt/enclosure2/front/column0/drive218
22 | /mnt/enclosure2/front/column0/drive219
23 | /mnt/enclosure2/front/column0/drive220
24 | /mnt/enclosure2/front/column0/drive221
25 | /mnt/enclosure2/front/column0/drive222
26 | /mnt/enclosure2/front/column0/drive223
27 | /mnt/enclosure2/front/column0/drive224
28 | /mnt/enclosure2/front/column0/drive225
29 | /mnt/enclosure2/front/column0/drive226
30 | /mnt/enclosure2/front/column0/drive227
31 | /mnt/enclosure2/front/column0/drive228
32 | /mnt/enclosure2/front/column0/drive229
33 | /mnt/enclosure2/front/column0/drive230
34 | /mnt/enclosure2/front/column0/drive231
35 | /mnt/enclosure2/front/column0/drive232
36 | /mnt/enclosure2/front/column0/drive233
37 | /mnt/enclosure2/front/column0/drive234
38 | /mnt/enclosure2/front/column0/drive235
39 | /mnt/enclosure2/front/column0/drive236
40 | /mnt/enclosure2/front/column0/drive237
41 | /mnt/enclosure2/front/column0/drive238
42 | /mnt/enclosure2/front/column0/drive239
43 | /mnt/enclosure2/front/column0/drive240
44 | /mnt/enclosure2/front/column0/drive241
45 | /mnt/enclosure2/front/column0/drive242
46 | /mnt/enclosure2/front/column0/drive243
47 | /mnt/enclosure2/front/column0/drive244
48 | /mnt/enclosure2/front/column0/drive245
49 | /mnt/enclosure2/front/column0/drive246
50 | /mnt/enclosure2/front/column0/drive247
51 | /mnt/enclosure2/front/column0/drive248
52 | /mnt/enclosure2/front/column0/drive249
53 | /mnt/enclosure2/front/column0/drive250
54 | /mnt/enclosure2/front/column0/drive251
55 | /mnt/enclosure2/front/column0/drive252
56 | /mnt/enclosure2/front/column0/drive253
57 | /mnt/enclosure2/front/column0/drive254
58 | /mnt/enclosure2/front/column0/drive255
59 | /mnt/enclosure2/front/column0/drive256
60 | /mnt/enclosure2/front/column0/drive257
61 | /mnt/enclosure2/front/column0/drive258
62 | /mnt/enclosure2/front/column0/drive259
63 | /mnt/enclosure2/front/column0/drive260
64 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_60_infinidat_enclosure3:
--------------------------------------------------------------------------------
1 | /mnt/enclosure3
2 | /mnt/enclosure3/front
3 | /mnt/enclosure3/front/column0
4 | /mnt/enclosure3/front/column0/drive301
5 | /mnt/enclosure3/front/column0/drive302
6 | /mnt/enclosure3/front/column0/drive303
7 | /mnt/enclosure3/front/column0/drive304
8 | /mnt/enclosure3/front/column0/drive305
9 | /mnt/enclosure3/front/column0/drive306
10 | /mnt/enclosure3/front/column0/drive307
11 | /mnt/enclosure3/front/column0/drive308
12 | /mnt/enclosure3/front/column0/drive309
13 | /mnt/enclosure3/front/column0/drive310
14 | /mnt/enclosure3/front/column0/drive311
15 | /mnt/enclosure3/front/column0/drive312
16 | /mnt/enclosure3/front/column0/drive313
17 | /mnt/enclosure3/front/column0/drive314
18 | /mnt/enclosure3/front/column0/drive315
19 | /mnt/enclosure3/front/column0/drive316
20 | /mnt/enclosure3/front/column0/drive317
21 | /mnt/enclosure3/front/column0/drive318
22 | /mnt/enclosure3/front/column0/drive319
23 | /mnt/enclosure3/front/column0/drive320
24 | /mnt/enclosure3/front/column0/drive321
25 | /mnt/enclosure3/front/column0/drive322
26 | /mnt/enclosure3/front/column0/drive323
27 | /mnt/enclosure3/front/column0/drive324
28 | /mnt/enclosure3/front/column0/drive325
29 | /mnt/enclosure3/front/column0/drive326
30 | /mnt/enclosure3/front/column0/drive327
31 | /mnt/enclosure3/front/column0/drive328
32 | /mnt/enclosure3/front/column0/drive329
33 | /mnt/enclosure3/front/column0/drive330
34 | /mnt/enclosure3/front/column0/drive331
35 | /mnt/enclosure3/front/column0/drive332
36 | /mnt/enclosure3/front/column0/drive333
37 | /mnt/enclosure3/front/column0/drive334
38 | /mnt/enclosure3/front/column0/drive335
39 | /mnt/enclosure3/front/column0/drive336
40 | /mnt/enclosure3/front/column0/drive337
41 | /mnt/enclosure3/front/column0/drive338
42 | /mnt/enclosure3/front/column0/drive339
43 | /mnt/enclosure3/front/column0/drive340
44 | /mnt/enclosure3/front/column0/drive341
45 | /mnt/enclosure3/front/column0/drive342
46 | /mnt/enclosure3/front/column0/drive343
47 | /mnt/enclosure3/front/column0/drive344
48 | /mnt/enclosure3/front/column0/drive345
49 | /mnt/enclosure3/front/column0/drive346
50 | /mnt/enclosure3/front/column0/drive347
51 | /mnt/enclosure3/front/column0/drive348
52 | /mnt/enclosure3/front/column0/drive349
53 | /mnt/enclosure3/front/column0/drive350
54 | /mnt/enclosure3/front/column0/drive351
55 | /mnt/enclosure3/front/column0/drive352
56 | /mnt/enclosure3/front/column0/drive353
57 | /mnt/enclosure3/front/column0/drive354
58 | /mnt/enclosure3/front/column0/drive355
59 | /mnt/enclosure3/front/column0/drive356
60 | /mnt/enclosure3/front/column0/drive357
61 | /mnt/enclosure3/front/column0/drive358
62 | /mnt/enclosure3/front/column0/drive359
63 | /mnt/enclosure3/front/column0/drive360
64 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_60_infinidat_enclosure4:
--------------------------------------------------------------------------------
1 | /mnt/enclosure4
2 | /mnt/enclosure4/front
3 | /mnt/enclosure4/front/column0
4 | /mnt/enclosure4/front/column0/drive401
5 | /mnt/enclosure4/front/column0/drive402
6 | /mnt/enclosure4/front/column0/drive403
7 | /mnt/enclosure4/front/column0/drive404
8 | /mnt/enclosure4/front/column0/drive405
9 | /mnt/enclosure4/front/column0/drive406
10 | /mnt/enclosure4/front/column0/drive407
11 | /mnt/enclosure4/front/column0/drive408
12 | /mnt/enclosure4/front/column0/drive409
13 | /mnt/enclosure4/front/column0/drive410
14 | /mnt/enclosure4/front/column0/drive411
15 | /mnt/enclosure4/front/column0/drive412
16 | /mnt/enclosure4/front/column0/drive413
17 | /mnt/enclosure4/front/column0/drive414
18 | /mnt/enclosure4/front/column0/drive415
19 | /mnt/enclosure4/front/column0/drive416
20 | /mnt/enclosure4/front/column0/drive417
21 | /mnt/enclosure4/front/column0/drive418
22 | /mnt/enclosure4/front/column0/drive419
23 | /mnt/enclosure4/front/column0/drive420
24 | /mnt/enclosure4/front/column0/drive421
25 | /mnt/enclosure4/front/column0/drive422
26 | /mnt/enclosure4/front/column0/drive423
27 | /mnt/enclosure4/front/column0/drive424
28 | /mnt/enclosure4/front/column0/drive425
29 | /mnt/enclosure4/front/column0/drive426
30 | /mnt/enclosure4/front/column0/drive427
31 | /mnt/enclosure4/front/column0/drive428
32 | /mnt/enclosure4/front/column0/drive429
33 | /mnt/enclosure4/front/column0/drive430
34 | /mnt/enclosure4/front/column0/drive431
35 | /mnt/enclosure4/front/column0/drive432
36 | /mnt/enclosure4/front/column0/drive433
37 | /mnt/enclosure4/front/column0/drive434
38 | /mnt/enclosure4/front/column0/drive435
39 | /mnt/enclosure4/front/column0/drive436
40 | /mnt/enclosure4/front/column0/drive437
41 | /mnt/enclosure4/front/column0/drive438
42 | /mnt/enclosure4/front/column0/drive439
43 | /mnt/enclosure4/front/column0/drive440
44 | /mnt/enclosure4/front/column0/drive441
45 | /mnt/enclosure4/front/column0/drive442
46 | /mnt/enclosure4/front/column0/drive443
47 | /mnt/enclosure4/front/column0/drive444
48 | /mnt/enclosure4/front/column0/drive445
49 | /mnt/enclosure4/front/column0/drive446
50 | /mnt/enclosure4/front/column0/drive447
51 | /mnt/enclosure4/front/column0/drive448
52 | /mnt/enclosure4/front/column0/drive449
53 | /mnt/enclosure4/front/column0/drive450
54 | /mnt/enclosure4/front/column0/drive451
55 | /mnt/enclosure4/front/column0/drive452
56 | /mnt/enclosure4/front/column0/drive453
57 | /mnt/enclosure4/front/column0/drive454
58 | /mnt/enclosure4/front/column0/drive455
59 | /mnt/enclosure4/front/column0/drive456
60 | /mnt/enclosure4/front/column0/drive457
61 | /mnt/enclosure4/front/column0/drive458
62 | /mnt/enclosure4/front/column0/drive459
63 | /mnt/enclosure4/front/column0/drive460
64 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_60_infinidat_enclosure5:
--------------------------------------------------------------------------------
1 | /mnt/enclosure5
2 | /mnt/enclosure5/front
3 | /mnt/enclosure5/front/column0
4 | /mnt/enclosure5/front/column0/drive501
5 | /mnt/enclosure5/front/column0/drive502
6 | /mnt/enclosure5/front/column0/drive503
7 | /mnt/enclosure5/front/column0/drive504
8 | /mnt/enclosure5/front/column0/drive505
9 | /mnt/enclosure5/front/column0/drive506
10 | /mnt/enclosure5/front/column0/drive507
11 | /mnt/enclosure5/front/column0/drive508
12 | /mnt/enclosure5/front/column0/drive509
13 | /mnt/enclosure5/front/column0/drive510
14 | /mnt/enclosure5/front/column0/drive511
15 | /mnt/enclosure5/front/column0/drive512
16 | /mnt/enclosure5/front/column0/drive513
17 | /mnt/enclosure5/front/column0/drive514
18 | /mnt/enclosure5/front/column0/drive515
19 | /mnt/enclosure5/front/column0/drive516
20 | /mnt/enclosure5/front/column0/drive517
21 | /mnt/enclosure5/front/column0/drive518
22 | /mnt/enclosure5/front/column0/drive519
23 | /mnt/enclosure5/front/column0/drive520
24 | /mnt/enclosure5/front/column0/drive521
25 | /mnt/enclosure5/front/column0/drive522
26 | /mnt/enclosure5/front/column0/drive523
27 | /mnt/enclosure5/front/column0/drive524
28 | /mnt/enclosure5/front/column0/drive525
29 | /mnt/enclosure5/front/column0/drive526
30 | /mnt/enclosure5/front/column0/drive527
31 | /mnt/enclosure5/front/column0/drive528
32 | /mnt/enclosure5/front/column0/drive529
33 | /mnt/enclosure5/front/column0/drive530
34 | /mnt/enclosure5/front/column0/drive531
35 | /mnt/enclosure5/front/column0/drive532
36 | /mnt/enclosure5/front/column0/drive533
37 | /mnt/enclosure5/front/column0/drive534
38 | /mnt/enclosure5/front/column0/drive535
39 | /mnt/enclosure5/front/column0/drive536
40 | /mnt/enclosure5/front/column0/drive537
41 | /mnt/enclosure5/front/column0/drive538
42 | /mnt/enclosure5/front/column0/drive539
43 | /mnt/enclosure5/front/column0/drive540
44 | /mnt/enclosure5/front/column0/drive541
45 | /mnt/enclosure5/front/column0/drive542
46 | /mnt/enclosure5/front/column0/drive543
47 | /mnt/enclosure5/front/column0/drive544
48 | /mnt/enclosure5/front/column0/drive545
49 | /mnt/enclosure5/front/column0/drive546
50 | /mnt/enclosure5/front/column0/drive547
51 | /mnt/enclosure5/front/column0/drive548
52 | /mnt/enclosure5/front/column0/drive549
53 | /mnt/enclosure5/front/column0/drive550
54 | /mnt/enclosure5/front/column0/drive551
55 | /mnt/enclosure5/front/column0/drive552
56 | /mnt/enclosure5/front/column0/drive553
57 | /mnt/enclosure5/front/column0/drive554
58 | /mnt/enclosure5/front/column0/drive555
59 | /mnt/enclosure5/front/column0/drive556
60 | /mnt/enclosure5/front/column0/drive557
61 | /mnt/enclosure5/front/column0/drive558
62 | /mnt/enclosure5/front/column0/drive559
63 | /mnt/enclosure5/front/column0/drive560
64 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_60_infinidat_enclosure6:
--------------------------------------------------------------------------------
1 | /mnt/enclosure6
2 | /mnt/enclosure6/front
3 | /mnt/enclosure6/front/column0
4 | /mnt/enclosure6/front/column0/drive601
5 | /mnt/enclosure6/front/column0/drive602
6 | /mnt/enclosure6/front/column0/drive603
7 | /mnt/enclosure6/front/column0/drive604
8 | /mnt/enclosure6/front/column0/drive605
9 | /mnt/enclosure6/front/column0/drive606
10 | /mnt/enclosure6/front/column0/drive607
11 | /mnt/enclosure6/front/column0/drive608
12 | /mnt/enclosure6/front/column0/drive609
13 | /mnt/enclosure6/front/column0/drive610
14 | /mnt/enclosure6/front/column0/drive611
15 | /mnt/enclosure6/front/column0/drive612
16 | /mnt/enclosure6/front/column0/drive613
17 | /mnt/enclosure6/front/column0/drive614
18 | /mnt/enclosure6/front/column0/drive615
19 | /mnt/enclosure6/front/column0/drive616
20 | /mnt/enclosure6/front/column0/drive617
21 | /mnt/enclosure6/front/column0/drive618
22 | /mnt/enclosure6/front/column0/drive619
23 | /mnt/enclosure6/front/column0/drive620
24 | /mnt/enclosure6/front/column0/drive621
25 | /mnt/enclosure6/front/column0/drive622
26 | /mnt/enclosure6/front/column0/drive623
27 | /mnt/enclosure6/front/column0/drive624
28 | /mnt/enclosure6/front/column0/drive625
29 | /mnt/enclosure6/front/column0/drive626
30 | /mnt/enclosure6/front/column0/drive627
31 | /mnt/enclosure6/front/column0/drive628
32 | /mnt/enclosure6/front/column0/drive629
33 | /mnt/enclosure6/front/column0/drive630
34 | /mnt/enclosure6/front/column0/drive631
35 | /mnt/enclosure6/front/column0/drive632
36 | /mnt/enclosure6/front/column0/drive633
37 | /mnt/enclosure6/front/column0/drive634
38 | /mnt/enclosure6/front/column0/drive635
39 | /mnt/enclosure6/front/column0/drive636
40 | /mnt/enclosure6/front/column0/drive637
41 | /mnt/enclosure6/front/column0/drive638
42 | /mnt/enclosure6/front/column0/drive639
43 | /mnt/enclosure6/front/column0/drive640
44 | /mnt/enclosure6/front/column0/drive641
45 | /mnt/enclosure6/front/column0/drive642
46 | /mnt/enclosure6/front/column0/drive643
47 | /mnt/enclosure6/front/column0/drive644
48 | /mnt/enclosure6/front/column0/drive645
49 | /mnt/enclosure6/front/column0/drive646
50 | /mnt/enclosure6/front/column0/drive647
51 | /mnt/enclosure6/front/column0/drive648
52 | /mnt/enclosure6/front/column0/drive649
53 | /mnt/enclosure6/front/column0/drive650
54 | /mnt/enclosure6/front/column0/drive651
55 | /mnt/enclosure6/front/column0/drive652
56 | /mnt/enclosure6/front/column0/drive653
57 | /mnt/enclosure6/front/column0/drive654
58 | /mnt/enclosure6/front/column0/drive655
59 | /mnt/enclosure6/front/column0/drive656
60 | /mnt/enclosure6/front/column0/drive657
61 | /mnt/enclosure6/front/column0/drive658
62 | /mnt/enclosure6/front/column0/drive659
63 | /mnt/enclosure6/front/column0/drive660
64 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_60_infinidat_enclosure7:
--------------------------------------------------------------------------------
1 | /mnt/enclosure7
2 | /mnt/enclosure7/front
3 | /mnt/enclosure7/front/column0
4 | /mnt/enclosure7/front/column0/drive701
5 | /mnt/enclosure7/front/column0/drive702
6 | /mnt/enclosure7/front/column0/drive703
7 | /mnt/enclosure7/front/column0/drive704
8 | /mnt/enclosure7/front/column0/drive705
9 | /mnt/enclosure7/front/column0/drive706
10 | /mnt/enclosure7/front/column0/drive707
11 | /mnt/enclosure7/front/column0/drive708
12 | /mnt/enclosure7/front/column0/drive709
13 | /mnt/enclosure7/front/column0/drive710
14 | /mnt/enclosure7/front/column0/drive711
15 | /mnt/enclosure7/front/column0/drive712
16 | /mnt/enclosure7/front/column0/drive713
17 | /mnt/enclosure7/front/column0/drive714
18 | /mnt/enclosure7/front/column0/drive715
19 | /mnt/enclosure7/front/column0/drive716
20 | /mnt/enclosure7/front/column0/drive717
21 | /mnt/enclosure7/front/column0/drive718
22 | /mnt/enclosure7/front/column0/drive719
23 | /mnt/enclosure7/front/column0/drive720
24 | /mnt/enclosure7/front/column0/drive721
25 | /mnt/enclosure7/front/column0/drive722
26 | /mnt/enclosure7/front/column0/drive723
27 | /mnt/enclosure7/front/column0/drive724
28 | /mnt/enclosure7/front/column0/drive725
29 | /mnt/enclosure7/front/column0/drive726
30 | /mnt/enclosure7/front/column0/drive727
31 | /mnt/enclosure7/front/column0/drive728
32 | /mnt/enclosure7/front/column0/drive729
33 | /mnt/enclosure7/front/column0/drive730
34 | /mnt/enclosure7/front/column0/drive731
35 | /mnt/enclosure7/front/column0/drive732
36 | /mnt/enclosure7/front/column0/drive733
37 | /mnt/enclosure7/front/column0/drive734
38 | /mnt/enclosure7/front/column0/drive735
39 | /mnt/enclosure7/front/column0/drive736
40 | /mnt/enclosure7/front/column0/drive737
41 | /mnt/enclosure7/front/column0/drive738
42 | /mnt/enclosure7/front/column0/drive739
43 | /mnt/enclosure7/front/column0/drive740
44 | /mnt/enclosure7/front/column0/drive741
45 | /mnt/enclosure7/front/column0/drive742
46 | /mnt/enclosure7/front/column0/drive743
47 | /mnt/enclosure7/front/column0/drive744
48 | /mnt/enclosure7/front/column0/drive745
49 | /mnt/enclosure7/front/column0/drive746
50 | /mnt/enclosure7/front/column0/drive747
51 | /mnt/enclosure7/front/column0/drive748
52 | /mnt/enclosure7/front/column0/drive749
53 | /mnt/enclosure7/front/column0/drive750
54 | /mnt/enclosure7/front/column0/drive751
55 | /mnt/enclosure7/front/column0/drive752
56 | /mnt/enclosure7/front/column0/drive753
57 | /mnt/enclosure7/front/column0/drive754
58 | /mnt/enclosure7/front/column0/drive755
59 | /mnt/enclosure7/front/column0/drive756
60 | /mnt/enclosure7/front/column0/drive757
61 | /mnt/enclosure7/front/column0/drive758
62 | /mnt/enclosure7/front/column0/drive759
63 | /mnt/enclosure7/front/column0/drive760
64 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_60_infinidat_enclosure8:
--------------------------------------------------------------------------------
1 | /mnt/enclosure8
2 | /mnt/enclosure8/front
3 | /mnt/enclosure8/front/column0
4 | /mnt/enclosure8/front/column0/drive801
5 | /mnt/enclosure8/front/column0/drive802
6 | /mnt/enclosure8/front/column0/drive803
7 | /mnt/enclosure8/front/column0/drive804
8 | /mnt/enclosure8/front/column0/drive805
9 | /mnt/enclosure8/front/column0/drive806
10 | /mnt/enclosure8/front/column0/drive807
11 | /mnt/enclosure8/front/column0/drive808
12 | /mnt/enclosure8/front/column0/drive809
13 | /mnt/enclosure8/front/column0/drive810
14 | /mnt/enclosure8/front/column0/drive811
15 | /mnt/enclosure8/front/column0/drive812
16 | /mnt/enclosure8/front/column0/drive813
17 | /mnt/enclosure8/front/column0/drive814
18 | /mnt/enclosure8/front/column0/drive815
19 | /mnt/enclosure8/front/column0/drive816
20 | /mnt/enclosure8/front/column0/drive817
21 | /mnt/enclosure8/front/column0/drive818
22 | /mnt/enclosure8/front/column0/drive819
23 | /mnt/enclosure8/front/column0/drive820
24 | /mnt/enclosure8/front/column0/drive821
25 | /mnt/enclosure8/front/column0/drive822
26 | /mnt/enclosure8/front/column0/drive823
27 | /mnt/enclosure8/front/column0/drive824
28 | /mnt/enclosure8/front/column0/drive825
29 | /mnt/enclosure8/front/column0/drive826
30 | /mnt/enclosure8/front/column0/drive827
31 | /mnt/enclosure8/front/column0/drive828
32 | /mnt/enclosure8/front/column0/drive829
33 | /mnt/enclosure8/front/column0/drive830
34 | /mnt/enclosure8/front/column0/drive831
35 | /mnt/enclosure8/front/column0/drive832
36 | /mnt/enclosure8/front/column0/drive833
37 | /mnt/enclosure8/front/column0/drive834
38 | /mnt/enclosure8/front/column0/drive835
39 | /mnt/enclosure8/front/column0/drive836
40 | /mnt/enclosure8/front/column0/drive837
41 | /mnt/enclosure8/front/column0/drive838
42 | /mnt/enclosure8/front/column0/drive839
43 | /mnt/enclosure8/front/column0/drive840
44 | /mnt/enclosure8/front/column0/drive841
45 | /mnt/enclosure8/front/column0/drive842
46 | /mnt/enclosure8/front/column0/drive843
47 | /mnt/enclosure8/front/column0/drive844
48 | /mnt/enclosure8/front/column0/drive845
49 | /mnt/enclosure8/front/column0/drive846
50 | /mnt/enclosure8/front/column0/drive847
51 | /mnt/enclosure8/front/column0/drive848
52 | /mnt/enclosure8/front/column0/drive849
53 | /mnt/enclosure8/front/column0/drive850
54 | /mnt/enclosure8/front/column0/drive851
55 | /mnt/enclosure8/front/column0/drive852
56 | /mnt/enclosure8/front/column0/drive853
57 | /mnt/enclosure8/front/column0/drive854
58 | /mnt/enclosure8/front/column0/drive855
59 | /mnt/enclosure8/front/column0/drive856
60 | /mnt/enclosure8/front/column0/drive857
61 | /mnt/enclosure8/front/column0/drive858
62 | /mnt/enclosure8/front/column0/drive859
63 | /mnt/enclosure8/front/column0/drive860
64 |
--------------------------------------------------------------------------------
/extras/drive_structures/drive_structures_81_drives:
--------------------------------------------------------------------------------
1 | /mnt/enclosure0
2 | /mnt/enclosure0/front
3 | /mnt/enclosure0/front/column0
4 | /mnt/enclosure0/front/column0/drive0
5 | /mnt/enclosure0/front/column0/drive1
6 | /mnt/enclosure0/front/column0/drive2
7 | /mnt/enclosure0/front/column0/drive3
8 | /mnt/enclosure0/front/column0/drive4
9 | /mnt/enclosure0/front/column0/drive5
10 | /mnt/enclosure0/front/column1
11 | /mnt/enclosure0/front/column1/drive6
12 | /mnt/enclosure0/front/column1/drive7
13 | /mnt/enclosure0/front/column1/drive8
14 | /mnt/enclosure0/front/column1/drive9
15 | /mnt/enclosure0/front/column1/drive10
16 | /mnt/enclosure0/front/column1/drive11
17 | /mnt/enclosure0/front/column2
18 | /mnt/enclosure0/front/column2/drive12
19 | /mnt/enclosure0/front/column2/drive13
20 | /mnt/enclosure0/front/column2/drive14
21 | /mnt/enclosure0/front/column2/drive15
22 | /mnt/enclosure0/front/column2/drive16
23 | /mnt/enclosure0/front/column2/drive17
24 | /mnt/enclosure0/front/column3
25 | /mnt/enclosure0/front/column3/drive18
26 | /mnt/enclosure0/front/column3/drive19
27 | /mnt/enclosure0/front/column3/drive20
28 | /mnt/enclosure0/front/column3/drive21
29 | /mnt/enclosure0/front/column3/drive22
30 | /mnt/enclosure0/front/column3/drive23
31 | /mnt/enclosure0/rear
32 | /mnt/enclosure0/rear/column0
33 | /mnt/enclosure0/rear/column0/drive24
34 | /mnt/enclosure0/rear/column0/drive25
35 | /mnt/enclosure0/rear/column0/drive26
36 | /mnt/enclosure0/rear/column1
37 | /mnt/enclosure0/rear/column1/drive27
38 | /mnt/enclosure0/rear/column1/drive28
39 | /mnt/enclosure0/rear/column1/drive29
40 | /mnt/enclosure0/rear/column2
41 | /mnt/enclosure0/rear/column2/drive30
42 | /mnt/enclosure0/rear/column2/drive31
43 | /mnt/enclosure0/rear/column2/drive32
44 | /mnt/enclosure0/rear/column3
45 | /mnt/enclosure0/rear/column3/drive33
46 | /mnt/enclosure0/rear/column3/drive34
47 | /mnt/enclosure0/rear/column3/drive35
48 | /mnt/enclosure1
49 | /mnt/enclosure1/front
50 | /mnt/enclosure1/front/column0
51 | /mnt/enclosure1/front/column0/drive36
52 | /mnt/enclosure1/front/column0/drive37
53 | /mnt/enclosure1/front/column0/drive38
54 | /mnt/enclosure1/front/column0/drive39
55 | /mnt/enclosure1/front/column0/drive40
56 | /mnt/enclosure1/front/column0/drive41
57 | /mnt/enclosure1/front/column1
58 | /mnt/enclosure1/front/column1/drive42
59 | /mnt/enclosure1/front/column1/drive43
60 | /mnt/enclosure1/front/column1/drive44
61 | /mnt/enclosure1/front/column1/drive45
62 | /mnt/enclosure1/front/column1/drive46
63 | /mnt/enclosure1/front/column1/drive47
64 | /mnt/enclosure1/front/column2
65 | /mnt/enclosure1/front/column2/drive48
66 | /mnt/enclosure1/front/column2/drive49
67 | /mnt/enclosure1/front/column2/drive50
68 | /mnt/enclosure1/front/column2/drive51
69 | /mnt/enclosure1/front/column2/drive52
70 | /mnt/enclosure1/front/column2/drive53
71 | /mnt/enclosure1/front/column3
72 | /mnt/enclosure1/front/column3/drive54
73 | /mnt/enclosure1/front/column3/drive55
74 | /mnt/enclosure1/front/column3/drive56
75 | /mnt/enclosure1/front/column3/drive57
76 | /mnt/enclosure1/front/column3/drive58
77 | /mnt/enclosure1/front/column3/drive59
78 | /mnt/enclosure1/rear
79 | /mnt/enclosure1/rear/column0
80 | /mnt/enclosure1/rear/column0/drive60
81 | /mnt/enclosure1/rear/column0/drive61
82 | /mnt/enclosure1/rear/column0/drive62
83 | /mnt/enclosure1/rear/column1
84 | /mnt/enclosure1/rear/column1/drive63
85 | /mnt/enclosure1/rear/column1/drive64
86 | /mnt/enclosure1/rear/column1/drive65
87 | /mnt/enclosure1/rear/column1/drive66
88 | /mnt/enclosure1/rear/column1/drive67
89 | /mnt/enclosure1/rear/column1/drive68
90 | /mnt/enclosure1/rear/column2
91 | /mnt/enclosure1/rear/column2/drive69
92 | /mnt/enclosure1/rear/column2/drive70
93 | /mnt/enclosure1/rear/column2/drive71
94 | /mnt/enclosure1/rear/column2/drive72
95 | /mnt/enclosure1/rear/column2/drive73
96 | /mnt/enclosure1/rear/column2/drive74
97 | /mnt/enclosure1/rear/column3
98 | /mnt/enclosure1/rear/column3/drive75
99 | /mnt/enclosure1/rear/column3/drive76
100 | /mnt/enclosure1/rear/column3/drive77
101 | /mnt/enclosure1/rear/column3/drive78
102 | /mnt/enclosure1/rear/column3/drive79
103 | /mnt/enclosure1/rear/column3/drive80
104 |
--------------------------------------------------------------------------------
/extras/drive_structures/readme.md:
--------------------------------------------------------------------------------
1 | These example drive structures are used to create all the necessary mount points.
2 |
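These structure files are flat lists of absolute paths, one per line, so creating the mount points is a matter of walking the file and calling `mkdir -p` for each entry. A minimal Python sketch (the function name `create_mount_points` is illustrative, not part of the repo):

```python
import os

def create_mount_points(structure_file, dry_run=False):
    """Create every mount point listed in a drive-structure file.

    Each non-empty line is treated as one absolute directory path.
    With dry_run=True, paths are collected but nothing is created.
    """
    created = []
    with open(structure_file) as f:
        for line in f:
            path = line.strip()
            if not path:           # skip blank trailing lines
                continue
            if not dry_run:
                # exist_ok avoids errors when a parent (e.g. /mnt/enclosure0)
                # was already created by an earlier line in the file
                os.makedirs(path, exist_ok=True)
            created.append(path)
    return created
```

Because parent directories appear before their children in these files, plain `os.makedirs` with `exist_ok=True` handles the whole hierarchy in one pass.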
--------------------------------------------------------------------------------
/extras/install_instructions:
--------------------------------------------------------------------------------
1 | These are the step-by-step procedures I follow to install my new NAS/harvester on Ubuntu 20.04 LTS
2 |
3 | 1) Install Ubuntu 20.04 LTS Server
4 | - Select Updated Installer
5 | - Select openssh-server
6 | - Configure 2 x SSD for RAID1 Software RAID for OS
7 | - Configure Internal and External Network Interfaces
8 | - Add 'chia' user with password
9 | - Allow installation of all security updates
10 | - Reboot
11 | 2) Log in and test network connections
12 | 3) Adjust /etc/ssh/sshd_config as desired
13 | 4) vi /etc/.bashrc
14 | - Uncomment: force_color_prompt=yes
15 | - Adjust prompt as desired
16 | - source /etc/.bashrc
17 | 5) vi /etc/netplan/00-installer-config.yaml
18 | - Adjust MTU to 9000 for the internal 10GbE interface
19 | - netplan try
20 | - netplan apply
21 | 6) Add following to crontab: @reboot /sbin/ifconfig eno2 txqueuelen 10000
22 | 7) git clone https://github.com/rjsears/chia_plot_manager.git
23 | 8) mv chia_plot_manager plot_manager
24 | 9) cd plot_manager
25 | 10) chmod +x install.sh
26 | 11) ./install.sh install
27 | 12) Complete installation script as necessary
28 | 13) If you had the directory structure created for you, adjust it to eliminate drives as necessary.
29 | For example, in my case I use drive0 and drive1 bays for the two SSD OS drives so I
30 | removed those drives from `/mnt/enclosure0/front/column0` so that drive_manager.py
31 | would not put plots on them.
32 | 14) Reboot to verify all software and changes took effect
33 | 15) After reboot check the following:
34 | - Correct network connectivity
35 | - ip a and look for your internal interface: 3: eno2: