├── original.ods
├── ezFIO User Guide.pdf
├── ezfio.bat
├── README
├── combine.py
├── COPYING
├── ezfio.py
└── ezfio.ps1
/original.ods:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/earlephilhower/ezfio/HEAD/original.ods
--------------------------------------------------------------------------------
/ezFIO User Guide.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/earlephilhower/ezfio/HEAD/ezFIO User Guide.pdf
--------------------------------------------------------------------------------
/ezfio.bat:
--------------------------------------------------------------------------------
1 | @echo off
2 | REM Start EZFIO.PS1 in an elevated PowerShell interpreter.
3 | REM Here be dragons.
4 | REM First start a standard PowerShell and use it to launch *another*
5 | REM PowerShell, this one as administrator (RunAs), to interpret the script.
6 | REM Care must be taken to properly quote the path to the script.
7 |
8 | set GO='%cd%\ezfio.ps1'
9 | powershell -Command "$p = new-object System.Diagnostics.ProcessStartInfo 'PowerShell'; $p.Arguments = {-WindowStyle hidden -Command ". %GO%"}; $p.Verb = 'RunAs'; [System.Diagnostics.Process]::Start($p) | out-null;"
10 |
--------------------------------------------------------------------------------
/README:
--------------------------------------------------------------------------------
1 | ezFIO V1.0
2 | (C) Copyright 2015-18 HGST
3 | earle.philhower.iii@hgst.com
4 |
5 | ------------------------------------------------------------------------
6 | ezFIO is free software: you can redistribute it and/or modify
7 | it under the terms of the GNU General Public License as published by
8 | the Free Software Foundation, either version 2 of the License, or
9 | (at your option) any later version.
10 |
11 | ezFIO is distributed in the hope that it will be useful,
12 | but WITHOUT ANY WARRANTY; without even the implied warranty of
13 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 | GNU General Public License for more details.
15 |
16 | You should have received a copy of the GNU General Public License
17 | along with ezFIO. If not, see <http://www.gnu.org/licenses/>.
18 | ------------------------------------------------------------------------
19 |
20 | This test script is intended to give a block-level overview of
21 | SSD performance (SATA, SAS, and NVMe) under real-world conditions by
22 | focusing on sustained performance at different block sizes and queue
23 | depths. A text-mode Linux version and both GUI and text-mode Windows
24 | versions are included.
25 |
26 | The results of multiple tests are summarized into a single OpenDocument
27 | spreadsheet, readable under OpenOffice, LibreOffice, or Microsoft Excel.
28 |
29 | FIO is required to perform the actual IO tests. Please ensure the latest
30 | version is installed, either from your operating system's repository, from
31 | sources available at https://github.com/axboe/fio, precompiled for
32 | Windows at https://ci.appveyor.com/project/axboe/fio (for the latest Git
33 | builds), or from https://www.bluestop.org/fio/ .
34 |
35 | (There seems to be an issue with FIO 3.1 under Windows that is not present
36 | in earlier or later builds. In a nutshell, the 1200-second sustained
37 | performance test ends up running, under this version, for over 12 hours!
38 | While the final results are still good and the script continues, it does
39 | waste a large amount of time, so I recommend avoiding the BlueStop 3.1
40 | build. The ci.appveyor.com link above can be used to get current FIO
41 | head builds instead.)
42 |
43 |
44 | ------------------------------------------------------------------------
45 |
46 | A new --cluster option runs multiple clients in parallel, allowing the
47 | performance of shared storage systems like SANs or AFAs to be
48 | tested.
49 |
50 | Start a "fio --server" job on all clients, then on one of them run
51 | ./ezfio.py --cluster --drive host1:/dev/dr1,host2:/dev/dr2,... ...
52 |
53 | Basically, add "--cluster" to the command line before the drive
54 | option, and in the drive option give a comma-separated list of
55 | hostname:/path/to/storage entries.
56 |
57 | The first host in the list must be the one you're currently running
58 | ezfio from. ezfio will try to use the local system to collect
59 | appropriate system info for the first drive.
60 |
61 | In the current implementation, all nodes/drives must be identical in
62 | size. There are no provisions for having volumes of differing sizes.
63 |
64 | All other graphs and results should be the aggregate of the entire
65 | cluster, as reported by fio.
66 |
67 | ex:
68 |
69 | Start up FIO servers on all systems to be tested
70 | (on host 1):
71 | # fio --server &
72 | (on host 2):
73 | # fio --server &
74 | (on host 3):
75 | # fio --server &
76 |
77 | Start a benchmark run:
78 | (on host 1)
79 | # ./ezfio.py --cluster --drive host1:/dev/nvme1n1,host2:/dev/nvme1n1,host3:/dev/nvme4n1
80 |
81 | ------------------------------------------------------------------------
82 |
83 | ezFIO got where it is today through the help of many users who filed
84 | bugs when things didn't work, or submitted patches to support new CPUs.
85 | Please feel free to open issues or drop me a line if you have questions.
86 |
87 | Special thanks to @coolrecep (Recep Baltaş), who has spent literally days
88 | tracking down Windows issues.
--------------------------------------------------------------------------------
/combine.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 |
3 | # ezfio 1.0
4 | # earle.philhower.iii@hgst.com
5 | #
6 | # ------------------------------------------------------------------------
7 | # ezfio is free software: you can redistribute it and/or modify
8 | # it under the terms of the GNU General Public License as published by
9 | # the Free Software Foundation, either version 2 of the License, or
10 | # (at your option) any later version.
11 | #
12 | # ezfio is distributed in the hope that it will be useful,
13 | # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 | # GNU General Public License for more details.
16 | #
17 | # You should have received a copy of the GNU General Public License
18 | # along with ezfio. If not, see <http://www.gnu.org/licenses/>.
19 | # ------------------------------------------------------------------------ 20 | # 21 | # Usage: ./append.py --source --append --suffix <_new> --color <223344> --output 22 | 23 | 24 | import argparse 25 | import base64 26 | import datetime 27 | import json 28 | import os 29 | import platform 30 | import pwd 31 | import re 32 | import shutil 33 | import socket 34 | import subprocess 35 | import sys 36 | import threading 37 | import time 38 | import zipfile 39 | 40 | def ParseArgs(): 41 | """Parse command line options into globals.""" 42 | global sourceODS, appendODS, destODS, suffix, color 43 | parser = argparse.ArgumentParser( 44 | formatter_class=argparse.RawDescriptionHelpFormatter, 45 | description="A tool to add a dataset to an existing ezFIO ODS file.", 46 | epilog="") 47 | parser.add_argument("--source", "-s", dest = "sourceODS", 48 | help="First ODS file with 1 or more test runs included", required=True) 49 | parser.add_argument("--append", "-a", dest="appendODS", 50 | help="ODS file with tests to append to source", required=True) 51 | parser.add_argument("--suffix", "-x", dest="suffix", 52 | help="Suffix to append to data tables from appended ODS", required=True) 53 | parser.add_argument("--color", "-c", dest="color", 54 | help="Color to use for graphed data in appended ODS (rrggbb format)", required=True) 55 | parser.add_argument("--output", "-o", dest="destODS", 56 | help="Location where results should be saved", required=True) 57 | args = parser.parse_args() 58 | sourceODS = args.sourceODS 59 | appendODS = args.appendODS 60 | destODS = args.destODS 61 | suffix = args.suffix 62 | color = args.color 63 | 64 | def GenerateCombinedODS(): 65 | """Builds a new ODS spreadsheet w/graphs from generated test CSV files.""" 66 | 67 | def GetContentXMLFromODS( odssrc ): 68 | """Extract content.xml from an ODS file, where the sheet lives.""" 69 | ziparchive = zipfile.ZipFile( odssrc ) 70 | content = ziparchive.read("content.xml") 71 | content = content.replace("\n", "") 72 | return content 73 | 74 | def CSVtoXMLSheet(sheetName, csvName): 75 | """Replace a named sheet with the contents of a CSV file.""" 76 | newt = ' ' 78 | newt += '' 91 | except: # It's not a float, so let's call it a string 92 | cell = '' 95 | newt += cell 96 | newt += '' 97 | f.close() 98 | # Close the tags 99 | newt += '' 100 | return newt 101 | 102 | def AppendSheetFromCSV(sheetName, csvName, xmltext): 103 | """Add a new sheet to the XML from the CSV file.""" 104 | newt = CSVtoXMLSheet(sheetName, csvName) 105 | 106 | # Replace the XML using lazy string matching 107 | searchstr = '' 108 | return re.sub(searchstr, newt + searchstr, xmltext) 109 | 110 | def UpdateContentXMLToODS_text( odssrc, odsdest, xmltext ): 111 | """Replace content.xml in an ODS w/an in-memory copy and write new. 112 | 113 | Replace content.xml in an ODS file with in-memory, modified copy and 114 | write new ODS. Can't just copy source.zip and replace one file, the 115 | output ZIP file is not correct in many cases (opens in Excel but fails 116 | ODF validation and LibreOffice fails to load under Windows). 117 | 118 | Also strips out any binary versions of objects and the thumbnail, 119 | since they are no longer valid once we've changed the data in the 120 | sheet. 
121 | """ 122 | global suffix 123 | 124 | if os.path.exists(odsdest): 125 | os.unlink(odsdest) 126 | 127 | # Windows ZipArchive will not use "Store" even with "no compression" 128 | # so we need to have a mimetype.zip file encoded below to match spec: 129 | mimetypezip = """ 130 | UEsDBAoAAAAAAOKbNUiFbDmKLgAAAC4AAAAIAAAAbWltZXR5cGVhcHBsaWNhdGlvbi92bmQub2Fz 131 | aXMub3BlbmRvY3VtZW50LnNwcmVhZHNoZWV0UEsBAj8ACgAAAAAA4ps1SIVsOYouAAAALgAAAAgA 132 | JAAAAAAAAACAAAAAAAAAAG1pbWV0eXBlCgAgAAAAAAABABgAAAyCUsVU0QFH/eNMmlTRAUf940ya 133 | VNEBUEsFBgAAAAABAAEAWgAAAFQAAAAAAA== 134 | """ 135 | zipbytes = base64.b64decode( mimetypezip ) 136 | with open(odsdest, 'wb') as f: 137 | f.write(zipbytes) 138 | 139 | zasrc = zipfile.ZipFile(odssrc, 'r') 140 | zadst = zipfile.ZipFile(odsdest, 'a', zipfile.ZIP_DEFLATED) 141 | for entry in zasrc.namelist(): 142 | if entry == "mimetype": 143 | continue 144 | elif entry.endswith('/') or entry.endswith('\\'): 145 | continue 146 | elif entry == "content.xml": 147 | zadst.writestr( "content.xml", xmltext) 148 | elif ("Object" in entry) and ("content.xml" in entry): 149 | # Remove table 150 | rdbytes = zasrc.read(entry) 151 | outbytes = re.sub('.*', "", rdbytes) 152 | # Add in extra chart series following existing format... 153 | searchStr = '' 154 | match = re.search(searchStr, outbytes); 155 | addl = "" 156 | if match: 157 | fmt = match.group(0) 158 | addl = fmt; 159 | for sheet in [ "Tests", "Timeseries", "Exceedance"]: 160 | addl = re.sub( sheet, sheet+suffix, addl ) 161 | # Remove any existing label and add updated one 162 | addl = re.sub("loext:label-string=\".*?\"" , "", addl ); 163 | addl = re.sub ("" , outbytes ) 174 | if oldStyleMatch: 175 | oldStyle = oldStyleMatch.group(0) 176 | newStyle = re.sub( "\"" + styleName + "\"", "\"" + styleName + suffix + "\"", oldStyle) 177 | # Change the embedded color: 178 | newStyle = re.sub( "svg:stroke-color=\"#.*?\"", "svg:stroke-color=\"#" + color + "\"", newStyle ) 179 | # Add in the new style... 
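# (For clarity: the duplicated style keeps every attribute of the original
#  series except its name, which gains the table suffix, and its stroke
#  color, which is swapped for the user-supplied --color value, so the
#  appended data is drawn as a separately colored line beside the original.)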
180 | outbytes = re.sub ( oldStyle, oldStyle + newStyle, outbytes ) 181 | # Add legend if it doesn't exist 182 | legendMatch = re.search("", outbytes) 183 | if not legendMatch: 184 | # Put in hardcoded one...looks like junk, but can be tweaked by user in application 185 | legend = ""; 186 | outbytes = re.sub ("", "" + legend, outbytes ) 187 | zadst.writestr(entry, outbytes) 188 | elif entry == "META-INF/manifest.xml": 189 | # Remove ObjectReplacements from the list 190 | rdbytes = zasrc.read(entry) 191 | outbytes = "" 192 | lines = rdbytes.split("\n") 193 | for line in lines: 194 | if not ( ("ObjectReplacement" in line) or ("Thumbnails" in line) ): 195 | outbytes = outbytes + line + "\n" 196 | zadst.writestr(entry, outbytes) 197 | elif ("Thumbnails" in entry) or ("ObjectReplacement" in entry): 198 | # Skip binary versions 199 | continue 200 | else: 201 | rdbytes = zasrc.read(entry) 202 | zadst.writestr(entry, rdbytes) 203 | zasrc.close() 204 | zadst.close() 205 | 206 | 207 | global sourceODS, appendODS, destODS 208 | 209 | # First rename and append the extra data sheets 210 | xmlsrc = GetContentXMLFromODS( sourceODS ) 211 | xmlapp = GetContentXMLFromODS( appendODS ) 212 | for tableName in [ "Tests", "Timeseries", "Exceedance" ]: 213 | searchStr = '' 214 | sheetMatch = re.search(searchStr, xmlapp); 215 | if sheetMatch: 216 | sheet = sheetMatch.group(0) 217 | # Rename the table 218 | sheet = re.sub( '"' + tableName + '"', '"' + tableName + suffix + '"', sheet); 219 | # Stick it right before the end of the list 220 | searchStr = '' 221 | xmlsrc = re.sub(searchStr, sheet + searchStr, xmlsrc) 222 | UpdateContentXMLToODS_text( sourceODS, destODS, xmlsrc ) 223 | 224 | sourceODS = "" 225 | appendODS = "" 226 | destODS = "" 227 | suffix = "" 228 | color = "" 229 | 230 | if __name__ == "__main__": 231 | ParseArgs() 232 | GenerateCombinedODS() 233 | 234 | -------------------------------------------------------------------------------- /COPYING: -------------------------------------------------------------------------------- 1 | 2 | GNU GENERAL PUBLIC LICENSE 3 | Version 2, June 1991 4 | 5 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., 6 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 7 | Everyone is permitted to copy and distribute verbatim copies 8 | of this license document, but changing it is not allowed. 9 | 10 | Preamble 11 | 12 | The licenses for most software are designed to take away your 13 | freedom to share and change it. By contrast, the GNU General Public 14 | License is intended to guarantee your freedom to share and change free 15 | software--to make sure the software is free for all its users. This 16 | General Public License applies to most of the Free Software 17 | Foundation's software and to any other program whose authors commit to 18 | using it. (Some other Free Software Foundation software is covered by 19 | the GNU Lesser General Public License instead.) You can apply it to 20 | your programs, too. 21 | 22 | When we speak of free software, we are referring to freedom, not 23 | price. Our General Public Licenses are designed to make sure that you 24 | have the freedom to distribute copies of free software (and charge for 25 | this service if you wish), that you receive source code or can get it 26 | if you want it, that you can change the software or use pieces of it 27 | in new free programs; and that you know you can do these things. 
28 | 29 | To protect your rights, we need to make restrictions that forbid 30 | anyone to deny you these rights or to ask you to surrender the rights. 31 | These restrictions translate to certain responsibilities for you if you 32 | distribute copies of the software, or if you modify it. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must give the recipients all the rights that 36 | you have. You must make sure that they, too, receive or can get the 37 | source code. And you must show them these terms so they know their 38 | rights. 39 | 40 | We protect your rights with two steps: (1) copyright the software, and 41 | (2) offer you this license which gives you legal permission to copy, 42 | distribute and/or modify the software. 43 | 44 | Also, for each author's protection and ours, we want to make certain 45 | that everyone understands that there is no warranty for this free 46 | software. If the software is modified by someone else and passed on, we 47 | want its recipients to know that what they have is not the original, so 48 | that any problems introduced by others will not reflect on the original 49 | authors' reputations. 50 | 51 | Finally, any free program is threatened constantly by software 52 | patents. We wish to avoid the danger that redistributors of a free 53 | program will individually obtain patent licenses, in effect making the 54 | program proprietary. To prevent this, we have made it clear that any 55 | patent must be licensed for everyone's free use or not licensed at all. 56 | 57 | The precise terms and conditions for copying, distribution and 58 | modification follow. 59 | 60 | GNU GENERAL PUBLIC LICENSE 61 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 62 | 63 | 0. This License applies to any program or other work which contains 64 | a notice placed by the copyright holder saying it may be distributed 65 | under the terms of this General Public License. The "Program", below, 66 | refers to any such program or work, and a "work based on the Program" 67 | means either the Program or any derivative work under copyright law: 68 | that is to say, a work containing the Program or a portion of it, 69 | either verbatim or with modifications and/or translated into another 70 | language. (Hereinafter, translation is included without limitation in 71 | the term "modification".) Each licensee is addressed as "you". 72 | 73 | Activities other than copying, distribution and modification are not 74 | covered by this License; they are outside its scope. The act of 75 | running the Program is not restricted, and the output from the Program 76 | is covered only if its contents constitute a work based on the 77 | Program (independent of having been made by running the Program). 78 | Whether that is true depends on what the Program does. 79 | 80 | 1. You may copy and distribute verbatim copies of the Program's 81 | source code as you receive it, in any medium, provided that you 82 | conspicuously and appropriately publish on each copy an appropriate 83 | copyright notice and disclaimer of warranty; keep intact all the 84 | notices that refer to this License and to the absence of any warranty; 85 | and give any other recipients of the Program a copy of this License 86 | along with the Program. 87 | 88 | You may charge a fee for the physical act of transferring a copy, and 89 | you may at your option offer warranty protection in exchange for a fee. 90 | 91 | 2. 
You may modify your copy or copies of the Program or any portion 92 | of it, thus forming a work based on the Program, and copy and 93 | distribute such modifications or work under the terms of Section 1 94 | above, provided that you also meet all of these conditions: 95 | 96 | a) You must cause the modified files to carry prominent notices 97 | stating that you changed the files and the date of any change. 98 | 99 | b) You must cause any work that you distribute or publish, that in 100 | whole or in part contains or is derived from the Program or any 101 | part thereof, to be licensed as a whole at no charge to all third 102 | parties under the terms of this License. 103 | 104 | c) If the modified program normally reads commands interactively 105 | when run, you must cause it, when started running for such 106 | interactive use in the most ordinary way, to print or display an 107 | announcement including an appropriate copyright notice and a 108 | notice that there is no warranty (or else, saying that you provide 109 | a warranty) and that users may redistribute the program under 110 | these conditions, and telling the user how to view a copy of this 111 | License. (Exception: if the Program itself is interactive but 112 | does not normally print such an announcement, your work based on 113 | the Program is not required to print an announcement.) 114 | 115 | These requirements apply to the modified work as a whole. If 116 | identifiable sections of that work are not derived from the Program, 117 | and can be reasonably considered independent and separate works in 118 | themselves, then this License, and its terms, do not apply to those 119 | sections when you distribute them as separate works. But when you 120 | distribute the same sections as part of a whole which is a work based 121 | on the Program, the distribution of the whole must be on the terms of 122 | this License, whose permissions for other licensees extend to the 123 | entire whole, and thus to each and every part regardless of who wrote it. 124 | 125 | Thus, it is not the intent of this section to claim rights or contest 126 | your rights to work written entirely by you; rather, the intent is to 127 | exercise the right to control the distribution of derivative or 128 | collective works based on the Program. 129 | 130 | In addition, mere aggregation of another work not based on the Program 131 | with the Program (or with a work based on the Program) on a volume of 132 | a storage or distribution medium does not bring the other work under 133 | the scope of this License. 134 | 135 | 3. 
You may copy and distribute the Program (or a work based on it, 136 | under Section 2) in object code or executable form under the terms of 137 | Sections 1 and 2 above provided that you also do one of the following: 138 | 139 | a) Accompany it with the complete corresponding machine-readable 140 | source code, which must be distributed under the terms of Sections 141 | 1 and 2 above on a medium customarily used for software interchange; or, 142 | 143 | b) Accompany it with a written offer, valid for at least three 144 | years, to give any third party, for a charge no more than your 145 | cost of physically performing source distribution, a complete 146 | machine-readable copy of the corresponding source code, to be 147 | distributed under the terms of Sections 1 and 2 above on a medium 148 | customarily used for software interchange; or, 149 | 150 | c) Accompany it with the information you received as to the offer 151 | to distribute corresponding source code. (This alternative is 152 | allowed only for noncommercial distribution and only if you 153 | received the program in object code or executable form with such 154 | an offer, in accord with Subsection b above.) 155 | 156 | The source code for a work means the preferred form of the work for 157 | making modifications to it. For an executable work, complete source 158 | code means all the source code for all modules it contains, plus any 159 | associated interface definition files, plus the scripts used to 160 | control compilation and installation of the executable. However, as a 161 | special exception, the source code distributed need not include 162 | anything that is normally distributed (in either source or binary 163 | form) with the major components (compiler, kernel, and so on) of the 164 | operating system on which the executable runs, unless that component 165 | itself accompanies the executable. 166 | 167 | If distribution of executable or object code is made by offering 168 | access to copy from a designated place, then offering equivalent 169 | access to copy the source code from the same place counts as 170 | distribution of the source code, even though third parties are not 171 | compelled to copy the source along with the object code. 172 | 173 | 4. You may not copy, modify, sublicense, or distribute the Program 174 | except as expressly provided under this License. Any attempt 175 | otherwise to copy, modify, sublicense or distribute the Program is 176 | void, and will automatically terminate your rights under this License. 177 | However, parties who have received copies, or rights, from you under 178 | this License will not have their licenses terminated so long as such 179 | parties remain in full compliance. 180 | 181 | 5. You are not required to accept this License, since you have not 182 | signed it. However, nothing else grants you permission to modify or 183 | distribute the Program or its derivative works. These actions are 184 | prohibited by law if you do not accept this License. Therefore, by 185 | modifying or distributing the Program (or any work based on the 186 | Program), you indicate your acceptance of this License to do so, and 187 | all its terms and conditions for copying, distributing or modifying 188 | the Program or works based on it. 189 | 190 | 6. Each time you redistribute the Program (or any work based on the 191 | Program), the recipient automatically receives a license from the 192 | original licensor to copy, distribute or modify the Program subject to 193 | these terms and conditions. 
You may not impose any further 194 | restrictions on the recipients' exercise of the rights granted herein. 195 | You are not responsible for enforcing compliance by third parties to 196 | this License. 197 | 198 | 7. If, as a consequence of a court judgment or allegation of patent 199 | infringement or for any other reason (not limited to patent issues), 200 | conditions are imposed on you (whether by court order, agreement or 201 | otherwise) that contradict the conditions of this License, they do not 202 | excuse you from the conditions of this License. If you cannot 203 | distribute so as to satisfy simultaneously your obligations under this 204 | License and any other pertinent obligations, then as a consequence you 205 | may not distribute the Program at all. For example, if a patent 206 | license would not permit royalty-free redistribution of the Program by 207 | all those who receive copies directly or indirectly through you, then 208 | the only way you could satisfy both it and this License would be to 209 | refrain entirely from distribution of the Program. 210 | 211 | If any portion of this section is held invalid or unenforceable under 212 | any particular circumstance, the balance of the section is intended to 213 | apply and the section as a whole is intended to apply in other 214 | circumstances. 215 | 216 | It is not the purpose of this section to induce you to infringe any 217 | patents or other property right claims or to contest validity of any 218 | such claims; this section has the sole purpose of protecting the 219 | integrity of the free software distribution system, which is 220 | implemented by public license practices. Many people have made 221 | generous contributions to the wide range of software distributed 222 | through that system in reliance on consistent application of that 223 | system; it is up to the author/donor to decide if he or she is willing 224 | to distribute software through any other system and a licensee cannot 225 | impose that choice. 226 | 227 | This section is intended to make thoroughly clear what is believed to 228 | be a consequence of the rest of this License. 229 | 230 | 8. If the distribution and/or use of the Program is restricted in 231 | certain countries either by patents or by copyrighted interfaces, the 232 | original copyright holder who places the Program under this License 233 | may add an explicit geographical distribution limitation excluding 234 | those countries, so that distribution is permitted only in or among 235 | countries not thus excluded. In such case, this License incorporates 236 | the limitation as if written in the body of this License. 237 | 238 | 9. The Free Software Foundation may publish revised and/or new versions 239 | of the General Public License from time to time. Such new versions will 240 | be similar in spirit to the present version, but may differ in detail to 241 | address new problems or concerns. 242 | 243 | Each version is given a distinguishing version number. If the Program 244 | specifies a version number of this License which applies to it and "any 245 | later version", you have the option of following the terms and conditions 246 | either of that version or of any later version published by the Free 247 | Software Foundation. If the Program does not specify a version number of 248 | this License, you may choose any version ever published by the Free Software 249 | Foundation. 250 | 251 | 10. 
If you wish to incorporate parts of the Program into other free 252 | programs whose distribution conditions are different, write to the author 253 | to ask for permission. For software which is copyrighted by the Free 254 | Software Foundation, write to the Free Software Foundation; we sometimes 255 | make exceptions for this. Our decision will be guided by the two goals 256 | of preserving the free status of all derivatives of our free software and 257 | of promoting the sharing and reuse of software generally. 258 | 259 | NO WARRANTY 260 | 261 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 262 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 263 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 264 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 265 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 266 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 267 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 268 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 269 | REPAIR OR CORRECTION. 270 | 271 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 272 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 273 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 274 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 275 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 276 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 277 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 278 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 279 | POSSIBILITY OF SUCH DAMAGES. 280 | 281 | END OF TERMS AND CONDITIONS 282 | 283 | How to Apply These Terms to Your New Programs 284 | 285 | If you develop a new program, and you want it to be of the greatest 286 | possible use to the public, the best way to achieve this is to make it 287 | free software which everyone can redistribute and change under these terms. 288 | 289 | To do so, attach the following notices to the program. It is safest 290 | to attach them to the start of each source file to most effectively 291 | convey the exclusion of warranty; and each file should have at least 292 | the "copyright" line and a pointer to where the full notice is found. 293 | 294 | 295 | Copyright (C) 296 | 297 | This program is free software; you can redistribute it and/or modify 298 | it under the terms of the GNU General Public License as published by 299 | the Free Software Foundation; either version 2 of the License, or 300 | (at your option) any later version. 301 | 302 | This program is distributed in the hope that it will be useful, 303 | but WITHOUT ANY WARRANTY; without even the implied warranty of 304 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 305 | GNU General Public License for more details. 306 | 307 | You should have received a copy of the GNU General Public License along 308 | with this program; if not, write to the Free Software Foundation, Inc., 309 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 310 | 311 | Also add information on how to contact you by electronic and paper mail. 
312 | 313 | If the program is interactive, make it output a short notice like this 314 | when it starts in an interactive mode: 315 | 316 | Gnomovision version 69, Copyright (C) year name of author 317 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 318 | This is free software, and you are welcome to redistribute it 319 | under certain conditions; type `show c' for details. 320 | 321 | The hypothetical commands `show w' and `show c' should show the appropriate 322 | parts of the General Public License. Of course, the commands you use may 323 | be called something other than `show w' and `show c'; they could even be 324 | mouse-clicks or menu items--whatever suits your program. 325 | 326 | You should also get your employer (if you work as a programmer) or your 327 | school, if any, to sign a "copyright disclaimer" for the program, if 328 | necessary. Here is a sample; alter the names: 329 | 330 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 331 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 332 | 333 | , 1 April 1989 334 | Ty Coon, President of Vice 335 | 336 | This General Public License does not permit incorporating your program into 337 | proprietary programs. If your program is a subroutine library, you may 338 | consider it more useful to permit linking proprietary applications with the 339 | library. If this is what you want to do, use the GNU Lesser General 340 | Public License instead of this License. 341 | -------------------------------------------------------------------------------- /ezfio.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python3 2 | 3 | """ezfio 1.9 4 | earlephilhower@yahoo.com 5 | 6 | ------------------------------------------------------------------------ 7 | ezfio is free software: you can redistribute it and/or modify 8 | it under the terms of the GNU General Public License as published by 9 | the Free Software Foundation, either version 2 of the License, or 10 | (at your option) any later version. 11 | 12 | ezfio is distributed in the hope that it will be useful, 13 | but WITHOUT ANY WARRANTY; without even the implied warranty of 14 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 | GNU General Public License for more details. 16 | 17 | You should have received a copy of the GNU General Public License 18 | along with ezfio. If not, see . 
19 | ------------------------------------------------------------------------ 20 | 21 | Usage: ./ezfio.py -d [-u <100..1>] 22 | Example: ./ezfio.py -d /dev/nvme0n1 -u 100 23 | 24 | This script requires root privileges so must be run as "root" or 25 | via "sudo ./ezfio.py" 26 | 27 | Please be sure to have FIO installed, or you will be prompted to install 28 | and re-run the script.""" 29 | 30 | from __future__ import print_function 31 | import argparse 32 | import base64 33 | from collections import OrderedDict 34 | import datetime 35 | import glob 36 | import json 37 | import os 38 | import platform 39 | import pwd 40 | import re 41 | import shutil 42 | import socket 43 | import subprocess 44 | import sys 45 | import tempfile 46 | import threading 47 | import time 48 | import zipfile 49 | 50 | 51 | def AppendFile(text, filename): 52 | """Equivalent to >> in BASH, append a line to a text file.""" 53 | with open(filename, "a") as f: 54 | f.write(text) 55 | f.write("\n") 56 | 57 | 58 | def Run(cmd): 59 | """Run a cmd[], return the exit code, stdout, and stderr.""" 60 | proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, 61 | stderr=subprocess.PIPE) 62 | out = proc.stdout.read() 63 | err = proc.stderr.read() 64 | code = proc.wait() 65 | return int(code), out.decode('UTF-8'), err.decode('UTF-8') 66 | 67 | 68 | def CheckAdmin(): 69 | """Check that we have root privileges for disk access, abort if not.""" 70 | if os.geteuid() != 0: 71 | sys.stderr.write("Root privileges are required for low-level disk ") 72 | sys.stderr.write("access.\nPlease restart this script as root ") 73 | sys.stderr.write("(sudo) to continue.\n") 74 | sys.exit(1) 75 | 76 | 77 | def FindFIO(): 78 | """Try the path and the CWD for a FIO executable, return path or exit.""" 79 | # Determine if FIO is in path or CWD 80 | try: 81 | ret, out, err = Run(["fio", "-v"]) 82 | if ret == 0: 83 | return "fio" 84 | except: 85 | try: 86 | ret, out, err = Run(['./fio', '-v']) 87 | if ret == 0: 88 | return "./fio" 89 | except: 90 | sys.stderr.write("FIO is required to run IO tests.\n") 91 | sys.stderr.write("The latest versions can be found at ") 92 | sys.stderr.write("https://github.com/axboe/fio.\n") 93 | sys.exit(1) 94 | 95 | 96 | def CheckFIOVersion(): 97 | """Check that we have a version of FIO installed that we can use.""" 98 | global fio, fioVerString, fioOutputFormat 99 | code, out, err = Run([fio, '--version']) 100 | try: 101 | fioVerString = out.split('\n')[0].rstrip() 102 | ver = out.split('\n')[0].rstrip().split('-')[1].split('.')[0] 103 | if int(ver) < 2: 104 | sys.stderr.write("ERROR: FIO version " + ver + " unsupported, ") 105 | sys.stderr.write("version 2.0 or later required. Exiting.\n") 106 | sys.exit(2) 107 | except: 108 | sys.stderr.write("ERROR: Unable to determine version of fio " + 109 | "installed. Exiting.\n") 110 | sys.exit(2) 111 | # Now see if we can make exceedance charts 112 | # Can't just try --output-format=json+ because the FIO in Ubuntu 16.04 113 | # repo doesn't understand it and *silently ignores ir*. Instead, use 114 | # the help output to see if "json+" exists at all... 115 | try: 116 | code, out, err = Run([fio, '--help']) 117 | if (code == 0) and ("json+" in out): 118 | fioOutputFormat = "json+" 119 | except: 120 | pass 121 | 122 | 123 | def CheckAIOLimits(): 124 | """Ensure kernel AIO max transactions is large enough to run test.""" 125 | global aioNeeded 126 | # If anything fails, silently continue. FIO will give error if it 127 | # can't run due to the AIO setting later on. 
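# For reference, the same check and fix can be done by hand from a shell
# (aioNeeded, the required value, is computed elsewhere in this script):
#   cat /proc/sys/fs/aio-max-nr
#   sudo sysctl -w fs.aio-max-nr=<required value>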
128 | try: 129 | code, out, err = Run(['cat', '/proc/sys/fs/aio-max-nr']) 130 | if code == 0: 131 | aiomaxnr = int(out.split("\n")[0].rstrip()) 132 | if aiomaxnr < int(aioNeeded): 133 | sys.stderr.write( 134 | "ERROR: The kernel's maximum outstanding async IO" + 135 | "setting (aio-max-nr) is too\n") 136 | sys.stderr.write(" low to complete the test run. Required value is " + str( 137 | aioNeeded) + ", current is " + str(aiomaxnr) + "\n") 138 | sys.stderr.write( 139 | " To fix this temporarially, please execute the following command:\n") 140 | sys.stderr.write( 141 | " sudo sysctl -w fs.aio-max-nr=" + str(aioNeeded) + "\n") 142 | sys.stderr.write("Unable to continue. Exiting.\n") 143 | sys.exit(2) 144 | except: 145 | pass 146 | 147 | 148 | def ParseArgs(): 149 | """Parse command line options into globals.""" 150 | global physDrive, physDriveDict, physDriveTxt, utilization, nullio, isFile 151 | global outputDest, offset, cluster, yes, quickie, verify, fastPrecond 152 | global readOnly, compressPct 153 | 154 | parser = argparse.ArgumentParser( 155 | formatter_class=argparse.RawDescriptionHelpFormatter, 156 | description="A tool to easily run FIO to benchmark sustained " 157 | "performance of NVME\nand other types of SSD.", 158 | epilog=""" 159 | Requirements:\n 160 | * Root access (log in as root, or sudo {prog}) 161 | * No filesytems or data on target device 162 | * FIO IO tester (available https://github.com/axboe/fio) 163 | * sdparm to identify the NVME device and serial number 164 | 165 | WARNING: All data on the target device will be DESTROYED by this test.""") 166 | parser.add_argument("--cluster", dest="cluster", action='store_true', 167 | help="Run the test on a cluster (--drive in "+ 168 | "host1:/dev/p1,host2:/dev/ps,...)", required=False) 169 | parser.add_argument("--verify", dest="verify", action='store_true', 170 | help="Have FIO perform data verifications on reads."+ 171 | " May impact performance", required=False) 172 | parser.add_argument("--drive", "-d", dest="physDrive", 173 | help="Device to test (ex: /dev/nvme0n1)", required=True) 174 | parser.add_argument("--utilization", "-u", dest="utilization", 175 | help="Amount of drive to test (in percent), 1...100", 176 | default="100", type=int, required=False) 177 | parser.add_argument("--offset", "-s", dest="offset", 178 | help="offset from start (in percent), 0...99", default="0", 179 | type=int, required=False) 180 | parser.add_argument("--output", "-o", dest="outputDest", 181 | help="Location where results should be saved", required=False) 182 | parser.add_argument("--yes", dest="yes", action='store_true', 183 | help="Skip the final warning prompt (for scripted tests)", 184 | required=False) 185 | parser.add_argument("--fast-precondition", dest='fastpre', action='store_true', 186 | help="Only do a single sequential write to precondition drive", 187 | required=False) 188 | parser.add_argument("--quickie", dest="quickie", help=argparse.SUPPRESS, 189 | action='store_true', required=False) 190 | parser.add_argument("--file", dest="file", help="Test using a regular file, not a device", 191 | action='store_true', required=False) 192 | parser.add_argument("--nullio", dest="nullio", help=argparse.SUPPRESS, 193 | action='store_true', required=False) 194 | parser.add_argument("--readonly", dest="readonly", help="Only run read-only tests, don't write to device", 195 | action='store_true', required=False) 196 | parser.add_argument("--compress_percentage", dest="compresspct", help="Set the target data compressibility", 197 | 
default="100", type=int, required=False) 198 | args = parser.parse_args() 199 | 200 | physDrive = args.physDrive 201 | physDriveTxt = physDrive 202 | utilization = args.utilization 203 | outputDest = args.outputDest 204 | offset = args.offset 205 | yes = args.yes 206 | quickie = args.quickie 207 | nullio = args.nullio 208 | verify = args.verify 209 | fastPrecond = args.fastpre 210 | cluster = args.cluster 211 | isFile = args.file 212 | readOnly = args.readonly 213 | compressPct = args.compresspct 214 | 215 | # For cluster mode, we add a new physDriveList dict and fake physDrive 216 | if cluster: 217 | nodes = physDrive.split(",") 218 | for node in nodes: 219 | physDriveDict[node.split(":")[0]] = node.split(":")[1] 220 | physDrive = nodes[0].split(":")[1] 221 | 222 | if (utilization < 1) or (utilization > 100): 223 | print("ERROR: Utilization must be between 1...100") 224 | parser.print_help() 225 | sys.exit(1) 226 | 227 | if (offset < 0) or (offset > 99) or (offset+utilization > 100): 228 | print("ERROR: offset must be between 0...99 while offset + utilization <= 100") 229 | parser.print_help() 230 | sys.exit(1) 231 | # Sanity check that the selected drive is not mounted by parsing mounts 232 | # This is not guaranteed to catch all as there's just too many different 233 | # naming conventions out there. Let's cover simple HDD/SSD/NVME patterns 234 | pdispart = (re.match('.*p?[1-9][0-9]*$', physDrive) and 235 | not re.match('.*/nvme[0-9]+n[1-9][0-9]*$', physDrive)) 236 | hit = "" 237 | with open("/proc/mounts", "r") as f: 238 | mounts = f.readlines() 239 | for l in mounts: 240 | dev = l.split()[0] 241 | mnt = l.split()[1] 242 | if dev == physDrive: 243 | hit = dev + " on " + mnt # Obvious exact match 244 | if pdispart: 245 | chkdev = dev 246 | else: 247 | # /dev/sdp# is special case, don't remove the "p" 248 | if re.match('^/dev/sdp.*$', dev): 249 | chkdev = re.sub('[1-9][0-9]*$', '', dev) 250 | else: 251 | # Need to see if mounted partition is on a raw device being tested 252 | chkdev = re.sub('p?[1-9][0-9]*$', '', dev) 253 | if chkdev == physDrive: 254 | hit = dev + " on " + mnt 255 | if hit != "": 256 | print("ERROR: Mounted volume '" + str(hit) + "' is on same device" + 257 | "as tested device '" + str(physDrive) + "'. ABORTING.") 258 | sys.exit(2) 259 | 260 | 261 | def grep(inlist, regex): 262 | """Implement grep in a non-Pythonic way to make it comprehensible to humans""" 263 | out = [] 264 | for i in inlist: 265 | if re.search(regex, i): 266 | out = out + [i] 267 | return out 268 | 269 | 270 | def CollectSystemInfo(): 271 | """Collect some OS and CPU information.""" 272 | global cpu, cpuCores, cpuFreqMHz, uname 273 | uname = " ".join(platform.uname()) 274 | code, cpuinfo, err = Run(['cat', '/proc/cpuinfo']) 275 | cpuinfo = cpuinfo.split("\n") 276 | if 'aarch64' in uname: 277 | code, cpuinfo, err = Run(['lscpu']) 278 | cpuinfo = cpuinfo.split("\n") 279 | cpu = grep(cpuinfo, r'Model name')[0].split(':')[1].lstrip() 280 | cpuCores = grep(cpuinfo, r'CPU')[1].split(':')[1].lstrip() 281 | try: 282 | code, dmidecode, err = Run(['dmidecode', '--type', 'processor']) 283 | cpuFreqMHz = int(round(float(grep(dmidecode.split("\n"), r'Current Speed')[0].rstrip().lstrip().split(" ")[2]))) 284 | except: 285 | cpuFreqMHz = grep(cpuinfo, r'max')[0].split(':')[1].lstrip() 286 | elif 'ppc64' in uname: 287 | # Implement grep and sed in Python... 
288 | cpu = grep(cpuinfo, r'model')[0].split(': ')[1].replace('(R)', '').replace('(TM)', '') 289 | cpuCores = len(grep(cpuinfo, r'processor')) 290 | try: 291 | code, dmidecode, err = Run(['dmidecode', '--type', 'processor']) 292 | cpuFreqMHz = int(round(float(grep(dmidecode.split("\n"), r'Current Speed')[0].rstrip().lstrip().split(" ")[2]))) 293 | except: 294 | cpuFreqMHz = int(round(float(grep(cpuinfo, r'clock')[0].split(': ')[1][:-3]))) 295 | else: 296 | model_names = grep(cpuinfo, r'model name') 297 | cpu = model_names[0].split(': ')[1].replace('(R)', '').replace('(TM)', '') 298 | cpuCores = len(model_names) 299 | try: 300 | code, dmidecode, err = Run(['dmidecode', '--type', 'processor']) 301 | cpuFreqMHz = int(round(float(grep(dmidecode.split("\n"), r'Current Speed')[0].rstrip().lstrip().split(" ")[2]))) 302 | except: 303 | cpuFreqMHz = int(round(float(grep(cpuinfo, r'cpu MHz')[0].split(': ')[1]))) 304 | 305 | 306 | def VerifyContinue(): 307 | """User's last chance to abort the test. Exit if they don't agree.""" 308 | if not yes: 309 | print("-" * 75) 310 | print("WARNING! " * 9) 311 | print("THIS TEST WILL DESTROY ANY DATA AND FILESYSTEMS ON " + physDrive) 312 | cont = input("Please type the word \"yes\" and hit return to " + 313 | "continue, or anything else to abort.") 314 | print("-" * 75 + "\n") 315 | if cont != "yes": 316 | print("Performance test aborted, drive is untouched.") 317 | sys.exit(1) 318 | 319 | 320 | def CollectDriveInfo(): 321 | """Get important device information, exit if not possible.""" 322 | global physDriveGiB, physDriveGB, physDriveBase, testcapacity, testoffset 323 | global model, serial, physDrive, isFile 324 | # We absolutely need this information 325 | pd = physDrive.split(',')[0] 326 | try: 327 | if isFile: 328 | physDriveBase = os.path.basename(pd) 329 | physDriveBytes = str(os.stat(pd).st_size) + "\n" 330 | else: 331 | physDriveBase = os.path.basename(pd) 332 | code, physDriveBytes, err = Run(['blockdev', '--getsize64', pd]) 333 | if code != 0: 334 | raise Exception("Can't get drive size for " + pd) 335 | physDriveBytes = physDriveBytes.split('\n')[0] 336 | physDriveBytes = int(physDriveBytes) 337 | physDriveGB = int(physDriveBytes / (1000 * 1000 * 1000)) 338 | physDriveGiB = int(physDriveBytes / (1024 * 1024 * 1024)) 339 | testcapacity = int((physDriveGiB * utilization) / 100) 340 | testoffset = int((physDriveGiB * offset) / 100) 341 | except: 342 | print("ERROR: Can't get '" + pd + "' size. Incorrect device name?") 343 | sys.exit(1) 344 | # These are nice to have, but we can run without it 345 | model = "UNKNOWN" 346 | serial = "UNKNOWN" 347 | try: 348 | nvmeclicmd = ['nvme', 'list', '--output-format=json'] 349 | code, nvmecli, err = Run(nvmeclicmd) 350 | if code == 0: 351 | j = json.loads(nvmecli) 352 | for drive in j['Devices']: 353 | if drive['DevicePath'] == pd: 354 | model = drive['ModelNumber'] 355 | serial = drive['SerialNumber'] 356 | return 357 | except: 358 | pass # An error in nvme is not a problem 359 | try: 360 | sdparmcmd = ['sdparm', '--page', 'sn', '--inquiry', '--long', pd] 361 | code, sdparm, err = Run(sdparmcmd) 362 | lines = sdparm.split("\n") 363 | if len(lines) == 4: 364 | model = re.sub( 365 | r'\s+', " ", lines[0].split(":")[1].lstrip().rstrip()) 366 | serial = re.sub(r'\s+', " ", lines[2].lstrip().rstrip()) 367 | else: 368 | print("Unable to identify drive using sdparm. Continuing.") 369 | except: 370 | print("Install sdparm to allow model/serial extraction. 
Continuing.") 371 | 372 | 373 | def CSVInfoHeader(f): 374 | """Headers to the CSV file (ending up in the ODS at the test end).""" 375 | global physDriveTxt, model, serial, physDriveGiB, testcapacity, testoffset 376 | global cpu, cpuCores, cpuFreqMHz, uname, quickie, fastPrecond 377 | if quickie: 378 | prefix = "QUICKIE-INVALID-RESULTS-" 379 | else: 380 | prefix = "" 381 | if fastPrecond: 382 | prefix = "FASTPRECOND-" + prefix 383 | AppendFile("Drive," + prefix + str(physDriveTxt).replace(",", " "), f) 384 | AppendFile("Model," + prefix + str(model), f) 385 | AppendFile("Serial," + prefix + str(serial), f) 386 | AppendFile("AvailCapacity," + prefix + str(physDriveGiB) + ",GiB", f) 387 | if offset == 0: 388 | testcap = str(testcapacity) 389 | else: 390 | testcap = str(testcapacity) + " @ " + str(testoffset) 391 | AppendFile("TestedCapacity," + prefix + str(testcap) + ",GiB", f) 392 | AppendFile("CPU," + prefix + str(cpu), f) 393 | AppendFile("Cores," + prefix + str(cpuCores), f) 394 | AppendFile("Frequency," + prefix + str(cpuFreqMHz), f) 395 | AppendFile("OS," + prefix + str(uname), f) 396 | AppendFile("FIOVersion," + prefix + str(fioVerString), f) 397 | 398 | 399 | def SetupFiles(): 400 | """Set up names for all output/input files, place headers on CSVs.""" 401 | global ds, details, testcsv, timeseriescsv, odssrc, odsdest 402 | global physDriveBase, fioVerString, outputDest, timeseriesclatcsv 403 | global timeseriesslatcsv 404 | 405 | # Datestamp for run output files 406 | ds = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S") 407 | 408 | # The unique suffix we generate for all output files 409 | suffix = str(physDriveGB) + "GB_" + str(cpuCores) + "cores_" 410 | suffix += str(cpuFreqMHz) + "MHz_" + physDriveBase + "_" 411 | suffix += socket.gethostname() + "_" + ds 412 | 413 | if not outputDest: 414 | outputDest = os.getcwd() 415 | # The "details" directory contains the raw output of each FIO run 416 | details = outputDest + "/details_" + suffix 417 | if os.path.exists(details): 418 | shutil.rmtree(details) 419 | os.makedirs(details) 420 | # Copy this script into it for posterity 421 | shutil.copyfile(__file__, details + "/" + os.path.basename(__file__)) 422 | 423 | # Files we're going to generate, encode some system info in the names 424 | # If the output files already exist, erase them 425 | testcsv = details + "/ezfio_tests_"+suffix+".csv" 426 | if os.path.exists(testcsv): 427 | os.unlink(testcsv) 428 | CSVInfoHeader(testcsv) 429 | AppendFile("Type,Write %,Block Size,Threads,Queue Depth/Thread,IOPS," + 430 | "Bandwidth (MB/s),Read Latency (us),Write Latency (us)," + 431 | "System CPU,User CPU", testcsv) 432 | timeseriescsv = details + "/ezfio_timeseries_"+suffix+".csv" 433 | timeseriesclatcsv = details + "/ezfio_timeseriesclat_"+suffix+".csv" 434 | timeseriesslatcsv = details + "/ezfio_timeseriesslat_"+suffix+".csv" 435 | for f in [timeseriescsv, timeseriesclatcsv, timeseriesslatcsv]: 436 | if os.path.exists(f): 437 | os.unlink(f) 438 | CSVInfoHeader(f) 439 | AppendFile(",".join(["IOPS"] + list(physDriveDict.keys())), 440 | timeseriescsv) # Add IOPS header 441 | hdr = "" 442 | for host in physDriveDict.keys(): 443 | hdr = hdr + ',' + host + "-read" 444 | hdr = hdr + ',' + host + "-write" 445 | AppendFile('CLAT-read,CLAT-write' + hdr, 446 | timeseriesclatcsv) # Add IOPS header 447 | AppendFile('SLAT-read,SLAT-write' + hdr, 448 | timeseriesslatcsv) # Add IOPS header 449 | 450 | # ODS input and output files 451 | odssrc = os.path.dirname(os.path.realpath(__file__)) + "/original.ods" 
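# original.ods, shipped alongside this script, serves as the template
# spreadsheet; the CSVs set up above are later used to fill a copy of it,
# producing the final ezfio_results_*.ods report written to odsdest.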
452 | if not os.path.exists(odssrc): 453 | print("ERROR: Can't find original ODS spreadsheet '" + odssrc + "'.") 454 | sys.exit(1) 455 | odsdest = outputDest + "/ezfio_results_"+suffix+".ods" 456 | if os.path.exists(odsdest): 457 | os.unlink(odsdest) 458 | 459 | 460 | class FIOError(Exception): 461 | """Exception generated when FIO returns a non-success value 462 | 463 | Attributes: 464 | cmdline -- The FIO command that was executed 465 | code -- Error code FIO returned 466 | stderr -- STDERR output from FIO 467 | stdout -- STDOUT output from FIO 468 | """ 469 | 470 | def __init__(self, cmdline, code, stderr, stdout): 471 | super(FIOError, self).__init__() 472 | self.cmdline = cmdline 473 | self.code = code 474 | self.stderr = stderr 475 | self.stdout = stdout 476 | 477 | 478 | def TestName(seqrand, wmix, bs, threads, iodepth): 479 | """Return full path and filename prefix for test of specified params""" 480 | global details, physDriveBase 481 | testfile = str(details) + "/Test" + str(seqrand) + "_w" + str(wmix) 482 | testfile += "_bs" + str(bs) + "_threads" + str(threads) + "_iodepth" 483 | testfile += str(iodepth) + "_" + str(physDriveBase) + ".out" 484 | return testfile 485 | 486 | 487 | def SequentialConditioning(): 488 | """Sequentially fill the complete capacity of the drive once.""" 489 | global quickie, fastPrecond, nullio, readOnly, compressPct 490 | 491 | def GenerateJobfile(drive, testcapacity, testoffset): 492 | """Write the sequential jobfile for a single server""" 493 | jobfile = tempfile.NamedTemporaryFile(delete=False, mode='w') 494 | for dr in drive.split(','): 495 | jobfile.write("[SeqCond-" + dr + "]\n") 496 | # Note that we can't use regular test runner because this test needs 497 | # to run for a specified # of bytes, not a specified # of seconds. 
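# For illustration, on a hypothetical single local drive the job section
# written below renders to roughly:
#   [SeqCond-/dev/nvme0n1]
#   readwrite=write
#   bs=128k
#   ioengine=libaio
#   iodepth=64
#   direct=1
#   filename=/dev/nvme0n1
#   size=<testcapacity>G
#   thread=1
#   offset=<testoffset>G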
498 | jobfile.write("readwrite=write\n") 499 | jobfile.write("bs=128k\n") 500 | if nullio: 501 | jobfile.write("ioengine=null\n") 502 | else: 503 | jobfile.write("ioengine=libaio\n") 504 | jobfile.write("iodepth=64\n") 505 | jobfile.write("direct=1\n") 506 | jobfile.write("filename=" + str(dr) + "\n") 507 | if quickie: 508 | jobfile.write("size=1G\n") 509 | else: 510 | jobfile.write("size=" + str(testcapacity) + "G\n") 511 | jobfile.write("thread=1\n") 512 | jobfile.write("offset=" + str(testoffset) + "G\n") 513 | if compressPct != 100: 514 | jobfile.write("buffer_compress_percentage=" + str(compressPct) + "\n") 515 | jobfile.close() 516 | return jobfile 517 | 518 | cmdline = [fio] 519 | if not cluster: 520 | jobfile = GenerateJobfile(physDrive, testcapacity, testoffset) 521 | cmdline = cmdline + [jobfile.name] 522 | else: 523 | jobfile = [] 524 | for host in physDriveDict.keys(): 525 | newjob = GenerateJobfile( 526 | physDriveDict[host], testcapacity, testoffset) 527 | cmdline = cmdline + ['--client=' + str(host), str(newjob.name)] 528 | jobfile = jobfile + [newjob] 529 | cmdline = cmdline + ['--output-format=' + str(fioOutputFormat)] 530 | 531 | if not readOnly: 532 | code, out, err = Run(cmdline) 533 | else: 534 | code = 0 535 | 536 | if cluster: 537 | for job in jobfile: 538 | os.unlink(job.name) 539 | else: 540 | os.unlink(jobfile.name) 541 | 542 | if code != 0: 543 | raise FIOError(" ".join(cmdline), code, err, out) 544 | else: 545 | return "DONE", "DONE", "DONE" 546 | 547 | 548 | def RandomConditioning(): 549 | """Randomly write entire device for the full capacity""" 550 | global quickie, nullio, readOnly, compressPct 551 | 552 | def GenerateJobfile(drive, testcapacity, testoffset): 553 | """Write the random jobfile""" 554 | jobfile = tempfile.NamedTemporaryFile(delete=False, mode='w') 555 | for dr in drive.split(','): 556 | jobfile.write("[RandCond-" + dr + "]\n") 557 | # Note that we can't use regular test runner because this test needs 558 | # to run for a specified # of bytes, not a specified # of seconds. 
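# Same pattern as SequentialConditioning above, but rendered as a 4k random
# write job (readwrite=randwrite, bs=4k, iodepth=256, norandommap) so the
# tested range receives a random-write pass before the measured tests begin.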
559 | jobfile.write("readwrite=randwrite\n") 560 | jobfile.write("bs=4k\n") 561 | jobfile.write("invalidate=1\n") 562 | jobfile.write("end_fsync=0\n") 563 | jobfile.write("group_reporting=1\n") 564 | jobfile.write("direct=1\n") 565 | jobfile.write("filename=" + str(dr) + "\n") 566 | if quickie: 567 | jobfile.write("size=1G\n") 568 | else: 569 | jobfile.write("size=" + str(testcapacity) + "G\n") 570 | if nullio: 571 | jobfile.write("ioengine=null\n") 572 | else: 573 | jobfile.write("ioengine=libaio\n") 574 | jobfile.write("iodepth=256\n") 575 | jobfile.write("norandommap\n") 576 | jobfile.write("randrepeat=0\n") 577 | jobfile.write("thread=1\n") 578 | jobfile.write("offset=" + str(testoffset) + "G\n") 579 | if compressPct != 100: 580 | jobfile.write("buffer_compress_percentage=" + str(compressPct) + "\n") 581 | jobfile.close() 582 | return jobfile 583 | 584 | cmdline = [fio] 585 | if not cluster: 586 | jobfile = GenerateJobfile(physDrive, testcapacity, testoffset) 587 | cmdline = cmdline + [jobfile.name] 588 | else: 589 | jobfile = [] 590 | for host in physDriveDict.keys(): 591 | newjob = GenerateJobfile( 592 | physDriveDict[host], testcapacity, testoffset) 593 | cmdline = cmdline + ['--client=' + str(host), str(newjob.name)] 594 | jobfile = jobfile + [newjob] 595 | cmdline = cmdline + ['--output-format=' + str(fioOutputFormat)] 596 | 597 | if not readOnly: 598 | code, out, err = Run(cmdline) 599 | else: 600 | code = 0 601 | 602 | if cluster: 603 | for job in jobfile: 604 | os.unlink(job.name) 605 | else: 606 | os.unlink(jobfile.name) 607 | 608 | if code != 0: 609 | raise FIOError(" ".join(cmdline), code, err, out) 610 | else: 611 | return "DONE", "DONE", "DONE" 612 | 613 | 614 | def RunTest(iops_log, seqrand, wmix, bs, threads, iodepth, runtime): 615 | """Runs the specified test, generates output CSV lines.""" 616 | global cluster, physDriveDict, compressPct 617 | 618 | # Taken from fio_latency2csv.py - needed to convert funky semi-log to normal latencies 619 | def plat_idx_to_val(idx, FIO_IO_U_PLAT_BITS=6, FIO_IO_U_PLAT_VAL=64): 620 | """Convert from lat bucket to real value, for obsolete FIO revisions""" 621 | # MSB <= (FIO_IO_U_PLAT_BITS-1), cannot be rounded off. 
Use 622 | # all bits of the sample as index 623 | if idx < (FIO_IO_U_PLAT_VAL << 1): 624 | return idx 625 | # Find the group and compute the minimum value of that group 626 | error_bits = (idx >> FIO_IO_U_PLAT_BITS) - 1 627 | base = 1 << (error_bits + FIO_IO_U_PLAT_BITS) 628 | # Find its bucket number of the group 629 | k = idx % FIO_IO_U_PLAT_VAL 630 | # Return the mean of the range of the bucket 631 | return base + ((k + 0.5) * (1 << error_bits)) 632 | 633 | def WriteExceedance(j, rdwr, outfile): 634 | """Generate an exceedance CSV for read or write from JSON output.""" 635 | global fioOutputFormat 636 | if fioOutputFormat == "json": 637 | return # This data not present in JSON format, only JSON+ 638 | # Generate a dict of combined bins, either for jobs[0] or client_stats[] 639 | bins = {} 640 | ios = 0 641 | try: 642 | # Non-cluster case will have jobs, only a single one needed 643 | ios = j['jobs'][0][rdwr]['total_ios'] 644 | if ('N' in j['jobs'][0][rdwr]['clat_ns']) and (j['jobs'][0][rdwr]['clat_ns']['N'] > 0): 645 | bins = j['jobs'][0][rdwr]['clat_ns']['bins'] 646 | else: 647 | bins = {} 648 | except: 649 | # Cluster case will have client_stats to combine 650 | for client_stats in j['client_stats']: 651 | if client_stats['jobname'] == 'All clients': 652 | # Don't bother looking at combined, bins doesn't exist there 653 | continue 654 | if client_stats[rdwr]['total_ios']: 655 | ios = ios + client_stats[rdwr]['total_ios'] 656 | for k in client_stats[rdwr]['clat_ns']['bins'].keys(): 657 | try: 658 | bins[k] = bins[k] + client_stats[rdwr]['clat_ns']['bins'][k] 659 | except: 660 | bins[k] = client_stats[rdwr]['clat_ns']['bins'][k] 661 | #ios = client[rdwr]['total_ios'] 662 | #bins = client[rdwr]['clat_ns']['bins'] 663 | if ios: 664 | runttl = 0 665 | # This was changed in 2.99 to be in nanoseconds and to discard the crazy _bits magic 666 | if float(fioVerString.split('-')[1]) >= 2.99: 667 | lat_ns = [] 668 | # JSON dict has keys of type string, need a sorted integer list for our work... 
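                # Worked example of the exceedance math below (illustrative
                # numbers, not from a real run): with ios=10 and
                # bins={1000: 4, 2000: 3, 8000: 3}, the running totals are
                # 4, 7 and 10, so the rows written are (1.0, 0.6), (2.0, 0.3)
                # and (8.0, 0.0) -- the latency in usec and the fraction of
                # IOs that took longer than it.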
669 | for entry in bins: 670 | lat_ns.append(int(entry)) 671 | for entry in sorted(lat_ns): 672 | lat_us = float(entry) / 1000.0 673 | cnt = int(bins[str(entry)]) 674 | runttl += cnt 675 | pctile = 1.0 - float(runttl) / float(ios) 676 | if cnt > 0: 677 | AppendFile( 678 | ",".join((str(lat_us), str(pctile))), outfile) 679 | else: 680 | plat_bits = client[rdwr]['clat']['bins']['FIO_IO_U_PLAT_BITS'] 681 | plat_val = client[rdwr]['clat']['bins']['FIO_IO_U_PLAT_VAL'] 682 | for b in range(0, int(client[rdwr]['clat']['bins']['FIO_IO_U_PLAT_NR'])): 683 | cnt = int(client[rdwr]['clat']['bins'][str(b)]) 684 | runttl += cnt 685 | pctile = 1.0 - float(runttl) / float(ios) 686 | if cnt > 0: 687 | AppendFile( 688 | ",".join((str(plat_idx_to_val(b, plat_bits, plat_val)), 689 | str(pctile))), outfile) 690 | 691 | def GenerateJobfile(rw, wmix, bs, drive, testcapacity, runtime, threads, iodepth, testoffset): 692 | """Make a jobfile for the specified test parameters""" 693 | global verify, nullio 694 | jobfile = tempfile.NamedTemporaryFile(delete=False, mode='w') 695 | for dr in drive.split(","): 696 | jobfile.write("[test-" + dr + "]\n") 697 | jobfile.write("readwrite=" + str(rw) + "\n") 698 | jobfile.write("rwmixwrite=" + str(wmix) + "\n") 699 | jobfile.write("bs=" + str(bs) + "\n") 700 | jobfile.write("invalidate=1\n") 701 | jobfile.write("end_fsync=0\n") 702 | jobfile.write("group_reporting=1\n") 703 | jobfile.write("direct=1\n") 704 | jobfile.write("filename=" + str(dr) + "\n") 705 | jobfile.write("size=" + str(testcapacity) + "G\n") 706 | jobfile.write("time_based=1\n") 707 | jobfile.write("runtime=" + str(runtime) + "\n") 708 | if nullio: 709 | jobfile.write("ioengine=null\n") 710 | else: 711 | jobfile.write("ioengine=libaio\n") 712 | jobfile.write("numjobs=" + str(threads) + "\n") 713 | jobfile.write("iodepth=" + str(iodepth) + "\n") 714 | jobfile.write("norandommap=1\n") 715 | jobfile.write("randrepeat=0\n") 716 | jobfile.write("thread=1\n") 717 | jobfile.write("exitall=1\n") 718 | if verify: 719 | jobfile.write("verify=crc32c\n") 720 | jobfile.write("random_generator=lfsr\n") 721 | jobfile.write("offset=" + str(testoffset) + "G\n") 722 | if compressPct != 100: 723 | jobfile.write("buffer_compress_percentage=" + str(compressPct) + "\n") 724 | jobfile.close() 725 | return jobfile 726 | 727 | def CombineThreadOutputs(suffix, outcsv, lat): 728 | """Merge all FIO iops/lat logs across all servers""" 729 | # The lists may be called "iops" but the same works for clat/slat 730 | iops = [0] * (runtime + extra_runtime) 731 | # For latencies, need to keep the _w and _r separate 732 | iops_w = [0] * (runtime + extra_runtime) 733 | host_iops = OrderedDict() 734 | host_iops_w = OrderedDict() 735 | filecnt = 0 736 | if not cluster: 737 | pdd = OrderedDict() 738 | pdd['localhost'] = 1 # Just the single host, faked here 739 | else: 740 | pdd = physDriveDict 741 | for host in pdd.keys(): 742 | host_iops[host] = [0] * (runtime + extra_runtime) 743 | host_iops_w[host] = [0] * (runtime + extra_runtime) 744 | if not cluster: 745 | fileglob = testfile + str(suffix) + '.*log' 746 | else: 747 | fileglob = testfile + str(suffix) + '.*.log.' 
+ host 748 | for filename in glob.glob(fileglob): 749 | filecnt = filecnt + 1 750 | catcmdline = ['cat', filename] 751 | catcode, catout, caterr = Run(catcmdline) 752 | if catcode != 0: 753 | AppendFile("ERROR", testcsv) 754 | raise FIOError(" ".join(catcmdline), 755 | catcode, caterr, catout) 756 | lines = catout.split("\n") 757 | # Set time 0 IOPS to first values 758 | riops = 0 759 | wiops = 0 760 | nexttime = 0 761 | for x in range(0, runtime + extra_runtime): 762 | if not lat: 763 | iops[x] = iops[x] + riops + wiops 764 | host_iops[host][x] = host_iops[host][x] + riops + wiops 765 | else: 766 | iops[x] = iops[x] + riops 767 | iops_w[x] = iops_w[x] + wiops 768 | host_iops[host][x] = host_iops[host][x] + riops 769 | host_iops_w[host][x] = host_iops_w[host][x] + wiops 770 | while len(lines) > 1 and (nexttime < x): 771 | parts = lines[0].split(",") 772 | nexttime = float(parts[0]) / 1000.0 773 | if int(lines[0].split(",")[2]) == 1: 774 | wiops = int(parts[1]) 775 | else: 776 | riops = int(parts[1]) 777 | lines = lines[1:] 778 | 779 | # Generate the combined CSV 780 | with open(outcsv, 'a') as f: 781 | for cnt in range(int(extra_runtime/2), runtime + extra_runtime): 782 | if filecnt > 0 and lat: 783 | line = str(float(iops[cnt])/float(filecnt)) 784 | line = line + ',' + str(float(iops_w[cnt])/float(filecnt)) 785 | else: 786 | line = str(iops[cnt]) 787 | if len(pdd.keys()) > 1: 788 | for host in pdd.keys(): 789 | if filecnt > 0 and lat: 790 | line = line + ',' + \ 791 | str(float(host_iops[host][cnt])/float(filecnt)) 792 | line = line + ',' + \ 793 | str(float(host_iops_w[host] 794 | [cnt])/float(filecnt)) 795 | else: 796 | line = line + "," + str(host_iops[host][cnt]) 797 | f.write(line + "\n") 798 | 799 | # Output file names 800 | testfile = TestName(seqrand, wmix, bs, threads, iodepth) 801 | 802 | if seqrand == "Seq": 803 | rw = "rw" 804 | else: 805 | rw = "randrw" 806 | 807 | if iops_log: 808 | extra_runtime = 10 809 | else: 810 | extra_runtime = 0 811 | 812 | cmdline = [fio] 813 | if not cluster: 814 | jobfile = GenerateJobfile(rw, wmix, bs, physDrive, testcapacity, 815 | runtime + extra_runtime, threads, iodepth, testoffset) 816 | cmdline = cmdline + [jobfile.name] 817 | AppendFile("[JOBFILE]", testfile) 818 | with open(jobfile.name, 'r') as of: 819 | txt = of.read() 820 | AppendFile(txt, testfile) 821 | if iops_log: 822 | AppendFile("write_iops_log=" + testfile, jobfile.name) 823 | AppendFile("write_lat_log=" + testfile, jobfile.name) 824 | AppendFile("log_avg_msec=1000", jobfile.name) 825 | AppendFile("log_unix_epoch=0", jobfile.name) 826 | else: 827 | jobfile = [] 828 | for host in physDriveDict.keys(): 829 | newjob = GenerateJobfile(rw, wmix, bs, physDriveDict[host], testcapacity, 830 | runtime + extra_runtime, threads, iodepth, testoffset) 831 | cmdline = cmdline + ['--client=' + str(host), str(newjob.name)] 832 | AppendFile('[JOBFILE-' + str(host) + "]", testfile) 833 | with open(newjob.name, 'r') as of: 834 | txt = of.read() 835 | AppendFile(txt, testfile) 836 | jobfile = jobfile + [newjob] 837 | if iops_log: 838 | AppendFile("write_iops_log=" + testfile, newjob.name) 839 | AppendFile("write_lat_log=" + testfile, newjob.name) 840 | AppendFile("log_avg_msec=1000", newjob.name) 841 | AppendFile("log_unix_epoch=0", newjob.name) 842 | 843 | cmdline = cmdline + ['--output-format=' + str(fioOutputFormat)] 844 | 845 | # There are some NVME drives with 4k physical and logical out there. 
846 | # Check that we can actually do this size IO, OTW return 0 for all 847 | skiptest = False 848 | code, out, err = Run(['blockdev', '--getpbsz', str(physDrive.split(',')[0])]) 849 | if code == 0: 850 | iomin = int(out.split("\n")[0]) 851 | if int(bs) < iomin: 852 | skiptest = True 853 | 854 | if readOnly and wmix != 0: 855 | skiptest = True 856 | 857 | # Silently ignore failure to return min block size, FIO will fail and 858 | # we'll catch that a little later. 859 | if skiptest: 860 | code = 0 861 | out = "Test not run because block size " + str(bs) 862 | out += " below iominsize " + str(iomin) + "\n" 863 | out += "3;" + "0;" * 100 + "\n" # Bogus 0-filled resulte line 864 | err = "" 865 | else: 866 | code, out, err = Run(cmdline) 867 | AppendFile("[STDOUT]", testfile) 868 | AppendFile(out, testfile) 869 | AppendFile("[STDERR]", testfile) 870 | AppendFile(err, testfile) 871 | 872 | if cluster: 873 | for job in jobfile: 874 | os.unlink(job.name) 875 | else: 876 | os.unlink(jobfile.name) 877 | 878 | # Make sure we had successful completion, else note and abort run 879 | if code != 0: 880 | AppendFile("ERROR", testcsv) 881 | raise FIOError(" ".join(cmdline), code, err, out) 882 | 883 | if iops_log: 884 | CombineThreadOutputs('_iops', timeseriescsv, False) 885 | CombineThreadOutputs('_clat', timeseriesclatcsv, True) 886 | CombineThreadOutputs('_slat', timeseriesslatcsv, True) 887 | 888 | rdiops = 0 889 | wriops = 0 890 | rlat = 0 891 | wlat = 0 892 | syscpu = 0 893 | usrcpu = 0 894 | if not skiptest: 895 | # Chomp anything before the json. 896 | for i in range(0, len(out)): 897 | if out[i] == '{': 898 | out = out[i:] 899 | break 900 | j = json.loads(out) 901 | 902 | if cluster and len(physDriveDict.keys()) == 1: 903 | client = j['client_stats'][0] 904 | elif cluster: 905 | for res in j['client_stats']: 906 | if res['jobname'] == "All clients": 907 | client = res 908 | break 909 | else: 910 | client = j['jobs'][0] 911 | 912 | syscpu = float(client['sys_cpu']) 913 | usrcpu = float(client['usr_cpu']) 914 | 915 | rdiops = float(client['read']['iops']) 916 | wriops = float(client['write']['iops']) 917 | 918 | # 'lat' goes to 'lat_ns' in newest FIO JSON formats...ugh 919 | try: 920 | rlat = float(client['read']['lat_ns']['mean']) / 1000 # ns->us 921 | except: 922 | rlat = float(client['read']['lat']['mean']) 923 | try: 924 | wlat = float(client['write']['lat_ns']['mean']) / 1000 # ns->us 925 | except: 926 | wlat = float(client['write']['lat']['mean']) 927 | 928 | iops = "{0:0.0f}".format(rdiops + wriops) 929 | mbps = "{0:0.2f}".format((float((rdiops+wriops) * bs) / 930 | (1024.0 * 1024.0))) 931 | lat = "{0:0.1f}".format(max(rlat, wlat)) 932 | 933 | AppendFile(",".join((str(seqrand), str(wmix), str(bs), str(threads), 934 | str(iodepth), str(iops), str(mbps), str(rlat), 935 | str(wlat), str(syscpu), str(usrcpu))), testcsv) 936 | 937 | if skiptest: 938 | AppendFile("1,1\n", testfile + ".exc.read.csv") 939 | AppendFile("1,1\n", testfile + ".exc.write.csv") 940 | else: 941 | WriteExceedance(j, 'read', testfile + ".exc.read.csv") 942 | WriteExceedance(j, 'write', testfile + ".exc.write.csv") 943 | 944 | return iops, mbps, lat 945 | 946 | 947 | def DefineTests(): 948 | """Generate the work list for the main worker into OC.""" 949 | global oc, quickie, fastPrecond 950 | # What we're shmoo-ing across 951 | bslist = (512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072) 952 | qdlist = (1, 2, 4, 8, 16, 32, 64, 128, 256) 953 | threadslist = (1, 2, 4, 8, 16, 32, 64, 128, 256) 954 | 955 | shorttime = 120 
# Runtime of point tests 956 | longtime = 1200 # Runtime of long-running tests 957 | if quickie: 958 | shorttime = int(shorttime / 10) 959 | longtime = int(longtime / 10) 960 | 961 | def AddTest(name, seqrand, writepct, blocksize, threads, qdperthread, 962 | iops_log, runtime, desc, cmdline): 963 | """Bare usage add a test to the list to execute""" 964 | if threads != "": 965 | qd = int(threads) * int(qdperthread) 966 | else: 967 | qd = 0 968 | dat = {} 969 | dat['name'] = name 970 | dat['seqrand'] = seqrand 971 | dat['wmix'] = writepct 972 | dat['bs'] = blocksize 973 | dat['qd'] = qd 974 | dat['qdperthread'] = qdperthread 975 | dat['threads'] = threads 976 | dat['bw'] = '' 977 | dat['iops'] = '' 978 | dat['lat'] = '' 979 | dat['desc'] = desc 980 | dat['iops_log'] = iops_log 981 | dat['runtime'] = runtime 982 | dat['cmdline'] = cmdline 983 | oc.append(dat) 984 | 985 | def DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, desc, 986 | iops_log, runtime): 987 | """Add an individual run to the list of tests to execute""" 988 | AddTest(testname, seqrand, wmix, bs, threads, iodepth, iops_log, 989 | runtime, desc, lambda o: {RunTest(o['iops_log'], 990 | o['seqrand'], o['wmix'], 991 | o['bs'], o['threads'], 992 | o['qdperthread'], 993 | o['runtime'])}) 994 | 995 | def AddTestBSShmoo(): 996 | """Add a sequence of tests varying the block size""" 997 | AddTest(testname, 'Preparation', '', '', '', '', '', '', '', 998 | lambda o: {AppendFile(o['name'], testcsv)}) 999 | for bs in bslist: 1000 | desc = testname + ", BS=" + str(bs) 1001 | DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, desc, 1002 | iops_log, runtime) 1003 | 1004 | def AddTestQDShmoo(): 1005 | """Add a sequence of tests varying the queue depth""" 1006 | AddTest(testname, 'Preparation', '', '', '', '', '', '', '', 1007 | lambda o: {AppendFile(o['name'], testcsv)}) 1008 | for iodepth in qdlist: 1009 | desc = testname + ", QD=" + str(iodepth) 1010 | DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, desc, 1011 | iops_log, runtime) 1012 | 1013 | def AddTestThreadsShmoo(): 1014 | """Add a sequence of tests varying the number of threads""" 1015 | AddTest(testname, 'Preparation', '', '', '', '', '', '', '', 1016 | lambda o: {AppendFile(o['name'], testcsv)}) 1017 | for threads in threadslist: 1018 | desc = testname + ", Threads=" + str(threads) 1019 | DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, desc, 1020 | iops_log, runtime) 1021 | 1022 | AddTest('Sequential Preconditioning', 'Preparation', '', '', '', '', '', 1023 | '', '', lambda o: {}) # Only for display on-screen 1024 | AddTest('Sequential Preconditioning', 'Seq Pass 1', '100', '131072', '1', 1025 | '256', False, '', 'Sequential Preconditioning Pass 1', 1026 | lambda o: {SequentialConditioning()}) 1027 | if not fastPrecond: 1028 | AddTest('Sequential Preconditioning', 'Seq Pass 2', '100', '131072', '1', 1029 | '256', False, '', 'Sequential Preconditioning Pass 2', 1030 | lambda o: {SequentialConditioning()}) 1031 | 1032 | testname = "Sustained Multi-Threaded Sequential Read Tests by Block Size" 1033 | seqrand = "Seq" 1034 | wmix = 0 1035 | threads = 1 1036 | runtime = shorttime 1037 | iops_log = False 1038 | iodepth = 256 1039 | AddTestBSShmoo() 1040 | 1041 | testname = "Sustained Multi-Threaded Random Read Tests by Block Size" 1042 | seqrand = "Rand" 1043 | wmix = 0 1044 | threads = 16 1045 | runtime = shorttime 1046 | iops_log = False 1047 | iodepth = 16 1048 | AddTestBSShmoo() 1049 | 1050 | testname = "Sequential Write Tests with Queue Depth=1 by Block 
Size" 1051 | seqrand = "Seq" 1052 | wmix = 100 1053 | threads = 1 1054 | runtime = shorttime 1055 | iops_log = False 1056 | iodepth = 1 1057 | AddTestBSShmoo() 1058 | 1059 | if not fastPrecond: 1060 | AddTest('Random Preconditioning', 'Preparation', '', '', '', '', '', '', 1061 | '', lambda o: {}) # Only for display on-screen 1062 | AddTest('Random Preconditioning', 'Rand Pass 1', '100', '4096', '1', 1063 | '256', False, '', 'Random Preconditioning', 1064 | lambda o: {RandomConditioning()}) 1065 | AddTest('Random Preconditioning', 'Rand Pass 2', '100', '4096', '1', 1066 | '256', False, '', 'Random Preconditioning', 1067 | lambda o: {RandomConditioning()}) 1068 | 1069 | testname = "Sustained 4KB Random Read Tests by Number of Threads" 1070 | seqrand = "Rand" 1071 | wmix = 0 1072 | bs = 4096 1073 | runtime = shorttime 1074 | iops_log = False 1075 | iodepth = 1 1076 | AddTestThreadsShmoo() 1077 | 1078 | testname = "Sustained 4KB Random mixed 30% Write Tests by Threads" 1079 | seqrand = "Rand" 1080 | wmix = 30 1081 | bs = 4096 1082 | runtime = shorttime 1083 | iops_log = False 1084 | iodepth = 1 1085 | AddTestThreadsShmoo() 1086 | 1087 | testname = "Sustained Perf Stability Test - 4KB Random 30% Write" 1088 | AddTest(testname, 'Preparation', '', '', '', '', '', '', '', 1089 | lambda o: {AppendFile(o['name'], testcsv)}) 1090 | seqrand = "Rand" 1091 | wmix = 30 1092 | bs = 4096 1093 | runtime = longtime 1094 | iops_log = True 1095 | iodepth = 1 1096 | threads = 256 1097 | DoAddTest(testname, seqrand, wmix, bs, threads, iodepth, testname, 1098 | iops_log, runtime) 1099 | 1100 | testname = "Sustained 4KB Random Write Tests by Number of Threads" 1101 | seqrand = "Rand" 1102 | wmix = 100 1103 | bs = 4096 1104 | runtime = shorttime 1105 | iops_log = False 1106 | iodepth = 1 1107 | AddTestThreadsShmoo() 1108 | 1109 | testname = "Sustained Multi-Threaded Random Write Tests by Block Size" 1110 | seqrand = "Rand" 1111 | wmix = 100 1112 | runtime = shorttime 1113 | iops_log = False 1114 | iodepth = 16 1115 | threads = 16 1116 | AddTestBSShmoo() 1117 | 1118 | 1119 | def RunAllTests(): 1120 | """Iterate through the OC work queue and run each job, show progress.""" 1121 | global ret_iops, ret_mbps, ret_lat, fioVerString 1122 | 1123 | # Determine some column widths to make format specifiers 1124 | maxlen = 0 1125 | for o in oc: 1126 | maxlen = max(maxlen, len(o['desc'])) 1127 | descfmt = "{0:" + str(maxlen) + "}" 1128 | resfmt = "{1: >8} {2: >9} {3: >8}" 1129 | fmtstr = descfmt + " " + resfmt 1130 | 1131 | def JobWrapper(**kwargs): 1132 | """Thread wrapper to store return values for parent to read later.""" 1133 | global ret_iops, ret_mbps, ret_lat, oc 1134 | # Until we know it's succeeded, we're in error 1135 | ret_iops = "ERROR" 1136 | ret_mbps = "ERROR" 1137 | ret_lat = "ERROR" 1138 | try: 1139 | val = o['cmdline'](o) 1140 | ret_iops = list(val)[0][0] 1141 | ret_mbps = list(val)[0][1] 1142 | ret_lat = list(val)[0][2] 1143 | except FIOError as e: 1144 | print("\nFIO Error!\n" + e.cmdline + "\nSTDOUT:\n" + e.stdout) 1145 | print("STDERR:\n" + e.stderr) 1146 | raise 1147 | except: 1148 | print("\nUnexpected error while running FIO job.") 1149 | raise 1150 | 1151 | print("*" * len(fmtstr.format("", "", "", ""))) 1152 | print("ezFio test parameters:\n") 1153 | 1154 | fmtinfo = "{0: >20}: {1}" 1155 | print(fmtinfo.format("Drive", str(physDriveTxt))) 1156 | print(fmtinfo.format("Model", str(model))) 1157 | print(fmtinfo.format("Serial", str(serial))) 1158 | print(fmtinfo.format("AvailCapacity", str(physDriveGiB) 
+ " GiB")) 1159 | print(fmtinfo.format("TestedCapacity", str(testcapacity) + " GiB")) 1160 | print(fmtinfo.format("TestedOffset", str(testoffset) + " GiB")) 1161 | print(fmtinfo.format("CPU", str(cpu))) 1162 | print(fmtinfo.format("Cores", str(cpuCores))) 1163 | print(fmtinfo.format("Frequency", str(cpuFreqMHz))) 1164 | print(fmtinfo.format("FIO Version", str(fioVerString))) 1165 | 1166 | print("\n") 1167 | print(fmtstr.format("Test Description", "BW(MB/s)", "IOPS", "Lat(us)")) 1168 | print(fmtstr.format("-"*maxlen, "-"*8, "-"*9, "-"*8)) 1169 | for o in oc: 1170 | if o['desc'] == "": 1171 | # This is a header-printing job, don't thread out 1172 | print("\n" + fmtstr.format("---"+o['name']+"---", "", "", "")) 1173 | sys.stdout.flush() 1174 | o['cmdline'](o) 1175 | else: 1176 | # This is a real test job, run it in a thread 1177 | if sys.stdout.isatty(): 1178 | print(fmtstr.format(o['desc'], "Runtime", "00:00:00", "..."), end='\r') 1179 | else: 1180 | print(descfmt.format(o['desc']), end='') 1181 | sys.stdout.flush() 1182 | starttime = datetime.datetime.now() 1183 | job = threading.Thread(target=JobWrapper, kwargs=(o)) 1184 | job.start() 1185 | while job.is_alive(): 1186 | now = datetime.datetime.now() 1187 | delta = now - starttime 1188 | dstr = "{0:02}:{1:02}:{2:02}".format(int(delta.seconds / 3600), 1189 | int((delta.seconds % 3600)/60), 1190 | int(delta.seconds % 60)) 1191 | if sys.stdout.isatty(): 1192 | # Blink runtime to make it obvious stuff is happening 1193 | if (delta.seconds % 2) != 0: 1194 | print(fmtstr.format(o['desc'], "Runtime", dstr, "..."), end='\r') 1195 | else: 1196 | print(fmtstr.format(o['desc'], "", dstr, ""), end='\r') 1197 | sys.stdout.flush() 1198 | time.sleep(1) 1199 | job.join() 1200 | # Pretty-print with grouping, if possible 1201 | try: 1202 | ret_iops = "{:,}".format(int(ret_iops)) 1203 | ret_mbps = "{:0,.2f}".format(float(ret_mbps)) 1204 | except: 1205 | pass 1206 | if sys.stdout.isatty(): 1207 | print(fmtstr.format(o['desc'], ret_mbps, ret_iops, ret_lat)) 1208 | else: 1209 | print(" " + resfmt.format(o['desc'], 1210 | ret_mbps, ret_iops, ret_lat)) 1211 | sys.stdout.flush() 1212 | # On any error abort the test, all future results could be invalid 1213 | if ret_mbps == "ERROR": 1214 | print("ERROR DETECTED, ABORTING TEST RUN.") 1215 | sys.exit(2) 1216 | 1217 | 1218 | def GenerateResultODS(): 1219 | """Builds a new ODS spreadsheet w/graphs from generated test CSV files.""" 1220 | 1221 | def GetContentXMLFromODS(odssrc): 1222 | """Extract content.xml from an ODS file, where the sheet lives.""" 1223 | ziparchive = zipfile.ZipFile(odssrc) 1224 | content = ziparchive.read("content.xml").decode('UTF-8') 1225 | content = content.replace("\n", "") 1226 | return content 1227 | 1228 | def CSVtoXMLSheet(sheetName, csvName): 1229 | """Replace a named sheet with the contents of a CSV file.""" 1230 | newt = ' ' 1232 | newt += '' 1246 | except: # It's not a float, so let's call it a string 1247 | cell = '' 1250 | newt += cell 1251 | newt += '' 1252 | f.close() 1253 | # Close the tags 1254 | newt += '' 1255 | return newt 1256 | 1257 | def ReplaceSheetWithCSV_regex(sheetName, csvName, xmltext): 1258 | """Replace a named sheet with the contents of a CSV file.""" 1259 | newt = CSVtoXMLSheet(sheetName, csvName) 1260 | 1261 | # Replace the XML using lazy string matching 1262 | searchstr = '' 1264 | return re.sub(searchstr, newt, xmltext, flags=re.DOTALL) 1265 | 1266 | def AppendSheetFromCSV(sheetName, csvName, xmltext): 1267 | """Add a new sheet to the XML from the CSV file.""" 
1268 | newt = CSVtoXMLSheet(sheetName, csvName) 1269 | 1270 | # Replace the XML using lazy string matching 1271 | searchstr = '' 1272 | return re.sub(searchstr, newt + searchstr, xmltext, flags=re.DOTALL) 1273 | 1274 | def UpdateContentXMLToODS_text(odssrc, odsdest, xmltext): 1275 | """Replace content.xml in an ODS w/an in-memory copy and write new. 1276 | 1277 | Replace content.xml in an ODS file with in-memory, modified copy and 1278 | write new ODS. Can't just copy source.zip and replace one file, the 1279 | output ZIP file is not correct in many cases (opens in Excel but fails 1280 | ODF validation and LibreOffice fails to load under Windows). 1281 | 1282 | Also strips out any binary versions of objects and the thumbnail, 1283 | since they are no longer valid once we've changed the data in the 1284 | sheet. 1285 | """ 1286 | if os.path.exists(odsdest): 1287 | os.unlink(odsdest) 1288 | 1289 | # Windows ZipArchive will not use "Store" even with "no compression" 1290 | # so we need to have a mimetype.zip file encoded below to match spec: 1291 | mimetypezip = """ 1292 | UEsDBBQAAAgAAICyN0+FbDmKLgAAAC4AAAAIAAAAbWltZXR5cGVhcHBsaWNhdGlvbi92bmQub2Fz 1293 | aXMub3BlbmRvY3VtZW50LnNwcmVhZHNoZWV0UEsBAhQAFAAACAAAgLI3T4VsOYouAAAALgAAAAgA 1294 | AAAAAAAAAAAAAAAAAAAAAG1pbWV0eXBlUEsFBgAAAAABAAEANgAAAFQAAAAAAA== 1295 | """ 1296 | zipbytes = base64.b64decode(mimetypezip) 1297 | with open(odsdest, 'wb') as f: 1298 | f.write(zipbytes) 1299 | 1300 | zasrc = zipfile.ZipFile(odssrc, 'r') 1301 | zadst = zipfile.ZipFile(odsdest, 'a', zipfile.ZIP_DEFLATED) 1302 | for entry in zasrc.namelist(): 1303 | if entry == "mimetype": 1304 | continue 1305 | elif entry.endswith('/') or entry.endswith('\\'): 1306 | continue 1307 | elif entry == "content.xml": 1308 | zadst.writestr("content.xml", xmltext) 1309 | elif ("Object" in entry) and ("content.xml" in entry): 1310 | # Remove table 1311 | rdbytes = zasrc.read(entry).decode('UTF-8') 1312 | outbytes = re.sub( 1313 | '.*', "", rdbytes, flags=re.DOTALL) 1314 | zadst.writestr(entry, outbytes) 1315 | elif entry == "META-INF/manifest.xml": 1316 | # Remove ObjectReplacements from the list 1317 | rdbytes = zasrc.read(entry).decode('UTF-8') 1318 | outbytes = "" 1319 | lines = rdbytes.split("\n") 1320 | for line in lines: 1321 | if not (("ObjectReplacement" in line) or ("Thumbnails" in line)): 1322 | outbytes = outbytes + line + "\n" 1323 | zadst.writestr(entry, outbytes) 1324 | elif ("Thumbnails" in entry) or ("ObjectReplacement" in entry): 1325 | # Skip binary versions 1326 | continue 1327 | else: 1328 | rdbytes = zasrc.read(entry) 1329 | zadst.writestr(entry, rdbytes) 1330 | zasrc.close() 1331 | zadst.close() 1332 | 1333 | def CombineExceedanceCSV(qdList, testType, testWpct, testBS, testIOdepth, suffix): 1334 | """Merge multiple exceedance CSVs into a single output file. 1335 | 1336 | Column merge multiple CSV files into a single one. Complicated by 1337 | the fact that the number of columns in each may vary. 
1338 | """ 1339 | csv = details + "/ezfio_exceedance_"+suffix+".csv" 1340 | if os.path.exists(csv): 1341 | os.unlink(csv) 1342 | CSVInfoHeader(csv) 1343 | line1 = "" 1344 | line2 = "" 1345 | for qd in qdList: 1346 | line1 = line1 + \ 1347 | ("QD%d Read Exceedance,,QD%d Write Exceedance,,," % (qd, qd)) 1348 | line2 = line2 + "rdusec,rdpct,wrusec,wrpct,," 1349 | AppendFile(line1, csv) 1350 | AppendFile(line2, csv) 1351 | 1352 | files = [] 1353 | for qd in qdList: 1354 | try: 1355 | r = open(TestName(testType, testWpct, testBS, 1356 | qd, testIOdepth) + ".exc.read.csv") 1357 | except: 1358 | r = None 1359 | try: 1360 | w = open(TestName(testType, testWpct, testBS, 1361 | qd, testIOdepth) + ".exc.write.csv") 1362 | except: 1363 | w = None 1364 | files.append([r, w]) 1365 | while True: 1366 | all_empty = True 1367 | l = "" 1368 | for fset in files: 1369 | if fset[0] is None: 1370 | a = "" 1371 | else: 1372 | a = fset[0].readline().strip() 1373 | if fset[1] is None: 1374 | b = "" 1375 | else: 1376 | b = fset[1].readline().strip() 1377 | l += (a + ",", ",,")[not a] 1378 | l += (b + ",", ",,")[not b] 1379 | l += ',' 1380 | all_empty = all_empty and (not a) and (not b) 1381 | AppendFile(l, csv) 1382 | if all_empty: 1383 | break 1384 | return csv 1385 | 1386 | global odssrc, timeseriescsv, testcsv, physDrive, testcapacity, model, testoffset 1387 | global serial, uname, fioVerString, odsdest, timeseriesclatcsv, timeseriesslatcsv 1388 | 1389 | xmlsrc = GetContentXMLFromODS(odssrc) 1390 | xmlsrc = ReplaceSheetWithCSV_regex("Timeseries", timeseriescsv, xmlsrc) 1391 | xmlsrc = ReplaceSheetWithCSV_regex( 1392 | "TimeseriesCLAT", timeseriesclatcsv, xmlsrc) 1393 | xmlsrc = ReplaceSheetWithCSV_regex( 1394 | "TimeseriesSLAT", timeseriesslatcsv, xmlsrc) 1395 | xmlsrc = ReplaceSheetWithCSV_regex("Tests", testcsv, xmlsrc) 1396 | # Potentially add exceedance data if we have it 1397 | if fioOutputFormat == "json+": 1398 | csv = CombineExceedanceCSV( 1399 | [1, 4, 16, 32], "Rand", 30, 4096, 1, "exceedance30") 1400 | xmlsrc = ReplaceSheetWithCSV_regex("Exceedance", csv, xmlsrc) 1401 | # Remove draw:image references to deleted binary previews 1402 | xmlsrc = re.sub("", "", xmlsrc, flags=re.DOTALL) 1403 | # OpenOffice doesn't recalculate these cells on load?! 1404 | xmlsrc = xmlsrc.replace("_DRIVE", str(physDrive)) 1405 | xmlsrc = xmlsrc.replace("_TESTCAP", str(testcapacity)) 1406 | xmlsrc = xmlsrc.replace("_MODEL", str(model)) 1407 | xmlsrc = xmlsrc.replace("_SERIAL", str(serial)) 1408 | xmlsrc = xmlsrc.replace("_OS", str(uname)) 1409 | xmlsrc = xmlsrc.replace("_FIO", str(fioVerString)) 1410 | UpdateContentXMLToODS_text(odssrc, odsdest, xmlsrc) 1411 | 1412 | 1413 | fio = "" # FIO executable 1414 | fioVerString = "" # FIO self-reported version 1415 | fioOutputFormat = "json" # Can we make exceedance charts using JSON+ output? 1416 | cluster = False # Running multiple jobs in a cluster using fio --server 1417 | physDrive = "" # Device path to test 1418 | physDriveTxt = "" # Unadulterated drive line 1419 | physDriveDict = OrderedDict() # Device path to test 1420 | utilization = "" # Device utilization % 1..100 1421 | offset = "" # Test region offset % 0..99 1422 | yes = False # Skip user verification 1423 | quickie = False # Flag to indicate short runs, only for ezfio debugging! 
1424 | nullio = False # Flag to do no IO at all, use nullio instead 1425 | fastPrecond = False # Only do 1x sequential write for preconditioning (no random) 1426 | verify = False # Use built-in FIO data verification 1427 | readOnly = False # Only run read-only tests 1428 | 1429 | cpu = "" # CPU model 1430 | cpuCores = "" # # of cores (including virtual) 1431 | cpuFreqMHz = "" # "Nominal" speed of CPU 1432 | uname = "" # Kernel name/info 1433 | 1434 | physDriveGiB = "" # Disk size in GiB (2^n) 1435 | physDriveGB = "" # Disk size in GB (10^n) 1436 | physDriveBase = "" # Basename (ex: nvme0n1) 1437 | testcapacity = "" # Total GiB to test 1438 | testoffset = "" # test region offset in GiB 1439 | model = "" # Drive model name 1440 | serial = "" # Drive serial number 1441 | 1442 | ds = "" # Datestamp to appent to files/directories to uniquify 1443 | pwd = "" # $CWD 1444 | 1445 | details = "" # Test details directory 1446 | testcsv = "" # Intermediate test output CSV file 1447 | timeseriescsv = "" # Intermediate iostat output CSV file 1448 | timeseriesclatcsv = "" # Intermediate iostat output CSV file 1449 | timeseriesslatcsv = "" # Intermediate iostat output CSV file 1450 | exceedancecsv = "" # Intermediate exceedance output CSV 1451 | 1452 | odssrc = "" # Original ODS spreadsheet file 1453 | odsdest = "" # Generated results ODS spreadsheet file 1454 | 1455 | oc = [] # The list of tests to run 1456 | aioNeeded = 4096 # Minimum AIO kernel setting to run all tests 1457 | 1458 | # These globals are used to return the output results of the test thread 1459 | # Required because it's difficult to pass back values from a threading.(). 1460 | ret_iops = 0 # Last test IOPS 1461 | ret_mbps = 0 # Last test MBPs 1462 | ret_lat = 0 # Last test in microseconds 1463 | 1464 | if __name__ == "__main__": 1465 | ParseArgs() 1466 | CheckAdmin() 1467 | fio = FindFIO() 1468 | CheckFIOVersion() 1469 | CheckAIOLimits() 1470 | CollectSystemInfo() 1471 | CollectDriveInfo() 1472 | VerifyContinue() 1473 | SetupFiles() 1474 | DefineTests() 1475 | RunAllTests() 1476 | GenerateResultODS() 1477 | 1478 | print("\nCOMPLETED!\nSpreadsheet file: " + odsdest) 1479 | -------------------------------------------------------------------------------- /ezfio.ps1: -------------------------------------------------------------------------------- 1 | # ezfio 1.0 2 | # earle.philhower.iii@hgst.com 3 | # 4 | # ------------------------------------------------------------------------ 5 | # ezfio is free software: you can redistribute it and/or modify 6 | # it under the terms of the GNU General Public License as published by 7 | # the Free Software Foundation, either version 2 of the License, or 8 | # (at your option) any later version. 9 | # 10 | # ezfio is distributed in the hope that it will be useful, 11 | # but WITHOUT ANY WARRANTY; without even the implied warranty of 12 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 | # GNU General Public License for more details. 14 | # 15 | # You should have received a copy of the GNU General Public License 16 | # along with ezfio. If not, see . 
17 | # ------------------------------------------------------------------------ 18 | # 19 | # Usage: ezfio.ps1 -drive {physicaldrive number} 20 | # Example: ezfio.ps1 -drive 3 21 | # 22 | # When no parameters are specified, the script will provide usage info 23 | # as well as a list of attached PhysicalDrives 24 | # 25 | # This script requires Administrator privileges so must be run from 26 | # a PowerShell session started with "Run as Administrator." 27 | # 28 | # If Windows errors with, "...cannot be loaded because running scripts is 29 | # disabled on this system...." you need to run the following line to enable 30 | # execution of local PowerShell scripts: 31 | # Set-ExecutionPolicy -scope CurrentUser RemoteSigned 32 | # 33 | # Please be sure to have FIO installed, or you will be prompted to install 34 | # and re-run the script. 35 | 36 | 37 | param ( 38 | [string]$drive = "none", 39 | [string]$outDir = "none", 40 | [int]$util = 100, 41 | [switch]$help, 42 | [switch]$yes, 43 | [switch]$nullio, 44 | [switch]$fastprecond, 45 | [switch]$quickie 46 | ) 47 | 48 | 49 | Add-Type -Assembly System.IO.Compression 50 | Add-Type -Assembly System.IO.Compression.FileSystem 51 | Add-Type -AssemblyName PresentationFramework, System.Windows.Forms 52 | Add-Type -AssemblyName PresentationCore 53 | 54 | Chdir (Split-Path $script:MyInvocation.MyCommand.Path) 55 | 56 | function WindowFromXAML( $xaml, $prefix ) 57 | { 58 | # Create a WPF window from XAML from DevStudio 59 | $xaml = $xaml -replace 'mc:Ignorable="d"', '' 60 | $xaml = $xaml -replace "x:N", 'N' 61 | $xaml = $xaml -replace '^ 142 | 143 |