├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── dist └── spart.spec ├── spart.1.gz ├── spart.c ├── spart.h ├── spart_data.h ├── spart_output.h └── spart_string.h /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | 2 | Please note that spart is a user-oriented partition info command for slurm. 3 | 4 | Slurm has a lot of tools, but they are mainly administrator-friendly; Slurm does not have a command that shows partition info in a user-friendly way. So, please do not modify the code to target administration purposes. 5 | 6 | Also, please **Keep It Simple!** 7 | 8 | Thanks. 9 | 10 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., 5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The licenses for most software are designed to take away your 12 | freedom to share and change it. By contrast, the GNU General Public 13 | License is intended to guarantee your freedom to share and change free 14 | software--to make sure the software is free for all its users. This 15 | General Public License applies to most of the Free Software 16 | Foundation's software and to any other program whose authors commit to 17 | using it. (Some other Free Software Foundation software is covered by 18 | the GNU Lesser General Public License instead.) You can apply it to 19 | your programs, too. 20 | 21 | When we speak of free software, we are referring to freedom, not 22 | price.
Our General Public Licenses are designed to make sure that you 23 | have the freedom to distribute copies of free software (and charge for 24 | this service if you wish), that you receive source code or can get it 25 | if you want it, that you can change the software or use pieces of it 26 | in new free programs; and that you know you can do these things. 27 | 28 | To protect your rights, we need to make restrictions that forbid 29 | anyone to deny you these rights or to ask you to surrender the rights. 30 | These restrictions translate to certain responsibilities for you if you 31 | distribute copies of the software, or if you modify it. 32 | 33 | For example, if you distribute copies of such a program, whether 34 | gratis or for a fee, you must give the recipients all the rights that 35 | you have. You must make sure that they, too, receive or can get the 36 | source code. And you must show them these terms so they know their 37 | rights. 38 | 39 | We protect your rights with two steps: (1) copyright the software, and 40 | (2) offer you this license which gives you legal permission to copy, 41 | distribute and/or modify the software. 42 | 43 | Also, for each author's protection and ours, we want to make certain 44 | that everyone understands that there is no warranty for this free 45 | software. If the software is modified by someone else and passed on, we 46 | want its recipients to know that what they have is not the original, so 47 | that any problems introduced by others will not reflect on the original 48 | authors' reputations. 49 | 50 | Finally, any free program is threatened constantly by software 51 | patents. We wish to avoid the danger that redistributors of a free 52 | program will individually obtain patent licenses, in effect making the 53 | program proprietary. To prevent this, we have made it clear that any 54 | patent must be licensed for everyone's free use or not licensed at all. 
55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | GNU GENERAL PUBLIC LICENSE 60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 61 | 62 | 0. This License applies to any program or other work which contains 63 | a notice placed by the copyright holder saying it may be distributed 64 | under the terms of this General Public License. The "Program", below, 65 | refers to any such program or work, and a "work based on the Program" 66 | means either the Program or any derivative work under copyright law: 67 | that is to say, a work containing the Program or a portion of it, 68 | either verbatim or with modifications and/or translated into another 69 | language. (Hereinafter, translation is included without limitation in 70 | the term "modification".) Each licensee is addressed as "you". 71 | 72 | Activities other than copying, distribution and modification are not 73 | covered by this License; they are outside its scope. The act of 74 | running the Program is not restricted, and the output from the Program 75 | is covered only if its contents constitute a work based on the 76 | Program (independent of having been made by running the Program). 77 | Whether that is true depends on what the Program does. 78 | 79 | 1. You may copy and distribute verbatim copies of the Program's 80 | source code as you receive it, in any medium, provided that you 81 | conspicuously and appropriately publish on each copy an appropriate 82 | copyright notice and disclaimer of warranty; keep intact all the 83 | notices that refer to this License and to the absence of any warranty; 84 | and give any other recipients of the Program a copy of this License 85 | along with the Program. 86 | 87 | You may charge a fee for the physical act of transferring a copy, and 88 | you may at your option offer warranty protection in exchange for a fee. 89 | 90 | 2. 
You may modify your copy or copies of the Program or any portion 91 | of it, thus forming a work based on the Program, and copy and 92 | distribute such modifications or work under the terms of Section 1 93 | above, provided that you also meet all of these conditions: 94 | 95 | a) You must cause the modified files to carry prominent notices 96 | stating that you changed the files and the date of any change. 97 | 98 | b) You must cause any work that you distribute or publish, that in 99 | whole or in part contains or is derived from the Program or any 100 | part thereof, to be licensed as a whole at no charge to all third 101 | parties under the terms of this License. 102 | 103 | c) If the modified program normally reads commands interactively 104 | when run, you must cause it, when started running for such 105 | interactive use in the most ordinary way, to print or display an 106 | announcement including an appropriate copyright notice and a 107 | notice that there is no warranty (or else, saying that you provide 108 | a warranty) and that users may redistribute the program under 109 | these conditions, and telling the user how to view a copy of this 110 | License. (Exception: if the Program itself is interactive but 111 | does not normally print such an announcement, your work based on 112 | the Program is not required to print an announcement.) 113 | 114 | These requirements apply to the modified work as a whole. If 115 | identifiable sections of that work are not derived from the Program, 116 | and can be reasonably considered independent and separate works in 117 | themselves, then this License, and its terms, do not apply to those 118 | sections when you distribute them as separate works. 
But when you 119 | distribute the same sections as part of a whole which is a work based 120 | on the Program, the distribution of the whole must be on the terms of 121 | this License, whose permissions for other licensees extend to the 122 | entire whole, and thus to each and every part regardless of who wrote it. 123 | 124 | Thus, it is not the intent of this section to claim rights or contest 125 | your rights to work written entirely by you; rather, the intent is to 126 | exercise the right to control the distribution of derivative or 127 | collective works based on the Program. 128 | 129 | In addition, mere aggregation of another work not based on the Program 130 | with the Program (or with a work based on the Program) on a volume of 131 | a storage or distribution medium does not bring the other work under 132 | the scope of this License. 133 | 134 | 3. You may copy and distribute the Program (or a work based on it, 135 | under Section 2) in object code or executable form under the terms of 136 | Sections 1 and 2 above provided that you also do one of the following: 137 | 138 | a) Accompany it with the complete corresponding machine-readable 139 | source code, which must be distributed under the terms of Sections 140 | 1 and 2 above on a medium customarily used for software interchange; or, 141 | 142 | b) Accompany it with a written offer, valid for at least three 143 | years, to give any third party, for a charge no more than your 144 | cost of physically performing source distribution, a complete 145 | machine-readable copy of the corresponding source code, to be 146 | distributed under the terms of Sections 1 and 2 above on a medium 147 | customarily used for software interchange; or, 148 | 149 | c) Accompany it with the information you received as to the offer 150 | to distribute corresponding source code. 
(This alternative is 151 | allowed only for noncommercial distribution and only if you 152 | received the program in object code or executable form with such 153 | an offer, in accord with Subsection b above.) 154 | 155 | The source code for a work means the preferred form of the work for 156 | making modifications to it. For an executable work, complete source 157 | code means all the source code for all modules it contains, plus any 158 | associated interface definition files, plus the scripts used to 159 | control compilation and installation of the executable. However, as a 160 | special exception, the source code distributed need not include 161 | anything that is normally distributed (in either source or binary 162 | form) with the major components (compiler, kernel, and so on) of the 163 | operating system on which the executable runs, unless that component 164 | itself accompanies the executable. 165 | 166 | If distribution of executable or object code is made by offering 167 | access to copy from a designated place, then offering equivalent 168 | access to copy the source code from the same place counts as 169 | distribution of the source code, even though third parties are not 170 | compelled to copy the source along with the object code. 171 | 172 | 4. You may not copy, modify, sublicense, or distribute the Program 173 | except as expressly provided under this License. Any attempt 174 | otherwise to copy, modify, sublicense or distribute the Program is 175 | void, and will automatically terminate your rights under this License. 176 | However, parties who have received copies, or rights, from you under 177 | this License will not have their licenses terminated so long as such 178 | parties remain in full compliance. 179 | 180 | 5. You are not required to accept this License, since you have not 181 | signed it. However, nothing else grants you permission to modify or 182 | distribute the Program or its derivative works. 
These actions are 183 | prohibited by law if you do not accept this License. Therefore, by 184 | modifying or distributing the Program (or any work based on the 185 | Program), you indicate your acceptance of this License to do so, and 186 | all its terms and conditions for copying, distributing or modifying 187 | the Program or works based on it. 188 | 189 | 6. Each time you redistribute the Program (or any work based on the 190 | Program), the recipient automatically receives a license from the 191 | original licensor to copy, distribute or modify the Program subject to 192 | these terms and conditions. You may not impose any further 193 | restrictions on the recipients' exercise of the rights granted herein. 194 | You are not responsible for enforcing compliance by third parties to 195 | this License. 196 | 197 | 7. If, as a consequence of a court judgment or allegation of patent 198 | infringement or for any other reason (not limited to patent issues), 199 | conditions are imposed on you (whether by court order, agreement or 200 | otherwise) that contradict the conditions of this License, they do not 201 | excuse you from the conditions of this License. If you cannot 202 | distribute so as to satisfy simultaneously your obligations under this 203 | License and any other pertinent obligations, then as a consequence you 204 | may not distribute the Program at all. For example, if a patent 205 | license would not permit royalty-free redistribution of the Program by 206 | all those who receive copies directly or indirectly through you, then 207 | the only way you could satisfy both it and this License would be to 208 | refrain entirely from distribution of the Program. 209 | 210 | If any portion of this section is held invalid or unenforceable under 211 | any particular circumstance, the balance of the section is intended to 212 | apply and the section as a whole is intended to apply in other 213 | circumstances. 
214 | 215 | It is not the purpose of this section to induce you to infringe any 216 | patents or other property right claims or to contest validity of any 217 | such claims; this section has the sole purpose of protecting the 218 | integrity of the free software distribution system, which is 219 | implemented by public license practices. Many people have made 220 | generous contributions to the wide range of software distributed 221 | through that system in reliance on consistent application of that 222 | system; it is up to the author/donor to decide if he or she is willing 223 | to distribute software through any other system and a licensee cannot 224 | impose that choice. 225 | 226 | This section is intended to make thoroughly clear what is believed to 227 | be a consequence of the rest of this License. 228 | 229 | 8. If the distribution and/or use of the Program is restricted in 230 | certain countries either by patents or by copyrighted interfaces, the 231 | original copyright holder who places the Program under this License 232 | may add an explicit geographical distribution limitation excluding 233 | those countries, so that distribution is permitted only in or among 234 | countries not thus excluded. In such case, this License incorporates 235 | the limitation as if written in the body of this License. 236 | 237 | 9. The Free Software Foundation may publish revised and/or new versions 238 | of the General Public License from time to time. Such new versions will 239 | be similar in spirit to the present version, but may differ in detail to 240 | address new problems or concerns. 241 | 242 | Each version is given a distinguishing version number. If the Program 243 | specifies a version number of this License which applies to it and "any 244 | later version", you have the option of following the terms and conditions 245 | either of that version or of any later version published by the Free 246 | Software Foundation. 
If the Program does not specify a version number of 247 | this License, you may choose any version ever published by the Free Software 248 | Foundation. 249 | 250 | 10. If you wish to incorporate parts of the Program into other free 251 | programs whose distribution conditions are different, write to the author 252 | to ask for permission. For software which is copyrighted by the Free 253 | Software Foundation, write to the Free Software Foundation; we sometimes 254 | make exceptions for this. Our decision will be guided by the two goals 255 | of preserving the free status of all derivatives of our free software and 256 | of promoting the sharing and reuse of software generally. 257 | 258 | NO WARRANTY 259 | 260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 268 | REPAIR OR CORRECTION. 269 | 270 | 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 278 | POSSIBILITY OF SUCH DAMAGES. 279 | 280 | END OF TERMS AND CONDITIONS 281 | 282 | How to Apply These Terms to Your New Programs 283 | 284 | If you develop a new program, and you want it to be of the greatest 285 | possible use to the public, the best way to achieve this is to make it 286 | free software which everyone can redistribute and change under these terms. 287 | 288 | To do so, attach the following notices to the program. It is safest 289 | to attach them to the start of each source file to most effectively 290 | convey the exclusion of warranty; and each file should have at least 291 | the "copyright" line and a pointer to where the full notice is found. 292 | 293 | <one line to give the program's name and a brief idea of what it does.> 294 | Copyright (C) <year> <name of author> 295 | 296 | This program is free software; you can redistribute it and/or modify 297 | it under the terms of the GNU General Public License as published by 298 | the Free Software Foundation; either version 2 of the License, or 299 | (at your option) any later version. 300 | 301 | This program is distributed in the hope that it will be useful, 302 | but WITHOUT ANY WARRANTY; without even the implied warranty of 303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 304 | GNU General Public License for more details.
305 | 306 | You should have received a copy of the GNU General Public License along 307 | with this program; if not, write to the Free Software Foundation, Inc., 308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 309 | 310 | Also add information on how to contact you by electronic and paper mail. 311 | 312 | If the program is interactive, make it output a short notice like this 313 | when it starts in an interactive mode: 314 | 315 | Gnomovision version 69, Copyright (C) year name of author 316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 317 | This is free software, and you are welcome to redistribute it 318 | under certain conditions; type `show c' for details. 319 | 320 | The hypothetical commands `show w' and `show c' should show the appropriate 321 | parts of the General Public License. Of course, the commands you use may 322 | be called something other than `show w' and `show c'; they could even be 323 | mouse-clicks or menu items--whatever suits your program. 324 | 325 | You should also get your employer (if you work as a programmer) or your 326 | school, if any, to sign a "copyright disclaimer" for the program, if 327 | necessary. Here is a sample; alter the names: 328 | 329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 330 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 331 | 332 | <signature of Ty Coon>, 1 April 1989 333 | Ty Coon, President of Vice 334 | 335 | This General Public License does not permit incorporating your program into 336 | proprietary programs. If your program is a subroutine library, you may 337 | consider it more useful to permit linking proprietary applications with the 338 | library. If this is what you want to do, use the GNU Lesser General 339 | Public License instead of this License.
340 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Due to GitHub's new two-factor authentication (2FA) policy, I have not been able to log in to my GitHub account since September 28th, 2023. For this reason, I cannot update the spart code or reply to the issues. Sorry. 2 | 3 | __ 4 | 5 | # spart 6 | A user-oriented partition info command for slurm. It gives a brief view of the cluster. 7 | 8 | Slurm does not have a command showing partition info in a user-friendly way. 9 | I wrote one, and I hope you will find it useful. 10 | 11 | ## Table of contents 12 | 13 | * [Usage](#usage) 14 | * [The Cluster and Partition Statements](#the-cluster-and-partition-statements) 15 | * [Requirements](#requirements) 16 | * [Compiling](#compiling) 17 | * [Reporting Bugs](#reporting-bugs) 18 | 19 | 20 | ## Usage 21 | 22 | **Usage: spart [-m] [-a] [-c] [-g] [-i] [-t] [-f] [-s] [-J] [-p PARTITION_LIST] [-l] [-v] [-h]** 23 | 24 | This program shows **the user-specific partition info** with core counts of available nodes and pending jobs. It hides information that is unnecessary for users, i.e. unusable partitions, undefined limits, unusable nodes, etc., but it shows related and useful information such as how many pending jobs are waiting for resources or for other reasons.
25 | 26 | The output of spart without any parameters is as below: 27 | 28 | ``` 29 | $ spart 30 | QUEUE STA FREE TOTAL RESORC OTHER FREE TOTAL |YOUR PEND PEND YOUR | MIN MAX MAXIMUM CORES NODE 31 | PARTITION TUS CORES CORES PENDNG PENDNG NODES NODES | RUN RES OTHR TOTL | NODES NODES JOB-TIME /NODE MEM-GB 32 | defq * 84 2436 140 28 3 87 | 3 5 0 8 | 1 - 7 days 28 126 33 | shortq 84 2604 0 0 3 93 | 0 0 0 0 | 1 2 1 hour 28 126 34 | longq 120 336 0 120 5 14 | 1 0 0 1 | 1 - 21 days 24 62 35 | gpuq 0 112 0 0 0 4 | 1 0 0 1 | 1 - 7 days 28 126 36 | bigmemq C 56 280 0 0 2 10 | 0 0 0 0 | 1 - 7 days 28 510 37 | v100q 0 40 0 0 0 1 | 0 0 0 0 | 1 1 1 days 40 375 38 | b224q 84 2548 0 840 3 91 | 3 0 3 6 | 8 40 7 days 28 126 39 | core40q g 0 1400 560 400 0 35 | 0 0 0 0 | 1 - 7 days 40 190 40 | 41 | ``` 42 | The spart output varies according to the cluster configuration to help the user. Below you can see a very different spart output for a different cluster; notice the added columns. Without the spart command, it is very difficult to see the configuration details of a slurm cluster: 43 | 44 | ``` 45 | $ spart 46 | WARNING: The Slurm settings have info restrictions! 47 | the spart can not show other users' waiting jobs info!
48 | 49 | QUEUE STA FREE TOTAL RESORC OTHER FREE TOTAL || MAX DEFMEM MAXMEM MAXIMUM CORES NODE QOS 50 | PARTITION TUS CORES CORES PENDNG PENDNG NODES NODES || NODES GB/CPU G/NODE JOB-TIME /NODE MEM-GB NAME 51 | defaultq * 295 2880 0 0 0 120 || - 4 124 15 days 24 128 - 52 | single 110 144 0 0 3 6 || - 9 252 15 days 24 256 - 53 | smp 184 224 0 0 0 1 || - 17 4121 8 days 224 4128 - 54 | short 736 9172 0 0 0 278 || - 8 252 4 hour 24 256 - 55 | mid 736 9172 0 0 0 278 || - 8 252 8 days 24 256 - 56 | long 736 9172 0 0 0 278 || - 8 252 15 days 24 256 - 57 | debug 1633 14532 0 0 8 461 || 4 8 252 15 mins 24 128 debug 58 | 59 | YOUR PEND PEND YOUR MIN DEFAULT 60 | RUN RES OTHR TOTL NODES JOB-TIME 61 | COMMON VALUES: 0 0 0 0 1 2 mins 62 | 63 | ``` 64 | 65 | 66 | In the **STA-TUS** column, the characters mean the partition is: 67 | ``` 68 | * default partition (default queue), 69 | . hidden partition, 70 | C closed to both job submit and run, 71 | S closed to job submit, but the submitted jobs will run, 72 | r requires a reservation, 73 | D open to job submit, but the submitted jobs will not run, 74 | R open only for root, or closed to root (if you are root), 75 | A closed to all of your account(s), 76 | a closed to some of your accounts, 77 | G closed to all of your group(s), 78 | g closed to some of your groups, 79 | Q closed to all of your QOS(s), 80 | q closed to some of your QOSs. 81 | ``` 82 | 83 | The **RESOURCE PENDING** column shows core counts of jobs that are pending because the resources are busy. 84 | 85 | The **OTHER PENDING** column shows core counts of jobs that are pending for other reasons, such 86 | as licenses or other limits. 87 | 88 | The **YOUR-RUN, PEND-RES, PEND-OTHR**, and **YOUR-TOTL** columns show the running, 89 | resource-pending, other-pending, and total job counts of the current user, respectively.
90 | If these four columns have the same values, these values will be 91 | shown under COMMON VALUES as four single values. 92 | 93 | The **MIN NODE** and **MAX NODE** columns show the permitted minimum and maximum node counts of the 94 | jobs which can be submitted to the partition. 95 | 96 | The **MAXCPU/NODE** column shows the permitted maximum core count of a single node in 97 | the partition. 98 | 99 | 100 | The **DEFMEM GB/CPU** and **DEFMEM GB/NODE** columns show the default memory, in GB, which a job 101 | can use per cpu or per node, respectively. 102 | 103 | The **MAXMEM GB/CPU** and **MAXMEM GB/NODE** columns show the maximum memory, in GB, which a job 104 | can request per cpu or per node, respectively. 105 | 106 | The **DEFAULT JOB-TIME** column shows the default time limit of a job which is submitted to the 107 | partition without a time limit. If the DEFAULT JOB-TIME limits are not set, or are set to the 108 | same value as MAXIMUM JOB-TIME, for all partitions in your cluster, the DEFAULT JOB-TIME 109 | column will not be shown unless the -l parameter is given. 110 | 111 | The **MAXIMUM JOB-TIME** column shows the maximum time limit of a job which is submitted to the 112 | partition. If the user gives a time limit longer than the MAXIMUM JOB-TIME limit of the 113 | partition, the job will be rejected by slurm. 114 | 115 | 116 | The **CORES /NODE** column shows the core count of the node with the lowest core count in the 117 | partition. But if -l is given, both the lowest and highest core counts will be shown. 118 | 119 | The **NODE MEM-GB** column shows the memory of the lowest-memory node in this partition. But if 120 | the -l parameter is given, both the lowest and highest memory will be shown. 121 | 122 | The **QOS NAME** column shows the default QOS which limits the jobs submitted to the partition.
123 | If the QOS NAME is not set for any partition in your cluster, the QOS NAME 124 | column will not be shown unless the -l parameter is given. 125 | 126 | The **GRES (COUNT)** column shows the generic resources of the nodes in the partition, and (in 127 | parentheses) the total number of nodes in that partition containing that GRES. The GRES (COUNT) 128 | column will not be shown unless the -l or -g parameter is given. 129 | 130 | If the partition's **QOS NAME, MIN NODES, MAX NODES, MAXCPU/NODE, DEFMEM GB/CPU|NODE, 131 | MAXMEM GB/CPU|NODE, DEFAULT JOB-TIME**, and **MAXIMUM JOB-TIME** limits are not set for 132 | any partition in your cluster, the corresponding column(s) will not be shown unless the -l 133 | parameter is given. 134 | 135 | If all values of a column are the same, that column will not be shown in the partitions block. 136 | Its common value will be shown under **COMMON VALUES** as a single value. 137 | 138 | Parameters: 139 | 140 | **-m** both the lowest and highest values will be shown in the **CORES /NODE** 141 | and **NODE MEM-GB** columns. 142 | 143 | **-a** hidden partitions are also shown. 144 | 145 | **-c** partitions from federated clusters are shown. 146 | 147 | **-g** the output shows each GRES (gpu, mic etc.) defined in that partition 148 | and (in parentheses) the total number of nodes in that partition 149 | containing that GRES. 150 | 151 | **-i** the info about the groups, accounts, QOSs, and queues will be shown. 152 | 153 | **-t** the time info will be shown in DAY-HR:MN format, instead of verbal format. 154 | 155 | **-s** the simple output; spart does not show the slurm config columns. 156 | 157 | **-J** the output does not show the info about the user's jobs. 158 | 159 | **-f** the output shows each FEATURE defined in that partition and (in parentheses) 160 | the total number of nodes in that partition containing that FEATURE.
161 | 162 | **-p PARTITION_LIST** 163 | the output shows only the partitions given in the comma-separated PARTITION_LIST. 164 | 165 | **-l** all possible columns will be shown, except the federated clusters column. 166 | 167 | **-v** shows info about STATUS LABELS. 168 | 169 | **-h** shows this usage text. 170 | 171 | If you compare the output above with the -l output below, you can see that unusable and hidden partitions 172 | are not shown without the -l parameter: 173 | ``` 174 | $ spart -l 175 | QUEUE STA FREE TOTAL RESORC OTHER FREE TOTAL || MIN MAX DEFAULT MAXIMUM CORES NODE GRES 176 | PARTITION TUS CORES CORES PENDNG PENDNG NODES NODES || NODES NODES JOB-TIME JOB-TIME /NODE MEM-GB (NODE-COUNT) 177 | defq * 0 2436 532 0 0 87 || 1 - 7 days 7 days 28 126-510 - 178 | shortq 0 2604 0 0 0 93 || 1 2 1 hour 1 hour 28 126-510 gpu:k20m:1(4) 179 | longq 72 336 0 0 3 14 || 1 - 21 days 21 days 24 62 - 180 | gpuq 0 112 0 0 0 4 || 1 - 7 days 7 days 28 126-510 gpu:k20m:1(4) 181 | bigmemq 0 280 0 0 0 10 || 1 - 7 days 7 days 28 510 gpu:k20m:1(1) 182 | v100q 0 40 0 0 0 1 || 1 1 1 days 1 days 40 375 gpu:v100:4(1) 183 | yzmq A 0 40 40 0 0 1 || 1 1 7 days 7 days 40 375 gpu:v100:4(1) 184 | b224q 0 2548 364 0 0 91 || 8 40 7 days 7 days 28 126-510 gpu:k20m:1(2) 185 | hbm513q G 0 2240 0 0 0 80 || 1 10 30 mins 30 mins 28 126 - 186 | core40q C 0 1400 0 0 0 35 || 1 - 7 days 7 days 40 190 - 187 | coronaq g 0 1400 0 1400 0 35 || 1 - 7 days 7 days 40 190 - 188 | all . 
72 4380 0 0 3 143 || 1 - 1 days - 24-40 62-510 gpu:k20m:1(4),gpu:v100:4(1) 189 | 190 | YOUR PEND PEND YOUR MAXCPU DEFMEM MAXMEM QOS 191 | RUN RES OTHR TOTL /NODE G/NODE G/NODE NAME 192 | COMMON VALUES: 0 0 0 0 - - - - 193 | 194 | ``` 195 | 196 | The output of spart with the -i parameter: 197 | ``` 198 | $ spart -i 199 | Your username: marfea 200 | Your group(s): eoss1 d0001 201 | Your account(s): d0001 eoss1 202 | Your qos(s): normal 203 | 204 | QUEUE STA FREE TOTAL RESORC OTHER FREE TOTAL || MIN MAX MAXIMUM CORES NODE 205 | PARTITION TUS CORES CORES PENDNG PENDNG NODES NODES || NODES NODES JOB-TIME /NODE MEM-GB 206 | defq * 0 2436 532 0 0 87 || 1 - 7 days 28 126 207 | shortq 0 2604 0 0 0 93 || 1 2 1 hour 28 126 208 | longq 72 336 0 0 3 14 || 1 - 21 days 24 62 209 | gpuq 0 112 0 0 0 4 || 1 - 7 days 28 126 210 | bigmemq 0 280 0 0 0 10 || 1 - 7 days 28 510 211 | v100q 0 40 0 0 0 1 || 1 1 1 days 40 375 212 | b224q 0 2548 364 0 0 91 || 8 40 7 days 28 126 213 | core40q C 0 1400 0 0 0 35 || 1 - 7 days 40 190 214 | coronaq g 0 1400 0 1400 0 35 || 1 - 7 days 40 190 215 | 216 | YOUR PEND PEND YOUR 217 | RUN RES OTHR TOTL 218 | COMMON VALUES: 0 0 0 0 219 | 220 | ``` 221 | 222 | ## The Cluster and Partition Statements 223 | 224 | When the STATEMENT feature is on, spart looks for the statement files. To enable the STATEMENT 225 | feature, the **SPART_SHOW_STATEMENT** macro must be defined, which can be achieved by 226 | uncommenting line 24 in the spart.h file. If spart finds any statement file, it shows the content 227 | of the file. If spart can not find a particular statement file, it simply ignores it without 228 | notification. Therefore, there is no need to recompile to re-show or cancel statements. 229 | You can edit or remove statement files after compiling. If you want to show them again, just write 230 | new file(s). 231 | 232 | spart can also show the statements using different font styles and background colors.
The 233 | style of the cluster's statement and the partitions' statements can be set differently. 234 | 235 | For example, the output below shows that a cluster statement file exists: 236 | ``` 237 | $ spart 238 | QUEUE STA FREE TOTAL RESORC OTHER FREE TOTAL MIN MAX MAXJOBTIME CORES NODE 239 | PARTITION TUS CORES CORES PENDNG PENDNG NODES NODES NODES NODES DAY-HR:MN /NODE MEM-GB 240 | defq * 280 2436 400 0 10 87 1 - 7-00:00 28 126 241 | shortq 420 2604 0 0 15 93 1 2 0-01:00 28 126 242 | longq 336 336 0 0 14 14 1 - 21-00:00 24 62 243 | gpuq 84 112 0 0 3 4 1 - 7-00:00 28 126 244 | bigmemq 140 280 0 0 5 10 1 - 7-00:00 28 510 245 | v100q 40 40 0 0 1 1 1 1 1-00:00 40 375 246 | b224q 364 2548 0 280 13 91 8 40 1-00:00 28 126 247 | core40q 400 1400 0 0 10 35 1 - 7-00:00 40 190 248 | 249 | ============================================================================================ 250 | Dear Colleagues, 251 | 252 | Sariyer cluster will be stopped next week, Tuesday, February 14, for scheduled maintenance 253 | operations. The stop will start at 9:00 a.m. and we expect to bring the cluster back to 254 | production in the late afternoon. During the course of the maintenance operations, the login 255 | nodes will not be accessible.
256 | 
257 | Best Regards,
258 | User Support, UHeM
259 | ============================================================================================
260 | ```
261 | 
262 | When the -i parameter is given, spart also shows the contents of the partition statement files:
263 | ```
264 | $ spart -i
265 | Your username: mercan
266 | Your group(s): hsaat gaussian workshop ansys
267 | Your account(s): hsaat
268 | Your qos(s): normal
269 | 
270 | QUEUE STA FREE TOTAL FREE TOTAL RESORC OTHER MIN MAX MAXJOBTIME CORES NODE
271 | PARTITION TUS CORES CORES NODES NODES PENDNG PENDNG NODES NODES DAY-HR:MN /NODE MEM-GB
272 | 
273 | ============================================================================================
274 | 
275 | defq * 308 2436 11 87 0 0 1 - 7-00:00 28 126
276 | 
277 | This is the default queue, i.e., if you don't give a partition name,
278 | defq is used as the partition name for your job.
279 | --------------------------------------------------------------------------------------------
280 | 
281 | shortq 392 2604 14 93 0 0 1 2 0-01:00 28 126
282 | --------------------------------------------------------------------------------------------
283 | longq 96 336 4 14 0 0 1 - 21-00:00 24 62
284 | 
285 | The nodes in the longq queue are older than those in other queues. Therefore,
286 | it's better not to use this queue, unless you really need the long time limit.
287 | --------------------------------------------------------------------------------------------
288 | 
289 | gpuq 56 112 2 4 0 0 1 - 7-00:00 28 126
290 | 
291 | Each of the nodes in the gpuq queue contains one very old Nvidia K20m GPU.
292 | --------------------------------------------------------------------------------------------
293 | 
294 | bigmemq 56 280 2 10 0 0 1 - 7-00:00 28 510
295 | --------------------------------------------------------------------------------------------
296 | v100q 40 40 1 1 0 0 1 1 1-00:00 40 375
297 | --------------------------------------------------------------------------------------------
298 | b224q 336 2548 12 91 0 0 8 40 1-00:00 28 126
299 | --------------------------------------------------------------------------------------------
300 | core40q 160 1400 4 35 0 0 1 - 7-00:00 40 190
301 | --------------------------------------------------------------------------------------------
302 | 
303 | ============================================================================================
304 | Dear Colleagues,
305 | 
306 | Sariyer cluster will be stopped next week, Tuesday, February 14, for scheduled maintenance
307 | operations. The stop will start at 9:00 a.m. and we expect to bring the cluster back to
308 | production in the late afternoon. During the course of the maintenance operations, the login
309 | nodes will not be accessible.
310 | 
311 | Best Regards,
312 | User Support, UHeM
313 | ============================================================================================
314 | ```
315 | 
316 | ## Requirements
317 | 
318 | spart requires a running Slurm installation that was compiled with MySQL/MariaDB support.
319 | 
320 | Some features of spart require Slurm 18.08 or newer. With older versions, spart works with a
321 | reduced feature set, i.e., without the federated clusters column and the user-specific output.
322 | 
323 | Also, Slurm 20.02.0 and 20.02.1 have a bug in connecting to the Slurm database via the
324 | Slurm API. Because of that, spart works with a reduced feature set on these versions, too.
325 | 
326 | spart also requires Slurm to be configured to give permission to read other users' job info,
327 | node info, and other information.
328 | 
329 | The Slurm API does not work correctly on a "configless" Slurm node: it looks for slurm.conf and fails. For this reason, spart cannot run on a node which does not contain the Slurm conf files. But even in a configless Slurm cluster, spart works correctly on any node that does contain the Slurm conf files (for example, the slurmctld node).
330 | 
331 | ## Compiling
332 | 
333 | Compiling is very simple.
334 | 
335 | If Slurm is installed at the default location, you can compile the spart command as below:
336 | 
337 | ```gcc -lslurm spart.c -o spart```
338 | 
339 | Don't add optimization flags (-O2 etc.).
340 | 
341 | Before Slurm 19.05, you should also compile with **-lslurmdb**:
342 | 
343 | ```gcc -lslurm -lslurmdb spart.c -o spart```
344 | 
345 | If Slurm is not installed at the default location, you should add the locations of the headers and libraries:
346 | 
347 | ```gcc -lslurm spart.c -o spart -I/location/of/slurm/header/files/ -L/location/of/slurm/library/files/```
348 | 
349 | After compiling, you can copy the spart binary to the default Slurm executable directory, which is /usr/bin. Alternatively, you can copy the spart file to any directory and set the PATH environment variable accordingly. The default Slurm man directory is /usr/share/man/man1/. You can copy the man file (spart.1.gz) to this directory, or you can set the MANPATH variable. Don't forget to set read permissions on the spart and spart.1.gz files for all users.
350 | 
351 | Also, there is no need to have administrative rights (being root) to compile and use spart. If you want to use
352 | the spart command as a regular user, you can compile and use it in your home directory.
353 | 
354 | Alternatively, if you prefer to build spart as an rpm package, you can use the ```rpmbuild``` command:
355 | 
356 | ```rpmbuild -ta spart-1.4.3.tar.gz```
357 | 
358 | 
359 | ## Reporting Bugs
360 | 
361 | If you notice a bug in spart, please report it using the issues page of the GitHub site. Thanks.
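As a quick recap of the compiling and installing steps above, here is a minimal sketch of a non-root build installed into a home directory; the `$HOME/bin` and `$HOME/man` destinations are illustrative assumptions, not required paths:

```shell
# Build spart against the system Slurm (assumes headers/libs at default paths;
# add -lslurmdb when building against Slurm older than 19.05).
gcc -lslurm spart.c -o spart

# Install into the home directory; no root rights are needed.
mkdir -p "$HOME/bin" "$HOME/man/man1"
cp spart "$HOME/bin/"
cp spart.1.gz "$HOME/man/man1/"

# Make the command and its man page findable (add to ~/.bashrc to persist).
export PATH="$HOME/bin:$PATH"
export MANPATH="$HOME/man:${MANPATH:-}"
```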
362 | 363 | 364 | -------------------------------------------------------------------------------- /dist/spart.spec: -------------------------------------------------------------------------------- 1 | %global slurm_version %(rpm -q slurm-devel --qf "%{VERSION}" 2>/dev/null) 2 | 3 | # run "spectool -g -R spart.spec" to automatically download source files 4 | # spectool is part of the rpmdevtools package 5 | Name: spart 6 | Version: 1.5.1 7 | Release: %{slurm_version}.1%{?dist} 8 | Summary: A tool to display user-oriented Slurm partition information. 9 | 10 | Group: System Environment/Base 11 | License: GPLv2+ 12 | URL: https://github.com/mercanca/%{name} 13 | Source: https://github.com/mercanca/%{name}/archive/v%{version}/%{name}-%{version}.tar.gz 14 | 15 | BuildRequires: slurm-devel 16 | Requires: slurm 17 | 18 | %description 19 | %{name} is a tool to display user-oriented Slurm partition information. 20 | 21 | %prep 22 | %autosetup -n %{name}-%{version} 23 | 24 | %build 25 | gcc -lslurm %{name}.c -o %{name} 26 | 27 | %install 28 | mkdir -p %{buildroot}%{_bindir} 29 | mkdir -p %{buildroot}%{_mandir}/man1 30 | install -m 0755 ./%{name} %{buildroot}%{_bindir}/%{name} 31 | install -m 0644 ./%{name}.1.gz %{buildroot}%{_mandir}/man1/%{name}.1.gz 32 | 33 | %files 34 | %defattr(755,root,root) 35 | %{_bindir}/%{name} 36 | %{_mandir}/man1/%{name}.1.gz 37 | 38 | 39 | %changelog 40 | * Thu Dec 03 2020 Kilian Cavalotti 41 | - add Slurm version in package relase 42 | - fix man paths 43 | - fix slurm-devel build dependency 44 | * Thu May 02 2019 Kilian Cavalotti 45 | - initial package 46 | -------------------------------------------------------------------------------- /spart.1.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mercanca/spart/724554c194657d247f3eeca6a294bffe0b1f88c6/spart.1.gz -------------------------------------------------------------------------------- /spart.c: 
--------------------------------------------------------------------------------
1 | /******************************************************************
2 | * spart : a user-oriented partition info command for slurm
3 | * Author : Cem Ahmet Mercan, 2019-02-16
4 | * Licence : GNU General Public License v2.0
5 | * Note : Some part of this code taken from slurm api man pages
6 | *******************************************************************/
7 | 
8 | #include <stdio.h>
9 | #include <stdlib.h>
10 | #include <string.h>
11 | #include <errno.h>
12 | #include <unistd.h>
13 | #include <pwd.h>
14 | #include <grp.h>
15 | #include <slurm/slurm.h>
16 | #include <slurm/slurmdb.h>
17 | #include "spart.h"
18 | #include "spart_string.h"
19 | #include "spart_data.h"
20 | #include "spart_output.h"
21 | 
22 | /* ========== MAIN ========== */
23 | int main(int argc, char *argv[]) {
24 | uint32_t i, j;
25 | int k, m, n;
26 | int total_width = 0;
27 | 
28 | /* Slurm defined types */
29 | #if SLURM_VERSION_NUMBER >= SLURM_VERSION_NUM(20, 11, 0)
30 | slurm_conf_t *conf_info_msg_ptr = NULL;
31 | #else
32 | slurm_ctl_conf_t *conf_info_msg_ptr = NULL;
33 | #endif
34 | partition_info_msg_t *part_buffer_ptr = NULL;
35 | partition_info_t *part_ptr = NULL;
36 | node_info_msg_t *node_buffer_ptr = NULL;
37 | job_info_msg_t *job_buffer_ptr = NULL;
38 | 
39 | #if SLURM_VERSION_NUMBER > SLURM_VERSION_NUM(18, 7, 0) && \
40 | SLURM_VERSION_NUMBER != SLURM_VERSION_NUM(20, 2, 0) && \
41 | SLURM_VERSION_NUMBER != SLURM_VERSION_NUM(20, 2, 1)
42 | void **db_conn = NULL;
43 | slurmdb_assoc_cond_t assoc_cond;
44 | List assoc_list = NULL;
45 | ListIterator itr = NULL;
46 | 
47 | List qosn_list = NULL;
48 | ListIterator itr_qosn = NULL;
49 | 
50 | slurmdb_assoc_rec_t *assoc;
51 | 
52 | List qos_list = NULL;
53 | ListIterator itr_qos = NULL;
54 | 
55 | char *qos = NULL;
56 | char *qos2 = NULL;
57 | #else
58 | char sh_str[SPART_INFO_STRING_SIZE];
59 | char *p_str = NULL;
60 | char *t_str = NULL;
61 | #endif
62 | char re_str[SPART_INFO_STRING_SIZE];
63 | FILE *fo;
64 | 
65 | int user_acct_count = 0;
66 | char
**user_acct = NULL; 67 | 68 | int user_qos_count = 0; 69 | char **user_qos = NULL; 70 | 71 | int user_group_count = SPART_MAX_GROUP_SIZE; 72 | char **user_group = NULL; 73 | 74 | uint32_t mem, cpus, min_mem, max_mem; 75 | uint32_t max_cpu, min_cpu, free_cpu, free_node; 76 | uint64_t max_mem_per_cpu = 0; 77 | uint64_t def_mem_per_cpu = 0; 78 | /* These values are default/unsetted values */ 79 | const uint32_t default_min_nodes = 0, default_max_nodes = UINT_MAX; 80 | const uint64_t default_max_mem_per_cpu = 0; 81 | const uint64_t default_def_mem_per_cpu = 0; 82 | const uint32_t default_max_cpus_per_node = UINT_MAX; 83 | const uint32_t default_mjt_time = INFINITE; 84 | char *default_qos = "normal"; 85 | 86 | uint16_t alloc_cpus = 0; 87 | char mem_result[SPART_INFO_STRING_SIZE]; 88 | char strtmp[SPART_INFO_STRING_SIZE]; 89 | char user_name[SPART_INFO_STRING_SIZE]; 90 | int user_id; 91 | char legends[SPART_INFO_STRING_SIZE]; 92 | #ifdef __slurmdb_cluster_rec_t_defined 93 | char cluster_name[SPART_INFO_STRING_SIZE]; 94 | #endif 95 | 96 | #ifdef SPART_COMPILE_FOR_UHEM 97 | char *reason; 98 | #endif 99 | 100 | uint32_t state; 101 | uint16_t tmp_lenght = 0; 102 | int show_max_mem = 0; 103 | int show_max_mem_per_cpu = 0; 104 | int show_max_cpus_per_node = 0; 105 | int show_def_mem_per_cpu = 0; 106 | uint16_t show_partition = 0; 107 | uint16_t show_partition_qos = 0; 108 | uint16_t show_min_nodes = 0; 109 | uint16_t show_max_nodes = 0; 110 | uint16_t show_mjt_time = 0; 111 | uint16_t show_djt_time = 0; 112 | int show_parameter_L = 0; 113 | int show_gres = 0; 114 | int show_features = 0; 115 | int show_all_partition = 0; 116 | int show_given_partition = 0; 117 | int show_as_date = 0; 118 | int show_cluster_name = 0; 119 | int show_min_mem_gb = 0; 120 | int show_min_core = 0; 121 | int show_my_running = 1; 122 | int show_my_waiting_resource = 1; 123 | int show_my_waiting_other = 1; 124 | int show_my_total = 1; 125 | int show_simple = 0; 126 | int show_verbose = 0; 127 | 128 | 
uint16_t partname_lenght = 0; 129 | #ifdef __slurmdb_cluster_rec_t_defined 130 | uint16_t clusname_lenght = 0; 131 | #endif 132 | 133 | char partition_str[SPART_INFO_STRING_SIZE]; 134 | char job_parts_str[SPART_INFO_STRING_SIZE]; 135 | char given_part_list[SPART_INFO_STRING_SIZE]; 136 | 137 | sp_part_info_t *spData = NULL; 138 | uint32_t partition_count = 0; 139 | 140 | uint16_t sp_gres_count = 0; 141 | sp_gres_info_t spgres[SPART_GRES_ARRAY_SIZE]; 142 | 143 | uint16_t sp_features_count = 0; 144 | sp_gres_info_t spfeatures[SPART_GRES_ARRAY_SIZE]; 145 | 146 | sp_headers_t spheaders; 147 | 148 | legends[0] = 0; 149 | 150 | int show_info = 0; 151 | /* Get username */ 152 | gid_t *groupIDs = NULL; 153 | struct passwd *pw; 154 | struct group *gr; 155 | 156 | #if SLURM_VERSION_NUMBER >= SLURM_VERSION_NUM(20, 11, 0) 157 | slurm_init(NULL); 158 | #endif 159 | 160 | pw = getpwuid(geteuid()); 161 | sp_strn2cpy(user_name, SPART_INFO_STRING_SIZE, pw->pw_name, 162 | SPART_INFO_STRING_SIZE); 163 | user_id = pw->pw_uid; 164 | 165 | groupIDs = malloc(user_group_count * sizeof(gid_t)); 166 | if (groupIDs == NULL) { 167 | slurm_perror("Can not allocate User group list"); 168 | exit(1); 169 | } 170 | 171 | if (getgrouplist(user_name, pw->pw_gid, groupIDs, &user_group_count) == -1) { 172 | slurm_perror("Can not read User group list"); 173 | exit(1); 174 | } 175 | 176 | user_group = malloc(user_group_count * sizeof(char *)); 177 | for (k = 0; k < user_group_count; k++) { 178 | gr = getgrgid(groupIDs[k]); 179 | if (gr != NULL) { 180 | user_group[k] = malloc(SPART_INFO_STRING_SIZE * sizeof(char)); 181 | sp_strn2cpy(user_group[k], SPART_INFO_STRING_SIZE, gr->gr_name, 182 | SPART_INFO_STRING_SIZE); 183 | } 184 | } 185 | 186 | free(groupIDs); 187 | 188 | /* Set default column visibility */ 189 | sp_headers_set_defaults(&spheaders); 190 | 191 | for (k = 1; k < argc; k++) { 192 | if (argv[k][0] == '-') { 193 | for (m = 1; m < strlen(argv[k]); m++) { 194 | switch (argv[k][m]) { 195 | case 'm': 
196 | show_max_mem = 1; 197 | spheaders.min_core.column_width = 8; 198 | spheaders.min_mem_gb.column_width = 10; 199 | break; 200 | case 'a': 201 | show_partition |= SHOW_ALL; 202 | show_all_partition = 1; 203 | break; 204 | case 'p': 205 | if ((k + 1) < argc) { 206 | sp_strn2cpy(given_part_list, SPART_INFO_STRING_SIZE, argv[k + 1], 207 | SPART_INFO_STRING_SIZE); 208 | sp_strn2cat(given_part_list, SPART_INFO_STRING_SIZE, ",", 2); 209 | m = INT_MAX; 210 | k++; 211 | show_partition |= SHOW_ALL; 212 | show_all_partition = 1; 213 | show_given_partition = 1; 214 | continue; 215 | } else { 216 | printf("\nParameter -p requires partition name(s)!\n"); 217 | sp_spart_usage(); 218 | printf("\nParameter -p requires partition name(s)!\n"); 219 | exit(1); 220 | } 221 | break; 222 | #ifdef __slurmdb_cluster_rec_t_defined 223 | case 'c': 224 | show_partition |= SHOW_FEDERATION; 225 | show_partition &= (~SHOW_LOCAL); 226 | spheaders.cluster_name.visible = 1; 227 | break; 228 | #endif 229 | case 'g': 230 | spheaders.gres.visible = 1; 231 | break; 232 | case 'f': 233 | spheaders.features.visible = 1; 234 | break; 235 | case 'i': 236 | show_info = 1; 237 | break; 238 | case 'v': 239 | show_verbose = 1; 240 | break; 241 | case 't': 242 | show_as_date = 1; 243 | break; 244 | case 's': 245 | show_gres = 0; 246 | show_min_nodes = 0; 247 | show_max_nodes = 0; 248 | show_max_cpus_per_node = 0; 249 | show_max_mem_per_cpu = 0; 250 | show_def_mem_per_cpu = 0; 251 | show_mjt_time = 0; 252 | show_djt_time = 0; 253 | show_partition_qos = 0; 254 | show_parameter_L = 0; 255 | show_simple = 1; 256 | spheaders.min_nodes.visible = 0; 257 | spheaders.max_nodes.visible = 0; 258 | spheaders.max_cpus_per_node.visible = 0; 259 | spheaders.max_mem_per_cpu.visible = 0; 260 | spheaders.def_mem_per_cpu.visible = 0; 261 | spheaders.djt_time.visible = 0; 262 | spheaders.mjt_time.visible = 0; 263 | spheaders.min_core.visible = 0; 264 | spheaders.min_mem_gb.visible = 0; 265 | spheaders.partition_qos.visible = 0; 
266 | spheaders.gres.visible = 0; 267 | spheaders.features.visible = 0; 268 | break; 269 | case 'J': 270 | show_my_running = 0; 271 | show_my_waiting_resource = 0; 272 | show_my_waiting_other = 0; 273 | show_my_total = 0; 274 | break; 275 | case 'l': 276 | sp_headers_set_parameter_L(&spheaders); 277 | show_max_mem = 1; 278 | #ifdef __slurmdb_cluster_rec_t_defined 279 | show_partition |= (SHOW_FEDERATION | SHOW_ALL); 280 | #else 281 | show_partition |= SHOW_ALL; 282 | #endif 283 | show_gres = 1; 284 | show_min_nodes = 1; 285 | show_max_nodes = 1; 286 | show_max_cpus_per_node = 1; 287 | show_max_mem_per_cpu = 1; 288 | show_def_mem_per_cpu = 1; 289 | show_mjt_time = 1; 290 | show_djt_time = 1; 291 | show_partition_qos = 1; 292 | show_parameter_L = 1; 293 | show_all_partition = 1; 294 | break; 295 | case 'h': 296 | sp_spart_usage(); 297 | exit(1); 298 | break; 299 | default: 300 | printf("\nUnknown parameter: %c\n", argv[k][m]); 301 | sp_spart_usage(); 302 | printf("\nUnknown parameter: %c\n", argv[k][m]); 303 | exit(1); 304 | break; 305 | } 306 | } 307 | } else { 308 | printf("\nUnknown parameter: %s\n", argv[k]); 309 | sp_spart_usage(); 310 | printf("\nUnknown parameter: %s\n", argv[k]); 311 | exit(1); 312 | } 313 | } 314 | 315 | if (slurm_load_ctl_conf((time_t)NULL, &conf_info_msg_ptr)) { 316 | slurm_perror("slurm_load_ctl_conf error"); 317 | exit(1); 318 | } 319 | 320 | if (slurm_load_jobs((time_t)NULL, &job_buffer_ptr, SHOW_ALL)) { 321 | slurm_perror("slurm_load_jobs error"); 322 | exit(1); 323 | } 324 | 325 | if (slurm_load_node((time_t)NULL, &node_buffer_ptr, SHOW_ALL)) { 326 | slurm_perror("slurm_load_node error"); 327 | exit(1); 328 | } 329 | 330 | if (slurm_load_partitions((time_t)NULL, &part_buffer_ptr, show_partition)) { 331 | slurm_perror("slurm_load_partitions error"); 332 | exit(1); 333 | } 334 | 335 | #ifdef __slurmdb_cluster_rec_t_defined 336 | sp_strn2cpy(cluster_name, SPART_MAX_COLUMN_SIZE, 337 | conf_info_msg_ptr->cluster_name, 
SPART_MAX_COLUMN_SIZE); 338 | #endif 339 | 340 | /* to check that can we read pending jobs info */ 341 | if (conf_info_msg_ptr->private_data != 0) { 342 | printf("WARNING: The Slurm settings have info restrictions!\n"); 343 | 344 | /* to check that can we read pending jobs info */ 345 | if (conf_info_msg_ptr->private_data & PRIVATE_DATA_JOBS) { 346 | printf("\tthe spart can not show other users' waiting jobs info!\n"); 347 | if (show_parameter_L != 1) { 348 | spheaders.waiting_resource.visible = 0; 349 | spheaders.waiting_other.visible = 0; 350 | } 351 | } 352 | 353 | if (conf_info_msg_ptr->private_data & PRIVATE_DATA_NODES) { 354 | printf("\tthe spart can not show node status info!\n"); 355 | } 356 | 357 | if (conf_info_msg_ptr->private_data & PRIVATE_DATA_PARTITIONS) { 358 | printf("\tthe spart can not show partition info!\n"); 359 | } 360 | printf("\n"); 361 | } 362 | 363 | #if SLURM_VERSION_NUMBER > SLURM_VERSION_NUM(18, 7, 0) && \ 364 | SLURM_VERSION_NUMBER != SLURM_VERSION_NUM(20, 2, 0) && \ 365 | SLURM_VERSION_NUMBER != SLURM_VERSION_NUM(20, 2, 1) 366 | /* Getting user account info */ 367 | #if SLURM_VERSION_NUMBER >= SLURM_VERSION_NUM(20, 11, 0) 368 | db_conn = slurmdb_connection_get(NULL); 369 | #else 370 | db_conn = slurmdb_connection_get(); 371 | #endif 372 | if (errno != SLURM_SUCCESS) { 373 | slurm_perror("Can not connect to the slurm database"); 374 | exit(1); 375 | } 376 | 377 | memset(&assoc_cond, 0, sizeof(slurmdb_assoc_cond_t)); 378 | assoc_cond.user_list = slurm_list_create(NULL); 379 | slurm_list_append(assoc_cond.user_list, user_name); 380 | assoc_cond.acct_list = slurm_list_create(NULL); 381 | 382 | assoc_list = slurmdb_associations_get(db_conn, &assoc_cond); 383 | itr = slurm_list_iterator_create(assoc_list); 384 | 385 | user_acct_count = slurm_list_count(assoc_list); 386 | user_acct = malloc(user_acct_count * sizeof(char *)); 387 | user_qos_count = 0; 388 | 389 | for (k = 0; k < user_acct_count; k++) { 390 | assoc = slurm_list_next(itr); 391 
| qos_list = assoc->qos_list; 392 | user_qos_count += slurm_list_count(qos_list); 393 | } 394 | 395 | user_qos = malloc(user_qos_count * sizeof(char *)); 396 | 397 | qosn_list = slurmdb_qos_get(db_conn, NULL); 398 | itr_qosn = slurm_list_iterator_create(qosn_list); 399 | 400 | n = 0; 401 | slurm_list_iterator_reset(itr); 402 | for (k = 0; k < user_acct_count; k++) { 403 | user_acct[k] = malloc(SPART_INFO_STRING_SIZE * sizeof(char)); 404 | assoc = slurm_list_next(itr); 405 | sp_strn2cpy(user_acct[k], SPART_INFO_STRING_SIZE, assoc->acct, 406 | SPART_INFO_STRING_SIZE); 407 | 408 | qos_list = assoc->qos_list; 409 | user_qos_count = slurm_list_count(qos_list); 410 | if (user_qos_count > 0) { 411 | itr_qos = slurm_list_iterator_create(qos_list); 412 | for (m = 0; m < user_qos_count; m++) { 413 | user_qos[n] = malloc(SPART_INFO_STRING_SIZE * sizeof(char)); 414 | qos = slurm_list_next(itr_qos); 415 | //qos2 = (char *)slurmdb_qos_str(qosn_list, atoi(qos)); 416 | sp_strn2cpy(user_qos[n], SPART_INFO_STRING_SIZE, qos, 417 | SPART_INFO_STRING_SIZE); 418 | n++; 419 | } 420 | } 421 | } 422 | 423 | slurm_list_iterator_destroy(itr); 424 | if (user_qos_count > 0) slurm_list_iterator_destroy(itr_qos); 425 | // slurm_list_destroy(qos_list); 426 | slurm_list_destroy(assoc_list); 427 | slurm_list_iterator_destroy(itr_qosn); 428 | slurm_list_destroy(qosn_list); 429 | #else 430 | snprintf(sh_str, SPART_INFO_STRING_SIZE, 431 | "sacctmgr list association format=account%%-30 where user=%s -n " 432 | "2>/dev/null |tr -s '\n, ' ' '", 433 | user_name); 434 | fo = popen(sh_str, "r"); 435 | if (fo) { 436 | fgets(re_str, SPART_INFO_STRING_SIZE, fo); 437 | if (re_str[0] == '\0') 438 | user_acct_count = 1; 439 | else 440 | user_acct_count = 0; 441 | k = 0; 442 | for (j = 0; re_str[j]; j++) 443 | if (re_str[j] == ' ') user_acct_count++; 444 | 445 | user_acct = malloc(user_acct_count * sizeof(char *)); 446 | for (p_str = strtok_r(re_str, " ", &t_str); p_str != NULL; 447 | p_str = strtok_r(NULL, " ", 
&t_str)) { 448 | user_acct[k] = malloc(SPART_INFO_STRING_SIZE * sizeof(char)); 449 | sp_strn2cpy(user_acct[k], SPART_INFO_STRING_SIZE, p_str, 450 | SPART_INFO_STRING_SIZE); 451 | k++; 452 | } 453 | } 454 | pclose(fo); 455 | 456 | snprintf(sh_str, SPART_INFO_STRING_SIZE, 457 | "sacctmgr list association format=qos%%-30 where user=%s -n " 458 | "2>/dev/null |tr -s '\n, ' ' '", 459 | user_name); 460 | fo = popen(sh_str, "r"); 461 | if (fo) { 462 | fgets(re_str, SPART_INFO_STRING_SIZE, fo); 463 | if (re_str[0] == '\0') 464 | user_qos_count = 1; 465 | else 466 | user_qos_count = 0; 467 | k = 0; 468 | for (j = 0; re_str[j]; j++) 469 | if (re_str[j] == ' ') user_qos_count++; 470 | 471 | user_qos = malloc(user_qos_count * sizeof(char *)); 472 | for (p_str = strtok_r(re_str, " ", &t_str); p_str != NULL; 473 | p_str = strtok_r(NULL, " ", &t_str)) { 474 | user_qos[k] = malloc(SPART_INFO_STRING_SIZE * sizeof(char)); 475 | sp_strn2cpy(user_qos[k], SPART_INFO_STRING_SIZE, p_str, 476 | SPART_INFO_STRING_SIZE); 477 | k++; 478 | } 479 | } 480 | pclose(fo); 481 | 482 | #endif 483 | 484 | if (show_info) { 485 | sp_print_user_info(user_name, user_group, user_group_count, user_acct, 486 | user_acct_count, user_qos, user_qos_count); 487 | } 488 | 489 | /* Initialize spart data for each partition */ 490 | partition_count = part_buffer_ptr->record_count; 491 | spData = (sp_part_info_t *)malloc(partition_count * sizeof(sp_part_info_t)); 492 | 493 | for (i = 0; i < partition_count; i++) { 494 | spData[i].free_cpu = 0; 495 | spData[i].total_cpu = 0; 496 | spData[i].free_node = 0; 497 | spData[i].total_node = 0; 498 | spData[i].waiting_resource = 0; 499 | spData[i].waiting_other = 0; 500 | spData[i].min_nodes = 0; 501 | spData[i].max_nodes = 0; 502 | spData[i].max_cpus_per_node = 0; 503 | spData[i].max_mem_per_cpu = 0; 504 | /* MaxJobTime */ 505 | spData[i].mjt_time = 0; 506 | /* DefaultJobTime */ 507 | spData[i].djt_time = 0; 508 | #ifdef SPART_SHOW_STATEMENT 509 | spData[i].show_statement = 
show_info; 510 | #endif 511 | spData[i].min_core = 0; 512 | spData[i].max_core = 0; 513 | spData[i].min_mem_gb = 0; 514 | spData[i].max_mem_gb = 0; 515 | spData[i].my_waiting_resource = 0; 516 | spData[i].my_waiting_other = 0; 517 | spData[i].my_running = 0; 518 | spData[i].my_total = 0; 519 | spData[i].visible = 1; 520 | /* partition_name[] */ 521 | #ifdef __slurmdb_cluster_rec_t_defined 522 | spData[i].cluster_name[0] = 0; 523 | #endif 524 | spData[i].partition_name[0] = 0; 525 | spData[i].partition_qos[0] = 0; 526 | spData[i].partition_status[0] = 0; 527 | } 528 | 529 | /* Finds resource/other waiting core count for each partition */ 530 | for (i = 0; i < job_buffer_ptr->record_count; i++) { 531 | /* add ',' character at the begining and the end */ 532 | sp_strn2cpy(job_parts_str, SPART_INFO_STRING_SIZE, ",", 533 | SPART_INFO_STRING_SIZE); 534 | sp_strn2cat(job_parts_str, SPART_INFO_STRING_SIZE, 535 | job_buffer_ptr->job_array[i].partition, SPART_INFO_STRING_SIZE); 536 | sp_strn2cat(job_parts_str, SPART_INFO_STRING_SIZE, ",", 2); 537 | 538 | for (j = 0; j < partition_count; j++) { 539 | /* add ',' character at the begining and the end */ 540 | sp_strn2cpy(partition_str, SPART_INFO_STRING_SIZE, ",", 541 | SPART_INFO_STRING_SIZE); 542 | sp_strn2cat(partition_str, SPART_INFO_STRING_SIZE, 543 | part_buffer_ptr->partition_array[j].name, 544 | SPART_INFO_STRING_SIZE); 545 | sp_strn2cat(partition_str + strlen(partition_str), SPART_INFO_STRING_SIZE, 546 | ",", 2); 547 | 548 | if (strstr(job_parts_str, partition_str) != NULL) { 549 | if (job_buffer_ptr->job_array[i].job_state == JOB_PENDING) { 550 | if ((job_buffer_ptr->job_array[i].state_reason == WAIT_RESOURCES) || 551 | (job_buffer_ptr->job_array[i].state_reason == 552 | WAIT_NODE_NOT_AVAIL) || 553 | (job_buffer_ptr->job_array[i].state_reason == WAIT_PRIORITY)) { 554 | spData[j].waiting_resource += job_buffer_ptr->job_array[i].num_cpus; 555 | if (job_buffer_ptr->job_array[i].user_id == user_id) 556 | 
spData[j].my_waiting_resource++; 557 | } else { 558 | spData[j].waiting_other += job_buffer_ptr->job_array[i].num_cpus; 559 | if (job_buffer_ptr->job_array[i].user_id == user_id) 560 | spData[j].my_waiting_other++; 561 | } 562 | } else { 563 | if ((job_buffer_ptr->job_array[i].user_id == user_id) && 564 | (job_buffer_ptr->job_array[i].job_state == JOB_RUNNING)) 565 | spData[j].my_running++; 566 | } 567 | if ((job_buffer_ptr->job_array[i].user_id == user_id) && 568 | ((job_buffer_ptr->job_array[i].job_state == JOB_PENDING) || 569 | (job_buffer_ptr->job_array[i].job_state == JOB_RUNNING) || 570 | (job_buffer_ptr->job_array[i].job_state == JOB_SUSPENDED))) 571 | spData[j].my_total++; 572 | } 573 | } 574 | } 575 | 576 | show_gres = spheaders.gres.visible; 577 | show_features = spheaders.features.visible; 578 | for (i = 0; i < partition_count; i++) { 579 | part_ptr = &part_buffer_ptr->partition_array[i]; 580 | 581 | min_mem = UINT_MAX; 582 | max_mem = 0; 583 | min_cpu = UINT_MAX; 584 | max_cpu = 0; 585 | free_cpu = 0; 586 | free_node = 0; 587 | alloc_cpus = 0; 588 | max_mem_per_cpu = 0; 589 | def_mem_per_cpu = 0; 590 | 591 | sp_gres_reset_counts(spgres, &sp_gres_count); 592 | sp_gres_reset_counts(spfeatures, &sp_features_count); 593 | 594 | for (j = 0; part_ptr->node_inx; j += 2) { 595 | if (part_ptr->node_inx[j] == -1) break; 596 | for (k = part_ptr->node_inx[j]; k <= part_ptr->node_inx[j + 1]; k++) { 597 | cpus = node_buffer_ptr->node_array[k].cpus; 598 | mem = (uint32_t)(node_buffer_ptr->node_array[k].real_memory); 599 | if (min_mem > mem) min_mem = mem; 600 | if (max_mem < mem) max_mem = mem; 601 | if (min_cpu > cpus) min_cpu = cpus; 602 | if (max_cpu < cpus) max_cpu = cpus; 603 | 604 | slurm_get_select_nodeinfo( 605 | node_buffer_ptr->node_array[k].select_nodeinfo, 606 | SELECT_NODEDATA_SUBCNT, NODE_STATE_ALLOCATED, &alloc_cpus); 607 | 608 | state = node_buffer_ptr->node_array[k].node_state; 609 | /* If gres will not show, don't run */ 610 | if ((show_gres) && 
(node_buffer_ptr->node_array[k].gres != NULL)) { 611 | sp_gres_add(spgres, &sp_gres_count, 612 | node_buffer_ptr->node_array[k].gres); 613 | } 614 | 615 | /* If features will not show, don't run */ 616 | if (show_features) { 617 | if (node_buffer_ptr->node_array[k].features_act != NULL) 618 | sp_gres_add(spfeatures, &sp_features_count, 619 | node_buffer_ptr->node_array[k].features_act); 620 | else if (node_buffer_ptr->node_array[k].features != NULL) 621 | sp_gres_add(spfeatures, &sp_features_count, 622 | node_buffer_ptr->node_array[k].features); 623 | } 624 | #ifdef SPART_COMPILE_FOR_UHEM 625 | reason = node_buffer_ptr->node_array[k].reason; 626 | #endif 627 | 628 | /* The PowerSave_PwrOffState and PwrON_State_PowerSave control 629 | * for an alternative power saving solution we developed. 630 | * It required for showing power-off nodes as idle */ 631 | if ((((state & NODE_STATE_DRAIN) != NODE_STATE_DRAIN) && 632 | ((state & NODE_STATE_BASE) != NODE_STATE_DOWN) && 633 | (state != NODE_STATE_UNKNOWN)) 634 | #ifdef SPART_COMPILE_FOR_UHEM 635 | || 636 | (strncmp(reason, "PowerSave_PwrOffState", 21) == 0) || 637 | (strncmp(reason, "PwrON_State_PowerSave", 21) == 0) 638 | #endif 639 | ) { 640 | if (alloc_cpus == 0) free_node += 1; 641 | free_cpu += cpus - alloc_cpus; 642 | } 643 | } 644 | } 645 | 646 | #ifdef __slurmdb_cluster_rec_t_defined 647 | if (part_ptr->cluster_name != NULL) { 648 | sp_strn2cpy(spData[i].cluster_name, SPART_MAX_COLUMN_SIZE, 649 | part_ptr->cluster_name, SPART_MAX_COLUMN_SIZE); 650 | tmp_lenght = strlen(part_ptr->cluster_name); 651 | if (tmp_lenght > clusname_lenght) clusname_lenght = tmp_lenght; 652 | } else { 653 | sp_strn2cpy(spData[i].cluster_name, SPART_MAX_COLUMN_SIZE, cluster_name, 654 | SPART_MAX_COLUMN_SIZE); 655 | tmp_lenght = strlen(cluster_name); 656 | if (tmp_lenght > clusname_lenght) clusname_lenght = tmp_lenght; 657 | } 658 | #endif 659 | 660 | /* Partition States from more important to less important 661 | * because, there is 
limited space. */ 662 | if (part_ptr->flags & PART_FLAG_DEFAULT) 663 | sp_strn2cat(spData[i].partition_status, SPART_MAX_COLUMN_SIZE, "*", 2); 664 | if (part_ptr->flags & PART_FLAG_HIDDEN) 665 | sp_strn2cat(spData[i].partition_status, SPART_MAX_COLUMN_SIZE, ".", 2); 666 | 667 | #if SLURM_VERSION_NUMBER > SLURM_VERSION_NUM(18, 7, 0) && \ 668 | SLURM_VERSION_NUMBER != SLURM_VERSION_NUM(20, 2, 0) && \ 669 | SLURM_VERSION_NUMBER != SLURM_VERSION_NUM(20, 2, 1) 670 | 671 | k = sp_check_permision_set_legend( 672 | part_ptr->allow_accounts, user_acct, user_acct_count, 673 | spData[i].partition_status, NULL, "a", "A"); 674 | if ((!show_all_partition) && (k == 0)) spData[i].visible = 0; 675 | 676 | k = sp_check_permision_set_legend( 677 | part_ptr->deny_accounts, user_acct, user_acct_count, 678 | spData[i].partition_status, "A", "a", NULL); 679 | if ((!show_all_partition) && (k == 0)) spData[i].visible = 0; 680 | 681 | k = sp_check_permision_set_legend( 682 | part_ptr->allow_qos, user_qos, user_qos_count, 683 | spData[i].partition_status, NULL, "q", "Q"); 684 | if ((!show_all_partition) && (k == 0)) spData[i].visible = 0; 685 | 686 | k = sp_check_permision_set_legend( 687 | part_ptr->deny_qos, user_qos, user_qos_count, 688 | spData[i].partition_status, "Q", "q", NULL); 689 | if ((!show_all_partition) && (k == 0)) spData[i].visible = 0; 690 | 691 | k = sp_check_permision_set_legend( 692 | part_ptr->allow_groups, user_group, user_group_count, 693 | spData[i].partition_status, NULL, "g", "G"); 694 | if ((!show_all_partition) && (k == 0)) spData[i].visible = 0; 695 | 696 | #endif 697 | 698 | if (strncmp(user_name, "root", SPART_INFO_STRING_SIZE) == 0) { 699 | if (part_ptr->flags & PART_FLAG_NO_ROOT) { 700 | sp_strn2cat(spData[i].partition_status, SPART_MAX_COLUMN_SIZE, "R", 2); 701 | if (!show_all_partition) spData[i].visible = 0; 702 | } 703 | } else { 704 | if (part_ptr->flags & PART_FLAG_ROOT_ONLY) { 705 | sp_strn2cat(spData[i].partition_status, SPART_MAX_COLUMN_SIZE, "R", 
2); 706 | if (!show_all_partition) spData[i].visible = 0; 707 | } 708 | } 709 | 710 | if (!(part_ptr->state_up == PARTITION_UP)) { 711 | if (part_ptr->state_up == PARTITION_INACTIVE) 712 | sp_strn2cat(spData[i].partition_status, SPART_MAX_COLUMN_SIZE, "C", 2); 713 | if (part_ptr->state_up == PARTITION_DRAIN) 714 | sp_strn2cat(spData[i].partition_status, SPART_MAX_COLUMN_SIZE, "S", 2); 715 | if (part_ptr->state_up == PARTITION_DOWN) 716 | sp_strn2cat(spData[i].partition_status, SPART_MAX_COLUMN_SIZE, "D", 2); 717 | } 718 | 719 | if (part_ptr->flags & PART_FLAG_REQ_RESV) 720 | sp_strn2cat(spData[i].partition_status, SPART_MAX_COLUMN_SIZE, "r", 2); 721 | 722 | /* if (part_ptr->flags & PART_FLAG_EXCLUSIVE_USER) 723 | strncat(spData[i].partition_status, "x", SPART_MAX_COLUMN_SIZE);*/ 724 | 725 | /* spgres (GRES) data converting to string */ 726 | if (sp_gres_count == 0) { 727 | sp_strn2cpy(spData[i].gres, SPART_INFO_STRING_SIZE, "-", 728 | SPART_INFO_STRING_SIZE); 729 | } else { 730 | sp_con_strprint(strtmp, SPART_INFO_STRING_SIZE, spgres[0].count); 731 | snprintf(mem_result, SPART_INFO_STRING_SIZE, "%s(%s)", 732 | spgres[0].gres_name, strtmp); 733 | sp_strn2cpy(spData[i].gres, SPART_INFO_STRING_SIZE, mem_result, 734 | SPART_INFO_STRING_SIZE); 735 | for (j = 1; j < sp_gres_count; j++) { 736 | sp_con_strprint(strtmp, SPART_INFO_STRING_SIZE, spgres[j].count); 737 | snprintf(mem_result, SPART_INFO_STRING_SIZE, ",%s(%s)", 738 | spgres[j].gres_name, strtmp); 739 | sp_strn2cat(spData[i].gres, SPART_INFO_STRING_SIZE, mem_result, 740 | SPART_INFO_STRING_SIZE); 741 | } 742 | } 743 | 744 | /* spfeatures data converting to string */ 745 | if (sp_features_count == 0) { 746 | sp_strn2cpy(spData[i].features, SPART_INFO_STRING_SIZE, "-", 747 | SPART_INFO_STRING_SIZE); 748 | } else { 749 | sp_con_strprint(strtmp, SPART_INFO_STRING_SIZE, spfeatures[0].count); 750 | snprintf(mem_result, SPART_INFO_STRING_SIZE, "%s(%s)", 751 | spfeatures[0].gres_name, strtmp); 752 | 
sp_strn2cpy(spData[i].features, SPART_INFO_STRING_SIZE, mem_result, 753 | SPART_INFO_STRING_SIZE); 754 | for (j = 1; j < sp_features_count; j++) { 755 | sp_con_strprint(strtmp, SPART_INFO_STRING_SIZE, spfeatures[j].count); 756 | snprintf(mem_result, SPART_INFO_STRING_SIZE, ",%s(%s)", 757 | spfeatures[j].gres_name, strtmp); 758 | sp_strn2cat(spData[i].features, SPART_INFO_STRING_SIZE, mem_result, 759 | SPART_INFO_STRING_SIZE); 760 | } 761 | } 762 | spData[i].free_cpu = free_cpu; 763 | spData[i].total_cpu = part_ptr->total_cpus; 764 | spData[i].free_node = free_node; 765 | spData[i].total_node = part_ptr->total_nodes; 766 | 767 | if (!show_simple) { 768 | spData[i].min_nodes = part_ptr->min_nodes; 769 | if ((part_ptr->min_nodes != default_min_nodes) && (spData[i].visible)) 770 | show_min_nodes = 1; 771 | spData[i].max_nodes = part_ptr->max_nodes; 772 | if ((part_ptr->max_nodes != default_max_nodes) && (spData[i].visible)) 773 | show_max_nodes = 1; 774 | spData[i].max_cpus_per_node = part_ptr->max_cpus_per_node; 775 | if ((part_ptr->max_cpus_per_node != default_max_cpus_per_node) && 776 | (part_ptr->max_cpus_per_node != 0) && (spData[i].visible)) 777 | show_max_cpus_per_node = 1; 778 | 779 | /* the def_mem_per_cpu and max_mem_per_cpu members contains 780 | * both FLAG bit (MEM_PER_CPU) for CPU/NODE selection, and values. 
*/ 781 | def_mem_per_cpu = part_ptr->def_mem_per_cpu; 782 | if (def_mem_per_cpu & MEM_PER_CPU) { 783 | sp_strn2cpy(spheaders.def_mem_per_cpu.line2, 784 | spheaders.def_mem_per_cpu.column_width + 1, "GB/CPU", 785 | spheaders.def_mem_per_cpu.column_width + 1); 786 | def_mem_per_cpu = def_mem_per_cpu & (~MEM_PER_CPU); 787 | } else { 788 | sp_strn2cpy(spheaders.def_mem_per_cpu.line2, 789 | spheaders.def_mem_per_cpu.column_width + 1, "G/NODE", 790 | spheaders.def_mem_per_cpu.column_width + 1); 791 | } 792 | spData[i].def_mem_per_cpu = (uint64_t)(def_mem_per_cpu / 1000u); 793 | if ((def_mem_per_cpu != default_def_mem_per_cpu) && (spData[i].visible)) 794 | show_def_mem_per_cpu = 1; 795 | 796 | max_mem_per_cpu = part_ptr->max_mem_per_cpu; 797 | if (max_mem_per_cpu & MEM_PER_CPU) { 798 | sp_strn2cpy(spheaders.max_mem_per_cpu.line2, 799 | spheaders.max_mem_per_cpu.column_width + 1, "GB/CPU", 800 | spheaders.max_mem_per_cpu.column_width + 1); 801 | max_mem_per_cpu = max_mem_per_cpu & (~MEM_PER_CPU); 802 | } else { 803 | sp_strn2cpy(spheaders.max_mem_per_cpu.line2, 804 | spheaders.max_mem_per_cpu.column_width + 1, "G/NODE", 805 | spheaders.max_mem_per_cpu.column_width + 1); 806 | } 807 | spData[i].max_mem_per_cpu = (uint64_t)(max_mem_per_cpu / 1000u); 808 | if ((max_mem_per_cpu != default_max_mem_per_cpu) && (spData[i].visible)) 809 | show_max_mem_per_cpu = 1; 810 | 811 | spData[i].mjt_time = part_ptr->max_time; 812 | if ((part_ptr->max_time != default_mjt_time) && (spData[i].visible)) 813 | show_mjt_time = 1; 814 | spData[i].djt_time = part_ptr->default_time; 815 | if ((part_ptr->default_time != default_mjt_time) && 816 | (part_ptr->default_time != NO_VAL) && 817 | (part_ptr->default_time != part_ptr->max_time) && (spData[i].visible)) 818 | show_djt_time = 1; 819 | spData[i].min_core = min_cpu; 820 | spData[i].max_core = max_cpu; 821 | spData[i].max_mem_gb = (uint16_t)(max_mem / 1000u); 822 | spData[i].min_mem_gb = (uint16_t)(min_mem / 1000u); 823 | 824 | if 
((part_ptr->qos_char != NULL) && (strlen(part_ptr->qos_char) > 0)) { 825 | sp_strn2cpy(spData[i].partition_qos, SPART_MAX_COLUMN_SIZE, 826 | part_ptr->qos_char, SPART_MAX_COLUMN_SIZE); 827 | if (strncmp(part_ptr->qos_char, default_qos, SPART_MAX_COLUMN_SIZE) != 828 | 0) 829 | show_partition_qos = 1; 830 | } else 831 | sp_strn2cpy(spData[i].partition_qos, SPART_MAX_COLUMN_SIZE, "-", 832 | SPART_MAX_COLUMN_SIZE); 833 | } 834 | sp_strn2cpy(spData[i].partition_name, SPART_MAX_COLUMN_SIZE, part_ptr->name, 835 | SPART_MAX_COLUMN_SIZE); 836 | tmp_lenght = strlen(part_ptr->name); 837 | if (tmp_lenght > partname_lenght) partname_lenght = tmp_lenght; 838 | } 839 | 840 | if (show_given_partition == 1) { 841 | char *strtmp = NULL; 842 | char *grestok; 843 | char pl[SPART_INFO_STRING_SIZE]; 844 | 845 | for (i = 0; i < partition_count; i++) { 846 | part_ptr = &part_buffer_ptr->partition_array[i]; 847 | sp_strn2cpy(pl, SPART_INFO_STRING_SIZE, given_part_list, 848 | SPART_INFO_STRING_SIZE); 849 | for (grestok = strtok_r(pl, ",", &strtmp); grestok != NULL; 850 | grestok = strtok_r(NULL, ",", &strtmp)) { 851 | if (strncmp(part_ptr->name, grestok, SPART_INFO_STRING_SIZE) == 0) { 852 | spData[i].visible = 1; 853 | break; 854 | } else { 855 | spData[i].visible = 0; 856 | } 857 | } 858 | } 859 | show_all_partition = 0; 860 | } 861 | 862 | if (partname_lenght > spheaders.partition_name.column_width) { 863 | m = partname_lenght - spheaders.partition_name.column_width; 864 | 865 | sp_strn2cpy(strtmp, SPART_INFO_STRING_SIZE, spheaders.partition_name.line1, 866 | SPART_INFO_STRING_SIZE); 867 | strcpy(spheaders.partition_name.line1, " "); 868 | for (k = 1; k < m; k++) 869 | sp_strn2cat(spheaders.partition_name.line1, partname_lenght, " ", 2); 870 | sp_strn2cat(spheaders.partition_name.line1, partname_lenght, strtmp, 871 | partname_lenght); 872 | 873 | sp_strn2cpy(strtmp, SPART_INFO_STRING_SIZE, spheaders.partition_name.line2, 874 | SPART_INFO_STRING_SIZE); 875 | 
strcpy(spheaders.partition_name.line2, " "); 876 | for (k = 1; k < m; k++) 877 | sp_strn2cat(spheaders.partition_name.line2, partname_lenght, " ", 2); 878 | sp_strn2cat(spheaders.partition_name.line2, partname_lenght, strtmp, 879 | partname_lenght); 880 | 881 | spheaders.partition_name.column_width = partname_lenght; 882 | } 883 | 884 | #ifdef __slurmdb_cluster_rec_t_defined 885 | if (clusname_lenght > spheaders.cluster_name.column_width) { 886 | m = clusname_lenght - spheaders.cluster_name.column_width; 887 | 888 | sp_strn2cpy(strtmp, SPART_INFO_STRING_SIZE, spheaders.cluster_name.line1, 889 | SPART_INFO_STRING_SIZE); 890 | strcpy(spheaders.cluster_name.line1, " "); 891 | for (k = 1; k < m; k++) 892 | sp_strn2cat(spheaders.cluster_name.line1, clusname_lenght, " ", 2); 893 | sp_strn2cat(spheaders.cluster_name.line1, clusname_lenght, strtmp, 894 | clusname_lenght); 895 | 896 | sp_strn2cpy(strtmp, SPART_INFO_STRING_SIZE, spheaders.cluster_name.line2, 897 | SPART_INFO_STRING_SIZE); 898 | strcpy(spheaders.cluster_name.line2, " "); 899 | for (k = 1; k < m; k++) 900 | sp_strn2cat(spheaders.cluster_name.line2, clusname_lenght, " ", 2); 901 | sp_strn2cat(spheaders.cluster_name.line2, clusname_lenght, strtmp, 902 | clusname_lenght); 903 | 904 | spheaders.cluster_name.column_width = clusname_lenght; 905 | } 906 | #endif 907 | 908 | /* If these column at default values, don't show */ 909 | if ((!show_parameter_L) && (!show_simple)) { 910 | spheaders.min_nodes.visible = show_min_nodes; 911 | spheaders.max_nodes.visible = show_max_nodes; 912 | spheaders.max_cpus_per_node.visible = show_max_cpus_per_node; 913 | spheaders.max_mem_per_cpu.visible = show_max_mem_per_cpu; 914 | spheaders.def_mem_per_cpu.visible = show_def_mem_per_cpu; 915 | spheaders.mjt_time.visible = show_mjt_time; 916 | spheaders.djt_time.visible = show_djt_time; 917 | spheaders.partition_qos.visible = show_partition_qos; 918 | } 919 | spheaders.my_running.visible = show_my_running; 920 | 
spheaders.my_waiting_resource.visible = show_my_waiting_resource; 921 | spheaders.my_waiting_other.visible = show_my_waiting_other; 922 | spheaders.my_total.visible = show_my_total; 923 | 924 | if (show_all_partition) { 925 | for (i = 0; i < partition_count; i++) { 926 | spData[i].visible = 1; 927 | } 928 | } 929 | 930 | /* Output width calculation */ 931 | total_width = 7; /* for || and space characters */ 932 | total_width += spheaders.partition_name.column_width; 933 | total_width += spheaders.partition_status.column_width; 934 | total_width += spheaders.free_cpu.column_width; 935 | total_width += spheaders.total_cpu.column_width; 936 | total_width += spheaders.free_node.column_width; 937 | total_width += spheaders.total_node.column_width; 938 | total_width += spheaders.waiting_resource.column_width; 939 | total_width += spheaders.waiting_other.column_width; 940 | 941 | /* Common Values scanning */ 942 | /* reuse local show_xxx variables for a different purpose */ 943 | show_all_partition = 0; /* are there common features? */ 944 | if (show_simple) { 945 | total_width += spheaders.my_running.column_width + 946 | spheaders.my_waiting_resource.column_width + 947 | spheaders.my_waiting_other.column_width + 948 | spheaders.my_total.column_width + 4; 949 | } else { 950 | k = -1; /* first visible row (partition) */ 951 | for (i = 0; i < partition_count; i++) 952 | if ((spData[i].visible) && (k == -1)) { 953 | k = i; /* first visible partition */ 954 | } 955 | 956 | if (spheaders.my_running.visible) {/* is column visible */ 957 | show_my_running = 1; 958 | for (i = 0; i < partition_count; i++) { 959 | if (spData[i].visible) /* is row visible */ 960 | { 961 | if (spData[i].my_running != spData[k].my_running) { 962 | show_my_running = 0; /* it is not common */ 963 | break; 964 | } 965 | } 966 | } 967 | } 968 | 969 | if (spheaders.my_waiting_resource.visible) {/* is column visible */ 970 | show_my_waiting_resource = 1; 971 | for (i = 0; i < partition_count; i++) { 972 | if
(spData[i].visible) /* is row visible */ 973 | { 974 | if (spData[i].my_waiting_resource != spData[k].my_waiting_resource) { 975 | show_my_waiting_resource = 0; /* it is not common */ 976 | break; 977 | } 978 | } 979 | } 980 | } 981 | 982 | if (spheaders.my_waiting_other.visible) {/* is column visible */ 983 | show_my_waiting_other = 1; 984 | for (i = 0; i < partition_count; i++) { 985 | if (spData[i].visible) /* is row visible */ 986 | { 987 | if (spData[i].my_waiting_other != spData[k].my_waiting_other) { 988 | show_my_waiting_other = 0; /* it is not common */ 989 | break; 990 | } 991 | } 992 | } 993 | } 994 | 995 | if (spheaders.my_total.visible) {/* is column visible */ 996 | show_my_total = 1; 997 | for (i = 0; i < partition_count; i++) { 998 | if (spData[i].visible) /* is row visible */ 999 | { 1000 | if (spData[i].my_total != spData[k].my_total) { 1001 | show_my_total = 0; /* it is not common */ 1002 | break; 1003 | } 1004 | } 1005 | } 1006 | if (show_my_waiting_resource && show_my_waiting_other && 1007 | show_my_running && show_my_total) { 1008 | show_all_partition = 1; 1009 | spheaders.my_running.visible = 0; 1010 | spheaders.my_waiting_resource.visible = 0; 1011 | spheaders.my_waiting_other.visible = 0; 1012 | spheaders.my_total.visible = 0; 1013 | } else 1014 | total_width += spheaders.my_running.column_width + 1015 | spheaders.my_waiting_resource.column_width + 1016 | spheaders.my_waiting_other.column_width + 1017 | spheaders.my_total.column_width + 4; 1018 | } 1019 | 1020 | if (spheaders.min_nodes.visible) {/* is column visible */ 1021 | show_min_nodes = 1; 1022 | for (i = 0; i < partition_count; i++) { 1023 | if (spData[i].visible) /* is row visible */ 1024 | { 1025 | if (spData[i].min_nodes != spData[k].min_nodes) { 1026 | show_min_nodes = 0; /* it is not common */ 1027 | break; 1028 | } 1029 | } 1030 | } 1031 | if (show_min_nodes == 1) { 1032 | show_all_partition = 1; 1033 | spheaders.min_nodes.visible = 0; 1034 | } else 1035 | total_width += 
spheaders.min_nodes.column_width + 1; 1036 | } 1037 | 1038 | if (spheaders.max_nodes.visible) {/* is column visible */ 1039 | show_max_nodes = 1; 1040 | for (i = 0; i < partition_count; i++) { 1041 | if (spData[i].visible) /* is row visible */ 1042 | { 1043 | if (spData[i].max_nodes != spData[k].max_nodes) { 1044 | show_max_nodes = 0; /* it is not common */ 1045 | break; 1046 | } 1047 | } 1048 | } 1049 | if (show_max_nodes == 1) { 1050 | show_all_partition = 1; 1051 | spheaders.max_nodes.visible = 0; 1052 | } else 1053 | total_width += spheaders.max_nodes.column_width + 1; 1054 | } 1055 | 1056 | if (spheaders.def_mem_per_cpu.visible) {/* is column visible */ 1057 | show_def_mem_per_cpu = 1; 1058 | for (i = 0; i < partition_count; i++) { 1059 | if (spData[i].visible) /* is row visible */ 1060 | { 1061 | if (spData[i].def_mem_per_cpu != spData[k].def_mem_per_cpu) { 1062 | show_def_mem_per_cpu = 0; /* it is not common */ 1063 | break; 1064 | } 1065 | } 1066 | } 1067 | if (show_def_mem_per_cpu == 1) { 1068 | show_all_partition = 1; 1069 | spheaders.def_mem_per_cpu.visible = 0; 1070 | } else 1071 | total_width += spheaders.def_mem_per_cpu.column_width + 1; 1072 | } 1073 | 1074 | if (spheaders.max_mem_per_cpu.visible) {/* is column visible */ 1075 | show_max_mem_per_cpu = 1; 1076 | for (i = 0; i < partition_count; i++) { 1077 | if (spData[i].visible) /* is row visible */ 1078 | { 1079 | if (spData[i].max_mem_per_cpu != spData[k].max_mem_per_cpu) { 1080 | show_max_mem_per_cpu = 0; /* it is not common */ 1081 | break; 1082 | } 1083 | } 1084 | } 1085 | if (show_max_mem_per_cpu == 1) { 1086 | show_all_partition = 1; 1087 | spheaders.max_mem_per_cpu.visible = 0; 1088 | } else 1089 | total_width += spheaders.max_mem_per_cpu.column_width + 1; 1090 | } 1091 | 1092 | if (spheaders.max_cpus_per_node.visible) {/* is column visible */ 1093 | show_max_cpus_per_node = 1; 1094 | for (i = 0; i < partition_count; i++) { 1095 | if (spData[i].visible) /* is row visible */ 1096 | { 1097 | 
if (spData[i].max_cpus_per_node != spData[k].max_cpus_per_node) { 1098 | show_max_cpus_per_node = 0; /* it is not common */ 1099 | break; 1100 | } 1101 | } 1102 | } 1103 | if (show_max_cpus_per_node == 1) { 1104 | show_all_partition = 1; 1105 | spheaders.max_cpus_per_node.visible = 0; 1106 | } else 1107 | total_width += spheaders.max_cpus_per_node.column_width + 1; 1108 | } 1109 | 1110 | if (spheaders.mjt_time.visible) {/* is column visible */ 1111 | show_mjt_time = 1; 1112 | for (i = 0; i < partition_count; i++) { 1113 | if (spData[i].visible) /* is row visible */ 1114 | { 1115 | if (spData[i].mjt_time != spData[k].mjt_time) { 1116 | show_mjt_time = 0; /* it is not common */ 1117 | break; 1118 | } 1119 | } 1120 | } 1121 | if (show_mjt_time == 1) { 1122 | show_all_partition = 1; 1123 | spheaders.mjt_time.visible = 0; 1124 | } else 1125 | total_width += spheaders.mjt_time.column_width + 1; 1126 | } 1127 | 1128 | if (spheaders.djt_time.visible) {/* is column visible */ 1129 | show_djt_time = 1; 1130 | for (i = 0; i < partition_count; i++) { 1131 | if (spData[i].visible) /* is row visible */ 1132 | { 1133 | if (spData[i].djt_time != spData[k].djt_time) { 1134 | show_djt_time = 0; /* it is not common */ 1135 | break; 1136 | } 1137 | } 1138 | } 1139 | if (show_djt_time == 1) { 1140 | show_all_partition = 1; 1141 | spheaders.djt_time.visible = 0; 1142 | } else 1143 | total_width += spheaders.djt_time.column_width + 1; 1144 | } 1145 | 1146 | if (spheaders.min_core.visible) {/* is column visible */ 1147 | show_min_core = 1; 1148 | for (i = 0; i < partition_count; i++) { 1149 | if (spData[i].visible) /* is row visible */ 1150 | { 1151 | if (spData[i].min_core != spData[k].min_core) { 1152 | show_min_core = 0; /* it is not common */ 1153 | break; 1154 | } 1155 | } 1156 | } 1157 | /* is max_core visible */ 1158 | if (show_max_mem) /* is column visible */ 1159 | for (i = 0; i < partition_count; i++) { 1160 | if (spData[i].visible) /* is row visible */ 1161 | { 1162 | if 
(spData[i].max_core != spData[k].max_core) { 1163 | show_min_core = 0; /* it is not common */ 1164 | break; 1165 | } 1166 | } 1167 | } 1168 | if (show_min_core == 1) { 1169 | show_all_partition = 1; 1170 | spheaders.min_core.visible = 0; 1171 | } else 1172 | total_width += spheaders.min_core.column_width + 1; 1173 | } 1174 | 1175 | if (spheaders.min_mem_gb.visible) {/* is column visible */ 1176 | show_min_mem_gb = 1; 1177 | for (i = 0; i < partition_count; i++) { 1178 | if (spData[i].visible) /* is row visible */ 1179 | { 1180 | if (spData[i].min_mem_gb != spData[k].min_mem_gb) { 1181 | show_min_mem_gb = 0; /* it is not common */ 1182 | break; 1183 | } 1184 | } 1185 | } 1186 | /* is max_mem_gb visible */ 1187 | if (show_max_mem) /* is column visible */ 1188 | for (i = 0; i < partition_count; i++) { 1189 | if (spData[i].visible) /* is row visible */ 1190 | { 1191 | if (spData[i].max_mem_gb != spData[k].max_mem_gb) { 1192 | show_min_mem_gb = 0; /* it is not common */ 1193 | break; 1194 | } 1195 | } 1196 | } 1197 | if (show_min_mem_gb == 1) { 1198 | show_all_partition = 1; 1199 | spheaders.min_mem_gb.visible = 0; 1200 | } else 1201 | total_width += spheaders.min_mem_gb.column_width + 1; 1202 | } 1203 | 1204 | if (spheaders.partition_qos.visible) {/* is column visible */ 1205 | show_partition_qos = 1; 1206 | for (i = 0; i < partition_count; i++) { 1207 | if (spData[i].visible) /* is row visible */ 1208 | { 1209 | if (strncmp(spData[i].partition_qos, spData[k].partition_qos, 1210 | SPART_MAX_COLUMN_SIZE) != 0) { 1211 | show_partition_qos = 0; /* it is not common */ 1212 | break; 1213 | } 1214 | } 1215 | } 1216 | if (show_partition_qos == 1) { 1217 | show_all_partition = 1; 1218 | spheaders.partition_qos.visible = 0; 1219 | } else 1220 | total_width += spheaders.partition_qos.column_width + 1; 1221 | } 1222 | 1223 | if (spheaders.gres.visible) {/* is column visible */ 1224 | show_gres = 1; 1225 | for (i = 0; i < partition_count; i++) { 1226 | if (spData[i].visible) /* is
row visible */ 1227 | { 1228 | if (strncmp(spData[i].gres, spData[k].gres, SPART_MAX_COLUMN_SIZE) != 1229 | 0) { 1230 | show_gres = 0; /* it is not common */ 1231 | break; 1232 | } 1233 | } 1234 | } 1235 | if (show_gres == 1) { 1236 | show_all_partition = 1; 1237 | spheaders.gres.visible = 0; 1238 | } else 1239 | total_width += spheaders.gres.column_width + 1; 1240 | } 1241 | 1242 | if (spheaders.features.visible) {/* is column visible */ 1243 | show_features = 1; 1244 | for (i = 0; i < partition_count; i++) { 1245 | if (spData[i].visible) /* is row visible */ 1246 | { 1247 | if (strncmp(spData[i].features, spData[k].features, 1248 | SPART_MAX_COLUMN_SIZE) != 0) { 1249 | show_features = 0; /* it is not common */ 1250 | break; 1251 | } 1252 | } 1253 | } 1254 | if (show_features == 1) { 1255 | show_all_partition = 1; 1256 | spheaders.features.visible = 0; 1257 | } else 1258 | total_width += spheaders.features.column_width + 1; 1259 | } 1260 | 1261 | #ifdef __slurmdb_cluster_rec_t_defined 1262 | if (spheaders.cluster_name.visible) {/* is column visible */ 1263 | show_cluster_name = 1; 1264 | for (i = 0; i < partition_count; i++) { 1265 | if (spData[i].visible) /* is row visible */ 1266 | { 1267 | if (strncmp(spData[i].cluster_name, spData[k].cluster_name, 1268 | SPART_MAX_COLUMN_SIZE) != 0) { 1269 | show_cluster_name = 0; /* it is not common */ 1270 | break; 1271 | } 1272 | } 1273 | } 1274 | if (show_cluster_name == 1) { 1275 | show_all_partition = 1; 1276 | spheaders.cluster_name.visible = 0; 1277 | } else 1278 | total_width += spheaders.cluster_name.column_width + 1; 1279 | } 1280 | #endif 1281 | } 1282 | /* Print the headers */ 1283 | sp_headers_print(&spheaders); 1284 | 1285 | #ifdef SPART_SHOW_STATEMENT 1286 | if (show_info) { 1287 | printf("\n %s ", SPART_STATEMENT_LINEPRE); 1288 | sp_seperator_print('=', total_width); 1289 | printf(" %s\n\n", SPART_STATEMENT_LINEPOST); 1290 | } 1291 | #endif 1292 | 1293 | /* Print the output */ 1294 | for (i = 0; i <
partition_count; i++) { 1295 | sp_partition_print(&(spData[i]), &spheaders, show_max_mem, show_as_date, 1296 | total_width); 1297 | } 1298 | if (show_verbose) { 1299 | for (i = 0; i < partition_count; i++) { 1300 | if (spData[i].visible == 1) { 1301 | sp_char_check(legends, SPART_INFO_STRING_SIZE, 1302 | spData[i].partition_status, SPART_MAX_COLUMN_SIZE); 1303 | } 1304 | } 1305 | } 1306 | 1307 | /* Print the common values */ 1308 | if (show_all_partition) { 1309 | for (i = 0; i < partition_count; i++) { 1310 | spData[i].visible = 0; 1311 | } 1312 | spData[k].visible = 1; 1313 | 1314 | #ifdef __slurmdb_cluster_rec_t_defined 1315 | spheaders.cluster_name.visible = 0; 1316 | #endif 1317 | spheaders.partition_name.visible = 0; 1318 | spheaders.partition_status.visible = 0; 1319 | spheaders.free_cpu.visible = 0; 1320 | spheaders.total_cpu.visible = 0; 1321 | spheaders.free_node.visible = 0; 1322 | spheaders.total_node.visible = 0; 1323 | spheaders.waiting_resource.visible = 0; 1324 | spheaders.waiting_other.visible = 0; 1325 | spheaders.my_running.visible = 0; 1326 | spheaders.my_waiting_resource.visible = 0; 1327 | spheaders.my_waiting_other.visible = 0; 1328 | spheaders.my_total.visible = 0; 1329 | spheaders.min_nodes.visible = 0; 1330 | spheaders.max_nodes.visible = 0; 1331 | spheaders.max_cpus_per_node.visible = 0; 1332 | spheaders.max_mem_per_cpu.visible = 0; 1333 | spheaders.def_mem_per_cpu.visible = 0; 1334 | spheaders.djt_time.visible = 0; 1335 | spheaders.mjt_time.visible = 0; 1336 | spheaders.min_core.visible = 0; 1337 | spheaders.min_mem_gb.visible = 0; 1338 | spheaders.partition_qos.visible = 0; 1339 | spheaders.gres.visible = 0; 1340 | spheaders.features.visible = 0; 1341 | 1342 | printf("\n"); 1343 | #ifdef __slurmdb_cluster_rec_t_defined 1344 | if (show_cluster_name) spheaders.cluster_name.visible = 1; 1345 | #endif 1346 | if (show_my_running && show_my_waiting_resource && show_my_waiting_other && 1347 | show_my_total) { 1348 |
spheaders.my_running.visible = 1; 1349 | spheaders.my_waiting_resource.visible = 1; 1350 | spheaders.my_waiting_other.visible = 1; 1351 | spheaders.my_total.visible = 1; 1352 | } 1353 | if (show_min_nodes) spheaders.min_nodes.visible = 1; 1354 | if (show_max_nodes) spheaders.max_nodes.visible = 1; 1355 | if (show_max_cpus_per_node) spheaders.max_cpus_per_node.visible = 1; 1356 | if (show_def_mem_per_cpu) spheaders.def_mem_per_cpu.visible = 1; 1357 | if (show_max_mem_per_cpu) spheaders.max_mem_per_cpu.visible = 1; 1358 | if (show_djt_time) spheaders.djt_time.visible = 1; 1359 | if (show_mjt_time) spheaders.mjt_time.visible = 1; 1360 | if (show_min_core) spheaders.min_core.visible = 1; 1361 | if (show_min_mem_gb) spheaders.min_mem_gb.visible = 1; 1362 | if (show_partition_qos) spheaders.partition_qos.visible = 1; 1363 | if (show_gres) spheaders.gres.visible = 1; 1364 | if (show_features) spheaders.features.visible = 1; 1365 | spheaders.hspace.visible = 1; 1366 | sp_headers_print(&spheaders); 1367 | sp_partition_print(&(spData[k]), &spheaders, show_max_mem, show_as_date, 1368 | total_width); 1369 | } 1370 | 1371 | if (show_verbose) { 1372 | printf("\n STATUS LABELS:\n"); 1373 | for (i = 0; i < strlen(legends); i++) { 1374 | for (j = 0; j < legend_count; j++) { 1375 | if (legends[i] == legend_info[j][0]) { 1376 | printf(" %s\n", legend_info[j]); 1377 | } 1378 | } 1379 | } 1380 | } 1381 | 1382 | #ifdef SPART_SHOW_STATEMENT 1383 | /* Print the statement */ 1384 | fo = fopen(SPART_STATEMENT_DIR SPART_STATEMENT_FILE, "r"); 1385 | if (fo) { 1386 | printf("\n %s ", SPART_STATEMENT_LINEPRE); 1387 | sp_seperator_print('=', total_width); 1388 | printf(" %s\n", SPART_STATEMENT_LINEPOST); 1389 | while (fgets(re_str, SPART_INFO_STRING_SIZE, fo)) { 1390 | /* To correctly frame some wide chars, but not all */ 1391 | m = 0; 1392 | for (k = 0; (re_str[k] != '\0') && k < SPART_INFO_STRING_SIZE; k++) { 1393 | if ((re_str[k] < -58) && (re_str[k] > -62)) m++; 1394 | if (re_str[k] ==
'\n') re_str[k] = '\0'; 1395 | } 1396 | // printf(" %s %-*s %s\n", SPART_STATEMENT_LINEPRE, 92 + m, re_str, 1397 | printf(" %s %-*s %s\n", SPART_STATEMENT_LINEPRE, total_width, re_str, 1398 | SPART_STATEMENT_LINEPOST); 1399 | } 1400 | printf(" %s ", SPART_STATEMENT_LINEPRE); 1401 | sp_seperator_print('=', total_width); 1402 | printf(" %s\n", SPART_STATEMENT_LINEPOST); 1403 | fclose(fo); 1404 | } 1405 | #endif 1406 | /* free allocations */ 1407 | for (k = 0; k < user_acct_count; k++) { 1408 | free(user_acct[k]); 1409 | } 1410 | free(user_acct); 1411 | for (k = 0; k < user_qos_count; k++) { 1412 | free(user_qos[k]); 1413 | } 1414 | free(user_qos); 1415 | for (k = 0; k < user_group_count; k++) { 1416 | free(user_group[k]); 1417 | } 1418 | free(user_group); 1419 | 1420 | free(spData); 1421 | slurm_free_job_info_msg(job_buffer_ptr); 1422 | slurm_free_node_info_msg(node_buffer_ptr); 1423 | slurm_free_partition_info_msg(part_buffer_ptr); 1424 | slurm_free_ctl_conf(conf_info_msg_ptr); 1425 | exit(0); 1426 | } 1427 | -------------------------------------------------------------------------------- /spart.h: -------------------------------------------------------------------------------- 1 | /****************************************************************** 2 | * spart : a user-oriented partition info command for slurm 3 | * Author : Cem Ahmet Mercan, 2019-02-16 4 | * Licence : GNU General Public License v2.0 5 | * Note : Some part of this code taken from slurm api man pages 6 | *******************************************************************/ 7 | 8 | #ifndef SPART_SPART_H_incl 9 | #define SPART_SPART_H_incl 10 | 11 | #include 12 | #include 13 | #include 14 | #include 15 | #include 16 | #include 17 | #include 18 | #include 19 | #include 20 | 21 | /* for UHeM-ITU-Turkey specific settings */ 22 | /* #define SPART_COMPILE_FOR_UHEM */ 23 | 24 | /* #define SPART_SHOW_STATEMENT */ 25 | 26 | /* if you want to use STATEMENT feature, uncomment 27 | * SPART_SHOW_STATEMENT at upper
line, and set 28 | * SPART_STATEMENT_DIR to point to the directory 29 | * where all statement files will be saved. 30 | * Write your statement into the file 31 | * SPART_STATEMENT_DIR/SPART_STATEMENT_FILE 32 | * as regular text, with at most 91 chars per line. 33 | * SPART_STATEMENT_DIR and SPART_STATEMENT_FILE 34 | * should be readable by everyone. 35 | * You can change or remove the statement file without 36 | * recompiling spart. If spart finds a 37 | * statement file, it will show your statement. */ 38 | #ifdef SPART_SHOW_STATEMENT 39 | 40 | #define SPART_STATEMENT_DIR "/okyanus/SLURM/spart/" 41 | #define SPART_STATEMENT_FILE "spart_STATEMENT.txt" 42 | 43 | /* if you want to show a bold STATEMENT, define these as 44 | * #define SPART_STATEMENT_LINEPRE "\033[1m" 45 | * #define SPART_STATEMENT_LINEPOST "\033[0m" 46 | */ 47 | /* for white text on a red background 48 | * #define SPART_STATEMENT_LINEPRE "\033[37;41;1m" 49 | * #define SPART_STATEMENT_LINEPOST "\033[0m" 50 | */ 51 | /* and, for regular statement text, use these 52 | #define SPART_STATEMENT_LINEPRE "" 53 | #define SPART_STATEMENT_LINEPOST "" 54 | */ 55 | #define SPART_STATEMENT_LINEPRE "" 56 | #define SPART_STATEMENT_LINEPOST "" 57 | 58 | /* Same as SPART_STATEMENT_LINEPRE, 59 | * but these are for QUEUE STATEMENTs */ 60 | #define SPART_STATEMENT_QUEUE_LINEPRE "" 61 | #define SPART_STATEMENT_QUEUE_LINEPOST "" 62 | 63 | /* The partition statements will be located at 64 | * SPART_STATEMENT_DIR/ 65 | * SPART_STATEMENT_QUEPRESPART_STATEMENT_QUEPOST 66 | * and will be shown, if the -i parameter is given. 67 | */ 68 | #define SPART_STATEMENT_QUEPRE "spart_partition_" 69 | #define SPART_STATEMENT_QUEPOST ".txt" 70 | #endif 71 | 72 | #define SPART_INFO_STRING_SIZE 4096 73 | #define SPART_GRES_ARRAY_SIZE 256 74 | #define SPART_MAX_COLUMN_SIZE 64 75 | #define SPART_MAX_GROUP_SIZE 32 76 | 77 | char *legend_info[] = { 78 | "* : default partition (default queue)", 79 | ".
: hidden partition", 80 | "C : closed to both the job submit and run", 81 | "S : closed to the job submit, but the submitted jobs will run", 82 | "r : requires the reservation", 83 | "D : open to the job submit, but the submitted jobs will not run", 84 | "R : open for only root, or closed to root (if you are root)", 85 | "A : closed to all of your account(s)", 86 | "a : closed to some of your account(s)", 87 | "G : closed to all of your group(s)", 88 | "g : closed to some of your group(s)", 89 | "Q : closed to all of your QOS(s)", 90 | "q : closed to some of your QOS(s)"}; 91 | int legend_count = 12; 92 | 93 | /* Prints the command usage */ 94 | int sp_spart_usage() { 95 | printf( 96 | "\nUsage: spart [-m] [-a] " 97 | #ifdef __slurmdb_cluster_rec_t_defined 98 | "[-c] " 99 | #endif 100 | "[-g] [-i] [-t] [-f] [-l] [-s] [-J] [-p PARTITION_LIST] [-v] [-h]\n\n"); 101 | printf( 102 | "This program shows brief partition info with core counts of available " 103 | "nodes and pending jobs.\n\n"); 104 | printf( 105 | "In the STA-TUS column, the characters mean the partition is:\n" 106 | "\t*\tdefault partition (default queue),\n" 107 | "\t.\thidden partition,\n" 108 | "\tC\tclosed to both the job submit and run,\n" 109 | "\tS\tclosed to the job submit, but the submitted jobs will run,\n" 110 | "\tr\trequires the reservation,\n" 111 | /* "\tx\tthe exclusive job permitted,\n" */ 112 | "\tD\topen to the job submit, but the submitted jobs will not run,\n" 113 | "\tR\topen for only root, or closed to root (if you are root),\n" 114 | "\tA\tclosed to all of your account(s),\n" 115 | "\ta\tclosed to some of your account(s),\n" 116 | "\tG\tclosed to all of your group(s),\n" 117 | "\tg\tclosed to some of your group(s),\n" 118 | "\tQ\tclosed to all of your QOS(s),\n" 119 | "\tq\tclosed to some of your QOS(s).\n\n"); 120 | printf( 121 | "The RESOURCE PENDING column shows core counts of pending jobs " 122 | "because of busy resources.\n\n"); 123 | printf( 124 | "The OTHER PENDING
column shows core counts of pending jobs because " 125 | "of other reasons such\n as license or other limits.\n\n"); 126 | printf( 127 | "The YOUR-RUN, PEND-RES, PEND-OTHR, and YOUR-TOTL columns show " 128 | "the counts of the running,\n resource pending, other pending, and " 129 | "total jobs of the current user, respectively.\n If these four " 130 | "columns have the same values, these same values " 131 | "will be\n shown at COMMON VALUES as four single values.\n\n"); 132 | printf( 133 | "The MIN NODE and MAX NODE columns show the permitted minimum and " 134 | "maximum node counts of the\n jobs which can be submitted to the " 135 | "partition.\n\n"); 136 | printf( 137 | "The MAXCPU/NODE column shows the permitted maximum core count " 138 | "of a single node in\n the partition.\n\n"); 139 | printf( 140 | "The DEFMEM GB/CPU and DEFMEM GB/NODE columns show the default maximum " 141 | "memory in GB which a job\n can use per cpu or per node, " 142 | "respectively.\n\n"); 143 | printf( 144 | "The MAXMEM GB/CPU and MAXMEM GB/NODE columns show the maximum " 145 | "memory in GB which is requestable by\n a job per cpu or per node, " 146 | "respectively.\n\n"); 147 | printf( 148 | "The DEFAULT JOB-TIME column shows the default time limit of a job " 149 | "which is submitted to the\n partition without a time limit. If " 150 | "the DEFAULT JOB-TIME limits are not set, or are set\n to the same value as " 151 | "MAXIMUM JOB-TIME for all partitions in your cluster, the DEFAULT JOB-TIME\n " 152 | "column will not be shown, unless the -l parameter is given.\n\n"); 153 | printf( 154 | "The MAXIMUM JOB-TIME column shows the maximum time limit of a job " 155 | "which is submitted to the\n partition. If the user gives a time limit " 156 | "longer than the MAXIMUM JOB-TIME limit of the\n partition, the job will be " 157 | "rejected by slurm.\n\n"); 158 | printf( 159 | "The CORES/NODE column shows the core count of the node with the " 160 | "lowest core count in the\n partition.
But if -l is given, both " 161 | "the lowest and highest core counts will be shown.\n\n"); 162 | printf( 163 | "The NODE MEM-GB column shows the memory of the lowest-memory node " 164 | "in this partition. But if\n the -l parameter is given, both the lowest " 165 | "and highest memory will be shown.\n\n"); 166 | printf( 167 | "The QOS NAME column shows the default QOS which limits the jobs " 168 | "submitted to the partition.\n If the QOS NAME is not set " 169 | "for all partitions in your cluster, the QOS\n NAME column " 170 | "will not be shown, unless the -l parameter is given.\n\n"); 171 | printf( 172 | "The GRES (COUNT) column shows the generic resources of the nodes " 173 | "in the partition, and (in\n parentheses) the total number of nodes " 174 | "in that partition containing that GRES. The GRES\n (COUNT) column " 175 | "will not be shown, unless the -l or -g parameter is given.\n\n"); 176 | printf( 177 | "If the partition's QOS NAME, MIN NODES, MAX NODES, MAXCPU/NODE, " 178 | "DEFMEM GB/CPU|NODE,\n MAXMEM GB/CPU|NODE, DEFAULT JOB-TIME, and MAXIMUM " 179 | "JOB-TIME limits are not set for\n all partitions in your " 180 | "cluster, the corresponding column(s) will not be shown, unless the -l\n " 181 | "parameter is given.\n\n"); 182 | printf( 183 | "If the values of a column are all the same, that column will not be shown " 184 | "in the partitions block.\n The common value of that column will be shown at " 185 | "COMMON VALUES as a single value.\n\n"); 186 | printf("Parameters:\n\n"); 187 | printf( 188 | "\t-m\tboth the lowest and highest values will be shown in the CORES" 189 | " /NODE\n\t\tand NODE MEM-GB columns.\n\n"); 190 | printf("\t-a\thidden partitions will also be shown.\n\n"); 191 | #ifdef __slurmdb_cluster_rec_t_defined 192 | printf("\t-c\tpartitions from federated clusters will be shown.\n\n"); 193 | #endif 194 | printf( 195 | "\t-g\tthe output shows each GRES (gpu, mic etc.)
defined in that " 196 | "partition and\n\t\t(in parentheses) the total number of nodes in that " 197 | "partition containing\n\t\tthat GRES.\n\n"); 198 | printf( 199 | "\t-i\tthe info about the groups, accounts, QOSs, and queues will " 200 | "be shown.\n\n"); 201 | printf( 202 | "\t-t\tthe time info will be shown in DAY-HR:MN format, instead of " 203 | "the verbal format.\n\n"); 204 | printf( 205 | "\t-f\tthe output shows each FEATURE defined in that partition " 206 | "and (in parentheses)\n\t\tthe total number of nodes in that " 207 | "partition containing that FEATURE.\n\n"); 208 | printf("\t-s\tthe simple output. spart doesn't show slurm config columns.\n\n"); 209 | printf("\t-J\tthe output does not show the info about the user's jobs.\n\n"); 210 | printf( 211 | "\t-p PARTITION_LIST\n\t\tthe output shows only the partitions " 212 | "given in the comma-separated \n\t\tPARTITION_LIST.\n\n"); 213 | printf( 214 | "\t-l\tall possible columns will be shown, except" 215 | " the federated clusters column.\n\n"); 216 | printf("\t-v\tshows info about STATUS LABELS.\n\n"); 217 | printf("\t-h\tshows this usage text.\n\n"); 218 | #ifdef SPART_COMPILE_FOR_UHEM 219 | printf("This is the UHeM version of the spart command.\n"); 220 | #endif 221 | printf("spart version 1.5.1\n\n"); 222 | // exit(1); 223 | } 224 | 225 | /* To store partition info */ 226 | typedef struct sp_part_info { 227 | uint32_t free_cpu; 228 | uint32_t total_cpu; 229 | uint32_t free_node; 230 | uint32_t total_node; 231 | uint32_t waiting_resource; 232 | uint32_t waiting_other; 233 | uint32_t min_nodes; 234 | uint32_t max_nodes; 235 | uint64_t def_mem_per_cpu; 236 | uint64_t max_mem_per_cpu; 237 | uint32_t max_cpus_per_node; 238 | /* MaxJobTime */ 239 | uint32_t mjt_time; 240 | /* DefaultJobTime */ 241 | uint32_t djt_time; 242 | uint32_t min_core; 243 | uint32_t max_core; 244 | uint16_t min_mem_gb; 245 | uint16_t max_mem_gb; 246 | uint16_t visible; 247 | #ifdef SPART_SHOW_STATEMENT 248 | uint16_t show_statement; 249
| #endif 250 | uint32_t my_waiting_resource; 251 | uint32_t my_waiting_other; 252 | uint32_t my_running; 253 | uint32_t my_total; 254 | 255 | char partition_name[SPART_MAX_COLUMN_SIZE]; 256 | #ifdef __slurmdb_cluster_rec_t_defined 257 | char cluster_name[SPART_MAX_COLUMN_SIZE]; 258 | #endif 259 | char partition_qos[SPART_MAX_COLUMN_SIZE]; 260 | char gres[SPART_INFO_STRING_SIZE]; 261 | char features[SPART_INFO_STRING_SIZE]; 262 | char partition_status[SPART_MAX_COLUMN_SIZE]; 263 | } sp_part_info_t; 264 | 265 | /* To store info about a gres */ 266 | typedef struct sp_gres_info { 267 | uint32_t count; 268 | char gres_name[SPART_INFO_STRING_SIZE]; 269 | } sp_gres_info_t; 270 | 271 | /* An output column header info */ 272 | typedef struct sp_column_header { 273 | char line1[SPART_INFO_STRING_SIZE]; 274 | char line2[SPART_INFO_STRING_SIZE]; 275 | uint16_t column_width; 276 | uint16_t visible; 277 | } sp_column_header_t; 278 | 279 | /* To store the output headers */ 280 | typedef struct sp_headers { 281 | sp_column_header_t hspace; 282 | #ifdef __slurmdb_cluster_rec_t_defined 283 | sp_column_header_t cluster_name; 284 | #endif 285 | sp_column_header_t partition_name; 286 | sp_column_header_t partition_status; 287 | sp_column_header_t free_cpu; 288 | sp_column_header_t total_cpu; 289 | sp_column_header_t free_node; 290 | sp_column_header_t total_node; 291 | sp_column_header_t waiting_resource; 292 | sp_column_header_t waiting_other; 293 | sp_column_header_t my_running; 294 | sp_column_header_t my_waiting_resource; 295 | sp_column_header_t my_waiting_other; 296 | sp_column_header_t my_total; 297 | sp_column_header_t min_nodes; 298 | sp_column_header_t max_nodes; 299 | sp_column_header_t max_cpus_per_node; 300 | sp_column_header_t def_mem_per_cpu; 301 | sp_column_header_t max_mem_per_cpu; 302 | /* MaxJobTime */ 303 | sp_column_header_t mjt_time; 304 | sp_column_header_t djt_time; 305 | sp_column_header_t min_core; 306 | sp_column_header_t min_mem_gb; 307 | sp_column_header_t
partition_qos; 308 | sp_column_header_t gres; 309 | sp_column_header_t features; 310 | } sp_headers_t; 311 | 312 | #endif /* SPART_SPART_H_incl */ 313 | -------------------------------------------------------------------------------- /spart_data.h: -------------------------------------------------------------------------------- 1 | /****************************************************************** 2 | * spart : a user-oriented partition info command for slurm 3 | * Author : Cem Ahmet Mercan, 2019-02-16 4 | * Licence : GNU General Public License v2.0 5 | * Note : Some part of this code taken from slurm api man pages 6 | *******************************************************************/ 7 | 8 | #ifndef SPART_SPART_DATA_H_incl 9 | #define SPART_SPART_DATA_H_incl 10 | 11 | #include "spart_string.h" 12 | 13 | /* Add one gres to the partition gres list */ 14 | void sp_gres_add(sp_gres_info_t spga[], uint16_t *sp_gres_count, 15 | char *node_gres) { 16 | uint16_t i; 17 | int found; 18 | char *strtmp = NULL; 19 | char *grestok; 20 | 21 | for (grestok = strtok_r(node_gres, ",", &strtmp); grestok != NULL; 22 | grestok = strtok_r(NULL, ",", &strtmp)) { 23 | found = 0; /* reset for each gres token */ 24 | for (i = 0; i < *sp_gres_count; i++) { 25 | if (strncmp(spga[i].gres_name, grestok, SPART_INFO_STRING_SIZE) == 0) { 26 | spga[i].count++; 27 | found = 1; 28 | break; 29 | } 30 | } 31 | if (found == 0) { 32 | sp_strn2cpy(spga[*sp_gres_count].gres_name, SPART_INFO_STRING_SIZE, 33 | grestok, SPART_INFO_STRING_SIZE); 34 | spga[*sp_gres_count].count = 1; 35 | (*sp_gres_count)++; 36 | } 37 | } 38 | } 39 | 40 | /* Sets all counts to zero in the partition gres list */ 41 | void sp_gres_reset_counts(sp_gres_info_t *spga, uint16_t *sp_gres_count) { 42 | uint16_t i; 43 | for (i = 0; i < *sp_gres_count; i++) { 44 | spga[i].count = 0; 45 | } 46 | (*sp_gres_count) = 0; 47 | } 48 | 49 | /* Checks the permission string against the user_spec list; returns 0 if the 50 | * partition should be hidden */ 51 | int sp_check_permision_set_legend(char
*permisions, char **user_spec, 52 | int user_spec_count, char *legendstr, 53 | const char *r_all, const char *r_some, 54 | const char *r_none) { 55 | int found_count; 56 | // char strtmp[SPART_INFO_STRING_SIZE]; 57 | char *strtmp; 58 | int nstrtmp; 59 | 60 | if (permisions != NULL) { 61 | nstrtmp = strlen(permisions); 62 | if (nstrtmp != 0) { 63 | strtmp = malloc((nstrtmp + 1) * sizeof(char)); /* +1 for the terminating NUL */ 64 | sp_strn2cpy(strtmp, nstrtmp + 1, permisions, nstrtmp + 1); 65 | found_count = sp_account_check(user_spec, user_spec_count, strtmp); 66 | free(strtmp); 67 | if (found_count) { 68 | /* more than zero in the list */ 69 | if (found_count != user_spec_count) { 70 | /* partial match */ 71 | sp_strn2cat(legendstr, SPART_MAX_COLUMN_SIZE, r_some, 2); 72 | } else { 73 | /* found_count = ALL */ 74 | if (r_all != NULL) { 75 | /* this is a deny list */ 76 | sp_strn2cat(legendstr, SPART_MAX_COLUMN_SIZE, r_all, 2); 77 | return 0; 78 | } 79 | } 80 | } else { 81 | /* found_count = 0 */ 82 | if (r_none != NULL) { 83 | /* this is an allow list */ 84 | sp_strn2cat(legendstr, SPART_MAX_COLUMN_SIZE, r_none, 2); 85 | return 0; 86 | } 87 | } 88 | } 89 | } 90 | return 1; 91 | } 92 | 93 | #endif /* SPART_SPART_DATA_H_incl */ 94 | -------------------------------------------------------------------------------- /spart_output.h: -------------------------------------------------------------------------------- 1 | /****************************************************************** 2 | * spart : a user-oriented partition info command for slurm 3 | * Author : Cem Ahmet Mercan, 2019-02-16 4 | * Licence : GNU General Public License v2.0 5 | * Note : Some part of this code taken from slurm api man pages 6 | *******************************************************************/ 7 | 8 | #ifndef SPART_SPART_OUTPUT_H_incl 9 | #define SPART_SPART_OUTPUT_H_incl 10 | 11 | #include <math.h> 12 | #include <signal.h> 13 | #include <slurm/slurm.h> 14 | #include <slurm/slurm_errno.h> 15 | #include <stdio.h> 16 | #include <stdlib.h> 17 | #include <string.h> 18 | #include <time.h> 19 | #include <unistd.h> 20 | #include "spart.h" 21 | #include
"spart_string.h" 22 | #include "spart_data.h" 23 | #include "spart_output.h" 24 | 25 | /* Initialize all column headers */ 26 | void sp_headers_set_defaults(sp_headers_t *sph) { 27 | sph->hspace.visible = 0; 28 | sph->hspace.column_width = 17; 29 | sp_strn2cpy(sph->hspace.line1, sph->hspace.column_width, " ", 30 | sph->hspace.column_width+1); 31 | sp_strn2cpy(sph->hspace.line2, sph->hspace.column_width, " ", 32 | sph->hspace.column_width+1); 33 | #ifdef __slurmdb_cluster_rec_t_defined 34 | sph->cluster_name.visible = 0; 35 | sph->cluster_name.column_width = 8; 36 | sp_strn2cpy(sph->cluster_name.line1, sph->cluster_name.column_width, 37 | " CLUSTER", sph->cluster_name.column_width+1); 38 | sp_strn2cpy(sph->cluster_name.line2, sph->cluster_name.column_width, 39 | " NAME", sph->cluster_name.column_width+1); 40 | #endif 41 | sph->partition_name.visible = 1; 42 | sph->partition_name.column_width = 10; 43 | sp_strn2cpy(sph->partition_name.line1, sph->partition_name.column_width, 44 | " QUEUE", sph->partition_name.column_width+1); 45 | sp_strn2cpy(sph->partition_name.line2, sph->partition_name.column_width, 46 | " PARTITION", sph->partition_name.column_width+1); 47 | sph->partition_status.visible = 1; 48 | sph->partition_status.column_width = 3; 49 | sp_strn2cpy(sph->partition_status.line1, sph->partition_status.column_width, 50 | "STA", sph->partition_status.column_width+1); 51 | sp_strn2cpy(sph->partition_status.line2, sph->partition_status.column_width, 52 | "TUS", sph->partition_status.column_width+1); 53 | sph->free_cpu.visible = 1; 54 | sph->free_cpu.column_width = 6; 55 | sp_strn2cpy(sph->free_cpu.line1, sph->free_cpu.column_width, " FREE", 56 | sph->free_cpu.column_width+1); 57 | sp_strn2cpy(sph->free_cpu.line2, sph->free_cpu.column_width, " CORES", 58 | sph->free_cpu.column_width+1); 59 | sph->total_cpu.visible = 1; 60 | sph->total_cpu.column_width = 6; 61 | sp_strn2cpy(sph->total_cpu.line1, sph->total_cpu.column_width, " TOTAL", 62 | 
sph->total_cpu.column_width+1); 63 | sp_strn2cpy(sph->total_cpu.line2, sph->total_cpu.column_width, " CORES", 64 | sph->total_cpu.column_width+1); 65 | sph->free_node.visible = 1; 66 | sph->free_node.column_width = 6; 67 | sp_strn2cpy(sph->free_node.line1, sph->free_node.column_width, " FREE", 68 | sph->free_node.column_width+1); 69 | sp_strn2cpy(sph->free_node.line2, sph->free_node.column_width, " NODES", 70 | sph->free_node.column_width+1); 71 | sph->total_node.visible = 1; 72 | sph->total_node.column_width = 6; 73 | sp_strn2cpy(sph->total_node.line1, sph->total_node.column_width, " TOTAL", 74 | sph->total_node.column_width+1); 75 | sp_strn2cpy(sph->total_node.line2, sph->total_node.column_width, " NODES", 76 | sph->total_node.column_width+1); 77 | sph->waiting_resource.visible = 1; 78 | sph->waiting_resource.column_width = 6; 79 | sp_strn2cpy(sph->waiting_resource.line1, sph->waiting_resource.column_width, 80 | "RESORC", sph->waiting_resource.column_width+1); 81 | sp_strn2cpy(sph->waiting_resource.line2, sph->waiting_resource.column_width, 82 | "PENDNG", sph->waiting_resource.column_width+1); 83 | sph->waiting_other.visible = 1; 84 | sph->waiting_other.column_width = 6; 85 | sp_strn2cpy(sph->waiting_other.line1, sph->waiting_other.column_width, 86 | " OTHER", sph->waiting_other.column_width+1); 87 | sp_strn2cpy(sph->waiting_other.line2, sph->waiting_other.column_width, 88 | "PENDNG", sph->waiting_other.column_width+1); 89 | sph->my_running.visible = 1; 90 | sph->my_running.column_width = 4; 91 | sp_strn2cpy(sph->my_running.line1, sph->my_running.column_width, "YOUR", 92 | sph->my_running.column_width+1); 93 | sp_strn2cpy(sph->my_running.line2, sph->my_running.column_width, " RUN", 94 | sph->my_running.column_width+1); 95 | sph->my_waiting_resource.visible = 1; 96 | sph->my_waiting_resource.column_width = 4; 97 | sp_strn2cpy(sph->my_waiting_resource.line1, 98 | sph->my_waiting_resource.column_width, "PEND", 99 | sph->my_waiting_resource.column_width+1); 100 | 
sp_strn2cpy(sph->my_waiting_resource.line2, 101 | sph->my_waiting_resource.column_width, " RES", 102 | sph->my_waiting_resource.column_width+1); 103 | sph->my_waiting_other.visible = 1; 104 | sph->my_waiting_other.column_width = 4; 105 | sp_strn2cpy(sph->my_waiting_other.line1, sph->my_waiting_other.column_width, 106 | "PEND", sph->my_waiting_other.column_width+1); 107 | sp_strn2cpy(sph->my_waiting_other.line2, sph->my_waiting_other.column_width, 108 | "OTHR", sph->my_waiting_other.column_width+1); 109 | sph->my_total.visible = 1; 110 | sph->my_total.column_width = 4; 111 | sp_strn2cpy(sph->my_total.line1, sph->my_total.column_width, "YOUR", 112 | sph->my_total.column_width+1); 113 | sp_strn2cpy(sph->my_total.line2, sph->my_total.column_width, "TOTL", 114 | sph->my_total.column_width+1); 115 | sph->min_nodes.visible = 1; 116 | sph->min_nodes.column_width = 5; 117 | sp_strn2cpy(sph->min_nodes.line1, sph->min_nodes.column_width, " MIN", 118 | sph->min_nodes.column_width+1); 119 | sp_strn2cpy(sph->min_nodes.line2, sph->min_nodes.column_width, "NODES", 120 | sph->min_nodes.column_width+1); 121 | sph->max_nodes.visible = 1; 122 | sph->max_nodes.column_width = 5; 123 | sp_strn2cpy(sph->max_nodes.line1, sph->max_nodes.column_width, " MAX", 124 | sph->max_nodes.column_width+1); 125 | sp_strn2cpy(sph->max_nodes.line2, sph->max_nodes.column_width, "NODES", 126 | sph->max_nodes.column_width+1); 127 | sph->max_cpus_per_node.visible = 0; 128 | sph->max_cpus_per_node.column_width = 6; 129 | sp_strn2cpy(sph->max_cpus_per_node.line1, sph->max_cpus_per_node.column_width, 130 | "MAXCPU", sph->max_cpus_per_node.column_width+1); 131 | sp_strn2cpy(sph->max_cpus_per_node.line2, sph->max_cpus_per_node.column_width, 132 | " /NODE", sph->max_cpus_per_node.column_width+1); 133 | sph->max_mem_per_cpu.visible = 0; 134 | sph->max_mem_per_cpu.column_width = 6; 135 | sp_strn2cpy(sph->max_mem_per_cpu.line1, sph->max_mem_per_cpu.column_width, 136 | "MAXMEM", sph->max_mem_per_cpu.column_width+1); 
137 | sp_strn2cpy(sph->max_mem_per_cpu.line2, sph->max_mem_per_cpu.column_width, 138 | "GB/CPU", sph->max_mem_per_cpu.column_width+1); 139 | sph->def_mem_per_cpu.visible = 0; 140 | sph->def_mem_per_cpu.column_width = 6; 141 | sp_strn2cpy(sph->def_mem_per_cpu.line1, sph->def_mem_per_cpu.column_width, 142 | "DEFMEM", sph->def_mem_per_cpu.column_width+1); 143 | sp_strn2cpy(sph->def_mem_per_cpu.line2, sph->def_mem_per_cpu.column_width, 144 | "GB/CPU", sph->def_mem_per_cpu.column_width+1); 145 | sph->djt_time.visible = 0; 146 | sph->djt_time.column_width = 10; 147 | sp_strn2cpy(sph->djt_time.line1, sph->djt_time.column_width, " DEFAULT", 148 | sph->djt_time.column_width+1); 149 | sp_strn2cpy(sph->djt_time.line2, sph->djt_time.column_width, " JOB-TIME", 150 | sph->djt_time.column_width+1); 151 | sph->mjt_time.visible = 1; 152 | sph->mjt_time.column_width = 10; 153 | sp_strn2cpy(sph->mjt_time.line1, sph->mjt_time.column_width, " MAXIMUM", 154 | sph->mjt_time.column_width+1); 155 | sp_strn2cpy(sph->mjt_time.line2, sph->mjt_time.column_width, " JOB-TIME", 156 | sph->mjt_time.column_width+1); 157 | sph->min_core.visible = 1; 158 | sph->min_core.column_width = 6; 159 | sp_strn2cpy(sph->min_core.line1, sph->min_core.column_width, " CORES", 160 | sph->min_core.column_width+1); 161 | sp_strn2cpy(sph->min_core.line2, sph->min_core.column_width, " /NODE", 162 | sph->min_core.column_width+1); 163 | sph->min_mem_gb.visible = 1; 164 | sph->min_mem_gb.column_width = 6; 165 | sp_strn2cpy(sph->min_mem_gb.line1, sph->min_mem_gb.column_width, " NODE", 166 | sph->min_mem_gb.column_width+1); 167 | sp_strn2cpy(sph->min_mem_gb.line2, sph->min_mem_gb.column_width, "MEM-GB", 168 | sph->min_mem_gb.column_width+1); 169 | sph->partition_qos.visible = 1; 170 | sph->partition_qos.column_width = 6; 171 | sp_strn2cpy(sph->partition_qos.line1, sph->partition_qos.column_width, 172 | " QOS", sph->partition_qos.column_width+1); 173 | sp_strn2cpy(sph->partition_qos.line2, sph->partition_qos.column_width, 
174 | " NAME", sph->partition_qos.column_width+1); 175 | sph->gres.visible = 0; 176 | sph->gres.column_width = 12; 177 | sp_strn2cpy(sph->gres.line1, sph->gres.column_width, " GRES ", 178 | sph->gres.column_width+1); 179 | sp_strn2cpy(sph->gres.line2, sph->gres.column_width, "(NODE-COUNT)", 180 | sph->gres.column_width+1); 181 | sph->features.visible = 0; 182 | sph->features.column_width = 12; 183 | sp_strn2cpy(sph->features.line1, sph->features.column_width, " FEATURES ", 184 | sph->features.column_width+1); 185 | sp_strn2cpy(sph->features.line2, sph->features.column_width, "(NODE-COUNT)", 186 | sph->features.column_width+1); 187 | } 188 | 189 | /* Sets the columns shown with the -l parameter as visible */ 190 | void sp_headers_set_parameter_L(sp_headers_t *sph) { 191 | #ifdef __slurmdb_cluster_rec_t_defined 192 | sph->cluster_name.visible = 0; 193 | sph->partition_name.visible = SHOW_LOCAL; 194 | #else 195 | sph->partition_name.visible = 1; 196 | #endif 197 | sph->partition_status.visible = 1; 198 | sph->free_cpu.visible = 1; 199 | sph->total_cpu.visible = 1; 200 | sph->free_node.visible = 1; 201 | sph->total_node.visible = 1; 202 | sph->waiting_resource.visible = 1; 203 | sph->waiting_other.visible = 1; 204 | sph->my_running.visible = 1; 205 | sph->my_waiting_resource.visible = 1; 206 | sph->my_waiting_other.visible = 1; 207 | sph->my_total.visible = 1; 208 | sph->min_nodes.visible = 1; 209 | sph->max_nodes.visible = 1; 210 | sph->max_cpus_per_node.visible = 1; 211 | sph->max_mem_per_cpu.visible = 1; 212 | sph->def_mem_per_cpu.visible = 1; 213 | sph->djt_time.visible = 1; 214 | sph->mjt_time.visible = 1; 215 | sph->min_core.visible = 1; 216 | sph->min_mem_gb.visible = 1; 217 | sph->partition_qos.visible = 1; 218 | sph->gres.visible = 1; 219 | sph->features.visible = 0; 220 | sph->min_core.column_width = 8; 221 | sph->min_mem_gb.column_width = 10; 222 | } 223 | 224 | /* If the column is visible, appends its header to the line1 and line2 strings */ 225 | void sp_column_header_print(char *line1, char
*line2, 226 | sp_column_header_t *spcol) { 227 | char cresult[SPART_MAX_COLUMN_SIZE]; 228 | if (spcol->visible) { 229 | snprintf(cresult, SPART_MAX_COLUMN_SIZE, "%*s", spcol->column_width, 230 | spcol->line1); 231 | sp_strn2cat(line1, SPART_INFO_STRING_SIZE, cresult, spcol->column_width+1); 232 | snprintf(cresult, SPART_MAX_COLUMN_SIZE, "%*s", spcol->column_width, 233 | spcol->line2); 234 | sp_strn2cat(line2, SPART_INFO_STRING_SIZE, cresult, spcol->column_width+1); 235 | sp_strn2cat(line1, SPART_INFO_STRING_SIZE, " ", 2); 236 | sp_strn2cat(line2, SPART_INFO_STRING_SIZE, " ", 2); 237 | } 238 | } 239 | 240 | /* Prints visible Headers */ 241 | void sp_headers_print(sp_headers_t *sph) { 242 | char line1[SPART_INFO_STRING_SIZE]; 243 | char line2[SPART_INFO_STRING_SIZE]; 244 | 245 | line1[0] = 0; 246 | line2[0] = 0; 247 | 248 | sp_column_header_print(line1, line2, &(sph->hspace)); 249 | #ifdef __slurmdb_cluster_rec_t_defined 250 | sp_column_header_print(line1, line2, &(sph->cluster_name)); 251 | #endif 252 | sp_column_header_print(line1, line2, &(sph->partition_name)); 253 | sp_column_header_print(line1, line2, &(sph->partition_status)); 254 | sp_column_header_print(line1, line2, &(sph->free_cpu)); 255 | sp_column_header_print(line1, line2, &(sph->total_cpu)); 256 | sp_column_header_print(line1, line2, &(sph->waiting_resource)); 257 | sp_column_header_print(line1, line2, &(sph->waiting_other)); 258 | sp_column_header_print(line1, line2, &(sph->free_node)); 259 | sp_column_header_print(line1, line2, &(sph->total_node)); 260 | if (!(sph->hspace.visible)) { 261 | sp_strn2cat(line1, SPART_INFO_STRING_SIZE, "|", 2); 262 | sp_strn2cat(line2, SPART_INFO_STRING_SIZE, "|", 2); 263 | } 264 | sp_column_header_print(line1, line2, &(sph->my_running)); 265 | sp_column_header_print(line1, line2, &(sph->my_waiting_resource)); 266 | sp_column_header_print(line1, line2, &(sph->my_waiting_other)); 267 | sp_column_header_print(line1, line2, &(sph->my_total)); 268 | if 
(!(sph->hspace.visible)) { 269 | sp_strn2cat(line1, SPART_INFO_STRING_SIZE, "| ", 3); 270 | sp_strn2cat(line2, SPART_INFO_STRING_SIZE, "| ", 3); 271 | } 272 | sp_column_header_print(line1, line2, &(sph->min_nodes)); 273 | sp_column_header_print(line1, line2, &(sph->max_nodes)); 274 | sp_column_header_print(line1, line2, &(sph->max_cpus_per_node)); 275 | sp_column_header_print(line1, line2, &(sph->def_mem_per_cpu)); 276 | sp_column_header_print(line1, line2, &(sph->max_mem_per_cpu)); 277 | sp_column_header_print(line1, line2, &(sph->djt_time)); 278 | sp_column_header_print(line1, line2, &(sph->mjt_time)); 279 | sp_column_header_print(line1, line2, &(sph->min_core)); 280 | sp_column_header_print(line1, line2, &(sph->min_mem_gb)); 281 | sp_column_header_print(line1, line2, &(sph->partition_qos)); 282 | sp_column_header_print(line1, line2, &(sph->gres)); 283 | sp_column_header_print(line1, line2, &(sph->features)); 284 | printf("%s\n%s\n", line1, line2); 285 | } 286 | 287 | /* Condensed printing for big numbers (k,m,g) */ 288 | void sp_con_print(uint32_t num, uint16_t column_width) { 289 | char cresult[SPART_MAX_COLUMN_SIZE]; 290 | uint16_t clong, cres; 291 | snprintf(cresult, SPART_MAX_COLUMN_SIZE, "%d", num); 292 | clong = strlen(cresult); 293 | if (clong > column_width) 294 | cres = clong - column_width; 295 | else 296 | cres = 0; 297 | switch (cres) { 298 | case 1: 299 | case 2: 300 | printf("%*d%s ", column_width - 1, (uint32_t)(num / 1000), "k"); 301 | break; 302 | 303 | case 3: 304 | printf("%*.1f%s ", column_width - 1, (float)(num / 1000000.0f), "m"); 305 | break; 306 | 307 | case 4: 308 | case 5: 309 | printf("%*d%s ", column_width - 1, (uint32_t)(num / 1000000), "m"); 310 | break; 311 | 312 | case 6: 313 | printf("%*.1f%s ", column_width - 1, (float)(num / 1000000000.0f), "g"); 314 | break; 315 | 316 | case 7: 317 | case 8: 318 | printf("%*d%s ", column_width - 1, (uint32_t)(num / 1000000000), "g"); 319 | break; 320 | 321 | default: 322 | printf("%*d ",
column_width, num); 323 | } 324 | } 325 | 326 | /* Date printing */ 327 | void sp_date_print(uint32_t time_to_show, uint16_t column_width, 328 | int show_as_date) { 329 | uint16_t tday; 330 | uint16_t thour; 331 | uint16_t tminute; 332 | int len; 333 | char cresult[SPART_MAX_COLUMN_SIZE]; 334 | char ctmp[SPART_MAX_COLUMN_SIZE]; 335 | 336 | cresult[0] = '\0'; 337 | 338 | tday = time_to_show / 1440; 339 | thour = (time_to_show - (tday * 1440)) / 60; 340 | tminute = time_to_show - (tday * 1440) - (thour * (uint16_t)60); 341 | 342 | if (show_as_date == 0) { 343 | if ((time_to_show == INFINITE) || (time_to_show == NO_VAL)) 344 | printf(" - "); 345 | else { 346 | if (tday != 0) { 347 | snprintf(cresult, SPART_MAX_COLUMN_SIZE, "%d days ", tday); 348 | } 349 | if (thour != 0) { 350 | snprintf(ctmp, SPART_MAX_COLUMN_SIZE, "%d hours ", thour); 351 | sp_strn2cat(cresult, column_width, ctmp, column_width+1); 352 | } 353 | if (tminute != 0) { 354 | snprintf(ctmp, SPART_MAX_COLUMN_SIZE, "%d mins ", tminute); 355 | sp_strn2cat(cresult, column_width, ctmp, column_width+1); 356 | } 357 | len = strlen(cresult); 358 | /* delete last space, if any */ 359 | if (len > 0) cresult[len - 1] = '\0'; 360 | if (len < column_width) 361 | printf("%10s ", cresult); 362 | else 363 | printf("%4d-%02d:%02d ", tday, thour, tminute); 364 | } 365 | } else { 366 | if ((time_to_show == INFINITE) || (time_to_show == NO_VAL)) 367 | printf(" - "); 368 | else { 369 | printf("%4d-%02d:%02d ", tday, thour, tminute); 370 | } 371 | } 372 | } 373 | 374 | /* Condensed printing for big numbers (k,m,g) to the string */ 375 | void sp_con_strprint(char *str, uint16_t size, uint32_t num) { 376 | char cresult[SPART_MAX_COLUMN_SIZE]; 377 | snprintf(cresult, SPART_MAX_COLUMN_SIZE, "%d", num); 378 | switch (strlen(cresult)) { 379 | case 5: 380 | case 6: 381 | snprintf(str, size, "%d%s", (uint32_t)(num / 1000), "k"); 382 | break; 383 | 384 | case 7: 385 | snprintf(str, size, "%.1f%s", (float)(num / 1000000.0f), "m"); 386 | break; 387 | 388 | case
8: 389 | case 9: 390 | snprintf(str, size, "%d%s", (uint32_t)(num / 1000000), "m"); 391 | break; 392 | 393 | case 10: 394 | snprintf(str, size, "%.1f%s", (float)(num / 1000000000.0f), "g"); 395 | break; 396 | 397 | case 11: 398 | case 12: 399 | snprintf(str, size, "%d%s", (uint32_t)(num / 1000000000), "g"); 400 | break; 401 | 402 | default: 403 | snprintf(str, size, "%d", num); 404 | } 405 | } 406 | 407 | /* Prints a horizontal separator such as ======= */ 408 | void sp_seperator_print(const char ch, const int count) { 409 | int x; 410 | for (x = 0; x < count; x++) printf("%c", ch); 411 | } 412 | 413 | #ifdef SPART_SHOW_STATEMENT 414 | void sp_statement_print(const char *stfile, const char *stpartition, 415 | const int total_width) { 416 | 417 | char re_str[SPART_INFO_STRING_SIZE]; 418 | FILE *fo; 419 | int m, k; 420 | fo = fopen(stfile, "r"); 421 | if (fo) { 422 | printf("\n"); 423 | while (fgets(re_str, SPART_INFO_STRING_SIZE, fo)) { 424 | /* To correctly frame some wide chars, but not all */ 425 | m = 0; 426 | for (k = 0; (re_str[k] != '\0') && k < SPART_INFO_STRING_SIZE; k++) { 427 | if ((re_str[k] < -58) && (re_str[k] > -62)) m++; 428 | if (re_str[k] == '\n') re_str[k] = '\0'; 429 | } 430 | printf(" %s %-*s %s\n", SPART_STATEMENT_QUEUE_LINEPRE, 92 + m, re_str, 431 | SPART_STATEMENT_QUEUE_LINEPOST); 432 | } 433 | printf(" %s ", SPART_STATEMENT_QUEUE_LINEPRE); 434 | sp_seperator_print('-', total_width); 435 | printf(" %s\n\n", SPART_STATEMENT_QUEUE_LINEPOST); 436 | fclose(fo); 437 | } else { 438 | printf(" %s ", SPART_STATEMENT_QUEUE_LINEPRE); 439 | sp_seperator_print('-', total_width); 440 | printf(" %s\n", SPART_STATEMENT_QUEUE_LINEPOST); 441 | } 442 | } 443 | #endif 444 | 445 | /* Prints a partition's info */ 446 | void sp_partition_print(sp_part_info_t *sp, sp_headers_t *sph, int show_max_mem, 447 | int show_as_date, int total_width) { 448 | char mem_result[SPART_INFO_STRING_SIZE]; 449 | if (sp->visible) { 450 | if (sph->hspace.visible) 451 | printf("%*s ",
sph->hspace.column_width, "COMMON VALUES:"); 452 | #ifdef __slurmdb_cluster_rec_t_defined 453 | if (sph->cluster_name.visible) 454 | printf("%*s ", sph->cluster_name.column_width, sp->cluster_name); 455 | #endif 456 | if (sph->partition_name.visible) 457 | printf("%*s ", sph->partition_name.column_width, sp->partition_name); 458 | if (sph->partition_status.visible) 459 | printf("%*s ", sph->partition_status.column_width, sp->partition_status); 460 | if (sph->free_cpu.visible) 461 | sp_con_print(sp->free_cpu, sph->free_cpu.column_width); 462 | if (sph->total_cpu.visible) 463 | sp_con_print(sp->total_cpu, sph->total_cpu.column_width); 464 | if (sph->waiting_resource.visible) 465 | sp_con_print(sp->waiting_resource, sph->waiting_resource.column_width); 466 | if (sph->waiting_other.visible) 467 | sp_con_print(sp->waiting_other, sph->waiting_other.column_width); 468 | if (sph->free_node.visible) 469 | sp_con_print(sp->free_node, sph->free_node.column_width); 470 | if (sph->total_node.visible) 471 | sp_con_print(sp->total_node, sph->total_node.column_width); 472 | if (!(sph->hspace.visible)) printf("|"); 473 | if (sph->my_running.visible) 474 | sp_con_print(sp->my_running, sph->my_running.column_width); 475 | if (sph->my_waiting_resource.visible) 476 | sp_con_print(sp->my_waiting_resource, 477 | sph->my_waiting_resource.column_width); 478 | if (sph->my_waiting_other.visible) 479 | sp_con_print(sp->my_waiting_other, sph->my_waiting_other.column_width); 480 | if (sph->my_total.visible) 481 | sp_con_print(sp->my_total, sph->my_total.column_width); 482 | if (!(sph->hspace.visible)) printf("| "); 483 | if (sph->min_nodes.visible) 484 | sp_con_print(sp->min_nodes, sph->min_nodes.column_width); 485 | if (sph->max_nodes.visible) { 486 | if (sp->max_nodes == UINT_MAX) 487 | printf("%*s ", sph->max_nodes.column_width, "-"); 488 | else 489 | sp_con_print(sp->max_nodes, sph->max_nodes.column_width); 490 | } 491 | if (sph->max_cpus_per_node.visible) { 492 | if ((sp->max_cpus_per_node
== UINT_MAX) || (sp->max_cpus_per_node == 0)) 493 | printf("%*s ", sph->max_cpus_per_node.column_width, "-"); 494 | else 495 | sp_con_print(sp->max_cpus_per_node, 496 | sph->max_cpus_per_node.column_width); 497 | } 498 | if (sph->def_mem_per_cpu.visible) { 499 | if ((sp->def_mem_per_cpu == UINT_MAX) || (sp->def_mem_per_cpu == 0)) 500 | printf("%*s ", sph->def_mem_per_cpu.column_width, "-"); 501 | else 502 | sp_con_print(sp->def_mem_per_cpu, sph->def_mem_per_cpu.column_width); 503 | } 504 | if (sph->max_mem_per_cpu.visible) { 505 | if ((sp->max_mem_per_cpu == UINT_MAX) || (sp->max_mem_per_cpu == 0)) 506 | printf("%*s ", sph->max_mem_per_cpu.column_width, "-"); 507 | else 508 | sp_con_print(sp->max_mem_per_cpu, sph->max_mem_per_cpu.column_width); 509 | } 510 | if (sph->djt_time.visible) { 511 | sp_date_print(sp->djt_time, sph->djt_time.column_width, show_as_date); 512 | } 513 | if (sph->mjt_time.visible) { 514 | sp_date_print(sp->mjt_time, sph->mjt_time.column_width, show_as_date); 515 | } 516 | if (sph->min_core.visible) { 517 | if ((show_max_mem == 1) && (sp->min_core != sp->max_core)) 518 | snprintf(mem_result, SPART_INFO_STRING_SIZE, "%d-%d", sp->min_core, 519 | sp->max_core); 520 | else 521 | snprintf(mem_result, SPART_INFO_STRING_SIZE, "%*d", 522 | sph->min_core.column_width, sp->min_core); 523 | printf("%*s ", sph->min_core.column_width, mem_result); 524 | } 525 | if (sph->min_mem_gb.visible) { 526 | if ((show_max_mem == 1) && (sp->min_mem_gb != sp->max_mem_gb)) 527 | snprintf(mem_result, SPART_INFO_STRING_SIZE, "%d-%d", sp->min_mem_gb, 528 | sp->max_mem_gb); 529 | else 530 | snprintf(mem_result, SPART_INFO_STRING_SIZE, "%*d", 531 | sph->min_mem_gb.column_width, sp->min_mem_gb); 532 | printf("%*s ", sph->min_mem_gb.column_width, mem_result); 533 | } 534 | if (sph->partition_qos.visible) 535 | printf("%*s ", sph->partition_qos.column_width, sp->partition_qos); 536 | 537 | if (sph->gres.visible) printf("%-*s ", sph->gres.column_width, sp->gres); 538 | if 
(sph->features.visible) 539 | printf("%-*s ", sph->features.column_width, sp->features); 540 | printf("\n"); 541 | #ifdef SPART_SHOW_STATEMENT 542 | if (sp->show_statement && !(sph->hspace.visible)) { 543 | snprintf(mem_result, SPART_INFO_STRING_SIZE, "%s%s%s%s", 544 | SPART_STATEMENT_DIR, SPART_STATEMENT_QUEPRE, sp->partition_name, 545 | SPART_STATEMENT_QUEPOST); 546 | sp_statement_print(mem_result, sp->partition_name, total_width); 547 | } 548 | #endif 549 | } 550 | } 551 | 552 | /* Prints user info (-i parameter output) */ 553 | void sp_print_user_info(char *user_name, char **user_group, 554 | int user_group_count, char **user_acct, 555 | int user_acct_count, char **user_qos, 556 | int user_qos_count) { 557 | int k; 558 | printf(" Your username: %s\n", user_name); 559 | printf(" Your group(s): "); 560 | for (k = 0; k < user_group_count; k++) { 561 | printf("%s ", user_group[k]); 562 | } 563 | printf("\n"); 564 | printf(" Your account(s): "); 565 | for (k = 0; k < user_acct_count; k++) { 566 | printf("%s ", user_acct[k]); 567 | } 568 | printf("\n Your qos(s): "); 569 | for (k = 0; k < user_qos_count; k++) { 570 | printf("%s ", user_qos[k]); 571 | } 572 | printf("\n"); 573 | } 574 | 575 | #endif /* SPART_SPART_OUTPUT_H_incl */ 576 | -------------------------------------------------------------------------------- /spart_string.h: -------------------------------------------------------------------------------- 1 | /****************************************************************** 2 | * spart : a user-oriented partition info command for slurm 3 | * Author : Cem Ahmet Mercan, 2019-02-16 4 | * Licence : GNU General Public License v2.0 5 | * Note : Some part of this code taken from slurm api man pages 6 | *******************************************************************/ 7 | 8 | #ifndef SPART_SPART_STRING_H_incl 9 | #define SPART_SPART_STRING_H_incl 10 | 11 | #include <string.h> 12 | 13 | size_t sp_str_available(char *s, size_t maxlen) { 14 | size_t filled = strnlen(s,
maxlen); 15 | if (filled < maxlen) 16 | return maxlen - filled - 1; 17 | else 18 | return 0; 19 | } 20 | 21 | char *sp_strn2cat(char *dest, size_t ndest, const char *src, size_t nsrc) { 22 | size_t available = 0; 23 | size_t filleddest = strnlen(dest, ndest); 24 | // size_t filledsrc = strnlen(src, nsrc); 25 | dest[ndest - 1] = 0; 26 | if (filleddest >= ndest) return dest; 27 | available = ndest - filleddest - 1; 28 | // if (filledsrc > nsrc) filledsrc = nsrc; 29 | // if (filledsrc > available) filledsrc = available; 30 | // return strncat(dest, src, filledsrc); 31 | if (available > nsrc) available = nsrc; 32 | return strncat(dest, src, available); 33 | } 34 | 35 | char *sp_strn2cpy(char *dest, size_t ndest, const char *src, size_t nsrc) { 36 | // size_t filledsrc = strnlen(src, nsrc); 37 | size_t available = nsrc; 38 | // printf("sp_strn2cpy dest=%s ndest=%d src=%s nsrc=%d\n", 39 | // dest,ndest,src,nsrc); 40 | // return strncpy(dest, src, nsrc); 41 | // if (filledsrc > nsrc) filledsrc = nsrc; 42 | // if (filledsrc > ndest) filledsrc = ndest; 43 | // dest[ndest - 1] = 0; 44 | // return strncpy(dest, src, filledsrc); 45 | if (ndest < nsrc) available = ndest; 46 | return strncpy(dest, src, available); 47 | } 48 | 49 | /* Checks which of the user's accounts are present in the partition accounts */ 50 | /* Searches for each key in a string containing comma-separated keys */ 51 | int sp_account_check(char **key_list, int key_count, char *comma_sep_str) { 52 | uint16_t i; 53 | int *found = NULL; 54 | int parti = 0; 55 | char *strtmp = NULL; 56 | char *grestok; 57 | 58 | found = malloc(key_count * sizeof(int)); 59 | for (i = 0; i < key_count; i++) found[i] = 0; 60 | 61 | for (grestok = strtok_r(comma_sep_str, ",", &strtmp); grestok != NULL; 62 | grestok = strtok_r(NULL, ",", &strtmp)) { 63 | for (i = 0; i < key_count; i++) { 64 | if (strncmp(key_list[i], grestok, SPART_INFO_STRING_SIZE) == 0) { 65 | found[i]++; 66 | break; 67 | } 68 | } 69 | } 70 | /* How many of the user's accounts/groups are listed in the partition list */ 71 | for (i = 0; i < key_count; i++) 72 | if (found[i] > 0) parti++; 73 | free(found); 74 | return parti; 75 | } 76 | 77 | /* Search for chr in str. If not found, adds chr to str. */ 78 | void sp_char_check(char *str, int nstr, const char *chr, int nchr) { 79 | char c[2]; 80 | int l = strnlen(chr, nchr); 81 | int i = 0; 82 | c[1] = 0; 83 | for (i = 0; i < l; i++) { 84 | c[0] = chr[i]; 85 | if (strstr(str, c) == NULL) { 86 | sp_strn2cat(str, nstr, c, 2); 87 | } 88 | } 89 | } 90 | 91 | #endif /* SPART_SPART_STRING_H_incl */ 92 | --------------------------------------------------------------------------------