├── .gitignore ├── LICENSE.txt ├── README.md ├── UnixBench ├── .cproject ├── .project ├── Makefile ├── README ├── Run ├── USAGE ├── WRITING_TESTS ├── pgms │ ├── gfx-x11 │ ├── index.base │ ├── multi.sh │ ├── tst.sh │ └── unixbench.logo ├── src │ ├── arith.c │ ├── big.c │ ├── context1.c │ ├── dhry.h │ ├── dhry_1.c │ ├── dhry_2.c │ ├── dummy.c │ ├── execl.c │ ├── fstime.c │ ├── hanoi.c │ ├── looper.c │ ├── pipe.c │ ├── spawn.c │ ├── syscall.c │ ├── time-polling.c │ ├── timeit.c │ ├── ubgears.c │ └── whets.c └── testdir │ ├── cctest.c │ ├── dc.dat │ ├── large.txt │ └── sort.src └── unixbench.sh /.gitignore: -------------------------------------------------------------------------------- 1 | UnixBench/pgms 2 | UnixBench/results 3 | UnixBench/tmp 4 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc., 5 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The licenses for most software are designed to take away your 12 | freedom to share and change it. By contrast, the GNU General Public 13 | License is intended to guarantee your freedom to share and change free 14 | software--to make sure the software is free for all its users. This 15 | General Public License applies to most of the Free Software 16 | Foundation's software and to any other program whose authors commit to 17 | using it. (Some other Free Software Foundation software is covered by 18 | the GNU Lesser General Public License instead.) You can apply it to 19 | your programs, too. 20 | 21 | When we speak of free software, we are referring to freedom, not 22 | price. Our General Public Licenses are designed to make sure that you 23 | have the freedom to distribute copies of free software (and charge for 24 | this service if you wish), that you receive source code or can get it 25 | if you want it, that you can change the software or use pieces of it 26 | in new free programs; and that you know you can do these things. 27 | 28 | To protect your rights, we need to make restrictions that forbid 29 | anyone to deny you these rights or to ask you to surrender the rights. 30 | These restrictions translate to certain responsibilities for you if you 31 | distribute copies of the software, or if you modify it. 32 | 33 | For example, if you distribute copies of such a program, whether 34 | gratis or for a fee, you must give the recipients all the rights that 35 | you have. You must make sure that they, too, receive or can get the 36 | source code. And you must show them these terms so they know their 37 | rights. 38 | 39 | We protect your rights with two steps: (1) copyright the software, and 40 | (2) offer you this license which gives you legal permission to copy, 41 | distribute and/or modify the software. 42 | 43 | Also, for each author's protection and ours, we want to make certain 44 | that everyone understands that there is no warranty for this free 45 | software. If the software is modified by someone else and passed on, we 46 | want its recipients to know that what they have is not the original, so 47 | that any problems introduced by others will not reflect on the original 48 | authors' reputations. 
49 | 50 | Finally, any free program is threatened constantly by software 51 | patents. We wish to avoid the danger that redistributors of a free 52 | program will individually obtain patent licenses, in effect making the 53 | program proprietary. To prevent this, we have made it clear that any 54 | patent must be licensed for everyone's free use or not licensed at all. 55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | GNU GENERAL PUBLIC LICENSE 60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 61 | 62 | 0. This License applies to any program or other work which contains 63 | a notice placed by the copyright holder saying it may be distributed 64 | under the terms of this General Public License. The "Program", below, 65 | refers to any such program or work, and a "work based on the Program" 66 | means either the Program or any derivative work under copyright law: 67 | that is to say, a work containing the Program or a portion of it, 68 | either verbatim or with modifications and/or translated into another 69 | language. (Hereinafter, translation is included without limitation in 70 | the term "modification".) Each licensee is addressed as "you". 71 | 72 | Activities other than copying, distribution and modification are not 73 | covered by this License; they are outside its scope. The act of 74 | running the Program is not restricted, and the output from the Program 75 | is covered only if its contents constitute a work based on the 76 | Program (independent of having been made by running the Program). 77 | Whether that is true depends on what the Program does. 78 | 79 | 1. You may copy and distribute verbatim copies of the Program's 80 | source code as you receive it, in any medium, provided that you 81 | conspicuously and appropriately publish on each copy an appropriate 82 | copyright notice and disclaimer of warranty; keep intact all the 83 | notices that refer to this License and to the absence of any warranty; 84 | and give any other recipients of the Program a copy of this License 85 | along with the Program. 86 | 87 | You may charge a fee for the physical act of transferring a copy, and 88 | you may at your option offer warranty protection in exchange for a fee. 89 | 90 | 2. You may modify your copy or copies of the Program or any portion 91 | of it, thus forming a work based on the Program, and copy and 92 | distribute such modifications or work under the terms of Section 1 93 | above, provided that you also meet all of these conditions: 94 | 95 | a) You must cause the modified files to carry prominent notices 96 | stating that you changed the files and the date of any change. 97 | 98 | b) You must cause any work that you distribute or publish, that in 99 | whole or in part contains or is derived from the Program or any 100 | part thereof, to be licensed as a whole at no charge to all third 101 | parties under the terms of this License. 102 | 103 | c) If the modified program normally reads commands interactively 104 | when run, you must cause it, when started running for such 105 | interactive use in the most ordinary way, to print or display an 106 | announcement including an appropriate copyright notice and a 107 | notice that there is no warranty (or else, saying that you provide 108 | a warranty) and that users may redistribute the program under 109 | these conditions, and telling the user how to view a copy of this 110 | License. 
(Exception: if the Program itself is interactive but 111 | does not normally print such an announcement, your work based on 112 | the Program is not required to print an announcement.) 113 | 114 | These requirements apply to the modified work as a whole. If 115 | identifiable sections of that work are not derived from the Program, 116 | and can be reasonably considered independent and separate works in 117 | themselves, then this License, and its terms, do not apply to those 118 | sections when you distribute them as separate works. But when you 119 | distribute the same sections as part of a whole which is a work based 120 | on the Program, the distribution of the whole must be on the terms of 121 | this License, whose permissions for other licensees extend to the 122 | entire whole, and thus to each and every part regardless of who wrote it. 123 | 124 | Thus, it is not the intent of this section to claim rights or contest 125 | your rights to work written entirely by you; rather, the intent is to 126 | exercise the right to control the distribution of derivative or 127 | collective works based on the Program. 128 | 129 | In addition, mere aggregation of another work not based on the Program 130 | with the Program (or with a work based on the Program) on a volume of 131 | a storage or distribution medium does not bring the other work under 132 | the scope of this License. 133 | 134 | 3. You may copy and distribute the Program (or a work based on it, 135 | under Section 2) in object code or executable form under the terms of 136 | Sections 1 and 2 above provided that you also do one of the following: 137 | 138 | a) Accompany it with the complete corresponding machine-readable 139 | source code, which must be distributed under the terms of Sections 140 | 1 and 2 above on a medium customarily used for software interchange; or, 141 | 142 | b) Accompany it with a written offer, valid for at least three 143 | years, to give any third party, for a charge no more than your 144 | cost of physically performing source distribution, a complete 145 | machine-readable copy of the corresponding source code, to be 146 | distributed under the terms of Sections 1 and 2 above on a medium 147 | customarily used for software interchange; or, 148 | 149 | c) Accompany it with the information you received as to the offer 150 | to distribute corresponding source code. (This alternative is 151 | allowed only for noncommercial distribution and only if you 152 | received the program in object code or executable form with such 153 | an offer, in accord with Subsection b above.) 154 | 155 | The source code for a work means the preferred form of the work for 156 | making modifications to it. For an executable work, complete source 157 | code means all the source code for all modules it contains, plus any 158 | associated interface definition files, plus the scripts used to 159 | control compilation and installation of the executable. However, as a 160 | special exception, the source code distributed need not include 161 | anything that is normally distributed (in either source or binary 162 | form) with the major components (compiler, kernel, and so on) of the 163 | operating system on which the executable runs, unless that component 164 | itself accompanies the executable. 
165 | 166 | If distribution of executable or object code is made by offering 167 | access to copy from a designated place, then offering equivalent 168 | access to copy the source code from the same place counts as 169 | distribution of the source code, even though third parties are not 170 | compelled to copy the source along with the object code. 171 | 172 | 4. You may not copy, modify, sublicense, or distribute the Program 173 | except as expressly provided under this License. Any attempt 174 | otherwise to copy, modify, sublicense or distribute the Program is 175 | void, and will automatically terminate your rights under this License. 176 | However, parties who have received copies, or rights, from you under 177 | this License will not have their licenses terminated so long as such 178 | parties remain in full compliance. 179 | 180 | 5. You are not required to accept this License, since you have not 181 | signed it. However, nothing else grants you permission to modify or 182 | distribute the Program or its derivative works. These actions are 183 | prohibited by law if you do not accept this License. Therefore, by 184 | modifying or distributing the Program (or any work based on the 185 | Program), you indicate your acceptance of this License to do so, and 186 | all its terms and conditions for copying, distributing or modifying 187 | the Program or works based on it. 188 | 189 | 6. Each time you redistribute the Program (or any work based on the 190 | Program), the recipient automatically receives a license from the 191 | original licensor to copy, distribute or modify the Program subject to 192 | these terms and conditions. You may not impose any further 193 | restrictions on the recipients' exercise of the rights granted herein. 194 | You are not responsible for enforcing compliance by third parties to 195 | this License. 196 | 197 | 7. If, as a consequence of a court judgment or allegation of patent 198 | infringement or for any other reason (not limited to patent issues), 199 | conditions are imposed on you (whether by court order, agreement or 200 | otherwise) that contradict the conditions of this License, they do not 201 | excuse you from the conditions of this License. If you cannot 202 | distribute so as to satisfy simultaneously your obligations under this 203 | License and any other pertinent obligations, then as a consequence you 204 | may not distribute the Program at all. For example, if a patent 205 | license would not permit royalty-free redistribution of the Program by 206 | all those who receive copies directly or indirectly through you, then 207 | the only way you could satisfy both it and this License would be to 208 | refrain entirely from distribution of the Program. 209 | 210 | If any portion of this section is held invalid or unenforceable under 211 | any particular circumstance, the balance of the section is intended to 212 | apply and the section as a whole is intended to apply in other 213 | circumstances. 214 | 215 | It is not the purpose of this section to induce you to infringe any 216 | patents or other property right claims or to contest validity of any 217 | such claims; this section has the sole purpose of protecting the 218 | integrity of the free software distribution system, which is 219 | implemented by public license practices. 
Many people have made 220 | generous contributions to the wide range of software distributed 221 | through that system in reliance on consistent application of that 222 | system; it is up to the author/donor to decide if he or she is willing 223 | to distribute software through any other system and a licensee cannot 224 | impose that choice. 225 | 226 | This section is intended to make thoroughly clear what is believed to 227 | be a consequence of the rest of this License. 228 | 229 | 8. If the distribution and/or use of the Program is restricted in 230 | certain countries either by patents or by copyrighted interfaces, the 231 | original copyright holder who places the Program under this License 232 | may add an explicit geographical distribution limitation excluding 233 | those countries, so that distribution is permitted only in or among 234 | countries not thus excluded. In such case, this License incorporates 235 | the limitation as if written in the body of this License. 236 | 237 | 9. The Free Software Foundation may publish revised and/or new versions 238 | of the General Public License from time to time. Such new versions will 239 | be similar in spirit to the present version, but may differ in detail to 240 | address new problems or concerns. 241 | 242 | Each version is given a distinguishing version number. If the Program 243 | specifies a version number of this License which applies to it and "any 244 | later version", you have the option of following the terms and conditions 245 | either of that version or of any later version published by the Free 246 | Software Foundation. If the Program does not specify a version number of 247 | this License, you may choose any version ever published by the Free Software 248 | Foundation. 249 | 250 | 10. If you wish to incorporate parts of the Program into other free 251 | programs whose distribution conditions are different, write to the author 252 | to ask for permission. For software which is copyrighted by the Free 253 | Software Foundation, write to the Free Software Foundation; we sometimes 254 | make exceptions for this. Our decision will be guided by the two goals 255 | of preserving the free status of all derivatives of our free software and 256 | of promoting the sharing and reuse of software generally. 257 | 258 | NO WARRANTY 259 | 260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN 262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 268 | REPAIR OR CORRECTION. 269 | 270 | 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 278 | POSSIBILITY OF SUCH DAMAGES. 279 | 280 | END OF TERMS AND CONDITIONS 281 | 282 | How to Apply These Terms to Your New Programs 283 | 284 | If you develop a new program, and you want it to be of the greatest 285 | possible use to the public, the best way to achieve this is to make it 286 | free software which everyone can redistribute and change under these terms. 287 | 288 | To do so, attach the following notices to the program. It is safest 289 | to attach them to the start of each source file to most effectively 290 | convey the exclusion of warranty; and each file should have at least 291 | the "copyright" line and a pointer to where the full notice is found. 292 | 293 | 294 | Copyright (C) 295 | 296 | This program is free software; you can redistribute it and/or modify 297 | it under the terms of the GNU General Public License as published by 298 | the Free Software Foundation; either version 2 of the License, or 299 | (at your option) any later version. 300 | 301 | This program is distributed in the hope that it will be useful, 302 | but WITHOUT ANY WARRANTY; without even the implied warranty of 303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 304 | GNU General Public License for more details. 305 | 306 | You should have received a copy of the GNU General Public License along 307 | with this program; if not, write to the Free Software Foundation, Inc., 308 | 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 309 | 310 | Also add information on how to contact you by electronic and paper mail. 311 | 312 | If the program is interactive, make it output a short notice like this 313 | when it starts in an interactive mode: 314 | 315 | Gnomovision version 69, Copyright (C) year name of author 316 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 317 | This is free software, and you are welcome to redistribute it 318 | under certain conditions; type `show c' for details. 319 | 320 | The hypothetical commands `show w' and `show c' should show the appropriate 321 | parts of the General Public License. Of course, the commands you use may 322 | be called something other than `show w' and `show c'; they could even be 323 | mouse-clicks or menu items--whatever suits your program. 324 | 325 | You should also get your employer (if you work as a programmer) or your 326 | school, if any, to sign a "copyright disclaimer" for the program, if 327 | necessary. Here is a sample; alter the names: 328 | 329 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 330 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 331 | 332 | , 1 April 1989 333 | Ty Coon, President of Vice 334 | 335 | This General Public License does not permit incorporating your program into 336 | proprietary programs. 
If your program is a subroutine library, you may 337 | consider it more useful to permit linking proprietary applications with the 338 | library. If this is what you want to do, use the GNU Lesser General 339 | Public License instead of this License. 340 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # byte-unixbench 2 | 3 | **UnixBench** is the original BYTE UNIX benchmark suite, updated and revised by many people over the years. 4 | 5 | The purpose of UnixBench is to provide a basic indicator of the performance of a Unix-like system; hence, multiple 6 | tests are used to exercise various aspects of the system's performance. These test results are then compared to the 7 | scores from a baseline system to produce an index value, which is generally easier to handle than the raw scores. 8 | The entire set of index values is then combined to make an overall index for the system. 9 | 10 | Some very simple graphics tests are included to measure the 2D and 3D graphics performance of the system. 11 | 12 | Multi-CPU systems are handled. If your system has multiple CPUs, the default behaviour is to run the selected tests 13 | twice -- once with one copy of each test program running at a time, and once with N copies, where N is the number of 14 | CPUs. This is designed to allow you to assess: 15 | 16 | * the performance of your system when running a single task 17 | * the performance of your system when running multiple tasks 18 | * the gain from your system's implementation of parallel processing 19 | 20 | Do be aware that this is a system benchmark, not a CPU, RAM or disk benchmark. The results will depend not only on 21 | your hardware, but on your operating system, libraries, and even compiler. 22 | 23 | ## History 24 | 25 | **UnixBench** was started in 1983 at Monash University as a simple synthetic benchmarking application. It 26 | was then taken up and expanded by **Byte Magazine**; the original authors were Ben Smith, Rick Grehan, and Tom Yager, 27 | with Linux modifications by Jon Tombs. The tests compare Unix systems by comparing their results to a set of scores set 28 | by running the code on a benchmark system, which is a SPARCstation 20-61 (rated at 10.0). 29 | 30 | David C. Niemi maintained the program for quite some time, made some major modifications and updates, 31 | and produced **UnixBench 4**. He later gave the program to Ian Smith to maintain. Ian subsequently made 32 | some major changes and revised it from version 4 to version 5. 33 | 34 | Thanks to Ian Smith for managing the releases up to 5.1.3. As of the next release (5.2), [Anthony F. Voellm](https://github.com/voellm) is going to help maintain the code base. Releases will happen once there are enough pull requests to warrant a new release. 35 | 36 | The general process will be the following: 37 | 38 | * Open a bug announcing that a new release will happen. 39 | * Everything on the `dev` branch will be run. 40 | * Code will move from the `dev` branch into `main` and be tagged. Bug-fix releases will increment the minor version, and major functionality changes will increase the major version. 41 | 42 | ## Included Tests 43 | 44 | UnixBench consists of a number of individual tests that are targeted at specific areas. Here is a summary of what 45 | each test does: 46 | 47 | ### Dhrystone 48 | 49 | Developed by Reinhold Weicker in 1984, this benchmark is used to measure and compare the performance of computers.
The test focuses on string handling, as there are no floating point operations. It is heavily influenced by hardware and software design, compiler and linker options, code optimization, cache memory, wait states, and integer data types. 50 | 51 | ### Whetstone 52 | 53 | This test measures the speed and efficiency of floating-point operations. It contains several modules that are meant to represent a mix of operations typically performed in scientific applications. A wide variety of C functions including `sin`, `cos`, `sqrt`, `exp`, and `log` are used, as well as integer and floating-point math operations, array accesses, conditional branches, and procedure calls. This test measures both integer and floating-point arithmetic. 54 | 55 | ### `execl` Throughput 56 | 57 | This test measures the number of `execl` calls that can be performed per second. `execl` is part of the exec family of functions that replaces the current process image with a new process image. It and many other similar functions are front ends for the function `execve()`. 58 | 59 | ### File Copy 60 | 61 | This measures the rate at which data can be transferred from one file to another, using various buffer sizes. The file read, write and copy tests capture the number of characters that can be written, read and copied in a specified time (default is 10 seconds). 62 | 63 | ### Pipe Throughput 64 | 65 | A pipe is the simplest form of communication between processes. Pipe throughput is the number of times (per second) a process can write 512 bytes to a pipe and read them back. The pipe throughput test has no real counterpart in real-world programming. 66 | 67 | ### Pipe-based Context Switching 68 | 69 | This test measures the number of times two processes can exchange an increasing integer through a pipe. The pipe-based context switching test is more like a real-world application. The test program spawns a child process with which it carries on a bi-directional pipe conversation. 70 | 71 | ### Process Creation 72 | 73 | This test measures the number of times a process can fork and reap a child that immediately exits. Process creation refers to actually creating process control blocks and memory allocations for new processes, so this applies directly to memory bandwidth. Typically, this benchmark would be used to compare various implementations of operating system process creation calls. 74 | 75 | ### Shell Scripts 76 | 77 | The shell scripts test measures the number of times per minute a process can start and reap a set of one, two, four and eight concurrent copies of a shell script, where the shell script applies a series of transformations to a data file. 78 | 79 | ### System Call Overhead 80 | 81 | This estimates the cost of entering and leaving the operating system kernel, i.e., the overhead for performing a system call. It consists of a simple program repeatedly calling the `getpid` system call (which returns the process ID of the calling process). The time to execute such calls is used to estimate the cost of entering and exiting the kernel. 82 | 83 | ### Graphical Tests 84 | 85 | Both 2D and 3D graphical tests are provided; at the moment, the 3D suite in particular is very limited, consisting of the `ubgears` program. These tests are intended to provide a very rough idea of the system's 2D and 3D graphics performance. Bear in mind, of course, that the reported performance will depend not only on hardware, but on whether your system has appropriate drivers for it.
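
Most of the non-graphical tests above share the same timing approach: a trivial operation runs in a tight loop for a fixed number of seconds, and the score is the number of iterations completed (the test programs do this via `wake_me()` in `src/timeit.c`; see `WRITING_TESTS`). The following is a minimal, Linux-only sketch of that idea applied to the System Call Overhead measurement -- it is an illustration, not the actual `src/syscall.c`, and the 10-second interval is chosen arbitrarily here:

```c
/* Illustrative sketch only -- not the actual src/syscall.c. */
#define _GNU_SOURCE             /* for syscall() */
#include <signal.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

static volatile sig_atomic_t done = 0;

static void wake_up(int sig)
{
    (void)sig;                  /* unused */
    done = 1;
}

int main(void)
{
    unsigned long iterations = 0;

    signal(SIGALRM, wake_up);
    alarm(10);                  /* measure for 10 seconds */

    while (!done) {
        /* syscall(SYS_getpid) bypasses glibc's old getpid() PID cache */
        syscall(SYS_getpid);
        iterations++;
    }

    /* The real tests report a loop count, which Run turns into a score. */
    fprintf(stderr, "%lu system calls in 10 seconds\n", iterations);
    return 0;
}
```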
86 | 87 | # License 88 | 89 | This project is released under the [GPL v2](LICENSE.txt) license. 90 | -------------------------------------------------------------------------------- /UnixBench/.cproject: -------------------------------------------------------------------------------- [Eclipse CDT project configuration; XML content not shown in this listing.] -------------------------------------------------------------------------------- /UnixBench/.project: -------------------------------------------------------------------------------- [Eclipse project file; XML markup not shown -- element text only:] 1 | 2 | 3 | UnixBench 4 | 5 | 6 | 7 | 8 | 9 | org.eclipse.cdt.managedbuilder.core.genmakebuilder 10 | clean,full,incremental, 11 | 12 | 13 | ?name?
14 | 15 | 16 | 17 | org.eclipse.cdt.make.core.append_environment 18 | true 19 | 20 | 21 | org.eclipse.cdt.make.core.autoBuildTarget 22 | all 23 | 24 | 25 | org.eclipse.cdt.make.core.buildArguments 26 | 27 | 28 | 29 | org.eclipse.cdt.make.core.buildCommand 30 | make 31 | 32 | 33 | org.eclipse.cdt.make.core.buildLocation 34 | ${workspace_loc:/UnixBench/Debug} 35 | 36 | 37 | org.eclipse.cdt.make.core.cleanBuildTarget 38 | clean 39 | 40 | 41 | org.eclipse.cdt.make.core.contents 42 | org.eclipse.cdt.make.core.activeConfigSettings 43 | 44 | 45 | org.eclipse.cdt.make.core.enableAutoBuild 46 | false 47 | 48 | 49 | org.eclipse.cdt.make.core.enableCleanBuild 50 | true 51 | 52 | 53 | org.eclipse.cdt.make.core.enableFullBuild 54 | true 55 | 56 | 57 | org.eclipse.cdt.make.core.fullBuildTarget 58 | all 59 | 60 | 61 | org.eclipse.cdt.make.core.stopOnError 62 | true 63 | 64 | 65 | org.eclipse.cdt.make.core.useDefaultBuildCmd 66 | true 67 | 68 | 69 | 70 | 71 | org.eclipse.cdt.managedbuilder.core.ScannerConfigBuilder 72 | 73 | 74 | 75 | 76 | 77 | org.eclipse.cdt.core.cnature 78 | org.eclipse.cdt.core.ccnature 79 | org.eclipse.cdt.managedbuilder.core.managedBuildNature 80 | org.eclipse.cdt.managedbuilder.core.ScannerConfigNature 81 | 82 | 83 | -------------------------------------------------------------------------------- /UnixBench/Makefile: -------------------------------------------------------------------------------- 1 | ############################################################################## 2 | # UnixBench v5.1.3 3 | # Based on The BYTE UNIX Benchmarks - Release 3 4 | # Module: Makefile SID: 3.9 5/15/91 19:30:15 5 | # 6 | ############################################################################## 7 | # Bug reports, patches, comments, suggestions should be sent to: 8 | # David C Niemi 9 | # 10 | # Original Contacts at Byte Magazine: 11 | # Ben Smith or Tom Yager at BYTE Magazine 12 | # bensmith@bytepb.byte.com tyager@bytepb.byte.com 13 | # 14 | ############################################################################## 15 | # Modification Log: 7/28/89 cleaned out workload files 16 | # 4/17/90 added routines for installing from shar mess 17 | # 7/23/90 added compile for dhrystone version 2.1 18 | # (this is not part of Run file. still use old) 19 | # removed HZ from everything but dhry. 20 | # HZ is read from the environment, if not 21 | # there, you must define it in this file 22 | # 10/30/90 moved new dhrystone into standard set 23 | # new pgms (dhry included) run for a specified 24 | # time rather than specified number of loops 25 | # 4/5/91 cleaned out files not needed for 26 | # release 3 -- added release 3 files -ben 27 | # 10/22/97 added compiler options for strict ANSI C 28 | # checking for gcc and DEC's cc on 29 | # Digital Unix 4.x (kahn@zk3.dec.com) 30 | # 09/26/07 changes for UnixBench 5.0 31 | # 09/30/07 adding ubgears, GRAPHIC_TESTS switch 32 | # 10/14/07 adding large.txt 33 | # 01/13/11 added support for parallel compilation 34 | # 01/07/16 [refer to version control commit messages and 35 | # cease using two-digit years in date formats] 36 | ############################################################################## 37 | 38 | ############################################################################## 39 | # CONFIGURATION 40 | ############################################################################## 41 | 42 | SHELL = /bin/sh 43 | 44 | # GRAPHIC TESTS: Uncomment the definition of "GRAPHIC_TESTS" to enable 45 | # the building of the graphics benchmarks. 
This will require the 46 | # X11 libraries on your system. (e.g. libX11-devel mesa-libGL-devel) 47 | # 48 | # Comment the line out to disable these tests. 49 | # GRAPHIC_TESTS = defined 50 | 51 | # Set "GL_LIBS" to the libraries needed to link a GL program. 52 | GL_LIBS = -lGL -lXext -lX11 53 | 54 | 55 | # COMPILER CONFIGURATION: Set "CC" to the name of the compiler to use 56 | # to build the binary benchmarks. You should also set "$cCompiler" in the 57 | # Run script to the name of the compiler you want to test. 58 | CC=gcc 59 | 60 | # OPTIMISATION SETTINGS: 61 | # If UB_GCC_OPTIONS is defined (as an environment variable or on the make command line), it is used as the gcc optimisation options. 62 | ifdef UB_GCC_OPTIONS 63 | OPTON = $(UB_GCC_OPTIONS) 64 | 65 | else 66 | ## Very generic 67 | #OPTON = -O 68 | 69 | ## For Linux 486/Pentium, GCC 2.7.x and 2.8.x 70 | #OPTON = -O2 -fomit-frame-pointer -fforce-addr -fforce-mem -ffast-math \ 71 | # -m486 -malign-loops=2 -malign-jumps=2 -malign-functions=2 72 | 73 | ## For Linux, GCC previous to 2.7.0 74 | #OPTON = -O2 -fomit-frame-pointer -fforce-addr -fforce-mem -ffast-math -m486 75 | 76 | #OPTON = -O2 -fomit-frame-pointer -fforce-addr -fforce-mem -ffast-math \ 77 | # -m386 -malign-loops=1 -malign-jumps=1 -malign-functions=1 78 | 79 | ## For Solaris 2, or general-purpose GCC 2.7.x 80 | #OPTON = -O2 -fomit-frame-pointer -fforce-addr -ffast-math -Wall 81 | 82 | ## For Digital Unix v4.x, with DEC cc v5.x 83 | #OPTON = -O4 84 | #CFLAGS = -DTIME -std1 -verbose -w0 85 | 86 | ## gcc optimization flags 87 | ## (-ffast-math) disables strict IEEE or ISO rules/specifications for math funcs 88 | OPTON = -O3 -ffast-math 89 | 90 | ## OS detection. Comment out if gmake syntax not supported by other 'make'. 91 | OSNAME:=$(shell uname -s) 92 | ARCH := $(shell uname -p) 93 | ifeq ($(OSNAME),Linux) 94 | # Not all CPU architectures support "-march" or "-march=native". 95 | # - Supported : x86, x86_64, ARM, AARCH64, etc. 96 | # - Not supported: RISC-V, IBM Power, etc. 97 | ifneq ($(ARCH),$(filter $(ARCH),ppc64 ppc64le)) 98 | OPTON += -march=native -mtune=native 99 | else 100 | OPTON += -mcpu=native -mtune=native 101 | endif 102 | endif 103 | 104 | ifeq ($(OSNAME),Darwin) 105 | # (adjust flags or comment out this section for older versions of XCode or OS X) 106 | # (-mmacosx-version-min= requires at least that version of the SDK to be installed) 107 | ifneq ($(ARCH),$(filter $(ARCH),ppc64 ppc64le)) 108 | OPTON += -march=native -mmacosx-version-min=10.10 109 | else 110 | OPTON += -mcpu=native 111 | endif 112 | #http://stackoverflow.com/questions/9840207/how-to-use-avx-pclmulqdq-on-mac-os-x-lion/19342603#19342603 113 | CFLAGS += -Wa,-q 114 | endif 115 | 116 | endif 117 | 118 | 119 | ## generic gcc CFLAGS. -DTIME must be included.
120 | CFLAGS += -Wall -pedantic $(OPTON) -I $(SRCDIR) -DTIME 121 | 122 | 123 | ############################################################################## 124 | # END CONFIGURATION 125 | ############################################################################## 126 | 127 | 128 | # local directories 129 | PROGDIR = ./pgms 130 | SRCDIR = ./src 131 | TESTDIR = ./testdir 132 | RESULTDIR = ./results 133 | TMPDIR = ./tmp 134 | # other directories 135 | INCLDIR = /usr/include 136 | LIBDIR = /lib 137 | SCRIPTS = unixbench.logo multi.sh tst.sh index.base 138 | SOURCES = arith.c big.c context1.c \ 139 | dummy.c execl.c \ 140 | fstime.c hanoi.c \ 141 | pipe.c spawn.c \ 142 | syscall.c looper.c timeit.c time-polling.c \ 143 | dhry_1.c dhry_2.c dhry.h whets.c ubgears.c 144 | TESTS = sort.src cctest.c dc.dat large.txt 145 | 146 | ifneq (,$(GRAPHIC_TESTS)) 147 | GRAPHIC_BINS = $(PROGDIR)/ubgears 148 | else 149 | GRAPHIC_BINS = 150 | endif 151 | 152 | # Program binaries. 153 | BINS = $(PROGDIR)/arithoh $(PROGDIR)/register $(PROGDIR)/short \ 154 | $(PROGDIR)/int $(PROGDIR)/long $(PROGDIR)/float $(PROGDIR)/double \ 155 | $(PROGDIR)/hanoi $(PROGDIR)/syscall $(PROGDIR)/context1 \ 156 | $(PROGDIR)/pipe $(PROGDIR)/spawn $(PROGDIR)/execl \ 157 | $(PROGDIR)/dhry2 $(PROGDIR)/dhry2reg $(PROGDIR)/looper \ 158 | $(PROGDIR)/fstime $(PROGDIR)/whetstone-double $(GRAPHIC_BINS) 159 | ## These compile only on some platforms... 160 | # $(PROGDIR)/poll $(PROGDIR)/poll2 $(PROGDIR)/select 161 | 162 | # Required non-binary files. 163 | REQD = $(BINS) $(PROGDIR)/unixbench.logo \ 164 | $(PROGDIR)/multi.sh $(PROGDIR)/tst.sh $(PROGDIR)/index.base \ 165 | $(PROGDIR)/gfx-x11 \ 166 | $(TESTDIR)/sort.src $(TESTDIR)/cctest.c $(TESTDIR)/dc.dat \ 167 | $(TESTDIR)/large.txt 168 | 169 | # ######################### the big ALL ############################ 170 | all: 171 | ## Ick!!! What is this about??? How about let's not chmod everything bogusly. 172 | # @chmod 744 * $(SRCDIR)/* $(PROGDIR)/* $(TESTDIR)/* $(DOCDIR)/* 173 | $(MAKE) distr 174 | $(MAKE) programs 175 | 176 | # ####################### a check for Run ###################### 177 | check: $(REQD) 178 | $(MAKE) all 179 | # ############################################################## 180 | # distribute the files out to subdirectories if they are in this one 181 | distr: 182 | @echo "Checking distribution of files" 183 | # scripts 184 | @if test ! -d $(PROGDIR) \ 185 | ; then \ 186 | mkdir $(PROGDIR) \ 187 | ; mv $(SCRIPTS) $(PROGDIR) \ 188 | ; else \ 189 | echo "$(PROGDIR) exists" \ 190 | ; fi 191 | # C sources 192 | @if test ! -d $(SRCDIR) \ 193 | ; then \ 194 | mkdir $(SRCDIR) \ 195 | ; mv $(SOURCES) $(SRCDIR) \ 196 | ; else \ 197 | echo "$(SRCDIR) exists" \ 198 | ; fi 199 | # test data 200 | @if test ! -d $(TESTDIR) \ 201 | ; then \ 202 | mkdir $(TESTDIR) \ 203 | ; mv $(TESTS) $(TESTDIR) \ 204 | ; else \ 205 | echo "$(TESTDIR) exists" \ 206 | ; fi 207 | # temporary work directory 208 | @if test ! -d $(TMPDIR) \ 209 | ; then \ 210 | mkdir $(TMPDIR) \ 211 | ; else \ 212 | echo "$(TMPDIR) exists" \ 213 | ; fi 214 | # directory for results 215 | @if test ! -d $(RESULTDIR) \ 216 | ; then \ 217 | mkdir $(RESULTDIR) \ 218 | ; else \ 219 | echo "$(RESULTDIR) exists" \ 220 | ; fi 221 | 222 | .PHONY: all check distr programs run clean spotless 223 | 224 | programs: $(BINS) 225 | 226 | # (use $< to link only the first dependency, instead of $^, 227 | # since the programs matching this pattern have only 228 | # one input file, and others are #include "xxx.c" 229 | # within the first. 
(not condoning, just documenting)) 230 | # (dependencies could be generated by modern compilers, 231 | # but let's not assume modern compilers are present) 232 | $(PROGDIR)/%: 233 | $(CC) -o $@ $(CFLAGS) $< $(LDFLAGS) 234 | 235 | # Individual programs 236 | # Sometimes the same source file is compiled in different ways. 237 | # This limits the 'make' patterns that can usefully be applied. 238 | 239 | $(PROGDIR)/arithoh: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c 240 | $(PROGDIR)/arithoh: CFLAGS += -Darithoh 241 | $(PROGDIR)/register: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c 242 | $(PROGDIR)/register: CFLAGS += -Ddatum='register int' 243 | $(PROGDIR)/short: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c 244 | $(PROGDIR)/short: CFLAGS += -Ddatum=short 245 | $(PROGDIR)/int: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c 246 | $(PROGDIR)/int: CFLAGS += -Ddatum=int 247 | $(PROGDIR)/long: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c 248 | $(PROGDIR)/long: CFLAGS += -Ddatum=long 249 | $(PROGDIR)/float: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c 250 | $(PROGDIR)/float: CFLAGS += -Ddatum=float 251 | $(PROGDIR)/double: $(SRCDIR)/arith.c $(SRCDIR)/timeit.c 252 | $(PROGDIR)/double: CFLAGS += -Ddatum=double 253 | 254 | $(PROGDIR)/poll: $(SRCDIR)/time-polling.c 255 | $(PROGDIR)/poll: CFLAGS += -DUNIXBENCH -DHAS_POLL 256 | $(PROGDIR)/poll2: $(SRCDIR)/time-polling.c 257 | $(PROGDIR)/poll2: CFLAGS += -DUNIXBENCH -DHAS_POLL2 258 | $(PROGDIR)/select: $(SRCDIR)/time-polling.c 259 | $(PROGDIR)/select: CFLAGS += -DUNIXBENCH -DHAS_SELECT 260 | 261 | $(PROGDIR)/whetstone-double: $(SRCDIR)/whets.c 262 | $(PROGDIR)/whetstone-double: CFLAGS += -DDP -DGTODay -DUNIXBENCH 263 | $(PROGDIR)/whetstone-double: LDFLAGS += -lm 264 | 265 | $(PROGDIR)/pipe: $(SRCDIR)/pipe.c $(SRCDIR)/timeit.c 266 | 267 | $(PROGDIR)/execl: $(SRCDIR)/execl.c $(SRCDIR)/big.c 268 | 269 | $(PROGDIR)/spawn: $(SRCDIR)/spawn.c $(SRCDIR)/timeit.c 270 | 271 | $(PROGDIR)/hanoi: $(SRCDIR)/hanoi.c $(SRCDIR)/timeit.c 272 | 273 | $(PROGDIR)/fstime: $(SRCDIR)/fstime.c 274 | 275 | $(PROGDIR)/syscall: $(SRCDIR)/syscall.c $(SRCDIR)/timeit.c 276 | 277 | $(PROGDIR)/context1: $(SRCDIR)/context1.c $(SRCDIR)/timeit.c 278 | 279 | $(PROGDIR)/looper: $(SRCDIR)/looper.c $(SRCDIR)/timeit.c 280 | 281 | $(PROGDIR)/ubgears: $(SRCDIR)/ubgears.c 282 | $(PROGDIR)/ubgears: LDFLAGS += -lm $(GL_LIBS) 283 | 284 | $(PROGDIR)/dhry2: CFLAGS += -DHZ=${HZ} 285 | $(PROGDIR)/dhry2: $(SRCDIR)/dhry_1.c $(SRCDIR)/dhry_2.c \ 286 | $(SRCDIR)/dhry.h $(SRCDIR)/timeit.c 287 | $(CC) -o $@ ${CFLAGS} $(SRCDIR)/dhry_1.c $(SRCDIR)/dhry_2.c 288 | 289 | $(PROGDIR)/dhry2reg: CFLAGS += -DHZ=${HZ} -DREG=register 290 | $(PROGDIR)/dhry2reg: $(SRCDIR)/dhry_1.c $(SRCDIR)/dhry_2.c \ 291 | $(SRCDIR)/dhry.h $(SRCDIR)/timeit.c 292 | $(CC) -o $@ ${CFLAGS} $(SRCDIR)/dhry_1.c $(SRCDIR)/dhry_2.c 293 | 294 | # Run the benchmarks and create the reports 295 | run: 296 | sh ./Run 297 | 298 | clean: 299 | $(RM) $(BINS) core *~ */*~ 300 | 301 | spotless: clean 302 | $(RM) $(RESULTDIR)/* $(TMPDIR)/* 303 | 304 | ## END ## 305 | -------------------------------------------------------------------------------- /UnixBench/README: -------------------------------------------------------------------------------- 1 | Version 5.1.6 -- 2021-01-08 2 | 3 | ================================================================ 4 | To use Unixbench: 5 | 6 | 1. UnixBench from version 5.1 on has both system and graphics tests. 
7 | If you want to use the graphic tests, edit the Makefile and make sure 8 | that the line "GRAPHIC_TESTS = defined" is not commented out; then check 9 | that the "GL_LIBS" definition is OK for your system. Also make sure 10 | that the "x11perf" command is on your search path. 11 | 12 | If you don't want the graphics tests, then comment out the 13 | "GRAPHIC_TESTS = defined" line. Note: comment it out, don't 14 | set it to anything. 15 | 16 | 2. Do "make". 17 | 18 | 3. Do "Run" to run the system test; "Run graphics" to run the graphics 19 | tests; "Run gindex" to run both. 20 | 21 | You will need perl, as Run is written in perl. 22 | 23 | For more information on using the tests, read "USAGE". 24 | 25 | For information on adding tests into the benchmark, see "WRITING_TESTS". 26 | 27 | 28 | ===================== RELEASE NOTES ===================================== 29 | 30 | v5.1.6 31 | Optimize syscall.c: Change getpid method to syscall 32 | According to http://man7.org/linux/man-pages/man2/getpid.2.html: 33 | 34 | From glibc version 2.3.4 up to and including version 2.24, the glibc 35 | wrapper function for getpid() cached PIDs, with the goal of avoiding 36 | additional system calls when a process calls getpid() repeatedly. 37 | 38 | So it's not suitable to measure system call performance through 39 | getpid(). Calling syscall(SYS_getpid) directly is more appropriate. 40 | 41 | From glibc version 2.25, the cached PID was removed to fix some bugs; the 42 | caching made the test suite wrongly report a performance regression on system calls. 43 | 44 | 45 | 46 | v5.1.5 47 | Optimize whets.c: 48 | On CPUs with higher clock frequencies, the timing of the N8 module becomes nonlinear. 49 | For more information, see: https://yq.aliyun.com/articles/674732 50 | 51 | ======================== JUN 2 2018 ========================== 52 | v5.1.4 53 | Optimize context1.c: 54 | Sometimes the parent and child processes would run on the same CPU; now the parent and child are made to run on different CPUs. 55 | 56 | 57 | ======================== Jan 13 ========================== 58 | 59 | v5.1.3 60 | 61 | Fixed issue that would cause a race condition if you attempted to compile in 62 | parallel with more than 3 parallel jobs. 63 | 64 | 65 | Kelly Lucas, Jan 13, 2011 66 | kdlucas at gmail period com 67 | 68 | 69 | ======================== Dec 07 ========================== 70 | 71 | v5.1.2 72 | 73 | One big fix: if unixbench is installed in a directory whose pathname contains 74 | a space, it should now run (previously it failed). 75 | 76 | To avoid possible clashes, the environment variables unixbench uses are now 77 | prefixed with "UB_". These are all optional, and for most people will be 78 | completely unnecessary, but if you want you can set these: 79 | 80 | UB_BINDIR Directory where the test programs live. 81 | UB_TMPDIR Temp directory, for temp files. 82 | UB_RESULTDIR Directory to put results in. 83 | UB_TESTDIR Directory where the tests are executed. 84 | 85 | And a couple of tiny fixes: 86 | * In pgms/tst.sh, changed "sort -n +1" to "sort -n -k 1" 87 | * In Makefile, made it clearer that GRAPHIC_TESTS should be commented 88 | out (not set to 0) to disable graphics 89 | Thanks to nordi for pointing these out. 90 | 91 | 92 | Ian Smith, December 26, 2007 93 | johantheghost at yahoo period com 94 | 95 | 96 | ======================== Oct 07 ========================== 97 | 98 | v5.1.1 99 | 100 | It turns out that the setting of LANG is crucial to the results.
This 101 | explains why people in different regions were seeing odd results, and also 102 | why runlevel 1 produced odd results -- runlevel 1 doesn't set LANG, and 103 | hence reverts to ASCII, whereas most people use a UTF-8 encoding, which is 104 | much slower in some tests (eg. shell tests). 105 | 106 | So now we manually set LANG to "en_US.utf8", which is configured with the 107 | variable "$language". Don't change this if you want to share your results. 108 | We also report the language settings in use. 109 | 110 | See "The Language Setting" in USAGE for more info. Thanks to nordi for 111 | pointing out the LANG issue. 112 | 113 | I also added the "grep" and "sysexec" tests. These are non-index tests, 114 | and "grep" uses the system's grep, so it's not much use for comparing 115 | different systems. But some folks on the OpenSuSE list have been finding 116 | these useful. They aren't in any of the main test groups; do "Run grep 117 | sysexec" to run them. 118 | 119 | Index Changes 120 | ------------- 121 | 122 | The setting of LANG will affect consistency with systems where this is 123 | not the default value. However, it should produce more consistent results 124 | in future. 125 | 126 | 127 | Ian Smith, October 15, 2007 128 | johantheghost at yahoo period com 129 | 130 | 131 | ======================== Oct 07 ========================== 132 | 133 | v5.1 134 | 135 | The major new feature in this version is the addition of graphical 136 | benchmarks. Since these may not compile on all systems, you can enable/ 137 | disable them with the GRAPHIC_TESTS variable in the Makefile. 138 | 139 | As before, each test is run for 3 or 10 iterations. However, we now discard 140 | the worst 1/3 of the scores before averaging the remainder. The logic is 141 | that a glitch in the system (background process waking up, for example) may 142 | make one or two runs go slow, so let's discard those. Hopefully this will 143 | produce more consistent and repeatable results. Check the log file 144 | for a test run to see the discarded scores. 145 | 146 | Made the tests compile and run on x86-64/Linux (fixed an execl bug passing 147 | int instead of pointer). 148 | 149 | Also fixed some general bugs. 150 | 151 | Thanks to Stefan Esser for help and testing / bug reporting. 152 | 153 | Index Changes 154 | ------------- 155 | 156 | The tests are now divided into categories, and each category generates 157 | its own index. This keeps the graphics test results separate from 158 | the system tests. 159 | 160 | The "graphics" test and corresponding index are new. 161 | 162 | The "discard the worst scores" strategy should produce slightly higher 163 | test scores, but at least they should (hopefully!) be more consistent. 164 | The scores should not be higher than the best scores you would have got 165 | with 5.0, so this should not be a huge consistency issue. 166 | 167 | Ian Smith, October 11, 2007 168 | johantheghost at yahoo period com 169 | 170 | 171 | ======================== Sep 07 ========================== 172 | 173 | v5.0 174 | 175 | All the work I've done on this release is Linux-based, because that's 176 | the only Unix I have access to. I've tried to make it more OS-agnostic 177 | if anything; for example, it no longer has to figure out the format reported 178 | by /usr/bin/time. However, it's possible that portability has been damaged. 179 | If anyone wants to fix this, please feel free to mail me patches. 180 | 181 | In particular, the analysis of the system's CPUs is done via /proc/cpuinfo. 
182 | For systems which don't have this, please make appropriate changes in 183 | getCpuInfo() and getSystemInfo(). 184 | 185 | The big change has been to make the tests multi-CPU aware. See the 186 | "Multiple CPUs" section in "USAGE" for details. Other changes: 187 | 188 | * Completely rewrote Run in Perl; drastically simplified the way data is 189 | processed. The confusing system of interlocking shell and awk scripts is 190 | now just one script. Various intermediate files used to store and process 191 | results are now replaced by Perl data structures internal to the script. 192 | 193 | * Removed from the index runs file system read and write tests which were 194 | ignored for the index and wasted about 10 minutes per run (see fstime.c). 195 | The read and write tests can now be selected individually. Made fstime.c 196 | take parameters, so we no longer need to build 3 versions of it. 197 | 198 | * Made the output file names unique; they are built from 199 | hostname-date-sequence. 200 | 201 | * Worked on result reporting, error handling, and logging. See TESTS. 202 | We now generate both text and HTML reports. 203 | 204 | * Removed some obsolete files. 205 | 206 | Index Changes 207 | ------------- 208 | 209 | The index is still based on David Niemi's SPARCstation 20-61 (rated at 10.0), 210 | and the intention in the changes I've made has been to keep the tests 211 | unchanged, in order to maintain consistency with old result sets. 212 | 213 | However, the following changes have been made to the index: 214 | 215 | * The Pipe-based Context Switching test (context1) was being dropped 216 | from the index report in v4.1.0 due to a bug; I've put it back in. 217 | 218 | * I've added shell1 to the index, to get a measure of how the shell tests 219 | scale with multiple CPUs (shell8 already exercises all the CPUs, even 220 | in single-copy mode). I made up the baseline score for this by 221 | extrapolation. 222 | 223 | Both of these test can be dropped, if you wish, by editing the "TEST 224 | SPECIFICATIONS" section of Run. 225 | 226 | Ian Smith, September 20, 2007 227 | johantheghost at yahoo period com 228 | 229 | ======================== Aug 97 ========================== 230 | 231 | v4.1.0 232 | 233 | Double precision Whetstone put in place instead of the old "double" benchmark. 234 | 235 | Removal of some obsolete files. 236 | 237 | "system" suite adds shell8. 238 | 239 | perlbench and poll added as "exhibition" (non-index) benchmarks. 240 | 241 | Incorporates several suggestions by Andre Derrick Balsa 242 | 243 | Code cleanups to reduce compiler warnings by David C Niemi 244 | and Andy Kahn ; Digital Unix options by Andy Kahn. 245 | 246 | ======================== Jun 97 ========================== 247 | 248 | v4.0.1 249 | 250 | Minor change to fstime.c to fix overflow problems on fast machines. Counting 251 | is now done in units of 256 (smallest BUFSIZE) and unsigned longs are used, 252 | giving another 23 dB or so of headroom ;^) Results should be virtually 253 | identical aside from very small rounding errors. 254 | 255 | ======================== Dec 95 ========================== 256 | 257 | v4.0 258 | 259 | Byte no longer seems to have anything to do with this benchmark, and I was 260 | unable to reach any of the original authors; so I have taken it upon myself 261 | to clean it up. 262 | 263 | This is version 4. 
Major assumptions made in these benchmarks have changed 264 | since they were written, but they are nonetheless popular (particularly for 265 | measuring hardware for Linux). Some changes made: 266 | 267 | - The biggest change is to put a lot more operating system-oriented 268 | tests into the index. I experimented for a while with a decibel-like 269 | logarithmic scale, but finally settled on using a geometric mean for 270 | the final index (the individual scores are a normalized, and their 271 | logs are averaged; the resulting value is exponentiated). 272 | 273 | "George", certain SPARCstation 20-61 with 128 MB RAM, a SPARC Storage 274 | Array, and Solaris 2.3 is my new baseline; it is rated at 10.0 in each 275 | of the index scores for a final score of 10.0. 276 | 277 | Overall I find the geometric averaging is a big improvement for 278 | avoiding the skew that was once possible (e.g. a Pentium-75 which got 279 | 40 on the buggy version of fstime, such that fstime accounted for over 280 | half of its total score and hence wildly skewed its average). 281 | 282 | I also expect that the new numbers look different enough from the old 283 | ones that no one is too likely to casually mistake them for each other. 284 | 285 | I am finding new SPARCs running Solaris 2.4 getting about 15-20, and 286 | my 486 DX2-66 Compaq running Linux 1.3.45 got a 9.1. It got 287 | understandably poor scores on CPU and FPU benchmarks (a horrible 288 | 1.8 on "double" and 1.3 on "fsdisk"); but made up for it by averaging 289 | over 20 on the OS-oriented benchmarks. The Pentium-75 running 290 | Linux gets about 20 (and it *still* runs Windows 3.1 slowly. Oh well). 291 | 292 | - It is difficult to get a modern compiler to even consider making 293 | dhry2 without registers, short of turning off *all* optimizations. 294 | This is also not a terribly meaningful test, even if it were possible, 295 | as noone compiles without registers nowadays. Replaced this benchmark 296 | with dhry2reg in the index, and dropped it out of usage in general as 297 | it is so hard to make a legitimate one. 298 | 299 | - fstime: this had some bugs when compiled on modern systems which return 300 | the number of bytes read/written for read(2)/write(2) calls. The code 301 | assumed that a negative return code was given for EOF, but most modern 302 | systems return 0 (certainly on SunOS 4, Solaris2, and Linux, which is 303 | what counts for me). The old code yielded wildly inflated read scores, 304 | would eat up tens of MB of disk space on fast systems, and yielded 305 | roughly 50% lower than normal copy scores than it should have. 306 | 307 | Also, it counted partial blocks *fully*; made it count the proportional 308 | part of the block which was actually finished. 309 | 310 | Made bigger and smaller variants of fstime which are designed to beat 311 | up the disk I/O and the buffer cache, respectively. Adjusted the 312 | sleeps so that they are short for short benchmarks. 313 | 314 | - Instead of 1,2,4, and 8-shell benchmarks, went to 1, 8, and 16 to 315 | give a broader range of information (and to run 1 fewer test). 316 | The only real problem with this is that not many iterations get 317 | done with 16 at a time on slow systems, so there are some significant 318 | rounding errors; 8 therefore still used for the benchmark. There is 319 | also the problem that the last (uncompleted) loop is counted as a full 320 | loop, so it is impossible to score below 1.0 lpm (which gave my laptop 321 | a break). 
Probably redesigning Shell to do each loop a bit more 322 | quickly (but with less intensity) would be a good idea. 323 | 324 | This benchmark appears to be very heavily influenced by the speed 325 | of the loader, by which shell is being used as /bin/sh, and by how 326 | well-compiled some of the common shell utilities like grep, sed, and 327 | sort are. With a consistent tool set it is also a good indicator of 328 | the bandwidth between main memory and the CPU (e.g. Pentia score about 329 | twice as high as 486es due to their 64-bit bus). Small, sometimes 330 | broken shells like "ash-linux" do particularly well here, while big, 331 | robust shells like bash do not. 332 | 333 | - "dc" is a somewhat iffy benchmark, because there are two versions of 334 | it floating around, one being small, very fast, and buggy, and one 335 | being more correct but slow. It was never in the index anyway. 336 | 337 | - Execl is a somewhat troubling benchmark in that it yields much higher 338 | scores if compiled statically. I frown on this practice because it 339 | distorts the scores away from reflecting how programs are really used 340 | (i.e. dynamically linked). 341 | 342 | - Arithoh is really more an indicator of the compiler quality than of 343 | the computer itself. For example, GCC 2.7.x with -O2 and a few extra 344 | options optimizes much of it away, resulting in about a 1200% boost 345 | to the score. Clearly not a good one for the index. 346 | 347 | I am still a bit unhappy with the variance in some of the benchmarks, most 348 | notably the fstime suite; and with how long it takes to run. But I think 349 | it gets significantly more reliable results than the older version in less 350 | time. 351 | 352 | If anyone has ideas on how to make these benchmarks faster, lower-variance, 353 | or more meaningful; or has nice, new, portable benchmarks to add, don't 354 | hesitate to e-mail me. 355 | 356 | David C Niemi 7 Dec 1995 357 | 358 | ======================== May 91 ========================== 359 | This is version 3. This set of programs should be able to determine if 360 | your system is BSD or SysV. (It uses the output format of time (1) 361 | to see. If you have any problems, contact me (by email, 362 | preferably): ben@bytepb.byte.com 363 | 364 | --- 365 | 366 | The document doc/bench.doc describes the basic flow of the 367 | benchmark system. The document doc/bench3.doc describes the major 368 | changes in design of this version. As a user of the benchmarks, 369 | you should understand some of the methods that have been 370 | implemented to generate loop counts: 371 | 372 | Tests that are compiled C code: 373 | The function wake_me(second, func) is included (from the file 374 | timeit.c). This function uses signal and alarm to set a countdown 375 | for the time request by the benchmark administration script 376 | (Run). As soon as the clock is started, the test is run with a 377 | counter keeping track of the number of loops that the test makes. 378 | When alarm sends its signal, the loop counter value is sent to stderr 379 | and the program terminates. Since the time resolution, signal 380 | trapping and other factors don't insure that the test is for the 381 | precise time that was requested, the test program is also run 382 | from the time (1) command. The real time value returned from time 383 | (1) is what is used in calculating the number of loops per second 384 | (or minute, depending on the test). 
As is obvious, there is some 385 | overhead time that is not taken into account, therefore the 386 | number of loops per second is not absolute. The overhead of the 387 | test starting and stopping and the signal and alarm calls is 388 | common to the overhead of real applications. If a program loads 389 | quickly, the number of loops per second increases; a phenomenon 390 | that favors systems that can load programs quickly. (Setting the 391 | sticky bit of the test programs is not considered fair play.) 392 | 393 | Tests that use existing UNIX programs or shell scripts: 394 | The concept is the same as that of compiled tests, except the 395 | alarm and signal are contained in a separate compiled program, 396 | looper (source is looper.c). Looper uses an execvp to invoke the 397 | test with its arguments. Here, the overhead includes the 398 | invocation and execution of looper. 399 | 400 | -- 401 | 402 | The index numbers are generated from a baseline file that is in 403 | pgms/index.base. You can put tests that you wish in this file. 404 | All you need to do is take the results/log file from your 405 | baseline machine, edit out the comment and blank lines, and sort 406 | the result (vi/ex command: 1,$!sort). The sort is necessary 407 | because the process of generating the index report uses join (1). 408 | You can regenerate the reports by running "make report." 409 | 410 | -- 411 | 412 | ========================= Jan 90 ============================= 413 | Tom Yager has joined the effort here at BYTE; he is responsible 414 | for many refinements in the UNIX benchmarks. 415 | 416 | The memory access tests have been deleted from the benchmarks. 417 | The file access tests have been reversed so that the test is run 418 | for a fixed time. The amount of data transferred (written, read, 419 | and copied) is the variable. !WARNING! This test can eat up a 420 | large hunk of disk space. 421 | 422 | The initial line of all shell scripts has been changed from the 423 | SCO and XENIX form (:) to the more standard form "#! /bin/sh". 424 | But different systems handle shell switching differently. Check 425 | the documentation on your system and find out how you are 426 | supposed to do it. Or, simpler yet, just run the benchmarks from 427 | the Bourne shell. (You may need to set SHELL=/bin/sh as well.) 428 | 429 | The options to Run have not been checked in a while. They may no 430 | longer function. Next time, I'll get back on them. There needs to 431 | be another option added (next time) that halts testing between 432 | each test. !WARNING! Some systems have caches that are not getting flushed 433 | before the next test or iteration is run. This can cause 434 | erroneous values. 435 | 436 | ========================= Sept 89 ============================= 437 | The database (db) programs now have a tuneable message queue space. 438 | The default set in the Run script is 1024 bytes. 439 | Other major changes are in the format of the times. We now show 440 | Arithmetic and Geometric mean and standard deviation for User 441 | Time, System Time, and Real Time. Generally, in reporting, we 442 | plan on using the Real Time values with the benchmarks run with one 443 | active user (the bench user). Comments and arguments are requested.
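As a concrete illustration of the index arithmetic described in the Dec 95 notes above (each result is normalized against the baseline in pgms/index.base so that George scores 10.0, and the overall index is the geometric mean: average the logs, then exponentiate), here is a small self-contained sketch. The "result" values are made up; the baseline figures are George's dhry2reg, execl, and pipe entries from index.base:

    /* Sketch of the index calculation: normalize against the baseline,
     * then take the geometric mean (mean of logs, exponentiated).
     * Build with something like: cc geomean.c -lm (file name is arbitrary). */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* made-up results and the matching baseline ("George") scores */
        double result[]   = { 2000000.0, 500.0, 80000.0 };
        double baseline[] = {  116700.0,  43.0, 12440.0 };
        int i, n = 3;
        double logsum = 0.0;

        for (i = 0; i < n; i++) {
            double index = 10.0 * result[i] / baseline[i];  /* George == 10.0 */
            printf("test %d index: %.1f\n", i, index);
            logsum += log(index);
        }
        printf("overall index: %.1f\n", exp(logsum / n));   /* geometric mean */
        return 0;
    }
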
444 | 445 | contact: BIX bensmith or rick_g 446 | -------------------------------------------------------------------------------- /UnixBench/USAGE: -------------------------------------------------------------------------------- 1 | Running the Tests 2 | ================= 3 | 4 | All the tests are executed using the "Run" script in the top-level directory. 5 | 6 | The simplest way to generate results is with the command: 7 | ./Run 8 | 9 | This will run a standard "index" test (see "The BYTE Index" below), and 10 | save the report in the "results" directory, with a filename like 11 | hostname-2007-09-23-01 12 | An HTML version is also saved. 13 | 14 | If you want to generate both the basic system index and the graphics index, 15 | then do: 16 | ./Run gindex 17 | 18 | If your system has more than one CPU, the tests will be run twice -- once 19 | with a single copy of each test running at once, and once with N copies, 20 | where N is the number of CPUs. Some categories of tests, however (currently 21 | the graphics tests) will only run with a single copy. 22 | 23 | Since the tests are based on constant time (variable work), a "system" 24 | run usually takes about 29 minutes; the "graphics" part about 18 minutes. 25 | A "gindex" run on a dual-core machine will do 2 "system" passes (single- 26 | and dual-processing) and one "graphics" run, for a total around one and 27 | a quarter hours. 28 | 29 | ============================================================================ 30 | 31 | Detailed Usage 32 | ============== 33 | 34 | The Run script takes a number of options which you can use to customise a 35 | test, and you can specify the names of the tests to run. The full usage 36 | is: 37 | 38 | Run [ -q | -v ] [-i <n> ] [-c <n> [-c <n> ...]] [test ...] 39 | 40 | The option flags are: 41 | 42 | -q Run in quiet mode. 43 | -v Run in verbose mode. 44 | -i <count> Run <count> iterations for each test -- slower tests 45 | use <count> / 3, but at least 1. Defaults to 10 (3 for 46 | slow tests). 47 | -c <n> Run <n> copies of each test in parallel. 48 | 49 | The -c option can be given multiple times; for example: 50 | 51 | ./Run -c 1 -c 4 52 | 53 | will run a single-streamed pass, then a 4-streamed pass. Note that some 54 | tests (currently the graphics tests) will only run in a single-streamed pass. 55 | 56 | The remaining non-flag arguments are taken to be the names of tests to run. 57 | The default is to run "index". See "Tests" below. 58 | 59 | When running the tests, I do *not* recommend switching to single-user mode 60 | ("init 1"). This seems to change the results in ways I don't understand, 61 | and it's not realistic (unless your system will actually be running in this 62 | mode, of course). However, if using a windowing system, you may want to 63 | switch to a minimal window setup (for example, log in to a "twm" session), 64 | so that randomly-churning background processes don't randomise the results 65 | too much. This is particularly true for the graphics tests. 66 | 67 | 68 | Output can be specified by setting the following environment variables: 69 | 70 | * "UB_RESULTDIR" : Absolute path of the output directory for result files. 71 | * "UB_TMPDIR" : Absolute path of the directory for the IO tests' temporary files. 72 | * "UB_OUTPUT_FILE_NAME" : Output file name. If it exists, it will be overwritten. 73 | * "UB_OUTPUT_CSV" : If set to "true", output results (score only) to .csv.
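For example, to keep the temporary files and reports on a different filesystem and also get a CSV summary, something like the following can be used (the directory paths here are just placeholders):

    UB_TMPDIR=/var/tmp/ub-tmp UB_RESULTDIR=/var/tmp/ub-results \
        UB_OUTPUT_CSV=true ./Run -c 1 index
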
74 | ============================================================================ 75 | 76 | Tests 77 | ===== 78 | 79 | The available tests are organised into categories; when generating index 80 | scores (see "The BYTE Index" below) the results for each category are 81 | produced separately. The categories are: 82 | 83 | system The original Unix system tests (not all are actually 84 | in the index) 85 | 2d 2D graphics tests (not all are actually in the index) 86 | 3d 3D graphics tests 87 | misc Various non-indexed tests 88 | 89 | The following individual tests are available: 90 | 91 | system: 92 | dhry2reg Dhrystone 2 using register variables 93 | whetstone-double Double-Precision Whetstone 94 | syscall System Call Overhead 95 | pipe Pipe Throughput 96 | context1 Pipe-based Context Switching 97 | spawn Process Creation 98 | execl Execl Throughput 99 | fstime-w File Write 1024 bufsize 2000 maxblocks 100 | fstime-r File Read 1024 bufsize 2000 maxblocks 101 | fstime File Copy 1024 bufsize 2000 maxblocks 102 | fsbuffer-w File Write 256 bufsize 500 maxblocks 103 | fsbuffer-r File Read 256 bufsize 500 maxblocks 104 | fsbuffer File Copy 256 bufsize 500 maxblocks 105 | fsdisk-w File Write 4096 bufsize 8000 maxblocks 106 | fsdisk-r File Read 4096 bufsize 8000 maxblocks 107 | fsdisk File Copy 4096 bufsize 8000 maxblocks 108 | shell1 Shell Scripts (1 concurrent) (runs "looper 60 multi.sh 1") 109 | shell8 Shell Scripts (8 concurrent) (runs "looper 60 multi.sh 8") 110 | shell16 Shell Scripts (16 concurrent) (runs "looper 60 multi.sh 16") 111 | 112 | 2d: 113 | 2d-rects 2D graphics: rectangles 114 | 2d-lines 2D graphics: lines 115 | 2d-circle 2D graphics: circles 116 | 2d-ellipse 2D graphics: ellipses 117 | 2d-shapes 2D graphics: polygons 118 | 2d-aashapes 2D graphics: aa polygons 119 | 2d-polys 2D graphics: complex polygons 120 | 2d-text 2D graphics: text 121 | 2d-blit 2D graphics: images and blits 122 | 2d-window 2D graphics: windows 123 | 124 | 3d: 125 | ubgears 3D graphics: gears 126 | 127 | misc: 128 | C C Compiler Throughput ("looper 60 $cCompiler cctest.c") 129 | arithoh Arithoh (huh?) 130 | short Arithmetic Test (short) (this is arith.c configured for 131 | "short" variables; ditto for the ones below) 132 | int Arithmetic Test (int) 133 | long Arithmetic Test (long) 134 | float Arithmetic Test (float) 135 | double Arithmetic Test (double) 136 | dc Dc: sqrt(2) to 99 decimal places (runs 137 | "looper 30 dc < dc.dat", using your system's copy of "dc") 138 | hanoi Recursion Test -- Tower of Hanoi 139 | grep Grep for a string in a large file, using your system's 140 | copy of "grep" 141 | sysexec Exercise fork() and exec().
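To give a feel for what the "bufsize"/"maxblocks" parameters in the file tests above mean, here is a rough, self-contained sketch of the general shape of a fixed-time file-write test. This is not the actual fstime.c; the file name, sizes, and duration are arbitrary example values:

    /* Rough sketch of a fixed-time file-write rate test (not the real fstime.c). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define BUFSIZE   1024      /* bytes per write() */
    #define MAXBLOCKS 2000      /* wrap the file after this many blocks */
    #define SECONDS   10        /* how long to keep writing */

    int main(void)
    {
        char buf[BUFSIZE];
        long blocks = 0, total = 0;
        time_t start, now;
        int fd = open("fstime.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);

        if (fd < 0) { perror("open"); return 1; }
        memset(buf, 'x', sizeof(buf));

        time(&start);
        do {
            if (write(fd, buf, BUFSIZE) != BUFSIZE) { perror("write"); return 1; }
            total++;
            if (++blocks >= MAXBLOCKS) {       /* reuse the same file space */
                blocks = 0;
                lseek(fd, 0L, SEEK_SET);
            }
            time(&now);
        } while (now - start < SECONDS);

        unlink("fstime.tmp");
        /* timebase 0: the score is already a rate (KB per second) */
        fprintf(stderr, "COUNT|%ld|0|KBps\n",
                (total / (long)(now - start)) * (BUFSIZE / 1024));
        fprintf(stderr, "TIME|%.1f\n", (double)(now - start));
        return 0;
    }

The read and copy variants have the same shape (read(), or read() plus write()); the real fstime also handles the sleeps between passes and counts partial blocks proportionally, as described in the Dec 95 notes above.
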
142 | 143 | The following pseudo-test names are aliases for combinations of other 144 | tests: 145 | 146 | arithmetic Runs arithoh, short, int, long, float, double, 147 | and whetstone-double 148 | dhry Alias for dhry2reg 149 | dhrystone Alias for dhry2reg 150 | whets Alias for whetstone-double 151 | whetstone Alias for whetstone-double 152 | load Runs shell1, shell8, and shell16 153 | misc Runs C, dc, and hanoi 154 | speed Runs the arithmetic and system groups 155 | oldsystem Runs execl, fstime, fsbuffer, fsdisk, pipe, context1, 156 | spawn, and syscall 157 | system Runs oldsystem plus shell1, shell8, and shell16 158 | fs Runs fstime-w, fstime-r, fstime, fsbuffer-w, 159 | fsbuffer-r, fsbuffer, fsdisk-w, fsdisk-r, and fsdisk 160 | shell Runs shell1, shell8, and shell16 161 | 162 | index Runs the tests which constitute the official index: 163 | the oldsystem group, plus dhry2reg, whetstone-double, 164 | shell1, and shell8 165 | See "The BYTE Index" below for more information. 166 | graphics Runs the tests which constitute the graphics index: 167 | 2d-rects, 2d-ellipse, 2d-aashapes, 2d-text, 2d-blit, 168 | 2d-window, and ubgears 169 | gindex Runs the index and graphics groups, to generate both 170 | sets of index results 171 | 172 | all Runs all tests 173 | 174 | 175 | ============================================================================ 176 | 177 | The BYTE Index 178 | ============== 179 | 180 | The purpose of this test is to provide a basic indicator of the performance 181 | of a Unix-like system; hence, multiple tests are used to test various 182 | aspects of the system's performance. These test results are then compared 183 | to the scores from a baseline system to produce an index value, which is 184 | generally easier to handle than the raw scores. The entire set of index 185 | values is then combined to make an overall index for the system. 186 | 187 | Since 1995, the baseline system has been "George", a SPARCstation 20-61 188 | with 128 MB RAM, a SPARC Storage Array, and Solaris 2.3, whose ratings 189 | were set at 10.0. (So a system which scores 520 is 52 times faster than 190 | this machine.) Since the numbers are really only useful in a relative 191 | sense, there's no particular reason to update the base system, so for the 192 | sake of consistency it's probably best to leave it alone. George's scores 193 | are in the file "pgms/index.base"; this file is used to calculate the 194 | index scores for any particular run. 195 | 196 | Over the years, various changes have been made to the set of tests in the 197 | index. Although there is a desire for a consistent baseline, various tests 198 | have been determined to be misleading, and have been removed; and a few 199 | alternatives have been added. These changes are detailed in the README, 200 | and should be borne in mind when looking at old scores. 201 | 202 | A number of tests are included in the benchmark suite which are not part of 203 | the index, for various reasons; these tests can of course be run manually. 204 | See "Tests" above. 205 | 206 | 207 | ============================================================================ 208 | 209 | Graphics Tests 210 | ============== 211 | 212 | As of version 5.1, UnixBench now contains some graphics benchmarks. These 213 | are intended to give a rough idea of the general graphics performance of 214 | a system. 215 | 216 | The graphics tests are in categories "2d" and "3d", so the index scores 217 | for these tests are separate from the basic system index.
This seems 218 | like a sensible division, since the graphics performance of a system 219 | depends largely on the graphics adaptor. 220 | 221 | The tests currently consist of some 2D "x11perf" tests and "ubgears". 222 | 223 | * The 2D tests are a selection of the x11perf tests, using the host 224 | system's x11perf command (which must be installed and in the search 225 | path). Only a few of the x11perf tests are used, in the interests 226 | of completing a test run in a reasonable time; if you want to do 227 | detailed diagnosis of an X server or graphics chip, then use x11perf 228 | directly. 229 | 230 | * The 3D test is "ubgears", a modified version of the familiar "glxgears". 231 | This version runs for 5 seconds to "warm up", then performs a timed 232 | run and displays the average frames-per-second. 233 | 234 | On multi-CPU systems, the graphics tests will only run in single-processing 235 | mode. This is because the meaning of running two copies of a test at once 236 | is dubious; and the test windows tend to overlay each other, meaning that 237 | the window behind isn't actually doing any work. 238 | 239 | 240 | ============================================================================ 241 | 242 | Multiple CPUs 243 | ============= 244 | 245 | If your system has multiple CPUs, the default behaviour is to run the selected 246 | tests twice -- once with one copy of each test program running at a time, 247 | and once with N copies, where N is the number of CPUs. (You can override 248 | this with the "-c" option; see "Detailed Usage" above.) This is designed to 249 | allow you to assess: 250 | 251 | - the performance of your system when running a single task 252 | - the performance of your system when running multiple tasks 253 | - the gain from your system's implementation of parallel processing 254 | 255 | The results, however, need to be handled with care. Here are the results 256 | of two runs on a dual-processor system, one in single-processing mode, one 257 | dual-processing: 258 | 259 | Test Single Dual Gain 260 | -------------------- ------ ------ ---- 261 | Dhrystone 2 562.5 1110.3 97% 262 | Double Whetstone 320.0 640.4 100% 263 | Execl Throughput 450.4 880.3 95% 264 | File Copy 1024 759.4 595.9 -22% 265 | File Copy 256 535.8 438.8 -18% 266 | File Copy 4096 1261.8 1043.4 -17% 267 | Pipe Throughput 481.0 979.3 104% 268 | Pipe-based Switching 326.8 1229.0 276% 269 | Process Creation 917.2 1714.1 87% 270 | Shell Scripts (1) 1064.9 1566.3 47% 271 | Shell Scripts (8) 1567.7 1709.9 9% 272 | System Call Overhead 944.2 1445.5 53% 273 | -------------------- ------ ------ ---- 274 | Index Score: 678.2 1026.2 51% 275 | 276 | As expected, the heavily CPU-dependent tasks -- dhrystone, whetstone, 277 | execl, pipe throughput, process creation -- show close to 100% gain when 278 | running 2 copies in parallel. 279 | 280 | The Pipe-based Context Switching test measures context switching overhead 281 | by sending messages back and forth between 2 processes. I don't know why 282 | it shows such a huge gain with 2 copies (ie. 4 processes total) running, 283 | but it seems to be consistent on my system. I think this may be an issue 284 | with the SMP implementation. 285 | 286 | The System Call Overhead shows a lesser gain, presumably because it uses a 287 | lot of CPU time in single-threaded kernel code. 
The shell scripts test with 288 | 8 concurrent processes shows no gain -- because the test itself runs 8 289 | scripts in parallel, it's already using both CPUs, even when the benchmark 290 | is run in single-stream mode. The same test with one process per copy 291 | shows a real gain. 292 | 293 | The filesystem throughput tests show a loss, instead of a gain, when 294 | multi-processing. That there's no gain is to be expected, since the tests 295 | are presumably constrained by the throughput of the I/O subsystem and the 296 | disk drive itself; the drop in performance is presumably down to the 297 | increased contention for resources, and perhaps greater disk head movement. 298 | 299 | So what tests should you use, how many copies should you run, and how should 300 | you interpret the results? Well, that's up to you, since it depends on 301 | what it is you're trying to measure. 302 | 303 | Implementation 304 | -------------- 305 | 306 | The multi-processing mode is implemented at the level of test iterations. 307 | During each iteration of a test, N slave processes are started using fork(). 308 | Each of these slaves executes the test program using fork() and exec(), 309 | reads and stores the entire output, times the run, and prints all the 310 | results to a pipe. The Run script reads the pipes for each of the slaves 311 | in turn to get the results and times. The scores are added, and the times 312 | averaged. 313 | 314 | The result is that each test program has N copies running at once. They 315 | should all finish at around the same time, since they run for constant time. 316 | 317 | If a test program itself starts off K multiple processes (as with the shell8 318 | test), then the effect will be that there are N * K processes running at 319 | once. This is probably not very useful for testing multi-CPU performance. 320 | 321 | 322 | ============================================================================ 323 | 324 | The Language Setting 325 | ==================== 326 | 327 | The $LANG environment variable determines how programs and library 328 | routines interpret text. This can have a big impact on the test results. 329 | 330 | If $LANG is set to POSIX, or is left unset, text is treated as ASCII; if 331 | it is set to en_US.UTF-8, for example, then text is treated as being 332 | encoded in UTF-8, which is more complex and therefore slower. Setting 333 | it to other languages can have varying results. 334 | 335 | To ensure consistency between test runs, the Run script now (as of version 336 | 5.1.1) sets $LANG to "en_US.utf8". 337 | 338 | This setting is configured with the variable "$language". You 339 | should not change this if you want to share your results to allow 340 | comparisons between systems; however, you may want to change it to see 341 | how different language settings affect performance. 342 | 343 | Each test report now includes the language settings in use. The reported 344 | language is what is set in $LANG, and is not necessarily supported by the 345 | system; but we also report the character mapping and collation order which 346 | are actually in use (as reported by "locale"). 347 | 348 | 349 | ============================================================================ 350 | 351 | Interpreting the Results 352 | ======================== 353 | 354 | Interpreting the results of these tests is tricky, and totally depends on 355 | what you're trying to measure. 356 | 357 | For example, are you trying to measure how fast your CPU is?
Or how good 358 | your compiler is? Because these tests are all recompiled using your host 359 | system's compiler, the performance of the compiler will inevitably impact 360 | the performance of the tests. Is this a problem? If you're choosing a 361 | system, you probably care about its overall speed, which may well depend 362 | on how good its compiler is; so including that in the test results may be 363 | the right answer. But you may want to ensure that the right compiler is 364 | used to build the tests. 365 | 366 | On the other hand, with the vast majority of Unix systems being x86 / PC 367 | compatibles, running Linux and the GNU C compiler, the results will tend 368 | to be more dependent on the hardware; but the versions of the compiler and 369 | OS can make a big difference. (I measured a 50% gain between SUSE 10.1 370 | and OpenSUSE 10.2 on the same machine.) So you may want to make sure that 371 | all your test systems are running the same version of the OS; or at least 372 | publish the OS and compiler versions with your results. Then again, it may 373 | be compiler performance that you're interested in. 374 | 375 | The C test is very dubious -- it tests the speed of compilation. If you're 376 | running the exact same compiler on each system, OK; but otherwise, the 377 | results should probably be discarded. A slower compilation doesn't say 378 | anything about the speed of your system, since the compiler may simply be 379 | spending more time to super-optimise the code, which would actually make it 380 | faster. 381 | 382 | This will be particularly true on architectures like IA-64 (Itanium etc.) 383 | where the compiler spends huge amounts of effort scheduling instructions 384 | to run in parallel, with a resultant significant gain in execution speed. 385 | 386 | Some tests are even more dubious in terms of host-dependency -- for example, 387 | the "dc" test uses the host's version of dc (a calculator program). The 388 | version of this which is available can make a huge difference to the score, 389 | which is why it's not in the index group. Read through the release notes 390 | for more on these kinds of issues. 391 | 392 | Another age-old issue is that of the benchmarks being too trivial to be 393 | meaningful. With compilers getting ever smarter, and performing more 394 | wide-ranging flow path analyses, the danger of parts of the benchmarks 395 | simply being optimised out of existence is always present. 396 | 397 | All in all, the "index" and "gindex" tests (see above) are designed to 398 | give a reasonable measure of overall system performance; but the results 399 | of any test run should always be used with care. 400 | 401 | -------------------------------------------------------------------------------- /UnixBench/WRITING_TESTS: -------------------------------------------------------------------------------- 1 | Writing a Test 2 | ============== 3 | 4 | Writing a test program is pretty easy. Basically, a test is configured via 5 | a monster array in the Run script, which specifies (among other things) the 6 | program to execute and the parameters to pass it. 7 | 8 | The test itself is simply a program which is given the optional parameters 9 | on the command line, and produces logging data on stdout and its results on 10 | stderr. 11 | 12 | 13 | ============================================================================ 14 | 15 | Test Configuration 16 | ================== 17 | 18 | In Run, all tests are named in the "$testList" array.
This names the 19 | individual tests, and also sets up aliases for groups of tests, eg. "index". 20 | 21 | The test specifications are in the "$testParams" array. This contains the 22 | details of each individual test as a hash. The fields in the hash are: 23 | 24 | * "logmsg": the full name to display for this test. 25 | * "cat": the category this test belongs to; must be configured 26 | in $testCats. 27 | * "prog": the name of the program to execute; defaults to the name of 28 | the benchmark. 29 | * "repeat": number of passes to run; either 'short' (the default), 30 | 'long', or 'single'. For 'short' and 'long', the actual numbers of 31 | passes are given by $shortIterCount and $longIterCount, which are 32 | configured at the top of the script or by the "-i" flag. 'single' 33 | means just run one pass; this should be used for tests which do their 34 | own multi-pass handling internally. 35 | * "stdout": non-0 to add the test's stdout to the log file; defaults to 1. 36 | Set to 0 for tests that are too wordy. 37 | * "stdin": name of a file to send to the program's stdin; default null. 38 | * "options": options to be put on the program's command line; default null. 39 | 40 | 41 | ============================================================================ 42 | 43 | Output Format 44 | ============= 45 | 46 | The results on stderr take the form of a line header and fields, separated 47 | by "|" characters. A result line can be one of: 48 | 49 | COUNT|score|timebase|label 50 | TIME|seconds 51 | ERROR|message 52 | 53 | Any other text on stderr is treated as if it were: 54 | 55 | ERROR|text 56 | 57 | Any output to stdout is placed in a log file, and can be used for debugging. 58 | 59 | COUNT 60 | ----- 61 | 62 | The COUNT line is the line used to report a test score. 63 | 64 | * "score" is the result, typically the number of loops performed during 65 | the run 66 | * "timebase" is the time base used for the final report to the user. A 67 | value of 1 reports the score as is; a value of 60, for example, divides 68 | the time taken by 60 to get loops per minute. A timebase of zero indicates 69 | that the score is already a rate, ie. a count of things per second. 70 | * "label" is the label to use for the score; like "lps" (loops per 71 | second), etc. 72 | 73 | TIME 74 | ---- 75 | 76 | The TIME line is optionally used to report the time taken. The Run script 77 | normally measures this, but if your test has significant overhead outside the 78 | actual test loop, you should use TIME to report the time taken for the actual 79 | test. The argument is the time in seconds in floating-point. 80 | 81 | ERROR 82 | ----- 83 | 84 | The argument is an error message; this will abort the benchmarking run and 85 | display the message. 86 | 87 | Any output to stderr which is not a formatted line will be treated as an 88 | error message, so use of ERROR is optional. 89 | 90 | 91 | ============================================================================ 92 | 93 | Test Examples 94 | ============= 95 | 96 | Iteration Count 97 | --------------- 98 | 99 | The simplest thing is to count the number of loops executed in a given time; 100 | see eg. arith.c. The utility functions in timeit.c can be used to implement 101 | the fixed time interval, which is generally passed in on the command line. 102 | 103 | The result is reported simply as the number of iterations completed: 104 | 105 | fprintf(stderr,"COUNT|%lu|1|lps\n", iterations); 106 | 107 | The benchmark framework will measure the time taken itself.
If the test 108 | code has significant overhead (eg. a "pump-priming" pass), then you should 109 | explicitly report the time taken for the test by adding a line like this: 110 | 111 | fprintf(stderr, "TIME|%.1f\n", seconds); 112 | 113 | If you want results reported as loops per minute, then set timebase to 60: 114 | 115 | fprintf(stderr,"COUNT|%lu|60|lpm\n", iterations); 116 | 117 | Note that this only affects the final report; all times passed to or 118 | from the test are still in seconds. 119 | 120 | Rate 121 | ---- 122 | 123 | The other technique is to calculate the rate (things per second) in the test, 124 | and report that directly. To do this, just set timebase to 0: 125 | 126 | fprintf(stderr, "COUNT|%ld|0|KBps\n", kbytes_per_sec); 127 | 128 | Again, you can use TIME to explicitly report the time taken: 129 | 130 | fprintf(stderr, "TIME|%.1f\n", end - start); 131 | 132 | but this isn't so important since you've already calculated the rate. 133 | 134 | -------------------------------------------------------------------------------- /UnixBench/pgms/index.base: -------------------------------------------------------------------------------- 1 | # Baseline benchmark scores, used for calculating index results. 2 | 3 | # Scores from "George", a SPARCstation 20-61. 4 | dhry2reg|10|lps|116700|116700|2 5 | whetstone-double|10|MWIPS|55.0|55.0|2 6 | execl|20|lps|43.0|43.0|1 7 | fstime|20|KBps|3960|3960|1 8 | fsbuffer|20|KBps|1655|1655|1 9 | fsdisk|20|KBps|5800|5800|1 10 | pipe|10|lps|12440|12440|2 11 | context1|10|lps|4000|4000|2 12 | spawn|20|lps|126|126|1 13 | shell8|60|lpm|6|6|1 14 | syscall|10|lps|15000|15000|2 15 | 16 | # The shell1 test was added to the index in 5.0, and this baseline score 17 | # was extrapolated to roughly match George's performance. 18 | shell1|60|lpm|42.4|42.4|1 19 | 20 | # The 2D baseline scores were derived from a test run on an HP Compaq nc8430 21 | # with an ATI Mobility Radeon X1600 Video (256MB) — this is a fairly 22 | # common modern adaptor with 3D. The baseline scores here are then 23 | # 1/66.6 of the values from that run, to bring them roughly in line with 24 | # George. (The HP has an index score of 666.6 single-process.) 25 | 2d-rects|3|score|15|15|1 26 | #2d-lines|3|score|15|15|1 27 | #2d-circle|3|score|15|15|1 28 | 2d-ellipse|3|score|15|15|1 29 | #2d-shapes|3|score|15|15|1 30 | 2d-aashapes|3|score|15|15|1 31 | #2d-polys|3|score|15|15|1 32 | 2d-text|3|score|15|15|1 33 | 2d-blit|3|score|15|15|1 34 | 2d-window|3|score|15|15|1 35 | 36 | # The gears test score is derived from a test run on an HP Compaq nc8430 37 | # with an ATI Mobility Radeon X1600 Video (256MB) — this is a fairly 38 | # common modern adaptor with 3D. The baseline scores here are then 39 | # 1/66.6 of the values from that run, to bring them roughly in line with 40 | # George. 41 | ubgears|20|fps|33.4|33.4|3 42 | 43 | # The grep and sysexec tests were added in 5.1.1; they are not index tests, 44 | # but these baseline scores were added for convenience. 45 | grep|30|lpm|1|1|3 46 | sysexec|10|lps|25|25|10 47 | -------------------------------------------------------------------------------- /UnixBench/pgms/multi.sh: -------------------------------------------------------------------------------- 1 | #! 
/bin/sh 2 | ############################################################################### 3 | # The BYTE UNIX Benchmarks - Release 3 4 | # Module: multi.sh SID: 3.4 5/15/91 19:30:24 5 | # 6 | ############################################################################### 7 | # Bug reports, patches, comments, suggestions should be sent to: 8 | # 9 | # Ben Smith or Rick Grehan at BYTE Magazine 10 | # ben@bytepb.UUCP rick_g@bytepb.UUCP 11 | # 12 | ############################################################################### 13 | # Modification Log: 14 | # 15 | ############################################################################### 16 | ID="@(#)multi.sh:3.4 -- 5/15/91 19:30:24"; 17 | instance=1 18 | while [ $instance -le $1 ]; do 19 | /bin/sh "$UB_BINDIR/tst.sh" & 20 | instance=`expr $instance + 1` 21 | done 22 | wait 23 | 24 | -------------------------------------------------------------------------------- /UnixBench/pgms/tst.sh: -------------------------------------------------------------------------------- 1 | #! /bin/sh 2 | ############################################################################### 3 | # The BYTE UNIX Benchmarks - Release 3 4 | # Module: tst.sh SID: 3.4 5/15/91 19:30:24 5 | # 6 | ############################################################################### 7 | # Bug reports, patches, comments, suggestions should be sent to: 8 | # 9 | # Ben Smith or Rick Grehan at BYTE Magazine 10 | # ben@bytepb.UUCP rick_g@bytepb.UUCP 11 | # 12 | ############################################################################### 13 | # Modification Log: 14 | # 15 | ############################################################################### 16 | ID="@(#)tst.sh:3.4 -- 5/15/91 19:30:24"; 17 | sort >sort.$$ od.$$ 19 | grep the sort.$$ | tee grep.$$ | wc > wc.$$ 20 | rm sort.$$ grep.$$ od.$$ wc.$$ 21 | -------------------------------------------------------------------------------- /UnixBench/pgms/unixbench.logo: -------------------------------------------------------------------------------- 1 | 2 | # # # # # # # ##### ###### # # #### # # 3 | # # ## # # # # # # # ## # # # # # 4 | # # # # # # ## ##### ##### # # # # ###### 5 | # # # # # # ## # # # # # # # # # 6 | # # # ## # # # # # # # ## # # # # 7 | #### # # # # # ##### ###### # # #### # # 8 | 9 | 10 | Version 5.1.6 Change getpid method to syscall 11 | 12 | Multi-CPU version Version 5 revisions by Ian Smith, 13 | Sunnyvale, CA, USA 14 | January 13, 2011 johantheghost at yahoo period com 15 | 16 | -------------------------------------------------------------------------------- /UnixBench/src/arith.c: -------------------------------------------------------------------------------- 1 | 2 | /******************************************************************************* 3 | * The BYTE UNIX Benchmarks - Release 3 4 | * Module: arith.c SID: 3.3 5/15/91 19:30:19 5 | * 6 | ******************************************************************************* 7 | * Bug reports, patches, comments, suggestions should be sent to: 8 | * 9 | * Ben Smith, Rick Grehan or Tom Yager 10 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 11 | * 12 | ******************************************************************************* 13 | * Modification Log: 14 | * May 12, 1989 - modified empty loops to avoid nullifying by optimizing 15 | * compilers 16 | * August 28, 1990 - changed timing relationship--now returns total number 17 | * of iterations (ty) 18 | * November 9, 1990 - made changes suggested by Keith Cantrell 19 | * 
(digi!kcantrel) to defeat optimization 20 | * to non-existence 21 | * October 22, 1997 - code cleanup to remove ANSI C compiler warnings 22 | * Andy Kahn 23 | * 24 | ******************************************************************************/ 25 | 26 | char SCCSid[] = "@(#) @(#)arith.c:3.3 -- 5/15/91 19:30:19"; 27 | /* 28 | * arithmetic test 29 | * 30 | */ 31 | 32 | #include 33 | #include 34 | #include "timeit.c" 35 | 36 | int dumb_stuff(int); 37 | 38 | volatile unsigned long iter; 39 | 40 | /* this function is called when the alarm expires */ 41 | void report() 42 | { 43 | fprintf(stderr,"COUNT|%ld|1|lps\n", iter); 44 | exit(0); 45 | } 46 | 47 | int main(argc, argv) 48 | int argc; 49 | char *argv[]; 50 | { 51 | int duration; 52 | int result = 0; 53 | 54 | if (argc != 2) { 55 | printf("Usage: %s duration\n", argv[0]); 56 | exit(1); 57 | } 58 | 59 | duration = atoi(argv[1]); 60 | 61 | /* set up alarm call */ 62 | iter = 0; /* init iteration count */ 63 | wake_me(duration, report); 64 | 65 | /* this loop will be interrupted by the alarm call */ 66 | while (1) 67 | { 68 | /* in switching to time-based (instead of iteration-based), 69 | the following statement was added. It should not skew 70 | the timings too much--there was an increment and test 71 | in the "while" expression above. The only difference is 72 | that now we're incrementing a long instead of an int. (ty) */ 73 | ++iter; 74 | /* the loop calls a function to insure that something is done 75 | the results of the function are fed back in (just so they 76 | they won't be thrown away. A loop with 77 | unused assignments may get optimized out of existence */ 78 | result = dumb_stuff(result); 79 | } 80 | } 81 | 82 | 83 | /************************** dumb_stuff *******************/ 84 | int dumb_stuff(i) 85 | int i; 86 | { 87 | #ifndef arithoh 88 | datum x, y, z; 89 | z = 0; 90 | #endif 91 | /* 92 | * 101 93 | * sum i*i/(i*i-1) 94 | * i=2 95 | */ 96 | /* notice that the i value is always reset by the loop */ 97 | for (i=2; i<=101; i++) 98 | { 99 | #ifndef arithoh 100 | x = i; 101 | y = x*x; 102 | z += y/(y-1); 103 | } 104 | return(x+y+z); 105 | #else 106 | } 107 | return(0); 108 | #endif 109 | } 110 | 111 | -------------------------------------------------------------------------------- /UnixBench/src/big.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: big.c SID: 3.3 5/15/91 19:30:18 4 | * 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * 10/22/97 - code cleanup to remove ANSI C compiler warnings 14 | * Andy Kahn 15 | * 16 | ******************************************************************************/ 17 | /* 18 | * dummy code for execl test [ old version of makework.c ] 19 | * 20 | * makework [ -r rate ] [ -c copyfile ] nusers 21 | * 22 | * job streams are specified on standard input with lines of the form 23 | * full_path_name_for_command [ options ] [ 32 | #include 33 | #include 34 | #include 35 | #include 36 | #include 37 | #include 38 | #include 39 | #include 40 | 41 | 42 | #define DEF_RATE 5.0 43 | #define 
GRANULE 5 44 | #define CHUNK 60 45 | #define MAXCHILD 12 46 | #define MAXWORK 10 47 | 48 | void wrapup(const char *); 49 | void onalarm(int); 50 | void pipeerr(); 51 | void grunt(); 52 | void getwork(void); 53 | #if debug 54 | void dumpwork(void); 55 | #endif 56 | void fatal(const char *s); 57 | 58 | float thres; 59 | float est_rate = DEF_RATE; 60 | int nusers; /* number of concurrent users to be simulated by 61 | * this process */ 62 | int firstuser; /* ordinal identification of first user for this 63 | * process */ 64 | int nwork = 0; /* number of job streams */ 65 | int exit_status = 0; /* returned to parent */ 66 | int sigpipe; /* pipe write error flag */ 67 | 68 | struct st_work { 69 | char *cmd; /* name of command to run */ 70 | char **av; /* arguments to command */ 71 | char *input; /* standard input buffer */ 72 | int inpsize; /* size of standard input buffer */ 73 | char *outf; /* standard output (filename) */ 74 | } work[MAXWORK]; 75 | 76 | struct { 77 | int xmit; /* # characters sent */ 78 | char *bp; /* std input buffer pointer */ 79 | int blen; /* std input buffer length */ 80 | int fd; /* stdin to command */ 81 | int pid; /* child PID */ 82 | char *line; /* start of input line */ 83 | int firstjob; /* inital piece of work */ 84 | int thisjob; /* current piece of work */ 85 | } child[MAXCHILD], *cp; 86 | 87 | int main(argc, argv) 88 | int argc; 89 | char *argv[]; 90 | { 91 | int i; 92 | int l; 93 | int fcopy = 0; /* fd for copy output */ 94 | int master = 1; /* the REAL master, == 0 for clones */ 95 | int nchild; /* no. of children for a clone to run */ 96 | int done; /* count of children finished */ 97 | int output; /* aggregate output char count for all 98 | children */ 99 | int c; 100 | int thiswork = 0; /* next job stream to allocate */ 101 | int nch; /* # characters to write */ 102 | int written; /* # characters actully written */ 103 | char logname[15]; /* name of the log file(s) */ 104 | int pvec[2]; /* for pipes */ 105 | char *p; 106 | char *prog; /* my name */ 107 | 108 | #if ! 
debug 109 | freopen("masterlog.00", "a", stderr); 110 | #endif 111 | prog = argv[0]; 112 | while (argc > 1 && argv[1][0] == '-') { 113 | p = &argv[1][1]; 114 | argc--; 115 | argv++; 116 | while (*p) { 117 | switch (*p) { 118 | case 'r': 119 | est_rate = atoi(argv[1]); 120 | sscanf(argv[1], "%f", &est_rate); 121 | if (est_rate <= 0) { 122 | fprintf(stderr, "%s: bad rate, reset to %.2f chars/sec\n", prog, DEF_RATE); 123 | est_rate = DEF_RATE; 124 | } 125 | argc--; 126 | argv++; 127 | break; 128 | 129 | case 'c': 130 | fcopy = open(argv[1], 1); 131 | if (fcopy < 0) 132 | fcopy = creat(argv[1], 0600); 133 | if (fcopy < 0) { 134 | fprintf(stderr, "%s: cannot open copy file '%s'\n", 135 | prog, argv[1]); 136 | exit(2); 137 | } 138 | lseek(fcopy, 0L, 2); /* append at end of file */ 139 | argc--; 140 | argv++; 141 | break; 142 | 143 | default: 144 | fprintf(stderr, "%s: bad flag '%c'\n", prog, *p); 145 | exit(4); 146 | } 147 | p++; 148 | } 149 | } 150 | 151 | if (argc < 2) { 152 | fprintf(stderr, "%s: missing nusers\n", prog); 153 | exit(4); 154 | } 155 | 156 | nusers = atoi(argv[1]); 157 | if (nusers < 1) { 158 | fprintf(stderr, "%s: impossible nusers (%d<-%s)\n", prog, nusers, argv[1]); 159 | exit(4); 160 | } 161 | fprintf(stderr, "%d Users\n", nusers); 162 | argc--; 163 | argv++; 164 | 165 | /* build job streams */ 166 | getwork(); 167 | #if debug 168 | dumpwork(); 169 | #endif 170 | 171 | /* clone copies of myself to run up to MAXCHILD jobs each */ 172 | firstuser = MAXCHILD; 173 | fprintf(stderr, "master pid %d\n", getpid()); 174 | fflush(stderr); 175 | while (nusers > MAXCHILD) { 176 | fflush(stderr); 177 | if (nusers >= 2*MAXCHILD) 178 | /* the next clone must run MAXCHILD jobs */ 179 | nchild = MAXCHILD; 180 | else 181 | /* the next clone must run the leftover jobs */ 182 | nchild = nusers - MAXCHILD; 183 | if ((l = fork()) == -1) { 184 | /* fork failed */ 185 | fatal("** clone fork failed **\n"); 186 | goto bepatient; 187 | } else if (l > 0) { 188 | fprintf(stderr, "master clone pid %d\n", l); 189 | /* I am the master with nchild fewer jobs to run */ 190 | nusers -= nchild; 191 | firstuser += MAXCHILD; 192 | continue; 193 | } else { 194 | /* I am a clone, run MAXCHILD jobs */ 195 | #if ! debug 196 | sprintf(logname, "masterlog.%02d", firstuser/MAXCHILD); 197 | freopen(logname, "w", stderr); 198 | #endif 199 | master = 0; 200 | nusers = nchild; 201 | break; 202 | } 203 | } 204 | if (master) 205 | firstuser = 0; 206 | 207 | close(0); 208 | for (i = 0; i < nusers; i++ ) { 209 | fprintf(stderr, "user %d job %d ", firstuser+i, thiswork); 210 | if (pipe(pvec) == -1) { 211 | /* this is fatal */ 212 | fatal("** pipe failed **\n"); 213 | goto bepatient; 214 | } 215 | fflush(stderr); 216 | if ((child[i].pid = fork()) == 0) { 217 | int fd; 218 | /* the command */ 219 | if (pvec[0] != 0) { 220 | close(0); 221 | dup(pvec[0]); 222 | } 223 | #if ! debug 224 | sprintf(logname, "userlog.%02d", firstuser+i); 225 | freopen(logname, "w", stderr); 226 | #endif 227 | for (fd = 3; fd < 24; fd++) 228 | close(fd); 229 | if (work[thiswork].outf[0] != '\0') { 230 | /* redirect std output */ 231 | char *q; 232 | for (q = work[thiswork].outf; *q != '\n'; q++) ; 233 | *q = '\0'; 234 | if (freopen(work[thiswork].outf, "w", stdout) == NULL) { 235 | fprintf(stderr, "makework: cannot open %s for std output\n", 236 | work[thiswork].outf); 237 | fflush(stderr); 238 | } 239 | *q = '\n'; 240 | } 241 | execv(work[thiswork].cmd, work[thiswork].av); 242 | /* don't expect to get here! 
*/ 243 | fatal("** exec failed **\n"); 244 | goto bepatient; 245 | } 246 | else if (child[i].pid == -1) { 247 | fatal("** fork failed **\n"); 248 | goto bepatient; 249 | } 250 | else { 251 | close(pvec[0]); 252 | child[i].fd = pvec[1]; 253 | child[i].line = child[i].bp = work[thiswork].input; 254 | child[i].blen = work[thiswork].inpsize; 255 | child[i].thisjob = thiswork; 256 | child[i].firstjob = thiswork; 257 | fprintf(stderr, "pid %d pipe fd %d", child[i].pid, child[i].fd); 258 | if (work[thiswork].outf[0] != '\0') { 259 | char *q; 260 | fprintf(stderr, " > "); 261 | for (q=work[thiswork].outf; *q != '\n'; q++) 262 | fputc(*q, stderr); 263 | } 264 | fputc('\n', stderr); 265 | thiswork++; 266 | if (thiswork >= nwork) 267 | thiswork = 0; 268 | } 269 | } 270 | fflush(stderr); 271 | 272 | srand(time(0)); 273 | thres = 0; 274 | done = output = 0; 275 | for (i = 0; i < nusers; i++) { 276 | if (child[i].blen == 0) 277 | done++; 278 | else 279 | thres += est_rate * GRANULE; 280 | } 281 | est_rate = thres; 282 | 283 | signal(SIGALRM, onalarm); 284 | signal(SIGPIPE, pipeerr); 285 | alarm(GRANULE); 286 | while (done < nusers) { 287 | for (i = 0; i < nusers; i++) { 288 | cp = &child[i]; 289 | if (cp->xmit >= cp->blen) continue; 290 | l = rand() % CHUNK + 1; /* 1-CHUNK chars */ 291 | if (l == 0) continue; 292 | if (cp->xmit + l > cp->blen) 293 | l = cp->blen - cp->xmit; 294 | p = cp->bp; 295 | cp->bp += l; 296 | cp->xmit += l; 297 | #if debug 298 | fprintf(stderr, "child %d, %d processed, %d to go\n", i, cp->xmit, cp->blen - cp->xmit); 299 | #endif 300 | while (p < cp->bp) { 301 | if (*p == '\n' || (p == &cp->bp[-1] && cp->xmit >= cp->blen)) { 302 | /* write it out */ 303 | nch = p - cp->line + 1; 304 | if ((written = write(cp->fd, cp->line, nch)) != nch) { 305 | /* argh! */ 306 | cp->line[nch] = '\0'; 307 | fprintf(stderr, "user %d job %d cmd %s ", 308 | firstuser+i, cp->thisjob, cp->line); 309 | fprintf(stderr, "write(,,%d) returns %d\n", nch, written); 310 | if (sigpipe) 311 | fatal("** SIGPIPE error **\n"); 312 | else 313 | fatal("** write error **\n"); 314 | goto bepatient; 315 | 316 | } 317 | if (fcopy) 318 | write(fcopy, cp->line, p - cp->line + 1); 319 | #if debug 320 | fprintf(stderr, "child %d gets \"", i); 321 | { 322 | char *q = cp->line; 323 | while (q <= p) { 324 | if (*q >= ' ' && *q <= '~') 325 | fputc(*q, stderr); 326 | else 327 | fprintf(stderr, "\\%03o", *q); 328 | q++; 329 | } 330 | } 331 | fputc('"', stderr); 332 | #endif 333 | cp->line = &p[1]; 334 | } 335 | p++; 336 | } 337 | if (cp->xmit >= cp->blen) { 338 | done++; 339 | close(cp->fd); 340 | #if debug 341 | fprintf(stderr, "child %d, close std input\n", i); 342 | #endif 343 | } 344 | output += l; 345 | } 346 | while (output > thres) { 347 | pause(); 348 | #if debug 349 | fprintf(stderr, "after pause: output, thres, done %d %.2f %d\n", output, thres, done); 350 | #endif 351 | } 352 | } 353 | 354 | bepatient: 355 | alarm(0); 356 | /**** 357 | * If everything is going OK, we should simply be able to keep 358 | * looping unitil 'wait' fails, however some descendent process may 359 | * be in a state from which it can never exit, and so a timeout 360 | * is used. 361 | * 5 minutes should be ample, since the time to run all jobs is of 362 | * the order of 5-10 minutes, however some machines are painfully slow, 363 | * so the timeout has been set at 20 minutes (1200 seconds). 
364 | ****/ 365 | signal(SIGALRM, grunt); 366 | alarm(1200); 367 | while ((c = wait(&l)) != -1) { 368 | for (i = 0; i < nusers; i++) { 369 | if (c == child[i].pid) { 370 | fprintf(stderr, "user %d job %d pid %d done", firstuser+i, child[i].thisjob, c); 371 | if (l != 0) { 372 | if (l & 0x7f) 373 | fprintf(stderr, " status %d", l & 0x7f); 374 | if (l & 0xff00) 375 | fprintf(stderr, " exit code %d", (l>>8) & 0xff); 376 | exit_status = 4; 377 | } 378 | fputc('\n', stderr); 379 | c = child[i].pid = -1; 380 | break; 381 | } 382 | } 383 | if (c != -1) { 384 | fprintf(stderr, "master clone done, pid %d ", c); 385 | if (l != 0) { 386 | if (l & 0x7f) 387 | fprintf(stderr, " status %d", l & 0x7f); 388 | if (l & 0xff00) 389 | fprintf(stderr, " exit code %d", (l>>8) & 0xff); 390 | exit_status = 4; 391 | } 392 | fputc('\n', stderr); 393 | } 394 | } 395 | alarm(0); 396 | wrapup("Finished waiting ..."); 397 | 398 | exit(0); 399 | } 400 | 401 | void onalarm(int foo) 402 | { 403 | thres += est_rate; 404 | signal(SIGALRM, onalarm); 405 | alarm(GRANULE); 406 | } 407 | 408 | void grunt() 409 | { 410 | /* timeout after label "bepatient" in main */ 411 | exit_status = 4; 412 | wrapup("Timed out waiting for jobs to finish ..."); 413 | } 414 | 415 | void pipeerr() 416 | { 417 | sigpipe++; 418 | } 419 | 420 | void wrapup(const char *reason) 421 | { 422 | int i; 423 | int killed = 0; 424 | fflush(stderr); 425 | for (i = 0; i < nusers; i++) { 426 | if (child[i].pid > 0 && kill(child[i].pid, SIGKILL) != -1) { 427 | if (!killed) { 428 | killed++; 429 | fprintf(stderr, "%s\n", reason); 430 | fflush(stderr); 431 | } 432 | fprintf(stderr, "user %d job %d pid %d killed off\n", firstuser+i, child[i].thisjob, child[i].pid); 433 | fflush(stderr); 434 | } 435 | } 436 | exit(exit_status); 437 | } 438 | 439 | #define MAXLINE 512 440 | void getwork(void) 441 | { 442 | int i; 443 | int f; 444 | int ac=0; 445 | char *lp = (void *)0; 446 | char *q = (void *)0; 447 | struct st_work *w = (void *)0; 448 | char line[MAXLINE]; 449 | 450 | while (fgets(line, MAXLINE, stdin) != NULL) { 451 | if (nwork >= MAXWORK) { 452 | fprintf(stderr, "Too many jobs specified, .. 
increase MAXWORK\n"); 453 | exit(4); 454 | } 455 | w = &work[nwork]; 456 | lp = line; 457 | i = 1; 458 | while (*lp && *lp != ' ') { 459 | i++; 460 | lp++; 461 | } 462 | w->cmd = (char *)malloc(i); 463 | strncpy(w->cmd, line, i-1); 464 | w->cmd[i-1] = '\0'; 465 | w->inpsize = 0; 466 | w->input = ""; 467 | /* start to build arg list */ 468 | ac = 2; 469 | w->av = (char **)malloc(2*sizeof(char *)); 470 | q = w->cmd; 471 | while (*q) q++; 472 | q--; 473 | while (q >= w->cmd) { 474 | if (*q == '/') { 475 | q++; 476 | break; 477 | } 478 | q--; 479 | } 480 | w->av[0] = q; 481 | while (*lp) { 482 | if (*lp == ' ') { 483 | /* space */ 484 | lp++; 485 | continue; 486 | } 487 | else if (*lp == '<') { 488 | /* standard input for this job */ 489 | q = ++lp; 490 | while (*lp && *lp != ' ') lp++; 491 | *lp = '\0'; 492 | if ((f = open(q, 0)) == -1) { 493 | fprintf(stderr, "cannot open input file (%s) for job %d\n", 494 | q, nwork); 495 | exit(4); 496 | } 497 | /* gobble input */ 498 | w->input = (char *)malloc(512); 499 | while ((i = read(f, &w->input[w->inpsize], 512)) > 0) { 500 | w->inpsize += i; 501 | w->input = (char *)realloc(w->input, w->inpsize+512); 502 | } 503 | w->input = (char *)realloc(w->input, w->inpsize); 504 | close(f); 505 | /* extract stdout file name from line beginning "C=" */ 506 | w->outf = ""; 507 | for (q = w->input; q < &w->input[w->inpsize-10]; q++) { 508 | if (*q == '\n' && strncmp(&q[1], "C=", 2) == 0) { 509 | w->outf = &q[3]; 510 | break; 511 | } 512 | } 513 | #if debug 514 | if (*w->outf) { 515 | fprintf(stderr, "stdout->"); 516 | for (q=w->outf; *q != '\n'; q++) 517 | fputc(*q, stderr); 518 | fputc('\n', stderr); 519 | } 520 | #endif 521 | } 522 | else { 523 | /* a command option */ 524 | ac++; 525 | w->av = (char **)realloc(w->av, ac*sizeof(char *)); 526 | q = lp; 527 | i = 1; 528 | while (*lp && *lp != ' ') { 529 | lp++; 530 | i++; 531 | } 532 | w->av[ac-2] = (char *)malloc(i); 533 | strncpy(w->av[ac-2], q, i-1); 534 | w->av[ac-2][i-1] = '\0'; 535 | } 536 | } 537 | w->av[ac-1] = (char *)0; 538 | nwork++; 539 | } 540 | } 541 | 542 | #if debug 543 | void dumpwork(void) 544 | { 545 | int i; 546 | int j; 547 | 548 | for (i = 0; i < nwork; i++) { 549 | fprintf(stderr, "job %d: cmd: %s\n", i, work[i].cmd); 550 | j = 0; 551 | while (work[i].av[j]) { 552 | fprintf(stderr, "argv[%d]: %s\n", j, work[i].av[j]); 553 | j++; 554 | } 555 | fprintf(stderr, "input: %d chars text: ", work[i].inpsize); 556 | if (work[i].input == (char *)0) 557 | fprintf(stderr, "\n"); 558 | else { 559 | register char *pend; 560 | char *p; 561 | char c; 562 | p = work[i].input; 563 | while (*p) { 564 | pend = p; 565 | while (*pend && *pend != '\n') 566 | pend++; 567 | c = *pend; 568 | *pend = '\0'; 569 | fprintf(stderr, "%s\n", p); 570 | *pend = c; 571 | p = &pend[1]; 572 | } 573 | } 574 | } 575 | } 576 | #endif 577 | 578 | void fatal(const char *s) 579 | { 580 | int i; 581 | fprintf(stderr, "%s", s); 582 | fflush(stderr); 583 | perror("Reason?"); 584 | fflush(stderr); 585 | for (i = 0; i < nusers; i++) { 586 | if (child[i].pid > 0 && kill(child[i].pid, SIGKILL) != -1) { 587 | fprintf(stderr, "pid %d killed off\n", child[i].pid); 588 | fflush(stderr); 589 | } 590 | } 591 | exit_status = 4; 592 | } 593 | -------------------------------------------------------------------------------- /UnixBench/src/context1.c: -------------------------------------------------------------------------------- 1 | 2 | /******************************************************************************* 3 | * The BYTE UNIX Benchmarks 
- Release 3 4 | * Module: context1.c SID: 3.3 5/15/91 19:30:18 5 | * 6 | ******************************************************************************* 7 | * Bug reports, patches, comments, suggestions should be sent to: 8 | * 9 | * Ben Smith, Rick Grehan or Tom Yager 10 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 11 | * 12 | ******************************************************************************* 13 | * Modification Log: 14 | * $Header: context1.c,v 3.4 87/06/22 14:22:59 kjmcdonell Beta $ 15 | * August 28, 1990 - changed timing routines--now returns total number of 16 | * iterations in specified time period 17 | * October 22, 1997 - code cleanup to remove ANSI C compiler warnings 18 | * Andy Kahn 19 | * 20 | ******************************************************************************/ 21 | char SCCSid[] = "@(#) @(#)context1.c:3.3 -- 5/15/91 19:30:18"; 22 | /* 23 | * Context switching via synchronized unbuffered pipe i/o 24 | * 25 | */ 26 | 27 | #define _GNU_SOURCE 28 | #include 29 | #include 30 | #include 31 | #include 32 | #include 33 | #include "timeit.c" 34 | 35 | unsigned long iter; 36 | 37 | void report() 38 | { 39 | fprintf(stderr, "COUNT|%lu|1|lps\n", iter); 40 | exit(0); 41 | } 42 | 43 | static int get_cpu_num(void){ 44 | return sysconf(_SC_NPROCESSORS_ONLN); 45 | } 46 | 47 | int main(argc, argv) 48 | int argc; 49 | char *argv[]; 50 | { 51 | int duration; 52 | unsigned long check; 53 | int p1[2], p2[2]; 54 | ssize_t ret; 55 | int need_affinity; 56 | 57 | if (argc != 2) { 58 | fprintf(stderr, "Usage: context duration\n"); 59 | exit(1); 60 | } 61 | 62 | duration = atoi(argv[1]); 63 | 64 | /* if os has more than one cpu, will bind parent process to cpu 0 and child process to other cpus 65 | * In this way, we can ensure context switch always happen 66 | * */ 67 | need_affinity = get_cpu_num() >> 1; 68 | 69 | /* set up alarm call */ 70 | iter = 0; 71 | wake_me(duration, report); 72 | signal(SIGPIPE, SIG_IGN); 73 | 74 | if (pipe(p1) || pipe(p2)) { 75 | perror("pipe create failed"); 76 | exit(1); 77 | } 78 | 79 | if (fork()) { /* parent process */ 80 | if (need_affinity) { 81 | cpu_set_t pmask; 82 | int i; 83 | CPU_ZERO(&pmask); 84 | for (i = 0; i < need_affinity; i++) 85 | CPU_SET(i, &pmask); 86 | 87 | if (sched_setaffinity(0, sizeof(cpu_set_t), &pmask) == -1) 88 | { 89 | perror("parent sched_setaffinity failed"); 90 | } 91 | } 92 | 93 | /* master, write p1 & read p2 */ 94 | close(p1[0]); close(p2[1]); 95 | while (1) { 96 | if ((ret = write(p1[1], (char *)&iter, sizeof(iter))) != sizeof(iter)) { 97 | if ((ret == -1) && (errno == EPIPE)) { 98 | alarm(0); 99 | report(); /* does not return */ 100 | } 101 | if ((ret == -1) && (errno != 0) && (errno != EINTR)) 102 | perror("master write failed"); 103 | exit(1); 104 | } 105 | if ((ret = read(p2[0], (char *)&check, sizeof(check))) != sizeof(check)) { 106 | if ((ret == 0)) { /* end-of-stream */ 107 | alarm(0); 108 | report(); /* does not return */ 109 | } 110 | if ((ret == -1) && (errno != 0) && (errno != EINTR)) 111 | perror("master read failed"); 112 | exit(1); 113 | } 114 | if (check != iter) { 115 | fprintf(stderr, "Master sync error: expect %lu, got %lu\n", 116 | iter, check); 117 | exit(2); 118 | } 119 | iter++; 120 | } 121 | } 122 | else { /* child process */ 123 | if (need_affinity) { 124 | cpu_set_t pmask; 125 | int i; 126 | CPU_ZERO(&pmask); 127 | for (i = need_affinity; i < (need_affinity << 1); i++) 128 | CPU_SET(i, &pmask); 129 | 130 | if (sched_setaffinity(0, sizeof(cpu_set_t), &pmask) == -1) 
131 | { 132 | perror("child sched_setaffinity failed"); 133 | } 134 | } 135 | /* slave, read p1 & write p2 */ 136 | close(p1[1]); close(p2[0]); 137 | while (1) { 138 | if ((ret = read(p1[0], (char *)&check, sizeof(check))) != sizeof(check)) { 139 | if ((ret == 0)) { /* end-of-stream */ 140 | alarm(0); 141 | report(); /* does not return */ 142 | } 143 | if ((ret == -1) && (errno != 0) && (errno != EINTR)) 144 | perror("slave read failed"); 145 | exit(1); 146 | } 147 | if (check != iter) { 148 | fprintf(stderr, "Slave sync error: expect %lu, got %lu\n", 149 | iter, check); 150 | exit(2); 151 | } 152 | if ((ret = write(p2[1], (char *)&iter, sizeof(iter))) != sizeof(check)) { 153 | if ((ret == -1) && (errno == EPIPE)) { 154 | alarm(0); 155 | report(); /* does not return */ 156 | } 157 | if ((ret == -1) && (errno != 0) && (errno != EINTR)) 158 | perror("slave write failed"); 159 | exit(1); 160 | } 161 | iter++; 162 | } 163 | } 164 | } 165 | -------------------------------------------------------------------------------- /UnixBench/src/dhry_1.c: -------------------------------------------------------------------------------- 1 | /***************************************************************************** 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: dhry_1.c SID: 3.4 5/15/91 19:30:21 4 | * 5 | ***************************************************************************** 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ***************************************************************************** 12 | * 13 | * *** WARNING **** With BYTE's modifications applied, results obtained with 14 | * ******* this version of the Dhrystone program may not be applicable 15 | * to other versions. 16 | * 17 | * Modification Log: 18 | * 10/22/97 - code cleanup to remove ANSI C compiler warnings 19 | * Andy Kahn 20 | * 21 | * Adapted from: 22 | * 23 | * "DHRYSTONE" Benchmark Program 24 | * ----------------------------- 25 | * 26 | * Version: C, Version 2.1 27 | * 28 | * File: dhry_1.c (part 2 of 3) 29 | * 30 | * Date: May 25, 1988 31 | * 32 | * Author: Reinhold P. Weicker 33 | * 34 | ***************************************************************************/ 35 | char SCCSid[] = "@(#) @(#)dhry_1.c:3.4 -- 5/15/91 19:30:21"; 36 | 37 | #include 38 | #include 39 | #include 40 | #include "dhry.h" 41 | #include "timeit.c" 42 | 43 | unsigned long Run_Index; 44 | 45 | void report() 46 | { 47 | fprintf(stderr,"COUNT|%ld|1|lps\n", Run_Index); 48 | exit(0); 49 | } 50 | 51 | /* Global Variables: */ 52 | 53 | Rec_Pointer Ptr_Glob, 54 | Next_Ptr_Glob; 55 | int Int_Glob; 56 | Boolean Bool_Glob; 57 | char Ch_1_Glob, 58 | Ch_2_Glob; 59 | int Arr_1_Glob [50]; 60 | int Arr_2_Glob [50] [50]; 61 | 62 | Enumeration Func_1 (); 63 | /* forward declaration necessary since Enumeration may not simply be int */ 64 | 65 | #ifndef REG 66 | Boolean Reg = false; 67 | #define REG 68 | /* REG becomes defined as empty */ 69 | /* i.e. 
no register variables */ 70 | #else 71 | Boolean Reg = true; 72 | #endif 73 | 74 | /* variables for time measurement: */ 75 | 76 | #ifdef TIMES 77 | #include 78 | #include 79 | #define Too_Small_Time 120 80 | /* Measurements should last at least about 2 seconds */ 81 | #endif 82 | #ifdef TIME 83 | #include 84 | #define Too_Small_Time 2 85 | /* Measurements should last at least 2 seconds */ 86 | #endif 87 | 88 | long Begin_Time, 89 | End_Time, 90 | User_Time; 91 | float Microseconds, 92 | Dhrystones_Per_Second; 93 | 94 | /* end of variables for time measurement */ 95 | 96 | void Proc_1 (REG Rec_Pointer Ptr_Val_Par); 97 | void Proc_2 (One_Fifty *Int_Par_Ref); 98 | void Proc_3 (Rec_Pointer *Ptr_Ref_Par); 99 | void Proc_4 (void); 100 | void Proc_5 (void); 101 | 102 | 103 | extern Boolean Func_2(Str_30, Str_30); 104 | extern void Proc_6(Enumeration, Enumeration *); 105 | extern void Proc_7(One_Fifty, One_Fifty, One_Fifty *); 106 | extern void Proc_8(Arr_1_Dim, Arr_2_Dim, int, int); 107 | 108 | int main (argc, argv) 109 | int argc; 110 | char *argv[]; 111 | /* main program, corresponds to procedures */ 112 | /* Main and Proc_0 in the Ada version */ 113 | { 114 | int duration; 115 | One_Fifty Int_1_Loc; 116 | REG One_Fifty Int_2_Loc; 117 | One_Fifty Int_3_Loc; 118 | REG char Ch_Index; 119 | Enumeration Enum_Loc; 120 | Str_30 Str_1_Loc; 121 | Str_30 Str_2_Loc; 122 | 123 | /* Initializations */ 124 | 125 | Next_Ptr_Glob = (Rec_Pointer) malloc (sizeof (Rec_Type)); 126 | Ptr_Glob = (Rec_Pointer) malloc (sizeof (Rec_Type)); 127 | 128 | Ptr_Glob->Ptr_Comp = Next_Ptr_Glob; 129 | Ptr_Glob->Discr = Ident_1; 130 | Ptr_Glob->variant.var_1.Enum_Comp = Ident_3; 131 | Ptr_Glob->variant.var_1.Int_Comp = 40; 132 | strcpy (Ptr_Glob->variant.var_1.Str_Comp, 133 | "DHRYSTONE PROGRAM, SOME STRING"); 134 | strcpy (Str_1_Loc, "DHRYSTONE PROGRAM, 1'ST STRING"); 135 | 136 | Arr_2_Glob [8][7] = 10; 137 | /* Was missing in published program. Without this statement, */ 138 | /* Arr_2_Glob [8][7] would have an undefined value. */ 139 | /* Warning: With 16-Bit processors and Number_Of_Runs > 32000, */ 140 | /* overflow may occur for this array element. 
*/ 141 | 142 | #ifdef PRATTLE 143 | printf ("\n"); 144 | printf ("Dhrystone Benchmark, Version 2.1 (Language: C)\n"); 145 | printf ("\n"); 146 | if (Reg) 147 | { 148 | printf ("Program compiled with 'register' attribute\n"); 149 | printf ("\n"); 150 | } 151 | else 152 | { 153 | printf ("Program compiled without 'register' attribute\n"); 154 | printf ("\n"); 155 | } 156 | printf ("Please give the number of runs through the benchmark: "); 157 | { 158 | int n; 159 | scanf ("%d", &n); 160 | Number_Of_Runs = n; 161 | } 162 | printf ("\n"); 163 | 164 | printf ("Execution starts, %d runs through Dhrystone\n", Number_Of_Runs); 165 | #endif /* PRATTLE */ 166 | 167 | if (argc != 2) { 168 | fprintf(stderr, "Usage: %s duration\n", argv[0]); 169 | exit(1); 170 | } 171 | 172 | duration = atoi(argv[1]); 173 | Run_Index = 0; 174 | wake_me(duration, report); 175 | 176 | /***************/ 177 | /* Start timer */ 178 | /***************/ 179 | 180 | #ifdef SELF_TIMED 181 | #ifdef TIMES 182 | times (&time_info); 183 | Begin_Time = (long) time_info.tms_utime; 184 | #endif 185 | #ifdef TIME 186 | Begin_Time = time ( (long *) 0); 187 | #endif 188 | #endif /* SELF_TIMED */ 189 | 190 | for (Run_Index = 1; ; ++Run_Index) 191 | { 192 | 193 | Proc_5(); 194 | Proc_4(); 195 | /* Ch_1_Glob == 'A', Ch_2_Glob == 'B', Bool_Glob == true */ 196 | Int_1_Loc = 2; 197 | Int_2_Loc = 3; 198 | strcpy (Str_2_Loc, "DHRYSTONE PROGRAM, 2'ND STRING"); 199 | Enum_Loc = Ident_2; 200 | Bool_Glob = ! Func_2 (Str_1_Loc, Str_2_Loc); 201 | /* Bool_Glob == 1 */ 202 | while (Int_1_Loc < Int_2_Loc) /* loop body executed once */ 203 | { 204 | Int_3_Loc = 5 * Int_1_Loc - Int_2_Loc; 205 | /* Int_3_Loc == 7 */ 206 | Proc_7 (Int_1_Loc, Int_2_Loc, &Int_3_Loc); 207 | /* Int_3_Loc == 7 */ 208 | Int_1_Loc += 1; 209 | } /* while */ 210 | /* Int_1_Loc == 3, Int_2_Loc == 3, Int_3_Loc == 7 */ 211 | Proc_8 (Arr_1_Glob, Arr_2_Glob, Int_1_Loc, Int_3_Loc); 212 | /* Int_Glob == 5 */ 213 | Proc_1 (Ptr_Glob); 214 | for (Ch_Index = 'A'; Ch_Index <= Ch_2_Glob; ++Ch_Index) 215 | /* loop body executed twice */ 216 | { 217 | if (Enum_Loc == Func_1 (Ch_Index, 'C')) 218 | /* then, not executed */ 219 | { 220 | Proc_6 (Ident_1, &Enum_Loc); 221 | strcpy (Str_2_Loc, "DHRYSTONE PROGRAM, 3'RD STRING"); 222 | Int_2_Loc = Run_Index; 223 | Int_Glob = Run_Index; 224 | } 225 | } 226 | /* Int_1_Loc == 3, Int_2_Loc == 3, Int_3_Loc == 7 */ 227 | Int_2_Loc = Int_2_Loc * Int_1_Loc; 228 | Int_1_Loc = Int_2_Loc / Int_3_Loc; 229 | Int_2_Loc = 7 * (Int_2_Loc - Int_3_Loc) - Int_1_Loc; 230 | /* Int_1_Loc == 1, Int_2_Loc == 13, Int_3_Loc == 7 */ 231 | Proc_2 (&Int_1_Loc); 232 | /* Int_1_Loc == 5 */ 233 | 234 | } /* loop "for Run_Index" */ 235 | 236 | /**************/ 237 | /* Stop timer */ 238 | /**************/ 239 | #ifdef SELF_TIMED 240 | #ifdef TIMES 241 | times (&time_info); 242 | End_Time = (long) time_info.tms_utime; 243 | #endif 244 | #ifdef TIME 245 | End_Time = time ( (long *) 0); 246 | #endif 247 | #endif /* SELF_TIMED */ 248 | 249 | /* BYTE version never executes this stuff */ 250 | #ifdef SELF_TIMED 251 | printf ("Execution ends\n"); 252 | printf ("\n"); 253 | printf ("Final values of the variables used in the benchmark:\n"); 254 | printf ("\n"); 255 | printf ("Int_Glob: %d\n", Int_Glob); 256 | printf (" should be: %d\n", 5); 257 | printf ("Bool_Glob: %d\n", Bool_Glob); 258 | printf (" should be: %d\n", 1); 259 | printf ("Ch_1_Glob: %c\n", Ch_1_Glob); 260 | printf (" should be: %c\n", 'A'); 261 | printf ("Ch_2_Glob: %c\n", Ch_2_Glob); 262 | printf (" should be: %c\n", 'B'); 263 | 
printf ("Arr_1_Glob[8]: %d\n", Arr_1_Glob[8]); 264 | printf (" should be: %d\n", 7); 265 | printf ("Arr_2_Glob[8][7]: %d\n", Arr_2_Glob[8][7]); 266 | printf (" should be: Number_Of_Runs + 10\n"); 267 | printf ("Ptr_Glob->\n"); 268 | printf (" Ptr_Comp: %d\n", (int) Ptr_Glob->Ptr_Comp); 269 | printf (" should be: (implementation-dependent)\n"); 270 | printf (" Discr: %d\n", Ptr_Glob->Discr); 271 | printf (" should be: %d\n", 0); 272 | printf (" Enum_Comp: %d\n", Ptr_Glob->variant.var_1.Enum_Comp); 273 | printf (" should be: %d\n", 2); 274 | printf (" Int_Comp: %d\n", Ptr_Glob->variant.var_1.Int_Comp); 275 | printf (" should be: %d\n", 17); 276 | printf (" Str_Comp: %s\n", Ptr_Glob->variant.var_1.Str_Comp); 277 | printf (" should be: DHRYSTONE PROGRAM, SOME STRING\n"); 278 | printf ("Next_Ptr_Glob->\n"); 279 | printf (" Ptr_Comp: %d\n", (int) Next_Ptr_Glob->Ptr_Comp); 280 | printf (" should be: (implementation-dependent), same as above\n"); 281 | printf (" Discr: %d\n", Next_Ptr_Glob->Discr); 282 | printf (" should be: %d\n", 0); 283 | printf (" Enum_Comp: %d\n", Next_Ptr_Glob->variant.var_1.Enum_Comp); 284 | printf (" should be: %d\n", 1); 285 | printf (" Int_Comp: %d\n", Next_Ptr_Glob->variant.var_1.Int_Comp); 286 | printf (" should be: %d\n", 18); 287 | printf (" Str_Comp: %s\n", 288 | Next_Ptr_Glob->variant.var_1.Str_Comp); 289 | printf (" should be: DHRYSTONE PROGRAM, SOME STRING\n"); 290 | printf ("Int_1_Loc: %d\n", Int_1_Loc); 291 | printf (" should be: %d\n", 5); 292 | printf ("Int_2_Loc: %d\n", Int_2_Loc); 293 | printf (" should be: %d\n", 13); 294 | printf ("Int_3_Loc: %d\n", Int_3_Loc); 295 | printf (" should be: %d\n", 7); 296 | printf ("Enum_Loc: %d\n", Enum_Loc); 297 | printf (" should be: %d\n", 1); 298 | printf ("Str_1_Loc: %s\n", Str_1_Loc); 299 | printf (" should be: DHRYSTONE PROGRAM, 1'ST STRING\n"); 300 | printf ("Str_2_Loc: %s\n", Str_2_Loc); 301 | printf (" should be: DHRYSTONE PROGRAM, 2'ND STRING\n"); 302 | printf ("\n"); 303 | 304 | User_Time = End_Time - Begin_Time; 305 | 306 | if (User_Time < Too_Small_Time) 307 | { 308 | printf ("Measured time too small to obtain meaningful results\n"); 309 | printf ("Please increase number of runs\n"); 310 | printf ("\n"); 311 | } 312 | else 313 | { 314 | #ifdef TIME 315 | Microseconds = (float) User_Time * Mic_secs_Per_Second 316 | / (float) Number_Of_Runs; 317 | Dhrystones_Per_Second = (float) Number_Of_Runs / (float) User_Time; 318 | #else 319 | Microseconds = (float) User_Time * Mic_secs_Per_Second 320 | / ((float) HZ * ((float) Number_Of_Runs)); 321 | Dhrystones_Per_Second = ((float) HZ * (float) Number_Of_Runs) 322 | / (float) User_Time; 323 | #endif 324 | printf ("Microseconds for one run through Dhrystone: "); 325 | printf ("%6.1f \n", Microseconds); 326 | printf ("Dhrystones per Second: "); 327 | printf ("%6.1f \n", Dhrystones_Per_Second); 328 | printf ("\n"); 329 | } 330 | #endif /* SELF_TIMED */ 331 | } 332 | 333 | 334 | void Proc_1 (REG Rec_Pointer Ptr_Val_Par) 335 | /* executed once */ 336 | { 337 | REG Rec_Pointer Next_Record = Ptr_Val_Par->Ptr_Comp; 338 | /* == Ptr_Glob_Next */ 339 | /* Local variable, initialized with Ptr_Val_Par->Ptr_Comp, */ 340 | /* corresponds to "rename" in Ada, "with" in Pascal */ 341 | 342 | structassign (*Ptr_Val_Par->Ptr_Comp, *Ptr_Glob); 343 | Ptr_Val_Par->variant.var_1.Int_Comp = 5; 344 | Next_Record->variant.var_1.Int_Comp 345 | = Ptr_Val_Par->variant.var_1.Int_Comp; 346 | Next_Record->Ptr_Comp = Ptr_Val_Par->Ptr_Comp; 347 | Proc_3 (&Next_Record->Ptr_Comp); 348 | /* 
Ptr_Val_Par->Ptr_Comp->Ptr_Comp 349 | == Ptr_Glob->Ptr_Comp */ 350 | if (Next_Record->Discr == Ident_1) 351 | /* then, executed */ 352 | { 353 | Next_Record->variant.var_1.Int_Comp = 6; 354 | Proc_6 (Ptr_Val_Par->variant.var_1.Enum_Comp, 355 | &Next_Record->variant.var_1.Enum_Comp); 356 | Next_Record->Ptr_Comp = Ptr_Glob->Ptr_Comp; 357 | Proc_7 (Next_Record->variant.var_1.Int_Comp, 10, 358 | &Next_Record->variant.var_1.Int_Comp); 359 | } 360 | else /* not executed */ 361 | structassign (*Ptr_Val_Par, *Ptr_Val_Par->Ptr_Comp); 362 | } /* Proc_1 */ 363 | 364 | 365 | void Proc_2 (One_Fifty *Int_Par_Ref) 366 | /* executed once */ 367 | /* *Int_Par_Ref == 1, becomes 4 */ 368 | { 369 | One_Fifty Int_Loc; 370 | Enumeration Enum_Loc; 371 | 372 | Enum_Loc = 0; 373 | 374 | Int_Loc = *Int_Par_Ref + 10; 375 | do /* executed once */ 376 | if (Ch_1_Glob == 'A') 377 | /* then, executed */ 378 | { 379 | Int_Loc -= 1; 380 | *Int_Par_Ref = Int_Loc - Int_Glob; 381 | Enum_Loc = Ident_1; 382 | } /* if */ 383 | while (Enum_Loc != Ident_1); /* true */ 384 | } /* Proc_2 */ 385 | 386 | 387 | void Proc_3 (Rec_Pointer *Ptr_Ref_Par) 388 | /* executed once */ 389 | /* Ptr_Ref_Par becomes Ptr_Glob */ 390 | { 391 | if (Ptr_Glob != Null) 392 | /* then, executed */ 393 | *Ptr_Ref_Par = Ptr_Glob->Ptr_Comp; 394 | Proc_7 (10, Int_Glob, &Ptr_Glob->variant.var_1.Int_Comp); 395 | } /* Proc_3 */ 396 | 397 | 398 | void Proc_4 (void) /* without parameters */ 399 | /* executed once */ 400 | { 401 | Boolean Bool_Loc; 402 | 403 | Bool_Loc = Ch_1_Glob == 'A'; 404 | Bool_Glob = Bool_Loc | Bool_Glob; 405 | Ch_2_Glob = 'B'; 406 | } /* Proc_4 */ 407 | 408 | void Proc_5 (void) /* without parameters */ 409 | /*******/ 410 | /* executed once */ 411 | { 412 | Ch_1_Glob = 'A'; 413 | Bool_Glob = false; 414 | } /* Proc_5 */ 415 | 416 | 417 | /* Procedure for the assignment of structures, */ 418 | /* if the C compiler doesn't support this feature */ 419 | #ifdef NOSTRUCTASSIGN 420 | memcpy (d, s, l) 421 | register char *d; 422 | register char *s; 423 | register int l; 424 | { 425 | while (l--) *d++ = *s++; 426 | } 427 | #endif 428 | 429 | 430 | -------------------------------------------------------------------------------- /UnixBench/src/dhry_2.c: -------------------------------------------------------------------------------- 1 | /***************************************************************************** 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: dhry_2.c SID: 3.4 5/15/91 19:30:22 4 | * 5 | ***************************************************************************** 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ***************************************************************************** 12 | * Modification Log: 13 | * 10/22/97 - code cleanup to remove ANSI C compiler warnings 14 | * Andy Kahn 15 | * 16 | * Adapted from: 17 | * 18 | * "DHRYSTONE" Benchmark Program 19 | * ----------------------------- 20 | * 21 | * **** WARNING **** See warning in n.dhry_1.c 22 | * 23 | * Version: C, Version 2.1 24 | * 25 | * File: dhry_2.c (part 3 of 3) 26 | * 27 | * Date: May 25, 1988 28 | * 29 | * Author: Reinhold P. Weicker 30 | * 31 | ****************************************************************************/ 32 | /* SCCSid is defined in dhry_1.c */ 33 | 34 | #include 35 | #include "dhry.h" 36 | 37 | #ifndef REG 38 | #define REG 39 | /* REG becomes defined as empty */ 40 | /* i.e. 
no register variables */ 41 | #endif 42 | 43 | extern int Int_Glob; 44 | extern char Ch_1_Glob; 45 | 46 | void Proc_6(Enumeration, Enumeration *); 47 | void Proc_7(One_Fifty, One_Fifty, One_Fifty *); 48 | void Proc_8(Arr_1_Dim, Arr_2_Dim, int, int); 49 | Enumeration Func_1(Capital_Letter, Capital_Letter); 50 | Boolean Func_2(Str_30, Str_30); 51 | Boolean Func_3(Enumeration); 52 | 53 | void Proc_6 (Enumeration Enum_Val_Par, Enumeration *Enum_Ref_Par) 54 | /* executed once */ 55 | /* Enum_Val_Par == Ident_3, Enum_Ref_Par becomes Ident_2 */ 56 | { 57 | *Enum_Ref_Par = Enum_Val_Par; 58 | if (! Func_3 (Enum_Val_Par)) 59 | /* then, not executed */ 60 | *Enum_Ref_Par = Ident_4; 61 | switch (Enum_Val_Par) 62 | { 63 | case Ident_1: 64 | *Enum_Ref_Par = Ident_1; 65 | break; 66 | case Ident_2: 67 | if (Int_Glob > 100) 68 | /* then */ 69 | *Enum_Ref_Par = Ident_1; 70 | else *Enum_Ref_Par = Ident_4; 71 | break; 72 | case Ident_3: /* executed */ 73 | *Enum_Ref_Par = Ident_2; 74 | break; 75 | case Ident_4: break; 76 | case Ident_5: 77 | *Enum_Ref_Par = Ident_3; 78 | break; 79 | } /* switch */ 80 | } /* Proc_6 */ 81 | 82 | void Proc_7 (Int_1_Par_Val, Int_2_Par_Val, Int_Par_Ref) 83 | One_Fifty Int_1_Par_Val; 84 | One_Fifty Int_2_Par_Val; 85 | One_Fifty *Int_Par_Ref; 86 | /**********************************************/ 87 | /* executed three times */ 88 | /* first call: Int_1_Par_Val == 2, Int_2_Par_Val == 3, */ 89 | /* Int_Par_Ref becomes 7 */ 90 | /* second call: Int_1_Par_Val == 10, Int_2_Par_Val == 5, */ 91 | /* Int_Par_Ref becomes 17 */ 92 | /* third call: Int_1_Par_Val == 6, Int_2_Par_Val == 10, */ 93 | /* Int_Par_Ref becomes 18 */ 94 | { 95 | One_Fifty Int_Loc; 96 | 97 | Int_Loc = Int_1_Par_Val + 2; 98 | *Int_Par_Ref = Int_2_Par_Val + Int_Loc; 99 | } /* Proc_7 */ 100 | 101 | 102 | void Proc_8 (Arr_1_Par_Ref, Arr_2_Par_Ref, Int_1_Par_Val, Int_2_Par_Val) 103 | /*********************************************************************/ 104 | /* executed once */ 105 | /* Int_Par_Val_1 == 3 */ 106 | /* Int_Par_Val_2 == 7 */ 107 | Arr_1_Dim Arr_1_Par_Ref; 108 | Arr_2_Dim Arr_2_Par_Ref; 109 | int Int_1_Par_Val; 110 | int Int_2_Par_Val; 111 | { 112 | REG One_Fifty Int_Index; 113 | REG One_Fifty Int_Loc; 114 | 115 | Int_Loc = Int_1_Par_Val + 5; 116 | Arr_1_Par_Ref [Int_Loc] = Int_2_Par_Val; 117 | Arr_1_Par_Ref [Int_Loc+1] = Arr_1_Par_Ref [Int_Loc]; 118 | Arr_1_Par_Ref [Int_Loc+30] = Int_Loc; 119 | for (Int_Index = Int_Loc; Int_Index <= Int_Loc+1; ++Int_Index) 120 | Arr_2_Par_Ref [Int_Loc] [Int_Index] = Int_Loc; 121 | Arr_2_Par_Ref [Int_Loc] [Int_Loc-1] += 1; 122 | Arr_2_Par_Ref [Int_Loc+20] [Int_Loc] = Arr_1_Par_Ref [Int_Loc]; 123 | Int_Glob = 5; 124 | } /* Proc_8 */ 125 | 126 | 127 | Enumeration Func_1 (Capital_Letter Ch_1_Par_Val, Capital_Letter Ch_2_Par_Val) 128 | /*************************************************/ 129 | /* executed three times */ 130 | /* first call: Ch_1_Par_Val == 'H', Ch_2_Par_Val == 'R' */ 131 | /* second call: Ch_1_Par_Val == 'A', Ch_2_Par_Val == 'C' */ 132 | /* third call: Ch_1_Par_Val == 'B', Ch_2_Par_Val == 'C' */ 133 | { 134 | Capital_Letter Ch_1_Loc; 135 | Capital_Letter Ch_2_Loc; 136 | 137 | Ch_1_Loc = Ch_1_Par_Val; 138 | Ch_2_Loc = Ch_1_Loc; 139 | if (Ch_2_Loc != Ch_2_Par_Val) 140 | /* then, executed */ 141 | return (Ident_1); 142 | else /* not executed */ 143 | { 144 | Ch_1_Glob = Ch_1_Loc; 145 | return (Ident_2); 146 | } 147 | } /* Func_1 */ 148 | 149 | 150 | 151 | Boolean Func_2 (Str_1_Par_Ref, Str_2_Par_Ref) 152 | /*************************************************/ 153 | /* 
executed once */ 154 | /* Str_1_Par_Ref == "DHRYSTONE PROGRAM, 1'ST STRING" */ 155 | /* Str_2_Par_Ref == "DHRYSTONE PROGRAM, 2'ND STRING" */ 156 | 157 | Str_30 Str_1_Par_Ref; 158 | Str_30 Str_2_Par_Ref; 159 | { 160 | REG One_Thirty Int_Loc; 161 | Capital_Letter Ch_Loc; 162 | 163 | Ch_Loc = 'A'; 164 | Int_Loc = 2; 165 | while (Int_Loc <= 2) /* loop body executed once */ 166 | if (Func_1 (Str_1_Par_Ref[Int_Loc], 167 | Str_2_Par_Ref[Int_Loc+1]) == Ident_1) 168 | /* then, executed */ 169 | { 170 | Ch_Loc = 'A'; 171 | Int_Loc += 1; 172 | } /* if, while */ 173 | if (Ch_Loc >= 'W' && Ch_Loc < 'Z') 174 | /* then, not executed */ 175 | Int_Loc = 7; 176 | if (Ch_Loc == 'R') 177 | /* then, not executed */ 178 | return (true); 179 | else /* executed */ 180 | { 181 | if (strcmp (Str_1_Par_Ref, Str_2_Par_Ref) > 0) 182 | /* then, not executed */ 183 | { 184 | Int_Loc += 7; 185 | Int_Glob = Int_Loc; 186 | return (true); 187 | } 188 | else /* executed */ 189 | return (false); 190 | } /* if Ch_Loc */ 191 | } /* Func_2 */ 192 | 193 | 194 | Boolean Func_3 (Enum_Par_Val) 195 | /***************************/ 196 | /* executed once */ 197 | /* Enum_Par_Val == Ident_3 */ 198 | Enumeration Enum_Par_Val; 199 | { 200 | Enumeration Enum_Loc; 201 | 202 | Enum_Loc = Enum_Par_Val; 203 | if (Enum_Loc == Ident_3) 204 | /* then, executed */ 205 | return (true); 206 | else /* not executed */ 207 | return (false); 208 | } /* Func_3 */ 209 | 210 | -------------------------------------------------------------------------------- /UnixBench/src/dummy.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: dummy.c SID: 3.3 5/15/91 19:30:19 4 | * 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * 10/22/97 - code cleanup to remove ANSI C compiler warnings 14 | * Andy Kahn 15 | * 16 | ******************************************************************************/ 17 | /* 18 | * Hacked up C program for use in the standard shell.? scripts of 19 | * the multiuser test. This is based upon makework.c, and is typically 20 | * edited using edscript.2 before compilation. 
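
The driver below paces canned input into the standard input of command children over pipes, simulating interactive users. As a minimal, self-contained sketch of that pipe-fed-child pattern (the "wc -l" command, the single input line and the bare-bones error handling are illustrative choices, not taken from dummy.c):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int pv[2];

    if (pipe(pv) == -1) {
        perror("pipe");
        return 1;
    }
    if (fork() == 0) {                  // child: its stdin is the pipe's read end
        close(pv[1]);
        dup2(pv[0], 0);
        close(pv[0]);
        execlp("wc", "wc", "-l", (char *)0);
        _exit(99);                      // exec failed
    }
    close(pv[0]);                       // parent: play the "user" typing a line
    write(pv[1], "echo hello\n", 11);
    close(pv[1]);                       // EOF lets the child finish
    wait(NULL);
    return 0;
}
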
21 | * 22 | * $Header: dummy.c,v 3.4 87/06/23 15:54:53 kjmcdonell Beta $ 23 | */ 24 | char SCCSid[] = "@(#) @(#)dummy.c:3.3 -- 5/15/91 19:30:19"; 25 | 26 | #include 27 | #include 28 | 29 | #define DEF_RATE 5.0 30 | #define GRANULE 5 31 | #define CHUNK 60 32 | #define MAXCHILD 12 33 | #define MAXWORK 10 34 | 35 | float thres; 36 | float est_rate = DEF_RATE; 37 | int nusers; /* number of concurrent users to be simulated by 38 | * this process */ 39 | int firstuser; /* ordinal identification of first user for this 40 | * process */ 41 | int nwork = 0; /* number of job streams */ 42 | int exit_status = 0; /* returned to parent */ 43 | int sigpipe; /* pipe write error flag */ 44 | 45 | struct st_work { 46 | char *cmd; /* name of command to run */ 47 | char **av; /* arguments to command */ 48 | char *input; /* standard input buffer */ 49 | int inpsize; /* size of standard input buffer */ 50 | } work[MAXWORK]; 51 | 52 | struct { 53 | int xmit; /* # characters sent */ 54 | char *bp; /* std input buffer pointer */ 55 | int blen; /* std input buffer length */ 56 | int fd; /* stdin to command */ 57 | int pid; /* child PID */ 58 | char *line; /* start of input line */ 59 | int firstjob; /* inital piece of work */ 60 | int thisjob; /* current piece of work */ 61 | } child[MAXCHILD], *cp; 62 | 63 | main(argc, argv) 64 | int argc; 65 | char *argv[]; 66 | { 67 | int i; 68 | int l; 69 | int fcopy = 0; /* fd for copy output */ 70 | int master = 1; /* the REAL master, == 0 for clones */ 71 | int nchild; /* no. of children for a clone to run */ 72 | int done; /* count of children finished */ 73 | int output; /* aggregate output char count for all 74 | children */ 75 | int c; 76 | int thiswork = 0; /* next job stream to allocate */ 77 | int nch; /* # characters to write */ 78 | int written; /* # characters actully written */ 79 | char logname[15]; /* name of the log file(s) */ 80 | void onalarm(void); 81 | void pipeerr(void); 82 | void wrapup(void); 83 | void grunt(void); 84 | char *malloc(); 85 | int pvec[2]; /* for pipes */ 86 | char *p; 87 | char *prog; /* my name */ 88 | 89 | #if ! 
debug 90 | freopen("masterlog.00", "a", stderr); 91 | #endif 92 | fprintf(stderr, "*** New Run *** "); 93 | prog = argv[0]; 94 | while (argc > 1 && argv[1][0] == '-') { 95 | p = &argv[1][1]; 96 | argc--; 97 | argv++; 98 | while (*p) { 99 | switch (*p) { 100 | case 'r': 101 | /* code DELETED here */ 102 | argc--; 103 | argv++; 104 | break; 105 | 106 | case 'c': 107 | /* code DELETED here */ 108 | lseek(fcopy, 0L, 2); /* append at end of file */ 109 | break; 110 | 111 | default: 112 | fprintf(stderr, "%s: bad flag '%c'\n", prog, *p); 113 | exit(4); 114 | } 115 | p++; 116 | } 117 | } 118 | 119 | if (argc < 2) { 120 | fprintf(stderr, "%s: missing nusers\n", prog); 121 | exit(4); 122 | } 123 | 124 | nusers = atoi(argv[1]); 125 | if (nusers < 1) { 126 | fprintf(stderr, "%s: impossible nusers (%d<-%s)\n", prog, nusers, argv[1]); 127 | exit(4); 128 | } 129 | fprintf(stderr, "%d Users\n", nusers); 130 | argc--; 131 | argv++; 132 | 133 | /* build job streams */ 134 | getwork(); 135 | #if debug 136 | dumpwork(); 137 | #endif 138 | 139 | /* clone copies of myself to run up to MAXCHILD jobs each */ 140 | firstuser = MAXCHILD; 141 | fprintf(stderr, "master pid %d\n", getpid()); 142 | fflush(stderr); 143 | while (nusers > MAXCHILD) { 144 | fflush(stderr); 145 | if (nusers >= 2*MAXCHILD) 146 | /* the next clone must run MAXCHILD jobs */ 147 | nchild = MAXCHILD; 148 | else 149 | /* the next clone must run the leftover jobs */ 150 | nchild = nusers - MAXCHILD; 151 | if ((l = fork()) == -1) { 152 | /* fork failed */ 153 | fatal("** clone fork failed **\n"); 154 | goto bepatient; 155 | } else if (l > 0) { 156 | fprintf(stderr, "master clone pid %d\n", l); 157 | /* I am the master with nchild fewer jobs to run */ 158 | nusers -= nchild; 159 | firstuser += MAXCHILD; 160 | continue; 161 | } else { 162 | /* I am a clone, run MAXCHILD jobs */ 163 | #if ! 
debug 164 | sprintf(logname, "masterlog.%02d", firstuser/MAXCHILD); 165 | freopen(logname, "w", stderr); 166 | #endif 167 | master = 0; 168 | nusers = nchild; 169 | break; 170 | } 171 | } 172 | if (master) 173 | firstuser = 0; 174 | 175 | close(0); 176 | 177 | /* code DELETED here */ 178 | 179 | fflush(stderr); 180 | 181 | srand(time(0)); 182 | thres = 0; 183 | done = output = 0; 184 | for (i = 0; i < nusers; i++) { 185 | if (child[i].blen == 0) 186 | done++; 187 | else 188 | thres += est_rate * GRANULE; 189 | } 190 | est_rate = thres; 191 | 192 | signal(SIGALRM, onalarm); 193 | signal(SIGPIPE, pipeerr); 194 | alarm(GRANULE); 195 | while (done < nusers) { 196 | for (i = 0; i < nusers; i++) { 197 | cp = &child[i]; 198 | if (cp->xmit >= cp->blen) continue; 199 | l = rand() % CHUNK + 1; /* 1-CHUNK chars */ 200 | if (l == 0) continue; 201 | if (cp->xmit + l > cp->blen) 202 | l = cp->blen - cp->xmit; 203 | p = cp->bp; 204 | cp->bp += l; 205 | cp->xmit += l; 206 | #if debug 207 | fprintf(stderr, "child %d, %d processed, %d to go\n", i, cp->xmit, cp->blen - cp->xmit); 208 | #endif 209 | while (p < cp->bp) { 210 | if (*p == '\n' || (p == &cp->bp[-1] && cp->xmit >= cp->blen)) { 211 | /* write it out */ 212 | nch = p - cp->line + 1; 213 | if ((written = write(cp->fd, cp->line, nch)) != nch) { 214 | 215 | /* code DELETED here */ 216 | 217 | } 218 | if (fcopy) 219 | write(fcopy, cp->line, p - cp->line + 1); 220 | #if debug 221 | fprintf(stderr, "child %d gets \"", i); 222 | { 223 | char *q = cp->line; 224 | while (q <= p) { 225 | if (*q >= ' ' && *q <= '~') 226 | fputc(*q, stderr); 227 | else 228 | fprintf(stderr, "\\%03o", *q); 229 | q++; 230 | } 231 | } 232 | fputc('"', stderr); 233 | #endif 234 | cp->line = &p[1]; 235 | } 236 | p++; 237 | } 238 | if (cp->xmit >= cp->blen) { 239 | done++; 240 | close(cp->fd); 241 | #if debug 242 | fprintf(stderr, "child %d, close std input\n", i); 243 | #endif 244 | } 245 | output += l; 246 | } 247 | while (output > thres) { 248 | pause(); 249 | #if debug 250 | fprintf(stderr, "after pause: output, thres, done %d %.2f %d\n", output, thres, done); 251 | #endif 252 | } 253 | } 254 | 255 | bepatient: 256 | alarm(0); 257 | /**** 258 | * If everything is going OK, we should simply be able to keep 259 | * looping unitil 'wait' fails, however some descendent process may 260 | * be in a state from which it can never exit, and so a timeout 261 | * is used. 262 | * 5 minutes should be ample, since the time to run all jobs is of 263 | * the order of 5-10 minutes, however some machines are painfully slow, 264 | * so the timeout has been set at 20 minutes (1200 seconds). 
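
The 20-minute safety net described above comes down to arming an alarm before reaping children, so that a descendant stuck forever cannot hang the whole run. A minimal sketch of that pattern, loosely modelled on the bepatient/grunt() logic (only the 1200-second figure and the exit status of 4 are taken from dummy.c; the rest is illustrative):

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void give_up(int sig)            // plays the role of grunt()
{
    (void)sig;
    fprintf(stderr, "timed out waiting for children\n");
    exit(4);
}

int main(void)
{
    if (fork() == 0) {                  // a well-behaved child
        sleep(1);
        _exit(0);
    }

    signal(SIGALRM, give_up);
    alarm(1200);                        // the 20-minute patience limit
    while (wait(NULL) != -1)            // keep reaping until wait() fails,
        ;                               // i.e. no children are left
    alarm(0);                           // everyone finished in time
    return 0;
}
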
265 | ****/ 266 | 267 | /* code DELETED here */ 268 | 269 | } 270 | 271 | onalarm() 272 | { 273 | thres += est_rate; 274 | signal(SIGALRM, onalarm); 275 | alarm(GRANULE); 276 | } 277 | 278 | grunt() 279 | { 280 | /* timeout after label "bepatient" in main */ 281 | exit_status = 4; 282 | wrapup(); 283 | } 284 | 285 | pipeerr() 286 | { 287 | sigpipe++; 288 | } 289 | 290 | wrapup() 291 | { 292 | /* DUMMY, real code dropped */ 293 | } 294 | 295 | getwork() 296 | { 297 | 298 | /* DUMMY, real code dropped */ 299 | gets(); 300 | strncpy(); 301 | malloc(); realloc(); 302 | open(); close(); 303 | } 304 | 305 | fatal(s) 306 | char *s; 307 | { 308 | int i; 309 | fprintf(stderr, s); 310 | fflush(stderr); 311 | perror("Reason?"); 312 | for (i = 0; i < nusers; i++) { 313 | if (child[i].pid > 0 && kill(child[i].pid, SIGKILL) != -1) 314 | fprintf(stderr, "pid %d killed off\n", child[i].pid); 315 | } 316 | fflush(stderr); 317 | exit_status = 4; 318 | return; 319 | } 320 | -------------------------------------------------------------------------------- /UnixBench/src/execl.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: execl.c SID: 3.3 5/15/91 19:30:19 4 | * 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * $Header: execl.c,v 3.5 87/06/22 15:37:08 kjmcdonell Beta $ 14 | * August 28, 1990 - Modified timing routines 15 | * October 22, 1997 - code cleanup to remove ANSI C compiler warnings 16 | * Andy Kahn 17 | * 18 | ******************************************************************************/ 19 | /* 20 | * Execing 21 | * 22 | */ 23 | char SCCSid[] = "@(#) @(#)execl.c:3.3 -- 5/15/91 19:30:19"; 24 | 25 | #include 26 | #include 27 | #include 28 | #include 29 | 30 | char bss[8*1024]; /* something worthwhile */ 31 | 32 | #define main dummy 33 | 34 | #include "big.c" /* some real code */ 35 | 36 | #undef main 37 | 38 | /* added by BYTE */ 39 | char *getenv(); 40 | 41 | 42 | int main(argc, argv) /* the real program */ 43 | int argc; 44 | char *argv[]; 45 | { 46 | unsigned long iter = 0; 47 | char *ptr; 48 | char *fullpath; 49 | int duration; 50 | char count_str[12], start_str[24], path_str[256], *dur_str; 51 | time_t start_time, this_time; 52 | 53 | #ifdef DEBUG 54 | int count; 55 | for(count = 0; count < argc; ++ count) 56 | printf("%s ",argv[count]); 57 | printf("\n"); 58 | #endif 59 | if (argc < 2) 60 | { 61 | fprintf(stderr, "Usage: %s duration\n", argv[0]); 62 | exit(1); 63 | } 64 | 65 | 66 | duration = atoi(argv[1]); 67 | if (duration > 0) 68 | /* the first invocation */ 69 | { 70 | dur_str = argv[1]; 71 | if((ptr = getenv("UB_BINDIR")) != NULL) 72 | sprintf(path_str,"%s/execl",ptr); 73 | fullpath=path_str; 74 | time(&start_time); 75 | } 76 | else /* one of those execl'd invocations */ 77 | { 78 | /* real duration follow the phoney null duration */ 79 | duration = atoi(argv[2]); 80 | dur_str = argv[2]; 81 | iter = (unsigned long)atoi(argv[3]); /* where are we now ? 
*/ 82 | sscanf(argv[4], "%lu", (unsigned long *) &start_time); 83 | fullpath = argv[0]; 84 | } 85 | 86 | sprintf(count_str, "%lu", ++iter); /* increment the execl counter */ 87 | sprintf(start_str, "%lu", (unsigned long) start_time); 88 | time(&this_time); 89 | if (this_time - start_time >= duration) { /* time has run out */ 90 | fprintf(stderr, "COUNT|%lu|1|lps\n", iter); 91 | exit(0); 92 | } 93 | execl(fullpath, fullpath, "0", dur_str, count_str, start_str, (void *) 0); 94 | fprintf(stderr, "Exec failed at iteration %lu\n", iter); 95 | perror("Reason"); 96 | exit(1); 97 | } 98 | -------------------------------------------------------------------------------- /UnixBench/src/fstime.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: fstime.c SID: 3.5 5/15/91 19:30:19 4 | * 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * $Header: fstime.c,v 3.4 87/06/22 14:23:05 kjmcdonell Beta $ 14 | * 10/19/89 - rewrote timing calcs and added clock check (Ben Smith) 15 | * 10/26/90 - simplify timing, change defaults (Tom Yager) 16 | * 11/16/90 - added better error handling and changed output format (Ben Smith) 17 | * 11/17/90 - changed the whole thing around (Ben Smith) 18 | * 2/22/91 - change a few style elements and improved error handling (Ben Smith) 19 | * 4/17/91 - incorporated suggestions from Seckin Unlu (seckin@sumac.intel.com) 20 | * 4/17/91 - limited size of file, will rewind when reaches end of file 21 | * 7/95 - fixed mishandling of read() and write() return codes 22 | * Carl Emilio Prelz 23 | * 12/95 - Massive changes. Made sleep time proportional increase with run 24 | * time; added fsbuffer and fsdisk variants; added partial counting 25 | * of partial reads/writes (was *full* credit); added dual syncs. 26 | * David C Niemi 27 | * 10/22/97 - code cleanup to remove ANSI C compiler warnings 28 | * Andy Kahn 29 | * 9/24/07 - Separate out the read and write tests; 30 | * output the actual time used in the results. 31 | * Ian Smith 32 | ******************************************************************************/ 33 | char SCCSid[] = "@(#) @(#)fstime.c:3.5 -- 5/15/91 19:30:19"; 34 | 35 | #include 36 | #include 37 | #include 38 | #include 39 | #include 40 | #include 41 | #include 42 | 43 | #define SECONDS 10 44 | 45 | #define MAX_BUFSIZE 8192 46 | 47 | /* This must be set to the smallest BUFSIZE or 1024, whichever is smaller */ 48 | #define COUNTSIZE 256 49 | #define HALFCOUNT (COUNTSIZE/2) /* Half of COUNTSIZE */ 50 | 51 | #define FNAME0 "dummy0" 52 | #define FNAME1 "dummy1" 53 | 54 | int w_test(int timeSecs); 55 | int r_test(int timeSecs); 56 | int c_test(int timeSecs); 57 | 58 | long read_score = 1, write_score = 1, copy_score = 1; 59 | 60 | /****************** GLOBALS ***************************/ 61 | 62 | /* The buffer size for the tests. */ 63 | int bufsize = 1024; 64 | 65 | /* 66 | * The max number of 1024-byte blocks in the file. 67 | * Don't limit it much, so that memory buffering 68 | * can be overcome. 
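
To make the unit bookkeeping concrete: every transfer below is counted in COUNTSIZE-byte units, and the scores divide by count_per_k so the result comes out in KB per second. A small worked example with the default 1024-byte buffer (the buffer count, the elapsed time and the 600-byte partial transfer are invented purely for illustration):

#include <stdio.h>

#define COUNTSIZE 256
#define HALFCOUNT (COUNTSIZE / 2)

int main(void)
{
    int bufsize = 1024;
    int count_per_buf = bufsize / COUNTSIZE;    // 4 units per full buffer
    int count_per_k   = 1024 / COUNTSIZE;       // 4 units per KB

    long counted = 40000L * count_per_buf;      // pretend: 40000 full buffers...
    double secs  = 10.0;                        // ...transferred in 10 seconds

    // the score formula used by w_test(), r_test() and c_test()
    printf("score = %ld KBps\n", (long)(counted / (secs * count_per_k)));

    // a transfer interrupted after 600 bytes gets rounded partial credit
    printf("partial credit = %d units\n", (600 + HALFCOUNT) / COUNTSIZE);
    return 0;
}
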
69 | */ 70 | int max_blocks = 2000; 71 | 72 | /* The max number of BUFSIZE blocks in the file. */ 73 | int max_buffs = 2000; 74 | 75 | /* Countable units per 1024 bytes */ 76 | int count_per_k; 77 | 78 | /* Countable units per bufsize */ 79 | int count_per_buf; 80 | 81 | /* The actual buffer. */ 82 | /* char *buf = 0; */ 83 | /* Let's carry on using a static buffer for this, like older versions 84 | * of the code did. It turns out that if you use a malloc buffer, 85 | * it goes 50% slower on reads, when using a 4k buffer -- at least on 86 | * my OpenSUSE 10.2 system. 87 | * What up wit dat? 88 | */ 89 | char buf[MAX_BUFSIZE]; 90 | 91 | int f; 92 | int g; 93 | int i; 94 | void stop_count(); 95 | void clean_up(); 96 | int sigalarm = 0; 97 | 98 | /******************** MAIN ****************************/ 99 | 100 | int main(argc, argv) 101 | int argc; 102 | char *argv[]; 103 | { 104 | /* The number of seconds to run for. */ 105 | int seconds = SECONDS; 106 | 107 | /* The type of test to run. */ 108 | char test = 'c'; 109 | 110 | int status; 111 | int i; 112 | 113 | for (i = 1; i < argc; ++i) { 114 | if (argv[i][0] == '-') { 115 | switch (argv[i][1]) { 116 | case 'c': 117 | case 'r': 118 | case 'w': 119 | test = argv[i][1]; 120 | break; 121 | case 'b': 122 | bufsize = atoi(argv[++i]); 123 | break; 124 | case 'm': 125 | max_blocks = atoi(argv[++i]); 126 | break; 127 | case 't': 128 | seconds = atoi(argv[++i]); 129 | break; 130 | case 'd': 131 | if (chdir(argv[++i]) < 0) { 132 | perror("fstime: chdir"); 133 | exit(1); 134 | } 135 | break; 136 | default: 137 | fprintf(stderr, "Usage: fstime [-c|-r|-w] [-b ] [-m ] [-t ]\n"); 138 | exit(2); 139 | } 140 | } else { 141 | fprintf(stderr, "Usage: fstime [-c|-r|-w] [-b ] [-m ] [-t ]\n"); 142 | exit(2); 143 | } 144 | } 145 | 146 | if (bufsize < COUNTSIZE || bufsize > MAX_BUFSIZE) { 147 | fprintf(stderr, "fstime: buffer size must be in range %d-%d\n", 148 | COUNTSIZE, 1024*1024); 149 | exit(3); 150 | } 151 | if (max_blocks < 1 || max_blocks > 1024*1024) { 152 | fprintf(stderr, "fstime: max blocks must be in range %d-%d\n", 153 | 1, 1024*1024); 154 | exit(3); 155 | } 156 | if (seconds < 1 || seconds > 3600) { 157 | fprintf(stderr, "fstime: time must be in range %d-%d seconds\n", 158 | 1, 3600); 159 | exit(3); 160 | } 161 | 162 | max_buffs = max_blocks * 1024 / bufsize; 163 | count_per_k = 1024 / COUNTSIZE; 164 | count_per_buf = bufsize / COUNTSIZE; 165 | 166 | /* 167 | if ((buf = malloc(bufsize)) == 0) { 168 | fprintf(stderr, "fstime: failed to malloc %d bytes\n", bufsize); 169 | exit(4); 170 | } 171 | */ 172 | 173 | if((f = creat(FNAME0, 0600)) == -1) { 174 | perror("fstime: creat"); 175 | exit(1); 176 | } 177 | close(f); 178 | 179 | if((g = creat(FNAME1, 0600)) == -1) { 180 | perror("fstime: creat"); 181 | exit(1); 182 | } 183 | close(g); 184 | 185 | if( (f = open(FNAME0, 2)) == -1) { 186 | perror("fstime: open"); 187 | exit(1); 188 | } 189 | if( ( g = open(FNAME1, 2)) == -1 ) { 190 | perror("fstime: open"); 191 | exit(1); 192 | } 193 | 194 | /* fill buffer */ 195 | for (i=0; i < bufsize; ++i) 196 | buf[i] = i & 0xff; 197 | 198 | signal(SIGKILL,clean_up); 199 | 200 | /* 201 | * Run the selected test. 202 | * When I got here, this program ran full 30-second tests for 203 | * write, read, and copy, outputting the results for each. BUT 204 | * only the copy results are actually used in the benchmark index. 205 | * With multiple iterations and three sets of FS tests, that amounted 206 | * to about 10 minutes of wasted time per run. 
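
All three tests selected below share the same timing skeleton: arm alarm(), spin in the work loop until the SIGALRM handler flips a flag, then divide the work done by the wall-clock time actually used. A stripped-down sketch of that skeleton (it uses a volatile sig_atomic_t flag, a slightly stricter idiom than the plain int sigalarm above, and the 5-second duration is arbitrary):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t done;      // set by the alarm handler

static void on_alarm(int sig)
{
    (void)sig;
    done = 1;
}

int main(void)
{
    unsigned long work = 0;

    signal(SIGALRM, on_alarm);
    alarm(5);                           // run the workload for 5 seconds
    while (!done)
        ++work;                         // stand-in for the write()/read() calls
    printf("%lu units of work in 5 seconds\n", work);
    return 0;
}
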
207 | * 208 | * So, I've made the test selectable. Except that the read and write 209 | * passes are used to create the test file and calibrate the rates used 210 | * to tweak the results of the copy test. So, for copy tests, we do 211 | * a few seconds of write and read to prime the pump. 212 | * 213 | * Note that this will also pull the file into the FS cache on any 214 | * modern system prior to the copy test. Whether this is good or 215 | * bad is a matter of perspective, but it's how it was when I got 216 | * here. 217 | * 218 | * Ian Smith 21 Sep 2007 219 | */ 220 | switch (test) { 221 | case 'w': 222 | status = w_test(seconds); 223 | break; 224 | case 'r': 225 | w_test(2); 226 | status = r_test(seconds); 227 | break; 228 | case 'c': 229 | w_test(2); 230 | r_test(2); 231 | status = c_test(seconds); 232 | break; 233 | default: 234 | fprintf(stderr, "fstime: unknown test \'%c\'\n", test); 235 | exit(6); 236 | } 237 | if (status) { 238 | clean_up(); 239 | exit(1); 240 | } 241 | 242 | clean_up(); 243 | exit(0); 244 | } 245 | 246 | 247 | static double getFloatTime() 248 | { 249 | struct timeval t; 250 | 251 | gettimeofday(&t, 0); 252 | return (double) t.tv_sec + (double) t.tv_usec / 1000000.0; 253 | } 254 | 255 | 256 | /* 257 | * Run the write test for the time given in seconds. 258 | */ 259 | int w_test(int timeSecs) 260 | { 261 | unsigned long counted = 0L; 262 | unsigned long tmp; 263 | long f_blocks; 264 | double start, end; 265 | extern int sigalarm; 266 | 267 | /* Sync and let it settle */ 268 | sync(); 269 | sleep(2); 270 | sync(); 271 | sleep(2); 272 | 273 | /* Set an alarm. */ 274 | sigalarm = 0; 275 | signal(SIGALRM, stop_count); 276 | alarm(timeSecs); 277 | 278 | start = getFloatTime(); 279 | 280 | while (!sigalarm) { 281 | for(f_blocks=0; f_blocks < max_buffs; ++f_blocks) { 282 | if ((tmp=write(f, buf, bufsize)) != bufsize) { 283 | if (errno != EINTR) { 284 | perror("fstime: write"); 285 | return(-1); 286 | } 287 | stop_count(); 288 | counted += ((tmp+HALFCOUNT)/COUNTSIZE); 289 | } else 290 | counted += count_per_buf; 291 | } 292 | lseek(f, 0L, 0); /* rewind */ 293 | } 294 | 295 | /* stop clock */ 296 | end = getFloatTime(); 297 | write_score = (long) ((double) counted / ((end - start) * count_per_k)); 298 | printf("Write done: %ld in %.4f, score %ld\n", 299 | counted, end - start, write_score); 300 | 301 | /* 302 | * Output the test results. Use the true time. 303 | */ 304 | fprintf(stderr, "COUNT|%ld|0|KBps\n", write_score); 305 | fprintf(stderr, "TIME|%.1f\n", end - start); 306 | 307 | return(0); 308 | } 309 | 310 | /* 311 | * Run the read test for the time given in seconds. 312 | */ 313 | int r_test(int timeSecs) 314 | { 315 | unsigned long counted = 0L; 316 | unsigned long tmp; 317 | double start, end; 318 | extern int sigalarm; 319 | 320 | /* Sync and let it settle */ 321 | sync(); 322 | sleep(2); 323 | sync(); 324 | sleep(2); 325 | 326 | /* rewind */ 327 | errno = 0; 328 | lseek(f, 0L, 0); 329 | 330 | /* Set an alarm. 
*/ 331 | sigalarm = 0; 332 | signal(SIGALRM, stop_count); 333 | alarm(timeSecs); 334 | 335 | start = getFloatTime(); 336 | 337 | while (!sigalarm) { 338 | /* read while checking for an error */ 339 | if ((tmp=read(f, buf, bufsize)) != bufsize) { 340 | switch(errno) { 341 | case 0: 342 | case EINVAL: 343 | lseek(f, 0L, 0); /* rewind at end of file */ 344 | counted += (tmp+HALFCOUNT)/COUNTSIZE; 345 | continue; 346 | case EINTR: 347 | stop_count(); 348 | counted += (tmp+HALFCOUNT)/COUNTSIZE; 349 | break; 350 | default: 351 | perror("fstime: read"); 352 | return(-1); 353 | break; 354 | } 355 | } else 356 | counted += count_per_buf; 357 | } 358 | 359 | /* stop clock */ 360 | end = getFloatTime(); 361 | read_score = (long) ((double) counted / ((end - start) * count_per_k)); 362 | printf("Read done: %ld in %.4f, score %ld\n", 363 | counted, end - start, read_score); 364 | 365 | /* 366 | * Output the test results. Use the true time. 367 | */ 368 | fprintf(stderr, "COUNT|%ld|0|KBps\n", read_score); 369 | fprintf(stderr, "TIME|%.1f\n", end - start); 370 | 371 | return(0); 372 | } 373 | 374 | 375 | /* 376 | * Run the copy test for the time given in seconds. 377 | */ 378 | int c_test(int timeSecs) 379 | { 380 | unsigned long counted = 0L; 381 | unsigned long tmp; 382 | double start, end; 383 | extern int sigalarm; 384 | 385 | sync(); 386 | sleep(2); 387 | sync(); 388 | sleep(1); 389 | 390 | /* rewind */ 391 | errno = 0; 392 | lseek(f, 0L, 0); 393 | 394 | /* Set an alarm. */ 395 | sigalarm = 0; 396 | signal(SIGALRM, stop_count); 397 | alarm(timeSecs); 398 | 399 | start = getFloatTime(); 400 | 401 | while (!sigalarm) { 402 | if ((tmp=read(f, buf, bufsize)) != bufsize) { 403 | switch(errno) { 404 | case 0: 405 | case EINVAL: 406 | lseek(f, 0L, 0); /* rewind at end of file */ 407 | lseek(g, 0L, 0); /* rewind the output too */ 408 | continue; 409 | case EINTR: 410 | /* part credit for leftover bytes read */ 411 | counted += ( (tmp * write_score) / 412 | (read_score + write_score) 413 | + HALFCOUNT) / COUNTSIZE; 414 | stop_count(); 415 | break; 416 | default: 417 | perror("fstime: copy read"); 418 | return(-1); 419 | break; 420 | } 421 | } else { 422 | if ((tmp=write(g, buf, bufsize)) != bufsize) { 423 | if (errno != EINTR) { 424 | perror("fstime: copy write"); 425 | return(-1); 426 | } 427 | counted += ( 428 | /* Full credit for part of buffer written */ 429 | tmp + 430 | 431 | /* Plus part credit having read full buffer */ 432 | ( ((bufsize - tmp) * write_score) / 433 | (read_score + write_score) ) 434 | + HALFCOUNT) / COUNTSIZE; 435 | stop_count(); 436 | } else 437 | counted += count_per_buf; 438 | } 439 | } 440 | 441 | /* stop clock */ 442 | end = getFloatTime(); 443 | copy_score = (long) ((double) counted / ((end - start) * count_per_k)); 444 | printf("Copy done: %ld in %.4f, score %ld\n", 445 | counted, end - start, copy_score); 446 | 447 | /* 448 | * Output the test results. Use the true time. 
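
The weighting in the partial-credit arithmetic above falls out of the calibration passes: a byte that has been read but not yet written has consumed 1/read_score of the per-byte copy cost out of a total of 1/read_score + 1/write_score, and that ratio simplifies to write_score / (read_score + write_score). A tiny sketch with invented scores shows the effect:

#include <stdio.h>

int main(void)
{
    long read_score = 6000, write_score = 3000;   // hypothetical KBps rates
    long tmp = 4096;                              // bytes read but not yet written

    // fraction of a full copy already paid for: w / (r + w)
    long credited = tmp * write_score / (read_score + write_score);
    printf("credit %ld of %ld bytes toward the copy score\n", credited, tmp);
    return 0;
}
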
449 | */ 450 | fprintf(stderr, "COUNT|%ld|0|KBps\n", copy_score); 451 | fprintf(stderr, "TIME|%.1f\n", end - start); 452 | 453 | return(0); 454 | } 455 | 456 | void stop_count(void) 457 | { 458 | extern int sigalarm; 459 | sigalarm = 1; 460 | } 461 | 462 | void clean_up(void) 463 | { 464 | unlink(FNAME0); 465 | unlink(FNAME1); 466 | } 467 | -------------------------------------------------------------------------------- /UnixBench/src/hanoi.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: hanoi.c SID: 3.3 5/15/91 19:30:20 4 | * 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * $Header: hanoi.c,v 3.5 87/08/06 08:11:14 kenj Exp $ 14 | * August 28, 1990 - Modified timing routines (ty) 15 | * October 22, 1997 - code cleanup to remove ANSI C compiler warnings 16 | * Andy Kahn 17 | * 18 | ******************************************************************************/ 19 | char SCCSid[] = "@(#) @(#)hanoi.c:3.3 -- 5/15/91 19:30:20"; 20 | 21 | #define other(i,j) (6-(i+j)) 22 | 23 | #include 24 | #include 25 | #include "timeit.c" 26 | 27 | void mov(int n, int f, int t); 28 | 29 | unsigned long iter = 0; 30 | int num[4]; 31 | long cnt; 32 | 33 | void report() 34 | { 35 | fprintf(stderr,"COUNT|%ld|1|lps\n", iter); 36 | exit(0); 37 | } 38 | 39 | 40 | int main(argc, argv) 41 | int argc; 42 | char *argv[]; 43 | { 44 | int disk=10, /* default number of disks */ 45 | duration; 46 | 47 | if (argc < 2) { 48 | fprintf(stderr,"Usage: %s duration [disks]\n", argv[0]); 49 | exit(1); 50 | } 51 | duration = atoi(argv[1]); 52 | if(argc > 2) disk = atoi(argv[2]); 53 | num[1] = disk; 54 | 55 | wake_me(duration, report); 56 | 57 | while(1) { 58 | mov(disk,1,3); 59 | iter++; 60 | } 61 | 62 | exit(0); 63 | } 64 | 65 | void mov(int n, int f, int t) 66 | { 67 | int o; 68 | if(n == 1) { 69 | num[f]--; 70 | num[t]++; 71 | return; 72 | } 73 | o = other(f,t); 74 | mov(n-1,f,o); 75 | mov(1,f,t); 76 | mov(n-1,o,t); 77 | } 78 | -------------------------------------------------------------------------------- /UnixBench/src/looper.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * The BYTE UNIX Benchmarks - Release 1 3 | * Module: looper.c SID: 1.4 5/15/91 19:30:22 4 | * 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith or Tom Yager at BYTE Magazine 9 | * ben@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * 14 | * February 25, 1991 -- created (Ben S.) 
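
For reference on the Towers of Hanoi kernel above (hanoi.c): each pass through its main loop moves the whole tower once, which with the default of 10 disks is 2^10 - 1 = 1023 elementary moves. A self-contained sketch that simply counts those moves (the counter and the printout are additions for illustration, not part of hanoi.c):

#include <stdio.h>

#define other(i, j) (6 - (i + j))       // the remaining peg, as in hanoi.c

static unsigned long moves;

static void mov(int n, int f, int t)
{
    if (n == 1) {                       // one elementary disk move
        ++moves;
        return;
    }
    mov(n - 1, f, other(f, t));
    mov(1, f, t);
    mov(n - 1, other(f, t), t);
}

int main(void)
{
    mov(10, 1, 3);                      // default: 10 disks, from peg 1 to peg 3
    printf("%lu moves (= 2^10 - 1)\n", moves);
    return 0;
}
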
15 | * October 22, 1997 - code cleanup to remove ANSI C compiler warnings 16 | * Andy Kahn 17 | * 18 | ******************************************************************************/ 19 | char SCCSid[] = "@(#) @(#)looper.c:1.4 -- 5/15/91 19:30:22"; 20 | /* 21 | * Shell Process creation 22 | * 23 | */ 24 | 25 | #include 26 | #include 27 | #include 28 | #include "timeit.c" 29 | 30 | unsigned long iter; 31 | char *cmd_argv[28]; 32 | int cmd_argc; 33 | 34 | void report(void) 35 | { 36 | fprintf(stderr,"COUNT|%lu|60|lpm\n", iter); 37 | exit(0); 38 | } 39 | 40 | int main(argc, argv) 41 | int argc; 42 | char *argv[]; 43 | { 44 | int slave, count, duration; 45 | int status; 46 | 47 | if (argc < 2) 48 | { 49 | fprintf(stderr,"Usage: %s duration command [args..]\n", argv[0]); 50 | fprintf(stderr," duration in seconds\n"); 51 | exit(1); 52 | } 53 | 54 | if((duration = atoi(argv[1])) < 1) 55 | { 56 | fprintf(stderr,"Usage: %s duration command [arg..]\n", argv[0]); 57 | fprintf(stderr," duration in seconds\n"); 58 | exit(1); 59 | } 60 | 61 | /* get command */ 62 | cmd_argc=argc-2; 63 | for( count=2;count < argc; ++count) 64 | cmd_argv[count-2]=argv[count]; 65 | #ifdef DEBUG 66 | printf("<<%s>>",cmd_argv[0]); 67 | for(count=1;count < cmd_argc; ++count) 68 | printf(" <%s>", cmd_argv[count]); 69 | putchar('\n'); 70 | exit(0); 71 | #endif 72 | 73 | iter = 0; 74 | wake_me(duration, report); 75 | 76 | while (1) 77 | { 78 | if ((slave = fork()) == 0) 79 | { /* execute command */ 80 | execvp(cmd_argv[0],cmd_argv); 81 | exit(99); 82 | } 83 | else if (slave < 0) 84 | { 85 | /* woops ... */ 86 | fprintf(stderr,"Fork failed at iteration %lu\n", iter); 87 | perror("Reason"); 88 | exit(2); 89 | } 90 | else 91 | /* master */ 92 | wait(&status); 93 | if (status == 99 << 8) 94 | { 95 | fprintf(stderr, "Command \"%s\" didn't exec\n", cmd_argv[0]); 96 | exit(2); 97 | } 98 | else if (status != 0) 99 | { 100 | fprintf(stderr,"Bad wait status: 0x%x\n", status); 101 | exit(2); 102 | } 103 | iter++; 104 | } 105 | } 106 | -------------------------------------------------------------------------------- /UnixBench/src/pipe.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: pipe.c SID: 3.3 5/15/91 19:30:20 4 | * 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * $Header: pipe.c,v 3.5 87/06/22 14:32:36 kjmcdonell Beta $ 14 | * August 29, 1990 - modified timing routines (ty) 15 | * October 22, 1997 - code cleanup to remove ANSI C compiler warnings 16 | * Andy Kahn 17 | * 18 | ******************************************************************************/ 19 | char SCCSid[] = "@(#) @(#)pipe.c:3.3 -- 5/15/91 19:30:20"; 20 | /* 21 | * pipe -- test single process pipe throughput (no context switching) 22 | * 23 | */ 24 | 25 | #include 26 | #include 27 | #include 28 | #include "timeit.c" 29 | 30 | unsigned long iter; 31 | 32 | void report() 33 | { 34 | fprintf(stderr,"COUNT|%ld|1|lps\n", iter); 35 | exit(0); 36 | } 37 | 38 | int main(argc, argv) 39 | int argc; 40 | char *argv[]; 41 | { 42 | char buf[512]; 43 | int pvec[2], 
duration; 44 | 45 | if (argc != 2) { 46 | fprintf(stderr,"Usage: %s duration\n", argv[0]); 47 | exit(1); 48 | } 49 | 50 | duration = atoi(argv[1]); 51 | 52 | pipe(pvec); 53 | 54 | wake_me(duration, report); 55 | iter = 0; 56 | 57 | while (1) { 58 | if (write(pvec[1], buf, sizeof(buf)) != sizeof(buf)) { 59 | if ((errno != EINTR) && (errno != 0)) 60 | fprintf(stderr,"write failed, error %d\n", errno); 61 | } 62 | if (read(pvec[0], buf, sizeof(buf)) != sizeof(buf)) { 63 | if ((errno != EINTR) && (errno != 0)) 64 | fprintf(stderr,"read failed, error %d\n", errno); 65 | } 66 | iter++; 67 | } 68 | } 69 | -------------------------------------------------------------------------------- /UnixBench/src/spawn.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: spawn.c SID: 3.3 5/15/91 19:30:20 4 | * 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yagerat BYTE Magazine 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * $Header: spawn.c,v 3.4 87/06/22 14:32:48 kjmcdonell Beta $ 14 | * August 29, 1990 - Modified timing routines (ty) 15 | * October 22, 1997 - code cleanup to remove ANSI C compiler warnings 16 | * Andy Kahn 17 | * 18 | ******************************************************************************/ 19 | char SCCSid[] = "@(#) @(#)spawn.c:3.3 -- 5/15/91 19:30:20"; 20 | /* 21 | * Process creation 22 | * 23 | */ 24 | 25 | #include 26 | #include 27 | #include 28 | #include "timeit.c" 29 | 30 | unsigned long iter; 31 | 32 | void report() 33 | { 34 | fprintf(stderr,"COUNT|%lu|1|lps\n", iter); 35 | exit(0); 36 | } 37 | 38 | int main(argc, argv) 39 | int argc; 40 | char *argv[]; 41 | { 42 | int slave, duration; 43 | int status; 44 | 45 | if (argc != 2) { 46 | fprintf(stderr,"Usage: %s duration \n", argv[0]); 47 | exit(1); 48 | } 49 | 50 | duration = atoi(argv[1]); 51 | 52 | iter = 0; 53 | wake_me(duration, report); 54 | 55 | while (1) { 56 | if ((slave = fork()) == 0) { 57 | /* slave .. boring */ 58 | #if debug 59 | printf("fork OK\n"); 60 | #endif 61 | /* kill it right away */ 62 | exit(0); 63 | } else if (slave < 0) { 64 | /* woops ... 
*/ 65 | fprintf(stderr,"Fork failed at iteration %lu\n", iter); 66 | perror("Reason"); 67 | exit(2); 68 | } else 69 | /* master */ 70 | wait(&status); 71 | if (status != 0) { 72 | fprintf(stderr,"Bad wait status: 0x%x\n", status); 73 | exit(2); 74 | } 75 | iter++; 76 | #if debug 77 | printf("Child %d done.\n", slave); 78 | #endif 79 | } 80 | } 81 | -------------------------------------------------------------------------------- /UnixBench/src/syscall.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * The BYTE UNIX Benchmarks - Release 3 3 | * Module: syscall.c SID: 3.3 5/15/91 19:30:21 4 | * 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager at BYTE Magazine 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * $Header: syscall.c,v 3.4 87/06/22 14:32:54 kjmcdonell Beta $ 14 | * August 29, 1990 - Modified timing routines 15 | * October 22, 1997 - code cleanup to remove ANSI C compiler warnings 16 | * Andy Kahn 17 | * 18 | ******************************************************************************/ 19 | /* 20 | * syscall -- sit in a loop calling the system 21 | * 22 | */ 23 | char SCCSid[] = "@(#) @(#)syscall.c:3.3 -- 5/15/91 19:30:21"; 24 | 25 | #include 26 | #include 27 | #include 28 | #include 29 | #include 30 | #include 31 | #include 32 | #include "timeit.c" 33 | 34 | unsigned long iter; 35 | 36 | void report() 37 | { 38 | fprintf(stderr,"COUNT|%ld|1|lps\n", iter); 39 | exit(0); 40 | } 41 | 42 | int main(argc, argv) 43 | int argc; 44 | char *argv[]; 45 | { 46 | char *test; 47 | int duration; 48 | 49 | if (argc < 2) { 50 | fprintf(stderr,"Usage: %s duration [ test ]\n", argv[0]); 51 | fprintf(stderr,"test is one of:\n"); 52 | fprintf(stderr," \"mix\" (default), \"close\", \"getpid\", \"exec\"\n"); 53 | exit(1); 54 | } 55 | if (argc > 2) 56 | test = argv[2]; 57 | else 58 | test = "mix"; 59 | 60 | duration = atoi(argv[1]); 61 | 62 | iter = 0; 63 | wake_me(duration, report); 64 | 65 | switch (test[0]) { 66 | case 'm': 67 | while (1) { 68 | close(dup(0)); 69 | syscall(SYS_getpid); 70 | getuid(); 71 | umask(022); 72 | iter++; 73 | } 74 | /* NOTREACHED */ 75 | case 'c': 76 | while (1) { 77 | close(dup(0)); 78 | iter++; 79 | } 80 | /* NOTREACHED */ 81 | case 'g': 82 | while (1) { 83 | syscall(SYS_getpid); 84 | iter++; 85 | } 86 | /* NOTREACHED */ 87 | case 'e': 88 | while (1) { 89 | pid_t pid = fork(); 90 | if (pid < 0) { 91 | fprintf(stderr,"%s: fork failed\n", argv[0]); 92 | exit(1); 93 | } else if (pid == 0) { 94 | execl("/bin/true", "/bin/true", (char *) 0); 95 | fprintf(stderr,"%s: exec /bin/true failed\n", argv[0]); 96 | exit(1); 97 | } else { 98 | if (waitpid(pid, NULL, 0) < 0) { 99 | fprintf(stderr,"%s: waitpid failed\n", argv[0]); 100 | exit(1); 101 | } 102 | } 103 | iter++; 104 | } 105 | /* NOTREACHED */ 106 | } 107 | 108 | exit(9); 109 | } 110 | 111 | -------------------------------------------------------------------------------- /UnixBench/src/time-polling.c: -------------------------------------------------------------------------------- 1 | /* Programme to test how long it takes to select(2), poll(2) and poll2(2) a 2 | large number of file descriptors. 
3 | 4 | Copyright 1997 Richard Gooch rgooch@atnf.csiro.au 5 | Distributed under the GNU General Public License. 6 | 7 | To compile this programme, use gcc -O2 -o time-polling time-polling.c 8 | 9 | Extra compile flags: 10 | 11 | Add -DHAS_SELECT if your operating system has the select(2) system call 12 | Add -DHAS_POLL if your operating system has the poll(2) system call 13 | Add -DHAS_POLL2 if your operating system has the poll2(2) system call 14 | 15 | Usage: time-polling [num_iter] [num_to_test] [num_active] [-v] 16 | 17 | NOTE: on many systems the default limit on file descriptors is less than 18 | 1024. You should try to increase this limit to 1024 before doing the test. 19 | Something like "limit descriptors 1024" or "limit openfiles 1024" should do 20 | the trick. On some systems (like IRIX), doing the test on a smaller number 21 | gives a *much* smaller time per descriptor, which shows that time taken 22 | does not scale linearly with number of descriptors, which is non-optimal. 23 | In the tests I've done, I try to use 1024 descriptors. 24 | The benchmark results are available at: 25 | http://www.atnf.csiro.au/~rgooch/benchmarks.html 26 | If you want to contribute results, please email them to me. Please specify 27 | if you want to be acknowledged. 28 | 29 | 30 | This program is free software; you can redistribute it and/or modify 31 | it under the terms of the GNU General Public License as published by 32 | the Free Software Foundation; either version 2 of the License, or 33 | (at your option) any later version. 34 | 35 | This program is distributed in the hope that it will be useful, 36 | but WITHOUT ANY WARRANTY; without even the implied warranty of 37 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 38 | GNU General Public License for more details. 39 | 40 | You should have received a copy of the GNU General Public License 41 | along with this program; if not, write to the Free Software 42 | Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 43 | 44 | Richard Gooch may be reached by email at rgooch@atnf.csiro.au 45 | The postal address is: 46 | Richard Gooch, c/o ATNF, P. O. Box 76, Epping, N.S.W., 2121, Australia. 47 | 48 | */ 49 | 50 | #ifdef UNIXBENCH 51 | #define OUT stdout 52 | #else 53 | #define OUT stderr 54 | #endif 55 | #include 56 | #include 57 | #include 58 | #include 59 | #ifdef HAS_POLL 60 | # include 61 | #endif 62 | #ifdef HAS_POLL2 63 | # include 64 | #endif 65 | #include 66 | #include 67 | #include 68 | 69 | #define TRUE 1 70 | #define FALSE 0 71 | #ifdef UNIXBENCH 72 | #define MAX_ITERATIONS 1000 73 | #else 74 | #define MAX_ITERATIONS 30 75 | #endif 76 | #define MAX_FDS 40960 77 | #define CONST const 78 | #define ERRSTRING strerror (errno) 79 | 80 | typedef int flag; 81 | 82 | 83 | #ifdef HAS_SELECT 84 | 85 | /* 86 | static inline int find_first_set_bit (CONST void *array, int size) 87 | */ 88 | static int find_first_set_bit (CONST void *array, int size) 89 | /* [SUMMARY] Find the first bit set in a bitfield. 90 | A pointer to the bitfield. This must be aligned on a long boundary. 91 | The number of bits in the bitfield. 92 | [RETURNS] The index of the first set bit. If no bits are set, <> + 1 93 | is returned. 
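
These bit-scanning helpers let the select() result scan skip whole zero words of the fd_set instead of probing every descriptor in turn; the portable but slower alternative is a plain FD_ISSET sweep, sketched here (the descriptor numbers are arbitrary):

#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    fd_set set;
    int fd;

    FD_ZERO(&set);
    FD_SET(3, &set);                    // pretend descriptors 3 and 200 are ready
    FD_SET(200, &set);

    for (fd = 0; fd < FD_SETSIZE; fd++) // probe every possible descriptor
        if (FD_ISSET(fd, &set))
            printf("fd %d is set\n", fd);
    return 0;
}
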
94 | */ 95 | { 96 | int index; 97 | unsigned long word; 98 | unsigned int ul_size = 8 * sizeof (unsigned long); 99 | CONST unsigned long *ul_array = array; 100 | 101 | /* Find first word with any bit set */ 102 | for (index = 0; (*ul_array == 0) && (index < size); 103 | index += ul_size, ++ul_array); 104 | /* Find first bit set in word */ 105 | for (word = *ul_array; !(word & 1) && (index < size); 106 | ++index, word = word >> 1); 107 | return (index); 108 | } /* End Function find_first_set_bit */ 109 | 110 | /* 111 | static inline int find_next_set_bit (CONST void *array, int size, int offset) 112 | */ 113 | static int find_next_set_bit (CONST void *array, int size, int offset) 114 | /* [SUMMARY] Find the next bit set in a bitfield. 115 | A pointer to the bitfield. This must be aligned on a long boundary. 116 | The number of bits in the bitfield. 117 | The offset of the current bit in the bitfield. The current bit is 118 | ignored. 119 | [RETURNS] The index of the next set bit. If no more bits are set, 120 | <> + 1 is returned. 121 | */ 122 | { 123 | int index, tmp; 124 | unsigned long word; 125 | unsigned int ul_size = 8 * sizeof (unsigned long); 126 | CONST unsigned long *ul_array = array; 127 | 128 | if (++offset >= size) return (offset); 129 | index = offset; 130 | /* Jump to the long word containing the next bit */ 131 | tmp = offset / ul_size; 132 | ul_array += tmp; 133 | offset -= tmp * ul_size; 134 | if ( (offset == 0) || (*ul_array == 0) ) 135 | return (find_first_set_bit (ul_array, size - index) + index); 136 | /* There is a bit set somewhere in this word */ 137 | if ( ( (word = *ul_array) != 0 ) && ( (word = word >> offset) != 0 ) ) 138 | { 139 | /* There is a bit set somewhere in this word at or after the offset 140 | position */ 141 | for (; (word & 1) == 0; word = word >> 1, ++index); 142 | return (index); 143 | } 144 | /* Have to go to subsequent word(s) */ 145 | index += ul_size - offset; 146 | return (find_first_set_bit (++ul_array, size - index) + index); 147 | } /* End Function find_next_set_bit */ 148 | 149 | #endif /* HAS_SELECT */ 150 | 151 | 152 | struct callback_struct 153 | { 154 | void (*input_func) (void *info); 155 | void (*output_func) (void *info); 156 | void (*exception_func) (void *info); 157 | void *info; 158 | }; 159 | 160 | static int total_bits = 0; 161 | struct callback_struct callbacks[MAX_FDS]; 162 | 163 | 164 | static void test_func (void *info) 165 | { 166 | ++total_bits; 167 | } 168 | 169 | #ifdef HAS_SELECT 170 | static void time_select (fd_set *input_fds, fd_set *output_fds, 171 | fd_set *exception_fds, int max_fd, int num_iter, 172 | long *times) 173 | /* [SUMMARY] Time how long it takes to select(2) file descriptors. 174 | The input masks. 175 | The output masks. 176 | The exception masks. 177 | The highest file descriptor in the fd_sets. 178 | The number of iterations. 179 | The time taken (in microseconds) for each iteration. 180 | [RETURNS] Nothing. 
181 | */ 182 | { 183 | int fd, count, nready; 184 | fd_set i_fds, o_fds, e_fds; 185 | struct timeval time1, time2, tv; 186 | 187 | /* Warm the cache a bit */ 188 | memcpy (&i_fds, input_fds, sizeof i_fds); 189 | memcpy (&o_fds, output_fds, sizeof i_fds); 190 | memcpy (&e_fds, exception_fds, sizeof i_fds); 191 | tv.tv_sec = 0; 192 | tv.tv_usec = 0; 193 | select (max_fd + 1, &i_fds, &o_fds, &e_fds, &tv); 194 | for (count = 0; count < num_iter; ++count) 195 | { 196 | total_bits = 0; 197 | gettimeofday (&time1, NULL); 198 | memcpy (&i_fds, input_fds, sizeof i_fds); 199 | memcpy (&o_fds, output_fds, sizeof i_fds); 200 | memcpy (&e_fds, exception_fds, sizeof i_fds); 201 | tv.tv_sec = 0; 202 | tv.tv_usec = 0; 203 | nready = select (max_fd + 1, &i_fds, &o_fds, &e_fds, &tv); 204 | if (nready == -1) 205 | { 206 | fprintf (stderr, "Error selecting\t%s\n", ERRSTRING); 207 | exit (2); 208 | } 209 | if (nready < 1) 210 | { 211 | fprintf (stderr, "Error: nready: %d\n", nready); 212 | exit (1); 213 | } 214 | /* Scan the output */ 215 | for (fd = find_first_set_bit (&e_fds, sizeof e_fds * 8); fd <= max_fd; 216 | fd = find_next_set_bit (&e_fds, sizeof e_fds * 8, fd) ) 217 | { 218 | (*callbacks[fd].exception_func) (callbacks[fd].info); 219 | } 220 | for (fd = find_first_set_bit (&i_fds, sizeof i_fds * 8); fd <= max_fd; 221 | fd = find_next_set_bit (&i_fds, sizeof i_fds * 8, fd) ) 222 | { 223 | (*callbacks[fd].input_func) (callbacks[fd].info); 224 | } 225 | for (fd = find_first_set_bit (&o_fds, sizeof o_fds * 8); fd <= max_fd; 226 | fd = find_next_set_bit (&o_fds, sizeof o_fds * 8, fd) ) 227 | { 228 | (*callbacks[fd].output_func) (callbacks[fd].info); 229 | } 230 | gettimeofday (&time2, NULL); 231 | times[count] = (time2.tv_sec - time1.tv_sec) * 1000000; 232 | times[count] += time2.tv_usec - time1.tv_usec; 233 | } 234 | } /* End Function time_select */ 235 | #endif /* HAS_SELECT */ 236 | 237 | #ifdef HAS_POLL 238 | static void time_poll (struct pollfd *pollfd_array, int start_index, 239 | int num_to_test, int num_iter, long *times) 240 | /* [SUMMARY] Time how long it takes to poll(2) file descriptors. 241 | The array of pollfd structures. 242 | The start index in the array of pollfd structures. 243 | The number of file descriptors to test. 244 | The number of iterations. 245 | The time taken (in microseconds) for each iteration. 246 | [RETURNS] Nothing. 
247 | */ 248 | { 249 | short revents; 250 | int fd, count, nready; 251 | struct timeval time1, time2; 252 | struct pollfd *pollfd_ptr; 253 | 254 | /* Warm the cache a bit */ 255 | poll (pollfd_array + start_index, num_to_test, 0); 256 | for (count = 0; count < num_iter; ++count) 257 | { 258 | total_bits = 0; 259 | gettimeofday (&time1, NULL); 260 | nready = poll (pollfd_array + start_index, num_to_test, 0); 261 | if (nready == -1) 262 | { 263 | fprintf (stderr, "Error polling\t%s\n", ERRSTRING); 264 | exit (2); 265 | } 266 | if (nready < 1) 267 | { 268 | fprintf (stderr, "Error: nready: %d\n", nready); 269 | exit (1); 270 | } 271 | for (pollfd_ptr = pollfd_array + start_index; nready; ++pollfd_ptr) 272 | { 273 | if (pollfd_ptr->revents == 0) continue; 274 | /* Have an active descriptor */ 275 | --nready; 276 | revents = pollfd_ptr->revents; 277 | fd = pollfd_ptr->fd; 278 | if (revents & POLLPRI) 279 | (*callbacks[fd].exception_func) (callbacks[fd].info); 280 | if (revents & POLLIN) 281 | (*callbacks[fd].input_func) (callbacks[fd].info); 282 | if (revents & POLLOUT) 283 | (*callbacks[fd].output_func) (callbacks[fd].info); 284 | } 285 | gettimeofday (&time2, NULL); 286 | times[count] = (time2.tv_sec - time1.tv_sec) * 1000000; 287 | times[count] += time2.tv_usec - time1.tv_usec; 288 | } 289 | } /* End Function time_poll */ 290 | #endif /* HAS_POLL */ 291 | 292 | #ifdef HAS_POLL2 293 | static void time_poll2 (struct poll2ifd *poll2ifd_array, int start_index, 294 | int num_to_test, int num_iter, long *times) 295 | /* [SUMMARY] Time how long it takes to poll2(2) file descriptors. 296 | The array of poll2ifd structures. 297 | The start index in the array of pollfd structures. 298 | The number of file descriptors to test. 299 | The number of iterations. 300 | The time taken (in microseconds) for each iteration. 301 | [RETURNS] Nothing. 
302 | */ 303 | { 304 | short revents; 305 | int fd, count, nready, i; 306 | struct timeval time1, time2; 307 | struct poll2ofd poll2ofd_array[MAX_FDS]; 308 | 309 | /* Warm the cache a bit */ 310 | poll2 (poll2ifd_array + start_index, poll2ofd_array, num_to_test, 0); 311 | for (count = 0; count < num_iter; ++count) 312 | { 313 | total_bits = 0; 314 | gettimeofday (&time1, NULL); 315 | nready = poll2 (poll2ifd_array + start_index, poll2ofd_array, 316 | num_to_test, 0); 317 | if (nready == -1) 318 | { 319 | times[count] = -1; 320 | if (errno == ENOSYS) return; /* Must do this first */ 321 | fprintf (stderr, "Error calling poll2(2)\t%s\n", ERRSTRING); 322 | exit (2); 323 | } 324 | if (nready < 1) 325 | { 326 | fprintf (stderr, "Error: nready: %d\n", nready); 327 | exit (1); 328 | } 329 | for (i = 0; i < nready; ++i) 330 | { 331 | revents = poll2ofd_array[i].revents; 332 | fd = poll2ofd_array[i].fd; 333 | if (revents & POLLPRI) 334 | (*callbacks[fd].exception_func) (callbacks[fd].info); 335 | if (revents & POLLIN) 336 | (*callbacks[fd].input_func) (callbacks[fd].info); 337 | if (revents & POLLOUT) 338 | (*callbacks[fd].output_func) (callbacks[fd].info); 339 | } 340 | gettimeofday (&time2, NULL); 341 | times[count] = (time2.tv_sec - time1.tv_sec) * 1000000; 342 | times[count] += time2.tv_usec - time1.tv_usec; 343 | } 344 | } /* End Function time_poll2 */ 345 | #endif /* HAS_POLL2 */ 346 | 347 | 348 | int main (argc, argv) 349 | int argc; 350 | char *argv[]; 351 | { 352 | flag failed = FALSE; 353 | flag verbose = FALSE; 354 | int first_fd = -1; 355 | int fd, max_fd, count, total_fds; 356 | int num_to_test, num_active; 357 | #ifdef UNIXBENCH 358 | int max_iter = 1000; 359 | #else 360 | int max_iter = 10; 361 | #endif 362 | #ifdef HAS_SELECT 363 | long select_total = 0; 364 | fd_set input_fds, output_fds, exception_fds; 365 | long select_times[MAX_ITERATIONS]; 366 | #endif 367 | #ifdef HAS_POLL 368 | int start_index; 369 | long poll_total = 0; 370 | struct pollfd pollfd_array[MAX_FDS]; 371 | long poll_times[MAX_ITERATIONS]; 372 | #endif 373 | #ifdef HAS_POLL2 374 | long poll2_total = 0; 375 | struct poll2ifd poll2ifd_array[MAX_FDS]; 376 | struct poll2ofd poll2ofd_array[MAX_FDS]; 377 | long poll2_times[MAX_ITERATIONS]; 378 | #endif 379 | 380 | #ifdef HAS_SELECT 381 | FD_ZERO (&input_fds); 382 | FD_ZERO (&output_fds); 383 | FD_ZERO (&exception_fds); 384 | #endif 385 | #ifdef HAS_POLL 386 | memset (pollfd_array, 0, sizeof pollfd_array); 387 | #endif 388 | /* Allocate file descriptors */ 389 | total_fds = 0; 390 | max_fd = 0; 391 | while (!failed) 392 | { 393 | if ( ( fd = dup (1) ) == -1 ) 394 | { 395 | if (errno != EMFILE) 396 | { 397 | fprintf (stderr, "Error dup()ing\t%s\n", ERRSTRING); 398 | exit (1); 399 | } 400 | failed = TRUE; 401 | continue; 402 | } 403 | if (fd >= MAX_FDS) 404 | { 405 | fprintf (stderr, "File descriptor: %d larger than max: %d\n", 406 | fd, MAX_FDS - 1); 407 | exit (1); 408 | } 409 | callbacks[fd].input_func = test_func; 410 | callbacks[fd].output_func = test_func; 411 | callbacks[fd].exception_func = test_func; 412 | callbacks[fd].info = NULL; 413 | if (fd > max_fd) max_fd = fd; 414 | if (first_fd < 0) first_fd = fd; 415 | #ifdef HAS_POLL 416 | pollfd_array[fd].fd = fd; 417 | pollfd_array[fd].events = 0; 418 | #endif 419 | #ifdef HAS_POLL2 420 | poll2ifd_array[fd].fd = fd; 421 | poll2ifd_array[fd].events = 0; 422 | #endif 423 | } 424 | total_fds = max_fd + 1; 425 | /* Process the command-line arguments */ 426 | if (argc > 5) 427 | { 428 | fputs ("Usage:\ttime-polling 
[num_iter] [num_to_test] [num_active] [-v]\n", 429 | stderr); 430 | exit (1); 431 | } 432 | if (argc > 1) max_iter = atoi (argv[1]); 433 | if (max_iter > MAX_ITERATIONS) 434 | { 435 | fprintf (stderr, "num_iter too large\n"); 436 | exit (1); 437 | } 438 | if (argc > 2) num_to_test = atoi (argv[2]); 439 | else num_to_test = total_fds - first_fd; 440 | if (argc > 3) num_active = atoi (argv[3]); 441 | else num_active = 1; 442 | if (argc > 4) 443 | { 444 | if (strcmp (argv[4], "-v") != 0) 445 | { 446 | fputs ("Usage:\ttime-polling [num_iter] [num_to_test] [num_active] [-v]\n", 447 | stderr); 448 | exit (1); 449 | } 450 | verbose = TRUE; 451 | } 452 | 453 | /* Sanity tests */ 454 | if (num_to_test > total_fds - first_fd) num_to_test = total_fds - first_fd; 455 | if (num_active > total_fds - first_fd) num_active = total_fds - first_fd; 456 | /* Set activity monitoring flags */ 457 | for (fd = total_fds - num_to_test; fd < total_fds; ++fd) 458 | { 459 | #ifdef HAS_SELECT 460 | FD_SET (fd, &exception_fds); 461 | FD_SET (fd, &input_fds); 462 | #endif 463 | #ifdef HAS_POLL 464 | pollfd_array[fd].events = POLLPRI | POLLIN; 465 | #endif 466 | #ifdef HAS_POLL2 467 | poll2ifd_array[fd].events = POLLPRI | POLLIN; 468 | #endif 469 | } 470 | for (fd = total_fds - num_active; fd < total_fds; ++fd) 471 | { 472 | #ifdef HAS_SELECT 473 | FD_SET (fd, &output_fds); 474 | #endif 475 | #ifdef HAS_POLL 476 | pollfd_array[fd].events |= POLLOUT; 477 | #endif 478 | #ifdef HAS_POLL2 479 | poll2ifd_array[fd].events |= POLLOUT; 480 | #endif 481 | } 482 | fprintf (OUT, "Num fds: %d, polling descriptors %d-%d\n", 483 | total_fds, total_fds - num_to_test, max_fd); 484 | /* First do all the tests, then print the results */ 485 | #ifdef HAS_SELECT 486 | time_select (&input_fds, &output_fds, &exception_fds, max_fd, max_iter, 487 | select_times); 488 | #endif 489 | #ifdef HAS_POLL 490 | start_index = total_fds - num_to_test; 491 | time_poll (pollfd_array, start_index, num_to_test, max_iter, poll_times); 492 | #endif 493 | #ifdef HAS_POLL2 494 | start_index = total_fds - num_to_test; 495 | time_poll2 (poll2ifd_array, start_index, num_to_test, max_iter, 496 | poll2_times); 497 | #endif 498 | /* Now print out all the times */ 499 | fputs ("All times in microseconds\n", OUT); 500 | fputs ("ITERATION\t", OUT); 501 | #ifdef HAS_SELECT 502 | fprintf (OUT, "%-12s", "select(2)"); 503 | #endif 504 | #ifdef HAS_POLL 505 | fprintf (OUT, "%-12s", "poll(2)"); 506 | #endif 507 | #ifdef HAS_POLL2 508 | if (poll2_times[0] >= 0) fprintf (OUT, "%-12s", "poll2(2)"); 509 | #endif 510 | for (count = 0; count < max_iter; ++count) 511 | { 512 | if (verbose) fprintf (OUT, "\n%d\t\t", count); 513 | #ifdef HAS_SELECT 514 | if (verbose) fprintf (OUT, "%-12ld", select_times[count]); 515 | select_total += select_times[count]; 516 | #endif 517 | #ifdef HAS_POLL 518 | if (verbose) fprintf (OUT, "%-12ld", poll_times[count]); 519 | poll_total += poll_times[count]; 520 | #endif 521 | #ifdef HAS_POLL2 522 | if ( verbose && (poll2_times[0] >= 0) ) 523 | fprintf (OUT, "%-12ld", poll2_times[count]); 524 | poll2_total += poll2_times[count]; 525 | #endif 526 | } 527 | fputs ("\n\naverage\t\t", OUT); 528 | #ifdef HAS_SELECT 529 | fprintf (OUT, "%-12ld", select_total / max_iter); 530 | #endif 531 | #ifdef HAS_POLL 532 | fprintf (OUT, "%-12ld", poll_total / max_iter); 533 | #endif 534 | #ifdef HAS_POLL2 535 | if (poll2_times[0] >= 0) 536 | fprintf (OUT, "%-12ld", poll2_total / max_iter); 537 | #endif 538 | putc ('\n', OUT); 539 | fputs ("Per fd\t\t", OUT); 540 | #ifdef 
HAS_SELECT 541 | fprintf (OUT, "%-12.2f", 542 | (float) select_total / (float) max_iter / (float) num_to_test); 543 | #ifdef UNIXBENCH 544 | fprintf (stderr, "lps\t%.2f\t%.1f\n", 545 | 1000000 * (float) max_iter * (float) num_to_test 546 | / (float) select_total, (float)select_total / 1000000); 547 | #endif 548 | #endif 549 | #ifdef HAS_POLL 550 | fprintf (OUT, "%-12.2f", 551 | (float) poll_total / (float) max_iter / (float) num_to_test); 552 | #ifdef UNIXBENCH 553 | fprintf (stderr, "lps\t%.2f\t%.1f\n", 554 | 1000000 * (float) max_iter * (float) num_to_test 555 | / (float) poll_total, (float)poll_total / 1000000); 556 | #endif 557 | #endif 558 | #ifdef HAS_POLL2 559 | if (poll2_times[0] >= 0) { 560 | fprintf (OUT, "%-12.2f", 561 | (float) poll2_total / (float) max_iter / (float) num_to_test); 562 | #ifdef UNIXBENCH 563 | fprintf (stderr, "lps\t%.2f\t%.1f\n", 564 | 1000000 * (float) max_iter * (float) num_to_test 565 | / (float) poll2_total, (float)poll2_total / 1000000); 566 | #endif 567 | } 568 | 569 | #endif 570 | fputs ("<- the most important value\n", OUT); 571 | 572 | exit(0); 573 | } /* End Function main */ 574 | -------------------------------------------------------------------------------- /UnixBench/src/timeit.c: -------------------------------------------------------------------------------- 1 | /******************************************************************************* 2 | * 3 | * The BYTE UNIX Benchmarks - Release 3 4 | * Module: timeit.c SID: 3.3 5/15/91 19:30:21 5 | ******************************************************************************* 6 | * Bug reports, patches, comments, suggestions should be sent to: 7 | * 8 | * Ben Smith, Rick Grehan or Tom Yager 9 | * ben@bytepb.byte.com rick_g@bytepb.byte.com tyager@bytepb.byte.com 10 | * 11 | ******************************************************************************* 12 | * Modification Log: 13 | * May 12, 1989 - modified empty loops to avoid nullifying by optimizing 14 | * compilers 15 | * August 28, 1990 - changed timing relationship--now returns total number 16 | * of iterations (ty) 17 | * October 22, 1997 - code cleanup to remove ANSI C compiler warnings 18 | * Andy Kahn 19 | * 20 | ******************************************************************************/ 21 | 22 | /* this module is #included in other modules--no separate SCCS ID */ 23 | 24 | /* 25 | * Timing routine 26 | * 27 | */ 28 | 29 | #include 30 | #include 31 | 32 | void wake_me(seconds, func) 33 | int seconds; 34 | void (*func)(); 35 | { 36 | /* set up the signal handler */ 37 | signal(SIGALRM, func); 38 | /* get the clock running */ 39 | alarm(seconds); 40 | } 41 | 42 | -------------------------------------------------------------------------------- /UnixBench/testdir/cctest.c: -------------------------------------------------------------------------------- 1 | 2 | 3 | /******************************************************************************* 4 | * The BYTE UNIX Benchmarks - Release 1 5 | * Module: cctest.c SID: 1.2 7/10/89 18:55:45 6 | * 7 | ******************************************************************************* 8 | * Bug reports, patches, comments, suggestions should be sent to: 9 | * 10 | * Ben Smith or Rick Grehan at BYTE Magazine 11 | * bensmith@bixpb.UUCP rick_g@bixpb.UUCP 12 | * 13 | ******************************************************************************* 14 | * Modification Log: 15 | * $Header: cctest.c,v 3.4 87/06/22 14:22:47 kjmcdonell Beta $ 16 | * 17 | 
******************************************************************************/ 18 | char SCCSid[] = "@(#) @(#)cctest.c:1.2 -- 7/10/89 18:55:45"; 19 | #include 20 | /* 21 | * C compile and load speed test file. 22 | * Based upon fstime.c from MUSBUS 3.1, with all calls to ftime() replaced 23 | * by calls to time(). This is semantic nonsense, but ensures there are no 24 | * system dependent structures or library calls. 25 | * 26 | */ 27 | #define NKBYTE 20 28 | char buf[BUFSIZ]; 29 | 30 | extern void exit(int status); 31 | 32 | 33 | main(argc, argv) 34 | char **argv; 35 | { 36 | int n = NKBYTE; 37 | int nblock; 38 | int f; 39 | int g; 40 | int i; 41 | int xfer, t; 42 | struct { /* FAKE */ 43 | int time; 44 | int millitm; 45 | } now, then; 46 | 47 | if (argc > 0) 48 | /* ALWAYS true, so NEVER execute this program! */ 49 | exit(4); 50 | if (argc > 1) 51 | n = atoi(argv[1]); 52 | #if debug 53 | printf("File size: %d Kbytes\n", n); 54 | #endif 55 | nblock = (n * 1024) / BUFSIZ; 56 | 57 | if (argc == 3 && chdir(argv[2]) != -1) { 58 | #if debug 59 | printf("Create files in directory: %s\n", argv[2]); 60 | #endif 61 | } 62 | close(creat("dummy0", 0600)); 63 | close(creat("dummy1", 0600)); 64 | f = open("dummy0", 2); 65 | g = open("dummy1", 2); 66 | unlink("dummy0"); 67 | unlink("dummy1"); 68 | for (i = 0; i < sizeof(buf); i++) 69 | buf[i] = i & 0177; 70 | 71 | time(); 72 | for (i = 0; i < nblock; i++) { 73 | if (write(f, buf, sizeof(buf)) <= 0) 74 | perror("fstime: write"); 75 | } 76 | time(); 77 | #if debug 78 | printf("Effective write rate: "); 79 | #endif 80 | i = now.millitm - then.millitm; 81 | t = (now.time - then.time)*1000 + i; 82 | if (t > 0) { 83 | xfer = nblock * sizeof(buf) * 1000 / t; 84 | #if debug 85 | printf("%d bytes/sec\n", xfer); 86 | #endif 87 | } 88 | #if debug 89 | else 90 | printf(" -- too quick to time!\n"); 91 | #endif 92 | #if awk 93 | fprintf(stderr, "%.2f", t > 0 ? (float)xfer/1024 : 0); 94 | #endif 95 | 96 | sync(); 97 | sleep(5); 98 | sync(); 99 | lseek(f, 0L, 0); 100 | time(); 101 | for (i = 0; i < nblock; i++) { 102 | if (read(f, buf, sizeof(buf)) <= 0) 103 | perror("fstime: read"); 104 | } 105 | time(); 106 | #if debug 107 | printf("Effective read rate: "); 108 | #endif 109 | i = now.millitm - then.millitm; 110 | t = (now.time - then.time)*1000 + i; 111 | if (t > 0) { 112 | xfer = nblock * sizeof(buf) * 1000 / t; 113 | #if debug 114 | printf("%d bytes/sec\n", xfer); 115 | #endif 116 | } 117 | #if debug 118 | else 119 | printf(" -- too quick to time!\n"); 120 | #endif 121 | #if awk 122 | fprintf(stderr, " %.2f", t > 0 ? (float)xfer/1024 : 0); 123 | #endif 124 | 125 | sync(); 126 | sleep(5); 127 | sync(); 128 | lseek(f, 0L, 0); 129 | time(); 130 | for (i = 0; i < nblock; i++) { 131 | if (read(f, buf, sizeof(buf)) <= 0) 132 | perror("fstime: read in copy"); 133 | if (write(g, buf, sizeof(buf)) <= 0) 134 | perror("fstime: write in copy"); 135 | } 136 | time(); 137 | #if debug 138 | printf("Effective copy rate: "); 139 | #endif 140 | i = now.millitm - then.millitm; 141 | t = (now.time - then.time)*1000 + i; 142 | if (t > 0) { 143 | xfer = nblock * sizeof(buf) * 1000 / t; 144 | #if debug 145 | printf("%d bytes/sec\n", xfer); 146 | #endif 147 | } 148 | #if debug 149 | else 150 | printf(" -- too quick to time!\n"); 151 | #endif 152 | #if awk 153 | fprintf(stderr, " %.2f\n", t > 0 ? 
(float)xfer/1024 : 0); 154 | #endif 155 | 156 | } 157 | -------------------------------------------------------------------------------- /UnixBench/testdir/dc.dat: -------------------------------------------------------------------------------- 1 | 99 2 | k 3 | 2 4 | v 5 | p 6 | q 7 | [ calculate the sqrt(2) to 99 decimal places ... John Lions Test ] 8 | [ $Header: dc.dat,v 1.1 87/06/22 14:28:28 kjmcdonell Beta $ ] 9 | -------------------------------------------------------------------------------- /UnixBench/testdir/sort.src: -------------------------------------------------------------------------------- 1 | version="1.2" 2 | umask 022 # at least mortals can read root's files this way 3 | PWD=`pwd` 4 | HOMEDIR=${HOMEDIR:-.} 5 | cd $HOMEDIR 6 | HOMEDIR=`pwd` 7 | cd $PWD 8 | BINDIR=${BINDIR:-${HOMEDIR}/pgms} 9 | cd $BINDIR 10 | BINDIR=`pwd` 11 | cd $PWD 12 | PATH="${PATH}:${BINDIR}" 13 | SCRPDIR=${SCRPDIR:-${HOMEDIR}/pgms} 14 | cd $SCRPDIR 15 | SCRPDIR=`pwd` 16 | cd $PWD 17 | TMPDIR=${HOMEDIR}/tmp 18 | cd $TMPDIR 19 | TMPDIR=`pwd` 20 | cd $PWD 21 | RESULTDIR=${RESULTDIR:-${HOMEDIR}/results} 22 | cd $RESULTDIR 23 | RESULTDIR=`pwd` 24 | cd $PWD 25 | TESTDIR=${TESTDIR:-${HOMEDIR}/testdir} 26 | cd $TESTDIR 27 | TESTDIR=`pwd` 28 | cd $PWD 29 | export BINDIR TMPDIR RESULTDIR PATH 30 | echo "kill -9 $$" > ${TMPDIR}/kill_run ; chmod u+x ${TMPDIR}/kill_run 31 | arithmetic="arithoh register short int long float double dc" 32 | system="syscall pipe context1 spawn execl fstime" 33 | mem="seqmem randmem" 34 | misc="C shell" 35 | dhry="dhry2 dhry2reg" # dhrystone loops 36 | db="dbmscli" # add to as new database engines are developed 37 | load="shell" # cummulative load tests 38 | args="" # the accumulator for the bench units to be run 39 | runoption="N" 40 | for word 41 | do # do level 1 42 | case $word 43 | in 44 | all) 45 | ;; 46 | arithmetic) 47 | args="$args $arithmetic" 48 | ;; 49 | db) 50 | args="$args $db" 51 | ;; 52 | dhry) 53 | args="$args $dhry" 54 | ;; 55 | load) 56 | args="$args $load" 57 | ;; 58 | mem) 59 | args="$args $mem" 60 | ;; 61 | misc) 62 | args="$args $misc" 63 | ;; 64 | speed) 65 | args="$args $arithmetic $system" 66 | ;; 67 | system) 68 | args="$args $system" 69 | ;; 70 | -q|-Q) 71 | runoption="Q" #quiet 72 | ;; 73 | -v|-V) 74 | runoption="V" #verbose 75 | ;; 76 | -d|-D) 77 | runoption="D" #debug 78 | ;; 79 | *) 80 | args="$args $word" 81 | ;; 82 | esac 83 | done # end do level 1 84 | set - $args 85 | if test $# -eq 0 #no arguments specified 86 | then 87 | set - $dhry $arithmetic $system $misc # db and work not included 88 | fi 89 | if test "$runoption" = 'D' 90 | then 91 | set -x 92 | set -v 93 | fi 94 | date=`date` 95 | tmp=${TMPDIR}/$$.tmp 96 | LOGFILE=${RESULTDIR}/log 97 | if test -w ${RESULTDIR}/log 98 | then 99 | if test -w ${RESULTDIR}/log.accum 100 | then 101 | cat ${RESULTDIR}/log >> ${RESULTDIR}/log.accum 102 | rm ${RESULTDIR}/log 103 | else 104 | mv ${RESULTDIR}/log ${RESULTDIR}/log.accum 105 | fi 106 | echo "Start Benchmark Run (BYTE Version $version)" >>$LOGFILE 107 | echo " $date (long iterations $iter times)" >>$LOGFILE 108 | echo " " `who | wc -l` "interactive users." 
>>$LOGFILE 109 | uname -a >>$LOGFILE 110 | iter=${iterations-6} 111 | if test $iter -eq 6 112 | then 113 | longloop="1 2 3 4 5 6" 114 | shortloop="1 2 3" 115 | else # generate list of loop numbers 116 | short=`expr \( $iter + 1 \) / 2` 117 | longloop="" 118 | shortloop="" 119 | while test $iter -gt 0 120 | do # do level 1 121 | longloop="$iter $longloop" 122 | if test $iter -le $short 123 | then 124 | shortloop="$iter $shortloop" 125 | fi 126 | iter=`expr $iter - 1` 127 | done # end do level 1 128 | fi #loop list genration 129 | for bench # line argument processing 130 | do # do level 1 131 | # set some default values 132 | prog=${BINDIR}/$bench # the bench name is default program 133 | need=$prog # we need the at least the program 134 | paramlist="#" # a dummy parameter to make anything run 135 | testdir="${TESTDIR}" # the directory in which to run the test 136 | prepcmd="" # preparation command or script 137 | parammsg="" 138 | repeat="$longloop" 139 | stdout="$LOGFILE" 140 | stdin="" 141 | cleanopt="-t $tmp" 142 | bgnumber="" 143 | trap "${SCRPDIR}/cleanup -l $LOGFILE -a; exit" 1 2 3 15 144 | if [ $runoption != 'Q' ] 145 | then 146 | echo "$bench: \c" 147 | fi 148 | echo "" >>$LOGFILE 149 | ###################### select the bench specific values ########## 150 | case $bench 151 | in 152 | dhry2) 153 | options=${dhryloops-10000} 154 | logmsg="Dhrystone 2 without register variables" 155 | cleanopt="-d $tmp" 156 | ;; 157 | dhry2reg) 158 | options=${dhryloops-10000} 159 | logmsg="Dhrystone 2 using register variables" 160 | cleanopt="-d $tmp" 161 | ;; 162 | arithoh|register|short|int|long|float|double) 163 | options=${arithloop-10000} 164 | logmsg="Arithmetic Test (type = $bench): $options Iterations" 165 | ;; 166 | dc) need=dc.dat 167 | prog=dc 168 | options="" 169 | stdin=dc.dat 170 | stdout=/dev/null 171 | logmsg="Arithmetic Test (sqrt(2) with dc to 99 decimal places)" 172 | ;; 173 | hanoi) options='$param' 174 | stdout=/dev/null 175 | logmsg="Recursion Test: Tower of Hanoi Problem" 176 | paramlist="${ndisk-17}" 177 | parammsg='$param Disk Problem:' 178 | ;; 179 | syscall) 180 | options=${ncall-4000} 181 | logmsg="System Call Overhead Test: 5 x $options Calls" 182 | ;; 183 | context1) 184 | options=${switch1-500} 185 | logmsg="Pipe-based Context Switching Test: 2 x $options Switches" 186 | ;; 187 | pipe) options=${io-2048} 188 | logmsg="Pipe Throughput Test: read & write $options x 512 byte blocks" 189 | ;; 190 | spawn) options=${children-100} 191 | logmsg="Process Creation Test: $options forks" 192 | ;; 193 | execl) options=${nexecs-100} 194 | logmsg="Execl Throughput Test: $options execs" 195 | ;; 196 | randmem|seqmem) 197 | if test $bench = seqmem 198 | then 199 | type=Sequential 200 | else 201 | type=Random 202 | fi 203 | poke=${poke-1000000} 204 | options='-s$param '"-n$poke" 205 | logmsg="$type Memory Access Test: $poke Accesses" 206 | paramlist=${arrays-"512 1024 2048 8192 16384"} 207 | parammsg='Array Size: $param bytes' 208 | cleanopt="-m $tmp" 209 | ;; 210 | fstime) repeat="$shortloop" 211 | where=${where-${TMPDIR}} 212 | options='$param '"$where" 213 | logmsg="Filesystem Throughput Test:" 214 | paramlist=${blocks-"512 1024 2048 8192"} 215 | parammsg='File Size: $param blocks' 216 | cleanopt="-f $tmp" 217 | ;; 218 | C) need=cctest.c 219 | prog=cc 220 | options='$param' 221 | stdout=/dev/null 222 | repeat="$shortloop" 223 | logmsg="C Compiler Test:" 224 | paramlist="cctest.c" 225 | parammsg='cc $param' 226 | rm -f a.out 227 | ;; 228 | dbmscli) 229 | repeat="$shortloop" 230 | 
need="db.dat" 231 | prepcmd='${BINDIR}/dbprep ${testdir}/db.dat 10000' 232 | paramlist=${clients-"1 2 4 8"} 233 | parammsg='$param client processes. (filesize `cat ${testdir}/db.dat|wc -c` bytes)' 234 | logmsg="Client/Server Database Engine:" 235 | options='${testdir}/db.dat $param 0 1000' # $param clients; 236 | # 0 sleep; 1000 iterations 237 | ;; 238 | shell) 239 | prog="multi.sh" 240 | repeat="$shortloop" 241 | logmsg="Bourne shell script and Unix utilities" 242 | paramlist=${background-"1 2 4 8"} 243 | parammsg='$param concurrent background processes' 244 | bgnumber='$param' 245 | testdir="shelldir" 246 | ;; 247 | *) ${BINDIR}/cleanup -l $LOGFILE -r "run: unknown benchmark \"$bench\"" -a 248 | exit 1 249 | ;; 250 | esac 251 | echo "$logmsg" >>$LOGFILE 252 | for param in $paramlist 253 | do # level 2 254 | param=`echo $param | sed 's/_/ /g'` # be sure that spaces are used 255 | # underscore can couple params 256 | if [ "$runoption" != "Q" ] 257 | then 258 | echo "\n [$param] -\c" # generate message to user 259 | fi 260 | eval msg='"'$parammsg'"' # the eval is used to 261 | if test "$msg" # evaluate any embedded 262 | then # variables in the parammsg 263 | echo "" >>$LOGFILE 264 | echo "$msg" >>$LOGFILE 265 | fi 266 | eval opt='"'$options'"' # evaluate any vars in options 267 | eval prep='"'$prepcmd'"' # evaluate any prep command 268 | eval bg='"'$bgnumber'"' # evaluate bgnumber string 269 | rm -f $tmp # remove any tmp files 270 | # if the test requires mulitple concurrent processes, 271 | # prepare the background process string (bgstr) 272 | # this is just a string of "+"s that will provides a 273 | # parameter count for a "for" loop 274 | bgstr="" 275 | if test "$bg" != "" 276 | then 277 | count=`expr "$bg"` 278 | while test $count -gt 0 279 | do 280 | bgstr="+ $bgstr" 281 | count=`expr $count - 1` 282 | done 283 | fi 284 | # 285 | for i in $repeat # loop for the specified number 286 | do # do depth 3 287 | if [ "$runoption" != 'D' ] # level 1 288 | then 289 | # regular Run - set logfile to go on signal 290 | trap "${SCRPDIR}/cleanup -l $LOGFILE -i $i $cleanopt -a; exit" 1 2 3 15 291 | else 292 | trap "exit" 1 2 3 15 293 | fi #end level 1 294 | if [ "$runoption" != 'Q' ] 295 | then 296 | echo " $i\c" # display repeat number 297 | fi 298 | pwd=`pwd` # remember where we are 299 | cd $testdir # move to the test directory 300 | if [ "$runoption" = "V" ] 301 | then 302 | echo 303 | echo "BENCH COMMAND TO BE EXECUTED:" 304 | echo "$prog $opt" 305 | fi 306 | # execute any prepratory command string 307 | if [ -n "$prep" ] 308 | then 309 | $prep >>$stdout 310 | fi 311 | ############ THE BENCH IS TIMED ############## 312 | if test "$stdin" = "" 313 | then # without redirected stdin 314 | time $prog $opt $bgstr 2>>$tmp >>$stdout 315 | else # with redirected stdin 316 | time $prog $opt $bgstr <$stdin 2>>$tmp >>$stdout 317 | fi 318 | time $benchcmd 319 | ############################################### 320 | cd $pwd # move back home 321 | status=$? # save the result code 322 | if test $status != 0 # must have been an error 323 | then 324 | if test -f $tmp # is there an error file ? 
325 | then 326 | cp $tmp ${TMPDIR}/save.$bench.$param 327 | ${SCRPDIR}/cleanup -l $LOGFILE -i $i $cleanopt -r \ 328 | "run: bench=$bench param=$param fatalstatus=$status" -a 329 | else 330 | ${SCRPDIR}/cleanup -l $LOGFILE -r \ 331 | "run: bench=$bench param=$param fatalstatus=$status" -a 332 | fi 333 | exit # leave the script if there are errors 334 | fi # end level 1 335 | done # end do depth 3 - repeat of bench 336 | if [ "$runoption" != 'D' ] 337 | then 338 | ${SCRPDIR}/cleanup -l $LOGFILE $cleanopt # finalize this bench 339 | # with these options 340 | # & calculate results 341 | fi 342 | done # end do depth 2 - end of all options for this bench 343 | ########### some specific cleanup routines ############## 344 | case $bench 345 | in 346 | C) 347 | rm -f cctest.o a.out 348 | ;; 349 | esac 350 | if [ "$runoption" != 'Q' ] 351 | then 352 | echo "" 353 | fi 354 | done # end do level 1 - all benchmarks requested 355 | echo "" >>$LOGFILE 356 | echo " " `who | wc -l` "interactive users." >>$LOGFILE 357 | echo "End Benchmark Run ($date) ...." >>$LOGFILE 358 | if [ "$runoption" != 'Q' ] 359 | then 360 | pg $LOGFILE 361 | fi 362 | exit 363 | -------------------------------------------------------------------------------- /unixbench.sh: -------------------------------------------------------------------------------- 1 | #! /bin/bash 2 | #==============================================================# 3 | # Description: UnixBench script, adapted from https://raw.githubusercontent.com/teddysun/across/master/unixbench.sh, updated to v5.1.6 # 4 | # Author: Teddysun # 5 | # Intro: https://teddysun.com/245.html # 6 | #==============================================================# 7 | cur_dir=/opt/unixbench 8 | 9 | # Check System 10 | [[ $EUID -ne 0 ]] && echo 'Error: This script must be run as root!' && exit 1 11 | [[ -f /etc/redhat-release ]] && os='centos' 12 | [[ ! -z "`egrep -i debian /etc/issue`" ]] && os='debian' 13 | [[ ! -z "`egrep -i ubuntu /etc/issue`" ]] && os='ubuntu' 14 | [[ "$os" == '' ]] && echo 'Error: Your system is not supported to run it!' && exit 1 15 | 16 | # Install necessary libraries 17 | if [ "$os" == 'centos' ]; then 18 | yum -y install make automake gcc autoconf gcc-c++ time perl-Time-HiRes 19 | else 20 | apt-get -y update 21 | apt-get -y install make automake gcc autoconf time perl 22 | fi 23 | 24 | # Create the download directory 25 | mkdir -p ${cur_dir} 26 | cd ${cur_dir} 27 | 28 | # Download UnixBench 5.1.6 29 | if [ -s UnixBench-5.1.6.tar.gz ]; then 30 | echo "UnixBench-5.1.6.tar.gz [found]" 31 | else 32 | echo "UnixBench-5.1.6.tar.gz not found! Downloading now..." 33 | if ! wget -c https://github.com/aliyun/byte-unixbench/releases/download/v5.1.6/UnixBench-5.1.6.tar.gz; then 34 | echo "Failed to download UnixBench-5.1.6.tar.gz, please download it to the ${cur_dir} directory manually and try again." 35 | exit 1 36 | fi 37 | fi 38 | tar -zxvf UnixBench-5.1.6.tar.gz && rm -f UnixBench-5.1.6.tar.gz 39 | cd UnixBench-5.1.6/UnixBench 40 | 41 | # Run UnixBench 42 | make 43 | ./Run 44 | 45 | echo 46 | echo 47 | echo "======= UnixBench run completed! =======" 48 | echo 49 | echo 50 | --------------------------------------------------------------------------------
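
A note on the timing idiom the looping C benchmarks above share (syscall.c pulls in timeit.c for exactly this): wake_me() installs a SIGALRM handler and arms alarm(), the benchmark spins incrementing a global counter, and when the alarm fires the handler prints a COUNT line and exits, so the score is simply iterations completed in the allotted seconds. Below is a minimal self-contained sketch of that pattern, assuming a hard-coded 10-second duration and a getuid() loop body chosen purely for illustration; it is not a replacement for any of the listed modules.

/*
 * Minimal sketch of the alarm-driven "loops per second" harness used by
 * the looping benchmarks: timeit.c supplies wake_me(), the benchmark
 * counts iterations, and the SIGALRM handler reports and exits.
 * The duration and the getuid() body are illustration choices only.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static volatile unsigned long iter;     /* bumped once per loop pass */

static void report(int sig)
{
    (void) sig;
    /* Same "COUNT|n|1|lps" format the UnixBench Run script parses.
       (fprintf in a handler is not strictly async-signal-safe, but this
       mirrors how the original modules report.) */
    fprintf(stderr, "COUNT|%lu|1|lps\n", (unsigned long) iter);
    exit(0);
}

static void wake_me(int seconds, void (*func)(int))
{
    signal(SIGALRM, func);              /* install the handler */
    alarm(seconds);                     /* start the countdown */
}

int main(void)
{
    int duration = 10;                  /* assumed fixed 10-second run */

    iter = 0;
    wake_me(duration, report);
    while (1) {                         /* spin until SIGALRM fires */
        getuid();
        iter++;
    }
}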
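
time-polling.c above measures select(2), poll(2) and poll2(2) the same way: take a gettimeofday() timestamp, issue the call with a zero timeout so it never blocks, take a second timestamp, then walk the ready descriptors through the callbacks[] table. The sketch below strips that down to just the poll(2) loop; the 64 dup()ed descriptors and 100 iterations are arbitrary stand-ins for the program's command-line controlled values.

/*
 * Hedged sketch of the measurement idiom in time_poll(): duplicate an
 * always-writable descriptor a few times, issue a zero-timeout poll(2),
 * and time the call with gettimeofday(2).
 */
#include <errno.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define NFDS  64                         /* illustration value */
#define ITERS 100                        /* illustration value */

int main(void)
{
    struct pollfd fds[NFDS];
    struct timeval t1, t2;
    long total_us = 0;
    int i, n;

    for (i = 0; i < NFDS; i++) {
        fds[i].fd = dup(1);              /* stdout is always writable */
        if (fds[i].fd < 0) {
            fprintf(stderr, "dup: %s\n", strerror(errno));
            return 1;
        }
        fds[i].events = POLLOUT;
    }

    poll(fds, NFDS, 0);                  /* warm the cache, as time_poll() does */

    for (n = 0; n < ITERS; n++) {
        gettimeofday(&t1, NULL);
        if (poll(fds, NFDS, 0) < 0) {    /* zero timeout: never blocks */
            fprintf(stderr, "poll: %s\n", strerror(errno));
            return 1;
        }
        gettimeofday(&t2, NULL);
        total_us += (t2.tv_sec - t1.tv_sec) * 1000000L
                  + (t2.tv_usec - t1.tv_usec);
    }

    printf("average: %.2f us per poll, %.4f us per fd\n",
           (double) total_us / ITERS,
           (double) total_us / ITERS / NFDS);
    return 0;
}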
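
The find_first_set_bit()/find_next_set_bit() helpers in time-polling.c let the select(2) path skip whole unsigned longs of zero bits rather than probing every descriptor. A portable but per-bit equivalent of that scan, shown here only to make the trade-off concrete, uses the standard FD_ISSET macro with a hypothetical handle_ready() callback standing in for the callbacks[] table.

/*
 * Portable equivalent of the ready-descriptor scan in time_select(),
 * using FD_ISSET instead of the word-at-a-time bit search.  It probes
 * every bit, which is exactly the cost the hand-rolled helpers avoid.
 */
#include <stdio.h>
#include <sys/select.h>

static void handle_ready(int fd)
{
    printf("fd %d is ready\n", fd);     /* stand-in for the callbacks[] dispatch */
}

static void scan_ready(fd_set *ready, int max_fd)
{
    int fd;

    for (fd = 0; fd <= max_fd; fd++)    /* one FD_ISSET probe per descriptor */
        if (FD_ISSET(fd, ready))
            handle_ready(fd);
}

int main(void)
{
    fd_set set;

    FD_ZERO(&set);
    FD_SET(0, &set);                    /* pretend stdin is ready */
    FD_SET(2, &set);                    /* and stderr too */
    scan_ready(&set, 2);
    return 0;
}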