├── .gitignore
├── Dockerfile
├── LICENSE
├── README.md
├── atc_old.sh
├── build.sh
├── etcd.sh
├── fio_suite.sh
├── fio_suite2.sh
├── iostat.log
├── iostat.sh
├── metrics.sh
├── must.sh
├── ntp.md
├── push.sh
├── runner.sh
├── sketchbook.md
├── top.log
└── top.sh
/.gitignore:
--------------------------------------------------------------------------------
1 | node_modules
2 | static
3 | push.sh
4 | inspect
5 | cleanfsynctest.log
6 | fiotest
7 |
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | # Benchmark-runner container image
2 | FROM registry.fedoraproject.org/fedora
3 | RUN dnf install -y fio util-linux python3-pip wget && dnf clean all
4 | # RUN /usr/bin/python3 -m pip install --upgrade pip
5 | # RUN pip install numpy
6 | # RUN pip install matplotlib
7 |
8 | WORKDIR /
9 | COPY etcd.sh /
10 | COPY fio_suite.sh /
11 | COPY fio_suite2.sh /
12 | COPY runner.sh /usr/local/bin/
13 | RUN chmod +x /fio_suite.sh /fio_suite2.sh /etcd.sh /usr/local/bin/runner.sh
14 | # ENTRYPOINT alone is enough here; setting CMD as well would pass its
15 | # value to runner.sh as an argument rather than run it.
16 | ENTRYPOINT ["/usr/local/bin/runner.sh"]
--------------------------------------------------------------------------------
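The image above can be built and run roughly as follows. This is a sketch, not part of the repo: the `fio-suite` tag, the host path `/var/lib/fiotest`, and the choice of `podman` are assumptions (`docker` accepts the same flags), and the volume mount only matters if `runner.sh` writes its fio test files under the mounted path.

```shell
# Build the image from the repository root (where the Dockerfile lives).
podman build -t fio-suite .

# Run the suite; mount a host directory (with an SELinux relabel, :Z)
# so fio exercises the disk under test rather than the container overlay.
podman run --rm -v /var/lib/fiotest:/fiotest:Z fio-suite
```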
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc.
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 |
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 |
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 |
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. Therefore, you have
31 | certain responsibilities if you distribute copies of the software, or if
32 | you modify it: responsibilities to respect the freedom of others.
33 |
34 | For example, if you distribute copies of such a program, whether
35 | gratis or for a fee, you must pass on to the recipients the same
36 | freedoms that you received. You must make sure that they, too, receive
37 | or can get the source code. And you must show them these terms so they
38 | know their rights.
39 |
40 | Developers that use the GNU GPL protect your rights with two steps:
41 | (1) assert copyright on the software, and (2) offer you this License
42 | giving you legal permission to copy, distribute and/or modify it.
43 |
44 | For the developers' and authors' protection, the GPL clearly explains
45 | that there is no warranty for this free software. For both users' and
46 | authors' sake, the GPL requires that modified versions be marked as
47 | changed, so that their problems will not be attributed erroneously to
48 | authors of previous versions.
49 |
50 | Some devices are designed to deny users access to install or run
51 | modified versions of the software inside them, although the manufacturer
52 | can do so. This is fundamentally incompatible with the aim of
53 | protecting users' freedom to change the software. The systematic
54 | pattern of such abuse occurs in the area of products for individuals to
55 | use, which is precisely where it is most unacceptable. Therefore, we
56 | have designed this version of the GPL to prohibit the practice for those
57 | products. If such problems arise substantially in other domains, we
58 | stand ready to extend this provision to those domains in future versions
59 | of the GPL, as needed to protect the freedom of users.
60 |
61 | Finally, every program is threatened constantly by software patents.
62 | States should not allow patents to restrict development and use of
63 | software on general-purpose computers, but in those that do, we wish to
64 | avoid the special danger that patents applied to a free program could
65 | make it effectively proprietary. To prevent this, the GPL assures that
66 | patents cannot be used to render the program non-free.
67 |
68 | The precise terms and conditions for copying, distribution and
69 | modification follow.
70 |
71 | TERMS AND CONDITIONS
72 |
73 | 0. Definitions.
74 |
75 | "This License" refers to version 3 of the GNU General Public License.
76 |
77 | "Copyright" also means copyright-like laws that apply to other kinds of
78 | works, such as semiconductor masks.
79 |
80 | "The Program" refers to any copyrightable work licensed under this
81 | License. Each licensee is addressed as "you". "Licensees" and
82 | "recipients" may be individuals or organizations.
83 |
84 | To "modify" a work means to copy from or adapt all or part of the work
85 | in a fashion requiring copyright permission, other than the making of an
86 | exact copy. The resulting work is called a "modified version" of the
87 | earlier work or a work "based on" the earlier work.
88 |
89 | A "covered work" means either the unmodified Program or a work based
90 | on the Program.
91 |
92 | To "propagate" a work means to do anything with it that, without
93 | permission, would make you directly or secondarily liable for
94 | infringement under applicable copyright law, except executing it on a
95 | computer or modifying a private copy. Propagation includes copying,
96 | distribution (with or without modification), making available to the
97 | public, and in some countries other activities as well.
98 |
99 | To "convey" a work means any kind of propagation that enables other
100 | parties to make or receive copies. Mere interaction with a user through
101 | a computer network, with no transfer of a copy, is not conveying.
102 |
103 | An interactive user interface displays "Appropriate Legal Notices"
104 | to the extent that it includes a convenient and prominently visible
105 | feature that (1) displays an appropriate copyright notice, and (2)
106 | tells the user that there is no warranty for the work (except to the
107 | extent that warranties are provided), that licensees may convey the
108 | work under this License, and how to view a copy of this License. If
109 | the interface presents a list of user commands or options, such as a
110 | menu, a prominent item in the list meets this criterion.
111 |
112 | 1. Source Code.
113 |
114 | The "source code" for a work means the preferred form of the work
115 | for making modifications to it. "Object code" means any non-source
116 | form of a work.
117 |
118 | A "Standard Interface" means an interface that either is an official
119 | standard defined by a recognized standards body, or, in the case of
120 | interfaces specified for a particular programming language, one that
121 | is widely used among developers working in that language.
122 |
123 | The "System Libraries" of an executable work include anything, other
124 | than the work as a whole, that (a) is included in the normal form of
125 | packaging a Major Component, but which is not part of that Major
126 | Component, and (b) serves only to enable use of the work with that
127 | Major Component, or to implement a Standard Interface for which an
128 | implementation is available to the public in source code form. A
129 | "Major Component", in this context, means a major essential component
130 | (kernel, window system, and so on) of the specific operating system
131 | (if any) on which the executable work runs, or a compiler used to
132 | produce the work, or an object code interpreter used to run it.
133 |
134 | The "Corresponding Source" for a work in object code form means all
135 | the source code needed to generate, install, and (for an executable
136 | work) run the object code and to modify the work, including scripts to
137 | control those activities. However, it does not include the work's
138 | System Libraries, or general-purpose tools or generally available free
139 | programs which are used unmodified in performing those activities but
140 | which are not part of the work. For example, Corresponding Source
141 | includes interface definition files associated with source files for
142 | the work, and the source code for shared libraries and dynamically
143 | linked subprograms that the work is specifically designed to require,
144 | such as by intimate data communication or control flow between those
145 | subprograms and other parts of the work.
146 |
147 | The Corresponding Source need not include anything that users
148 | can regenerate automatically from other parts of the Corresponding
149 | Source.
150 |
151 | The Corresponding Source for a work in source code form is that
152 | same work.
153 |
154 | 2. Basic Permissions.
155 |
156 | All rights granted under this License are granted for the term of
157 | copyright on the Program, and are irrevocable provided the stated
158 | conditions are met. This License explicitly affirms your unlimited
159 | permission to run the unmodified Program. The output from running a
160 | covered work is covered by this License only if the output, given its
161 | content, constitutes a covered work. This License acknowledges your
162 | rights of fair use or other equivalent, as provided by copyright law.
163 |
164 | You may make, run and propagate covered works that you do not
165 | convey, without conditions so long as your license otherwise remains
166 | in force. You may convey covered works to others for the sole purpose
167 | of having them make modifications exclusively for you, or provide you
168 | with facilities for running those works, provided that you comply with
169 | the terms of this License in conveying all material for which you do
170 | not control copyright. Those thus making or running the covered works
171 | for you must do so exclusively on your behalf, under your direction
172 | and control, on terms that prohibit them from making any copies of
173 | your copyrighted material outside their relationship with you.
174 |
175 | Conveying under any other circumstances is permitted solely under
176 | the conditions stated below. Sublicensing is not allowed; section 10
177 | makes it unnecessary.
178 |
179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180 |
181 | No covered work shall be deemed part of an effective technological
182 | measure under any applicable law fulfilling obligations under article
183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184 | similar laws prohibiting or restricting circumvention of such
185 | measures.
186 |
187 | When you convey a covered work, you waive any legal power to forbid
188 | circumvention of technological measures to the extent such circumvention
189 | is effected by exercising rights under this License with respect to
190 | the covered work, and you disclaim any intention to limit operation or
191 | modification of the work as a means of enforcing, against the work's
192 | users, your or third parties' legal rights to forbid circumvention of
193 | technological measures.
194 |
195 | 4. Conveying Verbatim Copies.
196 |
197 | You may convey verbatim copies of the Program's source code as you
198 | receive it, in any medium, provided that you conspicuously and
199 | appropriately publish on each copy an appropriate copyright notice;
200 | keep intact all notices stating that this License and any
201 | non-permissive terms added in accord with section 7 apply to the code;
202 | keep intact all notices of the absence of any warranty; and give all
203 | recipients a copy of this License along with the Program.
204 |
205 | You may charge any price or no price for each copy that you convey,
206 | and you may offer support or warranty protection for a fee.
207 |
208 | 5. Conveying Modified Source Versions.
209 |
210 | You may convey a work based on the Program, or the modifications to
211 | produce it from the Program, in the form of source code under the
212 | terms of section 4, provided that you also meet all of these conditions:
213 |
214 | a) The work must carry prominent notices stating that you modified
215 | it, and giving a relevant date.
216 |
217 | b) The work must carry prominent notices stating that it is
218 | released under this License and any conditions added under section
219 | 7. This requirement modifies the requirement in section 4 to
220 | "keep intact all notices".
221 |
222 | c) You must license the entire work, as a whole, under this
223 | License to anyone who comes into possession of a copy. This
224 | License will therefore apply, along with any applicable section 7
225 | additional terms, to the whole of the work, and all its parts,
226 | regardless of how they are packaged. This License gives no
227 | permission to license the work in any other way, but it does not
228 | invalidate such permission if you have separately received it.
229 |
230 | d) If the work has interactive user interfaces, each must display
231 | Appropriate Legal Notices; however, if the Program has interactive
232 | interfaces that do not display Appropriate Legal Notices, your
233 | work need not make them do so.
234 |
235 | A compilation of a covered work with other separate and independent
236 | works, which are not by their nature extensions of the covered work,
237 | and which are not combined with it such as to form a larger program,
238 | in or on a volume of a storage or distribution medium, is called an
239 | "aggregate" if the compilation and its resulting copyright are not
240 | used to limit the access or legal rights of the compilation's users
241 | beyond what the individual works permit. Inclusion of a covered work
242 | in an aggregate does not cause this License to apply to the other
243 | parts of the aggregate.
244 |
245 | 6. Conveying Non-Source Forms.
246 |
247 | You may convey a covered work in object code form under the terms
248 | of sections 4 and 5, provided that you also convey the
249 | machine-readable Corresponding Source under the terms of this License,
250 | in one of these ways:
251 |
252 | a) Convey the object code in, or embodied in, a physical product
253 | (including a physical distribution medium), accompanied by the
254 | Corresponding Source fixed on a durable physical medium
255 | customarily used for software interchange.
256 |
257 | b) Convey the object code in, or embodied in, a physical product
258 | (including a physical distribution medium), accompanied by a
259 | written offer, valid for at least three years and valid for as
260 | long as you offer spare parts or customer support for that product
261 | model, to give anyone who possesses the object code either (1) a
262 | copy of the Corresponding Source for all the software in the
263 | product that is covered by this License, on a durable physical
264 | medium customarily used for software interchange, for a price no
265 | more than your reasonable cost of physically performing this
266 | conveying of source, or (2) access to copy the
267 | Corresponding Source from a network server at no charge.
268 |
269 | c) Convey individual copies of the object code with a copy of the
270 | written offer to provide the Corresponding Source. This
271 | alternative is allowed only occasionally and noncommercially, and
272 | only if you received the object code with such an offer, in accord
273 | with subsection 6b.
274 |
275 | d) Convey the object code by offering access from a designated
276 | place (gratis or for a charge), and offer equivalent access to the
277 | Corresponding Source in the same way through the same place at no
278 | further charge. You need not require recipients to copy the
279 | Corresponding Source along with the object code. If the place to
280 | copy the object code is a network server, the Corresponding Source
281 | may be on a different server (operated by you or a third party)
282 | that supports equivalent copying facilities, provided you maintain
283 | clear directions next to the object code saying where to find the
284 | Corresponding Source. Regardless of what server hosts the
285 | Corresponding Source, you remain obligated to ensure that it is
286 | available for as long as needed to satisfy these requirements.
287 |
288 | e) Convey the object code using peer-to-peer transmission, provided
289 | you inform other peers where the object code and Corresponding
290 | Source of the work are being offered to the general public at no
291 | charge under subsection 6d.
292 |
293 | A separable portion of the object code, whose source code is excluded
294 | from the Corresponding Source as a System Library, need not be
295 | included in conveying the object code work.
296 |
297 | A "User Product" is either (1) a "consumer product", which means any
298 | tangible personal property which is normally used for personal, family,
299 | or household purposes, or (2) anything designed or sold for incorporation
300 | into a dwelling. In determining whether a product is a consumer product,
301 | doubtful cases shall be resolved in favor of coverage. For a particular
302 | product received by a particular user, "normally used" refers to a
303 | typical or common use of that class of product, regardless of the status
304 | of the particular user or of the way in which the particular user
305 | actually uses, or expects or is expected to use, the product. A product
306 | is a consumer product regardless of whether the product has substantial
307 | commercial, industrial or non-consumer uses, unless such uses represent
308 | the only significant mode of use of the product.
309 |
310 | "Installation Information" for a User Product means any methods,
311 | procedures, authorization keys, or other information required to install
312 | and execute modified versions of a covered work in that User Product from
313 | a modified version of its Corresponding Source. The information must
314 | suffice to ensure that the continued functioning of the modified object
315 | code is in no case prevented or interfered with solely because
316 | modification has been made.
317 |
318 | If you convey an object code work under this section in, or with, or
319 | specifically for use in, a User Product, and the conveying occurs as
320 | part of a transaction in which the right of possession and use of the
321 | User Product is transferred to the recipient in perpetuity or for a
322 | fixed term (regardless of how the transaction is characterized), the
323 | Corresponding Source conveyed under this section must be accompanied
324 | by the Installation Information. But this requirement does not apply
325 | if neither you nor any third party retains the ability to install
326 | modified object code on the User Product (for example, the work has
327 | been installed in ROM).
328 |
329 | The requirement to provide Installation Information does not include a
330 | requirement to continue to provide support service, warranty, or updates
331 | for a work that has been modified or installed by the recipient, or for
332 | the User Product in which it has been modified or installed. Access to a
333 | network may be denied when the modification itself materially and
334 | adversely affects the operation of the network or violates the rules and
335 | protocols for communication across the network.
336 |
337 | Corresponding Source conveyed, and Installation Information provided,
338 | in accord with this section must be in a format that is publicly
339 | documented (and with an implementation available to the public in
340 | source code form), and must require no special password or key for
341 | unpacking, reading or copying.
342 |
343 | 7. Additional Terms.
344 |
345 | "Additional permissions" are terms that supplement the terms of this
346 | License by making exceptions from one or more of its conditions.
347 | Additional permissions that are applicable to the entire Program shall
348 | be treated as though they were included in this License, to the extent
349 | that they are valid under applicable law. If additional permissions
350 | apply only to part of the Program, that part may be used separately
351 | under those permissions, but the entire Program remains governed by
352 | this License without regard to the additional permissions.
353 |
354 | When you convey a copy of a covered work, you may at your option
355 | remove any additional permissions from that copy, or from any part of
356 | it. (Additional permissions may be written to require their own
357 | removal in certain cases when you modify the work.) You may place
358 | additional permissions on material, added by you to a covered work,
359 | for which you have or can give appropriate copyright permission.
360 |
361 | Notwithstanding any other provision of this License, for material you
362 | add to a covered work, you may (if authorized by the copyright holders of
363 | that material) supplement the terms of this License with terms:
364 |
365 | a) Disclaiming warranty or limiting liability differently from the
366 | terms of sections 15 and 16 of this License; or
367 |
368 | b) Requiring preservation of specified reasonable legal notices or
369 | author attributions in that material or in the Appropriate Legal
370 | Notices displayed by works containing it; or
371 |
372 | c) Prohibiting misrepresentation of the origin of that material, or
373 | requiring that modified versions of such material be marked in
374 | reasonable ways as different from the original version; or
375 |
376 | d) Limiting the use for publicity purposes of names of licensors or
377 | authors of the material; or
378 |
379 | e) Declining to grant rights under trademark law for use of some
380 | trade names, trademarks, or service marks; or
381 |
382 | f) Requiring indemnification of licensors and authors of that
383 | material by anyone who conveys the material (or modified versions of
384 | it) with contractual assumptions of liability to the recipient, for
385 | any liability that these contractual assumptions directly impose on
386 | those licensors and authors.
387 |
388 | All other non-permissive additional terms are considered "further
389 | restrictions" within the meaning of section 10. If the Program as you
390 | received it, or any part of it, contains a notice stating that it is
391 | governed by this License along with a term that is a further
392 | restriction, you may remove that term. If a license document contains
393 | a further restriction but permits relicensing or conveying under this
394 | License, you may add to a covered work material governed by the terms
395 | of that license document, provided that the further restriction does
396 | not survive such relicensing or conveying.
397 |
398 | If you add terms to a covered work in accord with this section, you
399 | must place, in the relevant source files, a statement of the
400 | additional terms that apply to those files, or a notice indicating
401 | where to find the applicable terms.
402 |
403 | Additional terms, permissive or non-permissive, may be stated in the
404 | form of a separately written license, or stated as exceptions;
405 | the above requirements apply either way.
406 |
407 | 8. Termination.
408 |
409 | You may not propagate or modify a covered work except as expressly
410 | provided under this License. Any attempt otherwise to propagate or
411 | modify it is void, and will automatically terminate your rights under
412 | this License (including any patent licenses granted under the third
413 | paragraph of section 11).
414 |
415 | However, if you cease all violation of this License, then your
416 | license from a particular copyright holder is reinstated (a)
417 | provisionally, unless and until the copyright holder explicitly and
418 | finally terminates your license, and (b) permanently, if the copyright
419 | holder fails to notify you of the violation by some reasonable means
420 | prior to 60 days after the cessation.
421 |
422 | Moreover, your license from a particular copyright holder is
423 | reinstated permanently if the copyright holder notifies you of the
424 | violation by some reasonable means, this is the first time you have
425 | received notice of violation of this License (for any work) from that
426 | copyright holder, and you cure the violation prior to 30 days after
427 | your receipt of the notice.
428 |
429 | Termination of your rights under this section does not terminate the
430 | licenses of parties who have received copies or rights from you under
431 | this License. If your rights have been terminated and not permanently
432 | reinstated, you do not qualify to receive new licenses for the same
433 | material under section 10.
434 |
435 | 9. Acceptance Not Required for Having Copies.
436 |
437 | You are not required to accept this License in order to receive or
438 | run a copy of the Program. Ancillary propagation of a covered work
439 | occurring solely as a consequence of using peer-to-peer transmission
440 | to receive a copy likewise does not require acceptance. However,
441 | nothing other than this License grants you permission to propagate or
442 | modify any covered work. These actions infringe copyright if you do
443 | not accept this License. Therefore, by modifying or propagating a
444 | covered work, you indicate your acceptance of this License to do so.
445 |
446 | 10. Automatic Licensing of Downstream Recipients.
447 |
448 | Each time you convey a covered work, the recipient automatically
449 | receives a license from the original licensors, to run, modify and
450 | propagate that work, subject to this License. You are not responsible
451 | for enforcing compliance by third parties with this License.
452 |
453 | An "entity transaction" is a transaction transferring control of an
454 | organization, or substantially all assets of one, or subdividing an
455 | organization, or merging organizations. If propagation of a covered
456 | work results from an entity transaction, each party to that
457 | transaction who receives a copy of the work also receives whatever
458 | licenses to the work the party's predecessor in interest had or could
459 | give under the previous paragraph, plus a right to possession of the
460 | Corresponding Source of the work from the predecessor in interest, if
461 | the predecessor has it or can get it with reasonable efforts.
462 |
463 | You may not impose any further restrictions on the exercise of the
464 | rights granted or affirmed under this License. For example, you may
465 | not impose a license fee, royalty, or other charge for exercise of
466 | rights granted under this License, and you may not initiate litigation
467 | (including a cross-claim or counterclaim in a lawsuit) alleging that
468 | any patent claim is infringed by making, using, selling, offering for
469 | sale, or importing the Program or any portion of it.
470 |
471 | 11. Patents.
472 |
473 | A "contributor" is a copyright holder who authorizes use under this
474 | License of the Program or a work on which the Program is based. The
475 | work thus licensed is called the contributor's "contributor version".
476 |
477 | A contributor's "essential patent claims" are all patent claims
478 | owned or controlled by the contributor, whether already acquired or
479 | hereafter acquired, that would be infringed by some manner, permitted
480 | by this License, of making, using, or selling its contributor version,
481 | but do not include claims that would be infringed only as a
482 | consequence of further modification of the contributor version. For
483 | purposes of this definition, "control" includes the right to grant
484 | patent sublicenses in a manner consistent with the requirements of
485 | this License.
486 |
487 | Each contributor grants you a non-exclusive, worldwide, royalty-free
488 | patent license under the contributor's essential patent claims, to
489 | make, use, sell, offer for sale, import and otherwise run, modify and
490 | propagate the contents of its contributor version.
491 |
492 | In the following three paragraphs, a "patent license" is any express
493 | agreement or commitment, however denominated, not to enforce a patent
494 | (such as an express permission to practice a patent or covenant not to
495 | sue for patent infringement). To "grant" such a patent license to a
496 | party means to make such an agreement or commitment not to enforce a
497 | patent against the party.
498 |
499 | If you convey a covered work, knowingly relying on a patent license,
500 | and the Corresponding Source of the work is not available for anyone
501 | to copy, free of charge and under the terms of this License, through a
502 | publicly available network server or other readily accessible means,
503 | then you must either (1) cause the Corresponding Source to be so
504 | available, or (2) arrange to deprive yourself of the benefit of the
505 | patent license for this particular work, or (3) arrange, in a manner
506 | consistent with the requirements of this License, to extend the patent
507 | license to downstream recipients. "Knowingly relying" means you have
508 | actual knowledge that, but for the patent license, your conveying the
509 | covered work in a country, or your recipient's use of the covered work
510 | in a country, would infringe one or more identifiable patents in that
511 | country that you have reason to believe are valid.
512 |
513 | If, pursuant to or in connection with a single transaction or
514 | arrangement, you convey, or propagate by procuring conveyance of, a
515 | covered work, and grant a patent license to some of the parties
516 | receiving the covered work authorizing them to use, propagate, modify
517 | or convey a specific copy of the covered work, then the patent license
518 | you grant is automatically extended to all recipients of the covered
519 | work and works based on it.
520 |
521 | A patent license is "discriminatory" if it does not include within
522 | the scope of its coverage, prohibits the exercise of, or is
523 | conditioned on the non-exercise of one or more of the rights that are
524 | specifically granted under this License. You may not convey a covered
525 | work if you are a party to an arrangement with a third party that is
526 | in the business of distributing software, under which you make payment
527 | to the third party based on the extent of your activity of conveying
528 | the work, and under which the third party grants, to any of the
529 | parties who would receive the covered work from you, a discriminatory
530 | patent license (a) in connection with copies of the covered work
531 | conveyed by you (or copies made from those copies), or (b) primarily
532 | for and in connection with specific products or compilations that
533 | contain the covered work, unless you entered into that arrangement,
534 | or that patent license was granted, prior to 28 March 2007.
535 |
536 | Nothing in this License shall be construed as excluding or limiting
537 | any implied license or other defenses to infringement that may
538 | otherwise be available to you under applicable patent law.
539 |
540 | 12. No Surrender of Others' Freedom.
541 |
542 | If conditions are imposed on you (whether by court order, agreement or
543 | otherwise) that contradict the conditions of this License, they do not
544 | excuse you from the conditions of this License. If you cannot convey a
545 | covered work so as to satisfy simultaneously your obligations under this
546 | License and any other pertinent obligations, then as a consequence you may
547 | not convey it at all. For example, if you agree to terms that obligate you
548 | to collect a royalty for further conveying from those to whom you convey
549 | the Program, the only way you could satisfy both those terms and this
550 | License would be to refrain entirely from conveying the Program.
551 |
552 | 13. Use with the GNU Affero General Public License.
553 |
554 | Notwithstanding any other provision of this License, you have
555 | permission to link or combine any covered work with a work licensed
556 | under version 3 of the GNU Affero General Public License into a single
557 | combined work, and to convey the resulting work. The terms of this
558 | License will continue to apply to the part which is the covered work,
559 | but the special requirements of the GNU Affero General Public License,
560 | section 13, concerning interaction through a network will apply to the
561 | combination as such.
562 |
563 | 14. Revised Versions of this License.
564 |
565 | The Free Software Foundation may publish revised and/or new versions of
566 | the GNU General Public License from time to time. Such new versions will
567 | be similar in spirit to the present version, but may differ in detail to
568 | address new problems or concerns.
569 |
570 | Each version is given a distinguishing version number. If the
571 | Program specifies that a certain numbered version of the GNU General
572 | Public License "or any later version" applies to it, you have the
573 | option of following the terms and conditions either of that numbered
574 | version or of any later version published by the Free Software
575 | Foundation. If the Program does not specify a version number of the
576 | GNU General Public License, you may choose any version ever published
577 | by the Free Software Foundation.
578 |
579 | If the Program specifies that a proxy can decide which future
580 | versions of the GNU General Public License can be used, that proxy's
581 | public statement of acceptance of a version permanently authorizes you
582 | to choose that version for the Program.
583 |
584 | Later license versions may give you additional or different
585 | permissions. However, no additional obligations are imposed on any
586 | author or copyright holder as a result of your choosing to follow a
587 | later version.
588 |
589 | 15. Disclaimer of Warranty.
590 |
591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599 |
600 | 16. Limitation of Liability.
601 |
602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610 | SUCH DAMAGES.
611 |
612 | 17. Interpretation of Sections 15 and 16.
613 |
614 | If the disclaimer of warranty and limitation of liability provided
615 | above cannot be given local legal effect according to their terms,
616 | reviewing courts shall apply local law that most closely approximates
617 | an absolute waiver of all civil liability in connection with the
618 | Program, unless a warranty or assumption of liability accompanies a
619 | copy of the Program in return for a fee.
620 |
621 | END OF TERMS AND CONDITIONS
622 |
623 | How to Apply These Terms to Your New Programs
624 |
625 | If you develop a new program, and you want it to be of the greatest
626 | possible use to the public, the best way to achieve this is to make it
627 | free software which everyone can redistribute and change under these terms.
628 |
629 | To do so, attach the following notices to the program. It is safest
630 | to attach them to the start of each source file to most effectively
631 | state the exclusion of warranty; and each file should have at least
632 | the "copyright" line and a pointer to where the full notice is found.
633 |
634 |
635 |     Copyright (C) <year>  <name of author>
636 |
637 | This program is free software: you can redistribute it and/or modify
638 | it under the terms of the GNU General Public License as published by
639 | the Free Software Foundation, either version 3 of the License, or
640 | (at your option) any later version.
641 |
642 | This program is distributed in the hope that it will be useful,
643 | but WITHOUT ANY WARRANTY; without even the implied warranty of
644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645 | GNU General Public License for more details.
646 |
647 | You should have received a copy of the GNU General Public License
648 |     along with this program.  If not, see <https://www.gnu.org/licenses/>.
649 |
650 | Also add information on how to contact you by electronic and paper mail.
651 |
652 | If the program does terminal interaction, make it output a short
653 | notice like this when it starts in an interactive mode:
654 |
655 |     <program>  Copyright (C) <year>  <name of author>
656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657 | This is free software, and you are welcome to redistribute it
658 | under certain conditions; type `show c' for details.
659 |
660 | The hypothetical commands `show w' and `show c' should show the appropriate
661 | parts of the General Public License. Of course, your program's commands
662 | might be different; for a GUI interface, you would use an "about box".
663 |
664 | You should also get your employer (if you work as a programmer) or school,
665 | if any, to sign a "copyright disclaimer" for the program, if necessary.
666 | For more information on this, and how to apply and follow the GNU GPL, see
667 | <https://www.gnu.org/licenses/>.
668 |
669 | The GNU General Public License does not permit incorporating your program
670 | into proprietary programs. If your program is a subroutine library, you
671 | may consider it more useful to permit linking proprietary applications with
672 | the library. If this is what you want to do, use the GNU Lesser General
673 | Public License instead of this License. But first, please read
674 | <https://www.gnu.org/licenses/why-not-lgpl.html>.
675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # openshift-etcd-suite
2 |
3 | **DEPRECATED!!!** Use [etcd-tools](https://github.com/peterducai/etcd-tools/) instead.
4 |
5 | Tools to troubleshoot etcd on OpenShift 4.
6 |
7 | For easier use of the container, you can create an alias for openshift-etcd-suite:
8 |
9 | > alias oes="podman run --volume /$(pwd):/test:Z quay.io/peterducai/openshift-etcd-suite:latest"
10 |
11 | To build the container, just run:
12 |
13 | > buildah bud -t openshift-etcd-suite:latest .
14 |
15 | *IMPORTANT*: the latest version requires gnuplot and will create a folder (with generated charts) in HOME.
16 |
17 | ## etcd.sh script
18 |
19 | The etcd script collects info from etcd pods, produces a short summary, searches for errors/issues, and explains what the expected values are.
20 |
21 | The fastest way to use it with a must-gather is:
22 |
23 | ```
24 | alias etcdcheck='podman run --privileged --volume /$(pwd):/test quay.io/peterducai/openshift-etcd-suite:latest etcd '
25 | etcdcheck /test/
26 | ```
27 |
28 | **You don't have to use the full path, but /test/ is important.**
29 |
30 | You can either do *oc login* and then run:
31 |
32 | > chmod +x etcd.sh && ./etcd.sh
33 |
34 | > ./etcd.sh /\
35 |
36 | or
37 |
38 | > podman run --privileged --volume /$(pwd):/test quay.io/peterducai/openshift-etcd-suite:latest etcd /test/\
39 |
40 |
41 | ## fio_suite
42 |
43 | fio_suite is a benchmark tool that runs several fio tests to see how IOPS change under different loads.
44 |
45 | Run
46 |
47 | > ./fio_suite.sh
48 |
49 | or through podman/docker:
50 |
51 | > podman run --volume /$(pwd):/test:Z quay.io/peterducai/openshift-etcd-suite:latest fio
52 |
53 | but on RHCOS run
54 |
55 | > podman run --privileged --volume /$(pwd):/test quay.io/peterducai/openshift-etcd-suite:latest fio
56 |
57 | or, to benchmark the disk where etcd resides:
58 |
59 | > podman run --privileged --volume /var/lib/etcd:/test quay.io/peterducai/openshift-etcd-suite:latest fio
60 |
61 | **NOTE:** don't run it in / or /home/user, as it's a top-level folder and you will get an SELinux error.
62 |
63 | ```
64 | podman run --privileged --volume /$(pwd):/test quay.io/peterducai/openshift-etcd-suite:latest fio
65 | FIO SUITE version 0.1
66 |
67 | WARNING: this test will run for several minutes without any progress! Please wait until it finish!
68 |
69 | - [MAX CONCURRENT READ] ---
70 | This job is a read-heavy workload with lots of parallelism that is likely to show off the device's best throughput:
71 |
72 | read: IOPS=4282, BW=268MiB/s (281MB/s)(1024MiB/3826msec)
73 | read: IOPS=3760, BW=235MiB/s (246MB/s)(200MiB/851msec)
74 | - [REQUEST OVERHEAD AND SEEK TIMES] ---
75 | This job is a latency-sensitive workload that stresses per-request overhead and seek times. Random reads.
76 |
77 | read: IOPS=258k, BW=1009MiB/s (1058MB/s)(1024MiB/1015msec)
78 | read: IOPS=263k, BW=1026MiB/s (1075MB/s)(200MiB/195msec)
79 |
80 | - [SEQUENTIAL IOPS UNDER DIFFERENT READ/WRITE LOAD] ---
81 |
82 | -- [ SINGLE JOB, 70% read, 30% write] --
83 |
84 | write: IOPS=41.6k, BW=162MiB/s (170MB/s)(308MiB/1894msec); 0 zone resets
85 | write: IOPS=42.5k, BW=166MiB/s (174MB/s)(59.9MiB/361msec); 0 zone resets
86 | -- [ SINGLE JOB, 30% read, 70% write] --
87 |
88 | write: IOPS=35.7k, BW=139MiB/s (146MB/s)(140MiB/1002msec); 0 zone resets
89 | write: IOPS=35.4k, BW=138MiB/s (145MB/s)(715MiB/5171msec); 0 zone resets
90 | -- [ 8 PARALLEL JOBS, 70% read, 30% write] --
91 |
92 | write: IOPS=5662, BW=22.1MiB/s (23.2MB/s)(91.4MiB/4130msec); 0 zone resets
93 | write: IOPS=5632, BW=22.0MiB/s (23.1MB/s)(59.6MiB/2708msec); 0 zone resets
94 | -- [ 8 PARALLEL JOBS, 30% read, 70% write] --
95 |
96 | write: IOPS=6202, BW=24.2MiB/s (25.4MB/s)(140MiB/5765msec); 0 zone resets
97 | write: IOPS=6219, BW=24.3MiB/s (25.5MB/s)(485MiB/19974msec); 0 zone resets
98 |
99 | - END -----------------------------------------
100 |
101 | ```
102 |
103 |
104 |
105 | [](https://quay.io/repository/peterducai/openshift-etcd-suite)
106 |
--------------------------------------------------------------------------------
/atc_old.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #
4 | # Author : Peter Ducai
5 | # Homepage : https://github.com/peterducai/automatic_traffic_shaper
6 | # License : Apache2
7 | # Copyright (c) 2017, Peter Ducai
8 | # All rights reserved.
9 | #
10 |
11 | # Purpose : dynamic linux traffic shaper
12 | # Usage : ats.sh for more options
13 |
14 |
15 | # TERMINAL COLORS -----------------------------------------------------------------
16 |
17 | NONE='\033[00m'
18 | RED='\033[01;31m'
19 | GREEN='\033[01;32m'
20 | YELLOW='\033[01;33m'
21 | BLACK='\033[30m'
22 | BLUE='\033[34m'
23 | VIOLET='\033[35m'
24 | CYAN='\033[36m'
25 | GREY='\033[37m'
26 |
27 | # INTERFACES
28 |
29 | EXT_IF="eth0"
30 | INT_IF="eth1"
31 | IP="xxx" #HERE PUT SERVER IP ADDRESS
32 |
33 | TC="/sbin/tc"
34 | IPT="/sbin/iptables"
35 | MOD="/sbin/modprobe"
36 |
37 | # tc uses the following units when passed as a parameter.
38 | # kbps: Kilobytes per second
39 | # mbps: Megabytes per second
40 | # kbit: Kilobits per second
41 | # mbit: Megabits per second
42 | # bps: Bytes per second
43 |
44 | # DO NOT EDIT BELOW THIS LINE ______________________________________________
45 | totaldown=0
46 | totalup=0
47 | total_clients=0
48 | total_groups=0
49 |
50 | groups_index=()
51 | groups_name=()
52 | groups_id=()
53 | groups_down=()
54 | groups_up=()
55 | groups_aggr=()
56 | groups_prio=()
57 | groups_active=()
58 | groups_client_count=()
59 | groups_sub_count=()
60 |
61 | all_ip=()
62 | all_parentid=()
63 | all_classid=()
64 |
65 | #########################################
66 | # increment and set to 2 decimal places #
67 | #########################################
68 | function inc_2_leadzeroes {
69 |   local val=$1
70 |   val=$(($val+1))
71 |
72 |   if [[ $val -lt 10 ]]; then
73 |     val="0"$val
74 |   fi
75 |   # output the padded value; callers read it via command substitution,
76 |   # so no numeric return is needed (return codes are limited to 0-255)
77 |   echo "$val"
78 | }
80 |
81 | #########################################
82 | # increment and set to 3 decimal places #
83 | #########################################
84 | function inc_3_leadzeroes {
85 |   local val=$1
86 |   val=$(($val+1))
87 |
88 |   if [[ $val -lt 10 ]]; then
89 |     val="00"$val
90 |   elif [[ $val -lt 100 ]]; then
91 |     val="0"$val
92 |   fi
93 |   # output the padded value; callers read it via command substitution
94 |   echo "$val"
95 | }
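The two zero-padding helpers above can be expressed more compactly with printf's `%0Nd` format; a minimal sketch (`inc_pad` is a hypothetical name, not part of this script):

```shell
#!/bin/bash
# Hypothetical helper: increment VALUE and left-pad it with zeros to WIDTH digits,
# letting printf do the padding instead of manual branching.
inc_pad() {
    printf "%0${1}d\n" "$(($2 + 1))"
}

inc_pad 2 5    # prints 06
inc_pad 3 41   # prints 042
```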
103 |
104 | #################################
105 | # Check if IP address is valid #
106 | #################################
107 | function validate_IP {
108 |   local ip=$1
109 |   local stat=1
110 |   # Check the IP address under test to see if it matches the extended REGEX
111 |
112 |   if [[ $ip =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
113 |     # Record current default field separator
114 |     OIFS=$IFS
115 |     # Set default field separator to .
116 |     IFS='.'
117 |     # Create an array of the IP Address being tested
118 |     ip=($ip)
119 |     # Reset the field separator to the original saved above
120 |     IFS=$OIFS
121 |     # Is each octet between 0 and 255?
122 |     [[ ${ip[0]} -le 255 && ${ip[1]} -le 255 && ${ip[2]} -le 255 && ${ip[3]} -le 255 ]]
123 |     # Set the return code.
124 |     stat=$?
125 |   fi
126 |   return $stat
127 | }
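The same octet-range check can be written as a self-contained predicate that reports validity purely through its exit status; a sketch of the idea (`is_ipv4` is a hypothetical name, not defined in this script):

```shell
#!/bin/bash
# Hypothetical sketch: validate a dotted-quad IPv4 address with a regex capture
# plus a per-octet range check, reporting the result via the exit status.
is_ipv4() {
    local ip=$1 octet
    [[ $ip =~ ^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})$ ]] || return 1
    for octet in "${BASH_REMATCH[@]:1}"; do
        # 10#$octet forces base 10 so leading zeros are not read as octal
        (( 10#$octet <= 255 )) || return 1
    done
}

is_ipv4 192.168.1.10 && echo valid       # prints valid
is_ipv4 300.1.1.1 || echo out-of-range   # prints out-of-range
```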
126 |
127 | ###########################################
128 | # load kernel modules for shaper/firewall #
129 | ###########################################
130 | function ipt_load_modules {
131 | $MOD ip_tables ## Core Netfilter Module
132 | $MOD ip_conntrack ## Stateful Connections
133 | $MOD ip_filter
134 | $MOD ip_mangle
135 | $MOD ip_nat
136 | $MOD ip_nat_ftp
137 | $MOD ip_nat_irc
138 | $MOD ip_conntrack
139 | $MOD ip_conntrack_ftp
140 | $MOD ip_conntrack_irc
141 | $MOD iptable_filter ## Filter Table
142 | $MOD ipt_MASQUERADE ## Masquerade Target
143 |
144 | #$MOD ip6_tables
145 | #$MOD ip6_filter
146 | #$MOD ip6_mangle
147 |
148 | }
149 |
150 |
151 | #TODO
152 | function tc_print_counters {
153 | echo -e "${GREEN}total download:$totaldown upload:$totalup${NONE}"
154 | echo -e "${GREEN}========================================================${NONE}"
155 | echo -e "${YELLOW}total ${totalgroups} groups and ${totalsubs} subgroups ${NONE}"
156 | }
157 |
158 | ######################
159 | # remove root qdisc #
160 | ######################
161 | function tc_remove {
162 | echo "REMOVING ROOT QDISC"
163 | $TC qdisc del dev $INT_IF root
164 | $TC qdisc del dev $EXT_IF root
165 | $TC qdisc del dev $INT_IF ingress
166 | $TC qdisc del dev $EXT_IF ingress
167 | }
168 |
169 | function tc_add_group { # $parent, $classid, $total, $ceil
170 | local parent=$1
171 | local classid=$2
172 | local total=$3
173 | local ceil=$4
174 | local DEV=$5
175 | $TC class add dev $DEV parent $parent classid $classid htb rate ${total}kbit ceil ${ceil}kbit
176 | }
177 |
178 | function tc_add_group_down {
179 | tc_add_group $1 $2 $3 $4 $EXT_IF
180 |   totaldown=$(( totaldown + $4 ))
181 | }
182 |
183 | function tc_add_group_up {
184 | tc_add_group $1 $2 $3 $4 $INT_IF
185 |   totalup=$(( totalup + $4 ))
186 | }
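Since these wrappers shell out to tc directly, it can be useful to review the generated commands before touching the kernel; a hypothetical dry-run twin of tc_add_group (`tc_add_group_dry` is not part of this script):

```shell
#!/bin/bash
# Hypothetical dry-run variant of tc_add_group: print the tc command that
# would be run, instead of executing it, so the shaping setup can be audited.
tc_add_group_dry() {
    local parent=$1 classid=$2 total=$3 ceil=$4 dev=$5
    echo "tc class add dev $dev parent $parent classid $classid htb rate ${total}kbit ceil ${ceil}kbit"
}

tc_add_group_dry 1: 1:1 30720 30720 eth0
# prints: tc class add dev eth0 parent 1: classid 1:1 htb rate 30720kbit ceil 30720kbit
```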
187 |
188 | #####################################
189 | # list root qdisc and its classes  #
190 | #####################################
191 | function tc_show {
192 | echo "SHOW ROOT QDISC"
193 | $TC -s qdisc show dev $EXT_IF
194 | $TC -s class show dev $EXT_IF
195 | $TC -s qdisc show dev $INT_IF
196 | $TC -s class show dev $INT_IF
197 | }
198 |
199 | ##########################################################################
200 | # load groups from definition file and compute traffic values like ceil #
201 | ##########################################################################
202 | function prepare_group_definitions {
203 | local z=0
204 | #check if group.definitions exist, if not, create it with default Fiber30 group
205 | if [ -s "config/group.definitions" ]; then
206 | echo -e "${GREEN}found group.definitions and processing...${NONE}" >> /dev/null
207 | else
208 | echo -e "${RED} group.definitions NOT FOUND!!! recreating...${NONE}"
209 | mkdir config
210 | touch config/group.definitions;
211 | echo "#name download upload aggregation prio_group" > config/group.definitions
212 | echo "Fiber30 30720 5120 8 0" >> config/group.definitions
213 | exit
214 | fi
215 |
216 | #read group.definitions and fill arrays with values
217 | while read line
218 | do
219 | #take line and split it into array, first value is main! (aka group,client..etc)
220 | arrl=($(echo $line | tr " " "\n"))
221 | if [ -z "$line" ]; then
222 | echo "EMPTY LINE"
223 | else
224 | case "${arrl[0]}" in
225 | '#'*)
226 | ;;
227 | *) echo "--------------------------------------------------------------------------------------"
228 | echo -e "${GREEN}found GROUP called ${arrl[0]} | download $((arrl[1] / 1024)) Mbs | upload $((arrl[2]/1024)) Mbs | aggr ${arrl[3]} ${NONE}"
229 | echo "--------------------------------------------------------------------------------------"
230 | groups_name[$z]=${arrl[0]}
231 | groups_index[$z]=$((${z}+1))
232 | groups_down[$z]=${arrl[1]}
233 | groups_up[$z]=${arrl[2]}
234 | groups_aggr[$z]=${arrl[3]}
235 | groups_prio[$z]=${arrl[4]}
236 | groups_sub_count[$z]=0
237 | groups_active[$z]=0 #by default mark group inactive
238 |
239 | #if file .group exist, MARK GROUP ACTIVE
240 | if [ -f config/${arrl[0]}.group ];
241 | then
242 | groups_active[$z]=1
243 | else
244 | echo -e "${RED}#### ERROR! #### File ${groups_name[$z]}.group does NOT EXIST but is defined in group.definitions${NONE}"
245 | echo "creating dummy .group file"
246 | touch config/${arrl[0]}.group
247 |           echo "127.0.0.1" > config/${arrl[0]}.group
248 | echo -e "${RED}FIX config/${arrl[0]}.group FILE BEFORE RUNNING AGAIN!!!${NONE}"
249 | exit #exit so user can change 127.0.0.1 in .group file
250 | fi
251 |
252 | z=$((${z}+1))
253 | ;;
254 | esac
255 | fi
256 | done > /dev/null
360 | else
361 | echo "no match: ${all_classid[$m]} " >> /dev/null
362 | fi
363 | done
364 | }
365 |
366 | #####################################################################
367 | # print commands of all IP addresses with certain download class ID #
368 | #####################################################################
369 | function print_command_with_classid_down {
370 | local size=${#all_ip[*]}
371 |
372 | local subdown=$2
373 | local subceil=$3
374 | local NIC=$4
375 | local class_set=0
376 |
377 | for (( m=0; m<$size; m++ ))
378 | do
379 | if [[ ${all_classid[$m]} == $1 ]];
380 | then
381 | par=${all_classid[$m]%?}
382 | par=${par%?}
383 |
384 | if [[ $class_set == '0' ]]; then
385 | echo -e " | | | |_ $TC class add dev $NIC parent 1:$(printf %x $((1$par))) classid 1:$(printf %x $((1${all_classid[$m]}))) htb rate ${GREEN}${subdown}${NONE}Kbit ceil ${GREEN}${subceil}${NONE}Kbit buffer 1600"
386 | class_set=1
387 | fi
388 |
389 | echo -e " | | | |_ $IPT -A POSTROUTING -t mangle -o $NIC -d ${YELLOW}${all_ip[$m]}${NONE} -j CLASSIFY --set-class 1:$(printf %x $((1${all_classid[$m]}))) "
390 |
391 | else
392 | echo "no match: ${all_classid[$m]}" >> /dev/null
393 | fi
394 | done
395 | }
396 |
397 | ###################################################################
398 | # print commands of all IP addresses with certain upload class ID #
399 | ###################################################################
400 | function print_command_with_classid_up {
401 | #echo "print_command_with_classid_up for ${#all_ip[*]} addresses"
402 |
403 | local size=${#all_ip[*]}
404 |
405 | local subdown=$2
406 | local subceil=$3
407 | local NIC=$4
408 | local class_set=0
409 |
410 | for (( m=0; m<$size; m++ ))
411 | do
412 | if [[ ${all_classid[$m]} == $1 ]];
413 | then
414 | #echo "MATCH ${all_classid[$m]} == $1"
415 | #echo " | | | - " #>> /dev/null
416 | par=${all_classid[$m]%?}
417 | par=${par%?}
418 |
419 | if [[ $class_set == '0' ]]; then
420 | echo -e " | | | |_ $TC class add dev $NIC parent 1:$(printf %x $((2$par))) classid 1:$(printf %x $((2${all_classid[$m]}))) htb rate ${GREEN}${subdown}${NONE}Kbit ceil ${GREEN}${subceil}${NONE}Kbit buffer 1600"
421 | class_set=1
422 | fi
423 |
424 | echo -e " | | | |_ $IPT -A POSTROUTING -t mangle -o $NIC -s ${YELLOW}${all_ip[$m]}${NONE} -j CLASSIFY --set-class 1:$(printf %x $((2${all_classid[$m]})))"
425 |
426 | else
427 | echo "no match: ${all_classid[$m]}" >> /dev/null
428 | fi
429 | done
430 | }
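The class IDs built throughout these functions all follow one scheme: a direction prefix (1 for download, 2 for upload) is concatenated with the zero-padded group and sub-group indices, the result is read as one decimal number, and `printf %x` converts it to the hex form tc expects. A minimal sketch of that encoding (`make_classid` is a hypothetical helper, not part of this script):

```shell
#!/bin/bash
# Hypothetical sketch of the class-ID scheme: DIRECTION + GROUP(2 digits) + SUB(2 digits)
# concatenated as one decimal number, then hex-encoded the way the script feeds tc.
make_classid() {
    local dir=$1 group=$2 sub=$3
    printf '1:%x\n' "$((${dir}$(printf %02d "$group")$(printf %02d "$sub")))"
}

make_classid 1 3 12   # decimal 10312 -> prints 1:2848
make_classid 2 0 1    # decimal 20001 -> prints 1:4e21
```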
431 |
432 |
433 | ##########################################################################
434 | # execute commands of all IP addresses with certain download class ID #
435 | ##########################################################################
436 | function execute_command_with_classid_down {
437 | local size=${#all_ip[*]}
438 |
439 | local subdown=$2
440 | local subceil=$3
441 | local NIC=$4
442 | local class_set=0
443 |
444 | for (( m=0; m<$size; m++ ))
445 | do
446 | if [[ ${all_classid[$m]} == $1 ]];
447 | then
448 | par=${all_classid[$m]%?}
449 | par=${par%?}
450 |
451 | if [[ $class_set == '0' ]]; then
452 | echo "$TC class add dev $NIC parent 1:$(printf %x $((1$par))) classid 1:$(printf %x 1${all_classid[$m]}) htb rate ${subdown}Kbit ceil ${subceil}Kbit buffer 1600"
453 | $TC class add dev $NIC parent 1:$(printf %x $((1$par))) classid 1:$(printf %x 1${all_classid[$m]}) htb rate ${subdown}Kbit ceil ${subceil}Kbit buffer 1600
454 | class_set=1
455 | fi
456 | echo "$IPT -A POSTROUTING -t mangle -o $NIC -d ${all_ip[$m]} -j CLASSIFY --set-class 1:$(printf %x $((1${all_classid[$m]})))"
457 | $IPT -A POSTROUTING -t mangle -o $NIC -d "${all_ip[$m]}" -j CLASSIFY --set-class 1:$(printf %x $((1${all_classid[$m]})))
458 |
459 | else
460 | echo "no match: ${all_classid[$m]}" >> /dev/null
461 | fi
462 | done
463 | echo "END execute_command_with_classid_down"
464 | }
465 |
466 | #########################################################################
467 | # execute commands of all IP addresses with certain upload class ID    #
468 | #########################################################################
469 | function execute_command_with_classid_up {
470 | #echo "print_command_with_classid_up for ${#all_ip[*]} addresses"
471 |
472 | local size=${#all_ip[*]}
473 |
474 | local subdown=$2
475 | local subceil=$3
476 | local NIC=$4
477 | local class_set=0
478 |
479 | for (( m=0; m<$size; m++ ))
480 | do
481 | if [[ ${all_classid[$m]} == $1 ]];
482 | then
483 | #echo "MATCH ${all_classid[$m]} == $1"
484 | #echo " | | | - " #>> /dev/null
485 | par=${all_classid[$m]%?}
486 | par=${par%?}
487 |
488 | if [[ $class_set == '0' ]]; then
489 |       echo "$TC class add dev $NIC parent 1:$(printf %x $((2$par))) classid 1:$(printf %x 2${all_classid[$m]}) htb rate ${subdown}Kbit ceil ${subceil}Kbit buffer 1600"
490 | $TC class add dev $NIC parent 1:$(printf %x $((2$par))) classid 1:$(printf %x 2${all_classid[$m]}) htb rate ${subdown}Kbit ceil ${subceil}Kbit buffer 1600
491 | class_set=1
492 | fi
493 | echo "$IPT -A POSTROUTING -t mangle -o $NIC -s ${all_ip[$m]} -j CLASSIFY --set-class 1:$(printf %x 2${all_classid[$m]})"
494 | $IPT -A POSTROUTING -t mangle -o $NIC -s "${all_ip[$m]}" -j CLASSIFY --set-class 1:$(printf %x 2${all_classid[$m]})
495 |
496 | else
497 | echo "no match: ${all_classid[$m]}" >> /dev/null
498 | fi
499 | done
500 | }
501 |
502 |
503 | ######################################################################
504 | # Generate only tree visualization of leaves, don't create anything #
505 | ######################################################################
506 | function tc_generate_tree {
507 | local size=${#groups_index[*]}
508 |
509 | echo -e "${GREEN}=============================${NONE}"
510 | echo -e "${GREEN}= Shaper tree visualization =${NONE}"
511 | echo -e "${GREEN}=============================${NONE}"
512 | echo -e "interfaces:"
513 | echo -e "-LAN--->---${GREEN}[$EXT_IF]${NONE}--|| SERVER ||--${YELLOW}[$INT_IF]${NONE}--->---WAN-"
514 | echo -e ""
515 | echo -e "${GREEN}=============================${NONE}"
516 |
517 | #-- download ------------------------
518 | echo " |"
519 | echo "[root qdisc]"
520 | echo " |"
521 | echo " [1:1]------"
522 | echo " | |"
523 |
524 | for (( i=0; i<$size; i++ ))
525 | do
526 | pgridd=`inc_2_leadzeroes $i`
527 | echo " | |"
528 | echo " | |-[${groups_name[$i]}]- $(printf %x $((1${pgridd}))) ---------"
529 | echo " | | |"
530 |
531 | #process sub groups
532 | subsize=${groups_sub_count[$i]}
533 | for (( z=0; z<$subsize; z++ ))
534 | do
535 | sgridd=`inc_2_leadzeroes $z`
536 | echo " | | |"
537 | echo " | | |--sub $(printf %x $((1${pgridd}${sgridd}))) -"
538 | #echo "print_IP_only_with_classid ${pgridd}${sgridd}"
539 | print_IP_only_with_classid ${pgridd}${sgridd}
540 | done
541 | done
542 |
543 | #-- upload ------------------------
544 |
545 | echo " |"
546 | echo " [1:2]------"
547 | echo " | |"
548 |
549 | for (( i=0; i<$size; i++ ))
550 | do
551 | pgridu=`inc_2_leadzeroes $i`
552 | echo " | |"
553 | echo " | |-[${groups_name[$i]}]- $(printf %x $((2${pgridu}))) ---------"
554 | echo " | | |"
555 | #process sub groups
556 | subsize=${groups_sub_count[$i]}
557 | for (( z=0; z<$subsize; z++ ))
558 | do
559 | sgridu=`inc_2_leadzeroes $z`
560 | echo " | | |"
561 | echo " | | |--sub $(printf %x $((2${pgridu}${sgridu}))) -"
562 |
563 | print_IP_only_with_classid ${pgridu}${sgridu}
564 | done
565 | done
566 | }
567 |
568 | ##########################################################
569 | # Generate only printed tc commands, don't run anything #
570 | ##########################################################
571 | function tc_generate_fake_commands {
572 | local size=${#groups_index[*]}
573 |
574 | echo -e "${GREEN}=======================================${NONE}"
575 | echo -e "${GREEN}= Shaper command visualization NO RUN =${NONE}"
576 | echo -e "${GREEN}=======================================${NONE}"
577 | echo -e "interfaces:"
578 | echo -e "-LAN--->---${GREEN}[$EXT_IF]${NONE}--|| SERVER ||--${YELLOW}[$INT_IF]${NONE}--->---WAN-"
579 | echo -e ""
580 | echo -e "${GREEN}=============================${NONE}"
581 |
582 | #-- remove old qdisc
583 | echo " |"
584 | echo -e "${RED}[deleting root qdisc] ${NONE}"
585 | echo "-------------------------------------------"
586 | echo "$TC qdisc del dev $INT_IF root"
587 | echo "$TC qdisc del dev $EXT_IF root"
588 | echo "$TC qdisc del dev $INT_IF ingress"
589 | echo "$TC qdisc del dev $EXT_IF ingress"
590 | echo "-------------------------------------------"
591 |
592 | #-- root qdisc ----------------------
593 |
594 | totaldown=0
595 | totalup=0
596 |
597 | for (( i=0; i<$size; i++ ))
598 | do
599 |     totaldown=$(( totaldown + groups_sub_count[i] * groups_down[i] ))
600 | done
601 |
602 | for (( i=0; i<$size; i++ ))
603 | do
604 |     totalup=$(( totalup + groups_sub_count[i] * groups_up[i] ))
605 | done
606 | echo " |"
607 | echo -e "${YELLOW}==| TOTAL down: $(($((${totaldown}))/1024))Mbit up: $(($((${totalup}))/1024))Mbit |==${NONE}"
608 | echo " |"
609 | echo " |"
610 | echo "--------------------------------------------------------------------------------------------------------------"
611 | echo "$TC qdisc add dev $INT_IF root handle 1: htb default 1 r2q 10"
612 | echo "$TC qdisc add dev $EXT_IF root handle 1: htb default 1 r2q 10"
613 | echo "$TC class add dev $INT_IF parent 1: classid 1:1 htb rate ${totaldown}Kbit ceil ${totaldown}Kbit buffer 1600"
614 | echo "$TC class add dev $EXT_IF parent 1: classid 1:2 htb rate ${totalup}Kbit ceil ${totalup}Kbit buffer 1600 "
615 | echo "--------------------------------------------------------------------------------------------------------------"
616 |
617 | #-- download ------------------------
618 | echo " |"
619 | echo " 1:1------"
620 | echo " | |"
621 |
622 | for (( i=0; i<$size; i++ ))
623 | do
624 | pgridd=`inc_2_leadzeroes $i`
625 | echo " | |"
626 | echo -e " | |--------${GREEN}[${groups_name[$i]}]${NONE}----------"
627 | echo " | | |_$TC class add dev $EXT_IF parent 1:1 classid 1:$(printf %x $((1${pgridd}))) htb rate $(($((${groups_sub_count[$i]}))*$((${groups_down[$i]}))))Kbit ceil $(($((${groups_sub_count[$i]}))*$((${groups_down[$i]}))))Kbit buffer 1600"
628 | echo " | | |"
629 |
630 | #process sub groups
631 | subsize=${groups_sub_count[$i]}
632 | for (( z=0; z<$subsize; z++ ))
633 | do
634 | sgridd=`inc_2_leadzeroes $z`
635 | echo " | | |"
636 | echo " | | |--sub 1${pgridd}${sgridd} (hex $(printf %x $((1${pgridd}${sgridd})))) -"
637 | print_command_with_classid_down ${pgridd}${sgridd} $(($((${groups_down[$i]}))/$((${groups_aggr[$i]})))) ${groups_down[$i]} $EXT_IF
638 | done
639 | done
640 |
641 | #-- upload ------------------------
642 | echo " |"
643 | echo " 1:2------"
644 | echo " | |"
645 |
646 | for (( i=0; i<$size; i++ ))
647 | do
648 | pgridu=`inc_2_leadzeroes $i`
649 | echo " | |"
650 | echo -e " | |--------${GREEN}[${groups_name[$i]}]${NONE}----------"
651 |     echo "   |    |           |_$TC class add dev $INT_IF parent 1:2 classid 1:$(printf %x $((2${pgridu}))) htb rate $(($((${groups_sub_count[$i]}))*$((${groups_up[$i]}))))Kbit ceil $(($((${groups_sub_count[$i]}))*$((${groups_up[$i]}))))Kbit buffer 1600"
652 | echo " | | |"
653 | #process sub groups
654 | subsize=${groups_sub_count[$i]}
655 | for (( z=0; z<$subsize; z++ ))
656 | do
657 | sgridu=`inc_2_leadzeroes $z`
658 | echo " | | |"
659 | echo " | | |--sub $((2${pgridu}${sgridu})) hex $(printf %x $((2${pgridu}${sgridu})))-"
660 | print_command_with_classid_up ${pgridu}${sgridu} $(($((${groups_up[$i]}))/$((${groups_aggr[$i]})))) ${groups_up[$i]} $INT_IF
661 | done
662 | done
663 | }
664 |
665 |
666 | #########################################################
667 | # Print and execute the tc commands for real            #
668 | #########################################################
669 | function tc_execute_commands {
670 | local size=${#groups_index[*]}
671 |
672 | echo -e "${GREEN}=========================================${NONE}"
673 | echo -e "${GREEN}= Shaper command REAL RUN visualization =${NONE}"
674 | echo -e "${GREEN}=========================================${NONE}"
675 | echo -e "interfaces:"
676 | echo -e "-LAN--->---${GREEN}[$EXT_IF]${NONE}--|| SERVER ||--${YELLOW}[$INT_IF]${NONE}--->---WAN-"
677 | echo -e ""
678 | echo -e "${GREEN}=============================${NONE}"
679 |
680 | #-- remove old qdisc
681 | echo " |"
682 | echo -e "${RED}[deleting root qdisc] ${NONE}"
683 | echo "-------------------------------------------"
684 | echo "$TC qdisc del dev $INT_IF root"
685 | $TC qdisc del dev $INT_IF root 2>/dev/null
686 | echo "$TC qdisc del dev $EXT_IF root"
687 | $TC qdisc del dev $EXT_IF root 2>/dev/null
688 | echo "$TC qdisc del dev $INT_IF ingress"
689 | $TC qdisc del dev $INT_IF ingress 2>/dev/null
690 | echo "$TC qdisc del dev $EXT_IF ingress"
691 | $TC qdisc del dev $EXT_IF ingress 2>/dev/null
692 | echo "-------------------------------------------"
693 |
694 | #-- root qdisc ----------------------
695 |
696 | totaldown=0
697 | totalup=0
698 |
699 | for (( i=0; i<$size; i++ ))
700 | do
701 |     totaldown=$(( totaldown + groups_sub_count[i] * groups_down[i] ))
702 | done
703 | 
704 | for (( i=0; i<$size; i++ ))
705 | do
706 |     totalup=$(( totalup + groups_sub_count[i] * groups_up[i] ))
707 | done
708 | echo " |"
709 | echo -e "${YELLOW}==| TOTAL down: $(( totaldown / 1024 ))Mbit up: $(( totalup / 1024 ))Mbit |==${NONE}"
710 | echo " |"
711 | echo " |"
712 | echo "--------------------------------------------------------------------------------------------------------------"
713 | echo "$TC qdisc add dev $INT_IF root handle 1: htb default 1 r2q 10"
714 | $TC qdisc add dev $INT_IF root handle 1: htb default 1 r2q 10
715 | echo "$TC qdisc add dev $EXT_IF root handle 1: htb default 1 r2q 10"
716 | $TC qdisc add dev $EXT_IF root handle 1: htb default 1 r2q 10
717 |
718 | echo "$TC class add dev $INT_IF parent 1: classid 1:1 htb rate ${totaldown}Kbit ceil ${totaldown}Kbit buffer 1600"
719 | $TC class add dev $INT_IF parent 1: classid 1:1 htb rate ${totaldown}Kbit ceil ${totaldown}Kbit buffer 1600
720 | echo "$TC class add dev $EXT_IF parent 1: classid 1:2 htb rate ${totalup}Kbit ceil ${totalup}Kbit buffer 1600"
721 | $TC class add dev $EXT_IF parent 1: classid 1:2 htb rate ${totalup}Kbit ceil ${totalup}Kbit buffer 1600
722 | echo "--------------------------------------------------------------------------------------------------------------"
723 |
724 | #-- download ------------------------
725 |
726 | echo " |"
727 | echo " 1:1------"
728 | echo " | |"
729 |
730 | for (( i=0; i<$size; i++ ))
731 | do
732 | pgridd=`inc_2_leadzeroes $i`
733 | echo " | |"
734 | echo -e " | |--------${GREEN}[${groups_name[$i]}]${NONE}----------"
735 | echo " | | |_$TC class add dev $INT_IF parent 1:1 classid 1:$(printf %x $((1${pgridd}))) htb rate $(($((${groups_sub_count[$i]}))*$((${groups_down[$i]}))))Kbit ceil $(($((${groups_sub_count[$i]}))*$((${groups_down[$i]}))))Kbit buffer 1600"
736 | $TC class add dev $INT_IF parent 1:1 classid 1:$(printf %x $((1${pgridd}))) htb rate $(($((${groups_sub_count[$i]}))*$((${groups_down[$i]}))))Kbit ceil $(($((${groups_sub_count[$i]}))*$((${groups_down[$i]}))))Kbit buffer 1600
737 | echo " | | |"
738 |
739 | #process sub groups
740 | subsize=${groups_sub_count[$i]}
741 | for (( z=0; z<$subsize; z++ ))
742 | do
743 | sgridd=`inc_2_leadzeroes $z`
744 | echo " | | |"
745 | echo " | | |--sub 1${pgridd}${sgridd} (hex $(printf %x $((1${pgridd}${sgridd})))) -"
746 |
747 |
748 |
749 | execute_command_with_classid_down ${pgridd}${sgridd} $(($((${groups_down[$i]}))/$((${groups_aggr[$i]})))) ${groups_down[$i]} $INT_IF
750 | done
751 | done
752 |
753 | #-- upload ------------------------
754 |
755 | echo " |"
756 | echo " 1:2------"
757 | echo " | |"
758 |
759 | for (( i=0; i<$size; i++ ))
760 | do
761 | pgridu=`inc_2_leadzeroes $i`
762 | echo " | |"
763 | echo -e " | |--------${GREEN}[${groups_name[$i]}]${NONE}----------"
764 | echo "   |        |    |_$TC class add dev $EXT_IF parent 1:2 classid 1:$(printf %x $((2${pgridu}))) htb rate $(($((${groups_sub_count[$i]}))*$((${groups_up[$i]}))))Kbit ceil $(($((${groups_sub_count[$i]}))*$((${groups_up[$i]}))))Kbit buffer 1600"
765 | $TC class add dev $EXT_IF parent 1:2 classid 1:$(printf %x $((2${pgridu}))) htb rate $(($((${groups_sub_count[$i]}))*$((${groups_up[$i]}))))Kbit ceil $(($((${groups_sub_count[$i]}))*$((${groups_up[$i]}))))Kbit buffer 1600
766 | echo " | | |"
767 |
768 | #process sub groups
769 | subsize=${groups_sub_count[$i]}
770 | for (( z=0; z<$subsize; z++ ))
771 | do
772 | sgridu=`inc_2_leadzeroes $z`
773 | echo " | | |"
774 |         echo "   |        |    |--sub 2${pgridu}${sgridu} (hex $(printf %x $((2${pgridu}${sgridu})))) -"
775 |
776 |
777 |
778 | execute_command_with_classid_up ${pgridu}${sgridu} $(($((${groups_up[$i]}))/$((${groups_aggr[$i]})))) ${groups_up[$i]} $EXT_IF
779 | done
780 | done
781 | }
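The class ids used throughout these functions are formed by string-concatenating a direction prefix (1 = download, 2 = upload) with the zero-padded group and sub indexes, and the decimal result is then printed as hex for tc. A small sketch of that arithmetic (the sample indexes are made up):

```shell
# Hypothetical sample indexes: group 00, sub 03, download prefix 1.
pgridd=00
sgridd=03

# Concatenation happens before arithmetic evaluation, so no octal pitfalls.
echo "decimal classid: 1${pgridd}${sgridd}"             # decimal classid: 10003
printf 'hex classid: %x\n' "$((1${pgridd}${sgridd}))"   # hex classid: 2713
```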
782 |
783 |
784 | #######################################
785 | # print download commands into file #
786 | #######################################
787 | function execute2file_command_with_classid_down {
788 |   local size=${#all_ip[*]}
789 | 
790 |   local subdown=$2
791 |   local subceil=$3
792 |   local NIC=$4
793 |   local outfile=$5
794 |   local class_set=0
795 | 
796 |   for (( m=0; m<$size; m++ ))
797 |   do
798 |     if [[ ${all_classid[$m]} == $1 ]];
799 |     then
800 |       par=${all_classid[$m]%?}
801 |       par=${par%?}
802 | 
803 |       if [[ $class_set == '0' ]]; then
804 |         echo -e "$TC class add dev $NIC parent 1:$(printf %x $((1$par))) classid 1:$(printf %x 1${all_classid[$m]}) htb rate ${subdown}Kbit ceil ${subceil}Kbit buffer 1600" >> "$outfile"
805 |         class_set=1
806 |       fi
807 |       echo -e "$IPT -A POSTROUTING -t mangle -o $NIC -d ${all_ip[$m]} -j CLASSIFY --set-class 1:$(printf %x $((1${all_classid[$m]})))" >> "$outfile"
808 |     fi
809 |   done
810 |   echo "END execute2file_command_with_classid_down"
811 | }
814 |
815 | ####################################
816 | # print upload commands into file #
817 | ####################################
818 | function execute2file_command_with_classid_up {
819 | #echo "print_command_with_classid_up for ${#all_ip[*]} addresses"
820 |
821 | local size=${#all_ip[*]}
822 |
823 | local subdown=$2
824 | local subceil=$3
825 | local NIC=$4
826 | local ofile=$5
827 | local class_set=0
828 |
829 | echo "execute2file_command_with_classid_up GENERATE $ofile"
830 |   for (( m=0; m<$size; m++ ))
831 |   do
832 |     if [[ ${all_classid[$m]} == $1 ]];
833 |     then
834 |       par=${all_classid[$m]%?}
835 |       par=${par%?}
836 | 
837 |       if [[ $class_set == '0' ]]; then
838 |         echo -e "$TC class add dev $NIC parent 1:$(printf %x $((2$par))) classid 1:$(printf %x 2${all_classid[$m]}) htb rate ${subdown}Kbit ceil ${subceil}Kbit buffer 1600" >> "$ofile"
839 |         class_set=1
840 |       fi
841 |       echo -e "$IPT -A POSTROUTING -t mangle -o $NIC -d ${all_ip[$m]} -j CLASSIFY --set-class 1:$(printf %x 2${all_classid[$m]})" >> "$ofile"
842 |     fi
843 |   done
848 | }
849 |
850 | #################################################
851 | # Generate file with all tc/iptables commands #
852 | #################################################
853 | function generate_file {
854 |
855 | prepare_group_definitions
856 | count_clients
857 |
858 | local size=${#groups_index[*]}
859 | local file=$1
860 |
861 | echo "re-creating $file"
862 | rm -f "$file"
863 | touch "$file"
864 | chmod 755 "$file"
865 | echo "adding shebang to $file"
866 |
867 | echo -e "#!/bin/bash" >> "$file"
868 | echo -e "" >> "$file"
869 | echo -e "# generated by https://sourceforge.net/projects/bashtools/" >> "$file"
870 | echo -e "" >> "$file"
871 |
872 | echo -e "# deleting root qdisc" >> "$file"
873 | echo -e "$TC qdisc del dev $INT_IF root" >> "$file"
874 | echo -e "$TC qdisc del dev $EXT_IF root" >> "$file"
875 | echo -e "$TC qdisc del dev $INT_IF ingress" >> "$file"
876 | echo -e "$TC qdisc del dev $EXT_IF ingress" >> "$file"
877 | echo -e "" >> "$file"
878 |
879 | totaldown=0
880 | totalup=0
881 |
882 | for (( i=0; i<$size; i++ ))
883 | do
884 |     totaldown=$(( totaldown + groups_sub_count[i] * groups_down[i] ))
885 | done
886 | 
887 | for (( i=0; i<$size; i++ ))
888 | do
889 |     totalup=$(( totalup + groups_sub_count[i] * groups_up[i] ))
890 | done
891 |
892 | echo -e "adding ROOT"
893 | echo -e "# COMPUTED TRAFFIC ESTIMATE: total download $(( totaldown / 1024 ))Mbit upload $(( totalup / 1024 ))Mbit" >> "$file"
894 | echo -e "" >> "$file"
895 | echo -e "# create root" >> "$file"
896 | echo -e "$TC qdisc add dev $INT_IF root handle 1: htb default 1 r2q 10" >> "$file"
897 | echo -e "$TC qdisc add dev $EXT_IF root handle 1: htb default 1 r2q 10" >> "$file"
898 | echo -e "$TC class add dev $INT_IF parent 1: classid 1:1 htb rate ${totaldown}Kbit ceil ${totaldown}Kbit buffer 1600" >> "$file"
899 | echo -e "$TC class add dev $EXT_IF parent 1: classid 1:2 htb rate ${totalup}Kbit ceil ${totalup}Kbit buffer 1600" >> "$file"
900 |
901 |
902 |
903 | #-- download ------------------------
904 | echo "download leaves..."
905 | for (( i=0; i<$size; i++ ))
906 | do
907 | pgridd=`inc_2_leadzeroes $i`
908 |     echo -e "$TC class add dev $INT_IF parent 1:1 classid 1:$(printf %x $((1${pgridd}))) htb rate $(($((${groups_sub_count[$i]}))*$((${groups_down[$i]}))))Kbit ceil $(($((${groups_sub_count[$i]}))*$((${groups_down[$i]}))))Kbit buffer 1600" >> "$file"
909 |
910 | #process sub groups
911 | subsize=${groups_sub_count[$i]}
912 | for (( z=0; z<$subsize; z++ ))
913 | do
914 | sgridd=`inc_2_leadzeroes $z`
915 |         execute2file_command_with_classid_down ${pgridd}${sgridd} $(($((${groups_down[$i]}))/$((${groups_aggr[$i]})))) ${groups_down[$i]} $INT_IF "$file"
916 | done
917 | done
918 |
919 | #-- upload ------------------------
920 |
921 | for (( i=0; i<$size; i++ ))
922 | do
923 | pgridu=`inc_2_leadzeroes $i`
924 |     echo -e "$TC class add dev $EXT_IF parent 1:2 classid 1:$(printf %x $((2${pgridu}))) htb rate $(($((${groups_sub_count[$i]}))*$((${groups_up[$i]}))))Kbit ceil $(($((${groups_sub_count[$i]}))*$((${groups_up[$i]}))))Kbit buffer 1600" >> "$file"
925 |
926 | #process sub groups
927 | subsize=${groups_sub_count[$i]}
928 | for (( z=0; z<$subsize; z++ ))
929 | do
930 | sgridu=`inc_2_leadzeroes $z`
931 |         execute2file_command_with_classid_up ${pgridu}${sgridu} $(($((${groups_up[$i]}))/$((${groups_aggr[$i]})))) ${groups_up[$i]} $EXT_IF "$file"
932 | done
933 | done
934 | }
935 |
936 | #################################
937 | # print tree of shaper leaves #
938 | #################################
939 | function print_tree {
940 | #echo "############################################################################################"
941 |
942 | prepare_group_definitions
943 | count_clients
944 | tc_generate_tree
945 |
946 | #echo "############################################################################################"
947 | }
948 |
949 | ##############################################
950 | # print tree with commands of shaper leaves #
951 | ##############################################
952 | function print_tc_tree {
953 | #echo "############################################################################################"
954 |
955 | prepare_group_definitions
956 | count_clients
957 | tc_generate_fake_commands
958 |
959 | #echo "############################################################################################"
960 | }
961 |
962 | #########################
963 | # execute normal start #
964 | #########################
965 | function tc_start {
966 | tc_remove
967 | tc_show
968 | tc_print_counters
969 | echo "############################################################################################"
970 |
971 | ipt_load_modules
972 | prepare_group_definitions
973 | count_clients
974 | tc_execute_commands
975 |
976 | echo "############################################################################################"
977 | tc_print_counters
978 | tc_show
979 | }
980 |
981 |
982 | ###############
983 | # MAIN #
984 | ###############
985 | echo " "
986 | echo "----------------------------------------------------------------------------------------"
987 | echo "ATS aka automatic traffic shaper. https://github.com/peterducai/automatic_traffic_shaper"
988 | echo "----------------------------------------------------------------------------------------"
989 |
990 | case "$1" in
991 | start)
992 | tc_start
993 | ;;
994 | printtree) print_tree
995 | ;;
996 | printtc) print_tc_tree
997 | ;;
998 | generatetc) generate_file $2
999 | ;;
1000 | stop) tc_remove
1001 | ;;
1002 | restart) tc_remove
1003 | tc_start
1004 | ;;
1005 |
1006 | status)
1007 | echo -e "${GREEN}= IPSET =========================================${NONE}";
1008 | ipset -L;
1009 | echo -e "${GREEN}= IPTABLES ======================================${NONE}";
1010 | iptables -L -n -v --line-numbers;
1011 | tc_print_counters
1012 | ;;
1013 |
1014 | version)
1015 | echo "ATS aka automatic traffic shaper. https://github.com/peterducai/automatic_traffic_shaper"
1016 | ;;
1017 |
1018 | *)
1019 |     echo "usage: $0 (start|printtree|printtc|generatetc <file>|stop|restart|status|version)"
1020 | exit 1
1021 | esac
1022 |
1023 | exit $?
1024 |
--------------------------------------------------------------------------------
/build.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | buildah bud -t quay.io/peterducai/openshift-etcd-suite:latest .
4 | podman tag quay.io/peterducai/openshift-etcd-suite:latest quay.io/peterducai/openshift-etcd-suite:0.1.28
5 |
6 |
--------------------------------------------------------------------------------
/etcd.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | MUST_PATH=$1
4 | PLOT=$2
5 | STAMP=$(date +%Y-%m-%d_%H-%M-%S)
6 | REPORT_FOLDER="$HOME/ETCD-SUMMARY_$STAMP"
7 | mkdir -p $REPORT_FOLDER
8 | echo "created $REPORT_FOLDER"
9 |
10 | # TERMINAL COLORS -----------------------------------------------------------------
11 |
12 | NONE='\033[00m'
13 | RED='\033[01;31m'
14 | GREEN='\033[01;32m'
15 | YELLOW='\033[01;33m'
16 | BLACK='\033[30m'
17 | BLUE='\033[34m'
18 | VIOLET='\033[35m'
19 | CYAN='\033[36m'
20 | GREY='\033[37m'
21 |
22 | cd $MUST_PATH
23 | cd $(echo */)
24 | # ls
25 |
26 | if [ -z "$3" ]; then
27 | OCP_VERSION=$(cat cluster-scoped-resources/config.openshift.io/clusterversions.yaml |grep "Cluster version is"| grep -Po "(\d+\.)+\d+")
28 | else
29 | OCP_VERSION=$3
30 | fi
31 |
32 | if [ -z "$OCP_VERSION" ]; then
33 | echo -e "Cluster version is EMPTY! The script cannot run without a proper version!"
34 | echo -e "Run the script as: ./etcd.sh <must-gather-path> false 4.10  # replace 4.10 with your version"
35 | else
36 | echo -e "Cluster version is $OCP_VERSION"
37 | fi
38 | echo -e ""
39 |
40 | cd cluster-scoped-resources/core/nodes
41 | NODES_NUMBER=$(ls|wc -l)
42 | echo -e "There are $NODES_NUMBER nodes in cluster"
43 |
44 | cd ../persistentvolumes
45 | PV_NUMBER=$(ls|wc -l)
46 | echo -e "There are $PV_NUMBER PVs in cluster"
47 |
48 | cd ../nodes
49 |
50 | NODES=()
51 | MASTER=()
52 | INFRA=()
53 | WORKER=()
54 |
55 | help_etcd_objects() {
56 | echo -e ""
57 | echo -e "- Number of objects ---"
58 | echo -e ""
59 | echo -e "List number of objects in ETCD:"
60 | echo -e ""
61 | echo -e "$ oc project openshift-etcd"
62 | echo -e "oc get pods"
63 | echo -e "oc rsh etcd-ip-10-0-150-204.eu-central-1.compute.internal"
64 | echo -e "> etcdctl get / --prefix --keys-only | sed '/^$/d' | cut -d/ -f3 | sort | uniq -c | sort -rn"
65 | echo -e ""
66 | echo -e "[HINT] Any number of objects (secrets, deployments, etc.) above 8k could cause performance issues on storage without enough IOPS."
67 |
68 | echo -e ""
69 | echo -e "List secrets per namespace:"
70 | echo -e ""
71 | echo -e "> oc get secrets -A --no-headers | awk '{ns[\$1]++}END{for (i in ns) print i,ns[i]}'"
72 | echo -e ""
73 | echo -e "[HINT] Any namespace with 20+ secrets should be cleaned up (unless there's specific customer need for so many secrets)."
74 | echo -e ""
75 | }
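The secrets-per-namespace one-liner in the hint above can be exercised on sample input; the namespaces and secret names below are made up:

```shell
# Feed fake "oc get secrets -A --no-headers" output (first column is the
# namespace) to the counting awk from the hint above.
printf '%s\n' \
  'default        builder-token-abc' \
  'default        deployer-token-def' \
  'openshift-etcd etcd-all-certs' \
| awk '{ns[$1]++} END{for (i in ns) print i, ns[i]}' | sort
# prints:
#   default 2
#   openshift-etcd 1
```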
76 |
77 | help_etcd_troubleshoot() {
78 | echo -e ""
79 | echo -e "- Generic troubleshooting ---"
80 | echo -e ""
81 | echo -e "More details about troubleshooting ETCD can be found at https://access.redhat.com/articles/6271341"
82 | }
83 |
84 | help_etcd_metrics() {
85 | echo -e ""
86 | echo -e "- ETCD metrics ---"
87 | echo -e ""
88 | echo -e "How to collect ETCD metrics. https://access.redhat.com/solutions/5489721"
89 | }
90 |
91 | help_etcd_networking() {
92 | echo -e ""
93 | echo -e "- ETCD networking troubleshooting ---"
94 | echo -e ""
95 | echo -e "From the masters, check that there are no dropped packets or RX/TX errors on the main NIC."
96 | echo -e "> ip -s link show"
97 | echo -e ""
98 | echo -e "Also check latency against the API (expected value is 2-5 ms, i.e. 0.002-0.005 in the output):"
99 | echo -e "> curl -k https://api.<cluster-domain>.com -w \"%{time_connect}\""
100 | echo -e "Any higher latency could indicate a network bottleneck."
101 | }
102 |
103 | # help_etcd_objects
104 |
105 |
106 | for filename in *.yaml; do
107 | [ -e "$filename" ] || continue
108 | [ ! -z "$(cat $filename |grep node-role|grep -w 'node-role.kubernetes.io/master:')" ] && MASTER+=("${filename::-5}") && NODES+=("$filename [master]") || true
109 | done
110 |
111 | for filename in *.yaml; do
112 | [ -e "$filename" ] || continue
113 | [ ! -z "$(cat $filename |grep node-role|grep -w 'node-role.kubernetes.io/infra:')" ] && INFRA+=("${filename::-5}") && NODES+=("$filename [infra]") || true
114 | done
115 |
116 | for filename in *.yaml; do
117 | [ -e "$filename" ] || continue
118 | [ ! -z "$(cat $filename |grep node-role|grep -w 'node-role.kubernetes.io/worker:')" ] && WORKER+=("${filename::-5}") && NODES+=("$filename [worker]") || true
119 | done
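The three loops above all apply the same test: grep the node YAML for a `node-role.kubernetes.io/<role>:` label. A condensed sketch of that check on a single sample label line (the label text is typical must-gather content, reproduced here as an assumption):

```shell
# Hypothetical node label line, and the word-boundary grep the loops use.
label='    node-role.kubernetes.io/master: ""'
if [ -n "$(echo "$label" | grep -w 'node-role.kubernetes.io/master:')" ]; then
  echo "node is a master"
fi
```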
120 |
121 | echo -e ""
122 | # echo ${NODES[@]}
123 |
124 | echo -e "${#MASTER[@]} masters"
125 | if [ "${#MASTER[@]}" != "3" ]; then
126 | echo -e "[WARNING] only 3 masters are supported, you have ${#MASTER[@]}."
127 | fi
128 | printf "%s\n" "${MASTER[@]}"
129 | echo -e ""
130 | echo -e "${#INFRA[@]} infra nodes"
131 | printf "%s\n" "${INFRA[@]}"
132 | echo -e ""
133 | echo -e "${#WORKER[@]} worker nodes"
134 | printf "%s\n" "${WORKER[@]}"
135 |
136 | # for i in ${NODES[@]}; do echo $i; done
137 |
138 |
139 | cd $MUST_PATH
140 | cd $(echo */)
141 | cd namespaces/openshift-etcd/pods
142 | echo -e ""
143 | echo -e "[ETCD]"
144 |
145 | OVRL=0
146 | NTP=0
147 | HR=0
148 | TK=0
149 | LED=0
150 | SP=0
151 |
152 | gnuplot_render() {
153 | cat > $REPORT_FOLDER/etcd-$1.plg <<- EOM
154 | #! /usr/bin/gnuplot
155 |
156 | set terminal png
157 | set title '$3'
158 | set xlabel '$4'
159 | set ylabel '$5'
160 |
161 | set autoscale
162 | set xrange [1:$2]
163 | set yrange [1:800]
164 |
165 | # labels
166 | #set label "- GOOD" at 0, 100
167 | #set label "- BAD" at 0, 300
168 | #set label "- SUPER BAD" at 0, 500
169 |
170 | plot '$7' with lines
171 | EOM
172 |
173 | gnuplot $REPORT_FOLDER/etcd-$1.plg > $REPORT_FOLDER/$1$6.png
174 | }
175 |
176 | etcd_overload() {
177 | OVERLOAD=$(cat $1/etcd/etcd/logs/current.log|grep 'overload'|wc -l)
178 | LAST=$(cat $1/etcd/etcd/logs/current.log|grep 'overload'|tail -1)
179 | LOGEND=$(cat $1/etcd/etcd/logs/current.log|tail -1)
180 | if [ "$OVERLOAD" != "0" ]; then
181 | echo -e "${RED}[WARNING]${NONE} we found $OVERLOAD 'server is likely overloaded' messages in $1"
182 | echo -e "Last occurrence:"
183 | echo -e "$LAST"| cut -d " " -f1
184 | echo -e "Log ends at "
185 | echo -e "$LOGEND"| cut -d " " -f1
186 | echo -e ""
187 | OVRL=$(($OVRL+$OVERLOAD))
188 | # else
189 | # echo -e "${GREEN}[OK]${NONE} zero messages in $1"
190 | fi
191 | }
192 |
193 | etcd_took_too_long() {
194 | TOOKS_MS=()
195 | MS=$(cat $1/etcd/etcd/logs/current.log|grep 'took too long'|tail -1)
196 | [ -n "$MS" ] && echo "Last occurrence: $MS"
197 | TOOK=$(cat $1/etcd/etcd/logs/current.log|grep 'took too long'|wc -l)
198 | SUMMARY=$(cat $1/etcd/etcd/logs/current.log |awk -v min=999 '/took too long/ {t++} /context deadline exceeded/ {b++} /finished scheduled compaction/ {gsub("\"",""); sub("ms}",""); split($0,a,":"); if (a[12]<min) min=a[12]; if (a[12]>max) max=a[12]; avg+=a[12]; c++} END{printf "took too long: %d\ndeadline exceeded: %d\n",t,b; printf "compaction times:\n min: %d\n max: %d\n avg:%d\n",min,max,avg/c}'
199 | )
200 | if [ "$PLOT" = true ]; then
201 |     for lines in $(cat $1/etcd/etcd/logs/current.log|grep "took too long"|grep -ohE "took\":\"[0-9]+(.[0-9]+)ms"|cut -c8-);
202 | do
203 | TOOKS_MS+=("$lines");
204 | if [ "$lines" != "}" ]; then
205 | echo $lines >> $REPORT_FOLDER/$1-long.data
206 | fi
207 | done
208 | fi
209 | if [ "$PLOT" = true ]; then
210 | gnuplot_render $1 "${#TOOKS_MS[@]}" "took too long messages" "Sample number" "Took (ms)" "tooktoolong_graph" "$REPORT_FOLDER/$1-long.data"
211 | fi
212 | if [ "$TOOK" != "0" ]; then
213 | echo -e "${RED}[WARNING]${NONE} we found $TOOK took too long messages in $1"
214 | echo -e "$SUMMARY"
215 | TK=$(($TK+$TOOK))
216 | echo -e ""
217 | fi
218 | }
219 |
220 | etcd_ntp() {
221 | CLOCK=$(cat $1/etcd/etcd/logs/current.log|grep 'clock difference'|wc -l)
222 | LASTNTP=$(cat $1/etcd/etcd/logs/current.log|grep 'clock difference'|tail -1)
223 | LONGDRIFT=$(cat $1/etcd/etcd/logs/current.log|grep 'clock-drift'|wc -l)
224 | LASTLONGDRIFT=$(cat $1/etcd/etcd/logs/current.log|grep 'clock-drift'|tail -1)
225 | LOGENDNTP=$(cat $1/etcd/etcd/logs/current.log|tail -1)
226 | if [ "$CLOCK" != "0" ]; then
227 | echo -e "${RED}[WARNING]${NONE} we found $CLOCK ntp clock difference messages in $1"
228 | NTP=$(($NTP+$CLOCK))
229 | echo -e "Last occurrence:"
230 | echo -e "$LASTNTP"| cut -d " " -f1
231 | echo -e "Log ends at "
232 | echo -e "$LOGENDNTP"| cut -d " " -f1
233 | echo -e ""
234 | echo -e "Long drift: $LONGDRIFT"
235 | echo -e "Last long drift:"
236 | echo -e $LASTLONGDRIFT
237 | fi
238 | }
239 |
240 | etcd_heart() {
241 | HEART=$(cat $1/etcd/etcd/logs/current.log|grep 'failed to send out heartbeat on time'|wc -l)
242 | if [ "$HEART" != "0" ]; then
243 | echo -e "${RED}[WARNING]${NONE} we found $HEART failed to send out heartbeat on time messages in $1"
244 | HR=$(($HR+$HEART))
245 | fi
246 | }
247 |
248 | etcd_space() {
249 |   SPACE=$(cat $1/etcd/etcd/logs/current.log|grep 'database space exceeded'|wc -l)
250 | if [ "$SPACE" != "0" ]; then
251 | echo -e "${RED}[WARNING]${NONE} we found $SPACE 'database space exceeded' in $1"
252 | SP=$(($SP+$SPACE))
253 | fi
254 | }
255 |
256 | etcd_leader() {
257 |   LEADER=$(cat $1/etcd/etcd/logs/current.log|grep 'leader changed'|wc -l)
258 | if [ "$LEADER" != "0" ]; then
259 | echo -e "${RED}[WARNING]${NONE} we found $LEADER 'leader changed' in $1"
260 | LED=$(($LED+$LEADER))
261 | fi
262 | }
263 |
264 |
265 | etcd_compaction() {
266 | #WORKER+=("${filename::-5}")
267 | COMPACTIONS_MS=()
268 | COMPACTIONS_SEC=()
269 |
270 | echo -e "- $1"
271 | # echo -e ""
272 | case "${OCP_VERSION}" in
273 | 4.9*|4.8*|4.10*)
274 | echo "# compaction" > $REPORT_FOLDER/$1.data
275 | if [ "$PLOT" == true ]; then
276 | for lines in $(cat $1/etcd/etcd/logs/current.log|grep "compaction"| grep -v downgrade| grep -E "[0-9]+(.[0-9]+)ms"|grep -o '[^,]*$'| cut -d":" -f2|grep -oP '"\K[^"]+');
277 | do
278 | COMPACTIONS_MS+=("$lines");
279 | if [ "$lines" != "}" ]; then
280 | echo $lines >> $REPORT_FOLDER/$1-comp.data
281 | fi
282 | done
283 | gnuplot_render $1 "${#COMPACTIONS_MS[@]}" "ETCD compaction (ms)" "Sample number" "Compaction (ms)" "compaction_graph" "$REPORT_FOLDER/$1-comp.data"
284 | fi
285 |
286 | echo "found ${#COMPACTIONS_MS[@]} compaction entries"
287 | echo -e ""
288 |
289 | echo -e "[highest (seconds)]"
290 | cat $1/etcd/etcd/logs/current.log|grep "compaction"| grep -v downgrade| grep -E "[0-9]+(.[0-9]+)s"|grep -o '[^,]*$'| cut -d":" -f2|grep -oP '"\K[^"]+'|sort| tail -4
291 | echo -e ""
292 | echo -e "[highest (ms)]"
293 | cat $1/etcd/etcd/logs/current.log|grep "compaction"| grep -v downgrade| grep -E "[0-9]+(.[0-9]+)ms"|grep -o '[^,]*$'| cut -d":" -f2|grep -oP '"\K[^"]+'|sort| tail -4
294 | echo -e ""
295 | echo -e "last 5 compaction entries:"
296 | cat $1/etcd/etcd/logs/current.log|grep "compaction"| grep -v downgrade| grep -E "[0-9]+(.[0-9]+)ms"|grep -o '[^,]*$'| cut -d":" -f2|grep -oP '"\K[^"]+'|tail -5
297 | ;;
298 | 4.7*)
299 | echo -e "[highest seconds]"
300 | cat $1/etcd/etcd/logs/current.log | grep "compaction"| grep -E "[0-9]+(.[0-9]+)s"|cut -d " " -f13| cut -d ')' -f 1 |sort|tail -6
301 | echo -e ""
302 | echo -e "[highest ms]"
303 | cat $1/etcd/etcd/logs/current.log | grep "compaction"| grep -E "[0-9]+(.[0-9]+)ms"|cut -d " " -f13| cut -d ')' -f 1 |sort|tail -6
304 | ;;
305 | 4.6*)
306 | echo -e "[highest seconds]"
307 | cat $1/etcd/etcd/logs/current.log | grep "compaction"| grep -E "[0-9]+(.[0-9]+)s"|cut -d " " -f13| cut -d ')' -f 1 |sort|tail -6 #was f12, but doesnt work on some gathers
308 | echo -e ""
309 | echo -e "[highest ms]"
310 | cat $1/etcd/etcd/logs/current.log | grep "compaction"| grep -E "[0-9]+(.[0-9]+)ms"|cut -d " " -f13| cut -d ')' -f 1 |sort|tail -6 #was f12, but doesnt work on some gathers
311 | ;;
312 | *)
313 | echo -e "unknown version ${OCP_VERSION} !"
314 | ;;
315 | esac
316 | echo -e ""
317 | }
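`etcd_took_too_long` and `etcd_compaction` above pull millisecond durations out of JSON log lines with `grep -oE … | cut` pipelines. A reduced sketch of that extraction on a single fabricated log line (the line is made up, roughly the shape the script parses):

```shell
# A fabricated etcd log line with a "took" duration field.
line='{"level":"warn","msg":"apply request took too long","took":"231.4ms"}'

# Match the "took":"<number>ms fragment, then strip the 8-character
# "took":" prefix to leave just the duration.
echo "$line" | grep -oE '"took":"[0-9]+(\.[0-9]+)?ms' | cut -c9-   # 231.4ms
```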
318 |
319 |
320 |
321 | # MAIN FUNCS
322 |
323 | overload_solution() {
324 |     echo -e "SOLUTION: Review ETCD and CPU metrics, as this could be caused by a CPU bottleneck or a slow disk."
325 | echo -e ""
326 | }
327 |
328 |
329 | overload_check() {
330 | echo -e ""
331 | echo -e "[OVERLOADED MESSAGES]"
332 | echo -e ""
333 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
334 | etcd_overload $member
335 | done
336 |     echo -e "Found $OVRL 'server is likely overloaded' messages in total."
337 | echo -e ""
338 | if [[ $OVRL -ne "0" ]];then
339 | overload_solution
340 | fi
341 | }
342 |
343 | tooklong_solution() {
344 | echo -e ""
345 |     echo -e "SOLUTION: Even with a slow mechanical disk or a virtualized network disk, applying a request should normally take less than 50 milliseconds (and around 5ms on a fast SSD/NVMe disk)."
346 | echo -e ""
347 | }
348 |
349 | tooklong_check() {
350 | echo -e ""
351 | echo -e "[TOOK TOO LONG MESSAGES]"
352 | echo -e ""
353 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
354 | etcd_took_too_long $member
355 | done
356 | echo -e ""
357 | if [[ $TK -eq "0" ]];then
358 | echo -e "Found zero 'took too long' messages. OK"
359 | else
360 |         echo -e "Found $TK 'took too long' messages in total."
361 | fi
362 | if [[ $TK -ne "0" ]];then
363 | tooklong_solution
364 | fi
365 | }
366 |
367 |
368 |
369 | ntp_solution() {
370 | echo -e ""
371 |     echo -e "SOLUTION: When clocks are out of sync with each other, they cause I/O timeouts and failing liveness probes, which makes the ETCD pod restart frequently. Check that Chrony is enabled, running, and in sync with:"
372 | echo -e " - chronyc sources"
373 | echo -e " - chronyc tracking"
374 | echo -e ""
375 | }
376 |
377 | ntp_check() {
378 | echo -e "[NTP MESSAGES]"
379 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
380 | etcd_ntp $member
381 | done
382 | echo -e ""
383 | if [[ $NTP -eq "0" ]];then
384 |         echo -e "Found zero NTP out-of-sync messages. OK"
385 |     else
386 |         echo -e "Found $NTP NTP out-of-sync messages in total."
387 | fi
388 | echo -e ""
389 | if [[ $NTP -ne "0" ]];then
390 | ntp_solution
391 | fi
392 | }
393 |
394 | heart_solution() {
395 | echo -e ""
396 |     echo -e "SOLUTION: Usually this issue is caused by a slow disk. The disk could be experiencing contention between ETCD and other applications, or the disk is simply too slow."
397 | echo -e ""
398 | }
399 |
400 | heart_check() {
401 | # echo -e ""
402 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
403 | etcd_heart $member
404 | done
405 | echo -e ""
406 |     if [[ $HR -eq "0" ]];then
407 |         echo -e "Found zero 'failed to send out heartbeat on time' messages. OK"
408 |     else
409 |         echo -e "Found $HR 'failed to send out heartbeat on time' messages in total."
410 |     fi
411 |     echo -e ""
412 |     if [[ $HR -ne "0" ]];then
413 |         heart_solution
414 |     fi
415 | }
416 |
417 | space_solution() {
418 | echo -e ""
419 | echo -e "SOLUTION: Defragment and clean up ETCD, remove unused secrets or deployments."
420 | echo -e ""
421 | }
422 |
423 | space_check() {
424 | echo -e "[SPACE EXCEEDED MESSAGES]"
425 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
426 | etcd_space $member
427 | done
428 | echo -e ""
429 |     if [[ $SP -eq "0" ]];then
430 |         echo -e "Found zero 'database space exceeded' messages. OK"
431 |     else
432 |         echo -e "Found $SP 'database space exceeded' messages in total."
433 |     fi
434 |     echo -e ""
435 |     if [[ $SP -ne "0" ]];then
436 |         space_solution
437 |     fi
438 | }
439 |
440 |
441 | leader_solution() {
442 | echo -e ""
443 | echo -e "SOLUTION: Defragment and clean up ETCD. Also consider faster storage."
444 | echo -e ""
445 | }
446 |
447 | leader_check() {
448 | echo -e "[LEADER CHANGED MESSAGES]"
449 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
450 | etcd_leader $member
451 | done
452 | echo -e ""
453 | if [[ $LED -eq "0" ]];then
454 | echo -e "Found zero 'leader changed' messages. OK"
455 | else
456 |         echo -e "Found $LED 'leader changed' messages in total."
457 | fi
458 | if [[ $LED -ne "0" ]];then
459 | leader_solution
460 | fi
461 | }
462 |
463 | compaction_check() {
464 | echo -e ""
465 | echo -e "[COMPACTION]"
466 |     echo -e "Compaction should ideally take below 100ms (and below 10ms on fast SSD/NVMe)."
467 |     echo -e "Anything above 300ms could mean serious performance issues (including problems with 'oc login')."
468 | echo -e ""
469 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
470 | etcd_compaction $member
471 | done
472 | echo -e ""
473 | # echo -e " Found together $LED 'leader changed' messages."
474 | # if [[ $LED -ne "0" ]];then
475 | # leader_solution
476 | # fi
477 | }
478 |
479 | # timed out waiting for read index response (local node might have slow network)
480 |
481 | compaction_check
482 | overload_check
483 | tooklong_check
484 | ntp_check
485 | # heart_check
486 | space_check
487 | leader_check
488 |
489 |
490 | echo -e ""
491 | echo -e "[NETWORKING]"
492 | cd ../../../cluster-scoped-resources/network.openshift.io/clusternetworks/
493 | cat default.yaml |grep CIDR
494 | cat default.yaml | grep serviceNetwork
495 |
496 | echo -e ""
497 | echo -e "ADDITIONAL HELP:"
498 | help_etcd_troubleshoot
499 | help_etcd_metrics
500 | help_etcd_networking
501 | help_etcd_objects
502 |
--------------------------------------------------------------------------------
/fio_suite.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | echo -e "FIO SUITE version 0.1.27"
4 | echo -e " "
5 | echo -e "WARNING: this test can run for several minutes without any progress! Please wait until it finishes!"
6 | echo -e " "
7 |
8 | cd /test
9 |
10 | STAMP=$(date +%Y-%m-%d_%H-%M-%S)
11 | REPORT_FOLDER="$HOME/ETCD-SUMMARY_$STAMP"
12 | FSYNC_THRESHOLD=10000
13 |
14 | # if [ -z "$(rpm -qa | grep fio)" ]
15 | # then
16 | # echo "sudo dnf install fio -y"
17 | # else
18 | # echo "fio is installed.. OK"
19 | # fi
20 |
21 |
22 |
23 | echo -e ""
24 | echo -e "[ RANDOM IOPS TEST ]"
25 | echo -e ""
26 | echo -e "[ RANDOM IOPS TEST - REQUEST OVERHEAD AND SEEK TIMES ] ---"
27 | echo -e "This job is a latency-sensitive workload that stresses per-request overhead and seek times. Random reads."
28 | echo -e " "
29 |
30 | fio --name=seek1g --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --iodepth=4 --readwrite=randread --blocksize=4k --size=1G > rand_1G_d1.log
31 | #cat rand_1G_d1.log
32 | echo -e ""
33 | overhead_big=$(cat rand_1G_d1.log |grep IOPS|tail -1)
34 | FSYNC=$(cat rand_1G_d1.log |grep "99.00th"|tail -1|cut -c17-|grep -oE "([0-9]+)]" -m1|cut -d ']' -f 1|head -1)
35 | IOPS=$(cat rand_1G_d1.log |grep IOPS|tail -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev)
36 | if [[ "$IOPS" == *"k" ]]; then
37 | IOPS=$(echo $IOPS|rev|cut -c2-|rev)
38 | xIO=${IOPS%%.*}
39 | IOPS=$(( $xIO * 1000 ))
40 | #IOPS=$(($((${IOPS%%.*}))*1000))
41 | fi
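The suffix handling above (fio reports e.g. `IOPS=12.3k`) is repeated verbatim for every test in the suite. A compact equivalent as a helper function — hypothetical, not part of fio_suite.sh:

```shell
# Hypothetical helper: normalize fio's IOPS token to a plain integer,
# as the cut/rev pipeline above does. "12.3k" -> 12300, "850" -> 850.
to_iops() {
  echo "$1" | awk '{v=$1; if (v ~ /k$/) { sub(/k$/, "", v); v *= 1000 } printf "%d\n", v}'
}

to_iops "12.3k"   # 12300
to_iops "850"     # 850
```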
42 |
43 | echo -e "1GB file transfer:"
44 | echo -e "$overhead_big"
45 | echo -e "--------------------------"
46 | echo -e "RANDOM IOPS: $IOPS"
47 | echo -e "--------------------------"
48 | # rm fiotest
49 | #rm test*
50 | #rm rand_1G_d1.log
51 |
52 | echo -e ""
53 | /usr/bin/fio --name=seek1mb --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --iodepth=4 --readwrite=randread --blocksize=4k --size=200M > rand_200M_d1.log
54 | overhead_small=$(cat rand_200M_d1.log |grep IOPS|tail -1)
55 | FSYNC=$(cat rand_200M_d1.log |grep "99.00th"|tail -1|cut -c17-|grep -oE "([0-9]+)]" -m1|cut -d ']' -f 1|head -1)
56 | IOPS=$(cat rand_200M_d1.log |grep IOPS|tail -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev)
57 | if [[ "$IOPS" == *"k" ]]; then
58 | IOPS=${IOPS%k}  # strip the trailing "k"
59 | xIO=${IOPS%%.*}
60 | IOPS=$(( $xIO * 1000 ))
61 | fi
62 |
63 | echo -e "200MB file transfer:"
64 | echo -e "$overhead_small"
65 | echo -e "--------------------------"
66 | echo -e "RANDOM IOPS: $IOPS"
67 | echo -e "--------------------------"
68 | #rm rand_200M_d1.log
69 | # rm fiotest
70 | #rm test*
71 |
72 |
73 | # echo -e " "
74 | echo -e ""
75 | echo -e "[ SEQUENTIAL IOPS TEST ]"
76 | echo -e ""
77 |
78 | echo -e "[ SEQUENTIAL IOPS TEST ] - [ ETCD-like FSYNC WRITE with fsync engine ]"
79 | echo -e ""
80 | echo -e "the 99th percentile of this metric should be less than 10ms"
81 | mkdir -p test-data
82 | /usr/bin/fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=cleanfsynctest > cleanfsynctest.log
83 | FSYNC=$(cat cleanfsynctest.log |grep "99.00th"|tail -1|cut -c17-|grep -oE "([0-9]+)]" -m1|cut -d ']' -f 1|head -1)
84 | echo -e ""
85 | cat cleanfsynctest.log
86 | echo -e ""
87 | IOPS=$(cat cleanfsynctest.log |grep IOPS|tail -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev)
88 | if [[ "$IOPS" == *"k" ]]; then
89 | IOPS=${IOPS%k}  # strip the trailing "k"
90 | xIO=${IOPS%%.*}
91 | IOPS=$(( $xIO * 1000 ))
92 | #IOPS=$(($((${IOPS%%.*}))*1000))
93 | fi
94 |
95 | echo -e "--------------------------"
96 | echo -e "SEQUENTIAL IOPS: $IOPS"
97 | if (( ${FSYNC:-0} > FSYNC_THRESHOLD )); then
98 | echo -e "BAD.. 99th percentile fsync latency is higher than 10ms (10000 usec): $FSYNC"
99 | else
100 | echo -e "OK.. 99th percentile fsync latency is less than 10ms (10000 usec): $FSYNC"
101 | fi
102 | echo -e "--------------------------"
103 | echo -e ""
104 | rm -rf test-data
105 | #rm cleanfsynctest
106 |
107 |
108 | echo -e "[ SEQUENTIAL IOPS TEST ] - [ libaio engine SINGLE JOB, 70% read, 30% write]"
109 | echo -e " "
110 |
111 | /usr/bin/fio --name=seqread1g --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --readwrite=rw --rwmixread=70 --rwmixwrite=30 --iodepth=1 --blocksize=4k --size=1G --percentage_random=0 > r70_w30_1G_d4.log
112 | s7030big=$(cat r70_w30_1G_d4.log |grep IOPS|tail -2)
113 | FSYNC=$(cat r70_w30_1G_d4.log |grep "99.00th"|tail -1|cut -c17-|grep -oE "([0-9]+)]" -m1|cut -d ']' -f 1|head -1)
114 | wIOPS=$(cat r70_w30_1G_d4.log |grep IOPS|tail -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev|cut -c6-)
115 | rIOPS=$(cat r70_w30_1G_d4.log |grep IOPS|head -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev|cut -c6-)
116 | if [[ "$rIOPS" == *"k" ]]; then
117 | rIOPS=${rIOPS%k}  # strip "k" from the variable actually converted below
118 | xIO=${rIOPS%%.*}
119 | rIOPS=$(( $xIO * 1000 ))
120 | fi
121 | if [[ "$wIOPS" == *"k" ]]; then
122 | wIOPS=${wIOPS%k}  # strip "k" from the variable actually converted below
123 | xIO=${wIOPS%%.*}
124 | wIOPS=$(( $xIO * 1000 ))
125 | fi
126 |
127 | echo -e "--------------------------"
128 | echo -e "1GB file transfer:"
129 | echo -e "$s7030big"
130 | echo -e "SEQUENTIAL WRITE IOPS: $wIOPS"
131 | echo -e "SEQUENTIAL READ IOPS: $rIOPS"
132 | echo -e "--------------------------"
133 | #rm r70_w30_1G_d4.log
134 | rm fiotest
135 | # rm read*
136 |
137 | /usr/bin/fio --name=seqread1mb --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --readwrite=rw --rwmixread=70 --rwmixwrite=30 --iodepth=1 --blocksize=4k --size=200M > r70_w30_200M_d4.log
138 | s7030small=$(cat r70_w30_200M_d4.log |grep IOPS|tail -2)
139 | FSYNC=$(cat r70_w30_200M_d4.log |grep "99.00th"|tail -1|cut -c17-|grep -oE "([0-9]+)]" -m1|cut -d ']' -f 1|head -1)
140 | wIOPS=$(cat r70_w30_200M_d4.log |grep IOPS|tail -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev|cut -c6-)
141 | rIOPS=$(cat r70_w30_200M_d4.log |grep IOPS|head -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev|cut -c6-)
142 | if [[ "$rIOPS" == *"k" ]]; then
143 | rIOPS=${rIOPS%k}  # strip "k" from the variable actually converted below
144 | xIO=${rIOPS%%.*}
145 | rIOPS=$(( $xIO * 1000 ))
146 | fi
147 | if [[ "$wIOPS" == *"k" ]]; then
148 | wIOPS=${wIOPS%k}  # strip "k" from the variable actually converted below
149 | xIO=${wIOPS%%.*}
150 | wIOPS=$(( $xIO * 1000 ))
151 | fi
152 |
153 | echo -e "--------------------------"
154 | echo -e "200MB file transfer:"
155 | echo -e "$s7030small"
156 | echo -e "SEQUENTIAL WRITE IOPS: $wIOPS"
157 | echo -e "SEQUENTIAL READ IOPS: $rIOPS"
158 | echo -e "--------------------------"
159 | #rm r70_w30_200M_d4.log
160 | rm fiotest
161 | # rm read*
162 |
163 | echo -e " "
164 | echo -e "-- [ libaio engine SINGLE JOB, 30% read, 70% write] --"
165 | echo -e " "
166 |
167 | /usr/bin/fio --name=seqwrite200m --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --readwrite=rw --rwmixread=30 --rwmixwrite=70 --iodepth=1 --blocksize=4k --size=200M > r30_w70_200M_d1.log
168 | so7030big=$(cat r30_w70_200M_d1.log |grep IOPS|tail -2)
169 | FSYNC=$(cat r30_w70_200M_d1.log |grep "99.00th"|tail -1|cut -c17-|grep -oE "([0-9]+)]" -m1|cut -d ']' -f 1|head -1)
170 | wIOPS=$(cat r30_w70_200M_d1.log |grep IOPS|tail -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev|cut -c6-)
171 | rIOPS=$(cat r30_w70_200M_d1.log |grep IOPS|head -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev|cut -c6-)
172 | if [[ "$rIOPS" == *"k" ]]; then
173 | rIOPS=${rIOPS%k}  # strip "k" from the variable actually converted below
174 | xIO=${rIOPS%%.*}
175 | rIOPS=$(( $xIO * 1000 ))
176 | fi
177 | if [[ "$wIOPS" == *"k" ]]; then
178 | wIOPS=${wIOPS%k}  # strip "k" from the variable actually converted below
179 | xIO=${wIOPS%%.*}
180 | wIOPS=$(( $xIO * 1000 ))
181 | fi
182 |
183 | echo -e "--------------------------"
184 | echo -e "200MB file transfer:"
185 | echo -e "$so7030big"
186 | echo -e "SEQUENTIAL WRITE IOPS: $wIOPS"
187 | echo -e "SEQUENTIAL READ IOPS: $rIOPS"
188 | echo -e "--------------------------"
189 | #rm r30_w70_200M_d1.log
190 | rm fiotest
191 | # rm read*
192 |
193 | echo -e " "
194 | /usr/bin/fio --name=seqwrite1g --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --readwrite=rw --rwmixread=30 --rwmixwrite=70 --iodepth=1 --blocksize=4k --size=1G > r30_w70_1G_d1.log
195 | so7030small=$(cat r30_w70_1G_d1.log |grep IOPS|tail -2)
196 | FSYNC=$(cat r30_w70_1G_d1.log |grep "99.00th"|tail -1|cut -c17-|grep -oE "([0-9]+)]" -m1|cut -d ']' -f 1|head -1)
197 | wIOPS=$(cat r30_w70_1G_d1.log |grep IOPS|tail -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev|cut -c6-)
198 | rIOPS=$(cat r30_w70_1G_d1.log |grep IOPS|head -1| cut -d ' ' -f2-|cut -d ' ' -f3|rev|cut -c2-|rev|cut -c6-)
199 | if [[ "$rIOPS" == *"k" ]]; then
200 | rIOPS=${rIOPS%k}  # strip "k" from the variable actually converted below
201 | xIO=${rIOPS%%.*}
202 | rIOPS=$(( $xIO * 1000 ))
203 | fi
204 | if [[ "$wIOPS" == *"k" ]]; then
205 | wIOPS=${wIOPS%k}  # strip "k" from the variable actually converted below
206 | xIO=${wIOPS%%.*}
207 | wIOPS=$(( $xIO * 1000 ))
208 | fi
209 |
210 | echo -e "--------------------------"
211 | echo -e "1GB file transfer:"
212 | echo -e "$so7030small"
213 | echo -e "SEQUENTIAL WRITE IOPS: $wIOPS"
214 | echo -e "SEQUENTIAL READ IOPS: $rIOPS"
215 | echo -e "--------------------------"
216 | #rm r30_w70_1G_d1.log
217 | rm fiotest
218 | # rm read*
219 |
220 | # echo -e " "
221 | # echo -e "-- [ libaio engine 8 PARALLEL JOBS, 70% read, 30% write] ----"
222 | # echo -e " "
223 |
224 | # /usr/bin/fio --name=seqparread1g8 --filename=fiotest --runtime=120 --bs=2k --ioengine=libaio --direct=1 --ramp_time=10 --numjobs=8 --readwrite=rw --rwmixread=70 --rwmixwrite=30 --iodepth=1 --blocksize=4k --size=1G --percentage_random=0 > r70_w30_1G_d4.log
225 | # s7030big=$(cat r70_w30_1G_d4.log |grep IOPS|tail -1)
226 | # echo -e "1GB file:"
227 | # echo -e "$s7030big"
228 | # rm r70_w30_1G_d4.log
229 | # rm fiotest
230 | # # rm read*
231 |
232 | # echo -e " "
233 | # /usr/bin/fio --name=seqparread1mb8 --filename=fiotest --runtime=120 --bs=2k --ioengine=libaio --direct=1 --ramp_time=10 --numjobs=8 --readwrite=rw --rwmixread=70 --rwmixwrite=30 --iodepth=1 --blocksize=4k --size=200M > r70_w30_200M_d4.log
234 | # s7030small=$(cat r70_w30_200M_d4.log |grep IOPS|tail -1)
235 | # echo -e "200MB file:"
236 | # echo -e "$s7030small"
237 | # rm r70_w30_200M_d4.log
238 | # rm fiotest
239 | # # rm read*
240 |
241 | # echo -e " "
242 | # echo -e "-- [ libaio engine 8 PARALLEL JOBS, 30% read, 70% write] ----"
243 | # echo -e " "
244 |
245 | # /usr/bin/fio --name=seqparwrite1g8 --filename=fiotest --runtime=120 --bs=2k --ioengine=libaio --direct=1 --ramp_time=10 --numjobs=8 --readwrite=rw --rwmixread=30 --rwmixwrite=70 --iodepth=1 --blocksize=4k --size=200M > r30_w70_200M_d1.log
246 | # so7030big=$(cat r30_w70_200M_d1.log |grep IOPS|tail -1)
247 | # echo -e "1GB file:"
248 | # echo -e "$so7030big"
249 | # rm r30_w70_200M_d1.log
250 | # rm fiotest
251 | # # rm read*
252 | # echo -e " "
253 |
254 | # /usr/bin/fio --name=seqparwrite1mb8 --filename=fiotest --runtime=120 --bs=2k --ioengine=libaio --direct=1 --ramp_time=10 --numjobs=8 --readwrite=rw --rwmixread=30 --rwmixwrite=70 --iodepth=1 --blocksize=4k --size=1G > r30_w70_1G_d1.log
255 | # so7030small=$(cat r30_w70_1G_d1.log |grep IOPS|tail -1)
256 | # echo -e "200MB file:"
257 | # echo -e "$so7030small"
258 | # rm r30_w70_1G_d1.log
259 | # rm fiotest
260 | # # rm read*
261 |
262 | echo -e " "
263 | echo -e "- END -----------------------------------------"
--------------------------------------------------------------------------------
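The `rev|cut` pipelines in fio_suite.sh convert fio's abbreviated IOPS counts (e.g. `12.3k`) by stripping characters positionally, which breaks as soon as the field format shifts. As a minimal sketch, a reusable converter could replace them; the `to_iops` name is an assumption here, not part of the suite:

```shell
# Normalize fio's abbreviated IOPS strings to plain integers.
# "12.3k" -> 12300, "950" -> 950; a trailing comma from fio's
# "IOPS=12.3k," field is tolerated.
to_iops() {
    local v=${1%,}              # drop a trailing comma, if any
    if [[ "$v" == *k ]]; then
        v=${v%k}
        # keep the decimal part when scaling: 12.3k -> 12300
        awk -v n="$v" 'BEGIN { printf "%d", n * 1000 }'
    else
        printf '%d' "${v%%.*}"
    fi
}

to_iops "12.3k"; echo    # 12300
```

Usage would be e.g. `IOPS=$(to_iops "$raw")` after grepping the IOPS field out of the fio log.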
/fio_suite2.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | echo -e "FIO SUITE version 0.1"
4 | echo -e " "
5 | echo -e "WARNING: this test will run for several minutes without any progress! Please wait until it finishes!"
6 | echo -e " "
7 |
8 | # to make sure that each write fio issues is followed by an fdatasync, use --fdatasync=1.
9 |
10 | echo -e "[FSYNC WRITE SEQUENTIALLY]"
11 | echo -e ""
12 | echo -e "the 99th percentile of this metric should be less than 10ms"
13 | mkdir -p test-data
14 | /usr/bin/fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=cleanfsynctest > cleanfsynctest.log
15 | FSYNC=$(cat cleanfsynctest.log |grep "99.00th"|tail -1|cut -c17-|grep -oE "([0-9]+)]" -m1|cut -d ']' -f 1|head -1)
16 | echo -e ""
17 | cat cleanfsynctest.log
18 | echo -e ""
19 | if (( ${FSYNC:-0} > 10000 )); then
20 | echo -e "BAD.. 99th percentile fsync latency is higher than 10ms. $FSYNC"
21 | else
22 | echo -e "OK.. 99th percentile fsync latency is less than 10ms. $FSYNC"
23 | fi
24 | rm -rf test-data
25 |
26 |
27 |
28 | # fio --name=seek1g --filename=fiotest --runtime=120 --ioengine=sync --direct=1 --ramp_time=2 --iodepth=1 --readwrite=write --blocksize=4k --size=1G --bs=2k
29 |
30 |
31 |
32 | # echo -e "- [MAX CONCURRENT READ] ---"
33 | # echo -e "This job is a read-heavy workload with lots of parallelism that is likely to show off the device's best throughput:"
34 | # echo -e " "
35 |
36 | # /usr/bin/fio --size=1G --name=maxoneg --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --name=read1 --iodepth=16 --readwrite=randread --numjobs=16 --blocksize=64k --offset_increment=128m > best_1G_d4.log
37 | # best_large=$(cat best_1G_d4.log |grep IOPS|tail -1)
38 | # echo -e "$best_large"
39 | # rm fiotest
40 | # rm best_1G_d4.log
41 |
42 |
43 | # /usr/bin/fio --size=200M --name=maxonemb --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --name=read1 --iodepth=16 --readwrite=randread --numjobs=16 --blocksize=64k --offset_increment=128m > best_200M_d4.log
44 | # best_small=$(cat best_200M_d4.log |grep IOPS|tail -1)
45 | # echo -e "$best_small"
46 | # rm best_200M_d4.log
47 |
48 |
49 |
50 | # echo -e "- [REQUEST OVERHEAD AND SEEK TIMES] ---"
51 | # echo -e "This job is a latency-sensitive workload that stresses per-request overhead and seek times. Random reads."
52 | # echo -e " "
53 |
54 | # fio --name=seek1g --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --iodepth=4 --readwrite=randread --blocksize=4k --size=1G > rand_1G_d1.log
55 | # overhead_big=$(cat rand_1G_d1.log |grep IOPS|tail -1)
56 | # echo -e "$overhead_big"
57 | # rm rand_1G_d1.log
58 |
59 | # /usr/bin/fio --name=seek1mb --filename=fiotest --runtime=120 --ioengine=libaio --direct=1 --ramp_time=10 --iodepth=4 --readwrite=randread --blocksize=4k --size=200M > rand_200M_d1.log
60 | # overhead_small=$(cat rand_200M_d1.log |grep IOPS|tail -1)
61 | # echo -e "$overhead_small"
62 | # rm rand_200M_d1.log
63 |
64 | # echo -e " "
65 | # echo -e "- [SEQUENTIAL IOPS UNDER DIFFERENT READ/WRITE LOAD] ---"
66 | # echo -e " "
67 |
68 | # echo -e "-- [ SINGLE JOB, 70% read, 30% write] --"
69 | # echo -e " "
70 |
71 | # /usr/bin/fio --name=seqread1g --filename=fiotest --runtime=120 --ioengine=sync --fdatasync=1 --direct=1 --bs=2k --readwrite=rw --rwmixread=70 --rwmixwrite=30 --iodepth=1 --blocksize=4k --size=1G --percentage_random=0 > r70_w30_1G_d4.log
72 | # s7030big=$(cat r70_w30_1G_d4.log |grep IOPS|tail -1)
73 | # echo -e "$s7030big"
74 | # cat r70_w30_1G_d4.log
75 | # #rm r70_w30_1G_d4.log
76 | # echo -e " "
77 | # /usr/bin/fio --name=seqread1mb --filename=fiotest --runtime=120 --ioengine=sync --fdatasync=1 --direct=1 --bs=2k --readwrite=rw --rwmixread=70 --rwmixwrite=30 --iodepth=1 --blocksize=4k --size=200M > r70_w30_200M_d4.log
78 | # s7030small=$(cat r70_w30_200M_d4.log |grep IOPS|tail -1)
79 | # echo -e "$s7030small"
80 | # cat r70_w30_200M_d4.log
81 | # #rm r70_w30_200M_d4.log
82 | # echo -e " "
83 | # echo -e "-- [ SINGLE JOB, 30% read, 70% write] --"
84 | # echo -e " "
85 |
86 | # /usr/bin/fio --name=seqwrite1G --filename=fiotest --runtime=120 --bs=2k --ioengine=sync --fdatasync=1 --direct=1 --bs=2k --readwrite=rw --rwmixread=30 --rwmixwrite=70 --iodepth=1 --blocksize=4k --size=200M > r30_w70_200M_d1.log
87 | # so7030big=$(cat r30_w70_200M_d1.log |grep IOPS|tail -1)
88 | # echo -e "$so7030big"
89 | # cat r30_w70_200M_d1.log
90 | # #rm r30_w70_200M_d1.log
91 | # echo -e " "
92 | # /usr/bin/fio --name=seqwrite1mb --filename=fiotest --runtime=120 --bs=2k --ioengine=sync --fdatasync=1 --direct=1 --bs=2k --readwrite=rw --rwmixread=30 --rwmixwrite=70 --iodepth=1 --blocksize=4k --size=1G > r30_w70_1G_d1.log
93 | # so7030small=$(cat r30_w70_1G_d1.log |grep IOPS|tail -1)
94 | # echo -e "$so7030small"
95 | # cat r30_w70_1G_d1.log
96 | # #rm r30_w70_1G_d1.log
97 | # rm fiotest
98 |
99 | # echo -e " "
100 | # echo -e "-- [ 8 PARALLEL JOBS, 70% read, 30% write] ----"
101 | # echo -e " "
102 |
103 | # /usr/bin/fio --name=seqparread1g8 --filename=fiotest --runtime=120 --bs=2k --ioengine=sync --fdatasync=1 --direct=1 --numjobs=8 --readwrite=rw --rwmixread=70 --rwmixwrite=30 --iodepth=1 --blocksize=4k --size=1G --percentage_random=0 > r70_w30_1G_d4.log
104 | # s7030big=$(cat r70_w30_1G_d4.log |grep IOPS|tail -1)
105 | # echo -e "$s7030big"
106 | # #rm r70_w30_1G_d4.log
107 | # echo -e " "
108 | # /usr/bin/fio --name=seqparread1mb8 --filename=fiotest --runtime=120 --bs=2k --ioengine=sync --fdatasync=1 --direct=1 --numjobs=8 --readwrite=rw --rwmixread=70 --rwmixwrite=30 --iodepth=1 --blocksize=4k --size=200M > r70_w30_200M_d4.log
109 | # s7030small=$(cat r70_w30_200M_d4.log |grep IOPS|tail -1)
110 | # echo -e "$s7030small"
111 | # rm r70_w30_200M_d4.log
112 |
113 | # echo -e " "
114 | # echo -e "-- [ 8 PARALLEL JOBS, 30% read, 70% write] ----"
115 | # echo -e " "
116 |
117 | # /usr/bin/fio --name=seqparwrite1g8 --filename=fiotest --runtime=120 --bs=2k --ioengine=sync --fdatasync=1 --direct=1 --numjobs=8 --readwrite=rw --rwmixread=30 --rwmixwrite=70 --iodepth=1 --blocksize=4k --size=200M > r30_w70_200M_d1.log
118 | # so7030big=$(cat r30_w70_200M_d1.log |grep IOPS|tail -1)
119 | # echo -e "$so7030big"
120 | # #rm r30_w70_200M_d1.log
121 | # echo -e " "
122 | # /usr/bin/fio --name=seqparwrite1mb8 --filename=fiotest --runtime=120 --bs=2k --ioengine=sync --fdatasync=1 --direct=1 --numjobs=8 --readwrite=rw --rwmixread=30 --rwmixwrite=70 --iodepth=1 --blocksize=4k --size=1G > r30_w70_1G_d1.log
123 | # so7030small=$(cat r30_w70_1G_d1.log |grep IOPS|tail -1)
124 | # echo -e "$so7030small"
125 | # #rm r30_w70_1G_d1.log
126 | # rm fiotest
127 |
128 | echo -e " "
129 | echo -e "- END -----------------------------------------"
--------------------------------------------------------------------------------
/iostat.log:
--------------------------------------------------------------------------------
1 | Linux 5.17.11-300.fc36.x86_64 (localhost.localdomain) 06/01/2022 _x86_64_ (12 CPU)
2 |
3 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
4 | mmcblk0 0.00 0.00 0.00 0.00 1133.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.04
5 | nvme0n1 222.97 1.14 0.05 0.02 0.14 5.25 247.21 1.94 0.47 0.19 0.46 8.04 0.00 0.00 0.00 0.00 0.00 0.00 1.38 1.05 0.15 3.10
6 | sda 0.02 0.00 0.00 0.00 1.89 15.09 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
7 | zram0 0.01 0.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
8 |
9 |
10 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
11 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
12 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 15.00 0.14 0.00 0.00 0.57 9.47 0.00 0.00 0.00 0.00 0.00 0.00 2.00 1.00 0.01 0.85
13 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
15 |
16 |
17 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
18 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 2.00 0.01 0.00 0.00 44.50 5.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.09 3.05
20 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
22 |
23 |
24 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
25 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
26 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 28.50 0.41 1.00 3.39 2.12 14.74 0.00 0.00 0.00 0.00 0.00 0.00 3.00 0.83 0.06 1.45
27 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
28 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
29 |
30 |
31 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
32 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
33 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.09 0.50 5.88 0.75 11.75 0.00 0.00 0.00 0.00 0.00 0.00 1.50 1.00 0.01 0.75
34 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
35 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
36 |
37 |
38 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
39 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
40 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 0.00 0.00 44.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.02 2.25
41 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
42 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
43 |
44 |
45 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
46 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
47 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 2.50 0.06 0.50 16.67 1.00 24.80 0.00 0.00 0.00 0.00 0.00 0.00 0.50 1.00 0.00 0.30
48 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
49 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
50 |
51 |
52 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
53 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
54 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 225.00 10.76 0.50 0.22 1.27 48.99 0.00 0.00 0.00 0.00 0.00 0.00 2.50 0.80 0.29 3.20
55 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
56 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
57 |
58 |
59 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
60 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
61 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 37.00 0.41 3.00 7.50 0.42 11.41 0.00 0.00 0.00 0.00 0.00 0.00 7.00 0.93 0.02 1.75
62 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
63 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
64 |
65 |
66 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
67 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
68 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 10.00 0.30 0.00 0.00 0.65 31.20 0.00 0.00 0.00 0.00 0.00 0.00 1.50 1.00 0.01 0.80
69 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
70 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
71 |
72 |
73 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
74 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
75 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 13.50 0.11 0.50 3.57 0.48 8.59 0.00 0.00 0.00 0.00 0.00 0.00 2.50 0.80 0.01 0.75
76 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
77 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
78 |
79 |
80 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
81 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
82 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 1.50 0.01 0.00 0.00 44.67 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.07 2.25
83 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
84 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
85 |
86 |
87 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
88 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
89 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 7.00 0.03 0.00 0.00 1.00 4.29 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 0.85
90 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
91 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
92 |
93 |
94 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
95 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
96 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.00 0.00 2.50 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.20
97 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
98 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
99 |
100 |
101 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
102 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
103 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 5.00 0.27 0.00 0.00 0.90 55.20 0.00 0.00 0.00 0.00 0.00 0.00 1.00 1.00 0.01 0.60
104 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
105 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
106 |
107 |
108 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
109 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
110 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 2.50 0.03 0.00 0.00 1.20 11.20 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.20
111 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
112 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
113 |
114 |
115 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
116 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
117 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 13.50 0.32 0.50 3.57 0.48 24.30 0.00 0.00 0.00 0.00 0.00 0.00 2.50 1.00 0.01 0.75
118 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
119 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
120 |
121 |
122 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
123 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
124 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 8.00 0.11 1.00 11.11 0.62 14.25 0.00 0.00 0.00 0.00 0.00 0.00 1.50 1.00 0.01 0.65
125 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
126 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
127 |
128 |
129 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
130 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
131 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 14.50 0.10 0.50 3.33 2.66 7.31 0.00 0.00 0.00 0.00 0.00 0.00 0.50 1.00 0.04 0.60
132 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
133 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
134 |
135 |
136 | Device r/s rMB/s rrqm/s %rrqm r_await rareq-sz w/s wMB/s wrqm/s %wrqm w_await wareq-sz d/s dMB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
137 | mmcblk0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
138 | nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 0.00 0.00 44.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.02 2.25
139 | sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
140 | zram0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
141 |
142 |
143 |
--------------------------------------------------------------------------------
/iostat.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # collect 20 extended per-device reports (-dmx, sizes in MB) at 2-second intervals
4 | iostat -dmx 2 20 > iostat.log
--------------------------------------------------------------------------------
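The iostat.log samples above can be summarized without extra tooling. As a sketch (the `avg_w_await` helper is an assumption, not part of the repo), an awk filter that averages write latency for one device across all samples, assuming the `-dmx` column layout captured above, where `w_await` is field 12:

```shell
# Average w_await (field 12 in iostat -dmx output) for one device.
# Reads samples from stdin so any iostat log can be piped in.
avg_w_await() {
    awk -v dev="$1" '
        $1 == dev { sum += $12; n++ }
        END { if (n) printf "%.2f\n", sum / n }'
}

# Usage: avg_w_await nvme0n1 < iostat.log
```

Matching on the device name in field 1 also skips the repeated header lines automatically.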
/metrics.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | function promqlQuery() {
4 | END_TIME=$(date -u +%s)
5 | START_TIME=$(date -u --date="60 minutes ago" +%s)
6 |
7 | oc exec -c prometheus -n openshift-monitoring prometheus-k8s-0 -- curl --data-urlencode "query=$1" --data-urlencode "step=10" --data-urlencode "start=$START_TIME" --data-urlencode "end=$END_TIME" http://localhost:9090/api/v1/query_range
8 | }
9 | promqlQuery "rate(node_disk_read_time_seconds_total[1m])" > ./out/node_disk_read_time_seconds_total
10 | promqlQuery "rate(node_disk_write_time_seconds_total[1m])" > ./out/node_disk_write_time_seconds_total
11 | promqlQuery "rate(node_schedstat_running_seconds_total[1m])" > ./out/node_schedstat_running_seconds_total
12 | promqlQuery "rate(node_schedstat_waiting_seconds_total[1m])" > ./out/node_schedstat_waiting_seconds_total
13 | promqlQuery "rate(node_cpu_seconds_total[1m])" > ./out/node_cpu_seconds_total
14 | promqlQuery "rate(node_network_receive_errs_total[1m])" > ./out/node_network_receive_errs_total
15 | promqlQuery "rate(node_network_receive_drop_total[1m])" > ./out/node_network_receive_drop_total
16 | promqlQuery "rate(node_network_receive_bytes_total[1m])" > ./out/node_network_receive_bytes_total
17 | promqlQuery "rate(node_network_transmit_errs_total[1m])" > ./out/node_network_transmit_errs_total
18 | promqlQuery "rate(node_network_transmit_drop_total[1m])" > ./out/node_network_transmit_drop_total
19 | promqlQuery "rate(node_network_transmit_bytes_total[1m])" > ./out/node_network_transmit_bytes_total
20 | promqlQuery "instance:node_cpu_utilisation:rate1m" > ./out/node_cpu_utilisation
21 | promqlQuery "instance_device:node_disk_io_time_seconds:rate1m" > ./out/node_disk_io_time_seconds
22 | promqlQuery "rate(node_disk_io_time_seconds_total[1m])" > ./out/node_disk_io_time_seconds_total
23 | promqlQuery "histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket{job=\"etcd\"}[1m])) by (instance, le))" > ./out/etcd_disk_backend_commit_duration_seconds_bucket_.99
24 | promqlQuery "histogram_quantile(0.999, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket{job=\"etcd\"}[1m])) by (instance, le))" > ./out/etcd_disk_backend_commit_duration_seconds_bucket_.999
25 | promqlQuery "histogram_quantile(0.9999, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket{job=\"etcd\"}[1m])) by (instance, le))" > ./out/etcd_disk_backend_commit_duration_seconds_bucket_.9999
26 |
27 | tar cfz metrics.tar.gz ./out
--------------------------------------------------------------------------------
/must.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | MUST_PATH=$1
4 |
5 | # TERMINAL COLORS -----------------------------------------------------------------
6 |
7 | NONE='\033[00m'
8 | RED='\033[01;31m'
9 | GREEN='\033[01;32m'
10 | YELLOW='\033[01;33m'
11 | BLACK='\033[30m'
12 | BLUE='\033[34m'
13 | VIOLET='\033[35m'
14 | CYAN='\033[36m'
15 | GREY='\033[37m'
16 |
17 | cd "$MUST_PATH"
18 | cd $(echo */)
19 | # ls
20 |
21 | OCP_VERSION=$(cat cluster-scoped-resources/config.openshift.io/clusterversions.yaml |grep "Cluster version is"| grep -Po "(\d+\.)+\d+")
22 | echo -e "Cluster version is $OCP_VERSION"
23 | echo -e ""
24 |
25 | cd cluster-scoped-resources/core/nodes
26 | NODES_NUMBER=$(ls|wc -l)
27 | echo -e "There are $NODES_NUMBER nodes in cluster"
28 |
29 | cd ../persistentvolumes
30 | PV_NUMBER=$(ls|wc -l)
31 | echo -e "There are $PV_NUMBER PVs in cluster"
32 |
33 | cd ../nodes
34 |
35 | NODES=()
36 | MASTER=()
37 | INFRA=()
38 | WORKER=()
39 |
40 | help_etcd_objects() {
41 | echo -e ""
42 | echo -e "- Number of objects ---"
43 | echo -e ""
44 | echo -e "List number of objects in ETCD:"
45 | echo -e ""
46 | echo -e "oc rsh -n openshift-etcd -c etcd"
47 | echo -e "> etcdctl get / --prefix --keys-only | sed '/^$/d' | cut -d/ -f3 | sort | uniq -c | sort -rn"
48 | echo -e ""
49 | echo -e "[HINT] An object count (secrets, deployments, etc.) above 8k can cause performance issues on storage without enough IOPS."
50 |
51 | echo -e ""
52 | echo -e "List secrets per namespace:"
53 | echo -e ""
54 | echo -e "> oc get secrets -A --no-headers | awk '{ns[$1]++}END{for (i in ns) print i,ns[i]}'"
55 | echo -e ""
56 | echo -e "[HINT] Any namespace with 20+ secrets should be cleaned up (unless there is a specific customer need for so many secrets)."
57 | echo -e ""
58 | }
59 |
60 | help_etcd_troubleshoot() {
61 | echo -e ""
62 | echo -e "- Generic troubleshooting ---"
63 | echo -e ""
64 | echo -e "More details about troubleshooting ETCD can be found at https://access.redhat.com/articles/6271341"
65 | }
66 |
67 | help_etcd_metrics() {
68 | echo -e ""
69 | echo -e "- ETCD metrics ---"
70 | echo -e ""
71 | echo -e "How to collect ETCD metrics. https://access.redhat.com/solutions/5489721"
72 | }
73 |
74 | help_etcd_networking() {
75 | echo -e ""
76 | echo -e "- ETCD networking troubleshooting ---"
77 | echo -e ""
78 | echo -e "On the masters, check that there are no dropped packets or RX/TX errors on the main NIC."
79 | echo -e "> ip -s link show"
80 | echo -e ""
81 | echo -e "Also check latency against the API (expected value is 2-5 ms, i.e. 0.002-0.005 in the output):"
82 | echo -e "> curl -k https://api..com -w \"%{time_connect}\""
83 | echo -e "Anything higher could indicate a network bottleneck."
84 | }
85 |
86 | # help_etcd_objects
87 |
88 |
89 | for filename in *.yaml; do
90 | [ -e "$filename" ] || continue
91 | [ ! -z "$(cat $filename |grep node-role|grep -w 'node-role.kubernetes.io/master:')" ] && MASTER+=("$filename") && NODES+=("$filename [master]") || true
92 | done
93 |
94 | for filename in *.yaml; do
95 | [ -e "$filename" ] || continue
96 | [ ! -z "$(cat $filename |grep node-role|grep -w 'node-role.kubernetes.io/infra:')" ] && INFRA+=("$filename") && NODES+=("$filename [infra]") || true
97 | done
98 |
99 | for filename in *.yaml; do
100 | [ -e "$filename" ] || continue
101 | [ ! -z "$(cat $filename |grep node-role|grep -w 'node-role.kubernetes.io/worker:')" ] && WORKER+=("$filename") && NODES+=("$filename [worker]") || true
102 | done
103 |
104 | echo -e " --------------- "
105 | # echo ${NODES[@]}
106 |
107 | echo -e "${#MASTER[@]} masters"
108 | if [ "${#MASTER[@]}" != "3" ]; then
109 | echo -e "[WARNING] exactly 3 masters are supported; you have ${#MASTER[@]}."
110 | fi
111 |
112 | echo -e "${#INFRA[@]} infra nodes"
113 | echo -e "${#WORKER[@]} worker nodes"
114 |
115 | # for i in ${NODES[@]}; do echo $i; done
116 |
117 |
118 | cd "$MUST_PATH"
119 | cd $(echo */)
120 | cd namespaces/openshift-etcd/pods
121 | echo -e ""
122 | echo -e "[ETCD]"
123 |
124 | OVRL=0
125 | NTP=0
126 | HR=0
127 | TK=0
128 | LED=0
129 | SP=0
130 | etcd_overload() {
131 | OVERLOAD=$(cat $1/etcd/etcd/logs/current.log|grep 'overload'|wc -l)
132 | if [ "$OVERLOAD" != "0" ]; then
133 | echo -e "${RED}[WARNING]${NONE} we found $OVERLOAD 'server is likely overloaded' messages in $1"
134 | echo -e ""
135 | OVRL=$(($OVRL+$OVERLOAD))
136 | fi
137 | }
138 |
139 | etcd_took_too_long() {
140 | TOOK=$(cat $1/etcd/etcd/logs/current.log|grep 'took too long'|wc -l)
141 | if [ "$TOOK" != "0" ]; then
142 | echo -e "${RED}[WARNING]${NONE} we found $TOOK 'took too long' messages in $1"
143 | TK=$(($TK+$TOOK))
144 | echo -e ""
145 | fi
146 | }
147 |
148 | etcd_ntp() {
149 | CLOCK=$(cat $1/etcd/etcd/logs/current.log|grep 'clock difference'|wc -l)
150 | if [ "$CLOCK" != "0" ]; then
151 | echo -e "${RED}[WARNING]${NONE} we found $CLOCK ntp clock difference messages in $1"
152 | NTP=$(($NTP+$CLOCK))
153 | fi
154 | }
155 |
156 | etcd_heart() {
157 | HEART=$(cat $1/etcd/etcd/logs/current.log|grep 'failed to send out heartbeat on time'|wc -l)
158 | if [ "$HEART" != "0" ]; then
159 | echo -e "${RED}[WARNING]${NONE} we found $HEART failed to send out heartbeat on time messages in $1"
160 | HR=$(($HR+$HEART))
161 | fi
162 | }
163 |
164 | etcd_space() {
165 | SPACE=$(cat $1/etcd/etcd/logs/current.log|grep 'database space exceeded'|wc -l)
166 | if [ "$SPACE" != "0" ]; then
167 | echo -e "${RED}[WARNING]${NONE} we found $SPACE 'database space exceeded' in $1"
168 | SP=$(($SP+$SPACE))
169 | fi
170 | }
171 |
172 | etcd_leader() {
173 | LEADER=$(cat $1/etcd/etcd/logs/current.log|grep 'leader changed'|wc -l)
174 | if [ "$LEADER" != "0" ]; then
175 | echo -e "${RED}[WARNING]${NONE} we found $LEADER 'leader changed' in $1"
176 | LED=$(($LED+$LEADER))
177 | fi
178 | }
179 |
180 |
181 | etcd_compaction() {
182 |
183 | echo -e "Compaction on $1"
184 | case "${OCP_VERSION}" in
185 | 4.9*|4.8*)
186 | echo -e "[highest seconds]"
187 | cat $1/etcd/etcd/logs/current.log|grep "compaction"| grep -v downgrade| grep -E "[0-9]+(.[0-9]+)s"|grep -o '[^,]*$'| cut -d":" -f2|grep -oP '"\K[^"]+'|sort| tail -6
188 | echo -e ""
189 | echo -e "[highest ms]"
190 | cat $1/etcd/etcd/logs/current.log|grep "compaction"| grep -v downgrade| grep -E "[0-9]+(.[0-9]+)ms"|grep -o '[^,]*$'| cut -d":" -f2|grep -oP '"\K[^"]+'|sort| tail -6
191 |
192 | # ${CLIENT} logs pod/$1 -n ${NS} -c etcd | grep "compaction"| grep -v downgrade| grep -E "[0-9]+(.[0-9]+)*"|grep -o '[^,]*$'| cut -d":" -f2|grep -oP '"\K[^"]+'|sort| tail -10
193 | ;;
194 | 4.7*)
195 | echo -e "[highest seconds]"
196 | cat $1/etcd/etcd/logs/current.log | grep "compaction"| grep -E "[0-9]+(.[0-9]+)s"|cut -d " " -f13| cut -d ')' -f 1 |sort|tail -6
197 | echo -e ""
198 | echo -e "[highest ms]"
199 | cat $1/etcd/etcd/logs/current.log | grep "compaction"| grep -E "[0-9]+(.[0-9]+)ms"|cut -d " " -f13| cut -d ')' -f 1 |sort|tail -6
200 | ;;
201 | 4.6*)
202 | echo -e "[highest seconds]"
203 | cat $1/etcd/etcd/logs/current.log | grep "compaction"| grep -E "[0-9]+(.[0-9]+)s"|cut -d " " -f13| cut -d ')' -f 1 |sort|tail -6 #was f12, but doesnt work on some gathers
204 | echo -e ""
205 | echo -e "[highest ms]"
206 | cat $1/etcd/etcd/logs/current.log | grep "compaction"| grep -E "[0-9]+(.[0-9]+)ms"|cut -d " " -f13| cut -d ')' -f 1 |sort|tail -6 #was f12, but doesnt work on some gathers
207 | ;;
208 | *)
209 | echo -e "unknown version ${OCP_VERSION}!"
210 | ;;
211 | esac
212 | echo -e ""
213 | }
214 |
215 |
216 |
217 | # MAIN FUNCS
218 |
219 | overload_solution() {
220 | echo -e "SOLUTION: Review ETCD and CPU metrics as this could be caused by CPU bottleneck or slow disk."
221 | echo -e ""
222 | }
223 |
224 |
225 | overload_check() {
226 | # echo -e ""
227 | # echo -e "[ETCD - looking for 'server is likely overloaded' messages.]"
228 | # echo -e ""
229 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
230 | etcd_overload $member
231 | done
232 | echo -e "Found $OVRL 'server is likely overloaded' messages in total."
233 | echo -e ""
234 | if [[ $OVRL -ne "0" ]];then
235 | overload_solution
236 | fi
237 | }
238 |
239 | tooklong_solution() {
240 | echo -e ""
241 | echo -e "SOLUTION: Even with a slow mechanical disk or a virtualized network disk, applying a request should normally take fewer than 50 milliseconds (and around 5ms for fast SSD/NVMe disk)."
242 | echo -e ""
243 | }
244 |
245 | tooklong_check() {
246 | # echo -e ""
247 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
248 | etcd_took_too_long $member
249 | done
250 | echo -e ""
251 | if [[ $TK -eq "0" ]];then
252 | echo -e "Found zero 'took too long' messages. OK"
253 | else
254 | echo -e "Found $TK 'took too long' messages in total."
255 | fi
256 | if [[ $TK -ne "0" ]];then
257 | tooklong_solution
258 | fi
259 | }
260 |
261 |
262 |
263 | ntp_solution() {
264 | echo -e ""
265 | echo -e "SOLUTION: When clocks are out of sync with each other, they cause I/O timeouts and liveness probe failures, which make the etcd pod restart frequently. Check that Chrony is enabled, running, and in sync with:"
266 | echo -e " - chronyc sources"
267 | echo -e " - chronyc tracking"
268 | echo -e ""
269 | }
270 |
271 | ntp_check() {
272 | # echo -e ""
273 | # echo -e "[ETCD - looking for 'rafthttp: the clock difference against peer XXXX is too high' messages.]"
274 | # echo -e ""
275 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
276 | etcd_ntp $member
277 | done
278 | echo -e ""
279 | if [[ $NTP -eq "0" ]];then
280 | echo -e "Found zero NTP out of sync messages. OK"
281 | else
282 | echo -e "Found $NTP NTP out of sync messages in total."
283 | fi
284 | echo -e ""
285 | if [[ $NTP -ne "0" ]];then
286 | ntp_solution
287 | fi
288 | }
289 |
290 | heart_solution() {
291 | echo -e ""
292 | echo -e "SOLUTION: Usually this issue is caused by a slow disk. The disk could be experiencing contention between etcd and other applications, or the disk is simply too slow."
293 | echo -e ""
294 | }
295 |
296 | heart_check() {
297 | # echo -e ""
298 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
299 | etcd_heart $member
300 | done
301 | echo -e ""
302 | if [[ $HR -eq "0" ]];then
303 | echo -e "Found zero 'failed to send out heartbeat on time' messages. OK"
304 | else
305 | echo -e "Found $HR 'failed to send out heartbeat on time' messages in total."
306 | fi
307 | echo -e ""
308 | if [[ $HR -ne "0" ]];then
309 | heart_solution
310 | fi
311 | }
312 |
313 | space_solution() {
314 | echo -e ""
315 | echo -e "SOLUTION: Defragment and clean up ETCD, remove unused secrets or deployments."
316 | echo -e ""
317 | }
318 |
319 | space_check() {
320 | # echo -e ""
321 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
322 | etcd_space $member
323 | done
324 | echo -e ""
325 | if [[ $SP -eq "0" ]];then
326 | echo -e "Found zero 'database space exceeded' messages. OK"
327 | else
328 | echo -e "Found $SP 'database space exceeded' messages in total."
329 | fi
330 | echo -e ""
331 | if [[ $SP -ne "0" ]];then
332 | space_solution
333 | fi
334 | }
335 |
336 |
337 | leader_solution() {
338 | echo -e ""
339 | echo -e "SOLUTION: Defragment and clean up ETCD. Also consider faster storage."
340 | echo -e ""
341 | }
342 |
343 | leader_check() {
344 | # echo -e ""
345 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
346 | etcd_leader $member
347 | done
348 | echo -e ""
349 | if [[ $LED -eq "0" ]];then
350 | echo -e "Found zero 'leader changed' messages. OK"
351 | else
352 | echo -e "Found $LED 'leader changed' messages in total."
353 | fi
354 | if [[ $LED -ne "0" ]];then
355 | leader_solution
356 | fi
357 | }
358 |
359 | compaction_check() {
360 | echo -e "-- ETCD COMPACTION ---"
361 | echo -e ""
362 | echo -e "Compaction should ideally take below 100 ms (and below 10 ms on fast SSD/NVMe)."
363 | echo -e ""
364 | for member in $(ls |grep -v "revision"|grep -v "quorum"); do
365 | etcd_compaction $member
366 | done
367 | echo -e ""
368 | # echo -e " Found together $LED 'leader changed' messages."
369 | # if [[ $LED -ne "0" ]];then
370 | # leader_solution
371 | # fi
372 | }
373 |
374 | compaction_check
375 | echo -e ""
376 | echo -e "- ERROR CHECK ---"
377 | overload_check
378 | tooklong_check
379 | ntp_check
380 | heart_check
381 | space_check
382 | leader_check
383 |
384 |
385 |
386 |
387 | # for member in $(ls |grep -v "revision"|grep -v "quorum"); do
388 | # echo -e "- $member ----------------"
389 | # echo -e ""
390 |
391 | # if [ "$LEADER" != "0" ]; then
392 | # echo -e " ${RED}[WARNING]${NONE} we found $LEADER leader changed messages!"
393 | # fi
394 |
395 | # echo -e ""
396 | # done
397 |
398 |
399 | echo -e ""
400 | echo -e "[NETWORKING]"
401 | cd ../../../cluster-scoped-resources/network.openshift.io/clusternetworks/
402 | cat default.yaml |grep CIDR
403 | cat default.yaml | grep serviceNetwork
404 |
405 | echo -e ""
406 | echo -e "ADDITIONAL HELP:"
407 | help_etcd_troubleshoot
408 | help_etcd_metrics
409 | help_etcd_networking
410 | help_etcd_objects
411 |
--------------------------------------------------------------------------------
/ntp.md:
--------------------------------------------------------------------------------
1 | etcd.sh failed to find an NTP issue in the cluster.
2 | [NTP MESSAGES]
3 | Found zero NTP out of sync messages. OK
4 | -bash-4.2$ grep 'clock difference' namespaces/openshift-etcd/pods/etcd-pdhppr-xc55z-master-0/etcd/etcd/logs/current.log
5 | -bash-4.2$
6 | But there is a huge clock-drift and the cluster is not working:
7 | $ tail -1 namespaces/openshift-etcd/pods/etcd-pdhppr-xc55z-master-0/etcd/etcd/logs/current.log
8 | 2022-07-04T06:35:26.512448489Z {"level":"warn","ts":"2022-07-04T06:35:26.512Z","caller":"rafthttp/probing_status.go:86","msg":"prober found high clock drift","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"d66e878e1cd03cfc","clock-drift":"8m0.640985013s","rtt":"502.906µs"}
9 |
15 | Maybe we can add this check to the script.
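
A hedged sketch of what that check could look like, modeled on the existing etcd.sh log counters (the function name and structure are illustrative, not part of the script yet):

```shell
#!/bin/bash
# Count "prober found high clock drift" warnings in one etcd member log.
check_clock_drift() {
  local log="$1"
  local drift
  drift=$(grep -c 'clock drift' "$log" || true)
  if [ "$drift" != "0" ]; then
    echo "[WARNING] found $drift 'high clock drift' messages in $log"
  else
    echo "Found zero 'high clock drift' messages. OK"
  fi
}

# Example against the member log quoted above (path is from this gather):
# check_clock_drift namespaces/openshift-etcd/pods/etcd-pdhppr-xc55z-master-0/etcd/etcd/logs/current.log
```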
--------------------------------------------------------------------------------
/push.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | podman login -u peterducai quay.io  # password is prompted interactively; never hardcode registry credentials
4 | podman push quay.io/peterducai/openshift-etcd-suite:latest
5 | podman push quay.io/peterducai/openshift-etcd-suite:0.1.28
6 |
--------------------------------------------------------------------------------
/runner.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | case "$1" in
4 | etcd)
5 | ./etcd.sh $2 $3
6 | ;;
7 | toolong)
8 | etcd_tooktoolong.py $2
9 | ;;
10 | fio)
11 | ./fio_suite.sh
12 | ;;
13 | full)
14 | ./iostat.sh &
15 | ./top.sh &
16 | sleep 1
17 | ./fio_suite2.sh > fio.log &
18 | wait
19 | echo "All tests are complete"
20 | echo -e ""
21 | echo -e "RESULTS:"
22 | cat top.log
23 | echo -e ""
24 | cat iostat.log
25 | echo -e ""
26 | cat r70_w30_1G_d4.log
27 | echo -e ""
28 | cat r70_w30_200M_d4.log
29 | echo -e ""
30 | cat r30_w70_200M_d1.log
31 | echo -e ""
32 | cat r30_w70_1G_d1.log
33 | echo -e ""
39 | #
40 | rm r70_w30_1G_d4.log
41 | rm r70_w30_200M_d4.log
42 | rm r30_w70_200M_d1.log
43 | rm r30_w70_1G_d1.log
47 | ;;
48 | *)
49 | echo -e "NO PARAMS. Choose 'etcd', 'fio', 'toolong', or 'full'"
50 | ;;
51 | esac
--------------------------------------------------------------------------------
/sketchbook.md:
--------------------------------------------------------------------------------
1 | # other stuff
2 |
3 | ```
4 | ip -s link show
5 | curl -k http://api..com -w "%{time_connect},%{time_total},%{speed_download},%{http_code},%{size_download},%{url_effective}\n"
6 | journalctl -xe
7 | dmesg
8 | ```
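
The `curl` timing output above pairs naturally with a threshold check; a hedged sketch (the 5 ms bound reflects the 0.002-0.005 s connect times quoted elsewhere in this repo, and `check_connect_latency` is a hypothetical helper, not an existing script):

```shell
#!/bin/bash
# Hypothetical helper: flag an API connect time above 5 ms (0.005 s).
# The value would come from something like:
#   curl -k -o /dev/null -s -w '%{time_connect}' "$API_URL"
check_connect_latency() {
  local t="$1"  # time_connect, in seconds
  if awk -v t="$t" 'BEGIN { exit !(t > 0.005) }'; then
    echo "[WARNING] connect latency ${t}s exceeds 5ms"
  else
    echo "connect latency ${t}s OK"
  fi
}
```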
9 |
10 |
11 |
12 |
13 | To modify an Amazon EBS volume using the AWS Management Console:
14 | 1. Open the Amazon EC2 console.
15 | 2. Choose Volumes, select the volume to modify, and then choose Actions, Modify Volume.
16 | 3. The Modify Volume window displays the volume ID and the volume’s current configuration, including type, size, IOPS, and throughput. Set new configuration values as follows:
17 |    - To modify the type, choose io1 for Volume Type.
18 |    - To modify the IOPS, enter a new value for IOPS.
19 | 4. After you have finished changing the volume settings, choose Modify. When prompted for confirmation, choose Yes.
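
The same change can also be made from the AWS CLI; a sketch with a placeholder volume ID and IOPS value (assumes a configured AWS CLI with EC2 permissions):

```shell
# Placeholder volume ID and IOPS value; adjust to your volume.
aws ec2 modify-volume \
  --volume-id vol-0123456789abcdef0 \
  --volume-type io1 \
  --iops 6000

# The change is applied in the background; watch its progress with:
aws ec2 describe-volumes-modifications \
  --volume-ids vol-0123456789abcdef0
```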
20 |
21 |
22 | https://aws.amazon.com/it/blogs/storage/migrate-your-amazon-ebs-volumes-from-gp2-to-gp3-and-save-up-to-20-on-costs/
23 |
24 |
25 |
26 | https://docs.openshift.com/container-platform/4.9/scalability_and_performance/planning-your-environment-according-to-object-maximums.html
--------------------------------------------------------------------------------
/top.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # run top 20 times with delay of 2 seconds
4 | top -b -d2 -n20 > top.log
--------------------------------------------------------------------------------