├── .DS_Store ├── .gitignore ├── LICENSE ├── amq-broker.adoc ├── bash.adoc ├── buildah.adoc ├── debezium-architecture.png ├── debezium-openshift.adoc ├── dnf-linux-commands.adoc ├── git-commands.adoc ├── images ├── amq-broker │ ├── amq-broker-replicated-journal.png │ └── amq-broker-shared-journal.png ├── df-h.png ├── df.png ├── du-h.png ├── du.png ├── git-workflow.png └── top.png ├── insights-api.adoc ├── kubernetes-java-operators.adoc ├── linux-advanced.adoc ├── linux-intermediate.adoc ├── linux-introduction.adoc ├── linux-systemd.adoc ├── mongodb.adoc ├── nodejs-command-line-flags.adoc ├── odo.adoc ├── package-management-commands.adoc ├── podman.adoc ├── quarkus-kubernetes-1.adoc ├── quarkus-kubernetes-2.adoc ├── quarkus-observability.adoc ├── quarkus-reactive-messaging.adoc ├── quarkus-spring.adoc ├── quarkus-testing.adoc └── rhel-8-linux-commands.adoc /.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/.DS_Store -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.pdf 2 | *.html 3 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Creative Commons Attribution-NoDerivatives 4.0 International Public License 2 | 3 | By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NoDerivatives 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions. 4 | 5 | Section 1 – Definitions. 6 | 7 | Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image. 8 | Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights. 9 | Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements. 10 | Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material. 
11 | Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License. 12 | Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license. 13 | Licensor means the individual(s) or entity(ies) granting rights under this Public License. 14 | Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them. 15 | Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world. 16 | You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning. 17 | 18 | Section 2 – Scope. 19 | 20 | License grant. 21 | Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to: 22 | reproduce and Share the Licensed Material, in whole or in part; and 23 | produce and reproduce, but not Share, Adapted Material. 24 | Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions. 25 | Term. The term of this Public License is specified in Section 6(a). 26 | Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material. 27 | Downstream recipients. 28 | Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License. 29 | No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material. 30 | No endorsement. 
Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i). 31 | 32 | Other rights. 33 | Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise. 34 | Patent and trademark rights are not licensed under this Public License. 35 | To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties. 36 | 37 | Section 3 – License Conditions. 38 | 39 | Your exercise of the Licensed Rights is expressly made subject to the following conditions. 40 | 41 | Attribution. 42 | 43 | If You Share the Licensed Material, You must: 44 | retain the following if it is supplied by the Licensor with the Licensed Material: 45 | identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); 46 | a copyright notice; 47 | a notice that refers to this Public License; 48 | a notice that refers to the disclaimer of warranties; 49 | a URI or hyperlink to the Licensed Material to the extent reasonably practicable; 50 | indicate if You modified the Licensed Material and retain an indication of any previous modifications; and 51 | indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. 52 | For the avoidance of doubt, You do not have permission under this Public License to Share Adapted Material. 53 | You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information. 54 | If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable. 55 | 56 | Section 4 – Sui Generis Database Rights. 57 | 58 | Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material: 59 | 60 | for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database, provided You do not Share Adapted Material; 61 | if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and 62 | You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database. 
63 | 64 | For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights. 65 | 66 | Section 5 – Disclaimer of Warranties and Limitation of Liability. 67 | 68 | Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You. 69 | To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You. 70 | 71 | The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability. 72 | 73 | Section 6 – Term and Termination. 74 | 75 | This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically. 76 | 77 | Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates: 78 | automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or 79 | upon express reinstatement by the Licensor. 80 | For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License. 81 | For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License. 82 | Sections 1, 5, 6, 7, and 8 survive termination of this Public License. 83 | 84 | Section 7 – Other Terms and Conditions. 85 | 86 | The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed. 87 | Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License. 88 | 89 | Section 8 – Interpretation. 90 | 91 | For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License. 
92 | To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
93 | No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
94 | Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
95 |
96 | Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
97 |
98 | Creative Commons may be contacted at creativecommons.org.
99 |
--------------------------------------------------------------------------------
/buildah.adoc:
--------------------------------------------------------------------------------

= Buildah Cheat Sheet
:experimental: true
:product-name: Buildah Cheat Sheet

Buildah is a tool for creating OCI-compliant container images. Buildah also provides the capability to create a container based on a particular container image, update the contents of that container, and then create a brand-new container image based on the altered container.

Although you can use Buildah to run a container on a local machine in a limited manner, the tool's fundamental purpose is to provide versatility for creating container images and pushing them to a container registry.

The sections that follow show you how to use Buildah to work with existing container images and build new ones. There is also a section that goes over the basics of pushing a container image to a container registry.

The `$` symbol in the examples in the sections below represents the console prompt for a terminal window.

== Installing Buildah

`yum -y install buildah`
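Once installed, you can verify that the `buildah` client is available and inspect the build environment it will use. Both commands below are part of the standard Buildah CLI; `buildah info` reports details about the host and the local storage configuration:

```
$ buildah --version

$ buildah info
```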
== Working with container images

The following sections describe how to:

* List container images.
* Pull a container image.
* Create a container image.
* Delete a container image.

There is also a section that demonstrates how to create a new container image based on an existing container image.

=== List all local container images

`buildah images`

*Example*

The following example demonstrates how to list container images stored on the local machine:

```
$ buildah images

REPOSITORY                       TAG            IMAGE ID       CREATED         SIZE
docker.io/library/busybox        latest         1a80408de790   5 weeks ago     1.46 MB
quay.io/app-sre/ubi8-nodejs-14   latest         528baa338298   8 months ago    659 MB
docker.io/library/node           12.18-alpine   e13d60032d4d   19 months ago   93.8 MB
docker.io/reselbob/pinger        latest         c5fa4df9cfe4   3 years ago     74.8 MB
```

=== Pulling a container image

`buildah from <registry>/<image>:<tag>`

*Example*

The following example pulls the container image `busybox:latest` from the remote registry `docker.io` and creates a working container based on it:

```
$ buildah from docker.io/busybox:latest

Trying to pull docker.io/library/busybox:latest...
Getting image source signatures
Copying blob 50e8d59317eb done
Copying config 1a80408de7 done
Writing manifest to image destination
Storing signatures
1a80408de790c0b1075d0a7e23ff7da78b311f85f36ea10098e4a6184c200964
```
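Note that `buildah from` pulls the image only as a side effect of creating a working container from it. If you want to fetch an image into local storage without creating a working container, the `buildah pull` command does exactly that:

```
$ buildah pull docker.io/library/busybox:latest
```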
=== Deleting a container image from a local machine

`buildah rmi <registry>/<image>:<tag>`

or

`buildah rmi <image-id>`

*Example*

The following example demonstrates how to delete the container image named `docker.io/library/busybox`:

```
$ buildah rmi docker.io/library/busybox
untagged: docker.io/library/busybox:latest
1a80408de790c0b1075d0a7e23ff7da78b311f85f36ea10098e4a6184c200964
```

=== Deleting all container images from a local machine

`buildah rmi --all`

*Example*

The following example demonstrates how to remove all container images stored on the local machine:

```
$ buildah rmi --all

untagged: docker.io/library/node:12.18-alpine
untagged: quay.io/app-sre/ubi8-nodejs-14:latest
untagged: docker.io/reselbob/pinger:latest
untagged: docker.io/library/busybox:latest
untagged: docker.io/library/httpd:latest
untagged: registry.access.redhat.com/ubi8/ubi:latest
e13d60032d4d14e88485db13b65a7e38b2588bc3101456278b5e2daddec7e862
528baa33829859e2b854b1f7c2356a8223e449aff78d18ee4cc2fad298199611
c5fa4df9cfe436469dab3d89be4dafcbfb61bc9e594778e12de02ee89ca7fa9a
1a80408de790c0b1075d0a7e23ff7da78b311f85f36ea10098e4a6184c200964
c58ef9bfbb5789a9882cee610ba778b1368d21b513d6caf32e3075542e13fe81
1264065f6ae851d6a33d7be03ffde100356592e385b9b72f65f91b5d9b944b92
```

=== Building a container image

`buildah bud -t <image-name> <containerfile>`

*Example*

The following example creates a container file and then builds a container image using that file.

Create the script that the container will run, along with the container file:

```
$ echo "echo This container works!" > myecho

$ chmod 755 myecho

$ cat << 'EOF' > Containerfile
FROM registry.access.redhat.com/ubi8/ubi
ADD myecho /tmp
ENTRYPOINT "/tmp/myecho"
EOF
```

Build the container image:

```
$ buildah bud -t myecho-image Containerfile

STEP 1/3: FROM registry.access.redhat.com/ubi8/ubi
STEP 2/3: ADD myecho /tmp
STEP 3/3: ENTRYPOINT "/tmp/myecho"
COMMIT myecho-image
Getting image source signatures
Copying blob 5bf135c4a0de skipped: already exists
Copying blob 773711fd02f0 skipped: already exists
Copying blob 12113fa850f7 done
Copying config b479141386 done
Writing manifest to image destination
Storing signatures
--> b4791413861
Successfully tagged localhost/myecho-image:latest
b4791413861b0245023d9781857000709f5c4ea22d464d16fcc6ce1b5daee2d5
```

List the container image:

```
$ buildah images
REPOSITORY               TAG      IMAGE ID       CREATED         SIZE
localhost/myecho-image   latest   636de016ba7a   9 seconds ago   225 MB
```

=== Inspect a container image

`buildah inspect --type image <image-id>`

or

`buildah inspect --type image <image-name>`

The `buildah inspect` command returns a very large JSON object that describes the many details of a container image.

*Example*

The following example demonstrates executing the `buildah inspect` command against the image ID `a134be2e5346`. The command produces a good deal of screen output, so the example shows only a snippet:

```
$ buildah inspect --type image a134be2e5346

{
    "Type": "buildah 0.0.1",
    "FromImage": "localhost/instrumentreseller:latest",
    "FromImageID": "a134be2e5346307e5999d059bbfabafa43763318b90be569454474e9d2289cf9",
    "FromImageDigest": "sha256:ab86f8d2e3907728f9dcdeb62271e9f165b9dff6aa4507e352df97fc2e81e367",
    "Config": "{\"created\":\"2022-06-14T18:16:42.578103429Z\",\"architecture\":\"amd64\",\"os\":\"............\"}
    ...
}
```

=== Modifying an existing container image to create a new one

*Example*

The following example creates a working container based on the container image `myecho-image`. Then a new file is created that echoes a new message; an older version of the file, containing the older message, already exists in the container.

The `buildah copy` command is used to replace the older file with the contents of the new file. Finally, the `buildah commit` command is used to create a new container image named `new-myecho-image`. The new container image has the file system of the legacy container image, but with the replacement content stored under the same file name.
Create the working container:

```
$ buildah from myecho-image
myecho-image-working-container
```

Create the new file with the new content:

```
$ echo "echo This is another container that works!" > myecho
```

Copy the new file contents into the running working container:

```
$ buildah copy myecho-image-working-container myecho /tmp/myecho
5f270702af64a52e355b3bcff955bdde2648418bea6e9e4d5d68cbba91450598
```

Exercise the running working container to verify that the contents of the new file will be displayed:

```
$ buildah run myecho-image-working-container sh /tmp/myecho
This is another container that works!
```

Create a new container image based on the file system of the legacy container image that also has the replacement content in the script file `/tmp/myecho`:

```
$ buildah commit myecho-image-working-container new-myecho-image
Getting image source signatures
Copying blob 5bf135c4a0de skipped: already exists
Copying blob 773711fd02f0 skipped: already exists
Copying blob 80062d3ed257 skipped: already exists
Copying blob c823fae997d4 done
Copying config f6dc970a52 done
Writing manifest to image destination
Storing signatures
f6dc970a528ce2c94eba3d957170ac612537e1bd9a9f6def15e246d5b965f4e5
```

List the local container images on the machine to verify that the new container image has been created:

```
$ buildah images
REPOSITORY                   TAG      IMAGE ID       CREATED          SIZE
localhost/new-myecho-image   latest   f6dc970a528c   10 seconds ago   225 MB
localhost/myecho-image       latest   636de016ba7a   3 hours ago      225 MB
```
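Buildah itself offers only limited support for running containers, so to exercise the committed image as an ordinary container you would typically hand it to a container runtime such as Podman. A minimal check, assuming Podman is installed on the same machine and shares the local image store:

```
$ podman run --rm localhost/new-myecho-image
This is another container that works!
```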
== Working with a container image registry

The following sections show you how to:

* Log in to a container image registry.
* Push a container image to a registry.
* Add an additional tag to a container image.

=== Logging in to a remote container image registry

`buildah login <registry-name>`

*Example*

The following example executes `buildah login`, which prompts for a username and password:

```
$ buildah login quay.io

Username:
Password:
Login Succeeded!
```

=== Pushing a container image to a container image registry

`buildah push <local-image-name>:<tag> <registry>/<repository>/<image-name>:<tag>`

*Example*

The following command pushes the local container image `localhost/myecho-image` to the repository of a user named `cooluser` on the remote registry `quay.io`:

```
$ buildah push localhost/myecho-image quay.io/cooluser/myecho-image:v1.0
```

=== Create an additional image tag on an existing image

`buildah tag <image-name>:<tag> <image-name>:<new-tag>`

*Example*

The following example creates a new tag, `verylatest`, and applies it to the existing container image `docker.io/library/nginx` that has the tag `latest`. Notice that the values of the `IMAGE ID` are identical:

```
$ buildah images
REPOSITORY                TAG      IMAGE ID       CREATED      SIZE
docker.io/library/nginx   latest   de2543b9436b   2 days ago   146 MB

$ buildah tag docker.io/library/nginx:latest docker.io/library/nginx:verylatest

$ buildah images
REPOSITORY                TAG          IMAGE ID       CREATED      SIZE
docker.io/library/nginx   latest       de2543b9436b   2 days ago   146 MB
docker.io/library/nginx   verylatest   de2543b9436b   2 days ago   146 MB
```
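When you are finished working with a registry, you can remove the credentials cached by `buildah login`. This assumes you previously logged in to `quay.io` as shown above:

```
$ buildah logout quay.io
```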
== Working with containers

The following sections show you how to:

* List all working containers.
* Run a working container.
* Display details about a working container.
* Delete a working container.

=== List all working containers

`buildah containers`

The command `buildah containers` lists all working containers. A working container is a container that has been created using the `buildah from <image-name>` command.

*Examples*

The following example creates three working containers using the `buildah from` command. Then the working containers are listed using the `buildah containers` command.

Create the containers:

```
$ buildah from httpd
httpd-working-container
$ buildah from busybox
busybox-working-container
$ buildah from nginx
nginx-working-container
```

List the containers:

```
$ buildah containers
CONTAINER ID  BUILDER  IMAGE ID      IMAGE NAME                        CONTAINER NAME
7071c5bab4ff     *     c58ef9bfbb57  docker.io/library/httpd:latest    httpd-working-container
da51dced0afe     *     1a80408de790  docker.io/library/busybox:latest  busybox-working-container
bc1473702c2d     *     de2543b9436b  docker.io/library/nginx:latest    nginx-working-container
```

=== Running a container with Buildah

`buildah run [options] <container-name> <command>`

*Example*

The following example builds a working container from the image `httpd`. Because the image might exist in a number of remote registries, `buildah` displays an interactive list of registries to choose from:

```
$ buildah from httpd

Please select an image:
    quay.io/httpd:latest
    registry.fedoraproject.org/httpd:latest
    registry.access.redhat.com/httpd:latest
    registry.centos.org/httpd:latest
  ▸ docker.io/library/httpd:latest

httpd-working-container
```

The `buildah run` command is then executed against the working container created by `buildah from`. The example executes the `ls /var` command, listing the contents of the `/var` directory located within the working container:

```
$ buildah run httpd-working-container ls /var
backups  cache  lib  local  lock  log  mail  opt  run  spool  tmp
```

=== Display details about a container

`buildah inspect [options] <container-name>`

or

`buildah inspect [options] <container-id>`

The `buildah inspect` command returns a very large JSON object that describes the many details of the working container.

*Example*

The following example inspects the container image named `registry.access.redhat.com/ubi8/ubi`. The option `--format '{{.IDMappingOptions}}'` is used to display only the information associated with the `IDMappingOptions` property of the JSON object:

```
$ buildah inspect --format '{{.IDMappingOptions}}' --type image registry.access.redhat.com/ubi8/ubi
{true true [] []}
```

=== Delete a container

`buildah delete <container-id>`

or

`buildah delete <container-name>`

*Examples*

The following examples demonstrate deleting containers created by Buildah:

```
$ buildah delete 35b88d7ef180
35b88d7ef1807a4d5e085472a23cea6425920ac94845fdcb33c036d89a804f3e
```

or

```
$ buildah delete httpd-working-container
f892d7f36f5f1d0b70fd40ebb00c0861cab44260f6b44add9574381673307ef5
```

=== Delete all containers on a machine, technique 1

`buildah rm --all`

*Example*

The following example deletes all containers on the local machine:

```
$ buildah rm --all

4666ea9b554494c204dcc5c30ae0fcad8f8195a3d896845d100899b4e956313f
9b181a3172cefa5c92e33bd7ff2619bd6ac2bab9d87ab2a2bd9a226f70016282
```

=== Delete all containers on a machine, technique 2

`buildah delete $(buildah containers -q)`

*Example*

The following example deletes all working containers on the machine by passing the container IDs emitted by `buildah containers -q` to `buildah delete`. If no working containers exist, the command throws an error because the inner command returns nothing:

```
$ buildah delete $(buildah containers -q)

7071c5bab4ff60de473b37c5a152b2c566e0f6a8d401ba916ba761d77ad88d7a
da51dced0afec9db1178eb48631433462d26853baa2f472d67b587b2f04c7866
bc1473702c2d82f0a14741a49747a8077149bf2945177e107ad4057d7c9b67dc
```

--------------------------------------------------------------------------------
/debezium-architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/debezium-architecture.png
--------------------------------------------------------------------------------
/debezium-openshift.adoc:
--------------------------------------------------------------------------------

= Debezium on OpenShift
:experimental: false
:product-name: Debezium
:version: 1.2.0

This cheat sheet covers how to deploy, create, run, and update a Debezium connector on OpenShift.

== What is Debezium?

https://debezium.io/[Debezium] is a distributed open-source platform for change data capture. Start it up, point it at your databases, and your apps can start responding to all of the inserts, updates, and deletes that other apps commit to your databases. Debezium is durable and fast, so your apps can respond quickly and never miss an event, even when things go wrong.

image::debezium-architecture.png[Debezium architecture]

== Deployment

Debezium is based on Apache Kafka and Kafka Connect, and can be run on Kubernetes and OpenShift via the https://strimzi.io[Strimzi] project. Strimzi provides a set of operators and container images for running Kafka on Kubernetes and OpenShift.
=== Deploy Kafka & Kafka Connect

[source, shell,indent=0]
----
oc new-project myproject
# Install the Strimzi operator
oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.19.0/strimzi-cluster-operator-0.19.0.yaml
# Deploy a single-node Kafka broker
oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/kafka/kafka-persistent-single.yaml
# Deploy a single instance of Kafka Connect with no plug-in installed
oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.19.0/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
----

=== Extend Kafka Connect with the Debezium binaries

** `Source-to-Image` (S2I):

[source, bash,indent=0]
----
export DEBEZIUM_VERSION=1.2.0.Final
mkdir -p plugins && cd plugins && \
for PLUGIN in {mongodb,mysql,postgres}; do \
    curl https://repo1.maven.org/maven2/io/debezium/debezium-connector-$PLUGIN/$DEBEZIUM_VERSION/debezium-connector-$PLUGIN-$DEBEZIUM_VERSION-plugin.tar.gz | tar xz; \
done && \
oc start-build my-connect-cluster-connect --from-dir=. --follow && \
cd .. && rm -rf plugins
----

** `Docker`:

[source, shell,indent=0]
----
export IMG_NAME="debezium-connect"
export DEBEZIUM_VERSION=1.2.0.Final

mkdir -p plugins && cd plugins && \
for PLUGIN in {mongodb,mysql,postgres}; do \
    curl https://repo1.maven.org/maven2/io/debezium/debezium-connector-$PLUGIN/$DEBEZIUM_VERSION/debezium-connector-$PLUGIN-$DEBEZIUM_VERSION-plugin.tar.gz | tar xz; \
done
cd ..
cat <<EOF > Dockerfile
FROM strimzi/kafka:0.19.0-kafka-2.5.0
USER root:root
COPY ./plugins/ /opt/kafka/plugins/
USER 1001
EOF

oc new-build --binary --name=$IMG_NAME -l app=$IMG_NAME
oc patch bc/$IMG_NAME -p '{"spec":{"strategy":{"dockerStrategy":{"dockerfilePath":"Dockerfile"}}}}'
oc start-build $IMG_NAME --from-dir=. --follow

oc create -f - <<EOF
----
80 | 81 | *Example:* 82 | 83 | The following example uses the command `git clone` to download the content of the remote repository found at `https://github.com/redhat-developer/developers.redhat.com` into the local directory `./rh`. 84 | 85 | ---- 86 | git clone https://github.com/redhat-developer/developers.redhat.com.git ./rh 87 | ---- 88 | 89 | === git pull 90 | 91 | When called within a local repository, downloads the latest, current assets from the associated remote repository. 92 | 93 | ---- 94 | git pull [options] 95 | ---- 96 | 97 | *Example:* 98 | 99 | The following example uses `git pull` to download files from the associated remote repository. Because contents in the remote repository and local repository are the same, the command responds with a message `Already up to date.` 100 | 101 | ---- 102 | $ git pull 103 | Already up to date. 104 | ---- 105 | 106 | === git fetch 107 | 108 | ---- 109 | git fetch [options] 110 | ---- 111 | 112 | Downloads code and assets from the remote repository to the local machine without overwriting the existing local code and assets in the current branch. If the optional `` is not provided, `git fetch` is executed against the Git repository associated with the present working directory. 113 | 114 | *Example:* 115 | 116 | The following example uses the `git fetch` command to downloaded updated assets from the corresponding remote repository, but will not merge the deltas in the branches on the local repository. 117 | 118 | ---- 119 | $ git fetch 120 | ---- 121 | 122 | === git log 123 | 124 | ---- 125 | git log [options] 126 | ---- 127 | 128 | Displays the Git log file that contains a history of all transactions in the repository. 129 | 130 | *Example:* 131 | 132 | The following example uses `git log` with the `--oneline` option to show all activities in the repository in an abbreviated format: 133 | 134 | ---- 135 | $ git log --oneline 136 | 80f6259 (HEAD -> main) adding newfile.txt to main 137 | 665ecf1 (origin/your-feature, origin/main, origin/dev, origin/HEAD) reorganizing repo structure 138 | c9b791c reorganizing repo structure 139 | af0f400 Update eapxp-quickstarts.yaml 140 | 28d8577 Update README.md 141 | f8be8a1 Update README.md 142 | 456b537 Update README.md 143 | 415ce57 Update eapxp-quickstarts.yaml 144 | 70233e6 Update README.md 145 | 9263b26 Update README.md 146 | 886f7c1 Update README.md 147 | 3a0f42d Update README.md 148 | 1768b69 Example YAML: Develop MicroProfile app on JBoss EAP 7.3 149 | 10b9670 Added directions on how to create an asset inventory in the README 150 | 41e85e1 Initial commit 151 | ---- 152 | 153 | == Working with branches 154 | 155 | The following sections describe the various `git branch` command expressions you can use to work with branches in a repository. 156 | 157 | === Getting the current branch name 158 | 159 | ---- 160 | git branch 161 | ---- 162 | 163 | Shows all branches in the local repository, flagging the current branch that is checked out from the local repository. 164 | 165 | *Example:* 166 | 167 | The following example reports the current branch that is being worked within in the local repository. In this case the current branch is `my_feature` and is indicated by the asterisk before the branch name: 168 | 169 | ---- 170 | $ git branch 171 | dev 172 | main 173 | * my_feature 174 | ---- 175 | 176 | === Viewing remote branches 177 | 178 | ---- 179 | git branch -r 180 | ---- 181 | 182 | Displays all the branches in the remote repository. 
*Example:*

The following example uses the command `git clone` to download the content of the remote repository found at `https://github.com/redhat-developer/developers.redhat.com` into the local directory `./rh`:

----
git clone https://github.com/redhat-developer/developers.redhat.com.git ./rh
----

=== git pull

----
git pull [options]
----

When called within a local repository, downloads the latest, current assets from the associated remote repository.

*Example:*

The following example uses `git pull` to download files from the associated remote repository. Because the contents of the remote repository and the local repository are the same, the command responds with the message `Already up to date.`:

----
$ git pull
Already up to date.
----

=== git fetch

----
git fetch [options] <repository>
----

Downloads code and assets from the remote repository to the local machine without overwriting the existing local code and assets in the current branch. If the optional `<repository>` parameter is not provided, `git fetch` is executed against the Git repository associated with the present working directory.

*Example:*

The following example uses the `git fetch` command to download updated assets from the corresponding remote repository without merging the deltas into the branches of the local repository:

----
$ git fetch
----
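Because `git fetch` does not change your working branch, a common follow-up is to review the downloaded commits before merging them. A small sketch, assuming the remote is named `origin` and the branch being compared is `main`:

----
$ git fetch origin

$ git log --oneline main..origin/main
----

The `main..origin/main` range lists the commits that exist on the remote-tracking branch but are not yet part of the local `main` branch.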
=== git log

----
git log [options]
----

Displays the Git log, which contains a history of all the transactions in the repository.

*Example:*

The following example uses `git log` with the `--oneline` option to show all activities in the repository in an abbreviated format:

----
$ git log --oneline
80f6259 (HEAD -> main) adding newfile.txt to main
665ecf1 (origin/your-feature, origin/main, origin/dev, origin/HEAD) reorganizing repo structure
c9b791c reorganizing repo structure
af0f400 Update eapxp-quickstarts.yaml
28d8577 Update README.md
f8be8a1 Update README.md
456b537 Update README.md
415ce57 Update eapxp-quickstarts.yaml
70233e6 Update README.md
9263b26 Update README.md
886f7c1 Update README.md
3a0f42d Update README.md
1768b69 Example YAML: Develop MicroProfile app on JBoss EAP 7.3
10b9670 Added directions on how to create an asset inventory in the README
41e85e1 Initial commit
----

== Working with branches

The following sections describe the various `git branch` command expressions you can use to work with branches in a repository.

=== Getting the current branch name

----
git branch
----

Shows all branches in the local repository, flagging the branch that is currently checked out from the local repository.

*Example:*

The following example reports the current branch being worked on in the local repository. In this case the current branch is `my_feature`, indicated by the asterisk before the branch name:

----
$ git branch
  dev
  main
* my_feature
----

=== Viewing remote branches

----
git branch -r
----

Displays all the branches in the remote repository.

*Example:*

The following example uses the `git branch` command along with the `-r` option to display the names of all branches on the remote repository:

----
$ git branch -r
  origin/HEAD -> origin/main
  origin/main
  origin/my_feature
  origin/your-feature
----

=== Viewing all branches

----
git branch -a
----

Displays all branches in both the local and remote repositories.

*Example:*

The following example displays all branches, local and remote, for the repository associated with the current working directory. The `*` symbol indicates the current working branch, in this case `my_feature`:

----
$ git branch -a
  dev
  main
* my_feature
  remotes/origin/HEAD -> origin/main
  remotes/origin/main
  remotes/origin/my_feature
  remotes/origin/your-feature
----

=== Creating a branch in the local repository

----
git branch <new-branch> <base-branch>
----

Creates a new branch. If the optional `<base-branch>` parameter is not provided, the new branch is derived from the current working branch.

*Example:*

The following example creates a branch named `dev` that has the directories and files from the existing branch named `main`:

----
$ git branch dev main
----

=== Changing branches

----
git checkout <branch-name>
----

Retrieves the files in the branch named `<branch-name>` in the local repository. Once `git checkout` is called, developers can work on the files in that branch.

*Example:*

The following example changes the current working branch to the branch named `dev`. The `checkout` command is followed by a `git branch` command to verify the branch change. The `*` symbol indicates the current working branch, in this case `dev`:

----
$ git checkout dev
Switched to branch 'dev'

$ git branch
* dev
  main
  my_feature
----
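You can also create a new branch and switch to it in a single step by passing the `-b` option to `git checkout`. The branch name `my_fix` here is just an illustration:

----
$ git checkout -b my_fix
Switched to a new branch 'my_fix'
----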
Then a new file named `config.json` is added to the directory. Finally the command `git clean` is called with the `-f` option to reset the directory to the local repository's original state, removing the added file. The `ls -1` command is called again to show that the file `config.json` has been removed from the working directory: 381 | 382 | ---- 383 | $ ls -1 384 | readme.md 385 | 386 | $ echo "{"isCool": 1}" > config.json 387 | 388 | $ ls -1 389 | config.json 390 | readme.md 391 | 392 | $ git clean -f 393 | Removing config.json 394 | 395 | $ ls -1 396 | readme.md 397 | ---- 398 | 399 | == Rolling back to the most recent commit 400 | 401 | ---- 402 | git revert [options] 403 | ---- 404 | 405 | Reverts the filesystem associated with a local `.git` repository to a previous state. Also updates changes to the local `git` log. 406 | 407 | *Example:* 408 | 409 | The following example displays the files in the directory associated with a local repository. Then a new file named `newfile.txt` is added to the directory and committed to the local repository. The contents of the directory are listed again. The `git log` command shows the latest Git activity. 410 | 411 | Then `git revert 98d7128 --no-edit` reverts the state of the directory to the point before the commit `98d7128` was executed. The contents of the reverted directory are displayed. The reversion activity has been captured and is displayed by calling `git log`: 412 | 413 | ---- 414 | $ ls -1 415 | config.json 416 | readme.md 417 | 418 | $ touch newfile.txt 419 | $ git add . 420 | $ git commit -m "adding a file named newfile.txt" 421 | 422 | $ ls -1 423 | config.json 424 | newfile.txt 425 | readme.md 426 | 427 | $ git log --oneline 428 | 98d7128 (HEAD -> main) adding a file named newfile.txt 429 | e5cf841 adding configuration file 430 | 665ecf1 (origin/your-feature, origin/main, origin/dev, origin/HEAD) reorganizing repo structure 431 | 432 | $ git revert 98d7128 --no-edit 433 | Removing newfile.txt 434 | [main 3f10573] Revert "adding a file named newfile.txt" 435 | Date: Tue Feb 15 09:13:06 2022 -0800 436 | 1 file changed, 0 insertions(+), 0 deletions(-) 437 | delete mode 100644 newfile.txt 438 | 439 | $ ls 440 | config.json 441 | readme.md 442 | 443 | $ git log --oneline 444 | 3f10573 (HEAD -> main) Revert "adding a file named newfile.txt" 445 | 98d7128 adding a file named newfile.txt 446 | e5cf841 adding configuration file 447 | 665ecf1 (origin/your-feature, origin/main, origin/dev, origin/HEAD) reorganizing repo structure 448 | 449 | ---- 450 | 451 | == Merging content between branches 452 | 453 | The following sections describe how to merge files between branches, rebase files between branches, and invoke the a `diff` tool when merge conflicts occur. 454 | 455 | === git merge 456 | 457 | ---- 458 | git merge [options] 459 | ---- 460 | 461 | Merges the files and directories from `` into the ``. If the `` parameter is not provided, the files and directories in the `` are merged into the current branch. 462 | 463 | *Example:* 464 | 465 | The following example shows the current branch as well as the files in that branch. The `dev` branch has two files, `newfile.txt` and `readme.md`. 466 | 467 | Then the branch is changed to `main`. The `main` branch has one file, `readme.md`. The command `git merge dev --no-edit` merges the files from the `dev` branch into the the current `main` branch. The option `--no-edit` is used to avoid having to write a message describing the merge. 
Finally, the `ls -1` command shows that the merge successfully added `newfile.txt` from the `dev` branch to `main`: 468 | 469 | ---- 470 | $ git branch 471 | * dev 472 | main 473 | 474 | $ ls -1 475 | newfile.txt 476 | readme.md 477 | 478 | $ git checkout main 479 | 480 | $ ls -1 481 | readme.md 482 | 483 | $ git merge dev --no-edit 484 | Merge made by the 'recursive' strategy. 485 | newfile.txt | 0 486 | 1 file changed, 0 insertions(+), 0 deletions(-) 487 | create mode 100644 newfile.txt 488 | 489 | $ ls -1 490 | newfile.txt 491 | readme.md 492 | ---- 493 | 494 | === git rebase 495 | 496 | ---- 497 | git rebase [options] 498 | ---- 499 | 500 | Merges one repository onto another while also transferring the commits from the merge-from branch onto the merge-to branch. Operationally, Git can delete commits from one branch while adding them to another. 501 | 502 | *Example:* 503 | 504 | The following example checks out the branch `dev` and then rebases the updates made in the branch `new_feature` onto the branch `dev`. The commits that were part of `new_feature` are now part of `dev`: 505 | 506 | ---- 507 | $ git checkout dev 508 | Switched to branch 'dev' 509 | 510 | $ git rebase new_feature 511 | Successfully rebased and updated refs/heads/dev. 512 | ---- 513 | 514 | === git mergetool 515 | 516 | ---- 517 | git mergetool 518 | ---- 519 | 520 | Invokes an editing tool to resolve merge conflicts between files. If no `` parameter is provided, `mergetool` uses the globally configured merge editor. You can register a merge editor using the following command: 521 | 522 | `git config --global merge.tool vimdiff` 523 | 524 | In this case, the command indicates that `vimdiff` should be used by default to show diffs between branches. 525 | 526 | You also use an alterative merge editor by using the `--tool` option. 527 | 528 | *Example:* 529 | 530 | The following example creates a merge conflict and then invokes `mergetool` using the `--tool` option to run merge editor `vimdiff`. 531 | 532 | [] 533 | **Note:**: The `vimdiff` tool has to be installed on the computer prior to using it with `mergetool`. The output that follows is an emulation of the command-line interface for `vimdiff`. 534 | 535 | ---- 536 | 537 | $ git merge dev 538 | Auto-merging newfile.txt 539 | CONFLICT (content): Merge conflict in newfile.txt 540 | Automatic merge failed; fix conflicts and then commit the result 541 | 542 | $ git mergetool --tool=vimdiff 543 | 544 | Hit return to start merge resolution tool (vimdiff): 545 | +-----------------------------------------------------+ 546 | | MAIN | BASE | DEV | 547 | +-----------------|--------------|--------------------+ 548 | | I am cool | <<<<<<< HEAD | He was cool | 549 | | | I am cool | | 550 | | |======= | | 551 | | |I was cool | | 552 | | |>>>>>>> dev | | 553 | +-----------------------------------------------------+ 554 | 555 | ---- 556 | 557 | == Change control 558 | 559 | The following sections show some ways to keep track of changes in Git. 560 | 561 | === git blame 562 | 563 | ---- 564 | git blame [options] 565 | ---- 566 | 567 | Displays a list of recent commits on a file by committer along with changes in the file. By default each list item displays the commit UUID, the committer, the date of commit, the locale, and the actual content added. 568 | 569 | *Example:* 570 | 571 | The following example uses `git blame` to list recent commits on the file `readme.md`. 
Note that commit `2a86f76f` (the third line in the output) was the most recent change, because its timestamp `2022-02-16 08:41:07` is the most recent: 572 | 573 | ---- 574 | $ git blame readme.md 575 | c9b791ce (John Lennon 2022-02-08 11:00:30 -0800 1) # RHEL 8 Cheat Sheet: Additional Resources 576 | c9b791ce (John Lennon 2022-02-08 11:00:30 -0800 2) 577 | 2a86f76f (Mick Jagger 2022-02-16 08:41:07 -0800 3) Contains a list of additional resources. 578 | 4dfb6c37 (Mick Jagger 2022-02-16 08:32:12 -0800 4) 579 | 4dfb6c37 (Mick Jagger 2022-02-16 08:32:12 -0800 5) It is still a work in progress. 580 | 4dfb6c37 (Mick Jagger 2022-02-16 08:32:12 -0800 6) 581 | ---- 582 | 583 | === git tag 584 | 585 | ---- 586 | git tag [options] 587 | ---- 588 | 589 | Tags a repository. This command is usually used to mark a release. If the `` parameter is not provided, the command displays a list of existing tags. 590 | 591 | *Examples:* 592 | 593 | The following example uses `git tag` to declare a tag with the value `v1.0`. The option `-m` is used to apply a message to the tag: 594 | 595 | ---- 596 | $ git tag v1.0 -m "first release of project" 597 | ---- 598 | 599 | The following example uses `git tag` to display a list of existing tags on the repository. The `-n` option is used to show the user-defined message associated with each tag: 600 | 601 | ---- 602 | $ git tag -n 603 | v1.0 first release of project 604 | ---- 605 | -------------------------------------------------------------------------------- /images/amq-broker/amq-broker-replicated-journal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/images/amq-broker/amq-broker-replicated-journal.png -------------------------------------------------------------------------------- /images/amq-broker/amq-broker-shared-journal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/images/amq-broker/amq-broker-shared-journal.png -------------------------------------------------------------------------------- /images/df-h.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/images/df-h.png -------------------------------------------------------------------------------- /images/df.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/images/df.png -------------------------------------------------------------------------------- /images/du-h.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/images/du-h.png -------------------------------------------------------------------------------- /images/du.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/images/du.png -------------------------------------------------------------------------------- /images/git-workflow.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/images/git-workflow.png -------------------------------------------------------------------------------- /images/top.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-developer/cheat-sheets/b93949f60e5f7fc1137c8e77a2b081d013ef76a0/images/top.png -------------------------------------------------------------------------------- /insights-api.adoc: -------------------------------------------------------------------------------- 1 | = Red Hat Insights API 2 | :experimental: true 3 | :product-name: 4 | 5 | Red Hat Insights allows you to investigate and make changes to the configuration of hosts managed by Red Hat through REST APIs. This cheat sheet covers the basic APIs. All examples are performed on https://console.redhat.com/api/[https://console.redhat.com/api/] using `v1` endpoints. Please refer to https://console.redhat.com/docs/api-docs[https://console.redhat.com/docs/api-docs] for the latest API specifications and deprecations. 6 | 7 | == Authentication 8 | 9 | The Insights API provides secure REST services over HTTPS endpoints. This protects authentication credentials in transit. 10 | 11 | === Basic Authentication 12 | 13 | . Create a base64 encoding for your username and password, submitting them with the syntax `:`: 14 | + 15 | ---- 16 | echo -n 'admin:' | openssl base64 17 | ---- 18 | 19 | . Include the output in an `Authorization: Basic` HTTP header in your request: 20 | + 21 | ---- 22 | Authorization: Basic 23 | ---- 24 | 25 | === Red Hat API Token Authentication (recommended) 26 | 27 | . Log in to the https://access.redhat.com/[Red Hat Customer Portal] with your username and password. 28 | . Generate an offline token using https://access.redhat.com/management/api[Red Hat API Tokens] by following the instructions. 29 | . Generate an access token, submitting the offline token generated in the previous step: 30 | + 31 | ---- 32 | curl https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token -d grant_type=refresh_token -d client_id=rhsm-api -d refresh_token= 33 | ---- 34 | . Include the access_token generated in the previous step in an `Authorization: Bearer` HTTP header in your request: 35 | + 36 | ---- 37 | Authorization: Bearer 38 | ---- 39 | 40 | == Common Activities 41 | 42 | === Host Inventory 43 | 44 | Get all hosts in the account: 45 | ---- 46 | GET /inventory/v1/hosts 47 | ---- 48 | 49 | Get system details (e.g., after registration, using the UUID provided): 50 | ---- 51 | GET /inventory/v1/hosts/ 52 | ---- 53 | 54 | Get facts about all systems’ system profiles: 55 | ---- 56 | GET /inventory/v1/hosts//system_profile 57 | ---- 58 | 59 | Get facts about a specific system’s system profile (e.g., `last_boot_time` and `os_kernel_version`): 60 | ---- 61 | GET /inventory/v1/hosts//system_profile+?fields[system_profile]=last_boot_time,os_kernel_version 62 | ---- 63 | 64 | NOTE: Only the following facts are syndicated at present: `bios_release_date`, `bios_vendor`, `bios_version`, `infrastructure_type`, `operating_system`, `owner_id`, `rhsm`, `sap_sids`, `sap_system`. 
65 | 66 | Get a system’s tags: 67 | ---- 68 | GET /inventory/v1/hosts//tags 69 | ---- 70 | 71 | Get a subset of systems (using a filter on system profile): 72 | ---- 73 | GET /inventory/v1/hosts?filter[system_profile][infrastructure_type]=virtual 74 | ---- 75 | 76 | Delete a system: 77 | ---- 78 | DELETE /inventory/v1/hosts/ 79 | ---- 80 | 81 | == Advisor 82 | 83 | Get all active hits for the account: 84 | ---- 85 | GET /insights/v1/rule/ 86 | ---- 87 | 88 | Get all rule hits on hosts: 89 | ---- 90 | GET /insights/v1/export/hits/ 91 | ---- 92 | 93 | NOTE: Exports are available as CSV and JSON. 94 | 95 | Get all active hits with Ansible remediation playbooks: 96 | ---- 97 | GET /insights/v1/export/hits?has_playbook=true 98 | ---- 99 | 100 | Get summary of all hits for a given system : 101 | ---- 102 | GET /insights/v1/system/ 103 | ---- 104 | 105 | == Drift 106 | 107 | Get defined baselines: 108 | ---- 109 | GET /system-baseline/v1/baselines 110 | ---- 111 | 112 | Create a new baseline by passing a JSON request body with a `baseline_facts` or a `inventory_uuid` or `hsp_uuid` to copy the baseline facts from e.g. 113 | ---- 114 | POST /system-baseline/v1/baselines 115 | { 116 | "baseline_facts": [ 117 | { 118 | "name": "arch", 119 | "value": "x86_64" 120 | } 121 | ], 122 | "display_name": "my_baseline" 123 | } 124 | ---- 125 | 126 | NOTE: DELETE and PATCH operations are also available on `/system-baseline/v1/baselines/`. 127 | 128 | Run a comparison, passing a list of systems, baselines, historical system profiles, and a reference for comparison (multiple UUIDs or other items are formatted as comma-separated lists): 129 | ---- 130 | GET /drift/v1/comparison_report?system_ids[]=,baseline_ids[]=,historical_system_profile_ids[]=,reference_id= 131 | ---- 132 | 133 | Get historical system profiles on a system: 134 | ---- 135 | GET /historical-system-profiles/v1/systems/ 136 | ---- 137 | 138 | Get a specific historical system profile on a system: 139 | ---- 140 | GET /historical-system-profiles/v1/profiles/ 141 | ---- 142 | 143 | == Vulnerabilities 144 | 145 | Get vulnerabilities affecting systems in the account: 146 | ---- 147 | GET /vulnerability/v1/vulnerabilities/cves?affecting=true 148 | ---- 149 | 150 | Get executive reports, e.g., CVEs by severity, top CVEs, etc.: 151 | ---- 152 | GET /vulnerability/v1/report/executive 153 | ---- 154 | 155 | == Compliance 156 | 157 | Get systems associated with Security Content Automation Protocol (SCAP) policies: 158 | ---- 159 | GET /compliance/v1/systems 160 | ---- 161 | 162 | Get systems' compliance/failures for defined reports: 163 | ---- 164 | GET /compliance/v1/profiles 165 | ---- 166 | 167 | == Policies 168 | 169 | Get all defined policies: 170 | ---- 171 | GET /policies/v1/policies 172 | ---- 173 | 174 | Create a new policy: 175 | ---- 176 | POST /policies/v1/policies 177 | { 178 | "name": "my_policy", 179 | "description": "My policy", 180 | "isEnabled": true, 181 | "conditions": "arch = \"x86_64\"", 182 | "actions": "notification" 183 | } 184 | ---- 185 | 186 | NOTE: DELETE and PUT operations are also available on `/policies/`. 
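As an illustrative sketch, the policy-creation request above could be sent with `curl` as follows; the bearer token is a placeholder:

----
# Sketch only: create the example policy through the Policies API.
curl -s -X POST \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "my_policy", "description": "My policy", "isEnabled": true, "conditions": "arch = \"x86_64\"", "actions": "notification"}' \
  https://console.redhat.com/api/policies/v1/policies
----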
187 | 188 | Get all systems triggering a policy: 189 | ---- 190 | GET /policies/v1/policies//history/trigger 191 | ---- 192 | 193 | === Patches 194 | 195 | Get all systems with applicable advisories (patches available): 196 | ---- 197 | GET /patch/v1/advisories 198 | ---- 199 | 200 | Get all applicable advisories for a specific system: 201 | ---- 202 | GET /patch/v1/systems//advisories 203 | ---- 204 | 205 | === Subscriptions 206 | 207 | Get all subscribed Red Hat Enterprise Linux systems matching filters (e.g., Premium SLA, Production usage): 208 | ---- 209 | GET /rhsm-subscriptions/v1/hosts/products/RHEL?sla=Premium&usage=Production 210 | ---- 211 | 212 | === Remediations 213 | 214 | Get a list of defined remediations: 215 | ---- 216 | GET /remediations/v1/remediations 217 | ---- 218 | 219 | Create a new remediation and assign systems: 220 | ---- 221 | POST /remediations/v1/remediations 222 | { 223 | "name": "Fix Critical CVEs", 224 | "archived": true, 225 | "auto_reboot": true, 226 | "add": { 227 | "issues": [ 228 | { 229 | "id": "advisor:CVE_2017_6074_kernel|KERNEL_CVE_2017_6074", 230 | "resolution": "mitigate", 231 | "systems": [ 232 | "" 233 | ] 234 | } 235 | ] 236 | } 237 | } 238 | ---- 239 | 240 | NOTE: DELETE and PATCH operations are also available on `/remediations/v1/remediations/`. 241 | 242 | Get an Ansible remediation playbook: 243 | ---- 244 | GET /remediations/v1/remediations//playbook 245 | ---- 246 | 247 | Execute a remediation: 248 | ---- 249 | POST /remediations/v1/remediations//playbook_runs 250 | ---- 251 | 252 | === Integrations and Notifications 253 | 254 | Get event log history for a list of last triggered Insights events and actions: 255 | ---- 256 | GET /notifications/v1/notifications/events?endDate=2021-11-23&limit=20&offset=0&sortBy=created%3ADESC&startDate=2021-11-09 257 | ---- 258 | 259 | Get list of configured third party integrations: 260 | ---- 261 | GET /integrations/v1/endpoints 262 | ---- 263 | 264 | == Python Examples 265 | 266 | The following Python code interacts with the Insights API using the `requests` library to abstract away the complexity of handling HTTP requests. 267 | 268 | ---- 269 | $ python -m pip install requests 270 | ---- 271 | 272 | === Authentication 273 | 274 | ---- 275 | >>> headers = {'Authorization': 'Basic '} 276 | ---- 277 | or 278 | ---- 279 | >>> headers = {'Authorization': 'Bearer '} 280 | ---- 281 | 282 | === GET 283 | 284 | ---- 285 | >>> import requests 286 | >>> insights_api_url = "https://console.redhat.com/api/inventory/v1/hosts" 287 | >>> response = requests.get(insights_api_url, headers=headers) 288 | >>> response.status_code 289 | 200 290 | >>> response.json() 291 | {'total': 1195, 'count': 50, 'page': 1, 'per_page': 50, 'results': [{'insights_id': '', [...] 
292 | ---- 293 | 294 | === POST 295 | 296 | ---- 297 | >>> import requests 298 | >>> insights_api_url = "https://console.redhat.com/api/system-baseline/v1/baselines" 299 | >>> baseline = {"baseline_facts": [{"name": "arch", "value": "x86_64"}], "display_name": "my_baseline"} 300 | >>> response = requests.post(insights_api_url, headers=headers, json=baseline) 301 | >>> response.status_code 302 | 200 303 | >>> response.json() 304 | {'account': '', 'baseline_facts': [{'name': 'arch', 'value': 'x86_64'}], 'created': '2021-11-29T21:06:33.630905Z', 'display_name': 'my_baseline', 'fact_count': 1, 'id': '', 'mapped_system_count': 0, 'notifications_enabled': True, 'updated': '2021-11-29T21:06:33.630910Z'} 305 | ---- 306 | 307 | == Ansible Example 308 | 309 | The following Ansible playbook uses the `ansible.builtin.uri` module to interact with the Insights API. 310 | 311 | ---- 312 | --- 313 | - hosts: localhost 314 | connection: local 315 | gather_facts: no 316 | 317 | vars: 318 | insights_api_url: "https://console.redhat.com/api" 319 | ---- 320 | ---- 321 | insights_auth: "Basic " 322 | ---- 323 | or 324 | ---- 325 | insights_auth: "Bearer " 326 | ---- 327 | ---- 328 | tasks: 329 | - name: Get Inventory 330 | uri: 331 | url: "{{ insights_api_url }}/inventory/v1/hosts/" 332 | method: GET 333 | return_content: yes 334 | headers: 335 | Authorization: "{{ insights_auth }}" 336 | status_code: 200 337 | register: result 338 | 339 | - name: Display inventory 340 | debug: 341 | var: result.json 342 | ---- 343 | -------------------------------------------------------------------------------- /kubernetes-java-operators.adoc: -------------------------------------------------------------------------------- 1 | = Writing a Kubernetes Operator in Java 2 | :experimental: true 3 | :product-name: 4 | :version: 1.4.0 5 | 6 | This cheat sheet covers how to create a Kubernetes Operator in Java using Quarkus. 7 | 8 | [source, bash-shell, subs=attributes+] 9 | ---- 10 | mvn "io.quarkus:quarkus-maven-plugin:{version}.Final:create" \ 11 | -DprojectGroupId="org.acme" \ 12 | -DprojectArtifactId="pizza-operator" \ 13 | -DprojectVersion="1.0-SNAPSHOT" \ 14 | -Dextensions="kubernetes, kubernetes-client" \ 15 | ---- 16 | 17 | TIP: You can generate the project in https://code.quarkus.io/ and selecting `kubernetes` and `kubernetes-client` extensions. 18 | 19 | == Defining the CRD 20 | 21 | First, you need to create a CRD defining the custom resource: 22 | 23 | [source, yaml] 24 | ---- 25 | apiVersion: apiextensions.k8s.io/v1beta1 26 | kind: CustomResourceDefinition 27 | metadata: 28 | name: pizzas.mykubernetes.acme.org 29 | labels: 30 | app: pizzamaker 31 | mylabel: stuff 32 | spec: 33 | group: mykubernetes.acme.org 34 | scope: Namespaced 35 | version: v1beta2 36 | names: 37 | kind: Pizza 38 | listKind: PizzaList 39 | plural: pizzas 40 | singular: pizza 41 | shortNames: 42 | - pz 43 | ---- 44 | 45 | An example of a pizza resource: 46 | 47 | [source, yaml] 48 | ---- 49 | apiVersion: mykubernetes.acme.org/v1beta2 50 | kind: Pizza 51 | metadata: 52 | name: alexmeats 53 | spec: 54 | toppings: 55 | - mozzarella 56 | - pepperoni 57 | - sausage 58 | - bacon 59 | sauce: extra 60 | ---- 61 | 62 | == Defining the Java code 63 | 64 | === Parsing of the pizza resource 65 | 66 | You need to create a parser for reading the content of pizza resource. 
67 | 68 | [source, java] 69 | ---- 70 | @JsonDeserialize 71 | public class PizzaResourceSpec { 72 | 73 | @JsonProperty("toppings") 74 | private List toppings = new ArrayList<>(); 75 | 76 | @JsonProperty("sauce") 77 | private String sauce; 78 | 79 | // getters/setters 80 | } 81 | 82 | @JsonDeserialize 83 | public class PizzaResourceStatus {} 84 | 85 | @JsonDeserialize 86 | public class PizzaResource extends CustomResource { 87 | 88 | private PizzaResourceSpec spec; 89 | private PizzaResourceStatus status; 90 | 91 | // getters/setters 92 | } 93 | 94 | @JsonSerialize 95 | public class PizzaResourceList extends CustomResourceList {} 96 | 97 | public class PizzaResourceDoneable extends CustomResourceDoneable { 98 | public PizzaResourceDoneable(PizzaResource resource, Function function) 99 | { super(resource, function);} 100 | } 101 | ---- 102 | 103 | === Registering the CRD in Kubernetes Client 104 | 105 | [source, java] 106 | ---- 107 | public class KubernetesClientProducer { 108 | 109 | @Produces 110 | @Singleton 111 | @Named("namespace") 112 | String findMyCurrentNamespace() throws IOException { 113 | return new String(Files.readAllBytes(Paths.get("/var/run/secrets/kubernetes.io/serviceaccount/namespace"))); 114 | } 115 | 116 | @Produces 117 | @Singleton 118 | KubernetesClient makeDefaultClient(@Named("namespace") String namespace) { 119 | return new DefaultKubernetesClient().inNamespace(namespace); 120 | } 121 | 122 | @Produces 123 | @Singleton 124 | MixedOperation> 125 | makeCustomHelloResourceClient(KubernetesClient defaultClient) { 126 | KubernetesDeserializer.registerCustomKind("mykubernetes.acme.org/v1beta2", "Pizza", PizzaResource.class); 127 | CustomResourceDefinition crd = defaultClient.customResourceDefinitions().list().getItems().stream().findFirst() 128 | .orElseThrow(RuntimeException::new); 129 | return defaultClient.customResources(crd, PizzaResource.class, PizzaResourceList.class, 130 | PizzaResourceDoneable.class); 131 | } 132 | } 133 | ---- 134 | 135 | === Implement the Operator 136 | 137 | Operator is the logic that is executed when the custom resource (pizza) is applied. 138 | In this case, a pod is instantiated with pizza-maker image. 
139 | 140 | [source, java] 141 | ---- 142 | public class PizzaResourceWatcher { 143 | 144 | @Inject 145 | KubernetesClient defaultClient; 146 | 147 | @Inject 148 | MixedOperation> crClient; 149 | 150 | void onStartup(@Observes StartupEvent event) { 151 | 152 | crClient.watch(new Watcher() { 153 | @Override 154 | public void eventReceived(Action action, PizzaResource resource) { 155 | if (action == Action.ADDED) { 156 | final String app = resource.getMetadata().getName(); 157 | final String sauce = resource.getSpec().getSauce(); 158 | final List toppings = resource.getSpec().getToppings(); 159 | 160 | final Map labels = new HashMap<>(); 161 | labels.put("app", app); 162 | 163 | final ObjectMetaBuilder objectMetaBuilder = new ObjectMetaBuilder().withName(app + "-pod") 164 | .withNamespace(resource.getMetadata().getNamespace()).withLabels(labels); 165 | 166 | final ContainerBuilder containerBuilder = new ContainerBuilder().withName("pizza-maker") 167 | .withImage("quay.io/lordofthejars/pizza-maker:1.0.0").withCommand("/work/application") 168 | .withArgs("--sauce=" + sauce, "--toppings=" + String.join(",", toppings)); 169 | 170 | final PodSpecBuilder podSpecBuilder = new PodSpecBuilder().withContainers(containerBuilder.build()) 171 | .withRestartPolicy("Never"); 172 | 173 | final PodBuilder podBuilder = new PodBuilder().withMetadata(objectMetaBuilder.build()) 174 | .withSpec(podSpecBuilder.build()); 175 | 176 | final Pod pod = podBuilder.build(); 177 | defaultClient.resource(pod).createOrReplace(); 178 | } 179 | } 180 | 181 | @Override 182 | public void onClose(KubernetesClientException e) { 183 | } 184 | }); 185 | } 186 | } 187 | ---- 188 | 189 | == Deploy Operator 190 | 191 | You need to package and create a container with all the operator code and deploy it to the cluster. 192 | 193 | [source, yaml] 194 | ---- 195 | apiVersion: rbac.authorization.k8s.io/v1 196 | kind: ClusterRole 197 | metadata: 198 | name: quarkus-operator-example 199 | rules: 200 | - apiGroups: 201 | - '' 202 | resources: 203 | - pods 204 | verbs: 205 | - get 206 | - list 207 | - watch 208 | - create 209 | - update 210 | - delete 211 | - patch 212 | - apiGroups: 213 | - apiextensions.k8s.io 214 | resources: 215 | - customresourcedefinitions 216 | verbs: 217 | - list 218 | - apiGroups: 219 | - mykubernetes.acme.org 220 | resources: 221 | - pizzas 222 | verbs: 223 | - list 224 | - watch 225 | --- 226 | apiVersion: v1 227 | kind: ServiceAccount 228 | metadata: 229 | name: quarkus-operator-example 230 | --- 231 | apiVersion: rbac.authorization.k8s.io/v1 232 | kind: ClusterRoleBinding 233 | metadata: 234 | name: quarkus-operator-example 235 | subjects: 236 | - kind: ServiceAccount 237 | name: quarkus-operator-example 238 | namespace: default 239 | roleRef: 240 | kind: ClusterRole 241 | name: quarkus-operator-example 242 | apiGroup: rbac.authorization.k8s.io 243 | --- 244 | apiVersion: apps/v1 245 | kind: Deployment 246 | metadata: 247 | name: quarkus-operator-example 248 | spec: 249 | selector: 250 | matchLabels: 251 | app: quarkus-operator-example 252 | replicas: 1 253 | template: 254 | metadata: 255 | labels: 256 | app: quarkus-operator-example 257 | spec: 258 | serviceAccountName: quarkus-operator-example 259 | containers: 260 | - image: quay.io/lordofthejars/pizza-operator:1.0.0 261 | name: quarkus-operator-example 262 | imagePullPolicy: IfNotPresent 263 | ---- 264 | 265 | Run the `kubectl apply -f pizza-crd.yaml` command to register the CRD in the cluster. 
266 | Run the `kubectl apply -f deploy.yaml` command to register the operator. 267 | 268 | == Running the example 269 | 270 | Apply the custom resource by running: `kubectl apply -f meat-pizza.yaml` and check the output of `kubectl get pods` command. -------------------------------------------------------------------------------- /linux-intermediate.adoc: -------------------------------------------------------------------------------- 1 | = Intermediate Linux Commands Cheat Sheet 2 | :experimental: true 3 | :product-name: 4 | :version: 1.0.0 5 | 6 | This cheat sheet presents a collection of Linux commands and executables typically used by developers who want to move beyond the basics of working with the Linux operating system. For the purpose of this cheat sheet, _intermediate use_ involves managing processes, users, and groups on a particular machine running under Linux, as well as monitoring disk and network usage. Commands in this cheat sheet are organized by category. 7 | 8 | == Console and output management commands 9 | 10 | Commands in this section apply to working in a terminal window console and illustrate output from a computer or virtual machine running the Linux operating system. 11 | 12 | === history 13 | 14 | `history [options]` 15 | 16 | Displays a list of commands executed on the system. The `history` command can also be used to manipulate the history list and the way that history information is displayed. 17 | 18 | *Example:* 19 | 20 | The following example uses the `history` command to show a list of commands that have been executed on the system. The example pipes the result to the `more` command, which shows the first 15 lines of output using the `-15` option: 21 | 22 | ``` 23 | $ history | more -15 24 | 24 diag 25 | 25 ss 26 | 26 uname 27 | 27 lscpu 28 | 28 timedatectl 29 | 29 date 30 | 30 chronyc 31 | 31 lshw 32 | 32 sosreport 33 | 33 sos 34 | 34 tlog 35 | 35 fsck 36 | 36 fsck --help 37 | 37 fsck -A 38 | 38 sudo fsck -A 39 | --More-- 40 | ``` 41 | 42 | === more 43 | 44 | `more [options] ` 45 | 46 | Allows a user to view and traverse the contents of a file or stdout. The `more` command runs within its own command-line user interface. To exit the process, press the `q` key. 47 | 48 | *Examples:* 49 | 50 | This example uses the `more` command to display the first four lines of the file `/etc/passwd`. Users can then traverse the remainder of the file one line at a time by striking the `` key: 51 | 52 | ``` 53 | $ more -4 /etc/passwd 54 | root:x:0:0:root:/root:/bin/bash 55 | bin:x:1:1:bin:/bin:/sbin/nologin 56 | daemon:x:2:2:daemon:/sbin:/sbin/nologin 57 | adm:x:3:4:adm:/var/adm:/sbin/nologin 58 | --More--(5%) 59 | ``` 60 | 61 | === top 62 | 63 | `top [options]` 64 | 65 | Displays information about the running Linux processes. 66 | 67 | *Example:* 68 | 69 | The following command displays the `top` command with the result piped to the `more` command in order to view the first portion of the output: 70 | 71 | ``` 72 | $ top | more 73 | top - 12:02:29 up 5 days, 20:20, 2 users, load average: 0.01, 0.02, 0.00 74 | Tasks: 201 total, 2 running, 199 sleeping, 0 stopped, 0 zombie 75 | %Cpu(s): 0.0 us, 6.2 sy, 0.0 ni,93.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st 76 | MiB Mem : 7770.8 total, 5409.8 free, 1240.8 used, 1120.2 buff/cache 77 | MiB Swap: 8092.0 total, 8092.0 free, 0.0 used. 
6205.6 avail Mem 78 | PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 79 | 82399 guest 20 0 65584 5120 4212 R 5.9 0.1 0:00.02 top 80 | 1 root 20 0 175932 14212 8924 S 0.0 0.2 0:06.21 systemd 81 | 2 root 20 0 0 0 0 S 0.0 0.0 0:00.13 kthreadd 82 | 3 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_gp 83 | 4 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 rcu_par_gp 84 | 6 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 kworker/0:0H-events_highpri 85 | 9 root 0 -20 0 0 0 I 0.0 0.0 0:00.00 mm_percpu_wq 86 | 10 root 20 0 0 0 0 S 0.0 0.0 0:02.73 ksoftirqd/0 87 | 11 root 20 0 0 0 0 R 0.0 0.0 0:01.10 rcu_sched 88 | 12 root rt 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 89 | 13 root rt 0 0 0 0 S 0.0 0.0 0:00.04 watchdog/0 90 | 14 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0 91 | 16 root 20 0 0 0 0 S 0.0 0.0 0:00.00 kdevtmpfs 92 | 93 | --More-- 94 | ``` 95 | 96 | == Disk management commands 97 | 98 | Commands in this section apply to working with disks, devices, and volumes on a computer running the Linux operating system. 99 | 100 | === df 101 | 102 | `df [options] <filename>` 103 | 104 | Shows the amount of disk space used and available according to the file system that represents a particular disk device mount. If no file name is given, the space available on all mounted file systems is displayed. 105 | 106 | *Example:* 107 | 108 | The following example shows the invocation and result of `df` displaying all mounted file systems. Disk space is shown in 1K blocks (note that `$` is the command-line prompt symbol): 109 | 110 | ``` 111 | $ df 112 | Filesystem 1K-blocks Used Available Use% Mounted on 113 | devtmpfs 3949180 0 3949180 0% /dev 114 | tmpfs 3978636 0 3978636 0% /dev/shm 115 | tmpfs 3978636 9464 3969172 1% /run 116 | tmpfs 3978636 0 3978636 0% /sys/fs/cgroup 117 | /dev/mapper/rhel-root 50065528 5588744 44476784 12% / 118 | /dev/mapper/rhel-home 24445276 228104 24217172 1% /home 119 | /dev/sda1 1038336 262796 775540 26% /boot 120 | tmpfs 795724 64 795660 1% /run/user/1000 121 | ``` 122 | 123 | === du 124 | 125 | `du [options] <directory>` 126 | 127 | Reports information about disk usage on the local computer or virtual machine. 128 | 129 | *Example:* 130 | 131 | The following example uses the command `du` to report the amount of disk space used by the files in the directory `/usr/bin`: 132 | 133 | ``` 134 | $ du /usr/bin 135 | 365940 /usr/bin 136 | ``` 137 | 138 | == File and directory management commands 139 | 140 | Commands in this section apply to working with files and directories on a computer running the Linux operating system. 141 | 142 | === find 143 | 144 | `sudo find <path> -name <filename>` 145 | 146 | Finds a file or directory by name. 147 | 148 | *Example:* 149 | 150 | The following command finds a file named `hostname` starting from the root (`/`) directory of the computer's file system. Note that the command starts with `sudo` in order to access files restricted to the `root` user: 151 | 152 | ``` 153 | $ sudo find / -name hostname 154 | /proc/sys/kernel/hostname 155 | /etc/hostname 156 | /var/lib/selinux/targeted/active/modules/100/hostname 157 | /usr/bin/hostname 158 | /usr/lib64/gettext/hostname 159 | /usr/share/licenses/hostname 160 | /usr/share/doc/hostname 161 | /usr/share/bash-completion/completions/hostname 162 | /usr/share/selinux/targeted/default/active/modules/100/hostname 163 | /usr/libexec/hostname 164 | ``` 165 | 166 | === pwd 167 | 168 | `pwd` 169 | 170 | Displays the name of the present working directory.
171 | 172 | *Example:* 173 | 174 | The following example displays the invocation and result of using the command `pwd` in the `HOME` directory for a user named `guest`: 175 | 176 | ``` 177 | $ pwd 178 | /home/guest 179 | ``` 180 | 181 | === alias 182 | 183 | `alias [options] <name>='<command>'` 184 | 185 | Assigns a shortcut name to an existing command or executable. 186 | 187 | *Example:* 188 | 189 | The following example creates a temporary alias for the `clear` command. The alias is named `cls`. The `clear` command clears the terminal window. Once created, `cls` will also clear the terminal window: 190 | 191 | `$ alias cls='clear'` 192 | 193 | === awk 194 | 195 | `awk [options] '<program>' <file>` 196 | 197 | Finds, filters, or replaces text in a file or from stdout. 198 | 199 | *Examples:* 200 | 201 | This example pipes the string "Bobby is cool" to the `awk` command. The `awk` command invokes the subcommand named `sub` to find any occurrence of "Bobby" and change the string to "Teddy". Then, the subcommand `print` outputs the result of the substitution: 202 | 203 | ``` 204 | $ echo "Bobby is cool" | awk '{sub("Bobby","Teddy"); print}' 205 | Teddy is cool 206 | ``` 207 | 208 | This example uses `awk` to filter output according to field position. First, the example shows the output of the `who` command, which lists the current users logged in to the computer. The `who` command displays four fields (columns). The fields are username, the terminal line number, the login time, and the machine from which access originated. 209 | 210 | The second execution of `who` pipes the result to `awk`. Then, `awk` uses the `print $1` statement to show only the first field. The third execution of `who` pipes the result to `awk`, which then filters input to print the values in the second field: 211 | 212 | ``` 213 | $ who 214 | jaggermick pts/0 2022-01-19 09:14 (192.168.86.28) 215 | guest pts/1 2022-01-19 10:07 (192.168.86.20) 216 | 217 | $ who | awk '{print $1}' 218 | jaggermick 219 | guest 220 | 221 | $ who | awk '{print $2}' 222 | pts/0 223 | pts/1 224 | 225 | ``` 226 | 227 | === diff 228 | 229 | `diff [options] file1 file2` 230 | 231 | Displays the difference in content between two files. 232 | 233 | *Example:* 234 | 235 | The following example uses the `printf` command to create three files named `one.txt`, `two.txt`, and `three.txt`. Each file contains a list of names. The files named `one.txt` and `three.txt` have identical content. The file `two.txt` has an additional name. 236 | 237 | The first invocation of `diff` compares the files `one.txt` and `two.txt`. The second invocation compares files `one.txt` to `three.txt`. 238 | 239 | The first invocation reports that there is a difference in `two.txt` and that the fourth line from the file `two.txt` should be added (`a`) to the third line in `one.txt`. The value of the fourth line in `two.txt` is `Shemp`. 240 | 241 | The second invocation uses the `-s` option to display the report that indicates the files `one.txt` and `three.txt` are identical.
If the `-s` option was not used, there would be no output to the console (by default, identical files are not reported in stdout): 242 | 243 | ``` 244 | $ printf "Moe\nLarry\nCurly\n" > one.txt 245 | $ printf "Moe\nLarry\nCurly\nShemp\n" > two.txt 246 | $ printf "Moe\nLarry\nCurly\n" > three.txt 247 | 248 | $ diff one.txt two.txt 249 | 3a4 250 | > Shemp 251 | 252 | $ diff -s one.txt three.txt 253 | Files one.txt and three.txt are identical 254 | 255 | ``` 256 | 257 | === sed 258 | 259 | `sed [options] ` 260 | 261 | Manipulates the content of a file or output sent to stdout. 262 | 263 | *Example:* 264 | 265 | The following example uses the `echo` command to send the string `Bobby is cool` to the `sed` command. The command `sed` uses the `s` subcommand to substitute the name `Teddy` where the name `Bobby` occurs. The output is then displayed: 266 | 267 | ``` 268 | $ echo Bobby is cool | sed 's/Bobby/Teddy/' 269 | Teddy is cool 270 | ``` 271 | 272 | == Network commands 273 | 274 | Commands in this section apply to working with networks on and from a Linux computer. 275 | 276 | === hostname 277 | 278 | `hostname` 279 | 280 | Reports the local computer's host name. 281 | 282 | *Example:* 283 | 284 | ``` 285 | $ hostname 286 | localhost.localdomain 287 | ``` 288 | 289 | === nslookup 290 | 291 | `nslookup [options] ` 292 | 293 | A program that queries for information about a particular Internet domain name. 294 | 295 | *Example:* 296 | 297 | The following example invokes `nslookup` against the domain name `developers.redhat.com`: 298 | 299 | ``` 300 | $ nslookup developers.redhat.com 301 | Server: 192.168.86.1 302 | Address: 192.168.86.1#53 303 | 304 | Non-authoritative answer: 305 | developers.redhat.com canonical name = developers.redhat.com2.edgekey.net. 306 | developers.redhat.com2.edgekey.net canonical name = developers.redhat.com2.edgekey.net.globalredir.akadns.net. 307 | developers.redhat.com2.edgekey.net.globalredir.akadns.net canonical name = e40408.dsca.akamaiedge.net. 308 | Name: e40408.dsca.akamaiedge.net 309 | Address: 23.199.47.87 310 | Name: e40408.dsca.akamaiedge.net 311 | Address: 23.199.47.85 312 | Name: e40408.dsca.akamaiedge.net 313 | Address: 2600:1406:3400::6862:7512 314 | Name: e40408.dsca.akamaiedge.net 315 | Address: 2600:1406:3400::6862:7543 316 | ``` 317 | 318 | === traceroute 319 | 320 | `traceroute [options] ` 321 | 322 | Reports the route that a packet takes in hops to move through the Internet to reach its destination. 323 | 324 | The program `traceroute` is not part of Red Hat Enterprise Linux (RHEL) by default. It must be installed using `sudo dnf install traceroute`. 325 | 326 | *Example:* 327 | 328 | The following example reports the route from the local machine to `developers.redhat.com`. The `-m` option is used to limit the output to the first five hops: 329 | 330 | ``` 331 | $ traceroute -m 5 developers.redhat.com 332 | traceroute to developers.redhat.com (23.199.47.85), 5 hops max, 60 byte packets 333 | 1 _gateway (192.168.86.1) 0.599 ms 0.514 ms 0.656 ms 334 | 2 142-254-237-093.inf.spectrum.com (142.254.237.93) 11.974 ms 11.874 ms 17.793 ms 335 | 3 agg53.lsaicaev02h.socal.rr.com (24.30.168.85) 19.294 ms 20.242 ms 19.224 ms 336 | 4 72.129.19.22 (72.129.19.22) 18.984 ms 19.888 ms 19.969 ms 337 | 5 agg26.tustcaft01r.socal.rr.com (72.129.17.2) 13.575 ms 19.673 ms 13.579 ms 338 | ``` 339 | 340 | == RHEL management commands 341 | 342 | The commands in this section apply to working with the Red Hat Enterprise Linux operating system. 
343 | 344 | === sestatus 345 | 346 | `sestatus [options]` 347 | 348 | This program is used to report status information about a computer or virtual machine running SELinux. 349 | 350 | *Example:* 351 | 352 | The following example invokes the program `sestatus` and displays the default response: 353 | 354 | ``` 355 | $ sestatus 356 | SELinux status: enabled 357 | SELinuxfs mount: /sys/fs/selinux 358 | SELinux root directory: /etc/selinux 359 | Loaded policy name: targeted 360 | Current mode: enforcing 361 | Mode from config file: enforcing 362 | Policy MLS status: enabled 363 | Policy deny_unknown status: allowed 364 | Memory protection checking: actual (secure) 365 | Max kernel policy version: 33 366 | ``` 367 | 368 | === uname 369 | 370 | `uname [options]` 371 | 372 | The command `uname` reports system information about the local computer. 373 | 374 | *Example:* 375 | 376 | The following example uses the `-a` option with `uname` to report all system information about the local computer: 377 | 378 | ``` 379 | $ uname -a 380 | Linux localhost.localdomain 4.18.0-348.el8.x86_64 #1 SMP Mon Oct 4 12:17:22 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux 381 | ``` 382 | 383 | == Users and groups commands 384 | 385 | The following commands apply to working with users and groups as supported by the Linux operating system. 386 | 387 | === users 388 | 389 | `users [options]` 390 | 391 | Displays the names of users logged in to the computer. 392 | 393 | *Example:* 394 | 395 | The following example uses the command `users` to list all users logged into the system: 396 | 397 | ``` 398 | $ users 399 | cooluser jaggermick lennonjohn 400 | ``` 401 | 402 | === useradd 403 | 404 | `adduser [options] ` 405 | 406 | Adds a user to the computing environment. The command must be run as `sudo` for administrator access. 407 | 408 | *Example:* 409 | 410 | The following example adds a user with the login name `cooluser`. The `HOME` directory `home/cooluser` is created by default. Then, the example invokes the command `passwd` to set a password for the new user: 411 | 412 | ``` 413 | $ sudo adduser cooluser 414 | 415 | $ sudo passwd cooluser 416 | Changing password for user cooluser. 417 | New password: 418 | Retype new password: 419 | passwd: all authentication tokens updated successfully. 420 | ``` 421 | 422 | === userdel 423 | 424 | `userdel [options] ` 425 | 426 | Deletes a user from the computer. The command must be run as `sudo` for administrator access. 427 | 428 | *Example:* 429 | 430 | The following example uses the `userdel` command to remove the user with the login name `cooluser` from the system. The `-r` option indicates that the user's `HOME` directory is also to be deleted: 431 | 432 | ``` 433 | $ sudo userdel -r cooluser 434 | ``` 435 | 436 | === usermod 437 | 438 | `usermod [options] ` 439 | 440 | Modifies user account information and can be used to add a user to a group. The command must be run as `sudo` for administrator access. 441 | 442 | *Example:* 443 | 444 | The following example uses the command `usermod` to add a user with the login name `lennonjohn` to a group named `beatles`. Then, the command `groups` is used to verify that the user `lennonjohn` is indeed assigned to the group `beatles`: 445 | 446 | ``` 447 | $ sudo usermod -a -G beatles lennonjohn 448 | 449 | $ groups lennonjohn 450 | lennonjohn : lennonjohn beatles 451 | ``` 452 | 453 | === groups 454 | 455 | `groups [options] ` 456 | 457 | Lists the groups to which a user belongs. 
458 | 459 | *Example:* 460 | 461 | The following example uses the command `groups` to list the groups to which the user with the username `lennonjohn` belongs: 462 | 463 | ``` 464 | $ groups lennonjohn 465 | lennonjohn : lennonjohn beatles 466 | ``` 467 | 468 | === gpasswd 469 | 470 | `gpasswd [options] <group>` 471 | 472 | The command `gpasswd` is used to manage the configuration of a group under the Linux operating system. The command must be run as `sudo` for administrator access. 473 | 474 | *Example:* 475 | 476 | The following example uses `gpasswd` to remove a user from a group. The `-d` option followed by the username indicates that the user is to be removed from the group: 477 | 478 | ``` 479 | $ sudo gpasswd -d jaggermick beatles 480 | Removing user jaggermick from group beatles 481 | ``` 482 | 483 | === groupadd 484 | 485 | `groupadd [options] <group>` 486 | 487 | Adds a group to the computer. The command must be run as `sudo` for administrator access. 488 | 489 | *Example:* 490 | 491 | The following example uses the `groupadd` command to create a group named `beatles`: 492 | 493 | ``` 494 | $ sudo groupadd beatles 495 | ``` 496 | 497 | === groupdel 498 | 499 | `groupdel [options] <group>` 500 | 501 | Deletes a group from the computer. The command must be run as `sudo` for administrator access. 502 | 503 | *Example:* 504 | 505 | The following example uses the command `groupdel` to delete the group named `beatles` from the system: 506 | 507 | ``` 508 | $ sudo groupdel beatles 509 | ``` 510 | -------------------------------------------------------------------------------- /linux-systemd.adoc: -------------------------------------------------------------------------------- 1 | = systemd Commands Cheat Sheet 2 | :experimental: true 3 | :product-name: 4 | :version: 1.0.0 5 | 6 | This cheat sheet presents command-line executables that are used for working with `systemd`. The `systemd` service runs on https://developers.redhat.com/topics/linux[Linux] to consolidate service configuration and application behavior. `systemd` is found in https://developers.redhat.com/products/rhel[Red Hat Enterprise Linux] as well as other Linux distributions. 7 | 8 | `systemctl` is the application that interacts with `systemd` from the command line. This cheat sheet demonstrates how to use various `systemctl` subcommands to control the behavior of particular services running on a computer. The `journalctl` command, which displays information about `systemd` activities from its logs, is also introduced. 9 | 10 | == Application management using systemctl commands 11 | 12 | The subcommands in this section control the behavior of particular applications, usually services (daemons) that run in the background. 13 | 14 | === systemctl enable 15 | 16 | ---- 17 | systemctl [options] enable 18 | ---- 19 | 20 | Enables a service. Enabling a service causes the system to start the service upon reboot or whenever a computer starts up. The `enable` subcommand does not start the particular service immediately. To start a service immediately using `systemctl`, use the `systemctl start` command, described later. 21 | 22 | *Example:* 23 | 24 | The `systemctl enable` command in this example configures the system to invoke the `sshd` secure shell service at system startup. This command can be useful to ensure that users can securely log in from remote systems at any time when the local system is running. 25 | 26 | The `systemctl is-enabled` command that follows verifies that the `sshd` service is enabled.
27 | 28 | The `systemctl enable` command requires administrator permissions to execute. The command can be run as a subcommand to `sudo`. If `sudo` is not used, `systemctl enable` prompts for the administrator password. Calling `systemctl is-enabled` does not require administrator permissions. 29 | 30 | ---- 31 | $ sudo systemctl enable sshd 32 | 33 | $ systemctl is-enabled sshd 34 | enabled 35 | ---- 36 | 37 | === systemctl restart 38 | 39 | ---- 40 | systemctl [options] restart 41 | ---- 42 | 43 | Restarts a service. 44 | 45 | The `systemctl restart` command requires administrator permissions to execute. The command can be run as a subcommand to `sudo`. If `sudo` is not used, `systemctl restart` prompts for the administrator password. 46 | 47 | *Example:* 48 | 49 | The `systemctl restart` command in this example restarts the `httpd` web server. This command can be useful after making configuration changes to the web server, so that the changes take effect for subsequent incoming requests. 50 | 51 | ---- 52 | $ sudo systemctl restart httpd 53 | ---- 54 | 55 | === systemctl start 56 | 57 | ---- 58 | systemctl [options] start 59 | ---- 60 | 61 | Starts a service. 62 | 63 | The `systemctl start` command requires administrator permissions to execute. The command can be run as a subcommand to `sudo`. If `sudo` is not used, `systemctl start` prompts for the administrator password. 64 | 65 | *Example:* 66 | 67 | The `systemctl start` command in this example starts the `httpd` web server. This can be useful to run a service that is not normally running. `systemctl start` does not cause the service to restart after the system restarts. 68 | 69 | The `systemctl status httpd` command that follows verifies that the `httpd` service is running. Calling `systemctl status` does not require administrator permissions. 70 | 71 | ---- 72 | $ sudo systemctl start httpd 73 | 74 | $ systemctl status httpd 75 | ● httpd.service - The Apache HTTP Server 76 | Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) 77 | Active: active (running) since Mon 2022-01-24 09:31:10 PST; 2s ago 78 | Docs: man:httpd.service(8) 79 | Main PID: 41263 (httpd) 80 | Status: "Started, listening on: port 80" 81 | Tasks: 213 (limit: 49364) 82 | Memory: 36.7M 83 | CGroup: /system.slice/httpd.service 84 | ├─41263 /usr/sbin/httpd -DFOREGROUND 85 | ├─41264 /usr/sbin/httpd -DFOREGROUND 86 | ├─41265 /usr/sbin/httpd -DFOREGROUND 87 | ├─41266 /usr/sbin/httpd -DFOREGROUND 88 | └─41267 /usr/sbin/httpd -DFOREGROUND 89 | 90 | Jan 24 09:31:09 localhost.localdomain systemd[1]: Starting The Apache HTTP Server... 91 | Jan 24 09:31:10 localhost.localdomain httpd[41263]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, us> 92 | Jan 24 09:31:10 localhost.localdomain systemd[1]: Started The Apache HTTP Server. 93 | Jan 24 09:31:10 localhost.localdomain httpd[41263]: Server configured, listening on: port 80 94 | ---- 95 | 96 | === systemctl status 97 | 98 | ---- 99 | systemctl [options] status 100 | ---- 101 | 102 | Reports status information about a service. 103 | 104 | *Example:* 105 | 106 | The `systemctl status` command in this example reports status information about the `sshd` service. `systemctl status` also displays information about the service's activities via log entries that follow the status information. Earlier examples in this cheat sheet show reasons to use this command. 
107 | 108 | ---- 109 | $ systemctl status sshd 110 | ● sshd.service - OpenSSH server daemon 111 | Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled) 112 | Active: active (running) since Fri 2022-01-21 10:13:49 PST; 2 days ago 113 | Docs: man:sshd(8) 114 | man:sshd_config(5) 115 | Main PID: 1026 (sshd) 116 | Tasks: 1 (limit: 49364) 117 | Memory: 4.5M 118 | CGroup: /system.slice/sshd.service 119 | └─1026 /usr/sbin/sshd -D -oCiphers=aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes256-ctr,aes256-cbc,aes128-gcm@openssh.c> 120 | 121 | Jan 21 10:13:49 localhost.localdomain systemd[1]: Starting OpenSSH server daemon... 122 | Jan 21 10:13:49 localhost.localdomain sshd[1026]: Server listening on 0.0.0.0 port 22. 123 | Jan 21 10:13:49 localhost.localdomain sshd[1026]: Server listening on :: port 22. 124 | Jan 21 10:13:49 localhost.localdomain systemd[1]: Started OpenSSH server daemon. 125 | Jan 21 10:23:25 localhost.localdomain sshd[2136]: Accepted password for reselbob from 192.168.86.20 port 59909 ssh2 126 | Jan 21 10:23:25 localhost.localdomain sshd[2136]: pam_unix(sshd:session): session opened for user reselbob by (uid=0) 127 | Jan 24 08:42:43 localhost.localdomain sshd[40279]: Accepted password for reselbob from 192.168.86.20 port 61945 ssh2 128 | Jan 24 08:42:43 localhost.localdomain sshd[40279]: pam_unix(sshd:session): session opened for user reselbob by (uid=0) 129 | lines 1-19/19 (END) 130 | ---- 131 | 132 | === systemctl stop 133 | 134 | ---- 135 | systemctl [options] stop 136 | ---- 137 | 138 | Stops a service. The `systemctl stop` command requires administrator permissions to execute. The command can be run as a subcommand to `sudo`. If `sudo` is not used, `systemctl stop` prompts for the administrator password. 139 | 140 | *Example:* 141 | 142 | The `systemctl stop` command in this example stops the `httpd` service. This command can be useful if you have to stop a service in order to backup its data, because you think it is being attacked by a malicious intruder, or for any other reason. The `systemctl status httpd` command that follows reports the status. 143 | 144 | ---- 145 | $ systemctl stop httpd 146 | 147 | $ systemctl status httpd 148 | ● httpd.service - The Apache HTTP Server 149 | Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled) 150 | Active: inactive (dead) since Mon 2022-01-24 09:56:53 PST; 3s ago 151 | Docs: man:httpd.service(8) 152 | Process: 1262 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=0/SUCCESS) 153 | Main PID: 1262 (code=exited, status=0/SUCCESS) 154 | Status: "Running, listening on: port 80" 155 | 156 | Jan 24 09:32:27 localhost.localdomain systemd[1]: Starting The Apache HTTP Server... 157 | Jan 24 09:32:34 localhost.localdomain httpd[1262]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, usi> 158 | Jan 24 09:41:29 localhost.localdomain systemd[1]: Started The Apache HTTP Server. 159 | Jan 24 09:41:29 localhost.localdomain httpd[1262]: Server configured, listening on: port 80 160 | Jan 24 09:56:52 localhost.localdomain systemd[1]: Stopping The Apache HTTP Server... 161 | Jan 24 09:56:53 localhost.localdomain systemd[1]: httpd.service: Succeeded. 162 | Jan 24 09:56:53 localhost.localdomain systemd[1]: Stopped The Apache HTTP Server. 
163 | ---- 164 | 165 | Note the following line in the status output, which shows that the `httpd` service is inactive: 166 | 167 | ---- 168 | Active: inactive (dead) since Mon 2022-01-24 09:56:53 PST; 3s ago 169 | ---- 170 | 171 | == Computer control commands 172 | 173 | The subcommands in this section reboot and shut down a computer. 174 | 175 | === systemctl poweroff 176 | 177 | ---- 178 | systemctl [options] poweroff 179 | ---- 180 | 181 | Shuts down the computer or virtual machine. 182 | 183 | The `systemctl poweroff` command requires administrator permissions to execute. The command can be run as a subcommand to `sudo`. If `sudo` is not used, `systemctl poweroff` prompts for the administrator password. 184 | 185 | *Example:* 186 | 187 | The `systemctl poweroff` command in this example shuts down the computer or virtual machine. 188 | 189 | ---- 190 | sudo systemctl poweroff 191 | ---- 192 | 193 | === systemctl reboot 194 | 195 | ---- 196 | systemctl [options] reboot 197 | ---- 198 | 199 | Shuts down the computer or virtual machine and immediately restarts it. 200 | 201 | *Example:* 202 | 203 | The `systemctl reboot` command in this example reboots the computer or virtual machine. The `-i` option ensures a shutdown by ignoring logged-in users and inhibitors (programs that prevent a shutdown in order to complete some long-running activity). 204 | 205 | The `systemctl reboot` command requires administrator permissions to execute. The command can be run as a subcommand to `sudo`. If `sudo` is not used, `systemctl reboot` prompts for the administrator password. 206 | 207 | ---- 208 | sudo systemctl -i reboot 209 | ---- 210 | 211 | == System information commands 212 | 213 | The following shows how to use the `journalctl` and `systemctl` programs to get information about services running under `systemd`. 214 | 215 | === journalctl 216 | 217 | ---- 218 | journalctl [options] 219 | ---- 220 | 221 | `journalctl` works with systemd's logging capabilities. `systemd` stores the system, boot, and kernel log files in a central location in a binary format. `journalctl` presents information in the central logging system as human-readable text. 222 | 223 | *Example:* 224 | 225 | The `journalctl` command in this example displays everything stored recently in the log by `systemd`. The `--follow` option displays only the most recent journal entries and then continues to print new entries as they are appended to the journal. 226 | 227 | ---- 228 | $ journalctl --follow 229 | -- Logs begin at Mon 2022-01-24 09:31:39 PST. -- 230 | Jan 24 10:01:20 localhost.localdomain systemd[1]: Starting The Apache HTTP Server... 231 | Jan 24 10:01:20 localhost.localdomain httpd[2813]: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using localhost.localdomain. Set the 'ServerName' directive globally to suppress this message 232 | Jan 24 10:01:20 localhost.localdomain systemd[1]: Started The Apache HTTP Server. 233 | Jan 24 10:01:20 localhost.localdomain polkitd[876]: Unregistered Authentication Agent for unix-process:2787:124099 (system bus name :1.333, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus) 234 | Jan 24 10:01:20 localhost.localdomain httpd[2813]: Server configured, listening on: port 80 235 | Jan 24 10:03:29 localhost.localdomain systemd[1]: Starting dnf makecache... 236 | Jan 24 10:03:34 localhost.localdomain dnf[3052]: Updating Subscription Management repositories. 237 | Jan 24 10:03:35 localhost.localdomain dnf[3052]: Metadata cache refreshed recently.
238 | Jan 24 10:03:35 localhost.localdomain systemd[1]: dnf-makecache.service: Succeeded. 239 | Jan 24 10:03:35 localhost.localdomain systemd[1]: Started dnf makecache. 240 | ---- 241 | 242 | === systemctl list-sockets 243 | 244 | ---- 245 | systemctl [options] list-sockets 246 | ---- 247 | 248 | Lists the sockets in memory available for interprocess communication (IPC). 249 | 250 | *Example:* 251 | 252 | The `systemctl list-sockets` command in this example lists the sockets in memory. By providing the pattern `systemd*`, the command shows only sockets that have a unit name beginning with the characters `systemd`. Also, the option `--show-types` is used to display the `TYPE` column in the output. 253 | 254 | ---- 255 | $ systemctl --show-types list-sockets 'systemd*' 256 | LISTEN TYPE UNIT ACTIVATES 257 | /run/initctl FIFO systemd-initctl.socket systemd-initctl.service 258 | /run/systemd/coredump SequentialPacket systemd-coredump.socket systemd-coredump@0.service 259 | /run/systemd/journal/dev-log Datagram systemd-journald-dev-log.socket systemd-journald.service 260 | /run/systemd/journal/socket Datagram systemd-journald.socket systemd-journald.service 261 | /run/systemd/journal/stdout Stream systemd-journald.socket systemd-journald.service 262 | /run/udev/control SequentialPacket systemd-udevd-control.socket systemd-udevd.service 263 | kobject-uevent 1 Netlink systemd-udevd-kernel.socket systemd-udevd.service 264 | 265 | 7 sockets listed. 266 | Pass --all to see loaded but inactive sockets, too. 267 | ---- 268 | === systemctl list-units 269 | 270 | ---- 271 | systemctl [options] list-units 272 | ---- 273 | 274 | Lists units that `systemd` has in memory. A _unit_ refers to any resource that `systemd` knows how to operate on and manage. Units are listed with the following columns: 275 | 276 | * `UNIT`: Name of the unit 277 | * `LOAD`: Indicates whether the unit is loaded 278 | * `ACTIVE`: Indicates whether the unit is active 279 | * `SUB`: Indicates low-level activation state; for example: `mounted` or `running` 280 | * `DESCRIPTION`: Describes the service or unit 281 | 282 | *Example:* 283 | 284 | The `systemctl list-units` command in this example lists units in memory, using the pattern `sys-k*` to show only units whose names begin with the characters `sys-k`. 285 | 286 | ---- 287 | $ systemctl list-units 'sys-k*' 288 | UNIT LOAD ACTIVE SUB DESCRIPTION 289 | sys-kernel-config.mount loaded active mounted Kernel Configuration File System 290 | sys-kernel-debug.mount loaded active mounted Kernel Debug File System 291 | sys-kernel-tracing.mount loaded active mounted /sys/kernel/tracing 292 | 293 | LOAD = Reflects whether the unit definition was properly loaded. 294 | ACTIVE = The high-level unit activation state, i.e. generalization of SUB. 295 | SUB = The low-level unit activation state, values depend on unit type. 296 | 297 | 3 loaded units listed. 298 | To show all installed unit files use 'systemctl list-unit-files'. 299 | ---- 300 | 301 | === systemctl list-unit-files 302 | 303 | ---- 304 | systemctl [options] list-unit-files 305 | ---- 306 | 307 | Lists a unit's associated file, which describes how `systemd` starts and runs a unit. Unit files are listed with the following columns: 308 | 309 | * `UNIT FILE`: Name of the file 310 | * `STATE`: State of the file. The state can be `static`, `generated`, or `disabled`.
311 | 312 | *Example:* 313 | 314 | The `systemctl list-unit-files` command in this example lists unit files, using the pattern `sys-*` to show only filenames that begin with the characters `sys-`. 315 | 316 | ---- 317 | $ systemctl list-unit-files 'sys-*' 318 | UNIT FILE STATE 319 | sys-fs-fuse-connections.mount static 320 | sys-kernel-config.mount static 321 | sys-kernel-debug.mount static 322 | 323 | 3 unit files listed. 324 | ---- 325 | -------------------------------------------------------------------------------- /nodejs-command-line-flags.adoc: -------------------------------------------------------------------------------- 1 | = Node.js - Command Line Flags 2 | 3 | This cheat sheet covers the most popular node.js command-line flags. 4 | 5 | == Running your code 6 | 7 | `-v, --version`:: Prints the current version of node.js in use 8 | `-e, --eval` :: Evaluates the given argument as JavaScript 9 | `-c, --check` :: Checks the syntax of a script without executing it 10 | `-i, --interactive` :: Opens the node.js REPL (Read-Eval-Print-Loop) 11 | `-r, --require` :: Pre-loads a specific module at start-up 12 | `--no-deprecation` :: Silences deprecation warnings 13 | `--no-warnings` :: Silences all warnings (including deprecations) 14 | `NODE_OPTIONS='<options>'` :: Environment variable that you can use to set command-line options 15 | 16 | == Code Hygiene 17 | 18 | `--pending-deprecation` :: Emits pending deprecation warnings 19 | `--trace-deprecation` :: Prints the stack trace for deprecations 20 | `--throw-deprecation` :: Throws an error on deprecation 21 | `--trace-warnings` :: Prints the stack trace for warnings 22 | 23 | == Initial problem investigation 24 | 25 | `--report-on-signal` :: Generates a node report on a signal 26 | `--report-on-fatalerror` :: Generates a node report on a fatal error 27 | `--report-uncaught-exception` :: Generates a diagnostic report on uncaught exceptions 28 | 29 | == Controlling/investigating memory use 30 | 31 | `--max-old-space-size` :: Sets the maximum heap size, in megabytes 32 | `--trace_gc` :: Turns on garbage-collection logging 33 | `--heap-prof` :: Enables heap profiling 34 | `--heapsnapshot-signal=signal` :: Generates a heap snapshot on the specified signal 35 | 36 | == CPU performance investigation 37 | 38 | `--prof` :: Generates V8 profiler output 39 | `--prof-process` :: Processes V8 profiler output generated using `--prof` 40 | `--cpu-prof` :: Starts the V8 CPU profiler at start-up and writes the CPU profile to disk before exiting 41 | 42 | == Debugging 43 | 44 | `--inspect-brk[=[host:]port]` :: Activates the inspector on host:port and breaks at the start of the user script 45 | `--inspect[=[host:]port]` :: Activates the inspector on host:port (default: 127.0.0.1:9229) 46 | -------------------------------------------------------------------------------- /odo.adoc: -------------------------------------------------------------------------------- 1 | = odo 2 | :experimental: false 3 | :product-name: odo 4 | 5 | All odo commands require a _context_ to indicate the project and "application" in use. When a component is created in a project's source code directory, this context information is stored in a directory named `.odo`. 6 | 7 | Most commands, other than for creating a component, require this context information. If the command is run from within a project that has the `.odo` directory, odo will automatically read the context information from it, as in the short sketch below.
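For example, a minimal sketch (the directory name here is hypothetical):

----
$ cd ~/projects/frontend   # a project source directory that contains .odo
$ odo push                 # context is read automatically from the .odo directory
----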
8 | 9 | If a command is run outside of an odo project, the context can be specified in one of two ways: 10 | 11 | * Using the `--context` flag to indicate the project directory containing an `.odo` directory. 12 | * Explicitly specifying the OpenShift project and odo application with the `--project` and `--app` flags respectively. 13 | 14 | == Project Management 15 | 16 | `odo project create _name_`:: create a new project 17 | `odo project list`:: list all projects in the cluster 18 | `odo project get`:: display the currently active project 19 | `odo project set _name_`:: make the specified project active 20 | `odo app list`:: list all applications in the current project 21 | 22 | == Querying the Catalog 23 | 24 | `odo catalog list components`:: list available component backends 25 | `odo catalog search component _string_`:: list all components whose name contains the text in _string_ 26 | `odo catalog list services`:: list available deployable services 27 | `odo catalog search service _string_`:: list all services whose name contains the text in _string_ 28 | `odo catalog describe service _name_`:: display details about the given service 29 | 30 | == Creating & Deleting Components 31 | 32 | `odo create`:: start the interactive component creation 33 | `odo create _component_`:: creates a new component of the given type, using the current directory for its source code 34 | `odo create _component_ _name_`:: same as above, using the specified _name_ as the name of the component in odo 35 | 36 | The following flags may be specified when creating a component. 37 | 38 | [cols="35,65"] 39 | |=== 40 | 41 | |`--app _app-name_` 42 | |explicitly sets an app name that the component will belong to; defaults to `app` if unspecified 43 | 44 | |`--binary _bin_` 45 | |configure the component to run the given binary 46 | 47 | |`--env _key1=value1,key2=value2_` 48 | |sets the given environment variables on the component's pod 49 | 50 | |`--port _p1,p2_` 51 | |sets additional exposed ports 52 | 53 | |=== 54 | 55 | `odo delete`:: deletes the component indicated by the current context 56 | `odo delete _name_`:: deletes a specific component from the current context by name 57 | `odo delete --all`:: same as above, prompting the user to delete the local `.odo` directory as well 58 | 59 | `odo list`:: when run in a project directory, list all components in that project's application 60 | `odo list --all-apps`:: display components across all apps in the current project 61 | 62 | == Developing Components 63 | 64 | `odo push`:: push local project files into the cluster and (re)start the component's pod 65 | `odo push --config`:: pushes changes made to the odo configuration of the component without pushing the latest source code (see _Configuration_ below) 66 | `odo log`:: display the log messages for the component in the current context 67 | `odo log -f`:: tails the component's logging messages 68 | 69 | === Configuration 70 | 71 | `odo config view`:: show the configuration of the component in the current context, including general metadata (such as type and project), environment variables, and resource limitations 72 | `odo config set _parameter_ _value_`:: sets the value of the given parameter, such as "Type" or "CPU"; using `odo config set -h` displays the possible parameters that can be set 73 | `odo config unset _parameter_`:: removes the explicit value for the given parameter, leaving odo to use the default 74 | `odo config set --env _ENV1=value1_`:: sets an environment variable that will be exposed to the 
component when it is run; multiple values can be set through multiple uses of the `--env` flag 75 | `odo config unset --env _ENV1_`:: removes the specified environment variable from the component 76 | 77 | === URLs 78 | 79 | `odo url create`:: creates a URL for the component in the current context 80 | `odo url create _name_`:: creates a URL, using the specified name to refer to it through odo 81 | `odo url create --port _port_`:: creates a URL for the specified port; this argument is required if the component type exposes more than one port 82 | `odo url list`:: show all URLs for the component in the current context 83 | `odo url delete _name_`:: delete the URL with the specified name 84 | 85 | == Creating & Deleting Services 86 | 87 | `odo service create`:: start the interactive service creation 88 | `odo service create _service_`:: creates a new service of the given type using its default configuration values 89 | `odo service create _service_ _name_`:: same as above, using the specified _name_ as the name of the service in odo 90 | `odo service delete _name_`:: delete the specified service; include `-f` to skip the confirmation prompt 91 | 92 | == Linking 93 | 94 | `odo link _component-name_`:: link the specified component to the one in the current context; environment variables from the specified component will be made available in the current context component 95 | `odo link _service-name_`:: same as above; linking a service functions in the same way as linking a component 96 | `odo link _name_ --port _port_`:: indicates which port on the given component/service to link to; this is required if the component/service exposes multiple ports 97 | `odo unlink _name_`:: unlinks the specified component/service from the component in the current context 98 | 99 | == Miscellaneous 100 | 101 | `odo login _cluster-url_`:: login to an OpenShift cluster 102 | `odo version`:: display version information about both the odo client and the connected cluster 103 | `odo help _command_`:: display help about a command 104 | `odo --complete`:: install command completion for odo -------------------------------------------------------------------------------- /package-management-commands.adoc: -------------------------------------------------------------------------------- 1 | = Package Management Cheat Sheet 2 | :experimental: true 3 | :product-name: 4 | 5 | == Working with RPM 6 | 7 | RPM is a popular package management tool in Red Hat Enterprise Linux-based distros. Using RPM, you can install, uninstall, and query individual software packages. Unlike YUM, however, RPM cannot resolve dependencies on its own; it does provide useful output, including a list of required packages. An RPM package consists of an archive of files and metadata. The metadata includes helper scripts, file attributes, and information about the package. 8 | 9 | RPM keeps a list of installed packages in a database located in `/var/lib/rpm` on the host computer. 10 | 11 | You can use the RPM command-line tool `rpm` to install, update, delete, and query packages. 12 | 13 | The behavior of `rpm` is controlled by the options passed to it. For example, the `-i` option tells `rpm` to install a package. 14 | 15 | 16 | The following sections describe the various ways to use `rpm`, beginning with the brief sketch below.
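A minimal sketch of this option-driven style (the `example` package and file names are hypothetical placeholders):

```
# -i installs a package from a local .rpm file (hypothetical file name)
sudo rpm -i example-1.0-1.el8.x86_64.rpm

# -q queries the installed package by name
rpm -q example

# -e erases (removes) the installed package
sudo rpm -e example
```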
16 | The following sections describe the various ways to use `rpm`. 17 | 18 | === Listing all installed packages 19 | 20 | ``` 21 | rpm -qa 22 | ``` 23 | 24 | *Example* 25 | 26 | The following example demonstrates using the `rpm` command with the `-qa` options to list all the packages installed on the host computer. The result is piped to the `more` command in order to limit the number of package listings returned. 27 | 28 | ``` 29 | $ rpm -qa | more -5 30 | qemu-kvm-block-gluster-6.2.0-11.module+el8.6.0+14707+5aa4b42d.x86_64 31 | device-mapper-persistent-data-0.9.0-6.el8.x86_64 32 | perl-File-Path-2.15-2.el8.noarch 33 | glib2-2.56.4-158.el8.x86_64 34 | microcode_ctl-20220207-1.el8.x86_64 35 | ``` 36 | 37 | === Listing all files associated with a package 38 | 39 | ``` 40 | rpm -ql 41 | ``` 42 | 43 | *Example* 44 | 45 | The following example demonstrates using the `rpm` command with the `-ql` options to list all the files associated with the `curl` package. 46 | 47 | ``` 48 | rpm -ql curl 49 | /usr/bin/curl 50 | /usr/lib/.build-id 51 | /usr/lib/.build-id/09 52 | /usr/lib/.build-id/09/dcda6e77ea757074bf0706145bcf62490ec22b 53 | /usr/share/doc/curl 54 | /usr/share/doc/curl/BUGS 55 | /usr/share/doc/curl/CHANGES 56 | /usr/share/doc/curl/FAQ 57 | /usr/share/doc/curl/FEATURES 58 | /usr/share/doc/curl/MANUAL 59 | /usr/share/doc/curl/README 60 | /usr/share/doc/curl/RESOURCES 61 | /usr/share/doc/curl/TODO 62 | /usr/share/doc/curl/TheArtOfHttpScripting 63 | /usr/share/man/man1/curl.1.gz 64 | /usr/share/zsh/site-functions 65 | /usr/share/zsh/site-functions/_curl 66 | ``` 67 | 68 | === Installing a package 69 | 70 | ``` 71 | rpm -ivh 72 | ``` 73 | 74 | *Example* 75 | 76 | The following example demonstrates using the `rpm` command with the `-ivh` options to install an `rpm` package from a location on the internet. 77 | 78 | ``` 79 | $ sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm 80 | 81 | Retrieving https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm 82 | warning: /var/tmp/rpm-tmp.G0fqLy: Header V4 RSA/SHA256 Signature, key ID 2f86d6a1: NOKEY 83 | Verifying... ################################# [100%] 84 | Preparing... ################################# [100%] 85 | Updating / installing... 86 | 1:epel-release-8-15.el8 ################################# [100%] 87 | ``` 88 | 89 | === Installing a package without dependencies 90 | 91 | ``` 92 | rpm -ivh --nodeps 93 | ``` 94 | 95 | *Example* 96 | 97 | The following example demonstrates using the `rpm` command with the `-ivh` and `--nodeps` options to install an `rpm` package from a location on the internet. The `--nodeps` option tells `rpm` to skip the dependency check and install only the named package. 98 | 99 | ``` 100 | $ sudo rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm 101 | ``` 102 | 103 | === Upgrading a package 104 | 105 | ``` 106 | rpm -Uvh 107 | ``` 108 | 109 | *Example* 110 | 111 | The following example demonstrates using the `rpm` command with the `-Uvh` options to upgrade a package hosted on the internet. In this case the command reports that the latest version of the package is already installed. 112 | 113 | ``` 114 | $ sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm 115 | Retrieving https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm 116 | Verifying... ################################# [100%] 117 | Preparing... 
################################# [100%] 118 | package epel-release-8-15.el8.noarch is already installed 119 | ``` 120 | 121 | === Removing a package 122 | 123 | ``` 124 | rpm -e 125 | ``` 126 | 127 | *Example* 128 | 129 | The following example demonstrates using the `rpm` command with the `-e` option to remove a package installed on the host computer. 130 | 131 | ``` 132 | sudo rpm -e epel-release-8-15.el8.noarch 133 | ``` 134 | 135 | === Removing a package without dependencies 136 | 137 | ``` 138 | rpm -evh --nodeps 139 | ``` 140 | 141 | *Example* 142 | 143 | The following example demonstrates using the `rpm` command with the `-evh` and `--nodeps` options to remove a package while leaving its dependencies intact. 144 | 145 | ``` 146 | sudo rpm -evh --nodeps epel-release-8-15.el8.noarch 147 | ``` 148 | 149 | === Querying a package 150 | 151 | ``` 152 | rpm -q 153 | ``` 154 | 155 | *Example* 156 | 157 | The following example demonstrates using the `rpm` command with the `-q` option to query for the installed version of the package `buildah`. 158 | 159 | ``` 160 | $ rpm -q buildah 161 | 162 | buildah-1.24.2-4.module+el8.6.0+14877+f643d2d6.x86_64 163 | ``` 164 | 165 | === Querying a package with details 166 | 167 | ``` 168 | rpm -qi 169 | ``` 170 | 171 | *Example* 172 | 173 | The following example demonstrates using the `rpm` command with the `-qi` option to query for detailed information about the package `buildah`. 174 | 175 | ``` 176 | $ rpm -qi buildah 177 | Name : buildah 178 | Epoch : 1 179 | Version : 1.24.2 180 | Release : 4.module+el8.6.0+14877+f643d2d6 181 | Architecture: x86_64 182 | Install Date: Tue 21 Jun 2022 07:40:41 AM PDT 183 | Group : Unspecified 184 | Size : 30943650 185 | License : ASL 2.0 186 | Signature : RSA/SHA256, Mon 25 Apr 2022 12:50:53 AM PDT, Key ID 199e2f91fd431d51 187 | Source RPM : buildah-1.24.2-4.module+el8.6.0+14877+f643d2d6.src.rpm 188 | Build Date : Tue 19 Apr 2022 03:22:19 AM PDT 189 | Build Host : x86-vm-56.build.eng.bos.redhat.com 190 | Relocations : (not relocatable) 191 | Packager : Red Hat, Inc. 192 | Vendor : Red Hat, Inc. 193 | URL : https://buildah.io 194 | Summary : A command line tool used for creating OCI Images 195 | Description : 196 | The buildah package provides a command line tool which can be used to 197 | * create a working container from scratch 198 | or 199 | * create a working container from an image as a starting point 200 | * mount/umount a working container's root file system for manipulation 201 | * save container's root file system layer to create a new image 202 | * delete a working container or an image 203 | ``` 204 | == Package Repositories 205 | 206 | A package repository is a remote server that provides software packages ready to be installed on your local computer using the DNF package manager. 207 | 208 | A package repository is described on your local machine in a `.repo` file. 209 | 210 | The following example shows the contents of the `epel.repo` file: 211 | 212 | ``` 213 | [epel] 214 | name=Extra Packages for Enterprise Linux 8 - $basearch 215 | # It is much more secure to use the metalink, but if you wish to use a local mirror 216 | # place its address here. 
217 | #baseurl=https://download.example/pub/epel/8/Everything/$basearch 218 | metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=$basearch&infra=$infra&content=$contentdir 219 | enabled=1 220 | gpgcheck=1 221 | countme=1 222 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8 223 | 224 | [epel-debuginfo] 225 | name=Extra Packages for Enterprise Linux 8 - $basearch - Debug 226 | # It is much more secure to use the metalink, but if you wish to use a local mirror 227 | # place its address here. 228 | #baseurl=https://download.example/pub/epel/8/Everything/$basearch/debug 229 | metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-8&arch=$basearch&infra=$infra&content=$contentdir 230 | enabled=0 231 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8 232 | gpgcheck=1 233 | 234 | [epel-source] 235 | name=Extra Packages for Enterprise Linux 8 - $basearch - Source 236 | # It is much more secure to use the metalink, but if you wish to use a local mirror 237 | # place its address here. 238 | #baseurl=https://download.example/pub/epel/8/Everything/source/tree/ 239 | metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-8&arch=$basearch&infra=$infra&content=$contentdir 240 | enabled=0 241 | gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8 242 | gpgcheck=1 243 | ``` 244 | 245 | Package repository `.repo` files are typically stored in the directory `/etc/yum.repos.d`. 246 | 247 | === Listing repositories 248 | 249 | ``` 250 | sudo ls -l /etc/yum.repos.d 251 | ``` 252 | 253 | *Example* 254 | 255 | The following example demonstrates listing the repositories stored in the `/etc/yum.repos.d` folder of the host computer using the Linux `ls` command. 256 | 257 | 258 | ``` 259 | $ sudo ls -l /etc/yum.repos.d 260 | 261 | total 192 262 | -rw-r--r--. 1 root root 1395 Mar 14 14:53 epel-modular.repo 263 | -rw-r--r--. 1 root root 1332 Mar 14 14:53 epel.repo 264 | -rw-r--r--. 1 root root 1494 Mar 14 14:53 epel-testing-modular.repo 265 | -rw-r--r--. 1 root root 1431 Mar 14 14:53 epel-testing.repo 266 | -rw-r--r--. 1 root root 180039 Jun 20 10:17 redhat.repo 267 | ``` 268 | 269 | === Adding a repository 270 | 271 | ``` 272 | dnf config-manager --add-repo /etc/yum.repos.d/fedora_extras.repo 273 | ``` 274 | 275 | *Example* 276 | 277 | The following example demonstrates using the `dnf` command with the `config-manager` subcommand along with the `--add-repo` option to add the `epel.repo` repository to the RHEL Subscription Management repositories. 278 | 279 | ``` 280 | $ sudo dnf config-manager --add-repo /etc/yum.repos.d/epel.repo 281 | Updating Subscription Management repositories. 282 | Adding repo from: file:///etc/yum.repos.d/epel.repo 283 | ``` 284 | 285 | == Working with DNF and YUM 286 | 287 | `dnf` (Dandified YUM) and its predecessor `yum` (Yellowdog Updater, Modified) are command line executables that facilitate installing, updating, and removing packages on a machine. 288 | 289 | You can usually use `dnf` and `yum` interchangeably to work with packages on a local machine. 290 |
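For example, the following two commands are equivalent on a RHEL 8 system, where `yum` is effectively an alias for `dnf`:

```
$ sudo dnf install curl
$ sudo yum install curl
```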
291 | The following sections demonstrate how to use `dnf` and `yum` to install, update, remove, and inspect RPM packages. 292 | 293 | === install 294 | 295 | ``` 296 | sudo dnf install -y 297 | ``` 298 | 299 | where `-y` is the option that suppresses the prompt asking the user for permission to install the package 300 | 301 | *Example* 302 | 303 | The following example snippet demonstrates using the `dnf` command with the `install` subcommand to install the package `buildah` on the host computer. The `-y` option suppresses the prompt asking the user for permission to install the package. 304 | 305 | ``` 306 | $ sudo dnf install buildah -y 307 | Updating Subscription Management repositories. 308 | Last metadata expiration check: 0:01:08 ago on Mon 20 Jun 2022 10:39:23 AM PDT. 309 | Package buildah-1:1.24.2-4.module+el8.6.0+14673+621cb8be.x86_64 is already installed. 310 | Dependencies resolved. 311 | ==================================================================================================================================================================== 312 | Package Architecture Version Repository Size 313 | ==================================================================================================================================================================== 314 | Upgrading: 315 | buildah x86_64 1:1.24.2-4.module+el8.6.0+14877+f643d2d6 rhel-8-for-x86_64-appstream-rpms 8.1 M 316 | . 317 | . 318 | . 319 | ``` 320 | 321 | === remove 322 | 323 | ``` 324 | sudo dnf remove -y 325 | ``` 326 | 327 | *Example* 328 | 329 | The following example snippet demonstrates using the `dnf` command with the `remove` subcommand to remove the package `buildah` from the host computer. The `-y` option suppresses the prompt asking the user for permission to remove the package. 330 | 331 | ``` 332 | sudo dnf remove buildah -y 333 | 334 | Updating Subscription Management repositories. 335 | Dependencies resolved. 336 | ==================================================================================================================================================================== 337 | Package Architecture Version Repository Size 338 | ==================================================================================================================================================================== 339 | Removing: 340 | buildah x86_64 1:1.24.2-4.module+el8.6.0+14877+f643d2d6 @rhel-8-for-x86_64-appstream-rpms 30 M 341 | 342 | Transaction Summary 343 | ==================================================================================================================================================================== 344 | Remove 1 Package 345 | 346 | Freed space: 30 M 347 | Running transaction check 348 | Transaction check succeeded. 349 | Running transaction test 350 | Transaction test succeeded. 351 | Running transaction 352 | Preparing : 1/1 353 | Erasing : buildah-1:1.24.2-4.module+el8.6.0+14877+f643d2d6.x86_64 1/1 354 | Running scriptlet: buildah-1:1.24.2-4.module+el8.6.0+14877+f643d2d6.x86_64 1/1 355 | Verifying : buildah-1:1.24.2-4.module+el8.6.0+14877+f643d2d6.x86_64 1/1 356 | Installed products updated. 357 | 358 | Removed: 359 | buildah-1:1.24.2-4.module+el8.6.0+14877+f643d2d6.x86_64 360 | 361 | Complete! 362 | ``` 363 | 364 | === upgrade 365 | 366 | ``` 367 | sudo dnf upgrade 368 | ``` 369 | 370 | *Example* 371 | 372 | The following example snippet demonstrates using the `dnf` command with the `upgrade` subcommand to upgrade all the packages installed on the host computer. The `-y` option suppresses the prompt asking the user for permission to upgrade the packages. 373 | 374 | ``` 375 | sudo dnf upgrade -y 376 | Updating Subscription Management repositories. 377 | Last metadata expiration check: 1:27:55 ago on Tue 21 Jun 2022 06:18:32 AM PDT. 378 | Dependencies resolved. 
379 | ==================================================================================================================================================================== 380 | Package Architecture Version Repository Size 381 | ==================================================================================================================================================================== 382 | Upgrading: 383 | alsa-sof-firmware noarch 1.9.3-1.el8_5 rhel-8-for-x86_64-baseos-rpms 780 k 384 | cockpit-podman noarch 43-1.module+el8.6.0+14877+f643d2d6 rhel-8-for-x86_64-appstream-rpms 494 k 385 | conmon x86_64 2:2.1.0-1.module+el8.6.0+14877+f643d2d6 rhel-8-for-x86_64-appstream-rpms 55 k 386 | container-selinux noarch 2:2.179.1-1.module+el8.6.0+14877+f643d2d6 rhel-8-for-x86_64-appstream-rpms 58 k 387 | containernetworking-plugins x86_64 1:1.0.1-2.module+el8.6.0+14877+f643d2d6 rhel-8-for-x86_64-appstream-rpms 18 M 388 | containers-common x86_64 2:1-27.module+el8.6.0+14877+f643d2d6 rhel-8-for-x86_64-appstream-rpms 95 k 389 | criu x86_64 3.15-3.module+el8.6.0+14877+f643d2d6 rhel-8-for-x86_64-appstream-rpms 518 k 390 | . 391 | . 392 | . 393 | sssd-proxy-2.6.2-4.el8_6.x86_64 xz-5.2.4-4.el8_6.x86_64 394 | xz-libs-5.2.4-4.el8_6.x86_64 395 | Installed: 396 | grub2-tools-efi-1:2.02-123.el8_6.8.x86_64 397 | 398 | Complete! 399 | ``` 400 | 401 | 402 | === search 403 | 404 | ``` 405 | sudo dnf search 406 | ``` 407 | 408 | *Example* 409 | 410 | The following example demonstrates using the `dnf` command with the `search` subcommand to search the configured repositories for packages whose name or summary matches `curl`. 411 | 412 | ``` 413 | $ sudo dnf search curl 414 | Updating Subscription Management repositories. 415 | Last metadata expiration check: 0:01:01 ago on Mon 20 Jun 2022 10:31:02 AM PDT. 416 | ==================================================================== Name Exactly Matched: curl ==================================================================== 417 | curl.x86_64 : A utility for getting files from remote servers (FTP, HTTP, and others) 418 | =================================================================== Name & Summary Matched: curl =================================================================== 419 | libcurl-devel.x86_64 : Files needed for building applications with libcurl 420 | libcurl-devel.i686 : Files needed for building applications with libcurl 421 | libcurl-minimal.i686 : Conservatively configured build of libcurl for minimal installations 422 | libcurl-minimal.x86_64 : Conservatively configured build of libcurl for minimal installations 423 | nbdkit-curl-plugin.x86_64 : HTTP/FTP (cURL) plugin for nbdkit 424 | python3-pycurl.x86_64 : Python interface to libcurl for Python 3 425 | qemu-kvm-block-curl.x86_64 : QEMU CURL block driver 426 | ======================================================================== Name Matched: curl ======================================================================== 427 | libcurl.x86_64 : A library for getting files from web servers 428 | libcurl.i686 : A library for getting files from web servers 429 | ``` 430 | 431 | === info 432 | ``` 433 | dnf info 434 | ``` 435 | 436 | *Example* 437 | 438 | The following example demonstrates using the `dnf` command with the `info` subcommand to get detailed information about the package `curl`. 439 | 440 | ``` 441 | $ dnf info curl 442 | Not root, Subscription Management repositories not updated 443 | Last metadata expiration check: 0:02:05 ago on Mon 20 Jun 2022 10:34:55 AM PDT. 
444 | Installed Packages 445 | Name : curl 446 | Version : 7.61.1 447 | Release : 22.el8 448 | Architecture : x86_64 449 | Size : 684 k 450 | Source : curl-7.61.1-22.el8.src.rpm 451 | Repository : @System 452 | From repo : anaconda 453 | Summary : A utility for getting files from remote servers (FTP, HTTP, and others) 454 | URL : https://curl.haxx.se/ 455 | License : MIT 456 | Description : curl is a command line tool for transferring data with URL syntax, supporting 457 | : FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS, FILE, IMAP, 458 | : SMTP, POP3 and RTSP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP 459 | : uploading, HTTP form based upload, proxies, cookies, user+password 460 | : authentication (Basic, Digest, NTLM, Negotiate, kerberos...), file transfer 461 | : resume, proxy tunneling and a busload of other useful tricks. 462 | ``` 463 | 464 | === update 465 | ``` 466 | dnf update 467 | ``` 468 | 469 | *Example* 470 | 471 | The following example demonstrates using the `dnf` command with the `update` subcommand to update the `buildah` package installed on the host computer. The `-y` option suppresses the prompt asking the user for permission to update the package. 472 | 473 | ``` 474 | $ sudo dnf update buildah -y 475 | Updating Subscription Management repositories. 476 | Last metadata expiration check: 1:24:13 ago on Tue 21 Jun 2022 06:18:32 AM PDT. 477 | Dependencies resolved. 478 | Nothing to do. 479 | Complete 480 | ``` 481 | 482 | === repolist 483 | ``` 484 | dnf repolist 485 | ``` 486 | 487 | *Example* 488 | 489 | The following example demonstrates using the `dnf` command with the `repolist` subcommand to list the package repositories configured on the host computer. 490 | 491 | ``` 492 | $ sudo dnf repolist 493 | Updating Subscription Management repositories. 494 | repo id repo name 495 | epel Extra Packages for Enterprise Linux 8 - x86_64 496 | epel-modular Extra Packages for Enterprise Linux Modular 8 - x86_64 497 | rhel-8-for-x86_64-appstream-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) 498 | rhel-8-for-x86_64-baseos-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) 499 | ``` 500 | 501 | === history 502 | ``` 503 | sudo dnf history 504 | ``` 505 | 506 | *Example* 507 | 508 | The following example demonstrates using the `dnf` command with the `history` subcommand to get the history of `dnf` activity on the host computer. 509 | 510 | ``` 511 | $ sudo dnf history 512 | Updating Subscription Management repositories. 
513 | ID | Command line | Date and time | Action(s) | Altered 514 | -------------------------------------------------------- 515 | 8 | install hwinfo -y | 2022-06-21 09:49 | Install | 2 516 | 7 | install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm | 2022-06-21 09:38 | Install | 1 517 | 6 | install gimp | 2022-06-21 09:27 | Install | 27 518 | 5 | upgrade -y | 2022-06-21 07:47 | I, U | 64 519 | 4 | install buildah -y | 2022-06-21 07:40 | Install | 1 520 | 3 | remove buildah -y | 2022-06-21 07:38 | Removed | 1 521 | 2 | install buildah -y | 2022-06-20 10:40 | Upgrade | 1 522 | 1 | | 2022-06-20 09:10 | Install | 1398 EE 523 | ``` -------------------------------------------------------------------------------- /quarkus-kubernetes-1.adoc: -------------------------------------------------------------------------------- 1 | = Quarkus & Kubernetes I 2 | :experimental: true 3 | :product-name: 4 | :version: 1.4.0 5 | 6 | This cheat sheet covers the integrations you can find in the form of extensions between Quarkus and Kubernetes. 7 | 8 | == Creating the project 9 | 10 | [source, bash-shell, subs=attributes+] 11 | ---- 12 | mvn "io.quarkus:quarkus-maven-plugin:{version}.Final:create" \ 13 | -DprojectGroupId="org.acme" \ 14 | -DprojectArtifactId="greeting" \ 15 | -DprojectVersion="1.0-SNAPSHOT" \ 16 | -DclassName="org.acme.GreetingResource" \ 17 | -Dextensions="kubernetes, jib" \ 18 | -Dpath="/hello" 19 | ---- 20 | 21 | TIP: You can also generate the project at https://code.quarkus.io/ by selecting the `kubernetes` and `jib` extensions. 22 | 23 | == Native Executable support 24 | 25 | You can build a native image by using GraalVM. 26 | But since Kubernetes runs Linux containers, you need to create the native executable inside a container so that it is built for the target environment. 27 | Quarkus allows you to do that by running the following command: 28 | 29 | `./mvnw package -Pnative -Dquarkus.native.container-build=true` 30 | 31 | Or using `podman`: 32 | 33 | `./mvnw package -Pnative -Dquarkus.native.container-runtime=podman -Dquarkus.native.container-build=true` 34 | 35 | == Container Image creation 36 | 37 | Quarkus comes with default `Dockerfiles` to build the container. 38 | They are found in `src/main/docker`. 39 | 40 | `Dockerfile.jvm`:: It can be used to create a container containing the generated Java files (runner JAR + `lib` folder). 41 | `Dockerfile.native`:: It can be used to create a container containing the generated native executable file. 42 | 43 | You can use Docker to create the container image: `docker build -f src/main/docker/Dockerfile.native -t quarkus/getting-started .` or you can delegate the creation and release of the container images to Quarkus. 44 | Several extensions are provided for this purpose. 45 | 46 | The following standard properties can be set as Java system properties or in `src/main/resources/application.properties`: 47 | 48 | `quarkus.container-image.group`:: 49 | The group/repository of the image, defaults to `${user.name}`. 50 | 51 | `quarkus.container-image.name`:: 52 | The name of the image, defaults to the application name. 53 | 54 | `quarkus.container-image.tag`:: 55 | The tag of the image, defaults to the application version. 56 | 57 | `quarkus.container-image.registry`:: 58 | The registry to use for pushing, defaults to `docker.io`. 59 | 60 | `quarkus.container-image.username`:: 61 | The registry username. 62 | 63 | `quarkus.container-image.password`:: 64 | The registry password. 65 | 66 | `quarkus.container-image.insecure`:: 67 | Flag to allow insecure registries, defaults to `false`. 
68 | 69 | `quarkus.container-image.build`:: 70 | Flag to set if the image should be built, defaults to `false`. 71 | 72 | `quarkus.container-image.push`:: 73 | Flag to set if the image should be pushed, defaults to `false`. 74 | 75 | Specific builders: 76 | 77 | === Jib 78 | 79 | You can use Jib to build the container image. 80 | Jib builds Docker and OCI images for Java applications in a dockerless fashion. 81 | 82 | `./mvnw quarkus:add-extensions -Dextensions="jib"` 83 | 84 | Specific properties for the Jib extension are: 85 | 86 | `quarkus.container-image-jib.base-jvm-image`:: 87 | The base image to use for the Jib build, defaults to `fabric8/java-alpine-openjdk8-jre`. 88 | 89 | `quarkus.container-image-jib.base-native-image`:: 90 | The base image to use for the native build, defaults to `registry.access.redhat.com/ubi8/ubi-minimal`. 91 | 92 | `quarkus.container-image-jib.jvm-arguments`:: 93 | The arguments to pass to Java, defaults to `-Dquarkus.http.host=0.0.0.0,-Djava.util.logging.manager=org.jboss.logmanager.LogManager`. 94 | 95 | `quarkus.container-image-jib.native-arguments`:: 96 | The arguments to pass to the native application, defaults to `-Dquarkus.http.host=0.0.0.0`. 97 | 98 | `quarkus.container-image-jib.environment-variables`:: 99 | Map of environment variables. 100 | 101 | === Docker 102 | 103 | You can use the Docker extension to build the container image using the Docker CLI. 104 | 105 | `./mvnw quarkus:add-extensions -Dextensions="docker"` 106 | 107 | Specific properties for the Docker extension are: 108 | 109 | `quarkus.container-image-docker.dockerfile-jvm-path`:: 110 | Path to the JVM Dockerfile, defaults to `${project.root}/src/main/docker/Dockerfile.jvm`. 111 | 112 | `quarkus.container-image-docker.dockerfile-native-path`:: 113 | Path to the native Dockerfile, defaults to `${project.root}/src/main/docker/Dockerfile.native`. 114 | 115 | === S2I 116 | 117 | You can use S2I (Source-to-Image) to build the container image. 118 | 119 | `./mvnw quarkus:add-extensions -Dextensions="s2i"` 120 | 121 | Specific properties for the S2I extension are: 122 | 123 | `quarkus.container-image-s2i.base-jvm-image`:: 124 | The base image to use for the s2i build, defaults to `fabric8/java-alpine-openjdk8-jre`. 125 | 126 | `quarkus.container-image-s2i.base-native-image`:: 127 | The base image to use for the native build, defaults to `registry.access.redhat.com/ubi8/ubi-minimal`. 128 | 129 | == Kubernetes 130 | 131 | Quarkus uses the `Dekorate` project to generate Kubernetes resources. 132 | 133 | When you run `./mvnw package`, the Kubernetes resources are created in the `target/kubernetes/` directory. 134 | 135 | You can choose the target deployment type by setting the `quarkus.kubernetes.deployment-target` property. 136 | Possible values are `kubernetes`, `openshift` and `knative`. 137 | The default target is `kubernetes`. 138 |
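For example, to generate OpenShift resources instead of plain Kubernetes ones, a one-line sketch in `application.properties` is enough:

[source, properties]
----
quarkus.kubernetes.deployment-target=openshift
----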
139 | You can customize the generated resources by setting specific properties in `application.properties`. 140 | The full list of configurable elements is available at https://quarkus.io/guides/kubernetes#configuration-options 141 | 142 | [source, properties] 143 | .src/main/resources/application.properties 144 | ---- 145 | quarkus.kubernetes.replicas=3 146 | 147 | quarkus.kubernetes.readiness-probe.period-seconds=45 148 | 149 | quarkus.kubernetes.mounts.github-token.path=/deployment/github 150 | quarkus.kubernetes.mounts.github-token.read-only=true 151 | 152 | quarkus.kubernetes.secret-volumes.github-token.volume-name=github-token 153 | quarkus.kubernetes.secret-volumes.github-token.secret-name=greeting-security 154 | quarkus.kubernetes.secret-volumes.github-token.default-mode=420 155 | 156 | quarkus.kubernetes.config-map-volumes.github-token.config-map-name=my-secret 157 | 158 | quarkus.kubernetes.labels.foo=bar 159 | quarkus.kubernetes.annotations.foo=bar 160 | 161 | quarkus.kubernetes.expose=true 162 | ---- 163 | 164 | Moreover, the generated resources are integrated with the MicroProfile Health spec, registering liveness/readiness probes based on the health checks defined using the spec. 165 | 166 | To deploy the generated resources automatically, you need to set the `quarkus.kubernetes.deploy` flag to `true`. 167 | 168 | `./mvnw clean package -Dquarkus.kubernetes.deploy=true` 169 | 170 | Setting this flag to `true` also sets the build and push flags from the `container-image` extension to `true`. 171 | 172 | The Kubernetes extension uses the Kubernetes Client to deploy resources. 173 | By default, the Kubernetes Client reads connection properties from the `~/.kube/config` file, but you can also set them by using some of the `kubernetes-client` properties: 174 | 175 | `quarkus.kubernetes-client.trust-certs`:: 176 | Trust self-signed certificates, defaults to `false`. 177 | 178 | `quarkus.kubernetes-client.master-url`:: 179 | URL of Kubernetes API server. 180 | 181 | `quarkus.kubernetes-client.namespace`:: 182 | Default namespace. 183 | 184 | `quarkus.kubernetes-client.ca-cert-file`:: 185 | CA certificate file. 186 | 187 | `quarkus.kubernetes-client.client-cert-file`:: 188 | Client certificate file. 189 | 190 | `quarkus.kubernetes-client.client-cert-data`:: 191 | Client certificate data. 192 | 193 | `quarkus.kubernetes-client.client-key-data`:: 194 | Client key data. 195 | 196 | `quarkus.kubernetes-client.client-key-algorithm`:: 197 | Client key algorithm. 198 | 199 | `quarkus.kubernetes-client.username`:: 200 | Username. 201 | 202 | `quarkus.kubernetes-client.password`:: 203 | Password.
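A minimal connection override in `application.properties` might look like the following sketch (the URL and namespace are placeholders):

[source, properties]
----
quarkus.kubernetes-client.master-url=https://api.my-cluster.example.com:6443
quarkus.kubernetes-client.namespace=my-namespace
quarkus.kubernetes-client.trust-certs=true
----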
-------------------------------------------------------------------------------- /quarkus-kubernetes-2.adoc: -------------------------------------------------------------------------------- 1 | = Quarkus & Kubernetes II 2 | :experimental: true 3 | :product-name: 4 | :version: 1.4.0 5 | 6 | This cheat sheet covers more integrations that you can find in the form of extensions between Quarkus and Kubernetes. 7 | 8 | == Creating the project 9 | 10 | [source, bash-shell, subs=attributes+] 11 | ---- 12 | mvn "io.quarkus:quarkus-maven-plugin:{version}.Final:create" \ 13 | -DprojectGroupId="org.acme" \ 14 | -DprojectArtifactId="greeting" \ 15 | -DprojectVersion="1.0-SNAPSHOT" \ 16 | -DclassName="org.acme.GreetingResource" \ 17 | -Dextensions="kubernetes, kubernetes-client, health, kubernetes-config" \ 18 | -Dpath="/hello" 19 | ---- 20 | 21 | TIP: You can also generate the project at https://code.quarkus.io/ by selecting the `kubernetes`, `kubernetes-client`, `health` and `kubernetes-config` extensions. 22 | 23 | == Kubernetes 24 | 25 | Quarkus uses the `Dekorate` project to generate Kubernetes resources. 26 | 27 | When you run `./mvnw package`, the Kubernetes resources are created in the `target/kubernetes/` directory. 28 | 29 | == Health Checks 30 | 31 | The generated Kubernetes resources are integrated with the MicroProfile Health spec, registering liveness/readiness probes based on the health checks defined using the spec. 32 | 33 | If the extension is present, default liveness/readiness probes are registered at the `/health/live` and `/health/ready` endpoints. 34 | 35 | You can implement custom liveness/readiness probes: 36 | 37 | [source, java] 38 | ---- 39 | import io.smallrye.health.HealthStatus; 40 | 41 | @ApplicationScoped 42 | public class DatabaseHealthCheck { 43 | 44 | @Liveness 45 | HealthCheck isAlive() { 46 | return HealthStatus.up("successful-live"); 47 | } 48 | 49 | @Readiness 50 | HealthCheck isReady() { 51 | return HealthStatus.state("successful-read", this::isServiceReady); 52 | } 53 | 54 | private boolean isServiceReady() { 55 | return true; 56 | } 57 | } 58 | ---- 59 | 60 | Quarkus comes with three health check implementations for checking the service status. 61 | 62 | SocketHealthCheck:: It checks if the host is reachable using a socket. 63 | UrlHealthCheck:: It checks if the host is reachable using an HTTP URL connection. 64 | InetAddressHealthCheck:: It checks if the host is reachable using the `InetAddress.isReachable` method. 65 | 66 | [source, java] 67 | ---- 68 | @Readiness 69 | HealthCheck isGoogleReady() { 70 | return new UrlHealthCheck("https://www.google.com").name("Google-Check"); 71 | } 72 | ---- 73 | 74 | The following Quarkus extensions `agroal` (datasource), `kafka`, `mongoDB`, `neo4j`, `artemis`, `kafka-streams` and `vault` provide readiness health checks by default. 75 | 76 | They can be enabled/disabled by setting `quarkus.<extension>.health.enabled` to `true`/`false`. 77 | 78 | [source, properties] 79 | ---- 80 | quarkus.kafka-streams.health.enabled=true 81 | quarkus.mongodb.health.enabled=false 82 | ---- 83 | 84 | == Kubernetes Configuration 85 | 86 | The Kubernetes Config extension uses the Kubernetes API server to get `config-map`s and inject their key/values using the MicroProfile Config spec. 87 | 88 | You need to enable the extension and set the names of the `config-map`s that contain the properties to inject: 89 | 90 | [source, properties] 91 | ---- 92 | quarkus.kubernetes-config.enabled=true 93 | quarkus.kubernetes-config.config-maps=cmap1,cmap2 94 | ---- 95 | 96 | To inject `cmap1` and `cmap2` values, you need to set the key name in the `@ConfigProperty` annotation: 97 | 98 | [source, java] 99 | ---- 100 | @ConfigProperty(name = "some.prop1") 101 | String someProp1; 102 | 103 | @ConfigProperty(name = "some.prop2") 104 | String someProp2; 105 | ---- 106 | 107 | If the config key is a Quarkus configuration file, `application.properties` or `application.yaml`, the content of these files is parsed and each key/value of the configuration file can be injected as well. 108 |
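As a sketch, such a `config-map` can be created from an existing properties file with `kubectl` (the name matches the `cmap1` used above, and `kubectl` is assumed to be configured for the application's namespace):

[source, bash]
----
kubectl create configmap cmap1 --from-file=application.properties
----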
109 | The list of Kubernetes Config parameters: 110 | 111 | `quarkus.kubernetes-config.enabled`:: 112 | The application will attempt to look up the configuration from the API server, defaults to `false`. 113 | 114 | `quarkus.kubernetes-config.fail-on-missing-config`:: 115 | The application will not start if any of the configured config sources cannot be located, defaults to `true`. 116 | 117 | `quarkus.kubernetes-config.config-maps`:: 118 | ConfigMaps to look for in the namespace that the Kubernetes Client has been configured for. Supports CSV format. 119 | 120 | == Kubernetes Client 121 | 122 | Quarkus integrates with the Fabric8 Kubernetes Client to access the Kubernetes API server. 123 | 124 | [source, java] 125 | ---- 126 | @Inject 127 | KubernetesClient client; 128 | 129 | ServiceList myServices = client.services().list(); 130 | 131 | Service myservice = client.services() 132 | .inNamespace("default") 133 | .withName("myservice") 134 | .get(); 135 | ---- 136 | 137 | The Kubernetes Client can be configured programmatically: 138 | 139 | [source, java] 140 | ---- 141 | @Dependent 142 | public class KubernetesClientProducer { 143 | 144 | @Produces 145 | public KubernetesClient kubernetesClient() { 146 | Config config = new ConfigBuilder() 147 | .withMasterUrl("https://mymaster.com") 148 | .build(); 149 | return new DefaultKubernetesClient(config); 150 | } 151 | } 152 | ---- 153 | 154 | Or also in `application.properties`. 155 | 156 | By default, the Kubernetes Client reads connection properties from the `~/.kube/config` file, but you can also set them by using some of the `kubernetes-client` properties: 157 | 158 | `quarkus.kubernetes-client.trust-certs`:: 159 | Trust self-signed certificates, defaults to `false`. 160 | 161 | `quarkus.kubernetes-client.master-url`:: 162 | URL of Kubernetes API server. 163 | 164 | `quarkus.kubernetes-client.namespace`:: 165 | Default namespace. 166 | 167 | `quarkus.kubernetes-client.ca-cert-file`:: 168 | CA certificate file. 169 | 170 | `quarkus.kubernetes-client.client-cert-file`:: 171 | Client certificate file. 172 | 173 | `quarkus.kubernetes-client.client-cert-data`:: 174 | Client certificate data. 175 | 176 | `quarkus.kubernetes-client.client-key-data`:: 177 | Client key data. 178 | 179 | `quarkus.kubernetes-client.client-key-algorithm`:: 180 | Client key algorithm. 181 | 182 | `quarkus.kubernetes-client.username`:: 183 | Username. 184 | 185 | `quarkus.kubernetes-client.password`:: 186 | Password. -------------------------------------------------------------------------------- /quarkus-observability.adoc: -------------------------------------------------------------------------------- 1 | = Quarkus & Observability 2 | :experimental: true 3 | :product-name: 4 | :version: 1.4.2 5 | 6 | Observability is the ability to watch the state of a system/application based on its external outputs. 7 | 8 | These are the four key concepts needed to implement observability correctly: 9 | 10 | * Monitoring/Metrics (_Prometheus_) 11 | * Visualization (_Grafana_) 12 | * Distributed tracing (_Jaeger_) 13 | * Log aggregation (_Sentry_, _GELF_) 14 | 15 | This cheat sheet covers the integrations you can find in the form of extensions for Quarkus to implement observability. 16 | 17 | == Creating the project 18 | 19 | [source, bash-shell, subs=attributes+] 20 | ---- 21 | mvn "io.quarkus:quarkus-maven-plugin:{version}.Final:create" \ 22 | -DprojectGroupId="org.acme" \ 23 | -DprojectArtifactId="greeting" \ 24 | -DprojectVersion="1.0-SNAPSHOT" \ 25 | -DclassName="org.acme.GreetingResource" \ 26 | -Dpath="/hello" 27 | ---- 28 | 29 | TIP: You can generate the project in https://code.quarkus.io/. 30 | 31 | == Monitoring/Metrics 32 | 33 | Quarkus can utilize the https://github.com/eclipse/microprofile-metrics[MicroProfile Metrics spec] to provide metrics support. 
34 | 35 | [source, bash] 36 | ---- 37 | ./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-smallrye-metrics" 38 | ---- 39 | 40 | The metrics can be provided in _JSON_ or _OpenMetrics_ format. 41 | 42 | When the extension is present in the classpath, an endpoint is added automatically to `/metrics` providing default metrics. 43 | To add custom metrics, the MicroProfile Metrics spec provides the following annotations: 44 | 45 | `@Timed`:: Tracks the duration of a call. 46 | `@SimplyTimed`:: Tracks the duration of a call without mean and distribution calculations. 47 | `@Metered`:: Tracks the frequency of invocations. 48 | `@Counted`:: Counts number of invocations. 49 | `@Gauge`:: Samples the value of the annotated object. 50 | `@ConcurrentGauge`:: Gauge to count parallel invocations. 51 | `@Metric`:: Used to inject a metric. Valid types are `Meter`, `Timer`, `Counter`, `Histogram`. The `Gauge` type can only be produced in methods or fields. 52 | 53 | [source, java] 54 | ---- 55 | @GET 56 | //... 57 | @Timed(name = "checksTimer", 58 | unit = MetricUnits.MILLISECONDS) 59 | public String hello() {} 60 | 61 | @Counted(name = "countWelcome") 62 | public String hello() {} 63 | ---- 64 | 65 | `@Gauge` annotation returning a measure as a gauge. 66 | 67 | [source, java] 68 | ---- 69 | @Gauge(name = "hottestSauce", unit = MetricUnits.NONE) 70 | public Long hottestSauce() {} 71 | ---- 72 | 73 | Injecting a histogram using `@Metric`. 74 | 75 | [source, java] 76 | ---- 77 | @Metric(name = "histogram") 78 | Histogram histogram; 79 | ---- 80 | 81 | Metrics can be configured in `application.properties` with the following properties: 82 | 83 | `quarkus.smallrye-metrics.path`:: 84 | The path to the metrics handler, defaults to `/metrics`. 85 | 86 | `quarkus.smallrye-metrics.extensions.enabled`:: 87 | Whether metrics are enabled, defaults to `true`. 88 | 89 | `quarkus.smallrye-metrics.micrometer.compatibility`:: 90 | Applies Micrometer compatibility mode, defaults to `false`. 91 | 92 | You can apply metric annotations via CDI stereotypes: 93 | 94 | [source, java] 95 | ---- 96 | @Stereotype 97 | @Retention(RetentionPolicy.RUNTIME) 98 | @Target({ ElementType.TYPE, ElementType.METHOD, ElementType.FIELD }) 99 | @Timed(name = "checksTimer", unit = MetricUnits.MILLISECONDS) 100 | public @interface TimedMilliseconds { 101 | } 102 | ---- 103 | 104 | Metrics in Quarkus also integrate with other extensions: 105 | 106 | `quarkus.hibernate-orm.metrics.enabled`:: 107 | If enabled, Hibernate metrics are exposed under the `vendor` scope. 108 | 109 | `quarkus.mongodb.metrics.enabled`:: 110 | If enabled, MongoDB metrics are exposed under the `vendor` scope. 111 | 112 | == Distributed Tracing 113 | 114 | Quarkus can utilize the https://github.com/eclipse/microprofile-opentracing[MicroProfile OpenTracing spec] to provide tracing support. 115 | 116 | [source, bash] 117 | ---- 118 | ./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-smallrye-opentracing" 119 | ---- 120 | 121 | All the requests sent to any endpoint are traced automatically. 122 | 123 | This extension includes OpenTracing and `Jaeger` tracer support. 
124 | 125 | Jaeger tracer has multiple configuration properties, but a typical example is: 126 | 127 | [source, properties] 128 | .application.properties 129 | ---- 130 | quarkus.jaeger.service-name=myservice 131 | quarkus.jaeger.sampler-type=const 132 | quarkus.jaeger.sampler-param=1 133 | quarkus.jaeger.endpoint=http://localhost:14268/api/traces 134 | ---- 135 | 136 | The `@Traced(false)` annotation can be used to disable tracing at the class or method level. 137 | 138 | The `io.opentracing.Tracer` interface can be injected into a class to manipulate the information that is traced. 139 | 140 | [source, java] 141 | ---- 142 | @Inject 143 | Tracer tracer; 144 | 145 | tracer.activeSpan().setBaggageItem("key", "value"); 146 | ---- 147 | 148 | You can disable the `Jaeger` extension by using the `quarkus.jaeger.enabled` property. 149 | 150 | You can log the `traceId`, `spanId` and `sampled` tracing information in the Quarkus logging system by configuring the log format: 151 | 152 | [source, properties] 153 | ---- 154 | quarkus.log.console.format=%d{HH:mm:ss} %-5p traceId=%X{traceId}, spanId=%X{spanId}, sampled=%X{sampled} [%c{2.}] (%t) %s%e%n 155 | ---- 156 | 157 | Tracing in Quarkus also integrates with other extensions: 158 | 159 | === JDBC Tracer 160 | 161 | Adds a span for each JDBC query. 162 | 163 | [source, xml] 164 | ---- 165 | <dependency> 166 | <groupId>io.opentracing.contrib</groupId> 167 | <artifactId>opentracing-jdbc</artifactId> 168 | </dependency> 169 | ---- 170 | 171 | Configure the JDBC driver in addition to the tracing properties seen before: 172 | 173 | [source, properties] 174 | ---- 175 | # add ':tracing' to your database URL 176 | quarkus.datasource.url=jdbc:tracing:postgresql://localhost:5432/mydatabase 177 | quarkus.datasource.driver=io.opentracing.contrib.jdbc.TracingDriver 178 | quarkus.hibernate-orm.dialect=org.hibernate.dialect.PostgreSQLDialect 179 | ---- 180 | 181 | === AWS XRay 182 | 183 | If you are building native images and want to use AWS X-Ray tracing with your Lambda, you will need to include `quarkus-amazon-lambda-xray` as a dependency in your POM. 184 | 185 | == Log Aggregation 186 | 187 | === Sentry 188 | 189 | Quarkus integrates with https://sentry.io[Sentry] for logging errors into an error monitoring system. 190 | 191 | [source, bash] 192 | ---- 193 | ./mvnw quarkus:add-extension -Dextensions="quarkus-logging-sentry" 194 | ---- 195 | 196 | As an example, if you want to send all errors occurring in the package `org.example` to Sentry with DSN `https://abcd@sentry.io/1234`, you should configure it in the following way: 197 | 198 | [source, properties] 199 | .application.properties 200 | ---- 201 | quarkus.log.sentry=true 202 | quarkus.log.sentry.dsn=https://abcd@sentry.io/1234 203 | quarkus.log.sentry.level=ERROR 204 | quarkus.log.sentry.in-app-packages=org.example 205 | ---- 206 | 207 | The full list of configuration properties for Sentry is: 208 | 209 | `quarkus.log.sentry.enable`:: 210 | Enables the Sentry logging extension, defaults to `false`. 211 | 212 | `quarkus.log.sentry.dsn`:: 213 | The _DSN_ where events are sent. 214 | 215 | `quarkus.log.sentry.level`:: 216 | The log level, defaults to `WARN`. 217 | 218 | `quarkus.log.sentry.in-app-packages`:: 219 | Configures application package prefixes. 220 | 221 | `quarkus.log.sentry.environment`:: 222 | Sets the environment value. 223 | 224 | `quarkus.log.sentry.release`:: 225 | Sets the release value. 226 | 227 | === GELF format 228 | 229 | You can configure the output logging to be in _GELF_ format instead of plain text. 
230 | 231 | [source, shell-session] 232 | ---- 233 | ./mvnw quarkus:add-extension -Dextensions="quarkus-logging-gelf" 234 | ---- 235 | 236 | `quarkus.log.handler.gelf.enabled`:: 237 | Enables the _GELF_ logging handler, defaults to `false`. 238 | 239 | `quarkus.log.handler.gelf.host`:: 240 | The hostname/IP of Logstash/Graylog. Prepend `tcp:` to use the TCP protocol, defaults to `udp:localhost`. 241 | 242 | `quarkus.log.handler.gelf.port`:: 243 | The port, defaults to `12201`. 244 | 245 | `quarkus.log.handler.gelf.version`:: 246 | The _GELF_ version, defaults to `1.1`. 247 | 248 | `quarkus.log.handler.gelf.extract-stack-trace`:: 249 | Posts the stack trace to the StackTrace field, defaults to `true`. 250 | 251 | `quarkus.log.handler.gelf.stack-trace-throwable-reference`:: 252 | Sets the cause level to which the stack trace is extracted; `0` is the full stack trace, defaults to `0`. 253 | 254 | `quarkus.log.handler.gelf.filter-stack-trace`:: 255 | Sets stack-trace filtering, defaults to `false`. 256 | 257 | `quarkus.log.handler.gelf.timestamp-pattern`:: 258 | Sets the timestamp format as a Java date pattern, defaults to `yyyy-MM-dd HH:mm:ss,SSS`. 259 | 260 | `quarkus.log.handler.gelf.level`:: 261 | Sets the log level using the `java.util.logging.Level` class, defaults to `ALL`. 262 | 263 | `quarkus.log.handler.gelf.facility`:: 264 | The name of the facility, defaults to `jboss-logmanager`. 265 | 266 | `quarkus.log.handler.gelf.additional-field.<name>.<attribute>`:: 267 | Posts additional fields (e.g., `quarkus.log.handler.gelf.additional-field.field1.type=String`)
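Putting the main properties together, a minimal GELF setup in `application.properties` could look like this sketch (the host is a placeholder):

[source, properties]
----
quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=tcp:logstash.example.com
quarkus.log.handler.gelf.port=12201
----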
-------------------------------------------------------------------------------- /quarkus-reactive-messaging.adoc: -------------------------------------------------------------------------------- 1 | = Quarkus Reactive Messaging Streams 2 | :experimental: true 3 | :product-name: 4 | :version: 1.4.2 5 | 6 | Quarkus relies on the MicroProfile Reactive Messaging spec to implement reactive messaging streams. 7 | 8 | This cheat sheet covers the integrations between Quarkus and messaging systems such as Apache Kafka and the AMQP and MQTT protocols. 9 | 10 | == Creating the project 11 | 12 | [source, bash-shell, subs=attributes+] 13 | ---- 14 | mvn "io.quarkus:quarkus-maven-plugin:{version}.Final:create" \ 15 | -DprojectGroupId="org.acme" \ 16 | -DprojectArtifactId="greeting" \ 17 | -DprojectVersion="1.0-SNAPSHOT" \ 18 | -DclassName="org.acme.GreetingResource" \ 19 | -Dextensions="reactive-messaging, mutiny" \ 20 | -Dpath="/hello" 21 | ---- 22 | 23 | TIP: You can generate the project in https://code.quarkus.io/ 24 | 25 | == Producing Messages 26 | 27 | There are two ways: 28 | 29 | * Declarative annotation-based approach: `@org.eclipse.microprofile.reactive.messaging.Outgoing`. This is perfect for the reactive code approach. 30 | * Programmatic-based approach: injecting the `org.eclipse.microprofile.reactive.messaging.Emitter` interface. This is perfect for linking imperative code to reactive code. 31 | 32 | === Declarative 33 | 34 | [source, java] 35 | ---- 36 | @ApplicationScoped 37 | public class PriceMessageProducer { 38 | 39 | @Outgoing("prices") // <1> 40 | public Multi<Double> generate() { 41 | return Multi.createFrom().ticks().every(Duration.ofSeconds(1)) 42 | .map(x -> random.nextDouble()); 43 | } 44 | 45 | } 46 | ---- 47 | <1> It sets `prices` as a channel. 48 | 49 | By default, messages are only dispatched to a single consumer. 50 | By using `@io.smallrye.reactive.messaging.annotations.Broadcast` the message is dispatched to all consumers. 51 | 52 | [source, java] 53 | ---- 54 | @Outgoing("out") 55 | @Broadcast(2) // <1> 56 | ---- 57 | <1> Sets the number of consumers. If not set, then all consumers receive the message. 58 | 59 | === Programmatic 60 | 61 | [source, java] 62 | ---- 63 | import org.eclipse.microprofile.reactive.messaging.Channel; 64 | 65 | @ApplicationScoped 66 | public class PriceMessageProducer { 67 | 68 | @Channel("prices") // <1> 69 | Emitter<Double> emitter; 70 | 71 | public void send(double d) { 72 | emitter.send(d); 73 | } 74 | } 75 | ---- 76 | <1> It configures the `Emitter` channel to `prices`. 77 | 78 | You can use `org.eclipse.microprofile.reactive.messaging.OnOverflow` to configure back pressure on the `Emitter`. 79 | 80 | [source, java] 81 | ---- 82 | @Channel("prices") 83 | @OnOverflow(value = OnOverflow.Strategy.BUFFER, bufferSize = 256) // <1> 84 | Emitter<Double> emitter; 85 | ---- 86 | <1> Overflow strategy. 87 | 88 | The possible strategies are: `BUFFER`, `UNBOUNDED_BUFFER`, `DROP`, `FAIL`, `LATEST` and `NONE`. 89 | 90 | === Messages 91 | 92 | If you want to send more information apart from the payload, you can use the `org.eclipse.microprofile.reactive.messaging.Message` interface instead of the body content directly. 93 | 94 | [source, java] 95 | ---- 96 | @Channel("prices") Emitter<Message<Double>> emitter; 97 | 98 | MyMetadata metadata = new MyMetadata(); 99 | emitter.send(Message.of(d, Metadata.of(metadata))); 100 | ---- 101 | 102 | The framework automatically acknowledges messages, but you can change that with the `@org.eclipse.microprofile.reactive.messaging.Acknowledgment` annotation and/or with the `Message` instance. 103 | 104 | [source, java] 105 | ---- 106 | @Outgoing("out") 107 | @Acknowledgment(Acknowledgment.Strategy.PRE_PROCESSING) 108 | public String process(String input) {} 109 | ---- 110 | 111 | Possible values are: 112 | 113 | POST_PROCESSING:: It is executed once the produced message is acknowledged. 114 | PRE_PROCESSING:: It is executed before the message is processed by the method. 115 | MANUAL:: It is done by the user. 116 | NONE:: No acknowledgment is performed. 117 | 118 | [source, java] 119 | ---- 120 | @Outgoing("out") 121 | public Message<Integer> processAndProduceNewMessage(Message<Integer> in) { 122 | return Message.of(in.getPayload(), 123 | () -> { 124 | return in.ack(); 125 | }); 126 | } 127 | ---- 128 | 129 | == Consuming Messages 130 | 131 | There are two ways: 132 | 133 | * Declarative annotation-based approach: `@org.eclipse.microprofile.reactive.messaging.Incoming`. This is perfect for the reactive code approach. 134 | * Programmatic-based approach: injecting the `io.smallrye.mutiny.Multi` or `org.reactivestreams.Publisher` interface. 135 | 136 | === Declarative 137 | 138 | [source, java] 139 | ---- 140 | @ApplicationScoped 141 | public class PayloadProcessingBean { 142 | 143 | @Incoming("prices") // <1> 144 | public void process(String in) { 145 | System.out.println(in.toUpperCase()); 146 | } 147 | } 148 | ---- 149 | <1> Consumes messages from the `prices` channel. 150 | 151 | By default, having multiple producers to the same channel is considered an error, but you can use the `@io.smallrye.reactive.messaging.annotations.Merge` annotation to support it. 
152 | 153 | [source, java] 154 | ---- 155 | @Incoming("in1") 156 | @Outgoing("out") 157 | public int inc (int i) {} 158 | 159 | @Incoming("in2") 160 | @Outgoing("out") 161 | public int mult (int i) {} 162 | 163 | @Incoming("out") 164 | @Merge 165 | public void getAll(int i) {} 166 | ---- 167 | 168 | The following strategies are supported in the `Merge` annotation: `ONE` to pick the first source only, `CONCAT` to concat the sources and `MERGE` (the default) to merge the different sources. 169 | 170 | The `@Incoming` annotation can be repeated to listen to more than one channel. 171 | 172 | [source, java] 173 | ---- 174 | @Incoming("channel-1") 175 | @Incoming("channel-2") 176 | public void process(String s) {} 177 | ---- 178 | 179 | === Programmatic 180 | 181 | [source, java] 182 | ---- 183 | @Channel("my-channel") 184 | Multi<String> streamOfPayloads; 185 | 186 | streamOfPayloads.map(s -> s.toUpperCase()); 187 | ---- 188 | 189 | == Connectors 190 | 191 | You need to set the mapping between the `channel` and the topic in the remote broker. 192 | The configuration parameters format is: `mp.messaging.[incoming|outgoing].[channel-name].[attribute]=[value]`. 193 | 194 | `incoming` or `outgoing` defines whether the channel is used as a consumer or as a producer. 195 | 196 | `channel-name` is the name of the channel you've given in the annotation. 197 | 198 | `attributes` are specific to the connector used. 199 | 200 | === Apache Kafka 201 | 202 | [source, bash-shell] 203 | ---- 204 | ./mvnw quarkus:add-extension -Dextensions="reactive-messaging-kafka" 205 | ---- 206 | 207 | [source, properties] 208 | ---- 209 | mp.messaging.outgoing.my-channel-out.connector=smallrye-kafka 210 | mp.messaging.outgoing.my-channel-out.topic=prices 211 | mp.messaging.outgoing.my-channel-out.bootstrap.servers=localhost:9092 212 | mp.messaging.outgoing.my-channel-out.value.serializer=org.apache.kafka.common.serialization.IntegerSerializer 213 | 214 | mp.messaging.incoming.my-channel-in.connector=smallrye-kafka 215 | mp.messaging.incoming.my-channel-in.value.deserializer=org.apache.kafka.common.serialization.IntegerDeserializer 216 | ... 217 | ---- 218 | 219 | A complete list of supported properties is provided on the Kafka site. 220 | For the https://kafka.apache.org/documentation/#producerconfigs[producer] and for the https://kafka.apache.org/documentation/#consumerconfigs[consumer]. 221 | 222 | SmallRye Reactive Messaging Kafka provides `io.smallrye.reactive.messaging.kafka.KafkaRecord` as an implementation of `org.eclipse.microprofile.reactive.messaging.Message`. 223 | 224 | [source, java] 225 | ---- 226 | OutgoingKafkaRecord outgoingKafkaRecord = KafkaRecord.of(s.id, JsonbBuilder.create().toJson(s)); 227 | 228 | metadata = OutgoingKafkaRecordMetadataBuilder.builder().withTimestamp(Instant.now()).build(); 229 | outgoingKafkaRecord.withMetadata(metadata); 230 | ---- 231 | 232 | === AMQP 233 | 234 | [source, bash-shell] 235 | ---- 236 | ./mvnw quarkus:add-extension -Dextensions="reactive-messaging-amqp" 237 | ---- 238 | 239 | [source, properties] 240 | ---- 241 | amqp-host=amqp 242 | amqp-port=5672 243 | amqp-username=quarkus 244 | amqp-password=quarkus 245 | 246 | mp.messaging.outgoing.my-channel-out.connector=smallrye-amqp 247 | mp.messaging.outgoing.my-channel-out.address=prices 248 | mp.messaging.outgoing.my-channel-out.durable=true 249 | 250 | mp.messaging.incoming.my-channel-in.connector=smallrye-amqp 251 | ... 
252 | ---- 253 | 254 | SmallRye Reactive Messaging AMQP provides `io.smallrye.reactive.messaging.amqp.IncomingAmqpMetadata` and `io.smallrye.reactive.messaging.amqp.OutgoingAmqpMetadata` to deal with AMQP metadata. 255 | 256 | [source, java] 257 | ---- 258 | Optional<IncomingAmqpMetadata> metadata = incoming.getMetadata(IncomingAmqpMetadata.class); 259 | 260 | OutgoingAmqpMetadata metadata = OutgoingAmqpMetadata.builder() 261 | .withAddress("customized-address") 262 | .withDurable(true) 263 | .withSubject("my-subject") 264 | .build(); 265 | incoming.addMetadata(metadata); 266 | ---- 267 | 268 | A complete list of supported properties for the AMQP integration is provided at the https://smallrye.io/smallrye-reactive-messaging/#_interacting_using_amqp[Reactive Messaging site]. 269 | 270 | === MQTT 271 | 272 | [source, bash-shell] 273 | ---- 274 | ./mvnw quarkus:add-extension -Dextensions="reactive-messaging-mqtt" 275 | ---- 276 | 277 | [source, properties] 278 | ---- 279 | mp.messaging.outgoing.my-channel-out.type=smallrye-mqtt 280 | mp.messaging.outgoing.my-channel-out.topic=prices 281 | mp.messaging.outgoing.my-channel-out.host=localhost 282 | mp.messaging.outgoing.my-channel-out.port=1883 283 | mp.messaging.outgoing.my-channel-out.auto-generated-client-id=true 284 | 285 | mp.messaging.incoming.my-channel-in.type=smallrye-mqtt 286 | ... 287 | ---- 288 | 289 | A complete list of supported properties for the MQTT integration is provided at the https://smallrye.io/smallrye-reactive-messaging/smallrye-reactive-messaging/2/mqtt/mqtt.html[Reactive Messaging site]. -------------------------------------------------------------------------------- /quarkus-spring.adoc: -------------------------------------------------------------------------------- 1 | = Quarkus-Spring compatibility 2 | :experimental: true 3 | :product-name: 4 | :version: 1.11.0 5 | 6 | While users are encouraged to use Java EE, MicroProfile, or Quarkus annotations/extensions, Quarkus provides an API compatibility layer for some of the Spring projects. 7 | If you are coming from Spring development, these integrations might make your transition to Quarkus smoother. 8 | 9 | IMPORTANT: Please note that the Spring support in Quarkus does not start a Spring Application Context nor are any Spring infrastructure classes run. Spring classes and annotations are only used for reading metadata and/or are used as user code method return types or parameter types. 10 | 11 | == Spring DI 12 | 13 | [source, bash] 14 | ---- 15 | ./mvnw quarkus:add-extension 16 | -Dextensions="quarkus-spring-di" 17 | ---- 18 | 19 | [source, java] 20 | ---- 21 | import org.springframework.context.annotation.Bean; 22 | import org.springframework.context.annotation.Configuration; 23 | 24 | @Configuration 25 | public class AppConfiguration { 26 | 27 | @Bean(name = "capitalizeFunction") 28 | public StringFunction capitalizer() { 29 | return String::toUpperCase; 30 | } 31 | } 32 | ---- 33 | 34 | Or as a component: 35 | 36 | [source, java] 37 | ---- 38 | import org.springframework.stereotype.Component; 39 | 40 | @Component("noopFunction") 41 | public class NoOpSingleStringFunction 42 | implements StringFunction { 43 | } 44 | ---- 45 | 46 | You can also declare a bean as a service and inject configuration properties from the `application.properties` file. 
47 | 48 | [source, java] 49 | ---- 50 | import org.springframework.beans.factory.annotation.Value; 51 | import org.springframework.stereotype.Service; 52 | 53 | @Service 54 | public class MessageProducer { 55 | 56 | @Value("${greeting.message}") 57 | String message; 58 | 59 | } 60 | ---- 61 | 62 | You can then inject beans using `@Autowired` or constructor injection, both in a component and in a JAX-RS resource. 63 | 64 | [source, java] 65 | ---- 66 | import org.springframework.beans.factory.annotation.Autowired; 67 | import org.springframework.beans.factory.annotation.Qualifier; 68 | import org.springframework.stereotype.Component; 69 | 70 | @Component 71 | public class GreeterBean { 72 | 73 | private final MessageProducer messageProducer; 74 | 75 | @Autowired @Qualifier("noopFunction") 76 | StringFunction noopStringFunction; 77 | 78 | public GreeterBean(MessageProducer messageProducer) { 79 | this.messageProducer = messageProducer; 80 | } 81 | } 82 | ---- 83 | 84 | == Spring Web 85 | 86 | [source, bash] 87 | ---- 88 | ./mvnw quarkus:add-extension 89 | -Dextensions="quarkus-spring-web" 90 | ---- 91 | 92 | It supports the REST-related features. 93 | 94 | [source, java] 95 | ---- 96 | import org.springframework.web.bind.annotation.GetMapping; 97 | import org.springframework.web.bind.annotation.RequestMapping; 98 | import org.springframework.web.bind.annotation.RestController; 99 | 100 | @RestController 101 | @RequestMapping("/greeting") 102 | public class GreetingController { 103 | 104 | private final GreetingBean greetingBean; 105 | 106 | public GreetingController(GreetingBean greetingBean) { 107 | this.greetingBean = greetingBean; 108 | } 109 | 110 | @GetMapping("/{name}") 111 | public Greeting hello(@PathVariable(name = "name") 112 | String name) { 113 | return new Greeting(greetingBean.greet(name)); 114 | } 115 | } 116 | ---- 117 | 118 | Supported annotations are: `RestController`, `RequestMapping`, `GetMapping`, `PostMapping`, `PutMapping`, `DeleteMapping`, `PatchMapping`, `RequestParam`, `RequestHeader`, `MatrixVariable`, `PathVariable`, `CookieValue`, `RequestBody`, `ResponseStatus`, `ExceptionHandler` and `RestControllerAdvice`. 119 | 120 | `org.springframework.http.ResponseEntity` is also supported as a return type. 121 | 122 | == Spring Boot Configuration 123 | 124 | [source, bash] 125 | ---- 126 | ./mvnw quarkus:add-extension 127 | -Dextensions="quarkus-spring-boot-properties" 128 | ---- 129 | 130 | [source, java] 131 | ---- 132 | import org.springframework.boot.context.properties.ConfigurationProperties; 133 | 134 | @ConfigurationProperties("example") 135 | public final class ClassProperties { 136 | 137 | private String value; 138 | private AnotherClass anotherClass; 139 | 140 | // getters/setters 141 | } 142 | ---- 143 | 144 | [source, properties] 145 | ---- 146 | example.value=class-value 147 | example.anotherClass.value=true 148 | ---- 149 | 150 | == Spring Security 151 | 152 | [source, bash] 153 | ---- 154 | ./mvnw quarkus:add-extension 155 | -Dextensions="spring-security" 156 | ---- 157 | 158 | You need to choose a security extension to define users, roles, and so on, such as `openid-connect`, `oauth2`, `properties-file` or `security-jdbc`. 
== Spring Security

[source, bash]
----
./mvnw quarkus:add-extension -Dextensions="spring-security"
----

You need to choose a security extension to define users, roles, and so on, such as `openid-connect`, `oauth2`, `properties-file`, or `security-jdbc`.
Then you can use Spring Security annotations to protect classes or methods:

[source, java]
----
import org.springframework.security.access.annotation.Secured;

@Secured("admin")
@GetMapping
public String hello() {
    return "hello";
}
----

Quarkus provides support for some of the most used features of Spring Security's `@PreAuthorize` annotation.

Some examples:

*hasRole*

* `@PreAuthorize("hasRole('admin')")`
* `@PreAuthorize("hasRole(@roles.USER)")` where `roles` is a bean defined with the `@Component` annotation and `USER` is a public field of that class.

*hasAnyRole*

* `@PreAuthorize("hasAnyRole(@roles.USER, 'view')")`

*Permit and Deny All*

* `@PreAuthorize("permitAll()")`
* `@PreAuthorize("denyAll()")`

*Anonymous and Authenticated*

* `@PreAuthorize("isAnonymous()")`
* `@PreAuthorize("isAuthenticated()")`

*Expressions*

* Checking that the name of the `person` parameter matches the username of the currently logged-in user:

[source, java]
----
import org.springframework.security.access.prepost.PreAuthorize;

@PreAuthorize("#person.name == authentication.principal.username")
public void doSomethingElse(Person person){}
----

* Delegating to a bean method that decides whether the user can access:

[source, java]
----
import org.springframework.security.access.prepost.PreAuthorize;

@PreAuthorize("@personChecker.check(#person, authentication.principal.username)")
public void doSomething(Person person){}

@Component
public class PersonChecker {
    public boolean check(Person person, String username) {
        return person.getName().equals(username);
    }
}
----

* Combining expressions:

[source, java]
----
@PreAuthorize("hasAnyRole('user', 'admin') AND #user == principal.username")
public void allowedForUser(String user) {}
----

== Spring Data JPA

[source, bash]
----
./mvnw quarkus:add-extension -Dextensions="quarkus-spring-data-jpa"
----

[source, java]
----
import org.springframework.data.repository.CrudRepository;

public interface FruitRepository
        extends CrudRepository<Fruit, Long> {
    List<Fruit> findByColor(String color);
    List<Fruit> findByNameOrderByExpire(String name);
    Stream<String> findNameByNameAndOriginAllIgnoreCase(String name, String origin);
}
----

And then you can inject it either as shown in the Spring DI section or in a JAX-RS resource as shown in the Spring Web section.

Interfaces supported:

* `org.springframework.data.repository.Repository`
* `org.springframework.data.repository.CrudRepository`
* `org.springframework.data.repository.PagingAndSortingRepository`
* `org.springframework.data.jpa.repository.JpaRepository`

NOTE: Generated repositories are automatically annotated with `@Transactional`.
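The repository examples assume a `Fruit` JPA entity that the cheat sheet never shows; a minimal sketch (the field names are guesses derived from the query method names above) could be:

[source, java]
----
import java.time.LocalDate;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Hypothetical entity backing FruitRepository.
@Entity
public class Fruit {

    @Id
    @GeneratedValue
    private Long id;
    private String name;
    private String color;
    private String origin;
    private LocalDate expire;

    // getters/setters
}
----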
Repository fragments are also supported:

[source, java]
----
public interface PersonFragment {
    void makeNameUpperCase(Person person);
}

public class PersonFragmentImpl implements PersonFragment {
    @Override
    public void makeNameUpperCase(Person person) {}
}

public interface PersonRepository
        extends JpaRepository<Person, Long>, PersonFragment {
}
----

User-defined queries:

[source, java]
----
@Query("select m from Movie m where m.rating = ?1")
Iterator<Movie> findByRating(String rating);

@Modifying
@Query("delete from Movie where rating = :rating")
void deleteByRating(@Param("rating") String rating);

@Query(value = "SELECT COUNT(*), publicationYear FROM Book GROUP BY publicationYear")
List<BookCountByYear> findAllByPublicationYear2();

interface BookCountByYear {
    int getPublicationYear();

    Long getCount();
}
----

What is currently unsupported:

* Methods of `org.springframework.data.repository.query.QueryByExampleExecutor`
* QueryDSL support
* Customizing the base repository
* `java.util.concurrent.Future` as return type
* Native and named queries when using `@Query`

== Spring Data Rest

[source, bash]
----
./mvnw quarkus:add-extension -Dextensions="spring-data-rest"
----

[source,java]
----
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import org.springframework.data.rest.core.annotation.RestResource;

@RepositoryRestResource(exported = false, path = "/my-fruits")
public interface FruitsRepository extends CrudRepository<Fruit, Long> {
    @RestResource(exported = true)
    Optional<Fruit> findById(Long id);

    @RestResource(exported = true)
    Iterable<Fruit> findAll();
}
----

The `spring-data-jpa` extension generates an implementation for this repository, and the `spring-data-rest` extension then generates a REST CRUD resource for it.
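As a hedged illustration, the generated resource could be exercised with REST Assured (the framework used in this repository's testing cheat sheet); the existing id and status code are assumptions:

[source, java]
----
import static io.restassured.RestAssured.given;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

@QuarkusTest
public class FruitsEndpointTest {

    @Test
    public void testGeneratedEndpoint() {
        // The path comes from @RepositoryRestResource above.
        given()
            .when().get("/my-fruits/1")
            .then()
            .statusCode(200);
    }
}
----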
The following interfaces are supported:

* `org.springframework.data.repository.CrudRepository`
* `org.springframework.data.repository.PagingAndSortingRepository`
* `org.springframework.data.jpa.repository.JpaRepository`

== Spring Cache

[source, bash]
----
./mvnw quarkus:add-extension -Dextensions="spring-cache"
----

[source, java]
----
@org.springframework.cache.annotation.Cacheable("someCache")
public Greeting greet(String name) {
    return new Greeting("hello " + name);
}
----

Quarkus provides compatibility with the following Spring Cache annotations:

* `@Cacheable`
* `@CachePut`
* `@CacheEvict`
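The latter two follow the same pattern as `@Cacheable`; a minimal, hypothetical sketch reusing the `Greeting` type from the example above (its constructor is assumed):

[source, java]
----
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;

public class GreetingUpdater {

    // Replaces the cached entry computed for this name.
    @CachePut("someCache")
    public Greeting update(String name) {
        return new Greeting("updated " + name);
    }

    // Removes the cached entry for this name.
    @CacheEvict("someCache")
    public void invalidate(String name) {
    }
}
----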
== Spring Schedule

[source, bash]
----
./mvnw quarkus:add-extension -Dextensions="spring-scheduled"
----

[source, java]
----
@org.springframework.scheduling.annotation.Scheduled(cron="*/5 * * * * ?")
void cronJob() {
    System.out.println("Cron expression hardcoded");
}

// Other supported forms:
@Scheduled(fixedRate = 1000)
@Scheduled(cron = "{cron.expr}")
----

== Spring Cloud Config Client

[source, bash]
----
./mvnw quarkus:add-extension -Dextensions="quarkus-spring-cloud-config-client"
----

[source, properties]
----
quarkus.spring-cloud-config.uri=http://localhost:8089
quarkus.spring-cloud-config.username=user
quarkus.spring-cloud-config.password=pass
quarkus.spring-cloud-config.enabled=true
----

[source, java]
----
@ConfigProperty(name = "greeting.message")
String greeting;
----

Possible configuration options (the prefix is `quarkus.spring-cloud-config`):

`uri`::
Base URI where the Spring Cloud Config Server is available. (default: `localhost:8888`)

`username`::
Username to be used if the Config Server has BASIC Auth enabled.

`password`::
Password to be used if the Config Server has BASIC Auth enabled.

`enabled`::
Enables reading configuration from the Spring Cloud Config Server. (default: `false`)

`fail-fast`::
If true, the application does not start when the server cannot be reached. (default: `false`)

`connection-timeout`::
The amount of time to wait when initially establishing a connection before giving up and timing out. (default: `10S`)

`read-timeout`::
The amount of time to wait for a read on a socket before an exception is thrown. (default: `60S`)

`label`::
The label to be used to pull remote configuration properties.
--------------------------------------------------------------------------------
/quarkus-testing.adoc:
--------------------------------------------------------------------------------
= Quarkus Testing
:experimental: true
:product-name:
:version: 1.5.0

Quarkus provides a set of features that make it easy to test Quarkus applications, as well as tight integration with the REST Assured framework for writing black-box tests.

This cheat sheet covers how to write component/integration tests in Quarkus.

== Creating the project

[source, bash-shell, subs=attributes+]
----
mvn "io.quarkus:quarkus-maven-plugin:{version}.Final:create" \
  -DprojectGroupId="org.acme" \
  -DprojectArtifactId="greeting" \
  -DprojectVersion="1.0-SNAPSHOT" \
  -DclassName="org.acme.GreetingResource" \
  -Dpath="/hello"
----

TIP: You can also generate the project at https://code.quarkus.io/

== Testing

The Quarkus archetype adds JUnit 5 and the REST Assured library as test dependencies for testing REST endpoints.

[source, java]
----
@QuarkusTest
public class GreetingResourceTest {

    @Test
    public void testHelloEndpoint() {
        given()
            .when().get("/hello")
            .then()
            .statusCode(200)
            .body(is("hello"));
    }
}
----

By default, the test HTTP and HTTPS ports are `8081` and `8444`.
They can be overridden by setting the following properties:

[source, properties]
----
quarkus.http.test-port=9090
quarkus.http.test-ssl-port=9091
----

If a static resource is served, its URL can be injected into the test:

[source, java]
----
@TestHTTPResource("index.html")
URL url;
----

== Stubbing

To provide an alternative implementation of an interface, annotate the alternative service with the `@io.quarkus.test.Mock` annotation.

[source, java]
----
@Mock
@ApplicationScoped
public class StubbedExternalService extends ExternalService {}

@Inject
ExternalService service; // <1>
----
<1> `service` is an instance of `StubbedExternalService`.

The alternative implementation overrides the real service for *all* test classes.

== Mock

You can also create mocks of your services with Mockito.
Add the following dependency: `io.quarkus:quarkus-junit5-mockito`.

[source, java]
----
@InjectMock
GreetingService greetingService;

@BeforeEach
public void setup() {
    Mockito.when(greetingService.greet()).thenReturn("Hi");
}

@Path("/hello")
public class ExampleResource {

    @Inject
    GreetingService greetingService; // <1>
}
----
<1> Mocked service.

Mocks are scoped to the test class, so they are only valid in the current class.
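Putting the pieces together, a complete test could look like the following sketch (it assumes `GreetingService#greet` backs the `/hello` endpoint, which is not stated in the original):

[source, java]
----
import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

import io.quarkus.test.junit.QuarkusTest;
import io.quarkus.test.junit.mockito.InjectMock;
import org.junit.jupiter.api.Test;
import org.mockito.Mockito;

@QuarkusTest
public class ExampleResourceTest {

    @InjectMock
    GreetingService greetingService;

    @Test
    public void testMockedGreeting() {
        // The endpoint now returns the stubbed value instead of the real one.
        Mockito.when(greetingService.greet()).thenReturn("Hi");

        given()
            .when().get("/hello")
            .then()
            .statusCode(200)
            .body(is("Hi"));
    }
}
----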
Spies are also supported by using `@InjectSpy`:

[source, java]
----
@InjectSpy
GreetingService greetingService;

Mockito.verify(greetingService, Mockito.times(1)).greet();
----

== Interceptors

Since tests are in fact full CDI beans, you can apply CDI interceptors or create meta-annotations:

[source, java]
----
@QuarkusTest
@Stereotype
@Transactional
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface TransactionalQuarkusTest {}

@TransactionalQuarkusTest
public class TestStereotypeTestCase {}
----

== MicroProfile REST Client

To mock a REST client, you need to define the REST client interface with `@ApplicationScoped` scope:

[source, java]
----
@ApplicationScoped
@RegisterRestClient
public interface GreetingService {
}

@InjectMock
@RestClient
GreetingService greetingService;

Mockito.when(greetingService.hello()).thenReturn("hello from mockito");
----

== Active Record Pattern with Panache

When implementing the active record pattern with Panache, some methods are `static`, which makes mocking a bit more complex.
To avoid this complexity, Quarkus provides a special `PanacheMock` class that can be used to mock entities with static methods:

[source, xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-panache-mock</artifactId>
    <scope>test</scope>
</dependency>
----

[source, java]
----
@Test
public void testPanacheMocking() {
    PanacheMock.mock(Person.class);

    Mockito.when(Person.count()).thenReturn(23L);
    Assertions.assertEquals(23, Person.count());
    PanacheMock.verify(Person.class, Mockito.times(1)).count();
}
----

== Quarkus Test Resource

You can execute some logic before the first test runs and after the whole test suite finishes.

You need to create a class implementing the `QuarkusTestResourceLifecycleManager` interface and register it in the test via the `@QuarkusTestResource` annotation.

[source, java]
----
public class MyCustomTestResource
        implements QuarkusTestResourceLifecycleManager {

    @Override
    public Map<String, String> start() {
        // return system properties that
        // are set before running tests
        return Collections.emptyMap();
    }

    @Override
    public void stop() {
    }

    // optional
    public void init(Map<String, String> initArgs) {} // <1>

    // optional
    @Override
    public void inject(Object testInstance) {}

    // optional
    @Override
    public int order() {
        return 0;
    }
}
----
<1> Args are taken from `@QuarkusTestResource(initArgs)`.

IMPORTANT: Returning new system properties implies that if you run the tests in parallel, you need to run them in different JVMs.

And the registration of the test resource:

[source, java]
----
@QuarkusTestResource(MyCustomTestResource.class)
public class MyTest {}
----
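When the resource implements `init`, arguments can be supplied at registration time through `initArgs`; a hedged sketch (the `port` argument is hypothetical):

[source, java]
----
import io.quarkus.test.common.QuarkusTestResource;
import io.quarkus.test.common.ResourceArg;

// The name/value pairs end up in the Map passed to init().
@QuarkusTestResource(
    value = MyCustomTestResource.class,
    initArgs = @ResourceArg(name = "port", value = "8089")
)
public class MyConfiguredTest {}
----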
=== Provided Test Resources

Quarkus provides some test resource implementations:

*H2*

Dependency: `io.quarkus:quarkus-test-h2`
Registration: `@QuarkusTestResource(H2DatabaseTestResource.class)`

*Derby*

Dependency: `io.quarkus:quarkus-test-derby`
Registration: `@QuarkusTestResource(DerbyDatabaseTestResource.class)`

*Artemis*

Dependency: `io.quarkus:quarkus-test-artemis`
Registration: `@QuarkusTestResource(ArtemisTestResource.class)`

*LDAP*

Dependency: `io.quarkus:quarkus-test-ldap`
Registration: `@QuarkusTestResource(LdapServerTestResource.class)`

You can populate LDAP entries by creating a `quarkus-io.ldif` file at the root of the classpath.

*Vault*

Dependency: `io.quarkus:quarkus-test-vault`
Registration: `@QuarkusTestResource(VaultTestLifecycleManager.class)`

*Amazon Lambda*

Dependency: `io.quarkus:quarkus-test-amazon-lambda`
Registration: `@QuarkusTestResource(LambdaResourceManager.class)`

*Kubernetes Mock Server*

Dependency: `io.quarkus:quarkus-test-kubernetes-client`
Registration: `@QuarkusTestResource(KubernetesMockServerTestResource.class)`

[source, java]
----
@MockServer
private KubernetesMockServer mockServer;

@Test
public void test() {
    final Pod pod1 = ...
    mockServer
        .expect()
        .get()
        .withPath("/api/v1/namespaces/test/pods")
        .andReturn(200,
            new PodListBuilder()
        ...
}
----

== Native Testing

To test native executables, annotate the test with `@NativeImageTest`.

== About Profiles

Quarkus allows you to have multiple configurations in the same file (`application.properties`).

The syntax for this is `%{profile}.config.key=value`.

The `test` profile is used when tests are executed.

[source, properties]
----
greeting.message=This is in Production
%test.greeting.message=This is a Test
----

== Amazon Lambdas

You can write tests for Amazon Lambdas:

[source, xml]
----
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-test-amazon-lambda</artifactId>
    <scope>test</scope>
</dependency>
----

[source, java]
----
@Test
public void testLambda() {
    MyInput in = new MyInput();
    in.setGreeting("Hello");
    in.setName("Stu");
    MyOutput out = io.quarkus.amazon.lambda.LambdaClient.invoke(MyOutput.class, in);
}
----

== Quarkus Email

Quarkus offers an extension to send emails.
A property is provided to send emails to a mock instance instead of using a real SMTP server.

If `quarkus.mailer.mock` is set to true, which is the default value in the `dev` and `test` profiles, you can inject `MockMailbox` to get the sent messages.

[source, java]
----
@Inject
MockMailbox mailbox;

@BeforeEach
void init() {
    mailbox.clear();
}

List<Mail> sent = mailbox
    .getMessagesSentTo("to@acme.org");
----
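A full test method tying this together could look like the following sketch (the `/send` endpoint and the single-message assertion are assumptions):

[source, java]
----
import static io.restassured.RestAssured.given;

import java.util.List;
import io.quarkus.mailer.Mail;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

@Test
public void testMailSentOnce() {
    // Trigger the code path that sends the email.
    given()
        .when().get("/send")
        .then()
        .statusCode(200);

    // The mock mailbox records every message instead of delivering it.
    List<Mail> sent = mailbox.getMessagesSentTo("to@acme.org");
    Assertions.assertEquals(1, sent.size());
}
----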