├── .Ulysses-Group.plist
├── .Ulysses-Settings.plist
├── .gitignore
├── Foreword
│   ├── .Ulysses-Group.plist
│   └── Foreword.md
├── Introduction
│   ├── .Ulysses-Group.plist
│   └── Preface.md
├── LICENSE
├── README.md
├── Thank you
│   ├── .Ulysses-Group.plist
│   └── Thank you.md
├── chapter1
│   ├── .Ulysses-Group.plist
│   ├── Chapter 1- Creating the sample service.md
│   └── status
├── chapter2
│   ├── .Ulysses-Group.plist
│   ├── Chapter 2- Deployment Basics.md
│   ├── developer-catalog-template.png
│   ├── status
│   ├── template-instantiation.png
│   └── topology-view-template.png
├── chapter3
│   ├── .Ulysses-Group.plist
│   ├── 1-person-service-on-quayio.png
│   ├── 2-event-log-openshift.png
│   ├── 3-helm-chart-release-ocp.png
│   ├── 4-building-operator-bundle.png
│   ├── 5-installed-operators.png
│   ├── Chapter 3- Packaging with Helm and Kubernetes Operators.md
│   └── status
├── chapter4
│   ├── .Ulysses-Group.plist
│   ├── 1-install-pipelines-operator.png
│   ├── 2-installed-pipelines-operator.png
│   ├── 3-quarkus-app-props.png
│   ├── 4-all-cluster-tasks.png
│   ├── 5-pipeline-builder.png
│   ├── 6-linking-workspaces.png
│   ├── 7-pipeline-run.png
│   ├── 8-simplified-maven-task.png
│   ├── Chapter 4- CI-CD with Tekton Pipelines.md
│   └── status
├── chapter5
│   ├── .Ulysses-Group.plist
│   ├── Chapter Five- GitOps and Argo CD.md
│   ├── argocd-new-app.png
│   ├── argocd-on-openshift.png
│   ├── argocd-sync-failed.png
│   ├── argocd-sync-log.png
│   ├── argocd-sync-success.png
│   ├── gitops-delivery-chain.png
│   ├── gitops-dev-pipeline.png
│   ├── gitops-stage-pipeline.png
│   ├── install-gitops-operator.png
│   ├── status
│   ├── tekton-non-gitops-pipeline.png
│   └── tekton-parameter-mapping.png
└── using-the-examples
    ├── Using the Examples discussed in this book.md
    └── status
/.Ulysses-Group.plist:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/.Ulysses-Group.plist
--------------------------------------------------------------------------------
/.Ulysses-Settings.plist:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
3 | <plist version="1.0">
4 | <dict>
5 | 	<key>defaultPathExtensions</key>
6 | 	<string>md</string>
7 | 	<key>enforceFencedCodeBlocks</key>
8 | 	<true/>
9 | 	<key>sheetFormat</key>
10 | 	<string>foreign</string>
11 | 	<key>useInlineLinks</key>
12 | 	<true/>
13 | </dict>
14 | </plist>
15 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Compiled class file
2 | *.class
3 |
4 | # Log file
5 | *.log
6 |
7 | # BlueJ files
8 | *.ctxt
9 |
10 | # Mobile Tools for Java (J2ME)
11 | .mtj.tmp/
12 |
13 | # Package Files #
14 | *.jar
15 | *.war
16 | *.nar
17 | *.ear
18 | *.zip
19 | *.tar.gz
20 | *.rar
21 |
22 | # virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml
23 | hs_err_pid*
24 |
25 | # Mac files
26 | .DS_Store
27 |
--------------------------------------------------------------------------------
/Foreword/.Ulysses-Group.plist:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/Foreword/.Ulysses-Group.plist
--------------------------------------------------------------------------------
/Foreword/Foreword.md:
--------------------------------------------------------------------------------
1 | # Foreword
2 | The job of a software developer has never been as approachable as it is nowadays. There are plenty of free resources provided by great communities to start from, allowing you to achieve quick successes with little effort. But only a few of us have remained in the role of a pure programmer, and young professionals face different challenges from those we faced one or two decades ago. The DevOps mindset and company restructurings have made us all "software engineers."
3 |
4 | The name "software engineer" is not just a rebranding of these creepy, nocturnal keyboard maniacs; it mainly means two things: Building software is established as an engineering discipline (one that needs to be learned), and it's not only about producing source code anymore, but also about creating and operating complex systems composed of the right components for every purpose.
5 |
6 | That's where two of the most exciting technologies of our time come in. I'm not talking about virtual backgrounds in video conferences and the unmute button. Of course, I mean Quarkus and Kubernetes. Quarkus satisfies a developer's needs as well as Kubernetes's demands, and allows for actual software engineering high in the clouds.
7 |
8 | Although Quarkus prepares Java ecosystem inhabitants well for the cloud, there's some more of the "operations part" to consider (remember: engineering, not only developing). With regard to operations, another "thing" has become quite popular—at least as a keyword; but considered closely, it is the most natural operating model for Kubernetes: [GitOps][1].
9 |
10 | There were (and still are) many discussions and sophisticated concepts about how to solve continuous builds and deployments (CI/CD). Yes, that's a very important topic, but honestly—in the world of Kubernetes—it's a piece of cake. Building, testing, and integrating software modules is straightforward, especially when targeting containers. And continuous delivery? That can be really complicated, I admit—that's why we don't actually do it ourselves.
11 |
12 | We've known "Infrastructure as Code" for some time; it has become common practice in most cloud environments. With GitOps we do exactly the same for application deployment and configuration. The desired state of Kubernetes resources is described declaratively and versioned in Git. Operators running in the target environment continuously validate and reconcile this state. The process of continuous delivery is fully automated by specialized components such as the [Cloud Native Computing Foundation (CNCF)][2] projects Argo CD and Flux CD, and of course the built-ins of OpenShift.
13 |
14 | How all of this plays together nicely, peppered with lots of examples, is what you'll find in this book. At the time my team adopted Kubernetes, GitOps wasn't widely known yet. Sticking to its principles fostered the transition toward independent engineering teams and increased the overall quality of our solutions. Since then, we have helped a lot of teams succeed on the same journey. This job becomes much easier now that we can simply hand out the piece of literature you're looking at. So take Wanja's red pill and follow him down the rabbit hole of modern software delivery.
15 |
16 | *Florian Heubeck, Principal Engineer at MediaMarktSaturn Technology & Java User Group Ingolstadt Leader*
17 |
18 | [1]: https://opengitops.dev/
19 | [2]: https://www.cncf.io/
--------------------------------------------------------------------------------
/Introduction/.Ulysses-Group.plist:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/Introduction/.Ulysses-Group.plist
--------------------------------------------------------------------------------
/Introduction/Preface.md:
--------------------------------------------------------------------------------
1 | # Preface
2 | During my day-to-day job, I frequently have to explain and demonstrate, to interested developers and architects, the benefits of using OpenShift and modern Kubernetes platforms for their own projects. I started writing some [blog posts][1] about it just because I wanted to record what I was telling the attendees of my classes.
3 |
4 | Then I realized that there are tons of things out there that need explanation. People in my classes tell me that they’re overwhelmed by all the news around Kubernetes and OpenShift and don’t really understand all the technologies, or the benefits of using those technologies in their own projects. They wish for a practical guide that takes them through the whole story. So I decided not only to talk about it, but also to write a book about it.
5 |
6 | As there are a lot of books available to explain the theoretical justification for the benefits of GitOps and Kubernetes, I decided to start from the other side: I have a use case and want to go through it from beginning to end.
7 |
8 | ## What to expect
9 | You might already know GitOps. You might understand the benefits of using Kubernetes from a developer’s perspective, because it allows you to set up deployments using declarative configurations. You might also understand container images or Tekton Pipelines or Argo CD.
10 |
11 | But have you already used everything together in one project? Have you already gotten your hands dirty with Tekton? With Argo CD?
12 |
13 | If you want to delve all the way into GitOps, you might benefit from somebody giving you some hints here and there.
14 |
15 | This is why this book was written: we’ll do GitOps from scratch.
16 |
17 | ## The use case
18 | Typically, most modern software projects need to design and implement one or more services. This service could be a RESTful microservice or a reactive front end. To provide an example that is meaningful to a large group of readers, I have decided to write a RESTful microservice, called `person-service`, which requires a third-party software stack (in my case, a PostgreSQL database).
19 |
20 | This service has API methods to read data from the database and to update, create, and delete data using JSON. Thus, it’s a simple CRUD service.
21 |
22 | ## Chapter overview
23 | This book tries to proceed as you would when developing your own services. The initial question (after understanding the business requirements, of course) is always which software stack to use. In my case, I have decided to use the Java language with the [Quarkus][2] framework.
24 |
25 | Chapter 1 explains why I’m using Quarkus and how to develop your microservice with it.
26 |
27 | Once you’ve developed your code, you need to understand how to move it to your target platform. This also requires some understanding of the target platform. This is what Chapter 2 is all about: Understanding container images and all the Kubernetes manifest files, and how to easily modify them for later use.
28 |
29 | After you’ve decided on your target platform, you might also want to decide how to distribute your application. This task includes packaging that application, which in turn involves choosing the right package format. In Chapter 3 we discuss possible solutions, with examples and more detail.
30 |
31 | Now that you understand how to package and distribute your application, let’s set up a process to automate the tasks of building the sources and deploying your service to your test and production stages. Chapter 4 explains how to use Tekton to set up an integration pipeline for your service.
32 |
33 | And finally, Chapter 5 sets up GitOps for your service.
34 |
35 | ## Summary
36 | This book aims to be a blueprint or a guide for your journey to GitOps with OpenShift and Kubernetes.
37 |
38 | Thank you for reading the book.
39 |
40 |
41 | [1]: https://www.opensourcerers.org/2021/04/26/automated-application-packaging-and-distribution-with-openshift-basic-development-principles-part-14/
42 | [2]: https://quarkus.io
43 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 |
4 | Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 |
8 | Preamble
9 |
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 |
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 |
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 |
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. Therefore, you have
31 | certain responsibilities if you distribute copies of the software, or if
32 | you modify it: responsibilities to respect the freedom of others.
33 |
34 | For example, if you distribute copies of such a program, whether
35 | gratis or for a fee, you must pass on to the recipients the same
36 | freedoms that you received. You must make sure that they, too, receive
37 | or can get the source code. And you must show them these terms so they
38 | know their rights.
39 |
40 | Developers that use the GNU GPL protect your rights with two steps:
41 | (1) assert copyright on the software, and (2) offer you this License
42 | giving you legal permission to copy, distribute and/or modify it.
43 |
44 | For the developers' and authors' protection, the GPL clearly explains
45 | that there is no warranty for this free software. For both users' and
46 | authors' sake, the GPL requires that modified versions be marked as
47 | changed, so that their problems will not be attributed erroneously to
48 | authors of previous versions.
49 |
50 | Some devices are designed to deny users access to install or run
51 | modified versions of the software inside them, although the manufacturer
52 | can do so. This is fundamentally incompatible with the aim of
53 | protecting users' freedom to change the software. The systematic
54 | pattern of such abuse occurs in the area of products for individuals to
55 | use, which is precisely where it is most unacceptable. Therefore, we
56 | have designed this version of the GPL to prohibit the practice for those
57 | products. If such problems arise substantially in other domains, we
58 | stand ready to extend this provision to those domains in future versions
59 | of the GPL, as needed to protect the freedom of users.
60 |
61 | Finally, every program is threatened constantly by software patents.
62 | States should not allow patents to restrict development and use of
63 | software on general-purpose computers, but in those that do, we wish to
64 | avoid the special danger that patents applied to a free program could
65 | make it effectively proprietary. To prevent this, the GPL assures that
66 | patents cannot be used to render the program non-free.
67 |
68 | The precise terms and conditions for copying, distribution and
69 | modification follow.
70 |
71 | TERMS AND CONDITIONS
72 |
73 | 0. Definitions.
74 |
75 | "This License" refers to version 3 of the GNU General Public License.
76 |
77 | "Copyright" also means copyright-like laws that apply to other kinds of
78 | works, such as semiconductor masks.
79 |
80 | "The Program" refers to any copyrightable work licensed under this
81 | License. Each licensee is addressed as "you". "Licensees" and
82 | "recipients" may be individuals or organizations.
83 |
84 | To "modify" a work means to copy from or adapt all or part of the work
85 | in a fashion requiring copyright permission, other than the making of an
86 | exact copy. The resulting work is called a "modified version" of the
87 | earlier work or a work "based on" the earlier work.
88 |
89 | A "covered work" means either the unmodified Program or a work based
90 | on the Program.
91 |
92 | To "propagate" a work means to do anything with it that, without
93 | permission, would make you directly or secondarily liable for
94 | infringement under applicable copyright law, except executing it on a
95 | computer or modifying a private copy. Propagation includes copying,
96 | distribution (with or without modification), making available to the
97 | public, and in some countries other activities as well.
98 |
99 | To "convey" a work means any kind of propagation that enables other
100 | parties to make or receive copies. Mere interaction with a user through
101 | a computer network, with no transfer of a copy, is not conveying.
102 |
103 | An interactive user interface displays "Appropriate Legal Notices"
104 | to the extent that it includes a convenient and prominently visible
105 | feature that (1) displays an appropriate copyright notice, and (2)
106 | tells the user that there is no warranty for the work (except to the
107 | extent that warranties are provided), that licensees may convey the
108 | work under this License, and how to view a copy of this License. If
109 | the interface presents a list of user commands or options, such as a
110 | menu, a prominent item in the list meets this criterion.
111 |
112 | 1. Source Code.
113 |
114 | The "source code" for a work means the preferred form of the work
115 | for making modifications to it. "Object code" means any non-source
116 | form of a work.
117 |
118 | A "Standard Interface" means an interface that either is an official
119 | standard defined by a recognized standards body, or, in the case of
120 | interfaces specified for a particular programming language, one that
121 | is widely used among developers working in that language.
122 |
123 | The "System Libraries" of an executable work include anything, other
124 | than the work as a whole, that (a) is included in the normal form of
125 | packaging a Major Component, but which is not part of that Major
126 | Component, and (b) serves only to enable use of the work with that
127 | Major Component, or to implement a Standard Interface for which an
128 | implementation is available to the public in source code form. A
129 | "Major Component", in this context, means a major essential component
130 | (kernel, window system, and so on) of the specific operating system
131 | (if any) on which the executable work runs, or a compiler used to
132 | produce the work, or an object code interpreter used to run it.
133 |
134 | The "Corresponding Source" for a work in object code form means all
135 | the source code needed to generate, install, and (for an executable
136 | work) run the object code and to modify the work, including scripts to
137 | control those activities. However, it does not include the work's
138 | System Libraries, or general-purpose tools or generally available free
139 | programs which are used unmodified in performing those activities but
140 | which are not part of the work. For example, Corresponding Source
141 | includes interface definition files associated with source files for
142 | the work, and the source code for shared libraries and dynamically
143 | linked subprograms that the work is specifically designed to require,
144 | such as by intimate data communication or control flow between those
145 | subprograms and other parts of the work.
146 |
147 | The Corresponding Source need not include anything that users
148 | can regenerate automatically from other parts of the Corresponding
149 | Source.
150 |
151 | The Corresponding Source for a work in source code form is that
152 | same work.
153 |
154 | 2. Basic Permissions.
155 |
156 | All rights granted under this License are granted for the term of
157 | copyright on the Program, and are irrevocable provided the stated
158 | conditions are met. This License explicitly affirms your unlimited
159 | permission to run the unmodified Program. The output from running a
160 | covered work is covered by this License only if the output, given its
161 | content, constitutes a covered work. This License acknowledges your
162 | rights of fair use or other equivalent, as provided by copyright law.
163 |
164 | You may make, run and propagate covered works that you do not
165 | convey, without conditions so long as your license otherwise remains
166 | in force. You may convey covered works to others for the sole purpose
167 | of having them make modifications exclusively for you, or provide you
168 | with facilities for running those works, provided that you comply with
169 | the terms of this License in conveying all material for which you do
170 | not control copyright. Those thus making or running the covered works
171 | for you must do so exclusively on your behalf, under your direction
172 | and control, on terms that prohibit them from making any copies of
173 | your copyrighted material outside their relationship with you.
174 |
175 | Conveying under any other circumstances is permitted solely under
176 | the conditions stated below. Sublicensing is not allowed; section 10
177 | makes it unnecessary.
178 |
179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
180 |
181 | No covered work shall be deemed part of an effective technological
182 | measure under any applicable law fulfilling obligations under article
183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or
184 | similar laws prohibiting or restricting circumvention of such
185 | measures.
186 |
187 | When you convey a covered work, you waive any legal power to forbid
188 | circumvention of technological measures to the extent such circumvention
189 | is effected by exercising rights under this License with respect to
190 | the covered work, and you disclaim any intention to limit operation or
191 | modification of the work as a means of enforcing, against the work's
192 | users, your or third parties' legal rights to forbid circumvention of
193 | technological measures.
194 |
195 | 4. Conveying Verbatim Copies.
196 |
197 | You may convey verbatim copies of the Program's source code as you
198 | receive it, in any medium, provided that you conspicuously and
199 | appropriately publish on each copy an appropriate copyright notice;
200 | keep intact all notices stating that this License and any
201 | non-permissive terms added in accord with section 7 apply to the code;
202 | keep intact all notices of the absence of any warranty; and give all
203 | recipients a copy of this License along with the Program.
204 |
205 | You may charge any price or no price for each copy that you convey,
206 | and you may offer support or warranty protection for a fee.
207 |
208 | 5. Conveying Modified Source Versions.
209 |
210 | You may convey a work based on the Program, or the modifications to
211 | produce it from the Program, in the form of source code under the
212 | terms of section 4, provided that you also meet all of these conditions:
213 |
214 | a) The work must carry prominent notices stating that you modified
215 | it, and giving a relevant date.
216 |
217 | b) The work must carry prominent notices stating that it is
218 | released under this License and any conditions added under section
219 | 7. This requirement modifies the requirement in section 4 to
220 | "keep intact all notices".
221 |
222 | c) You must license the entire work, as a whole, under this
223 | License to anyone who comes into possession of a copy. This
224 | License will therefore apply, along with any applicable section 7
225 | additional terms, to the whole of the work, and all its parts,
226 | regardless of how they are packaged. This License gives no
227 | permission to license the work in any other way, but it does not
228 | invalidate such permission if you have separately received it.
229 |
230 | d) If the work has interactive user interfaces, each must display
231 | Appropriate Legal Notices; however, if the Program has interactive
232 | interfaces that do not display Appropriate Legal Notices, your
233 | work need not make them do so.
234 |
235 | A compilation of a covered work with other separate and independent
236 | works, which are not by their nature extensions of the covered work,
237 | and which are not combined with it such as to form a larger program,
238 | in or on a volume of a storage or distribution medium, is called an
239 | "aggregate" if the compilation and its resulting copyright are not
240 | used to limit the access or legal rights of the compilation's users
241 | beyond what the individual works permit. Inclusion of a covered work
242 | in an aggregate does not cause this License to apply to the other
243 | parts of the aggregate.
244 |
245 | 6. Conveying Non-Source Forms.
246 |
247 | You may convey a covered work in object code form under the terms
248 | of sections 4 and 5, provided that you also convey the
249 | machine-readable Corresponding Source under the terms of this License,
250 | in one of these ways:
251 |
252 | a) Convey the object code in, or embodied in, a physical product
253 | (including a physical distribution medium), accompanied by the
254 | Corresponding Source fixed on a durable physical medium
255 | customarily used for software interchange.
256 |
257 | b) Convey the object code in, or embodied in, a physical product
258 | (including a physical distribution medium), accompanied by a
259 | written offer, valid for at least three years and valid for as
260 | long as you offer spare parts or customer support for that product
261 | model, to give anyone who possesses the object code either (1) a
262 | copy of the Corresponding Source for all the software in the
263 | product that is covered by this License, on a durable physical
264 | medium customarily used for software interchange, for a price no
265 | more than your reasonable cost of physically performing this
266 | conveying of source, or (2) access to copy the
267 | Corresponding Source from a network server at no charge.
268 |
269 | c) Convey individual copies of the object code with a copy of the
270 | written offer to provide the Corresponding Source. This
271 | alternative is allowed only occasionally and noncommercially, and
272 | only if you received the object code with such an offer, in accord
273 | with subsection 6b.
274 |
275 | d) Convey the object code by offering access from a designated
276 | place (gratis or for a charge), and offer equivalent access to the
277 | Corresponding Source in the same way through the same place at no
278 | further charge. You need not require recipients to copy the
279 | Corresponding Source along with the object code. If the place to
280 | copy the object code is a network server, the Corresponding Source
281 | may be on a different server (operated by you or a third party)
282 | that supports equivalent copying facilities, provided you maintain
283 | clear directions next to the object code saying where to find the
284 | Corresponding Source. Regardless of what server hosts the
285 | Corresponding Source, you remain obligated to ensure that it is
286 | available for as long as needed to satisfy these requirements.
287 |
288 | e) Convey the object code using peer-to-peer transmission, provided
289 | you inform other peers where the object code and Corresponding
290 | Source of the work are being offered to the general public at no
291 | charge under subsection 6d.
292 |
293 | A separable portion of the object code, whose source code is excluded
294 | from the Corresponding Source as a System Library, need not be
295 | included in conveying the object code work.
296 |
297 | A "User Product" is either (1) a "consumer product", which means any
298 | tangible personal property which is normally used for personal, family,
299 | or household purposes, or (2) anything designed or sold for incorporation
300 | into a dwelling. In determining whether a product is a consumer product,
301 | doubtful cases shall be resolved in favor of coverage. For a particular
302 | product received by a particular user, "normally used" refers to a
303 | typical or common use of that class of product, regardless of the status
304 | of the particular user or of the way in which the particular user
305 | actually uses, or expects or is expected to use, the product. A product
306 | is a consumer product regardless of whether the product has substantial
307 | commercial, industrial or non-consumer uses, unless such uses represent
308 | the only significant mode of use of the product.
309 |
310 | "Installation Information" for a User Product means any methods,
311 | procedures, authorization keys, or other information required to install
312 | and execute modified versions of a covered work in that User Product from
313 | a modified version of its Corresponding Source. The information must
314 | suffice to ensure that the continued functioning of the modified object
315 | code is in no case prevented or interfered with solely because
316 | modification has been made.
317 |
318 | If you convey an object code work under this section in, or with, or
319 | specifically for use in, a User Product, and the conveying occurs as
320 | part of a transaction in which the right of possession and use of the
321 | User Product is transferred to the recipient in perpetuity or for a
322 | fixed term (regardless of how the transaction is characterized), the
323 | Corresponding Source conveyed under this section must be accompanied
324 | by the Installation Information. But this requirement does not apply
325 | if neither you nor any third party retains the ability to install
326 | modified object code on the User Product (for example, the work has
327 | been installed in ROM).
328 |
329 | The requirement to provide Installation Information does not include a
330 | requirement to continue to provide support service, warranty, or updates
331 | for a work that has been modified or installed by the recipient, or for
332 | the User Product in which it has been modified or installed. Access to a
333 | network may be denied when the modification itself materially and
334 | adversely affects the operation of the network or violates the rules and
335 | protocols for communication across the network.
336 |
337 | Corresponding Source conveyed, and Installation Information provided,
338 | in accord with this section must be in a format that is publicly
339 | documented (and with an implementation available to the public in
340 | source code form), and must require no special password or key for
341 | unpacking, reading or copying.
342 |
343 | 7. Additional Terms.
344 |
345 | "Additional permissions" are terms that supplement the terms of this
346 | License by making exceptions from one or more of its conditions.
347 | Additional permissions that are applicable to the entire Program shall
348 | be treated as though they were included in this License, to the extent
349 | that they are valid under applicable law. If additional permissions
350 | apply only to part of the Program, that part may be used separately
351 | under those permissions, but the entire Program remains governed by
352 | this License without regard to the additional permissions.
353 |
354 | When you convey a copy of a covered work, you may at your option
355 | remove any additional permissions from that copy, or from any part of
356 | it. (Additional permissions may be written to require their own
357 | removal in certain cases when you modify the work.) You may place
358 | additional permissions on material, added by you to a covered work,
359 | for which you have or can give appropriate copyright permission.
360 |
361 | Notwithstanding any other provision of this License, for material you
362 | add to a covered work, you may (if authorized by the copyright holders of
363 | that material) supplement the terms of this License with terms:
364 |
365 | a) Disclaiming warranty or limiting liability differently from the
366 | terms of sections 15 and 16 of this License; or
367 |
368 | b) Requiring preservation of specified reasonable legal notices or
369 | author attributions in that material or in the Appropriate Legal
370 | Notices displayed by works containing it; or
371 |
372 | c) Prohibiting misrepresentation of the origin of that material, or
373 | requiring that modified versions of such material be marked in
374 | reasonable ways as different from the original version; or
375 |
376 | d) Limiting the use for publicity purposes of names of licensors or
377 | authors of the material; or
378 |
379 | e) Declining to grant rights under trademark law for use of some
380 | trade names, trademarks, or service marks; or
381 |
382 | f) Requiring indemnification of licensors and authors of that
383 | material by anyone who conveys the material (or modified versions of
384 | it) with contractual assumptions of liability to the recipient, for
385 | any liability that these contractual assumptions directly impose on
386 | those licensors and authors.
387 |
388 | All other non-permissive additional terms are considered "further
389 | restrictions" within the meaning of section 10. If the Program as you
390 | received it, or any part of it, contains a notice stating that it is
391 | governed by this License along with a term that is a further
392 | restriction, you may remove that term. If a license document contains
393 | a further restriction but permits relicensing or conveying under this
394 | License, you may add to a covered work material governed by the terms
395 | of that license document, provided that the further restriction does
396 | not survive such relicensing or conveying.
397 |
398 | If you add terms to a covered work in accord with this section, you
399 | must place, in the relevant source files, a statement of the
400 | additional terms that apply to those files, or a notice indicating
401 | where to find the applicable terms.
402 |
403 | Additional terms, permissive or non-permissive, may be stated in the
404 | form of a separately written license, or stated as exceptions;
405 | the above requirements apply either way.
406 |
407 | 8. Termination.
408 |
409 | You may not propagate or modify a covered work except as expressly
410 | provided under this License. Any attempt otherwise to propagate or
411 | modify it is void, and will automatically terminate your rights under
412 | this License (including any patent licenses granted under the third
413 | paragraph of section 11).
414 |
415 | However, if you cease all violation of this License, then your
416 | license from a particular copyright holder is reinstated (a)
417 | provisionally, unless and until the copyright holder explicitly and
418 | finally terminates your license, and (b) permanently, if the copyright
419 | holder fails to notify you of the violation by some reasonable means
420 | prior to 60 days after the cessation.
421 |
422 | Moreover, your license from a particular copyright holder is
423 | reinstated permanently if the copyright holder notifies you of the
424 | violation by some reasonable means, this is the first time you have
425 | received notice of violation of this License (for any work) from that
426 | copyright holder, and you cure the violation prior to 30 days after
427 | your receipt of the notice.
428 |
429 | Termination of your rights under this section does not terminate the
430 | licenses of parties who have received copies or rights from you under
431 | this License. If your rights have been terminated and not permanently
432 | reinstated, you do not qualify to receive new licenses for the same
433 | material under section 10.
434 |
435 | 9. Acceptance Not Required for Having Copies.
436 |
437 | You are not required to accept this License in order to receive or
438 | run a copy of the Program. Ancillary propagation of a covered work
439 | occurring solely as a consequence of using peer-to-peer transmission
440 | to receive a copy likewise does not require acceptance. However,
441 | nothing other than this License grants you permission to propagate or
442 | modify any covered work. These actions infringe copyright if you do
443 | not accept this License. Therefore, by modifying or propagating a
444 | covered work, you indicate your acceptance of this License to do so.
445 |
446 | 10. Automatic Licensing of Downstream Recipients.
447 |
448 | Each time you convey a covered work, the recipient automatically
449 | receives a license from the original licensors, to run, modify and
450 | propagate that work, subject to this License. You are not responsible
451 | for enforcing compliance by third parties with this License.
452 |
453 | An "entity transaction" is a transaction transferring control of an
454 | organization, or substantially all assets of one, or subdividing an
455 | organization, or merging organizations. If propagation of a covered
456 | work results from an entity transaction, each party to that
457 | transaction who receives a copy of the work also receives whatever
458 | licenses to the work the party's predecessor in interest had or could
459 | give under the previous paragraph, plus a right to possession of the
460 | Corresponding Source of the work from the predecessor in interest, if
461 | the predecessor has it or can get it with reasonable efforts.
462 |
463 | You may not impose any further restrictions on the exercise of the
464 | rights granted or affirmed under this License. For example, you may
465 | not impose a license fee, royalty, or other charge for exercise of
466 | rights granted under this License, and you may not initiate litigation
467 | (including a cross-claim or counterclaim in a lawsuit) alleging that
468 | any patent claim is infringed by making, using, selling, offering for
469 | sale, or importing the Program or any portion of it.
470 |
471 | 11. Patents.
472 |
473 | A "contributor" is a copyright holder who authorizes use under this
474 | License of the Program or a work on which the Program is based. The
475 | work thus licensed is called the contributor's "contributor version".
476 |
477 | A contributor's "essential patent claims" are all patent claims
478 | owned or controlled by the contributor, whether already acquired or
479 | hereafter acquired, that would be infringed by some manner, permitted
480 | by this License, of making, using, or selling its contributor version,
481 | but do not include claims that would be infringed only as a
482 | consequence of further modification of the contributor version. For
483 | purposes of this definition, "control" includes the right to grant
484 | patent sublicenses in a manner consistent with the requirements of
485 | this License.
486 |
487 | Each contributor grants you a non-exclusive, worldwide, royalty-free
488 | patent license under the contributor's essential patent claims, to
489 | make, use, sell, offer for sale, import and otherwise run, modify and
490 | propagate the contents of its contributor version.
491 |
492 | In the following three paragraphs, a "patent license" is any express
493 | agreement or commitment, however denominated, not to enforce a patent
494 | (such as an express permission to practice a patent or covenant not to
495 | sue for patent infringement). To "grant" such a patent license to a
496 | party means to make such an agreement or commitment not to enforce a
497 | patent against the party.
498 |
499 | If you convey a covered work, knowingly relying on a patent license,
500 | and the Corresponding Source of the work is not available for anyone
501 | to copy, free of charge and under the terms of this License, through a
502 | publicly available network server or other readily accessible means,
503 | then you must either (1) cause the Corresponding Source to be so
504 | available, or (2) arrange to deprive yourself of the benefit of the
505 | patent license for this particular work, or (3) arrange, in a manner
506 | consistent with the requirements of this License, to extend the patent
507 | license to downstream recipients. "Knowingly relying" means you have
508 | actual knowledge that, but for the patent license, your conveying the
509 | covered work in a country, or your recipient's use of the covered work
510 | in a country, would infringe one or more identifiable patents in that
511 | country that you have reason to believe are valid.
512 |
513 | If, pursuant to or in connection with a single transaction or
514 | arrangement, you convey, or propagate by procuring conveyance of, a
515 | covered work, and grant a patent license to some of the parties
516 | receiving the covered work authorizing them to use, propagate, modify
517 | or convey a specific copy of the covered work, then the patent license
518 | you grant is automatically extended to all recipients of the covered
519 | work and works based on it.
520 |
521 | A patent license is "discriminatory" if it does not include within
522 | the scope of its coverage, prohibits the exercise of, or is
523 | conditioned on the non-exercise of one or more of the rights that are
524 | specifically granted under this License. You may not convey a covered
525 | work if you are a party to an arrangement with a third party that is
526 | in the business of distributing software, under which you make payment
527 | to the third party based on the extent of your activity of conveying
528 | the work, and under which the third party grants, to any of the
529 | parties who would receive the covered work from you, a discriminatory
530 | patent license (a) in connection with copies of the covered work
531 | conveyed by you (or copies made from those copies), or (b) primarily
532 | for and in connection with specific products or compilations that
533 | contain the covered work, unless you entered into that arrangement,
534 | or that patent license was granted, prior to 28 March 2007.
535 |
536 | Nothing in this License shall be construed as excluding or limiting
537 | any implied license or other defenses to infringement that may
538 | otherwise be available to you under applicable patent law.
539 |
540 | 12. No Surrender of Others' Freedom.
541 |
542 | If conditions are imposed on you (whether by court order, agreement or
543 | otherwise) that contradict the conditions of this License, they do not
544 | excuse you from the conditions of this License. If you cannot convey a
545 | covered work so as to satisfy simultaneously your obligations under this
546 | License and any other pertinent obligations, then as a consequence you may
547 | not convey it at all. For example, if you agree to terms that obligate you
548 | to collect a royalty for further conveying from those to whom you convey
549 | the Program, the only way you could satisfy both those terms and this
550 | License would be to refrain entirely from conveying the Program.
551 |
552 | 13. Use with the GNU Affero General Public License.
553 |
554 | Notwithstanding any other provision of this License, you have
555 | permission to link or combine any covered work with a work licensed
556 | under version 3 of the GNU Affero General Public License into a single
557 | combined work, and to convey the resulting work. The terms of this
558 | License will continue to apply to the part which is the covered work,
559 | but the special requirements of the GNU Affero General Public License,
560 | section 13, concerning interaction through a network will apply to the
561 | combination as such.
562 |
563 | 14. Revised Versions of this License.
564 |
565 | The Free Software Foundation may publish revised and/or new versions of
566 | the GNU General Public License from time to time. Such new versions will
567 | be similar in spirit to the present version, but may differ in detail to
568 | address new problems or concerns.
569 |
570 | Each version is given a distinguishing version number. If the
571 | Program specifies that a certain numbered version of the GNU General
572 | Public License "or any later version" applies to it, you have the
573 | option of following the terms and conditions either of that numbered
574 | version or of any later version published by the Free Software
575 | Foundation. If the Program does not specify a version number of the
576 | GNU General Public License, you may choose any version ever published
577 | by the Free Software Foundation.
578 |
579 | If the Program specifies that a proxy can decide which future
580 | versions of the GNU General Public License can be used, that proxy's
581 | public statement of acceptance of a version permanently authorizes you
582 | to choose that version for the Program.
583 |
584 | Later license versions may give you additional or different
585 | permissions. However, no additional obligations are imposed on any
586 | author or copyright holder as a result of your choosing to follow a
587 | later version.
588 |
589 | 15. Disclaimer of Warranty.
590 |
591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
599 |
600 | 16. Limitation of Liability.
601 |
602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
610 | SUCH DAMAGES.
611 |
612 | 17. Interpretation of Sections 15 and 16.
613 |
614 | If the disclaimer of warranty and limitation of liability provided
615 | above cannot be given local legal effect according to their terms,
616 | reviewing courts shall apply local law that most closely approximates
617 | an absolute waiver of all civil liability in connection with the
618 | Program, unless a warranty or assumption of liability accompanies a
619 | copy of the Program in return for a fee.
620 |
621 | END OF TERMS AND CONDITIONS
622 |
623 | How to Apply These Terms to Your New Programs
624 |
625 | If you develop a new program, and you want it to be of the greatest
626 | possible use to the public, the best way to achieve this is to make it
627 | free software which everyone can redistribute and change under these terms.
628 |
629 | To do so, attach the following notices to the program. It is safest
630 | to attach them to the start of each source file to most effectively
631 | state the exclusion of warranty; and each file should have at least
632 | the "copyright" line and a pointer to where the full notice is found.
633 |
634 | <one line to give the program's name and a brief idea of what it does.>
635 | Copyright (C) <year>  <name of author>
636 |
637 | This program is free software: you can redistribute it and/or modify
638 | it under the terms of the GNU General Public License as published by
639 | the Free Software Foundation, either version 3 of the License, or
640 | (at your option) any later version.
641 |
642 | This program is distributed in the hope that it will be useful,
643 | but WITHOUT ANY WARRANTY; without even the implied warranty of
644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
645 | GNU General Public License for more details.
646 |
647 | You should have received a copy of the GNU General Public License
648 | along with this program. If not, see <https://www.gnu.org/licenses/>.
649 |
650 | Also add information on how to contact you by electronic and paper mail.
651 |
652 | If the program does terminal interaction, make it output a short
653 | notice like this when it starts in an interactive mode:
654 |
655 | <program>  Copyright (C) <year>  <name of author>
656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
657 | This is free software, and you are welcome to redistribute it
658 | under certain conditions; type `show c' for details.
659 |
660 | The hypothetical commands `show w' and `show c' should show the appropriate
661 | parts of the General Public License. Of course, your program's commands
662 | might be different; for a GUI interface, you would use an "about box".
663 |
664 | You should also get your employer (if you work as a programmer) or school,
665 | if any, to sign a "copyright disclaimer" for the program, if necessary.
666 | For more information on this, and how to apply and follow the GNU GPL, see
667 | <https://www.gnu.org/licenses/>.
668 |
669 | The GNU General Public License does not permit incorporating your program
670 | into proprietary programs. If your program is a subroutine library, you
671 | may consider it more useful to permit linking proprietary applications with
672 | the library. If this is what you want to do, use the GNU Lesser General
673 | Public License instead of this License. But first, please read
674 | <https://www.gnu.org/licenses/why-not-lgpl.html>.
675 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # README
2 | # Getting GitOps. A Practical Platform with OpenShift, Argo CD and Tekton.
3 |
4 | This is a repository with all the sources for the book "Getting GitOps. A Practical Platform with OpenShift, Argo CD and Tekton.", which you can download for free here:
5 |
6 | [https://developers.redhat.com/e-books/getting-gitops-practical-platform-openshift-argo-cd-and-tekton][1]
7 |
8 | # Status of the chapters
9 |
10 | |Chapter|Description|Status|
11 | |--------|--------|--------|
12 | |Foreword | Foreword by Florian Heubeck | FINAL REVIEW|
13 | |Introduction | Intro and Motivation to write this| FINAL REVIEW|
14 | |Using the Examples | [Description of the examples here][2]| FINAL REVIEW|
15 | |Chapter 1 | Quarkus MicroService |FINAL REVIEW|
16 | |Chapter 2 | Description of Kubernetes; Basic deployment | FINAL REVIEW|
17 | |Chapter 3 | Helm Charts and Kubernetes Operators |FINAL REVIEW|
18 | |Chapter 4 | Tekton Pipelines | FINAL REVIEW|
19 | |Chapter 5 | GitOps and Argo CD | FINAL REVIEW|
20 | |Thank You | | FINAL REVIEW|
21 |
22 | [1]: https://developers.redhat.com/e-books/getting-gitops-practical-platform-openshift-argo-cd-and-tekton
23 | [2]: https://github.com/wpernath/book-example
--------------------------------------------------------------------------------
/Thank you/.Ulysses-Group.plist:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/Thank you/.Ulysses-Group.plist
--------------------------------------------------------------------------------
/Thank you/Thank you.md:
--------------------------------------------------------------------------------
1 | # Thank you
2 | I started writing a series of blog entries, **Automated Application Packaging and Distribution with OpenShift**, back in January 2021, originally just because I wanted to have some notes for my day-to-day work. Then I realized that there are tons of things out there that need to be explained. People in my classes were telling me that they were overwhelmed by all the news around Kubernetes and OpenShift, so I decided not only to talk about it, but also to write and blog about it.
3 |
4 | The positive feedback I got from readers around the world motivated me to keep writing just one more chapter. And then my manager asked me whether it wouldn’t be great to turn it into a book. So the idea of `Getting GitOps. A Practical Platform with OpenShift, Argo CD and Tekton.` was born.
5 |
6 | I want to thank all those people who helped me make it possible:
7 | - Günter Herold, my manager, who came up with the idea
8 | - Hubert Schweinesbein, who gave the final OK
9 | - Markus Eisele, who helped me find the right contacts
10 | - Florian Heubeck, for writing the foreword
11 | - Andrew Oram, for putting my words into proper and understandable English
12 | - My girlfriend, for her patience
13 | - My dad, for believing in me
14 | - The cats, for helping me write and for the feedback
15 |
16 | Thanks a lot to all of you. Thanks for reading. And thanks a lot for all your feedback.
17 |
--------------------------------------------------------------------------------
/chapter1/.Ulysses-Group.plist:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter1/.Ulysses-Group.plist
--------------------------------------------------------------------------------
/chapter1/Chapter 1- Creating the sample service.md:
--------------------------------------------------------------------------------
1 | # Chapter 1: Creating the sample service
2 | This book needs a good example to demonstrate the power of working with Kubernetes and OpenShift. As an application that will be familiar and useful to most readers, we’ll create a REST-based microservice in Java that reads data from and writes data to a database.
3 |
4 | I have to say, I like [Quarkus][1]. I have been coding with Java and JEE for over two decades, and I have used most of the frameworks out there. (Does anyone remember Struts, JBoss Seam, or SilverStream?) I’ve even created code generators to make my life easier with EJBs (1.x and 2.x). All of those frameworks and ideas tried to minimize development effort, but they had drawbacks.
5 |
6 | And then, back in 2020, when I thought there was nothing out there that could really, positively surprise me, I had a look at Quarkus. That’s my personal story about Quarkus; the reasons I recommend it to you are summarized at the end of this chapter, after we see Quarkus at work.
7 |
8 | So this chapter is all about creating a microservice with Quarkus. Quarkus enchanted me because it provides interfaces to all the common open source tools for containerization and cloud deployment, and a dev mode that takes away the boring compilation tasks. If you want to understand more about Quarkus, feel free to get one of the other books available on the [Red Hat developers page][2].
9 |
10 | ## First steps
11 | Quarkus has a [Get Started][3] page. Go there to have a look at how to install the command-line tool, which is called `quarkus`.
12 |
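For example, if you already use SDKMAN, installing the CLI boils down to a single command (the Get Started page also documents Homebrew, JBang, and other installers; treat this as a sketch, since the exact mechanism may change over time):

```bash
# Assuming SDKMAN is already installed; see the Get Started page for alternatives
$ sdk install quarkus
$ quarkus --version
```
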
13 | After you’ve installed `quarkus`, create a new project by executing:
14 |
15 | ```bash
16 | $ quarkus create app org.wanja.demo:person-service:1.0.0
17 | Looking for the newly published extensions in registry.quarkus.io
18 | -----------
19 |
20 | applying codestarts...
21 | 📚 java
22 | 🔨 maven
23 | 📦 quarkus
24 | 📝 config-properties
25 | 🔧 dockerfiles
26 | 🔧 maven-wrapper
27 | 🚀 resteasy-codestart
28 |
29 | -----------
30 | [SUCCESS] ✅ quarkus project has been successfully generated in:
31 | --> /Users/wpernath/Devel/quarkus/person-service
32 | -----------
33 | Navigate into this directory and get started: quarkus dev
34 | ```
35 |
36 | A successful initialization creates an initial Maven project with the following structure:
37 |
38 | ```bash
39 | $ tree
40 | .
41 | ├── README.md
42 | ├── mvnw
43 | ├── mvnw.cmd
44 | ├── pom.xml
45 | └── src
46 | ├── main
47 | │ ├── docker
48 | │ │ ├── Dockerfile.jvm
49 | │ │ ├── Dockerfile.legacy-jar
50 | │ │ ├── Dockerfile.native
51 | │ │ └── Dockerfile.native-distroless
52 | │ ├── java
53 | │ │ └── org
54 | │ │ └── wanja
55 | │ │ └── demo
56 | │ │ └── GreetingResource.java
57 | │ └── resources
58 | │ ├── META-INF
59 | │ │ └── resources
60 | │ │ └── index.html
61 | │ └── application.properties
62 | └── test
63 | └── java
64 | └── org
65 | └── wanja
66 | └── demo
67 | ├── GreetingResourceTest.java
68 | └── NativeGreetingResourceIT.java
69 |
70 | 15 directories, 13 files
71 | ```
72 |
73 | If you want to test what you have done so far, call:
74 | ```bash
75 | $ mvn quarkus:dev
76 | ```
77 |
78 | Or if you prefer to use the Quarkus CLI tool, you can also call:
79 | ```bash
80 | $ quarkus dev
81 | ```
82 |
83 | These commands compile all the sources and start the development mode of your project, where you don’t need to specify any runtime environment (Tomcat, JBoss, etc.).
84 |
85 | Let’s have a look at the generated `GreetingResource.java` file, which you can find under `src/main/java/org/wanja/demo`:
86 |
87 | ```java
88 | package org.wanja.demo;
89 |
90 | import javax.ws.rs.GET;
91 | import javax.ws.rs.Path;
92 | import javax.ws.rs.Produces;
93 | import javax.ws.rs.core.MediaType;
94 |
95 | @Path("/hello")
96 | public class GreetingResource {
97 |
98 | @GET
99 | @Produces(MediaType.TEXT_PLAIN)
100 | public String hello() {
101 | return "Hello RESTEasy";
102 | }
103 | }
104 | ```
105 |
106 | If `quarkus:dev` is running, you should have an endpoint reachable at `localhost:8080/hello` on that system. Let’s have a look. For testing REST endpoints, you can use either `curl` or the much newer client [httpie][4]. I prefer the newer one:
107 |
108 | ```bash
109 | $ http :8080/hello
110 | HTTP/1.1 200 OK
111 | Content-Type: text/plain;charset=UTF-8
112 | content-length: 14
113 |
114 | Hello RESTEasy
115 | ```
116 |
117 | This was easy. But still nothing really new. Let’s go a little bit deeper.
118 |
119 | Let’s change the string `Hello RESTEasy` to something else and call the service again (without restarting `quarkus dev`—that’s the key point here).
120 |
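For instance, a minimal sketch of the edit in `GreetingResource.java` (the new string is just the example that produces the response below):

```java
@GET
@Produces(MediaType.TEXT_PLAIN)
public String hello() {
    return "Hi Yay!"; // changed while quarkus dev keeps running
}
```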
121 | ```bash
122 | $ http :8080/hello
123 | HTTP/1.1 200 OK
124 | Content-Type: text/plain;charset=UTF-8
125 | content-length: 7
126 |
127 | Hi Yay!
128 | ```
129 |
130 | OK, this is getting interesting now. You don’t have to recompile or restart Quarkus to see your changes in action.
131 |
132 | Because it’s limiting to put hard-coded strings directly into Java code, let’s switch to feeding in the strings from a configuration file, as described in the [Quarkus documentation about configuring your application][5]. To reconfigure the application, open `src/main/resources/application.properties` in your preferred editor and create a new property. For example:
133 |
134 | ```bash
135 | app.greeting=Hello, quarkus developer!
136 | ```
137 |
138 | Then go into the `GreetingResource` and create a new property on the class level:
139 |
140 | ```java
141 | package org.wanja.demo;
142 |
143 | import javax.ws.rs.GET;
144 | import javax.ws.rs.Path;
145 | import javax.ws.rs.Produces;
146 | import javax.ws.rs.core.MediaType;
147 |
148 | import org.eclipse.microprofile.config.inject.ConfigProperty;
149 |
150 | @Path("/hello")
151 | public class GreetingResource {
152 |
153 | @ConfigProperty(name="app.greeting")
154 | String greeting;
155 |
156 | @GET
157 | @Produces(MediaType.TEXT_PLAIN)
158 | public String hello() {
159 | return greeting;
160 | }
161 | }
162 | ```
163 |
164 | Test your changes by calling the REST endpoint again:
165 |
166 | ```bash
167 | $ http :8080/hello
168 | HTTP/1.1 200 OK
169 | Content-Type: text/plain;charset=UTF-8
170 | content-length: 25
171 |
172 | Hello, quarkus developer!
173 | ```
174 |
175 | Again, you haven’t recompiled or restarted the services. Quarkus is watching for any changes in the source tree and takes the required actions automatically.
176 |
177 | This is already great. Really. But let’s move on.
178 |
179 | ## Creating a database client
180 | The use case for this book should be richer than a simple hello service. We want to have a database client that reads data from and writes data to a database. After [reading the corresponding documentation][6], I decided to use Panache here, as it seems to dramatically reduce the work I have to do.
181 |
182 | First you need to add the required extensions to your project. The following command installs a JDBC driver for PostgreSQL and everything needed for ORM with Panache:
183 |
184 | ```bash
185 | $ quarkus ext add quarkus-hibernate-orm-panache quarkus-jdbc-postgresql
186 | Looking for the newly published extensions in registry.quarkus.io
187 | [SUCCESS] ✅ Extension io.quarkus:quarkus-hibernate-orm-panache has been installed
188 | [SUCCESS] ✅ Extension io.quarkus:quarkus-jdbc-postgresql has been installed
189 | ```
190 |
191 | ### Java code for database operations
192 | The next step is to create an entity. We’ll call it `Person`, so you’re going to create a `Person.java` file.
193 |
194 | ```java
195 | package org.wanja.demo;
196 |
197 | import javax.persistence.Column;
198 | import javax.persistence.Entity;
199 |
200 | import io.quarkus.hibernate.orm.panache.PanacheEntity;
201 |
202 | @Entity
203 | public class Person extends PanacheEntity {
204 | @Column(name="first_name")
205 | public String firstName;
206 |
207 | @Column(name="last_name")
208 | public String lastName;
209 |
210 | public String salutation;
211 | }
212 | ```
213 |
214 | According to the docs, this should define the `Person` entity, which maps directly to a `person` table in our PostgreSQL database. All public fields will be mapped automatically to corresponding columns in that table. If you don’t want a field to be mapped, you need to annotate it with `@Transient`.
215 |
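For example, a field that should not become a column could be excluded like this (a hedged sketch; the `displayName` field is made up for illustration):

```java
import javax.persistence.Transient;

@Transient
public String displayName; // computed elsewhere, never persisted
```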
216 | You also need a `PersonResource` class to act as a REST endpoint. Let’s create that simple class:
217 |
218 | ```java
219 | package org.wanja.demo;
220 |
221 | import java.util.List;
222 |
223 | import javax.ws.rs.Consumes;
224 | import javax.ws.rs.GET;
225 | import javax.ws.rs.Path;
226 | import javax.ws.rs.Produces;
227 | import javax.ws.rs.core.MediaType;
228 |
229 | import io.quarkus.panache.common.Sort;
230 |
231 | @Path("/person")
232 | @Consumes(MediaType.APPLICATION_JSON)
233 | @Produces(MediaType.APPLICATION_JSON)
234 | public class PersonResource {
235 |
236 | @GET
237 |     public List<Person> getAll() throws Exception {
238 | return Person.findAll(Sort.ascending("last_name")).list();
239 | }
240 | }
241 | ```
242 |
243 | Right now, this class has exactly one method, `getAll()`, which simply returns a list of all persons sorted by the `last_name` column.
244 |
245 | ### Enabling the database
246 | Now we need to tell Quarkus that we want to use a database. And then we need to find a way to start a PostgreSQL database locally. But one step at a time.
247 |
248 | Open the `application.properties` file and add some properties there:
249 |
250 | ```java
251 | quarkus.hibernate-orm.database.generation=drop-and-create
252 | quarkus.hibernate-orm.log.format-sql=true
253 | quarkus.hibernate-orm.log.sql=true
254 | quarkus.hibernate-orm.sql-load-script=import.sql
255 |
256 | quarkus.datasource.db-kind=postgresql
257 | ```
258 |
259 | And then let’s make a simple SQL import script to fill some basic data into the database. Create a new file called `src/main/resources/import.sql` and put the following lines in there:
260 |
261 | ```java
262 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Doro', 'Pesch', 'Ms');
263 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Bobby', 'Brown', 'Mr');
264 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Curt', 'Cobain', 'Mr');
265 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Nina', 'Hagen', 'Mrs');
266 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Jimmi', 'Henrix', 'Mr');
267 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Janis', 'Joplin', 'Ms');
268 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Joe', 'Cocker', 'Mr');
269 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Alice', 'Cooper', 'Mr');
270 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Bruce', 'Springsteen', 'Mr');
271 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Eric', 'Clapton', 'Mr');
272 | ```
273 |
274 | You can now restart `quarkus dev` with everything you need:
275 |
276 | ```bash
277 | $ quarkus dev
278 | 2021-12-15 13:39:47,725 INFO [io.qua.dat.dep.dev.DevServicesDatasourceProcessor] (build-26) Dev Services for the default datasource (postgresql) started.
279 | Hibernate:
280 |
281 | drop table if exists Person cascade
282 | __ ____ __ _____ ___ __ ____ ______
283 | --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
284 | -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
285 | --\___\_\____/_/ |_/_/|_/_/|_|\____/___/
286 | 2021-12-15 13:39:48,869 WARN [org.hib.eng.jdb.spi.SqlExceptionHelper] (JPA Startup Thread: ) SQL Warning Code: 0, SQLState: 00000
287 |
288 | 2021-12-15 13:39:48,870 WARN [org.hib.eng.jdb.spi.SqlExceptionHelper] (JPA Startup Thread: ) table "person" does not exist, skipping
289 | Hibernate:
290 |
291 | drop sequence if exists hibernate_sequence
292 | 2021-12-15 13:39:48,872 WARN [org.hib.eng.jdb.spi.SqlExceptionHelper] (JPA Startup Thread: ) SQL Warning Code: 0, SQLState: 00000
293 | 2021-12-15 13:39:48,872 WARN [org.hib.eng.jdb.spi.SqlExceptionHelper] (JPA Startup Thread: ) sequence "hibernate_sequence" does not exist, skipping
294 | Hibernate: create sequence hibernate_sequence start 1 increment 1
295 | Hibernate:
296 |
297 | create table Person (
298 | id int8 not null,
299 | first_name varchar(255),
300 | last_name varchar(255),
301 | salutation varchar(255),
302 | primary key (id)
303 | )
304 |
305 | Hibernate:
306 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Doro', 'Pesch', 'Ms')
307 | Hibernate:
308 | insert into person(id, first_name, last_name, salutation) values (nextval('hibernate_sequence'), 'Bobby', 'Brown', 'Mr')
309 | ```
310 |
311 | The first time I started Quarkus, I expected exceptions because there was no PostgreSQL database installed locally on my laptop. But… no… No exception upon startup. How could that be?
312 |
313 | ## Quarkus Dev Services
314 | Every developer has faced situations where they either wanted to quickly test some new feature or had to quickly fix a bug in an existing application. The workflow is mostly the same:
315 | - Setting up the local IDE
316 | - Cloning the source code repository
317 | - Checking dependencies for databases or other infrastructure software components
318 | - Installing the dependencies locally (a Redis server, an Infinispan server, a database, ActiveMQ, or whatever is needed)
319 | - Making sure everything is properly set up
320 | - Creating and implementing the bug fix or the feature
321 |
322 | In short, it takes quite some time before you actually start implementing what you have to implement.
323 |
324 | This is where Quarkus Dev Services come into play. As soon as Quarkus detects that there is a dependency on a third-party component (database, MQ, cache, …) and you have Docker Desktop installed on your developer machine, Quarkus starts the component for you. You don’t have to configure anything. It just happens.
325 |
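Dev Services can also be switched off if you prefer to manage the infrastructure yourself. A hedged sketch of the relevant switches (see the Dev Services guide for the authoritative list):

```java
# disable all Dev Services globally
quarkus.devservices.enabled=false

# or disable them only for the default datasource
quarkus.datasource.devservices.enabled=false
```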
326 | Have a look at the [official Quarkus documentation][7] to see which components are currently supported in this manner in `dev` mode.
327 |
328 | ## Testing the database client
329 | So you don’t have to install and configure a PostgreSQL database server locally on your laptop. This is great. Let’s test your service now to prove that it works.
330 |
331 | ```bash
332 | $ http :8080/person
333 | HTTP/1.1 500 Internal Server Error
334 | Content-Type: text/html;charset=UTF-8
335 | content-length: 113
336 |
337 | Could not find MessageBodyWriter for response object of type: java.util.ArrayList of media type: application/json
338 | ```
339 |
340 | OK. Well. It does not work. You need a `MessageBodyWriter` for this response type. If you have a look at the class `PersonResource`, you can see that we are directly returning a response of type `java.util.List`. And we have a class-level `@Produces` annotation of `application/json`. We need a component that translates the result into a JSON string.
341 |
342 | This can be done through the `quarkus-resteasy-jsonb` or `quarkus-resteasy-jackson` extension. We are going to use the first one by executing:
343 |
344 | ```bash
345 | $ quarkus ext add quarkus-resteasy-jsonb
346 | [SUCCESS] ✅ Extension io.quarkus:quarkus-resteasy-jsonb has been installed
347 | ```
348 |
349 | If you now call the endpoint again, you should see the correctly resolved and formatted output:
350 |
351 | ```bash
352 | $ http :8080/person
353 | HTTP/1.1 200 OK
354 | Content-Type: application/json
355 | content-length: 741
356 |
357 | [
358 | {
359 | "firstName": "Bobby",
360 | "id": 2,
361 | "lastName": "Brown",
362 | "salutation": "Mr"
363 | },
364 | {
365 | "firstName": "Eric",
366 | "id": 11,
367 | "lastName": "Clapton",
368 | "salutation": "Mr"
369 | },
370 | {
371 | "firstName": "Curt",
372 | "id": 4,
373 | "lastName": "Cobain",
374 | "salutation": "Mr"
375 | },
376 | ...
377 | ```
378 |
379 | ## Finalizing the CRUD REST service
380 | For a well-rounded create-read-update-delete (CRUD) service, you still have to implement methods to add, delete, and update a person from the list. Let’s do it now.
381 |
382 | ### Creating a new person
383 | The code snippet to create a new person is quite easy. Just implement another method, annotate it with `@POST` and `@Transactional`, and that’s it.
384 |
385 | ```java
386 | // additional imports needed in PersonResource: javax.ws.rs.POST,
387 | // javax.ws.rs.WebApplicationException, javax.ws.rs.core.Response,
388 | // and javax.transaction.Transactional
389 | @POST
390 | @Transactional
391 | public Response create(Person p) {
392 |     // a new person must not carry an id; the database generates it
393 |     if (p == null || p.id != null)
394 |         throw new WebApplicationException("id != null");
395 |     p.persist();
396 |     return Response.ok(p).status(200).build();
397 | }
394 | ```
395 |
396 | The only Panache-specific call in this method is `persist()`, invoked on the given `Person` instance. This is known as the [active record pattern][8] and is described in the official documentation.
397 |
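For illustration, here are a few other active-record-style operations Panache offers on the entity class itself (a sketch, not used by our service; write operations need an active transaction):

```java
Person p = Person.findById(12L);                 // lookup by primary key
List<Person> all = Person.listAll();             // fetch all rows
long misters = Person.count("salutation", "Mr"); // count with a query
Person.delete("lastName", "Brown");              // delete matching rows
```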
398 | Let’s have a look to see whether it works:
399 |
400 | ```bash
401 | $ http POST :8080/person firstName=Carlos lastName=Santana salutation=Mr
402 | HTTP/1.1 200 OK
403 | Content-Type: application/json
404 | content-length: 69
405 |
406 | {
407 | "firstName": "Carlos",
408 | "id": 12,
409 | "lastName": "Santana",
410 | "salutation": "Mr"
411 | }
412 | ```
413 |
414 | The returned JSON indicates that we did what we intended.
415 |
416 | ### Updating an existing person
417 | The same is true for updating a person. Use the `@PUT` annotation and make sure you are providing a path parameter, which you have to annotate with `@PathParam`. (The value-less `@PathParam` used here is RESTEasy’s `org.jboss.resteasy.annotations.jaxrs.PathParam`, which infers the parameter name; the standard `javax.ws.rs.PathParam` requires an explicit name.)
418 |
419 | ```java
420 | @PUT
421 | @Transactional
422 | @Path("{id}")
423 | public Person update(@PathParam Long id, Person p) {
424 | Person entity = Person.findById(id);
425 | if (entity == null) {
426 | throw new WebApplicationException("Person with id of " + id + " does not exist.", 404);
427 | }
428 |     if (p.salutation != null) entity.salutation = p.salutation;
429 |     if (p.firstName != null) entity.firstName = p.firstName;
430 |     if (p.lastName != null) entity.lastName = p.lastName;
431 |     // the entity is managed; changes are flushed at transaction commit
432 |     return entity;
432 | }
433 | ```
434 |
435 | Then test it:
436 |
437 | ```bash
438 | $ http PUT :8080/person/6 firstName=Jimi lastName=Hendrix
439 | HTTP/1.1 200 OK
440 | Content-Type: application/json
441 | content-length: 66
442 |
443 | {
444 | "firstName": "Jimi",
445 | "id": 6,
446 | "lastName": "Hendrix",
447 | "salutation": "Mr"
448 | }
449 | ```
450 |
451 | ### Deleting an existing person
452 | And finally, let’s create a `delete` method, which works in the same way as the `update()` method:
453 |
454 | ```java
455 | @DELETE
456 | @Path("{id}")
457 | @Transactional
458 | public Response delete(@PathParam Long id) {
459 | Person entity = Person.findById(id);
460 | if (entity == null) {
461 | throw new WebApplicationException("Person with id of " + id + " does not exist.", 404);
462 | }
463 | entity.delete();
464 | return Response.status(204).build();
465 | }
466 | ```
467 |
468 | And let’s check whether it works:
469 | ```bash
470 | $ http DELETE :8080/person/1
471 | HTTP/1.1 204 No Content
472 | ```
473 |
474 | This is a correct response: 204 is a success code in the 2xx range, indicating that there is no content to return.
475 |
476 | ## Preparing for CI/CD
477 | Until now, everything you did was for local development. With just a few lines of code, you’ve been able to create a complete database client. You did not even have to worry about setting up a local database for testing.
478 |
479 | But how can you specify real database properties when entering test or production stages?
480 |
481 | Quarkus supports [configuration profiles][9]. Properties marked with a given profile name are used only if the application runs in that particular profile. By default, Quarkus supports the following profiles:
482 | - `dev`: Gets activated when you run your app via `quarkus dev`
483 | - `test`: Gets activated when you are running tests
484 | - `prod`: The default profile whenever the app is running in neither `dev` nor `test` mode
485 |
486 | In our case, you want to specify database-specific properties only in `prod` mode. If you specified a database URL in dev mode, for example, Quarkus would try to use that database server instead of starting the corresponding Dev Service, which is what you actually want in dev mode.
487 |
488 | Our configuration therefore is:
489 |
490 | ```java
491 | # only when we are developing
492 | %dev.quarkus.hibernate-orm.database.generation=drop-and-create
493 | %dev.quarkus.hibernate-orm.sql-load-script=import.sql
494 |
495 | # only in production
496 | %prod.quarkus.hibernate-orm.database.generation=update
497 | %prod.quarkus.hibernate-orm.sql-load-script=no-file
498 |
499 | # Datasource settings...
500 | # note, we only set those props in prod mode
501 | quarkus.datasource.db-kind=postgresql
502 | %prod.quarkus.datasource.username=${DB_USER}
503 | %prod.quarkus.datasource.password=${DB_PASSWORD}
504 | %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://${DB_HOST}/${DB_DATABASE}
505 | ```
506 |
507 | Quarkus also supports the use of [property expressions][10]. For instance, if your application is running on Kubernetes, you might want to specify the datasource username and password via a secret. In this case, use the `${PROP_NAME}` expression format to refer to the property. Those expressions are evaluated when they are read, and the referenced values come either from the `application.properties` file or from environment variables.
508 |
509 | Now your application is prepared for CI/CD and for production (see later in this book).
510 |
511 | ## Moving the app to OpenShift
512 | Quarkus provides extensions to generate manifest files for Kubernetes or [OpenShift][11]. Let’s add the extensions to our `pom.xml` file:
513 |
514 | ```bash
515 | $ quarkus ext add jib openshift
516 | ```
517 |
518 | The `jib` extension helps you generate a container image out of the application. The `openshift` extension generates the necessary manifest files to deploy the application on—well—OpenShift.
519 |
520 | Let’s specify the properties accordingly:
521 |
522 | ```java
523 | # Packaging the app
524 | quarkus.container-image.builder=jib
525 | quarkus.container-image.image=quay.io/wpernath/person-service:v1.0.0
526 | quarkus.openshift.route.expose=true
527 | quarkus.openshift.deployment-kind=Deployment
528 |
529 | # resource limits
530 | quarkus.openshift.resources.requests.memory=128Mi
531 | quarkus.openshift.resources.requests.cpu=250m
532 | quarkus.openshift.resources.limits.memory=256Mi
533 | quarkus.openshift.resources.limits.cpu=500m
534 |
535 | ```
536 |
537 | Now build the application container image via:
538 | ```bash
539 | $ mvn package -Dquarkus.container-image.push=true
540 | ```
541 |
542 | This command also pushes the image to [Quay.io][12] as `quay.io/wpernath/person-service:v1.0.0`. Quarkus is using [Jib][13] to build the image.
543 |
544 | After the image is built, you can install the application into OpenShift by applying the manifest file:
545 |
546 | ```bash
547 | $ oc apply -f target/kubernetes/openshift.yml
548 | service/person-service configured
549 | imagestream.image.openshift.io/person-service configured
550 | deployment.apps/person-service configured
551 | route.route.openshift.io/person-service configured
552 | ```
553 |
554 | Then create a PostgreSQL database instance in the same namespace from the corresponding template. You can install the database from the OpenShift console by clicking **+Add→Developer Catalog→Database→PostgreSQL** and filling in meaningful properties for the service name, user name, password, and database name. You could alternatively execute the following command from the shell to instantiate a PostgreSQL server in the current namespace:
555 |
556 | ```bash
557 | $ oc new-app postgresql-persistent \
558 | -p POSTGRESQL_USER=wanja \
559 | -p POSTGRESQL_PASSWORD=wanja \
560 | -p POSTGRESQL_DATABASE=wanjadb \
561 | -p DATABASE_SERVICE_NAME=wanjaserver
562 | ```
563 |
564 | Suppose you’ve specified the database properties in `application.properties` like this:
565 |
566 | ```java
567 | %prod.quarkus.datasource.username=${DB_USER:wanja}
568 | %prod.quarkus.datasource.password=${DB_PASSWORD:wanja}
569 | %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://${DB_HOST:wanjaserver}/${DB_DATABASE:wanjadb}
570 | ```
571 |
572 | Quarkus takes the values after the colon as defaults, which means you don’t have to create those environment variables in the `Deployment` file for this test. But if you want to read settings from a Secret or ConfigMap, have a look at the [corresponding extension][14] for Quarkus.
573 |
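As a hedged sketch of that approach (the Secret name `person-service-secrets` is made up for illustration; the property names come from the kubernetes-config extension):

```java
# read configuration from the cluster in prod mode
%prod.quarkus.kubernetes-config.secrets.enabled=true
%prod.quarkus.kubernetes-config.secrets=person-service-secrets
```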
574 | After restarting the `person-service`, you should see that the database is used and that the `person` table was created. But there is no data in the database, because you’ve defined the corresponding import-script property to be used in dev mode only.
575 |
576 | So fill the database now:
577 | ```bash
578 | $ http POST http://person-service.apps.art8.ocp.lan/person firstName=Jimi lastName=Hendrix salutation=Mr
579 |
580 | $ http POST http://person-service.apps.art8.ocp.lan/person firstName=Joe lastName=Cocker salutation=Mr
581 |
582 | $ http POST http://person-service.apps.art8.ocp.lan/person firstName=Carlos lastName=Santana salutation=Mr
583 |
584 | ```
585 |
586 | You should now have three singers in the database. To verify, call:
587 | ```bash
588 | $ http http://person-service.apps.art8.ocp.lan/person
589 | HTTP/1.1 200 OK
590 |
591 | [
592 | {
593 | "firstName": "Joe",
594 | "id": 2,
595 | "lastName": "Cocker",
596 | "salutation": "Mr"
597 | },
598 | {
599 | "firstName": "Jimi",
600 | "id": 1,
601 | "lastName": "Hendrix",
602 | "salutation": "Mr"
603 | },
604 | {
605 | "firstName": "Carlos",
606 | "id": 3,
607 | "lastName": "Santana",
608 | "salutation": "Mr"
609 | }
610 | ]
611 |
612 | ```
613 |
614 | ## Becoming native
615 | Do you want to create a native executable out of your Quarkus app? That’s easily done by running:
616 |
617 | ```bash
618 | $ mvn package -Pnative -DskipTests
619 | ```
620 |
621 | However, this command would require you to set up [GraalVM][15] locally. GraalVM provides an ahead-of-time compiler that turns Java applications into native executables. If you don’t want to install and set up GraalVM locally, or if you’re always building for a container runtime, you could instruct Quarkus to [do a container build][16] as follows:
622 |
623 | ```bash
624 | $ mvn package -Pnative -DskipTests -Dquarkus.native.container-build=true
625 | ```
626 |
627 | If you also define `quarkus.container-image.build=true`, Quarkus will produce a native container image, which you could then use to deploy to a Kubernetes cluster.
628 |
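Put together, a single invocation that builds the native binary inside a container and packages it into an image could look like this (simply combining the flags discussed above):

```bash
$ mvn package -Pnative -DskipTests \
    -Dquarkus.native.container-build=true \
    -Dquarkus.container-image.build=true
```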
629 | Try it. And if you’re using OpenShift 4.9, you could have a look at the **Observe** tab within the Developer Console. This page monitors the resources used by your containers.
630 |
631 | My OpenShift 4.9 instance is installed on an Intel NUC with a 6-core Core i7 and 64GB of RAM. Using a native image instead of a JVM one changes quite a few things:
632 | - Startup time decreases from 1.2sec (non-native) to 0.03sec (native).
633 | - Memory usage decreases from 120MB (non-native) to 25MB (native).
634 | - CPU utilization drops to 0.2% of the requested CPU time.
635 |
636 | ## Summary
637 | Using Quarkus dramatically reduces the lines of code you have to write. As you have seen, creating a simple REST CRUD service is a piece of cake. If you then want to move your app to Kubernetes, it’s just a matter of adding another extension to the build process.
638 |
639 | Thanks to the Dev Services, you’re even able to do fast prototyping without worrying about installing many third-party applications, such as databases.
640 |
641 | Minimizing the amount of boilerplate code makes your application easier to maintain, and lets you focus on what you really have to do: implementing the business case.
642 |
643 | This is why I fell in love with Quarkus.
644 |
645 | Now let’s have a deeper look into working with images on Kubernetes and OpenShift.
646 |
647 | [1]: https://quarkus.io
648 | [2]: https://developers.redhat.com/e-books
649 | [3]: https://quarkus.io/get-started/
650 | [4]: https://httpie.io
651 | [5]: https://quarkus.io/guides/config
652 | [6]: https://quarkus.io/guides/hibernate-orm-panache
653 | [7]: https://quarkus.io/guides/dev-services
654 | [8]: https://quarkus.io/guides/hibernate-orm-panache#solution-1-using-the-active-record-pattern
655 | [9]: https://quarkus.io/guides/config-reference#profiles
656 | [10]: https://quarkus.io/guides/config-reference#property-expressions
657 | [11]: https://quarkus.io/guides/deploying-to-openshift
658 | [12]: https://quay.io
659 | [13]: https://github.com/GoogleContainerTools/jib
660 | [14]: https://quarkus.io/guides/kubernetes-config
661 | [15]: https://www.graalvm.org
662 | [16]: https://quarkus.io/guides/building-native-image#container-runtime
--------------------------------------------------------------------------------
/chapter1/status:
--------------------------------------------------------------------------------
1 | EDITING
2 |
--------------------------------------------------------------------------------
/chapter2/.Ulysses-Group.plist:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter2/.Ulysses-Group.plist
--------------------------------------------------------------------------------
/chapter2/Chapter 2- Deployment Basics.md:
--------------------------------------------------------------------------------
1 | # Chapter 2: Deployment Basics
2 | This chapter discusses how apps are deployed in Kubernetes and OpenShift, what manifest files are involved, and how to change the files so that you can redeploy your application into a new, clean namespace without rebuilding it.
3 |
4 | The chapter also discusses OpenShift Templates and Kustomize, tools that help automate those necessary file changes.
5 |
6 | ## Introduction and Motivation
7 | As someone with a long history of developing software, I like containers and Kubernetes a lot, because those technologies increase my own productivity. They free me from waiting to get what I need (a remote testing system, for example) from the operations department.
8 |
9 | On the other hand, writing applications for a container environment—especially microservices—can easily become quite complex, because I suddenly also have to maintain artifacts that do not necessarily belong to me:
10 |
11 | - ConfigMaps and secrets (well, I have to store my application configuration somehow, anyway)
12 | - The `deployment.yaml` file
13 | - The `service.yaml` file
14 | - An `ingress.yaml` or `route.yaml` file
15 | - A `PersistentVolumeClaim.yaml` file
16 | - etc.
17 |
18 | In native Kubernetes, I have to take care of creating and maintaining those artifacts. Thanks to the Source-to-Image concept in OpenShift, I don’t have to worry about most of those manifest files, because they will be generated for me.
19 |
20 | The following commands create a new project named `book-dev` in OpenShift, followed by a new app named `person-service`. The app is based on the Java builder image `openjdk-11-ubi8` and takes its source code from GitHub. The final command effectively publishes the service so that apps from outside of OpenShift can interact with it:
21 |
22 | ```bash
23 | $ oc new-project book-dev
24 | $ oc new-app java:openjdk-11-ubi8~https://github.com/wpernath/book-example.git --context-dir=person-service --name=person-service --build-env MAVEN_MIRROR_URL=http://nexus.ci:8081/repository/maven-public/
25 | $ oc expose service/person-service
26 | route.route.openshift.io/person-service exposed
27 | ```
28 |
29 | If you don’t have a local Maven mirror, omit the `--build-env` parameter from the second command. The `--context-dir` option lets you specify a subfolder within the Git repository with the actual source files.
30 |
31 | The security settings, deployment, image, route, and service are generated for you by OpenShift (including some OpenShift-specific resources, such as an `ImageStream` or a `DeploymentConfig`). These OpenShift conveniences allow you to fully focus on app development.
32 |
33 | To let the example start successfully, we also have to create a PostgreSQL database server. Just execute the following command; we will discuss it later.
34 |
35 | ```bash
36 | $ oc new-app postgresql-persistent \
37 | -p POSTGRESQL_USER=wanja \
38 | -p POSTGRESQL_PASSWORD=wanja \
39 | -p POSTGRESQL_DATABASE=wanjadb \
40 | -p DATABASE_SERVICE_NAME=wanjaserver
41 | ```
42 |
43 |
44 | ## Basic Kubernetes Files
45 | So what are the necessary artifacts in an OpenShift app deployment?
46 |
47 | - `Deployment`: A deployment connects the image with a container and provides various runtime information, including environment variables, startup scripts, and config maps. This configuration file also defines the ports used by the application.
48 | - `DeploymentConfig`: This file is specific to OpenShift and provides mainly the same functionality as a `Deployment`. If you’re starting your OpenShift journey today, use `Deployment` instead.
49 | - `Service`: A service contains the runtime information that Kubernetes needs to load balance your application over different instances (pods).
50 | - `Route`: A route defines the external URL exposed by your app. Requests from clients are received at this URL.
51 | - `ConfigMap`: ConfigMaps contain—well—configurations for the app.
52 | - `Secret`: Like a ConfigMap, a Secret contains configuration data, but it is meant for sensitive values such as passwords, which are stored base64-encoded rather than in plain text.
53 |
54 | Once those files are automatically generated, you can get them by using `kubectl` or `oc`:
55 |
56 | ```bash
57 | $ oc get deployment
58 | NAME READY UP-TO-DATE AVAILABLE AGE
59 | person-service 1/1 1 1 79m
60 | ```
61 |
62 | By specifying the `-o yaml` option, you can get the complete descriptor:
63 | ```bash
64 | $ oc get deployment person-service -o yaml
65 | apiVersion: apps/v1
66 | kind: Deployment
67 | metadata:
68 | [...]
69 | ```
70 |
71 | Just pipe the output into a new `.yaml` file and you’re done. You can directly use this file to create your app in a new namespace (except for the image section). But the generated file contains a lot of text you don’t need, so it’s a good idea to pare it down. For example, you can safely remove the `managedFields` section, big parts of the `metadata` section at the beginning, and the `status` section at the end of each file. After stripping the file down to the relevant parts (shown in the following listing), add it to your Git repository:
72 |
73 | ```yaml
74 | apiVersion: apps/v1
75 | kind: Deployment
76 | metadata:
77 | labels:
78 | app: person-service
79 | name: person-service
80 | spec:
81 | replicas: 1
82 | selector:
83 | matchLabels:
84 | deployment: person-service
85 | strategy:
86 | rollingUpdate:
87 | maxSurge: 25%
88 | maxUnavailable: 25%
89 | type: RollingUpdate
90 | template:
91 | metadata:
92 | labels:
93 | deployment: person-service
94 | spec:
95 | containers:
96 | - image: image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest
97 | imagePullPolicy: IfNotPresent
98 | name: person-service
99 | ports:
100 | - containerPort: 8080
101 | protocol: TCP
102 | restartPolicy: Always
103 | ```
104 |
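For instance, a quick way to capture the raw manifest before stripping it down could look like this (assuming the `raw-kubernetes` directory used below):

```bash
$ mkdir -p raw-kubernetes
$ oc get deployment person-service -o yaml > raw-kubernetes/deployment.yaml
```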
105 | Do the same with `Route` and `Service`. That’s all for the present. You’re now able to create your app in a new namespace by entering:
106 |
107 | ```bash
108 | $ oc new-project book-test
109 | $ oc policy add-role-to-user system:image-puller system:serviceaccount:book-test:default --namespace=book-dev
110 | $ oc apply -f raw-kubernetes/service.yaml
111 | $ oc apply -f raw-kubernetes/deployment.yaml
112 | $ oc apply -f raw-kubernetes/route.yaml
113 | ```
114 |
115 | The `oc policy` command is necessary to grant the `book-test` namespace access to the image in the namespace `book-dev`. Without this command, you’d get an error message in OpenShift saying that the image was not found, unless you are entering commands as an admin user.
116 |
117 | This section has described one way of getting the required files. Of course, if your application has more elements, you need to export the files defining those elements too. If you have defined objects of type `PersistentVolumeClaim`, `ConfigMap`, or `Secret`, you need to export and strip them down as well.
118 |
119 | This simple example has shown how you can export the manifest files of your app to redeploy it into another clean namespace. Typically, you have to change some fields to reflect differences between environments, especially for the `Deployment` file.
120 |
121 | For example, it does not make sense to use the latest image from the `book-dev` namespace in the `book-test` namespace. You’d always have the same version of your application in the development and test environments. To allow the environments to evolve separately, you have to change the image in the `Deployment` on every stage you’re using. You could obviously do this manually. But let’s find some ways to automate it.
122 |
123 | ## YAML Parser (yq)
124 | To maintain different versions of configuration files, the first tool that most likely pops into your mind is the lightweight command-line YAML parser, [`yq`][1].
125 |
126 | There are many ports available for most operating systems. On macOS, you can install it via [Homebrew][2]:
127 |
128 | ```bash
129 | $ brew install yq
130 | ```
131 |
132 | To read the name of the image out of the `Deployment` file, you could enter:
133 |
134 | ```bash
135 | $ yq e '.spec.template.spec.containers[0].image' raw-kubernetes/deployment.yaml
136 | image-registry.openshift-image-registry.svc:5000/book-dev/person-service@
138 | ```
139 |
140 | To change the name of the image, you could enter:
141 |
142 | ```bash
143 | $ yq e -i '.spec.template.spec.containers[0].image = "image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest"' \
144 | raw-kubernetes/deployment.yaml
145 | ```
146 |
147 | This command updates the `Deployment` in place, changing the name of the image to `person-service:latest`.
148 |
149 | The following process efficiently creates a staging release:
150 | - Tag the currently used image in `book-dev` to something more meaningful, like `person-service:v1.0.0-test`.
151 | - Use `yq` to change the image name in the deployment.
152 | - Create a new namespace.
153 | - Apply the necessary `Deployment`, `Service`, and `Route` configuration files as shown earlier.
154 |
155 | This process could easily be captured in a shell script, for example:
156 |
157 | ```bash
158 | #!/bin/bash
159 | oc tag book-dev/person-service@sha... book-dev/person-service:stage-v1.0.0
160 | yq e -i ...
161 | oc new-project ...
162 | oc apply -f deployment.yaml
163 | ```
164 |
165 | More details on this topic can be found in my article [Release Management with OpenShift: Under the hood][3].
166 |
167 | Using a tool such as `yq` seems to be the easiest way to automate the processing of Kubernetes manifest files. However, this process makes you create and maintain a script with each of your projects. It might be the best solution for small teams and small projects, but as soon as you’re responsible for more apps, the demands could easily get out of control.
168 |
169 | So let’s discuss other solutions.
170 |
171 | ## OpenShift Templates
172 | OpenShift Templates provides an easy way to create a single file out of the required configuration files and add customizable parameters to the unified file. As the name indicates, the service is offered only on OpenShift and is not portable to a generic Kubernetes environment.
173 |
174 | First, create all the standard configurations shown near the beginning of this chapter (such as `route.yaml`, `deployment.yaml`, and `service.yaml`), although you don’t have to separate the configurations into specific files. Next, to create a new template file, open your preferred editor and create a file called `template.yaml`. The header of that file should look like this:
175 |
176 | ```bash
177 | apiVersion: template.openshift.io/v1
178 | kind: Template
179 | metadata:
180 |   name: service-template
181 |   annotations:
182 |     tags: java
183 |     iconClass: icon-rh-openjdk
184 |     openshift.io/display-name: The person service template
185 |     description: This Template creates a new service
187 | objects:
188 | ```
189 |
190 | Then add the configurations you want to combine into this file right under the `objects` tag. Values that you want to change from one system to another should be specified as parameters in the format `${PARAMETER}`. For instance, a typical service configuration might look like this in the example `template.yaml`:
191 |
192 | ```bash
193 | - apiVersion: v1
194 | kind: Service
195 | metadata:
196 | labels:
197 | app: ${APPLICATION_NAME}
198 | name: ${APPLICATION_NAME}
199 | spec:
200 | ports:
201 | - name: 8080-tcp
202 | port: 8080
203 | protocol: TCP
204 | selector:
205 | app: ${APPLICATION_NAME}
206 | ```
207 |
208 | Then define the parameters in the `parameters` section of the file:
209 |
210 | ```bash
211 | parameters:
212 | - name: APPLICATION_NAME
213 | description: The name of the application you'd like to create
214 | displayName: Application Name
215 | required: true
216 | value: person-service
217 | - name: IMAGE_REF
218 | description: The full image path
219 | displayName: Container Image
220 | required: true
221 | value: image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest
222 | ```
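
For completeness, the `Deployment` object inside the template would reference the image parameter in the same way. A hedged sketch (abridged; field values assumed, not taken from the book’s repository):

```bash
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    labels:
      app: ${APPLICATION_NAME}
    name: ${APPLICATION_NAME}
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: ${APPLICATION_NAME}
    template:
      metadata:
        labels:
          app: ${APPLICATION_NAME}
      spec:
        containers:
        - image: ${IMAGE_REF}
          name: ${APPLICATION_NAME}
          ports:
          - containerPort: 8080
```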
223 |
224 | Now for the biggest convenience offered by OpenShift Templates: Once you have created a template in an OpenShift namespace, you can use the template to create applications within the graphical user interface (UI):
225 |
226 | ```bash
227 | $ oc new-project book-template
228 | $ oc policy add-role-to-user system:image-puller system:serviceaccount:book-template:default --namespace=book-dev
229 | $ oc apply -f ocp-template/service-template.yaml
230 | template.template.openshift.io/service-template created
231 | ```
232 |
233 | Just open the OpenShift web console now, choose the project, click **+Add**, and choose the **Developer Catalog**. You should be able to find a template called `service-template` (Image 1). This is the one we’ve created.
234 |
235 | ![Image 1: The Developer Catalog after adding the template ][image-1]
236 |
237 | Instantiate the template and fill in the required fields (Image 2).
238 |
239 | ![Image 2: Template instantiation with required fields][image-2]
240 |
241 | Then click **Create**. After a short time, you should see the application’s deployment progressing. Once it is finished, you should be able to access the route of the application.
242 |
243 | There are also several ways to create an application instance out of a template without the UI. You can run an `oc` command to do the work within OpenShift:
244 |
245 | ```bash
246 | $ oc new-app service-template -p APPLICATION_NAME=simple-service
247 | --> Deploying template "book-template/service-template" to project book-template
248 |
249 | * With parameters:
250 | * Application Name=simple-service
251 | * Container Image=image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest
252 |
253 | --> Creating resources ...
254 | route.route.openshift.io "simple-service" created
255 | service "simple-service" created
256 | deployment.apps "simple-service" created
257 | --> Success
258 | Access your application via route 'simple-service-book-template.apps.art3.ocp.lan'
259 | Run 'oc status' to view your app.
260 | ```
261 |
262 | Finally, you can process the template locally:
263 |
264 | ```bash
265 | $ oc process service-template APPLICATION_NAME=process-service -o yaml | oc apply -f -
266 | route.route.openshift.io/process-service created
267 | service/process-service created
268 | deployment.apps/process-service created
269 | ```
270 |
271 | Whatever method you choose to process the template, results show up in your Topology view for the project (Image 3).
272 |
273 | ![Image 3: OpenShift UI after using several ways of using the template][image-3]
274 |
275 | Creating and maintaining an OpenShift Template is fairly easy. Parameters can be created and set in intuitive ways. I personally like the deep integration into OpenShift’s developer console and the `oc` command.
276 |
277 | I would like OpenShift Templates even more if I could extend the development process to other teams. I would like to be able to create a template of a standard application (including a `BuildConfig`, etc.) and import it into the global `openshift` namespace so that all users could reuse my base—just like the other OpenShift Templates shipped with any OpenShift installation.
278 |
279 | Unfortunately, OpenShift Templates is for OpenShift only. If you are using a local Kubernetes installation and a production OpenShift version, the template is not easy to reuse. But if your development and production environments are completely based on OpenShift, you should give it a try.
280 |
281 | ## Kustomize
282 | Kustomize is a command-line tool that edits Kubernetes YAML configuration files in place, similar to `yq`. Kustomize tends to be easy to use because, usually, only a few fields of the configuration files have to be changed from stage to stage. Therefore, you start by creating a base set of files (`Deployment`, `Service`, `Route`, etc.) and apply changes through Kustomize for each stage. The patch mechanism of Kustomize takes care of merging the files together.
283 |
284 | Kustomize is very handy if you don’t want to learn a new templating engine and maintain a file that could easily contain thousands of lines, as happens with OpenShift Templates.
285 |
286 | Kustomize was originally created by Google and is now a subproject of Kubernetes. The command line tools, such as `kubectl` and `oc`, have most of the necessary functionality built in.
287 |
288 | ### How Kustomize works
289 | Let’s have a look at the files in a Kustomize directory:
290 |
291 | ```bash
292 | $ tree kustomize
293 | kustomize
294 | ├── base
295 | │ ├── deployment.yaml
296 | │ ├── kustomization.yaml
297 | │ ├── route.yaml
298 | │ └── service.yaml
299 | └── overlays
300 | ├── dev
301 | │ ├── deployment.yaml
302 | │ ├── kustomization.yaml
303 | │ └── route.yaml
304 | └── stage
305 | ├── deployment.yaml
306 | ├── kustomization.yaml
307 | └── route.yaml
308 |
309 | 4 directories, 10 files
310 | ```
311 |
312 | The top-level directory contains the `base` files and an `overlays` subdirectory. The `base` files define the resources that Kubernetes or OpenShift need in order to deploy your application. These files should be familiar from the previous sections of this chapter.
313 |
314 | Only `kustomization.yaml` is new. Let’s have a look at this file:
315 | ```yaml
316 | apiVersion: kustomize.config.k8s.io/v1beta1
317 | kind: Kustomization
318 |
319 | commonLabels:
320 | org: wanja.org
321 |
322 | resources:
323 | - deployment.yaml
324 | - service.yaml
325 | - route.yaml
326 |
327 | ```
328 |
329 | This file defines the resources for the deployment (`Deployment`, `Service`, and `Route`) but also adds a section called `commonLabels`. Those labels will be applied to all resources generated by Kustomize.
330 |
331 | The following commands process the files and deploy our application on OpenShift:
332 |
333 | ```bash
334 | $ oc new-project book-kustomize
335 | $ oc apply -k kustomize/overlays/dev
336 | service/dev-person-service created
337 | deployment.apps/dev-person-service created
338 | route.route.openshift.io/dev-person-service created
339 | ```
340 |
341 | If you also install the Kustomize command-line tool (for example, with `brew install kustomize` on macOS), you’re able to debug the output:
342 |
343 | ```bash
344 | $ kustomize build kustomize/overlays/dev
345 | apiVersion: v1
346 | kind: Service
347 | metadata:
348 | annotations:
349 | stage: development
350 | labels:
351 | app: person-service
352 | org: wanja.org
353 | variant: development
354 | name: dev-person-service
355 | spec:
356 | ports:
357 | - name: 8080-tcp
358 | port: 8080
359 | protocol: TCP
360 | targetPort: 8080
361 | selector:
362 | deployment: person-service
363 | org: wanja.org
364 | variant: development
365 | sessionAffinity: None
366 | type: ClusterIP
367 | ---
368 | apiVersion: route.openshift.io/v1
369 | kind: Route
370 | metadata:
371 | annotations:
372 | stage: development
373 | labels:
374 | app: person-service
375 | org: wanja.org
376 | variant: development
377 | name: dev-person-service
378 | spec:
379 | port:
380 | targetPort: 8080-tcp
381 | to:
382 | kind: Service
383 | name: dev-person-service
384 | weight: 100
385 | wildcardPolicy: None
386 | ---
387 | [...]
388 | ```
389 |
390 | A big benefit of Kustomize is that you have to maintain only the differences between each stage, so the overlay files are quite small and clear. If a file does not change between stages, it does not need to be duplicated.
391 |
392 | Kustomize fields such as `commonLabels` or `commonAnnotations` can specify labels or annotations that you would like to have in every metadata section of every generated file. `namePrefix` specifies a prefix for Kustomize to add to every `name` tag.
393 |
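To give a feel for how small an overlay can be, here is a hedged sketch of what the `stage` overlay might contain (file contents assumed; reconstructed from the build output shown below, your repository may differ):

```yaml
# overlays/stage/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base

namePrefix: staging-
commonLabels:
  variant: staging
commonAnnotations:
  stage: staging
  note: We are on staging now

patchesStrategicMerge:
- deployment.yaml
---
# overlays/stage/deployment.yaml (sketch): only the fields that differ
apiVersion: apps/v1
kind: Deployment
metadata:
  name: person-service
spec:
  replicas: 2
  template:
    spec:
      containers:
      - name: person-service
        env:
        - name: APP_GREETING
          value: Hey, this is the STAGING environment of the App
```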
394 | The following command merges the files together for the staging overlay.
395 |
396 | ```bash
397 | $ kustomize build kustomize/overlays/stage
398 | ```
399 |
400 | The following output shows that all resource names have `staging-` as a prefix. Additionally, the configuration has a new common label (the `variant: staging` line) and annotation (`note: We are on staging now`).
401 |
402 | ```bash
403 | $ kustomize build kustomize/overlays/stage
404 | [...]
405 |
406 | apiVersion: apps/v1
407 | kind: Deployment
408 | metadata:
409 | annotations:
410 | note: We are on staging now
411 | stage: staging
412 | labels:
413 | app: person-service
414 | org: wanja.org
415 | variant: staging
416 | name: staging-person-service
417 | spec:
418 | progressDeadlineSeconds: 600
419 | replicas: 2
420 | selector:
421 | matchLabels:
422 | deployment: person-service
423 | org: wanja.org
424 | variant: staging
425 | strategy:
426 | rollingUpdate:
427 | maxSurge: 25%
428 | maxUnavailable: 25%
429 | type: RollingUpdate
430 | template:
431 | metadata:
432 | annotations:
433 | note: We are on staging now
434 | stage: staging
435 | labels:
436 | deployment: person-service
437 | org: wanja.org
438 | variant: staging
439 | spec:
440 | containers:
441 | - env:
442 | - name: APP_GREETING
443 | value: Hey, this is the STAGING environment of the App
444 | image: image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest
445 | ---
446 | apiVersion: route.openshift.io/v1
447 | kind: Route
448 | metadata:
449 | annotations:
450 | note: We are on staging now
451 | stage: staging
452 | labels:
453 | app: person-service
454 | org: wanja.org
455 | variant: staging
456 | name: staging-person-service
457 | spec:
458 | port:
459 | targetPort: 8080-tcp
460 | to:
461 | kind: Service
462 | name: staging-person-service
463 | weight: 100
464 | wildcardPolicy: None
465 | ```
466 |
467 | The global `org` label is still specified. You can deploy the stage to OpenShift with the command:
468 |
469 | ```bash
470 | $ oc apply -k kustomize/overlays/stage
471 | ```
472 |
473 |
474 |
475 | ### More sophisticated Kustomize examples
476 | Instead of using `patchesStrategicMerge` files, you could just maintain a `kustomization.yaml` file containing everything. An example looks like this:
477 |
478 | ```yaml
479 | apiVersion: kustomize.config.k8s.io/v1beta1
480 | kind: Kustomization
481 | resources:
482 | - ../../base
483 |
484 | namePrefix: dev-
485 | commonLabels:
486 | variant: development
487 |
488 |
489 | # replace the image tag of the container with latest
490 | images:
491 | - name: image-registry.openshift-image-registry.svc:5000/book-dev/person-service:latest
492 | newTag: latest
493 |
494 | # generate a configmap
495 | configMapGenerator:
496 | - name: app-config
497 | literals:
498 | - APP_GREETING=We are in DEVELOPMENT mode
499 |
500 | # this patch needs to be done, because kustomize does not change
501 | # the route target service name
502 | patches:
503 | - patch: |-
504 | - op: replace
505 | path: /spec/to/name
506 | value: dev-person-service
507 | target:
508 | kind: Route
509 | ```
510 |
511 | Newer versions of Kustomize (v4.x and above) provide specific fields that help you maintain your overlays even better. For example, if all you have to do is change the tag of the target image, you could simply use the `images` field array specifier, shown in the previous listing.
512 |
513 | The `patches` field can apply a patch to a list of targets, such as replacing the target service name of the `Route` (as shown in the following listing) or adding health checks to the application’s `Deployment` file:
514 |
515 | ```yaml
516 | # this patch needs to be done, because kustomize does not change the route target service name
517 | patches:
518 | - patch: |-
519 | - op: replace
520 | path: /spec/to/name
521 | value: dev-person-service
522 | target:
523 | kind: Route
524 | ```
525 |
526 | The following patch applies the file `apply-health-checks.yaml` to the `Deployment`:
527 |
528 | ```yaml
529 | # apply some patches
530 | patches:
531 | # apply health checks to deployment
532 | - path: apply-health-checks.yaml
533 | target:
534 | version: v1
535 | kind: Deployment
536 | name: person-service
537 | ```
538 |
539 | The following file is the patch itself and gets applied to the `Deployment`:
540 | ```yaml
541 | apiVersion: apps/v1
542 | kind: Deployment
543 | metadata:
544 | name: person-service
545 | spec:
546 | template:
547 | spec:
548 | containers:
549 | - name: person-service
550 | readinessProbe:
551 | httpGet:
552 | path: /q/health/ready
553 | port: 8080
554 | scheme: HTTP
555 | timeoutSeconds: 1
556 | periodSeconds: 10
557 | successThreshold: 1
558 | failureThreshold: 3
559 | livenessProbe:
560 | httpGet:
561 | path: /q/health/live
562 | port: 8080
563 | scheme: HTTP
564 | timeoutSeconds: 2
565 | periodSeconds: 10
566 | successThreshold: 1
567 | failureThreshold: 3
568 | ```
569 |
570 | You can even generate a ConfigMap based on literal values or properties files:
571 |
572 | ```yaml
573 | # generate a configmap
574 | configMapGenerator:
575 | - name: app-config
576 | literals:
577 | - APP_GREETING=We are in DEVELOPMENT mode
578 | ```
579 |
580 | Starting with Kubernetes release 1.21 (which is reflected in OpenShift 4.8.x), `oc` and `kubectl` contain advanced Kustomize features from version 4.0.5. Kubernetes 1.22 (OpenShift 4.9.x) will contain features of Kustomize 4.2.0.
581 |
582 | Before Kubernetes 1.21 (OpenShift 4.7.x and earlier), `oc apply -k` does not include recent Kustomize features. So if you want to use those features, you need to use the `kustomize` command-line tool and pipe the output to `oc apply -f`:
583 |
584 | ```bash
585 | $ kustomize build kustomize-ext/overlays/stage | oc apply -f -
586 | ```
587 |
588 | For more information and even more sophisticated examples, have a look at the [Kustomize home page][4] as well as the examples in the official [GitHub.com repository][5].
589 |
590 | ### Summary of Kustomize
591 | Using Kustomize is quite easy and straightforward. You don’t really have to learn a templating DSL. You just need to understand the processes of patching and merging. Kustomize makes it easy for CI/CD practitioners to separate the configuration of an application for every stage. And because Kustomize is also a Kubernetes subproject and is tightly integrated into Kubernetes’s tools, you don’t have to worry that Kustomize would suddenly disappear.
592 |
593 | Argo CD has built-in support for Kustomize as well, so if you’re doing CI/CD with Argo CD, you can still use Kustomize.
594 |
595 | ## Summary
596 | In this chapter we have learned how to build an application using OpenShift’s Source-to-Image (S2I) technology, and how to adapt its manifests with the `yq` YAML parser, OpenShift Templates, and Kustomize. These are the base technologies for automating application deployment and packaging.
597 |
598 | Now you have an understanding of which artifacts need to be taken into account when you want to release your application and how to modify those artifacts to make sure that the new environment is capable of handling your application.
599 |
600 | The next chapter is about Helm Charts and Kubernetes Operators for application packaging and distribution.
601 |
602 | [1]: https://github.com/mikefarah/yq "YQ"
603 | [2]: https://brew.sh "homebrew"
604 | [3]: https://www.opensourcerers.org/2017/09/19/release-management-with-openshift-under-the-hood/
605 | [4]: https://kustomize.io/
606 | [5]: https://github.com/kubernetes-sigs/kustomize/tree/master/examples
607 |
608 | [image-1]: file:///Users/wpernath/Devel/ocpdev-book/chapter2/developer-catalog-template.png
609 | [image-2]: file:///Users/wpernath/Devel/ocpdev-book/chapter2/template-instantiation.png
610 | [image-3]: file:///Users/wpernath/Devel/ocpdev-book/chapter2/topology-view-template.png
611 |
--------------------------------------------------------------------------------
/chapter2/developer-catalog-template.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter2/developer-catalog-template.png
--------------------------------------------------------------------------------
/chapter2/status:
--------------------------------------------------------------------------------
1 | FINAL REVIEW
2 |
--------------------------------------------------------------------------------
/chapter2/template-instantiation.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter2/template-instantiation.png
--------------------------------------------------------------------------------
/chapter2/topology-view-template.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter2/topology-view-template.png
--------------------------------------------------------------------------------
/chapter3/.Ulysses-Group.plist:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter3/.Ulysses-Group.plist
--------------------------------------------------------------------------------
/chapter3/1-person-service-on-quayio.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter3/1-person-service-on-quayio.png
--------------------------------------------------------------------------------
/chapter3/2-event-log-openshift.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter3/2-event-log-openshift.png
--------------------------------------------------------------------------------
/chapter3/3-helm-chart-release-ocp.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter3/3-helm-chart-release-ocp.png
--------------------------------------------------------------------------------
/chapter3/4-building-operator-bundle.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter3/4-building-operator-bundle.png
--------------------------------------------------------------------------------
/chapter3/5-installed-operators.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter3/5-installed-operators.png
--------------------------------------------------------------------------------
/chapter3/status:
--------------------------------------------------------------------------------
1 | FINAL REVIEW
2 |
--------------------------------------------------------------------------------
/chapter4/.Ulysses-Group.plist:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter4/.Ulysses-Group.plist
--------------------------------------------------------------------------------
/chapter4/1-install-pipelines-operator.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter4/1-install-pipelines-operator.png
--------------------------------------------------------------------------------
/chapter4/2-installed-pipelines-operator.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter4/2-installed-pipelines-operator.png
--------------------------------------------------------------------------------
/chapter4/3-quarkus-app-props.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter4/3-quarkus-app-props.png
--------------------------------------------------------------------------------
/chapter4/4-all-cluster-tasks.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter4/4-all-cluster-tasks.png
--------------------------------------------------------------------------------
/chapter4/5-pipeline-builder.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter4/5-pipeline-builder.png
--------------------------------------------------------------------------------
/chapter4/6-linking-workspaces.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter4/6-linking-workspaces.png
--------------------------------------------------------------------------------
/chapter4/7-pipeline-run.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter4/7-pipeline-run.png
--------------------------------------------------------------------------------
/chapter4/8-simplified-maven-task.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter4/8-simplified-maven-task.png
--------------------------------------------------------------------------------
/chapter4/Chapter 4- CI-CD with Tekton Pipelines.md:
--------------------------------------------------------------------------------
1 | # Chapter 4: CI/CD with Tekton Pipelines
2 | Chapters 2 and 3 of this book discussed the basics of application packaging with Kustomize, Helm Charts, and Operators. They also discussed how to handle images and all the metadata required for working with Kubernetes.
3 |
4 | This chapter discusses how to integrate complex tasks, such as building and deploying applications, into Kubernetes using [Tekton][1]. Continuous integration and continuous delivery (CI/CD) are represented in Tekton as *pipelines* that combine all the steps you need to accomplish what you want. And Tekton makes it easy to write a general pipeline that you can adapt to many related tasks.
5 |
6 | ## Tekton and OpenShift Pipelines
7 | [Tekton][2] is an open source framework to create pipelines for Kubernetes and the cloud. This means that there is no central tool you need to maintain, such as Jenkins. You just have to install a Kubernetes Operator into your Kubernetes cluster to provide some custom resource definitions (CRDs). Based on those CRDs, you can create tasks and pipelines to compile, test, deploy, and maintain your application.
8 |
9 | [OpenShift Pipelines][3] is based on Tekton and adds a nice GUI to the OpenShift developer console. The Pipelines Operator is free to use for every OpenShift user.
10 |
11 | ### Tekton Concepts
12 | Tekton has numerous objects, but the architecture is quite easy to understand. The key concepts are:
13 |
14 | - Step: A process that runs in its own container and can execute whatever the container image provides. A step does not stand on its own, but must be embedded in a task.
15 | - Task: A set of steps. Each step runs in its own container, and all the steps of a task run together in a single Kubernetes pod. A task could be, for example, a compilation process using Maven. One step would be to check the Maven `settings.xml` file. The second step could be to execute the Maven goals (compile, package, etc.).
16 | - Pipeline: A set of tasks that are executed either in parallel or (in a simpler case) one after another. A pipeline can be customized through *parameters*.
17 | - PipelineRun: An execution of a pipeline with a concrete collection of parameters. For instance, a build-and-deploy pipeline might be executed by a PipelineRun that contains technical input (for example, a `ConfigMap` and `PersistentVolumeClaim`) as well as non-technical parameters (for example, the URL of the Git repository to clone, the name of the target image, etc.)
18 |
19 | Internally, Tekton creates a TaskRun object for each task it finds in a PipelineRun.
20 |
21 | To summarize: A pipeline contains a list of tasks, each of which contains a list of steps. One of the benefits of Tekton is that tasks and pipelines can be shared with other people, because a pipeline just specifies what to do in a given order. So if most of your projects have a similar pipeline, share and reuse it.
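
As a minimal sketch (all names here are illustrative, not from the book's repository), the task/step nesting looks like this in YAML:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task               # illustrative name
spec:
  steps:
    - name: step-one             # each step runs in its own container
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: echo "first step"
    - name: step-two             # runs after step-one, in the same pod
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: echo "second step"
```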
22 |
23 | ### Installing the tkn command-line tool
24 | Tekton comes with a command-line tool called `tkn`, which you can easily install on macOS by issuing:
25 |
26 | ```bash
27 | $ brew install tektoncd-cli
28 | ```
29 |
30 | Please check the official homepage of Tekton to see how to install the tool on other operating systems.
31 |
32 | ### Installing OpenShift Pipelines on Red Hat OpenShift
33 |
34 | The process in this chapter requires version 1.4.1 or higher of the OpenShift Pipelines Operator. To install that version, you also need a recent 4.7 OpenShift cluster, which you could install for example via [CodeReady Containers][4]. Without these tools, you won’t have access to workspaces (which you need to define).
35 |
36 | To install OpenShift Pipelines, you must be cluster-admin. Go to the OperatorHub, search for "pipelines," and click the **Install** button. There is nothing more to do for now, as the Operator maintains everything for you (Figure 1).
37 |
38 | ![Image 1: Using the OpenShift UI to install OpenShift Pipelines operator][image-1]
39 |
40 | After a while you’ll notice a new GUI entry in both the Administrator and the Developer UI (Figure 2).
41 | ![Image 2: New UI entries in the OpenShift GUI after you've installed the Pipelines Operator][image-2]
42 |
43 | ## This chapter's example: Create a pipeline for quarkus-simple
44 | For our [person-service][5], we are going to create a Tekton pipeline for a simple deployment task. The pipeline compiles the source, creates a Docker image based on [Jib][6], pushes the image to [Quay.io][7], and uses [kustomize][8] to apply that image to an OpenShift project called `book-tekton`.
45 |
46 | Sounds easy?
47 |
48 | It is. Well, mostly.
49 |
50 | First of all, why are we going to use Jib for building the container image? Well, that’s easily explained: Right now, there are three different container image build strategies available with Quarkus:
51 | - Docker
52 | - Source-to-Image (S2I)
53 | - Jib

54 | The Docker strategy uses the `docker` binary to build the container image. But the `docker` binary is not available inside a Kubernetes cluster (as mentioned in Chapter 3), because Docker is too heavyweight and requires root privileges to run the daemon.
55 |
56 | S2I requires creating `BuildConfig`, `DeploymentConfig`, and `ImageStream` objects specific to OpenShift, but these are not available in vanilla Kubernetes clusters.
57 |
58 | So in order to stay vendor-independent, we have to use Jib for this use case.
59 |
60 | Of course, you could also use other tools to create your container image inside Kubernetes. But in order to keep this Tekton example clean and simple, we are reusing what Quarkus provides. So we are able to simply set a few Quarkus properties in `application.properties` to define how Quarkus should package the application. Then we'll be able to use exactly *one* Tekton task to compile, package, and push the application to an external registry.
61 |
62 | **NOTE**: Make sure that your Quarkus application is using the required Quarkus extension called `container-image-jib`. If your `pom.xml` file does not include the `quarkus-container-image-jib` dependency, add it by executing:
63 |
64 | ```bash
65 | $ mvn quarkus:add-extension -Dextensions="container-image-jib"
66 | [INFO] Scanning for projects...
67 | [INFO]
68 | [INFO] -------------------< org.wanja.book:person-service >--------------------
69 | [INFO] Building person-service 1.0.0
70 | [INFO] --------------------------------[ jar ]---------------------------------
71 | [INFO]
72 | [INFO] --- quarkus-maven-plugin:2.4.2.Final:add-extension (default-cli) @ person-service ---
73 | [INFO] Looking for the newly published extensions in registry.quarkus.io
74 | [INFO] [SUCCESS] ✅ Extension io.quarkus:quarkus-container-image-jib has been installed
75 | [INFO] ------------------------------------------------------------------------
76 | [INFO] BUILD SUCCESS
77 | [INFO] ------------------------------------------------------------------------
78 | [INFO] Total time: 5.817 s
79 | [INFO] Finished at: 2021-11-22T09:34:05+01:00
80 | [INFO] ------------------------------------------------------------------------
81 | ```
82 |
83 | Then have a look at Figure 3 to see what properties need to be set to let Quarkus build, package, and push the image. Basically, the following properties need to be set in `application.properties`:
84 | ![Image 3: Application properties of the person-service][image-3]
85 |
86 | 1. `quarkus.container-image.build`: Set this to `true` to ensure that a `mvn package` command builds a container image.
87 | 2. `quarkus.container-image.push`: This is optional and required only if you want to push the image directly to the registry. I don't intend to do so, so I set the value to `false`.
88 | 3. `quarkus.container-image.builder`: This property selects the method of building the container image. We set the value to `jib` to use [Jib][9].
89 | 4. `quarkus.container-image.image`: Set this to the complete name of the image to be built, including the domain name.
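
Put together, the relevant `application.properties` entries look something like the following sketch, based on the values just discussed (adjust the image name to your own registry account):

```properties
quarkus.container-image.build=true
quarkus.container-image.push=false
quarkus.container-image.builder=jib
quarkus.container-image.image=quay.io/wpernath/person-service:latest
```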
90 |
91 | Now check out the [source code][10], have a look at `person-service/src/main/resources/application.properties`, change the image property to meet your needs, and issue:
92 |
93 | ```bash
94 | $ mvn clean package -DskipTests
95 | ```
96 |
97 | This command compiles the sources and builds the container image. If you want to push the resulting image to your registry, simply call:
98 |
99 | ```bash
100 | $ mvn package -DskipTests -Dquarkus.container-image.push=true
101 | ```
102 |
103 | After a while, Quarkus will generate and push your image to your registry. In my case, it’s `quay.io/wpernath/person-service`.
104 |
105 | ### Inventory check: What do we need?
106 | To create our use case, you need the following tools:
107 |
108 | - `git`: To fetch the source from GitHub.
109 | - `maven`: To reuse most of what Quarkus provides.
110 | - `kustomize`: To change our Deployment to point to the new image.
111 | - OpenShift client: To apply the changes we’ve made in the previous steps.
112 |
113 | Some of them can be set up for you by OpenShift. So now log into your OpenShift cluster, create a new project, and list all the available ClusterTasks:
114 |
115 | ```bash
116 | $ oc login .....
117 | $ oc new-project book-tekton
118 | $ tkn ct list
119 | ```
120 |
121 | > **Note**: What is the difference between a task and a ClusterTask? A ClusterTask is available globally in all projects, whereas a task is available only locally per project and must be installed into each project where you want to use it.
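
To see the difference from the command line, `tkn` offers a matching verb for each kind (output omitted here):

```bash
$ tkn task list   # tasks local to the current project
$ tkn ct list     # ClusterTasks, available in every project
```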
122 |
123 | Figure 4 shows all the available ClusterTasks created after you install the OpenShift Pipelines Operator. It seems you have most of what we need:
124 | - `git-clone`
125 | - `maven`
126 | - `openshift-client`
127 |
128 | ![Image 4: Available ClusterTasks in OpenShift after installation of the Pipelines Operator][image-4]
129 |
130 | You're missing just the `kustomize` task. You'll create one later, but first we want to take care of the rest of the tasks.
131 |
132 | ### Analyzing the necessary tasks
133 | If you want to have a look at the structure of a task, you can easily do so by executing the following command:
134 |
135 | ```bash
136 | $ tkn ct describe git-clone
137 | ```
138 |
139 | The output explains all the parameters of the task, together with other necessary information such as its inputs and outputs.
140 |
141 | By specifying the `-o yaml` parameter, you can view the YAML source definition of the task.
142 |
143 | The `git-clone` task allows a large number of parameters, but most of them are optional. You just have to specify `git-url` and `git-revision`. And you have to specify a workspace for the task.
144 |
145 | ### What are workspaces?
146 | Remember that Tekton runs each and every task as a separate pod (the steps inside a task run as containers within that pod). If the application running on the pod writes to some random folder, nothing gets really stored. So if we want (and yes, we do want) one step of the pipeline to read and write data that is shared with other steps, we have to find a way to do that.
147 |
148 | This is what workspaces are for. They could be a persistent volume claim, a config map, etc. A task that either requires a place to store data (such as git-clone) or needs to have access to data coming from a previous step (such as Maven), defines a workspace. If the task is embedded into a pipeline, the workspace is defined for every task in the pipeline. The PipelineRun (or in case of a single running task, the TaskRun) finally creates the mapping between the defined workspace and a corresponding storage.
149 |
150 | In our example, we need two workspaces:
151 | - A PersistentVolumeClaim (PVC) that the git-clone task clones the source code into, and from which the Maven task then compiles the source
152 | - A ConfigMap with the `maven-settings` file you need in your environment
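
For example, here is a minimal task that declares and consumes a workspace (a sketch with illustrative names, not one of the book's tasks):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: list-sources             # illustrative name
spec:
  workspaces:
    - name: source               # the task declares the workspace it needs
  steps:
    - name: list
      image: registry.access.redhat.com/ubi8/ubi-minimal
      # whatever volume the TaskRun/PipelineRun binds to "source" is mounted here
      script: ls -la $(workspaces.source.path)
```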
153 |
154 | ## Ways of building the pipeline
155 | Once you know what tasks you need in order to build your pipeline, you can start creating it. There are two ways of doing so:
156 | - Build your pipeline via a code editor as a YAML file.
157 | - Build your pipeline in the OpenShift Developer Console.
158 |
159 | As a first try, I recommend building the pipeline via the graphical Developer Console of OpenShift (Figure 5). Then export it and see what it looks like. The rest of this section focuses on that activity.
160 |
161 | > **Note**: Remember that you should have at least version 1.4.1 of the OpenShift Pipelines Operator installed.
162 |
163 | ![Image 5: Pipeline Builder][image-5]
164 |
165 | You have to provide parameters to each task, and link the required workspaces to the tasks. You can easily do that by using the GUI (Figure 6).
166 |
167 | ![Image 6:Linking the workspaces from Pipeline to task][image-6]
168 |
169 | You need to use the `maven` task twice, using the `package` goal:
170 | 1. To simply compile the source code.
171 | 2. To execute the `package` goal with the following parameters that [instruct quarkus to build and push the image][11]:
172 | - `-Dquarkus.container-image.push=true`
173 | - `-Dquarkus.container-image.builder=jib`
174 | - `-Dquarkus.container-image.image=$(params.image-name)`
175 | - `-Dquarkus.container-image.username=$(params.image-username)`
176 | - `-Dquarkus.container-image.password=$(params.image-password)`
177 |
178 | Once you’ve done all that and have clicked on the **Save** button, you’re able to export the YAML file by executing:
179 |
180 | ```bash
181 | $ oc get pipeline/build-and-push-image -o yaml > tekton/pipelines/build-and-push-image.yaml
182 | apiVersion: tekton.dev/v1beta1
183 | kind: Pipeline
184 | metadata:
185 | name: build-and-push-image
186 | spec:
187 | params:
188 | - default: https://github.com/wpernath/book-example.git
189 |     description: the URL of the Git repository to build
190 | name: git-url
191 | type: string
192 | ....
193 | ```
194 |
195 | You can easily re-import the pipeline file by executing:
196 |
197 | ```bash
198 | $ oc apply -f tekton/pipelines/build-and-push-image.yaml
199 | ```
200 |
201 | ### Placement of task parameters
202 | One of the goals of Tekton has always been to provide tasks and pipelines that are as reusable as possible. This means making each task as general-purpose as possible.
203 |
204 | If you’re providing the necessary parameters directly to each task, you might repeat the settings over and over again. For example, we are using the Maven task for compiling, packaging, image generation, and pushing. Here it makes sense to take the parameters out of the specification of each task. Instead, put them on the pipeline level under a property called `params` (as shown in the following listing) and refer to them inside the corresponding task by name, using the syntax `$(params.parameter-name)`.
205 |
206 | ```yaml
207 | apiVersion: tekton.dev/v1beta1
208 | kind: Pipeline
209 | metadata:
210 | name: build-and-push-image
211 | spec:
212 | params:
213 | - default: 'https://github.com/wpernath/book-example.git'
214 | description: Source to the GIT
215 | name: git-url
216 | type: string
217 | - default: main
218 | description: revision to be used
219 | name: git-revision
220 | type: string
221 | [...]
222 | tasks:
223 | - name: git-clone
224 | params:
225 | - name: url
226 | value: $(params.git-url)
227 | - name: revision
228 | value: $(params.git-revision)
229 | [...]
230 | taskRef:
231 | kind: ClusterTask
232 | name: git-clone
233 | workspaces:
234 | - name: output
235 | workspace: shared-workspace
236 | [...]
237 | ```
238 |
239 | ### Creating a new task: kustomize
240 | Remember that our default OpenShift Pipelines Operator installation didn't include Kustomize. Because we want to use it to apply the new image to our Deployment, we have to look for a proper task in [Tekton Hub][12]. Unfortunately, there doesn’t seem to be one available, so we have to create our own.
241 |
242 | For this, we first need to have a proper image that contains the `kustomize` executable. The `Dockerfile` for this project is available in the [kustomize-ubi repository on GitHub][13] and the image is available in [its repository on Quay.io][14].
243 |
244 | Now let’s create a new Tekton task:
245 |
246 | ```yaml
247 | apiVersion: tekton.dev/v1beta1
248 | kind: Task
249 | metadata:
250 | name: kustomize
251 | labels:
252 | app.kubernetes.io/version: "0.4"
253 | annotations:
254 | tekton.dev/pipelines.minVersion: "0.12.1"
255 | tekton.dev/tags: build-tool
256 | spec:
257 | description: >-
258 |     This task can be used to execute kustomize build scripts and to apply the changes via oc apply -f
259 | workspaces:
260 | - name: source
261 | description: The workspace holding the cloned and compiled quarkus source.
262 | params:
263 | - name: kustomize-dir
264 | description: Where should kustomize look for kustomization in source?
265 | - name: target-namespace
266 | description: Where to apply the kustomization to
267 | - name: image-name
268 | description: Which image to use. Kustomize is taking care of it
269 | steps:
270 | - name: build
271 | image: quay.io/wpernath/kustomize-ubi:latest
272 | workingDir: $(workspaces.source.path)
273 | script: |
274 |
275 | cd $(workspaces.source.path)/$(params.kustomize-dir)
276 |
277 | DIGEST=$(cat $(workspaces.source.path)/target/jib-image.digest)
278 |
279 | kustomize edit set image quay.io/wpernath/simple-quarkus:latest=$(params.image-name)@$DIGEST
280 |
281 | kustomize build $(workspaces.source.path)/$(params.kustomize-dir) > $(workspaces.source.path)/target/kustomized.yaml
282 |
283 | - name: apply
284 | image: 'image-registry.openshift-image-registry.svc:5000/openshift/cli:latest'
285 | workingDir: $(workspaces.source.path)
286 | script: |
287 | oc apply -f $(workspaces.source.path)/target/kustomized.yaml -n $(params.target-namespace)
288 | ```
289 |
290 | Paste this text into a new file called `kustomize-task.yaml`. As you can see from the contents of the file, this task requires a workspace called `source` and three parameters: `kustomize-dir`, `target-namespace`, and `image-name`. The task contains two steps: `build` and `apply`.
291 |
292 | The build step uses the Kustomize image to set the new image and digest. The apply step uses the internal OpenShift CLI image to apply the Kustomize-created files in the `target-namespace` namespace.
293 |
294 | To load the `kustomize-task.yaml` file into your current OpenShift project, simply execute:
295 |
296 | ```bash
297 | $ oc apply -f kustomize-task.yaml
298 | task.tekton.dev/kustomize configured
299 | ```
300 |
301 | ## Putting it all together
302 | We have now created a pipeline that contains four tasks: `git-clone`, `package`, `build-and-push-image`, and `apply-kustomize`. We have provided the necessary parameters to each task and to the pipeline and we have connected workspaces to it.
303 |
304 | Now we have to create the PersistentVolumeClaim (PVC) and a ConfigMap named `maven-settings`, which will then be used by the corresponding PipelineRun.
305 |
306 | ### Creating a maven-settings ConfigMap
307 | If you have a working `maven-settings` file, you can easily reuse it with the Maven task. Simply create it via:
308 |
309 | ```bash
310 | $ oc create cm maven-settings --from-file=/your-maven-settings --dry-run=client -o yaml > maven-settings-cm.yaml
311 | ```
312 |
313 | If you need to edit the ConfigMap, feel free to do so right now. Then execute the following command to import the ConfigMap into your current project:
314 |
315 | ```bash
316 | $ oc apply -f maven-settings-cm.yaml
317 | ```
318 |
319 | ### Creating a PersistentVolumeClaim
320 | Create a new file with the following content and execute `oc apply -f` to import it into your project:
321 |
322 | ```yaml
323 | apiVersion: v1
324 | kind: PersistentVolumeClaim
325 | metadata:
326 | name: builder-pvc
327 | spec:
328 | resources:
329 | requests:
330 | storage: 10Gi
331 | volumeMode: Filesystem
332 | accessModes:
333 | - ReadWriteOnce
334 | persistentVolumeReclaimPolicy: Retain
335 |
336 | ```
337 |
338 | This file reserves a PVC with the name `builder-pvc` and a requested storage of 10GB. Note that `persistentVolumeReclaimPolicy` is strictly a property of the underlying PersistentVolume rather than of the claim; the intent of `Retain` here is to make sure the volume's contents survive, because we want to reuse build artifacts from previous builds. More on this requirement later in this chapter.
339 |
340 | ## Running the pipeline
341 | Once you have imported all your artifacts into your current project, you can run the pipeline. To do so, click on the **Pipelines** entry on the left side of the Developer Perspective of OpenShift, choose your created pipeline, and select **Start** from the **Actions** menu on the right side. After you’ve filled in all necessary parameters (Figure 7), you’re able to start the PipelineRun.
342 | ![Image 7: Starting the pipeline with all parameters][image-7]
343 |
344 | The **Logs** and **Events** cards of the OpenShift Pipeline Editor show, well, all the logs and events. If you prefer to view these things from the command line, use `tkn` to work with PipelineRuns:
345 |
346 | ```bash
347 | $ tkn pr
348 | ```
349 |
350 | The output shows the available actions for PipelineRuns.
351 |
352 | To list each PipelineRun and its status, enter:
353 |
354 | ```bash
355 | $ tkn pr list
356 | NAME STARTED DURATION STATUS
357 | build-and-push-image-run-20211123-091039 1 minute ago 54 seconds Succeeded
358 | build-and-push-image-run-20211122-200911 13 hours ago 2 minutes Succeeded
359 | build-and-push-image-ru0vni 13 hours ago 8 minutes Failed
360 | ```
361 |
362 | To follow the logs of the last run, execute:
363 | ```bash
364 | $ tkn pr logs -f -L
365 | ```
366 |
367 | If you omit the `-L` option, `tkn` lets you choose from the list of PipelineRuns.
368 |
369 | You can also log, list, cancel, and delete PipelineRuns.
370 |
371 | Visual Studio Code has a Tekton Pipelines extension that you can also use to edit, build, and execute pipelines.
372 |
373 | ### Creating a PipelineRun object
374 | In order to start the pipeline via a shell (or from any other application you’re using for CI/CD), you need to create a PipelineRun object, which looks like the following:
375 |
376 | ```yaml
377 | apiVersion: tekton.dev/v1beta1
378 | kind: PipelineRun
379 | metadata:
380 | name: $PIPELINE-run-$(date "+%Y%m%d-%H%M%S")
381 | spec:
382 | params:
383 | - name: git-url
384 | value: https://github.com/wpernath/book-example.git
385 | - name: git-revision
386 | value: main
387 | - name: context-dir
388 | value: the-source
389 | - name: image-name
390 | value: quay.io/wpernath/person-service
391 | - name: image-username
392 | value: wpernath
393 | - name: image-password
394 | value: *****
395 | - name: target-namespace
396 | value: book-tekton
397 | workspaces:
398 | - name: shared-workspace
399 | persistentVolumeClaim:
400 | claimName: builder-pvc
401 | - configMap:
402 | name: maven-settings
403 | name: maven-settings
404 | pipelineRef:
405 | name: build-and-push-image
406 | serviceAccountName: pipeline
407 | ```
408 |
409 | Most of the properties of this object are self-explanatory. Just one word on the `serviceAccountName` property: Each PipelineRun runs under a given service account, which means that all pods started during the pipeline run operate inside this security context.
410 |
411 | OpenShift Pipelines creates a default service account for you called `pipeline`. If you have secrets that you want to make available to your PipelineRun, you have to connect them with the service account name. But this requirement is out of scope for this chapter of the book; we'll return to secrets in the next chapter.
412 |
413 | The `tekton/pipeline.sh` shell script creates a full version of this PipelineRun based on input parameters.
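
Alternatively, you can start the same pipeline ad hoc with the `tkn` CLI instead of applying a PipelineRun object. The following is a sketch, abbreviated to a subset of the parameters shown above:

```bash
$ tkn pipeline start build-and-push-image \
    -s pipeline \
    -p git-url=https://github.com/wpernath/book-example.git \
    -p git-revision=main \
    -p target-namespace=book-tekton \
    -w name=shared-workspace,claimName=builder-pvc \
    -w name=maven-settings,config=maven-settings
```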
414 |
415 | ### Optimizing the pipeline
416 | As earlier output from the logs showed, the first pipeline run takes quite a long time to finish: In my case, approximately 8 minutes. The second pipeline still took 2 minutes. I was running the pipelines on a home server, which has modest resources. When you use Tekton on your build farms, run times should be much lower because you’re running on dedicated server hardware.
417 |
418 | But still, the pipelines at this point take way too long.
419 |
420 | If you’re looking at the logs, you can see that the `maven` task is taking a long time to finish. This is because Maven is downloading the necessary artifacts again and again on every run. Depending on your Internet connection, this takes some time, even if you’re using a local Maven mirror.
421 |
422 | On your developer machine, Maven uses the `$HOME/.m2` folder as a cache for the artifacts. The same will be done when you’re running Maven from a task. However, because each PipelineRun runs on a separate set of pods, `$HOME/.m2` is not properly defined, which means the whole cache gets invalidated once the PipelineRun is finished.
423 |
424 | Maven allows you to specify `-Dmaven.repo.local` to provide a different path to a local cache. This option is what you can use to solve the problem.
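
Inside the task's step, the resulting Maven invocation then looks something like the following sketch (the workspace names `maven-settings` and `source` are assumptions based on this pipeline's layout):

```bash
# executed from the task's script; git-clone has already populated the workspace
mvn -s $(workspaces.maven-settings.path)/settings.xml \
    -Dmaven.repo.local=$(workspaces.source.path)/.m2 \
    clean package
```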
425 |
426 | I have created a new Maven task (`maven-caching`), which you can find in the [book's example repository][15]. The file was originally just a copy of the one that came from Tekton Hub. But then I decided to remove the init step, which was building a `maven-settings.xml` file based on some input parameters. Instead, I removed most of the parameters and added a ConfigMap with the required `maven-settings`. I believe this makes everything much easier.
427 |
428 | As Figure 8 shows, you now have only two parameters: `GOALS` and `CONTEXT_DIR`.
429 |
430 | ![Image 8: Simplified Maven task][image-8]
431 |
432 | The important properties for the `maven` call are shown in the second red box of Figure 8. They invoke `maven` with the `maven-settings` file and with the parameter indicating where to store the downloaded artifacts.
433 |
434 | One note on artifact storage: During my tests of this example, I realized that if the `git-clone` task clones the source to the root directory of the PVC (when no `subdirectory` parameter is given on task execution), the next start of the pipeline will delete everything from the PVC again. And in that case, we once again have no artifact cache.
435 |
436 | So you have to provide a `subdirectory` parameter (in my case, I used a global property called `the-source`) and provide exactly the same value to the `CONTEXT_DIR` parameter in the `maven` calls.
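
In the pipeline definition, that pairing looks roughly like this excerpt-style sketch (`subdirectory` is a parameter of the `git-clone` ClusterTask; the rest is abbreviated):

```yaml
tasks:
  - name: git-clone
    taskRef:
      kind: ClusterTask
      name: git-clone
    params:
      - name: url
        value: $(params.git-url)
      - name: subdirectory
        value: $(params.context-dir)   # "the-source", also passed as CONTEXT_DIR to the maven tasks
    workspaces:
      - name: output
        workspace: shared-workspace
```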
437 |
438 | The changes discussed in this section reduce the duration of our Maven-based tasks dramatically, in my case from 8 minutes to 54 seconds:
439 |
440 | ```bash
441 | $ tkn pr list
442 | NAME STARTED DURATION STATUS
443 | build-and-push-image-123 1 minute ago 54 seconds Succeeded
444 | build-and-push-image-ru0 13 hours ago 8 minutes Succeeded
445 | ```
446 |
447 | ## Summary of using Tekton pipelines
448 | Tekton is a powerful tool for creating CI/CD pipelines. Because it is based on Kubernetes, it uses extensive concepts from Kubernetes and reduces the maintenance of the tool itself. If you want to start your first pipeline quickly, try to use the OpenShift Developer UI, which you get for free if you’re installing the Operator. This gives you a nice base to start your tests. However, at some point—especially when it comes to optimizations—you need a proper editor to code your pipelines.
449 |
450 | One of the biggest advantages of Tekton over other CI/CD tools such as Jenkins is that you can reuse all your work for other projects and applications. If you want to standardize the way your pipelines work, build one pipeline and simply specify different sets of parameters for different situations. PipelineRun objects make this possible. The pipeline we have just created in this chapter can easily be reused for all Quarkus-generated applications. Just change the `git-url` and `image-name` parameters. Isn’t this great?
451 |
452 | And even if you’re not satisfied with all the tasks you get from Tekton Hub, use them as bases and build your own iterations out of them, as we did with the optimized Maven task and the Kustomize task in this chapter.
453 |
454 | I would not say that Tekton is the easiest technology available for building CI/CD pipelines, but it is definitely one of the most flexible.
455 |
456 | However, we have not even talked about [Tekton security][16] and how we are able to provide, for example, secrets to access your Git repository or the image repository. And we have cheated a little bit with image generation, because we were using the mechanism Quarkus provides. There are other ways of creating images inside a pipeline, for example using a dedicated Buildah task.
457 |
458 | The next chapter of this book discusses Tekton security, as well as GitOps and Argo CD.
459 |
460 | [1]: https://tekton.dev
461 | [2]: https://tekton.dev "Tekton Homepage"
462 | [3]: https://cloud.redhat.com/blog/introducing-openshift-pipelines "Introducing OpenShift Pipelines"
463 | [4]: https://github.com/code-ready/crc
464 | [5]: https://github.com/wpernath/book-example/tree/main/person-service "Quarkus Simple"
465 | [6]: https://github.com/GoogleContainerTools/jib "Google's JIB"
466 | [7]: https://quay.io/repository/wpernath/quarkus-simple-wow "quay.io"
467 | [8]: https://www.opensourcerers.org/2021/04/26/automated-application-packaging-and-distribution-with-openshift-part-12/ "Kustomize explained "
468 | [9]: https://github.com/GoogleContainerTools/jib "Google's Jib"
469 | [10]: https://github.com/wpernath/book-example
470 | [11]: https://quarkus.io/guides/container-image#quarkus-container-image-jib_quarkus.jib.base-jvm-image
471 | [12]: https://hub.tekton.dev
472 | [13]: https://github.com/wpernath/kustomize-ubi
473 | [14]: https://quay.io/repository/wpernath/kustomize-ubi
474 | [15]: https://raw.githubusercontent.com/wpernath/book-example/main/tekton/tasks/maven-task.yaml
475 | [16]: https://tekton.dev/docs/pipelines/auth/
476 |
477 | [image-1]: file:///Users/wpernath/Devel/ocpdev-book/chapter4/1-install-pipelines-operator.png
478 | [image-2]: file:///Users/wpernath/Devel/ocpdev-book/chapter4/2-installed-pipelines-operator.png
479 | [image-3]: file:///Users/wpernath/Devel/ocpdev-book/chapter4/3-quarkus-app-props.png
480 | [image-4]: file:///Users/wpernath/Devel/ocpdev-book/chapter4/4-all-cluster-tasks.png
481 | [image-5]: file:///Users/wpernath/Devel/ocpdev-book/chapter4/5-pipeline-builder.png
482 | [image-6]: file:///Users/wpernath/Devel/ocpdev-book/chapter4/6-linking-workspaces.png
483 | [image-7]: file:///Users/wpernath/Devel/ocpdev-book/chapter4/7-pipeline-run.png
484 | [image-8]: file:///Users/wpernath/Devel/ocpdev-book/chapter4/8-simplified-maven-task.png
--------------------------------------------------------------------------------
/chapter4/status:
--------------------------------------------------------------------------------
1 | FINAL REVIEW
2 |
--------------------------------------------------------------------------------
/chapter5/.Ulysses-Group.plist:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/wpernath/ocpdev-book/aa545cb7ff1f23d0d1cac742824a2d74a8a5471f/chapter5/.Ulysses-Group.plist
--------------------------------------------------------------------------------
/chapter5/Chapter Five- GitOps and Argo CD.md:
--------------------------------------------------------------------------------
1 | # Chapter Five: GitOps and Argo CD
2 | The previous chapters discussed the basics of modern application development with Kubernetes. This chapter shows you how to integrate a project into Kubernetes native pipelines to do your CI/CD and automatically deploy your application out of a pipeline run. We discuss the risks and benefits of using GitOps and [Argo CD][1] in your project and give you some hints on how to use it with Red Hat OpenShift.
3 |
4 | ## Introduction to GitOps
5 | I can imagine a reader complaining, "We are still struggling to implement DevOps and now you're coming to us with yet another fancy acronym to help solve all the issues we still have?" This is something I heard when I first talked about GitOps during a customer engagement.
6 |
7 | The short answer is: DevOps is a cultural change in your enterprise, meaning that developers and operations people should talk to each other instead of doing their work secretly behind big walls.
8 |
9 | GitOps is an evolutionary way of implementing continuous deployments for the cloud and Kubernetes. The idea behind GitOps is to use the same version control system you're using for your code to store formal descriptions of the infrastructure desired in the test or production environment. These descriptions can be updated as the needs of the environment change, and can be managed through version control just like source code. You automatically gain a history of all the deployments you've done. After each change, a process runs (triggered either manually or automatically) to make the production environment match the desired state. The term "healing" is often applied to the process that brings the actual state of the system in sync with the desired state.
10 |
11 | ### Motivation Behind GitOps
12 | But why Git? And why now? And what does Kubernetes have to do with all that?
13 |
14 | As described earlier in this book, you already should be maintaining a formal description of your infrastructure. Each application you're deploying on Kubernetes has a bunch of YAML files that are required to run your application. Adding those files to your project in a Git repository is just a natural step forward. And if you have a tool that could read those files from the repository and apply them to a specified Kubernetes namespace…wouldn't that be great?
15 |
16 | Well, that's what GitOps accomplishes. And Argo CD is one of the available tools to help you do GitOps.
17 |
18 | ### What Does a Typical GitOps Process Look Like?
19 | One of the questions people ask most often about GitOps is this: Is it just another way of doing CI/CD? The answer to this question is simply No. GitOps takes care of only the CD part, the delivery part.
20 |
21 | Without GitOps, the developer workflow looks like this:
22 | 1. A developer implements a change request.
23 | 2. Once the developer commits the changes to Git, an integration pipeline is triggered.
24 | 3. This pipeline compiles the code, runs all automated tests, and creates and pushes the image.
25 | 4. Finally, the pipeline automatically installs the application on the test system.
26 |
27 | With GitOps, the developer workflow looks somewhat different (see Image 1):
28 | 1. A developer implements a change request.
29 | 2. Once the developer commits the changes to Git, an integration pipeline is triggered.
30 | 3. This pipeline compiles the code, runs all automated tests, and creates and pushes the image.
31 | 4. The pipeline automatically updates the configuration files' directory in the Git repository to reflect the changes.
32 | 5. The CD tool sees a new desired state in Git, which is then synchronized to the Kubernetes environment.
33 |
34 | ![Image 1: The GitOps delivery model][image-1]
35 |
36 | So you're still using your pipeline based on Tekton, Jenkins, or whatever to do CI. GitOps then takes care of the CD part.
37 |
38 | ## Argo CD Concepts
39 | Right now (as of version 2.0), the concepts behind Argo CD are quite easy. You register an Argo application that contains pointers to the necessary Git repository with all the application-specific descriptors such as `Deployment`, `Service`, etc., and to the Kubernetes cluster. You might also define an Argo project, which defines various defaults such as:
40 | - Which source repositories are allowed
41 | - Which destination servers and namespaces can be deployed to
42 | - A whitelist of cluster resources to deploy, such as deployments, services, etc.
43 | - Synchronization windows
44 |
45 | Each application is assigned to a project and inherits the project's settings. A `default` project is created in Argo CD and contains reasonable defaults.
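
A project carrying such defaults might look like the following sketch (the values are illustrative and not part of the book's setup; Argo CD also supports `syncWindows` here):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: book-project                 # illustrative name
  namespace: openshift-gitops
spec:
  sourceRepos:                       # which source repositories are allowed
    - https://github.com/wpernath/person-service-config.git
  destinations:                      # which servers/namespaces can be deployed to
    - server: https://kubernetes.default.svc
      namespace: book-dev
  namespaceResourceWhitelist:        # which resource kinds may be created
    - group: apps
      kind: Deployment
    - group: ""
      kind: Service
```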
46 |
47 | Once the application is registered, you can manually start a sync to update the actual environment. Alternatively, Argo CD starts "healing" the application automatically, if the synchronization policy is set to do so.
48 |
49 | ## The Use Case: Implementing GitOps for our person-service App
50 | We've been using the [person service][2] over the course of this book. Let's continue to use it and create a GitOps workflow for it. You can find all the resources discussed here in the `gitops` folder within the `book-example` repository on GitHub.
51 |
52 | We are going to set up Argo CD (via the [OpenShift GitOps Operator][3]) on OpenShift 4.9 (via [Red Hat CodeReady Containers][4]). We are going to use [Tekton to build a pipeline][5], which updates the [person-service-config][6] Git repository with the latest image digest of the build. Argo CD should then detect the changes and should start a synchronization of our application.
53 |
54 | > **Note**: Typically, a GitOps pipeline does not directly push changes into the main branch of a configuration repository. Instead, the pipeline should commit files into a feature branch or release branch and should create a pull request, so that committers can review changes before they are merged to the main branch.
55 |
56 | ## The Application Configuration Repository
57 | First of all, let's create a new repository for our application configuration: `person-service-config`.
58 |
59 | Just create a new remote Git repository, for example on GitHub.com, and copy the URL (for example, `https://github.com/wpernath/person-service-config.git`). Then jump to the shell, create a new empty folder somewhere, and issue the following commands:
60 |
61 | ```bash
62 | $ mkdir person-service-config && cd person-service-config
63 | $ git init -b main
64 | $ git remote add origin https://github.com/wpernath/person-service-config.git
65 | ```
66 |
67 | One of the main concepts behind GitOps is to represent the configuration and build parameters of your application as a Git repository. This repository could be either part of the source code repository or separate. As I am a big fan of [*separation of concerns*][7], we will create a new repository containing the artifacts that we built in earlier chapters using Kustomize:
68 |
69 | ```bash
70 | $ tree
71 | └── config
72 | ├── base
73 | │ ├── config-map.yaml
74 | │ ├── deployment.yaml
75 | │ ├── kustomization.yaml
76 | │ ├── route.yaml
77 | │ └── service.yaml
78 | └── overlays
79 | ├── dev
80 | │ └── kustomization.yaml
81 | ├── prod
82 | │ └── kustomization.yaml
83 | └── stage
84 | ├── apply-health-checks.yaml
85 | ├── change-env-value.yaml
86 | └── kustomization.yaml
87 |
88 | 6 directories, 10 files
89 | ```
90 |
91 | Of course, there are several ways to structure your config repositories. Some natural choices include:
92 | 1. A single configuration repository with all files covering all services and stages for your complete environment
93 | 2. A separate configuration repository per service or application, with all files for all stages
94 | 3. A separate configuration repository for each stage of each service
95 |
96 | This is completely up to you. But option 1 is probably not optimal, because combining all services and stages in one configuration repository might make the repository hard to read, and does not promote separation of concerns. On the other hand, option 3 might break up information too much, forcing you to maintain hundreds of repositories for different applications or services. Therefore, option 2 strikes me as a good balance: one repository per application, containing files that cover all stages for that application.
97 |
98 | For now, create this configuration repository by copying the files from the `book-example/kustomize-ext` directory into the newly created Git repository:
99 |
100 | ```bash
101 | $ mkdir config
102 | $ cp -r ../book-example/kustomize-ext/ config/
103 | $ git add config
104 | $ git commit -am 'initial commit'
105 | $ git push -u origin main
106 | ```
107 |
108 | > **Note**: The original `kustomization.yaml` file already contains an `images:` section. This should be removed first.
109 |
110 | ## Installing the OpenShift GitOps Operator
111 | Because the OpenShift GitOps Operator is offered free of charge to OpenShift users and comes quite well preconfigured, I am focusing on its use. If you want to bypass the Operator and dig into Argo CD installation, please feel free to have a look at the official [guides][8].
112 |
113 | The [OpenShift GitOps Operator can easily be installed in OpenShift][9]. Just log in as a user with cluster-admin rights and switch to the **Administrator** perspective of the OpenShift console. Then go to the **Operators** menu entry and select **OperatorHub** (Image 2). In the search field, start typing "gitops" and select the GitOps Operator when its panel is shown.
114 |
115 | ![Image 2: Installing the OpenShift GitOps Operator][image-2]
116 | Once the Operator is installed, it creates a new namespace called `openshift-gitops` where an instance of Argo CD is installed and ready to be used.
117 |
118 | At the time of this writing, Argo CD is not yet configured to use OpenShift authentication, so you have to get the password of the admin user by reading the value of the `openshift-gitops-cluster` secret in the `openshift-gitops` namespace:
119 |
120 | ```bash
121 | $ oc get secret openshift-gitops-cluster -n openshift-gitops -ojsonpath='{.data.admin\.password}' | base64 -d
122 | ```
123 |
124 | And this is how to get the URL of your Argo CD instance:
125 |
126 | ```bash
127 | $ oc get route openshift-gitops-server -ojsonpath='{.spec.host}' -n openshift-gitops
128 | ```
129 |
130 | ## Creating a New Argo Application
131 | The easiest way to create a new Argo application is by using the GUI provided by Argo CD (Image 3).
132 |
133 | ![Image 3: Argo CD on OpenShift][image-3]
134 |
135 | Go to the URL and log in using `admin` as the user and the password you got as described in the previous section. Click **New App** and fill in the required fields shown in Image 4, as follows:
136 |
137 | 1. Application Name: We'll use `book-dev`, the same name as our target namespace.
138 | 2. Project: In our case it's `default`, the project created during Argo CD installation.
139 | 3. Sync Policy: Choose whether you want automatic synchronization, which is enabled by the **SELF HEAL** option.
140 | 4. Repository URL: Specify the Git repository with the application metadata (Kubernetes resources).
141 | 5. Path: This specifies the subdirectory within the repository that points to the actual files.
142 | 6. Cluster URL: Specify your Kubernetes instance.
143 | 7. Namespace: This specifies the OpenShift or Kubernetes namespace to deploy to.
144 |
145 | ![Image 4: Creating a new App in Argo CD][image-4]
146 |
147 | After filling out the fields, click **Create**. All Argo CD objects of the default Argo CD instance will be stored in the `openshift-gitops` namespace, from which you can export them via:
148 |
149 | ```bash
150 | $ oc get Application/book-dev -o yaml -n openshift-gitops > book-dev-app.yaml
151 | ```
152 |
153 | To create an application object in a new Kubernetes instance, open the `book-dev-app.yaml` file exported by the previous command in your preferred editor:
154 |
155 | ```yaml
156 | apiVersion: argoproj.io/v1alpha1
157 | kind: Application
158 | metadata:
159 | name: book-dev
160 | namespace: openshift-gitops
161 | spec:
162 | destination:
163 | namespace: book-dev
164 | server: https://kubernetes.default.svc
165 | project: default
166 | source:
167 | path: config/overlays/dev
168 | repoURL: https://github.com/wpernath/person-service-config.git
169 | targetRevision: HEAD
170 | syncPolicy:
171 | automated:
172 | prune: true
173 | syncOptions:
174 | - PruneLast=true
175 | ```
176 |
177 | Remove the generated metadata (fields such as `creationTimestamp`, `resourceVersion`, `uid`, and the `status` section) from the object file so that it looks like the listing just shown, and then enter the following command to import the application into the predefined Argo CD instance:
178 |
179 | ```bash
180 | $ oc apply -f book-dev-app.yaml -n openshift-gitops
181 | ```
182 |
183 | Please note that you have to import the application into the `openshift-gitops` namespace. Otherwise it won’t be recognized by the default Argo CD instance running after you’ve installed the OpenShift GitOps Operator.
184 |
185 | ## First Synchronization
186 | As you’ve chosen to do an automatic synchronization, Argo CD will immediately start synchronizing the configuration repository with your OpenShift target server. However, you might notice that the first synchronization takes quite a while and then fails without doing anything except issuing an error message (Image 5).
187 | ![Image 5: Argo CD UI showing synchronization failure][image-5]
188 |
189 | The error arises because the service account of Argo CD does not have the necessary authority to create typical resources in a new Kubernetes namespace. You have to enter the following command for each namespace Argo CD is taking care of:
190 |
191 | ```bash
192 | $ oc policy add-role-to-user admin system:serviceaccount:openshift-gitops:openshift-gitops-argocd-application-controller -n <target-namespace>
193 | ```
194 |
195 | Alternatively, if you prefer to use a YAML description file for this task, create something like the following:
196 |
197 | ```yaml
198 | apiVersion: rbac.authorization.k8s.io/v1
199 | kind: RoleBinding
200 | metadata:
201 | name: book-dev-role-binding
202 | namespace: book-dev
203 | roleRef:
204 | apiGroup: rbac.authorization.k8s.io
205 | kind: ClusterRole
206 | name: admin
207 | subjects:
208 | - kind: ServiceAccount
209 | name: openshift-gitops-argocd-application-controller
210 | namespace: openshift-gitops
211 | ```
212 |
213 | > **Note**: You could also provide cluster-admin rights to the Argo CD service account. This would have the benefit that Argo CD could grant itself access to everything on its own. The drawback is that Argo CD then becomes a superuser of your Kubernetes cluster, which might not be very secure.
214 |
215 | After you've given the service account the necessary role, you can safely click **Sync** and Argo CD will do the synchronization (Image 6).
216 | ![Image 6: Argo CD UI showing successful synchronization][image-6]
217 |
218 | If you chose automatic sync during configuration, any change to a file in the application's Git repository will cause Argo CD to check what has changed and start the necessary actions to keep the environment in sync.
219 |
220 | ## Automated setup
221 | In order to automatically create everything you need to let Argo CD start synchronizing your config repository with a Kubernetes cluster, you have to create the following files. Please have a look at `book-example/gitops/argocd`.
222 |
223 | ### Argo CD application config file
224 | The Argo CD application config file is named `book-apps.yaml`. This file contains the Application instructions for Argo CD discussed earlier:
225 |
226 | ```yaml
227 | apiVersion: argoproj.io/v1alpha1
228 | kind: Application
229 | metadata:
230 | name: book-dev
231 | namespace: openshift-gitops
232 | spec:
233 | destination:
234 | namespace: book-dev
235 | server: https://kubernetes.default.svc
236 | project: default
237 | source:
238 | path: config/overlays/dev
239 | repoURL: https://github.com/wpernath/person-service-config.git
240 | targetRevision: HEAD
241 | syncPolicy:
242 | automated:
243 | prune: true
244 | syncOptions:
245 | - PruneLast=true
246 | ```
247 |
248 | ### A File to Create the Target Namespace
249 | Because we are talking about an automated setup, you also need to automatically create the target namespace. This can be achieved via the `ns.yaml` file, which looks like:
250 |
251 | ```yaml
252 | apiVersion: v1
253 | kind: Namespace
254 | metadata:
255 | annotations:
256 | openshift.io/description: ""
257 | openshift.io/display-name: "DEV"
258 | labels:
259 | kubernetes.io/metadata.name: book-dev
260 | name: book-dev
261 | spec:
262 | finalizers:
263 | - kubernetes
264 | ```
265 |
266 | ### The Role Binding
267 | As described earlier, you need a role binding that makes sure the service account of Argo CD is allowed to create and modify the necessary Kubernetes objects. This can be done via the `roles.yaml` file:
268 |
269 | ```yaml
270 | apiVersion: rbac.authorization.k8s.io/v1
271 | kind: RoleBinding
272 | metadata:
273 | name: book-dev-role-binding
274 | namespace: book-dev
275 | roleRef:
276 | apiGroup: rbac.authorization.k8s.io
277 | kind: ClusterRole
278 | name: admin
279 | subjects:
280 | - kind: ServiceAccount
281 | name: openshift-gitops-argocd-application-controller
282 | namespace: openshift-gitops
283 | ```
284 |
285 | ### Use Kustomize to Apply All Files in One Go
286 | Until now, you have had to apply all the preceding files separately. Using Kustomize, you’re able to apply all files in one go. To accomplish this simplification, create a `kustomization.yaml` file, which looks like:
287 |
288 | ```yaml
289 | apiVersion: kustomize.config.k8s.io/v1beta1
290 | kind: Kustomization
291 |
292 | resources:
293 | - ns.yaml
294 | - roles.yaml
295 | - book-apps.yaml
296 | ```
297 |
298 | To install everything in one go, you simply have to execute the following command:
299 |
300 | ```bash
301 | $ oc apply -k book-example/gitops/argocd
302 | ```
303 |
304 | ## Creating a Tekton Pipeline to Update person-service-config
305 | We now want to change our pipeline from the previous chapter (Image 7) to be more GitOps'y. But what exactly needs to be done?
306 | ![Image 7: Tekton pipeline from Chapter 4][image-7]
307 |
308 | The current pipeline is a development pipeline, which will be used to:
309 | - Compile and test the code.
310 | - Create a new image.
311 | - Push that image to an external registry (in our case, [quay.io][10]).
312 | - Use Kustomize to change the image target.
313 | - Apply the changes via the OpenShift CLI to a given namespace.
314 |
324 | In GitOps, we don't do pipeline-centric deployments anymore. Instead, the final step of our pipeline just updates our `person-service-config` Git repository with the new version of the image. So instead of the `apply-kustomize` task, we are creating and using a `git-update-deployment` task as the final step. This task should clone the config repository, use Kustomize to apply the image changes, and finally push the changes back to GitHub.com.
325 |
326 | ### A Word On Tekton Security
327 | Because we want to update a private repository, we first need to have a look at [Tekton authentication][12]. Tekton uses specially annotated secrets with either a `username`/`password` combination or an SSH key. The authentication then produces a `~/.gitconfig` file (or, for an image repository, a `~/.docker/config.json` file) and maps it into the step's pod via the run's associated ServiceAccount. That's easy, isn't it? Configuring the process looks like:
328 |
329 | ```yaml
330 | apiVersion: v1
331 | kind: Secret
332 | metadata:
333 | name: git-user-pass
334 | annotations:
335 | tekton.dev/git-0: https://github.com
336 | type: kubernetes.io/basic-auth
337 | stringData:
338 | username:
339 | password:
340 | ```
341 |
342 | Once you've filled in the `username` and `password`, you can apply the secret into the namespace where you want to run your newly created pipeline.
343 |
344 | ```bash
345 | $ oc new-project book-ci
346 | $ oc apply -f secret.yaml
347 | ```
348 |
349 | Now you need to either create a new ServiceAccount for your pipeline or update the existing one, which was generated by the OpenShift Pipeline Operator. The pipeline runs completely within the security context of the provided ServiceAccount.
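
If you decide to update the existing `pipeline` ServiceAccount instead, linking the secret should be a one-liner (a sketch, assuming the `book-ci` project created above):

```bash
$ oc secrets link pipeline git-user-pass -n book-ci
```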
350 |
351 | Let's use a new ServiceAccount. To see which other secrets the default `pipeline` ServiceAccount carries, execute:
352 |
353 | ```bash
354 | $ oc get sa/pipeline -o yaml
355 | ```
356 | Copy the secrets to your own ServiceAccount:
357 |
358 | ```yaml
359 | apiVersion: v1
360 | kind: ServiceAccount
361 | metadata:
362 | name: pipeline-bot
363 | secrets:
364 | - name: git-user-pass
365 | ```
366 |
367 | You don't need to copy the following generated secrets to your ServiceAccount, because they will be linked automatically with the new ServiceAccount by the Operator:
368 |
369 | - `pipeline-dockercfg-`: The default secret for reading and writing images from and to the internal OpenShift registry.
370 | - `pipeline-token-`: The default secret for the `pipeline` ServiceAccount. This is used internally.
371 |
372 | You also have to create two RoleBindings for the ServiceAccount. Otherwise, you can't reuse the PersistentVolumes we've been using so far:
373 |
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipelinebot-rolebinding1
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pipelines-scc-clusterrole
subjects:
  - kind: ServiceAccount
    name: pipeline-bot
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipelinebot-rolebinding2
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit
subjects:
  - kind: ServiceAccount
    name: pipeline-bot
```
399 |
The `edit` role is needed only if your pipeline changes Kubernetes resources in the given namespace. If your pipeline doesn't do things like that, you can safely omit the corresponding RoleBinding; in our case, we don't strictly need the `edit` role.
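
With these manifests applied, any PipelineRun that references the new ServiceAccount executes within its security context. As a minimal sketch (the pipeline name is illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: gitops-dev-pipeline-run-
spec:
  # Run all tasks with the ServiceAccount we just prepared
  serviceAccountName: pipeline-bot
  pipelineRef:
    name: gitops-dev-pipeline  # illustrative name
```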
401 |
402 | ### The git-update-deployment Tekton Task
Now that you understand Tekton authentication and have created all the necessary manifests, you can focus on the `git-update-deployment` task.
404 |
405 | Remember, we want to have a task that does the following:
406 | - Clone the configuration Git repository.
407 | - Update the image digest via Kustomize.
408 | - Commit and push the changes back to the repository.
409 |
410 | This means you need to create a task with at least the following parameters:
411 | - `GIT_REPOSITORY`: The configuration repository to clone.
412 | - `CURRENT_IMAGE`: The name of the image in the `deployment.yaml` file.
413 | - `NEW_IMAGE`: The name of the new image to deploy.
- `NEW_DIGEST`: The digest of the new image to deploy. This digest is generated in the `build-and-push-image` step that appears in both the Chapter 4 version and this chapter's version of the pipeline.
- `KUSTOMIZATION_PATH`: The path within the `GIT_REPOSITORY` that contains the `kustomization.yaml` file.
416 |
417 | And of course, you need to create a workspace to hold the project files.
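
Putting the parameters and the workspace together, the skeleton of the task looks roughly like this (a sketch; the complete definition is in the repository):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: git-update-deployment
spec:
  params:
    - name: GIT_REPOSITORY
    - name: CURRENT_IMAGE
    - name: NEW_IMAGE
    - name: NEW_DIGEST
    - name: KUSTOMIZATION_PATH
  workspaces:
    - name: workspace
  steps:
    [...]
```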
418 |
419 | Let's have a look at the steps within the task:
```yaml
steps:
  - name: update-digest
    image: quay.io/wpernath/kustomize-ubi:latest
    workingDir: $(workspaces.workspace.path)/the-config
    script: |
      cd $(params.KUSTOMIZATION_PATH)
      kustomize edit set image $(params.CURRENT_IMAGE)=$(params.NEW_IMAGE)@$(params.NEW_DIGEST)

      cat kustomization.yaml

  - name: git-commit
    image: docker.io/alpine/git:v2.26.2
    workingDir: $(workspaces.workspace.path)/the-config
    script: |
      git config user.email "wpernath@redhat.com"
      git config user.name "My Tekton Bot"

      git add $(params.KUSTOMIZATION_PATH)/kustomization.yaml
      git commit -am "[ci] Image digest updated"

      git push origin HEAD:main

      RESULT_SHA="$(git rev-parse HEAD | tr -d '\n')"
      EXIT_CODE="$?"
      if [ "$EXIT_CODE" != 0 ]
      then
        exit $EXIT_CODE
      fi
      # Make sure we don't add a trailing newline to the result!
      echo -n "$RESULT_SHA" > $(results.commit.path)
```
453 |
Nothing special here: these are the same steps we would perform manually via the CLI. The full task and everything related can be found, as always, in the `gitops/tekton/tasks` folder of the repository on GitHub.
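
For illustration, the manual equivalent on a local machine would look roughly like this (the repository URL, overlay path, and digest are illustrative):

```bash
# Clone the configuration repository
$ git clone https://github.com/wpernath/person-service-config.git the-config
$ cd the-config/config/overlays/dev   # illustrative path

# Point the deployment to the freshly built image digest
$ kustomize edit set image person-service=quay.io/wpernath/person-service@sha256:<digest>

# Commit and push the change back
$ git commit -am "[ci] Image digest updated"
$ git push origin HEAD:main
```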
455 |
456 | ### Creating an extract-digest Tekton Task
The next question is how to get the image digest. Because we are using the [Quarkus image builder][13] (which in turn uses [Jib][14]), the build writes the digest to a `target/jib-image.digest` file, so we need either a step or a separate task to read it.
458 |
Because I want to keep the `git-update-deployment` task as general-purpose as possible, I have created a separate task for just this step. The task relies on a Tekton feature known as [emitting results from a task][15].
460 |
Within the `spec` section of a task, you can define a `results` property. Each result is stored in `$(results.<name>.path)`, where `<name>` is the result's declared name. Results are available in all tasks and on the pipeline level through strings in the format:

```bash
$(tasks.<task-name>.results.<result-name>)
```

For example, the `DIGEST` result defined below will later be referenced as `$(tasks.extract-digest.results.DIGEST)`.
466 |
467 | The following configuration defines the step that extracts the image digest and stores it into a result:
468 |
```yaml
spec:
  params:
    - name: image-digest-path
      default: target

  results:
    - name: DIGEST
      description: The image digest of the last Quarkus Maven build with Jib image creation

  steps:
    - name: extract-digest
      image: quay.io/wpernath/kustomize-ubi:latest
      script: |
        # extract the DIGEST
        DIGEST=$(cat $(workspaces.source.path)/$(params.image-digest-path)/jib-image.digest)

        # store the DIGEST in the result
        echo -n $DIGEST > $(results.DIGEST.path)
```
489 |
Now it's time to bring everything together in a new pipeline. Image 8 shows the tasks. The first three are the same as in the Chapter 4 version of this pipeline. We have added the `extract-digest` task described in the previous section, and we end with the `git-update-deployment` task, which updates the configuration repository.
491 | ![Image 8: The gitops-pipeline][image-8]
492 |
493 | Start by using the [previous non-GitOps pipeline][16], which we created in Chapter 4. Remove the last task and add `extract-digest` and `git-update-deployment` as new tasks.
494 |
495 | Add a new `git-clone` task at the beginning by hovering over `clone-source` and pressing the plus sign below it to create a new parallel task. Name the new task `clone-config` and fill in the necessary parameters:
496 |
497 | - `config-git-url`: This should point to the service configuration repository.
498 | - `config-git-revision`: This is the branch name of the configuration repository to clone.
499 |
500 | Map these parameters to the `git-update-deployment` task, as shown in Image 9.
501 | ![Image 9: Parameter mapping][image-9]
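
In the pipeline's YAML, this mapping boils down to passing pipeline parameters and task results into the task's parameters. A sketch (the `image-name` and `config-dir` pipeline parameters are illustrative):

```yaml
- name: git-update-deployment
  params:
    - name: GIT_REPOSITORY
      value: $(params.config-git-url)
    - name: CURRENT_IMAGE
      value: $(params.image-name)   # illustrative pipeline parameter
    - name: NEW_IMAGE
      value: $(params.image-name)
    - name: NEW_DIGEST
      value: $(tasks.extract-digest.results.DIGEST)
    - name: KUSTOMIZATION_PATH
      value: $(params.config-dir)   # illustrative pipeline parameter
  runAfter:
    - extract-digest
  taskRef:
    kind: Task
    name: git-update-deployment
```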
502 |
503 | ### Testing the Pipeline
You can't currently run the pipeline from the user interface, because the UI gives you no way to select a different ServiceAccount, and the default one lacks the two secrets you need to provide. Therefore, start the pipeline via the CLI. For your convenience, I have created a Bash script called `gitops/tekton/pipeline.sh` that can be used to initialize your namespace and start the pipeline.
505 |
506 | To create the necessary namespaces and Argo CD Applications, enter the following command, passing your username and password:
507 |
```bash
$ ./pipeline.sh init [--force] --git-user <git-user> \
	--git-password <git-password> \
	--registry-user <registry-user> \
	--registry-password <registry-password>
```
514 |
515 | If the `--force` option is included, the command creates the following namespaces and Argo CD applications for you:
516 | - `book-ci`: Pipelines, tasks, and a Nexus instance
517 | - `book-dev`: The current dev stage
518 | - `book-stage`: The most recent stage release
519 |
520 | The following command starts the development pipeline.
521 |
```bash
$ ./pipeline.sh build -u <user> \
	-p <password>
```
526 |
Whenever the pipeline executes successfully, you should see a new commit in the `person-service-config` Git repository. And you should notice that Argo CD has initiated a synchronization process, which ends with a redeployment of the Quarkus application.
528 |
529 | Have a look at Chapter 4 for more information on starting and testing pipelines.
530 |
531 | ## Creating a stage-release Pipeline
What does a staging pipeline look like? In our case, we need a process that does the following (Image 10):
533 | 1. Clone the config repository.
534 | 2. Create a release branch (e.g., `release-1.2.3`).
3. Get the image digest. In our case, we extract the digest of the image currently deployed in the development environment.
4. Tag the image in the image repository (e.g., `quay.io/wpernath/person-service:1.2.3`).
537 | 5. Update the configuration repository and point the stage configuration to the newly tagged image.
538 | 6. Commit and push the code back to the Git repository.
539 | 7. Create a pull or merge request.
540 |
541 | ![Image 10: The staging pipeline][image-10]
542 |
These tasks are followed by a manual process in which a test specialist accepts the pull request and merges the content from the branch back into the main branch. Then Argo CD picks up the changes and updates the running staging instance in Kubernetes.
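
If the merge is done from the command line rather than the Git provider's UI, it amounts to something like this (the release branch name follows the pattern from step 2 above):

```bash
# Merge the release branch back into main and publish it
$ git checkout main
$ git merge release-1.2.3
$ git push origin main
```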
544 |
You can use the Bash script I created to start the staging pipeline; for example, to create release `1.0.0-beta1`:
546 |
547 | ```bash
548 | $ ./pipeline.sh stage -r 1.0.0-beta1
549 | ```
550 |
551 | ### Setup of the Pipeline
The `git-clone` and `git-branch` tasks use existing ClusterTasks, so there is nothing to explain here except one new Tekton feature: [conditional execution of a task][17] by using a `when` expression.
553 |
In our case, the `git-branch` task should be executed only if a `release-name` is specified. The corresponding YAML code in the pipeline looks like:
555 |
```yaml
when:
  - input: $(params.release-name)
    operator: notin
    values:
      - ""
```
563 |
564 | The new `extract-digest` task uses `yq` to extract the digest out of the `kustomization.yaml` file. The command looks like:
565 |
566 | ```bash
567 | $ yq eval '.images[0].digest' $(workspaces.source.path)/$(params.kustomize-dir)/kustomization.yaml
568 | ```
569 |
570 | The result of this call is stored in the task's `results` field.
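
A sketch of the corresponding task fragment (assuming the builder image ships `yq`; the parameter names match the command above):

```yaml
results:
  - name: DIGEST
steps:
  - name: extract-digest
    image: quay.io/wpernath/kustomize-ubi:latest  # assumed to contain yq
    script: |
      # Read the digest of the currently deployed dev image
      DIGEST=$(yq eval '.images[0].digest' \
        $(workspaces.source.path)/$(params.kustomize-dir)/kustomization.yaml)

      # Store it as the task result, without a trailing newline
      echo -n $DIGEST > $(results.DIGEST.path)
```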
571 |
572 | ### The tag-image Task
573 | The `tag-image` task uses a `skopeo-copy` ClusterTask, which requires a source image and a target image. The original use case of this task was to copy images from one repository to another (for example, from the local repository up to an external Quay.io repository). However, you can also use this task to tag an image in a repository. The corresponding parameters for the task are:
574 |
```yaml
- name: tag-image
  params:
    - name: srcImageURL
      value: >-
        docker://$(params.target-image)@$(tasks.extract-digest.results.DIGEST)
    - name: destImageURL
      value: >-
        docker://$(params.target-image):$(params.release-name)
    - name: srcTLSverify
      value: 'false'
    - name: destTLSverify
      value: 'false'
  runAfter:
    - extract-digest
  taskRef:
    kind: ClusterTask
    name: skopeo-copy
  [...]
```
595 |
596 | `skopeo` uses an existing Docker configuration if it finds one in the home directory of the current user. For us, this means that we have to create another secret with the following content:
```yaml
apiVersion: v1
kind: Secret
metadata:
  annotations:
    tekton.dev/docker-0: https://quay.io
  name: quay-push-secret
type: kubernetes.io/basic-auth
stringData:
  username: <registry username>