├── .github └── workflows │ └── test.yaml ├── .gitignore ├── .sourceignore ├── Brewfile ├── DEMO.md ├── LICENSE ├── Makefile ├── README.md ├── WORKSHOP.md ├── apps └── faces │ ├── faces-abtest.yaml │ ├── faces-canary.yaml │ ├── faces-ingress.yaml │ ├── faces-sync.yaml │ ├── kustomization.yaml │ └── namespace.yaml ├── clusters └── my-cluster │ ├── apps.yaml │ ├── flux-system │ ├── gotk-components.yaml │ ├── gotk-sync.yaml │ └── kustomization.yaml │ └── infrastructure.yaml ├── demosh ├── check-github.sh ├── check-requirements.sh └── demo-tools.sh ├── docs └── screens │ ├── faces-foo.png │ ├── faces.png │ ├── linkerd-metrics.png │ ├── wego-apps.png │ ├── wego-deps.png │ └── wego-linkerd.png ├── infrastructure ├── cert-manager │ ├── kustomization.yaml │ ├── namespace.yaml │ ├── release.yaml │ └── repository.yaml ├── flagger │ ├── kustomization.yaml │ ├── loadtester.yaml │ ├── namespace.yaml │ ├── release.yaml │ └── repository.yaml ├── ingress-nginx │ ├── kustomization.yaml │ ├── namespace.yaml │ ├── release.yaml │ └── repository.yaml ├── linkerd │ ├── README.md │ ├── kustomization.yaml │ ├── kustomizeconfig.yaml │ ├── linkerd-certs.yaml │ ├── linkerd-control-plane.yaml │ ├── linkerd-crds.yaml │ ├── linkerd-smi.yaml │ ├── linkerd-viz.yaml │ ├── namespaces.yaml │ └── repositories.yaml └── weave-gitops │ ├── kustomization.yaml │ ├── release.yaml │ └── repository.yaml ├── kind └── kind-config.yaml └── scripts └── validate.sh /.github/workflows/test.yaml: -------------------------------------------------------------------------------- 1 | name: test 2 | 3 | on: 4 | pull_request: 5 | push: 6 | branches: 7 | - main 8 | 9 | permissions: 10 | contents: read 11 | 12 | jobs: 13 | validate: 14 | runs-on: ubuntu-latest 15 | steps: 16 | - name: Checkout 17 | uses: actions/checkout@v3 18 | - name: Set up Homebrew 19 | uses: Homebrew/actions/setup-homebrew@master 20 | - name: Install tools 21 | run: make tools 22 | - name: Run manifests validation 23 | run: make validate 24 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | bin/ 2 | Brewfile.lock.json 3 | -------------------------------------------------------------------------------- /.sourceignore: -------------------------------------------------------------------------------- 1 | # Flux ignore 2 | # https://fluxcd.io/flux/components/source/gitrepositories/#excluding-files 3 | 4 | # Exclude all 5 | /* 6 | 7 | # Include manifest directories 8 | !/apps/ 9 | !/clusters/ 10 | !/infrastructure/ 11 | -------------------------------------------------------------------------------- /Brewfile: -------------------------------------------------------------------------------- 1 | # Kubernetes 2 | brew "kubectl" 3 | brew "kind" 4 | 5 | # Kubernetes tools 6 | brew "yq" 7 | brew "jq" 8 | brew "kustomize" 9 | brew "kubeconform" 10 | 11 | # Flux 12 | tap "fluxcd/tap" 13 | brew "fluxcd/tap/flux" 14 | 15 | # Linkerd 16 | brew "linkerd" 17 | brew "step" 18 | -------------------------------------------------------------------------------- /DEMO.md: -------------------------------------------------------------------------------- 1 | # Real-World GitOps: Flux, Flagger, and Linkerd 2 | 3 | This is the documentation - and executable code! - for the Service Mesh 4 | Academy workshop on Flux, Flagger, and Linkerd. The easiest way to use this 5 | file is to execute it with [demosh]. 
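Before running it, it can be worth sanity-checking the layout the steps below assume: this `gitops-linkerd` clone pointing at your own fork, with your `faces-demo` clone sitting beside it as a sibling directory. A quick, read-only check — just a sketch, run from the root of this clone:

```bash
# Your fork should be the origin remote, and faces-demo should be a sibling clone.
git remote -v
ls ../faces-demo
```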
6 | 7 | Things in Markdown comments are safe to ignore when reading this later. When 8 | executing this with [demosh], things after the horizontal rule below (which 9 | is just before a commented `@SHOW` directive) will get displayed. 10 | 11 | [demosh]: https://github.com/BuoyantIO/demosh 12 | 13 | This workshop requires some fairly specific setup. 14 | 15 | - First, you need a running Kubernetes cluster that can support `LoadBalancer` 16 | services. 17 | 18 | - Second, you need to fork https://github.com/kflynn/gitops-linkerd and 19 | https://github.com/BuoyantIO/faces-demo under your own account. You also 20 | need to set GITHUB_USER to your GitHub username, and GITHUB_TOKEN to a 21 | personal access token under your account with `repo` scope. 22 | 23 | (Note that this token must be an old-style personal access token, not a new 24 | fine-grained access token. To create a token, go to your GitHub settings > 25 | `Developer settings` > `Personal access tokens`.) 26 | 27 | - Third, you need to clone your two forked repos side-by-side in the directory 28 | tree, so that "gitops-linkerd" and "faces-demo" are siblings. Both of these 29 | clones need to be in their `main` branch. 30 | 31 | - Finally, you need to edit `apps/faces/faces-sync.yaml` in your clone of 32 | `gitops-linkerd` to point to your fork of faces-demo -- change the `url` 33 | field on line 8 as appropriate. 34 | 35 | When you use `demosh` to run this file (from your `gitops-linkerd` clone), all 36 | of the above will be checked for you. 37 | 38 | 40 | 41 | 42 | 43 | 44 | 45 | ```bash 46 | BAT_STYLE="grid,numbers" 47 | ``` 48 | 49 | --- 50 | 51 | 52 | # Real-World GitOps: Flux, Flagger, and Linkerd 53 | 54 | Welcome to the Service Mesh Academy workshop on Flux, Flagger, Weave GitOps, 55 | and Linkerd. We're starting with an empty cluster, so the first task is to 56 | bootstrap Flux itself. Flux will be bootstrapping everything else. 57 | 58 | Note the `--owner` and `--repository` switches here: we are explicitly looking 59 | for the `${GITHUB_USER}/gitops-linkerd` repo here. 60 | 61 | ```bash 62 | flux bootstrap github \ 63 | --owner=${GITHUB_USER} \ 64 | --repository=gitops-linkerd \ 65 | --branch=main \ 66 | --path=./clusters/my-cluster \ 67 | --personal 68 | ``` 69 | 70 | At this point Flux will install its own in-cluster services, then continue 71 | with all the other components we've defined (Weave GitOps, Linkerd, Flagger, 72 | etc.). We can see what it's doing by looking at its Kustomization resources: 73 | 74 | ```bash 75 | flux get kustomizations 76 | ``` 77 | 78 | Bootstrapping everything can take a little while, of course, so there's a 79 | `--watch` switch that can be nice. We'll use that to keep an eye on what's 80 | going on, and make sure that everything is proceeding correctly: 81 | 82 | ```bash 83 | flux get kustomizations --watch 84 | ``` 85 | 86 | One thing to be aware of here: note that that last line says "Applied", not 87 | "applied and everything is ready". Let's also wait until our application is 88 | completely running: 89 | 90 | ```bash 91 | kubectl rollout status -n faces deployments 92 | watch kubectl get pods -n faces 93 | ``` 94 | 95 | 96 | 97 | # Dashboards 98 | 99 | At this point, our application is happily running. It's called Faces, and it's 100 | a single-page web app that... shows faces in a web browser, as we can see at 101 | http://127-0-0-1.faces.sslip.io/. 
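As an aside, these hostnames work because sslip.io resolves any name with an embedded IP address back to that address, so no DNS or `/etc/hosts` setup is needed. A quick way to see it, assuming the local-cluster setup where ingress listens on 127.0.0.1:

```bash
# The IP is encoded in the hostname itself; this should print 127.0.0.1.
dig +short 127-0-0-1.faces.sslip.io
```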
102 | 103 | 104 | 105 | We also have two dashboards that we can see in the browser: 106 | 107 | - The Linkerd Viz dashboard is at http://127-0-0-1.linkerd.sslip.io/ 108 | - The Weave GitOps dashboard is at http://127-0-0-1.wego.sslip.io/ 109 | 110 | Let's go check those out. These are not specific to our application; they 111 | provide insight into what's happening under the hood. 112 | 113 | 114 | 115 | # What's Under The Hood 116 | 117 | Now that we have things running, let's back up and look at _exactly_ how Flux 118 | pulls everything together. A good first step here is to look back at the `flux 119 | bootstrap` command we used to kick everything off: 120 | 121 | ``` 122 | flux bootstrap github \ 123 | --owner=${GITHUB_USER} \ 124 | --repository=gitops-linkerd \ 125 | --branch=main \ 126 | --path=./clusters/my-cluster \ 127 | --personal 128 | ``` 129 | 130 | The `path` argument there defines where Flux will look to get the definitions 131 | for how to set up the cluster. If we look there, we'll see two files: 132 | `apps.yaml` and `infrastructure.yaml`. (We also see `flux-system`, which is 133 | for configuration of the core Flux components themselves -- we're not going to 134 | look into this during this workshop.) 135 | 136 | ```bash 137 | #@echo 138 | #@notypeout 139 | #@nowaitbefore 140 | #@waitafter 141 | ls -l clusters/my-cluster 142 | ``` 143 | 144 | 145 | 146 | `infrastructure.yaml` is a good place to look first. Let's look at the first 147 | two documents in this file with `yq`. 148 | 149 | The first document defines a `Kustomization` resource called `cert-manager`. 150 | This Kustomization lives in the `flux-system` namespace, doesn't depend on 151 | anything, and has `kustomize` files at `infrastructure/cert-manager`: 152 | 153 | ```bash 154 | #@echo 155 | #@notypeout 156 | #@nowaitbefore 157 | #@waitafter 158 | yq 'select(document_index == 0)' clusters/my-cluster/infrastructure.yaml \ 159 | | bat -l yaml 160 | ``` 161 | 162 | 163 | 164 | The second document defines a Kustomization called `linkerd`, also in the 165 | `flux-system` namespace. It depends on `cert-manager`, and has `kustomize` 166 | files at `infrastructure/linkerd`. 167 | 168 | (That `dependsOn` element is worth an extra callout: it's an extremely 169 | powerful aspect of Flux that makes setting up complex applications really 170 | easy.) 171 | 172 | ```bash 173 | #@echo 174 | #@notypeout 175 | #@nowaitbefore 176 | #@waitafter 177 | yq 'select(document_index == 1)' clusters/my-cluster/infrastructure.yaml \ 178 | | bat -l yaml 179 | ``` 180 | 181 | 182 | 183 | Let's look quickly at `cert-manager`'s `kustomize` files: 184 | 185 | ```bash 186 | #@echo 187 | #@notypeout 188 | #@nowaitbefore 189 | #@waitafter 190 | ls -l ./infrastructure/cert-manager 191 | ``` 192 | 193 | The `kustomization.yaml` file tells `kustomize` what other files to use: 194 | 195 | ```bash 196 | #@echo 197 | #@notypeout 198 | #@nowaitbefore 199 | #@waitafter 200 | bat ./infrastructure/cert-manager/kustomization.yaml 201 | ``` 202 | 203 | 204 | 205 | If we look at those three files, the interesting thing to note with them is 206 | that they're just ordinary YAML. We're not using `kustomize`'s ability to 207 | patch things here; we're just using it to sequence applying some YAML -- and 208 | some of the YAML is for Flux resources rather than Kubernetes resources. 
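By the way, you don't have to trace these relationships purely by reading YAML: the `flux` CLI can print the tree of objects a Kustomization reconciles, which is a nice cross-check on what we're walking through here. A sketch, assuming a reasonably recent `flux` version:

```bash
# Print the objects reconciled from the root Kustomization, including the
# cert-manager, linkerd, flagger, and apps Kustomizations defined in this repo.
flux tree kustomization flux-system
```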
209 | 210 | `namespace.yaml` creates the `cert-manager` namespace: 211 | 212 | ```bash 213 | #@echo 214 | #@notypeout 215 | #@nowaitbefore 216 | #@waitafter 217 | bat ./infrastructure/cert-manager/namespace.yaml 218 | ``` 219 | 220 | `repository.yaml` tells Flux where to find the Helm chart for `cert-manager`: 221 | 222 | ```bash 223 | #@echo 224 | #@notypeout 225 | #@nowaitbefore 226 | #@waitafter 227 | bat ./infrastructure/cert-manager/repository.yaml 228 | ``` 229 | 230 | Finally, `release.yaml` tells Flux how to use the Helm chart to install 231 | `cert-manager`: 232 | 233 | ```bash 234 | #@echo 235 | #@notypeout 236 | #@nowaitbefore 237 | #@waitafter 238 | bat ./infrastructure/cert-manager/release.yaml 239 | ``` 240 | 241 | Again, we're not actually using `kustomize` to patch anything here: all we're 242 | doing is telling it to create some resources for us. 243 | 244 | 245 | 246 | We won't look at all the other components in `infrastructure`, but there's one 247 | important thing buried in `infrastructure/flagger`. Here's what that looks like: 248 | 249 | ```bash 250 | #@echo 251 | #@notypeout 252 | #@nowaitbefore 253 | #@waitafter 254 | ls -l ./infrastructure/flagger 255 | ``` 256 | 257 | This is basically the same pattern as we saw for `cert-manager`, and most of 258 | it is basically just setting up a standard Helm install of Flagger, following 259 | the docs at https://docs.flagger.app. 260 | 261 | However, Flagger requires us to define the set of _selector labels_ that it 262 | will pay attention to when it watches for rollouts, and by default, that set 263 | does not include `service`. We want to be able to use `service` when managing 264 | rollouts, so we've added it in `infrastructure/flagger/release.yaml`: 265 | 266 | ```bash 267 | bat ./infrastructure/flagger/release.yaml 268 | ``` 269 | 270 | 271 | 272 | So that's a quick look at some of the definitions for the infrastructure of 273 | this cluster -- basically, all the things our application needs to work. Now 274 | let's continue with a look at `apps.yaml`, which is the definition of the 275 | Faces application itself. There's just a single YAML document in this file: it 276 | defines a Kustomization named `apps`, still in `flux-system` namespace, that 277 | depends on both `flagger` and `ingress-nginx`, and has `kustomize` files in 278 | the `apps` directory: 279 | 280 | ```bash 281 | #@echo 282 | #@notypeout 283 | #@nowaitbefore 284 | #@waitafter 285 | bat ./clusters/my-cluster/apps.yaml 286 | ``` 287 | 288 | 289 | 290 | Looking at that `apps` directory, there's a single directory for `faces`, 291 | which in turn has various files: 292 | 293 | ```bash 294 | #@echo 295 | #@notypeout 296 | #@nowaitbefore 297 | ls -l apps 298 | #@echo 299 | #@notypeout 300 | #@nowaitbefore 301 | #@waitafter 302 | ls -l apps/faces 303 | ``` 304 | 305 | 306 | 307 | Again, let's start with `kustomization.yaml`. 308 | 309 | ```bash 310 | #@echo 311 | #@notypeout 312 | #@nowaitbefore 313 | bat ./apps/faces/kustomization.yaml 314 | ``` 315 | 316 | `kustomizations.yaml` tells `kustomize` to pull in several other files -- and, 317 | importantly, sets the namespace for all the other things it's pulling from 318 | those files to be `faces`. (In fact, if it pulls in a file that sets a 319 | different namespace, the namespace in `kustomizations.yaml` will override what 320 | the file says.) 321 | 322 | The choice of namespace is mostly a matter of convention: typically, 323 | infrastructure things live in `flux-system` and application things don't. 
324 | However, the namespace will also come up again when we talk about 325 | reconciliation. 326 | 327 | 328 | 329 | Let's take a look at those files one at a time. 330 | 331 | First, we have `namespace.yaml`. This one is straightforward: it defines the 332 | `faces` namespace that we'll use for everything here, and it tells Linkerd to 333 | inject any Pods appearing in this namespace. 334 | 335 | ```bash 336 | #@echo 337 | #@notypeout 338 | #@nowaitbefore 339 | #@waitafter 340 | bat ./apps/faces/namespace.yaml 341 | ``` 342 | 343 | 344 | 345 | Next up is `faces-sync.yaml`. This is... _less_ straightforward. It has 346 | several documents in it, so let's look at those one by one. 347 | 348 | The first document defines the Git repository that's the source of truth for 349 | the Faces resources. We're using the `${GITHUB_USER}/faces-demo` repo, looking 350 | at the `main` branch, and we're ignoring everything except for a few files in 351 | `k8s/01-base` -- and, remember, per `kustomization.yaml` this GitRepository 352 | resource will be created in the `faces` namespace. 353 | 354 | ```bash 355 | #@echo 356 | #@notypeout 357 | #@nowaitbefore 358 | #@waitafter 359 | yq 'select(document_index == 0)' apps/faces/faces-sync.yaml | bat -l yaml 360 | ``` 361 | 362 | Let's take a quick look at that GitHub repo (and branch) in the 363 | browser: 364 | 365 | 366 | 367 | So, basically, we're pulling in the minimal set of files to actually deploy 368 | the Faces app. Let's continue with the _second_ document in `faces-sync.yaml`, 369 | which defines yet another Kustomization resource -- this one is named `faces`, 370 | and lives in the `faces` namespace rather than the `flux-system` namespace. 371 | 372 | The `faces` Kustomization includes patches to the minimal deployment files for 373 | the Faces app -- specifically, we're going to force the `ERROR_FRACTION` 374 | environment variable to "0" in all our Deployments. (We have to use two 375 | separate patches for this because the Deployments for the backend workloads 376 | already include some environment variables, and the `faces-gui` Deployment 377 | does not.) 378 | 379 | ```bash 380 | #@echo 381 | #@notypeout 382 | yq 'select(document_index == 1)' apps/faces/faces-sync.yaml | bat -l yaml 383 | ``` 384 | 385 | The reason we care about the `ERROR_FRACTION` is that it's the environment 386 | variable that controls how often errors get injected into the Faces services 387 | -- for the moment, we want these services _not_ to fail artificially, so that 388 | we get to see how everything is supposed to look before we start breaking 389 | things! 390 | 391 | 392 | 393 | Next up is `faces-ingress.yaml`. Unsurprisingly, this file defines an Ingress 394 | resource that we'll use to route to the Faces services. It's fairly 395 | straightforward, if a little long: 396 | 397 | ```bash 398 | #@echo 399 | #@notypeout 400 | bat apps/faces/faces-ingress.yaml 401 | ``` 402 | 403 | 404 | 405 | After the Ingress definition, `faces-canary.yaml` tells Flagger how to do 406 | canary rollouts of the `face`, `smiley`, and `color` services in the Faces 407 | app. All three are basically the same, so we'll just look at the one for the 408 | `face` service. The important things here are that, when a new deployment 409 | happens: 410 | 411 | 1. We're going to have Flagger ramp traffic in 5% increments every 10 seconds 412 | until it reaches 50%, then the next step will be to fully cut over; and 413 | 2. We're going to demand a 70% success rate to keep going. 
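Spelled out, those two points come straight from the Canary's `analysis` stanza — reproduced here with comments, with the real file displayed by the `yq` command just below. At `stepWeight: 5` every `10s`, a healthy canary reaches the 50% `maxWeight` in roughly ten steps, while three failed metric checks trigger a rollback:

```yaml
analysis:
  interval: 10s        # evaluate the canary every 10 seconds
  threshold: 3         # roll back after 3 failed checks
  maxWeight: 50        # stop ramping at 50%, then promote fully
  stepWeight: 5        # shift traffic in 5% increments
  metrics:
    - name: request-success-rate
      thresholdRange:
        min: 70        # require at least a 70% success rate
      interval: 1m
```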
414 | 415 | ```bash 416 | #@echo 417 | #@notypeout 418 | yq 'select(document_index == 0)' apps/faces/faces-canary.yaml | bat -l yaml 419 | ``` 420 | 421 | Remember that this applies to the services _behind_ the Faces GUI only! For 422 | the GUI itself, we'll do A/B testing. 423 | 424 | 425 | 426 | The A/B test is defined by our last file, `faces-abtest.yaml`. The important 427 | things to notice in here are: 428 | 429 | 1. We're only working with the `faces-gui` service here. 430 | 2. We also have to know which Ingress to mess with, because Flagger needs to 431 | modify the Ingress to route only specific traffic to the deployment we're 432 | testing! 433 | 3. The `analysis.match` clause says that the new deployment will _only_ get 434 | traffic where the `X-Faces-Header` has the value `testuser`. 435 | 4. Finally, we're again going to demand a 70% success rate to keep going. 436 | 437 | ```bash 438 | #@echo 439 | #@notypeout 440 | bat apps/faces/faces-abtest.yaml 441 | ``` 442 | 443 | 444 | 445 | ## Putting it All Together 446 | 447 | Taking all those files together, there's a lot going on under the hood, but 448 | rather little that the app developers need to worry about: 449 | 450 | - We've defined how all the infrastructure fits together. 451 | - We've defined how our application gets deployed. 452 | - We've defined how to roll out new versions of the Faces GUI using A/B 453 | testing. 454 | - We've defined how to roll out new versions of the underlying Faces services 455 | using canaries. 456 | 457 | Most importantly: now that this is set up, application developers generally 458 | won't have to think much about it. 459 | 460 | Finally, one last reminder: don't forget that that `faces` Kustomization lives 461 | in the `faces` namespace. 462 | 463 | 464 | 465 | ## A Failed Rollout 466 | 467 | OK! Let's actually roll out a new version and watch what happens. We're going 468 | to start by trying to roll out to a _failing_ version. This is bit more 469 | instructive than starting with a successful rollout, because you'll get to see 470 | how Flagger responds when things go wrong. 471 | 472 | To get the ball rolling, edit `faces-sync.yaml` to set the `ERROR_FRACTION` to 473 | 75% for all the backend Faces services. (This is on line 35 -- don't mess with 474 | the `ERROR_FRACTION` for the `faces-gui`.) 475 | 476 | ```bash 477 | ${EDITOR} apps/faces/faces-sync.yaml 478 | ``` 479 | 480 | Let's double-check the changes... 481 | 482 | ```bash 483 | git diff --color | more -r 484 | ``` 485 | 486 | Once convinced that they're OK, commit and push the change. 487 | 488 | ```bash 489 | git add apps/faces/faces-sync.yaml 490 | git commit -m "Force ERROR_FRACTION for backend Faces services to 75%" 491 | git push 492 | ``` 493 | 494 | 495 | 496 | At this point, we could just wait for Flux to notice our change. This is how 497 | we'd do things in production: commit, and let the system work. 498 | 499 | However, right now Flux is set up to check for changes every 10 minutes, and 500 | that's just too long to wait for this demo. Instead, we'll manually tell Flux 501 | to reconcile any changes to the `apps` Kustomization immediately 502 | 503 | **Many things will happen at this point:** 504 | 505 | - Flagger will start updating Canary resources in the `faces` namespace. We'll 506 | be able to watch these using the command line in this window. 507 | 508 | - Flagger will tell Linkerd to start routing traffic to the new Deployments. 509 | We'll be able to watch this in the Linkerd dashboard. 
510 | 511 | - The Faces GUI will start seeing responses from the new (bad) Deployments. 512 | We'll be able to see this in the Faces GUI: not all the faces will be 513 | smileys, and not all the cell backgrounds will be green. 514 | 515 | 516 | 517 | So. Off we go. Let's trigger the rollout by explicitly telling Flux to 518 | reconcile the `apps` Kustomization. 519 | 520 | **Note**: When we trigger the rollout, we'll get an error message about 521 | Flagger not being ready. This isn't a problem, it's just an artifact of the 522 | manual trigger. 523 | 524 | ```bash 525 | flux reconcile ks apps --with-source 526 | ``` 527 | 528 | Then, we'll start watching the Canary resources here that Flagger should be 529 | updating, using the command line. We'll also watch the faces GUI itself, and 530 | the Linkerd Viz dashboard, so we really have three separate views of the world 531 | here. 532 | 533 | ```bash 534 | kubectl get -n faces canaries -w 535 | ``` 536 | 537 | 538 | 539 | 540 | At this point, we've seen a rollout fail. Let's repeat that, but this time set 541 | things up for success. Once again, we'll start with editing `faces-sync.yaml`: 542 | this time, we'll set `ERROR_FRACTION` to "20" so that we should see some 543 | errors but still succeed with the rollout (since we need at least 70% success 544 | to continue: an error fraction of 20% should give us an 80% success rate). 545 | 546 | ```bash 547 | ${EDITOR} apps/faces/faces-sync.yaml 548 | ``` 549 | 550 | Again, we'll doublecheck the changes... 551 | 552 | ```bash 553 | git diff --color | more -r 554 | ``` 555 | 556 | ...then commit, push, and trigger reconciliation. 557 | 558 | ```bash 559 | git add apps/faces/faces-sync.yaml 560 | git commit -m "Force ERROR_FRACTION for backend Faces services to 20%" 561 | git push 562 | ``` 563 | 564 | This time, we'll trigger the reconciliation from the Weave GitOps GUI. 565 | 566 | 567 | 568 | Once again, we can watch the rollout progress with the CLI. 569 | 570 | ```bash 571 | kubectl get -n faces canaries -w 572 | ``` 573 | 574 | 575 | 576 | So, this time around the rollout actually succeeded, since we passed the 577 | minimum success threshold. 578 | 579 | 580 | 581 | ## The Source Repo 582 | 583 | When we walked through the Flux setup in the first place, you might remember 584 | this bit of setup from the first document in `faces-sync.yaml`: 585 | 586 | ```bash 587 | #@echo 588 | #@notypeout 589 | #@nowaitbefore 590 | #@waitafter 591 | yq 'select(document_index == 0)' apps/faces/faces-sync.yaml | bat -l yaml 592 | ``` 593 | 594 | As we discussed earlier, we're pulling our initial deployments from the 595 | `${GITHUB_USER}/faces-demo` repo, then applying patches from the 596 | `${GITHUB_USER}/gitops-linkerd` repo to those resources. This means that we 597 | can also use Flux and Flagger to handle changes _to the initial deployment 598 | resources_ in `${GITHUB_USER}/faces-demo` -- the tools aren't limited to 599 | managing changes to the patches. 600 | 601 | 602 | 603 | Let's demo this by editing the base `smiley` service definition to show a 604 | different smiley. We could do this with a `kustomize` patch, but it's much 605 | easier this way. 606 | 607 | Since the `${GITHUB_USER}/face-demo` repo is cloned into a sibling directory 608 | of `${GITHUB_USER}/gitops-linkerd`, we can easily edit the base definition and 609 | push it up to GitHub. 
Let's change the kind of smiley that the `smiley` 610 | service will send back: to do this, add a environment variable to the `smiley` 611 | Deployment's Pod template, for example `SMILEY=HeartEyes` (case matters on 612 | both sides of the `=`!). 613 | 614 | ```bash 615 | ${EDITOR} ../faces-demo/k8s/01-base/faces.yaml 616 | ``` 617 | 618 | We'll doublecheck the changes as before -- the only change here is the `-C` 619 | argument to `git`, to point it to the correct clone. 620 | 621 | ```bash 622 | git -C ../faces-demo diff --color | more -r 623 | ``` 624 | 625 | Then, we'll use the same `git -C` trick to commit and push our change. 626 | 627 | ```bash 628 | git -C ../faces-demo add k8s/01-base/faces.yaml 629 | git -C ../faces-demo commit -m "Switch to HeartEyes smileys" 630 | git -C ../faces-demo push 631 | ``` 632 | 633 | 634 | 635 | After that, again, we could just wait for the reconciliation to happen on its 636 | own, but that would take longer than we'd like, so we'll kick it off by hand 637 | by telling Flux to reconcile the `faces` Kustomization. 638 | 639 | You might recall that last time we told Flux to reconcile the `apps` 640 | Kustomization. That won't work this time: when manually triggering 641 | reconciliation, Flux just looks at the one Kustomization you tell it to, 642 | rather than recursing into Kustomizations it creates. So, this time, we tell 643 | it to look in the `faces` namespace for the `faces` Kustomization: 644 | 645 | ```bash 646 | flux reconcile ks faces -n faces --with-source 647 | ``` 648 | 649 | Once that happens, watching the rollout progress with the CLI is the same as 650 | before: 651 | 652 | ```bash 653 | kubectl get -n faces canaries -w 654 | ``` 655 | 656 | 657 | 658 | So now we have all `HeartEyes` smileys, since our rollout succeeded. 659 | 660 | 661 | 662 | ## A/B Deployments 663 | 664 | Now let's take a look at the GUI. We want to roll out a simple change to have 665 | the background be light cyan -- however! We don't want random users to get 666 | this (as a canary would cause), we want our to test users to see it first. 667 | This is a perfect use case for A/B testing. 668 | 669 | As we reviewed earlier, this is set up with the `faces-abtest.yaml`. The 670 | critical point we want to review here is the `match` specification which 671 | determines how we route traffic: 672 | 673 | ```bash 674 | #@echo 675 | #@notypeout 676 | bat apps/faces/faces-abtest.yaml 677 | ``` 678 | 679 | As this is defined, we key the A/B routing on the `X-Faces-User` header: if 680 | this header has a value of "testuser", we'll route the request to the incoming 681 | workload. 682 | 683 | 684 | 685 | So, let's go ahead and modify `faces-sync.yaml` to set the new background 686 | color, by adding the `COLOR` environment variable to the `faces-gui`. Note 687 | that we set it unconditionally: we're going to trust the canary to do the 688 | right thing here. 689 | 690 | ```bash 691 | ${EDITOR} apps/faces/faces-sync.yaml 692 | ``` 693 | 694 | Again, we'll doublecheck the changes... 695 | 696 | ```bash 697 | git diff --color | more -r 698 | ``` 699 | 700 | ...then commit, push, and trigger reconciliation. 
701 | 702 | ```bash 703 | git add apps/faces/faces-sync.yaml 704 | git commit -m "Force the GUI background to light cyan" 705 | git push 706 | ``` 707 | 708 | This time we're back to reconciling the `apps` Kustomization: 709 | 710 | ```bash 711 | flux reconcile ks apps --with-source 712 | ``` 713 | 714 | 715 | 716 | Even though this is an A/B rollout, we still watch it by watching Canaries 717 | resources – nice for consistency even if it's somewhat odd. While this is 718 | going on, too, we can flip between a browser sending no `X-Faces-User` header, 719 | and a browser sending `X-Faces-User: testuser`, and we'll see the difference. 720 | 721 | ```bash 722 | kubectl get -n faces canaries -w 723 | ``` 724 | 725 | Now that the rollout is finished, we'll see the new background color from both 726 | browsers. 727 | 728 | 729 | 730 | ## Error Handling 731 | 732 | It wouldn't be a talk about real-world usage if we didn't talk at least a bit 733 | about things going wrong, right? Specifically, when things go wrong, what can 734 | you do? 735 | 736 | ### Get Events 737 | 738 | When Flux and Flagger are working, they post Kubernetes events describing 739 | important things that are happening. This is a great go-to to figure out 740 | what's going on when things are misbehaving. 741 | 742 | Note that you may need to look at events in various namespaces. 743 | 744 | 745 | 746 | ### Describe Pods 747 | 748 | If you have a Pod that doesn't seem to be starting, `kubectl describe pod` can 749 | help you see what's going on. (Partly this is because describing the Pod will 750 | show you events for that Pod -- it can be a quicker way to get a 751 | narrowly-focused view of events, though.) 752 | 753 | 754 | 755 | ### Get Controller Logs 756 | 757 | As a bit of a last resort, there are firehoses of logs from the Flux and 758 | Flagger controllers: 759 | 760 | - The `flagger` Deployment in the `flagger-system` namespace is the place to 761 | start if you want a really detailed look at what Flagger is doing. 762 | 763 | - There are several controllers in the `flux-system` namespace too -- for 764 | example, the `kustomize-controller` is doing most of the work in this demo. 765 | 766 | In all cases, the logs have an _enormous_ amount of information, which can 767 | make them hard to sift through -- but sometimes they have the critical bit 768 | that you need to see what's going on. 769 | 770 | 771 | 772 | ## Real-World GitOps: Flux, Flagger, and Linkerd 773 | 774 | And there you have it! You can find the source for this workshop at 775 | 776 | https://github.com/BuoyantIO/gitops-linkerd 777 | 778 | and, as always, we welcome feedback. Join us at https://slack.linkerd.io/ for 779 | more. 780 | 781 | 782 | 783 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. 
For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 
134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 
193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | # Flux local dev environment with Docker and Kubernetes KIND 2 | # Requirements: 3 | # - Docker 4 | # - Homebrew 5 | 6 | .PHONY: tools 7 | tools: ## Install Kubernetes kind, kubectl, FLux CLI and other tools with Homebrew 8 | brew bundle 9 | 10 | .PHONY: validate 11 | validate: ## Validate the Kubernetes manifests (including Flux custom resources) 12 | scripts/validate.sh 13 | 14 | .PHONY: help 15 | help: ## Display this help menu 16 | @awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-20s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST) 17 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # gitops-linkerd 2 | 3 | [![test](https://github.com/stefanprodan/gitops-linkerd/workflows/test/badge.svg)](https://github.com/stefanprodan/gitops-linkerd/actions) 4 | [![license](https://img.shields.io/github/license/stefanprodan/gitops-linkerd.svg)](https://github.com/stefanprodan/gitops-linkerd/blob/main/LICENSE) 5 | 6 | Progressive Delivery workshop with [Linkerd](https://github.com/linkerd/linkerd2), 7 | [Flagger](https://github.com/fluxcd/flagger), [Flux](https://github.com/fluxcd/flux) 8 | and [Weave GitOps](https://github.com/weaveworks/weave-gitops). 9 | 10 | ![flux-ui](docs/screens/wego-apps.png) 11 | 12 | ## THE DEMO 13 | 14 | See [DEMO.md] for the demo as presented during the "Real-World GitOps" Service 15 | Mesh Academy. You'll need a running (empty) cluster that can support 16 | `LoadBalancer` services, and you'll need `yq`, `bat`, `kubectl`, and `flux`. 17 | The easiest way to get the commands is to run `brew bundle`; the easiest way 18 | to run the demo is with [demosh](https://github.com/BuoyantIO/demosh). 19 | 20 | ## Introduction 21 | 22 | ### What is GitOps? 23 | 24 | GitOps is a way to do Continuous Delivery, it works by using Git as a source of truth 25 | for declarative infrastructure and workloads. 26 | For Kubernetes this means using `git push` instead of `kubectl apply/delete` or `helm install/upgrade`. 27 | 28 | In this workshop you'll be using GitHub to host the config repository and [Flux](https://fluxcd.io) 29 | as the GitOps delivery solution. 30 | 31 | ### What is Progressive Delivery? 32 | 33 | Progressive delivery is an umbrella term for advanced deployment patterns like canaries, feature flags and A/B testing. 34 | Progressive delivery techniques are used to reduce the risk of introducing a new software version in production 35 | by giving app developers and SRE teams a fine-grained control over the blast radius. 
36 | 37 | In this workshop you'll be using [Flagger](https://flagger.app), [Linkerd](https://github.com/linkerd/linkerd2) and 38 | Prometheus to automate Canary Releases and A/B Testing for your applications. 39 | 40 | ## Prerequisites 41 | 42 | For this workshop you will need a GitHub account and a Kubernetes cluster version 1.21 43 | or newer with **Load Balancer** support. 44 | 45 | Steps for using a local Kind cluster are [included](#create-kind-cluster) in this document. 46 | 47 | In order to follow the guide you'll need a GitHub account and a 48 | [personal access token](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line) 49 | that can create repositories (check all permissions under `repo`). 50 | 51 | ### Fork the repository 52 | 53 | Start by forking the [gitops-linkerd](https://github.com/rparmer/gitops-linkerd) 54 | repository on your own GitHub account. 55 | Then generate a GitHub 56 | [personal access token](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line) 57 | that can create repositories (check all permissions under `repo`), 58 | and export your GitHub token, username and repo name as environment variables: 59 | 60 | ```sh 61 | export GITHUB_TOKEN= 62 | export GITHUB_USER= 63 | export GITHUB_REPO="gitops-linkerd" 64 | ``` 65 | 66 | Next clone your repository locally with: 67 | 68 | ```shell 69 | git clone https://github.com/${GITHUB_USER}/${GITHUB_REPO}.git 70 | cd ${GITHUB_REPO} 71 | ``` 72 | 73 | ### Install CLI tools 74 | 75 | Install flux, kubectl, linkerd, step and other CLI tools with Homebrew: 76 | 77 | ```shell 78 | brew bundle 79 | ``` 80 | 81 | The complete list of tools can be found in the `Brewfile`. 82 | 83 | ## Create Kind cluster (optional) 84 | 85 | Everything in this demo can be ran locally in a [Kind](https://kind.sigs.k8s.io/) cluster and a configuration file is included will automatically expose the nginx controller to the localhost. This will prevent the need for port-forwarding later on. To create a new cluster run: 86 | 87 | ```shell 88 | kind create cluster --name gitops-linkerd --config kind/kind-config.yaml 89 | ``` 90 | 91 | ## Cluster bootstrap 92 | 93 | With the `flux bootstrap` command you can install Flux on a Kubernetes cluster and configure 94 | it to manage itself from a Git repository. If the Flux components are present on the cluster, 95 | the bootstrap command will perform an upgrade if needed. 
96 | 
 97 | ```shell
 98 | flux bootstrap github \
 99 |   --owner=${GITHUB_USER} \
 100 |   --repository=${GITHUB_REPO} \
 101 |   --branch=main \
 102 |   --path=./clusters/my-cluster \
 103 |   --personal
 104 | ```
 105 | 
 106 | When Flux has access to your repository it will do the following:
 107 | 
 108 | * installs the Flux UI (Weave GitOps OSS)
 109 | * installs cert-manager and generates the Linkerd trust anchor certificate
 110 | * installs Linkerd using the `linkerd-crds`, `linkerd-control-plane`, `linkerd-viz` and `linkerd-smi` Helm charts
 111 | * waits for the Linkerd control plane to be ready
 112 | * installs the Kubernetes NGINX ingress in the `ingress-nginx` namespace
 113 | * installs Flagger and configures its load testing service inside the `flagger-system` namespace
 114 | * waits for NGINX and Flagger to be ready
 115 | * creates the faces deployments and configures them for progressive traffic shifting
 116 | * creates the faces-gui deployment and configures it for A/B testing
 117 | 
 118 | ![flux-ui](docs/screens/wego-deps.png)
 119 | 
 120 | Watch Flux installing Linkerd first, then the demo apps:
 121 | 
 122 | ```bash
 123 | flux get kustomizations --watch
 124 | ```
 125 | 
 126 | When bootstrapping a cluster with Linkerd, it is important to control the installation order.
 127 | For the application pods to be injected with the Linkerd proxy,
 128 | the Linkerd control plane must be up and running before the apps.
 129 | For the ingress controller to forward traffic to the apps, NGINX must be injected with the Linkerd sidecar.
 130 | 
 131 | ## Access the dashboards
 132 | 
 133 | All of the dashboards are exposed via Ingresses and are accessible via various `sslip.io` URLs. You will need the external IP address of the `ingress-nginx-controller` service. To find it, run:
 134 | 
 135 | ```sh
 136 | kubectl -n ingress-nginx get svc ingress-nginx-controller
 137 | ```
 138 | 
 139 | If you are using kind and created the cluster with the provided kind-config.yaml file, you can use the local `127.0.0.1` IP.
 140 | 
 141 | If no external IP is listed and you are not using the provided kind config, you will need to start port forwarding and use the local `127.0.0.1` IP:
 142 | 
 143 | ```sh
 144 | kubectl -n ingress-nginx port-forward svc/ingress-nginx-controller 8080:80 &
 145 | ```
 146 | > NOTE: You will need to add `8080` (or the specified forwarding port) to all dashboard URLs
 147 | 
 148 | ### Dashboard links
 149 | 
 150 | | Dashboard | Url | Credentials |
 151 | | --------- | --- | ----------- |
 152 | | Weave GitOps (Flux) | http://127-0-0-1.wego.sslip.io | username: `admin` password: `flux` |
 153 | | Linkerd | http://127-0-0-1.linkerd.sslip.io | n/a |
 154 | | Faces | http://127-0-0-1.faces.sslip.io | n/a |
 155 | 
 156 | > Replace `127-0-0-1` with the IP address of the ingress controller if necessary, and replace each `.` with `-` (i.e. change `127.0.0.1` to `127-0-0-1`)
 157 | 
 158 | Sample Flux UI
 159 | ![flux-ui](docs/screens/wego-linkerd.png)
 160 | 
 161 | Sample Linkerd UI
 162 | ![linkerd-ui](docs/screens/linkerd-metrics.png)
--------------------------------------------------------------------------------
/WORKSHOP.md:
--------------------------------------------------------------------------------
1 | # Gitops Linkerd Workshop
2 | This contains the steps required to complete the Flagger/Linkerd workshop. Please make sure to review the [README](README.md) doc before starting this workshop.
3 | 
4 | ## Getting started
5 | This workshop will involve a lot of context switching.
It is recommended to use multiple browsers and terminals to help view changes as they progress through the various stages. Installing a browser extension such as [ModHeader](https://modheader.com/) is also recommended. This will allow us to modify request headers in the browser and view canary rollout progress in real time.
 6 | 
 7 | ### Verify deployments
 8 | The first thing we need to do is verify that all the provided deployments are working correctly and accessible. Navigate to each of the deployment links below and verify they all load correctly:
 9 | 
 10 | Weave GitOps - http://127-0-0-1.wego.sslip.io (username: `admin` password: `flux`)
 11 | 
 12 | Linkerd - http://127-0-0-1.linkerd.sslip.io
 13 | 
 14 | Faces - http://127-0-0-1.faces.sslip.io
 15 | 
 16 | The Faces deployment should be showing all green smiles that fade in and out.
 17 | ![faces](docs/screens/faces.png)
 18 | 
 19 | ## Workshop
 20 | ### Create failing canary release
 21 | Rather than showing off a successful deployment first, let's start off by showing a failed one.
 22 | 
 23 | Update the ERROR_FRACTION value on line #36 in the [faces-sync.yaml](apps/faces/faces-sync.yaml) file to be "75":
 24 | ```yaml
 25 |     - patch: |
 26 |         - op: replace
 27 |           path: /spec/template/spec/containers/0/env/1
 28 |           value:
 29 |             name: ERROR_FRACTION
 30 |             value: "75"
 31 |       target:
 32 |         kind: Deployment
 33 | ```
 34 | 
 35 | Commit, push, and reconcile the change to trigger a new rollout:
 36 | ```shell
 37 | git add apps/faces/faces-sync.yaml
 38 | git commit -m "Set ERROR_FRACTION to 75" && git push
 39 | flux reconcile ks apps --with-source
 40 | ```
 41 | 
 42 | Don't worry about the "resource not ready" error. That is due to the Flux dependsOn config and is normal when manually triggering a reconcile.
 43 | > We are only manually reconciling for demo purposes. You do not need to do this in a production setting.
 44 | 
 45 | Watch the canary progression:
 46 | ```shell
 47 | kubectl -n faces get canaries -w
 48 | ```
 49 | 
 50 | You should see the `color`, `face`, and `smiley` canaries start progressing. In the Linkerd dashboard you should see traffic being routed to the new deployments. In the Faces dashboard you should start seeing error faces. The canaries should progress for a few iterations, then the progression should pause, and then the rollout should fail. The Faces dashboard should return to all green faces and Linkerd should stop routing traffic to the new deployments.
 51 | 
 52 | The rollout failed because in our [faces-canary.yaml](/apps/faces/faces-canary.yaml) file we configured the minimum acceptable success rate to be 70%:
 53 | ```yaml
 54 |     metrics:
 55 |     - name: request-success-rate
 56 |       thresholdRange:
 57 |         min: 70
 58 |       interval: 1m
 59 | ```
 60 | 
 61 | ### Create successful canary release
 62 | Let's update the ERROR_FRACTION to a lower level that is within the canary's acceptable range. Update the ERROR_FRACTION value on line #36 in the [faces-sync.yaml](apps/faces/faces-sync.yaml) file to be "15":
 63 | ```yaml
 64 |     - patch: |
 65 |         - op: replace
 66 |           path: /spec/template/spec/containers/0/env/1
 67 |           value:
 68 |             name: ERROR_FRACTION
 69 |             value: "15"
 70 |       target:
 71 |         kind: Deployment
 72 | ```
 73 | 
 74 | Repeat the commit, push, and reconcile steps from earlier and watch the canary progression again.
 75 | 
 76 | You should see the same routing and dashboard changes again, but this time the progression should not pause. The `weight` should continue until it reaches 50 (our configured `maxWeight` in the canary definition) before the rollout finishes and all traffic is routed to the new deployment.
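If a rollout ever seems stuck between these steps, the Canary's status and events usually explain why. A couple of read-only commands worth keeping handy — the names assume the `faces` namespace and the canaries used above:

```shell
# Detailed status, analysis results, and recent events for one canary
kubectl -n faces describe canary face

# Recent events in the namespace, oldest first
kubectl -n faces get events --sort-by=.lastTimestamp | tail -n 15
```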
77 | 
 78 | ### Create successful AB release
 79 | Now let's create a new release for the Faces UI. Before making any changes, let's prepare a few browsers so we can view the UI updates in real time. Open two browser instances. They can be different browsers or the same browser, as long as you can modify the request headers in both. If using Chrome, create the two instances using different profiles; otherwise changes to the request headers will impact both instances. In one browser, navigate to the Faces dashboard and, using ModHeader (or equivalent), add the request header `x-faces-user: foo` and reload the page. Repeat this process in the other browser, but this time add a second header value of `x-canary: insider`. You should now see a light gray background, and the user should say `foo` instead of unknown in both browsers.
 80 | 
 81 | ![faces-foo](/docs/screens/faces-foo.png)
 82 | 
 83 | With everything set up the way we want, let's update the background color for foo. Update the COLOR_foo value on line #46 in the [faces-sync.yaml](apps/faces/faces-sync.yaml) file to be `lightblue`:
 84 | ```yaml
 85 |     - patch: |
 86 |         - op: add
 87 |           path: /spec/template/spec/containers/0/env
 88 |           value:
 89 |             - name: ERROR_FRACTION
 90 |               value: "0"
 91 |             - name: COLOR_foo
 92 |               value: lightblue
 93 |       target:
 94 |         kind: Deployment
 95 |         name: faces-gui
 96 | ```
 97 | 
 98 | Commit, push, reconcile, and watch the canaries just like you did earlier. This time you should see the `faces-gui` canary start progressing. Once it starts progressing, refresh both browsers. The browser with the x-canary header set should display the new background; the other browser should still show the light gray one. Let the canary rollout finish. You'll notice that the weight value remains at 0 the entire time. This is because in an A/B test there is no traffic to gradually shift to: you use header values (or cookies, user agents, etc.) to determine who sees which version. With the rollout finished, refresh both browsers again. They should both have the new light blue background.
 99 | 
 100 | ### Create failing AB release
 101 | Now let's force an update to fail. Back in the [faces-sync.yaml](apps/faces/faces-sync.yaml) file, update COLOR_foo to be `lightpink` and update ERROR_FRACTION to be "75":
 102 | ```yaml
 103 |     - patch: |
 104 |         - op: add
 105 |           path: /spec/template/spec/containers/0/env
 106 |           value:
 107 |             - name: ERROR_FRACTION
 108 |               value: "75"
 109 |             - name: COLOR_foo
 110 |               value: lightpink
 111 |       target:
 112 |         kind: Deployment
 113 |         name: faces-gui
 114 | ```
 115 | 
 116 | Commit, push, reconcile, and watch the canaries the same way as before. Once the `faces-gui` canary starts progressing, refresh both browsers. The browser with the x-canary header set should display the new background; the other browser should still show the old one. This time, keep refreshing the browser with the x-canary header set. You should see random errors returned. Let the canary rollout finish. The rollout should fail, and both browsers should still have the old light blue background.
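One last tip: Flagger serves live traffic from a `-primary` copy of each Deployment, so after a rollout finishes — or gets rolled back — you can confirm exactly which settings made it into production. A sketch, using the `faces-gui` example above:

```shell
# Show the env vars Flagger promoted into the primary deployment
kubectl -n faces get deploy faces-gui-primary \
  -o jsonpath='{.spec.template.spec.containers[0].env}'
```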
117 | -------------------------------------------------------------------------------- /apps/faces/faces-abtest.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: flagger.app/v1beta1 2 | kind: Canary 3 | metadata: 4 | name: faces-gui 5 | spec: 6 | provider: nginx 7 | targetRef: 8 | apiVersion: apps/v1 9 | kind: Deployment 10 | name: faces-gui 11 | ingressRef: 12 | apiVersion: networking.k8s.io/v1 13 | kind: Ingress 14 | name: faces-gui 15 | progressDeadlineSeconds: 60 16 | service: 17 | port: 80 18 | targetPort: http 19 | analysis: 20 | interval: 10s 21 | threshold: 3 22 | iterations: 10 23 | # A/B test routing 24 | match: 25 | - headers: 26 | x-faces-user: 27 | exact: "testuser" 28 | metrics: 29 | - name: request-success-rate 30 | thresholdRange: 31 | min: 70 32 | interval: 1m 33 | webhooks: 34 | - name: load-test 35 | type: rollout 36 | url: http://flagger-loadtester.flagger-system/ 37 | metadata: 38 | cmd: "hey -z 2m -q 10 -c 2 -H 'x-faces-user: testuser' -host faces-gui.faces.sslip.io http://ingress-nginx-controller.ingress-nginx" 39 | - name: load-test-primary 40 | type: rollout 41 | url: http://flagger-loadtester.flagger-system/ 42 | metadata: 43 | cmd: "hey -z 2m -q 10 -c 2 -H 'x-faces-user: normaluser' -host faces-gui.faces.sslip.io http://ingress-nginx-controller.ingress-nginx" 44 | -------------------------------------------------------------------------------- /apps/faces/faces-canary.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: flagger.app/v1beta1 2 | kind: Canary 3 | metadata: 4 | name: face 5 | spec: 6 | provider: linkerd 7 | targetRef: 8 | apiVersion: apps/v1 9 | kind: Deployment 10 | name: face 11 | progressDeadlineSeconds: 60 12 | service: 13 | port: 80 14 | targetPort: http 15 | analysis: 16 | interval: 10s 17 | threshold: 3 18 | maxWeight: 50 19 | stepWeight: 5 20 | metrics: 21 | - name: request-success-rate 22 | thresholdRange: 23 | min: 70 24 | interval: 1m 25 | webhooks: 26 | - name: load-test 27 | type: rollout 28 | url: http://flagger-loadtester.flagger-system/ 29 | metadata: 30 | cmd: "hey -z 2m -q 10 -c 2 http://face-canary.faces/" 31 | --- 32 | apiVersion: flagger.app/v1beta1 33 | kind: Canary 34 | metadata: 35 | name: color 36 | spec: 37 | provider: linkerd 38 | targetRef: 39 | apiVersion: apps/v1 40 | kind: Deployment 41 | name: color 42 | progressDeadlineSeconds: 60 43 | service: 44 | port: 80 45 | targetPort: http 46 | analysis: 47 | interval: 10s 48 | threshold: 3 49 | maxWeight: 50 50 | stepWeight: 5 51 | metrics: 52 | - name: request-success-rate 53 | thresholdRange: 54 | min: 70 55 | interval: 1m 56 | webhooks: 57 | - name: load-test 58 | type: rollout 59 | url: http://flagger-loadtester.flagger-system/ 60 | metadata: 61 | cmd: "hey -z 2m -q 10 -c 2 http://color-canary.faces/" 62 | --- 63 | apiVersion: flagger.app/v1beta1 64 | kind: Canary 65 | metadata: 66 | name: smiley 67 | spec: 68 | provider: linkerd 69 | targetRef: 70 | apiVersion: apps/v1 71 | kind: Deployment 72 | name: smiley 73 | progressDeadlineSeconds: 60 74 | service: 75 | port: 80 76 | targetPort: http 77 | analysis: 78 | interval: 10s 79 | threshold: 3 80 | maxWeight: 50 81 | stepWeight: 5 82 | metrics: 83 | - name: request-success-rate 84 | thresholdRange: 85 | min: 70 86 | interval: 1m 87 | webhooks: 88 | - name: load-test 89 | type: rollout 90 | url: http://flagger-loadtester.flagger-system/ 91 | metadata: 92 | cmd: "hey -z 2m -q 10 -c 2 http://smiley-canary.faces/" 93 | 
-------------------------------------------------------------------------------- /apps/faces/faces-ingress.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: Ingress 3 | metadata: 4 | name: faces-gui 5 | labels: 6 | app: faces-gui 7 | annotations: 8 | nginx.ingress.kubernetes.io/service-upstream: "true" # IMPORTANT!!! -> Required for AB testing using NGINX with Linkerd 9 | spec: 10 | ingressClassName: nginx 11 | rules: 12 | - host: "*.faces.sslip.io" 13 | http: 14 | paths: 15 | - pathType: Prefix 16 | path: / 17 | backend: 18 | service: 19 | name: faces-gui 20 | port: 21 | name: http 22 | - pathType: Prefix 23 | path: /color 24 | backend: 25 | service: 26 | name: color 27 | port: 28 | name: http 29 | - pathType: Prefix 30 | path: /face 31 | backend: 32 | service: 33 | name: face 34 | port: 35 | name: http 36 | - pathType: Prefix 37 | path: /smiley 38 | backend: 39 | service: 40 | name: smiley 41 | port: 42 | name: http 43 | -------------------------------------------------------------------------------- /apps/faces/faces-sync.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: source.toolkit.fluxcd.io/v1beta2 3 | kind: GitRepository 4 | metadata: 5 | name: faces 6 | spec: 7 | interval: 10m 8 | url: https://github.com/BuoyantIO/faces-demo 9 | ref: 10 | branch: main 11 | ignore: | 12 | /* 13 | !/k8s/01-base/*-profile.yaml 14 | !/k8s/01-base/faces.yaml 15 | !/k8s/01-base/faces-gui.yaml 16 | --- 17 | apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 18 | kind: Kustomization 19 | metadata: 20 | name: faces 21 | spec: 22 | targetNamespace: faces 23 | interval: 10m 24 | path: "./k8s/" 25 | prune: true 26 | sourceRef: 27 | kind: GitRepository 28 | name: faces 29 | patches: 30 | - patch: | 31 | - op: replace 32 | path: /spec/template/spec/containers/0/env/1 33 | value: 34 | name: ERROR_FRACTION 35 | value: "0" 36 | target: 37 | kind: Deployment 38 | name: (face|smiley|color) 39 | - patch: | 40 | - op: add 41 | path: /spec/template/spec/containers/0/env 42 | value: 43 | - name: ERROR_FRACTION 44 | value: "0" 45 | target: 46 | kind: Deployment 47 | name: faces-gui 48 | -------------------------------------------------------------------------------- /apps/faces/kustomization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kustomize.config.k8s.io/v1beta1 2 | kind: Kustomization 3 | namespace: faces 4 | resources: 5 | - namespace.yaml 6 | - faces-sync.yaml 7 | - faces-ingress.yaml 8 | - faces-canary.yaml 9 | - faces-abtest.yaml 10 | -------------------------------------------------------------------------------- /apps/faces/namespace.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: faces 5 | annotations: 6 | linkerd.io/inject: enabled 7 | -------------------------------------------------------------------------------- /clusters/my-cluster/apps.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 3 | kind: Kustomization 4 | metadata: 5 | name: apps 6 | namespace: flux-system 7 | spec: 8 | dependsOn: 9 | - name: flagger 10 | - name: ingress-nginx 11 | interval: 10m 12 | retryInterval: 1m 13 | timeout: 5m 14 | prune: true 15 | wait: true 16 | sourceRef: 17 | kind: GitRepository 18 | name: flux-system 19 | path: ./apps 20 | 
-------------------------------------------------------------------------------- /clusters/my-cluster/flux-system/gotk-sync.yaml: -------------------------------------------------------------------------------- 1 | # This manifest was generated by flux. DO NOT EDIT. 2 | --- 3 | apiVersion: source.toolkit.fluxcd.io/v1beta2 4 | kind: GitRepository 5 | metadata: 6 | name: flux-system 7 | namespace: flux-system 8 | spec: 9 | interval: 1m0s 10 | ref: 11 | branch: main 12 | secretRef: 13 | name: flux-system 14 | url: ssh://git@github.com/rparmer/gitops-linkerd 15 | --- 16 | apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 17 | kind: Kustomization 18 | metadata: 19 | name: flux-system 20 | namespace: flux-system 21 | spec: 22 | interval: 10m0s 23 | path: ./clusters/my-cluster 24 | prune: true 25 | sourceRef: 26 | kind: GitRepository 27 | name: flux-system 28 | -------------------------------------------------------------------------------- /clusters/my-cluster/flux-system/kustomization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kustomize.config.k8s.io/v1beta1 2 | kind: Kustomization 3 | resources: 4 | - gotk-components.yaml 5 | - gotk-sync.yaml 6 | labels: 7 | - pairs: 8 | toolkit.fluxcd.io/tenant: sre-team 9 | patches: 10 | - patch: | 11 | - op: add 12 | path: /spec/template/spec/containers/0/args/- 13 | value: --concurrent=20 14 | - op: add 15 | path: /spec/template/spec/containers/0/args/- 16 | value: --requeue-dependency=5s 17 | target: 18 | kind: Deployment 19 | name: "(kustomize-controller|helm-controller|source-controller)" 20 | - patch: | 21 | - op: replace 22 | path: /spec/template/spec/containers/0/resources/limits 23 | value: 24 | cpu: 250m 25 | memory: 256Mi 26 | target: 27 | kind: Deployment 28 | -------------------------------------------------------------------------------- /clusters/my-cluster/infrastructure.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 3 | kind: Kustomization 4 | metadata: 5 | name: cert-manager 6 | namespace: flux-system 7 | spec: 8 | interval: 1h 9 | retryInterval: 1m 10 | timeout: 5m 11 | prune: true 12 | wait: true 13 | sourceRef: 14 | kind: GitRepository 15 | name: flux-system 16 | path: ./infrastructure/cert-manager 17 | --- 18 | apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 19 | kind: Kustomization 20 | metadata: 21 | name: linkerd 22 | namespace: flux-system 23 | spec: 24 | dependsOn: 25 | - name: cert-manager 26 | interval: 1h 27 | retryInterval: 1m 28 | timeout: 5m 29 | prune: true 30 | wait: true 31 | sourceRef: 32 | kind: GitRepository 33 | name: flux-system 34 | path: ./infrastructure/linkerd 35 | --- 36 | apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 37 | kind: Kustomization 38 | metadata: 39 | name: ingress-nginx 40 | namespace: flux-system 41 | spec: 42 | dependsOn: 43 | - name: linkerd 44 | interval: 1h 45 | retryInterval: 1m 46 | timeout: 5m 47 | sourceRef: 48 | kind: GitRepository 49 | name: flux-system 50 | path: ./infrastructure/ingress-nginx 51 | prune: true 52 | wait: true 53 | --- 54 | apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 55 | kind: Kustomization 56 | metadata: 57 | name: flagger 58 | namespace: flux-system 59 | spec: 60 | dependsOn: 61 | - name: linkerd 62 | interval: 1h 63 | retryInterval: 1m 64 | timeout: 5m 65 | prune: true 66 | wait: true 67 | sourceRef: 68 | kind: GitRepository 69 | name: flux-system 70 | path: ./infrastructure/flagger 71 | --- 72 | apiVersion: 
kustomize.toolkit.fluxcd.io/v1beta2 73 | kind: Kustomization 74 | metadata: 75 | name: weave-gitops 76 | namespace: flux-system 77 | spec: 78 | interval: 1h 79 | retryInterval: 1m 80 | timeout: 5m 81 | prune: true 82 | wait: true 83 | sourceRef: 84 | kind: GitRepository 85 | name: flux-system 86 | path: ./infrastructure/weave-gitops 87 | -------------------------------------------------------------------------------- /demosh/check-github.sh: -------------------------------------------------------------------------------- 1 | set -e 2 | 3 | # This demo _requires_ a few things: 4 | # 5 | # 1. You need to set GITHUB_USER to your GitHub username. 6 | # 7 | # 2. You need to set GITHUB_TOKEN to a GitHub personal access token with repo 8 | # access to your fork of gitops-linkerd. 9 | # 10 | # 3. You need to fork https://github.com/kflynn/gitops-linkerd and 11 | # https://github.com/BuoyantIO/faces-demo under the $GITHUB_USER account. 12 | # 13 | # 4. You need to clone the two repos side-by-side in the directory tree, so 14 | # that "gitops-linkerd" and "faces-demo" are siblings. 15 | # 16 | # 5. You need both clones to be in their "main" branch. 17 | # 18 | # 6. You need to be running this script from the "gitops-linkerd" repo. 19 | # 20 | # 7. Finally, you need to edit apps/faces/faces-sync.yaml to point to your 21 | # fork of faces-demo -- change the `url` field on line 8. 22 | # 23 | # This script verifies that all these things are done. 24 | # 25 | # NOTE WELL: We use Makefile-style escaping in several places, because 26 | # demosh needs it. 27 | 28 | # First up, is GITHUB_USER set? 29 | if [ -z "$GITHUB_USER" ]; then \ 30 | echo "GITHUB_USER is not set" >&2 ;\ 31 | exit 1 ;\ 32 | fi 33 | 34 | # Next up, is GITHUB_TOKEN set to something with repo scope? 35 | 36 | s1=$(curl -s -I -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/repos/$GITHUB_USER/gitops-linkerd) 37 | s2=$(echo "$s1" | tr -d '\015' | grep 'x-oauth-scopes:') 38 | scopes=$(echo "$s2" | cut -d: -f2- | tr -d ' ",\012') 39 | 40 | if [ "$scopes" != "repo" ]; then \ 41 | echo "GITHUB_TOKEN does not have repo scope" >&2 ;\ 42 | exit 1 ;\ 43 | fi 44 | 45 | # OK. Next up: we should be in the gitops-linkerd repo, and our "origin" 46 | # remote should point to a fork of the repo under the $GITHUB_USER account. 47 | 48 | origin=$(git remote get-url --all origin) 49 | 50 | if [ $(echo "$origin" | grep -c "$GITHUB_USER/gitops-linkerd\.git$") -ne 1 ]; then \ 51 | echo "Not in the $GITHUB_USER fork of gitops-linkerd" >&2 ;\ 52 | exit 1 ;\ 53 | fi 54 | 55 | # Next up: we should be in the "main" branch. 56 | if [ $(git branch --show-current) != "main" ]; then \ 57 | echo "Not in the main branch of gitops-linkerd" >&2 ;\ 58 | exit 1 ;\ 59 | fi 60 | 61 | # Next up: we should have a sibling directory called "faces-demo" that has an 62 | # origin remote pointing to a fork of the repo under the $GITHUB_USER account. 63 | 64 | if [ ! 
-d ../faces-demo ]; then \ 65 | echo "Missing sibling directory ../faces-demo" >&2 ;\ 66 | exit 1 ;\ 67 | else \ 68 | origin=$(git -C ../faces-demo remote get-url --all origin) ;\ 69 | \ 70 | if [ $(echo "$origin" | grep -c "$GITHUB_USER/faces-demo\.git$") -ne 1 ]; then \ 71 | echo "../faces-demo is not the $GITHUB_USER fork of faces-demo" >&2 ;\ 72 | exit 1 ;\ 73 | fi ;\ 74 | \ 75 | if [ $(git -C ../faces-demo branch --show-current) != "main" ]; then \ 76 | echo "Not in the main branch of faces-demo" >&2 ;\ 77 | exit 1 ;\ 78 | fi ;\ 79 | fi 80 | 81 | # Finally, let's make sure that the `url` in `app/faces/faces-sync.yaml` is 82 | # set correctly. 83 | 84 | faces_demo_url=$(yq 'select(document_index==0) .spec.url' apps/faces/faces-sync.yaml) 85 | 86 | if [ "$faces_demo_url" != "https://github.com/${GITHUB_USER}/faces-demo" ]; then \ 87 | echo "apps/faces/faces-sync.yaml is not pointing to the $GITHUB_USER fork of faces-demo" >&2 ;\ 88 | exit 1 ;\ 89 | fi 90 | 91 | set +e 92 | -------------------------------------------------------------------------------- /demosh/check-requirements.sh: -------------------------------------------------------------------------------- 1 | set -e 2 | 3 | # Make sure that we have what we need in our $PATH. Makefile-style escapes are 4 | # required here. 5 | missing= ;\ 6 | \ 7 | for cmd in bat yq kubectl kubectx linkerd flux; do \ 8 | if ! command -v $cmd >/dev/null 2>&1; then \ 9 | missing="$missing $cmd" ;\ 10 | fi ;\ 11 | done ;\ 12 | 13 | if [ -n "$missing" ]; then \ 14 | echo "Missing commands:$missing" >&2 ;\ 15 | exit 1 ;\ 16 | fi 17 | 18 | set +e 19 | -------------------------------------------------------------------------------- /demosh/demo-tools.sh: -------------------------------------------------------------------------------- 1 | # The #@hook stuff below allows for hooks to control what's being 2 | # displayed when livecasting the demo -- for example, consider the 3 | # "show_terminal" hook: 4 | # - if DEMO_HOOK_TERMINAL is set, then "show_terminal" will execute 5 | # $DEMO_HOOK_TERMINAL as a command. 6 | # - If DEMO_HOOK_TERMINAL is not set, then "show_terminal" is a 7 | # no-op. 8 | 9 | #@hook show_terminal TERMINAL 10 | #@hook show_browser BROWSER 11 | #@hook show_video VIDEO 12 | #@hook show_slides SLIDES 13 | 14 | # browser_then_terminal, if we're livecasting, will wait, then switch the 15 | # view for the livestream to the browser, then wait again, then clear the 16 | # terminal before switching the view back to the terminal. There are a lot 17 | # of places in the demo where we want to present stuff in the terminal, then 18 | # flip to the browser to show something, then flip back to the terminal. 19 | # 20 | # If you're _not_ livecasting, so the hooks aren't doing anything... uh... 21 | # you'll be stuck hitting RETURN twice to clear the screen and get to the 22 | # next step. Working on that... 23 | 24 | #@macro browser_then_terminal 25 | #@wait 26 | #@show_browser 27 | #@wait 28 | #@clear 29 | #@show_terminal 30 | #@end 31 | 32 | # wait_clear is a macro that just waits before clearing the terminal. We 33 | # do this a lot. 34 | 35 | #@macro wait_clear 36 | #@wait 37 | #@clear 38 | #@end 39 | 40 | # start_livecast is a macro for starting a livecast. It assumes that the demo 41 | # hooks are working, and uses them to display slides at first while putting a 42 | # cue to hit RETURN on the terminal. Once the user hits RETURN, it clears the 43 | # terminal before showing it, so that the stuff after the macro call is front 44 | # and center. 
45 | 46 | #@macro start_livecast 47 | #@show_slides 48 | 49 | clear 50 | echo Waiting... 51 | 52 | #@wait_clear 53 | #@show_terminal 54 | #@end 55 | -------------------------------------------------------------------------------- /docs/screens/faces-foo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BuoyantIO/gitops-linkerd/05ed6e02f70768cce04bb272fb4b34b5e796d4ae/docs/screens/faces-foo.png -------------------------------------------------------------------------------- /docs/screens/faces.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BuoyantIO/gitops-linkerd/05ed6e02f70768cce04bb272fb4b34b5e796d4ae/docs/screens/faces.png -------------------------------------------------------------------------------- /docs/screens/linkerd-metrics.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BuoyantIO/gitops-linkerd/05ed6e02f70768cce04bb272fb4b34b5e796d4ae/docs/screens/linkerd-metrics.png -------------------------------------------------------------------------------- /docs/screens/wego-apps.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BuoyantIO/gitops-linkerd/05ed6e02f70768cce04bb272fb4b34b5e796d4ae/docs/screens/wego-apps.png -------------------------------------------------------------------------------- /docs/screens/wego-deps.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BuoyantIO/gitops-linkerd/05ed6e02f70768cce04bb272fb4b34b5e796d4ae/docs/screens/wego-deps.png -------------------------------------------------------------------------------- /docs/screens/wego-linkerd.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/BuoyantIO/gitops-linkerd/05ed6e02f70768cce04bb272fb4b34b5e796d4ae/docs/screens/wego-linkerd.png -------------------------------------------------------------------------------- /infrastructure/cert-manager/kustomization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kustomize.config.k8s.io/v1beta1 2 | kind: Kustomization 3 | namespace: cert-manager 4 | resources: 5 | - namespace.yaml 6 | - repository.yaml 7 | - release.yaml 8 | 9 | 10 | -------------------------------------------------------------------------------- /infrastructure/cert-manager/namespace.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Namespace 4 | metadata: 5 | name: cert-manager 6 | labels: 7 | toolkit.fluxcd.io/tenant: sre-team 8 | -------------------------------------------------------------------------------- /infrastructure/cert-manager/release.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: helm.toolkit.fluxcd.io/v2beta1 3 | kind: HelmRelease 4 | metadata: 5 | name: cert-manager 6 | namespace: cert-manager 7 | spec: 8 | interval: 30m 9 | chart: 10 | spec: 11 | chart: cert-manager 12 | version: "*" 13 | sourceRef: 14 | kind: HelmRepository 15 | name: cert-manager 16 | namespace: cert-manager 17 | interval: 12h 18 | values: 19 | installCRDs: true 20 | -------------------------------------------------------------------------------- /infrastructure/cert-manager/repository.yaml: 
-------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: source.toolkit.fluxcd.io/v1beta2 3 | kind: HelmRepository 4 | metadata: 5 | name: cert-manager 6 | namespace: cert-manager 7 | spec: 8 | interval: 24h 9 | url: https://charts.jetstack.io 10 | 11 | -------------------------------------------------------------------------------- /infrastructure/flagger/kustomization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kustomize.config.k8s.io/v1beta1 2 | kind: Kustomization 3 | namespace: flagger-system 4 | resources: 5 | - namespace.yaml 6 | - repository.yaml 7 | - release.yaml 8 | - loadtester.yaml 9 | -------------------------------------------------------------------------------- /infrastructure/flagger/loadtester.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: source.toolkit.fluxcd.io/v1beta2 2 | kind: OCIRepository 3 | metadata: 4 | name: flagger-manifests 5 | spec: 6 | interval: 6h 7 | url: oci://ghcr.io/fluxcd/flagger-manifests 8 | ref: 9 | semver: 1.x 10 | --- 11 | apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 12 | kind: Kustomization 13 | metadata: 14 | name: loadtester 15 | spec: 16 | interval: 30m 17 | targetNamespace: flagger-system 18 | path: "./tester/" 19 | prune: true 20 | sourceRef: 21 | kind: OCIRepository 22 | name: flagger-manifests 23 | wait: true 24 | -------------------------------------------------------------------------------- /infrastructure/flagger/namespace.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Namespace 4 | metadata: 5 | name: flagger-system 6 | labels: 7 | toolkit.fluxcd.io/tenant: sre-team 8 | annotations: 9 | linkerd.io/inject: enabled 10 | -------------------------------------------------------------------------------- /infrastructure/flagger/release.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.toolkit.fluxcd.io/v2beta1 2 | kind: HelmRelease 3 | metadata: 4 | name: flagger 5 | spec: 6 | interval: 1h 7 | releaseName: flagger 8 | install: # override existing Flagger CRDs 9 | crds: CreateReplace 10 | upgrade: # update Flagger CRDs 11 | crds: CreateReplace 12 | chart: 13 | spec: 14 | chart: flagger 15 | version: 1.x # update Flagger to the latest minor version 16 | interval: 6h # scan for new versions every six hours 17 | sourceRef: 18 | kind: HelmRepository 19 | name: flagger 20 | values: 21 | meshProvider: linkerd 22 | metricsServer: http://prometheus.linkerd-viz:9090 23 | selectorLabels: app,name,app.kubernetes.io/name,service 24 | -------------------------------------------------------------------------------- /infrastructure/flagger/repository.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: source.toolkit.fluxcd.io/v1beta2 2 | kind: HelmRepository 3 | metadata: 4 | name: flagger 5 | spec: 6 | interval: 6h 7 | type: oci 8 | url: oci://ghcr.io/fluxcd/charts 9 | -------------------------------------------------------------------------------- /infrastructure/ingress-nginx/kustomization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kustomize.config.k8s.io/v1beta1 2 | kind: Kustomization 3 | namespace: ingress-nginx 4 | resources: 5 | - namespace.yaml 6 | - repository.yaml 7 | - release.yaml 8 | 
-------------------------------------------------------------------------------- /infrastructure/ingress-nginx/namespace.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Namespace 4 | metadata: 5 | name: ingress-nginx 6 | labels: 7 | toolkit.fluxcd.io/tenant: sre-team 8 | -------------------------------------------------------------------------------- /infrastructure/ingress-nginx/release.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: helm.toolkit.fluxcd.io/v2beta1 3 | kind: HelmRelease 4 | metadata: 5 | name: ingress-nginx 6 | spec: 7 | interval: 30m 8 | chart: 9 | spec: 10 | chart: ingress-nginx 11 | version: "*" 12 | sourceRef: 13 | kind: HelmRepository 14 | name: ingress-nginx 15 | interval: 12h 16 | values: 17 | controller: 18 | service: 19 | type: LoadBalancer 20 | nodePorts: 21 | http: 32080 22 | https: 32443 23 | podAnnotations: 24 | linkerd.io/inject: enabled 25 | metrics: 26 | port: 10254 27 | enabled: true 28 | service: 29 | annotations: 30 | prometheus.io/scrape: "true" 31 | prometheus.io/port: "10254" 32 | -------------------------------------------------------------------------------- /infrastructure/ingress-nginx/repository.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: source.toolkit.fluxcd.io/v1beta2 2 | kind: HelmRepository 3 | metadata: 4 | name: ingress-nginx 5 | spec: 6 | interval: 24h 7 | url: https://kubernetes.github.io/ingress-nginx 8 | -------------------------------------------------------------------------------- /infrastructure/linkerd/README.md: -------------------------------------------------------------------------------- 1 | # Generate Linkerd v2 certificates 2 | 3 | Install the step CLI on MacOS and Linux using Homebrew run: 4 | 5 | ```sh 6 | brew install step 7 | ``` 8 | 9 | Generate the Linkerd trust anchor certificate: 10 | 11 | ```sh 12 | step certificate create identity.linkerd.cluster.local ca.crt ca.key \ 13 | --san identity.linkerd.cluster.local \ 14 | --profile root-ca --no-password --insecure \ 15 | --not-after=87600h 16 | ``` 17 | 18 | -------------------------------------------------------------------------------- /infrastructure/linkerd/kustomization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kustomize.config.k8s.io/v1beta1 2 | kind: Kustomization 3 | configurations: 4 | - kustomizeconfig.yaml 5 | resources: 6 | - linkerd-crds.yaml 7 | - linkerd-control-plane.yaml 8 | - linkerd-viz.yaml 9 | - linkerd-smi.yaml 10 | - linkerd-certs.yaml 11 | - repositories.yaml 12 | - namespaces.yaml 13 | secretGenerator: 14 | - name: linkerd-trust-anchor 15 | namespace: linkerd 16 | type: kubernetes.io/tls 17 | files: 18 | - tls.crt=ca.crt 19 | - tls.key=ca.key 20 | -------------------------------------------------------------------------------- /infrastructure/linkerd/kustomizeconfig.yaml: -------------------------------------------------------------------------------- 1 | nameReference: 2 | - kind: Secret 3 | version: v1 4 | fieldSpecs: 5 | - path: spec/valuesFrom/name 6 | kind: HelmRelease 7 | - kind: Secret 8 | version: v1 9 | fieldSpecs: 10 | - path: spec/ca/secretName 11 | kind: Issuer 12 | -------------------------------------------------------------------------------- /infrastructure/linkerd/linkerd-certs.yaml: -------------------------------------------------------------------------------- 1 | 
apiVersion: cert-manager.io/v1 2 | kind: Issuer 3 | metadata: 4 | name: linkerd-trust-anchor 5 | namespace: linkerd 6 | spec: 7 | ca: 8 | secretName: linkerd-trust-anchor 9 | --- 10 | apiVersion: cert-manager.io/v1 11 | kind: Certificate 12 | metadata: 13 | name: linkerd-identity-issuer 14 | namespace: linkerd 15 | spec: 16 | secretName: linkerd-identity-issuer 17 | duration: 48h 18 | renewBefore: 25h 19 | issuerRef: 20 | name: linkerd-trust-anchor 21 | kind: Issuer 22 | commonName: identity.linkerd.cluster.local 23 | dnsNames: 24 | - identity.linkerd.cluster.local 25 | isCA: true 26 | privateKey: 27 | algorithm: ECDSA 28 | usages: 29 | - cert sign 30 | - crl sign 31 | - server auth 32 | - client auth 33 | -------------------------------------------------------------------------------- /infrastructure/linkerd/linkerd-control-plane.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.toolkit.fluxcd.io/v2beta1 2 | kind: HelmRelease 3 | metadata: 4 | name: linkerd-control-plane 5 | namespace: linkerd 6 | spec: 7 | interval: 30m 8 | targetNamespace: linkerd 9 | dependsOn: 10 | - name: linkerd-crds 11 | releaseName: linkerd-control-plane 12 | chart: 13 | spec: 14 | version: "1.x" 15 | chart: linkerd-control-plane 16 | sourceRef: 17 | kind: HelmRepository 18 | name: linkerd 19 | interval: 12h 20 | # https://artifacthub.io/packages/helm/linkerd2/linkerd-control-plane 21 | valuesFrom: 22 | - kind: Secret 23 | name: linkerd-trust-anchor 24 | valuesKey: tls.crt 25 | targetPath: identityTrustAnchorsPEM 26 | values: 27 | proxy: 28 | resources: 29 | cpu: 30 | request: 50m 31 | limit: 100m 32 | memory: 33 | request: 64Mi 34 | limit: 128Mi 35 | identity: 36 | issuer: 37 | scheme: "kubernetes.io/tls" 38 | -------------------------------------------------------------------------------- /infrastructure/linkerd/linkerd-crds.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.toolkit.fluxcd.io/v2beta1 2 | kind: HelmRelease 3 | metadata: 4 | name: linkerd-crds 5 | namespace: linkerd 6 | spec: 7 | interval: 30m 8 | targetNamespace: linkerd 9 | releaseName: linkerd-crds 10 | chart: 11 | spec: 12 | version: "1.x" 13 | chart: linkerd-crds 14 | sourceRef: 15 | kind: HelmRepository 16 | name: linkerd 17 | interval: 12h 18 | -------------------------------------------------------------------------------- /infrastructure/linkerd/linkerd-smi.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.toolkit.fluxcd.io/v2beta1 2 | kind: HelmRelease 3 | metadata: 4 | name: linkerd-smi 5 | namespace: linkerd-smi 6 | spec: 7 | interval: 30m 8 | targetNamespace: linkerd-smi 9 | dependsOn: 10 | - name: linkerd-control-plane 11 | namespace: linkerd 12 | releaseName: linkerd-smi 13 | chart: 14 | spec: 15 | version: "0.x" 16 | chart: linkerd-smi 17 | sourceRef: 18 | kind: HelmRepository 19 | name: linkerd-smi 20 | interval: 12h 21 | -------------------------------------------------------------------------------- /infrastructure/linkerd/linkerd-viz.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.toolkit.fluxcd.io/v2beta1 2 | kind: HelmRelease 3 | metadata: 4 | name: linkerd-viz 5 | namespace: linkerd-viz 6 | spec: 7 | interval: 30m 8 | targetNamespace: linkerd-viz 9 | dependsOn: 10 | - name: linkerd-control-plane 11 | namespace: linkerd 12 | releaseName: linkerd-viz 13 | chart: 14 | spec: 15 | version: "30.x" 16 | 
chart: linkerd-viz 17 | sourceRef: 18 | kind: HelmRepository 19 | name: linkerd 20 | namespace: linkerd 21 | interval: 12h 22 | # https://artifacthub.io/packages/helm/linkerd2/linkerd-viz 23 | values: 24 | grafana: 25 | enabled: true 26 | prometheus: 27 | enabled: true 28 | --- 29 | # https://linkerd.io/2.12/tasks/exposing-dashboard/#nginx 30 | apiVersion: networking.k8s.io/v1 31 | kind: Ingress 32 | metadata: 33 | name: web-ingress 34 | namespace: linkerd-viz 35 | annotations: 36 | nginx.ingress.kubernetes.io/upstream-vhost: $service_name.$namespace.svc.cluster.local 37 | nginx.ingress.kubernetes.io/configuration-snippet: | 38 | proxy_set_header Origin ""; 39 | proxy_hide_header l5d-remote-ip; 40 | proxy_hide_header l5d-server-id; 41 | spec: 42 | ingressClassName: nginx 43 | rules: 44 | - host: "*.linkerd.sslip.io" 45 | http: 46 | paths: 47 | - pathType: Prefix 48 | path: / 49 | backend: 50 | service: 51 | name: web 52 | port: 53 | name: http 54 | -------------------------------------------------------------------------------- /infrastructure/linkerd/namespaces.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: linkerd 5 | annotations: 6 | linkerd.io/inject: disabled 7 | kustomize.toolkit.fluxcd.io/ssa: merge 8 | labels: 9 | linkerd.io/is-control-plane: "true" 10 | config.linkerd.io/admission-webhooks: disabled 11 | linkerd.io/control-plane-ns: linkerd 12 | toolkit.fluxcd.io/tenant: sre-team 13 | --- 14 | apiVersion: v1 15 | kind: Namespace 16 | metadata: 17 | name: linkerd-smi 18 | annotations: 19 | kustomize.toolkit.fluxcd.io/ssa: merge 20 | labels: 21 | linkerd.io/extension: smi 22 | toolkit.fluxcd.io/tenant: sre-team 23 | --- 24 | apiVersion: v1 25 | kind: Namespace 26 | metadata: 27 | name: linkerd-viz 28 | annotations: 29 | kustomize.toolkit.fluxcd.io/ssa: merge 30 | labels: 31 | linkerd.io/extension: viz 32 | toolkit.fluxcd.io/tenant: sre-team 33 | -------------------------------------------------------------------------------- /infrastructure/linkerd/repositories.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: source.toolkit.fluxcd.io/v1beta2 3 | kind: HelmRepository 4 | metadata: 5 | name: linkerd 6 | namespace: linkerd 7 | spec: 8 | interval: 1h 9 | url: https://helm.linkerd.io/stable 10 | --- 11 | apiVersion: source.toolkit.fluxcd.io/v1beta2 12 | kind: HelmRepository 13 | metadata: 14 | name: linkerd-smi 15 | namespace: linkerd-smi 16 | spec: 17 | interval: 1h 18 | url: https://linkerd.github.io/linkerd-smi 19 | -------------------------------------------------------------------------------- /infrastructure/weave-gitops/kustomization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kustomize.config.k8s.io/v1beta1 2 | kind: Kustomization 3 | namespace: flux-system 4 | resources: 5 | - repository.yaml 6 | - release.yaml 7 | -------------------------------------------------------------------------------- /infrastructure/weave-gitops/release.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.toolkit.fluxcd.io/v2beta1 2 | kind: HelmRelease 3 | metadata: 4 | name: weave-gitops 5 | spec: 6 | interval: 50m 7 | chart: 8 | spec: 9 | chart: weave-gitops 10 | version: "*" 11 | sourceRef: 12 | kind: HelmRepository 13 | name: weave-gitops 14 | interval: 65m 15 | # 
https://github.com/weaveworks/weave-gitops/blob/main/charts/gitops-server/values.yaml 16 | values: 17 | resources: 18 | requests: 19 | cpu: 100m 20 | memory: 64Mi 21 | limits: 22 | cpu: 1 23 | memory: 512Mi 24 | securityContext: 25 | capabilities: 26 | drop: 27 | - ALL 28 | readOnlyRootFilesystem: true 29 | runAsNonRoot: true 30 | runAsUser: 1000 31 | adminUser: 32 | create: true 33 | username: admin 34 | # Change password by generating a new hash on https://bcrypt.online 35 | # bcrypt hash for password "flux" 36 | passwordHash: "$2a$10$P/tHQ1DNFXdvX0zRGA8LPeSOyb0JXq9rP3fZ4W8HGTpLV7qHDlWhe" 37 | --- 38 | apiVersion: networking.k8s.io/v1 39 | kind: NetworkPolicy 40 | metadata: 41 | name: weave-gitops-ingress 42 | spec: 43 | policyTypes: 44 | - Ingress 45 | ingress: 46 | - from: 47 | - namespaceSelector: {} 48 | podSelector: 49 | matchLabels: 50 | app.kubernetes.io/name: weave-gitops 51 | --- 52 | apiVersion: networking.k8s.io/v1 53 | kind: Ingress 54 | metadata: 55 | name: wego-ingress 56 | spec: 57 | ingressClassName: nginx 58 | rules: 59 | - host: "*.wego.sslip.io" 60 | http: 61 | paths: 62 | - pathType: Prefix 63 | path: / 64 | backend: 65 | service: 66 | name: weave-gitops 67 | port: 68 | name: http 69 | -------------------------------------------------------------------------------- /infrastructure/weave-gitops/repository.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: source.toolkit.fluxcd.io/v1beta2 3 | kind: HelmRepository 4 | metadata: 5 | name: weave-gitops 6 | spec: 7 | type: oci 8 | interval: 60m0s 9 | url: oci://ghcr.io/weaveworks/charts 10 | -------------------------------------------------------------------------------- /kind/kind-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kind.x-k8s.io/v1alpha4 2 | kind: Cluster 3 | nodes: 4 | - role: control-plane 5 | kubeadmConfigPatches: 6 | - | 7 | kind: InitConfiguration 8 | nodeRegistration: 9 | kubeletExtraArgs: 10 | node-labels: "ingress-ready=true" 11 | extraPortMappings: 12 | - containerPort: 32080 13 | hostPort: 80 14 | protocol: TCP 15 | - containerPort: 32443 16 | hostPort: 443 17 | protocol: TCP 18 | extraMounts: 19 | - hostPath: /var/run/docker.sock 20 | containerPath: /var/run/docker.sock 21 | -------------------------------------------------------------------------------- /scripts/validate.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # This script downloads the Flux OpenAPI schemas, then it validates the 4 | # Flux custom resources and the kustomize overlays using kubeconform. 5 | # This script is meant to be run locally and in CI before the changes 6 | # are merged on the main branch that's synced by Flux. 7 | 8 | # Copyright 2022 The Flux authors. All rights reserved. 9 | # 10 | # Licensed under the Apache License, Version 2.0 (the "License"); 11 | # you may not use this file except in compliance with the License. 12 | # You may obtain a copy of the License at 13 | # 14 | # http://www.apache.org/licenses/LICENSE-2.0 15 | # 16 | # Unless required by applicable law or agreed to in writing, software 17 | # distributed under the License is distributed on an "AS IS" BASIS, 18 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 19 | # See the License for the specific language governing permissions and 20 | # limitations under the License. 
21 | 22 | # This script is meant to be run locally and in CI to validate the Kubernetes 23 | # manifests (including Flux custom resources) before changes are merged into 24 | # the branch synced by Flux in-cluster. 25 | 26 | # Prerequisites 27 | # - yq v4.30 28 | # - kustomize v4.5 29 | # - kubeconform v0.5.0 30 | 31 | set -o errexit 32 | 33 | echo "INFO - Downloading Flux OpenAPI schemas" 34 | mkdir -p /tmp/flux-crd-schemas/master-standalone-strict 35 | curl -sL https://github.com/fluxcd/flux2/releases/latest/download/crd-schemas.tar.gz | tar zxf - -C /tmp/flux-crd-schemas/master-standalone-strict 36 | 37 | find . -type f -name '*.yaml' -print0 | while IFS= read -r -d $'\0' file; 38 | do 39 | echo "INFO - Validating $file" 40 | yq e 'true' "$file" > /dev/null 41 | done 42 | 43 | kubeconform_config=("-strict" "-ignore-missing-schemas" "-schema-location" "default" "-schema-location" "/tmp/flux-crd-schemas" "-verbose") 44 | 45 | echo "INFO - Validating clusters" 46 | find ./clusters -maxdepth 2 -type f -name '*.yaml' -print0 | while IFS= read -r -d $'\0' file; 47 | do 48 | kubeconform "${kubeconform_config[@]}" "${file}" 49 | if [[ ${PIPESTATUS[0]} != 0 ]]; then 50 | exit 1 51 | fi 52 | done 53 | 54 | # mirror kustomize-controller build options 55 | kustomize_flags=("--load-restrictor=LoadRestrictionsNone") 56 | kustomize_config="kustomization.yaml" 57 | 58 | echo "INFO - Validating kustomize overlays" 59 | find . -type f -name $kustomize_config -print0 | while IFS= read -r -d $'\0' file; 60 | do 61 | echo "INFO - Validating kustomization ${file/%$kustomize_config}" 62 | kustomize build "${file/%$kustomize_config}" "${kustomize_flags[@]}" | \ 63 | kubeconform "${kubeconform_config[@]}" 64 | if [[ ${PIPESTATUS[0]} != 0 ]]; then 65 | exit 1 66 | fi 67 | done 68 | --------------------------------------------------------------------------------