├── .gitignore ├── README.md └── reference-architecture ├── README.md ├── phase-i ├── automate-ha-deployment │ ├── README.md │ └── attachments │ │ └── automate-ha-deployment-tf.zip ├── automated-rancher-backup │ ├── README.md │ └── attachments │ │ └── automated-rancher-backup-tf.zip ├── rancher-server-monitoring │ └── README.md ├── repeatable-rancher-upgrades │ └── README.md └── repeatable-rke-upgrades │ └── README.md ├── phase-ii ├── automated-cluster-deployment │ ├── README.md │ └── attachments │ │ └── automated-cluster-deployment-tf.zip ├── automated-k8s-backups │ └── README.md ├── operations-for-k8s-clusters │ └── README.md ├── repeatable-k8s-upgrades │ └── README.md ├── system-hardening │ └── README.md └── user-management │ └── README.md └── phase-iii ├── catalog ├── README.md └── attachments │ ├── catalog1.png │ ├── catalog2.png │ ├── catalog3.png │ ├── catalog4.png │ ├── catalog5.png │ ├── catalog6.png │ ├── catalog7.png │ └── catalog8.png ├── container-scanning ├── README.md └── attachments │ ├── scanning1.png │ ├── scanning2.png │ ├── scanning3.png │ ├── scanning4.png │ ├── scanning5.png │ └── scanning6.png ├── ingress-dns ├── README.md └── attachments │ ├── dns1.png │ ├── dns2.png │ ├── dns3.png │ ├── dns4.png │ ├── dns5.png │ ├── dns6.png │ ├── dns7.png │ ├── ingress1.png │ ├── ingress2.png │ └── ingress3.png ├── logging ├── README.md └── attachments │ ├── logging-filebeat.png │ ├── logging1.png │ ├── logging10.png │ ├── logging11.png │ ├── logging12.png │ ├── logging13.png │ ├── logging14.png │ ├── logging2.png │ ├── logging3.png │ ├── logging4.png │ ├── logging5.png │ ├── logging6.png │ ├── logging7.png │ ├── logging8.png │ ├── logging9.png │ ├── rancherlogging1.png │ ├── rancherlogging2.png │ ├── rancherlogging3.png │ └── rancherlogging4.png ├── registry ├── README.md └── attachments │ ├── dockerregistry1.png │ ├── dockerregistry2.png │ └── registry1.png └── storage ├── README.md └── attachments ├── nfs1.png ├── nfs2.png ├── nfs3.png ├── nfs4.png ├── nfs5.png ├── storage1.png ├── storage2.png ├── storage3.png ├── storage4.png ├── storage5.png ├── storage6.png ├── storage7.png ├── storage8.png └── storage9.png /.gitignore: -------------------------------------------------------------------------------- 1 | # Local .terraform directories 2 | **/.terraform/* 3 | 4 | # .tfstate files 5 | *.tfstate 6 | *.tfstate.* 7 | 8 | # Crash log files 9 | crash.log 10 | 11 | # Ignore any .tfvars files that are generated automatically for each Terraform run. Most 12 | # .tfvars files are managed as part of configuration and so should be included in 13 | # version control. 14 | # 15 | terraform.tfvars 16 | 17 | # Ignore override files as they are usually used to override resources locally and so 18 | # are not checked in 19 | override.tf 20 | override.tf.json 21 | *_override.tf 22 | *_override.tf.json 23 | 24 | # Include override files you do wish to add to version control using negated pattern 25 | # 26 | # !example_override.tf 27 | 28 | # Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan 29 | # example: *tfplan* 30 | 31 | .DS_Store 32 | 33 | kube_config_cluster.yml 34 | cluster.rkestate -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Rancher Onboarding Program Reference Architecture 2 | 3 | This repo contains practical examples of setting up Rancher according to the outcomes specified in the Rancher Onboarding Program. 
-------------------------------------------------------------------------------- /reference-architecture/README.md: -------------------------------------------------------------------------------- 1 | This reference architecture contains practical examples of setting up Rancher according to the outcomes specified in the Rancher Onboarding Program. 2 | 3 | ## PHASE I 4 | 5 | 1.1 [Automated Highly Available Deployment of Rancher Servers](phase-i/automate-ha-deployment) 6 | 7 | 1.2 [Repeatable Upgrades of RKE Cluster (Rancher Server)](phase-i/repeatable-rke-upgrades) 8 | 9 | 1.3 [Repeatable Upgrades of Rancher Server](phase-i/repeatable-rancher-upgrades) 10 | 11 | 1.4 [Automated Backup of Rancher](phase-i/automated-rancher-backup) 12 | 13 | 1.5 [Monitoring of Rancher Server & RKE Cluster](phase-i/rancher-server-monitoring) 14 | 15 | ## PHASE II 16 | 17 | 2.1 [Automated Deployment of Kubernetes Clusters with Autoscaling (where possible)](phase-ii/automated-cluster-deployment) 18 | 19 | 2.2 [System Hardening & Security Policies Implemented](phase-ii/system-hardening) 20 | 21 | 2.3 [Defined User Management and Governance for Kubernetes Clusters](phase-ii/user-management) 22 | 23 | 2.4 [Repeatable Upgrades of Kubernetes Cluster](phase-ii/repeatable-k8s-upgrades) 24 | 25 | 2.5 [Automated Backup of Kubernetes Cluster Components](phase-ii/automated-k8s-backups) 26 | 27 | 2.6 [Logging and Monitoring of Kubernetes Clusters](phase-ii/operations-for-k8s-clusters) 28 | 29 | ## PHASE III 30 | 31 | 3.1 [Log Collection for Rancher and Kubernetes Cluster](phase-iii/logging) 32 | 33 | 3.2 [Integration with Registry](phase-iii/registry) 34 | 35 | 3.3 [Integration with Image Scanning Tools](phase-iii/container-scanning) 36 | 37 | 3.4 [Ingress and DNS Configured](phase-iii/ingress-dns) 38 | 39 | 3.5 [Configuration of Storage Classes](phase-iii/storage) 40 | 41 | 3.6 [Helm Catalog Setup](phase-iii/catalog) -------------------------------------------------------------------------------- /reference-architecture/phase-i/automate-ha-deployment/README.md: -------------------------------------------------------------------------------- 1 | The process of deploying Rancher should be well-defined, repeatable, and automated as much as possible. Best practices for this include terraform plans for infrastructure, coupled with RKE for building Kubernetes and Helm for installing Rancher’s application components. 2 | 3 | ## Background 4 | 5 | Automation of Rancher deployment largely deals with the underlying infrastructure. That is, most of the work to be done at this step is not related to installing Rancher or even Kubernetes itself, but rather standing up the infrastructure necessary for these components. 6 | 7 | The basic process is: 8 | 9 | 1. Setup nodes / instances / compute on the platform of choice 10 | 2. Setup load balancer of choice 11 | 3. Setup DNS record pointing to LB 12 | 4. Install Kubernetes using RKE 13 | 5. Install Rancher using Helm 14 | 15 | ### Terraform example 16 | 17 | The terraform configuration used for this example can [be downloaded here](./attachments/automate-ha-deployment-tf.zip). 18 | 19 | ### Note 20 | 21 | The examples in this scenario use terraform to provision AWS EC2 instances, along with associated load balancers, Route 53 DNS records, security groups, etc. These scripts are designed to be an example, or "jumping-off" point, and should *not* be considered general-use, production-ready scripts. 
Rather, take the concepts in this guide and the terraform examples, and apply them to your situation and toolset. 22 | 23 | ## Step 1: Setup nodes 24 | 25 | Begin by selecting your desired operating system. Referring to the [Rancher Support and Maintenance Terms of Service](https://www.suse.com/suse-rancher/support-matrix/all-supported-versions), as of this writing the supported operating systems are: 26 | 27 | - CentOS 7.5, 7.6, 7.7 28 | - Oracle Linux 7.6, 7.7 29 | - RancherOS 1.5.5 30 | - Ubuntu 16.04, 18.04 31 | - RHEL 7.5, 7.6, 7.7 32 | 33 | It is recommended that the more "fully-fledged" operating systems be selected over something like RancherOS. This is because RancherOS, while excellent at running containers and being a small, immutable OS, lacks features that make integrations such as security and storage painful. You will be best served by choosing one of the other operating systems. 34 | 35 | Once you have chosen an operating system, plan the number of nodes that you are going to need. 36 | 37 | Nodes come in three flavors: etcd, controlplane, and worker. etcd nodes host instances of `etcd`, which communicate amongst themselves to form an etcd cluster. Controlplane nodes host the Kubernetes control plane components such as `kube-apiserver`, `kube-scheduler`, and others. Worker nodes are nodes that are capable of hosting user workloads. 38 | 39 | Three nodes is the **minimum** recommended count for an HA Rancher deployment, with each node having the `etcd`, `controlplane` and `worker` roles. This provides a good balance between having highly-available components (`etcd` and `controlplane`) and resource utilization of your infrastructure. 40 | 41 | Three nodes is the minimum recommended number of nodes for etcd. One node does not offer high availability. Two nodes can cause etcd to suffer from [split brain](https://www.quora.com/What-is-split-brain-in-distributed-systems). Three nodes is the minimum amount that avoids these issues. In addition, when Rancher is installed, it is scaled up as a 3-instance Kubernetes `Deployment`. Three nodes therefore ensures that Rancher itself is spread across the nodes. 42 | 43 | You could choose to separate out the various node roles into different sets of nodes, and in fact many users choose to do this. In that situation, you could have three nodes hosting `etcd`, two or more nodes hosting `controlplane` components, and one or more `worker` nodes. This uses more of your underlying infrastructure, but can be a good choice if you are looking to deploy a large Rancher instance. (That is, a Rancher system that is supporting a large number of clusters, nodes, or both). 44 | 45 | Generally speaking, though, three nodes is sufficient. 46 | 47 | The terraform in this example sets up three master nodes and three worker nodes. The master nodes host the `etcd` and `controlplane` components, while the `worker` nodes host the workload (which, in this case, is the Rancher system itself). You can see these settings in `main.tf`: 48 | 49 | ```terraform 50 | master_node_count = 3 51 | worker_node_count = 3 52 | ``` 53 | 54 | For your implementation, consider adding similar variables. This will allow you to easily scale up or down the number of nodes in your infrastructure. 55 | 56 | ## Step 2: Load Balancer 57 | 58 | When Kubernetes gets setup (in a later step), the `rke` tool will deploy the Nginx Ingress Controller. This controller will listen on ports 80 & 443 of the worker nodes, answering traffic destined for specific hostnames. 
59 | 60 | When Rancher is installed (also in a later step), the Rancher system creates an `Ingress` resource. That Ingress tells the Nginx ingress controller to listen for traffic destined for the Rancher hostname (configurable). The controller, when receiving traffic destined for the Rancher hostname, will forward that traffic to the running Rancher pods in the cluster. 61 | 62 | This is how we get web traffic to the Rancher system. However, it also presents an issue: if each node is listening on 80/443, which node do we send traffic to? The answer is any of them. However, _that_ creates another problem: if the node we're sending traffic to becomes unavailable, what happens? 63 | 64 | We need a load balancer in front of these nodes to solve this problem. 65 | 66 | A load balancer (in either Layer-7 or Layer-4 mode) will be able to balance inbound traffic to the worker nodes in this cluster. That will prevent an outage of any single node from taking down communications to our Rancher instance. 67 | 68 | In this terraform example, we are setting up an AWS Elastic Load Balancer (ELB). This is a simple Layer-4 load balancer that will forward requests on port 80 & 443 to the worker nodes that are setup. 69 | 70 | For your implementation, consider if you want/need to use a Layer-4 or Layer-7 load balancer. 71 | 72 | A layer-4 load balancer is the simpler of the two choices - you are just forwarding TCP traffic to your nodes. Considerations may need to be taken for the _mode_ of operation of certain load balancers. Some load balancers come with the choice of things like Source NAT, virtual servers, etc. 73 | 74 | A layer-7 load balancer is a bit more complicated but can offer features that you may want. For instance, a layer-7 load balancer is capable of doing TLS termination at the load balancer (as opposed to Rancher doing TLS termination itself). This can be beneficial if you want to centralize your TLS termination in your infrastructure. L7 load balancing also offers the capability for your load balancer to make decisions based on HTTP attributes such as cookies, etc. that a layer-4 LB is not able to concern itself with. 75 | 76 | ## Step 3: DNS Record 77 | 78 | Once you have setup your load balancer, you will need to create a DNS record to send traffic to this load balancer. 79 | 80 | Depending on your environment, this may be an A record pointing to the LB IP, or it may be a CNAME pointing to the LB hostname. 81 | 82 | In either case, make sure this record is the hostname that you intend Rancher to respond on. You will need to specify this hostname for Rancher installation, and it is **not possible to change later.** Make sure that your decision is a final one. 83 | 84 | In this terraform example, we are manipulating a Route53-hosted zone, and adding an A record pointing to the ELB IP. In your implementation, you may choose to automate this using your DNS provider of choice. Or, you may need to do this out-of-band if automation of your DNS records is not possible. 85 | 86 | ## Step 4: Install Kubernetes Using RKE 87 | 88 | The Rancher Kubernetes Engine, or `rke` is an "extremely simple, lightning fast Kubernetes installer that works everywhere." Rancher provides RKE to make bootstrapping Kubernetes clusters as easy as possible. 89 | 90 | You will need to use RKE to stand up the Kubernetes cluster that you install Rancher onto. 
This is a requirement from a support perspective - we are only able to validate Rancher installations in a small set of environments, and RKE is our tool of choice to stand up these clusters. 91 | 92 | If used in a standalone fashion, the `rke` tool will require that you create a `cluster.yml` file in which you specify, among other things, the nodes upon which you intend to set up Kubernetes. Documentation of all the available options for RKE is available at https://rancher.com/docs/rke/latest/en/. 93 | 94 | 95 | ## Step 5: Install Rancher Using Helm 96 | 97 | Rancher itself is published as a helm chart that is easily installable on Kubernetes clusters. Through the use of helm, we can actually leverage native Kubernetes concepts to provide Rancher in a highly-available fashion. For instance, Rancher itself is deployed as a 3-instance Kubernetes `Deployment`. In a 3-node (`worker`) cluster, Rancher then has an instance of itself running on each node, and so can tolerate multiple node failures while remaining available. 98 | 99 | Rancher has many options available for install, which are documented here: https://rancher.com/docs/rancher/v2.6/en/installation/install-rancher-on-k8s/chart-options/. There are two very important options to consider: 100 | 101 | 1. Hostname 102 | 2. TLS configuration 103 | 104 | ### Hostname 105 | 106 | This is the hostname at which you wish Rancher to be available. This should be the same hostname you configured in step 3. 107 | **This hostname cannot be changed after it is set, so please think carefully before setting this option.** 108 | 109 | ### TLS configuration 110 | 111 | Rancher supports three TLS modes: 112 | 113 | 1. Rancher-generated TLS certificate 114 | 2. Let's Encrypt 115 | 3. Bring-your-own certificate 116 | 117 | The first mode is the simplest. In this case, you will need to install `cert-manager` into the cluster. Rancher utilizes cert-manager to issue and maintain its certificates. Rancher will generate a CA certificate of its own, and sign a cert using that CA. Cert-manager is then responsible for managing that certificate. 118 | 119 | The second mode, Let's Encrypt, also uses cert-manager. However, in this case, cert-manager is combined with a special `Issuer` for Let's Encrypt that performs all actions (including request and validation) necessary for getting an LE-issued cert. Please note, however, that this requires that the Rancher instance be available from the Internet. This is the option that is configured in the provided terraform. 120 | 121 | The third option allows you to bring your own public- or private-CA signed certificate. Rancher will use that certificate to secure websocket and HTTPS traffic. In this case, you must upload this certificate (and associated key) as PEM-encoded files with the names `tls.crt` and `tls.key`. 122 | 123 | *If you are using a private CA, you must also upload that certificate.* This is due to the fact that this private CA may not be trusted by your nodes. Rancher will take that CA certificate, and generate a checksum from it, which the various Rancher components will use to validate their connection to Rancher. 124 | 125 | ## Terraform 126 | 127 | In this terraform example, the [RKE terraform provider](https://github.com/rancher/terraform-provider-rke) is used. You can see that this example includes the specification of hosts in the RKE declaration based on the nodes created in earlier steps. 
For example: 128 | 129 | ```terraform 130 | dynamic nodes { 131 | for_each = aws_instance.rancher-master 132 | ``` 133 | 134 | This is a terraform 0.12 block that creates node declarations dynamically from the already-created EC2 instances. There is a similar block for the worker nodes. 135 | 136 | RKE will communicate with these nodes via SSH, and run various Docker containers to instantiate Kubernetes. Thus, it is a requirement for the nodes that you are setting up to allow inbound tcp/22 communications from the host you are running `rke` on. 137 | 138 | After RKE installs the Kubernetes cluster, we use the `helm` terraform provider to install Rancher and its prerequisites. Looking at `rancher-ha.tf`, we can see that not only is Rancher installed as a helm chart, but so is cert-manager. The script requires cert-manager as it is leveraging Let's Encrypt as described above. 139 | 140 | To start using the provided terraform: 141 | 142 | 1. Make sure you have terraform installed and updated 143 | 2. Install the RKE provider following the instructions in the link above 144 | 3. Copy the terraform to a directory of your choosing 145 | 4. Set your variables in `main.tf`. This includes setting an email for Let's Encrypt, but the script can be modified if you want to use something else for your Rancher certificate. 146 | 5. `terraform init` to download the necessary providers 147 | 6. `terraform apply` will execute the scripts to provision the resources. (Optionally, you can see what will be done by running `terraform plan`.) 148 | 149 | ## Wrap-Up 150 | 151 | Through this documentation and example, you should be able to arrive at a repeatable, highly-available installation of Rancher, including node setup. 152 | 153 | Some of the steps listed here offer more options than what is included in this terraform example. In the case that you wish to use some of these options, it is recommended that you build upon our example here, combined with our documentation, to develop a workflow that fits your environment. -------------------------------------------------------------------------------- /reference-architecture/phase-i/automate-ha-deployment/attachments/automate-ha-deployment-tf.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-i/automate-ha-deployment/attachments/automate-ha-deployment-tf.zip -------------------------------------------------------------------------------- /reference-architecture/phase-i/automated-rancher-backup/README.md: -------------------------------------------------------------------------------- 1 | This is an example of automation to produce recurring backups in Rancher, as well as the automation to perform a recovery from backup. 2 | 3 | ## Requirements 4 | 5 | - Terraform 6 | - AWS 7 | - RKE 8 | 9 | If you are on a Mac, you can install Terraform and RKE with Homebrew by running: `brew install terraform` & `brew install rke`. 10 | 11 | ### Terraform example 12 | 13 | The terraform configuration used for this example can [be downloaded here](./attachments/automated-rancher-backup-tf.zip). 
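For orientation, the following is a minimal sketch of the kind of resources that plan provisions: two Ubuntu EC2 instances for the cluster nodes and an S3 bucket for the etcd snapshots. It is not the actual contents of the archive; the provider region, AMI ID, instance type, and resource names are placeholder assumptions.

```terraform
# Sketch only: roughly the shape of the backup example's infrastructure.
provider "aws" {
  region = "us-east-1" # placeholder region
}

variable "ssh_keypair" {}
variable "resource_prefix" {}

# Two Ubuntu nodes that will run the RKE cluster being backed up.
resource "aws_instance" "rke_node" {
  count         = 2
  ami           = "ami-00000000000000000" # replace with an Ubuntu AMI for your region
  instance_type = "t3.large"              # placeholder size
  key_name      = var.ssh_keypair

  tags = {
    Name = "${var.resource_prefix}-rke-node-${count.index}"
  }
}

# Bucket that the recurring etcd snapshots will be uploaded to.
resource "aws_s3_bucket" "rke_backup" {
  bucket = "demo-rke-backup-bucket"
}
```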
14 | 15 | ## Configuring Backup 16 | 17 | ### Step 1 - Start two EC2 Servers 18 | 19 | First, create a file called `terraform.tfvars` that contains the following: 20 | 21 | ```terraform 22 | ssh_keypair = "" 23 | vpc_id = "" 24 | resource_prefix = "my-name-or-unique-id" 25 | ``` 26 | 27 | Replace the variables with values that correspond to your environment. 28 | 29 | 30 | Then run: 31 | 32 | ```bash 33 | terraform init 34 | ``` 35 | 36 | and subsequently: 37 | 38 | ```bash 39 | terraform apply 40 | ``` 41 | 42 | You should now have two Ubuntu servers running in EC2. Take note of their IP addresses for the next step. 43 | 44 | ### Step 2 - Configure RKE cluster.yml 45 | 46 | Edit the file called `cluster.yml` in the root of this repo to reflect the correct IP addresses of the two servers you just created. 47 | 48 | Next we want to configure the backup settings for the cluster. Our Terraform plan from above created an S3 bucket we can use for this called `demo-rke-backup-bucket`. Update the section under the key `etcd` in the `cluster.yml` to reflect the correct settings for your AWS account. You will need to provide it with AWS credentials that the automated job can use to upload. We recommend you create a narrow-scoped IAM user profile for this purpose. 49 | 50 | ### Step 3 - Deploying Kubernetes and starting Backup Job 51 | 52 | Now we can apply our settings with RKE: 53 | 54 | ```bash 55 | rke up --ssh-agent-auth 56 | ``` 57 | 58 | This will install Kubernetes on these nodes and configure automated backups. You may need to add the SSH key you are using in AWS to your agent by running `ssh-add /path/to/key`. 59 | 60 | ## Configuring Restore 61 | 62 | To perform a restore, we will reuse much of the information from the previous example to construct the restore command: 63 | 64 | ```bash 65 | rke etcd snapshot-restore --config cluster.yml --name snapshot-name \ 66 | --s3 --access-key S3_ACCESS_KEY --secret-key S3_SECRET_KEY \ 67 | --bucket-name demo-rke-backup-bucket --s3-endpoint s3.amazonaws.com 68 | ``` 69 | 70 | Be sure to replace `snapshot-name` with the name of an actual snapshot. Also replace `S3_ACCESS_KEY` and `S3_SECRET_KEY` with the same keys we used previously to write the snapshots. 71 | -------------------------------------------------------------------------------- /reference-architecture/phase-i/automated-rancher-backup/attachments/automated-rancher-backup-tf.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-i/automated-rancher-backup/attachments/automated-rancher-backup-tf.zip -------------------------------------------------------------------------------- /reference-architecture/phase-i/rancher-server-monitoring/README.md: -------------------------------------------------------------------------------- 1 | Monitoring of Rancher and Kubernetes cluster components can be achieved in two main ways: 2 | 3 | 1. Scraping metrics from `/metrics` endpoints of the Kubernetes components 4 | 2. Enabling Rancher-based monitoring (self-contained Prometheus + Grafana deployment) 5 | 6 | ## Scraping Metrics from Endpoints 7 | 8 | Most Kubernetes components expose a `/metrics` endpoint through which operational metrics of the component can be scraped. 
For instance, from `kube-apiserver`, a metrics export could contain the following info: 9 | 10 | ```plaintext 11 | etcd_request_duration_seconds_bucket{operation="listWithCount",type="/registry/management.cattle.io/clusters",le="0.05"} 772 12 | etcd_request_duration_seconds_bucket{operation="listWithCount",type="/registry/management.cattle.io/clusters",le="0.1"} 772 13 | etcd_request_duration_seconds_bucket{operation="listWithCount",type="/registry/management.cattle.io/clusters",le="0.25"} 773 14 | etcd_request_duration_seconds_bucket{operation="listWithCount",type="/registry/management.cattle.io/clusters",le="0.5"} 776 15 | ``` 16 | 17 | While this is just a snippet of the metrics that are exported, you can begin to get a feel for how this information is being represented and exposed. 18 | 19 | These metrics can be scraped using a monitoring tool of your choice. An example of such a tool would be [Prometheus](https://prometheus.io/). You could configure Prometheus to collect these metrics and store their values. From there, alerting and visualization capabilities can be built upon this data. An example of a visualization tool is [Grafana](https://grafana.com/). 20 | 21 | Specifics of how to implement this type of solution are outside of the scope of this document. Most organizations already have a monitoring solution in place for their traditional infrastructure. Where possible, it is recommended to leverage that existing tooling to help provide a holistic view of your systems. Certain monitoring systems may not be well-suited for this type of metrics collection, however. In those cases it is recommended to explore cloud-native solutions such as Prometheus and Grafana as mentioned above. 22 | 23 | ## Enabling Rancher-based Monitoring 24 | 25 | Version 2.2 of Rancher introduced a monitoring capability for Kubernetes. Specifically, Rancher enables you to deploy Prometheus & Grafana to your clusters, have those services automatically configured for a multitude of metrics and visualizations, and then integrate those visualizations into Rancher itself. 26 | 27 | Enabling this functionality depends on how you are configuring and managing the Rancher product. 28 | 29 | For configuration through the user interface, you can visit Tools -> Monitoring when in the cluster context of one of your clusters. Several options related to the Prometheus and Grafana deployment are made available, such as resource reservations and persistent volume configuration. 30 | 31 | Configuration through an infrastructure-as-code tool is a matter of changing specific configuration options. In _Automated Highly Available Deployment of Rancher Servers_, configuration of Rancher itself was accomplished through terraform. In that example, however, no monitoring capabilities were enabled. 32 | 33 | The following is an example of a terraform snippet using the `rancher2` provider to enable monitoring on a Rancher-managed cluster. A `rancher2_cluster` resource is declared, and various options are set for the deployment of cluster monitoring. This terraform is also included in the phase 2 terraform for setting up a cluster through Rancher. 
34 | 35 | ```terraform 36 | # Create a new rancher2 RKE Cluster 37 | resource "rancher2_cluster" "foo-custom" { 38 | name = "foo-custom" 39 | description = "Foo rancher2 custom cluster" 40 | rke_config { 41 | network { 42 | plugin = "canal" 43 | } 44 | } 45 | enable_cluster_monitoring = true 46 | cluster_monitoring_input { 47 | answers = { 48 | "exporter-kubelets.https" = true 49 | "exporter-node.enabled" = true 50 | "exporter-node.ports.metrics.port" = 9796 51 | "exporter-node.resources.limits.cpu" = "200m" 52 | "exporter-node.resources.limits.memory" = "200Mi" 53 | "grafana.persistence.enabled" = false 54 | "grafana.persistence.size" = "10Gi" 55 | "grafana.persistence.storageClass" = "default" 56 | "operator.resources.limits.memory" = "500Mi" 57 | "prometheus.persistence.enabled" = "false" 58 | "prometheus.persistence.size" = "50Gi" 59 | "prometheus.persistence.storageClass" = "default" 60 | "prometheus.persistent.useReleaseName" = "true" 61 | "prometheus.resources.core.limits.cpu" = "1000m", 62 | "prometheus.resources.core.limits.memory" = "1500Mi" 63 | "prometheus.resources.core.requests.cpu" = "750m" 64 | "prometheus.resources.core.requests.memory" = "750Mi" 65 | "prometheus.retention" = "12h" 66 | } 67 | } 68 | } 69 | ``` 70 | -------------------------------------------------------------------------------- /reference-architecture/phase-i/repeatable-rancher-upgrades/README.md: -------------------------------------------------------------------------------- 1 | Upgrades should be well understood and where possible automated as a low risk procedure. 2 | 3 | ## Pre-requisites 4 | 5 | This scenario assumes you have built an infrastructure using the steps outlined in [Automated Highly Available Deployment of Rancher Servers](360042011191). If you have not, it is recommended that you read that scenario first to develop the background necessary to continue. 6 | 7 | ### Note - Semantic Versioning 8 | 9 | Kubernetes (and Rancher, and many other projects) make use of [semantic versioning](https://semver.org/). Semantic versioning specifies a format of `major.minor.patch`. From the semver website: 10 | 11 | > Given a version number MAJOR.MINOR.PATCH, increment the: 12 | > 13 | > - MAJOR version when you make incompatible API changes, 14 | > - MINOR version when you add functionality in a backwards compatible manner, and 15 | > - PATCH version when you make backwards compatible bug fixes. 16 | 17 | For example, as of this writing, the latest RKE version is `1.1.0` which is a major 1, minor 1, patch 0 version. References to major, minor, and patch version changes follow this scheme for the rest of the document. 18 | 19 | ## Background 20 | 21 | In [Automated Highly Available Deployment of Rancher Servers](360042011191), Helm was used (via the Terraform provider) to install Rancher as a chart on top of an RKE-provisioned Kubernetes cluster. For a refresher, view the contents of `rancher-ha.tf`. 22 | 23 | Note that in this configuration, there was one other application installed - the cert-manager Helm chart. Rancher makes use of cert-manager in certain TLS configurations to manage the certificates that secure Rancher communications. 24 | 25 | Generally speaking, Rancher TLS configurations fall into the following three categories: 26 | 27 | | Configuration | Description | Requires cert-manager? | 28 | | ------------- | ----------- | ---------------------- | 29 | | Self-signed | Rancher generates both a CA and a TLS certificate, and self-signs the certificate. 
| Yes | 30 | | Bring-your-own | Bring your own certificates and Rancher will utilize them. | No | 31 | | Let's Encrypt | Rancher will manage requesting and installing a Let's Encrypt certificate for you. | Yes | 32 | 33 | Of these three options, the second one (Bring-Your-Own) offers two "sub-options": 34 | 35 | 1. TLS termination in the Kubernetes cluster (at ingress) 36 | 2. TLS termination externally 37 | 38 | In [Automated Highly Available Deployment of Rancher Servers](360042011191), Let's Encrypt certificates were requested and installed. This is evident through the usage of the configuration option `ingress.tls.source` (which was set to `letsEncrypt`), and the `letsEncrypt.*` options. These options are arguments passed to the Helm chart installation (and upgrade). 39 | 40 | Depending on your configuration, these options may be different. When upgrading Rancher, it is important to maintain the same options from install to upgrade. 41 | 42 | Successful upgrades of Rancher will need to execute the following steps: 43 | 44 | 1. Determine target Rancher version 45 | 2. Determine existing Rancher install options 46 | 3. Execute Rancher upgrade 47 | 48 | ## Step 1 - Determine Target Rancher Version 49 | 50 | ### Understanding Rancher Versioning 51 | 52 | Each version of Rancher supports up to four Kubernetes releases: 53 | 54 | 1. `Experimental` (generally whatever the latest minor release of Kubernetes is). Sometimes this version of Kubernetes is not immediately available in Rancher. 55 | 2. `Default` (one release back from `Experimental`) 56 | 3. One minor release back from `Default` 57 | 4. Two minor releases back from `Default` 58 | 59 | As of v2.4.2, Rancher supports Kubernetes 1.17, 1.16, and 1.15. 60 | 61 | ### RKE Cluster Version 62 | 63 | In order to remain compatible with Rancher tooling, the cluster that Rancher is installed upon should be upgraded to a Rancher-supported version. Rancher versions match RKE versions, allowing you to increment the two as you upgrade your Rancher infrastructure. 64 | 65 | See [Repeatable Upgrades of RKE Cluster (Rancher Server)](360041579072) to learn more about upgrading the underlying RKE cluster for Rancher. The rest of this document assumes that your RKE cluster is at a Kubernetes version with which Rancher is compatible. 66 | 67 | ### Selecting Rancher Version 68 | 69 | Recall that Rancher's versioning follows the semantic versioning guidelines. Thus, upgrades from one patch version to another should not cause any breaking changes and should be relatively low risk. Upgrades from one minor version to another should be backwards compatible - but may (usually do) introduce new features and functionality. Major version upgrades are reserved for API changes that are not backwards compatible, or other "major" upgrade paths. 70 | 71 | Prior to upgrading, check Rancher's documentation on known upgrade issues. That documentation is available at [https://rancher.com/docs/rancher/v2.x/en/upgrades/upgrades/#known-upgrade-issues](https://rancher.com/docs/rancher/v2.x/en/upgrades/upgrades/#known-upgrade-issues). 72 | 73 | ## Step 2 - Determine Existing Rancher Install Options 74 | 75 | Rancher is installed as a Helm chart on top of an RKE-provisioned cluster. As part of being a Helm chart, Rancher comes with a litany of options that can be set during installation. 
These options are documented at [https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/](https://rancher.com/docs/rancher/v2.x/en/installation/ha/helm-rancher/chart-options/). 76 | 77 | Prior to upgrading, you must document the options used to deploy Rancher. 78 | 79 | ### Helm CLI Installs 80 | 81 | If you installed Rancher using the `helm` CLI, retrieving install options can be achieved in a two-step process: 82 | 83 | 1. Determine the name of your Rancher chart installation. To do so, execute `helm list -n cattle-system` at a console that is configured for `kubectl` access to your Rancher cluster. That output should look something similar to: 84 | 85 | ```plaintext 86 | NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE 87 | cert-manager 1 Fri Apr 24 15:33:32 2020 DEPLOYED cert-manager-v0.12.0 v0.12.0 kube-system 88 | rancher 11 Fri Apr 24 15:37:04 2020 DEPLOYED rancher-2.4.2 v2.4.2 cattle-system 89 | ``` 90 | 91 | In this instance, Rancher is deployed under the name `rancher`. 92 | 93 | 2. Using the name of your Rancher chart installation, obtain a list of the options used to install the chart. To do so, execute `helm get values -n cattle-system ` where `` is the value you looked up in step 1. The output from this step will be a listing of key-value pairs that comprise the options used to install Rancher. 94 | 95 | ### Infrastructure-as-code Installs 96 | 97 | The advantage of executing `helm` installations using an infrastructure-as-code tool is that your install options should always be available to you (provided proper maintenance and version control). 98 | 99 | In [Automated Highly Available Deployment of Rancher Servers](360042011191), options for Rancher installation are documented in `rancher-ha.tf`. Those options are: 100 | 101 | * `hostname` 102 | * `ingress.tls.source` 103 | * `letsEncrypt.email` 104 | * `letsEncrypt.environment` 105 | 106 | If you have developed your own solution, these options may differ. 107 | 108 | Note these options, as they will be needed for the upgrade procedure. 109 | 110 | ## Step 3 - Execute Rancher Upgrade 111 | 112 | Before upgrading Rancher, look through the release notes for the version of Rancher to which you are upgrading. These release notes may contain deprecations of certain chart options that you may wish to adjust. 113 | 114 | ### Helm CLI Installations 115 | 116 | If you installed Rancher at the CLI using `helm`, upgrading is a two step process: 117 | 118 | 1. Update your Helm repository. To do so, execute `helm repo update`. You may have either installed Rancher from `rancher-stable` or `rancher-latest` chart repository. *Upgrades from `rancher-alpha` to stable or latest are not supported. Alpha is not recommended for production installations.* 119 | 2. Execute the chart update command, specifying the chart options you documented in _Step 2 - Determine Existing Rancher Install Options_. This command is: 120 | 121 | ```bash 122 | helm upgrade -n cattle-system rancher rancher-[stable|latest]/rancher --set = 123 | ``` 124 | 125 | Replace `--set =` with your options. Repeat as many times as necessary. 126 | 127 | Execute the command. This will upgrade your Rancher installation. 128 | 129 | ### Infrastructure-as-code Installs 130 | 131 | Depending on your IaC tooling, performing this upgrade could take different forms. 
In [Automated Highly Available Deployment of Rancher Servers](360042011191), the version of Rancher installed is specified in `main.tf`, along with the value of options used in the installation: 132 | 133 | ```terraform 134 | rancher_version = "v2.4.2" 135 | le_email = "none@none.com" 136 | [...] 137 | ``` 138 | 139 | Performing the upgrade to a newer version of Rancher is as simple, in this case, as changing `rancher_version` from `v2.4.2` to the desired target version (keeping in mind general upgrade guidelines mentioned above). Then, we would execute `terraform apply` and allow Terraform to perform this upgrade for us. 140 | -------------------------------------------------------------------------------- /reference-architecture/phase-i/repeatable-rke-upgrades/README.md: -------------------------------------------------------------------------------- 1 | Upgrades should be well understood and where possible automated as a low risk procedure. 2 | 3 | ## Pre-requisites 4 | 5 | This scenario assumes you have built an infrastructure using the steps outlined in [Automated Highly Available Deployment of Rancher Servers](360042011191). If you have not, it is recommended that you read that scenario first to develop the background necessary to continue. 6 | 7 | --- 8 | ### Note - Semantic Versioning 9 | 10 | Kubernetes (and Rancher, and many other projects) make use of [semantic versioning](https://semver.org/). Semantic versioning specifies a format of `major.minor.patch`. From the semver website: 11 | 12 | > Given a version number MAJOR.MINOR.PATCH, increment the: 13 | > 14 | > - MAJOR version when you make incompatible API changes, 15 | > - MINOR version when you add functionality in a backwards compatible manner, and 16 | > - PATCH version when you make backwards compatible bug fixes. 17 | 18 | For example, as of this writing, the latest RKE version is `0.3.0` which is a major 0, minor 3, patch 0 version. References to major, minor, and patch version changes follow this scheme for the rest of the document. 19 | 20 | --- 21 | 22 | ## Background 23 | 24 | In [Automated Highly Available Deployment of Rancher Servers](360042011191), RKE was used to setup the Kubernetes cluster upon which Rancher was installed. For a refresher, view the contents of `rke.tf` in that scenario. 25 | 26 | The RKE configuration in that scenario specifies a Kubernetes version as a local variable. If this was left out each RKE version has a default for the cluster version. As of this writing, `0.3.0` is the latest version of RKE. This version specifies `v1.15.4-rancher1-2` as the _default_ version of Kubernetes in use. 27 | 28 | To update RKE-created clusters, the steps are fairly simple: 29 | 30 | 1. Identify next version of Kubernetes you wish to deploy, and ensure RKE compatibility 31 | 2. Update your infrastructure-as-code (in this case, updating the rke third party provider, as well as the other terraform providers used), and execute the change 32 | 33 | ### Kubernetes Versioning 34 | 35 | Kubernetes has published versioning schemes and recommendations available [here](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#kubernetes-release-versioning) and [here](https://kubernetes.io/docs/setup/release/version-skew-policy/). 36 | 37 | From those documents, we can understand the following general recommendations: 38 | 39 | 1. Upgrading from one patch release to another patch release (within the same `major.minor`) is a suported, expected, low-risk operation. 40 | 2. 
The Kubernetes authors recommend upgrading at most two `minor` releases at a time, e.g. going from `1.13` to `1.15` 41 | 42 | *These documents also specify a supported component upgrade order. This is not being outlined here as the upgrade order is managed by RKE and should remain opaque to the administrator.* 43 | 44 | There is a lot to comprehend within these guidelines. Certainly there is a lot to manage when upgrading Kubernetes. Fortunately, RKE makes this operation painless and easy for us to manage. 45 | 46 | ### RKE Versioning 47 | 48 | Each version of RKE supports four Kubernetes releases: 49 | 50 | 1. `Experimental` (generally whatever the latest minor release of Kubernetes is) 51 | 2. `Default` (one release back from `Experimental`) 52 | 3. One minor release back from `Default` 53 | 4. Two minor releases back from `Default` 54 | 55 | RKE will bump patch versions between releases. For example, RKE `0.2.8` supported the following Kubernetes versions: 56 | 57 | ``` 58 | v1.15.3-rancher1-1 (experimental) 59 | v1.14.6-rancher1-1 (default) 60 | v1.13.10-rancher1-2 61 | v1.12.9-rancher1-1 62 | ``` 63 | 64 | RKE version `0.3.0` supports the following Kubernetes versions: 65 | 66 | ``` 67 | v1.16.1-rancher1-1 (experimental) 68 | v1.15.4-rancher1-2 (default) 69 | v1.14.7-rancher1-1 70 | v1.13.11-rancher1-1 71 | ``` 72 | 73 | Note that in addition to incrementing (as a sliding window) the minor versions of Kubernetes, the patch versions have also incremented. This is supported under the Kubernetes upgrade guidelines. 74 | 75 | ## Terraform 76 | 77 | To change the version of Kubernetes for the RKE cluster, you can modify the local variable `kubernetes_version` in `main.tf`. To get the list of valid versions for a specific RKE version, you can run `rke config -l -a`. 78 | 79 | If we were to execute `terraform apply`, the following actions would take place: 80 | 81 | 1. Terraform would evaluate the current state and pick up the change in Kubernetes version 82 | 2. The RKE provider would be engaged to (behind the scenes) execute an `rke up` that would upgrade the Kubernetes version of the underlying cluster. 83 | 84 | ## Wrap-Up 85 | 86 | This document covered upgrade recommendations for Kubernetes using RKE tooling, as well as how to apply those recommendations in practice. In reality, upgrades to Kubernetes should be planned carefully and executed during periods of downtime or maintenance windows. RKE makes all attempts to perform upgrades safely and successfully, but as in any other operation, failures can and do occur. 87 | -------------------------------------------------------------------------------- /reference-architecture/phase-ii/automated-cluster-deployment/README.md: -------------------------------------------------------------------------------- 1 | This section will go over best practices for deploying a cluster using Rancher through the 'custom install' method. It also includes opinionated terraform leveraging AWS as an example. 2 | 3 | ## Background 4 | 5 | Rancher uses the Rancher Kubernetes Engine (RKE) to deploy clusters in the same manner as the RKE CLI that we used for the Rancher HA deployment. Rancher can also import existing clusters installed through other methods, or manage hosted providers (EKS, AKS, GKE), but these clusters' lifecycles will be managed externally from Rancher, as Rancher cannot upgrade, modify, or back up clusters it did not create. 
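Before walking through the options, here is a minimal sketch of what a Rancher-provisioned ("custom") cluster looks like when declared through the `rancher2` Terraform provider. This is only an orientation sketch with an illustrative resource name, not the attached example itself.

```terraform
# Minimal "custom" cluster declaration via the rancher2 provider (sketch only).
resource "rancher2_cluster" "example_custom" {
  name        = "example-custom"
  description = "Cluster provisioned through Rancher's custom install method"

  rke_config {
    network {
      plugin = "canal" # Rancher's default CNI
    }
  }
}
```

Once the cluster object exists, Rancher generates a registration command that is run on each node to join it with the desired roles; the steps below cover the node and option choices that feed into this configuration.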
6 | 7 | ### Terraform example 8 | 9 | The terraform configuration used for this example can [be downloaded here](./attachments/automated-cluster-deployment-tf.zip). 10 | 11 | ## Step 1: Define Cluster Nodes 12 | 13 | The same OS recommendations from the Rancher HA deployment work here as well. Also, each cluster managed by Rancher will have its own `etcd`, `controlplane`, and `worker` nodes. However, unlike the HA cluster, the roles should not all be colocated. You should either use one role per node, or you can colocate `etcd` and `controlplane`, which is what the terraform in this example does. We go over the reasons for this strategy in our documentation here: [Production Ready Cluster](https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/). That link also has useful information around node sizing and networking requirements. 14 | 15 | In our example, the minimum number of nodes required for a highly available cluster is 5 (3 `etcd/controlplane`, 2 `worker`), but additional workers would usually be required, as losing one worker node in this 5-node cluster would cut your total capacity in half, and there is a good chance the one remaining worker node would not be able to handle the required capacity. If you are splitting the `etcd` and `controlplane` roles, then you would need 3 `etcd` nodes and 2 `controlplane` nodes at a minimum. In the example terraform you can modify the node counts in `main.tf`, which default to 3 `etcd/controlplane` and 3 `worker` nodes. 16 | 17 | The example defaults to `t3.large` machines, but larger machines may make sense if you are deploying larger applications, or several tools that run on all nodes. You can also mix node sizes, which will often make sense for larger clusters, as you may have different classes of worker nodes, or want your `etcd` and `controlplane` nodes to be different sizes. Information around the required infrastructure for the different roles can be found at the [Production Ready Cluster](https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/) link above. 18 | 19 | One last consideration is the disks for your machines. Etcd nodes in particular are very sensitive to slow disks, so it is highly recommended to use SSDs if possible. Also consider using larger or additional volumes if you plan to use a storage solution that leverages the disks of the nodes themselves. Since we are deploying to AWS, we will just use the cloud provider's capability to provision EBS volumes if needed. 20 | 21 | ## Step 2: Cluster Options 22 | 23 | 1. Networking 24 | 25 | Rancher by default implements Canal as the cluster CNI. If you require Windows worker support, you will have to use Flannel. You can also have Rancher deploy Calico or Weave, or deploy your own. Information about different CNIs can be found here: [CNI](https://rancher.com/docs/rancher/v2.x/en/faq/networking/cni-providers/). 26 | *terraform setting: true* 27 | 28 | 2. Project Network Isolation 29 | 30 | This feature is turned off by default. If you are deploying in a multitenant manner and wish for each Project within Rancher to be isolated from the others on the cluster network, it should be enabled. _This is only available if the CNI is set to Canal_. 31 | 32 | 3. Cloud Provider 33 | 34 | This will allow your cluster to provision external resources, usually either `services` of type `LoadBalancer` or `persistentvolumes`. This will be set to Amazon in our case. 
The capability of different cloud providers can be found here [Cloud Providers](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/), but this is not an exhaustive list. 35 | 36 | 4. Private Registry 37 | 38 | This can be enabled if the images used to install this cluster need to be pulled from somewhere else, usually used for airgapped installs. The example will leave this `Disabled`. 39 | 40 | 5. Authorized Endpoint 41 | 42 | This will allow users to access the cluster's API server directly, instead of being proxied through Rancher. This is on by default which is how our example will have it. If you plan to leverage this for much of your communication rather than just as a fallback in case of an outage, it is recommended that you put an LB in front of this cluster's controlplane nodes, and provide this setting with the FQDN of that LB. If you do so, you will need to either add this FQDN in the `authentication.sans` section of your `cluster.yaml`, or provide a CA that will be used to validate the traffic from the LB. 43 | 44 | 6. Advanced Options 45 | 46 | Set snapshot restore, maybe a default PSP 47 | 48 | 7. Additional Considerations 49 | 50 | Beyond these options which are available as a form in the UI, you can also edit the `cluster.yaml` directly. You may want to do so if you need to modify the configuration or pass extra arguments to your core kubernetes components (e.g. etcd, api-server, kubelet) as well as the addon components Rancher deploys (Ingress Controller, Metrics Server). We will leave these options as the default in the example, but you can see examples of these configurations here [Ingress Controller Configuration](https://rancher.com/docs/rke/latest/en/config-options/add-ons/ingress-controllers/) and here [Kubernetes Default Services](https://rancher.com/docs/rke/latest/en/config-options/services/). 51 | 52 | ## Terraform 53 | 54 | The included terraform files will stand up a cluster in AWS. In `main.tf` you can configure the rancher version, kubernetes version and the node count. Through the next phases we will learn how to modify this terraform as needed to add additional configuration and capabilities to our cluster. 55 | 56 | To get this terraform working, you will need to install the rke provider which can be found at https://github.com/yamamoto-febc/terraform-provider-rke. 57 | 58 | Once that provider is installed, you can deploy by using `terraform init` and then `terraform apply`. Optionally `terraform plan` will show you the changes that executing the apply command will take. 59 | -------------------------------------------------------------------------------- /reference-architecture/phase-ii/automated-cluster-deployment/attachments/automated-cluster-deployment-tf.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-ii/automated-cluster-deployment/attachments/automated-cluster-deployment-tf.zip -------------------------------------------------------------------------------- /reference-architecture/phase-ii/automated-k8s-backups/README.md: -------------------------------------------------------------------------------- 1 | Every cluster managed by Rancher should have an automated backup solution for their etcd nodes. This can be leveraged for similar scenarios as the backup for the Rancher HA cluster itself. 
More information can be found here: [Backing up etcd](https://rancher.com/docs/rancher/v2.x/en/cluster-admin/backing-up-etcd/). 2 | 3 | ### _Note on Cluster Types_ 4 | 5 | > This process assumes that Rancher is installing the cluster for you. If you are leveraging a hosted solution (EKS, GKE, AKS, etc.) or importing the cluster, Rancher will not be able to take snapshots or restore from one. 6 | 7 | ## Backup Configuration Options 8 | 9 | By default, a cluster created by Rancher will be configured to take an etcd snapshot every 12 hours, and to retain the last 6 snapshots. These snapshots will be stored locally on each etcd node in `/opt/rke/etcd-snapshots`. We suggest backing up your snapshots off-cluster as well in case you lose all your etcd nodes in a disaster scenario. If you can use S3 as a backup target, Rancher can be configured to move these automatically for you, which is how the terraform cluster script in the deployment phase is configured. 10 | 11 | ## Restore 12 | 13 | You can restore a cluster to a previous snapshot in the UI or directly in the API. If you lose all of your nodes, you can still restore to a snapshot after adding new nodes back to the cluster, provided the snapshot was stored off-cluster. See [Restoring etcd](https://rancher.com/docs/rancher/v2.x/en/cluster-admin/restoring-etcd/) for more information. 14 | 15 | ## Terraform 16 | 17 | In the terraform script, we create an S3 bucket in AWS and provide an access key and secret key to the cluster configuration in `cluster-ha.tf` in the `rke_config.services.etcd.backup_config.s3_backup_config` section. 18 | -------------------------------------------------------------------------------- /reference-architecture/phase-ii/operations-for-k8s-clusters/README.md: -------------------------------------------------------------------------------- 1 | This document will go over considerations for setting up tooling for your Rancher-managed clusters. 2 | 3 | ## Logging 4 | 5 | Logging can be configured after cluster creation. Rancher has a built-in logging tool that will deploy Fluentd as a DaemonSet. Fluentd is a data collector that will scrape standard error/standard out as well as the files in `/var/log/containers` when deployed by Rancher. Fluentd can be configured to send these logs to a number of different endpoints within the UI. One of these endpoints is also Fluentd, so that users can send logs to an intermediary to perform resource-intensive pre-processing of the logs if needed. Configuring a logging endpoint such as Elasticsearch is outside the scope of this document. 6 | 7 | [Rancher Logging Documentation](https://rancher.com/docs/rancher/v2.x/en/cluster-admin/tools/logging/) 8 | 9 | ## Monitoring 10 | 11 | Monitoring for your Rancher-managed clusters can be implemented in the same manner as it is for the Rancher HA cluster itself. You can refer to the documentation in the [Rancher Server Monitoring](360042011251) section for details. The terraform script enables cluster monitoring by default. This can be disabled and enabled manually after cluster creation, which may be desirable so a storage class can be configured first to allow Grafana and Prometheus to be backed up to a `PersistentVolume`. 12 | 13 | [Rancher Monitoring Documentation](https://rancher.com/docs/rancher/v2.x/en/cluster-admin/tools/monitoring/) 14 | 15 | _Note: You are not required to use Rancher's built in solutions for logging and monitoring. 
Rancher can be used with any third party logging or monitoring solution you choose, but the built in solutions are also covered by Rancher's SLAs._ 16 | -------------------------------------------------------------------------------- /reference-architecture/phase-ii/repeatable-k8s-upgrades/README.md: -------------------------------------------------------------------------------- 1 | This document will go over considerations you should keep in mind when upgrading clusters that were deployed by Rancher. 2 | 3 | ## Upgrade Considerations 4 | 5 | * Upgrades are on a per cluster basis, so not all clusters managed by one Rancher instance need to be on the same Kubernetes versions. 6 | * In Rancher 2.3.0 and later, Kubernetes versions are decoupled from the Rancher version, so upgrading Rancher is no longer required to get newer supported Kubernetes versions. 7 | * Upgrades may cause small downtime windows if certain components need to be upgraded. For example if the kubelet or kube-proxy services need to be redeployed, routing to pods on that node may be briefly interrupted. Likewise for addons such as the Ingress Controller. In Rancher 2.4.0 and later, you can mitigate this during upgrade by configuring an upgrade strategy to upgrade nodes in batches, per the [documentation here](https://rancher.com/docs/rancher/v2.x/en/cluster-admin/upgrading-kubernetes/). 8 | * For support reasons, you should keep your clusters upgraded to a version that fits within our support matrix for the Rancher version you are on. This information can be found here: [Support Terms](https://rancher.com/support-maintenance-terms/). Usually versions outside of the three most recent are are considered EOL by the community and may not receive security patches. 9 | 10 | ## Terraform 11 | 12 | An upgrade can be accomplished in the example terraform in the [Automated Kubernetes Cluster Deployment](360042011091) section by changing the local variable `kubernetes_version`. Rerunning `terraform apply` will then trigger an upgrade to the new version. 13 | -------------------------------------------------------------------------------- /reference-architecture/phase-ii/system-hardening/README.md: -------------------------------------------------------------------------------- 1 | This section will cover security considerations you may want to take in order to harden your cluster and reduce potential attack surface area and intrusion points. Most of this section is regarding configuration that will be applied after the terraform scripts. Configuration best practices for some of these tools is outside the scope of this document, but they are mentioned here to give a general overview. 2 | 3 | ## CIS Benchmarking 4 | 5 | The Center for Internet Security (CIS) provides a benchmark assessment for securing a Kubernetes cluster. Rancher has created a self assessment guide as well as recommended tunings in order to meet these standards. If desired, many of the recommended options can be enabled within the cluster configuration in the terraform scripts. 6 | 7 | You can find the Rancher CIS guide here: [Hardening Guide](https://rancher.com/docs/rancher/v2.x/en/security/hardening-2.3/). 8 | 9 | ## RKE Templates 10 | 11 | Often there can be a number of users and teams in charge of deploying clusters through Rancher. In this case it is useful to be able to put controls around how clusters are deployed in addition to who can deploy them. 
RKE templates allow an administrator to define a cluster configuration through Rancher and enforce whether that configuration must be used. The templates can only be leveraged for clusters deployed by Rancher, and do not apply to imported or hosted clusters (e.g. EKS, AKS, GKE). More information can be found [here](https://rancher.com/docs/rancher/v2.x/en/admin-settings/rke-templates/).
12 | 
13 | ## Service Mesh
14 | 
15 | Rancher can also deploy Istio for managed clusters. Istio is a service mesh that gives operators and developers control over application traffic management and security. From a security perspective, Istio gives you the ability to require TLS communication between services in the cluster, set up rate-limiting policies, and control access for traffic ingress/egress to services. More information can be found in the Rancher docs [here](https://rancher.com/docs/rancher/v2.x/en/cluster-admin/tools/istio/), as well as on Istio's website [here](https://istio.io/docs/tasks/).
16 | 
17 | ## External Considerations
18 | 
19 | While Rancher provides you with the tools and guidance to harden a cluster, your organization may consider using an additional container security solution on top of Rancher. These external products can give you additional capabilities around vulnerability management, runtime security, secret management, regular security and compliance auditing, and more. Rancher partners with Twistlock, Aquasec and Sysdig to help many companies enhance their cluster security controls.
20 | 
--------------------------------------------------------------------------------
/reference-architecture/phase-ii/user-management/README.md:
--------------------------------------------------------------------------------
1 | This document will discuss general best practices for setting up authentication and authorization for users.
2 | 
3 | ## Authentication
4 | 
5 | Rancher allows you to create local users to grant access to an environment, but it is a much better idea to configure one of the authentication plugins that Rancher provides. This allows you to leverage existing users and groups from a provider such as Active Directory, other LDAP services, and SAML providers. This configuration is done at the Global level, which means that all clusters managed by a single Rancher will use the same authentication provider.
6 | 
7 | ### Note On Provider Searches
8 | 
9 | > If you leverage an LDAP solution, you will be able to search for users and groups below the group and user search bases you define, on a list of attributes you can provide. This can be done even before a user logs into the platform, so that you can set up roles beforehand. SAML providers, however, only return information about an individual user and do not provide a searchable tree. Because of that, you won't be able to search for other users or groups you are not a member of when assigning roles.
10 | 
11 | ### Terraform
12 | 
13 | In the [Automated Highly Available Deployment of Rancher Servers](360042011191) terraform file `rancher-ha.tf` there is a commented-out section that configures authentication. This example uses GitHub authentication and only allows the admin to log in by default. This should be configured using whatever authentication provider your company plans to use.
14 | 
15 | ```terraform
16 | resource "rancher2_auth_config_github" "github" {
17 |   count         = local.rancher2_auth_config_github_count
18 |   client_id     = var.github_client_id
19 |   client_secret = var.github_client_secret
20 |   access_mode   = "restricted"
21 | 
22 |   # Concatenate the local Rancher id with any specified GitHub principals
23 |   allowed_principal_ids = ["local://${data.rancher2_user.admin.id}"]
24 | }
25 | ```
26 | 
27 | ## Authorization
28 | 
29 | ### Global
30 | 
31 | At the global level you can set a default access level for new users. This will define what a user can do when first logging in within the Global context. Usually you will not want all of your users to be able to create or delete clusters, modify Global settings, etc. To avoid this, you can set the default Global Role to `User` or `User Base`. The main difference is that `User` can create clusters and node templates.
32 | 
33 | If you want to put more controls around the type of cluster users can create, you can leverage RKE Templates. These allow you to define the options that users can modify while deploying a cluster. You can either provide example templates to users, or require that they be used by changing `cluster-template-enforcement` to true in global settings.
34 | 
35 | ### Cluster
36 | 
37 | Cluster-level settings are useful if you have operations teams who need to be able to set up cluster tooling and create projects, but not actually deploy applications into specific projects. You can also give someone the `Cluster Owner` role, which will give them access to all resources in any project within the cluster.
38 | 
39 | ### Project
40 | 
41 | Projects are a custom resource Rancher has created to make it easier to manage namespaces. A project is a collection of one or more namespaces that allows you to apply RBAC rules, resource quotas and pod security policies to the entire set of namespaces.
42 | 
43 | Projects are the level at which you split up access control within the cluster to enable multitenancy. Oftentimes projects will be created per team, such as `DevTeamA` and `DevTeamB`, or by line of business. Roles can then be applied to users and groups through your authentication provider. If you want a user to have complete access within a particular project, `Project Owner` and `Project Member` can be used. The main difference between these is that an Owner can modify other project role bindings, so it often makes more sense to give users `Project Member`.
44 | 
45 | These roles also inherit from the Kubernetes built-in roles (listed within Rancher as Kubernetes view, edit and admin), which are discussed here: [User Facing Roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles). When creating your own roles, you can reference these built-in user roles, or other roles defined within Rancher, as a baseline.
46 | 
47 | ### Note On UI Visibility
48 | 
49 | > Only projects a user has some level of access to will display in the UI. If a user does not have access to any project in a cluster, that cluster will not display in their dropdown.
50 | 
51 | ### Terraform
52 | 
53 | In the [Automated Kubernetes Cluster Deployment](360042011091) section, we create a local user and two projects, `DevTeamA` and `DevTeamB`. We have given the local user the `Project Member` role in `DevTeamA` and the `Read-Only` role in `DevTeamB`. This is just for example purposes, to demonstrate how project creation and authorization can be done through terraform.
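
As a reference for how the project/namespace relationship shows up inside the cluster itself, Rancher associates a namespace with a project through an annotation on the namespace (at the time of writing, `field.cattle.io/projectId`). The snippet below is only a hypothetical sketch: the `c-xxxxx:p-xxxxx` value is a placeholder for the cluster and project IDs from your own environment, and namespaces created through the Rancher UI inside a project receive this annotation automatically.

```yaml
# Hypothetical sketch: placing a namespace into an existing Rancher project.
# The annotation value has the form "<cluster-id>:<project-id>" and must be
# copied from your own cluster, e.g. from the project's URL in the Rancher UI.
apiVersion: v1
kind: Namespace
metadata:
  name: devteama-app
  annotations:
    field.cattle.io/projectId: c-xxxxx:p-xxxxx
```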
54 | 
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/catalog/README.md:
--------------------------------------------------------------------------------
1 | This tutorial aims to show the flow for setting up a custom catalog on Rancher and then using it to deploy workloads.
2 | 
3 | Rancher allows Catalogs to be set up at the following levels:
4 | 
5 | * Global: Catalog items are available to all clusters managed by the Rancher master.
6 | * Cluster: Catalog items are only available to all the projects managed under the specified cluster.
7 | * Project: Catalog items are only available to all the namespaces in the specified project.
8 | 
9 | Using these scopes, development teams can easily set up their deployment pipelines using the catalog.
10 | 
11 | For example, common logging and monitoring charts could be made available to all the clusters.
12 | 
13 | Cluster-specific charts, such as ingress setup, could be restricted to the cluster scope.
14 | 
15 | Workload-specific charts, meanwhile, could be limited to the Project level.
16 | 
17 | Rancher supports the use of git repos as catalogs.
18 | 
19 | The repos can be either public or private.
20 | 
21 | We will cover examples of both public and private repos.
22 | 
23 | ## Public repos
24 | 
25 | Public repos are easier to set up, as there is no auth requirement when setting up the catalog.
26 | 
27 | Users just need to go to the **Tools -> Catalog** item and click on "Add Catalog".
28 | 
29 | ![](./attachments/catalog1.png)
30 | 
31 | Provide the Catalog details:
32 | 
33 | ![](./attachments/catalog2.png)
34 | 
35 | Rancher will refresh the catalog and in a few seconds the repo should be active:
36 | ![](./attachments/catalog3.png)
37 | 
38 | Since our catalog scope was cluster, we can use this in any project on the cluster.
39 | 
40 | For the purpose of this example we will select the demo-app from our recently added catalog repo.
41 | 
42 | ![](./attachments/catalog4.png)
43 | 
44 | We will just use the defaults:
45 | 
46 | ![](./attachments/catalog5.png)
47 | 
48 | And in a few seconds the app should be deployed and ready.
49 | 
50 | ![](./attachments/catalog6.png)
51 | 
52 | ## Private Repos
53 | 
54 | For private repos, the process is the same, except we need to select the "Use Private Catalog" option while adding the Catalog.
55 | 
56 | ![](./attachments/catalog7.png)
57 | 
58 | For this example I have cloned my original **charts** repo into a **charts-private** repo.
59 | 
60 | Again, for this second example we specify the Catalog to be available only to the Project.
61 | 
62 | We can now provide our credentials.
63 | 
64 | For GitHub repos, we can use a Personal Access Token for authentication.
65 | 
66 | Once the catalog has been refreshed it is ready for use, and we can deploy our trusted **demo-app** from the catalog.
67 | 
68 | ![](./attachments/catalog8.png)
69 | 
70 | ## Additional information
71 | 
72 | Catalog items are essentially helm charts.
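
To make that concrete, each entry in a catalog is just a chart directory in the git repo, and the metadata file at the root of such a directory looks roughly like the sketch below (the values are illustrative and follow the Helm v2 chart format used by Rancher 2.x catalogs):

```yaml
# Chart.yaml: illustrative metadata for a chart served through a catalog
apiVersion: v1
name: demo-app
version: 0.1.0
appVersion: "1.0"
description: A sample application published through a Rancher catalog
```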
73 | 74 | For details on learning how to write helm charts please refer to the official [documentation.](https://helm.sh/docs/developing_charts/) 75 | -------------------------------------------------------------------------------- /reference-architecture/phase-iii/catalog/attachments/catalog1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/catalog/attachments/catalog1.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/catalog/attachments/catalog2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/catalog/attachments/catalog2.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/catalog/attachments/catalog3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/catalog/attachments/catalog3.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/catalog/attachments/catalog4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/catalog/attachments/catalog4.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/catalog/attachments/catalog5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/catalog/attachments/catalog5.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/catalog/attachments/catalog6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/catalog/attachments/catalog6.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/catalog/attachments/catalog7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/catalog/attachments/catalog7.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/catalog/attachments/catalog8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/catalog/attachments/catalog8.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/container-scanning/README.md: 
--------------------------------------------------------------------------------
1 | This guide is intended to provide an example of how to set up container scanning infrastructure on K8S.
2 | 
3 | There are many products, open source and paid, that can be used to perform scanning on container images.
4 | 
5 | For the purpose of this demo we will be using CoreOS [clair](https://github.com/coreos/clair).
6 | 
7 | Clair can perform static analysis of vulnerabilities in application containers.
8 | 
9 | Once clair is deployed, users can scan container images through its API calls or through one of the readily available CLI integrations.
10 | 
11 | Additional details of the most popular integrations can be found [here](https://github.com/coreos/clair/blob/master/Documentation/integrations.md).
12 | 
13 | For the purpose of this example we will be using **klar**.
14 | 
15 | We will start by deploying Clair to our K8S cluster.
16 | 
17 | A clair helm chart is already available in [BanzaiCharts](https://github.com/banzaicloud/banzai-charts).
18 | 
19 | We can add the Banzai charts helm repo as a catalog to our cluster:
20 | 
21 | ![](./attachments/scanning1.png)
22 | 
23 | After a few seconds this should be available to use:
24 | ![](./attachments/scanning2.png)
25 | 
26 | The clair helm chart runs a PostgreSQL database for persistence.
27 | 
28 | We can either disable persistence from the chart settings, or use a Persistent Volume Claim.
29 | 
30 | For production-grade environments it is advisable to set up persistence.
31 | 
32 | This can be done via the volume settings in the Rancher UI.
33 | 
34 | The PVC name needed by the helm chart is **data-clair-postgresql-0**:
35 | ![](./attachments/scanning3.png)
36 | 
37 | We are now ready to deploy the helm chart:
38 | ![](./attachments/scanning4.png)
39 | 
40 | We will be overriding the following settings in the helm chart:
41 | ![](./attachments/scanning5.png)
42 | 
43 | * ingress.enabled
44 | * ingress.hosts
45 | * service.type
46 | * postgresql.postgresqlUsername
47 | * postgresql.postgresqlDatabase
48 | * postgresql.postgresqlPassword
49 | * image.repository
50 | * image.tag
51 | 
52 | These settings ensure that ingress to the clair service is allowed, and on a predefined hostname.
53 | 
54 | **NOTE:** The postgresql value overrides are a workaround to handle the change in the postgresql dependency chart.
55 | 
56 | The fully deployed clair workload should look something like this:
57 | ![](./attachments/scanning6.png)
58 | 
59 | ## Setting up the CLI: klar
60 | 
61 | We can install [klar](https://github.com/optiopay/klar) by either following the install instructions on the project page or using one of the precompiled packages.
62 | 
63 | klar needs the following environment variable to be set:
64 | 
65 | **CLAIR_ADDR**, to point to the clair endpoint.
In our example it will be `clair.demo:80`: 66 | 67 | ```shell 68 | $ export CLAIR_ADDR=http://clair.demo:80 69 | ``` 70 | 71 | Now the scan can be run as follows: 72 | 73 | ```shell 74 | $ klar alpine 75 | clair timeout 1m0s 76 | docker timeout: 1m0s 77 | no whitelist file 78 | Analysing 1 layers 79 | Got results from Clair API v1 80 | Found 0 vulnerabilities 81 | ``` 82 | 83 | ```shell 84 | $ klar ubuntu 85 | clair timeout 1m0s 86 | docker timeout: 1m0s 87 | no whitelist file 88 | Analysing 4 layers 89 | Got results from Clair API v1 90 | Found 27 vulnerabilities 91 | Negligible: 9 92 | Low: 11 93 | Medium: 7 94 | ``` 95 | 96 | Please note that klar can only scan the images once they are available in the docker registry. 97 | 98 | klar cli can be easily integrated into image build pipelines to scan the image once it has been built. 99 | 100 | The CI step should take care to remove the image if the scan fails or make sure that the build artefact cannot be promoted. 101 | -------------------------------------------------------------------------------- /reference-architecture/phase-iii/container-scanning/attachments/scanning1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/container-scanning/attachments/scanning1.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/container-scanning/attachments/scanning2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/container-scanning/attachments/scanning2.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/container-scanning/attachments/scanning3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/container-scanning/attachments/scanning3.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/container-scanning/attachments/scanning4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/container-scanning/attachments/scanning4.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/container-scanning/attachments/scanning5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/container-scanning/attachments/scanning5.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/container-scanning/attachments/scanning6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/container-scanning/attachments/scanning6.png 
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/ingress-dns/README.md:
--------------------------------------------------------------------------------
1 | Ingress in K8S exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
2 | 
3 | To be able to use the ingress resource, the cluster must have an ingress controller enabled.
4 | 
5 | For the purpose of this example we will be using the Nginx ingress controller, which can be easily enabled from the Rancher UI.
6 | 
7 | ![](./attachments/ingress1.png)
8 | 
9 | This should deploy a daemonset for the nginx ingress controller on all your worker nodes.
10 | 
11 | The pods can be viewed in the **ingress-nginx** namespace:
12 | 
13 | ```shell
14 | $ kubectl get pods -n ingress-nginx -o wide
15 | NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE              NOMINATED NODE   READINESS GATES
16 | default-http-backend-5954bd5d8c-wld2q   1/1     Running   0          12d   10.42.1.7        gm-test-worker1
17 | nginx-ingress-controller-cc7z6          1/1     Running   0          8d    172.16.135.118   gm-test-worker2
18 | nginx-ingress-controller-k2qlz          1/1     Running   0          8d    172.16.135.117   gm-test-worker1
19 | nginx-ingress-controller-xmfvh          1/1     Running   0          8d    172.16.133.169   gm-test-worker3
20 | 
21 | ```
22 | 
23 | When deployed from Rancher, this namespace gets mapped into the **System** project.
24 | 
25 | ![](./attachments/ingress2.png)
26 | 
27 | Now that we have ingress configured, we are ready to set up a demo app to consume this ingress resource.
28 | 
29 | We can use the sample chart available [here](https://github.com/ibrokethecloud/charts/tree/master/demo-app) for the purpose of testing.
30 | 
31 | The rendered ingress spec is as follows:
32 | 
33 | ```yaml
34 | # Source: demo-app/templates/ingress.yaml
35 | apiVersion: extensions/v1beta1
36 | kind: Ingress
37 | metadata:
38 |   name: demo-app
39 |   labels:
40 |     app.kubernetes.io/name: demo-app
41 |     helm.sh/chart: demo-app-0.1.0
42 |     app.kubernetes.io/instance: demo-app
43 |     app.kubernetes.io/version: "1.0"
44 |     app.kubernetes.io/managed-by: Tiller
45 |   annotations:
46 |     kubernetes.io/ingress.class: nginx
47 | 
48 | spec:
49 |   rules:
50 |     - host:
51 |       http:
52 |         paths:
53 |           - path: /demo
54 |             backend:
55 |               serviceName: demo-app
56 |               servicePort: http
57 | 
58 | ```
59 | 
60 | This configures the ingress controller to route all requests for the URI `/demo` to the service `demo-app`.
61 | 
62 | The annotation `kubernetes.io/ingress.class: nginx` is used to specify the ingress controller to use.
63 | 
64 | Users may want to run multiple ingress controllers, for example in a multi-tenant environment, to ensure that ingress can be managed by each of the tenants.
65 | 
66 | In that case, users need to ensure that the correct ingress controller is specified.
67 | 
68 | Once the ingress is applied, the ingress specification can be viewed via kubectl:
69 | 
70 | ```shell
71 | $ kubectl get ingress
72 | NAME       HOSTS   ADDRESS                                         PORTS   AGE
73 | demo-app   *       172.16.133.169,172.16.135.117,172.16.135.118    80      4m37s
74 | ```
75 | 
76 | Or via Rancher:
77 | 
78 | ![](./attachments/ingress3.png)
79 | 
80 | Our app can now be accessed via any of the ingress hosts, using any valid hostname, at the URI **/demo**.
81 | 
82 | We can change this behaviour by updating the ingress spec to match specific hostnames only:
83 | 
84 | ```yaml
85 | # Source: demo-app/templates/ingress.yaml
86 | apiVersion: extensions/v1beta1
87 | kind: Ingress
88 | metadata:
89 |   name: demo-app
90 |   labels:
91 |     app.kubernetes.io/name: demo-app
92 |     helm.sh/chart: demo-app-0.1.0
93 |     app.kubernetes.io/instance: demo-app
94 |     app.kubernetes.io/version: "1.0"
95 |     app.kubernetes.io/managed-by: Tiller
96 |   annotations:
97 |     kubernetes.io/ingress.class: nginx
98 | 
99 | spec:
100 |   rules:
101 |     - host: demo.local
102 |       http:
103 |         paths:
104 |           - path: /demo
105 |             backend:
106 |               serviceName: demo-app
107 |               servicePort: http
108 | 
109 | ```
110 | 
111 | We have added the extra **host** match directive: `host: demo.local`.
112 | 
113 | As a result, the ingress controller will now apply the path matching rules only when the host header matches **demo.local**.
114 | 
115 | The ingress now reflects this change:
116 | 
117 | ```shell
118 | $ kubectl get ingress
119 | NAME       HOSTS        ADDRESS                                         PORTS   AGE
120 | demo-app   demo.local   172.16.133.169,172.16.135.117,172.16.135.118   80      19m
121 | ```
122 | 
123 | Now that we have covered ingress, we can look at how DNS can be used along with the ingress spec.
124 | 
125 | We can use the **Global-DNS** option available within Rancher for this.
126 | 
127 | To use Global DNS we first need to configure a DNS provider. For the purpose of this demo, we are going to use AWS Route 53:
128 | 
129 | ![](./attachments/dns1.png)
130 | 
131 | Once this is done we can configure a global-dns record, and map it to a multi-cluster app or a project.
132 | 
133 | For this example we will just use the **demo-app** project on our cluster:
134 | 
135 | ![](./attachments/dns2.png)
136 | 
137 | Once created, we should see an entry:
138 | 
139 | ![](./attachments/dns3.png)
140 | 
141 | Once the base setup is done, we need to deploy the helm chart with the extra annotations on the ingress spec:
142 | 
143 | ```yaml
144 | apiVersion: extensions/v1beta1
145 | kind: Ingress
146 | metadata:
147 |   name: demo-app
148 |   labels:
149 |     app.kubernetes.io/name: demo-app
150 |     helm.sh/chart: demo-app-0.1.0
151 |     app.kubernetes.io/instance: demo-app
152 |     app.kubernetes.io/version: "1.0"
153 |     app.kubernetes.io/managed-by: Tiller
154 |   annotations:
155 |     kubernetes.io/ingress.class: nginx
156 |     rancher.io/globalDNS.hostname: demo-app.glbdns.fe.rancher.space
157 | 
158 | spec:
159 |   rules:
160 |     - host: "demo-app.glbdns.fe.rancher.space"
161 |       http:
162 |         paths:
163 |           - path: /demo
164 |             backend:
165 |               serviceName: demo-app
166 |               servicePort: http
167 | ```
168 | 
169 | The spec now has an extra annotation:
170 | 
171 | `rancher.io/globalDNS.hostname: demo-app.glbdns.fe.rancher.space`
172 | 
173 | This matches the host matching rule on the ingress spec.
174 | 
175 | Once the workload is deployed, the GlobalDNS controller picks up the workload and creates a DNS record in Route 53 matching the IPs of the worker nodes running the workload:
176 | 
177 | ![](./attachments/dns4.png)
178 | 
179 | which matches the worker node:
180 | 
181 | ![](./attachments/dns5.png)
182 | 
183 | The benefit of using a Global DNS controller is that DNS records are kept up to date to match the ingress nodes.
184 | 185 | For example we scaled our node pool: 186 | 187 | ![](./attachments/dns7.png) 188 | 189 | and that gets reflected in the DNS record as well: 190 | 191 | ![](./attachments/dns6.png) 192 | -------------------------------------------------------------------------------- /reference-architecture/phase-iii/ingress-dns/attachments/dns1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/dns1.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/ingress-dns/attachments/dns2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/dns2.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/ingress-dns/attachments/dns3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/dns3.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/ingress-dns/attachments/dns4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/dns4.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/ingress-dns/attachments/dns5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/dns5.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/ingress-dns/attachments/dns6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/dns6.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/ingress-dns/attachments/dns7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/dns7.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/ingress-dns/attachments/ingress1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/ingress1.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/ingress-dns/attachments/ingress2.png: 
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/ingress2.png
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/ingress-dns/attachments/ingress3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/ingress-dns/attachments/ingress3.png
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/logging/README.md:
--------------------------------------------------------------------------------
1 | This guide is intended to provide an example of how to set up a simple logging stack on K8S and use Rancher to set up log forwarding to that cluster.
2 | 
3 | ## Setup EFK stack
4 | 
5 | We can choose one of the options available in the helm stable catalog, and development/operations teams are free to choose one that works for their use case:
6 | 
7 | ![](./attachments/logging1.png)
8 | 
9 | For the purpose of this demo we already have a K8S cluster running and managed by Rancher.
10 | 
11 | We have set up a logging project to deploy this on the cluster, as this eventually helps with RBAC for managing the workload at the K8S layer.
12 | 
13 | For the purpose of this demo I will be choosing the ElasticSearch+FluentBit+Kibana stack:
14 | 
15 | ![](./attachments/logging2.png)
16 | 
17 | There are a few customisations available in the app, which need to be configured based on the use case:
18 | 
19 | ![](./attachments/logging3.png)
20 | 
21 | Because we will be using Rancher to manage log forwarding, we want to disable FileBeat in this Helm chart by setting "Enable Filebeat" to "False":
22 | 
23 | ![](./attachments/logging-filebeat.png)
24 | 
25 | There are some key items to note:
26 | 
27 | * The app will spin up 3 elasticsearch pods, so based on the number of worker nodes and how ingress to elasticsearch needs to be managed, the team may need to decide whether they prefer to use NodePort or ClusterIP.
28 | 
29 | For the purpose of this example, we have used ClusterIP.
30 | ![](./attachments/logging11.png)
31 | 
32 | * The front end for viewing logs is Kibana, which can be customised further. One key customisation to take note of is the **hostname** used to access Kibana.
33 | 
34 | The hostname is used to set up the ingress rule on the ingress controller:
35 | 
36 | ![](./attachments/logging7.png)
37 | 
38 | For the purpose of this demo we use **logging.demo**.
39 | 
40 | * The users may also want to choose how to manage persistence for ElasticSearch. This can be easily achieved by using persistent volume claims. For the purpose of this demo we will not be changing this, but a sketch of such a claim follows this list.
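
If you do decide to enable persistence, the chart will typically create one claim per elasticsearch data pod. The claim below is only an illustrative sketch: the claim name, namespace, size and storage class are assumptions that depend on the chart's values and on the storage classes available in your cluster (the `nfs-provisioner` class from the storage section is used here as an example).

```yaml
# Illustrative only: the real claim name and size depend on the chart's values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-elasticsearch-data-0   # hypothetical name following the chart's volumeClaimTemplates
  namespace: efk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
  storageClassName: nfs-provisioner  # assumption: the NFS storage class from the storage section
```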
41 | 
42 | We now simply click **Launch**, and this should launch the catalog app into the logging project under a new namespace named **efk**:
43 | 
44 | ![](./attachments/logging4.png)
45 | 
46 | Users can now follow the status of the EFK rollout, and in a few minutes you should have all the logging components deployed:
47 | 
48 | ![](./attachments/logging5.png)
49 | 
50 | Users can also now verify the ingress spec that allows access to Kibana:
51 | 
52 | ![](./attachments/logging8.png)
53 | 
54 | Kibana can now be accessed by pointing the browser to http://logging.demo
55 | 
56 | ![](./attachments/logging9.png)
57 | 
58 | As part of the Kibana / ElasticSearch configuration, on first login the user will need to specify an index pattern to use from ElasticSearch:
59 | 
60 | ![](./attachments/logging6.png)
61 | 
62 | Once this initial setup is done, users can search for logs from the cluster using Kibana:
63 | 
64 | ![](./attachments/logging10.png)
65 | 
66 | This document hopefully shows how easy it is to set up a logging stack of your choice using the wide variety of community-curated helm charts.
67 | 
68 | This document is intended to be a quick-start reference guide, as we used most of the common defaults.
69 | 
70 | For production-grade logging stacks, users need to decide on factors such as:
71 | 
72 | - log retention duration
73 | - persistence
74 | - rate of log ingestion, which in turn impacts elasticsearch JVM heap sizing.
75 | 
76 | ## Configure Log Forwarding
77 | 
78 | Rancher itself can integrate with a number of common logging solutions.
79 | 
80 | Logging can be set up at the cluster level or per project.
81 | 
82 | We will enable logging at the Cluster scope.
83 | 
84 | The same steps can also be performed at the Project scope:
85 | 
86 | ![](./attachments/rancherlogging1.png)
87 | 
88 | Rancher supports out-of-the-box integration with Elasticsearch, Splunk, Kafka, Syslog and Fluentd.
89 | 
90 | For the purpose of this example we will use the Elasticsearch integration:
91 | 
92 | ![](./attachments/rancherlogging2.png)
93 | 
94 | We need to specify a few mandatory fields:
95 | 
96 | * the endpoint for Elasticsearch
97 | * the prefix for the index name.
98 | 
99 | ![](./attachments/rancherlogging3.png)
100 | 
101 | Once the fields are specified, we can **TEST** the integration and **Save** the changes.
102 | 
103 | After the changes are done, we can go to the Kibana search interface for our Elasticsearch instance and visualise the cluster logs:
104 | 
105 | ![](./attachments/rancherlogging4.png)
106 | 
107 | Since logging was enabled at the cluster scope, we can see logs for all projects in our cluster.
108 | 
109 | ## Viewing Log Output
110 | 
111 | Now that we have set up a logging stack and configured Rancher to forward logs, we can see log events from demo-app, which just serves a static hello world page with a randomised delay.
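
As an aside, if you want predictable test log lines rather than relying on demo-app, any container that writes to stdout/stderr will be picked up by the Fluentd daemonset and forwarded to Elasticsearch. A throwaway workload such as the hypothetical one below (the name and image choice are arbitrary) is enough to verify the pipeline end to end; the rest of this section sticks with demo-app.

```yaml
# Hypothetical log generator: writes one line to stdout every five seconds.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: log-generator
  template:
    metadata:
      labels:
        app: log-generator
    spec:
      containers:
        - name: logger
          image: busybox
          command: ["sh", "-c", "while true; do echo \"log-generator: hello $(date)\"; sleep 5; done"]
```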
112 | 113 | The container logs can be viewed in the Rancher UI: 114 | 115 | ![](./attachments/logging12.png) 116 | 117 | The users can now go and search the same logs in the Kibana frontend as well: 118 | 119 | ![](./attachments/logging13.png) 120 | -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging-filebeat.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging-filebeat.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging1.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging10.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging10.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging11.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging11.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging12.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging12.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging13.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging13.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging14.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging14.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging2.png -------------------------------------------------------------------------------- 
/reference-architecture/phase-iii/logging/attachments/logging3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging3.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging4.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging5.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging6.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging7.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging8.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/logging9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/logging9.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/rancherlogging1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/rancherlogging1.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/logging/attachments/rancherlogging2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/rancherlogging2.png 
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/logging/attachments/rancherlogging3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/rancherlogging3.png
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/logging/attachments/rancherlogging4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/logging/attachments/rancherlogging4.png
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/registry/README.md:
--------------------------------------------------------------------------------
1 | For the purpose of this example we will be using the docker registry helm chart available in the catalog.
2 | 
3 | ![](./attachments/registry1.png)
4 | 
5 | A docker registry needs to have persistent storage available to store the container image layers.
6 | 
7 | Unless this registry is only for testing purposes, it needs a persistent store for the container images.
8 | 
9 | The list of supported docker registry storage drivers is available [here](https://docs.docker.com/registry/storage-drivers/).
10 | 
11 | For the purpose of this demo we will use the **FileSystem** driver, which allows the registry to use a local file system.
12 | 
13 | The helm chart needs a Persistent Volume for use with the FileSystem option.
14 | 
15 | The chart requires that the persistent volume is already available before the chart is deployed.
16 | 
17 | Users need to ensure that the appropriate cloud credentials are available for use in their K8S cluster.
18 | 
19 | For the purpose of this example we will use the NFS storage provisioner. Details on how to install it can be found [here](360041579132).
20 | 
21 | Another prerequisite for the docker-registry is an authentication user and password.
22 | 
23 | This can be easily created using the existing **registry** container image.
24 | 
25 | For this example we will create a user named **demo** who can authenticate with the password **demopassword**:
26 | 
27 | ```bash
28 | docker run --entrypoint htpasswd registry:2 -Bbn demo demopassword > htpasswd
29 | ```
30 | 
31 | This password can now be used in the variable updates for the chart.
32 | 
33 | We will use the docker-registry chart from the Rancher charts library:
34 | 
35 | ![](./attachments/dockerregistry1.png)
36 | 
37 | We need to update a few settings:
38 | 
39 | ![](./attachments/dockerregistry2.png)
40 | 
41 | Specify the username:password combination generated by the `htpasswd` command in the **Docker Registry Htpasswd Authentication** field.
42 | 
43 | We will select the `nfs-provisioner` storage class.
44 | 
45 | We will also specify a hostname to use in the L7 load balancer ingress specification, for example: registry.yourdomain.com
46 | 
47 | Now launching the app will deploy the docker-registry to a docker-registry namespace.
48 | 
49 | In case you are using a self-signed certificate, please ensure that the insecure-registries setting on your local docker daemon is set up to include the newly created registry.
In this particular case, **registry.yourdomain.com**.
50 | 
51 | To verify the registry, we will log in to it using the username/password we set up in the htpasswd file:
52 | 
53 | ```shell
54 | $ docker login -u demo registry.yourdomain.com
55 | Password:
56 | Login Succeeded
57 | ```
58 | 
59 | We can now push an image to this registry. We will just use an existing image for this test and re-tag it:
60 | 
61 | ```bash
62 | docker tag alpine:latest registry.yourdomain.com/alpine:latest
63 | ```
64 | 
65 | Now the push should be successful:
66 | 
67 | ```shell
68 | $ docker push registry.yourdomain.com/alpine:latest
69 | The push refers to repository [registry.yourdomain.com/alpine]
70 | 03901b4a2ea8: Pushed
71 | latest: digest: sha256:acd3ca9941a85e8ed16515bfc5328e4e2f8c128caa72959a58a127b7801ee01f size: 528
72 | ```
73 | 
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/registry/attachments/dockerregistry1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/registry/attachments/dockerregistry1.png
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/registry/attachments/dockerregistry2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/registry/attachments/dockerregistry2.png
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/registry/attachments/registry1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/registry/attachments/registry1.png
--------------------------------------------------------------------------------
/reference-architecture/phase-iii/storage/README.md:
--------------------------------------------------------------------------------
1 | Stateful workloads in Kubernetes may need to store state or configuration that persists across pod failures.
2 | 
3 | Kubernetes allows workloads with such requirements to use Persistent Volumes, so that this information is stored on a separate volume provisioned from the cloud provider, which can then be re-attached across pod failures.
4 | 
5 | This tutorial will cover one such scenario and illustrate how persistent volumes can be used to accomplish this.
6 | 
7 | To be able to use persistent volumes, the cluster administrator needs to configure the **cloud provider** when setting up the cluster.
8 | 
9 | This can be done from the Rancher UI at the cluster level, by editing the cluster and updating the YAML to provide the details of the cloud provider:
10 | 
11 | ![](./attachments/storage9.png)
12 | 
13 | For the purpose of this example we will use the vsphere provider.
14 | 
15 | Once the vsphere cloud provider has been set up in the cluster yaml, users can proceed with creating the storage classes.
16 | 
17 | We will define two storage classes for this example:
18 | 
19 | The **vsphere** storage class, which has a reclaim policy that deletes volumes when they are released by the workload.
20 | 
21 | ![](./attachments/storage1.png)
22 | 
23 | The **vsphere-retain** storage class, which has a reclaim policy that retains the volumes after the workload has released them.
24 | 
25 | ![](./attachments/storage2.png)
26 | 
27 | If needed, the cluster administrator can now set either storage class as the default class. This is useful because, when no explicit storage class is specified in the PVC, the cluster can fall back to the default one:
28 | 
29 | ![](./attachments/storage3.png)
30 | 
31 | We will now deploy a sample wordpress workload using these storage classes. We can use the wordpress app available in the catalog:
32 | 
33 | ![](./attachments/storage4.png)
34 | 
35 | We will update the wordpress settings in the app to ensure we turn on Persistent Volumes and specify the storage class to be used by wordpress:
36 | 
37 | ![](./attachments/storage5.png)
38 | 
39 | We proceed with the vsphere storage class.
40 | 
41 | After we launch the app, we should see an event in the workload requesting a volume, and eventually a volume is attached:
42 | 
43 | ![](./attachments/storage7.png)
44 | 
45 | We can verify the volume in the volumes section of the workload:
46 | 
47 | ![](./attachments/storage6.png)
48 | 
49 | It takes a few seconds for the wordpress workload to be deployed.
50 | 
51 | Once deployed, we can log in to the pod and see the extra volume mounted under /bitnami/php/:
52 | 
53 | ![](./attachments/storage8.png)
54 | 
55 | All changes to this mount should persist across pod failures.
56 | 
57 | ## NFS Storage provider
58 | 
59 | We will provide a second storage class, which can be easily set up for demos and, if needed, eventually fine-tuned for long-running persistence.
60 | 
61 | We can deploy the NFS storage provisioner from the Rancher library.
62 | 
63 | Please note that at the time of writing this document, this library helm chart is **Experimental** and should not be used for any production-grade workloads.
64 | 
65 | The chart is available in the catalog:
66 | 
67 | ![](./attachments/nfs1.png)
68 | 
69 | For the purpose of this demo, we will continue with the default values in the chart:
70 | 
71 | ![](./attachments/nfs2.png)
72 | 
73 | **NOTE:** If you are using RancherOS for the worker nodes, please ensure that the NFS Host Path is /mnt.
74 | 
75 | Deploying this chart will create a new storage class at the cluster scope, which can now be used by workloads:
76 | 
77 | ![](./attachments/nfs3.png)
78 | 
79 | **Note**: The default settings of the chart do not enable persistence. If you need persistence, this needs to be set up based on your storage requirements.
80 | 
81 | Now we can redeploy the wordpress helm chart using this NFS provisioner storage class:
82 | 
83 | ![](./attachments/nfs4.png)
84 | 
85 | Once the workload is deployed, we should see two new PVCs using the new nfs provisioner:
86 | 
87 | ![](./attachments/nfs5.png)
88 | 
89 | This example shows a simple persistence use case.
90 | 
91 | Users are advised to look at their workload requirements when deciding on the setup for the storage classes.
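
For reference, the two vSphere storage classes described earlier in this section could also be declared directly as Kubernetes objects instead of through the UI. The sketch below assumes the in-tree vSphere volume plugin and a thin-provisioned disk format; the provisioner and parameters should be adjusted to your environment.

```yaml
# Sketch of the two storage classes from this section, expressed as YAML.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere
provisioner: kubernetes.io/vsphere-volume
reclaimPolicy: Delete        # volumes are deleted when the claim is released
parameters:
  diskformat: thin
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-retain
provisioner: kubernetes.io/vsphere-volume
reclaimPolicy: Retain        # volumes are kept after the claim is released
parameters:
  diskformat: thin
```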
92 | -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/nfs1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/nfs1.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/nfs2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/nfs2.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/nfs3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/nfs3.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/nfs4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/nfs4.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/nfs5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/nfs5.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/storage1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/storage1.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/storage2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/storage2.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/storage3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/storage3.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/storage4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/storage4.png 
-------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/storage5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/storage5.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/storage6.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/storage6.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/storage7.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/storage7.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/storage8.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/storage8.png -------------------------------------------------------------------------------- /reference-architecture/phase-iii/storage/attachments/storage9.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/onboarding-content/87ee58792183ca94e6438eda35b2692ddd111782/reference-architecture/phase-iii/storage/attachments/storage9.png --------------------------------------------------------------------------------