├── Sudoer
│   └── readme.md
├── cleanupHangingObjects
│   └── README.md
├── devconsole
│   ├── README.md
│   ├── console-oauth-config.yaml
│   ├── console-serving-cert.yaml
│   ├── devconsole-template.json
│   └── service-ca-configmap.yaml
├── htpasswdExtras
│   └── readme.md
├── integrateLauncherWithCRWKeycloak
│   ├── GitHubOAuthApp.png
│   ├── GitHubProvider.png
│   ├── KeycloakClientConfig.png
│   └── README.md
├── monitor
│   ├── README.md
│   ├── cluster-monitoring-config.yaml
│   ├── example-alert.yaml
│   ├── monitor1.png
│   ├── monitor2.png
│   ├── monitor3.png
│   ├── monitor4.png
│   ├── prometheus-example-app.yaml
│   └── prometheus-example-monitor.yaml
├── nfsautoprovisioner
│   └── README.md
├── openshiftlogging
│   └── README.md
└── registryconfigurations
    └── README.md
/Sudoer/readme.md:
--------------------------------------------------------------------------------
1 |
2 | # How to make a user sudoer?
3 |
4 | Sometimes we may want to let users run certain commands as administrators. We could grant them cluster-admin privileges by running `oc adm policy add-cluster-role-to-user cluster-admin <username>`. However, this should be avoided as much as possible.
5 |
6 | Instead, you can make the user a `sudoer` so that the user can run specific commands by impersonating `system:admin` when needed.
7 |
8 | Adding the `sudoer` role is easy.
9 |
10 | Steps
11 |
12 | 1. Log in as an administrator
13 |
14 | 2. Grant `sudoer` access
15 |
16 | ```
17 | oc adm policy add-cluster-role-to-user sudoer <username> --as system:admin
18 | ```
19 | Now the user can run any command as an administrator by adding `--as system:admin` at the end of the command.
20 |
21 | **Example:**
22 |
23 | ```
24 | % oc get nodes
25 | Error from server (Forbidden): nodes is forbidden: User "username" cannot list resource "nodes" in API group "" at the cluster scope
26 |
27 | % oc get nodes --as system:admin
28 | NAME STATUS ROLES AGE VERSION
29 | master0 Ready master 191d v1.14.6+8e46c0036
30 | master1 Ready master 191d v1.14.6+8e46c0036
31 | master2 Ready master 191d v1.14.6+8e46c0036
32 | worker0 Ready worker 191d v1.14.6+7e13ab9a7
33 | worker1 Ready worker 191d v1.14.6+7e13ab9a7
34 | ```
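To double-check that the grant worked, you can ask the API server whether the impersonation now succeeds. A minimal sketch (not part of the original steps) using `oc auth can-i`; without the `sudoer` role the impersonation itself is rejected:

```
% oc auth can-i list nodes --as system:admin
yes
```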
35 |
36 |
37 |
38 |
39 |
--------------------------------------------------------------------------------
/cleanupHangingObjects/README.md:
--------------------------------------------------------------------------------
1 | # Cleanup Hanging Objects
2 |
3 | ## Cleaning up projects hanging in `Terminating` State
4 |
5 | **Use Case**: You have a bunch of projects hanging around in spite of having been deleted.
6 |
7 | * `jq` is used in the command below. I am running it from a Mac.
8 | * Log in as the `admin` user
9 |
10 | Run the following script:
11 | ```
12 | for i in $( kubectl get ns | grep Terminating | awk '{print $1}'); do echo $i; kubectl get ns $i -o json| jq "del(.spec.finalizers[0])"> "$i.json"; curl -k -H "Authorization: Bearer $(oc whoami -t)" -H "Content-Type: application/json" -X PUT --data-binary @"$i.json" "$(oc config view --minify -o jsonpath='{.clusters[0].cluster.server}')/api/v1/namespaces/$i/finalize"; done
13 | ```
14 | > Tip: Find the API server URL with `APISERVERURL=$(oc config view --minify -o jsonpath='{.clusters[0].cluster.server}')`
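For a single stuck namespace, the same call can be broken out into separate steps. A sketch, assuming the namespace is named `myproject`:

```
APISERVERURL=$(oc config view --minify -o jsonpath='{.clusters[0].cluster.server}')
# dump the namespace, drop the finalizer, and PUT it back through the finalize endpoint
oc get ns myproject -o json | jq 'del(.spec.finalizers[0])' > myproject.json
curl -k -H "Authorization: Bearer $(oc whoami -t)" -H "Content-Type: application/json" \
     -X PUT --data-binary @myproject.json "$APISERVERURL/api/v1/namespaces/myproject/finalize"
```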
15 |
16 |
17 | ## Cleaning up Persistent Volumes hanging in `Terminating` State
18 |
19 | **Use Case**: You have a bunch of PVCs and PVs hanging around in spite of having been released.
20 |
21 | This script identifies all PVCs in `Terminating` state and removes the finalizers from their metadata:
22 |
23 | ```
24 | for i in $(oc get pvc | grep Terminating| awk '{print $1}'); do oc patch pvc $i --type='json' -p='[{"op": "replace", "path": "/metadata/finalizers", "value":[]}]'; done
25 | ```
26 |
27 | After running the above, all hanging PVCs will be gone.
28 |
29 | If you want to get rid of all hanging PVs, run the following command afterwards.
30 |
31 | ```
32 | for i in $(oc get pv | grep Released| awk '{print $1}'); do oc patch pv $i --type='json' -p='[{"op": "replace", "path": "/metadata/finalizers", "value":[]}]'; done
33 |
34 | ```
35 |
36 | ## Deleting all `Evicted` pods
37 |
38 | Log in as admin and run this:
39 |
40 | ```
41 | for project in $(oc get pods --all-namespaces| grep Evicted| awk '{print $1}' | uniq); \
42 | do echo $project; \
43 | oc delete po $(oc get pods -n $project | grep Evicted| awk '{print $1}') -n $project; \
44 | done
45 | ```
46 |
47 | ## Cleaning up all pods stuck in `Terminating` status and not going away
48 |
49 | Log in as admin and run this:
50 |
51 | ```
52 | for project in $(oc get pods --all-namespaces| grep Terminating| awk '{print $1}' | uniq); \
53 | do echo $project; \
54 | oc delete po $(oc get pods -n $project | grep Terminating| awk '{print $1}') -n $project --grace-period=0 --force; \
55 | done
56 | ```
57 |
58 | ## Cleaning up all pods stuck in `Error` status
59 |
60 | If your cluster got abruptly restarted, you may have a bunch of pods in `Error` status when the cluster comes back up. Deleting these application pods in `Error` status should bring up new pods in their place. To handle this across the cluster:
61 |
62 | Log in as admin and run this:
63 |
64 | ```
65 | for project in $(oc get pods --all-namespaces| grep Error| awk '{print $1}' | uniq); \
66 | do echo $project; \
67 | oc delete po $(oc get pods -n $project | grep Error | awk '{print $1}') -n $project --grace-period=0 --force; \
68 | done
69 | ```
70 |
71 | ## Pods in `NotReady` status on Nodes
72 |
73 | Usually you won't SSH to the nodes on an OCP4 cluster. But if you realize that there are pods stuck in `NotReady` status on nodes (let us say after an abrupt restart) and you want to get rid of those:
74 |
75 | Log onto each node and run:
76 |
77 | ```
78 | # crictl rmp $(crictl pods | grep NotReady | awk '{print $1}')
79 | ```
80 |
81 |
82 | ## Node running out of IP Addresses
83 |
84 | **Use case:** Workloads (pods) get assigned to a node but they stay in `ContainerCreating` state. If you check the project events, you will see an error that looks like this:
85 |
86 | ```
87 | Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_mongodb-1-9qnpx_catalyst1_558c29e7-b78d-11e9-99d8-5254006dbdbc_0(0f93c64baabcce3f6a9396f106b3fe1e27611e4f74d74914e99ff7a063a8c6a7): Multus: Err adding pod to network "openshift-sdn": Multus: error in invoke Delegate add - "openshift-sdn": CNI request failed with status 400: 'failed to run IPAM for 0f93c64baabcce3f6a9396f106b3fe1e27611e4f74d74914e99ff7a063a8c6a7: failed to run CNI IPAM ADD: failed to allocate for range 0: no IP addresses available in range set: 10.254.5.1-10.254.5.254
88 | ```
89 |
90 | However, if you check what is running on the node, it shows nothing other than static pods.
91 |
92 | ```
93 | $ oc get po -o wide --all-namespaces | grep Running | grep worker2
94 | openshift-cluster-node-tuning-operator tuned-gcp5m 1/1 Running 2 5h41m 192.168.1.46 worker2.ocp4.home.ocpcloud.com
95 | openshift-machine-config-operator machine-config-daemon-9klh9 1/1 Running 2 5h41m 192.168.1.46 worker2.ocp4.home.ocpcloud.com
96 | openshift-monitoring node-exporter-jfxwq 2/2 Running 4 5h39m 192.168.1.46 worker2.ocp4.home.ocpcloud.com
97 | openshift-multus multus-lj8cq 1/1 Running 2 5h41m 192.168.1.46 worker2.ocp4.home.ocpcloud.com
98 | openshift-sdn ovs-l9qt5 1/1 Running 1 5h11m 192.168.1.46 worker2.ocp4.home.ocpcloud.com
99 | openshift-sdn sdn-lqss7 1/1 Running 2 5h11m 192.168.1.46 worker2.ocp4.home.ocpcloud.com
100 | ```
101 |
102 | On the other hand `oc get nodes` shows all nodes are `Ready`.
103 |
104 | ### What's happening?
105 | Somehow the pod deletion did not complete on the node (maybe when the node got restarted), and the IP addresses allocated to the pods on that node are all still listed as in use.
106 |
107 | You'll need to SSH to the node to get out of this situation.
108 |
109 | **Note:** OpenShift discourages SSHing to the node. But as of now I don't know any other way of dealing with this.
110 |
111 | Log in to the node and list the following location:
112 |
113 | ```
114 | # ls /var/lib/cni/networks/openshift-sdn/ | wc -l
115 |
116 | 254
117 | ```
118 |
119 | You will see a bunch of IP address files here that weren't cleaned up.
120 |
121 | ### Quick fix
122 |
123 | Make sure only static pods are running on that node. If you had restarted the node and no pods are coming up on it, then only static pods will be running anyway.
124 |
125 | Now remove all the files with IP address names in this location:
126 |
127 | ```
128 | # rm /var/lib/cni/networks/openshift-sdn/10*
129 | ```
130 |
131 | As soon as you do that, the node should be running normally.
132 |
133 |
134 | ### Confirm node is normal
135 |
136 | Try deleting a pod running on this node just to test. You should see an entry in the CRI-O logs (`journalctl -u crio -l --no-pager`) to this effect:
137 |
138 | ```
139 | Aug 05 20:00:18 worker2.ocp4.home.ocpcloud.com crio[897]: 2019-08-05T20:00:18Z [verbose] Del: bluegreen:green-2-jbv56:openshift-sdn:eth0 {"cniVersion":"0.3.1","name":"openshift-sdn","type":"openshift-sdn"}
140 | ```
141 |
142 | and the corresponding IP address of the pod (`oc get po -o wide` shows the pod IP) should be removed from `/var/lib/cni/networks/openshift-sdn`. If so, your node is normal again.
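To confirm from the node itself, a quick check like the one below (a sketch; `<pod-ip>` is a placeholder) should return nothing once the address has been released:

```
# ls /var/lib/cni/networks/openshift-sdn/ | grep <pod-ip>
```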
143 |
144 |
145 |
146 |
147 |
148 |
149 |
150 |
151 |
152 |
153 |
--------------------------------------------------------------------------------
/devconsole/README.md:
--------------------------------------------------------------------------------
1 | # Install the latest Dev Console as another application (to test)
2 |
3 | **Acknowledgements:** Based on Steve Speicher's https://github.com/sspeiche/dev-console
4 |
5 | ## Pre-requisites
6 | * OCP4 Cluster
7 | * You will need admin access and must be logged in as an admin user to do this
8 |
9 |
10 | ## Steps
11 |
12 | ### Short Script
13 |
14 | If you don't want to follow the step-by-step instructions in detail, just run the following script with your own value for `SECRET`:
15 |
16 | ```
17 | ###############################
18 | # INSTALL DEVCONSOLE #
19 | ###############################
20 | export SECRET=mysecret
21 | oc create namespace openshift-devconsole
22 | oc project openshift-devconsole
23 | oc get cm service-ca -n openshift-console --export -o yaml | sed -e "s/app: console/app: devconsole/g" | oc apply -n openshift-devconsole -f -
24 | oc get secret console-serving-cert -n openshift-console --export -o yaml | oc apply -n openshift-devconsole -f -
25 | oc get oauthclient console --export -o yaml | sed -e "s/name: console/name: devconsole/g" -e "s/console-openshift-console/devconsole/g" -e "s/^secret: .*/secret: $SECRET/g" | oc apply -f -
26 | curl https://raw.githubusercontent.com/VeerMuchandi/ocp4-extras/master/devconsole/console-oauth-config.yaml | sed -e "s/yourBase64EncodedSecret/$(echo -n $SECRET | base64)/g"| oc apply -f -
27 | oc create clusterrole console --verb=get,list,watch --resource=customresourcedefinitions.apiextensions.k8s.io
28 | oc create sa console -n openshift-devconsole
29 | oc adm policy add-cluster-role-to-user console -z console -n openshift-devconsole
30 | oc process -f https://raw.githubusercontent.com/VeerMuchandi/ocp4-extras/master/devconsole/devconsole-template.json -p OPENSHIFT_API_URL=$(oc cluster-info | awk '/Kubernetes/{printf $NF}'| sed -e "s,$(printf '\033')\\[[0-9;]*[a-zA-Z],,g") -p OPENSHIFT_CONSOLE_URL=$(oc get oauthclient devconsole -o jsonpath='{.redirectURIs[0]}'| sed -e 's~http[s]*://~~g' -e "s/com.*/com/g") | oc create -f -
31 | ```
32 |
33 | ### Step by Step Instructions
34 |
35 | 1. Clone this repo and change to `devconsole`
36 |
37 | 2. Create a new namespace
38 | **Note:** You won't be allowed to create a project that starts with `openshift-`; hence we are creating a namespace. The template we process in the last step sets a `priorityClassName` that requires an `openshift-*` namespace. Switch to the `openshift-devconsole` project after creating the namespace.
39 |
40 | ```
41 | oc create namespace openshift-devconsole
42 | oc project openshift-devconsole
43 | ```
44 |
45 | 3. Update `service-ca-configmap.yaml` and `console-serving-cert.yaml` with the values from the corresponding objects in the `openshift-console` project
46 |
47 | Get the `service-ca` used by your console.
48 | ```
49 | oc get cm service-ca -n openshift-console -o yaml
50 | ```
51 | Copy the `data` section and update `service-ca-configmap.yaml`
52 |
53 | ```
54 | oc create -f service-ca-configmap.yaml
55 | ```
56 |
57 | Get the serving certificate used by your console.
58 | ```
59 | $ oc get secret console-serving-cert -n openshift-console -o yaml
60 | ```
61 |
62 | Note the values in the `data` section and update the `data` section in the `console-serving-cert.yaml` file.
63 |
64 | ```
65 | oc create -f console-serving-cert.yaml
66 | ```
67 |
68 | 4. Create an oauthclient for your new devconsole.
69 |
70 | ```
71 | oc get oauthclient console -o yaml > oauthclient.yaml
72 | ```
73 |
74 | Edit the values to look like the example below
75 |
76 | ```
77 | $ cat oauthclient.yaml
78 | apiVersion: oauth.openshift.io/v1
79 | grantMethod: auto
80 | kind: OAuthClient
81 | metadata:
82 | name: devconsole
83 | redirectURIs:
84 | - https://<>/auth/callback
85 | secret: <>
86 | ```
87 | Note that the client ID (`metadata.name`) is `devconsole` and the secret is a string of your choice. You can keep the value you copied from the other `oauthclient` if you don't want to change it.
88 |
89 | Create oauthclient.
90 | ```
91 | oc create -f oauthclient.yaml
92 | ```
93 |
94 | Update the base64 encoded value of your secret in `console-oauth-config.yaml` (`echo -n <> | base64`).
95 |
96 | **Note:** Make sure you use `echo -n`, otherwise it will add a newline and the secret won't work.
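For example, with an assumed secret value of `mysecret`, the two forms encode differently, and only the `-n` form is usable:

```
$ echo -n mysecret | base64
bXlzZWNyZXQ=
$ echo mysecret | base64
bXlzZWNyZXQK
```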
97 |
98 | Now create console-oauth-config
99 | ```
100 | oc create -f console-oauth-config.yaml
101 | ```
102 |
103 | 5. Deploy the new Developer Console using the template
104 |
105 | ```
106 | oc process -f devconsole-template.json -p OPENSHIFT_API_URL=<> -p OPENSHIFT_CONSOLE_URL=<> | oc create -f -
107 | ```
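Once the template objects are created, you can find the new console's URL from its route (the template names the route `devconsole`); a quick sketch:

```
oc get route devconsole -n openshift-devconsole -o jsonpath='{.spec.host}'
```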
108 |
--------------------------------------------------------------------------------
/devconsole/console-oauth-config.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | data:
3 | clientSecret: yourBase64EncodedSecret
4 | kind: Secret
5 | metadata:
6 | labels:
7 | app: console
8 | name: console-oauth-config
9 | type: Opaque
10 |
--------------------------------------------------------------------------------
/devconsole/console-serving-cert.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | data:
3 | tls.crt: >-
4 | <>
5 | tls.key: >-
6 | <>
7 | kind: Secret
8 | metadata:
9 | name: console-serving-cert
10 | type: kubernetes.io/tls
11 |
--------------------------------------------------------------------------------
/devconsole/devconsole-template.json:
--------------------------------------------------------------------------------
1 | {
2 | "apiVersion": "template.openshift.io/v1",
3 | "kind": "Template",
4 | "metadata": {
5 | "name": "devconsole-latest",
6 | "annotations": {
7 | "description": "Deploys latest Console and stays current for it."
8 | }
9 | },
10 | "parameters": [
11 | {
12 | "name": "OPENSHIFT_API_URL",
13 | "description": "The Openshift API url",
14 | "displayName": "Openshift API URL",
15 | "value": "https://api.openshift.codeready.cloud:6443",
16 | "required": true
17 | },
18 | {
19 | "name": "OPENSHIFT_CONSOLE_URL",
20 | "description": "The OpenShift Console URL",
21 | "displayName": "OpenShift Console URL",
22 | "value": "console-devconsole.apps.openshift.codeready.cloud",
23 | "required": true
24 | }
25 | ],
26 | "objects": [
27 | {
28 | "apiVersion": "v1",
29 | "kind": "ImageStream",
30 | "metadata": {
31 | "name": "origin-console",
32 | "labels": {
33 | "app": "devconsole"
34 | }
35 | },
36 | "spec": {
37 | "lookupPolicy": {
38 | "local": false
39 | },
40 | "tags": [
41 | {
42 | "from": {
43 | "kind": "DockerImage",
44 | "name": "quay.io/openshift/origin-console:latest"
45 | },
46 | "generation": 3,
47 | "importPolicy": {
48 | "scheduled": true
49 | },
50 | "name": "latest",
51 | "referencePolicy": {
52 | "type": "Source"
53 | }
54 | }
55 | ]
56 |
57 | }
58 | },
59 | {
60 | "apiVersion": "v1",
61 | "data": {
62 | "console-config.yaml": "kind: ConsoleConfig\napiVersion: console.openshift.io/v1beta1\nauth:\n clientID: devconsole\n clientSecretFile: /var/oauth-config/clientSecret\n logoutRedirect: \"\"\n oauthEndpointCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\nclusterInfo:\n consoleBaseAddress: https://${OPENSHIFT_CONSOLE_URL}\n consoleBasePath: \"\"\n masterPublicURL: ${OPENSHIFT_API_URL}\ncustomization:\n branding: ocp\n documentationBaseURL: https://docs.openshift.com/container-platform/4.0/\nservingInfo:\n bindAddress: https://0.0.0.0:8443\n certFile: /var/serving-cert/tls.crt\n keyFile: /var/serving-cert/tls.key\n"
63 | },
64 | "kind": "ConfigMap",
65 | "metadata": {
66 | "creationTimestamp": null,
67 | "labels": {
68 | "app": "devconsole"
69 | },
70 | "name": "devconsole"
71 | }
72 | },
73 | {
74 | "apiVersion": "apps.openshift.io/v1",
75 | "kind": "DeploymentConfig",
76 | "metadata": {
77 | "labels": {
78 | "app": "devconsole",
79 | "component": "ui"
80 | },
81 | "name": "devconsole"
82 | },
83 | "spec": {
84 | "replicas": 2,
85 | "triggers": [
86 | {
87 | "type": "ConfigChange"
88 | },
89 | {
90 | "type": "ImageChange",
91 | "imageChangeParams": {
92 | "automatic": true,
93 | "containerNames": [
94 | "console"
95 | ],
96 | "from": {
97 | "kind": "ImageStreamTag",
98 | "name": "origin-console:latest"
99 | }
100 | }
101 | }
102 | ],
103 | "revisionHistoryLimit": 10,
104 | "selector": {
105 | "app": "devconsole",
106 | "component": "ui"
107 | },
108 | "strategy": {
109 | "type": "Rolling"
110 | },
111 | "template": {
112 | "metadata": {
113 | "labels": {
114 | "app": "devconsole",
115 | "component": "ui"
116 | },
117 | "name": "devconsole"
118 | },
119 | "spec": {
120 | "affinity": {
121 | "podAntiAffinity": {
122 | "preferredDuringSchedulingIgnoredDuringExecution": [
123 | {
124 | "podAffinityTerm": {
125 | "labelSelector": {
126 | "matchLabels": {
127 | "app": "devconsole"
128 | }
129 | },
130 | "topologyKey": "kubernetes.io/hostname"
131 | },
132 | "weight": 100
133 | }
134 | ]
135 | }
136 | },
137 | "containers": [
138 | {
139 | "command": [
140 | "/opt/bridge/bin/bridge",
141 | "--public-dir=/opt/bridge/static",
142 | "--config=/var/console-config/console-config.yaml",
143 | "--service-ca-file=/var/service-ca/service-ca.crt"
144 | ],
145 | "image": "origin-console:latest",
146 | "imagePullPolicy": "Always",
147 | "livenessProbe": {
148 | "failureThreshold": 3,
149 | "httpGet": {
150 | "path": "/health",
151 | "port": 8443,
152 | "scheme": "HTTPS"
153 | },
154 | "initialDelaySeconds": 150,
155 | "periodSeconds": 10,
156 | "successThreshold": 1,
157 | "timeoutSeconds": 1
158 | },
159 | "name": "console",
160 | "ports": [
161 | {
162 | "containerPort": 443,
163 | "name": "https",
164 | "protocol": "TCP"
165 | }
166 | ],
167 | "readinessProbe": {
168 | "failureThreshold": 3,
169 | "httpGet": {
170 | "path": "/health",
171 | "port": 8443,
172 | "scheme": "HTTPS"
173 | },
174 | "periodSeconds": 10,
175 | "successThreshold": 1,
176 | "timeoutSeconds": 1
177 | },
178 | "resources": {
179 | "requests": {
180 | "cpu": "10m",
181 | "memory": "100Mi"
182 | }
183 | },
184 | "terminationMessagePath": "/dev/termination-log",
185 | "terminationMessagePolicy": "File",
186 | "volumeMounts": [
187 | {
188 | "mountPath": "/var/serving-cert",
189 | "name": "console-serving-cert",
190 | "readOnly": true
191 | },
192 | {
193 | "mountPath": "/var/oauth-config",
194 | "name": "console-oauth-config",
195 | "readOnly": true
196 | },
197 | {
198 | "mountPath": "/var/console-config",
199 | "name": "console-config",
200 | "readOnly": true
201 | },
202 | {
203 | "mountPath": "/var/service-ca",
204 | "name": "service-ca",
205 | "readOnly": true
206 | }
207 | ]
208 | }
209 | ],
210 | "dnsPolicy": "ClusterFirst",
211 | "nodeSelector": {
212 | "node-role.kubernetes.io/master": ""
213 | },
214 | "priorityClassName": "system-cluster-critical",
215 | "restartPolicy": "Always",
216 | "schedulerName": "default-scheduler",
217 | "securityContext": {},
218 | "serviceAccount": "console",
219 | "serviceAccountName": "console",
220 | "terminationGracePeriodSeconds": 30,
221 | "tolerations": [
222 | {
223 | "effect": "NoSchedule",
224 | "key": "node-role.kubernetes.io/master",
225 | "operator": "Exists"
226 | }
227 | ],
228 | "volumes": [
229 | {
230 | "name": "console-serving-cert",
231 | "secret": {
232 | "defaultMode": 420,
233 | "secretName": "console-serving-cert"
234 | }
235 | },
236 | {
237 | "name": "console-oauth-config",
238 | "secret": {
239 | "defaultMode": 420,
240 | "secretName": "console-oauth-config"
241 | }
242 | },
243 | {
244 | "configMap": {
245 | "defaultMode": 420,
246 | "name": "devconsole"
247 | },
248 | "name": "console-config"
249 | },
250 | {
251 | "configMap": {
252 | "defaultMode": 420,
253 | "name": "service-ca"
254 | },
255 | "name": "service-ca"
256 | }
257 | ]
258 | }
259 | }
260 | }
261 | },
262 | {
263 | "apiVersion": "v1",
264 | "kind": "Service",
265 | "metadata": {
266 | "labels": {
267 | "app": "devconsole"
268 | },
269 | "name": "devconsole"
270 | },
271 | "spec": {
272 | "ports": [
273 | {
274 | "name": "https",
275 | "port": 443,
276 | "protocol": "TCP",
277 | "targetPort": 8443
278 | }
279 | ],
280 | "selector": {
281 | "app": "devconsole",
282 | "component": "ui"
283 | },
284 | "sessionAffinity": "None",
285 | "type": "ClusterIP"
286 | }
287 | },
288 | {
289 | "apiVersion": "route.openshift.io/v1",
290 | "kind": "Route",
291 | "metadata": {
292 | "annotations": {
293 | "openshift.io/host.generated": "true"
294 | },
295 | "labels": {
296 | "app": "devconsole"
297 | },
298 | "name": "devconsole"
299 | },
300 | "spec": {
301 | "host": "${OPENSHIFT_CONSOLE_URL}",
302 | "port": {
303 | "targetPort": "https"
304 | },
305 | "tls": {
306 | "insecureEdgeTerminationPolicy": "None",
307 | "termination": "passthrough"
308 | },
309 | "to": {
310 | "kind": "Service",
311 | "name": "devconsole",
312 | "weight": 100
313 | },
314 | "wildcardPolicy": "None"
315 | },
316 | "status": {
317 | "ingress": null
318 | }
319 | }
320 | ]
321 | }
--------------------------------------------------------------------------------
/devconsole/service-ca-configmap.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ConfigMap
3 | metadata:
4 | annotations:
5 | service.alpha.openshift.io/inject-cabundle: "true"
6 | labels:
7 | app: devconsole
8 | name: service-ca
9 |
--------------------------------------------------------------------------------
/htpasswdExtras/readme.md:
--------------------------------------------------------------------------------
1 | # htpasswd - Changing Passwords or Adding Users
2 |
3 | OpenShift docs in [Configuring an HTPasswd identity provider](https://docs.openshift.com/container-platform/4.2/authentication/identity_providers/configuring-htpasswd-identity-provider.html) explain how to create an htpasswd file and how to add it as an authentication provider, but they don't explain how to add new users when you need to. Here are the details:
4 |
5 | ## Add users or change passwords
6 |
7 | Use the `htpasswd` command against your existing file: `htpasswd <file> <username>`
8 |
9 | example
10 |
11 | ```
12 | % htpasswd myhtpasswd user2
13 | New password:
14 | Re-type new password:
15 | Updating password for user user2
16 | ```
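If you no longer have the original file locally, you can first recover it from the existing secret. A sketch, assuming the secret is named `htpasswd-secret` as below:

```
% oc get secret htpasswd-secret -n openshift-config -o jsonpath='{.data.htpasswd}' | base64 -d > myhtpasswd
```
(On macOS, the decode flag is `-D` instead of `-d`.)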
17 |
18 | ## Updating OAuth in OpenShift
19 |
20 | Now take the newly generated htpasswd file and create a secret in the `openshift-config` project as shown below
21 |
22 | ```
23 | % oc create secret generic htpasswd-secret --from-file=htpasswd=myhtpasswd -n openshift-config
24 | secret/htpasswd-secret created
25 | ```
26 |
27 | ### Change the OAuth Custom Resource
28 |
29 | Navigate to `Administration`->`Cluster Settings`->`Global Configuration`
30 |
31 | Find the CR `OAuth`. Click on it and navigate to the `YAML` tab.
32 |
33 | Edit the spec to point to your newly created secret:
34 |
35 | ```
36 | ..
37 | ..
38 | ..
39 |
40 | spec:
41 | identityProviders:
42 | - htpasswd:
43 | fileData:
44 | name: htpasswd-secret
45 | mappingMethod: claim
46 | name: htpasswd
47 | type: HTPasswd
48 | ```
49 |
50 | and save the CR.
51 |
52 | This will trigger the operator to pick up the updated htpasswd data.
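If you would rather skip the web console for subsequent password changes, the secret can also be replaced in place from the CLI. A sketch, assuming the secret keeps the name `htpasswd-secret`:

```
oc create secret generic htpasswd-secret --from-file=htpasswd=myhtpasswd \
   --dry-run -o yaml | oc replace -n openshift-config -f -
```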
--------------------------------------------------------------------------------
/integrateLauncherWithCRWKeycloak/GitHubOAuthApp.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/VeerMuchandi/ocp4-extras/724da1e2b57bff6462c481b06af483c97e723472/integrateLauncherWithCRWKeycloak/GitHubOAuthApp.png
--------------------------------------------------------------------------------
/integrateLauncherWithCRWKeycloak/GitHubProvider.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/VeerMuchandi/ocp4-extras/724da1e2b57bff6462c481b06af483c97e723472/integrateLauncherWithCRWKeycloak/GitHubProvider.png
--------------------------------------------------------------------------------
/integrateLauncherWithCRWKeycloak/KeycloakClientConfig.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/VeerMuchandi/ocp4-extras/724da1e2b57bff6462c481b06af483c97e723472/integrateLauncherWithCRWKeycloak/KeycloakClientConfig.png
--------------------------------------------------------------------------------
/integrateLauncherWithCRWKeycloak/README.md:
--------------------------------------------------------------------------------
1 | # Integrating Launcher with Keycloak installed by CRW
2 |
3 | ## Install EclipseChe 7.1
4 |
5 | Install Eclipse Che7.1 on your OCP4 cluster following the documentation explained here
6 | [https://www.eclipse.org/che/docs/che-7/installing-che-on-openshift-4-from-operatorhub/](https://www.eclipse.org/che/docs/che-7/installing-che-on-openshift-4-from-operatorhub/)
7 |
8 | >**Note** Once CRW2.x is released we will replace EclipseChe7.x with CRW in the above step.
9 |
10 | Also be sure to enable SSL as explained here [https://www.eclipse.org/che/docs/che-7/installing-che-on-openshift-4-from-operatorhub/#enabling-ssl-on-openshift-4_installing-che-on-openshift-4-from-operatorhub](https://www.eclipse.org/che/docs/che-7/installing-che-on-openshift-4-from-operatorhub/#enabling-ssl-on-openshift-4_installing-che-on-openshift-4-from-operatorhub). **Note** that you will have to set `selfSignedCert: true` in the custom resource if you are using self-signed certs.
11 |
12 | Most likely you are using self-signed certificates. The IDE gets stuck several times due to these self-signed certificates. There are two ways to deal with this issue:
13 |
14 | 1. Look at the developer console in your browser to figure out which URLs are failing, access those URLs, go to advanced, and agree to proceed with that URL. You may have to do this multiple times for different URLs (such as the plugin registry, devfile registry, keycloak, etc.)
15 | 2. The easier way is to add the self-signed certificate as a trusted cert as explained here [https://www.accuweaver.com/2014/09/19/make-chrome-accept-a-self-signed-certificate-on-osx/](https://www.accuweaver.com/2014/09/19/make-chrome-accept-a-self-signed-certificate-on-osx/)
16 |
17 | ### Keycloak Setup
18 |
19 | EclipseChe 7.x installation above also installs Keycloak Server.
20 |
21 | #### Add Github Identity Provider
22 | Log in to the Keycloak admin console.
23 | The Keycloak URL is exposed by the `keycloak` route in the namespace where Che 7.x was installed. You can find it by running `oc get route keycloak -o jsonpath='{.spec.host}'`
24 |
25 | For the admin password, check the environment variables in the `Deployment` for Keycloak. You can find it by running `oc get deployment keycloak -o yaml | grep -A1 KEYCLOAK_PASSWORD`
26 |
27 | Navigate to `Identity Providers` on the left menu. You will see `openshift-v4` identity provider already existing. This means your keycloak is already integrated with the authentication mechanism configured for your openshift cluster.
28 |
29 | Let us now add a new identity provider for `github`.
30 | * Press on `Add Provider` and select `github` provider
31 |
32 | * Configure github provider as shown in the following figure. Note the value of `Redirect URI`
33 |
34 | 
35 | > **Note** the value of the URI should be `https`
36 |
37 | * To fill the values for `Client ID` and `Client Secret` above, login to your Github account, go to `Settings` and `Developer Settings`. Then add an `OAuth App` let us say with the name `launcher` as shown in the screen below. This will generate a `Client Id` and `Client Secret` that you can use to configure Github Identity provider in Keycloak. Make sure you set the `Authorization Callback URL` to the `Redirect URI` noted above.
38 |
39 |
40 | 
41 |
42 | You can set the `HomePage URL` to some temporary URL and later come back and update it to Launcher's URL if you wish.
43 |
44 | >**Note** the Authorization callback URL is `https`
45 |
46 | ## Install Fabric8 Launcher
47 |
48 | ### Deploy Launcher Operator
49 |
50 | > **NOTE:** Launcher Operator is not yet part of OperatorHub. So we have to deploy launcher operator manually
51 |
52 | Clone the launcher code from its git repo
53 | ```
54 | $ git clone https://github.com/fabric8-launcher/launcher-operator
55 | $ cd launcher-operator
56 | ```
57 |
58 | Create a new project on your openshift cluster for launcher
59 |
60 | ```
61 | $ oc new-project launcher-infra
62 | ```
63 |
64 | Deploy Launcher Operator
65 |
66 | ```
67 | $ oc create -R -f ./deploy
68 |
69 | customresourcedefinition.apiextensions.k8s.io/launchers.launcher.fabric8.io created
70 | deployment.apps/launcher-operator created
71 | role.rbac.authorization.k8s.io/launcher-operator created
72 | rolebinding.rbac.authorization.k8s.io/launcher-operator created
73 | serviceaccount/launcher-operator created
74 | ```
75 |
76 | As you can see above, it deploys CRD, role, role binding, service account, and deployment for launcher operator.
77 |
78 | In a few minutes you should see the launcher operator running:
79 |
80 | ```
81 | $ oc get po
82 | NAME READY STATUS RESTARTS AGE
83 | launcher-operator-69ddb677dd-c72jw 1/1 Running 0 3m23s
84 | ```
85 |
86 | ### Install Launcher on OpenShift
87 |
88 | The Launcher operator's template custom resources (CRs) are available in the `example` folder. Edit the CR for your cluster. Mine looks like this:
89 |
90 | The keycloak that I am using here was installed earlier with EclipseChe 7.1. That keycloak instance already integrates with openshift-v4 and will authenticate against your openshift environment. Using that keycloak with launcher will allow your launcher to automatically authenticate against your openshift cluster.
91 |
92 | ```
93 | $ cat example/launcher_cr.yaml
94 | apiVersion: launcher.fabric8.io/v1alpha2
95 | kind: Launcher
96 | metadata:
97 | name: launcher
98 | spec:
99 |
100 | ####### OpenShift Configuration
101 | openshift:
102 | consoleUrl: https://console-openshift-console.apps.ocp4.home.ocpcloud.com
103 | apiUrl: https://openshift.default.svc.cluster.local
104 |
105 | ####### OAuth Configuration
106 | oauth:
107 | enabled: true
108 | url: https://oauth-openshift.apps.ocp4.home.ocpcloud.com
109 | keycloakUrl: https://keycloak-che71.apps.ocp4.home.ocpcloud.com/auth
110 | keycloakRealm: che
111 | keycloakClientId: che-public
112 | ```
113 |
114 | In the above CR (a sketch for gathering these values from the CLI follows this list):
115 | * **consoleURL** is your OCP web console URL
116 | * **apiURL** This is your OCP4 cluster's API URL. You can leave the value as `https://openshift.default.svc.cluster.local` if you are using the same cluster where the launcher is running
117 | * **oauth.url** is the oauth URL for your cluster. You can run `oc get route -n openshift-authentication` to get this value for your cluster.
118 | * **keycloakURL** is the route for your keycloak server. In this case, I am using the keycloak installed by EclipseChe operator.
119 | * **keyCloakRealm** is the realm configured in your Keycloak server. Log in to your Keycloak Administrator Console and look at `Realm Settings` to find `Name` on the `General` tab
120 | * **keycloakClientId** is the client to be used on the Keycloak server, in this case created by the Che installation. You will find it in the `Clients` section when you log in to the Keycloak admin console
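A minimal sketch for looking these values up from the CLI (route and namespace names are assumptions based on this walkthrough; `<che-namespace>` is a placeholder):

```
oc whoami --show-console                                                                      # consoleUrl
oc get route oauth-openshift -n openshift-authentication -o jsonpath='https://{.spec.host}'   # oauth.url
oc get route keycloak -n <che-namespace> -o jsonpath='https://{.spec.host}/auth'              # keycloakUrl
```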
121 |
122 |
123 | Create the custom resource now. The operator will create a launcher once it finds this custom resource
124 |
125 | ```
126 | $ oc create -f example/launcher_cr.yaml
127 | launcher.launcher.fabric8.io/launcher created
128 | ```
129 |
130 | You will see the launcher pod running in a few minutes
131 |
132 | ```
133 | $ oc get po
134 | NAME READY STATUS RESTARTS AGE
135 | launcher-application-1-deploy 0/1 Completed 0 5d2h
136 | launcher-application-1-qn4l2 1/1 Running 0 51m
137 | launcher-operator-69ddb677dd-x69xs 1/1 Running 0 5d2h
138 | ```
139 |
140 | Find the launcher URL by querying the route.
141 | ```
142 | $ oc get route launcher --template={{.spec.host}}
143 | launcher-launcher-infra.apps.ocp4.home.ocpcloud.com
144 | ```
145 | Use this to log on to your launcher.
146 |
147 | Optionally, you can also go back and update the GitHub OAuth App to set the `Homepage URL` to this value.
148 |
149 |
150 | #### Update Launcher in Keycloak
151 |
152 | Navigate back to the Keycloak admin console. Select `Clients` from the left menu and locate `che-public`.
153 |
154 | Scroll down to `Valid Redirect URIs` and add Launcher's route with a `/*` to this list. You may want to add both `http` and `https`.
155 |
156 | Add these routes to `Web Origins` as well. See the screenshot below.
157 |
158 | 
159 |
160 | #### Update GitHub OAuth App Homepage URL
161 |
162 | Go back to the GitHub OAuth App that we configured earlier and update the Homepage URL to the Launcher's URL
163 |
164 | 
--------------------------------------------------------------------------------
/monitor/README.md:
--------------------------------------------------------------------------------
1 | # User Workload Monitoring
2 |
3 | ## Enable User Workload Monitoring
4 |
5 | In OCP 4.5 this is a tech preview feature.
6 |
7 | A cluster administrator should create a ConfigMap to enable it:
8 |
9 | ```
10 | % cat cluster-monitoring-config.yaml
11 |
12 | apiVersion: v1
13 | kind: ConfigMap
14 | metadata:
15 | name: cluster-monitoring-config
16 | namespace: openshift-monitoring
17 | data:
18 | config.yaml: |
19 | techPreviewUserWorkload:
20 | enabled: true
21 | ```
22 |
23 | ```
24 | % oc create -f cluster-monitoring-config.yaml
25 |
26 | configmap/cluster-monitoring-config created
27 |
28 | ```
29 |
30 | ## Create a namespace to deploy an application
31 |
32 | As a regular user
33 |
34 | ` oc new-project ns1`
35 |
36 | ## Provide access to user(s) on a namespace
37 |
38 | The cluster administrator should add the following roles to the user(s). Each of these role bindings can be created for a different user. If you are doing it for one user, `monitoring-edit` access alone is sufficient, as it is a superset of the other two.
39 |
40 | * allows reading PrometheusRule custom resources within the namespace
41 | ```
42 | oc policy add-role-to-user monitoring-rules-view UserName -n ns1
43 | ```
44 |
45 | * allows creating, modifying, and deleting PrometheusRule custom resources matching the permitted namespace
46 | ```
47 | oc policy add-role-to-user monitoring-rules-edit UserName -n ns1
48 | ```
49 | * in addition to `monitoring-rules-edit` permissions it also allows creating new scraping targets for services or Pods. It also allows creating, modifying, and deleting ServiceMonitors and PodMonitors
50 | ```
51 | oc policy add-role-to-user monitoring-edit UserName -n ns1
52 | ```
53 |
54 | ## Deploy an application
55 |
56 | The application should have been instrumented to expose metrics using prometheus standards. The following deployment is for one such sample application.
57 |
58 | ```
59 | % cat prometheus-example-app.yaml
60 | apiVersion: apps/v1
61 | kind: Deployment
62 | metadata:
63 | labels:
64 | app: prometheus-example-app
65 | name: prometheus-example-app
66 | namespace: ns1
67 | spec:
68 | replicas: 1
69 | selector:
70 | matchLabels:
71 | app: prometheus-example-app
72 | template:
73 | metadata:
74 | labels:
75 | app: prometheus-example-app
76 | spec:
77 | containers:
78 | - image: quay.io/brancz/prometheus-example-app:v0.2.0
79 | imagePullPolicy: IfNotPresent
80 | name: prometheus-example-app
81 | ---
82 | apiVersion: v1
83 | kind: Service
84 | metadata:
85 | labels:
86 | app: prometheus-example-app
87 | name: prometheus-example-app
88 | namespace: ns1
89 | spec:
90 | ports:
91 | - port: 8080
92 | protocol: TCP
93 | targetPort: 8080
94 | name: web
95 | selector:
96 | app: prometheus-example-app
97 | type: ClusterIP
98 | ```
99 |
100 | * Create this application deployment and service as a regular user. This will create a deployment for a containerized application, i.e., `quay.io/brancz/prometheus-example-app:v0.2.0`, and add a service for the same.
101 |
102 | ```
103 | % oc create -f prometheus-example-app.yaml
104 |
105 | deployment.apps/prometheus-example-app created
106 | service/prometheus-example-app created
107 | ```
108 |
109 | * Create a route by exposing the service
110 |
111 | ```
112 | % oc expose svc prometheus-example-app
113 | route.route.openshift.io/prometheus-example-app exposed
114 | ```
115 | * Wait until the application pod is running
116 | ```
117 | % oc get po
118 |
119 | NAME READY STATUS RESTARTS AGE
120 | prometheus-example-app-5db4496fd4-5gwrr 1/1 Running 0 2m15s
121 | ```
122 |
123 | Get your application URL and curl the metrics endpoint exposed by the application:
124 |
125 | ```
126 | % export URL=$(oc get route | awk 'NR>1 {print $2}')
127 | % curl $URL/metrics
128 | # HELP http_requests_total Count of all HTTP requests
129 | # TYPE http_requests_total counter
130 | http_requests_total{code="200",method="get"} 3
131 | # HELP version Version information about this binary
132 | # TYPE version gauge
133 | version{version="v0.1.0"} 1
134 | ```
135 |
136 |
137 | ## Configure Prometheus to scrape metrics
138 |
139 | * Create a ServiceMonitor to scrape metrics from the application
140 |
141 | ```
142 | % cat prometheus-example-monitor.yaml
143 | apiVersion: monitoring.coreos.com/v1
144 | kind: ServiceMonitor
145 | metadata:
146 | labels:
147 | k8s-app: prometheus-example-monitor
148 | name: prometheus-example-monitor
149 | namespace: ns1
150 | spec:
151 | endpoints:
152 | - interval: 30s
153 | port: web
154 | scheme: http
155 | selector:
156 | matchLabels:
157 | app: prometheus-example-app
158 | ```
159 |
160 | ```
161 | % oc create -f prometheus-example-monitor.yaml
162 |
163 | servicemonitor.monitoring.coreos.com/prometheus-example-monitor created
164 | ```
165 |
166 | * Verify
167 |
168 | ```
169 | % oc -n ns1 get servicemonitor
170 | NAME AGE
171 | prometheus-example-monitor 4m2s
172 | ```
173 |
174 | ## Add an Alert
175 |
176 |
177 | * Add an alert based on the metric being scraped. The alert below will fire when the number of HTTP requests hitting the service crosses 20.
178 |
179 | ```
180 | % cat example-alert.yaml
181 | apiVersion: monitoring.coreos.com/v1
182 | kind: PrometheusRule
183 | metadata:
184 | name: example-alert
185 | namespace: ns1
186 | spec:
187 | groups:
188 | - name: example
189 | rules:
190 | - alert: VersionAlert
191 | expr: http_requests_total{job="prometheus-example-app"} > 20
192 | ```
193 |
194 | ```
195 | % oc create -f example-alert.yaml
196 | prometheusrule.monitoring.coreos.com/example-alert created
197 | ```
198 |
199 | * Verify
200 |
201 | ```
202 | % oc get PrometheusRule
203 | NAME AGE
204 | example-alert 52s
205 | ```
206 |
207 | ## Test the user workload metrics captured by Prometheus
208 |
209 | Navigate to Developer Console -> Monitoring -> Metrics Tab and search with a `Custom Query` for the two metrics `http_requests_total` and `version`
210 |
211 | The results will be shown as below.
212 |
213 | 
214 | 
215 |
216 | ## Verify the Alert firing
217 |
218 | An administrator can view the alerts. A cluster administrator can capture the Thanos URL by running:
219 |
220 | `oc get route -n openshift-user-workload-monitoring`
221 |
222 | Open this URL in a browser and log in with cluster admin credentials.
223 |
224 | You will notice no alerts are firing yet.
225 | 
226 |
227 | Now curl the application URL a few times until the request count crosses 20 (a loop sketch follows the command below).
228 | `curl $URL/metrics`
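To cross the threshold quickly, you can loop the same call. A sketch:

```
% for i in $(seq 1 25); do curl -s $URL/metrics > /dev/null; done
% curl $URL/metrics | grep http_requests_total
```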
229 |
230 | As soon as the output shows total requests as 21
231 |
232 | ```
233 | # HELP http_requests_total Count of all HTTP requests
234 | # TYPE http_requests_total counter
235 | http_requests_total{code="200",method="get"} 21
236 | # HELP version Version information about this binary
237 | # TYPE version gauge
238 | version{version="v0.1.0"} 1
239 | ```
240 |
241 | you will see the alert firing as below.
242 |
243 | 
244 |
245 |
246 |
247 |
248 |
249 |
250 |
251 |
252 |
253 |
254 |
255 |
256 |
257 |
258 |
259 |
--------------------------------------------------------------------------------
/monitor/cluster-monitoring-config.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ConfigMap
3 | metadata:
4 | name: cluster-monitoring-config
5 | namespace: openshift-monitoring
6 | data:
7 | config.yaml: |
8 | techPreviewUserWorkload:
9 | enabled: true
10 |
--------------------------------------------------------------------------------
/monitor/example-alert.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: monitoring.coreos.com/v1
2 | kind: PrometheusRule
3 | metadata:
4 | name: example-alert
5 | namespace: ns1
6 | spec:
7 | groups:
8 | - name: example
9 | rules:
10 | - alert: VersionAlert
11 | expr: version{job="prometheus-example-app"} > 0
12 |
--------------------------------------------------------------------------------
/monitor/monitor1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/VeerMuchandi/ocp4-extras/724da1e2b57bff6462c481b06af483c97e723472/monitor/monitor1.png
--------------------------------------------------------------------------------
/monitor/monitor2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/VeerMuchandi/ocp4-extras/724da1e2b57bff6462c481b06af483c97e723472/monitor/monitor2.png
--------------------------------------------------------------------------------
/monitor/monitor3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/VeerMuchandi/ocp4-extras/724da1e2b57bff6462c481b06af483c97e723472/monitor/monitor3.png
--------------------------------------------------------------------------------
/monitor/monitor4.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/VeerMuchandi/ocp4-extras/724da1e2b57bff6462c481b06af483c97e723472/monitor/monitor4.png
--------------------------------------------------------------------------------
/monitor/prometheus-example-app.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Namespace
4 | metadata:
5 | name: ns1
6 |
7 | ---
8 | apiVersion: apps/v1
9 | kind: Deployment
10 | metadata:
11 | labels:
12 | app: prometheus-example-app
13 | name: prometheus-example-app
14 | namespace: ns1
15 | spec:
16 | replicas: 1
17 | selector:
18 | matchLabels:
19 | app: prometheus-example-app
20 | template:
21 | metadata:
22 | labels:
23 | app: prometheus-example-app
24 | spec:
25 | containers:
26 | - image: quay.io/brancz/prometheus-example-app:v0.2.0
27 | imagePullPolicy: IfNotPresent
28 | name: prometheus-example-app
29 | ---
30 | apiVersion: v1
31 | kind: Service
32 | metadata:
33 | labels:
34 | app: prometheus-example-app
35 | name: prometheus-example-app
36 | namespace: ns1
37 | spec:
38 | ports:
39 | - port: 8080
40 | protocol: TCP
41 | targetPort: 8080
42 | name: web
43 | selector:
44 | app: prometheus-example-app
45 | type: ClusterIP
46 |
--------------------------------------------------------------------------------
/monitor/prometheus-example-monitor.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: monitoring.coreos.com/v1
2 | kind: ServiceMonitor
3 | metadata:
4 | labels:
5 | k8s-app: prometheus-example-monitor
6 | name: prometheus-example-monitor
7 | namespace: ns1
8 | spec:
9 | endpoints:
10 | - interval: 30s
11 | port: web
12 | scheme: http
13 | selector:
14 | matchLabels:
15 | app: prometheus-example-app
16 |
--------------------------------------------------------------------------------
/nfsautoprovisioner/README.md:
--------------------------------------------------------------------------------
1 | # Adding NFS Auto Provisioner to your cluster
2 |
3 | **Acknowledgements:** Based on
4 | https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
5 |
6 | ## Prerequisites
7 | * [NFS Server is set up and available](#appendix)
8 | * OCP 4 cluster is set up and available, with admin access to this cluster
9 |
10 | ## Steps
11 |
12 | 1. Create a new project
13 |
14 | ```
15 | # oc new-project nfsprovisioner
16 | # NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
17 | # echo $NS
18 | nfsprovisioner
19 | # NAMESPACE=${NS:-default}
20 | ```
21 |
22 | 2. Set up folder structure like below
23 | ```
24 | # mkdir nfsprovisioner
25 | # cd nfsprovisioner/
26 | # mkdir deploy
27 | ```
28 |
29 | 3. Get the needed files
30 |
31 | ```
32 | wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/deployment.yaml -O deploy/deployment.yaml
33 |
34 | wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml -O deploy/rbac.yaml
35 |
36 | wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/class.yaml -O deploy/class.yaml
37 | ```
38 |
39 | Verify
40 |
41 | ```
42 | [root@veerocp4helper nfsprovisioner]# ls -R
43 | .:
44 | deploy
45 |
46 | ./deploy:
47 | class.yaml deployment.yaml rbac.yaml
48 | ```
49 |
50 | Update namespace
51 |
52 | ```
53 | # sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml
54 | ```
55 |
56 | ```
57 | # cat deploy/rbac.yaml
58 | kind: ServiceAccount
59 | apiVersion: v1
60 | metadata:
61 | name: nfs-client-provisioner
62 | ---
63 | kind: ClusterRole
64 | apiVersion: rbac.authorization.k8s.io/v1
65 | metadata:
66 | name: nfs-client-provisioner-runner
67 | rules:
68 | - apiGroups: [""]
69 | resources: ["persistentvolumes"]
70 | verbs: ["get", "list", "watch", "create", "delete"]
71 | - apiGroups: [""]
72 | resources: ["persistentvolumeclaims"]
73 | verbs: ["get", "list", "watch", "update"]
74 | - apiGroups: ["storage.k8s.io"]
75 | resources: ["storageclasses"]
76 | verbs: ["get", "list", "watch"]
77 | - apiGroups: [""]
78 | resources: ["events"]
79 | verbs: ["create", "update", "patch"]
80 | ---
81 | kind: ClusterRoleBinding
82 | apiVersion: rbac.authorization.k8s.io/v1
83 | metadata:
84 | name: run-nfs-client-provisioner
85 | subjects:
86 | - kind: ServiceAccount
87 | name: nfs-client-provisioner
88 | namespace: nfsprovisioner
89 | roleRef:
90 | kind: ClusterRole
91 | name: nfs-client-provisioner-runner
92 | apiGroup: rbac.authorization.k8s.io
93 | ---
94 | kind: Role
95 | apiVersion: rbac.authorization.k8s.io/v1
96 | metadata:
97 | name: leader-locking-nfs-client-provisioner
98 | rules:
99 | - apiGroups: [""]
100 | resources: ["endpoints"]
101 | verbs: ["get", "list", "watch", "create", "update", "patch"]
102 | ---
103 | kind: RoleBinding
104 | apiVersion: rbac.authorization.k8s.io/v1
105 | metadata:
106 | name: leader-locking-nfs-client-provisioner
107 | subjects:
108 | - kind: ServiceAccount
109 | name: nfs-client-provisioner
110 | # replace with namespace where provisioner is deployed
111 | namespace: nfsprovisioner
112 | roleRef:
113 | kind: Role
114 | name: leader-locking-nfs-client-provisioner
115 | apiGroup: rbac.authorization.k8s.io
116 | ```
117 |
118 | Add objects to cluster
119 | ```
120 | # kubectl create -f deploy/rbac.yaml
121 | serviceaccount/nfs-client-provisioner created
122 | clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
123 | clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
124 | role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
125 | rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
126 | ```
127 |
128 | Elevate SCC to `hostmount-anyuid`
129 |
130 | **If you don't have a cluster role** that points to `hostmount-anyuid` create one by running:
131 |
132 | ```
133 | oc create clusterrole hostmount-anyuid-role --verb=use --resource=scc --resource-name=hostmount-anyuid
134 | ```
135 | Now add the service account `nfs-client-provisioner` to that role by running
136 |
137 | ```
138 | oc adm policy add-cluster-role-to-user hostmount-anyuid-role -z nfs-client-provisioner -n $NAMESPACE
139 | ```
140 |
141 | The above method replaces the older way of editing the default SCC, i.e., `oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner`, as that is now discouraged in the OpenShift documentation.
142 |
143 |
144 | Edit `deploy/deployment.yaml` with the values of your NFS server and NFS path, both in `env` and `volumes`:
145 |
146 | ```
147 | # cat deploy/deployment.yaml
148 | kind: Deployment
149 | apiVersion: extensions/v1beta1
151 | metadata:
152 | name: nfs-client-provisioner
153 | spec:
154 | replicas: 1
155 | strategy:
156 | type: Recreate
157 | template:
158 | metadata:
159 | labels:
160 | app: nfs-client-provisioner
161 | spec:
162 | serviceAccountName: nfs-client-provisioner
163 | containers:
164 | - name: nfs-client-provisioner
165 | image: quay.io/external_storage/nfs-client-provisioner:latest
166 | volumeMounts:
167 | - name: nfs-client-root
168 | mountPath: /persistentvolumes
169 | env:
170 | - name: PROVISIONER_NAME
171 | value: nfs-provisioner
172 | - name: NFS_SERVER
173 | value: 199.168.2.150
174 | - name: NFS_PATH
175 | value: /exports
176 | volumes:
177 | - name: nfs-client-root
178 | nfs:
179 | server: 199.168.2.150
180 | path: /exports
181 | ```
182 |
183 |
184 | Edit the `provisioner` name in `deploy/class.yaml`
185 |
186 | ```
187 | # cat deploy/class.yaml
188 | apiVersion: storage.k8s.io/v1
189 | kind: StorageClass
190 | metadata:
191 | name: managed-nfs-storage
192 | provisioner: nfs-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
193 | parameters:
194 | archiveOnDelete: "false" # When set to "false" your PVs will not be archived
195 | # by the provisioner upon deletion of the PVC.
196 | ```
197 |
198 | Deploy nfs-provisioner and storage-class
199 |
200 | ```
201 | # oc create -f deploy/deployment.yaml
202 |
203 | # oc create -f deploy/class.yaml
204 | ```
205 |
206 |
207 | If you want this to be the default storage class, annotate the storage class accordingly:
208 |
209 | ```
210 | # oc annotate storageclass managed-nfs-storage storageclass.kubernetes.io/is-default-class="true"
211 |
212 | storageclass.storage.k8s.io/managed-nfs-storage annotated
213 | ```
214 |
215 | ## Appendix: NFS Setup
216 |
217 | ```
218 | # pvcreate /dev/vdb
219 |
220 | # pvs
221 | PV VG Fmt Attr PSize PFree
222 | /dev/vda2 rhel_veerocp4helper lvm2 a-- <29.00g 0
223 | /dev/vdb lvm2 --- 100.00g 100.00g
224 |
225 | # vgcreate vg-nfs /dev/vdb
226 | Volume group "vg-nfs" successfully created
227 |
228 | # lvcreate -n lv-nfs -l 100%FREE vg-nfs
229 | Logical volume "lv-nfs" created.
230 |
231 | # lvs
232 | LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
233 | root rhel_veerocp4helper -wi-ao---- <26.00g
234 | swap rhel_veerocp4helper -wi-ao---- 3.00g
235 | lv-nfs vg-nfs -wi-a----- <100.00g
236 |
237 | # mkfs.xfs /dev/vg-nfs/lv-nfs
238 | meta-data=/dev/vg-nfs/lv-nfs isize=512 agcount=4, agsize=6553344 blks
239 | = sectsz=512 attr=2, projid32bit=1
240 | = crc=1 finobt=0, sparse=0
241 | data = bsize=4096 blocks=26213376, imaxpct=25
242 | = sunit=0 swidth=0 blks
243 | naming =version 2 bsize=4096 ascii-ci=0 ftype=1
244 | log =internal log bsize=4096 blocks=12799, version=2
245 | = sectsz=512 sunit=0 blks, lazy-count=1
246 | realtime =none extsz=4096 blocks=0, rtextents=0
247 |
248 | # mkdir /exports
249 |
250 | # vi /etc/fstab
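# add an entry like: /dev/vg-nfs/lv-nfs  /exports  xfs  defaults  0 0   (assumed from the LV and mount point used here)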
251 |
252 | # mount -a
253 |
254 | # df -h
255 | Filesystem Size Used Avail Use% Mounted on
256 | /dev/mapper/rhel_veerocp4helper-root 26G 4.3G 22G 17% /
257 | devtmpfs 1.9G 0 1.9G 0% /dev
258 | tmpfs 1.9G 84K 1.9G 1% /dev/shm
259 | tmpfs 1.9G 8.8M 1.9G 1% /run
260 | tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
261 | /dev/vda1 1014M 145M 870M 15% /boot
262 | tmpfs 379M 0 379M 0% /run/user/1000
263 | /dev/mapper/vg--nfs-lv--nfs 100G 33M 100G 1% /exports
264 |
265 | # yum install nfs-utils
266 |
267 | # cat /etc/exports.d/veer.exports
268 | /exports *(rw,root_squash)
269 |
270 | # exportfs
271 | /exports
272 |
273 | # systemctl restart nfs
274 |
275 | # showmount -e 199.168.2.150
276 | Export list for 199.168.2.150:
277 | /exports *
278 |
279 | chown nfsnobody:nfsnobody /exports
280 | chmod 777 /exports
281 |
282 | firewall-cmd --add-service=nfs --permanent
283 | firewall-cmd --add-service=nfs
284 | ```
--------------------------------------------------------------------------------
/openshiftlogging/README.md:
--------------------------------------------------------------------------------
1 | # Installing OpenShift Logging on OCP4
2 |
3 | ## Install Elasticsearch operator
4 |
5 | * From Operator Hub choose `Elasticsearch` operator from `Logging and Tracing`
6 | * Wait until upgrade status is `Up to date` and `1 installed`. You will observe the operator pod running.
7 |
8 | ```
9 | $ oc get po -n openshift-operators
10 | NAME READY STATUS RESTARTS AGE
11 | elasticsearch-operator-7c5cc4bff9-pf8lb 1/1 Running 0 3m7s
12 | ```
13 |
14 | ## Install OpenShift Cluster Logging
15 |
16 | **Note:** Tested with the Cluster Logging Operator version 4.1.4
17 |
18 | * Go to `Administration`->`Namespaces` and `Create` a namespace with the name `openshift-logging` and the following labels
19 |
20 | ```
21 | openshift.io/cluster-logging=true
22 | openshift.io/cluster-monitoring=true
23 | ```
24 |
25 | * Install OpenShift Cluster Logging using the `Cluster Logging` operator from OperatorHub
26 | * Choose the `openshift-logging` namespace that we created earlier as the specific namespace on the cluster
27 | * Approval strategy: `Automatic`
28 |
29 | * Wait until upgrade status is `Up to date` and `1 installed`. You will observe the operator pod running.
30 |
31 | ```
32 | oc get po -n openshift-logging
33 | NAME READY STATUS RESTARTS AGE
34 | cluster-logging-operator-8cb69d977-sqmm2 1/1 Running 0 6m26s
35 | ```
36 |
37 | * Now click on the link `1 installed` and `Create New` for `Cluster Logging`
38 |
39 | * Edit the YAML to include the following resource requirements. The default configures the Elasticsearch pods to require `16Gi` of memory; instead, we configure them to use less memory, as below. Add these resource requirements right before `storage:` for elasticsearch
40 |
41 | ```
42 | resources:
43 | limits:
44 | memory: 8Gi
45 | requests:
46 | cpu: 200m
47 | memory: 4Gi
48 | storage:
49 | ```
50 |
51 | * Click on `Create` on the next screen
52 |
53 | **Note:** In spite of making this change, only one of the elasticsearch pods came up and the other two were pending due to lack of resources. So I had to add additional nodes (on AWS, change the replica count on the MachineSet, as sketched below, and wait a few minutes for the nodes to be added).
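A minimal sketch of that scale-up (the MachineSet name is a placeholder; pick one from the first command's output and choose your own replica count):

```
$ oc get machineset -n openshift-machine-api
$ oc scale machineset <machineset-name> -n openshift-machine-api --replicas=3
```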
54 |
55 |
56 | * Watch the creation of the EFK pods
57 |
58 | ```
59 | $ oc get po -n openshift-logging
60 | NAME READY STATUS RESTARTS AGE
61 | cluster-logging-operator-5fbb755d9c-c5dv4 1/1 Running 0 3h39m
62 | elasticsearch-cdm-w9sf54if-1-bcd8ffc87-5gxz2 2/2 Running 0 9m47s
63 | elasticsearch-cdm-w9sf54if-2-54df86d48d-xkq5z 2/2 Running 0 9m39s
64 | elasticsearch-cdm-w9sf54if-3-6c96bbd554-7kl66 2/2 Running 0 9m32s
65 | fluentd-5stvp 1/1 Running 0 9m42s
66 | fluentd-926tj 1/1 Running 0 9m42s
67 | fluentd-hqgb4 1/1 Running 0 9m42s
68 | fluentd-j8xhg 1/1 Running 0 9m42s
69 | fluentd-l5ssm 1/1 Running 0 9m42s
70 | fluentd-llbwk 1/1 Running 0 9m42s
71 | fluentd-lz4w6 1/1 Running 0 9m42s
72 | fluentd-x89cw 1/1 Running 0 9m42s
73 | fluentd-xs74x 1/1 Running 0 9m42s
74 | kibana-57ff8486f8-p7bk4 2/2 Running 0 9m47s
75 | ```
76 |
77 | Identify the route in the `openshift-logging` project
78 |
79 | ```
80 | $ oc get route -n openshift-logging
81 | NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
82 | kibana kibana-openshift-logging.apps.YOURDOMAIN kibana reencrypt/Redirect None
83 |
84 | ```
85 |
86 | Access this URL and log in.
87 |
88 |
89 | ## Cleanup
90 |
91 | Delete the CR
92 | ```
93 | $ oc get clusterlogging -n openshift-logging
94 | NAME AGE
95 | instance 55m
96 |
97 | $ oc delete clusterlogging instance -n openshift-logging
98 | clusterlogging.logging.openshift.io "instance" deleted
99 | ```
100 |
101 | Remove the PVCs yourself
102 |
103 | ```
104 | $ oc delete pvc -n openshift-logging --all
105 | persistentvolumeclaim "elasticsearch-elasticsearch-cdm-aeslo1r7-1" deleted
106 | persistentvolumeclaim "elasticsearch-elasticsearch-cdm-aeslo1r7-2" deleted
107 | persistentvolumeclaim "elasticsearch-elasticsearch-cdm-aeslo1r7-3" deleted
108 | ```
109 |
110 | Confirm that all the pods except the operator are gone
111 |
112 | ```
113 | $ oc get po
114 | NAME READY STATUS RESTARTS AGE
115 | cluster-logging-operator-5fbb755d9c-c5dv4 1/1 Running 0 4h26m
116 | ```
117 |
118 |
--------------------------------------------------------------------------------
/registryconfigurations/README.md:
--------------------------------------------------------------------------------
1 | # Configuring Insecure Registries, Blocking Registries and so on
2 |
3 | ## Prerequisites
4 | You will need admin access to the cluster.
5 | Log in as the admin user.
6 |
7 |
8 | ## Understanding the images CRD
9 |
10 | Read the following CRD for details of all the features that can be changed
11 |
12 | ```
13 | $ oc get crds images.config.openshift.io -n openshift-config -o yaml
14 | ```
15 |
16 | Also look at the CR created by default for this CRD. It should look like this
17 |
18 | ```
19 | $ oc get images.config.openshift.io cluster -o yaml
20 | apiVersion: config.openshift.io/v1
21 | kind: Image
22 | metadata:
23 | annotations:
24 | release.openshift.io/create-only: "true"
25 | creationTimestamp: "2019-07-09T19:30:12Z"
26 | generation: 1
27 | name: cluster
28 | resourceVersion: "985798"
29 | selfLink: /apis/config.openshift.io/v1/images/cluster
30 | uid: f83bf84a-a27f-11e9-ab36-5254006a091b
31 | spec: {}
32 | status:
33 | externalRegistryHostnames:
34 | - default-route-openshift-image-registry.apps.ocp4.example.mycloud.com
35 | internalRegistryHostname: image-registry.openshift-image-registry.svc:5000
36 | ```
37 |
38 | The `spec:` above is where we can add all customizations.
39 |
40 |
41 | A couple of examples are shown here:
42 |
43 | ## To block a registry
44 |
45 | Let us say we want to block `docker.io`
46 |
47 | Here is what we can do. Edit the CR
48 |
49 | ```
50 | $ oc edit images.config.openshift.io cluster
51 | ```
52 | Edit the spec section as follows
53 |
54 | ```
55 | spec:
56 | registrySources:
57 | blockedRegistries:
58 | - docker.io
59 | ```
60 |
61 | This change updates `/etc/containers/registries.conf` on all the masters and worker nodes, adding `docker.io` under `[registries.block]`:
62 |
63 |
64 |
65 | ```
66 | [core@worker0 ~]$ cat /etc/containers/registries.conf
67 | [registries]
68 | [registries.search]
69 | registries = ["registry.access.redhat.com", "docker.io"]
70 | [registries.insecure]
71 | registries = []
72 | [registries.block]
73 | registries = ["docker.io"]
74 | ```
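The rollout to the nodes is handled by the machine config operator, so it can take a while; a quick way to watch it (a sketch):

```
$ oc get machineconfigpool     # wait until the master and worker pools show UPDATED=True
```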
75 |
76 | Once all the nodes are back up, if you try to create an application using an image from docker.io, it will be blocked. You will see error messages like these in the events:
77 |
78 | ```
79 |
80 | 1s Warning InspectFailed pod/welcome-1-fwcp4 Failed to inspect image "docker.io/veermuchandi/welcome@sha256:db4d49ca6ab825c013cb076a2824947c5378b845206cc29c6fd245744ffd35fb": rpc error: code = Unknown desc = cannot use "docker.io/veermuchandi/welcome@sha256:db4d49ca6ab825c013cb076a2824947c5378b845206cc29c6fd245744ffd35fb" because it's blocked
81 | 1s Warning Failed pod/welcome-1-fwcp4 Error: ImageInspectError
82 |
83 | ```
84 |
85 |
86 | ## Configuring Insecure Registries
87 |
88 | Update the CR `$ oc edit images.config.openshift.io cluster`
89 |
90 | with the spec as follows
91 |
92 | ```
93 | spec:
94 | registrySources:
95 | insecureRegistries:
96 | - bastion.mycloud.com:5000
97 | - 198.18.100.1:5000
98 | ```
99 |
100 | This change should update all the masters and worker nodes and add the following:
101 |
102 | ```
103 | [core@worker-0 ~]$ sudo cat /etc/containers/registries.conf
104 | [registries]
105 | [registries.search]
106 | registries = ["registry.access.redhat.com", "docker.io"]
107 | [registries.insecure]
108 | registries = ["bastion.mycloud.com:5000", "198.18.100.1:5000"]
109 | [registries.block]
110 | registries = []
111 | ```
112 |
113 |
114 |
115 |
116 |
117 |
118 |
--------------------------------------------------------------------------------