├── operators ├── resources │ └── .gitkeep ├── docs │ ├── install.md │ └── clusterd.md └── README.md ├── cns ├── ocs-grafana.png ├── gluster-default-storageclass.yaml └── host-prepare.yaml ├── ocp4_upi ├── docs │ ├── rhcos.png │ ├── proxy_notes.md │ ├── 2.installrhcos.md │ ├── 3.installocp4.md │ └── 1.setup.md └── README.md ├── installation ├── images │ └── osev3.jpg ├── README.md ├── guides │ ├── docker-ocp.md │ ├── okd.md │ ├── okd-inventory.ini │ └── disconnected.md └── 3.md ├── aws_refarch ├── ose-on-aws-architecture.png └── aws_inventory.ini ├── aws_refarch2 ├── ose-on-aws-architecture.jpg └── README.md ├── ipa_on_ocp ├── images │ └── freeipa-parameters.png └── README.md ├── istio ├── sm-resources │ ├── bookinfo │ │ ├── biexports.env │ │ ├── 7.request-timeout-add-timeout.yaml │ │ ├── 1.virtual-service-reviews-to-v2.yaml │ │ ├── 2.virtual-service-reviews-to-v3.yaml │ │ ├── 6.request-timeout-add-delay-to-reviews.yaml │ │ ├── 5.virtual-service-reviews-retry.yaml │ │ ├── 3.virtual-service-reviews-50-v3.yaml │ │ ├── 4.virtual-service-reviews-jason-v2-v3.yaml │ │ ├── 0.virtual-service-all-v1.yaml │ │ └── 00.setup-bookinfo.yaml │ ├── 2.istio-cr.yaml │ ├── 3.test-deploy.yaml │ └── 1.istio_community_operator_template.yaml └── README.md ├── network_policy ├── policies │ ├── default-deny.yaml │ ├── allow-within-same-namespace.yml │ ├── allow-from-namespace.yaml │ ├── allow-from-default-namespace.yml │ ├── allow-frontend-pn.yaml │ ├── allow-frontend-pa.yaml │ ├── allow-pa-database.yaml │ ├── allow-to-database.yaml │ └── allow-domain.json └── readme.md ├── README.md ├── certbot ├── README.md ├── CRON_reset-kibana-cert.sh └── CRON_reset-hawk-cert.sh ├── ansible_hostfiles ├── awsinstall ├── all-in-one ├── all-in-one-3.11 ├── singlemaster-crio ├── singlemaster ├── singlemaster-3.11 ├── singlemaster-crio-3.11 ├── multimaster └── multimaster-3.11 ├── mco ├── examples │ └── mcp-with-mc.yaml └── README.md ├── logging └── README.md ├── miscellaneous └── samples │ └── windows-app.yaml ├── haproxy_config ├── haproxy.cfg └── haproxy-letsencrypt.cfg ├── daemon_sets ├── ds-withstorage.yaml └── README.md ├── metrics └── README.md ├── manage_servers └── README.md ├── oc_cluster_up └── README.md ├── scripts └── update-oc-bin ├── router └── README.md ├── registry └── manifests │ ├── minioinstance.yaml │ └── minio-operator.yaml └── authentication └── README.md /operators/resources/.gitkeep: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /cns/ocs-grafana.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/christianh814/openshift-toolbox/HEAD/cns/ocs-grafana.png -------------------------------------------------------------------------------- /ocp4_upi/docs/rhcos.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/christianh814/openshift-toolbox/HEAD/ocp4_upi/docs/rhcos.png -------------------------------------------------------------------------------- /installation/images/osev3.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/christianh814/openshift-toolbox/HEAD/installation/images/osev3.jpg -------------------------------------------------------------------------------- /aws_refarch/ose-on-aws-architecture.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/christianh814/openshift-toolbox/HEAD/aws_refarch/ose-on-aws-architecture.png -------------------------------------------------------------------------------- /aws_refarch2/ose-on-aws-architecture.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/christianh814/openshift-toolbox/HEAD/aws_refarch2/ose-on-aws-architecture.jpg -------------------------------------------------------------------------------- /ipa_on_ocp/images/freeipa-parameters.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/christianh814/openshift-toolbox/HEAD/ipa_on_ocp/images/freeipa-parameters.png -------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/biexports.env: -------------------------------------------------------------------------------- 1 | export GATEWAY_URL=$(oc get route -n istio-system istio-ingressgateway -o jsonpath='{.spec.host}') 2 | -------------------------------------------------------------------------------- /network_policy/policies/default-deny.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: extensions/v1beta1 3 | metadata: 4 | name: default-deny 5 | spec: 6 | podSelector: 7 | -------------------------------------------------------------------------------- /network_policy/policies/allow-within-same-namespace.yml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: NetworkPolicy 3 | metadata: 4 | name: allow-from-same-namespace 5 | spec: 6 | podSelector: 7 | ingress: 8 | - from: 9 | - podSelector: {} 10 | -------------------------------------------------------------------------------- /network_policy/policies/allow-from-namespace.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: extensions/v1beta1 3 | metadata: 4 | name: allow-from-namespace 5 | spec: 6 | podSelector: 7 | ingress: 8 | - from: 9 | - namespaceSelector: 10 | matchLabels: 11 | project: myproject 12 | -------------------------------------------------------------------------------- /network_policy/policies/allow-from-default-namespace.yml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: extensions/v1beta1 3 | metadata: 4 | name: allow-from-default-namespace 5 | spec: 6 | podSelector: 7 | ingress: 8 | - from: 9 | - namespaceSelector: 10 | matchLabels: 11 | name: default 12 | -------------------------------------------------------------------------------- /network_policy/policies/allow-frontend-pn.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: extensions/v1beta1 3 | metadata: 4 | name: allow-8080-frontend-pn 5 | spec: 6 | podSelector: 7 | matchLabels: 8 | app: pricelist-not 9 | ingress: 10 | - ports: 11 | - protocol: TCP 12 | port: 8080 13 | -------------------------------------------------------------------------------- /network_policy/policies/allow-frontend-pa.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: extensions/v1beta1 3 | metadata: 4 | name: allow-8080-pricelist-allow 5 | spec: 6 | podSelector: 7 | matchLabels: 8 | app: pricelist-allow 9 | ingress: 10 | 
- ports: 11 | - protocol: TCP 12 | port: 8080 13 | -------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/7.request-timeout-add-timeout.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: reviews 5 | spec: 6 | hosts: 7 | - reviews 8 | http: 9 | - route: 10 | - destination: 11 | host: reviews 12 | subset: v2 13 | timeout: 0.5s 14 | -------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/1.virtual-service-reviews-to-v2.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: reviews 5 | spec: 6 | hosts: 7 | - reviews 8 | http: 9 | - route: 10 | - destination: 11 | host: reviews 12 | subset: v2 13 | weight: 100 14 | -------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/2.virtual-service-reviews-to-v3.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: reviews 5 | spec: 6 | hosts: 7 | - reviews 8 | http: 9 | - route: 10 | - destination: 11 | host: reviews 12 | subset: v3 13 | weight: 100 14 | -------------------------------------------------------------------------------- /network_policy/policies/allow-pa-database.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: extensions/v1beta1 3 | metadata: 4 | name: allow-3306 5 | spec: 6 | podSelector: 7 | matchLabels: 8 | tier: database 9 | ingress: 10 | - from: 11 | - podSelector: 12 | matchLabels: 13 | tier: frontend 14 | ports: 15 | - protocol: TCP 16 | port: 3306 17 | -------------------------------------------------------------------------------- /network_policy/policies/allow-to-database.yaml: -------------------------------------------------------------------------------- 1 | kind: NetworkPolicy 2 | apiVersion: extensions/v1beta1 3 | metadata: 4 | name: allow-3306 5 | spec: 6 | podSelector: 7 | matchLabels: 8 | tier: database 9 | ingress: 10 | - from: 11 | - podSelector: 12 | matchLabels: 13 | tier: frontend 14 | ports: 15 | - protocol: TCP 16 | port: 3306 17 | -------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/6.request-timeout-add-delay-to-reviews.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: reviews 5 | spec: 6 | hosts: 7 | - reviews 8 | http: 9 | - fault: 10 | delay: 11 | percent: 100 12 | fixedDelay: 2s 13 | route: 14 | - destination: 15 | host: reviews 16 | subset: v1 17 | -------------------------------------------------------------------------------- /istio/sm-resources/2.istio-cr.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "istio.openshift.com/v1alpha1" 2 | kind: "Installation" 3 | metadata: 4 | name: "istio-installation" 5 | namespace: istio-operator 6 | spec: 7 | istio: 8 | authentication: true 9 | prefix: docker.io/maistra 10 | jaeger: 11 | prefix: docker.io/jaegertracing 12 | kiali: 13 | username: admin 14 | password: admin 15 | prefix: docker.io/kiali 16 | 
-------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/5.virtual-service-reviews-retry.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: networking.istio.io/v1alpha3 3 | kind: VirtualService 4 | metadata: 5 | name: reviews 6 | spec: 7 | hosts: 8 | - reviews 9 | http: 10 | - retries: 11 | attempts: 3 12 | perTryTimeout: 4.000s 13 | route: 14 | - destination: 15 | host: reviews 16 | subset: v2 17 | weight: 100 18 | --- 19 | -------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/3.virtual-service-reviews-50-v3.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: reviews 5 | spec: 6 | hosts: 7 | - reviews 8 | http: 9 | - route: 10 | - destination: 11 | host: reviews 12 | subset: v2 13 | weight: 50 14 | - destination: 15 | host: reviews 16 | subset: v3 17 | weight: 50 18 | -------------------------------------------------------------------------------- /cns/gluster-default-storageclass.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: storage.k8s.io/v1beta1 2 | kind: StorageClass 3 | metadata: 4 | name: gluster-container 5 | annotations: 6 | storageclass.beta.kubernetes.io/is-default-class: "true" 7 | provisioner: kubernetes.io/glusterfs 8 | parameters: 9 | resturl: "http://heketi-glusterfs.apps.example.com" 10 | restuser: "admin" 11 | secretNamespace: "default" 12 | secretName: "heketi-secret" 13 | volumetype: "replicate:3" 14 | -------------------------------------------------------------------------------- /istio/sm-resources/3.test-deploy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: sleep 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | annotations: 10 | sidecar.istio.io/inject: "true" 11 | labels: 12 | app: sleep 13 | spec: 14 | containers: 15 | - name: sleep 16 | image: tutum/curl 17 | command: ["/bin/sleep","infinity"] 18 | imagePullPolicy: IfNotPresent 19 | -------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/4.virtual-service-reviews-jason-v2-v3.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: reviews 5 | spec: 6 | hosts: 7 | - reviews 8 | http: 9 | - match: 10 | - headers: 11 | end-user: 12 | exact: jason 13 | route: 14 | - destination: 15 | host: reviews 16 | subset: v2 17 | - route: 18 | - destination: 19 | host: reviews 20 | subset: v3 21 | -------------------------------------------------------------------------------- /network_policy/policies/allow-domain.json: -------------------------------------------------------------------------------- 1 | { 2 | "kind": "EgressNetworkPolicy", 3 | "apiVersion": "v1", 4 | "metadata": { 5 | "name": "allow-espn" 6 | }, 7 | "spec": { 8 | "egress": [ 9 | { 10 | "type": "Allow", 11 | "to": { 12 | "dnsName": "espn.com" 13 | } 14 | }, 15 | { 16 | "type": "Deny", 17 | "to": { 18 | "cidrSelector": "0.0.0.0/0" 19 | } 20 | } 21 | ] 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /installation/README.md: 
-------------------------------------------------------------------------------- 1 | # Installation Guides 2 | 3 | The installation of OpenShift Container Platform (OCP) will be done via ansible for 3.x and via the [OpenShift Installer](https://github.com/openshift/installer#openshift-installer) for 4.x. More information can be found on the OpenShift [documentation site](https://docs.openshift.com). 4 | 5 | Please select the version you wish to install: 6 | 7 | * [OpenShift Enterprise v2](https://github.com/christianh814/notes/blob/master/documents/openshift_v2.md) 8 | * [OpenShift Container Platform v3](3.md) 9 | * [OpenShift Container Platform v4](../ocp4_upi/) 10 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # OpenShift Toolbox 2 | OpenShift Setup Tools And Howto's 3 | 4 | * [OpenShift Installation](installation) 5 | * [Registry Information](registry) 6 | * [Router Information](router) 7 | * [Manage Masters and Nodes](manage_servers) 8 | * [Storage Notes](storage) 9 | * [Run FreeIPA on OpenShift](ipa_on_ocp) 10 | * [Container Native Storage](cns) 11 | * [Network Policy Example](network_policy) 12 | * [DaemonSets](daemon_sets) 13 | * [Authentication](authentication) 14 | * [Metrics](metrics) 15 | * [Logging](logging) 16 | * [Service Mesh](istio) 17 | * [Operators](operators) 18 | * [MCO](mco) 19 | * [And The Rest](miscellaneous) 20 | -------------------------------------------------------------------------------- /operators/docs/install.md: -------------------------------------------------------------------------------- 1 | # Installing The SDK 2 | 3 | The first step is to install the SDK. The SDK will help you build and deploy your Operator. The official doc can be found on the [github page](https://github.com/operator-framework/operator-sdk#workflow), and I'd check there first for the latest. 4 | 5 | ## Prerequisites 6 | 7 | See the [github page](https://github.com/operator-framework/operator-sdk). You just need to download the `operator-sdk` cli from the [releases page](https://github.com/operator-framework/operator-sdk/releases). 8 | 9 | ## Create an Operator 10 | 11 | Now you can visit the [howto](../README.md) to create/install an Operator!
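Before you head over, make sure the CLI is actually on your PATH. Grabbing it from the releases page usually looks something like the sketch below (the release tag and asset name here are assumptions, so check the releases page for the current ones):

```shell
# Pick a release tag from the operator-sdk releases page (placeholder version shown)
export RELEASE_VERSION=v1.3.0
# Download the Linux x86_64 binary, make it executable, and put it on your PATH
curl -LO https://github.com/operator-framework/operator-sdk/releases/download/${RELEASE_VERSION}/operator-sdk_linux_amd64
chmod +x operator-sdk_linux_amd64
sudo mv operator-sdk_linux_amd64 /usr/local/bin/operator-sdk
# Sanity check
operator-sdk version
```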
12 | -------------------------------------------------------------------------------- /certbot/README.md: -------------------------------------------------------------------------------- 1 | # Let's Encrypt 2 | 3 | Various crons I've used to set up my SSL certs 4 | 5 | ## Hawkular 6 | 7 | Initial Hawkular setup (done on the server that is running the router) 8 | 9 | ``` 10 | mkdir /root/hawkular-cert-deploy/ 11 | oc scale dc/router -n default --replicas=0 12 | certbot certonly --standalone -d hawkular-metrics.apps.chx.cloud --agree-tos -m example@example.com 13 | ``` 14 | 15 | ## Kibana 16 | 17 | Initial Kibana setup (done on the server that is running the router) 18 | 19 | ``` 20 | mkdir /root/kibana-cert-deploy/ 21 | oc scale dc/router -n default --replicas=0 22 | certbot certonly --standalone -d kibana.apps.chx.cloud --agree-tos -m example@example.com 23 | ``` 24 | 25 | -------------------------------------------------------------------------------- /cns/host-prepare.yaml: -------------------------------------------------------------------------------- 1 | - hosts: nodes 2 | 3 | tasks: 4 | 5 | - name: insert iptables rules required for GlusterFS 6 | blockinfile: 7 | dest: /etc/sysconfig/iptables 8 | block: | 9 | -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT 10 | -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT 11 | -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT 12 | -A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:49664 -j ACCEPT 13 | insertbefore: "^COMMIT" 14 | 15 | - name: reload iptables 16 | systemd: 17 | name: iptables 18 | state: reloaded 19 | 20 | - name: Install needed packages 21 | package: name={{ item }} state=present 22 | with_items: 23 | - cns-deploy 24 | - heketi-client 25 | -------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/0.virtual-service-all-v1.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.istio.io/v1alpha3 2 | kind: VirtualService 3 | metadata: 4 | name: productpage 5 | spec: 6 | hosts: 7 | - productpage 8 | http: 9 | - route: 10 | - destination: 11 | host: productpage 12 | subset: v1 13 | --- 14 | apiVersion: networking.istio.io/v1alpha3 15 | kind: VirtualService 16 | metadata: 17 | name: reviews 18 | spec: 19 | hosts: 20 | - reviews 21 | http: 22 | - route: 23 | - destination: 24 | host: reviews 25 | subset: v1 26 | --- 27 | apiVersion: networking.istio.io/v1alpha3 28 | kind: VirtualService 29 | metadata: 30 | name: ratings 31 | spec: 32 | hosts: 33 | - ratings 34 | http: 35 | - route: 36 | - destination: 37 | host: ratings 38 | subset: v1 39 | --- 40 | apiVersion: networking.istio.io/v1alpha3 41 | kind: VirtualService 42 | metadata: 43 | name: details 44 | spec: 45 | hosts: 46 | - details 47 | http: 48 | - route: 49 | - destination: 50 | host: details 51 | subset: v1 52 | --- 53 | -------------------------------------------------------------------------------- /ansible_hostfiles/awsinstall: -------------------------------------------------------------------------------- 1 | ## You may need to set the ansible "ssh user" and have it use sudo 2 | ansible_ssh_user=ec2-user 3 | ansible_sudo=true 4 | ansible_become=yes 5 | 6 | ## To enable AWS api access 7 | openshift_cloudprovider_kind=aws 8 | openshift_clusterid=myocp 9 | openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}" 10 | 
openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}" 11 | 12 | ## This is to use AWS Object Storage with the Registry 13 | openshift_hosted_registry_storage_kind=object 14 | openshift_hosted_registry_storage_provider=s3 15 | openshift_hosted_registry_storage_s3_accesskey="{{ lookup('env','AWS_ACCESS_KEY_ID') }}" 16 | openshift_hosted_registry_storage_s3_secretkey="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}" 17 | openshift_hosted_registry_storage_s3_bucket=bucket_name 18 | openshift_hosted_registry_storage_s3_region=us-west-2 19 | openshift_hosted_registry_storage_s3_chunksize=26214400 20 | openshift_hosted_registry_storage_s3_rootdirectory=/registry 21 | openshift_hosted_registry_pullthrough=true 22 | openshift_hosted_registry_acceptschema2=true 23 | openshift_hosted_registry_enforcequota=true 24 | ## 25 | ## 26 | -------------------------------------------------------------------------------- /operators/README.md: -------------------------------------------------------------------------------- 1 | # Operators 2 | 3 | An [Operator](https://coreos.com/operators/) is a method of packaging, deploying and managing a Kubernetes application. A Kubernetes application is an application that is both deployed on Kubernetes and managed using the Kubernetes APIs and kubectl tooling. 4 | 5 | To be able to make the most of Kubernetes, you need a set of cohesive APIs to extend in order to service and manage your applications that run on Kubernetes. You can think of Operators as the runtime that manages this type of application on Kubernetes. You can find more info [here](https://coreos.com/blog/introducing-operator-framework) 6 | 7 | You can think of an Operator as a Kubernetes application operations manager. 8 | 9 | An [Operator Framework](https://coreos.com/blog/introducing-operator-framework) was developed to make it easier to create and maintain them. Right now helm, go, and ansible are supported. In this doc I'm going to go over how to create an ansible one. 10 | 11 | 12 | ## Ansible Operators 13 | 14 | THIS SECTION IS UNDER CONSTRUCTION! 15 | 16 | Since the release of v1 of the OperatorSDK, I'll need to make updates. For now, look at [the official docs](https://sdk.operatorframework.io/docs/building-operators/ansible/tutorial/)...they're pretty good.
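To give a rough idea of what the v1-style workflow in those docs looks like, here's a sketch (the project name, domain, and image are placeholders):

```shell
# Scaffold a new Ansible-based operator project
mkdir memcached-operator && cd memcached-operator
operator-sdk init --plugins=ansible --domain=example.com
# Add an API; --generate-role stubs out the Ansible role that reconciles the new kind
operator-sdk create api --group=cache --version=v1alpha1 --kind=Memcached --generate-role
# Build/push the operator image and deploy it to the cluster you're logged into
make docker-build docker-push IMG=quay.io/youruser/memcached-operator:v0.0.1
make deploy IMG=quay.io/youruser/memcached-operator:v0.0.1
```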
17 | -------------------------------------------------------------------------------- /mco/examples/mcp-with-mc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: machineconfiguration.openshift.io/v1 2 | kind: MachineConfigPool 3 | metadata: 4 | name: worker-bm 5 | spec: 6 | machineConfigSelector: 7 | matchExpressions: 8 | - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-bm]} 9 | nodeSelector: 10 | matchLabels: 11 | node-role.kubernetes.io/worker-bm: "" 12 | --- 13 | apiVersion: machineconfiguration.openshift.io/v1 14 | kind: MachineConfig 15 | metadata: 16 | labels: 17 | machineconfiguration.openshift.io/role: worker-bm 18 | name: 50-worker-bm 19 | spec: 20 | config: 21 | ignition: 22 | config: {} 23 | security: 24 | tls: {} 25 | timeouts: {} 26 | version: 3.1.0 27 | networkd: {} 28 | passwd: {} 29 | storage: 30 | files: 31 | - path: "/etc/foo/foo.conf" 32 | filesystem: root 33 | mode: 420 34 | overwrite: true 35 | contents: 36 | source: data:;base64,UmVkIEhhdCBpcyBiZXR0ZXIgdGhhbiBWTXdhcmUhCg== 37 | - path: "/etc/foo/foo-other.conf" 38 | filesystem: root 39 | mode: 420 40 | overwrite: true 41 | contents: 42 | source: data:;base64,T3BlblNoaWZ0IGlzIHRoZSBiZXN0Cg== 43 | -------------------------------------------------------------------------------- /logging/README.md: -------------------------------------------------------------------------------- 1 | # Logging 2 | 3 | OpenShift aggr logging is done on the EFK stack. This guide assumes that you're running [CNS](../cns) 4 | 5 | # Installation 6 | 7 | To Install add the following under `[OSEv3:vars]` in `/etc/ansible/hosts` 8 | 9 | ``` 10 | # Logging 11 | openshift_logging_install_logging=true 12 | openshift_logging_es_pvc_dynamic=true 13 | openshift_logging_es_pvc_size=20Gi 14 | openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block 15 | openshift_logging_curator_nodeselector={'node-role.kubernetes.io/infra':'true'} 16 | openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} 17 | openshift_logging_kibana_nodeselector={'node-role.kubernetes.io/infra':'true'} 18 | openshift_logging_es_memory_limit=4G 19 | ``` 20 | 21 | Just like [metrics](../metrics), you may need this for dynamic storage 22 | 23 | ``` 24 | openshift_master_dynamic_provisioning_enabled=true 25 | dynamic_volumes_check=False 26 | ``` 27 | 28 | Next run the installer 29 | 30 | ``` 31 | ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml 32 | ``` 33 | 34 | # Uninstall 35 | 36 | To uninstall, run the same playbook with `-e openshift_logging_install_logging=False` 37 | 38 | ``` 39 | ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml \ 40 | -e openshift_logging_install_logging=False 41 | ``` 42 | 43 | # Misc 44 | 45 | If you messed up and didn't include `*_nodeselector`; then moved them with 46 | 47 | ``` 48 | oc get dc -n logging 49 | oc patch dc/ -n loggin -p '{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":"true"}}}}}' 50 | ``` 51 | -------------------------------------------------------------------------------- /certbot/CRON_reset-kibana-cert.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | kibintcert=/root/kibana-cert-deploy/kibana-internal-ca.crt 3 | kibcert=/etc/letsencrypt/live/kibana.apps.chx.cloud/fullchain.pem 4 | kibkey=/etc/letsencrypt/live/kibana.apps.chx.cloud/privkey.pem 5 | 
kibanaservicename="logging-kibana" 6 | url="kibana.apps.chx.cloud" 7 | routename="logging-kibana" 8 | # 9 | # Must run as root 10 | [[ $(id -u) -ne 0 ]] && echo "Must be root" && exit 254 11 | 12 | # 13 | # Run if the week number (0-52) is divisable by 2 14 | [[ $(( $(date +%U) % 2 )) -eq 0 ]] || exit 15 | 16 | # 17 | # Let's timestap this 18 | date +%F 19 | echo "==========" 20 | 21 | # 22 | # Let's be in the default project 23 | /bin/oc project default 24 | 25 | # 26 | # Make sure you scale the router to 0 27 | /bin/oc scale dc/router --replicas=0 28 | 29 | # 30 | # Sleep so that it gives it some time to scale down 31 | sleep 10 32 | 33 | # 34 | # Renew cert if you can 35 | /bin/certbot renew 36 | 37 | # 38 | # Sleep so that it gives it some time reissue the cert 39 | sleep 20 40 | # Make sure you scale the router back to 1 41 | /bin/oc scale dc/router --replicas=1 42 | 43 | # 44 | # Switch to the logging project 45 | /bin/oc project logging 46 | 47 | # 48 | # Export the destination ca cert 49 | /bin/oc get route -o jsonpath='{.items[*].spec.tls.destinationCACertificate}' > ${kibintcert} 50 | # Delete the expired route 51 | /bin/oc delete route ${routename} 52 | 53 | # 54 | # Create the new route with the updated certs 55 | /bin/oc create route reencrypt ${routename} --hostname ${url} --cert ${kibcert} --key ${kibkey} --service ${kibanaservicename} --dest-ca-cert ${kibintcert} --insecure-policy="Redirect" 56 | 57 | # 58 | # Let's come back tothe default project 59 | /bin/oc project default 60 | ## 61 | ## 62 | -------------------------------------------------------------------------------- /certbot/CRON_reset-hawk-cert.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | hawkintcert=/root/hawkular-cert-deploy/hawkular-internal-ca.crt 3 | hawkcert=/etc/letsencrypt/live/hawkular-metrics.apps.chx.cloud/fullchain.pem 4 | hawkkey=/etc/letsencrypt/live/hawkular-metrics.apps.chx.cloud/privkey.pem 5 | hawkservicename="hawkular-metrics" 6 | url="hawkular-metrics.apps.chx.cloud" 7 | routename="hawkular-metrics-reencrypt" 8 | # 9 | # Must run as root 10 | [[ $(id -u) -ne 0 ]] && echo "Must be root" && exit 254 11 | 12 | # 13 | # Run if the week number (0-52) is divisable by 2 14 | [[ $(( $(date +%U) % 2 )) -eq 0 ]] || exit 15 | 16 | # 17 | # Let's timestap this 18 | date +%F 19 | echo "==========" 20 | 21 | # 22 | # Let's be in the default project 23 | /bin/oc project default 24 | 25 | # 26 | # Make sure you scale the router to 0 27 | /bin/oc scale dc/router --replicas=0 28 | 29 | # 30 | # Sleep so that it gives it some time to scale down 31 | sleep 10 32 | 33 | # 34 | # Renew cert if you can 35 | /bin/certbot renew 36 | 37 | # 38 | # Sleep so that it gives it some time reissue the cert 39 | sleep 20 40 | # Make sure you scale the router back to 1 41 | /bin/oc scale dc/router --replicas=1 42 | 43 | # 44 | # Switch to the openshift-infra project 45 | /bin/oc project openshift-infra 46 | 47 | # 48 | # Export the destination ca cert 49 | /bin/oc get secrets hawkular-metrics-certificate -o jsonpath='{.data.hawkular-metrics-ca\.certificate}' | base64 -d > ${hawkintcert} 50 | # Delete the expired route 51 | /bin/oc delete route ${routename} 52 | 53 | # 54 | # Create the new route with the updated certs 55 | /bin/oc create route reencrypt ${routename} --hostname ${url} --cert ${hawkcert} --key ${hawkkey} --service ${hawkservicename} --dest-ca-cert ${hawkintcert} 56 | 57 | # 58 | # Let's come back tothe default project 59 | /bin/oc project default 60 | ## 
61 | ## 62 | -------------------------------------------------------------------------------- /miscellaneous/samples/windows-app.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: win-webserver 5 | namespace: openshift-windows-machine-config-operator 6 | labels: 7 | app: win-webserver 8 | spec: 9 | ports: 10 | # the port that this service should serve on 11 | - port: 80 12 | targetPort: 80 13 | selector: 14 | app: win-webserver 15 | type: LoadBalancer 16 | --- 17 | apiVersion: apps/v1 18 | kind: Deployment 19 | metadata: 20 | labels: 21 | app: win-webserver 22 | name: win-webserver 23 | namespace: openshift-windows-machine-config-operator 24 | spec: 25 | selector: 26 | matchLabels: 27 | app: win-webserver 28 | replicas: 1 29 | template: 30 | metadata: 31 | labels: 32 | app: win-webserver 33 | name: win-webserver 34 | spec: 35 | tolerations: 36 | - key: "os" 37 | value: "Windows" 38 | Effect: "NoSchedule" 39 | containers: 40 | - name: windowswebserver 41 | image: mcr.microsoft.com/powershell:lts-nanoserver-1809 42 | imagePullPolicy: IfNotPresent 43 | command: 44 | - pwsh.exe 45 | - -command 46 | - $listener = New-Object System.Net.HttpListener; $listener.Prefixes.Add('http://*:80/'); $listener.Start();Write-Host('Listening at http://*:80/'); while ($listener.IsListening) { $context = $listener.GetContext(); $response = $context.Response; $content='
OpenShift for Windows Containers
'; $buffer = [System.Text.Encoding]::UTF8.GetBytes($content); $response.ContentLength64 = $buffer.Length; $response.OutputStream.Write($buffer, 0, $buffer.Length); $response.Close(); }; 47 | securityContext: 48 | windowsOptions: 49 | runAsUserName: "ContainerAdministrator" 50 | nodeSelector: 51 | beta.kubernetes.io/os: windows 52 | -------------------------------------------------------------------------------- /ocp4_upi/docs/proxy_notes.md: -------------------------------------------------------------------------------- 1 | # OpenShift 4 Proxy Install 2 | 3 | > **NOTE** This is for OCP 4.2 and newer 4 | 5 | The only difference is that, the `install-config.yaml` file will have the proxy information. This proxy information is a "global" configuration for the cluster. Here is an example: 6 | 7 | ```yaml 8 | apiVersion: v1 9 | baseDomain: example.com 10 | compute: 11 | - hyperthreading: Enabled 12 | name: worker 13 | replicas: 0 14 | controlPlane: 15 | hyperthreading: Enabled 16 | name: master 17 | replicas: 3 18 | metadata: 19 | name: ocp4 20 | networking: 21 | clusterNetworks: 22 | - cidr: 10.254.0.0/16 23 | hostPrefix: 24 24 | networkType: OpenShiftSDN 25 | serviceNetwork: 26 | - 172.30.0.0/16 27 | platform: 28 | none: {} 29 | proxy: 30 | httpProxy: http://proxyuser:proxypass@myproxy.example.com:3128 31 | httpsProxy: http://proxyuser:proxypass@myproxy.example.com:3128 32 | noProxy: 192.168.7.0/24,10.254.0.0/16,172.30.0.0/16,.example.com 33 | pullSecret: '{"auths": ...}' 34 | sshKey: 'ssh-ed25519 AAAA...' 35 | ``` 36 | 37 | What's important here is the `noProxy` setting. Remember to put in the range of your environemnt's network, the range of the `serviceNetwork`, and the range of the pod `clusterNetworks`. Also, add the domain of the environment you're installing in. 38 | 39 | **NOTE** You want to put these on the "helper node" (or your bastion host) if you're using it (inside of `/etc/environment`) 40 | 41 | ```shell 42 | root@helper# cat /etc/environment 43 | export HTTP_PROXY="http://proxyuser:proxypass@myproxy.example.com:3128" 44 | export HTTPS_PROXY="http://proxyuser:proxypass@myproxy.example.com:3128" 45 | export NO_PROXY="192.168.7.0/24,10.254.0.0/16,172.30.0.0/16,.example.com" 46 | export http_proxy="http://proxyuser:proxypass@myproxy.example.com:3128" 47 | export https_proxy="http://proxyuser:proxypass@myproxy.example.com:3128" 48 | export no_proxy="192.168.7.0/24,10.254.0.0/16,172.30.0.0/16,.example.com" 49 | ``` 50 | 51 | [Back to ToC](../) 52 | -------------------------------------------------------------------------------- /ocp4_upi/README.md: -------------------------------------------------------------------------------- 1 | # OpenShift 4 UPI Install 2 | 3 | This is a high-level guide that will help you install OCP 4.1 UPI on BareMetal (but works on VMs). OCP4.x requires more infra components than 3.x and makes a lot of assumptions. I will go over them here; but remember. These are **__HIGH LEVEL__** notes and assumes you know what you're doing. 4 | 5 | Please consult the [official docs](https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html) to see up to date information. 
6 | 7 | * [Prereqs](docs/0.prereqs.md) 8 | * [Setup Artifacts](docs/1.setup.md) 9 | * [Install RHCOS](docs/2.installrhcos.md) 10 | * [Install OCP4](docs/3.installocp4.md) 11 | 12 | If you are using Libvirt and/or doing a "lab" install (or PoC), I suggest you look at my [helper node](https://github.com/christianh814/ocp4-upi-helpernode#ocp4-upi-helper-node-playbook) repo to expedite things. In that repo there is a [quickstart](https://github.com/christianh814/ocp4-upi-helpernode/blob/master/docs/quickstart.md) guide that makes things extra fast! 13 | 14 | # OpenShift 4 IPI Cloud Installers 15 | 16 | Using the IPI Cloud installers is an easier, more automated, but less flexible way of installing OCP4.x and requires less setup. If installing in the cloud, I recommend one of these. 17 | 18 | * [AWS Installer](https://docs.openshift.com/container-platform/latest/installing/installing_aws/installing-aws-default.html) 19 | * [Helpful AWS Install Video](https://www.youtube.com/watch?v=kQJxGtsqphk) 20 | * [Azure Installer](https://github.com/openshift/installer/tree/master/docs/user/azure) 21 | * [Helpful Azure Install Blog](https://blog.openshift.com/openshift-4-2-on-azure-preview/) 22 | * [GCP Installer](https://github.com/openshift/installer/tree/master/docs/user/gcp) 23 | * [Helpful GCP Install Video](https://www.youtube.com/watch?v=v17Taqza3ZU) 24 | 25 | # OpenShift 4 Restricted Installs 26 | 27 | The following are guides for "restricted" types of installs of OpenShift 4 28 | 29 | * [Disconnected Install](https://github.com/christianh814/blogs/tree/master/docs/openshift-4.2-restricted-network-install) 30 | * [Proxy Install](docs/proxy_notes.md) 31 | -------------------------------------------------------------------------------- /installation/guides/docker-ocp.md: -------------------------------------------------------------------------------- 1 | 2 | # Configure docker storage. 3 | 4 | Docker’s default loopback storage mechanism is not supported for production use and is only appropriate for proof of concept environments. For production environments, you must create a thin-pool logical volume and re-configure docker to use that volume. 5 | 6 | You can use the docker-storage-setup script to create a thin-pool device and configure docker’s storage driver after installing docker but before you start using it. The script reads configuration options from the `/etc/sysconfig/docker-storage-setup` file. 7 | 8 | Configure docker-storage-setup for your environment. There are three options available based on your storage configuration: 9 | 10 | a) Create a thin-pool volume from the remaining free space in the volume group where your root filesystem resides; this requires no configuration: 11 | 12 | `# docker-storage-setup` 13 | 14 | b) Use an existing volume group, in this example docker-vg, to create a thin-pool: 15 | 16 | ``` 17 | # cat <<EOF > /etc/sysconfig/docker-storage-setup 18 | VG=docker-vg 19 | SETUP_LVM_THIN_POOL=yes 20 | DATA_SIZE=90%FREE 21 | WIPE_SIGNATURES=true 22 | EOF 23 | # docker-storage-setup 24 | ``` 25 | 26 | c) Use an unpartitioned block device to create a new volume group and thin pool. In this example, the /dev/vdc device is used to create the docker-vg volume group: 27 | 28 | ``` 29 | # cat <<EOF > /etc/sysconfig/docker-storage-setup 30 | DEVS=/dev/vdc 31 | VG=docker-vg 32 | DATA_SIZE=90%FREE 33 | WIPE_SIGNATURES=true 34 | EOF 35 | # docker-storage-setup 36 | ``` 37 | 38 | Verify your configuration.
You should have dm.thinpooldev value in the /etc/sysconfig/docker-storage file and a docker-pool device: 39 | 40 | ``` 41 | # lvs 42 | LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert 43 | docker-pool docker-vg twi-a-tz-- 48.95g 0.00 0.44 44 | # cat /etc/sysconfig/docker-storage 45 | DOCKER_STORAGE_OPTIONS=--storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/docker--vg-docker--pool 46 | ``` 47 | 48 | Re-initialize docker. 49 | 50 | **Warning** This will destroy any docker containers or images currently on the host. 51 | 52 | ```shell 53 | systemctl stop docker 54 | vgremove -ff docker-vg 55 | rm -rf /var/lib/docker/* 56 | wipefs -a /path/to/dev 57 | cat /dev/null > /etc/sysconfig/docker-storage 58 | docker-storage-setup 59 | systemctl restart docker 60 | ``` 61 | -------------------------------------------------------------------------------- /mco/README.md: -------------------------------------------------------------------------------- 1 | # Machine Config Operator 2 | 3 | I'll write more when I have time. 4 | 5 | ## Example 6 | 7 | The following config can be found [here](examples/mcp-with-mc.yaml) 8 | 9 | ```yaml 10 | 11 | apiVersion: machineconfiguration.openshift.io/v1 12 | kind: MachineConfigPool 13 | metadata: 14 | name: worker-bm 15 | spec: 16 | machineConfigSelector: 17 | matchExpressions: 18 | - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-bm]} 19 | nodeSelector: 20 | matchLabels: 21 | node-role.kubernetes.io/worker-bm: "" 22 | --- 23 | apiVersion: machineconfiguration.openshift.io/v1 24 | kind: MachineConfig 25 | metadata: 26 | labels: 27 | machineconfiguration.openshift.io/role: worker-bm 28 | name: 50-worker-bm 29 | spec: 30 | config: 31 | ignition: 32 | config: {} 33 | security: 34 | tls: {} 35 | timeouts: {} 36 | version: 3.1.0 37 | networkd: {} 38 | passwd: {} 39 | storage: 40 | files: 41 | - path: "/etc/foo/foo.conf" 42 | filesystem: root 43 | mode: 420 44 | overwrite: true 45 | contents: 46 | source: data:;base64,UmVkIEhhdCBpcyBiZXR0ZXIgdGhhbiBWTXdhcmUhCg== 47 | - path: "/etc/foo/foo-other.conf" 48 | filesystem: root 49 | mode: 420 50 | overwrite: true 51 | contents: 52 | source: data:;base64,T3BlblNoaWZ0IGlzIHRoZSBiZXN0Cg== 53 | ``` 54 | 55 | Do an oc create of that file 56 | 57 | ```shell 58 | oc create -f https://raw.githubusercontent.com/christianh814/openshift-toolbox/master/mco/examples/mcp-with-mc.yaml 59 | ``` 60 | 61 | Then label one of your nodes 62 | 63 | ``` 64 | oc label node worker1.ocp4.example.com node-role.kubernetes.io/worker-bm="" 65 | ``` 66 | 67 | You'll see something like this 68 | 69 | ``` 70 | oc get nodes 71 | NAME STATUS ROLES AGE VERSION 72 | master0.ocp4.example.com Ready master 50m v1.16.2 73 | master1.ocp4.example.com Ready master 50m v1.16.2 74 | master2.ocp4.example.com Ready master 50m v1.16.2 75 | worker0.ocp4.example.com Ready worker 50m v1.16.2 76 | worker1.ocp4.example.com Ready worker,worker-bm 50m v1.16.2 77 | ``` 78 | 79 | Since `worker1.ocp4.example.com` matches both `worker` MCP and `worker-bm`, it'll get both MCPs. But since `worker1.ocp4.example.com` is the only one that matches `worker-bm` MCP. It's the only one with the `/etc/foo/` contents. 
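If you want to double-check the rollout, something like this works (the node name matches the example above):

```shell
# The new pool should eventually report UPDATED=True for its one machine
oc get mcp worker-bm
# Spot-check the rendered file on the node itself
oc debug node/worker1.ocp4.example.com -- chroot /host cat /etc/foo/foo.conf
```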
80 | -------------------------------------------------------------------------------- /haproxy_config/haproxy.cfg: -------------------------------------------------------------------------------- 1 | # Global settings 2 | #--------------------------------------------------------------------- 3 | global 4 | maxconn 20000 5 | log /dev/log local0 info 6 | chroot /var/lib/haproxy 7 | pidfile /var/run/haproxy.pid 8 | user haproxy 9 | group haproxy 10 | daemon 11 | 12 | # turn on stats unix socket 13 | stats socket /var/lib/haproxy/stats 14 | 15 | #--------------------------------------------------------------------- 16 | # common defaults that all the 'listen' and 'backend' sections will 17 | # use if not designated in their block 18 | #--------------------------------------------------------------------- 19 | defaults 20 | mode http 21 | log global 22 | option httplog 23 | option dontlognull 24 | # option http-server-close 25 | option forwardfor except 127.0.0.0/8 26 | option redispatch 27 | retries 3 28 | timeout http-request 10s 29 | timeout queue 1m 30 | timeout connect 10s 31 | timeout client 300s 32 | timeout server 300s 33 | timeout http-keep-alive 10s 34 | timeout check 10s 35 | maxconn 20000 36 | 37 | listen stats 38 | bind :9000 39 | mode http 40 | stats enable 41 | stats uri / 42 | 43 | frontend atomic-openshift-api 44 | bind *:8443 45 | default_backend atomic-openshift-api 46 | mode tcp 47 | option tcplog 48 | 49 | backend atomic-openshift-api 50 | balance source 51 | mode tcp 52 | server master0 192.168.1.31:8443 check 53 | server master1 192.168.1.41:8443 check 54 | server master2 192.168.1.51:8443 check 55 | 56 | # Router Config 57 | 58 | frontend router-http 59 | bind *:80 60 | default_backend router-backend-http 61 | 62 | frontend router-https 63 | bind *:443 64 | mode tcp 65 | option tcplog 66 | default_backend router-backend-https 67 | 68 | backend router-backend-http 69 | balance source 70 | mode http 71 | option httpclose 72 | option forwardfor 73 | server router1 192.168.1.61:80 check 74 | server router2 192.168.1.101:80 check 75 | server router3 192.168.1.102:80 check 76 | 77 | backend router-backend-https 78 | mode tcp 79 | balance source 80 | server router1 192.168.1.61:443 check 81 | server router2 192.168.1.101:443 check 82 | server router3 192.168.1.102:443 check 83 | 84 | ## 85 | ## 86 | -------------------------------------------------------------------------------- /daemon_sets/ds-withstorage.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: DaemonSet 3 | metadata: 4 | name: custom-mongo-shard1 5 | spec: 6 | template: 7 | metadata: 8 | labels: 9 | app: custom-mongo-shard1 10 | spec: 11 | nodeSelector: 12 | app: custom-mongo-shard1 13 | containers: 14 | - name: custom-mongo-shard1 15 | image: docker-registry.default.svc:5000/customshardmongo/mongo-shard1:3.6 16 | imagePullPolicy: Always 17 | ports: 18 | - containerPort: 27018 19 | hostPort: 27018 20 | volumeMounts: 21 | - mountPath: /var/lib/mongodb/data 22 | name: shard1-storage 23 | - mountPath: /etc/mongo 24 | name: shard1-config 25 | volumes: 26 | - name: shard1-storage 27 | persistentVolumeClaim: 28 | claimName: ks-shard1-ds 29 | - configMap: 30 | defaultMode: 420 31 | name: custom-mongo-shard1-config 32 | name: shard1-config 33 | --- 34 | apiVersion: v1 35 | kind: PersistentVolumeClaim 36 | metadata: 37 | name: ks-shard1-ds 38 | spec: 39 | accessModes: 40 | - ReadWriteOnce 41 | resources: 42 | requests: 43 | storage: 75Gi 44 | 
--- 45 | apiVersion: v1 46 | data: 47 | mongod-shard1.conf: | 48 | ## 49 | ## For list of options visit: 50 | ## https://docs.mongodb.org/manual/reference/configuration-options/ 51 | ## 52 | ## For Mongo SHARD Configuration Only 53 | ## 54 | # systemLog Options - How to do logging 55 | # where to write logging data. 56 | # quiet: true 57 | systemLog: 58 | destination: file 59 | logAppend: true 60 | path: /var/log/mongodb/mongod.log 61 | # how the process runs 62 | # ProcessManagement: 63 | # fork: true # fork and run in background 64 | # pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile 65 | # timeZoneInfo: /usr/share/zoneinfo 66 | # net Options - Network interfaces settings 67 | net: 68 | # Specify port number (27018 by default for Shards) 69 | port: 27018 70 | bindIp: 0.0.0.0 71 | # storage Options - How and Where to store data 72 | storage: 73 | # Directory for datafiles 74 | dbPath: /var/lib/mongodb/data 75 | # journal: 76 | # enabled: true 77 | #replication: 78 | replication: 79 | replSetName: "ks-shard-1" 80 | #sharding: 81 | sharding: 82 | clusterRole: shardsvr 83 | kind: ConfigMap 84 | metadata: 85 | creationTimestamp: null 86 | name: custom-mongo-shard1-config 87 | -------------------------------------------------------------------------------- /daemon_sets/README.md: -------------------------------------------------------------------------------- 1 | # DaemonSet 2 | 3 | Here is an example on how to create a `daemonset` in OpenShift. More info can be found [here](https://docs.openshift.com/container-platform/latest/dev_guide/daemonsets.html) 4 | 5 | First create your DS yaml 6 | 7 | ``` 8 | [chernand@chernand entrust-examples]$ cat ds-example.yml 9 | apiVersion: extensions/v1beta1 10 | kind: DaemonSet 11 | metadata: 12 | name: welcome-php 13 | spec: 14 | template: 15 | metadata: 16 | labels: 17 | app: welcome-php 18 | spec: 19 | nodeSelector: 20 | app: welcome-php 21 | containers: 22 | - name: welcome-php 23 | image: redhatworkshops/welcome-php:latest 24 | ports: 25 | - containerPort: 8080 26 | hostPort: 9999 27 | ``` 28 | 29 | Set privileged containers for the project (you need it for `hostPort`) 30 | 31 | ``` 32 | oc project demo 33 | oc adm policy add-scc-to-user privileged -z default 34 | ``` 35 | 36 | Now create the ds 37 | 38 | ``` 39 | [chernand@chernand entrust-examples]$ oc create -f ds-example.yml 40 | daemonset "welcome-php" created 41 | 42 | [chernand@chernand entrust-examples]$ oc get ds 43 | NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE 44 | welcome-php 1 1 1 1 1 app=welcome-php 5m 45 | 46 | ``` 47 | 48 | Now label your node as `app=welcome-php` for the pod to deploy 49 | 50 | ``` 51 | [chernand@chernand entrust-examples]$ oc get ds 52 | NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE 53 | welcome-php 0 0 0 0 0 app=welcome-php 6m 54 | 55 | [chernand@chernand entrust-examples]$ oc get pods 56 | No resources found. 57 | 58 | [chernand@chernand entrust-examples]$ oc label node ip-172-31-25-120.us-west-1.compute.internal app=welcome-php 59 | node "ip-172-31-25-120.us-west-1.compute.internal" labeled 60 | 61 | [chernand@chernand entrust-examples]$ oc get pods 62 | NAME READY STATUS RESTARTS AGE 63 | welcome-php-dkb2n 0/1 ContainerCreating 0 3s 64 | ``` 65 | 66 | Now you can hit the node directly on the port you specified. 
67 | 68 | ``` 69 | [chernand@chernand entrust-examples]$ curl -sI http://ec2-13-56-228-64.us-west-1.compute.amazonaws.com:9999/ 70 | HTTP/1.1 200 OK 71 | Date: Wed, 16 May 2018 02:00:40 GMT 72 | Server: Apache/2.4.18 (Red Hat) 73 | Content-Type: text/html; charset=UTF-8 74 | ``` 75 | 76 | Here we went to `ec2-13-56-228-64.us-west-1.compute.amazonaws.com` instead of `ip-172-31-25-120.us-west-1.compute.internal` because I tested this on AWS. On a NONcloud env, you'd just go to the node directly. 77 | -------------------------------------------------------------------------------- /metrics/README.md: -------------------------------------------------------------------------------- 1 | # Metrics 2 | 3 | This shows you how to install Metrics on OpenShift. This assumes that you got [dynamic storage setup](../cns). Also note that I use ansible variables and those may change. I don't keep this doc very up to date so best to look at [my sample ansible hosts files](../ansible_hostfiles) go verify. 4 | 5 | # Installation Hawkular 6 | 7 | First, set up your `/etc/ansible/hosts` file with the following in the `[OSEv3:vars]` section 8 | 9 | ``` 10 | # Metrics 11 | openshift_metrics_install_metrics=true 12 | openshift_metrics_cassandra_pvc_size=20Gi 13 | openshift_metrics_cassandra_storage_type=dynamic 14 | openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block 15 | openshift_metrics_hawkular_nodeselector={'node-role.kubernetes.io/infra':'true'} 16 | openshift_metrics_heapster_nodeselector={'node-role.kubernetes.io/infra':'true'} 17 | openshift_metrics_cassandra_nodeselector={'node-role.kubernetes.io/infra':'true'} 18 | ``` 19 | 20 | Also verify that you have this under `[OSEv3:vars]` as well 21 | 22 | ``` 23 | openshift_master_dynamic_provisioning_enabled=true 24 | dynamic_volumes_check=False 25 | ``` 26 | 27 | Then run the installer. 28 | 29 | ``` 30 | ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml 31 | ``` 32 | 33 | # Uninstall Hawkular 34 | 35 | To uninstall hawkular; run the same playbook but add `-e openshift_metrics_install_metrics=False` 36 | 37 | ``` 38 | ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml \ 39 | -e openshift_metrics_install_metrics=False 40 | ``` 41 | 42 | # Prometheus/Grafana 43 | 44 | Same idea with prometheus; make sure you have something like `[OSEv3:vars]` in your `/etc/ansible/hosts` file 45 | 46 | ``` 47 | # Prometheus Metrics 48 | openshift_cluster_monitoring_operator_install=true 49 | openshift_cluster_monitoring_operator_prometheus_storage_enabled=true 50 | openshift_cluster_monitoring_operator_alertmanager_storage_enabled=true 51 | openshift_cluster_monitoring_operator_prometheus_storage_capacity=15Gi 52 | openshift_cluster_monitoring_operator_alertmanager_storage_capacity=15Gi 53 | openshift_cluster_monitoring_operator_node_selector={'node-role.kubernetes.io/infra':'true'} 54 | ``` 55 | 56 | Then run 57 | 58 | ``` 59 | ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/openshift-prometheus/config.yml 60 | ``` 61 | 62 | (NOTE: Grafana installs automatically) 63 | 64 | # Misc Hawkular 65 | 66 | If you ran the installer without the `*_nodeselector` options; you can do this to "move" it over to your infra nodes. 
67 | 68 | ``` 69 | oc patch ns openshift-infra -p '{"metadata": {"annotations": {"openshift.io/node-selector": "node-role.kubernetes.io/infra=true"}}}' 70 | ``` 71 | -------------------------------------------------------------------------------- /ansible_hostfiles/all-in-one: -------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | 7 | # Set variables common for all OSEv3 hosts 8 | [OSEv3:vars] 9 | 10 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 11 | ansible_ssh_user=root 12 | 13 | # Install Enterprise or Origin; set up ntp 14 | openshift_deployment_type=openshift-enterprise 15 | openshift_clock_enabled=true 16 | oreg_auth_user="{{ lookup('env','OREG_AUTH_USER') }}" 17 | oreg_auth_password="{{ lookup('env','OREG_AUTH_PASSWORD') }}" 18 | 19 | # Network/DNS Related 20 | openshift_master_default_subdomain=apps.172.16.1.47.nip.io 21 | osm_cluster_network_cidr=10.1.0.0/16 22 | osm_host_subnet_length=8 23 | openshift_portal_net=172.30.0.0/16 24 | openshift_docker_insecure_registries=172.30.0.0/16 25 | 26 | # Automatically Deploy the router 27 | openshift_hosted_manage_router=true 28 | 29 | # Automatically deploy the registry 30 | openshift_hosted_manage_registry=true 31 | openshift_enable_unsupported_configurations=true 32 | 33 | # Disable ASB 34 | ansible_service_broker_install=false 35 | 36 | # Disble Checks 37 | openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_availability,package_version 38 | 39 | # 40 | # Network Policies that are available: 41 | # redhat/openshift-ovs-networkpolicy # fine grained control 42 | # redhat/openshift-ovs-multitenant # each project gets it's own "private" network 43 | # redhat/openshift-ovs-subnet # "flat" network 44 | # 45 | # Network OVS Plugin to use 46 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 47 | 48 | # If using Route53 or you're pointed to the master with a "vanity" name 49 | openshift_master_public_api_url=https://master.ocp.172.16.1.47.nip.io:8443 50 | openshift_master_public_console_url=https://master.ocp.172.16.1.47.nip.io:8443/console 51 | openshift_master_cluster_public_hostname=master.ocp.172.16.1.47.nip.io 52 | openshift_master_api_port=8443 53 | openshift_master_console_port=8443 54 | 55 | # The following enabled htpasswd authentication 56 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] 57 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 58 | 59 | # OpenShift host groups 60 | 61 | # host group for etcd 62 | [etcd] 63 | master.ocp.172.16.1.47.nip.io 64 | 65 | # host group for masters - set scedulable to "true" for the web-console pod 66 | [masters] 67 | master.ocp.172.16.1.47.nip.io openshift_schedulable=true 68 | 69 | # host group for nodes, includes region info 70 | [nodes] 71 | master.ocp.172.16.1.47.nip.io openshift_node_group_name=node-config-all-in-one 72 | ### 73 | ### 74 | -------------------------------------------------------------------------------- /ansible_hostfiles/all-in-one-3.11: -------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | 7 | # Set variables common for all OSEv3 hosts 8 | 
[OSEv3:vars] 9 | 10 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 11 | ansible_ssh_user=root 12 | 13 | # Install Enterprise or Origin; set up ntp 14 | openshift_deployment_type=openshift-enterprise 15 | openshift_clock_enabled=true 16 | oreg_auth_user="{{ lookup('env','OREG_AUTH_USER') }}" 17 | oreg_auth_password="{{ lookup('env','OREG_AUTH_PASSWORD') }}" 18 | 19 | # Network/DNS Related 20 | openshift_master_default_subdomain=apps.172.16.1.47.nip.io 21 | osm_cluster_network_cidr=10.1.0.0/16 22 | osm_host_subnet_length=8 23 | openshift_portal_net=172.30.0.0/16 24 | openshift_docker_insecure_registries=172.30.0.0/16 25 | 26 | # Automatically Deploy the router 27 | openshift_hosted_manage_router=true 28 | 29 | # Automatically deploy the registry 30 | openshift_hosted_manage_registry=true 31 | openshift_enable_unsupported_configurations=true 32 | 33 | # Disable ASB 34 | ansible_service_broker_install=false 35 | 36 | # Disable Checks 37 | openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_availability,package_version 38 | 39 | # 40 | # Network Policies that are available: 41 | # redhat/openshift-ovs-networkpolicy # fine grained control 42 | # redhat/openshift-ovs-multitenant # each project gets its own "private" network 43 | # redhat/openshift-ovs-subnet # "flat" network 44 | # 45 | # Network OVS Plugin to use 46 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 47 | 48 | # If using Route53 or you're pointed to the master with a "vanity" name 49 | openshift_master_public_api_url=https://master.ocp.172.16.1.47.nip.io:8443 50 | openshift_master_public_console_url=https://master.ocp.172.16.1.47.nip.io:8443/console 51 | openshift_master_cluster_public_hostname=master.ocp.172.16.1.47.nip.io 52 | openshift_master_api_port=8443 53 | openshift_master_console_port=8443 54 | 55 | # The following enables htpasswd authentication 56 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] 57 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 58 | 59 | # OpenShift host groups 60 | 61 | # host group for etcd 62 | [etcd] 63 | master.ocp.172.16.1.47.nip.io 64 | 65 | # host group for masters - set schedulable to "true" for the web-console pod 66 | [masters] 67 | master.ocp.172.16.1.47.nip.io openshift_schedulable=true 68 | 69 | # host group for nodes, includes region info 70 | [nodes] 71 | master.ocp.172.16.1.47.nip.io openshift_node_group_name=node-config-all-in-one 72 | ### 73 | ### 74 | -------------------------------------------------------------------------------- /manage_servers/README.md: -------------------------------------------------------------------------------- 1 | # Manage Servers 2 | 3 | Here are various notes specific to master/node management, in no particular order 4 | 5 | * [Masters](#masters) 6 | * [Nodes](#nodes) 7 | 8 | ## Masters 9 | 10 | Sometimes, in order to deploy pods on them, you'll need to mark your masters `schedulable` 11 | 12 | ``` 13 | root@master# oc adm manage-node ose3-master.example.com --schedulable=true 14 | ``` 15 | 16 | In 4.2 nightly builds, the masters are schedulable by default; mark them as unschedulable by first running 17 | 18 | ``` 19 | oc edit schedulers cluster 20 | ``` 21 | 22 | Then set `mastersSchedulable` to `false`. Then you can remove the worker label from the master.
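If you'd rather skip the interactive edit, a patch accomplishes the same thing (the master node name below is just an example):

```
# Flip the mastersSchedulable flag on the cluster-wide Scheduler config
oc patch schedulers.config.openshift.io cluster --type=merge -p '{"spec":{"mastersSchedulable":false}}'
# Then drop the worker role from each master
oc label node master0.ocp4.example.com node-role.kubernetes.io/worker-
```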
23 | 24 | 25 | ## Nodes 26 | 27 | * [Assign Node Roles](#roles) 28 | * [Node Selector](#node-selector) 29 | 30 | ## Roles 31 | 32 | Label for node roles 33 | 34 | ``` 35 | oc label node infra1.cloud.chx node-role.kubernetes.io/infra=true 36 | ``` 37 | 38 | Common roles are... 39 | 40 | ``` 41 | node-role.kubernetes.io/compute: "true" 42 | node-role.kubernetes.io/infra: "true" 43 | node-role.kubernetes.io/master: "true" 44 | ``` 45 | 46 | Your nodes will look like this 47 | 48 | ``` 49 | [root@master1 ~]# oc get nodes 50 | NAME STATUS ROLES AGE VERSION 51 | app1.cloud.chx Ready compute 33d v1.9.1+a0ce1bc657 52 | app2.cloud.chx Ready compute 33d v1.9.1+a0ce1bc657 53 | app3.cloud.chx Ready compute 33d v1.9.1+a0ce1bc657 54 | cns1.cloud.chx Ready compute 33d v1.9.1+a0ce1bc657 55 | cns2.cloud.chx Ready compute 33d v1.9.1+a0ce1bc657 56 | cns3.cloud.chx Ready compute 33d v1.9.1+a0ce1bc657 57 | infra1.cloud.chx Ready infra 33d v1.9.1+a0ce1bc657 58 | infra2.cloud.chx Ready infra 33d v1.9.1+a0ce1bc657 59 | master1.cloud.chx Ready master 33d v1.9.1+a0ce1bc657 60 | master2.cloud.chx Ready master 33d v1.9.1+a0ce1bc657 61 | master3.cloud.chx Ready master 33d v1.9.1+a0ce1bc657 62 | ``` 63 | 64 | ## Node Selector 65 | 66 | To deploy an app on a specific node.... 67 | 68 | ``` 69 | oc edit namespace default 70 | ``` 71 | 72 | and add... 73 | 74 | ``` 75 | openshift.io/node-selector: region=infra 76 | ``` 77 | 78 | OR, you can just annotate it. 79 | 80 | ``` 81 | root@master# oc annotate namespace default openshift.io/node-selector=region=infra 82 | ``` 83 | 84 | Then make sure your `nodeSelector` matches the `key=value` paring. 85 | 86 | ## Adding nodes on OpenShift 4 87 | 88 | To get ignition for worker 89 | 90 | ``` 91 | oc extract -n openshift-machine-api secret/worker-user-data --keys=userData --to=- 92 | ``` 93 | 94 | To get ignition for master 95 | 96 | ``` 97 | oc extract -n openshift-machine-api secret/master-user-data --keys=userData --to=- 98 | ``` 99 | -------------------------------------------------------------------------------- /haproxy_config/haproxy-letsencrypt.cfg: -------------------------------------------------------------------------------- 1 | ## 2 | # Setup Instructions: 3 | # # Choose "temp webserver" when you run `certbot` 4 | # # You need to create a new chain because HAProxy expects it in a certian order 5 | # sed -i.bak 's/notify_only=1/notify_only=0/g' /etc/yum/pluginconf.d/search-disabled-repos.conf 6 | # yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm 7 | # yum -y install certbot 8 | # certbot certonly 9 | # mkdir /etc/haproxy/ssl 10 | # export LEPATH=/etc/letsencrypt/live/ 11 | # cat $LEPATH/cert.pem $LEPATH/chain.pem $LEPATH/privkey.pem > /etc/haproxy/ssl/ocp-cain.pem 12 | # 13 | # Sample Crontab renewal 14 | # certbot renew --renew-hook "systemctl restart haproxy" >> /var/log/le-renew.log 15 | ## 16 | global 17 | chroot /var/lib/haproxy 18 | user haproxy 19 | group haproxy 20 | daemon 21 | 22 | maxconn 20000 23 | 24 | log 127.0.0.1 local0 notice 25 | # Uncomment to enable connection level logging and debug messages 26 | #log 127.0.0.1 local1 debug 27 | 28 | stats socket /var/lib/haproxy/stats 29 | 30 | # See https://docs.openshift.com/container-platform/latest/architecture/core_concepts/routes.html#env-variables 31 | defaults 32 | log global 33 | mode http 34 | option httplog 35 | option dontlognull 36 | option log-separate-errors 37 | option http-server-close 38 | timeout http-request 5s 39 | timeout connect 5s 40 | timeout client 30s 
41 | timeout server 30s 42 | timeout tunnel 1h 43 | timeout check 5s 44 | maxconn 20000 45 | 46 | listen stats *:9000 47 | mode http 48 | stats enable 49 | stats uri / 50 | stats auth admin:redhat1 51 | stats refresh 5s 52 | 53 | frontend openshift-master-api-frontend 54 | bind *:8443 ssl crt /etc/haproxy/ssl/ocp-fullchain.pem 55 | option tcplog 56 | default_backend openshift-master-api-backend 57 | 58 | 59 | backend openshift-master-api-backend 60 | balance source 61 | server master3 172.31.15.244:8443 check ssl verify none 62 | 63 | # Router Config 64 | 65 | frontend router-http 66 | bind *:80 67 | default_backend router-backend-http 68 | 69 | frontend router-https 70 | bind *:443 71 | mode tcp 72 | option tcplog 73 | default_backend router-backend-https 74 | 75 | backend router-backend-http 76 | balance source 77 | mode http 78 | option httpclose 79 | option forwardfor 80 | option httpchk get /healthz 81 | http-check expect status 200 82 | server router1 172.31.15.244:80 check port 1936 83 | 84 | backend router-backend-https 85 | mode tcp 86 | balance source 87 | option httpchk get /healthz 88 | http-check expect status 200 89 | server router1 172.31.15.244:443 check port 1936 90 | 91 | #----- 92 | -------------------------------------------------------------------------------- /oc_cluster_up/README.md: -------------------------------------------------------------------------------- 1 | # OC Cluster Up 2 | 3 | This is used to run OpenShift inside a docker container on a Linux host 4 | 5 | 6 | QnD 7 | 8 | 1. Download latest `oc` client [here](https://github.com/openshift/origin/releases) 9 | 10 | 2. Temp setup (option 1) 11 | 12 | ``` 13 | yum -y install docker 14 | sed -i '/OPTIONS=.*/c\OPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16"' /etc/sysconfig/docker 15 | systemctl enable docker 16 | systemctl start docker 17 | NETWORKSPACE=$(docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge) 18 | firewall-cmd --permanent --new-zone dockerc 19 | firewall-cmd --permanent --zone dockerc --add-source ${NETWORKSPACE} 20 | firewall-cmd --permanent --zone dockerc --add-port 8443/tcp 21 | firewall-cmd --permanent --zone dockerc --add-port 53/udp 22 | firewall-cmd --permanent --zone dockerc --add-port 8053/udp 23 | firewall-cmd --permanent --zone public --add-port 8443/tcp 24 | firewall-cmd --permanent --zone public --add-port 443/tcp 25 | firewall-cmd --permanent --zone public --add-port 80/tcp 26 | firewall-cmd --permanent --zone public --add-port 53/udp 27 | firewall-cmd --permanent --zone public --add-port 8053/udp 28 | firewall-cmd --reload 29 | oc cluster up --metrics=true --logging=true --public-hostname console.$DOMAIN --routing-suffix apps.$DOMAIN 30 | ``` 31 | 32 | 2. 
Save config for later (option 2) 33 | 34 | ``` 35 | yum -y install docker 36 | sed -i '/OPTIONS=.*/c\OPTIONS="--selinux-enabled --insecure-registry 172.30.0.0/16"' /etc/sysconfig/docker 37 | systemctl enable docker 38 | systemctl start docker 39 | NETWORKSPACE=$(docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge) 40 | firewall-cmd --permanent --new-zone dockerc 41 | firewall-cmd --permanent --zone dockerc --add-source ${NETWORKSPACE} 42 | firewall-cmd --permanent --zone dockerc --add-port 8443/tcp 43 | firewall-cmd --permanent --zone dockerc --add-port 53/udp 44 | firewall-cmd --permanent --zone dockerc --add-port 8053/udp 45 | firewall-cmd --permanent --zone public --add-port 8443/tcp 46 | firewall-cmd --permanent --zone public --add-port 443/tcp 47 | firewall-cmd --permanent --zone public --add-port 80/tcp 48 | firewall-cmd --permanent --zone public --add-port 53/udp 49 | firewall-cmd --permanent --zone public --add-port 8053/udp 50 | firewall-cmd --reload 51 | mkdir -m 777 -p /ocp-storage/{host-config-dir,host-data-dir,host-volumes-dir} 52 | oc cluster up --metrics=true --logging=true --public-hostname console.$DOMAIN --routing-suffix apps.$DOMAIN \ 53 | --host-config-dir=/ocp-storage/host-config-dir \ 54 | --host-data-dir=/ocp-storage/host-data-dir \ 55 | --host-volumes-dir=/ocp-storage/host-volumes-dir 56 | ``` 57 | 58 | 3. Optional (but helpful!) stuff 59 | 60 | If you want other versions try this 61 | ``` 62 | oc cluster up ... \ 63 | --image=registry.access.redhat.com/openshift3/ose --version=v3.4.1.5 64 | ``` 65 | 66 | If you want your data from above to persist... 67 | 68 | ``` 69 | oc cluster up ... \ 70 | --use-existing-config 71 | ``` 72 | 73 | Newer versions have this handy 74 | ``` 75 | oc cluster up ... \ 76 | --host-pv-dir=/ocp-storage/openshift.local.pv 77 | ``` 78 | -------------------------------------------------------------------------------- /istio/README.md: -------------------------------------------------------------------------------- 1 | # Service Mesh 2 | 3 | These are quick and dirty notes. These notes work as of 3.11.x 4 | 5 | ## Install 6 | 7 | These are highlevel taken from [the official doc](https://docs.openshift.com/container-platform/3.11/servicemesh-install/servicemesh-install.html#removing-bookinfo-application) 8 | 9 | __1. Change the Kernel Params on every server__ 10 | 11 | Create the `/etc/sysctl.d/99-elasticsearch.conf` file with the following 12 | 13 | ``` 14 | vm.max_map_count = 262144 15 | ``` 16 | 17 | Then run 18 | 19 | ``` 20 | sysctl vm.max_map_count=262144 21 | ``` 22 | 23 | __2. Deploy Istio Operator__ 24 | 25 | > Note; for updated info see [this github page](https://github.com/Maistra/istio) 26 | 27 | This repo has the operator yamls you need...load these in 28 | 29 | ``` 30 | oc new-project istio-operator 31 | oc process -f 1.istio_community_operator_template.yaml | oc create -f - 32 | ``` 33 | 34 | Wait until the operator is up and running 35 | 36 | ``` 37 | oc logs -n istio-operator $(oc -n istio-operator get pods -l name=istio-operator --output=jsonpath={.items..metadata.name}) 38 | ``` 39 | 40 | __3. Deploy the Control Plane__ 41 | 42 | Deploy the istio control plane using a customer resource for the istio CRD 43 | 44 | ``` 45 | oc create -f 2.istio-cr.yaml -n istio-operator 46 | ``` 47 | 48 | Wait until your pods come up 49 | 50 | ``` 51 | watch oc get pods -n istio-system 52 | ``` 53 | 54 | __4. 
Update the master config__ 55 | 56 | You need to update the master config in order to "auto inject" the envoy sidecar proxy. 57 | 58 | Create your `/etc/origin/master/master-config.patch` file with the following contents 59 | 60 | ```yaml 61 | admissionConfig: 62 | pluginConfig: 63 | MutatingAdmissionWebhook: 64 | configuration: 65 | apiVersion: apiserver.config.k8s.io/v1alpha1 66 | kubeConfigFile: /dev/null 67 | kind: WebhookAdmission 68 | ValidatingAdmissionWebhook: 69 | configuration: 70 | apiVersion: apiserver.config.k8s.io/v1alpha1 71 | kubeConfigFile: /dev/null 72 | kind: WebhookAdmission 73 | ``` 74 | 75 | Now load this into the master config file 76 | 77 | ``` 78 | cd /etc/origin/master/ 79 | cp -p master-config.yaml master-config.yaml.prepatch 80 | oc ex config patch master-config.yaml.prepatch -p "$(cat master-config.patch)" > master-config.yaml 81 | master-restart api && master-restart controllers 82 | ``` 83 | 84 | __5. Test__ 85 | 86 | Test with the included deployment (note that your project needs special permissions) 87 | 88 | ``` 89 | oc new-project servicemesh-test 90 | oc project servicemesh-test 91 | oc adm policy add-scc-to-user anyuid -z default -n servicemesh-test 92 | oc adm policy add-scc-to-user privileged -z default -n servicemesh-test 93 | oc create -f 3.test-deploy.yaml 94 | ``` 95 | 96 | You should see two continers in the pod. One for the app and one for envoy. Running... 97 | 98 | ``` 99 | oc get pods -l app=sleep 100 | ``` 101 | 102 | Should show the following output 103 | 104 | ``` 105 | NAME READY STATUS RESTARTS AGE 106 | sleep-9b989c67c-xbr6t 2/2 Running 0 14s 107 | ``` 108 | 109 | 110 | Delete it once it's successful 111 | 112 | ``` 113 | oc delete -f 3.test-deploy.yaml 114 | ``` 115 | 116 | ## Working with Istio 117 | 118 | WIP. You can find yamls [here](sm-resources/bookinfo) 119 | -------------------------------------------------------------------------------- /network_policy/readme.md: -------------------------------------------------------------------------------- 1 | # NetworkPolicy QnD 2 | 3 | These are "quick and dirty" notes. Hacked together by [this blog](https://blog.openshift.com/network-policy-objects-action/) and this [COP github page](https://github.com/redhat-cop/openshift-toolkit/tree/master/networkpolicy) 4 | 5 | 6 | ## Admin Tasks 7 | 8 | It's important to make sure the default namespace is labeled properly. This is an admin task 9 | 10 | ``` 11 | oc login -u system:admin 12 | oc label namespace default name=default 13 | oc label namespace default name=kube-service-catalog 14 | ``` 15 | 16 | 17 | ## Default Deny 18 | 19 | **NOTE** Before you start, make sure you're in the right project! 20 | 21 | ``` 22 | oc login -u developer 23 | oc project myproject 24 | ``` 25 | 26 | You need to deny ALL traffic coming into your namespace. 27 | 28 | 29 | ``` 30 | oc create -f default-deny.yaml 31 | ``` 32 | 33 | ^ This essentially "breaks" your project as ALL traffic (wanted and unwanted alike) is blocked. 34 | 35 | ## Allow Router/K8S 36 | 37 | Next, you want to be able to have the router/kubernetes to be able to access your namespace. 38 | 39 | 40 | ``` 41 | oc create -f allow-from-default-namespace.yml 42 | ``` 43 | 44 | ^ this makes your app "browsable" 45 | 46 | ## Allow Pod access 47 | 48 | To allow a certian webapp to access the database run the following... 49 | 50 | ``` 51 | oc create -f allow-to-database.yaml 52 | ``` 53 | 54 | In this example, I am targeting `tier=database` and am allowing things labeled as `tier=frontend`. 
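The file itself isn't reproduced here, but based on that description (and the `allow-3306` policy listed in the recap below) it would look roughly like the sketch that follows — the labels and the MySQL port come from this doc, the exact file contents are an assumption:

```
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: allow-3306
spec:
  # Applies to the database pods...
  podSelector:
    matchLabels:
      tier: database
  # ...and only lets pods labeled tier=frontend reach them on 3306
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 3306
```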
Now if you want to change these...you need to label ALL these resources as such (not as simple as JUST labeling the pods) 55 | 56 | For example... 57 | 58 | ``` 59 | [user@host]$ oc get all -l tier=frontend --no-headers | awk '{print $1}' 60 | pod/pricelist-allowed-1-sxdpr 61 | replicationcontroller/pricelist-allowed-1 62 | service/pricelist-allowed 63 | deploymentconfig.apps.openshift.io/pricelist-allowed 64 | buildconfig.build.openshift.io/pricelist-allowed 65 | build.build.openshift.io/pricelist-allowed-1 66 | imagestream.image.openshift.io/pricelist-allowed 67 | route.route.openshift.io/pricelist-allowed 68 | 69 | [user@host]$ oc get all -l tier=database --no-headers | awk '{print $1}' 70 | pod/mysql-1-x6pkh 71 | replicationcontroller/mysql-1 72 | service/mysql 73 | deploymentconfig.apps.openshift.io/mysql 74 | ``` 75 | 76 | ^ If you want to change the labels in `allow-to-database.yaml` you need to label all of these resources 77 | 78 | 79 | ## Recap 80 | 81 | This is all I have to allow webfrontend1 to a db without allowing webfrontend2 82 | 83 | 84 | ``` 85 | [user@host]$ oc get networkpolicies 86 | NAME POD-SELECTOR AGE 87 | allow-3306 tier=database 7m 88 | allow-from-default-namespace 15m 89 | default-deny 15m 90 | ``` 91 | 92 | 93 | ## Multitenant functionality 94 | 95 | If you want to allow pods to communicate from one project to another (i.e. like the multitenant plugin); you'll need to do something like this... 96 | 97 | ``` 98 | oc label ns myproject project=myproject 99 | ``` 100 | 101 | then 102 | 103 | ``` 104 | oc create -f allow-from-namespace.yaml -n yourproject 105 | ``` 106 | 107 | ^ This allows pods from the namespace labeled `myproject` to access pods to the `yourproject` namespace 108 | 109 | # Egress Rules 110 | 111 | To block access from pods within a namespace to go out of the cluster you can run... 112 | 113 | ``` 114 | oc create -f allow-domain.json -n myproject 115 | ``` 116 | 117 | This blocks/allows traffic going outside the OCP cluster. **NOTE** this is an admin task; you have to be `system:admin` or equiv to use Egress rules. 118 | -------------------------------------------------------------------------------- /scripts/update-oc-bin: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # Don't bother running if you're not root 4 | if [ $(id -u) -ne 0 ]; then 5 | echo "This utility must be run as root." 6 | exit 253 7 | fi 8 | # 9 | ocbin=/usr/local/bin/oc 10 | jayq=/usr/bin/jq 11 | crl=/usr/bin/curl 12 | ocmirrorurl=https://mirror.openshift.com/pub/openshift-v4/clients/ocp 13 | stagingdir=/usr/local/src/ocbin 14 | tarchive=/usr/bin/tar 15 | # 16 | basicchecks () { 17 | [[ ! -f ${ocbin} ]] && echo "The OC binary doesn't seem to exist" && exit 253 18 | [[ ! -f ${jayq} ]] && echo "The JQ binary doesn't seem to exist" && exit 253 19 | [[ ! -f ${crl} ]] && echo "The CURL binary doesn't seem to exist" && exit 253 20 | [[ ! -f ${tarchive} ]] && echo "The TAR binary doesn't seem to exist" && exit 253 21 | clientversion=$(oc version --client -o json | jq -r .releaseClientVersion) 22 | [[ ${clientversion} == "null" ]] && echo "Your version is TOO old; try \"$(basename $0) force\" first (dangerous)" && exit 252 23 | } 24 | # 25 | upgrade () { 26 | basicchecks 27 | clientversion=$(oc version --client -o json | jq -r .releaseClientVersion) 28 | serverversion=$(oc version -o json | jq -r .openshiftVersion) 29 | [[ ${serverversion} == "null" ]] && echo "You need to be logged in to the cluster. 
Are you running as SUDO? That can be a problem too" && exit 251 30 | if [ ${clientversion} == ${serverversion} ]; then 31 | echo "Client version and Server version match, nothing to do." 32 | exit 0 33 | else 34 | echo "Upgrading client to version ${serverversion}" 35 | dowloadurl=${ocmirrorurl}/${serverversion}/openshift-client-linux-${serverversion}.tar.gz 36 | cp -a ${ocbin} ${ocbin}.backup 37 | rm -f ${ocbin} 38 | mkdir -m 777 -p ${stagingdir} 39 | ${crl} -o ${stagingdir}/openshift-client-linux-${serverversion}.tar.gz -s ${dowloadurl} 40 | ${tarchive} -xzf ${stagingdir}/openshift-client-linux-${serverversion}.tar.gz -C ${stagingdir}/ 41 | if [ ! -f ${stagingdir}/oc ]; then 42 | echo "FATAL ERROR: Could not validate new OC client...reverting" 43 | cp -a ${ocbin}.backup ${ocbin} 44 | fi 45 | cp -a ${stagingdir}/oc $(dirname ${ocbin})/ 46 | rm -rf ${stagingdir}/* 47 | fi 48 | } 49 | # 50 | downgrade () { 51 | basicchecks 52 | echo "Downgrading client " 53 | [[ ! -f ${ocbin}.backup ]] && echo "There is nothing to revert to" && exit 253 54 | cp -a ${ocbin}.backup ${ocbin} 55 | } 56 | # 57 | force () { 58 | forcedownload=https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.4.3/openshift-client-linux-4.4.3.tar.gz 59 | echo "WARNING: This does no santiy checks and just blindly downloads a version (currently 4.4.3)." 60 | echo -n "Are you sure you want to continue?[Y/n] " 61 | read ans 62 | # 63 | case "${ans}" in 64 | Y|Yes) 65 | dowloadurl=${forcedownload} 66 | rm -f ${ocbin} ${ocbin}.backup 67 | mkdir -m 777 -p ${stagingdir} 68 | ${crl} -o ${stagingdir}/openshift-client-linux.tar.gz -s ${dowloadurl} 69 | ${tarchive} -xzf ${stagingdir}/openshift-client-linux.tar.gz -C ${stagingdir}/ 70 | if [ ! -f ${stagingdir}/oc ]; then 71 | echo "FATAL ERROR: Could not validate new OC client...manual intervention required" 72 | exit 254 73 | fi 74 | cp -a ${stagingdir}/oc $(dirname ${ocbin})/ 75 | rm -rf ${stagingdir}/* 76 | echo "You may want to do a \"$(basename $0) upgrade\" at this point" 77 | ;; 78 | *) 79 | echo -e "Force upgrade aborted\n" 80 | ;; 81 | esac 82 | } 83 | # 84 | showhelp () { 85 | echo "$(basename $0) [upgrade|downgrade|help]" 86 | } 87 | # 88 | # 89 | case "${1}" in 90 | upgrade|update|up) 91 | upgrade 92 | ;; 93 | revert|downgrade|undo) 94 | downgrade 95 | ;; 96 | force|download) 97 | force 98 | ;; 99 | help|-h) 100 | showhelp 101 | ;; 102 | *) 103 | echo "ERROR! Required parameter to the script was not passed" 104 | showhelp 105 | exit 254 106 | ;; 107 | esac 108 | # 109 | exit 0 110 | ## 111 | ## 112 | -------------------------------------------------------------------------------- /ocp4_upi/docs/2.installrhcos.md: -------------------------------------------------------------------------------- 1 | # Install RHCOS 2 | 3 | Using the bios file you downloaded and the ISO file; you'll install RHCOS on all 6 servers (the 3 masters, 2 wokers, 1 bootstrap). Procedure is easy. 
4 | 5 | Boot into the ISO and when you're presented with this screen hit `TAB` 6 | 7 | ![RHCOS_BOOT](rhcos.png) 8 | 9 | For each server do the following 10 | 11 | **NOTE** The IP address you're using is either your laptop (the "python" method) or your webserver (the "apache" method) 12 | 13 | # Bootstrap 14 | 15 | For the bootstrap server; when you press `TAB` enter the following all on one line (change the port if you're running on something other than `8080`) 16 | 17 | ``` 18 | coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.27:8080/rhcos-4.2.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.1.27:8080/install/bootstrap.ign 19 | ``` 20 | 21 | This will install the server "unattended". It will set the IP to whatever you set it in DHCP. So it's IMPORTANT that you set up the right MAC address in your dhcp "static ip" config. 22 | 23 | # Masters 24 | 25 | For the master servers; when you press `TAB` enter the following all on one line (change the port if you're running on something other than `8080`) 26 | 27 | ``` 28 | coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.27:8080/rhcos-4.2.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.1.27:8080/install/master.ign 29 | ``` 30 | 31 | This will install the server "unattended". It will set the IP to whatever you set it in DHCP. 32 | 33 | # Workers 34 | 35 | For the workers; press `TAB` enter the following all on one line (change the port if you're running on something other than `8080`) 36 | 37 | ``` 38 | coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.27:8080/rhcos-4.2.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.1.27:8080/install/worker.ign 39 | ``` 40 | 41 | This will install the server "unattended". It will set the IP to whatever you set it in DHCP. 42 | 43 | # Static IPs 44 | 45 | If you want to set up your master/worker/boostrap node with static IPs, you'll need to pass the dracut `ip=` option, for example... 46 | 47 | ``` 48 | ip=192.168.7.20::192.168.7.1:255.255.255.0:worker.ocp4.example.com:enp1s0:none nameserver=192.168.7.77 coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.27:8080/rhcos-4.2.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.1.27:8080/install/worker.ign 49 | ``` 50 | 51 | Here is a readable version 52 | 53 | ``` 54 | ip=192.168.7.20::192.168.7.1:255.255.255.0:worker.ocp4.example.com:enp1s0:none nameserver=192.168.7.77 55 | coreos.inst.install_dev=sda 56 | coreos.inst.image_url=http://192.168.1.27:8080/rhcos-4.2.0-x86_64-metal-bios.raw.gz 57 | coreos.inst.ignition_url=http://192.168.1.27:8080/install/worker.ign 58 | ``` 59 | 60 | The syntax for `ip=` is `ip=::::::none nameserver=`. You can use `nameserver=` multiple times for multiple nameservers.. 61 | 62 | # Troubleshooting 63 | 64 | Any issues that come up; you can view it in your bootstrap server. SSH using `core` user and running the `journalctl -b -f -u bootkube.service` command 65 | 66 | ``` 67 | $ ssh bootstrap.ocp4.example.com -l core 68 | Red Hat Enterprise Linux CoreOS 410.8.20190516.0 69 | WARNING: Direct SSH access to machines is not recommended. 70 | This node has been annotated with machineconfiguration.openshift.io/ssh=accessed 71 | 72 | --- 73 | This is the bootstrap node; it will be destroyed when the master is fully up. 74 | 75 | The primary service is "bootkube.service". To watch its status, run e.g. 
76 | 77 | journalctl -b -f -u bootkube.service 78 | Last login: Wed May 22 19:30:48 2019 from 192.168.1.254 79 | [systemd] 80 | Failed Units: 1 81 | systemd-firstboot.service 82 | [core@bootstrap ~]$ journalctl -b -f -u bootkube.service 83 | ``` 84 | 85 | Also remember to keep the LB status page up. You'll see that it's all red for now. Keep this up during the OCP4 install 86 | 87 | ``` 88 | firefox http://api.ocp4.example.com:9000/ 89 | ``` 90 | 91 | # Conclusion 92 | 93 | Now you're ready to install OCP4 94 | 95 | [return to the index page](../README.md) 96 | 97 | -------------------------------------------------------------------------------- /ocp4_upi/docs/3.installocp4.md: -------------------------------------------------------------------------------- 1 | # Installl OCP4 2 | 3 | Once you've installed RHCOS on the bootstrap server, the masters, and the workers; you are ready to begin the install process. 4 | 5 | # Bootstrapping 6 | 7 | In fact...the cluster is already installing! This is what the boostrap server does. But you can monitor it via the following command 8 | 9 | ``` 10 | cd ${HOME} 11 | openshift-install --log-level=debug --dir=install wait-for bootstrap-complete 12 | ``` 13 | 14 | This waits until the Kubernetes APIServer signals that it has been bootstrapped on the control plane machines. 15 | 16 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv 17 | 18 | > **NOTE** After this completes...REMOVE the boostrap machine from the loadbalancer and restart the `haproxy` service! 19 | 20 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 21 | 22 | At this point the base cluster is installed and you can connect to it 23 | 24 | Export the kubeconfig file 25 | ``` 26 | export KUBECONFIG=${HOME}/install/auth/kubeconfig 27 | ``` 28 | 29 | Use the cli tools 30 | 31 | ``` 32 | $ oc whoami 33 | system:admin 34 | ``` 35 | 36 | > You *MIGHT* need to approve the certs outlined [here](https://docs.openshift.com/container-platform/4.2/installing/installing_bare_metal/installing-bare-metal.html#installation-approve-csrs_installing-bare-metal). Test this with `oc get csr` and see if you see `Pending`. 37 | 38 | Approve if needed 39 | 40 | ``` 41 | oc get csr --no-headers | awk '{print $1}' | xargs oc adm certificate approve 42 | ``` 43 | 44 | # Operators install 45 | 46 | At this point operators are being installed...you can take a look with the following 47 | 48 | ``` 49 | oc get clusteroperators 50 | ``` 51 | 52 | To setup the registry, you have to set the `managementState` to `Managed` for your cluster 53 | 54 | ``` 55 | oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 56 | ``` 57 | 58 | Now, the registry operator is waiting for you to provide some storage. 
For now set it to `emptyDir` 59 | 60 | ``` 61 | oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' 62 | ``` 63 | 64 | If you need to expose the registry, run this command 65 | 66 | ``` 67 | oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}' 68 | ``` 69 | 70 | You can watch the operators finish or use the `install-complete` option to track it 71 | 72 | # Complete Install 73 | 74 | Run the following to monitor the operator installation 75 | 76 | ``` 77 | cd ${HOME} 78 | openshift-install --log-level=debug --dir=install wait-for install-complete 79 | ``` 80 | 81 | # Upgrade 82 | 83 | If you didn't install the latest 4.2.Z release...then just run the following 84 | 85 | ``` 86 | oc adm upgrade --to-latest=true 87 | ``` 88 | 89 | Scale the router if you need to 90 | 91 | ``` 92 | oc patch --namespace=openshift-ingress-operator --patch='{"spec": {"replicas": 3}}' --type=merge ingresscontroller/default 93 | ``` 94 | 95 | # PROFIT 96 | 97 | You did it! 98 | 99 | ``` 100 | $ oc get nodes 101 | NAME STATUS ROLES AGE VERSION 102 | master0.ocp4.example.com Ready master 25h v1.13.4+27816e1b1 103 | master1.ocp4.example.com Ready master 25h v1.13.4+27816e1b1 104 | master2.ocp4.example.com Ready master 25h v1.13.4+27816e1b1 105 | worker0.ocp4.example.com Ready worker 25h v1.13.4+27816e1b1 106 | worker1.ocp4.example.com Ready worker 25h v1.13.4+27816e1b1 107 | worker2.ocp4.example.com Ready worker 25h v1.13.4+27816e1b1 108 | ``` 109 | 110 | # Adding nodes on OpenShift 4 111 | 112 | To add a node, you'll just provide an ignition file to the new worker just like the initial install. 113 | 114 | To get ignition for worker 115 | 116 | ``` 117 | oc extract -n openshift-machine-api secret/worker-user-data --keys=userData --to=- 118 | ``` 119 | 120 | To get ignition for master (you cannot "scale" the masters right now. This is here for information only) 121 | 122 | ``` 123 | oc extract -n openshift-machine-api secret/master-user-data --keys=userData --to=- 124 | ``` 125 | -------------------------------------------------------------------------------- /istio/sm-resources/1.istio_community_operator_template.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Template 3 | metadata: 4 | name: istio-operator-job 5 | parameters: 6 | - displayName: Master Public URL 7 | description: The public URL for master 8 | name: OPENSHIFT_ISTIO_MASTER_PUBLIC_URL 9 | value: https://127.0.0.1:8443 10 | - displayName: OpenShift Release 11 | description: The version of the OpenShift release. 
12 | name: OPENSHIFT_RELEASE 13 | value: v3.11.0 14 | required: true 15 | - displayName: Istio Operator Namespace 16 | description: The namespace for the Istio operator 17 | name: OPENSHIFT_ISTIO_OPERATOR_NAMESPACE 18 | value: istio-operator 19 | required: true 20 | - displayName: Default Prefix 21 | description: The default image prefix for istio deployments 22 | name: OPENSHIFT_ISTIO_PREFIX 23 | value: maistra/ 24 | - displayName: Default Version 25 | description: The default image version for istio deployments 26 | name: OPENSHIFT_ISTIO_VERSION 27 | value: 0.9.0 28 | - displayName: Default Deployment Type 29 | description: The default deployment type for istio deployments 30 | name: OPENSHIFT_DEPLOYMENT_TYPE 31 | value: origin 32 | objects: 33 | - kind: CustomResourceDefinition 34 | apiVersion: apiextensions.k8s.io/v1beta1 35 | metadata: 36 | name: installations.istio.openshift.com 37 | spec: 38 | group: istio.openshift.com 39 | names: 40 | kind: Installation 41 | plural: installations 42 | singular: installation 43 | scope: Namespaced 44 | version: v1alpha1 45 | - kind: Role 46 | apiVersion: rbac.authorization.k8s.io/v1 47 | metadata: 48 | name: istio-operator 49 | rules: 50 | - apiGroups: 51 | - istio.openshift.com 52 | resources: 53 | - "*" 54 | verbs: 55 | - "*" 56 | - apiGroups: 57 | - "" 58 | resources: 59 | - pods 60 | - services 61 | - endpoints 62 | - persistentvolumeclaims 63 | - events 64 | - configmaps 65 | - secrets 66 | - securitycontextconstraints 67 | verbs: 68 | - "*" 69 | - apiGroups: 70 | - apps 71 | resources: 72 | - deployments 73 | - daemonsets 74 | - replicasets 75 | - statefulsets 76 | verbs: 77 | - "*" 78 | - kind: RoleBinding 79 | apiVersion: rbac.authorization.k8s.io/v1 80 | metadata: 81 | name: default-account-istio-operator 82 | subjects: 83 | - kind: ServiceAccount 84 | namespace: ${OPENSHIFT_ISTIO_OPERATOR_NAMESPACE} 85 | name: default 86 | roleRef: 87 | kind: Role 88 | name: istio-operator 89 | apiGroup: rbac.authorization.k8s.io 90 | - kind: ClusterRoleBinding 91 | apiVersion: rbac.authorization.k8s.io/v1 92 | metadata: 93 | name: default-account-istio-operator-cluster-role-binding 94 | subjects: 95 | - kind: ServiceAccount 96 | namespace: ${OPENSHIFT_ISTIO_OPERATOR_NAMESPACE} 97 | name: default 98 | roleRef: 99 | kind: ClusterRole 100 | name: cluster-admin 101 | apiGroup: rbac.authorization.k8s.io 102 | - kind: Deployment 103 | apiVersion: apps/v1 104 | metadata: 105 | name: istio-operator 106 | namespace: ${OPENSHIFT_ISTIO_OPERATOR_NAMESPACE} 107 | spec: 108 | replicas: 1 109 | selector: 110 | matchLabels: 111 | name: istio-operator 112 | template: 113 | metadata: 114 | labels: 115 | name: istio-operator 116 | spec: 117 | containers: 118 | - name: istio-operator 119 | image: ${OPENSHIFT_ISTIO_PREFIX}istio-operator-centos7:${OPENSHIFT_ISTIO_VERSION} 120 | ports: 121 | - containerPort: 60000 122 | name: metrics 123 | command: 124 | - istio-operator 125 | args: 126 | - "--release=${OPENSHIFT_RELEASE}" 127 | - "--masterPublicURL=${OPENSHIFT_ISTIO_MASTER_PUBLIC_URL}" 128 | - "--istioPrefix=${OPENSHIFT_ISTIO_PREFIX}" 129 | - "--istioVersion=${OPENSHIFT_ISTIO_VERSION}" 130 | - "--deploymentType=${OPENSHIFT_DEPLOYMENT_TYPE}" 131 | imagePullPolicy: IfNotPresent 132 | env: 133 | - name: WATCH_NAMESPACE 134 | valueFrom: 135 | fieldRef: 136 | fieldPath: metadata.namespace 137 | - name: OPERATOR_NAME 138 | value: "istio-operator" 139 | -------------------------------------------------------------------------------- /router/README.md: 
-------------------------------------------------------------------------------- 1 | # Router Notes 2 | 3 | The OpenShift router is the ingress point for all traffic destined for services in your OpenShift installation. The router is based on HAProxy, and these notes are in no paticular order. 4 | 5 | * [Deploy Router](#deploy-router) 6 | * [Health Checks](#health-checks) 7 | * [Router Settings](#router-settings) 8 | * [Node Port](#node-port) 9 | * [Ingress](#ingress) 10 | 11 | ## Deploy Router 12 | 13 | Sometimes, you'll need to create a router; although this can be done with the ansible installer. These notes are here for historical purposes. 14 | 15 | First create the certificate that will be used for all default SSL connections 16 | 17 | ``` 18 | root@master# CA=/etc/origin/master 19 | root@master# oc adm ca create-server-cert \ 20 | --signer-cert=$CA/ca.crt --signer-key=$CA/ca.key --signer-serial=$CA/ca.serial.txt \ 21 | --hostnames='*.cloudapps.example.com' --cert=cloudapps.crt --key=cloudapps.key 22 | root@master# cat cloudapps.crt cloudapps.key $CA/ca.crt > cloudapps.router.pem 23 | ``` 24 | 25 | Now create the router 26 | 27 | ``` 28 | root@master# oc adm router \ 29 | --default-cert=cloudapps.router.pem --credentials='/etc/origin/master/openshift-router.kubeconfig' \ 30 | --selector='region=infra' --images='openshift3/ose-${component}:${version}' --service-account=router 31 | ``` 32 | 33 | ## Health Checks 34 | 35 | To check the health of the router endpoint 36 | 37 | ``` 38 | curl http://infra1.example.com:1936/healthz 39 | ``` 40 | 41 | To check the health of the API service; check the master server endpoint 42 | 43 | ``` 44 | curl --cacert /etc/origin/master/master.server.crt https://master1.example.com:8443/healthz 45 | ``` 46 | ## Router Settings 47 | 48 | By default the route does leastconn with sticky sessions. Annotate the application route with roundrobbin/cookies to disable it. 49 | 50 | ``` 51 | oc annotate route/myapp haproxy.router.openshift.io/balance=roundrobin 52 | oc annotate route/myapp haproxy.router.openshift.io/disable_cookies=true 53 | ``` 54 | 55 | To do sticky set it to.. 56 | 57 | ``` 58 | oc annotate route/myapp haproxy.router.openshift.io/balance=source 59 | ``` 60 | 61 | ## Node Port 62 | 63 | A `nodePort` allows you to connect to a pod directly to one of the nodes (ANY node in the cluster) on a specific port (thus bypassing the router). This is useful if you want to expose a database outside of the cluster. 64 | 65 | To create nodeport; first setup the file 66 | 67 | ``` 68 | $ cat nodeport-ssh.yaml 69 | apiVersion: v1 70 | kind: Service 71 | metadata: 72 | name: ssh-fedora 73 | labels: 74 | vm: fedora 75 | spec: 76 | type: NodePort 77 | ports: 78 | - port: 22 79 | nodePort: 31122 80 | name: ssh 81 | selector: 82 | vm: fedora 83 | ``` 84 | 85 | Make sure you label either the pod/deploymentconfig or whatever you're trying to reach 86 | 87 | ``` 88 | oc label vm vm-fedora vm=fedora 89 | oc label pod virt-launcher-vm-fedora-vt74t vm=fedora 90 | ``` 91 | 92 | Now you can create the definition 93 | 94 | ``` 95 | oc create -f nodeport-ssh.yaml 96 | ``` 97 | 98 | In this case; you'll be able connect into port `31122` (on to ANY server in the cluster) and it will foward it to port `22` on the pod that matches the label. 99 | 100 | # Ingress 101 | 102 | Ingress specific notes 103 | 104 | ## IngressClass 105 | 106 | QnD (more to come). When deploying another Ingress controller (say NGINX) on OpenShift. 
107 | 108 | * Make sure it deploys on a non router node (port conflicts) 109 | * Create an Ingress Object 110 | * Ingress object must have `.spec.ingressClassName` 111 | 112 | 113 | Example: 114 | 115 | ```yaml 116 | apiVersion: networking.k8s.io/v1 117 | kind: Ingress 118 | metadata: 119 | name: nginx-static-ingress 120 | namespace: testing 121 | spec: 122 | ingressClassName: nginx 123 | rules: 124 | - host: 127.0.0.1.nip.io 125 | http: 126 | paths: 127 | - pathType: Prefix 128 | path: "/" 129 | backend: 130 | service: 131 | name: myservice 132 | port: 133 | number: 8080 134 | ``` 135 | 136 | You may also need `kubernetes.io/ingress.class: "nginx"` (for example) until controllers are updated to support ingress classes. [MORE INFO](https://kubernetes.io/docs/concepts/services-networking/ingress/) 137 | 138 | To find out the name (after the controller has been installed) run: `oc get ingressclass` 139 | -------------------------------------------------------------------------------- /ocp4_upi/docs/1.setup.md: -------------------------------------------------------------------------------- 1 | # Preparing for an Install 2 | 3 | Here, we will gather everything you need for the install. As of this writing, this is uptodate with OCP 4.2 4 | 5 | * [Download Artifacts](#download-artifacts) 6 | * [Generate Ignition Files](#generate-ignition-files) 7 | * [Download Install Tools](#download-install-tools) 8 | 9 | # Download Artifacts 10 | 11 | Go to [https://try.openshift.com/](https://try.openshift.com/) for the latest links to download the artifacts. 12 | 13 | Download the installer and client tools for your env (I'm using Linux so the example below is for Linux) from [here](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) 14 | 15 | ``` 16 | cd /tmp/ 17 | wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.2.0/openshift-client-linux-4.2.0.tar.gz 18 | wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.2.0/openshift-install-linux-4.2.0.tar.gz 19 | tar -xzf openshift-client-linux-4.2.0.tar.gz 20 | tar -xzf openshift-install-linux-4.2.0.tar.gz 21 | ``` 22 | 23 | Store it somewhere...if you don't want to store it in `/usr/local/bin` create a dir like so 24 | 25 | ``` 26 | mkdir ~/ocp4-bin/ 27 | mv /tmp/openshift-install ~/ocp4-bin/ 28 | mv /tmp/oc ~/ocp4-bin/ 29 | mv /tmp/kubectl ~/ocp4-bin/ 30 | export PATH=${HOME}/ocp4-bin/:$PATH 31 | ``` 32 | 33 | # Generate Ignition Files 34 | 35 | Create an installation dir to store everything. Make sure this is a *BRAND NEW* dir as to avoid confilcts with other installs. 36 | 37 | ``` 38 | mkdir ${HOME}/install 39 | cd ${HOME} 40 | ``` 41 | 42 | Create an install yaml file. This is a simple one that works for most installations. *NOTE* It __MUST__ be named `install-config.yaml`. 43 | 44 | ```yaml 45 | apiVersion: v1 46 | baseDomain: example.com 47 | compute: 48 | - hyperthreading: Enabled 49 | name: worker 50 | replicas: 0 51 | controlPlane: 52 | hyperthreading: Enabled 53 | name: master 54 | replicas: 3 55 | metadata: 56 | name: ocp4 57 | networking: 58 | clusterNetworks: 59 | - cidr: 10.254.0.0/16 60 | hostPrefix: 24 61 | networkType: OpenShiftSDN 62 | serviceNetwork: 63 | - 172.30.0.0/16 64 | platform: 65 | none: {} 66 | pullSecret: '{"auths": ...}' 67 | sshKey: 'ssh-ed25519 AAAA...' 68 | ``` 69 | 70 | A few things to note... 
71 | 72 | * `baseDomain` is the domain of your lab 73 | * `metadata.name` is the subdomain you set up in your DNS config (in this case `ocp4`) 74 | * `pullSecret` is obtained from [try.openshift.com](https://try.openshift.com). Just copy and paste it (all in one line) 75 | * `sshKey` is usually from `~/.ssh/id_rsa.pub` Use `ssh-keygen` if you don't have a key yet. 76 | 77 | Once you create this file, copy it into your install dir 78 | 79 | ``` 80 | cp install-config.yaml ${HOME}/install 81 | ``` 82 | 83 | First, create the installation manifests 84 | 85 | ``` 86 | openshift-install create manifests --dir=./install/ 87 | ``` 88 | 89 | Disable master being schuedulable 90 | 91 | ``` 92 | sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' ./install/manifests/cluster-scheduler-02-config.yml 93 | ``` 94 | 95 | It should look something like this 96 | 97 | ``` 98 | $ cat manifests/cluster-scheduler-02-config.yml 99 | apiVersion: config.openshift.io/v1 100 | kind: Scheduler 101 | metadata: 102 | creationTimestamp: null 103 | name: cluster 104 | spec: 105 | mastersSchedulable: false 106 | policy: 107 | name: "" 108 | status: {} 109 | ``` 110 | 111 | Generate the ignition configurations 112 | 113 | ``` 114 | cd ${HOME} 115 | openshift-install create ignition-configs --log-level=debug --dir=./install/ 116 | ``` 117 | 118 | This will create the following... 119 | 120 | ``` 121 | $ tree install 122 | install 123 | ├── auth 124 | │   ├── kubeadmin-password 125 | │   └── kubeconfig 126 | ├── bootstrap.ign 127 | ├── master.ign 128 | ├── metadata.json 129 | └── worker.ign 130 | ``` 131 | 132 | # Download Install Tools 133 | 134 | Download the ISO file to install the OCP4 nodes. Save it to where your VM Platform is running. Or if you're using BareMetal; burn it to a CD. (Visit the [official install page](https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.2/latest/) for the latest artifacts) 135 | 136 | ``` 137 | curl -J -L -O https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.2/4.2.0/rhcos-4.2.0-x86_64-installer.iso 138 | ``` 139 | 140 | Download the bios file 141 | 142 | ``` 143 | cd ${HOME} 144 | curl -J -L -O https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.2/4.2.0/rhcos-4.2.0-x86_64-metal-bios.raw.gz 145 | ``` 146 | 147 | **IF** You're using an apache server for the install; copy the `install` dir over with the bios file... 148 | 149 | ``` 150 | $ scp -r install root@webserver.example.com:/var/www/html/ 151 | $ scp -r rhcos-4.2.0-x86_64-metal-bios.raw.gz root@webserver.example.com:/var/www/html/ 152 | ``` 153 | 154 | If you're using `python` ...run the webserver from yourlaptop in another terminal window 155 | 156 | ``` 157 | python3 -m http.server -d ${HOME} 8080 158 | ``` 159 | 160 | # Conclusion 161 | 162 | Now you're ready to install the OCP4 nodes via the ISO and the bios file 163 | 164 | [return to the index page](../README.md) 165 | -------------------------------------------------------------------------------- /installation/guides/okd.md: -------------------------------------------------------------------------------- 1 | # OKD 2 | 3 | This is a HIGH LEVEL howto install the upstream. It's pretty simple. I borrowed A LOT from [Grant Shipley's All-in-One install guide](https://github.com/gshipley/installcentos) 4 | 5 | This page assumes you know enough about OpenShift that all you need is a few notes. 
6 | 7 | ## Install CentOS 8 | 9 | I installed CentOS with the following in mind 10 | 11 | * 1 Master/2 Nodes 12 | * 12GB Ram each 13 | * 4CPUs Each 14 | * 50GB for root 15 | * 50GB for container storage 16 | * 100GB for OCS (formerly CNS) 17 | * DNS 18 | * Foward and reverse DNS for all hosts 19 | * Wildcard entry `*.apps.example.com` pointed to the IP of the master 20 | * Webconsole entry `ocp.example.com` pointed to the IP of the master 21 | * Minimal Install 22 | 23 | ## Prep The host 24 | 25 | I preped the hosts with the follwing steps borrowed from the [setup script](https://github.com/gshipley/installcentos/blob/master/install-openshift.sh). Note that I **DID NOT** run the script but used to to prepare my hosts 26 | 27 | _detailed instructions to come_ 28 | 29 | ## Inventory File 30 | 31 | I used my [standard inventory file](https://raw.githubusercontent.com/christianh814/openshift-toolbox/master/ansible_hostfiles/singlemaster) and edited it with stuff taken from [Grant's inventory file](https://github.com/gshipley/installcentos/blob/master/inventory.ini) 32 | 33 | I ended up with this [inventory file for OKD 3.11](okd-inventory.ini) 34 | 35 | ## Install 36 | 37 | Install is SOP 38 | 39 | NOTE: make sure you get the right branch 40 | ``` 41 | export VERSION=3.11 42 | git clone https://github.com/openshift/openshift-ansible.git 43 | cd openshift-ansible && git fetch && git checkout release-${VERSION} && cd .. 44 | ``` 45 | 46 | Other than that it's the same... 47 | 48 | ``` 49 | ansible-playbook openshift-ansible/playbooks/prerequisites.yml 50 | ansible-playbook openshift-ansible/playbooks/deploy_cluster.yml 51 | ``` 52 | 53 | ## Troubleshooting 54 | 55 | List of things I've ran into 56 | 57 | ### Image version 58 | 59 | Had an issue with `openshift-logging` namespace where the images weren't there 60 | 61 | ``` 62 | [root@dhcp-host-81 ~]# oc get pods 63 | NAME READY STATUS RESTARTS AGE 64 | logging-es-data-master-omovbji7-1-deploy 1/1 Running 0 2m 65 | logging-es-data-master-omovbji7-1-lphdr 1/2 ImagePullBackOff 0 2m 66 | logging-fluentd-788fx 1/1 Running 0 3m 67 | logging-fluentd-7ndvb 1/1 Running 0 3m 68 | logging-fluentd-sh2b4 1/1 Running 0 3m 69 | logging-kibana-1-deploy 1/1 Running 0 4m 70 | logging-kibana-1-zxgvr 1/2 ImagePullBackOff 0 4m 71 | ``` 72 | 73 | checked to see if I could pull manually 74 | 75 | ``` 76 | [root@dhcp-host-81 ~]# docker pull docker.io/openshift/origin-logging-elasticsearch5:v3.11 77 | Trying to pull repository docker.io/openshift/origin-logging-elasticsearch5 ... 78 | manifest for docker.io/openshift/origin-logging-elasticsearch5:v3.11 not found 79 | ``` 80 | 81 | So I just pulled the latest... 82 | 83 | ``` 84 | [root@dhcp-host-81 ~]# docker pull docker.io/openshift/origin-logging-elasticsearch5:latest 85 | Trying to pull repository docker.io/openshift/origin-logging-elasticsearch5 ... 86 | latest: Pulling from docker.io/openshift/origin-logging-elasticsearch5 87 | aeb7866da422: Already exists 88 | 0fc84339b005: Pull complete 89 | 5af964698c82: Pull complete 90 | Digest: sha256:add3106c24e2759f73259d769db61bd5a25db95111591a0ec7607feac8887ce2 91 | Status: Downloaded newer image for docker.io/openshift/origin-logging-elasticsearch5:latest 92 | 93 | [root@dhcp-host-81 ~]# docker pull docker.io/openshift/origin-logging-kibana5:latest 94 | Trying to pull repository docker.io/openshift/origin-logging-kibana5 ... 
95 | latest: Pulling from docker.io/openshift/origin-logging-kibana5 96 | aeb7866da422: Already exists 97 | 0fc84339b005: Already exists 98 | 3b9a249f07fb: Pull complete 99 | Digest: sha256:3678bf6d9c9e595e60534843e5cfe15471dd6a1fd81593cbdf292e71771663ff 100 | Status: Downloaded newer image for docker.io/openshift/origin-logging-kibana5:latest 101 | ``` 102 | 103 | Then I tagged them 104 | 105 | ``` 106 | docker tag docker.io/openshift/origin-logging-kibana5:latest docker.io/openshift/origin-logging-kibana5:v3.11 107 | docker tag docker.io/openshift/origin-logging-elasticsearch5:latest docker.io/openshift/origin-logging-elasticsearch5:v3.11 108 | ``` 109 | 110 | Logging came up! 111 | 112 | ``` 113 | [root@dhcp-host-81 ~]# oc get pods 114 | NAME READY STATUS RESTARTS AGE 115 | logging-es-data-master-omovbji7-1-lphdr 2/2 Running 0 4m 116 | logging-fluentd-788fx 1/1 Running 0 5m 117 | logging-fluentd-7ndvb 1/1 Running 0 5m 118 | logging-fluentd-sh2b4 1/1 Running 0 5m 119 | logging-kibana-1-deploy 1/1 Running 0 6m 120 | logging-kibana-1-zxgvr 1/2 Running 0 6m 121 | ``` 122 | -------------------------------------------------------------------------------- /registry/manifests/minioinstance.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: minio-creds-secret 5 | type: Opaque 6 | data: 7 | accesskey: bWluaW8= # base 64 encoded "minio" (echo -n 'minio' | base64) 8 | secretkey: bWluaW8xMjM= # based 64 encoded "minio123" (echo -n 'minio123' | base64) 9 | --- 10 | apiVersion: v1 11 | kind: Service 12 | metadata: 13 | name: minio-service 14 | spec: 15 | type: ClusterIP 16 | ports: 17 | - port: 9000 18 | targetPort: 9000 19 | protocol: TCP 20 | # Optional field 21 | # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767) 22 | # nodePort: 30007 23 | selector: 24 | app: minio 25 | --- 26 | apiVersion: operator.min.io/v1 27 | kind: MinIOInstance 28 | metadata: 29 | name: minio 30 | ## If specified, MinIOInstance pods will be dispatched by specified scheduler. 31 | ## If not specified, the pod will be dispatched by default scheduler. 32 | # scheduler: 33 | # name: my-custom-scheduler 34 | spec: 35 | ## Add metadata to the all pods created by the StatefulSet 36 | metadata: 37 | ## Optionally pass labels to be applied to the statefulset pods 38 | labels: 39 | app: minio 40 | annotations: 41 | prometheus.io/path: /minio/prometheus/metrics 42 | prometheus.io/port: "9000" 43 | prometheus.io/scrape: "true" 44 | ## Registry location and Tag to download MinIO Server image 45 | image: minio/minio:RELEASE.2020-07-14T19-14-30Z 46 | ## A ClusterIP Service will be created with the given name 47 | serviceName: minio-internal-service 48 | zones: 49 | - name: "zone-0" 50 | ## Number of MinIO servers/pods in this zone. 51 | ## For standalone mode, supply 1. For distributed mode, supply 4 or more. 52 | ## Note that the operator does not support upgrading from standalone to distributed mode. 53 | servers: 4 54 | ## Supply number of volumes to be mounted per MinIO server instance. 55 | volumesPerServer: 4 56 | ## Mount path where PV will be mounted inside container(s). Defaults to "/export". 57 | mountPath: /export 58 | ## Sub path inside Mount path where MinIO starts. Defaults to "". 59 | # subPath: /data 60 | ## This VolumeClaimTemplate is used across all the volumes provisioned for MinIO cluster. 
61 | ## Please do not change the volumeClaimTemplate field while expanding the cluster, this may 62 | ## lead to unbound PVCs and missing data 63 | volumeClaimTemplate: 64 | metadata: 65 | name: data 66 | spec: 67 | accessModes: 68 | - ReadWriteOnce 69 | resources: 70 | requests: 71 | storage: 10Gi 72 | ## Secret with credentials to be used by MinIO instance. 73 | credsSecret: 74 | name: minio-creds-secret 75 | ## PodManagement policy for pods created by StatefulSet. Can be "OrderedReady" or "Parallel" 76 | ## Refer https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy 77 | ## for details. Defaults to "Parallel" 78 | podManagementPolicy: Parallel 79 | ## Secret with certificates to configure TLS for MinIO certs. Create secrets as explained 80 | ## here: https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret 81 | # externalCertSecret: 82 | # name: tls-ssl-minio 83 | ## Enable Kubernetes based certificate generation and signing as explained in 84 | ## https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster 85 | requestAutoCert: false 86 | ## Used when "requestAutoCert" is set to true. Set CommonName for the auto-generated certificate. 87 | ## Internal DNS name for the pod will be used if CommonName is not provided. 88 | ## DNS name format is minio-{0...3}.minio.default.svc.cluster.local 89 | certConfig: 90 | commonName: "" 91 | organizationName: [] 92 | dnsNames: [] 93 | ## Used to specify a toleration for a pod 94 | # tolerations: 95 | # - effect: NoSchedule 96 | # key: dedicated 97 | # operator: Equal 98 | # value: storage 99 | ## Add environment variables to be set in MinIO container (https://github.com/minio/minio/tree/master/docs/config) 100 | # env: 101 | # - name: MINIO_BROWSER 102 | # value: "off" # to turn-off browser 103 | # - name: MINIO_STORAGE_CLASS_STANDARD 104 | # value: "EC:2" 105 | ## Configure resource requests and limits for MinIO containers 106 | # resources: 107 | # requests: 108 | # memory: 20Gi 109 | ## Liveness probe detects situations where MinIO server instance 110 | ## is not working properly and needs restart. Kubernetes automatically 111 | ## restarts the pods if liveness checks fail. 112 | liveness: 113 | initialDelaySeconds: 10 114 | periodSeconds: 1 115 | timeoutSeconds: 1 116 | ## nodeSelector parameters for MinIO Pods. It specifies a map of key-value pairs. For the pod to be 117 | ## eligible to run on a node, the node must have each of the 118 | ## indicated key-value pairs as labels. 119 | ## Read more here: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ 120 | # nodeSelector: 121 | # disktype: ssd 122 | ## Affinity settings for MinIO pods. Read more about affinity 123 | ## here: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity. 124 | # affinity: 125 | -------------------------------------------------------------------------------- /registry/manifests/minio-operator.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.k8s.io/v1 2 | kind: CustomResourceDefinition 3 | metadata: 4 | name: minioinstances.operator.min.io 5 | spec: 6 | group: operator.min.io 7 | scope: Namespaced 8 | names: 9 | kind: MinIOInstance 10 | singular: minioinstance 11 | plural: minioinstances 12 | versions: 13 | - name: v1 14 | served: true 15 | storage: true 16 | schema: 17 | # openAPIV3Schema is the schema for validating custom objects. 
18 | # Refer https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#specifying-a-structural-schema 19 | # for more details 20 | openAPIV3Schema: 21 | type: object 22 | properties: 23 | spec: 24 | type: object 25 | x-kubernetes-preserve-unknown-fields: true 26 | properties: 27 | replicas: 28 | type: integer 29 | minimum: 1 30 | maximum: 32 31 | image: 32 | type: string 33 | serviceName: 34 | type: string 35 | volumesPerServer: 36 | type: integer 37 | mountPath: 38 | type: string 39 | podManagementPolicy: 40 | type: string 41 | enum: [Parallel,OrderedReady] 42 | default: Parallel 43 | requestAutoCert: 44 | type: boolean 45 | default: false 46 | version: 47 | type: string 48 | mountpath: 49 | type: string 50 | subpath: 51 | type: string 52 | mcs: 53 | type: object 54 | x-kubernetes-preserve-unknown-fields: true 55 | properties: 56 | image: 57 | type: string 58 | replicas: 59 | type: integer 60 | default: 2 61 | mcsSecret: 62 | type: object 63 | properties: 64 | name: 65 | type: string 66 | kes: 67 | type: object 68 | x-kubernetes-preserve-unknown-fields: true 69 | properties: 70 | image: 71 | type: string 72 | replicas: 73 | type: integer 74 | default: 2 75 | kesSecret: 76 | type: object 77 | properties: 78 | name: 79 | type: string 80 | status: 81 | type: object 82 | properties: 83 | currentState: 84 | type: string 85 | subresources: 86 | # status enables the status subresource. 87 | status: {} 88 | additionalPrinterColumns: 89 | - name: Current State 90 | type: string 91 | jsonPath: ".status.currentState" 92 | --- 93 | apiVersion: rbac.authorization.k8s.io/v1beta1 94 | kind: ClusterRole 95 | metadata: 96 | name: minio-operator-role 97 | rules: 98 | - apiGroups: 99 | - "" 100 | resources: 101 | - namespaces 102 | - secrets 103 | - pods 104 | - services 105 | - events 106 | verbs: 107 | - get 108 | - watch 109 | - create 110 | - list 111 | - delete 112 | - apiGroups: 113 | - apps 114 | resources: 115 | - statefulsets 116 | - deployments 117 | verbs: 118 | - get 119 | - create 120 | - list 121 | - patch 122 | - watch 123 | - update 124 | - delete 125 | - apiGroups: 126 | - batch 127 | resources: 128 | - jobs 129 | verbs: 130 | - get 131 | - create 132 | - list 133 | - patch 134 | - watch 135 | - update 136 | - delete 137 | - apiGroups: 138 | - "certificates.k8s.io" 139 | resources: 140 | - "certificatesigningrequests" 141 | - "certificatesigningrequests/approval" 142 | - "certificatesigningrequests/status" 143 | verbs: 144 | - update 145 | - create 146 | - get 147 | - delete 148 | - apiGroups: 149 | - operator.min.io 150 | resources: 151 | - "*" 152 | verbs: 153 | - "*" 154 | - apiGroups: 155 | - min.io 156 | resources: 157 | - "*" 158 | verbs: 159 | - "*" 160 | --- 161 | apiVersion: v1 162 | kind: ServiceAccount 163 | metadata: 164 | name: minio-operator 165 | namespace: minio 166 | --- 167 | kind: ClusterRoleBinding 168 | apiVersion: rbac.authorization.k8s.io/v1beta1 169 | metadata: 170 | name: minio-operator-binding 171 | roleRef: 172 | apiGroup: rbac.authorization.k8s.io 173 | kind: ClusterRole 174 | name: minio-operator-role 175 | subjects: 176 | - kind: ServiceAccount 177 | name: minio-operator 178 | namespace: minio 179 | --- 180 | apiVersion: apps/v1 181 | kind: Deployment 182 | metadata: 183 | name: minio-operator 184 | namespace: minio 185 | spec: 186 | replicas: 1 187 | selector: 188 | matchLabels: 189 | name: minio-operator 190 | template: 191 | metadata: 192 | labels: 193 | name: minio-operator 194 | spec: 195 | serviceAccountName: 
minio-operator 196 | containers: 197 | - name: minio-operator 198 | image: minio/k8s-operator:2.0.9 199 | imagePullPolicy: IfNotPresent 200 | # To specify cluster domain, un comment the following: 201 | # env: 202 | # - name: CLUSTER_DOMAIN 203 | # value: mycluster.mydomain 204 | -------------------------------------------------------------------------------- /installation/guides/okd-inventory.ini: -------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | glusterfs 7 | 8 | # Set variables common for all OSEv3 hosts 9 | [OSEv3:vars] 10 | 11 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 12 | ansible_ssh_user=root 13 | 14 | # Install Enterprise or Origin; set up ntp 15 | openshift_deployment_type=origin 16 | openshift_clock_enabled=true 17 | 18 | # Network/DNS Related 19 | openshift_master_default_subdomain=apps.192.168.1.81.nip.io 20 | osm_cluster_network_cidr=10.1.0.0/16 21 | osm_host_subnet_length=8 22 | openshift_portal_net=172.30.0.0/16 23 | openshift_docker_insecure_registries=0.0.0.0/0 24 | 25 | # CNS Storage 26 | openshift_storage_glusterfs_namespace=glusterfs 27 | openshift_storage_glusterfs_name=storage 28 | openshift_storage_glusterfs_heketi_wipe=true 29 | openshift_storage_glusterfs_wipe=true 30 | openshift_storage_glusterfs_storageclass_default=true 31 | openshift_storage_glusterfs_block_storageclass=true 32 | openshift_storage_glusterfs_block_host_vol_size=50 33 | 34 | # Automatically Deploy the router 35 | openshift_hosted_manage_router=true 36 | #openshift_router_selector={'node-role.kubernetes.io/infra':'true'} 37 | 38 | # Automatically deploy the registry using glusterfs 39 | openshift_hosted_manage_registry=true 40 | openshift_hosted_registry_storage_kind=glusterfs 41 | openshift_hosted_registry_storage_volume_size=10Gi 42 | #openshift_registry_selector={'node-role.kubernetes.io/infra':'true'} 43 | #openshift_hosted_registry_replicas=2 44 | 45 | # Disble Checks 46 | openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_availability,package_version 47 | 48 | # 49 | # Network Policies that are available: 50 | # redhat/openshift-ovs-networkpolicy # fine grained control 51 | # redhat/openshift-ovs-multitenant # each project gets it's own "private" network 52 | # redhat/openshift-ovs-subnet # "flat" network 53 | # 54 | # Network OVS Plugin to use 55 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 56 | 57 | # Uncomment when setting up logging/metrics/prometheus 58 | openshift_master_dynamic_provisioning_enabled=true 59 | dynamic_volumes_check=False 60 | 61 | # Logging 62 | openshift_logging_install_logging=true 63 | openshift_logging_es_pvc_dynamic=true 64 | openshift_logging_es_pvc_size=20Gi 65 | openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block 66 | openshift_logging_curator_nodeselector={'node-role.kubernetes.io/infra':'true'} 67 | openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} 68 | openshift_logging_kibana_nodeselector={'node-role.kubernetes.io/infra':'true'} 69 | openshift_logging_es_memory_limit=4G 70 | openshift_logging_elasticsearch_proxy_image_version="v1.0.0" 71 | openshift_logging_image_version=v3.11 72 | 73 | # Metrics 74 | openshift_metrics_install_metrics=true 75 | openshift_metrics_cassandra_pvc_size=20Gi 76 | openshift_metrics_cassandra_storage_type=dynamic 77 | 
openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block 78 | openshift_metrics_hawkular_nodeselector={'node-role.kubernetes.io/infra':'true'} 79 | openshift_metrics_heapster_nodeselector={'node-role.kubernetes.io/infra':'true'} 80 | openshift_metrics_cassandra_nodeselector={'node-role.kubernetes.io/infra':'true'} 81 | openshift_metrics_image_version=v3.11 82 | 83 | # Prometheus Metrics 84 | openshift_cluster_monitoring_operator_install=true 85 | openshift_cluster_monitoring_operator_prometheus_storage_enabled=true 86 | openshift_cluster_monitoring_operator_alertmanager_storage_enabled=true 87 | openshift_cluster_monitoring_operator_prometheus_storage_capacity=15Gi 88 | openshift_cluster_monitoring_operator_alertmanager_storage_capacity=15Gi 89 | openshift_cluster_monitoring_operator_node_selector={'node-role.kubernetes.io/infra':'true'} 90 | 91 | # OKD specific stuff 92 | openshift_cluster_monitoring_operator_install=true 93 | enable_excluders=False 94 | enable_docker_excluder=False 95 | ansible_service_broker_install=False 96 | containerized=True 97 | openshift_additional_repos=[{'id': 'centos-paas', 'name': 'centos-paas', 'baseurl' :'https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311', 'gpgcheck' :'0', 'enabled' :'1'}] 98 | 99 | # If using Route53 or you're pointed to the master with a "vanity" name 100 | openshift_master_public_api_url=https://ocp.192.168.1.81.nip.io:8443 101 | openshift_master_public_console_url=https://ocp.192.168.1.81.nip.io:8443/console 102 | openshift_master_api_port=8443 103 | openshift_master_console_port=8443 104 | 105 | # The following enabled htpasswd authentication 106 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] 107 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 108 | 109 | ## OpenShift host groups 110 | 111 | # host group for etcd 112 | [etcd] 113 | dhcp-host-81.cloud.chx 114 | 115 | # host group for masters - set scedulable to "true" for the web-console pod 116 | [masters] 117 | dhcp-host-81.cloud.chx openshift_schedulable=true 118 | 119 | # host group for nodes, includes region info 120 | [nodes] 121 | dhcp-host-81.cloud.chx openshift_node_group_name='node-config-master-infra' 122 | dhcp-host-60.cloud.chx openshift_node_group_name='node-config-compute' 123 | dhcp-host-21.cloud.chx openshift_node_group_name='node-config-compute' 124 | 125 | [glusterfs] 126 | # "standalone" glusterfs nodes STILL need to be in the "[nodes]" section 127 | dhcp-host-81.cloud.chx glusterfs_ip=192.168.1.81 glusterfs_zone=1 glusterfs_devices='[ "/dev/vdc" ]' 128 | dhcp-host-60.cloud.chx glusterfs_ip=192.168.1.60 glusterfs_zone=2 glusterfs_devices='[ "/dev/vdc" ]' 129 | dhcp-host-21.cloud.chx glusterfs_ip=192.168.1.21 glusterfs_zone=3 glusterfs_devices='[ "/dev/vdc" ]' 130 | ## 131 | ## 132 | -------------------------------------------------------------------------------- /aws_refarch2/README.md: -------------------------------------------------------------------------------- 1 | # AWS Installer 2 | 3 | This installer sets up OpenShift on AWS in a HA configuration. This is a summary of the [official documentation](https://access.redhat.com/documentation/en-us/reference_architectures/2017/html-single/deploying_and_managing_openshift_container_platform_3.6_on_amazon_web_services/) 4 | 5 | In the end you'll have the following. 
6 | 7 | ![aws refarch overview](./ose-on-aws-architecture.jpg) 8 | 9 | You will need the following to get started 10 | * An AWS IAM account 11 | * This account pretty much needs full access 12 | * AWS Secret Key 13 | * AWS Key ID 14 | * Delegate a Subdomain to AWS Route53 15 | * OpenShift Subs 16 | * A host to launch the commands from 17 | 18 | ## Set Up Host 19 | 20 | First install the following packages 21 | 22 | ``` 23 | subscription-manager register --username=christian.hernandez@redhat.com 24 | subscription-manager list --available 25 | subscription-manager attach --pool 26 | subscription-manager repos --disable=* 27 | subscription-manager repos --enable rhel-7-server-rpms 28 | subscription-manager repos --enable rhel-7-server-optional-rpms 29 | subscription-manager repos --enable rhel-7-server-ose-3.5-rpms 30 | subscription-manager repos --enable rhel-7-fast-datapath-rpms 31 | yum -y install yum-utils 32 | yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm 33 | yum-config-manager --disable epel 34 | yum -y install ansible atomic-openshift-utils 35 | yum -y install --enablerepo=epel \ 36 | python2-boto \ 37 | python2-boto3 \ 38 | pyOpenSSL \ 39 | git \ 40 | python-netaddr \ 41 | python-click \ 42 | python-httplib2 vim bash-completion 43 | ``` 44 | 45 | Next, either copy or create an ssh-key; below is how to generate an SSH key (skip this if you already have one) 46 | 47 | ``` 48 | ssh-keygen 49 | ``` 50 | 51 | Create an `~/.ssh/config` file with the following (substitute your delegated domain and ssh key where appropriate) 52 | 53 | ``` 54 | Host bastion 55 | HostName bastion.aws.chx.cloud 56 | User ec2-user 57 | StrictHostKeyChecking no 58 | ProxyCommand none 59 | CheckHostIP no 60 | ForwardAgent yes 61 | IdentityFile /root/.ssh/id_rsa 62 | 63 | Host *.aws.chx.cloud 64 | ProxyCommand ssh ec2-user@bastion -W %h:%p 65 | user ec2-user 66 | IdentityFile /root/.ssh/id_rsa 67 | ``` 68 | 69 | You are basically setting up a way to "tunnel" through your env. 70 | 71 | Next, clone the repo 72 | 73 | ``` 74 | cd 75 | git clone https://github.com/openshift/openshift-ansible-contrib.git 76 | ``` 77 | 78 | This env is going to use GitHub for auth. Create an Org (this is easy). Then Go to `Settings ~> oAuth Applications` and register a new app. I used the following settings 79 | 80 | ``` 81 | Homepage URL: https://openshift-master.aws.chx.cloud 82 | Authorization callback URL: https://openshift-master.aws.chx.cloud/oauth2callback/github 83 | ``` 84 | 85 | 86 | ## Provision The Environment 87 | 88 | To provision the env; you need to be in the right dir. Enter the dir and run the help menu. For the most, part it's well documented 89 | 90 | ``` 91 | sudo setenforce 0 92 | cd openshift-ansible-contrib/reference-architecture/aws-ansible 93 | ./ose-on-aws.py --help 94 | ``` 95 | 96 | **NOTE** 97 | I had to edit the following file `openshift-ansible-contrib/reference-architecture/aws-ansible/inventory/aws/hosts/ec2.ini` and change the entry (around line 14) `regions = all` to the entry below (substitute for your region). This is a current bug and you may not need to do this. Check the [[https://github.com/openshift/openshift-ansible-contrib/issues/369|issues]] page for info. __UPDATE__: I hadn't had to do this the last two times i've ran this but YMMV. 
98 | 99 | ``` 100 | regions = us-east-1 101 | ``` 102 | 103 | Export your AWS env 104 | ``` 105 | export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXX 106 | export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX 107 | ``` 108 | 109 | I ran the following to provision the env. (I used TMUX so I can disconnect and comeback later) 110 | 111 | ``` 112 | ./ose-on-aws.py \ 113 | --stack-name=ocp-chx \ 114 | --ami=ami-b63769a1 \ 115 | --region=us-east-1 \ 116 | --public-hosted-zone=aws.chx.cloud \ 117 | --app-dns-prefix=apps \ 118 | --rhsm-user=christian.hernandez@redhat.com \ 119 | --rhsm-password=rhsecret \ 120 | --rhsm-pool="Employee SKU" \ 121 | --github-client-secret=githubclientsecret \ 122 | --github-client-id=githubclientid \ 123 | --github-organization=openshift-tigerteam \ 124 | --keypair=chernand-ec2 \ 125 | --create-key=yes \ 126 | --key-path=/root/.ssh/id_rsa.pub 127 | ``` 128 | 129 | Things to note 130 | 131 | * If you want another github org just pass multiple `--github-organization` 132 | * The `--keypair` is the NAME you want it in AWS 133 | * And `--create-key` means that you're going to upload this to AWS 134 | * The option `--rhsm-pool` cloud be `"60 Day Supported OpenShift Enterprise, 2 Cores Evaluation"` 135 | * You probably want to pass `--node-instance-type` and/or `--app-instance-type` and chose either `m4.xlarge` for 10-20 users or `m4.2xlarge` for 20-40 users (more expensive but enough resources) 136 | 137 | ## Add A Node 138 | 139 | Adding a node is easy. Just make sure you're in that same dir 140 | 141 | ``` 142 | ./add-node.py \ 143 | --existing-stack=ocp-chx \ 144 | --rhsm-user=christian.hernandez@redhat.com \ 145 | --rhsm-password=rhsecret \ 146 | --public-hosted-zone=aws.chx.cloud \ 147 | --keypair=chernand-ec2 \ 148 | --rhsm-pool="Employee SKU" \ 149 | --use-cloudformation-facts \ 150 | --shortname=ose-app-node03 \ 151 | --subnet-id=subnet-67cefb05 152 | ``` 153 | 154 | Two things to note 155 | * Use the proper `--shortname` - just look on AWS 156 | * The `--subnet-id` means what zone you want it in...makes sure it's what you want. 157 | -------------------------------------------------------------------------------- /ansible_hostfiles/singlemaster-crio: -------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | glusterfs 7 | 8 | # Set variables common for all OSEv3 hosts 9 | [OSEv3:vars] 10 | 11 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 12 | ansible_ssh_user=root 13 | #ansible_sudo=true 14 | #ansible_become=yes 15 | 16 | # Install Enterprise or Origin; set up ntp 17 | openshift_deployment_type=openshift-enterprise 18 | openshift_clock_enabled=true 19 | 20 | # Registry auth needed if you're using RH's registry. 
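# (added note) The lookup('env', ...) expressions just below read these values
# from the shell that runs ansible-playbook, so export them first. A minimal
# example with placeholder values:
#   export OREG_AUTH_USER='rhn-or-service-account-user'
#   export OREG_AUTH_PASSWORD='registry-password-or-token'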
21 | oreg_auth_user="{{ lookup('env','OREG_AUTH_USER') }}" 22 | oreg_auth_password="{{ lookup('env','OREG_AUTH_PASSWORD') }}" 23 | 24 | # User CRI-O while still using docker for s2i builds 25 | openshift_use_crio=True 26 | openshift_use_crio_only=False 27 | openshift_crio_enable_docker_gc=True 28 | 29 | ### Disconnected Install 30 | #openshift_docker_blocked_registries=registry.access.redhat.com,docker.io,registry.redhat.io 31 | #openshift_docker_additional_registries=registry.cloud.chx 32 | #oreg_url=registry.cloud.chx/openshift3/ose-${component}:${version} 33 | #openshift_examples_modify_imagestreams=true 34 | #openshift_release=v3.11 35 | #openshift_image_tag=v3.11 36 | ##openshift_pkg_version=-3.11.16 37 | ### 38 | 39 | # Network/DNS Related 40 | openshift_master_default_subdomain=apps.192.168.1.96.nip.io 41 | osm_cluster_network_cidr=10.1.0.0/16 42 | osm_host_subnet_length=8 43 | openshift_portal_net=172.30.0.0/16 44 | # This can be set to 0.0.0.0/0 for disconnected installs 45 | openshift_docker_insecure_registries=172.30.0.0/16 46 | 47 | # CNS Storage 48 | openshift_storage_glusterfs_namespace=glusterfs 49 | openshift_storage_glusterfs_name=storage 50 | openshift_storage_glusterfs_heketi_wipe=true 51 | openshift_storage_glusterfs_wipe=true 52 | openshift_storage_glusterfs_storageclass_default=true 53 | openshift_storage_glusterfs_block_storageclass=true 54 | openshift_storage_glusterfs_block_host_vol_size=50 55 | 56 | # Automatically Deploy the router 57 | openshift_hosted_manage_router=true 58 | #openshift_router_selector={'node-role.kubernetes.io/infra':'true'} 59 | 60 | # Automatically deploy the registry using glusterfs 61 | openshift_hosted_manage_registry=true 62 | openshift_hosted_registry_storage_kind=glusterfs 63 | openshift_hosted_registry_storage_volume_size=25Gi 64 | #openshift_registry_selector={'node-role.kubernetes.io/infra':'true'} 65 | #openshift_hosted_registry_replicas=2 66 | 67 | # Disble Checks 68 | openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_availability,package_version 69 | 70 | # 71 | # Network Policies that are available: 72 | # redhat/openshift-ovs-networkpolicy # fine grained control 73 | # redhat/openshift-ovs-multitenant # each project gets it's own "private" network 74 | # redhat/openshift-ovs-subnet # "flat" network 75 | # 76 | # Network OVS Plugin to use 77 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 78 | 79 | # Uncomment when setting up logging/metrics/prometheus 80 | openshift_master_dynamic_provisioning_enabled=true 81 | dynamic_volumes_check=False 82 | 83 | # Logging 84 | openshift_logging_install_logging=true 85 | openshift_logging_es_pvc_dynamic=true 86 | openshift_logging_es_pvc_size=20Gi 87 | openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block 88 | openshift_logging_curator_nodeselector={'node-role.kubernetes.io/infra':'true'} 89 | openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} 90 | openshift_logging_kibana_nodeselector={'node-role.kubernetes.io/infra':'true'} 91 | openshift_logging_es_memory_limit=4G 92 | 93 | # Metrics 94 | openshift_metrics_install_metrics=true 95 | openshift_metrics_cassandra_pvc_size=20Gi 96 | openshift_metrics_cassandra_storage_type=dynamic 97 | openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block 98 | openshift_metrics_hawkular_nodeselector={'node-role.kubernetes.io/infra':'true'} 99 | openshift_metrics_heapster_nodeselector={'node-role.kubernetes.io/infra':'true'} 
100 | openshift_metrics_cassandra_nodeselector={'node-role.kubernetes.io/infra':'true'} 101 | 102 | # Prometheus Metrics 103 | openshift_cluster_monitoring_operator_install=true 104 | openshift_cluster_monitoring_operator_prometheus_storage_enabled=true 105 | openshift_cluster_monitoring_operator_alertmanager_storage_enabled=true 106 | openshift_cluster_monitoring_operator_prometheus_storage_capacity=15Gi 107 | openshift_cluster_monitoring_operator_alertmanager_storage_capacity=15Gi 108 | openshift_cluster_monitoring_operator_node_selector={'node-role.kubernetes.io/infra':'true'} 109 | 110 | ## Ansible Service Broker - only install it if you REALLY need it. 111 | ansible_service_broker_install=false 112 | #openshift_hosted_etcd_storage_kind=dynamic 113 | #openshift_hosted_etcd_storage_volume_name=etcd-vol2 114 | #openshift_hosted_etcd_storage_volume_size=10Gi 115 | #ansible_service_broker_local_registry_whitelist=['.*-apb$'] 116 | 117 | # If using Route53 or you're pointed to the master with a "vanity" name 118 | openshift_master_public_api_url=https://ocp.192.168.1.96.nip.io:8443 119 | openshift_master_public_console_url=https://ocp.192.168.1.96.nip.io:8443/console 120 | openshift_master_cluster_public_hostname=ocp.192.168.1.96.nip.io 121 | openshift_master_api_port=8443 122 | openshift_master_console_port=8443 123 | 124 | # The following enabled htpasswd authentication 125 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] 126 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 127 | 128 | ## OpenShift host groups 129 | 130 | # host group for etcd 131 | [etcd] 132 | dhcp-host-96.cloud.chx 133 | 134 | # host group for masters - set scedulable to "true" for the web-console pod 135 | [masters] 136 | dhcp-host-96.cloud.chx openshift_schedulable=true 137 | 138 | # host group for nodes, includes region info 139 | [nodes] 140 | dhcp-host-96.cloud.chx openshift_node_group_name='node-config-master-infra-crio' 141 | dhcp-host-30.cloud.chx openshift_node_group_name='node-config-compute-crio' 142 | dhcp-host-78.cloud.chx openshift_node_group_name='node-config-compute-crio' 143 | 144 | [glusterfs] 145 | # "standalone" glusterfs nodes STILL need to be in the "[nodes]" section 146 | dhcp-host-96.cloud.chx glusterfs_ip=192.168.1.96 glusterfs_zone=1 glusterfs_devices='[ "/dev/vdc" ]' 147 | dhcp-host-30.cloud.chx glusterfs_ip=192.168.1.30 glusterfs_zone=2 glusterfs_devices='[ "/dev/vdc" ]' 148 | dhcp-host-78.cloud.chx glusterfs_ip=192.168.1.78 glusterfs_zone=3 glusterfs_devices='[ "/dev/vdc" ]' 149 | ## 150 | ## 151 | -------------------------------------------------------------------------------- /ansible_hostfiles/singlemaster: -------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | glusterfs 7 | 8 | # Set variables common for all OSEv3 hosts 9 | [OSEv3:vars] 10 | 11 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 12 | ansible_ssh_user=root 13 | #ansible_sudo=true 14 | #ansible_become=yes 15 | 16 | # Install Enterprise or Origin; set up ntp 17 | openshift_deployment_type=openshift-enterprise 18 | openshift_clock_enabled=true 19 | 20 | # Registry auth needed if you're using RH's registry. 
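# (added note) For registry.redhat.io these are usually a registry service
# account (username of the form '12345678|accountname') rather than a personal
# login; either way the values are pulled from the environment at run time.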
21 | oreg_auth_user="{{ lookup('env','OREG_AUTH_USER') }}" 22 | oreg_auth_password="{{ lookup('env','OREG_AUTH_PASSWORD') }}" 23 | 24 | ### Disconnected Install 25 | #openshift_docker_blocked_registries=registry.access.redhat.com,docker.io,registry.redhat.io 26 | #openshift_docker_additional_registries=registry.cloud.chx 27 | #oreg_url=registry.cloud.chx/openshift3/ose-${component}:${version} 28 | #openshift_examples_modify_imagestreams=true 29 | #openshift_release=v3.11 30 | #openshift_image_tag=v3.11 31 | ##openshift_pkg_version=-3.11.16 32 | ### 33 | 34 | # Network/DNS Related 35 | openshift_master_default_subdomain=apps.cloud.chx 36 | osm_cluster_network_cidr=10.1.0.0/16 37 | osm_host_subnet_length=8 38 | openshift_portal_net=172.30.0.0/16 39 | # This can be set to 0.0.0.0/0 for disconnected installs 40 | openshift_docker_insecure_registries=172.30.0.0/16 41 | 42 | # CNS Storage 43 | openshift_storage_glusterfs_namespace=glusterfs 44 | openshift_storage_glusterfs_name=storage 45 | openshift_storage_glusterfs_heketi_wipe=true 46 | openshift_storage_glusterfs_wipe=true 47 | openshift_storage_glusterfs_storageclass_default=true 48 | openshift_storage_glusterfs_block_storageclass=true 49 | openshift_storage_glusterfs_block_host_vol_size=50 50 | openshift_storage_glusterfs_image=registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11 51 | openshift_storage_glusterfs_block_image=registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11 52 | openshift_storage_glusterfs_s3_image=registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:v3.11 53 | openshift_storage_glusterfs_heketi_image=registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11 54 | 55 | # Automatically Deploy the router 56 | openshift_hosted_manage_router=true 57 | #openshift_router_selector={'node-role.kubernetes.io/infra':'true'} 58 | 59 | # Automatically deploy the registry using glusterfs 60 | openshift_hosted_manage_registry=true 61 | openshift_hosted_registry_storage_kind=glusterfs 62 | openshift_hosted_registry_storage_volume_size=25Gi 63 | #openshift_registry_selector={'node-role.kubernetes.io/infra':'true'} 64 | #openshift_hosted_registry_replicas=2 65 | 66 | # Disble Checks 67 | openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_availability,package_version 68 | 69 | # 70 | # Network Policies that are available: 71 | # redhat/openshift-ovs-networkpolicy # fine grained control 72 | # redhat/openshift-ovs-multitenant # each project gets it's own "private" network 73 | # redhat/openshift-ovs-subnet # "flat" network 74 | # 75 | # Network OVS Plugin to use 76 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 77 | 78 | # Uncomment when setting up logging/metrics/prometheus 79 | openshift_master_dynamic_provisioning_enabled=true 80 | dynamic_volumes_check=False 81 | 82 | # Logging 83 | openshift_logging_install_logging=true 84 | openshift_logging_es_pvc_dynamic=true 85 | openshift_logging_es_pvc_size=20Gi 86 | openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block 87 | openshift_logging_curator_nodeselector={'node-role.kubernetes.io/infra':'true'} 88 | openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} 89 | openshift_logging_kibana_nodeselector={'node-role.kubernetes.io/infra':'true'} 90 | openshift_logging_es_memory_limit=4G 91 | 92 | # Metrics 93 | openshift_metrics_install_metrics=true 94 | openshift_metrics_cassandra_pvc_size=20Gi 95 | openshift_metrics_cassandra_storage_type=dynamic 96 | 
openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block 97 | openshift_metrics_hawkular_nodeselector={'node-role.kubernetes.io/infra':'true'} 98 | openshift_metrics_heapster_nodeselector={'node-role.kubernetes.io/infra':'true'} 99 | openshift_metrics_cassandra_nodeselector={'node-role.kubernetes.io/infra':'true'} 100 | 101 | # Prometheus Metrics 102 | openshift_cluster_monitoring_operator_install=true 103 | openshift_cluster_monitoring_operator_prometheus_storage_enabled=true 104 | openshift_cluster_monitoring_operator_alertmanager_storage_enabled=true 105 | openshift_cluster_monitoring_operator_prometheus_storage_capacity=15Gi 106 | openshift_cluster_monitoring_operator_alertmanager_storage_capacity=15Gi 107 | openshift_cluster_monitoring_operator_node_selector={'node-role.kubernetes.io/infra':'true'} 108 | 109 | 110 | ## Ansible Service Broker - only install it if you REALLY need it. 111 | ansible_service_broker_install=false 112 | #openshift_hosted_etcd_storage_kind=dynamic 113 | #openshift_hosted_etcd_storage_volume_name=etcd-vol2 114 | #openshift_hosted_etcd_storage_volume_size=10Gi 115 | #ansible_service_broker_local_registry_whitelist=['.*-apb$'] 116 | 117 | # If using Route53 or you're pointed to the master with a "vanity" name 118 | openshift_master_public_api_url=https://ocp.cloud.chx:8443 119 | openshift_master_public_console_url=https://ocp.cloud.chx:8443/console 120 | openshift_master_cluster_public_hostname=ocp.cloud.chx 121 | openshift_master_api_port=8443 122 | openshift_master_console_port=8443 123 | 124 | # The following enabled htpasswd authentication 125 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] 126 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 127 | 128 | ## OpenShift host groups 129 | 130 | # host group for etcd 131 | [etcd] 132 | master1.cloud.chx 133 | 134 | # host group for masters - set scedulable to "true" for the web-console pod 135 | [masters] 136 | master1.cloud.chx openshift_schedulable=true 137 | 138 | # host group for nodes, includes region info 139 | [nodes] 140 | master1.cloud.chx openshift_node_group_name='node-config-master-infra' 141 | app1.cloud.chx openshift_node_group_name='node-config-compute' 142 | app2.cloud.chx openshift_node_group_name='node-config-compute' 143 | app3.cloud.chx openshift_node_group_name='node-config-compute' 144 | 145 | [glusterfs] 146 | # "standalone" glusterfs nodes STILL need to be in the "[nodes]" section 147 | app1.cloud.chx glusterfs_ip=192.168.1.32 glusterfs_zone=1 glusterfs_devices='[ "/dev/vdc" ]' 148 | app2.cloud.chx glusterfs_ip=192.168.1.42 glusterfs_zone=2 glusterfs_devices='[ "/dev/vdc" ]' 149 | app3.cloud.chx glusterfs_ip=192.168.1.52 glusterfs_zone=3 glusterfs_devices='[ "/dev/vdc" ]' 150 | ## 151 | ## 152 | -------------------------------------------------------------------------------- /operators/docs/clusterd.md: -------------------------------------------------------------------------------- 1 | # Create Ansible Operator. 
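This walkthrough assumes the `operator-sdk` CLI (with the Ansible plugin), `kustomize`, `podman`, and `oc` are already installed; a quick, optional sanity check before starting:

```shell
# confirm the tooling is on the PATH (any recent versions should do)
operator-sdk version
kustomize version
podman --version
oc version --client
```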
2 | 3 | Highlevel notes 4 | 5 | ## Init 6 | 7 | Scaffold 8 | 9 | ```shell 10 | mkdir welcome-php 11 | cd welcome-php 12 | operator-sdk init --plugins=ansible --domain example.com 13 | ``` 14 | 15 | Create your apis and kinds 16 | 17 | ```shell 18 | operator-sdk create api --group welcome --version v1alpha1 --kind Welcome --generate-role 19 | ``` 20 | 21 | ## Create Playbook 22 | 23 | Edit the tasks file 24 | 25 | ```shell 26 | vim roles/welcome/tasks/main.yml 27 | ``` 28 | 29 | Use the `k8s` module 30 | 31 | ```yaml 32 | --- 33 | # tasks file for Welcome 34 | 35 | - name: Create welcome deployment 36 | k8s: 37 | state: present 38 | definition: "{{ lookup('template', 'deployment.yaml.j2') }}" 39 | 40 | - name: Create welcome service 41 | k8s: 42 | state: present 43 | definition: "{{ lookup('template', 'service.yaml.j2') }}" 44 | ``` 45 | 46 | Create deployment template 47 | 48 | ```shell 49 | vim roles/welcome/templates/deployment.yaml.j2 50 | ``` 51 | 52 | Deployment template example 53 | 54 | ```yaml 55 | apiVersion: apps/v1 56 | kind: Deployment 57 | metadata: 58 | creationTimestamp: null 59 | labels: 60 | app: "{{ ansible_operator_meta.name }}-welcome" 61 | name: welcome 62 | namespace: "{{ ansible_operator_meta.namespace }}" 63 | spec: 64 | replicas: {{ instances | int }} 65 | selector: 66 | matchLabels: 67 | app: "{{ ansible_operator_meta.name }}-welcome" 68 | strategy: {} 69 | template: 70 | metadata: 71 | creationTimestamp: null 72 | labels: 73 | app: "{{ ansible_operator_meta.name }}-welcome" 74 | spec: 75 | containers: 76 | - image: quay.io/redhatworkshops/welcome-php:latest 77 | name: welcome-php 78 | resources: {} 79 | ``` 80 | 81 | Create service teamplte 82 | 83 | ```shell 84 | vim roles/welcome/templates/service.yaml.j2 85 | ``` 86 | 87 | Service example 88 | 89 | ```yaml 90 | apiVersion: v1 91 | kind: Service 92 | metadata: 93 | creationTimestamp: null 94 | labels: 95 | app: "{{ ansible_operator_meta.name }}-welcome" 96 | name: "{{ ansible_operator_meta.name }}-welcome" 97 | namespace: "{{ ansible_operator_meta.namespace }}" 98 | spec: 99 | ports: 100 | - port: 8080 101 | protocol: TCP 102 | targetPort: 8080 103 | selector: 104 | app: "{{ ansible_operator_meta.name }}-welcome" 105 | status: 106 | loadBalancer: {} 107 | ``` 108 | 109 | Note `ansible_operator_meta.` prefix for the downward API 110 | 111 | Create default values for your variables 112 | 113 | ```shell 114 | vim roles/welcome/defaults/main.yml 115 | ``` 116 | 117 | Example `main.yml` file 118 | 119 | ```yaml 120 | --- 121 | # defaults file for Welcome 122 | instances: 1 123 | ``` 124 | 125 | Create sane sample CR 126 | 127 | ```shell 128 | vim config/samples/welcome_v1alpha1_welcome.yaml 129 | ``` 130 | 131 | Sample CR 132 | 133 | ```yaml 134 | apiVersion: welcome.example.com/v1alpha1 135 | kind: Welcome 136 | metadata: 137 | name: welcome-sample 138 | spec: 139 | instances: 3 140 | ``` 141 | 142 | Configure RBAC for cluster scoped operator 143 | 144 | ```shell 145 | vim config/rbac/role-extra.yaml 146 | ``` 147 | 148 | Sample RBAC 149 | 150 | ```yaml 151 | apiVersion: rbac.authorization.k8s.io/v1 152 | kind: ClusterRole 153 | metadata: 154 | creationTimestamp: null 155 | name: welcome-operator-user 156 | labels: 157 | rbac.authorization.k8s.io/aggregate-to-admin: "true" 158 | rules: 159 | - apiGroups: 160 | - welcome.example.com 161 | resources: 162 | - '*' 163 | verbs: 164 | - '*' 165 | ``` 166 | 167 | Add it to the kustomize file 168 | 169 | ```shell 170 | vim config/rbac/kustomization.yaml 171 | ``` 172 | 173 
| The file should look like this (note what I added) 174 | 175 | ```yaml 176 | resources: 177 | - role.yaml 178 | - role_binding.yaml 179 | - leader_election_role.yaml 180 | - leader_election_role_binding.yaml 181 | # Comment the following 4 lines if you want to disable 182 | # the auth proxy (https://github.com/brancz/kube-rbac-proxy) 183 | # which protects your /metrics endpoint. 184 | - auth_proxy_service.yaml 185 | - auth_proxy_role.yaml 186 | - auth_proxy_role_binding.yaml 187 | - auth_proxy_client_clusterrole.yaml 188 | # Extra stuff I added - CHX 189 | - role-extra.yaml 190 | ``` 191 | 192 | Since we're adding services, edit the `role.yaml` file to allow it from the core API group 193 | 194 | ```shell 195 | vim config/rbac/role.yaml 196 | ``` 197 | 198 | It should look like this 199 | 200 | ```yaml 201 | --- 202 | apiVersion: rbac.authorization.k8s.io/v1 203 | kind: ClusterRole 204 | metadata: 205 | name: manager-role 206 | rules: 207 | ## 208 | ## Base operator rules 209 | ## 210 | - apiGroups: 211 | - "" 212 | resources: 213 | - secrets 214 | - services # added this - CHX 215 | - pods 216 | - pods/exec 217 | - pods/log 218 | verbs: 219 | - create 220 | - delete 221 | - get 222 | - list 223 | - patch 224 | - update 225 | - watch 226 | - apiGroups: 227 | - apps 228 | resources: 229 | - deployments 230 | - daemonsets 231 | - replicasets 232 | - statefulsets 233 | verbs: 234 | - create 235 | - delete 236 | - get 237 | - list 238 | - patch 239 | - update 240 | - watch 241 | ## 242 | ## Rules for welcome.example.com/v1alpha1, Kind: Welcome 243 | ## 244 | - apiGroups: 245 | - welcome.example.com 246 | resources: 247 | - welcomes 248 | - welcomes/status 249 | verbs: 250 | - create 251 | - delete 252 | - get 253 | - list 254 | - patch 255 | - update 256 | - watch 257 | # +kubebuilder:scaffold:rules 258 | ``` 259 | 260 | ## Build the image 261 | 262 | Login to quay 263 | 264 | ```shell 265 | podman login quay.io 266 | ``` 267 | 268 | Build the operator image and push to Quay (in my case, the image already existed...you may need to create it) 269 | 270 | ```shell 271 | make docker-build docker-push IMG=quay.io/christianh814/welcome-php-operator:latest 272 | ``` 273 | 274 | ## Build the Operator 275 | 276 | Export your image `IMG` var 277 | 278 | ```shell 279 | export IMG=quay.io/christianh814/welcome-php-operator:latest 280 | ``` 281 | 282 | Set the image var for `kustomize` to use 283 | 284 | ```shell 285 | cd config/manager && kustomize edit set image controller=${IMG} && cd ../.. 
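# (added note) 'kustomize edit set image' only updates the images: entry in
# config/manager/kustomization.yaml to point at ${IMG}; nothing is deployed
# until the rendered manifest is applied in the later steps.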
286 | ``` 287 | 288 | Generate the manifest yaml 289 | 290 | ```shell 291 | kustomize build config/default > /tmp/welcome-operator.yaml 292 | ``` 293 | 294 | ## Deploy Operator 295 | 296 | Deploy the operator (it'll end up in `$OPERATOR_NAME-system` namespace) 297 | 298 | ```shell 299 | oc create -f /tmp/welcome-operator.yaml 300 | ``` 301 | 302 | ## Deploy instance 303 | 304 | Deploy an instance (as a regular user) 305 | 306 | ```shell 307 | oc login -y ocp-developer 308 | oc new-project myspace 309 | oc create -f config/samples/welcome_v1alpha1_welcome.yaml 310 | ``` 311 | -------------------------------------------------------------------------------- /ansible_hostfiles/singlemaster-3.11: -------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | glusterfs 7 | 8 | # Set variables common for all OSEv3 hosts 9 | [OSEv3:vars] 10 | 11 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 12 | ansible_ssh_user=root 13 | #ansible_sudo=true 14 | #ansible_become=yes 15 | 16 | # Install Enterprise or Origin; set up ntp 17 | openshift_deployment_type=openshift-enterprise 18 | openshift_clock_enabled=true 19 | 20 | # Registry auth needed if you're using RH's registry. 21 | oreg_auth_user="{{ lookup('env','OREG_AUTH_USER') }}" 22 | oreg_auth_password="{{ lookup('env','OREG_AUTH_PASSWORD') }}" 23 | 24 | ### Disconnected Install 25 | #openshift_docker_blocked_registries=registry.access.redhat.com,docker.io,registry.redhat.io 26 | #openshift_docker_additional_registries=registry.cloud.chx 27 | #oreg_url=registry.cloud.chx/openshift3/ose-${component}:${version} 28 | #openshift_examples_modify_imagestreams=true 29 | #openshift_release=v3.11 30 | #openshift_image_tag=v3.11 31 | ### 32 | 33 | # Network/DNS Related 34 | openshift_master_default_subdomain=apps.cloud.chx 35 | osm_cluster_network_cidr=10.1.0.0/16 36 | osm_host_subnet_length=8 37 | openshift_portal_net=172.30.0.0/16 38 | # This can be set to 0.0.0.0/0 for disconnected installs 39 | openshift_docker_insecure_registries=172.30.0.0/16 40 | #container_runtime_docker_storage_setup_device=/dev/nvme1n1 41 | 42 | # CNS Storage 43 | openshift_storage_glusterfs_namespace=glusterfs 44 | openshift_storage_glusterfs_name=storage 45 | openshift_storage_glusterfs_heketi_wipe=true 46 | openshift_storage_glusterfs_wipe=true 47 | openshift_storage_glusterfs_storageclass_default=true 48 | openshift_storage_glusterfs_block_storageclass=true 49 | openshift_storage_glusterfs_block_host_vol_size=50 50 | 51 | # Automatically Deploy the router 52 | openshift_hosted_manage_router=true 53 | #openshift_router_selector={'node-role.kubernetes.io/infra':'true'} 54 | 55 | # Automatically deploy the registry using glusterfs 56 | openshift_hosted_manage_registry=true 57 | openshift_hosted_registry_storage_kind=glusterfs 58 | openshift_hosted_registry_storage_volume_size=25Gi 59 | #openshift_registry_selector={'node-role.kubernetes.io/infra':'true'} 60 | #openshift_hosted_registry_replicas=2 61 | 62 | # Disble Checks 63 | openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_availability,package_version 64 | 65 | # 66 | # Network Policies that are available: 67 | # redhat/openshift-ovs-networkpolicy # fine grained control 68 | # redhat/openshift-ovs-multitenant # each project gets it's own "private" network 69 | # redhat/openshift-ovs-subnet # "flat" network 
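# (added example) only one plugin is active at a time; to switch, change
# os_sdn_network_plugin_name below, e.g. for per-project isolation:
#   os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'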
70 | # 71 | # Network OVS Plugin to use 72 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 73 | 74 | # Uncomment when setting up logging/metrics/prometheus 75 | openshift_master_dynamic_provisioning_enabled=true 76 | dynamic_volumes_check=False 77 | 78 | # Logging 79 | openshift_logging_install_logging=true 80 | openshift_logging_es_pvc_dynamic=true 81 | openshift_logging_es_pvc_size=20Gi 82 | openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block 83 | openshift_logging_curator_nodeselector={'node-role.kubernetes.io/infra':'true'} 84 | openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} 85 | openshift_logging_kibana_nodeselector={'node-role.kubernetes.io/infra':'true'} 86 | openshift_logging_es_memory_limit=4G 87 | 88 | # Metrics 89 | openshift_metrics_install_metrics=true 90 | openshift_metrics_cassandra_pvc_size=20Gi 91 | openshift_metrics_cassandra_storage_type=dynamic 92 | openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block 93 | openshift_metrics_hawkular_nodeselector={'node-role.kubernetes.io/infra':'true'} 94 | openshift_metrics_heapster_nodeselector={'node-role.kubernetes.io/infra':'true'} 95 | openshift_metrics_cassandra_nodeselector={'node-role.kubernetes.io/infra':'true'} 96 | 97 | ## Prometheus Metrics 98 | openshift_hosted_prometheus_deploy=true 99 | openshift_prometheus_namespace=openshift-metrics 100 | openshift_prometheus_node_selector={'node-role.kubernetes.io/infra':'true'} 101 | 102 | # Prometheus storage config 103 | openshift_prometheus_storage_volume_name=prometheus 104 | openshift_prometheus_storage_volume_size=10Gi 105 | openshift_prometheus_storage_type='pvc' 106 | openshift_prometheus_sc_name="glusterfs-storage" 107 | 108 | # For prometheus-alertmanager 109 | openshift_prometheus_alertmanager_storage_volume_name=prometheus-alertmanager 110 | openshift_prometheus_alertmanager_storage_volume_size=10Gi 111 | openshift_prometheus_alertmanager_storage_type='pvc' 112 | openshift_prometheus_alertmanager_sc_name="glusterfs-storage" 113 | 114 | # For prometheus-alertbuffer 115 | openshift_prometheus_alertbuffer_storage_volume_name=prometheus-alertbuffer 116 | openshift_prometheus_alertbuffer_storage_volume_size=10Gi 117 | openshift_prometheus_alertbuffer_storage_type='pvc' 118 | openshift_prometheus_alertbuffer_sc_name="glusterfs-storage" 119 | 120 | ## Ansible Service Broker - only install it if you REALLY need it. 
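# (added note) the broker stays disabled below; the commented-out etcd storage
# and whitelist lines that follow only come into play if you set
# ansible_service_broker_install=true and uncomment them.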
121 | ansible_service_broker_install=false 122 | #openshift_hosted_etcd_storage_kind=dynamic 123 | #openshift_hosted_etcd_storage_volume_name=etcd-vol2 124 | #openshift_hosted_etcd_storage_volume_size=10Gi 125 | #ansible_service_broker_local_registry_whitelist=['.*-apb$'] 126 | 127 | # If using Route53 or you're pointed to the master with a "vanity" name 128 | openshift_master_public_api_url=https://ocp.cloud.chx:8443 129 | openshift_master_public_console_url=https://ocp.cloud.chx:8443/console 130 | openshift_master_cluster_public_hostname=ocp.cloud.chx 131 | openshift_master_api_port=8443 132 | openshift_master_console_port=8443 133 | 134 | # The following enabled htpasswd authentication 135 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] 136 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 137 | 138 | ## OpenShift host groups 139 | 140 | # host group for etcd 141 | [etcd] 142 | master1.cloud.chx 143 | 144 | # host group for masters - set scedulable to "true" for the web-console pod 145 | [masters] 146 | master1.cloud.chx openshift_schedulable=true 147 | 148 | # host group for nodes, includes region info 149 | [nodes] 150 | master1.cloud.chx openshift_node_group_name='node-config-master-infra' 151 | app1.cloud.chx openshift_node_group_name='node-config-compute' 152 | app2.cloud.chx openshift_node_group_name='node-config-compute' 153 | app3.cloud.chx openshift_node_group_name='node-config-compute' 154 | 155 | [glusterfs] 156 | # "standalone" glusterfs nodes STILL need to be in the "[nodes]" section 157 | app1.cloud.chx glusterfs_ip=192.168.1.32 glusterfs_zone=1 glusterfs_devices='[ "/dev/vdc" ]' 158 | app2.cloud.chx glusterfs_ip=192.168.1.42 glusterfs_zone=2 glusterfs_devices='[ "/dev/vdc" ]' 159 | app3.cloud.chx glusterfs_ip=192.168.1.52 glusterfs_zone=3 glusterfs_devices='[ "/dev/vdc" ]' 160 | ## 161 | ## 162 | -------------------------------------------------------------------------------- /ipa_on_ocp/README.md: -------------------------------------------------------------------------------- 1 | # IPA On OpenShift 2 | 3 | This assumes the following 4 | 5 | * DNS for the domain is pointed at the OCP router 6 | * Dynamic storage and/or a PV is available 7 | * You have admin access to OCP 8 | 9 | NOTE: This was tested with `oc cluster up` 10 | 11 | ## Prepare Cluster 12 | 13 | You must set the `container_manage_cgroup` SEBoolean to `on` on ALL servers 14 | 15 | ``` 16 | ansible all -m shell -a "setsebool -P container_manage_cgroup on" 17 | ``` 18 | 19 | It's helpful if you pre-pull the image (this is not required) 20 | 21 | ``` 22 | ansible all -m shell -a "docker pull freeipa/freeipa-server:centos-7" 23 | ``` 24 | 25 | ## Install FreeIPA 26 | 27 | First, create a project to "house" IPA and switch to it 28 | 29 | ``` 30 | oc new-project ldap 31 | oc project ldap 32 | ``` 33 | 34 | Create a service account and give it access to run pods as root 35 | 36 | ``` 37 | oc create serviceaccount useroot 38 | oc adm policy add-scc-to-user anyuid -z useroot 39 | oc patch scc anyuid -p '{"seccompProfiles":["docker/default"]}' 40 | ``` 41 | 42 | Use the upstream template to create an IPA instance 43 | 44 | ``` 45 | oc new-app --name ipa -f https://raw.githubusercontent.com/freeipa/freeipa-container/master/freeipa-server-openshift.json \ 46 | -p IPA_SERVER_IMAGE=freeipa-server:centos-7 \ 47 | -p IPA_ADMIN_PASSWORD=password \ 48 | -p TIMEOUT=1200 49 | ``` 50 | 51 | 
To trigger the deplopyment, import the image 52 | 53 | ``` 54 | oc import-image freeipa-server:centos-7 --from=freeipa/freeipa-server:centos-7 --confirm 55 | ``` 56 | 57 | If you get the following warning... 58 | 59 | ``` 60 | Configuring Kerberos KDC (krb5kdc). Estimated time: 30 seconds 61 | [1/9]: adding kerberos container to the directory 62 | [2/9]: configuring KDC 63 | [3/9]: initialize kerberos container 64 | WARNING: Your system is running out of entropy, you may experience long delays 65 | ``` 66 | 67 | Just run this on the node the pod is running on to speed it along (run ^c after a minute or two). You don't need this if you set the `TIMEOUT` long enough to where it doesn't matter 68 | 69 | ``` 70 | while true; do find /; done 71 | ``` 72 | 73 | Add the router's IP address in your `/etc/hosts` file (HINT: it's the IP address of where you ran `oc cluster up`) in order to access the fake domain you created 74 | 75 | ``` 76 | 172.16.1.222 ipa.example.test 77 | ``` 78 | 79 | Login to the pod to find out the admin password 80 | 81 | ``` 82 | [root@ocp-aio]# oc get pods 83 | NAME READY STATUS RESTARTS AGE 84 | freeipa-server-1-dp1sv 1/1 Running 0 15m 85 | sso-1-sp5ws 1/1 Running 0 1h 86 | sso-mysql-1-3tbj7 1/1 Running 0 1h 87 | 88 | [root@ocp-aio]# oc exec freeipa-server-1-dp1sv -- env | grep PASSWORD 89 | PASSWORD=5YqaAHLmgXHjWvUlXarmFN7yunhXOIRS 90 | ``` 91 | 92 | Login with `username: admin` and `password: ` 93 | 94 | ``` 95 | firefox https://ipa.example.test 96 | ``` 97 | ## Add LDAP User(s) 98 | 99 | Fastest way is with `oc rsh`; so find out your pod name. 100 | 101 | ``` 102 | [root@ocp-aio ]# oc get pods 103 | NAME READY STATUS RESTARTS AGE 104 | freeipa-server-1-dp1sv 1/1 Running 0 2h 105 | sso-1-sp5ws 1/1 Running 0 3h 106 | sso-mysql-1-3tbj7 1/1 Running 0 3h 107 | ``` 108 | 109 | Now `oc rsh` into this pod 110 | 111 | ``` 112 | [root@ocp-aio ]# oc rsh freeipa-server-1-dp1sv 113 | sh-4.2# 114 | ``` 115 | 116 | Obtain a Kerberos ticket 117 | 118 | ``` 119 | sh-4.2# echo $PASSWORD | kinit admin@$(echo ${IPA_SERVER_HOSTNAME#*.} | tr '[:lower:]' '[:upper:]') 120 | ``` 121 | 122 | You should be able to show your IPA config now 123 | 124 | ``` 125 | sh-4.2# ipa config-show 126 | Maximum username length: 32 127 | Home directory base: /home 128 | Default shell: /bin/sh 129 | Default users group: ipausers 130 | Default e-mail domain: example.test 131 | Search time limit: 2 132 | Search size limit: 100 133 | User search fields: uid,givenname,sn,telephonenumber,ou,title 134 | Group search fields: cn,description 135 | Enable migration mode: FALSE 136 | Certificate Subject base: O=EXAMPLE.TEST 137 | Password Expiration Notification (days): 4 138 | Password plugin features: AllowNThash 139 | SELinux user map order: guest_u:s0$xguest_u:s0$user_u:s0$staff_u:s0-s0:c0.c1023$unconfined_u:s0-s0:c0.c1023 140 | Default SELinux user: unconfined_u:s0-s0:c0.c1023 141 | Default PAC types: nfs:NONE, MS-PAC 142 | IPA masters: ipa.example.test 143 | IPA CA servers: ipa.example.test 144 | IPA NTP servers: 145 | IPA CA renewal master: ipa.example.test 146 | ``` 147 | 148 | 149 | Add a user now 150 | 151 | ``` 152 | sh-4.2# ipa user-add homer --first=Homer --last=Simpson --gecos="Homer J. 
Simposon" --email=homerj@mailinator.com --homedir=/home/homer --password 153 | Password: 154 | Enter Password again to verify: 155 | ------------------ 156 | Added user "homer" 157 | ------------------ 158 | User login: homer 159 | First name: Homer 160 | Last name: Simpson 161 | Full name: Homer Simpson 162 | Display name: Homer Simpson 163 | Initials: HS 164 | Home directory: /home/homer 165 | GECOS: Homer J. Simposon 166 | Login shell: /bin/sh 167 | Principal name: homer@EXAMPLE.TEST 168 | Principal alias: homer@EXAMPLE.TEST 169 | Email address: homerj@mailinator.com 170 | UID: 50800003 171 | GID: 50800003 172 | Password: True 173 | Member of groups: ipausers 174 | Kerberos keys available: True 175 | ``` 176 | 177 | You should be able to list the user's attributes 178 | 179 | ``` 180 | sh-4.2# ipa user-find homer 181 | -------------- 182 | 1 user matched 183 | -------------- 184 | User login: homer 185 | First name: Homer 186 | Last name: Simpson 187 | Home directory: /home/homer 188 | Login shell: /bin/sh 189 | Principal name: homer@EXAMPLE.TEST 190 | Principal alias: homer@EXAMPLE.TEST 191 | Email address: homerj@mailinator.com 192 | UID: 50800003 193 | GID: 50800003 194 | Account disabled: False 195 | ---------------------------- 196 | Number of entries returned 1 197 | ---------------------------- 198 | 199 | ``` 200 | 201 | ## Profit! 202 | 203 | You should now have a full blown IPA server on OpenShift 204 | 205 | Things to do: 206 | 207 | * Test DNS functionality (nodePort?) 208 | * Create Replicas 209 | * Test cross domain trusts 210 | * Create "bind user" 211 | 212 | # Apendix 213 | 214 | I created this `nodePort` config so I can run `ldapsearch` against the master. 215 | 216 | ```yaml 217 | apiVersion: v1 218 | kind: Service 219 | metadata: 220 | name: ldap 221 | labels: 222 | app: ipa 223 | spec: 224 | type: NodePort 225 | ports: 226 | - port: 389 227 | nodePort: 32389 228 | name: ldap 229 | selector: 230 | app: ipa 231 | ``` 232 | 233 | Note: I used `oc get pods --show-labels` to get the labels/selector 234 | 235 | Now run `oc create -f freeipa-nodeport.yaml` to create the service. 236 | 237 | Next you can run `ldapsearch` to any node in the cluster. I use the master for consistency. 238 | 239 | ``` 240 | ldapsearch -x -h ocp.chx.cloud -p 32389 -b uid=homer,cn=users,cn=accounts,dc=example,dc=test 241 | ``` 242 | -------------------------------------------------------------------------------- /ansible_hostfiles/singlemaster-crio-3.11: -------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | glusterfs 7 | 8 | # Set variables common for all OSEv3 hosts 9 | [OSEv3:vars] 10 | 11 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 12 | ansible_ssh_user=root 13 | #ansible_sudo=true 14 | #ansible_become=yes 15 | 16 | # Install Enterprise or Origin; set up ntp 17 | openshift_deployment_type=openshift-enterprise 18 | openshift_clock_enabled=true 19 | 20 | # Registry auth needed if you're using RH's registry. 
21 | oreg_auth_user="{{ lookup('env','OREG_AUTH_USER') }}" 22 | oreg_auth_password="{{ lookup('env','OREG_AUTH_PASSWORD') }}" 23 | 24 | # User CRI-O while still using docker for s2i builds 25 | openshift_use_crio=True 26 | openshift_use_crio_only=False 27 | openshift_crio_enable_docker_gc=True 28 | 29 | ### Disconnected Install 30 | #openshift_docker_blocked_registries=registry.access.redhat.com,docker.io,registry.redhat.io 31 | #openshift_docker_additional_registries=registry.cloud.chx 32 | #oreg_url=registry.cloud.chx/openshift3/ose-${component}:${version} 33 | #openshift_examples_modify_imagestreams=true 34 | #openshift_release=v3.11 35 | #openshift_image_tag=v3.11 36 | ### 37 | 38 | # Network/DNS Related 39 | openshift_master_default_subdomain=apps.192.168.1.96.nip.io 40 | osm_cluster_network_cidr=10.1.0.0/16 41 | osm_host_subnet_length=8 42 | openshift_portal_net=172.30.0.0/16 43 | # This can be set to 0.0.0.0/0 for disconnected installs 44 | openshift_docker_insecure_registries=172.30.0.0/16 45 | #container_runtime_docker_storage_setup_device=/dev/nvme1n1 46 | 47 | # CNS Storage 48 | openshift_storage_glusterfs_namespace=glusterfs 49 | openshift_storage_glusterfs_name=storage 50 | openshift_storage_glusterfs_heketi_wipe=true 51 | openshift_storage_glusterfs_wipe=true 52 | openshift_storage_glusterfs_storageclass_default=true 53 | openshift_storage_glusterfs_block_storageclass=true 54 | openshift_storage_glusterfs_block_host_vol_size=50 55 | 56 | # Automatically Deploy the router 57 | openshift_hosted_manage_router=true 58 | #openshift_router_selector={'node-role.kubernetes.io/infra':'true'} 59 | 60 | # Automatically deploy the registry using glusterfs 61 | openshift_hosted_manage_registry=true 62 | openshift_hosted_registry_storage_kind=glusterfs 63 | openshift_hosted_registry_storage_volume_size=25Gi 64 | #openshift_registry_selector={'node-role.kubernetes.io/infra':'true'} 65 | #openshift_hosted_registry_replicas=2 66 | 67 | # Disble Checks 68 | openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_availability,package_version 69 | 70 | # 71 | # Network Policies that are available: 72 | # redhat/openshift-ovs-networkpolicy # fine grained control 73 | # redhat/openshift-ovs-multitenant # each project gets it's own "private" network 74 | # redhat/openshift-ovs-subnet # "flat" network 75 | # 76 | # Network OVS Plugin to use 77 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 78 | 79 | # Uncomment when setting up logging/metrics/prometheus 80 | openshift_master_dynamic_provisioning_enabled=true 81 | dynamic_volumes_check=False 82 | 83 | # Logging 84 | openshift_logging_install_logging=true 85 | openshift_logging_es_pvc_dynamic=true 86 | openshift_logging_es_pvc_size=20Gi 87 | openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block 88 | openshift_logging_curator_nodeselector={'node-role.kubernetes.io/infra':'true'} 89 | openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} 90 | openshift_logging_kibana_nodeselector={'node-role.kubernetes.io/infra':'true'} 91 | openshift_logging_es_memory_limit=4G 92 | 93 | # Metrics 94 | openshift_metrics_install_metrics=true 95 | openshift_metrics_cassandra_pvc_size=20Gi 96 | openshift_metrics_cassandra_storage_type=dynamic 97 | openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block 98 | openshift_metrics_hawkular_nodeselector={'node-role.kubernetes.io/infra':'true'} 99 | 
openshift_metrics_heapster_nodeselector={'node-role.kubernetes.io/infra':'true'} 100 | openshift_metrics_cassandra_nodeselector={'node-role.kubernetes.io/infra':'true'} 101 | 102 | ## Prometheus Metrics 103 | openshift_hosted_prometheus_deploy=true 104 | openshift_prometheus_namespace=openshift-metrics 105 | openshift_prometheus_node_selector={'node-role.kubernetes.io/infra':'true'} 106 | 107 | # Prometheus storage config 108 | openshift_prometheus_storage_volume_name=prometheus 109 | openshift_prometheus_storage_volume_size=10Gi 110 | openshift_prometheus_storage_type='pvc' 111 | openshift_prometheus_sc_name="glusterfs-storage" 112 | 113 | # For prometheus-alertmanager 114 | openshift_prometheus_alertmanager_storage_volume_name=prometheus-alertmanager 115 | openshift_prometheus_alertmanager_storage_volume_size=10Gi 116 | openshift_prometheus_alertmanager_storage_type='pvc' 117 | openshift_prometheus_alertmanager_sc_name="glusterfs-storage" 118 | 119 | # For prometheus-alertbuffer 120 | openshift_prometheus_alertbuffer_storage_volume_name=prometheus-alertbuffer 121 | openshift_prometheus_alertbuffer_storage_volume_size=10Gi 122 | openshift_prometheus_alertbuffer_storage_type='pvc' 123 | openshift_prometheus_alertbuffer_sc_name="glusterfs-storage" 124 | 125 | ## Ansible Service Broker - only install it if you REALLY need it. 126 | ansible_service_broker_install=false 127 | #openshift_hosted_etcd_storage_kind=dynamic 128 | #openshift_hosted_etcd_storage_volume_name=etcd-vol2 129 | #openshift_hosted_etcd_storage_volume_size=10Gi 130 | #ansible_service_broker_local_registry_whitelist=['.*-apb$'] 131 | 132 | # If using Route53 or you're pointed to the master with a "vanity" name 133 | openshift_master_public_api_url=https://ocp.192.168.1.96.nip.io:8443 134 | openshift_master_public_console_url=https://ocp.192.168.1.96.nip.io:8443/console 135 | openshift_master_cluster_public_hostname=ocp.192.168.1.96.nip.io 136 | openshift_master_api_port=8443 137 | openshift_master_console_port=8443 138 | 139 | # The following enabled htpasswd authentication 140 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] 141 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 142 | 143 | ## OpenShift host groups 144 | 145 | # host group for etcd 146 | [etcd] 147 | dhcp-host-96.cloud.chx 148 | 149 | # host group for masters - set scedulable to "true" for the web-console pod 150 | [masters] 151 | dhcp-host-96.cloud.chx openshift_schedulable=true 152 | 153 | # host group for nodes, includes region info 154 | [nodes] 155 | dhcp-host-96.cloud.chx openshift_node_group_name='node-config-master-infra-crio' 156 | dhcp-host-30.cloud.chx openshift_node_group_name='node-config-compute-crio' 157 | dhcp-host-78.cloud.chx openshift_node_group_name='node-config-compute-crio' 158 | 159 | [glusterfs] 160 | # "standalone" glusterfs nodes STILL need to be in the "[nodes]" section 161 | dhcp-host-96.cloud.chx glusterfs_ip=192.168.1.96 glusterfs_zone=1 glusterfs_devices='[ "/dev/vdc" ]' 162 | dhcp-host-30.cloud.chx glusterfs_ip=192.168.1.30 glusterfs_zone=2 glusterfs_devices='[ "/dev/vdc" ]' 163 | dhcp-host-78.cloud.chx glusterfs_ip=192.168.1.78 glusterfs_zone=3 glusterfs_devices='[ "/dev/vdc" ]' 164 | ## 165 | ## 166 | -------------------------------------------------------------------------------- /ansible_hostfiles/multimaster: 
-------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | lb 7 | glusterfs 8 | 9 | # Set variables common for all OSEv3 hosts 10 | [OSEv3:vars] 11 | 12 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 13 | ansible_ssh_user=root 14 | #ansible_sudo=true 15 | #ansible_become=yes 16 | 17 | # Install Enterprise or Origin; set up ntp 18 | openshift_deployment_type=openshift-enterprise 19 | openshift_clock_enabled=true 20 | 21 | # Registry auth needed if you're using RH's registry. 22 | oreg_auth_user="{{ lookup('env','OREG_AUTH_USER') }}" 23 | oreg_auth_password="{{ lookup('env','OREG_AUTH_PASSWORD') }}" 24 | 25 | ### Disconnected Install 26 | #openshift_docker_blocked_registries=registry.access.redhat.com,docker.io,registry.redhat.io 27 | #openshift_docker_additional_registries=registry.cloud.chx 28 | #oreg_url=registry.cloud.chx/openshift3/ose-${component}:${version} 29 | #openshift_examples_modify_imagestreams=true 30 | #openshift_release=v3.11 31 | #openshift_image_tag=v3.11 32 | ##openshift_pkg_version=-3.11.16 33 | ### 34 | 35 | # Network/DNS Related 36 | openshift_master_default_subdomain=apps.cloud.chx 37 | osm_cluster_network_cidr=10.1.0.0/16 38 | osm_host_subnet_length=8 39 | openshift_portal_net=172.30.0.0/16 40 | # This can be set to 0.0.0.0/0 for disconnected installs 41 | openshift_docker_insecure_registries=172.30.0.0/16 42 | 43 | # CNS Storage 44 | openshift_storage_glusterfs_namespace=glusterfs 45 | openshift_storage_glusterfs_name=storage 46 | openshift_storage_glusterfs_heketi_wipe=true 47 | openshift_storage_glusterfs_wipe=true 48 | openshift_storage_glusterfs_storageclass_default=true 49 | openshift_storage_glusterfs_block_storageclass=true 50 | openshift_storage_glusterfs_block_host_vol_size=50 51 | openshift_storage_glusterfs_image=registry.redhat.io/rhgs3/rhgs-server-rhel7:v3.11 52 | openshift_storage_glusterfs_block_image=registry.redhat.io/rhgs3/rhgs-gluster-block-prov-rhel7:v3.11 53 | openshift_storage_glusterfs_s3_image=registry.redhat.io/rhgs3/rhgs-s3-server-rhel7:v3.11 54 | openshift_storage_glusterfs_heketi_image=registry.redhat.io/rhgs3/rhgs-volmanager-rhel7:v3.11 55 | 56 | # Automatically Deploy the router 57 | openshift_hosted_manage_router=true 58 | #openshift_router_selector={'node-role.kubernetes.io/infra':'true'} 59 | 60 | # Automatically deploy the registry using glusterfs 61 | openshift_hosted_manage_registry=true 62 | openshift_hosted_registry_storage_kind=glusterfs 63 | openshift_hosted_registry_storage_volume_size=25Gi 64 | #openshift_registry_selector={'node-role.kubernetes.io/infra':'true'} 65 | #openshift_hosted_registry_replicas=2 66 | 67 | # Disble Checks 68 | openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_availability,package_version 69 | 70 | # 71 | # Network Policies that are available: 72 | # redhat/openshift-ovs-networkpolicy # fine grained control 73 | # redhat/openshift-ovs-multitenant # each project gets it's own "private" network 74 | # redhat/openshift-ovs-subnet # "flat" network 75 | # 76 | # Network OVS Plugin to use 77 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 78 | 79 | # Uncomment when setting up logging/metrics/prometheus 80 | openshift_master_dynamic_provisioning_enabled=true 81 | dynamic_volumes_check=False 82 | 83 | # Logging 84 | openshift_logging_install_logging=true 85 | 
openshift_logging_es_pvc_dynamic=true 86 | openshift_logging_es_pvc_size=20Gi 87 | openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block 88 | openshift_logging_curator_nodeselector={'node-role.kubernetes.io/infra':'true'} 89 | openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} 90 | openshift_logging_kibana_nodeselector={'node-role.kubernetes.io/infra':'true'} 91 | openshift_logging_es_memory_limit=4G 92 | #openshift_logging_es_cluster_size=3 93 | 94 | # Metrics 95 | openshift_metrics_install_metrics=true 96 | openshift_metrics_cassandra_pvc_size=20Gi 97 | openshift_metrics_cassandra_storage_type=dynamic 98 | openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block 99 | openshift_metrics_hawkular_nodeselector={'node-role.kubernetes.io/infra':'true'} 100 | openshift_metrics_heapster_nodeselector={'node-role.kubernetes.io/infra':'true'} 101 | openshift_metrics_cassandra_nodeselector={'node-role.kubernetes.io/infra':'true'} 102 | 103 | # Prometheus Metrics 104 | openshift_cluster_monitoring_operator_install=true 105 | openshift_cluster_monitoring_operator_prometheus_storage_enabled=true 106 | openshift_cluster_monitoring_operator_alertmanager_storage_enabled=true 107 | openshift_cluster_monitoring_operator_prometheus_storage_capacity=15Gi 108 | openshift_cluster_monitoring_operator_alertmanager_storage_capacity=15Gi 109 | openshift_cluster_monitoring_operator_node_selector={'node-role.kubernetes.io/infra':'true'} 110 | 111 | ## Ansible Service Broker - only install it if you REALLY need it. 112 | ansible_service_broker_install=false 113 | #openshift_hosted_etcd_storage_kind=dynamic 114 | #openshift_hosted_etcd_storage_volume_name=etcd-vol2 115 | #openshift_hosted_etcd_storage_volume_size=10Gi 116 | #ansible_service_broker_local_registry_whitelist=['.*-apb$'] 117 | 118 | # If using Route53 or you're pointed to the master with a "vanity" name 119 | openshift_master_public_api_url=https://ocp.cloud.chx:8443 120 | openshift_master_public_console_url=https://ocp.cloud.chx:8443/console 121 | openshift_master_api_port=8443 122 | openshift_master_console_port=8443 123 | 124 | # Native high availbility cluster method with optional load balancer. 125 | # If no lb group is defined installer assumes that a load balancer has 126 | # been preconfigured. For installation the value of 127 | # openshift_master_cluster_hostname must resolve to the load balancer 128 | # or to one or all of the masters defined in the inventory if no load 129 | # balancer is present. 
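# (added note) here ocp.cloud.chx doubles as the [lb] host defined further
# down, so the installer places HAProxy on it; the DNS record pointing
# ocp.cloud.chx at that machine is assumed to exist already.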
130 | openshift_master_cluster_method=native 131 | openshift_master_cluster_hostname=ocp.cloud.chx 132 | openshift_master_cluster_public_hostname=ocp.cloud.chx 133 | 134 | # The following enabled htpasswd authentication 135 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] 136 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 137 | 138 | ## OpenShift host groups 139 | 140 | # host group for etcd 141 | [etcd] 142 | master[1:3].cloud.chx 143 | 144 | # host group for loadbalancers 145 | [lb] 146 | ocp.cloud.chx 147 | 148 | # host group for masters - set scedulable to "true" for the web-console pod 149 | [masters] 150 | master[1:3].cloud.chx openshift_schedulable=true 151 | 152 | # host group for nodes, includes region info 153 | [nodes] 154 | master1.cloud.chx openshift_node_group_name='node-config-master' 155 | master2.cloud.chx openshift_node_group_name='node-config-master' 156 | master3.cloud.chx openshift_node_group_name='node-config-master' 157 | infra1.cloud.chx openshift_node_group_name='node-config-infra' 158 | infra2.cloud.chx openshift_node_group_name='node-config-infra' 159 | infra3.cloud.chx openshift_node_group_name='node-config-infra' 160 | app1.cloud.chx openshift_node_group_name='node-config-compute' 161 | app2.cloud.chx openshift_node_group_name='node-config-compute' 162 | app3.cloud.chx openshift_node_group_name='node-config-compute' 163 | 164 | [glusterfs] 165 | # "standalone" glusterfs nodes STILL need to be in the "[nodes]" section 166 | app1.cloud.chx glusterfs_ip=192.168.1.32 glusterfs_zone=1 glusterfs_devices='[ "/dev/vdc" ]' 167 | app2.cloud.chx glusterfs_ip=192.168.1.42 glusterfs_zone=2 glusterfs_devices='[ "/dev/vdc" ]' 168 | app3.cloud.chx glusterfs_ip=192.168.1.52 glusterfs_zone=3 glusterfs_devices='[ "/dev/vdc" ]' 169 | ## 170 | ## 171 | -------------------------------------------------------------------------------- /ansible_hostfiles/multimaster-3.11: -------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | lb 7 | glusterfs 8 | 9 | # Set variables common for all OSEv3 hosts 10 | [OSEv3:vars] 11 | 12 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 13 | ansible_ssh_user=root 14 | #ansible_sudo=true 15 | #ansible_become=yes 16 | 17 | # Install Enterprise or Origin; set up ntp 18 | openshift_deployment_type=openshift-enterprise 19 | openshift_clock_enabled=true 20 | 21 | # Registry auth needed if you're using RH's registry. 
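# The two variables below read the credentials from the environment, so export them
# on the machine you run ansible-playbook from before kicking off the install, e.g.:
#   export OREG_AUTH_USER='your-rhsm-username'
#   export OREG_AUTH_PASSWORD='your-rhsm-password'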
22 | oreg_auth_user="{{ lookup('env','OREG_AUTH_USER') }}" 23 | oreg_auth_password="{{ lookup('env','OREG_AUTH_PASSWORD') }}" 24 | 25 | ### Disconnected Install 26 | #openshift_docker_blocked_registries=registry.access.redhat.com,docker.io,registry.redhat.io 27 | #openshift_docker_additional_registries=registry.cloud.chx 28 | #oreg_url=registry.cloud.chx/openshift3/ose-${component}:${version} 29 | #openshift_examples_modify_imagestreams=true 30 | #openshift_release=v3.11 31 | #openshift_image_tag=v3.11 32 | ### 33 | 34 | # Network/DNS Related 35 | openshift_master_default_subdomain=apps.cloud.chx 36 | osm_cluster_network_cidr=10.1.0.0/16 37 | osm_host_subnet_length=8 38 | openshift_portal_net=172.30.0.0/16 39 | # This can be set to 0.0.0.0/0 for disconnected installs 40 | openshift_docker_insecure_registries=172.30.0.0/16 41 | #container_runtime_docker_storage_setup_device=/dev/nvme1n1 42 | 43 | # CNS Storage 44 | openshift_storage_glusterfs_namespace=glusterfs 45 | openshift_storage_glusterfs_name=storage 46 | openshift_storage_glusterfs_heketi_wipe=true 47 | openshift_storage_glusterfs_wipe=true 48 | openshift_storage_glusterfs_storageclass_default=true 49 | openshift_storage_glusterfs_block_storageclass=true 50 | openshift_storage_glusterfs_block_host_vol_size=50 51 | 52 | # Automatically Deploy the router 53 | openshift_hosted_manage_router=true 54 | #openshift_router_selector={'node-role.kubernetes.io/infra':'true'} 55 | 56 | # Automatically deploy the registry using glusterfs 57 | openshift_hosted_manage_registry=true 58 | openshift_hosted_registry_storage_kind=glusterfs 59 | openshift_hosted_registry_storage_volume_size=25Gi 60 | #openshift_registry_selector={'node-role.kubernetes.io/infra':'true'} 61 | #openshift_hosted_registry_replicas=2 62 | 63 | # Disble Checks 64 | openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability,package_availability,package_version 65 | 66 | # 67 | # Network Policies that are available: 68 | # redhat/openshift-ovs-networkpolicy # fine grained control 69 | # redhat/openshift-ovs-multitenant # each project gets it's own "private" network 70 | # redhat/openshift-ovs-subnet # "flat" network 71 | # 72 | # Network OVS Plugin to use 73 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 74 | 75 | # Uncomment when setting up logging/metrics/prometheus 76 | openshift_master_dynamic_provisioning_enabled=true 77 | dynamic_volumes_check=False 78 | 79 | # Logging 80 | openshift_logging_install_logging=true 81 | openshift_logging_es_pvc_dynamic=true 82 | openshift_logging_es_pvc_size=20Gi 83 | openshift_logging_es_pvc_storage_class_name=glusterfs-storage-block 84 | openshift_logging_curator_nodeselector={'node-role.kubernetes.io/infra':'true'} 85 | openshift_logging_es_nodeselector={'node-role.kubernetes.io/infra':'true'} 86 | openshift_logging_kibana_nodeselector={'node-role.kubernetes.io/infra':'true'} 87 | openshift_logging_es_memory_limit=4G 88 | #openshift_logging_es_cluster_size=3 89 | 90 | # Metrics 91 | openshift_metrics_install_metrics=true 92 | openshift_metrics_cassandra_pvc_size=20Gi 93 | openshift_metrics_cassandra_storage_type=dynamic 94 | openshift_metrics_cassandra_pvc_storage_class_name=glusterfs-storage-block 95 | openshift_metrics_hawkular_nodeselector={'node-role.kubernetes.io/infra':'true'} 96 | openshift_metrics_heapster_nodeselector={'node-role.kubernetes.io/infra':'true'} 97 | openshift_metrics_cassandra_nodeselector={'node-role.kubernetes.io/infra':'true'} 98 | 99 | ## Prometheus 
Metrics 100 | openshift_hosted_prometheus_deploy=true 101 | openshift_prometheus_namespace=openshift-metrics 102 | openshift_prometheus_node_selector={'node-role.kubernetes.io/infra':'true'} 103 | 104 | # Prometheus storage config 105 | openshift_prometheus_storage_volume_name=prometheus 106 | openshift_prometheus_storage_volume_size=10Gi 107 | openshift_prometheus_storage_type='pvc' 108 | openshift_prometheus_sc_name="glusterfs-storage" 109 | 110 | # For prometheus-alertmanager 111 | openshift_prometheus_alertmanager_storage_volume_name=prometheus-alertmanager 112 | openshift_prometheus_alertmanager_storage_volume_size=10Gi 113 | openshift_prometheus_alertmanager_storage_type='pvc' 114 | openshift_prometheus_alertmanager_sc_name="glusterfs-storage" 115 | 116 | # For prometheus-alertbuffer 117 | openshift_prometheus_alertbuffer_storage_volume_name=prometheus-alertbuffer 118 | openshift_prometheus_alertbuffer_storage_volume_size=10Gi 119 | openshift_prometheus_alertbuffer_storage_type='pvc' 120 | openshift_prometheus_alertbuffer_sc_name="glusterfs-storage" 121 | 122 | ## Ansible Service Broker - only install it if you REALLY need it. 123 | ansible_service_broker_install=false 124 | #openshift_hosted_etcd_storage_kind=dynamic 125 | #openshift_hosted_etcd_storage_volume_name=etcd-vol2 126 | #openshift_hosted_etcd_storage_volume_size=10Gi 127 | #ansible_service_broker_local_registry_whitelist=['.*-apb$'] 128 | 129 | # If using Route53 or you're pointed to the master with a "vanity" name 130 | openshift_master_public_api_url=https://ocp.cloud.chx:8443 131 | openshift_master_public_console_url=https://ocp.cloud.chx:8443/console 132 | openshift_master_api_port=8443 133 | openshift_master_console_port=8443 134 | 135 | # Native high availbility cluster method with optional load balancer. 136 | # If no lb group is defined installer assumes that a load balancer has 137 | # been preconfigured. For installation the value of 138 | # openshift_master_cluster_hostname must resolve to the load balancer 139 | # or to one or all of the masters defined in the inventory if no load 140 | # balancer is present. 
141 | openshift_master_cluster_method=native 142 | openshift_master_cluster_hostname=ocp.cloud.chx 143 | openshift_master_cluster_public_hostname=ocp.cloud.chx 144 | 145 | # The following enabled htpasswd authentication 146 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}] 147 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 148 | 149 | ## OpenShift host groups 150 | 151 | # host group for etcd 152 | [etcd] 153 | master[1:3].cloud.chx 154 | 155 | # host group for loadbalancers 156 | [lb] 157 | ocp.cloud.chx 158 | 159 | # host group for masters - set scedulable to "true" for the web-console pod 160 | [masters] 161 | master[1:3].cloud.chx openshift_schedulable=true 162 | 163 | # host group for nodes, includes region info 164 | [nodes] 165 | master1.cloud.chx openshift_node_group_name='node-config-master' 166 | master2.cloud.chx openshift_node_group_name='node-config-master' 167 | master3.cloud.chx openshift_node_group_name='node-config-master' 168 | infra1.cloud.chx openshift_node_group_name='node-config-infra' 169 | infra2.cloud.chx openshift_node_group_name='node-config-infra' 170 | infra3.cloud.chx openshift_node_group_name='node-config-infra' 171 | app1.cloud.chx openshift_node_group_name='node-config-compute' 172 | app2.cloud.chx openshift_node_group_name='node-config-compute' 173 | app3.cloud.chx openshift_node_group_name='node-config-compute' 174 | 175 | [glusterfs] 176 | # "standalone" glusterfs nodes STILL need to be in the "[nodes]" section 177 | app1.cloud.chx glusterfs_ip=192.168.1.32 glusterfs_zone=1 glusterfs_devices='[ "/dev/vdc" ]' 178 | app2.cloud.chx glusterfs_ip=192.168.1.42 glusterfs_zone=2 glusterfs_devices='[ "/dev/vdc" ]' 179 | app3.cloud.chx glusterfs_ip=192.168.1.52 glusterfs_zone=3 glusterfs_devices='[ "/dev/vdc" ]' 180 | ## 181 | ## 182 | -------------------------------------------------------------------------------- /installation/3.md: -------------------------------------------------------------------------------- 1 | # Installation 2 | 3 | The installation of OpenShift Container Platform (OCP); will be done via ansible. More information can be found using the OpenShift [documentation site](https://docs.openshift.com/container-platform/latest/welcome/index.html). 4 | 5 | * [Infrastrucure](#infrastrucure) 6 | * [Host preparation](#host-preparation) 7 | * [Docker Configuration](#docker-configuration) 8 | * [Ansible Installer](#ansible-installer) 9 | * [Running The Playbook](#running-the-playbook) 10 | * [Package Excluder](#package-excluder) 11 | * [Uninstaller](#uninstaller) 12 | * [Cloud Install](#cloud-install) 13 | * [Disconnected Install](#disconnected-install) 14 | 15 | ## Infrastrucure 16 | 17 | For this installation we have the following 18 | 19 | * Wildcard DNS entry like `*.apps.example.com` 20 | * Servers installed with RHEL 7.x (latest RHEL 7 version) with a "minimum" install profile. 21 | * Forward/Reverse DNS is a MUST for master/nodes 22 | * SELinux should be enforcing 23 | * Firewall should be running. 
24 | * NetworkManager 1.0 or later 25 | * Masters 26 | * 4CPU 27 | * 16GB RAM 28 | * Disk 0 (Root Drive) - 50GB 29 | * Disk 1 - 100GB mounted as `/var` 30 | * Nodes 31 | * 4CPU 32 | * 16GB RAM 33 | * Disk 0 (Root Drive) - 50GB 34 | * Disk 1 - 100GB mounted as `/var` 35 | * Disk 2 - 500GB Raw/Unformatted (for Container Native Storage) 36 | 37 | Here is a diagram of how OCP is layed out 38 | 39 | ![ocp_diagram](images/osev3.jpg) 40 | 41 | ## Host preparation 42 | 43 | Each host must be registered using RHSM and have an active OCP subscription attached to access the required packages. 44 | 45 | On each host, register with RHSM: 46 | 47 | ``` 48 | subscription-manager register --username=${user_name} --password=${password} 49 | ``` 50 | 51 | List the available subscriptions: 52 | 53 | ``` 54 | subscription-manager list --available 55 | ``` 56 | 57 | In the output for the previous command, find the pool ID for an OpenShift Enterprise subscription and attach it: 58 | 59 | ``` 60 | subscription-manager attach --pool=${pool_id} 61 | ``` 62 | 63 | Disable all repositories and enable only the required ones: 64 | 65 | ``` 66 | subscription-manager repos --disable=* 67 | yum-config-manager --disable \* 68 | subscription-manager repos \ 69 | --enable=rhel-7-server-rpms \ 70 | --enable=rhel-7-server-extras-rpms \ 71 | --enable=rhel-7-server-ose-3.11-rpms \ 72 | --enable=rhel-7-server-ansible-2.6-rpms \ 73 | --enable=rh-gluster-3-client-for-rhel-7-server-rpms 74 | ``` 75 | 76 | Make sure the pre-req pkgs are installed/removed and make sure the system is updated 77 | 78 | ``` 79 | yum -y install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct vim 80 | yum -y update 81 | systemctl reboot 82 | yum -y install openshift-ansible 83 | ``` 84 | 85 | Then install docker when it comes back up. Make sure you're running the version it states in the docs 86 | 87 | ``` 88 | yum -y install docker-1.13.1 89 | docker version 90 | ``` 91 | 92 | ## Docker Configuration 93 | 94 | Docker storage doesn't need to be configured as we use `overlayfs`. 95 | 96 | I moved the `lvm` config section [here](guides/docker-ocp.md) just for historical purposes. (you won't need them though) 97 | 98 | ## Ansible Installer 99 | 100 | On The master host, generate ssh keys to use for ansible press enter to accept the defaults 101 | 102 | ``` 103 | root@master# ssh-keygen 104 | ``` 105 | 106 | Distribue these keys to all hosts (including the master) 107 | 108 | ``` 109 | root@master# for host in ose3-master.example.com \ 110 | ose3-node1.example.com \ 111 | ose3-node2.example.com; \ 112 | do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \ 113 | done 114 | ``` 115 | 116 | Test passwordless ssh 117 | 118 | ``` 119 | root@master# for host in ose3-master.example.com \ 120 | ose3-node1.example.com \ 121 | ose3-node2.example.com; \ 122 | do ssh $host hostname; \ 123 | done 124 | ``` 125 | 126 | Make a backup of the `/etc/ansible/hosts` file 127 | 128 | ``` 129 | cp /etc/ansible/hosts{,.bak} 130 | ``` 131 | 132 | Next You must create an `/etc/ansible/hosts` file for the playbook to use during the installation 133 | 134 | > **NOTE**, for a PoC install, you can't have LESS than a `/21` (i.e. 
NO `/24`) or an `osm_host_subnet_length` less than 9 135 | 136 | Sample Ansible Hosts files 137 | * [Single Master](https://raw.githubusercontent.com/christianh814/openshift-toolbox/master/ansible_hostfiles/singlemaster) 138 | * [Multi Master](https://raw.githubusercontent.com/christianh814/openshift-toolbox/master/ansible_hostfiles/multimaster) 139 | 140 | With Cloud installations; you need to enable API access to the cloud provider. Below are example entries (NOT whole "hostfiles"; rather what you need to add to the above) 141 | * [AWS Hostfile Options](https://raw.githubusercontent.com/christianh814/openshift-toolbox/master/ansible_hostfiles/awsinstall) 142 | 143 | Sample HAProxy configs if you want to build your own HAProxy server 144 | * [HAProxy Config](https://raw.githubusercontent.com/christianh814/openshift-toolbox/master/haproxy_config/haproxy.cfg) 145 | * [HAProxy with Let's Encrypt](https://raw.githubusercontent.com/christianh814/openshift-toolbox/master/haproxy_config/haproxy-letsencrypt.cfg) 146 | 147 | If you used let's encrypt, you might find [these crons](../certbot) useful 148 | 149 | ## Running The Playbook 150 | 151 | You can run the playbook (specifying a `-i` if you wrote the hosts file somewhere else) at this point 152 | 153 | First run the prereq playbook 154 | 155 | ``` 156 | root@master# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml 157 | ``` 158 | 159 | Now run the installer afterwards 160 | 161 | ``` 162 | root@master# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml 163 | ``` 164 | 165 | Once this completes successfully, run `oc get nodes` and you should see "Ready" 166 | 167 | ``` 168 | root@master# oc get nodes 169 | NAME STATUS ROLES AGE VERSION 170 | ip-172-31-19-75.us-west-1.compute.internal Ready compute 10d v1.11.0+d4cacc0 171 | ip-172-31-23-129.us-west-1.compute.internal Ready compute 10d v1.11.0+d4cacc0 172 | ip-172-31-23-47.us-west-1.compute.internal Ready compute 10d v1.11.0+d4cacc0 173 | ip-172-31-28-6.us-west-1.compute.internal Ready infra,master 10d v1.11.0+d4cacc0 174 | ``` 175 | 176 | I also like to see all my pods statuses 177 | 178 | ``` 179 | oc get pods --all-namespaces 180 | ``` 181 | 182 | Label nodes if the installer didn't for whatever reason... 183 | 184 | ``` 185 | oc label node infra1.cloud.chx node-role.kubernetes.io/infra=true 186 | ``` 187 | 188 | ## Package Excluder 189 | 190 | OpenShift excludes packages during install, you may want to unexclude it at times (you probably never have to; but here's how to in any event) 191 | 192 | ``` 193 | atomic-openshift-excluder [ unexclude | exclude ] 194 | ``` 195 | 196 | ## Uninstaller 197 | 198 | If you need to "start over", you can uninstall OpenShift with the following playbook... 199 | 200 | ``` 201 | root@master# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml 202 | ``` 203 | 204 | Note that this may have unintended consequences (like destroying formatted disks, removing config files, etc). Run this only when needed. 
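For example, a full "start over" cycle with an inventory kept outside of `/etc/ansible/hosts` might look like the following (just a sketch; `~/ocp-inventory` is a placeholder for wherever your hosts file lives)

```
root@master# ansible-playbook -i ~/ocp-inventory /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
root@master# ansible-playbook -i ~/ocp-inventory /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
root@master# ansible-playbook -i ~/ocp-inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
```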
205 | 206 | ## Cloud Install 207 | 208 | Here are Notes about cloud based installations 209 | 210 | * [AWS Install](../aws_refarch) 211 | * [Azure Install](https://access.redhat.com/documentation/en-us/reference_architectures/2018/html/deploying_and_managing_openshift_3.9_on_azure/index) 212 | * [GCE Install](https://access.redhat.com/documentation/en-us/reference_architectures/2018/html/deploying_and_managing_openshift_3.9_on_google_cloud_platform/) 213 | * [Openstack Install](https://access.redhat.com/documentation/en-us/reference_architectures/2018/html/deploying_and_managing_openshift_3.9_on_red_hat_openstack_platform_10/index) 214 | 215 | 216 | ## Disconnected Install 217 | 218 | There are many factors to take into consideration when trying to do a disconnected install. Instructions/notes for that can be found [HERE](guides/disconnected.md) 219 | -------------------------------------------------------------------------------- /aws_refarch/aws_inventory.ini: -------------------------------------------------------------------------------- 1 | # Create an OSEv3 group that contains the masters and nodes groups 2 | [OSEv3:children] 3 | masters 4 | nodes 5 | etcd 6 | glusterfs 7 | 8 | # Set variables common for all OSEv3 hosts 9 | [OSEv3:vars] 10 | 11 | # If ansible_ssh_user is not root, ansible_sudo must be set to true 12 | ansible_user=ec2-user 13 | ansible_become=yes 14 | ansible_sudo=true 15 | 16 | # Install Enterprise or Origin; set up ntp 17 | openshift_deployment_type=openshift-enterprise 18 | openshift_release=v3.9 19 | # openshift_image_tag=v3.9.27 20 | # oreg_url_master=registry.access.redhat.com/openshift3/ose-${component}:${version} 21 | # oreg_url_node=registry.access.redhat.com/openshift3/ose-${component}:${version} 22 | # oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version} 23 | # openshift_examples_modify_imagestreams=true 24 | openshift_clock_enabled=true 25 | osm_use_cockpit=true 26 | openshift_hostname_check=false 27 | 28 | 29 | ## AWS 30 | openshift_cloudprovider_kind=aws 31 | openshift_clusterid=myocp 32 | openshift_cloudprovider_aws_access_key="{{ lookup('env','AWS_ACCESS_KEY_ID') }}" 33 | openshift_cloudprovider_aws_secret_key="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}" 34 | 35 | # Registry Storage 36 | openshift_hosted_registry_storage_kind=object 37 | openshift_hosted_registry_storage_provider=s3 38 | openshift_hosted_registry_storage_s3_accesskey="{{ lookup('env','AWS_ACCESS_KEY_ID') }}" 39 | openshift_hosted_registry_storage_s3_secretkey="{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}" 40 | openshift_hosted_registry_storage_s3_bucket=myocp.example.com-registry 41 | openshift_hosted_registry_storage_s3_region=us-east-1 42 | openshift_hosted_registry_storage_s3_chunksize=26214400 43 | openshift_hosted_registry_storage_s3_rootdirectory=/registry 44 | openshift_hosted_registry_pullthrough=true 45 | openshift_hosted_registry_acceptschema2=true 46 | openshift_hosted_registry_enforcequota=true 47 | 48 | # Network/DNS Related 49 | openshift_master_default_subdomain=apps.myocp.example.com 50 | osm_cluster_network_cidr=10.1.0.0/16 51 | osm_host_subnet_length=8 52 | openshift_portal_net=172.30.0.0/16 53 | osm_default_node_selector="region=apps" 54 | openshift_docker_insecure_registries=172.30.0.0/16 55 | container_runtime_docker_storage_setup_device=/dev/nvme1n1 56 | 57 | # CNS Storage 58 | openshift_storage_glusterfs_namespace=glusterfs 59 | openshift_storage_glusterfs_name=storage 60 | openshift_storage_glusterfs_heketi_wipe=true 61 | 
openshift_storage_glusterfs_wipe=true 62 | openshift_storage_glusterfs_storageclass_default=false 63 | openshift_storage_glusterfs_block_storageclass=false 64 | openshift_storage_glusterfs_block_storageclass_default=false 65 | 66 | # Automatically Deploy the router 67 | openshift_hosted_manage_router=true 68 | openshift_hosted_router_selector='region=infra' 69 | 70 | # Automatically deploy the registry using glusterfs 71 | openshift_hosted_manage_registry=true 72 | openshift_hosted_registry_storage_kind=object 73 | openshift_hosted_registry_selector='region=infra' 74 | openshift_hosted_registry_replicas=3 75 | 76 | # Disble Checks 77 | openshift_disable_check=disk_availability,docker_storage,memory_availability 78 | 79 | # Mulititenant functionality (i.e. each project gets it's own "private" network) 80 | os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' 81 | 82 | # Uncomment when setting up logging/metrics/prometheus 83 | openshift_master_dynamic_provisioning_enabled=true 84 | dynamic_volumes_check=False 85 | 86 | # Logging 87 | openshift_logging_install_logging=true 88 | 89 | openshift_logging_storage_kind=dynamic 90 | openshift_logging_storage_volume_size=25Gi 91 | openshift_logging_es_cluster_size=3 92 | openshift_logging_curator_nodeselector={'region':'infra'} 93 | openshift_logging_es_nodeselector={'region':'infra'} 94 | openshift_logging_kibana_nodeselector={'region':'infra'} 95 | openshift_logging_es_memory_limit=4G 96 | 97 | # Metrics 98 | openshift_metrics_install_metrics=true 99 | openshift_metrics_storage_kind=dynamic 100 | openshift_metrics_storage_volume_size=25Gi 101 | openshift_metrics_hawkular_nodeselector={'region':'infra'} 102 | openshift_metrics_heapster_nodeselector={'region':'infra'} 103 | openshift_metrics_cassandra_nodeselector={'region':'infra'} 104 | 105 | ## Prometheus Metrics 106 | openshift_hosted_prometheus_deploy=true 107 | openshift_prometheus_namespace=openshift-metrics 108 | openshift_prometheus_node_selector={'region':'infra'} 109 | 110 | # Prometheus storage config 111 | openshift_prometheus_storage_access_modes=['ReadWriteOnce'] 112 | openshift_prometheus_storage_volume_name=prometheus 113 | openshift_prometheus_storage_volume_size=20Gi 114 | openshift_prometheus_storage_type='dynamic' 115 | 116 | # For prometheus-alertmanager 117 | openshift_prometheus_alertmanager_storage_access_modes=['ReadWriteOnce'] 118 | openshift_prometheus_alertmanager_storage_volume_name=prometheus-alertmanager 119 | openshift_prometheus_alertmanager_storage_volume_size=20Gi 120 | openshift_prometheus_alertmanager_storage_type='dymanic' 121 | 122 | # For prometheus-alertbuffer 123 | openshift_prometheus_alertbuffer_storage_access_modes=['ReadWriteOnce'] 124 | openshift_prometheus_alertbuffer_storage_volume_name=prometheus-alertbuffer 125 | openshift_prometheus_alertbuffer_storage_volume_size=20Gi 126 | openshift_prometheus_alertbuffer_storage_type='dymanic' 127 | 128 | openshift_prometheus_node_exporter_image_version=v3.9.25 129 | 130 | openshift_enable_service_catalog=true 131 | 132 | # Ansible Service Broker 133 | openshift_hosted_etcd_storage_kind=dynamic 134 | openshift_hosted_etcd_storage_volume_name=etcd-vol2 135 | openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"] 136 | openshift_hosted_etcd_storage_volume_size=20Gi 137 | ansible_service_broker_local_registry_whitelist=['.*-apb$'] 138 | 139 | # If using Route53 or you're pointed to the master with a "vanity" name 140 | openshift_master_public_api_url=https://master.myocp.example.com:443 141 | 
openshift_master_public_console_url=https://master.myocp.example.com:443/console 142 | openshift_master_api_port=443 143 | openshift_master_console_port=443 144 | 145 | # Native high availbility cluster method with optional load balancer. 146 | # If no lb group is defined installer assumes that a load balancer has 147 | # been preconfigured. For installation the value of 148 | # openshift_master_cluster_hostname must resolve to the load balancer 149 | # or to one or all of the masters defined in the inventory if no load 150 | # balancer is present. 151 | openshift_master_cluster_method=native 152 | openshift_master_cluster_hostname=master.myocp.example.com 153 | openshift_master_cluster_public_hostname=master.myocp.example.com 154 | 155 | # The following enabled htpasswd authentication 156 | openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/openshift-passwd'}] 157 | openshift_master_htpasswd_users={'developer': '$apr1$q2fVVf46$85HP/4JHGYeFBKAKPBblo0'} 158 | 159 | ## OpenShift host groups 160 | 161 | [masters] 162 | ip-172-16-27-103.ec2.internal openshift_node_labels="{'region': 'master'}" 163 | ip-172-16-34-16.ec2.internal openshift_node_labels="{'region': 'master'}" 164 | ip-172-16-59-159.ec2.internal openshift_node_labels="{'region': 'master'}" 165 | 166 | [etcd] 167 | 168 | [etcd:children] 169 | masters 170 | 171 | [nodes] 172 | ip-172-16-21-79.ec2.internal openshift_node_labels="{'region': 'apps'}" 173 | ip-172-16-46-135.ec2.internal openshift_node_labels="{'region': 'apps'}" 174 | ip-172-16-52-185.ec2.internal openshift_node_labels="{'region': 'apps'}" 175 | ip-172-16-25-12.ec2.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}" 176 | ip-172-16-38-215.ec2.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}" 177 | ip-172-16-50-186.ec2.internal openshift_node_labels="{'region': 'infra', 'zone': 'default'}" 178 | ip-172-16-26-120.ec2.internal openshift_schedulable=True 179 | ip-172-16-38-170.ec2.internal openshift_schedulable=True 180 | ip-172-16-63-17.ec2.internal openshift_schedulable=True 181 | 182 | 183 | [nodes:children] 184 | masters 185 | 186 | [glusterfs] 187 | ip-172-16-26-120.ec2.internal glusterfs_devices='[ "/dev/nvme3n1" ]' 188 | ip-172-16-38-170.ec2.internal glusterfs_devices='[ "/dev/nvme3n1" ]' 189 | ip-172-16-63-17.ec2.internal glusterfs_devices='[ "/dev/nvme3n1" ]' 190 | 191 | ## 192 | ## 193 | -------------------------------------------------------------------------------- /authentication/README.md: -------------------------------------------------------------------------------- 1 | # Authentication 2 | 3 | These are notes related to authentication. 
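One quick aside: the `openshift_master_htpasswd_users` values you see in the sample ansible inventories are ordinary `htpasswd` (apr1/MD5) hashes. If you'd rather not reuse the sample hash you can generate your own; this assumes `httpd-tools` is installed, and the username/password below are only placeholders

```
# prints "developer:<hash>" -- paste the hash into openshift_master_htpasswd_users
htpasswd -nb developer S3cretPassw0rd
```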
4 | 5 | # LDAP Configuration 6 | 7 | First (if using `ldaps`) you need to download the CA certificate (below example is using Red Hat IdM server) 8 | 9 | ``` 10 | root@master# curl http://ipa.example.com/ipa/config/ca.crt >> /etc/origin/master/my-ldap-ca-bundle.crt 11 | ``` 12 | 13 | Make a backup copy of the config file 14 | ``` 15 | root@master# cp /etc/origin/master/master-config.yaml{,.bak} 16 | ``` 17 | 18 | Edit the `/etc/origin/master/master-config.yaml` file with the following changes under the `identityProviders` section 19 | 20 | ``` 21 | identityProviders: 22 | - name: "my_ldap_provider" 23 | challenge: true 24 | login: true 25 | provider: 26 | apiVersion: v1 27 | kind: LDAPPasswordIdentityProvider 28 | attributes: 29 | id: 30 | - dn 31 | email: 32 | - mail 33 | name: 34 | - cn 35 | preferredUsername: 36 | - uid 37 | bindDN: "cn=directory manager" 38 | bindPassword: "secret" 39 | ca: my-ldap-ca-bundle.crt 40 | insecure: false 41 | url: "ldaps://ipa.example.com/cn=users,cn=accounts,dc=example,dc=com?uid" 42 | ``` 43 | 44 | Note you can customize what attributes it searches for. First non empty attribute returned is used. 45 | 46 | Restart the openshift-master service 47 | ``` 48 | systemctl restart atomic-openshift-master 49 | ``` 50 | 51 | # Active Directory 52 | 53 | AD usually is using `sAMAccountName` as uid for login. Use the following ldapsearch to validate the informaiton 54 | 55 | ``` 56 | ldapsearch -x -D "CN=xxx,OU=Service-Accounts,OU=DCS,DC=homeoffice,DC=example,DC=com" -W -H ldaps://ldaphost.example.com -b "ou=Users,dc=office,dc=example,DC=com" -s sub 'sAMAccountName=user1' 57 | ``` 58 | 59 | If the ldapsearch did not return any user, it means -D or -b may not be correct. Retry different `baseDN`. If there is too many entries returns, add filter to your search. Filter example is `(objectclass=people)` or `(objectclass=person)` if still having issues; increase logging as `OPTIONS=--loglevel=5` in `/etc/sysconfig/atomic-openshift-master` 60 | 61 | If you see an error in `journalctl -u atomic-openshift-master` there might be a conflict with the user identity when user trying to login (if you used `htpasswd` beforehand). Just do the following... 62 | ``` 63 | oc get user 64 | oc delete user user1 65 | ``` 66 | 67 | Inspiration from [This workshop](https://github.com/RedHatWorkshops/openshiftv3-ops-workshop/blob/master/adding_an_ldap_provider.md) 68 | 69 | The configuration in `master-config.yaml` Should look something like this: 70 | 71 | ``` 72 | oauthConfig: 73 | assetPublicURL: https://master.example.com:8443/console/ 74 | grantConfig: 75 | method: auto 76 | identityProviders: 77 | - name: "OfficeAD" 78 | challenge: true 79 | login: true 80 | provider: 81 | apiVersion: v1 82 | kind: LDAPPasswordIdentityProvider 83 | attributes: 84 | id: 85 | - dn 86 | email: 87 | - mail 88 | name: 89 | - cn 90 | preferredUsername: 91 | - sAMAccountName 92 | bindDN: "CN=LinuxSVC,OU=Service-Accounts,OU=DCS,DC=office,DC=example,DC=com" 93 | bindPassword: "password" 94 | ca: ad-ca.pem.crt 95 | insecure: false 96 | url: "ldaps://ad-server.example.com:636/CN=Users,DC=hoffice,DC=example,DC=com?sAMAccountName?sub" 97 | ``` 98 | 99 | If you need to look for a subclass... 
100 | 101 | ``` 102 | "ldaps://ad.corp.example.com:636/OU=Users,DC=corp,DC=example,DC=com?sAMAccountName?sub?(&(objectClass=person)" 103 | ``` 104 | 105 | Here's an example of doing it inside on an ansible host file 106 | 107 | ``` 108 | openshift_master_identity_providers=[{'name':'Active_Directory','login':'true','challenge':'true','kind':'LDAPPasswordIdentityProvider','attributes':{'id':['userPrincipalName'],'email':['userPrincipalName'],'name':['name'],'preferredUsername':['sAMAccountName']},'insecure':'true','bindDN':'CN=svc-openshift,CN=Users,DC=moos,DC=red','bindPassword':'REMOVED','url':'ldap://dc.moos.red:389/CN=Users,DC=moos,DC=red?sAMAccountName?sub?(objectClass=person)'}] 109 | ``` 110 | 111 | # Two Auth Provider 112 | 113 | Here is an example of using two auth providers 114 | ``` 115 | identityProviders: 116 | - challenge: true 117 | login: true 118 | mappingMethod: claim 119 | name: htpasswd_auth 120 | provider: 121 | apiVersion: v1 122 | file: /etc/origin/master/htpasswd 123 | kind: HTPasswdPasswordIdentityProvider 124 | - challenge: true 125 | login: true 126 | mappingMethod: claim 127 | name: htpasswd_auth2 128 | provider: 129 | apiVersion: v1 130 | file: /etc/origin/master/htpasswd2 131 | kind: HTPasswdPasswordIdentityProvider 132 | ``` 133 | 134 | 135 | The ansible scripts configured authentication using `htpasswd` so just create the users using the proper method 136 | 137 | ``` 138 | root@host# htpasswd -b /etc/origin/openshift-passwd demo demo 139 | ``` 140 | 141 | # Adding User to group 142 | 143 | Currently, you can only add a user to a group by setting the "group" array to a group 144 | ``` 145 | [root@ose3-master ~]# oc edit user/christian -o json 146 | { 147 | "kind": "User", 148 | "apiVersion": "v1", 149 | "metadata": { 150 | "name": "christian", 151 | "selfLink": "/osapi/v1beta3/users/christian", 152 | "uid": "a5c96638-4084-11e5-8a3c-fa163e2e3caf", 153 | "resourceVersion": "1182", 154 | "creationTimestamp": "2015-08-11T23:56:56Z" 155 | }, 156 | "identities": [ 157 | "htpasswd_auth:christian" 158 | ], 159 | "groups": [ 160 | "mygroup" 161 | ] 162 | } 163 | 164 | ``` 165 | 166 | # Create Admin User 167 | 168 | First, you create the user using either `ldap` or the `htpasswd` file. It's basically whatever backend auth system you set up; for example; if you use `htpasswd` create the user like so... 169 | 170 | ``` 171 | htpasswd /etc/origin/openshift-passwd ocp-admin 172 | ``` 173 | 174 | Then, as `system:admin` give this user `cluster-admin` permissions (**CAREFUL** this is like "root" but for OpenShift) 175 | 176 | ``` 177 | oc adm policy add-cluster-role-to-user cluster-admin ocp-admin 178 | ``` 179 | 180 | Now they can login from anywhere with an `oc` cli tool and login with... 181 | 182 | ``` 183 | user@host$ oc login https://ose3-master.example.com:8443 --insecure-skip-tls-verify --username=ocp-admin 184 | ``` 185 | 186 | You can then maybe install [cockpit](https://github.com/RedHatWorkshops/openshiftv3-ops-workshop/blob/master/deploying_cockpit_as_a_container.md#step-2) if you want a sort of "admin" interface. 187 | 188 | # Login 189 | 190 | There are many methods to login including username/password or token. 191 | 192 | ## User Login 193 | 194 | To login as a "regular" user... 195 | 196 | ``` 197 | user@host$ oc login https://ose3-master.example.com:8443 --insecure-skip-tls-verify --username=demo 198 | ``` 199 | ## Admin Login 200 | 201 | On the master, you can log back into OCP as admin with... 
202 | 203 | ``` 204 | root@master# oc login -u system:admin -n default 205 | ``` 206 | 207 | Or, you can specify the kubeconfig file directly 208 | 209 | ``` 210 | oc login --config=/path/to/admin.kubeconfig -u system:admin 211 | ``` 212 | 213 | You can also export `KUBECONFIG` to wherever your kubeconfig is (when you login, it SHOULD be under `~/.kube/config` but you can specify the one on the master if you'd like) 214 | 215 | # Custom Roles 216 | 217 | More info found [here](http://v1.uncontained.io/playbooks/operationalizing/custom_role_creation.html) 218 | 219 | Highlevel; find something like what you want and export it. 220 | 221 | ``` 222 | oc export clusterrole edit > edit_role.yaml 223 | cp edit_role.yaml customrole.yaml 224 | ``` 225 | 226 | Edit to your hearts content (I did a `diff` here to show you the change) 227 | 228 | ``` 229 | diff edit_role.yaml customrole.yaml 230 | 231 | 8c8 232 | < name: edit 233 | --- 234 | > name: edit_no_rsh 235 | 16d15 236 | < - pods/exec 237 | ``` 238 | 239 | Above you see I changed the name and removed `pods/exec` 240 | 241 | You also want to remove the `aggregationRule` (redo the diff because it'll look different than the `diff` above) 242 | 243 | ``` 244 | aggregationRule: 245 | clusterRoleSelectors: 246 | - matchLabels: 247 | rbac.authorization.k8s.io/aggregate-to-edit: "true" 248 | ``` 249 | 250 | Load the new role 251 | ``` 252 | oc create -f customrole.yaml 253 | clusterrole "edit_no_rsh" created 254 | ``` 255 | 256 | Assign to a user 257 | 258 | ``` 259 | oc adm policy add-role-to-user edit_no_rsh bob -n myproject 260 | ``` 261 | 262 | -------------------------------------------------------------------------------- /istio/sm-resources/bookinfo/00.setup-bookinfo.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 Istio Authors 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
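##################################################################################################
# Everything for the Bookinfo sample lives in this one manifest: the details, ratings,
# reviews (v1/v2/v3) and productpage Services and Deployments (each annotated for
# sidecar injection), the ingress Gateway and VirtualService, and the DestinationRules
# that enable mTLS (ISTIO_MUTUAL) and define the per-version subsets.
##################################################################################################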
14 | 15 | ################################################################################################## 16 | # Details service 17 | ################################################################################################## 18 | apiVersion: v1 19 | kind: Service 20 | metadata: 21 | name: details 22 | labels: 23 | app: details 24 | spec: 25 | ports: 26 | - port: 9080 27 | name: http 28 | selector: 29 | app: details 30 | --- 31 | apiVersion: extensions/v1beta1 32 | kind: Deployment 33 | metadata: 34 | name: details-v1 35 | spec: 36 | replicas: 1 37 | template: 38 | metadata: 39 | annotations: 40 | sidecar.istio.io/inject: "true" 41 | labels: 42 | app: details 43 | version: v1 44 | spec: 45 | containers: 46 | - name: details 47 | image: istio/examples-bookinfo-details-v1:1.8.0 48 | imagePullPolicy: IfNotPresent 49 | ports: 50 | - containerPort: 9080 51 | --- 52 | ################################################################################################## 53 | # Ratings service 54 | ################################################################################################## 55 | apiVersion: v1 56 | kind: Service 57 | metadata: 58 | name: ratings 59 | labels: 60 | app: ratings 61 | spec: 62 | ports: 63 | - port: 9080 64 | name: http 65 | selector: 66 | app: ratings 67 | --- 68 | apiVersion: extensions/v1beta1 69 | kind: Deployment 70 | metadata: 71 | name: ratings-v1 72 | spec: 73 | replicas: 1 74 | template: 75 | metadata: 76 | annotations: 77 | sidecar.istio.io/inject: "true" 78 | labels: 79 | app: ratings 80 | version: v1 81 | spec: 82 | containers: 83 | - name: ratings 84 | image: istio/examples-bookinfo-ratings-v1:1.8.0 85 | imagePullPolicy: IfNotPresent 86 | ports: 87 | - containerPort: 9080 88 | --- 89 | ################################################################################################## 90 | # Reviews service 91 | ################################################################################################## 92 | apiVersion: v1 93 | kind: Service 94 | metadata: 95 | name: reviews 96 | labels: 97 | app: reviews 98 | spec: 99 | ports: 100 | - port: 9080 101 | name: http 102 | selector: 103 | app: reviews 104 | --- 105 | apiVersion: extensions/v1beta1 106 | kind: Deployment 107 | metadata: 108 | name: reviews-v1 109 | spec: 110 | replicas: 1 111 | template: 112 | metadata: 113 | annotations: 114 | sidecar.istio.io/inject: "true" 115 | labels: 116 | app: reviews 117 | version: v1 118 | spec: 119 | containers: 120 | - name: reviews 121 | image: istio/examples-bookinfo-reviews-v1:1.8.0 122 | imagePullPolicy: IfNotPresent 123 | ports: 124 | - containerPort: 9080 125 | --- 126 | apiVersion: extensions/v1beta1 127 | kind: Deployment 128 | metadata: 129 | name: reviews-v2 130 | spec: 131 | replicas: 1 132 | template: 133 | metadata: 134 | annotations: 135 | sidecar.istio.io/inject: "true" 136 | labels: 137 | app: reviews 138 | version: v2 139 | spec: 140 | containers: 141 | - name: reviews 142 | image: istio/examples-bookinfo-reviews-v2:1.8.0 143 | imagePullPolicy: IfNotPresent 144 | ports: 145 | - containerPort: 9080 146 | --- 147 | apiVersion: extensions/v1beta1 148 | kind: Deployment 149 | metadata: 150 | name: reviews-v3 151 | spec: 152 | replicas: 1 153 | template: 154 | metadata: 155 | annotations: 156 | sidecar.istio.io/inject: "true" 157 | labels: 158 | app: reviews 159 | version: v3 160 | spec: 161 | containers: 162 | - name: reviews 163 | image: istio/examples-bookinfo-reviews-v3:1.8.0 164 | imagePullPolicy: IfNotPresent 165 | ports: 166 | - 
containerPort: 9080 167 | --- 168 | ################################################################################################## 169 | # Productpage services 170 | ################################################################################################## 171 | apiVersion: v1 172 | kind: Service 173 | metadata: 174 | name: productpage 175 | labels: 176 | app: productpage 177 | spec: 178 | ports: 179 | - port: 9080 180 | name: http 181 | selector: 182 | app: productpage 183 | --- 184 | apiVersion: extensions/v1beta1 185 | kind: Deployment 186 | metadata: 187 | name: productpage-v1 188 | spec: 189 | replicas: 1 190 | template: 191 | metadata: 192 | annotations: 193 | sidecar.istio.io/inject: "true" 194 | labels: 195 | app: productpage 196 | version: v1 197 | spec: 198 | containers: 199 | - name: productpage 200 | image: istio/examples-bookinfo-productpage-v1:1.8.0 201 | imagePullPolicy: IfNotPresent 202 | ports: 203 | - containerPort: 9080 204 | --- 205 | ################################################################################################## 206 | # Book info Gateway Rules 207 | ################################################################################################## 208 | apiVersion: networking.istio.io/v1alpha3 209 | kind: Gateway 210 | metadata: 211 | name: bookinfo-gateway 212 | spec: 213 | selector: 214 | istio: ingressgateway # use istio default controller 215 | servers: 216 | - port: 217 | number: 80 218 | name: http 219 | protocol: HTTP 220 | hosts: 221 | - "*" 222 | --- 223 | apiVersion: networking.istio.io/v1alpha3 224 | kind: VirtualService 225 | metadata: 226 | name: bookinfo 227 | spec: 228 | hosts: 229 | - "*" 230 | gateways: 231 | - bookinfo-gateway 232 | http: 233 | - match: 234 | - uri: 235 | exact: /productpage 236 | - uri: 237 | exact: /login 238 | - uri: 239 | exact: /logout 240 | - uri: 241 | prefix: /api/v1/products 242 | route: 243 | - destination: 244 | host: productpage 245 | port: 246 | number: 9080 247 | --- 248 | ################################################################################################## 249 | # Default Destination Rules to v1 of the application (using mTLS) 250 | ################################################################################################## 251 | apiVersion: networking.istio.io/v1alpha3 252 | kind: DestinationRule 253 | metadata: 254 | name: productpage 255 | spec: 256 | host: productpage 257 | trafficPolicy: 258 | tls: 259 | mode: ISTIO_MUTUAL 260 | subsets: 261 | - name: v1 262 | labels: 263 | version: v1 264 | --- 265 | apiVersion: networking.istio.io/v1alpha3 266 | kind: DestinationRule 267 | metadata: 268 | name: reviews 269 | spec: 270 | host: reviews 271 | trafficPolicy: 272 | tls: 273 | mode: ISTIO_MUTUAL 274 | subsets: 275 | - name: v1 276 | labels: 277 | version: v1 278 | - name: v2 279 | labels: 280 | version: v2 281 | - name: v3 282 | labels: 283 | version: v3 284 | --- 285 | apiVersion: networking.istio.io/v1alpha3 286 | kind: DestinationRule 287 | metadata: 288 | name: ratings 289 | spec: 290 | host: ratings 291 | trafficPolicy: 292 | tls: 293 | mode: ISTIO_MUTUAL 294 | subsets: 295 | - name: v1 296 | labels: 297 | version: v1 298 | - name: v2 299 | labels: 300 | version: v2 301 | - name: v2-mysql 302 | labels: 303 | version: v2-mysql 304 | - name: v2-mysql-vm 305 | labels: 306 | version: v2-mysql-vm 307 | --- 308 | apiVersion: networking.istio.io/v1alpha3 309 | kind: DestinationRule 310 | metadata: 311 | name: details 312 | spec: 313 | host: details 314 | trafficPolicy: 315 
| tls: 316 | mode: ISTIO_MUTUAL 317 | subsets: 318 | - name: v1 319 | labels: 320 | version: v1 321 | - name: v2 322 | labels: 323 | version: v2 324 | --- 325 | -------------------------------------------------------------------------------- /installation/guides/disconnected.md: -------------------------------------------------------------------------------- 1 | # Disconnected Install 2 | 3 | There are many things to take into account. I will write "highlevel" notes here but YMMV. Pull requests welcome but I may/maynot merge them. 4 | 5 | * [Sync Repositories](#sync-repos) 6 | * [Sync Registry](#sync-registry) 7 | * [Install OpenShift](#install-openshift) 8 | * [Troubleshooting](troubleshooting) 9 | 10 | NOTE: Most of this is hacked together from [Nick's Repo](https://github.com/nnachefski/ocpstuff/blob/master/install/setup-disconnected.md) 11 | 12 | ## Sync Repos 13 | 14 | If you're not using SAT or another repo that is pre-synced; you'll have to create your own. This is straight foward so I won't elaborate too much... 15 | 16 | __Subscribe your Server__ 17 | 18 | Subscribe to all the channels you want to sync (required even though this server won't be "using" them) 19 | 20 | ``` 21 | subscription-manager register 22 | subscription-manager attach --pool=${pool_id} 23 | subscription-manager repos --disable=* 24 | yum-config-manager --disable \* 25 | subscription-manager repos \ 26 | --enable=rhel-7-server-rpms \ 27 | --enable=rhel-7-server-extras-rpms \ 28 | --enable=rhel-7-server-ose-3.11-rpms \ 29 | --enable=rhel-7-server-ansible-2.6-rpms \ 30 | --enable=rh-gluster-3-client-for-rhel-7-server-rpms 31 | ``` 32 | 33 | __Install/Configure Apache__ 34 | 35 | ``` 36 | yum -y install httpd 37 | firewall-cmd --permanent --add-service=http 38 | firewall-cmd --reload 39 | ``` 40 | 41 | __Sync The Repos__ 42 | 43 | I usually do this in a script but here is the "straight" `for` loop. 44 | 45 | ``` 46 | for repo in rhel-7-server-rpms rhel-7-server-extras-rpms rhel-7-server-ose-3.11-rpms rhel-7-server-ansible-2.6-rpms rh-gluster-3-client-for-rhel-7-server-rpms 47 | do 48 | reposync --gpgcheck -lm --repoid=${repo} --download_path=/var/www/html/repos 49 | createrepo -v /var/www/html/repos/${repo} -o /var/www/html/repos/${repo} 50 | done 51 | ``` 52 | 53 | You'll probably need to run `restorecon` 54 | 55 | ``` 56 | /sbin/restorecon -vR /var/www/html 57 | ``` 58 | 59 | You can start/enable Apache 60 | 61 | ``` 62 | systemctl enable --now httpd 63 | ``` 64 | 65 | __Create Repo Files__ 66 | 67 | You need to create a repo file on ALL servers (masters/infra/nodes/ocs). 
Usually I create this as `/etc/yum.repos.d/ocp.repo` 68 | 69 | 70 | ``` 71 | [rhel-7-server-rpms] 72 | name=rhel-7-server-rpms 73 | baseurl=http://repo.example.com/repos/rhel-7-server-rpms 74 | enabled=1 75 | gpgcheck=0 76 | 77 | [rhel-7-server-extras-rpms] 78 | name=rhel-7-server-extras-rpms 79 | baseurl=http://repo.example.com/repos/rhel-7-server-extras-rpms 80 | enabled=1 81 | gpgcheck=0 82 | 83 | [rhel-7-server-ose-3.11-rpms] 84 | name=rhel-7-server-ose-3.11-rpms 85 | baseurl=http://repo.example.com/repos/rhel-7-server-ose-3.11-rpms 86 | enabled=1 87 | gpgcheck=0 88 | 89 | [rhel-7-server-ansible-2.6-rpms] 90 | name=rhel-7-server-ansible-2.6-rpms 91 | baseurl=http://repo.example.com/repos/rhel-7-server-ansible-2.6-rpms 92 | enabled=1 93 | gpgcheck=0 94 | 95 | [rh-gluster-3-client-for-rhel-7-server-rpms] 96 | name=rh-gluster-3-client-for-rhel-7-server-rpms 97 | baseurl=http://repo.example.com/repos/rh-gluster-3-client-for-rhel-7-server-rpms 98 | enabled=1 99 | gpgcheck=0 100 | ``` 101 | 102 | ## Sync Registry 103 | 104 | Now you need to sync the docker repo. These are high level notes and assumes you know what you're doing 105 | 106 | __Subscribe The Registry Server__ 107 | 108 | Subscribe to the proper channels 109 | 110 | ``` 111 | subscription-manager register 112 | subscription-manager attach --pool=${pool_id} 113 | subscription-manager repos --disable=* 114 | yum-config-manager --disable \* 115 | subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-ose-3.11-rpms --enable=rhel-7-fast-datapath-rpms --enable=rhel-7-server-ansible-2.6-rpms --enable=rh-gluster-3-client-for-rhel-7-server-rpms 116 | ``` 117 | 118 | __Install Docker__ 119 | 120 | Install and enable the registry and docker 121 | 122 | ``` 123 | yum -y install docker-distribution docker 124 | systemctl enable docker-distribution --now 125 | systemctl enable --now docker 126 | ``` 127 | 128 | Export your repo hostname (or whatever DNS is pointing to the server as) 129 | 130 | ``` 131 | export MY_REPO=$(hostname) 132 | ``` 133 | 134 | __Generate Certs__ 135 | 136 | This step is **OPTIONAL** ...skip this if you're not going to verify the cert of this server 137 | 138 | ``` 139 | mkdir -p /etc/docker/certs.d/$MY_REPO 140 | openssl req -newkey rsa:4096 -nodes -sha256 -keyout /etc/docker/certs.d/$MY_REPO/$MY_REPO.key -x509 -days 365 -out /etc/docker/certs.d/$MY_REPO/$MY_REPO.cert 141 | ``` 142 | 143 | Tell docker-registry to use this cert 144 | 145 | ``` 146 | cat <> /etc/docker-distribution/registry/config.yml 147 | headers: 148 | X-Content-Type-Options: [nosniff] 149 | tls: 150 | certificate: /etc/docker/certs.d/$MY_REPO/$MY_REPO.cert 151 | key: /etc/docker/certs.d/$MY_REPO/$MY_REPO.key 152 | EOF 153 | ``` 154 | 155 | Change the port of you don't want to use 5000 156 | 157 | ``` 158 | sed -i 's/\:5000/\:443/' /etc/docker-distribution/registry/config.yml 159 | ``` 160 | 161 | Restart the service if you made any changes 162 | 163 | ``` 164 | systemctl restart docker-distribution 165 | ``` 166 | 167 | You'll need to open up the firewall 168 | 169 | ``` 170 | firewall-cmd --permanent --add-service=https 171 | firewall-cmd --reload 172 | ``` 173 | 174 | __Install Skopeo__ 175 | 176 | You'll need certian python modules to do the sync so install them with epel (then disable epel) 177 | 178 | ``` 179 | rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm 180 | yum install -y python34 python34-pip 181 | yum-config-manager --disable epel 182 | ``` 183 | 
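Optional, but worth a quick sanity check before you start syncing: make sure `docker-distribution` is actually answering on the port you picked. The standard registry v2 API exposes a catalog endpoint (add `:5000` if you kept the default port instead of moving it to 443)

```
curl -k https://$MY_REPO/v2/_catalog
```

A fresh registry should come back with an empty `{"repositories":[]}` list.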
184 | Install Skopeo 185 | 186 | ``` 187 | yum install -y skopeo 188 | ``` 189 | 190 | __Sync Repos__ 191 | 192 | If you haven't export your repo's DNS/Hostname (you can use the IP too if you want). Also export your source repo 193 | 194 | ``` 195 | export MY_REPO=$(hostname) 196 | export SRC_REPO=registry.access.redhat.com 197 | ``` 198 | 199 | If you're using Red Hat's registry; you'll need to login in order to pull the images 200 | 201 | ``` 202 | docker login $SRC_REPO 203 | ``` 204 | 205 | Grab the files and script provided by Nick 206 | 207 | ``` 208 | cd ~ 209 | wget https://raw.githubusercontent.com/nnachefski/ocpstuff/master/scripts/import-images.py 210 | chmod +x import-images.py 211 | wget https://raw.githubusercontent.com/nnachefski/ocpstuff/master/images/core_images.txt 212 | wget https://raw.githubusercontent.com/nnachefski/ocpstuff/master/images/app_images.txt 213 | wget https://raw.githubusercontent.com/nnachefski/ocpstuff/master/images/mw_images.txt 214 | ``` 215 | 216 | Now loop trough these and sync them to your repo (I'd do this in a `tmux` session and I'd go grab lunch) 217 | 218 | ``` 219 | for i in core_images.txt app_images.txt mw_images.txt; do 220 | ./import-images.py docker $SRC_REPO $MY_REPO -d -l $i 221 | ./import-images.py docker $SRC_REPO $MY_REPO -d -l $i 222 | ./import-images.py docker $SRC_REPO $MY_REPO -d -l $i 223 | done 224 | ``` 225 | 226 | ## Install OpenShift 227 | 228 | Now you can install OpenShift like you would normally. The example ansible host files found [HERE](../../ansible_hostfiles/) is well commented but I'll go over the options you may need to change/uncomment 229 | 230 | ``` 231 | openshift_docker_blocked_registries=registry.access.redhat.com,docker.io,registry.redhat.io 232 | openshift_docker_additional_registries=registry.example.com 233 | oreg_url=registry.example.com/openshift3/ose-${component}:${version} 234 | openshift_examples_modify_imagestreams=true 235 | openshift_release=v3.11 236 | openshift_image_tag=v3.11 237 | openshift_docker_insecure_registries=0.0.0.0/0 238 | ``` 239 | 240 | That's all I had to use but YMMV. 241 | 242 | ## Troubleshooting 243 | 244 | If you are getting errors deploying run `journalctl --no-pager | grep -i "pull"` ...you'll see something like this in the output 245 | 246 | ``` 247 | 57.cloud.chx_kube-system(9ab708c14a27c2f93c7c3c0de192c3de)" failed: rpc error: code = Unknown desc = failed pulling image "registry.cloud.chx/openshift3/ose-pod:v3.11.16": Error: image openshift3/ose-pod:v3.11.16 not found 248 | ``` 249 | 250 | That means that the `.z` tag wasn't pulled in the core images. Pull/tag/upload. I wrote a script that does this 251 | 252 | ``` 253 | #!/bin/bash 254 | tag=v3.11.16 255 | repo=registry.cloud.chx 256 | for image in $(< core_images.txt) 257 | do 258 | docker pull ${repo}/${image}; docker tag ${repo}/${image} ${repo}/$(echo ${image} |awk -F: '{print $1}'):${tag} 259 | docker push ${repo}/$(echo ${image} |awk -F: '{print $1}'):${tag} 260 | done 261 | ## 262 | ``` 263 | 264 | Run that and then re-reun the installer 265 | --------------------------------------------------------------------------------