├── enhancements
│   ├── sig-policy
│   │   ├── OWNERS
│   │   ├── 98-long-term-compliance-history
│   │   │   ├── phase-1.jpeg
│   │   │   ├── phase-2.jpeg
│   │   │   ├── phase-3.jpeg
│   │   │   └── metadata.yaml
│   │   ├── 28-selective-policy-enforcment
│   │   │   ├── subFilterSelection.png
│   │   │   └── metadata.yaml
│   │   ├── 54-core-policy-module
│   │   │   └── metadata.yaml
│   │   ├── 134-standalone-hub-templates
│   │   │   ├── metadata.yaml
│   │   │   └── README.md
│   │   ├── 59-ansible-everyevent-mode
│   │   │   ├── metadata.yaml
│   │   │   └── README.md
│   │   ├── 66-delete-policy-objects
│   │   │   └── metadata.yaml
│   │   ├── 83-multiline-policy-templating
│   │   │   ├── metadata.yaml
│   │   │   └── README.md
│   │   ├── 122-expanded-hub-templates-access
│   │   │   └── metadata.yaml
│   │   ├── 85-gatekeeper-policy-integration
│   │   │   └── metadata.yaml
│   │   ├── 128-resource-selector
│   │   │   ├── metadata.yaml
│   │   │   └── README.md
│   │   ├── 78-policy-collection
│   │   │   ├── metadata.yaml
│   │   │   └── README.md
│   │   ├── 72-content-to-ansilbe-job
│   │   │   ├── metadata.yaml
│   │   │   └── README.md
│   │   ├── 74-order-policy-execution
│   │   │   └── metadata.yaml
│   │   ├── 62-namespace-labelselector
│   │   │   └── metadata.yaml
│   │   ├── 71-hub-templates-auto-reconcile
│   │   │   └── metadata.yaml
│   │   ├── 43-governance-addon-controller
│   │   │   ├── metadata.yaml
│   │   │   └── README.md
│   │   ├── 89-operator-policy-kind
│   │   │   └── metadata.yaml
│   │   ├── 26-policy-generator
│   │   │   └── metadata.yaml
│   │   ├── 99-policy-placement-strategy
│   │   │   └── metadata.yaml
│   │   └── 33-hub-templates-securing-sensitive-data
│   │       └── metadata.yaml
│   └── sig-architecture
│       ├── 34-work-executor-group
│       │   ├── arch.png
│       │   └── metadata.yaml
│       ├── 105-aws-iam-registration
│       │   ├── OCM-using-AWS-IAM.jpg
│       │   └── metadata.yaml
│       ├── 81-addon-lifecycle
│       │   ├── kustomize-rollout-examples
│       │   │   ├── 04-rollback
│       │   │   │   ├── kustomization.yaml
│       │   │   │   ├── hub-config.yaml
│       │   │   │   ├── nameReference.yaml
│       │   │   │   ├── cluster-management-addon.yaml
│       │   │   │   └── kustomized-output.yaml
│       │   │   ├── 01-fresh-install
│       │   │   │   ├── kustomization.yaml
│       │   │   │   ├── hub-config.yaml
│       │   │   │   ├── nameReference.yaml
│       │   │   │   ├── cluster-management-addon.yaml
│       │   │   │   └── kustomized-output.yaml
│       │   │   ├── 02-rolling-update
│       │   │   │   ├── kustomization.yaml
│       │   │   │   ├── hub-config.yaml
│       │   │   │   ├── nameReference.yaml
│       │   │   │   ├── cluster-management-addon.yaml
│       │   │   │   └── kustomized-output.yaml
│       │   │   └── 03-rolling-update-with-canary
│       │   │       ├── kustomization.yaml
│       │   │       ├── hub-config.yaml
│       │   │       ├── nameReference.yaml
│       │   │       ├── cluster-management-addon.yaml
│       │   │       └── kustomized-output.yaml
│       │   └── metadata.yaml
│       ├── 166-dynamic-scoring-framework-addon
│       │   ├── res
│       │   │   ├── concept.drawio.png
│       │   │   ├── mcp-connection.png
│       │   │   ├── story1.drawio.png
│       │   │   ├── story2.drawio.png
│       │   │   └── architecture.drawio.png
│       │   └── metadata.yaml
│       ├── 35-work-placement
│       │   ├── metadata.yaml
│       │   └── work-placement.md
│       ├── 82-addon-template
│       │   ├── examples
│       │   │   ├── upgrade
│       │   │   │   ├── placement.yaml
│       │   │   │   ├── managed-cluster-setbinding.yaml
│       │   │   │   ├── cluster-management-addon.yaml
│       │   │   │   └── addon-template-v2.yaml
│       │   │   ├── signer-ca.yaml
│       │   │   ├── addon-deployment-config.yaml
│       │   │   ├── signca-secret-role.yaml
│       │   │   ├── signca-secret-rolebinding.yaml
│       │   │   ├── cluster-management-addon.yaml
│       │   │   ├── roles.yaml
│       │   │   ├── managed-cluster-addon.yaml
│       │   │   ├── agent-deployment-output.yaml
│       │   │   └── addon-template.yaml
│       │   └── metadata.yaml
│       ├── 225-clusteradm-operator
│       │   └── metadata.yaml
│       ├── 146-fleet-namespaces
│       │   └── metadata.yaml
│       ├── 179-work-feedback-scrape-type
│       │   ├── metadata.yaml
│       │   └── README.md
│       ├── 8-addon-framework
│       │   └── metadata.yaml
│       ├── 6-placements
│       │   └── metadata.yaml
│       ├── 64-placementStrategy
│       │   └── metadata.yaml
│       ├── 100-multiple-bootstrapkubeconfigs
│       │   ├── metadata.yaml
│       │   └── README.md
│       ├── 29-manifestwork-status-feedback
│       │   └── metadata.yaml
│       ├── 138-workload-completion
│       │   └── metadata.yaml
│       ├── 14-addon-cluster-proxy
│       │   └── metadata.yaml
│       ├── 20-taint-toleration
│       │   └── metadata.yaml
│       ├── 4-cluster-claims
│       │   └── metadata.yaml
│       ├── 17-cluster-features
│       │   ├── metadata.yaml
│       │   └── README.md
│       ├── 47-manifestwork-updatestrategy
│       │   ├── metadata.yaml
│       │   └── README.md
│       ├── 12-addon-manager
│       │   └── metadata.yaml
│       ├── 224-event-based-manifestwork
│       │   └── metadata.yaml
│       ├── 30-clusterset-override
│       │   └── metadata.yaml
│       ├── 229-manifestworkreplicaset-rollback
│       │   └── metadata.yaml
│       ├── 33-hosted-deploy-mode
│       │   └── metadata.yaml
│       ├── 15-resourcebasedscheduling
│       │   └── metadata.yaml
│       ├── 58-addon-configuration
│       │   └── metadata.yaml
│       ├── 76-addon-install-strategy
│       │   ├── metadata.yaml
│       │   └── README.md
│       ├── 70-spread-policy
│       │   └── metadata.yaml
│       ├── 10-deletepropagationstrategy
│       │   ├── metadata.yaml
│       │   └── README.md
│       ├── 132-work-update-only-when-spec-change
│       │   ├── metadata.yaml
│       │   └── README.md
│       ├── 63-hosted-addon
│       │   └── metadata.yaml
│       ├── 98-cluster-permission
│       │   ├── metadata.yaml
│       │   └── README.md
│       ├── 141-grpc-based-registration
│       │   └── metadata.yaml
│       ├── 231-manifestworkreplicaset-rollout-plugin
│       │   └── metadata.yaml
│       ├── 136-placement-cel-selector
│       │   └── metadata.yaml
│       ├── 123-addon-multiple-configs
│       │   ├── metadata.yaml
│       │   └── README.md
│       ├── 32-extensiblescheduling
│       │   └── metadata.yaml
│       ├── 158-addon-v1beta1
│       │   └── metadata.yaml
│       └── 19-projected-serviceaccount-token
│           └── README.md
├── SECURITY.md
├── OWNERS
├── guidelines
│   └── metadata.yaml
├── .github
│   └── workflows
│       └── dco.yml
├── DCO
├── README.md
└── CODE_OF_CONDUCT.md

--------------------------------------------------------------------------------
/enhancements/sig-policy/OWNERS:
--------------------------------------------------------------------------------
approvers:
  - mprahl
  - dhaiducek

reviewers:
  - mprahl
  - dhaiducek

--------------------------------------------------------------------------------
/SECURITY.md:
--------------------------------------------------------------------------------
Refer to our [Community Security Response](https://github.com/open-cluster-management-io/community/blob/main/SECURITY.md).

--------------------------------------------------------------------------------
/OWNERS:
--------------------------------------------------------------------------------
approvers:
  - deads2k
  - qiujian16

emeritus_approvers:
  - mdelder # 2022-07-14
  - pmorie # 2022-07-14

reviewers:
  - deads2k
  - qiujian16

--------------------------------------------------------------------------------
/enhancements/sig-architecture/34-work-executor-group/arch.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-architecture/34-work-executor-group/arch.png

--------------------------------------------------------------------------------
/enhancements/sig-policy/98-long-term-compliance-history/phase-1.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-policy/98-long-term-compliance-history/phase-1.jpeg

--------------------------------------------------------------------------------
/enhancements/sig-policy/98-long-term-compliance-history/phase-2.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-policy/98-long-term-compliance-history/phase-2.jpeg

--------------------------------------------------------------------------------
/enhancements/sig-policy/98-long-term-compliance-history/phase-3.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-policy/98-long-term-compliance-history/phase-3.jpeg

--------------------------------------------------------------------------------
/enhancements/sig-architecture/105-aws-iam-registration/OCM-using-AWS-IAM.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-architecture/105-aws-iam-registration/OCM-using-AWS-IAM.jpg

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/04-rollback/kustomization.yaml:
--------------------------------------------------------------------------------
resources:
  - hub-config.yaml
  - cluster-management-addon.yaml
configurations:
  - nameReference.yaml

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/01-fresh-install/kustomization.yaml:
--------------------------------------------------------------------------------
resources:
  - hub-config.yaml
  - cluster-management-addon.yaml
configurations:
  - nameReference.yaml

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/02-rolling-update/kustomization.yaml:
--------------------------------------------------------------------------------
resources:
  - hub-config.yaml
  - cluster-management-addon.yaml
configurations:
  - nameReference.yaml

--------------------------------------------------------------------------------
/enhancements/sig-policy/28-selective-policy-enforcment/subFilterSelection.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-policy/28-selective-policy-enforcment/subFilterSelection.png

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/03-rolling-update-with-canary/kustomization.yaml:
--------------------------------------------------------------------------------
resources:
  - hub-config.yaml
  - cluster-management-addon.yaml
configurations:
  - nameReference.yaml

--------------------------------------------------------------------------------
/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/concept.drawio.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/concept.drawio.png

--------------------------------------------------------------------------------
/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/mcp-connection.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/mcp-connection.png

--------------------------------------------------------------------------------
/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/story1.drawio.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/story1.drawio.png

--------------------------------------------------------------------------------
/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/story2.drawio.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/story2.drawio.png

--------------------------------------------------------------------------------
/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/architecture.drawio.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/open-cluster-management-io/enhancements/HEAD/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/res/architecture.drawio.png

--------------------------------------------------------------------------------
/enhancements/sig-architecture/35-work-placement/metadata.yaml:
--------------------------------------------------------------------------------
title: work-placement
authors:
  - "@jnpacker"
  - "@qiujian16"
reviewers:
approvers:
creation-date: 2022-04-11
last-updated: 2022-04-11
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/82-addon-template/examples/upgrade/placement.yaml:
--------------------------------------------------------------------------------
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-all
  namespace: default
spec:
  clusterSets:
    - global

--------------------------------------------------------------------------------
/enhancements/sig-policy/54-core-policy-module/metadata.yaml:
--------------------------------------------------------------------------------
title: core-policy-module
authors:
  - "@JustinKuli"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2022-04-22
last-updated: 2022-06-07
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/134-standalone-hub-templates/metadata.yaml:
--------------------------------------------------------------------------------
title: standalone-hub-templates
authors:
  - "@mprahl"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2024-11-15
last-updated: 2024-11-15
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/59-ansible-everyevent-mode/metadata.yaml:
--------------------------------------------------------------------------------
title: ansible-everyevent-mode
authors:
  - "@mprahl"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2022-07-13
last-updated: 2022-07-14
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/66-delete-policy-objects/metadata.yaml:
--------------------------------------------------------------------------------
title: delete-policy-objects
authors:
  - "@willkutler"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2022-06-27
last-updated: 2022-06-27
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/83-multiline-policy-templating/metadata.yaml:
--------------------------------------------------------------------------------
title: multiline-policy-templating
authors:
  - "@mprahl"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2022-11-22
last-updated: 2022-11-22
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/122-expanded-hub-templates-access/metadata.yaml:
--------------------------------------------------------------------------------
title: expanded-hub-templates-access
authors:
  - "@mprahl"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2024-07-03
last-updated: 2024-07-03
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/85-gatekeeper-policy-integration/metadata.yaml:
--------------------------------------------------------------------------------
title: gatekeeper-policy-integration
authors:
  - "@mprahl"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2023-01-27
last-updated: 2023-02-09
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/82-addon-template/examples/signer-ca.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
data:
  tls.crt:
  tls.key:
kind: Secret
metadata:
  name: ca-secret
  namespace: open-cluster-management-hub
type: kubernetes.io/tls

--------------------------------------------------------------------------------
/enhancements/sig-architecture/82-addon-template/examples/upgrade/managed-cluster-setbinding.yaml:
--------------------------------------------------------------------------------
apiVersion: cluster.open-cluster-management.io/v1beta2
kind: ManagedClusterSetBinding
metadata:
  generation: 1
  name: global
  namespace: default
spec:
  clusterSet: global

--------------------------------------------------------------------------------
/enhancements/sig-policy/128-resource-selector/metadata.yaml:
--------------------------------------------------------------------------------
title: Resource selector for ConfigurationPolicy
authors:
  - "@dhaiducek"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2024-10-23
last-updated: 2024-10-23
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/78-policy-collection/metadata.yaml:
--------------------------------------------------------------------------------
title: policy-collection
authors:
  - "@gparvin"
reviewers:
  - "@mprahl"
  - "@dhaiducek"
approvers:
  - TBD
creation-date: 2022-10-24
last-updated: 2022-10-24
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/98-long-term-compliance-history/metadata.yaml:
--------------------------------------------------------------------------------
title: long-term-policy-compliance-history
authors:
  - "@mprahl"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2023-07-26
last-updated: 2023-08-02
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/225-clusteradm-operator/metadata.yaml:
--------------------------------------------------------------------------------
title: clusteradm-operator
authors:
  - "@TylerGillson"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2025-06-02
last-updated: 2025-06-02
status: provisional
see-also: []

--------------------------------------------------------------------------------
/enhancements/sig-policy/72-content-to-ansilbe-job/metadata.yaml:
--------------------------------------------------------------------------------
title: content-to-ansilbe-job
authors:
  - "@ChunxiAlexLuo"
  - "@willkutler"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2022-09-06
last-updated: 2022-09-06
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/74-order-policy-execution/metadata.yaml:
--------------------------------------------------------------------------------
title: order-policy-execution
authors:
  - "@willkutler"
  - "@JustinKuli"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2022-09-22
last-updated: 2022-09-22
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/146-fleet-namespaces/metadata.yaml:
--------------------------------------------------------------------------------
title: fleet-namespaces
authors:
  - "@jnpacker"
reviewers:
  - "@qiujian16"
  - "@mikeshng"
approvers:
  - "@qiujian16"
creation-date: 2025-07-14
last-updated: 2025-07-14
status: provisional

--------------------------------------------------------------------------------
/enhancements/sig-architecture/179-work-feedback-scrape-type/metadata.yaml:
--------------------------------------------------------------------------------
title: work-feedback-scrape-type
authors:
  - "@annelaucg"
  - "@youngbupark"
reviewers:
  - "TBD"
approvers:
  - "TBD"
creation-date: 2025-10-22
last-updated: 2025-10-22
status: provisional

--------------------------------------------------------------------------------
/enhancements/sig-policy/62-namespace-labelselector/metadata.yaml:
--------------------------------------------------------------------------------
title: namespace-labelselector
authors:
  - "@dhaiducek"
reviewers:
  - "@JustinKuli"
  - "@MPrahl"
approvers:
  - "@MPrahl"
creation-date: 2022-06-15
last-updated: 2022-06-15
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/71-hub-templates-auto-reconcile/metadata.yaml:
--------------------------------------------------------------------------------
title: automatic-policy-reconciliation-with-hub-templates
authors:
  - "@mprahl"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2022-08-31
last-updated: 2022-08-31
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/8-addon-framework/metadata.yaml:
--------------------------------------------------------------------------------
title: addon-framework
authors:
  - "@qiujian16"
reviewers:
  - "@deads2k"
approvers:
  - "@deads2k"
  - "@pmorie"
  - "@mdelder"
creation-date: 2021-02-22
last-updated: 2021-02-22
status: provisional

--------------------------------------------------------------------------------
/enhancements/sig-architecture/6-placements/metadata.yaml:
--------------------------------------------------------------------------------
title: new-placement-apis
authors:
  - "@elgnay"
reviewers:
  - "@qiujian16"
approvers:
  - "@deads2k"
  - "@pmorie"
  - "@mdelder"
creation-date: 2021-01-28
last-updated: 2021-06-28
status: implemented

--------------------------------------------------------------------------------
/enhancements/sig-architecture/64-placementStrategy/metadata.yaml:
--------------------------------------------------------------------------------
---
title: placement-strategy
authors:
  - "@serngawy"
reviewers:
  - "@qiujian16"
  - "@jnpacker"
  - "@jc-rh"
approvers:
  -
creation-date: 2023-03-01
last-updated: 2023-09-08
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/82-addon-template/examples/addon-deployment-config.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnDeploymentConfig
metadata:
  name: hello-template
  namespace: cluster1
spec:
  customizedVariables:
    - name: LOG_LEVEL
      value: "4"
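
The AddOnDeploymentConfig above sets a customized variable; as a minimal sketch only, assuming the {{VARIABLE}} substitution that the addon-template enhancement in this repository describes for AddOnTemplate-based add-ons (the deployment name and image below are hypothetical), such a variable can be consumed like this:

# Editor's sketch, not a file from this repository; names and image are hypothetical.
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnTemplate
metadata:
  name: hello-template
spec:
  addonName: hello-template
  agentSpec:
    workload:
      manifests:
        - apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: hello-agent
            namespace: open-cluster-management-agent-addon
          spec:
            replicas: 1
            selector:
              matchLabels:
                app: hello-agent
            template:
              metadata:
                labels:
                  app: hello-agent
              spec:
                containers:
                  - name: agent
                    image: quay.io/example/hello-agent:latest
                    env:
                      - name: LOG_LEVEL
                        # Assumed to be substituted from the customizedVariables
                        # entry in the AddOnDeploymentConfig shown above.
                        value: '{{LOG_LEVEL}}'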
--------------------------------------------------------------------------------
/enhancements/sig-architecture/100-multiple-bootstrapkubeconfigs/metadata.yaml:
--------------------------------------------------------------------------------
title: multiplehubs
authors:
  - "@xuezhaojun"
reviewers:
  - "@qiujian16"
  - "@deads2k"
approvers:
  - "@qiujian16"
  - "@deads2k"
creation-date: 2023-10-30
last-updated: 2023-10-30
status: provisional

--------------------------------------------------------------------------------
/enhancements/sig-architecture/29-manifestwork-status-feedback/metadata.yaml:
--------------------------------------------------------------------------------
title: return-resource-status-in-manifestwork
authors:
  - "@qiujian16"
reviewers:
  - "@deads2k"
  - "@elgnay"
approvers:
  - "@deads2k"
creation-date: 2021-10-18
last-updated: 2021-10-18
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/138-workload-completion/metadata.yaml:
--------------------------------------------------------------------------------
title: workload-completion
authors:
  - "@bhperry"
reviewers:
  - "@deads2k"
  - "@elgnay"
  - "@zhujian7"
approvers:
  - "@elgnay"
creation-date: 2025-02-24
last-updated: 2025-02-24
status: provisional
see-also: []

--------------------------------------------------------------------------------
/enhancements/sig-architecture/14-addon-cluster-proxy/metadata.yaml:
--------------------------------------------------------------------------------
title: cluster-proxy
authors:
  - "@yue9944882"
reviewers:
  - "@qiujian16"
  - "@luckyfengyong"
approvers:
  - "@qiujian16"
  - "@luckyfengyong"
creation-date: 2021-08-25
last-updated: 2021-08-25
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/04-rollback/hub-config.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnHubConfig
metadata:
  name: hub-config
  annotations:
    internal.config.kubernetes.io/needsHashSuffix: enabled
spec:
  desiredVersion: v0.10.0

--------------------------------------------------------------------------------
/enhancements/sig-architecture/20-taint-toleration/metadata.yaml:
--------------------------------------------------------------------------------
title: taint-toleration-in-placement-apis
authors:
  - "@anthonyxu109"
reviewers:
  - "@qiujian16"
approvers:
  - "@deads2k"
  - "@pmorie"
  - "@mdelder"
creation-date: 2021-08-21
last-updated: 2021-09-30
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/4-cluster-claims/metadata.yaml:
--------------------------------------------------------------------------------
title: update-managedcluster-status-to-include-clusterinfo
authors:
  - "@elgnay"
reviewers:
  - "@qiujian16"
approvers:
  - "@deads2k"
  - "@pmorie"
  - "@mdelder"
creation-date: 2020-10-30
last-updated: 2020-11-25
status: provisional
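
For reference alongside the 4-cluster-claims metadata above, a representative ClusterClaim as that enhancement describes: a cluster-scoped resource created on the managed cluster whose value is surfaced in the ManagedCluster status on the hub. The id.k8s.io claim name is from the upstream convention; the UID value here is illustrative only.

# Editor's sketch, not a file from this repository; the value is illustrative.
apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ClusterClaim
metadata:
  name: id.k8s.io
spec:
  value: 95f91f25-d7a2-4fc3-9237-2ef633d8451c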
--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/01-fresh-install/hub-config.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnHubConfig
metadata:
  name: hub-config
  annotations:
    internal.config.kubernetes.io/needsHashSuffix: enabled
spec:
  desiredVersion: v0.10.0

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/02-rolling-update/hub-config.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnHubConfig
metadata:
  name: hub-config
  annotations:
    internal.config.kubernetes.io/needsHashSuffix: enabled
spec:
  desiredVersion: v0.11.0

--------------------------------------------------------------------------------
/enhancements/sig-architecture/17-cluster-features/metadata.yaml:
--------------------------------------------------------------------------------
title: cluster-features
authors:
  - "@qiujian16"
reviewers:
  - "@deads2k"
  - "@elgnay"
approvers:
  - "@deads2k"
  - "@mdelder"
creation-date: 2021-08-10
last-updated: 2021-08-10
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/03-rolling-update-with-canary/hub-config.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnHubConfig
metadata:
  name: hub-config
  annotations:
    internal.config.kubernetes.io/needsHashSuffix: enabled
spec:
  desiredVersion: v0.11.0

--------------------------------------------------------------------------------
/enhancements/sig-policy/43-governance-addon-controller/metadata.yaml:
--------------------------------------------------------------------------------
title: governance-addon-controller
authors:
  - "@dhaiducek"
  - "@JustinKuli"
reviewers:
  - "@ycao56"
  - "@gparvin"
approvers:
  - "@qiujian16"
  - "@jnpacker"
creation-date: 2022-03-11
last-updated: 2022-03-11
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/47-manifestwork-updatestrategy/metadata.yaml:
--------------------------------------------------------------------------------
title: manifestwork-updatestrategy
authors:
  - "@qiujian16"
reviewers:
  - "@deads2k"
  - "@elgnay"
  - "@yue9944882"
approvers:
  - "@deads2k"
creation-date: 2021-10-18
last-updated: 2021-10-18
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/89-operator-policy-kind/metadata.yaml:
--------------------------------------------------------------------------------
title: operator-policy-kind
authors:
  - "@JustinKuli"
reviewers:
  - "@gparvin"
  - "@joeg-pro"
  - "@fgiloux"
  - "@jc-rh"
  - "@mprahl"
approvers:
  - TBD
creation-date: "2023-03-06"
last-updated: "2024-05-28"
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/26-policy-generator/metadata.yaml:
--------------------------------------------------------------------------------
title: policy-generator
authors:
  - "@dhaiducek"
  - "@mprahl"
  - "@ChunxiAlexLuo"
reviewers:
  - "@ycao56"
  - "@gparvin"
approvers:
  - "@qiujian16"
  - "@jnpacker"
creation-date: 2021-09-29
last-updated: 2021-09-30
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/12-addon-manager/metadata.yaml:
--------------------------------------------------------------------------------
title: addon-manager
authors:
  - "@skeeey"
reviewers:
  - "@qiujian16"
  - "@elgnay"
approvers:
  - "@deads2k"
  - "@pmorie"
  - "@mdelder"
creation-date: 2021-04-08
last-updated: 2021-04-19
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/224-event-based-manifestwork/metadata.yaml:
--------------------------------------------------------------------------------
title: event-based-manifestwork
authors:
  - "@jmelis"
  - "@apahim"
  - "@skeeey"
  - "@morvencao"
reviewers:
  - "@qiujian16"
  - "@clyang82"
approvers:
  - "@qiujian16"
creation-date: 2023-07-18
last-updated: 2023-07-18
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/30-clusterset-override/metadata.yaml:
--------------------------------------------------------------------------------
title: clusterset-override
authors:
  - "@ldpliu"
reviewers:
  - "@qiujian16"
  - "@elgnay"
approvers:
  - "@qiujian16"
  - "@elgnay"
  - "@deads2k"
creation-date: 2021-11-30
last-updated: 2022-02-24
status: provisional

--------------------------------------------------------------------------------
/enhancements/sig-architecture/229-manifestworkreplicaset-rollback/metadata.yaml:
--------------------------------------------------------------------------------
title: manifestworkreplicaset-rollback
authors:
  - "@youngbupark"
  - "@annelaucg"
reviewers:
  - "@qiujian16"
  - "@haoqing0110"
approvers:
  - "@qiujian16"
  - "@haoqing0110"
creation-date: 2025-10-31
last-updated: 2025-12-01
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/33-hosted-deploy-mode/metadata.yaml:
--------------------------------------------------------------------------------
title: hosted-mode
authors:
  - "@xuezhaojun"
  - "@zhujian7"
reviewers:
  - "@qiujian16"
  - "@zhiweiyin318"
  - "@elgnay"
approvers:
  - "@qiujian16"
  - "@zhiweiyin318"
  - "@elgnay"
creation-date: 2021-11-16
last-updated: 2022-06-20
status: implemented

--------------------------------------------------------------------------------
/enhancements/sig-architecture/34-work-executor-group/metadata.yaml:
--------------------------------------------------------------------------------
title: work-executor-group
authors:
  - "@zychina"
  - "@Somefive"
  - "@yue9944882"
reviewers:
  - "@deads2k"
  - "@qiujian16"
approvers:
  - "@deads2k"
  - "@qiujian16"
creation-date: 2022-02-09
last-updated: 2022-02-09
status: implementable
see-also: []

--------------------------------------------------------------------------------
/enhancements/sig-architecture/82-addon-template/examples/signca-secret-role.yaml:
--------------------------------------------------------------------------------
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: get-customer-ca
  namespace: open-cluster-management-hub
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
    resourceNames:
      - ca-secret

--------------------------------------------------------------------------------
/enhancements/sig-policy/99-policy-placement-strategy/metadata.yaml:
--------------------------------------------------------------------------------
title: policy-placement-strategy
authors:
  - "@dhaiducek"
  - "@mprahl"
reviewers:
  - "@mprahl"
  - "@gparvin"
  - "@melserngawy"
  - "@imiller0"
approvers:
  - "@mprahl"
  - "@gparvin"
creation-date: 2023-08-03
last-updated: 2023-10-02
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/15-resourcebasedscheduling/metadata.yaml:
--------------------------------------------------------------------------------
title: resource-based-scheduling
authors:
  - "@elgnay"
reviewers:
  - "@qiujian16"
approvers:
  - "@deads2k"
  - "@qiujian16"
  - "@mdelder"
creation-date: 2021-08-06
last-updated: 2021-09-17
status: implemented
see-also:
  - "/enhancements/sig-architecture/6-placements"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/58-addon-configuration/metadata.yaml:
--------------------------------------------------------------------------------
title: addon-placement-annotation
authors:
  - "@skeeey"
reviewers:
  - "@qiujian16"
  - "@yue9944882"
  - "@zhiweiyin318"
  - "@mprahl"
  - "@mikeshng"
  - "@JustinKuli"
approvers:
  - "@qiujian16"
creation-date: 2022-06-07
last-updated: 2022-06-07
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-policy/33-hub-templates-securing-sensitive-data/metadata.yaml:
--------------------------------------------------------------------------------
title: securing-sensitive-data-in-hub-policy-templates
authors:
  - "@mprahl"
reviewers:
  - "@ckandag"
  - "@gparvin"
  - "@ycao56"
approvers:
  - "@deads2k"
  - "@jnpacker"
  - "@qiujian16"
creation-date: 2021-12-21
last-updated: 2021-12-21
status: implementable

--------------------------------------------------------------------------------
/enhancements/sig-architecture/76-addon-install-strategy/metadata.yaml:
--------------------------------------------------------------------------------
title: addon-install-strategy
authors:
  - "@qiujian16"
reviewers:
  - "@deads2k"
  - "@yue9944882"
  - "@jnpacker"
approvers:
  - "@deads2k"
  - "@jnpacker"
creation-date: 2022-10-13
last-updated: 2022-10-13
status: provisional
see-also:
  - "/enhancements/sig-architecture/8-addon-framework"
--------------------------------------------------------------------------------
/enhancements/sig-architecture/70-spread-policy/metadata.yaml:
--------------------------------------------------------------------------------
title: spread-policy
authors:
  - "@huaouo"
reviewers:
  - "@qiujian16"
  - "@elgnay"
  - "@haoqing0110"
approvers:
  - "@qiujian16"
  - "@elgnay"
  - "@haoqing0110"
creation-date: 2022-08-03
last-updated: 2022-10-28
status: provisional
see-also:
  - "/enhancements/sig-architecture/6-placements"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/10-deletepropagationstrategy/metadata.yaml:
--------------------------------------------------------------------------------
title: add-deletionpropagation-for-manifestwork
authors:
  - "@qiujian16"
reviewers:
  - "@deads2k"
  - "@pmorie"
  - "@mdelder"
approvers:
  - "@deads2k"
  - "@pmorie"
  - "@mdelder"
creation-date: 2021-03-18
last-updated: 2021-03-18
status: implementable
see-also:
replaces:
superseded-by:

--------------------------------------------------------------------------------
/enhancements/sig-architecture/132-work-update-only-when-spec-change/metadata.yaml:
--------------------------------------------------------------------------------
title: work-update-only-when-spec-change
authors:
  - "@qiujian16"
reviewers:
  - "@deads2k"
  - "@elgnay"
  - "@zhujian7"
approvers:
  - "@elgnay"
creation-date: 2024-11-12
last-updated: 2024-11-12
status: provisional
see-also:
  - "/enhancements/sig-architecture/47-manifestwork-updatestrategy"

--------------------------------------------------------------------------------
/enhancements/sig-policy/28-selective-policy-enforcment/metadata.yaml:
--------------------------------------------------------------------------------
title: selective-policy-enforcement
authors:
  - "@imiller0"
  - "@jc-rh"
reviewers:
  - "@mprahl"
  - TBD
approvers:
  - TBD
creation-date: 2022-05-17
last-updated: 2022-05-17
status: provisional
see-also:
  - "https://github.com/openshift-kni/cluster-group-upgrades-operator"
replaces: []
superseded-by: []

--------------------------------------------------------------------------------
/enhancements/sig-architecture/63-hosted-addon/metadata.yaml:
--------------------------------------------------------------------------------
title: hosted-addon-deploy-mode
authors:
  - "@zhujian7"
reviewers:
  - "@qiujian16"
  - TBD
approvers:
  - "@qiujian16"
  - TBD
creation-date: 2022-06-21
last-updated: 2022-06-21
status: provisional
see-also:
  - "/enhancements/sig-architecture/33-hosted-deploy-mode"
  - "/enhancements/sig-architecture/8-addon-framework"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/98-cluster-permission/metadata.yaml:
--------------------------------------------------------------------------------
title: cluster-permission
authors:
  - "@xiangjingli"
  - "@jnpacker"
  - "@mikeshng"
reviewers:
  - "@qiujian16"
  - "@deads2k"
approvers:
  - "@qiujian16"
  - "@deads2k"
creation-date: 2023-03-30
last-updated: 2023-03-30
status: provisional
see-also:
  - "/enhancements/sig-architecture/19-projected-serviceaccount-token"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/141-grpc-based-registration/metadata.yaml:
--------------------------------------------------------------------------------
title: grpc-based-registration
authors:
  - "@qiujian16"
  - "@skeeey"
  - "@morvencao"
reviewers:
  - "@qiujian16"
  - "@clyang82"
approvers:
  - "@jnpacker"
  - "@zhujian7"
  - "@elgnay"
creation-date: 2025-04-09
last-updated: 2025-04-09
status: provisional
see-also:
  - "/enhancements/sig-architecture/224-event-based-manifestwork"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/231-manifestworkreplicaset-rollout-plugin/metadata.yaml:
--------------------------------------------------------------------------------
title: manifestworkreplicaset-rollout-plugin
authors:
  - "@youngbupark"
  - "@annelaucg"
reviewers:
  - "@qiujian16"
  - "@haoqing0110"
approvers:
  - "@qiujian16"
  - "@haoqing0110"
creation-date: 2025-10-31
last-updated: 2025-12-01
status: implementable
see-also:
  - "/enhancements/sig-architecture/229-manifestworkreplicaset-rollback/README.md"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/04-rollback/nameReference.yaml:
--------------------------------------------------------------------------------
nameReference:
  - group: addon.open-cluster-management.io
    version: v1alpha1
    kind: AddOnHubConfig
    isClusterScoped: true
    fieldSpecs:
      - group: addon.open-cluster-management.io
        version: v1alpha1
        kind: ClusterManagementAddOn
        path: spec/installStrategy/placements/configs/name
        isClusterScoped: true

--------------------------------------------------------------------------------
/enhancements/sig-architecture/166-dynamic-scoring-framework-addon/metadata.yaml:
--------------------------------------------------------------------------------
title: dynamic-scoring-framework-addon
authors:
  - "@KA-Takeuchi"
reviewers:
  - TBD
approvers:
  - TBD
creation-date: 2025-11-10
last-updated: 2025-11-10
status: provisional
see-also:
  - "/enhancements/sig-architecture/58-addon-configuration"
  - "/enhancements/sig-architecture/76-addon-install-strategy"
  - "/enhancements/sig-architecture/81-addon-lifecycle"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/01-fresh-install/nameReference.yaml:
--------------------------------------------------------------------------------
nameReference:
  - group: addon.open-cluster-management.io
    version: v1alpha1
    kind: AddOnHubConfig
    isClusterScoped: true
    fieldSpecs:
      - group: addon.open-cluster-management.io
        version: v1alpha1
        kind: ClusterManagementAddOn
        path: spec/installStrategy/placements/configs/name
        isClusterScoped: true

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/02-rolling-update/nameReference.yaml:
--------------------------------------------------------------------------------
nameReference:
  - group: addon.open-cluster-management.io
    version: v1alpha1
    kind: AddOnHubConfig
    isClusterScoped: true
    fieldSpecs:
      - group: addon.open-cluster-management.io
        version: v1alpha1
        kind: ClusterManagementAddOn
        path: spec/installStrategy/placements/configs/name
        isClusterScoped: true

--------------------------------------------------------------------------------
/enhancements/sig-architecture/82-addon-template/metadata.yaml:
--------------------------------------------------------------------------------
title: addon-template
authors:
  - "@zhujian7"
reviewers:
  - "@qiujian16"
  - "@haoqing0110"
approvers:
  - "@qiujian16"
creation-date: 2023-02-13
last-updated: 2024-12-09
status: provisional
see-also:
  - "/enhancements/sig-architecture/58-addon-configuration"
  - "/enhancements/sig-architecture/76-addon-install-strategy"
  - "/enhancements/sig-architecture/81-addon-lifecycle"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/136-placement-cel-selector/metadata.yaml:
--------------------------------------------------------------------------------
title: select-clusters-with-cel-expressions
authors:
  - "@haoqing0110"
reviewers:
  - "@qiujian16"
approvers:
  - "@qiujian16"
creation-date: 2024-12-18
last-updated: 2024-12-18
status: provisional
see-also:
  - "/enhancements/sig-architecture/6-placements"
  - "/enhancements/sig-architecture/15-resourcebasedscheduling"
  - "/enhancements/sig-architecture/32-extensiblescheduling"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/03-rolling-update-with-canary/nameReference.yaml:
--------------------------------------------------------------------------------
nameReference:
  - group: addon.open-cluster-management.io
    version: v1alpha1
    kind: AddOnHubConfig
    isClusterScoped: true
    fieldSpecs:
      - group: addon.open-cluster-management.io
        version: v1alpha1
        kind: ClusterManagementAddOn
        path: spec/installStrategy/placements/configs/name
        isClusterScoped: true

--------------------------------------------------------------------------------
/enhancements/sig-architecture/123-addon-multiple-configs/metadata.yaml:
--------------------------------------------------------------------------------
title: addon-multiple-configs
authors:
  - "@haoqing0110"
reviewers:
  - "@qiujian16"
  - "@JoaoBraveCoding"
approvers:
  - "@qiujian16"
creation-date: 2024-07-08
last-updated: 2024-07-30
status: provisional
see-also:
  - "/enhancements/sig-architecture/58-addon-configuration"
  - "/enhancements/sig-architecture/76-addon-install-strategy"
  - "/enhancements/sig-architecture/81-addon-lifecycle"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/01-fresh-install/cluster-management-addon.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
spec:
  installStrategy:
    type: Placements
    placements:
      - name: aws-placement
        namespace: default
        configs:
          - group: addon.open-cluster-management.io
            resource: addonhubconfigs
            name: hub-config

--------------------------------------------------------------------------------
/guidelines/metadata.yaml:
--------------------------------------------------------------------------------
title: neat-enhancement-idea
authors:
  - "@janedoe"
reviewers:
  - TBD
  - "@alicedoe"
approvers:
  - TBD
  - "@oscardoe"
creation-date: yyyy-mm-dd
last-updated: yyyy-mm-dd
status: provisional|implementable|implemented|deferred|rejected|withdrawn|replaced
see-also:
  - "/enhancements/this-other-neat-thing.md"
replaces:
  - "/enhancements/that-less-than-great-idea.md"
superseded-by:
  - "/enhancements/our-past-effort.md"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/metadata.yaml:
--------------------------------------------------------------------------------
title: addon-lifecycle
authors:
  - "@haoqing0110"
reviewers:
  - "@qiujian16"
  - "@deads2k"
  - "@elgnay"
  - "@skeeey"
  - "@zhiweiyin318"
approvers:
  - "@qiujian16"
  - "@deads2k"
  - "@skeeey"
creation-date: 2022-11-18
last-updated: 2023-02-22
status: provisional
see-also:
  - "/enhancements/sig-architecture/58-addon-configuration"
  - "/enhancements/sig-architecture/76-addon-install-strategy"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/82-addon-template/examples/signca-secret-rolebinding.yaml:
--------------------------------------------------------------------------------
# grant permission to addon-manager-controller-sa to get the customer CA secret
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: get-customer-ca
  namespace: open-cluster-management-hub
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: get-customer-ca
subjects:
  - kind: ServiceAccount
    name: addon-manager-controller-sa
    namespace: open-cluster-management-hub

--------------------------------------------------------------------------------
/enhancements/sig-architecture/32-extensiblescheduling/metadata.yaml:
--------------------------------------------------------------------------------
title: placement-extensible-scheduling
authors:
  - "@haoqing0110"
reviewers:
  - "@deads2k"
  - "@qiujian16"
  - "@elgnay"
approvers:
  - "@deads2k"
  - "@qiujian16"
  - "@elgnay"
creation-date: 2021-11-01
last-updated: 2025-01-08
status: implemented
see-also:
  - "/enhancements/sig-architecture/6-placements"
  - "/enhancements/sig-architecture/15-resourcebasedscheduling"
  - "/enhancements/sig-architecture/136-placement-cel-selector"

--------------------------------------------------------------------------------
/.github/workflows/dco.yml:
--------------------------------------------------------------------------------
name: DCO
on:
  workflow_dispatch: {}
  pull_request:
    branches:
      - main
      - release-*

jobs:
  dco_check:
    runs-on: ubuntu-latest
    name: DCO Check
    steps:
      - name: Get PR Commits
        id: 'get-pr-commits'
        uses: tim-actions/get-pr-commits@master
        with:
          token: ${{ secrets.GITHUB_TOKEN }}
      - name: DCO Check
        uses: tim-actions/dco@master
        with:
          commits: ${{ steps.get-pr-commits.outputs.commits }}

--------------------------------------------------------------------------------
/enhancements/sig-architecture/158-addon-v1beta1/metadata.yaml:
--------------------------------------------------------------------------------
title: addon-v1beta1
authors:
  - "@haoqing0110"
reviewers:
  - "@qiujian16"
  - "@elgnay"
approvers:
  - "@qiujian16"
  - "@elgnay"
creation-date: 2025-10-29
last-updated: 2025-10-29
status: provisional
see-also:
  - "/enhancements/sig-architecture/58-addon-configuration"
  - "/enhancements/sig-architecture/76-addon-install-strategy"
  - "/enhancements/sig-architecture/81-addon-lifecycle"
  - "/enhancements/sig-architecture/82-addon-template"

--------------------------------------------------------------------------------
/enhancements/sig-architecture/105-aws-iam-registration/metadata.yaml:
--------------------------------------------------------------------------------
title: aws-iam-registration
authors:
  - "@jaswalkiranavtar"
  - "@suvaanshkumar"
reviewers:
  - "@qiujian16"
  - "@deads2k"
  - "@mikeshng"
approvers:
  - "@qiujian16"
  - "@deads2k"
  - "@mikeshng"
creation-date: 2024-06-19
last-updated: 2024-06-19
status: provisional
see-also:
  - "https://github.com/dangorst1066/enhancements/tree/main/enhancements/sig-architecture/68-token-registration"
  - "/enhancements/sig-architecture/19-projected-serviceaccount-token"
replaces: []
superseded-by: []

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/02-rolling-update/cluster-management-addon.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
spec:
  installStrategy:
    type: Placements
    placements:
      - name: aws-placement
        namespace: default
        configs:
          - group: addon.open-cluster-management.io
            resource: addonhubconfigs
            name: hub-config
        rolloutStrategy:
          type: RollingUpdate
          rollingUpdate:
            maxConcurrentlyUpdating: 25%

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/01-fresh-install/kustomized-output.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnHubConfig
metadata:
  name: hub-config-kg44ddfcdc
spec:
  desiredVersion: v0.10.0
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
spec:
  installStrategy:
    placements:
      - configs:
          - group: addon.open-cluster-management.io
            name: hub-config-kg44ddfcdc
            resource: addonhubconfigs
        name: aws-placement
        namespace: default
    type: Placements

--------------------------------------------------------------------------------
/enhancements/sig-architecture/82-addon-template/examples/cluster-management-addon.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: hello-template
  annotations:
    addon.open-cluster-management.io/lifecycle: "addon-manager"
spec:
  addOnMeta:
    description: hello-template
    displayName: hello-template
  supportedConfigs:
    - group: addon.open-cluster-management.io
      resource: addontemplates
      defaultConfig:
        name: hello-template
    - group: addon.open-cluster-management.io
      resource: addondeploymentconfigs
      defaultConfig:
        name: hello-template
        namespace: cluster1

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/02-rolling-update/kustomized-output.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: AddOnHubConfig
metadata:
  name: hub-config-ft2b9k8t8h
spec:
  desiredVersion: v0.11.0
---
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
spec:
  installStrategy:
    placements:
      - configs:
          - group: addon.open-cluster-management.io
            name: hub-config-ft2b9k8t8h
            resource: addonhubconfigs
        name: aws-placement
        namespace: default
        rolloutStrategy:
          rollingUpdate:
            maxConcurrentlyUpdating: 25%
          type: RollingUpdate
    type: Placements

--------------------------------------------------------------------------------
/enhancements/sig-architecture/82-addon-template/examples/roles.yaml:
--------------------------------------------------------------------------------
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cm-reader
  namespace: open-cluster-management
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - list
      - watch

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cm-admin
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - "addon.open-cluster-management.io"
    resources:
      - managedclusteraddons
    verbs:
      - get
      - list
      - watch

--------------------------------------------------------------------------------
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/04-rollback/cluster-management-addon.yaml:
--------------------------------------------------------------------------------
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
spec:
  installStrategy:
    type: Placements
    placements:
      - name: aws-placement
        namespace: default
        configs:
          - group: addon.open-cluster-management.io
            resource: addonhubconfigs
            name: hub-config
        rolloutStrategy:
          type: RollingUpdateWithCanary
          rollingUpdateWithCanary:
            maxConcurrentlyUpdating: 25%
            placement:
              name: canary-placement
              namespace: default
      - name: canary-placement
        namespace: default
        configs:
          - group: addon.open-cluster-management.io
            resource: addondeploymentconfigs
            name: hub-config
        rolloutStrategy:
          type: RollingUpdate
          rollingUpdate:
            maxConcurrentlyUpdating: 25%
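
The rollout examples above wire a canary through a Placement named canary-placement, but the Placement objects themselves are not part of the example directories. A minimal hypothetical sketch of one (the label selector and cluster count are illustrative, not from this repository):

# Editor's sketch, not a file from this repository; selector and count are illustrative.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: canary-placement
  namespace: default
spec:
  numberOfClusters: 1
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            environment: canary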
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/03-rolling-update-with-canary/cluster-management-addon.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: addon.open-cluster-management.io/v1alpha1 2 | kind: ClusterManagementAddOn 3 | metadata: 4 | name: helloworld 5 | spec: 6 | installStrategy: 7 | type: Placements 8 | placements: 9 | - name: aws-placement 10 | namespace: default 11 | configs: 12 | - group: addon.open-cluster-management.io 13 | resource: addonhubconfigs 14 | name: hub-config 15 | rolloutStrategy: 16 | type: RollingUpdateWithCanary 17 | rollingUpdateWithCanary: 18 | maxConcurrentlyUpdating: 25% 19 | placement: 20 | name: canary-placement 21 | namespace: default 22 | - name: canary-placement 23 | namespace: default 24 | configs: 25 | - group: addon.open-cluster-management.io 26 | resource: addondeploymentconfigs 27 | name: hub-config 28 | rolloutStrategy: 29 | type: RollingUpdate 30 | rollingUpdate: 31 | maxConcurrentlyUpdating: 25% 32 | -------------------------------------------------------------------------------- /enhancements/sig-architecture/82-addon-template/examples/managed-cluster-addon.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: addon.open-cluster-management.io/v1alpha1 2 | kind: ManagedClusterAddOn 3 | metadata: 4 | name: hello-template 5 | namespace: cluster1 6 | spec: 7 | installNamespace: open-cluster-management-agent-addon 8 | # config: 9 | # - group: addon.open-cluster-management.io 10 | # resource: addontemplates 11 | # name: hello-template 12 | status: 13 | configReferences: 14 | - group: addon.open-cluster-management.io 15 | resource: addontemplates 16 | name: hello-template #Deprecated 17 | desiredConfig: 18 | name: hello-template 19 | specHash: yyy 20 | lastAppliedConfig: 21 | name: hello-template 22 | specHash: "" 23 | - group: addon.open-cluster-management.io 24 | resource: addondeploymentconfigs 25 | name: hello-template #Deprecated 26 | namespace: cluster1 #Deprecated 27 | desiredConfig: 28 | name: hello-template 29 | namespace: cluster1 30 | specHash: ccc 31 | lastAppliedConfig: 32 | name: hello-template 33 | namespace: cluster1 34 | specHash: "" 35 | -------------------------------------------------------------------------------- /enhancements/sig-architecture/82-addon-template/examples/upgrade/cluster-management-addon.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: addon.open-cluster-management.io/v1alpha1 2 | kind: ClusterManagementAddOn 3 | metadata: 4 | name: hello-template 5 | annotations: 6 | addon.open-cluster-management.io/lifecycle: "addon-manager" 7 | spec: 8 | addOnMeta: 9 | description: hello-template 10 | displayName: hello-template 11 | supportedConfigs: 12 | - group: addon.open-cluster-management.io 13 | resource: addontemplates 14 | defaultConfig: 15 | name: hello-template 16 | - group: addon.open-cluster-management.io 17 | resource: addondeploymentconfigs 18 | defaultConfig: 19 | name: hello-template 20 | namespace: cluster1 21 | installStrategy: 22 | type: Placements 23 | placements: 24 | - name: placement-all 25 | namespace: default 26 | configs: 27 | - group: addon.open-cluster-management.io 28 | resource: addontemplates 29 | name: hello-template-v2 30 | rolloutStrategy: 31 | type: All 32 | all: 33 | timeout: 30m 34 | -------------------------------------------------------------------------------- 
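The upgrade example above installs via a placement named `placement-all` in the `default` namespace. A minimal sketch of such a Placement, assuming an empty predicate list (which selects every cluster from the ManagedClusterSets bound to the namespace); this mirrors the example's intent but is not one of the source files shown here:

```yaml
# Hypothetical Placement assumed by the upgrade example; a ManagedClusterSetBinding
# in the same namespace is also required before any clusters can be selected.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-all
  namespace: default
spec: {}  # no predicates: select all clusters from the bound cluster sets
```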
/enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/04-rollback/kustomized-output.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: addon.open-cluster-management.io/v1alpha1 2 | kind: AddOnHubConfig 3 | metadata: 4 | name: hub-config-kg44ddfcdc 5 | spec: 6 | desiredVersion: v0.10.0 7 | --- 8 | apiVersion: addon.open-cluster-management.io/v1alpha1 9 | kind: ClusterManagementAddOn 10 | metadata: 11 | name: helloworld 12 | spec: 13 | installStrategy: 14 | placements: 15 | - configs: 16 | - group: addon.open-cluster-management.io 17 | name: hub-config-kg44ddfcdc 18 | resource: addonhubconfigs 19 | name: aws-placement 20 | namespace: default 21 | rolloutStrategy: 22 | rollingUpdateWithCanary: 23 | maxConcurrentlyUpdating: 25% 24 | placement: 25 | name: canary-placement 26 | namespace: default 27 | type: RollingUpdateWithCanary 28 | - configs: 29 | - group: addon.open-cluster-management.io 30 | name: hub-config-kg44ddfcdc 31 | resource: addondeploymentconfigs 32 | name: canary-placement 33 | namespace: default 34 | rolloutStrategy: 35 | rollingUpdate: 36 | maxConcurrentlyUpdating: 25% 37 | type: RollingUpdate 38 | type: Placements 39 | -------------------------------------------------------------------------------- /enhancements/sig-architecture/81-addon-lifecycle/kustomize-rollout-examples/03-rolling-update-with-canary/kustomized-output.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: addon.open-cluster-management.io/v1alpha1 2 | kind: AddOnHubConfig 3 | metadata: 4 | name: hub-config-ft2b9k8t8h 5 | spec: 6 | desiredVersion: v0.11.0 7 | --- 8 | apiVersion: addon.open-cluster-management.io/v1alpha1 9 | kind: ClusterManagementAddOn 10 | metadata: 11 | name: helloworld 12 | spec: 13 | installStrategy: 14 | placements: 15 | - configs: 16 | - group: addon.open-cluster-management.io 17 | name: hub-config-ft2b9k8t8h 18 | resource: addonhubconfigs 19 | name: aws-placement 20 | namespace: default 21 | rolloutStrategy: 22 | rollingUpdateWithCanary: 23 | maxConcurrentlyUpdating: 25% 24 | placement: 25 | name: canary-placement 26 | namespace: default 27 | type: RollingUpdateWithCanary 28 | - configs: 29 | - group: addon.open-cluster-management.io 30 | name: hub-config-ft2b9k8t8h 31 | resource: addondeploymentconfigs 32 | name: canary-placement 33 | namespace: default 34 | rolloutStrategy: 35 | rollingUpdate: 36 | maxConcurrentlyUpdating: 25% 37 | type: RollingUpdate 38 | type: Placements 39 | -------------------------------------------------------------------------------- /DCO: -------------------------------------------------------------------------------- 1 | Developer Certificate of Origin 2 | Version 1.1 3 | 4 | Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 5 | 1 Letterman Drive 6 | Suite D4700 7 | San Francisco, CA, 94129 8 | 9 | Everyone is permitted to copy and distribute verbatim copies of this 10 | license document, but changing it is not allowed. 
11 | 12 | 13 | Developer's Certificate of Origin 1.1 14 | 15 | By making a contribution to this project, I certify that: 16 | 17 | (a) The contribution was created in whole or in part by me and I 18 | have the right to submit it under the open source license 19 | indicated in the file; or 20 | 21 | (b) The contribution is based upon previous work that, to the best 22 | of my knowledge, is covered under an appropriate open source 23 | license and I have the right under that license to submit that 24 | work with modifications, whether created in whole or in part 25 | by me, under the same open source license (unless I am 26 | permitted to submit under a different license), as indicated 27 | in the file; or 28 | 29 | (c) The contribution was provided directly to me by some other 30 | person who certified (a), (b) or (c) and I have not modified 31 | it. 32 | 33 | (d) I understand and agree that this project and the contribution 34 | are public and that a record of the contribution (including all 35 | personal information I submit with it, including my sign-off) is 36 | maintained indefinitely and may be redistributed consistent with 37 | this project or the open source license(s) involved. 38 | -------------------------------------------------------------------------------- /enhancements/sig-architecture/82-addon-template/examples/agent-deployment-output.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | labels: 5 | app: hello-template-agent 6 | name: hello-template-agent 7 | namespace: open-cluster-management-agent-addon 8 | spec: 9 | replicas: 1 10 | selector: 11 | matchLabels: 12 | app: hello-template-agent 13 | template: 14 | metadata: 15 | labels: 16 | app: hello-template-agent 17 | spec: 18 | containers: 19 | - args: 20 | - /helloworld 21 | - agent 22 | - --cluster-name=cluster1 23 | - --addon-namespace=open-cluster-management-agent-addon 24 | - --addon-name=hello-template 25 | - --hub-kubeconfig=/managed/hub-kubeconfig/kubeconfig 26 | - --v=4 27 | env: 28 | - name: HUB_KUBECONFIG 29 | value: /managed/hub-kubeconfig/kubeconfig 30 | - name: CLUSTER_NAME 31 | value: cluster1 32 | - name: INSTALL_NAMESPACE 33 | value: open-cluster-management-agent-addon 34 | image: quay.io/open-cluster-management/addon-examples:latest 35 | imagePullPolicy: IfNotPresent 36 | name: helloworld-agent 37 | volumeMounts: 38 | - mountPath: /managed/hub-kubeconfig 39 | name: hub-kubeconfig 40 | - mountPath: /managed/example.com-signer-name 41 | name: cert-example-com-signer-name 42 | serviceAccount: hello-template-agent-sa 43 | serviceAccountName: hello-template-agent-sa 44 | volumes: 45 | - name: hub-kubeconfig 46 | secret: 47 | defaultMode: 420 48 | secretName: hello-template-hub-kubeconfig 49 | - name: cert-example-com-signer-name 50 | secret: 51 | defaultMode: 420 52 | secretName: hello-template-example.com-signer-name-client-cert 53 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Open Cluster Management Enhancement Proposals 2 | 3 | This repository contains enhancement proposals for [Open Cluster Management](https://open-cluster-management.io/), a community-driven project focused on multicluster and multicloud scenarios for Kubernetes. 
4 | 5 | ## What This Repository Is For 6 | 7 | This repository serves as the central location for proposing, discussing, and tracking significant new features and improvements to Open Cluster Management. Enhancement proposals provide a structured way for the community to collaborate on the evolution of the project. 8 | 9 | ## Enhancement Process 10 | 11 | Enhancement proposals follow a process inspired by the [Kubernetes enhancement](https://github.com/kubernetes/enhancements) process. This repository provides a place for the community to discuss, debate, and reach consensus on how new features are introduced to Open Cluster Management. 12 | 13 | Enhancements may take multiple releases to complete and form the basis of the community roadmap. Anyone in the community may propose enhancements, but they require consensus from the relevant project maintainers to be implemented. 14 | 15 | For template references and guidelines, see the [enhancement template](guidelines/README.md). 16 | 17 | ## Is My Thing an Enhancement? 18 | 19 | Consider creating an enhancement proposal if your idea: 20 | 21 | - Would be worth writing a blog post about after release 22 | - Requires significant effort or introduces substantial changes 23 | - Impacts upgrade or downgrade processes 24 | - Requires coordination across multiple repositories or domains 25 | - Introduces API changes or graduates between stability levels 26 | - Will be noticed by users and become something they rely on 27 | 28 | You probably don't need an enhancement proposal if your work: 29 | 30 | - Fixes a bug 31 | - Adds more testing 32 | - Refactors code that only affects internal implementation 33 | - Has minimal impact on the project as a whole 34 | 35 | If you're unsure whether your proposed work requires an enhancement, file an issue and ask the community. 36 | 37 | ## When to Create a New Enhancement 38 | 39 | Create an enhancement proposal after you have: 40 | 41 | - Shared your idea with the community to gauge interest 42 | - Optionally created a prototype to validate the concept 43 | - Identified people willing to work on and maintain the enhancement 44 | - Recognized that the work may span multiple releases 45 | 46 | ## Why Track Enhancements 47 | 48 | As Open Cluster Management evolves, it's important for the community to understand how we build, test, and document new capabilities. Enhancement proposals help ensure the community can collaborate effectively on design and implementation before significant development work begins. 49 | 50 | ## Participating in Enhancement Discussions 51 | 52 | Please comment on enhancement issues to: 53 | - Request clarification on the process 54 | - Update the status of enhancement efforts 55 | - Link to related issues in other repositories 56 | 57 | For detailed design discussions, use linked issues or design pull requests rather than the main enhancement issue. 
58 | -------------------------------------------------------------------------------- /enhancements/sig-architecture/82-addon-template/examples/upgrade/addon-template-v2.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: addon.open-cluster-management.io/v1alpha1 2 | kind: AddOnTemplate 3 | metadata: 4 | name: hello-template-v2 5 | spec: 6 | addonName: hello-template 7 | agentSpec: 8 | workload: 9 | manifests: 10 | - kind: Deployment 11 | apiVersion: apps/v1 12 | metadata: 13 | name: hello-template-agent 14 | namespace: open-cluster-management-agent-addon 15 | labels: 16 | app: hello-template-agent 17 | addon-template-version: v2 18 | spec: 19 | replicas: 1 20 | selector: 21 | matchLabels: 22 | app: hello-template-agent 23 | template: 24 | metadata: 25 | labels: 26 | app: hello-template-agent 27 | addon-template-version: v2 28 | spec: 29 | serviceAccountName: hello-template-agent-sa 30 | containers: 31 | - name: helloworld-agent 32 | image: quay.io/open-cluster-management/addon-examples:latest 33 | imagePullPolicy: IfNotPresent 34 | args: 35 | - "/helloworld" 36 | - "agent" 37 | - "--cluster-name={{CLUSTER_NAME}}" 38 | - "--addon-namespace=open-cluster-management-agent-addon" 39 | - "--addon-name=hello-template" 40 | - "--hub-kubeconfig={{HUB_KUBECONFIG}}" 41 | - "--v={{LOG_LEVEL}}" # addonDeploymentConfig variables 42 | - kind: ServiceAccount 43 | apiVersion: v1 44 | metadata: 45 | name: hello-template-agent-sa 46 | namespace: open-cluster-management-agent-addon 47 | - kind: ClusterRoleBinding 48 | apiVersion: rbac.authorization.k8s.io/v1 49 | metadata: 50 | name: hello-template-agent 51 | roleRef: 52 | apiGroup: rbac.authorization.k8s.io 53 | kind: ClusterRole 54 | name: cluster-admin 55 | subjects: 56 | - kind: ServiceAccount 57 | name: hello-template-agent-sa 58 | namespace: open-cluster-management-agent-addon 59 | registration: 60 | - type: KubeClient 61 | kubeClient: 62 | hubPermissions: 63 | - type: CurrentCluster 64 | currentCluster: 65 | clusterRoleName: cm-admin 66 | - type: SingleNamespace 67 | singleNamespace: 68 | namespace: open-cluster-management 69 | roleRef: 70 | apiGroup: rbac.authorization.k8s.io 71 | kind: Role 72 | name: cm-reader 73 | - type: CustomSigner 74 | customSigner: 75 | signerName: example.com/signer-name 76 | subject: 77 | user: user1 78 | groups: 79 | - g1 80 | - g2 81 | organizationUnit: 82 | - o1 83 | - o2 84 | signingCA: 85 | name: ca-secret 86 | -------------------------------------------------------------------------------- /enhancements/sig-policy/78-policy-collection/README.md: -------------------------------------------------------------------------------- 1 | # Policy Collection 2 | 3 | ## Release Signoff Checklist 4 | 5 | - [ ] Enhancement is `implementable` 6 | - [ ] Design details are appropriately documented from clear requirements 7 | - [ ] Test plan is defined 8 | - [ ] Graduation criteria for dev preview, tech preview, GA 9 | - [ ] User-facing documentation is created in 10 | [website](https://github.com/open-cluster-management/website/) 11 | 12 | ## Summary 13 | 14 | The goal of this feature is to automate and simplify deployment of policies, policy sets and 15 | policy generator projects by creating (transferring) a repository that provides best practices, samples, blogs 16 | and deployment resources. The repository for this policy collection will be 17 | [policy-collection](https://github.com/open-cluster-management-io/policy-collection/). 
The content will be transferred 18 | from the [existing policy-collection](https://github.com/stolostron/policy-collection/). 19 | 20 | ## Motivation 21 | 22 | The governance policy framework is currently provided without any samples or guidance on 23 | how to create policies other than by detailing the architecture of the 24 | [policy framework](https://open-cluster-management.io/getting-started/integration/policy-framework/). 25 | While this is functional, it does not lend itself to easy adoption by new community users. 26 | 27 | ### Goals 28 | 29 | - Transfer the policy collection community repository, which provides samples and best practices that encourage quicker 30 | adoption of the policy framework. 31 | - Make sure the transferred community is easy to consume by first-time users of OCM policy. 32 | 33 | ### Non-Goals 34 | 35 | - Because policies can be intended to work in many different environments with prerequisites that 36 | may not be available, validation of policies is focused on syntax initially, and additional validation 37 | can be considered in the future. 38 | 39 | ## Proposal 40 | 41 | ### User Stories 42 | 43 | 1. As a k8s admin, I want to deploy policies from the community with an experience that fosters: 44 | - Best practice community policies and policy sets to achieve governance goals for common k8s platforms 45 | - Donations of policies from users who have forked the community repository and added extra value 46 | to existing policies or created new ones 47 | - Quick adoption of the policy framework through blogs and documentation that reinforce concepts 48 | documented through the policy framework 49 | - Community adoption. Validate that documentation is centered around the Open Cluster Management 50 | community and is not product specific. 51 | 52 | ### Architecture 53 | 54 | The policy-collection does not add any architectural changes to the Policy Framework. It provides 55 | samples and resources that help community members adopt the Policy Framework and take advantage of the 56 | features that are provided. By showing the existing features, we also hope to make it easier for 57 | community members to see where new features can be added or where existing enhancements can be made. 58 | 59 | ### Test Plan 60 | 61 | - Basic policy validation will be added 62 | - Policies can be tested by applying them to a cluster 63 | - New users should be able to understand how to get started 64 | 65 | ### Graduation Criteria 66 | 67 | #### Alpha 68 | 69 | 1. The proposal is reviewed and accepted 70 | 2. Implementation is completed to support the primary goals 71 | 72 | #### Beta 73 | 74 | 1. Policy validation is performed 75 | 2.
Stable policy process is defined 76 | -------------------------------------------------------------------------------- /enhancements/sig-architecture/82-addon-template/examples/addon-template.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: addon.open-cluster-management.io/v1alpha1 2 | kind: AddOnTemplate 3 | metadata: 4 | name: hello-template 5 | spec: 6 | addonName: hello-template 7 | agentSpec: 8 | workload: 9 | manifests: 10 | - kind: Deployment 11 | apiVersion: apps/v1 12 | metadata: 13 | name: hello-template-agent 14 | namespace: open-cluster-management-agent-addon 15 | labels: 16 | app: hello-template-agent 17 | spec: 18 | replicas: 1 19 | selector: 20 | matchLabels: 21 | app: hello-template-agent 22 | template: 23 | metadata: 24 | labels: 25 | app: hello-template-agent 26 | spec: 27 | serviceAccountName: hello-template-agent-sa 28 | containers: 29 | - name: helloworld-agent 30 | image: quay.io/open-cluster-management/addon-examples:latest 31 | imagePullPolicy: IfNotPresent 32 | args: 33 | - "/helloworld" 34 | - "agent" 35 | - "--cluster-name={{CLUSTER_NAME}}" 36 | - "--addon-namespace=open-cluster-management-agent-addon" 37 | - "--addon-name=hello-template" 38 | - "--hub-kubeconfig={{HUB_KUBECONFIG}}" 39 | - "--v={{LOG_LEVEL}}" # addonDeploymentConfig variables 40 | - kind: ServiceAccount 41 | apiVersion: v1 42 | metadata: 43 | name: hello-template-agent-sa 44 | namespace: open-cluster-management-agent-addon 45 | - kind: ClusterRoleBinding 46 | apiVersion: rbac.authorization.k8s.io/v1 47 | metadata: 48 | name: hello-template-agent 49 | roleRef: 50 | apiGroup: rbac.authorization.k8s.io 51 | kind: ClusterRole 52 | name: cluster-admin 53 | subjects: 54 | - kind: ServiceAccount 55 | name: hello-template-agent-sa 56 | namespace: open-cluster-management-agent-addon 57 | registration: 58 | # kubeClient or custom signer, if kubeClient, user and group is in a certain format. 59 | # user is "system:open-cluster-management:cluster:{clusterName}:addon:{addonName}:agent:{agentName}" 60 | # group is ["system:open-cluster-management:cluster:{clusterName}:addon:{addonName}", 61 | # "system:open-cluster-management:addon:{addonName}", "system:authenticated"] 62 | - type: KubeClient 63 | kubeClient: 64 | hubPermissions: 65 | - type: CurrentCluster 66 | currentCluster: 67 | clusterRoleName: cm-admin 68 | - type: SingleNamespace 69 | singleNamespace: 70 | namespace: open-cluster-management 71 | roleRef: 72 | apiGroup: rbac.authorization.k8s.io 73 | kind: Role 74 | name: cm-reader 75 | - type: CustomSigner 76 | customSigner: 77 | signerName: example.com/signer-name 78 | subject: 79 | user: user1 80 | groups: 81 | - g1 82 | - g2 83 | organizationUnit: 84 | - o1 85 | - o2 86 | signingCA: 87 | name: ca-secret 88 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to making participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, sex characteristics, gender identity and expression, 9 | level of experience, education, socio-economic status, nationality, personal 10 | appearance, race, religion, or sexual identity and orientation. 
11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | * Using welcoming and inclusive language 18 | * Being respectful of differing viewpoints and experiences 19 | * Gracefully accepting constructive criticism 20 | * Focusing on what is best for the community 21 | * Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | * The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | * Trolling, insulting/derogatory comments, and personal or political attacks 28 | * Public or private harassment 29 | * Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | * Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies both within project spaces and in public spaces 49 | when an individual is representing the project or its community. Examples of 50 | representing a project or community include using an official project e-mail 51 | address, posting via an official social media account, or acting as an appointed 52 | representative at an online or offline event. Representation of a project may be 53 | further defined and clarified by project maintainers. 54 | 55 | ## Enforcement 56 | 57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 58 | reported by contacting the project team at [acm-contact@redhat.com](mailto:acm-contact@redhat.com). All 59 | complaints will be reviewed and investigated and will result in a response that 60 | is deemed necessary and appropriate to the circumstances. The project team is 61 | obligated to maintain confidentiality with regard to the reporter of an incident. 62 | Further details of specific enforcement policies may be posted separately. 63 | 64 | Project maintainers who do not follow or enforce the Code of Conduct in good 65 | faith may face temporary or permanent repercussions as determined by other 66 | members of the project's leadership. 
67 | 68 | ## Attribution 69 | 70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 72 | 73 | [homepage]: https://www.contributor-covenant.org 74 | 75 | For answers to common questions about this code of conduct, see 76 | https://www.contributor-covenant.org/faq 77 | -------------------------------------------------------------------------------- /enhancements/sig-policy/43-governance-addon-controller/README.md: -------------------------------------------------------------------------------- 1 | # Governance Policy Addon Controller 2 | 3 | ## Release Signoff Checklist 4 | 5 | - [ ] Enhancement is `implementable` 6 | - [ ] Design details are appropriately documented from clear requirements 7 | - [ ] Test plan is defined 8 | - [ ] Graduation criteria for dev preview, tech preview, GA 9 | - [ ] User-facing documentation is created in 10 | [website](https://github.com/open-cluster-management/website/) 11 | 12 | ## Summary 13 | 14 | The goal of this feature is to automate and simplify deployment of the governance policy components 15 | to managed clusters by creating a controller that leverages the 16 | [addon framework](https://github.com/open-cluster-management-io/addon-framework/) in order to deploy 17 | components using the ManagedClusterAddOn kind. 18 | 19 | ## Motivation 20 | 21 | The governance policy framework is currently deployed using individual manifest files deployed 22 | directly by the user. While this is functional, it does not lend itself to automation, 23 | customization, scalability, nor hub-centered management that Open Cluster Management demands. 24 | 25 | ### Goals 26 | 27 | - Develop a controller that deploys governance policy components to managed clusters using the 28 | ManagedClusterAddOn kind. 29 | 30 | ### Non-Goals 31 | 32 | - None at this time. 33 | 34 | ## Proposal 35 | 36 | ### User Stories 37 | 38 | 1. As a k8s admin, I want to deploy, uninstall, and customize the governance policy framework on my 39 | managed clusters. 40 | 41 | ### Architecture 42 | 43 | The governance policy addon controller manages installations of policy addons on managed clusters by 44 | using 45 | [ManifestWorks](https://github.com/open-cluster-management-io/api/blob/main/docs/manifestwork.md). 46 | The addons can be enabled, disabled, and configured via their `ManagedClusterAddOn` resources. 47 | 48 | (For more information on the addon framework, see the 49 | [addon-framework enhancement/design](https://github.com/open-cluster-management-io/enhancements/tree/main/enhancements/sig-architecture/8-addon-framework).) 50 | 51 | The addons managed by this controller are: 52 | 53 | - The "config-policy-controller" consisting of the 54 | [Configuration Policy Controller](https://github.com/open-cluster-management-io/config-policy-controller). 55 | - The "governance-policy-framework" consisting of the 56 | [Policy Spec Sync](https://github.com/open-cluster-management-io/governance-policy-spec-sync), the 57 | [Policy Status Sync](https://github.com/open-cluster-management-io/governance-policy-status-sync), 58 | and the 59 | [Policy Template Sync](https://github.com/open-cluster-management-io/governance-policy-template-sync). 
60 | 61 | For example, this manifest deploys the Configuration Policy Controller to a managed cluster called 62 | `my-managed-cluster` in the `open-cluster-management-agent-addon` namespace: 63 | 64 | ```yaml 65 | apiVersion: addon.open-cluster-management.io/v1alpha1 66 | kind: ManagedClusterAddOn 67 | metadata: 68 | name: config-policy-controller 69 | namespace: my-managed-cluster 70 | spec: 71 | installNamespace: open-cluster-management-agent-addon 72 | ``` 73 | 74 | To modify the image used by the Configuration Policy Controller on this managed cluster, you can add 75 | an annotation like: 76 | 77 | ```shell 78 | kubectl -n my-managed-cluster annotate ManagedClusterAddOn config-policy-controller addon.open-cluster-management.io/values=' 79 | { "global": { 80 | "imageOverrides": { 81 | "config_policy_controller":"quay.io/my-repo/my-configpolicy:imagetag" 82 | }}}' 83 | ``` 84 | 85 | Helm charts are contained within the addon controller repo, and any values in the Helm chart's 86 | `values.yaml` can be modified using such annotations. 87 | 88 | Of particular relevance is modifying this value for the framework addon: 89 | 90 | - `addon.open-cluster-management.io/values={"onMulticlusterHub":true}` 91 | - This will modify the deployment to accommodate the connections and synchronization unique to a 92 | hub that manages itself via Open Cluster Management. 93 | 94 | Additionally, there will be specific simplified annotations for the addons, including: 95 | 96 | - `policy-addon-pause=true` 97 | - Pause the addon controller so it does not make changes to, or reconcile, the deployment on the 98 | managed cluster. 99 | - `log-level=4` 100 | - Set the logging level of the addon to adjust the verbosity of the logs. 101 | 102 | ### Test Plan 103 | 104 | - Unit tests are not realistic because this is an implementation of the addon-framework 105 | - E2E/integration tests to ensure that the hub is able to manage itself and that all components can 106 | successfully be deployed via the governance addon controller. 107 | 108 | ### Graduation Criteria 109 | 110 | #### Alpha 111 | 112 | 1. The proposal is reviewed and accepted 113 | 2. Implementation is completed to support the primary goals 114 | 3. Test cases developed to demonstrate that the user stories are fulfilled 115 | 116 | #### Beta 117 | 118 | 1. E2E/Integration tests complete 119 | -------------------------------------------------------------------------------- /enhancements/sig-policy/83-multiline-policy-templating/README.md: -------------------------------------------------------------------------------- 1 | # Support for Multiline YAML Values in Policy Templates 2 | 3 | ## Release Signoff Checklist 4 | 5 | - [ ] Enhancement is `implementable` 6 | - [ ] Design details are appropriately documented from clear requirements 7 | - [ ] Test plan is defined 8 | - [ ] Graduation criteria for dev preview, tech preview, GA 9 | - [ ] User-facing documentation is created in 10 | [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/) 11 | 12 | ## Summary 13 | 14 | Policy templates are limited to one line of YAML and cannot produce data structures that extend beyond that. A new field 15 | of `object-templates-raw` will be introduced so that a single multiline string of YAML with any form of templates can be 16 | specified. The result of `object-templates-raw` after template execution must be in the format of `object-templates`.
17 | 18 | ## Motivation 19 | 20 | Policy templates are limited to one line of YAML and cannot produce data structures that extend beyond that. Policy 21 | template output was historically limited to just strings, integers, and booleans. This has recently improved with the 22 | addition of the `toLiteral` template function, but this is complex to use when the data structures are large, since they 23 | must all be in one line of JSON. 24 | 25 | Users also cannot use policy templates to define a variable set of objects with `range` to configure in a particular 26 | `ConfigurationPolicy`. This leads to duplication within the `object-templates` array. 27 | 28 | Lastly, one cannot use a `ConfigurationPolicy` to enforce a value on all objects of a certain type in a namespace. Being 29 | able to use `range` would allow this. 30 | 31 | ### Goals 32 | 33 | 1. To provide a flexible way so that policy templates aren't restricted to a single line of YAML. 34 | 35 | ### Non-Goals 36 | 37 | N/A 38 | 39 | ## Proposal 40 | 41 | A new field of `object-templates-raw` will be introduced so that a single multiline string of YAML with any form of 42 | templates can be specified. The result of `object-templates-raw` after template execution must be in the format of 43 | `object-templates`. A user cannot specify both `object-templates` and `object-templates-raw`. 44 | 45 | ```yaml 46 | apiVersion: policy.open-cluster-management.io/v1 47 | kind: Policy 48 | metadata: 49 | name: machine-config 50 | namespace: policies 51 | spec: 52 | disabled: false 53 | remediationAction: enforce 54 | policy-templates: 55 | - objectDefinition: 56 | apiVersion: policy.open-cluster-management.io/v1 57 | kind: ConfigurationPolicy 58 | metadata: 59 | name: machine-config 60 | spec: 61 | remediationAction: enforce 62 | severity: high 63 | object-templates-raw: | 64 | {{- range (lookup "machine.openshift.io/v1beta1" "MachineSet" "openshift-machine-api" "").items }} 65 | - complianceType: musthave 66 | objectDefinition: 67 | apiVersion: machine.openshift.io/v1beta1 68 | kind: MachineSet 69 | metadata: 70 | labels: 71 | machine.openshift.io/cluster-api-cluster: {{ .metadata.name }} 72 | name: {{ .metadata.name }} 73 | namespace: openshift-machine-api 74 | spec: 75 | {{- if eq .metadata.name "policy-grc-cp-autocla-j245q-worker-us-east-1c" }} 76 | replicas: 2 77 | {{- else }} 78 | replicas: 1 79 | {{- end }} 80 | {{- end }} 81 | ``` 82 | 83 | ### User Stories 84 | 85 | #### Story 1 86 | 87 | As a policy user, I would like to use ranges in my policy templates to avoid duplication in my `object-templates` 88 | definition. 89 | 90 | #### Story 2 91 | 92 | As a policy user, I would like to use conditionals around arrays and objects so that I can avoid duplicating policies for 93 | different environments. 94 | 95 | ### Implementation Details/Notes/Constraints [optional] 96 | 97 | On every `ConfigurationPolicy` evaluation, the Configuration Policy controller will resolve the templates in the 98 | `object-templates-raw` string and then store the unmarshaled result in the `object-templates` field. The rest of the 99 | processing will continue as is. 100 | 101 | The OpenAPI validation in the CRD must disallow specifying both `object-templates` and `object-templates-raw` in the 102 | same `ConfigurationPolicy` manifest.
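One way to express this mutual exclusion in the CRD's OpenAPI v3 schema is a `oneOf` over the two fields; note that `oneOf` additionally requires that exactly one of them be present. This is a sketch of the general technique, not necessarily the exact schema the controller ships:

```yaml
# Hypothetical excerpt of the ConfigurationPolicy CRD spec schema,
# illustrating oneOf-based mutual exclusion of the two fields.
spec:
  type: object
  oneOf:
    - required: ["object-templates"]
    - required: ["object-templates-raw"]
  properties:
    object-templates:
      type: array
      items:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    object-templates-raw:
      type: string
```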
103 | 104 | ### Risks and Mitigation 105 | 106 | ### Open Questions [optional] 107 | 108 | N/A 109 | 110 | ### Test Plan 111 | 112 | **Note:** _Section not required until targeted at a release._ 113 | 114 | ### Graduation Criteria 115 | 116 | It would be GA in the release after implementation. 117 | 118 | ### Upgrade / Downgrade Strategy 119 | 120 | There are no concerns with upgrades since this is not a breaking change and does not require user changes. 121 | 122 | ### Version Skew Strategy 123 | 124 | ## Implementation History 125 | 126 | N/A 127 | 128 | ## Drawbacks 129 | 130 | See the Risks section. 131 | 132 | ## Alternatives 133 | 134 | N/A 135 | 136 | ## Infrastructure Needed [optional] 137 | 138 | N/A 139 | -------------------------------------------------------------------------------- /enhancements/sig-architecture/100-multiple-bootstrapkubeconfigs/README.md: -------------------------------------------------------------------------------- 1 | # Support multiple bootstrap kubeconfigs and switching between hubs - MultipleHubs 2 | 3 | ## Release Signoff Checklist 4 | 5 | - [ ] Enhancement is `implemented` 6 | - [ ] Design details are appropriately documented from clear requirements 7 | - [ ] Test plan is defined 8 | - [ ] Graduation criteria for dev preview, tech preview, GA 9 | - [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/) 10 | 11 | ## Summary 12 | This proposal aims to provide a mechanism for configuring multiple bootstrap kubeconfigs, enabling agents to smoothly switch between hubs at scale with fewer manual operations. 13 | 14 | ## Motivation 15 | Currently, an agent can only point to one hub; if we want to switch to another hub, we need to replace the secret `bootstrap-hub-kubeconfig` in the `open-cluster-management-agent` namespace manually. 16 | 17 | To address this, we will support configuring multiple bootstrap kubeconfig secrets and provide a simple mechanism for agents to determine which hub they should connect to. With multiple bootstrap kubeconfigs configured, agents can complete the switching process without manual operations. 18 | 19 | This mechanism will be beneficial for various scenarios, including upgrading, disaster recovery, improving availability, etc. 20 | 21 | ## Goals 22 | - Trigger the agent to reselect when the current hub it is connected to is not available. 23 | - Trigger the agent to reselect when the user explicitly denies the connection to the current hub. 24 | 25 | ## Non-Goals 26 | - Ensuring bootstrap kubeconfig candidates are valid **during the candidate selection phase** is out of scope. 27 | - Determining which bootstrap kubeconfig is suitable for a specific hub at any given time. This depends on the user's specific scenario. 28 | - Synchronizing customer workloads and configurations between hubs; ensure that the resources of the hubs are synchronized. 29 | - If the resources are not synchronized, the old customer workload may be wiped out after the switch. 30 | 31 | ## Use cases 32 | 33 | ### Story 1 - Backup and restore 34 | As a user, I want to switch agents to the new (restored) hub **without manual operations**. 35 | 36 | ### Story 2 - Rolling upgrade 37 | As a user, I want to switch half of the agents to the upgraded hub to test the functionality of the upgraded components. 38 | 39 | ### Story 3 - High availability architecture 40 | As a user, I want to configure 2 hubs in active-active mode to improve management availability.
In this setup, if one hub goes down, the other hub will continue to provide management capabilities. 41 | 42 | ## Risks and Mitigation 43 | N/A 44 | 45 | ## Design Details 46 | 47 | The `MultipleHubs` feature should be controlled by a feature gate of Klusterlet: 48 | 49 | ``` 50 | apiVersion: operator.open-cluster-management.io/v1 51 | kind: Klusterlet 52 | ... 53 | spec: 54 | ... 55 | registrationConfiguration: 56 | ... 57 | featureGates: 58 | - feature: MultipleHubs 59 | mode: Enable 60 | ``` 61 | 62 | Multiple bootstrap kubeconfigs can be configured in the Klusterlet by the `bootstrapKubeConfigs` field: 63 | 64 | ``` 65 | apiVersion: operator.open-cluster-management.io/v1 66 | kind: Klusterlet 67 | ... 68 | spec: 69 | ... 70 | registrationConfiguration: 71 | ... 72 | featureGates: 73 | - feature: MultipleHubs 74 | mode: Enable 75 | bootstrapKubeConfigs: 76 | type: "LocalSecrets" 77 | localSecretsConfig: 78 | kubeConfigSecrets: 79 | - name: "hub1-bootstrap" 80 | - name: "hub2-bootstrap" 81 | hubConnectionTimeoutSeconds: 600 82 | ``` 83 | 84 | At the initial stage, we only support the `LocalSecrets` type, which means the bootstrap kubeconfig secrets are stored in the managed cluster. 85 | 86 | ### Workflow 87 | 88 | Let's see an example: assume we have 2 hubs, `hub1` and `hub2`, and a managed cluster named `cluster1`. We want `cluster1` to register with `hub1` first, and then switch to `hub2`. 89 | 90 | First, we create 2 secrets in the agent's `open-cluster-management-agent` namespace on the managed cluster `cluster1`: 91 | 92 | ```yaml 93 | # The kubeconfig points to hub1 94 | kind: Secret 95 | apiVersion: v1 96 | metadata: 97 | name: hub1-bootstrap 98 | namespace: open-cluster-management-agent 99 | data: 100 | kubeconfig: YXBpVmVyc2lvbjogdjEK... 101 | ``` 102 | 103 | ```yaml 104 | # The kubeconfig points to hub2 105 | kind: Secret 106 | apiVersion: v1 107 | metadata: 108 | name: hub2-bootstrap 109 | namespace: open-cluster-management-agent 110 | data: 111 | kubeconfig: YXBpVmVyc2lvbjogdjEK... 112 | ``` 113 | 114 | And the `bootstrapKubeConfigs` field in the Klusterlet is configured as follows: 115 | 116 | ```yaml 117 | bootstrapKubeConfigs: 118 | type: "LocalSecrets" 119 | localSecretsConfig: 120 | kubeConfigSecrets: 121 | - name: "hub1-bootstrap" 122 | - name: "hub2-bootstrap" 123 | hubConnectionTimeoutSeconds: 600 124 | ``` 125 | 126 | The agent will choose bootstrap kubeconfigs in order from the `kubeConfigSecrets` list. It will first try the kubeconfig from the first secret in the list ("hub1-bootstrap"), and if that fails it will try the second one ("hub2-bootstrap"), and so on. Once a working kubeconfig is found, the agent will use it to start bootstrapping. 127 | 128 | In our case, the agent will first register with hub1 successfully. To trigger the switch to `hub2`, we simply set `hubAcceptsClient` to `false` on the managed cluster CR on hub1: 129 | 130 | ```yaml 131 | apiVersion: cluster.open-cluster-management.io/v1 132 | kind: ManagedCluster 133 | ... 134 | spec: 135 | ... 136 | hubAcceptsClient: false 137 | ``` 138 | 139 | If `hubAcceptsClient` is set to `false`, the managed cluster currently connected to the hub will immediately disconnect from the hub and try to connect to another hub cluster. 140 | 141 | On the other hand, if the agent loses its connection to the hub for longer than `hubConnectionTimeoutSeconds`, it will likewise try to connect to another hub cluster.
142 | -------------------------------------------------------------------------------- /enhancements/sig-architecture/17-cluster-features/README.md: -------------------------------------------------------------------------------- 1 | # Managed cluster feature labels 2 | 3 | ## Release Signoff Checklist 4 | 5 | - [ ] Enhancement is `implementable` 6 | - [ ] Design details are appropriately documented from clear requirements 7 | - [ ] Test plan is defined 8 | - [ ] Graduation criteria for dev preview, tech preview, GA 9 | - [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/) 10 | 11 | ## Summary 12 | 13 | There are requirements about selecting `ManagedClusters` based on certain features of the managed cluster via the `Placement` API. These requirements include: 14 | 15 | 1. Select a set of clusters in which a certain `ManagedClusterAddon` is enabled. For instance, if a user wants to install an application that needs multicluster service connectivity, the user will need to select a set of clusters in which the `Submariner` addon is enabled by using the `Placement` API. 16 | 2. Select a set of clusters with a certain capability. For instance, a user wants to deploy an AI workload to a set of clusters in which GPUs are installed. 17 | 18 | ## Motivation 19 | 20 | This proposal is to standardise the labels and `ClusterClaims` that are categorized as features of a `ManagedCluster`. The idea is inspired by [node features](https://kubernetes-sigs.github.io/node-feature-discovery/) in kubernetes. 21 | 22 | ### Goals 23 | 24 | - define the format of labels and `ClusterClaims` for `ManagedClusters` to represent a feature of a managed cluster 25 | - implement addons as feature labels in `registration`. 26 | 27 | ## Proposal 28 | 29 | ### Design Details 30 | 31 | Features can be classified into 3 categories: 32 | 33 | 1. Features enabled by the hub cluster: these features are enabled by the user on the hub cluster for a certain set of managed clusters. For example, when a user enables an addon on a managed cluster, the cluster will have a feature related to the addon. Each of these features can be represented as a label on the `ManagedCluster`. 34 | 35 | 2. Existing node features on the managed cluster: these are features discovered by the node feature discovery daemon in a kubernetes cluster that are already represented as labels on the nodes of the managed cluster. Most such features are related to physical devices. Each of these features should be represented as a `ClusterClaim`. 36 | 37 | 3. Existing cluster-wide features on the managed cluster: these features reside in the managed cluster but are not specific to a certain node. Examples include whether a global load balancer service is supported in the managed cluster or whether a certain storage can be used with a `PVC`. Each of these features should be represented as a `ClusterClaim`. 38 | 39 | 40 | ### Feature labels 41 | 42 | Feature labels should have the prefix of `feature.open-cluster-management.io`. Each addon will be represented as a feature label in the format of `feature.open-cluster-management.io/addon-<addon name>`. The values of the label are `available`, `unhealthy`, and `unreachable`.
For example, the `application-manager` addon will be represented as a label on the `ManagedCluster` as: 43 | 44 | ``` 45 | feature.open-cluster-management.io/addon-application-manager: available 46 | ``` 47 | 48 | The `registration` controller will update these labels based on the existence and status of each `ManagedClusterAddon` resource. 49 | 50 | ### Feature claims 51 | 52 | Features in the managed cluster should be represented as a `ClusterClaim` instead of labels. A node feature in a managed cluster should be represented as a `ClusterClaim` with the name of `<feature name>.node.feature.open-cluster-management.io`. For instance, if a node in the cluster has a feature of `feature.node.kubernetes.io/iommu-enabled: true`, a corresponding cluster claim should be created as 53 | 54 | ```yaml 55 | apiVersion: cluster.open-cluster-management.io/v1alpha1 56 | kind: ClusterClaim 57 | metadata: 58 | name: iommu-enabled.node.feature.open-cluster-management.io 59 | spec: 60 | value: "true" 61 | ``` 62 | 63 | A cluster feature should be represented as a `ClusterClaim` with the name of `<feature name>.cluster.feature.open-cluster-management.io`. For instance, another discovery agent can check the support of the load balancer service and create a claim as: 64 | 65 | ```yaml 66 | apiVersion: cluster.open-cluster-management.io/v1alpha1 67 | kind: ClusterClaim 68 | metadata: 69 | name: lb-enabled.cluster.feature.open-cluster-management.io 70 | spec: 71 | value: "true" 72 | ``` 73 | 74 | The management of these two types of `ClusterClaim` will require an additional discovery agent, so we will not implement them in the existing components. The details of the implementation are out of scope and should be discussed in an additional proposal. 75 | 76 | ### Test Plan 77 | 78 | Add integration/e2e tests to ensure that 79 | 80 | - when an addon is added, the label is set on the `ManagedCluster` 81 | - when an addon is deleted, the label is removed from the `ManagedCluster` 82 | - when the status of an addon is updated, the value of the corresponding label is updated. 83 | 84 | ### Graduation Criteria 85 | 86 | #### Alpha 87 | 1. The proposal is reviewed and accepted; 88 | 2. Implementation is completed to support the minimum functionalities; 89 | 3. Develop test cases to demonstrate that the above user stories work correctly; 90 | 4. The feature is under the `feature-discovery` feature gate and disabled by default 91 | 92 | #### Beta 93 | 1. At least one component uses this feature; 94 | 2. The feature is under the `feature-discovery` feature gate and enabled by default 95 | 3. At least one implementation of a node/cluster feature discovery agent. 96 | 97 | #### GA 98 | 1. Pass the performance/scalability testing; 99 | 100 | ### Upgrade Strategy 101 | 102 | These are new features, so there is no need to consider upgrades. 103 | 104 | ### Version Skew Strategy 105 | 106 | These are new features, so there is no need to consider version skew. 107 | 108 | ## Alternatives 109 | 110 | In the `Placement` API, we introduce a new field `addons` to indicate that the selected clusters should have those addons enabled (or disabled). This will require `Placement` controllers to watch all the `ManagedClusterAddon` resources.
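For contrast with this alternative: with the proposed feature labels, the existing label-based `Placement` predicates suffice and no new field or extra watches are needed. A sketch, assuming the standard `requiredClusterSelector` predicate of the Placement API:

```yaml
# Hypothetical Placement selecting clusters where the application-manager
# addon's feature label reports available.
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: clusters-with-app-manager
  namespace: default
spec:
  predicates:
    - requiredClusterSelector:
        labelSelector:
          matchLabels:
            feature.open-cluster-management.io/addon-application-manager: available
```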
-------------------------------------------------------------------------------- /enhancements/sig-policy/128-resource-selector/README.md: -------------------------------------------------------------------------------- 1 | # Resource selector for `ConfigurationPolicy` 2 | 3 | ## Release Signoff Checklist 4 | 5 | - [ ] Enhancement is `implementable` 6 | - [ ] Design details are appropriately documented from clear requirements 7 | - [ ] Test plan is defined 8 | - [ ] Graduation criteria for dev preview, tech preview, GA 9 | - [ ] User-facing documentation is created in 10 | [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/) 11 | 12 | ## Summary 13 | 14 | Add a resource selector to pair with the existing `namespaceSelector` in order to allow a 15 | `ConfigurationPolicy` to filter and select objects when an `objectDefinition` doesn't have a name 16 | provided. The resource selector would also enable the enforcement of `objectDefinitions` without a 17 | name. 18 | 19 | ## Motivation 20 | 21 | The `ConfigurationPolicy` currently has a `namespaceSelector` to configure the controller to filter 22 | and range over namespaces relevant to a particular namespaced `objectDefinition`. Furthermore, 23 | currently an `objectDefinition` without a name is feasible but is not enforceable. A 24 | `resourceSelector` would both pair well with the `namespaceSelector` for further object filtering as 25 | well as enable an enforceable unnamed `objectDefinition` by providing the controller with a list of 26 | names to handle. 27 | 28 | Adding a `resourceSelector` would lower the bar to entry for `ConfigurationPolicy`, adding an 29 | additional configuration available to users without requiring knowledge of Go templating as is 30 | required with the alternative, `object-templates-raw`. 31 | 32 | ### Goals 33 | 34 | - Add a `resourceSelector` to `ConfigurationPolicy` to allow the controller to handle objects 35 | without requiring `object-templates-raw` 36 | - Consolidate compliance messages to list out objects per namespace (one downside to the alternative 37 | `object-templates-raw` is there's a compliance message for each object) 38 | - Make unnamed `objectDefinition`s enforceable 39 | 40 | ### Non-Goals 41 | 42 | - Establishing complex logic in the `resourceSelector` that could not be handled readily by the 43 | Kubernetes API and, by extension, the `kubernetes-dependency-watches` library. 44 | 45 | ## Proposal 46 | 47 | ### User Stories 48 | 49 | #### Story 1 50 | 51 | As a policy user, I want to be able to create namespaces based on the `namespaceSelector`. 52 | 53 | #### Story 2 54 | 55 | As a policy user, I want to be able to select objects by label and have the controller iterate over 56 | them, both to `inform` and `enforce`. 57 | 58 | ### Implementation Details/Notes/Constraints 59 | 60 | Since the `resourceSelector` refers to a specific object, the `resourceSelector` would be embedded 61 | in each element of the `object-templates` array. The `resourceSelector` would ONLY be composed of a 62 | `LabelSelector` since that can be handled by the Kubernetes API. The `namespaceSelector` has 63 | `include` and `exclude` filters for filtering by name, but such a feature requires that it run in 64 | its own `Namespace` reconciler, which would be unwieldy with a `resourceSelector` where the `Kind` 65 | is arbitrary. 
66 | 67 | In a YAML manifest, this new field would appear as follows: 68 | 69 | ```yaml 70 | apiVersion: policy.open-cluster-management.io/v1 71 | kind: ConfigurationPolicy 72 | metadata: 73 | name: resource-selector-policy 74 | spec: 75 | remediationAction: inform 76 | namespaceSelector: # <---- Existing namespaceSelector 77 | exclude: [] # 78 | include: [] # 79 | matchLabels: {} # 80 | matchExpressions: [] # 81 | object-templates: 82 | - complianceType: musthave 83 | resourceSelector: # <---- New resourceSelector 84 | matchLabels: {} # 85 | matchExpressions: [] # 86 | objectDefinition: 87 | apiVersion: v1 88 | kind: Pod 89 | metadata: 90 | labels: 91 | selected: "true" 92 | - ... 93 | ``` 94 | 95 | The `config-policy-controller` would list the provided `Kind` across all namespaces using the 96 | provided `resourceSelector` and iterate over each matching object in each namespace to determine 97 | compliance. Namespaces would be determined from either the namespace defined in the 98 | `objectDefinition` or the `namespaceSelector` results. 99 | 100 | The label selector in `resourceSelector` will be a pointer so the controller can determine whether the user has 101 | provided it, distinguishing between fetching all objects and fetching no objects. 102 | 103 | ### Risks and Mitigation 104 | 105 | - Currently `mustnothave` doesn't consider fields when a name is provided, but _does_ consider 106 | fields when a name is not provided. 107 | - Caution should be taken (and tests written) to refactor `mustnothave` to consider fields as part 108 | of the `resourceSelector`. 109 | - The implementation could increase load on the Kubernetes API. 110 | - Depending on how it's implemented, I wouldn't think the load would be any greater than if 111 | `object-templates-raw` were in play. Ideally the initial `List` request would populate the cache 112 | and any subsequent `Get` (if any) would fetch from the cache. 113 | - Adding a `resourceSelector` could enable enforcement where enforcement wasn't previously 114 | available, which could have unintended consequences for users. 115 | - One possibility here would be to not enable enforcement, but I think there would be more demand 116 | for enforcement than not. And a change would need to be made to the policy in order to enable 117 | it, which hopefully users would test beforehand. 118 | 119 | ### Open Questions 120 | 121 | 1. Should `remediationAction: enforce` be allowed? 122 | - Some users may rely on the current behavior that an unnamed `objectDefinition` can only be 123 | `inform` and may cause unexpected results when a `resourceSelector` is newly applied to a 124 | policy they didn't realize was enforced. 125 | 2. Should the `resourceSelector` be able to determine the namespace if the namespace is not 126 | provided? 127 | - Currently if the namespace is not provided in the `objectDefinition` and the 128 | `namespaceSelector` returns no results, no objects are returned. I think this behavior should 129 | be preserved since it would follow the behaviors of tools like `kubectl`, but I could be 130 | convinced otherwise. 131 | 3. Should the `resourceSelector` have include/exclude name filtering? 132 | - This question was raised previously in standup. 133 | 134 | ### Test Plan 135 | 136 | All code will have tests meeting coverage targets. All code will have e2e tests. 137 | 138 | ### Graduation Criteria 139 | 140 | -- 141 | 142 | ## Implementation History 143 | 144 | -- 145 | 146 | ## Alternatives 147 | 148 | Users can accomplish this currently using `object-templates-raw`.
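For reference, a sketch of how the same label-based selection could be approximated today with `object-templates-raw`, assuming the template `lookup` function accepts an optional label-selector argument (an assumption; check the template documentation for the exact signature):

```yaml
object-templates-raw: |
  {{- /* hypothetical: the fifth lookup argument filters by label selector */}}
  {{- range (lookup "v1" "Pod" "" "" "selected=true").items }}
  - complianceType: musthave
    objectDefinition:
      apiVersion: v1
      kind: Pod
      metadata:
        name: {{ .metadata.name }}
        namespace: {{ .metadata.namespace }}
        labels:
          selected: "true"
  {{- end }}
```

Note that this produces a compliance message per object, which is one of the downsides the `resourceSelector` aims to consolidate.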
149 | -------------------------------------------------------------------------------- /enhancements/sig-architecture/19-projected-serviceaccount-token/README.md: -------------------------------------------------------------------------------- 1 | # Addon: Managed ServiceAccount 2 | 3 | ## Release Signoff Checklist 4 | 5 | - [ ] Enhancement is `implemented` 6 | - [ ] Design details are appropriately documented from clear requirements 7 | - [ ] Test plan is defined 8 | - [ ] Graduation criteria for dev preview, tech preview, GA 9 | - [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/) 10 | 11 | ## Summary 12 | 13 | This proposal adds a new OCM addon named `managed-serviceaccount` 14 | which helps administrators automatically distribute service-accounts 15 | to the managed clusters and manage the lifecycle of those service-accounts, 16 | including token rotation, ensuring token validity, destroying the token, etc. 17 | The token of the distributed service-account will also be periodically 18 | projected/reported to the hub cluster as a secret resource with the same 19 | name and the same namespace as the corresponding "ManagedServiceAccount" 20 | resource. A valid "ManagedServiceAccount" should be either in a "cluster 21 | namespace" or a "cluster-set namespace" which is bound to a valid 22 | "ManagedClusterSet" resource. 23 | 24 | ## Motivation 25 | 26 | ### Distribute and manage a multi-cluster service-account 27 | 28 | Creating such a "multi-cluster service-account" will help us easily distribute 29 | service-accounts to the managed clusters, and destroying the 30 | "multi-cluster service-account" will in turn handle the lifecycle of ensuring, 31 | revoking, and removing the service-accounts from each of the managed clusters. Not 32 | only will the addon protect these distributed service-accounts from unexpected 33 | modification and removal, it will also synchronize the lifecycle of 34 | the service-accounts according to their parent "multi-cluster 35 | service-account" resource. 36 | 37 | ### Token projection 38 | 39 | After ensuring the service-accounts' presence on each managed cluster, we will 40 | be able to dynamically ask the managed cluster to sign a token with the expected 41 | token audiences prescribed. The existing `authenticationv1.TokenRequest` API is 42 | designed for exactly such usage, so it is rather simple for the addon 43 | to request a new token from each managed cluster as long as the RBAC permission for 44 | token requests is granted. (A sketch of such a request follows the Non-Goals section below.) 45 | 46 | ### Authenticate hub->spoke outbound requests 47 | 48 | In some cases, when a component (such as an operator) deployed in the hub 49 | cluster needs to proactively invoke the kube-apiservers of the managed 50 | clusters, the outbound requests ought to be authenticated before being 51 | served by the kube-apiserver. Attaching the token to those outbound 52 | requests will then pass the managed cluster's authentication. 53 | 54 | 55 | ## Goals & Non-Goals 56 | 57 | ### Goals 58 | 59 | - Manage the lifecycle of service-accounts across multiple clusters. 60 | - Persist the token of the service-account back to the hub cluster. 61 | 62 | ### Non-Goals 63 | 64 | - The addon will not be an external token signer. 65 | - The addon will not manage RBAC authorization for the managed tokens 66 | (i.e. authorization).
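As referenced in the Token projection section above, here is a sketch of the kind of `TokenRequest` the addon-agent would issue against a managed service-account (the audiences mirror the `ManagedServiceAccount` sample in the Design section below; in practice the request is made via the serviceaccounts/token subresource):

```yaml
# Hypothetical TokenRequest issued by the addon-agent on the managed cluster,
# POSTed to the token subresource of the corresponding service-account.
apiVersion: authentication.k8s.io/v1
kind: TokenRequest
spec:
  audiences:
    - foo
    - bar
  expirationSeconds: 3600  # illustrative; the rotation policy governs the actual validity
```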
## Design

### Component & API

We propose to add two components in this design:

1. An addon manager named `managed-serviceaccount-addon-manager`, developed
   based on the addon-framework, in the hub cluster.
2. An addon agent named `managed-serviceaccount-addon-agent` in each of the
   managed clusters.

Additionally, a new custom resource named `ManagedServiceAccount` will be
introduced into OCM by this proposal.

A sample of the `ManagedServiceAccount` will be:

```yaml
# `ManagedServiceAccount` is namespace-scoped
apiVersion: proxy.open-cluster-management.io/v1alpha1
kind: ManagedServiceAccount
metadata:
  name: my-foo-operator-serviceaccount
  namespace: cluster-1
spec:
  audiences:
    - foo
    - bar
  rotation:
    enabled: true
    validity: 4320h0m0s
status:
  conditions:
    - type: AllServiceAccountPresent
      status: "True"
```


The `ManagedServiceAccount` resource is expected to be created under:

- "cluster namespace": a namespace with the same name as the managed cluster,
  which indicates that the `ManagedServiceAccount` will be bound to a service
  account with the same name as the `ManagedServiceAccount`, under the
  namespace where the "managed-serviceaccount-addon-agent" deploys.

- "cluster-set namespace": a namespace bound to a valid clusterset, which
  indicates that the `ManagedServiceAccount` should replicate the service accounts
  (with the same name as the `ManagedServiceAccount`, in the namespace where the
  agent lives) to all the clusters connected to that cluster set.

A sample of the reported/persisted service-account token will be:

(We persist the token as a secret resource here so that users
can mount it more easily.)

```yaml
apiVersion: v1
kind: Secret
metadata:
  ownerReferences:
    - apiVersion: proxy.open-cluster-management.io/v1alpha1
      kind: ManagedServiceAccount
      name: my-foo-operator-serviceaccount
  name: my-foo-operator-serviceaccount
  namespace: cluster-1
type: Opaque
data:
  ca.crt: ...
  token: ...
```

Based on the extensibility provided by the addon-framework, the addon manager
will do the following upon detecting the presence of the addon:

- Install the addon-agent to each managed cluster.
- Grant the addon-agent the permission to write secrets to its "cluster
  namespace" in the hub cluster.
- Grant the addon-agent the permission to list/watch `ManagedServiceAccount`
  in the hub cluster and to update the status subresource.
- Grant the addon-agent the permission to write service-accounts in the managed
  cluster.
- Grant the addon-agent the permission to issue `authenticationv1.TokenRequest`
  in the managed cluster.

After starting, the addon-agent will list/watch the `ManagedServiceAccount`
resources in its "cluster namespace", dynamically provision local service-accounts,
and report the tokens back to the hub cluster based on the rotation policy.

### Consuming the tokens

#### By directly list-watching the secrets

The component consuming the tokens is expected to list/watch the secrets
across the cluster namespaces.
Optionally, the project can provide a library for
discovering and reading tokens dynamically from the hub cluster.

#### By copying and mounting the secrets

Other consumers working in the hub cluster are expected to copy the token
secret themselves, e.g. in order to mount the token.

### Test Plan

- Unit tests
- Integration tests

--------------------------------------------------------------------------------
/enhancements/sig-architecture/76-addon-install-strategy/README.md:
--------------------------------------------------------------------------------

# Addon Install Strategy

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary

This proposal is to propose an enhancement of the `ClusterManagementAddon` API, so users can configure on which clusters the related
`ManagedClusterAddon` should be enabled.

## Motivation

We are seeing requirements where users would like an addon to be automatically installed when
a new cluster is registered. We provided an [installStrategy interface](https://github.com/open-cluster-management-io/addon-framework/blob/main/pkg/agent/inteface.go#L56)
in `addon-framework`, so the addon developer can choose to automatically enable the `ManagedClusterAddon` in all or selected clusters.
However, this mechanism is not user friendly:
- An addon user cannot easily change the install strategy. For example, the user may want to disable the addon in certain clusters by deleting the `ManagedClusterAddon`, but the addon-manager would recreate it.
- It also confuses addon developers, since we do not have an API standard to define the install strategy.

### Goals

- Enhance the `ClusterManagementAddon` API to allow users to configure the addon install strategy
- A centralized addon-install-controller on the hub being responsible for the creation/deletion of `ManagedClusterAddon`

### Non-Goals

N/A

## Proposal

### User Stories

#### Story 1

The admin can configure the `ClusterManagementAddon` API to automatically install a `ManagedClusterAddon` in all clusters
or selected clusters.

#### Story 2

The admin can choose NOT to install a `ManagedClusterAddon` in one or several clusters.

#### Story 3

The addon developer should not need to care about which clusters the `ManagedClusterAddon` is installed on.

## Design Details

### ClusterManagementAddon API change

```go
type InstallStrategy struct {
	// Type is the type of the install strategy, it can be:
	// - Manual: no automatic install
	// - Placements: install to clusters selected by placements.
	// +kubebuilder:validation:Enum=Manual;Placements
	// +kubebuilder:default:=Manual
	Type string `json:"type"`

	// Placements is a list of placement references honored when the install strategy type is
	// Placements. All clusters selected by these placements will install the addon.
	// +listType=map
	// +listMapKey=name
	// +listMapKey=namespace
	Placements []PlacementStrategy `json:"placements,omitempty"`
}

type PlacementStrategy struct {
	// Placement is the reference to a placement
	Placement PlacementRef `json:",inline"`

	// AddonAgent is the configuration of the managedClusterAddon during installation.
	// Users can override the configuration by updating the managedClusterAddon directly.
	AddonAgent AddonAgentConfig `json:"addonAgent,omitempty"`
}

type PlacementRef struct {
	// Name of the placement
	// +required
	Name string `json:"name"`

	// Namespace of the placement
	// +required
	Namespace string `json:"namespace"`
}

type AddonAgentConfig struct {
	// InstallNamespace is the namespace to install the agent on the spoke cluster.
	InstallNamespace string `json:"installNamespace"`
}
```

### Install addon according to a placement

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
spec:
  installStrategy:
    type: Placements
    placements:
      - name: aws-placement
        namespace: default
```

- when a cluster is added into the related `PlacementDecision`, the `ManagedClusterAddon` will be created in the cluster namespace.
- when a cluster is removed from the related `PlacementDecision`, the `ManagedClusterAddon` will be deleted from the cluster namespace.

### Install addons based on multiple placements

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
spec:
  installStrategy:
    type: Placements
    placements:
      - name: aws-placement
        namespace: default
        addonAgent:
          installNamespace: aws-addon-ns
      - name: gcp-placement
        namespace: default
        addonAgent:
          installNamespace: gcp-addon-ns
```

This ensures that addon agents installed on aws clusters are deployed in the aws-addon-ns namespace
while those installed on gcp clusters are deployed in gcp-addon-ns. If the user updates the `installNamespace` of a certain `ManagedClusterAddon`,
it will NOT be reverted based on the configs in `ClusterManagementAddon`.

### Disable addon in some clusters

First, change the install strategy to `Manual`:

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: helloworld
spec:
  installStrategy:
    type: Manual
```

Then the admin can delete the `ManagedClusterAddon` in certain cluster namespaces.

When the user changes the strategy type from `Placements` to `Manual`, all the created `ManagedClusterAddon` resources are kept; they
will no longer be auto-created or deleted.

When the user changes the strategy type from `Manual` to `Placements`, `ManagedClusterAddon` resources created manually by the user will NOT be
overridden by the configuration in the `ClusterManagementAddon`.

### Addon Install Controller

We need to build and run a centralized addon install controller on the hub cluster.
The addon install controller will create/delete `ManagedClusterAddon` according to the install strategy
in the related `ClusterManagementAddon`. The addon install controller will:
- create or delete, but not modify, a `ManagedClusterAddon` that already exists.
- not create a `ManagedClusterAddon` when the `ManagedCluster` is being deleted.
- delete all `ManagedClusterAddon` resources in the cluster namespace when the `ManagedCluster` is being deleted.

### Test Plan

- e2e test on using cluster label selector type
- e2e test on using placements type
- e2e test on using manual type

### Graduation Criteria

TBD

### Upgrade / Downgrade Strategy

The addon-framework needs to be upgraded for the addons. The original `InstallStrategy` interface in the addon-framework will be marked as deprecated.
If an old addon has not adopted the new API change and still uses the `InstallStrategy` interface in the addon-framework, the `installStrategy`
set in `ClusterManagementAddon` MUST be Manual.

If an addon upgrades to use the new addon-framework, the related `ClusterManagementAddon` should also be updated to use the appropriate
`installStrategy`.

## Alternatives

N/A

--------------------------------------------------------------------------------
/enhancements/sig-policy/72-content-to-ansilbe-job/README.md:
--------------------------------------------------------------------------------

# Providing Context to the Ansible Automation for a Policy Violation

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in
  [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary

The Open Cluster Management (OCM) Policy add-on Ansible integration should pass additional information
into an Ansible job when triggering its creation, so that the Ansible user can get more policy violation
information and use it in the Ansible playbook.

## Motivation

Policy Automation uses the `policyRef` field to match the target policy and monitor its non-compliance status.
A non-compliant policy can trigger its Policy Automation to create an Ansible job on Ansible Tower, which
launches a predefined Ansible playbook against an inventory of hosts. During this creation, we can pass
additional information to the Ansible job as `extra_vars`.

The current limitation is that we only pass the non-compliant cluster list as `extra_vars`, which isn't enough to
describe the policy violation status. The following fields could be added:
1. policy_name: the violated root policy name
2. namespace: the namespace of the root policy
3. hub_cluster: the hub cluster name
4. policy_set: the policy set name
5. policy_violation_context: the replicated policy status for each non-compliant cluster
   - compliant: the replicated policy compliance status
   - violationMessage: the latest replicated policy violation message
   - details: the replicated policy violation details for each template


### Goals

Add the name of the violated root policy, the root policy namespace, the hub cluster name, the policy set names,
and the replicated policy status for each non-compliant cluster to the Ansible job.

### Non-Goals

- None at this time

## Proposal
Policy automation runs on the hub cluster. It should not get the propagated policy status from the managed cluster,
which would require a Search or Observability dependency. The same design also applies to the scenario where the hub
is itself a managed cluster.

### New Content Structure

The current `extra_vars` has a `target_clusters` field that contains the non-compliant cluster name list:
```json
{
  "target_clusters": [
    "cluster1",
    "cluster2",
    "cluster3"
  ]
}
```

The new content structure will pass more information:
```json
{
  "target_clusters": [
    "cluster1",
    "cluster2",
    "cluster3"
  ],
  "policy_name": "policy1",
  "namespace": "policy-namespace",
  "hub_cluster": "policy-grc.dev08.red-chesterfield.com",
  "policy_set": [
    "policyset-1",
    "policyset-2"
  ],
  "policy_violation_context": {
    "cluster1": {
      "compliant": "NonCompliant",
      "violationMessage": "NonCompliant; violation - xxxxxx",
      "details": [
        {
          "compliant": "NonCompliant",
          "history": [
            {
              "lastTimestamp": "2022-09-08T16:03:23Z",
              "message": "NonCompliant; violation - pods not found: [testpod] in namespace default missing"
            }
          ]
        }
      ]
    },
    "cluster2": {
      "...": "..."
    }
  }
}
```

- `policy_name` is the name of the non-compliant policy that triggers the Ansible job on the hub cluster,
  which comes from `rootPolicy.GetName()`.

- `namespace` is the namespace of the root policy, which comes from `rootPolicy.GetNamespace()`.

- `hub_cluster` is the name of the hub cluster, which comes from the Kubernetes configuration Host value.

- `policy_set` contains all associated policy set names of the root policy, which come from `metadata.name`
  under `kind: PolicySet`. If the policy doesn't belong to any policy set, `policy_set` will be empty.

- `policy_violation_context` is a map. The keys are non-compliant cluster names, and the values are the raw replicated
  policy status for each non-compliant cluster.
  The policy automation looks at the `cluster namespaces` of the hub cluster and gets the replicated policies.
  Then it filters out the non-compliant replicated policies and puts their status into the `policy_violation_context` map.

If the policy automation runs on a hub cluster that is self-managed, `policy_violation_context`
would be a map that only contains `local-cluster`.

The field path of the raw status is `policy.policyStatus`. The unnecessary fields below in `policy.policyStatus` will be
skipped and not included in `policy_violation_context`:
1. `policy.policyStatus.details[n].templateMeta`
2. `policy.policyStatus.details[n].history[n].eventName`

### User Stories

#### Story 1

As an OCM Policy add-on user, I would like to get the additional status in an Ansible job's `extra_vars` field. For example,
regarding the CertificatePolicy policy, which checks for expiring certificates in secrets, I want to get the replicated policy
status for each non-compliant cluster so I can know which secret expires on a specific cluster in the Ansible job. Then, the
Ansible Automation Platform can create a notice ticket and provide the violation details.

### Implementation Details/Notes/Constraints [optional]

N/A

### Risks and Mitigation

Since `policy_violation_context` contains the replicated policy status for all non-compliant managed clusters, the length of
`policy_violation_context` may be oversized when the number of managed clusters is large.

To solve this, we will only pass the top `N` replicated policy statuses into `policy_violation_context`, where `N` is configurable
via an optional policy automation field called `policyViolationContextLimit`. If this field is not set, `N` will default to the
first 1000 managed clusters. When this field is set to 0, there is no limit and every replicated policy status will be passed.

## Design Details

See the [Proposal](#proposal) section.

### Open Questions [optional]

N/A

### Test Plan

- Unit tests can be implemented to check the created Ansible job. For each scenario, the `extra_vars` contain the designed data.

### Graduation Criteria

1. The proposal is reviewed and accepted
2. Implementation is completed to support the primary goals
3. Test cases developed to demonstrate that the user stories are fulfilled

### Upgrade / Downgrade Strategy

There are no concerns with upgrades since this is not a breaking change.

Downgrading would only pass `target_clusters` to the Ansible job, which is an acceptable outcome.

### Version Skew Strategy

N/A

## Implementation History

N/A

## Drawbacks

1. This design does not restructure `policy_violation_context` data for different policies.

## Alternatives

Add additional replicated-policy-status parse logic in [`governance-policy-propagator`](https://github.com/open-cluster-management-io/governance-policy-propagator)
for each kind of policy. Policies can have different `policy_violation_context` data structures and only display the needed information.

- Advantages - Reduces the `policy_violation_context` size. The data will be more user-friendly.
- Disadvantages - We need to add new parse code for each new kind of policy and update the existing parse code if new
  fields are wanted. If the policy kind isn't in the existing parse code, policy automation will not pass any replicated
  policy status, or will pass the raw replicated policy status.

## Infrastructure Needed [optional]

No new infrastructure is needed since all the necessary changes can take place in the existing
[`governance-policy-propagator`](https://github.com/open-cluster-management-io/governance-policy-propagator).
--------------------------------------------------------------------------------
/enhancements/sig-architecture/35-work-placement/work-placement.md:
--------------------------------------------------------------------------------

## Release Signoff Checklist

- [x] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary

There have been discussions and feedback about how an app developer or end user can leverage the ManifestWork and Placement APIs to deploy application workloads easily. ManifestWork and Placement are independent concepts; a higher-level controller is needed to link them together for multicluster application workload scheduling and deployment.

Some existing examples that use placement/manifestwork for multicluster workload orchestration include:
* Multicloud-operator-subscription
* KubeVela

But we may need a built-in API in OCM that can do workload orchestration. It differs from Multicloud-operator-subscription and KubeVela, since it is not intended to be a gitops tool integrating with various sources; it should concentrate on Kubernetes-native workloads (deployment/service etc).

In addition, we should also consider more advanced workload scheduling beyond the placement API. Some prior work includes Kim Min’s proposal in https://github.com/open-cluster-management-io/enhancements/pull/31 and the discussion in https://github.com/open-cluster-management-io/OCM/issues/27

## Motivation

ManifestWork and Placement are independent concepts. It makes sense to bring them together to achieve fleet-wide orchestration.

### Goals

* Ease of use:
  * A user should be able to use this API to easily deploy a multicluster application without understanding many concepts in OCM.
  * A user should have a CRD that is similar enough to leverage the Spec from their existing manifests

### Non-Goals

* Initially out of scope:
  * Replica distribution
    * A user can choose how to distribute a resource. For example, the user can define a deployment and declare to replicate the deployment to every cluster, or to distribute the replicas of the deployment across clusters.
    * A user can choose which cluster to deploy the workload to.

  * Fallback and reschedule
    * When a workload cannot fully run on a spoke (lack of resources, image pull backoff), it should be possible to reschedule it to another cluster.
    * When a cluster is added/removed from a PlacementDecision, it should be possible to reschedule, including upon changes to the Placement’s matching policy.

## Proposal

### User Stories

#### Story 1
A user wants to distribute work (Kubernetes resources) to a number of clusters in their fleet, filtering this group of clusters with a placement.

#### Story 2
When a user removes a distributed work (distributed to clusters in the placement decision), the resources should be cleaned up similarly to existing work behaviors.

#### Story 3
Work placement should contain enough status information to determine if the manifest(s) were successfully applied to the clusters in the placement decision.
### Implementation Details/Notes/Constraints

* The user needs to create the resource in the workspace namespace on the hub cluster.
* The controller will pick the clusters based on the placement decision.
* The controller will read the resources and generate a ManifestWork for each related cluster.
* `[NON-GOAL]` If a user specifies a placement ref, the controller will use this placement to find clusters. Otherwise the controller will create a new empty placement that selects all clusters scoped in this namespace. The user can mutate the placement spec afterwards for advanced cluster selection.
* `[NON-GOAL]` By default, the controller will replicate the manifest to each manifestwork. If the mode is split, the controller will split the replica field and generate manifests with different replicas for different clusters (similar to the splitter in kcp); see the `splitReplicas` sketch below.
* `[NON-GOAL]` The controller will check the status of the applied resources in the manifestwork; if a resource is degraded (e.g. a deployment's available replicas are not enough), the controller will exclude this cluster and try to select another one.

#### API
```yaml
apiVersion: work.open-cluster-management.io/v1alpha1
kind: PlacementManifestWork # ApplicationWork?
metadata:
  name: app
  namespace: default
spec:
  manifestWorkTemplate:
    workloads:
      manifest:
        - apiVersion: v1
          kind: Service
          metadata:
            name: test01
            namespace: default
          spec: …
        - apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: test02
            namespace: default
          spec: …
    ## The following is a spike ##
    distributionStrategy:
      - name: test01
        namespace: default
        type: Replicate # replicate means the resource is replicated to each cluster
      - name: test02
        namespace: default
        # splits spec.Replicas by default.
        # if spec.Replicas = 3, and it is to deploy to cluster1 and cluster2,
        # manifests on cluster1 will have spec.Replicas=2 and those on cluster2 will have spec.Replicas=1
        # the user should also be able to define another field to split.
        type: Split
        split:
          type: Balance # this is to reuse the idea in this proposal
          limitRange:
            min: 1
            max: 2

  distributionStrategy:
    type: Replicate # replicate means the resource is replicated to each cluster

  placementRef:
    # user can choose to specify an existing placement, or if the field is unspecified,
    # a new empty placement will be generated; the user can change the spec of the
    # generated placement. The controller will not revert users' changes on the placement.
    name: placement
status:
  conditions:
    - type: PlacementVerified
    - type: ManifestworkApplied
  placedManifestWorkSummary:
    # the number of clusters where the resources are applied
    applied: 4
    # the number of clusters where the resources are available
    available: 4
    # the TOTAL number of clusters that the resources are placed to
    total: 4
    # the number of clusters where the resources are progressing
    progressing: 4
    # the number of clusters where the resources are degraded (or the work is degraded)
    degraded: 4
```

### Risks and Mitigation

* Security is based on ManagedClusterSets and the bindings to the namespace where the work placement resource is created.
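To make the `Balance` split concrete, a minimal sketch of how replicas could be distributed
across the selected clusters; the function name and the clamping behavior are illustrative
assumptions, not a settled design:

```go
package main

import "fmt"

// splitReplicas distributes total replicas across n clusters as evenly as
// possible, clamping each cluster's share to [min, max]. Remainders are
// assigned one-by-one to the first clusters, so 3 replicas over 2 clusters
// yields [2, 1] as in the API comment above.
func splitReplicas(total, n, min, max int32) []int32 {
	shares := make([]int32, n)
	base, rem := total/n, total%n
	for i := range shares {
		s := base
		if int32(i) < rem {
			s++
		}
		if s < min {
			s = min
		}
		if s > max {
			s = max
		}
		shares[i] = s
	}
	return shares
}

func main() {
	fmt.Println(splitReplicas(3, 2, 1, 2)) // [2 1]
}
```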
## Design Details

### Open Questions

None at the moment.

### Test Plan

**Note:** *Section not required until targeted at a release.*

* Functional tests are provided

### Graduation Criteria

**Note:** *Section not required until targeted at a release.*

Define graduation milestones.

These may be defined in terms of API maturity, or as something else. The initial proposal
should keep this high-level with a focus on what signals will be looked at to
determine graduation.

Consider the following in developing the graduation criteria for this
enhancement:

- [Maturity levels][maturity-levels]
- [Deprecation policy][deprecation-policy]

Clearly define what graduation means by either linking to the [API doc definition](https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning),
or by redefining what graduation means.

In general, we try to use the same stages (alpha, beta, stable), regardless of how the functionality is accessed.

[maturity-levels]: https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions
[deprecation-policy]: https://kubernetes.io/docs/reference/using-api/deprecation-policy/

### Upgrade / Downgrade Strategy

N/A


## Implementation History

- [x] Enhancement submitted

## Drawbacks

* If the administrator is not aware that CRUD for this new resource is granted to a namespace that is bound to a ManagedClusterSet,
  users may distribute Kubernetes manifests to those available clusters.

## Alternatives

* Higher-level constructs can be leveraged (listed above):
  * Multicloud-operator-subscription
  * KubeVela

--------------------------------------------------------------------------------
/enhancements/sig-architecture/179-work-feedback-scrape-type/README.md:
--------------------------------------------------------------------------------

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary
Currently, the ManifestWork controller periodically polls (every 60s) resource
statuses in managed clusters to populate statusFeedback values. This enhancement
introduces a new field, `feedbackScrapeType`, to the ManifestWorkReplicaSet CRD, allowing
users to specify whether status feedback should be collected via polling or
watch-based mechanisms.

The MWRS controller should also be updated to explicitly watch and reconcile on changes
in both ManifestWork.status.conditions and ManifestWork.status.feedbacks when
`feedbackScrapeType` == `WATCH`, ensuring that feedback updates are properly aggregated
into the MWRS status in real time.

This feature enhances responsiveness and efficiency by enabling near-real-time status
feedback updates for selected resources while preserving backward compatibility and
operator control.
## Motivation

Today, the ManifestWork controller on the managed cluster periodically polls resources
every 60 seconds to collect the status feedback defined in manifestConfigs. While this
polling mechanism ensures eventual consistency, it introduces unnecessary latency for
resources that change frequently and adds overhead from repeated list-and-get operations,
even when no changes occur. This limits OCM’s ability to provide near-real-time
visibility of resource states and can delay downstream reconciliation actions or
status reporting in large environments.

By introducing a new field, feedbackScrapeType, OCM allows users and platform operators
to choose how feedback is collected: either through the existing periodic polling model
(POLL) or an event-driven watch model (WATCH). With WATCH, the managed cluster agent can
immediately react to changes in specified resources, updating the corresponding
ManifestWork and ensuring faster and more efficient feedback propagation to the hub.
This reduces the need for fixed polling cadences, minimizes API load, and improves
responsiveness for high-value workloads that require timely status updates.

For the ManifestWorkReplicaSet (MWRS) controller, this change ensures it accurately
reflects aggregated feedback in real time, responding to both condition and feedback
updates from its associated ManifestWork objects. This makes the OCM feedback loop
more dynamic, scalable, and adaptive to workload characteristics.

### Goals

* Provide configurable feedback collection modes (POLL or WATCH) per resource in ManifestWork.

* Ensure MWRS correctly aggregates and reflects feedback updates from both polling and watch-based sources.

* Maintain full backward compatibility for existing users and automation.

### Non-Goals

* Changing the core aggregation semantics in MWRS beyond supporting the new event-driven updates.

## Proposal

This is where we get down to the nitty gritty of what the proposal actually is.

### User Stories

Detail the things that people will be able to do if this is implemented.
Include as much detail as possible so that people can understand the "how" of
the system. The goal here is to make this feel real for users without getting
bogged down.

#### Story 1

User A uses MWRS to deploy a rollout to multiple clusters. The user wants
to track, at the MWRS level, the number of pod replicas that contain the change as it is rolled out to each cluster.
To do this, the user must get the replica count while the change is rolling out; relying on the existing polling mechanism, the intermediate counts will likely be missed.

For example, if the rollout progresses every 10 seconds, User A will not be able to track the per-replica count and will only get the count once the rollout is completed.

### Implementation Details/Notes/Constraints [optional]

What are the caveats to the implementation? What are some important details that
didn't come across above. Go in to as much detail as necessary here. This might
be a good place to talk about core concepts and how they relate.

### Risks and Mitigation

#### Risk #1: Too many watch events and update calls to the ManifestWork.
This will be mitigated by limiting when the ManifestWork controller sends updates to the ManifestWork and by stopping processing of WATCH events once the ManifestWork is "Ready".

The ManifestWork controller will parse the response from the informer watch and only send an update to the ManifestWork if there is an update on the feedbackRules.

The ManifestWork controller, which registers informers per resource, will unregister the informer once the ManifestWork resource is no longer present.

The controller will also cap the total number of concurrent watch requests: it keeps track of the number of informers created, and if a new work requests a watch once the limit is reached, it falls back to poll mode (see the registry sketch in the design details below).

## Design Details
### ManifestWorkReplicaSet CRD changes
```yaml
apiVersion: work.open-cluster-management.io/v1alpha1
kind: ManifestWorkReplicaSet
...
spec:
  workload: ...
  manifestConfigs:
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: hello
      feedbackScrapeType: WATCH # Options: WATCH | POLL; POLL is the default behavior
  ....
```

#### ManifestWork CRD changes
```yaml
apiVersion: work.open-cluster-management.io/v1alpha1
kind: ManifestWork
...
spec:
  manifestConfigs:
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: hello
      feedbackScrapeType: WATCH # Options: WATCH | POLL; POLL is the default behavior
  ....
```

This change will also be populated upwards to the ManifestWorkReplicaSet level of
`spec.manifestWorkTemplate.manifestConfigs`.

```mermaid
sequenceDiagram
    participant A as Resources managed by OCM
    participant B as ManifestWorks
    participant C as ManifestWorkReplicaSet

    B->>A: (POLL) every 60s
    A->>B: feedbackRules & conditionRule Update
    B->>C: Watch MW, Placement, observedGeneration updates
    C->>C: Update status.Conditions

    opt if feedbackScrapeType == WATCH
        A->>B: (WATCH) event for resource changes
        B->>B: feedbackRules & conditionRule Updates (if there is any update)
        B->>C: (WATCH) for feedbackRule updates only (if Progressing = True & Degraded = False|not set)
        C->>C: Update status.Conditions
    end
```

#### Spoke Level ManifestWork Controller Reconciler

By default, resources continue to be polled on the existing interval.
If a resource is marked WATCH, the controller registers it for watch-based updates:
status/feedback is recomputed immediately on events, and polling remains as a safety net.

Create a FeedbackScrape interface that polls or watches resources.

In the ManifestWork controller (pkg/work/spoke/controllers/statuscontroller/availablestatus_controller.go),
introduce a watch-based path alongside the existing poll loop.

When syncing the ManifestWork, register an informer for the resource if `feedbackScrapeType` is WATCH. If the `feedbackScrapeType` is no longer WATCH, unregister the informer. If the ManifestWork is no longer available for that resource, unregister the informer. This will prevent long-running WATCHes from triggering too many ManifestWork changes.
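A minimal sketch of such a per-resource informer registry with a cap and poll fallback; the
type and method names (and the idea of keying by a resource identifier string) are
illustrative assumptions, not the actual controller code:

```go
package watchreg

import (
	"context"
	"sync"
)

// watchRegistry tracks one informer per watched resource and caps the
// total number of concurrent watches, as described above.
type watchRegistry struct {
	mu       sync.Mutex
	cancels  map[string]context.CancelFunc // keyed by resource identifier
	maxWatch int
}

func newWatchRegistry(limit int) *watchRegistry {
	return &watchRegistry{cancels: map[string]context.CancelFunc{}, maxWatch: limit}
}

// register starts a watch for key and returns true, or returns false when
// the cap is reached so the caller can fall back to POLL mode.
func (r *watchRegistry) register(key string, start func(ctx context.Context)) bool {
	r.mu.Lock()
	defer r.mu.Unlock()
	if _, ok := r.cancels[key]; ok {
		return true // already watching
	}
	if len(r.cancels) >= r.maxWatch {
		return false // cap reached: caller uses the poll path instead
	}
	ctx, cancel := context.WithCancel(context.Background())
	r.cancels[key] = cancel
	go start(ctx)
	return true
}

// unregister stops the watch, e.g. when the ManifestWork is deleted or the
// scrape type switches back to POLL.
func (r *watchRegistry) unregister(key string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if cancel, ok := r.cancels[key]; ok {
		cancel()
		delete(r.cancels, key)
	}
}
```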
When the informer receives an event (create, update, delete), verify that there is a change on the fields defined in the `feedbackRules` and patch the status conditions for that resource. If there is no change on any of the `feedbackRules` fields, ignore the WATCH event.

#### Hub Level ManifestWorkReplicaSet Controller Reconciler

There should be no updates to the MWRS reconciler. The expected behavior is that when the ManifestWork has an update in the status feedback, a resync of the MWRS is triggered to get the status feedback details.

In the current model, the MWRS controller is reconciled if there are changes to the ManifestWorks (that are managed by the manifestworkreplicaset), PlacementDecisions, and Placements. Therefore any update to status.Feedback on the ManifestWork should trigger the reconcile. The standard factory is used [here](https://github.com/open-cluster-management-io/ocm/blob/main/pkg/work/hub/manager.go#L97-L98).


### Open Questions [optional]

TBD

### Upgrade / Downgrade Strategy

With the `POLL` flag or `feedbackScrapeType` unset, controllers will fall back to the previous behavior.

## Alternatives

TBD

## Infrastructure Needed [optional]

TBD

--------------------------------------------------------------------------------
/enhancements/sig-architecture/10-deletepropagationstrategy/README.md:
--------------------------------------------------------------------------------

# Introduce DeletePropagation in ManifestWork

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management/website/)

## Summary

Today, when a manifestwork is deleted or a manifest resource in the manifestwork is removed, the work agent
deletes the related resources on the managedcluster with the `ForegroundDeletion` strategy. This means the manifestwork
will be kept in deleting status until the related resources owned by the manifestwork are deleted on the managedcluster.
There are use cases that ask for other deletion strategies. This proposal introduces a property in manifestwork
with which the user can define orphan deletion propagation for resources in the manifestwork.

## Motivation

### Goals

- Define the DeletePropagationStrategy property in manifestwork
- Indicate the behavior of the work agent upon the existence of the DeletePropagationStrategy property

### Non-Goals

- Deletion strategies other than orphan.

## Proposal

We will introduce a `DeleteOption` property in the manifestwork API.

### User Stories

#### Story 1

As a manifestwork user, I create a manifestwork with a list of resources. When I delete the manifestwork on the hub,
I do not want the resources defined in the manifestwork to be garbage collected on the managedcluster.

#### Story 2

As a manifestwork user, I create a manifestwork with a list of resources. When I remove one or several resources from
the manifestwork, I do not want these resources to be garbage collected on the managedcluster.
## Design Details

### Manifestwork API update

We will introduce a `DeleteOption` property as follows:

```go
// ManifestWorkSpec represents a desired configuration of manifests to be deployed on the managed cluster.
type ManifestWorkSpec struct {
	// Workload represents the manifest workload to be deployed on a managed cluster.
	Workload ManifestsTemplate `json:"workload,omitempty"`

	// DeleteOption represents the deletion strategy when the manifestwork is deleted.
	// The Foreground deletion strategy is applied to all the resources in this manifestwork if it is not set.
	// +optional
	DeleteOption DeleteOption `json:"deleteOption,omitempty"`
}

type DeleteOption struct {
	// propagationPolicy can be Orphan or SelectivelyOrphan
	// SelectivelyOrphan should be rarely used. It is provided for cases where a particular resource is transferring
	// ownership from one ManifestWork to another or to another management unit.
	// Setting this value will allow a flow like:
	// 1. create manifestwork/2 to manage foo
	// 2. update manifestwork/1 to selectively orphan foo
	// 3. remove foo from manifestwork/1 without impacting continuity because manifestwork/2 adopts it.
	// +optional
	PropagationPolicy DeletePropagationPolicyType `json:"propagationPolicy,omitempty"`

	// selectivelyOrphan represents a list of resources following the orphan deletion strategy
	SelectivelyOrphan *SelectivelyOrphan `json:"selectivelyOrphan,omitempty"`
}

// SelectivelyOrphan represents a list of resources following the orphan deletion strategy
type SelectivelyOrphan struct {
	// orphaningRules defines a slice of orphaningrule.
	// Each orphaningrule identifies a single resource included in this manifestwork
	// +optional
	OrphaningRules []OrphaningRule `json:"orphaningRules,omitempty"`
}

// OrphaningRule identifies a single resource included in this manifestwork
type OrphaningRule struct {
	// Group is the api group of the resource in the workload that the strategy is applied to
	// +required
	Group string `json:"group"`
	// Resource is the resource in the workload that the strategy is applied to
	// +required
	Resource string `json:"resource"`
	// ResourceNamespace is the namespace of the resource in the workload that the strategy is applied to
	// +optional
	ResourceNamespace string `json:"resourceNamespace"`
	// ResourceName is the name of the resource in the workload that the strategy is applied to
	// +required
	ResourceName string `json:"resourceName"`
}

type DeletePropagationPolicyType string

const (
	// DeletePropagationPolicyTypeOrphan represents that all the resources in the manifestwork are orphaned
	// when the manifestwork is deleted.
	DeletePropagationPolicyTypeOrphan DeletePropagationPolicyType = "Orphan"
	// DeletePropagationPolicyTypeSelectivelyOrphan represents that only selected resources in the manifestwork
	// are orphaned when the manifestwork is deleted.
	DeletePropagationPolicyTypeSelectivelyOrphan DeletePropagationPolicyType = "SelectivelyOrphan"
)
```

### Examples

#### All the resources in a manifestwork are orphaned

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: work
  namespace: cluster1
spec:
  workload:
    manifests:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cm1
          namespace: default
  deleteOption:
    propagationPolicy: Orphan
```

This manifestwork definition indicates that all resources will be orphaned when the manifestwork is deleted.

#### Only certain resources are orphaned

The following example indicates that only the customresourcedefinition foo is orphaned. When the customresourcedefinition foo is
removed from the manifestwork, it is also orphaned.

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: work
  namespace: cluster1
spec:
  workload:
    manifests:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cm1
          namespace: default
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        name: foo
        ...
  deleteOption:
    propagationPolicy: SelectivelyOrphan
    selectivelyOrphan:
      orphaningRules:
        - group: apiextensions.k8s.io
          resource: customresourcedefinitions
          resourceName: foo
```

Another example, below, indicates that only the configmap named cm2 will be orphan-deleted. When cm2 is removed from
the manifestwork, cm2 on the managedcluster is orphaned, while when cm1 is removed from the manifestwork, cm1 will be deleted
on the managedcluster.

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: work
  namespace: cluster1
spec:
  workload:
    manifests:
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cm1
          namespace: default
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: cm2
          namespace: default
  deleteOption:
    propagationPolicy: SelectivelyOrphan
    selectivelyOrphan:
      orphaningRules:
        - group: ""
          resource: configmaps
          resourceName: cm2
          resourceNamespace: default
```

### Implementation Details
1. When the work agent applies a resource, it should evaluate the `DeleteOption` field. If the resource is included in the field,
   the ownerref of the appliedmanifestwork will not be added onto the resource.
2. When the resource is removed from the manifestwork, the work agent should check whether the resource has the ownerref of the related
   appliedmanifestwork, and skip the deletion if the ownerref does not exist (see the sketch after this list).
3. When the manifestwork is deleted, the work agent should check whether each resource has the ownerref of the related
   appliedmanifestwork, and skip the deletion if the ownerref does not exist.
4. If the manifestwork is already in deleting state, mutating the orphan settings in `DeleteOption` should be disallowed.
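A minimal sketch of the ownerref check in steps 2 and 3, assuming the AppliedManifestWork name
is known to the agent (the helper name is illustrative):

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownedByAppliedManifestWork reports whether the resource carries an
// ownerReference to the given AppliedManifestWork. If it does not (because
// the resource was applied under an orphaning rule), the agent skips deletion.
func ownedByAppliedManifestWork(refs []metav1.OwnerReference, amwName string) bool {
	for _, ref := range refs {
		if ref.Kind == "AppliedManifestWork" && ref.Name == amwName {
			return true
		}
	}
	return false
}

func main() {
	refs := []metav1.OwnerReference{} // e.g. obj.GetOwnerReferences()
	fmt.Println(ownedByAppliedManifestWork(refs, "example-appliedmanifestwork"))
}
```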
### Test Plan

e2e tests will be implemented to cover the following cases:
- orphan all resources in a manifestwork
- orphan certain resources in a manifestwork
- disable orphan deletion to ensure backward compatibility
- mutate the orphan configuration

### Graduation Criteria

### Upgrade / Downgrade Strategy

- The registration-operator should be upgraded to ensure that the manifestwork API is upgraded
- If the DeletePropagationStrategy property is not set in the manifestwork, we need to ensure that the manifestwork
  is treated following the foreground deletion process

### Version Skew Strategy

- The DeletePropagationStrategy is optional, and if it is not set, the manifestwork can be correctly handled by a work agent of an older version
- An older-version work agent will ignore the DeletePropagationStrategy field.

## Implementation History

## Drawbacks

## Alternatives

- We could allow orphan deletion only for the whole manifestwork. Specifying certain resources to be orphaned might be complicated and confusing.

--------------------------------------------------------------------------------
/enhancements/sig-architecture/132-work-update-only-when-spec-change/README.md:
--------------------------------------------------------------------------------

# New Update Strategy only when workload changes in ManifestWork

## Release Signoff Checklist

- [ ] Enhancement is `provisional`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary
This proposal introduces new options in the update strategy of manifestwork to resolve issues
with coordinating other actors and the work agent when they apply the same resource on the spoke
cluster.

## Motivation
We have previously introduced several update strategies in `ManifestWork` to resolve resource conflicts:
- `CreateOnly` lets the work-agent create the resource only and ignore any further changes to the
  resource
- `ServerSideApply` utilizes the server-side-apply mechanism in the kube-apiserver, so if the work-agent
  and another actor try to operate on the same field in a resource, a conflict can be identified.

However, some issues have been raised indicating the above strategies still cannot meet all the requirements:
- [issue 631](https://github.com/open-cluster-management-io/ocm/issues/631) reports a case where a user
  uses manifestwork to create a configmap, and then another actor on the spoke changes the configmap. The
  user does not want the configmap to be changed back by the work-agent. However, when the configmap resource
  in the manifestwork is updated, the user wants the configmap to be updated accordingly.
- [issue 670](https://github.com/open-cluster-management-io/ocm/issues/670) is a case where the user
  uses manifestwork to create argocd's `Application` resource. The `Application` resource has an `operation`
  field which is used to trigger the sync, and the field will be removed by argocd when the sync is done.
  The user sets the `operation` field in the manifestwork, and when argocd removes the field, the user does not want
  the field to be added back by the work-agent.
- [issue 690](https://github.com/open-cluster-management-io/ocm/issues/690) is a case where the user wants
  to create a deployment using manifestwork, but wants HPA on the spoke to control the replicas of the deployment.
  Hence the user wants the work-agent to ignore the replicas field of the deployment in the manifestwork if it is
  set.


## Proposal

We would like to introduce new options in the `ServerSideApply` update strategy to resolve the above issues.
The user can set an option in the manifestwork so that, when another actor on the spoke cluster updates the resource, the
work-agent will ignore the change and not try to change the resource back. On the other hand, when the resource spec
defined in the manifestwork is updated by the user, the work-agent will still update the resource accordingly. In
summary, the option is to ignore changes triggered from the spoke cluster.

We would also introduce a new `ignoreFields` option similar to what
is defined in argoCD (https://argo-cd.readthedocs.io/en/stable/user-guide/diffing/), such that the user can choose to not
let the work-agent patch certain resource fields when they are changed on the managed cluster or in the manifestwork.

### Design Details

#### API change

The change will be added to the `updateStrategy` field. For the `ServerSideApply` strategy, we will add:

```go
type ServerSideApplyConfig struct {
	...
	// IgnoreFields defines a list of json paths in the resource that will not be updated on the spoke.
	// +listType:=map
	// +listMapKey:=condition
	// +optional
	IgnoreFields []IgnoreField `json:"ignoreFields,omitempty"`
}

type IgnoreField struct {
	// Condition defines the condition under which the fields should be ignored when applying the resource. All the fields
	// will be ignored if the condition is met, otherwise all fields will be updated.
	// +kubebuilder:default=OnSpokePresent
	// +kubebuilder:validation:Enum=OnSpokePresent;OnSpokeChange
	// +kubebuilder:validation:Required
	// +required
	Condition string `json:"condition"`

	// JSONPaths defines the list of json paths in the resource to be ignored
	JSONPaths []string `json:"jsonPaths"`
}
```

The `IgnoreFields` setting means that, under certain conditions, the work-agent will not update the given fields on the
resource when they change in the manifestwork or on the spoke cluster (by another actor). It has two
types:
- `OnSpokeChange` means the agent only ignores a field update when the field is changed on the managed cluster; the field
  will still be updated when it is updated in the manifestwork.
- `OnSpokePresent` means the agent ignores the field update as long as the resource is present on the managed cluster, no
  matter whether the change is made on the managed cluster or in the manifestwork.

#### Agent implementation

To handle the `IgnoreFields` for `ServerSideApply`, we will remove the fields defined in
the `IgnoreFields` from the resource to be applied and then generate the apply patch. When applying the resource to the spoke cluster, the agent will add
an annotation with the key `open-cluster-management.io/object-hash`.
The value of the annotation is the computed hash
of the resource manifest in the `ManifestWork`, excluding the items in the `IgnoreFields` with the `OnSpokePresent`
condition. Later, when another actor updates the resource, the work-agent first checks whether the object-hash
annotation still matches the hash of the current resource spec in the `ManifestWork`. If it matches, the resource is not updated, so the
change from the spoke is ignored. When a resource field not covered by an `OnSpokePresent` rule is
updated in the manifestwork, the annotation will no longer match, which then triggers the work-agent to update the resource.

When the resource is deleted by another actor, the resource is deleted together with the annotation, so the work-agent
recreates the resource using the latest spec. The `IgnoreFields` setting applies only when
the resource is created or already present on the spoke cluster.

When the `ManifestWork` is deleted and the `DeleteOption` is `Orphan`, the resource is kept on the spoke cluster
together with the object-hash annotation. This means that if the user creates another `ManifestWork` with the same resource manifest and
sets `IgnoreFields` in the strategy, the resource might not be updated on the spoke cluster.

#### Examples

To resolve [issue 631](https://github.com/open-cluster-management-io/ocm/issues/631), the user can set the manifestwork
with `OnSpokeChange`:

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace:
  name: hello-work-demo
spec:
  workload: ...
  manifestConfigs:
    - resourceIdentifier:
        resource: configmaps
        namespace: default
        name: some-configmap
      updateStrategy:
        type: ServerSideApply
        serverSideApply:
          force: true
          ignoreFields:
            - condition: OnSpokeChange
              jsonPaths:
                - .data

```

To resolve [issue 670](https://github.com/open-cluster-management-io/ocm/issues/670), the user can do the same for the
argocd application:

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace:
  name: hello-work-demo
spec:
  workload: ...
  manifestConfigs:
    - resourceIdentifier:
        group: argoproj.io
        resource: applications
        namespace: default
        name: application1
      updateStrategy:
        type: ServerSideApply
        serverSideApply:
          ignoreFields:
            - condition: OnSpokeChange
              jsonPaths:
                - .operation
```

To resolve [issue 690](https://github.com/open-cluster-management-io/ocm/issues/690), the user can set:

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace:
  name: hello-work-demo
spec:
  workload: ...
  manifestConfigs:
    - resourceIdentifier:
        group: apps
        resource: deployments
        namespace: default
        name: deploy1
      updateStrategy:
        type: ServerSideApply
        serverSideApply:
          ignoreFields:
            - condition: OnSpokePresent
              jsonPaths:
                - .spec.replicas
```


### Test Plan

- test on `IgnoreFields` with the `OnSpokePresent` and `OnSpokeChange` options.
- test on `IgnoreFields` with a single field, a full structure, and an item in a list.
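Returning to the agent implementation above, a minimal sketch of computing the object-hash with
the ignored paths removed before hashing; `removeJSONPath`, the dotted-path handling, and the
exact hashing scheme are illustrative assumptions:

```go
package hashutil

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"strings"

	"k8s.io/apimachinery/pkg/runtime"
)

// computeObjectHash hashes the manifest with the ignored JSON paths removed,
// so spoke-side changes to those fields never invalidate the stored
// open-cluster-management.io/object-hash annotation.
func computeObjectHash(manifest map[string]interface{}, ignorePaths []string) (string, error) {
	pruned := runtime.DeepCopyJSON(manifest)
	for _, p := range ignorePaths {
		removeJSONPath(pruned, p)
	}
	raw, err := json.Marshal(pruned)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(raw)
	return hex.EncodeToString(sum[:]), nil
}

// removeJSONPath deletes a simple dotted path such as ".spec.replicas" from obj.
func removeJSONPath(obj map[string]interface{}, path string) {
	parts := strings.Split(strings.TrimPrefix(path, "."), ".")
	cur := obj
	for i, part := range parts {
		if i == len(parts)-1 {
			delete(cur, part)
			return
		}
		next, ok := cur[part].(map[string]interface{})
		if !ok {
			return // path not present
		}
		cur = next
	}
}
```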

### Graduation Criteria
N/A

### Upgrade Strategy
It will need an upgrade of the ManifestWork CRD on the hub cluster and an upgrade of the work agent on the managed cluster.

When a user needs to use this feature with an existing `ManifestWork`, the user needs to update the `ManifestWork` to
the `ServerSideApply` strategy and enable force apply if the `ManifestWork` previously used the `Update` strategy.

### Version Skew Strategy
- The field is optional, and if it is not set, the manifestwork will be updated as is by a work agent of an older version.
- An older version of the work agent will ignore the newly added field.

## Alternatives
--------------------------------------------------------------------------------
/enhancements/sig-policy/59-ansible-everyevent-mode/README.md:
--------------------------------------------------------------------------------
# Add an "everyEvent" Ansible Job Run Mode to the Policy addon

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in
  [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary

The Open Cluster Management (OCM) Policy addon Ansible integration should support a new run mode of `everyEvent` so that
an Ansible job can be run for every unique policy violation per managed cluster. This means that if you have two
noncompliant clusters that a policy applies to, the Ansible job would run only once for each cluster until the cluster
becomes compliant and then noncompliant again.

## Motivation

The Open Cluster Management (OCM) Policy addon supports running Ansible jobs on policy violations. The Policy addon
currently supports the run modes `once` and `disabled`. The `once` mode is limiting since after a policy violation
occurs, the Ansible job runs once and then it is disabled. A user must manually re-enable it for the Ansible job to run
again on the next policy violation.

An additional mode of `everyEvent` should be added so that an Ansible job can be run for every unique policy violation
per managed cluster. A simple example is that a user might use the OCM Policy addon Ansible integration to create
tickets in their ticketing system on every policy violation per cluster. The new `everyEvent` mode would enable this
use-case.

### Goals

1. The Policy addon is able to run an Ansible job on every unique policy violation per managed cluster.
1. Provide an option to avoid excessive Ansible job creation on an unstable policy (e.g. frequent compliant ->
   noncompliant -> compliant transitions).

### Non-Goals

1. Keeping track of every historical `PolicyAutomation` run.
1. Adding additional `eventHook` options.
1. Avoiding infinite creation of Ansible jobs on unstable policies (e.g. frequent compliant -> noncompliant ->
   compliant transitions).

## Proposal

### New Automation Mode

To configure Ansible automation when a policy violation occurs, a `PolicyAutomation` object is created which references
the triggering policy as shown below.

```yaml
apiVersion: policy.open-cluster-management.io/v1beta1
kind: PolicyAutomation
metadata:
  name: create-ticket
spec:
  mode: everyEvent
  policyRef: enable-etcd-encryption
  eventHook: noncompliant
  automationDef:
    name: Demo Job Template
    secret: ansible-automation-platform-secret-name
    extra_vars:
      sn_severity: 1
      sn_priority: 1
```

The proposal is to add a new mode of `everyEvent` that runs an Ansible job on every unique policy violation per managed
cluster. In order to achieve this, the status of the `PolicyAutomation` must be able to track which managed clusters the
Ansible job has already run for.

Take for example a scenario where a policy called `enable-etcd-encryption` applies to managed clusters `cluster1` and
`cluster2`. The user has also defined a `PolicyAutomation` object and set the `mode` to `everyEvent`. Then the
`enable-etcd-encryption` policy becomes noncompliant (i.e. violated) on `cluster1`. The Governance Policy Propagator
controller would create an Ansible job on the Hub with the `extra_var` of `target_clusters: ["cluster1"]` like when
using the `once` mode. If there were more noncompliant clusters at the time, then `target_clusters` would contain
those as well. The controller would then set the following status on the `PolicyAutomation` object. The
`automationStartTime` represents the time the Ansible job was created and the `eventTime` represents the
last time the cluster became noncompliant. Note that there is currently no status that ever gets set on
`PolicyAutomation` objects.

```yaml
status:
  clustersWithEvent:
    cluster1:
      automationStartTime: "2022-06-22T23:59:30Z"
      eventTime: "2022-06-22T23:59:29Z"
```

This status update would tell the Governance Policy Propagator controller to not run the Ansible job again on `cluster1`
the next time it notices that the `enable-etcd-encryption` policy is noncompliant on `cluster1`, such as during periodic
reconciling or status updates. This isn't meant to store a complete history of which clusters had an Ansible job run due
to a policy violation. It is meant to track the clusters **currently** in the state indicated by the `eventHook` value in
the `PolicyAutomation` object for which the appropriate automation was run. Such a history can be found in
the Ansible Tower/Ansible Automation Platform job history.

Several minutes later, the `enable-etcd-encryption` policy becomes noncompliant on `cluster2`. The Governance Policy
Propagator controller would run the Ansible job from the Hub while passing the `extra_var` of
`target_clusters: ["cluster2"]`. It would then set the following status on the `PolicyAutomation` object.

```yaml
status:
  clustersWithEvent:
    cluster1:
      automationStartTime: "2022-06-22T23:58:30Z"
      eventTime: "2022-06-22T23:58:29Z"
    cluster2:
      automationStartTime: "2022-06-22T23:59:30Z"
      eventTime: "2022-06-22T23:59:29Z"
```

Several minutes later, the `enable-etcd-encryption` policy becomes compliant on `cluster1`. The Governance Policy
Propagator controller would not run an Ansible job and would set the following status on the `PolicyAutomation` object.

```yaml
status:
  clustersWithEvent:
    cluster2:
      automationStartTime: "2022-06-22T23:59:30Z"
      eventTime: "2022-06-22T23:59:29Z"
```

Now consider the case where the user made a change to their Ansible job and wants the Governance Policy Propagator
controller to rerun the Ansible jobs for all currently noncompliant managed clusters. The user would set the existing
annotation of `policy.open-cluster-management.io/rerun` to `true` on the `PolicyAutomation` object and thus signal the
Governance Policy Propagator controller to ignore `status.clustersWithEvent` and run an Ansible job for every
noncompliant managed cluster.

### Limiting the Number of Ansible Jobs

If a user has a policy that they are concerned will often go between compliant and noncompliant, they may not want
to run an Ansible job every time the cluster becomes noncompliant, in order to not overwhelm Ansible Tower/AAP. In this
case, a new optional spec field of `delayAfterRunSeconds` (which follows the
[API naming conventions](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#naming-conventions))
would be added so the user could specify the minimum number of seconds before an automation can be retriggered on the
same cluster. This would default to 0 seconds and only be applicable for the `mode` of `everyEvent`.

If a user sets `delayAfterRunSeconds` to `600`, for example, and the cluster becomes noncompliant a second time and stays
noncompliant past the 600 seconds, the Ansible job would be run again for that cluster a maximum of one time. This is
detected by looking at the `eventTime` in the `status.clustersWithEvent` entry to see if the timestamp is newer than the
`automationStartTime` value. When the `delayAfterRunSeconds` value is set, the `PolicyAutomation` controller would not
remove the `status.clustersWithEvent` entry for the cluster until it has become compliant and the number of seconds set
in `delayAfterRunSeconds` has elapsed since the `automationStartTime` value. From an implementation point of view, this
would mean requeuing the reconcile request to that point in time.
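
As a rough sketch of that decision (the types and names below are illustrative, not the controller's actual code):

```go
package policyautomation

import "time"

// clusterEvent mirrors one entry of status.clustersWithEvent.
type clusterEvent struct {
	AutomationStartTime time.Time
	EventTime           time.Time
}

// shouldRetrigger reports whether the automation may run again for a cluster
// that became noncompliant again, honoring delayAfterRunSeconds.
func shouldRetrigger(entry clusterEvent, delayAfterRunSeconds int, now time.Time) bool {
	delay := time.Duration(delayAfterRunSeconds) * time.Second
	// The event must be newer than the last automation run ...
	if !entry.EventTime.After(entry.AutomationStartTime) {
		return false
	}
	// ... and the minimum delay since the last run must have elapsed.
	return now.Sub(entry.AutomationStartTime) >= delay
}
```
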
### User Stories

#### Story 1

As an OCM Policy addon user, I would like to run an Ansible job for every unique policy violation per managed cluster so
that I can guarantee that my Ansible automation is run for every managed cluster.

#### Story 2

As a Kubernetes administrator, I would like to use Ansible to create a ServiceNow ticket for each managed cluster that
my OCM policy is noncompliant on.

### Implementation Details/Notes/Constraints [optional]

N/A

### Risks and Mitigation

N/A

## Design Details

See the [Proposal](#proposal) section.

### Open Questions [optional]

N/A

### Test Plan

**Note:** _Section not required until targeted at a release._

Consider the following in developing a test plan for this enhancement:

- Will there be e2e and integration tests, in addition to unit tests?
- How will it be tested in isolation vs with other components?

No need to outline all of the test cases, just the general strategy.
Anything that would count as tricky in the
implementation and anything particularly challenging to test should be called out.

All code is expected to have adequate tests (eventually with coverage expectations).

All code is expected to have sufficient e2e tests.

### Graduation Criteria

The `PolicyAutomation` kind is still `v1beta1` and it would remain so after this enhancement.

### Upgrade / Downgrade Strategy

There are no concerns with upgrades since this is not a breaking change.

Downgrading would cause the `PolicyAutomation` object to have no effect on policy violations because the `mode` of
`everyEvent` would be considered invalid. This is an acceptable outcome.

### Version Skew Strategy

This is not applicable because the change is only on the Hub side.

## Implementation History

N/A

## Drawbacks

1. This does not track the total number of Ansible jobs that were created.

## Alternatives

N/A

## Infrastructure Needed [optional]

N/A
--------------------------------------------------------------------------------
/enhancements/sig-architecture/123-addon-multiple-configs/README.md:
--------------------------------------------------------------------------------
# Addon Multiple Configs with Same GVK

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary

This proposal proposes an enhancement to the addon configurations, enabling users to configure multiple configs of the same GVK.

## Motivation

Users have three locations to set [addon configurations](https://open-cluster-management.io/concepts/addon/#add-on-configurations) today:
1. In `ClusterManagementAddOn` `spec.supportedConfigs`, the user can set the `defaultConfig` to declare a default configuration for all the addons.
2. In `ClusterManagementAddOn` `spec.installStrategy.placements[].configs`, the user can declare a configuration for addons on a group of clusters selected by placement.
3. In `ManagedClusterAddOn` `spec.configs`, the user can declare the configuration for the addon on a specific cluster.

Currently, only a single config per GVK is supported. A config with the same GVK in place `3` will override the config in place `2`,
which will override the config in place `1`. The `mca.status` lists the final effective addon configuration. Using the `AddonDeploymentConfig` resource as an example,
a user can declare, in place `1`, a default `AddonDeploymentConfig`. Additionally, the user can declare, in place `2`, an `AddonDeploymentConfig` for a group of clusters, which will
override the default config. Finally, the user can declare, in place `3`, another `AddonDeploymentConfig` for a specific cluster, which will override the ones in `1` and `2`.

Now we are seeing the requirement to set multiple configs for the same GVK.
For example, for `OpenTelemetryCollector`, the user may want to define two `OpenTelemetryCollector` configs
in place `2`: one CR configured to collect traces and another to collect logs on the same cluster. This proposal aims to satisfy that requirement.


### Goals

- Enhance the `ClusterManagementAddOn` and `ManagedClusterAddOn` APIs to allow users to configure multiple addon configs with the same GVK.

### Non-Goals

- `ClusterManagementAddOn` `spec.supportedConfigs` does not allow the user to configure multiple addon configs with the same GVK in the `defaultConfig`.

## Proposal

### User Stories

1. The admin can configure the `ClusterManagementAddOn` API to set multiple configs with the same GVK per install strategy.
2. The admin can configure the `ManagedClusterAddOn` API to set multiple configs with the same GVK for a specific cluster.

## Design Details

### API changes

No API changes are needed; `cma.spec.installStrategy.placements[].configs` and `mca.spec.configs` already support multiple configs.

```go
// Configs is the configuration of managedClusterAddon during installation.
// User can override the configuration by updating the managedClusterAddon directly.
// +optional
Configs []AddOnConfig `json:"configs,omitempty"`
```

### Effective addon configs

As the motivation mentioned, today there are 3 places to set addon configurations.

Currently, since each GVK supports only one config, the GVK is used as the identifier for configs. When configs with the same GVK are configured in
multiple places, the override sequence is followed and only one config is listed as the effective config in the `mca.status`.

With the support for multiple configs with the same GVK, `GVK + namespace + name` becomes the identifier.
We need to reconsider the logic for setting `mca.status` when configuring multiple configs in multiple places.

In the initial discussion we listed three options and finally chose the "Override by GVK" option below, as it best satisfies a real use case.
The other two are listed in the **Alternatives** section for future needs.

The below example omits the group and namespace for simplicity.

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ClusterManagementAddOn
metadata:
  name: example
spec:
  supportedConfigs:
  - resource: r1
    defaultConfig:
      name: n1
  - resource: r2
  installStrategy:
    type: Placements
    placements:
    - name: example
      namespace: example
      configs:
      - resource: r2
        name: n1
      - resource: r2
        name: n2


apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: example
  namespace: cluster1
spec:
  installNamespace: open-cluster-management-agent-addon
  configs:
  - resource: r2
    name: n3
```

#### Override by GVK

"Override by GVK" tries to stay consistent with the current behavior: GVK is used as the identifier for a group of configs.

In the above example, config `r2` `n3` in `ManagedClusterAddOn` (place `3`) will override `r2` `n1` and `n2` in `installStrategy` (place `2`). The default config `r1` `n1` will remain, since no other config for resource `r1` is defined in either place `2` or `3`.
The `mca.status` will look like the following:

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: example
  namespace: cluster1
status:
  ...
  configReferences:
  - desiredConfig:
      name: n3
      specHash:
    name: n3
    resource: r2
  - desiredConfig:
      name: n1
      specHash:
    name: n1
    resource: r1
```

The limitation is that if the `r2n1` and `r2n2` resources are global configs, the user may not want them to be overridden by `r2n3`. In this case, the user needs to define
`r2n1`, `r2n2`, and `r2n3` all in the `ManagedClusterAddOn`.
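
A minimal sketch of this override computation (the type and function names are illustrative only):

```go
package addonconfig

// ConfigRef identifies one addon config; GVK + namespace + name is the full identifier.
type ConfigRef struct {
	Group, Resource, Namespace, Name string
}

// gvkKey returns the override key: configs sharing this key override as a group.
func gvkKey(c ConfigRef) string { return c.Group + "/" + c.Resource }

// effectiveConfigs implements "Override by GVK": configs from a later place
// replace ALL configs with the same GVK from earlier places.
func effectiveConfigs(places ...[]ConfigRef) []ConfigRef {
	byGVK := map[string][]ConfigRef{}
	var order []string // deterministic ordering by first appearance of a GVK
	for _, place := range places {
		overridden := map[string]bool{}
		for _, c := range place {
			key := gvkKey(c)
			if !overridden[key] { // first config of this GVK in this place resets the group
				if _, known := byGVK[key]; !known {
					order = append(order, key)
				}
				byGVK[key] = nil
				overridden[key] = true
			}
			byGVK[key] = append(byGVK[key], c)
		}
	}
	var result []ConfigRef
	for _, key := range order {
		result = append(result, byGVK[key]...)
	}
	return result
}
```

Called as `effectiveConfigs(defaultConfigs, placementConfigs, mcaConfigs)`, a later place replaces the whole GVK group from earlier places, matching the `r2` `n3` example above.
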
### Effective config values

`WithGetValuesFuncs()` depends on how certain functions handle multiple configs. For example,
[GetAddOnDeploymentConfigValues()](https://github.com/open-cluster-management-io/addon-framework/blob/main/pkg/addonfactory/addondeploymentconfig.go#L144) will only read the last object listed in `mca.status`.

```go
// GetAddOnDeploymentConfigValues uses AddOnDeploymentConfigGetter to get the AddOnDeploymentConfig object, then
// uses AddOnDeploymentConfigToValuesFunc to transform the AddOnDeploymentConfig object to a Values object.
// If there are multiple AddOnDeploymentConfig objects in the AddOn ConfigReferences, the object with the bigger
// index will override the one with the smaller index.
func GetAddOnDeploymentConfigValues(
	getter utils.AddOnDeploymentConfigGetter, toValuesFuncs ...AddOnDeploymentConfigToValuesFunc) GetValuesFunc {
	...
}
```

Other addons could write their own `WithGetValuesFuncs()` to determine how to use the configs listed in `mca.status`.

### How to trigger rollout

Below is the current rollout logic; step 2 needs to be enhanced since one GVK can now have multiple configs.

1. The [buildConfigurationGraph](https://github.com/open-cluster-management-io/ocm/blob/release-0.13/pkg/addon/controllers/addonconfiguration/controller.go#L166)
   will build and list the desired configs for each mca.
2. The [setRolloutStatus](https://github.com/open-cluster-management-io/ocm/blob/release-0.13/pkg/addon/controllers/addonconfiguration/graph.go#L54)
   will compare the actual configs in `mca.Status.ConfigReferences` with the desired configs: using GVK as the key, it compares the actual and desired name+namespace+hash and, if they do not match, sets the status to ToApply.
3. The [generateRolloutResult](https://github.com/open-cluster-management-io/ocm/blob/release-0.13/pkg/addon/controllers/addonconfiguration/controller.go#L132)
   will get the clusters that need to roll out (those with ToApply status) based on the rollout strategy.

### Test Plan

- e2e test on setting configs in `ClusterManagementAddOn` `spec.supportedConfigs`.
- e2e test on setting multiple configs in `ClusterManagementAddOn` `spec.installStrategy.placements[].configs`.
- e2e test on setting multiple configs in `ManagedClusterAddOn` `spec.configs`.

### Graduation Criteria

TBD

### Upgrade / Downgrade Strategy

TBD

## Alternatives

### Effective addon configs
#### Merge

With `GVK + namespace + name` as the identifier for each config, there is actually no overriding concept: all the configured configs will be listed in the `mca.status`.

Configs will be ordered, with the last configured on top. For example, configs in the mca will be on top of cma configs, and mca configs with higher indices
will be on top of those with lower indices.

The addon's own controller will determine how to use the configs (e.g., choose one, read all, etc.).

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: example
  namespace: cluster1
status:
  ...
  configReferences:
  - desiredConfig:
      name: n3
      specHash:
    name: n3
    resource: r2
  - desiredConfig:
      name: n2
      specHash:
    name: n2
    resource: r2
  - desiredConfig:
      name: n1
      specHash:
    name: n1
    resource: r2
  - desiredConfig:
      name: n1
      specHash:
    name: n1
    resource: r1
```

#### Override

"Override" is straightforward: any configs in the mca override the configs in the cma install strategy, which in turn override the configs in the cma default configs.

```yaml
apiVersion: addon.open-cluster-management.io/v1alpha1
kind: ManagedClusterAddOn
metadata:
  name: example
  namespace: cluster1
status:
  ...
  configReferences:
  - desiredConfig:
      name: n3
      specHash:
    name: n3
    resource: r2
```
--------------------------------------------------------------------------------
/enhancements/sig-architecture/98-cluster-permission/README.md:
--------------------------------------------------------------------------------
# New API: ClusterPermission

## Release Signoff Checklist

- [ ] Enhancement is `implemented`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary

This proposal adds a new OCM API named `ClusterPermission`,
which helps administrators automatically distribute RBAC resources
to the managed clusters and manage the lifecycle of those RBAC resources,
including Role, ClusterRole, RoleBinding, and ClusterRoleBinding.
A valid `ClusterPermission` resource should be in a "cluster namespace", and
the associated RBAC resources will be delivered to the managed cluster
associated with that "cluster namespace".

## Users of this API

- The hub cluster admin is allowed to CRUD `ClusterPermission` in the managed cluster namespace.
- By default, users and controllers do not have the privilege to CRUD `ClusterPermission`
  in the managed cluster namespace unless their permission is elevated through a Role or ClusterRole.

## Motivation

### Distribute and manage multi-cluster RBAC resources

The `ClusterPermission` will help us easily distribute
RBAC resources to the managed clusters. The `ClusterPermission` controller, using the
`ManifestWork` API, will ensure, update, and remove the RBAC resources
on each of the managed clusters.
The `ClusterPermission` API will protect these distributed RBAC resources from unexpected
modification and removal.

### Enhance the usability of the existing ManagedServiceAccount API

After ensuring the service accounts' presence on each managed cluster using the existing
[ManagedServiceAccount](https://github.com/open-cluster-management-io/managed-serviceaccount)
API, the `ClusterPermission` can reference the `ManagedServiceAccount` and use it
as the Subject for the RBAC resources. This will help regulate the access level of
the `ManagedServiceAccount`.

### New API instead of just using ManifestWork API

There are several reasons why we should create a new API instead of using `ManifestWork`:
- The `ManifestWork` API is meant to be a multicluster workload delivery primitive type.
- The `ClusterPermission` API is a higher-level concept whose multicluster
  RBAC resource scope users can understand easily.
- The `ManifestWork` API payload does not have RBAC-specific field definitions.
- `ClusterPermission` can determine the `ManagedServiceAccount` subject and create a `ManifestWork`
  with the correct RoleBinding/ClusterRoleBinding subject.

## Goals & Non-Goals

### Goals

- Manage the lifecycle of RBAC resources across multiple clusters.
- Allow the typical Subject kinds Group, ServiceAccount, and User
  for RoleBinding and ClusterRoleBinding.
- Allow `ManagedServiceAccount` as a Subject kind
  for RoleBinding and ClusterRoleBinding.

### Use cases

The primary use case of the `ClusterPermission` is to allow a controller on the hub cluster to
gain access to the managed cluster API server with authorization regulations.
The access to the managed cluster API server is done via the ServiceAccount token
managed by the `ManagedServiceAccount`, and the access level of the ServiceAccount
is handled by the `ClusterPermission`.

### Future goals

- It is currently assumed that the user of `ClusterPermission` is either a cluster admin or
  a user who can create `ClusterPermission` in the hub cluster's managed "cluster namespace".
  It would be useful to have an approval process through which a cluster admin can approve
  other users to create `ClusterPermission` resources.
- Support creating a RoleBinding/ClusterRoleBinding without defining any Role/ClusterRole
  rules.

## Design

### Component & API

We propose adding a new custom resource named
`ClusterPermission` introduced into OCM by this proposal.

A sample of the `ClusterPermission`:

```yaml
apiVersion: rbac.open-cluster-management.io/v1alpha1
kind: ClusterPermission
metadata:
  name: my-managed-rbac
  namespace: cluster1
spec:
  clusterRole:
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["update"]
  clusterRoleBinding:
    subject:
      apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts:qa
  roles:
  - namespace: default
    rules:
    - apiGroups: ["apps"]
      resources: ["deployments"]
      verbs: ["update"]
  - namespaceSelector:
      matchExpressions:
      - key: foo.com/managed-state
        operator: In
        values:
        - managed
    rules:
    - apiGroups: [""]
      resources: ["configmaps"]
      verbs: ["update"]
  roleBindings:
  - namespace: kube-system
    roleRef:
      kind: ClusterRole
    subject:
      apiGroup: authentication.open-cluster-management.io
      kind: ManagedServiceAccount
      name: managed-sa-sample
  - namespace: default
    roleRef:
      kind: Role
    subject:
      apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:serviceaccounts
  - namespaceSelector:
      matchExpressions:
      - key: foo.com/managed-state
        operator: In
        values:
        - managed
    roleRef:
      kind: Role
    subject:
      apiGroup: rbac.authorization.k8s.io
      kind: User
      name: "alice@example.com"
status:
  conditions:
  - type: RBACManifestWorkCreated
    status: "True"
  - type: SubjectPresent
    status: "True"
```

The `ClusterPermission` resource is expected to be created in the
"cluster namespace", which is a namespace with the same name as
the managed cluster. The Role/RoleBinding/ClusterRole/ClusterRoleBinding
objects delivered to the managed cluster will have the same name as the
`ClusterPermission` resource.

We propose adding a new hub cluster controller that watches
and reconciles `ClusterPermission`. When a `ClusterPermission` is reconciled, the controller creates/updates
a `ManifestWork` with a payload of Roles, ClusterRoles,
RoleBindings, and ClusterRoleBindings. All the RBAC resources will have
the same name as the `ClusterPermission`. The `ManifestWork` owner is set to the
`ClusterPermission` so that when the `ClusterPermission` is deleted, the `ManifestWork`
will be garbage collected, which will trigger the removal of all the RBAC-related
resources in the managed cluster. It's expected that the executor
of the `ManifestWork` is the work agent. Therefore, a Role with the following
content will be needed for the controller:

```
rules:
- apiGroups:
  - work.open-cluster-management.io
  resources:
  - manifestworks
  verbs:
  - execute-as
  resourceNames:
  - system:serviceaccount::klusterlet-work-sa
```

It is expected that the work agent has the "escalate" and "bind" role verbs,
which allow it to manage the RBAC resources inside the `ManifestWork` payload.
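
For illustration only, the `ManifestWork` generated for the sample above might look roughly like the following (the exact payload shape is an implementation detail of the controller; the owner reference and the namespaced Role portions are omitted here):

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: my-managed-rbac
  namespace: cluster1
spec:
  workload:
    manifests:
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: my-managed-rbac
      rules:
      - apiGroups: ["apps"]
        resources: ["deployments"]
        verbs: ["update"]
    - apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: my-managed-rbac
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: my-managed-rbac
      subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: Group
        name: system:serviceaccounts:qa
```
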
*Note:* If the feature gate `NilExecutorValidating` is enabled on the ClusterManager,
it is expected that the `ManifestWork` validator will populate that field so errors
won't occur.

If the subject of a binding is of
kind `ManagedServiceAccount`, then the binding will be updated to use
kind `ServiceAccount`, and the namespace will be the value of the
`ManagedServiceAccount`'s managed cluster service account namespace.
The `ManagedServiceAccount` will be watched by the `ClusterPermission` controller.
If a previously existing `ManagedServiceAccount` is deleted, then the
`ClusterPermission` condition of type `SubjectPresent` will be set to false.

A sample of the `ClusterPermission` using Role/ClusterRole references:

```yaml
apiVersion: rbac.open-cluster-management.io/v1alpha1
kind: ClusterPermission
metadata:
  name: my-managed-rbac-with-refs
  namespace: cluster1
spec:
  roleRefs:
  - kind: Role
    name: pod-reader
    namespace: default
    apiGroup: rbac.authorization.k8s.io
  - kind: ClusterRole
    name: secret-reader
    apiGroup: rbac.authorization.k8s.io
  ...
```

You can also reference a Role/ClusterRole on the hub cluster, and the same
Role/ClusterRole will be delivered to the managed cluster.

As an enhancement to the usability of the existing `ManagedServiceAccount`,
there is no "Placement Reference" in the `ClusterPermission` API because we want to
follow the same convention as the `ManagedServiceAccount` API. A `ClusterPermission`
resource lives in the managed cluster's "cluster namespace" on the hub cluster, similar
to `ManagedServiceAccount`'s design and implementation.

### Test Plan

- Unit tests
- Integration tests

### Graduation Criteria
#### Alpha
At first, this proposal will be in the alpha stage and needs to meet:
1. The new APIs are reviewed and accepted;
2. Implementation is completed to support the functionalities;
3. Test cases are developed to demonstrate that this proposal works correctly.

#### Beta
1. Need to revisit the API shape before upgrading to beta based on user feedback.

### Upgrade / Downgrade Strategy
TBD

### Version Skew Strategy
N/A

## Alternatives

We can create a ClusterSetPermission API which grants permissions to managed clusters
that belong to a managed cluster set.

Pros:
- Reduces the number of permission CRs needed to set permissions on multiple managed clusters.

Cons:
- Users will always need to define a cluster set to set the permission.

Since `ClusterPermission` is trying to follow the same principle as `ManagedServiceAccount`,
and `ManagedServiceAccount` does not need to have a cluster set, `ClusterPermission` will
be on a per managed cluster basis for now.
--------------------------------------------------------------------------------
/enhancements/sig-architecture/47-manifestwork-updatestrategy/README.md:
--------------------------------------------------------------------------------
# Manifest Update Strategy in ManifestWork

## Release Signoff Checklist

- [ ] Enhancement is `implemented`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary

Today the work agent does a full update of each manifest defined in the `ManifestWork`. There are some special treatments for different resources and fields:

### `ApplyDirectly` for known types

`ApplyDirectly` in `openshift/library-go` is used for resources with known types, including resources in the core API group, CRDs,
APIServices, etc. In general, `ApplyDirectly` will get the existing resource, compare the spec of the existing and required resources, and update
the existing one if there is a diff. `Labels` and `Annotations` will be merged from the required to the existing resource.
In addition, there are special treatments for certain types. For example, if `Secret.type` is changed in the required resource, `ApplyDirectly` will delete the existing and create a new `Secret` as required, since `Secret.type` is an immutable field.

### Apply unstructured

If a resource is not among the known types for `ApplyDirectly`, the resource will be regarded as an unstructured object. `Labels` and `Annotations` will also be merged, similar to `ApplyDirectly`. For fields not in `metadata` and `status`, the work agent will do a
deep-equal comparison and fully update the resource if there is a diff. Applying unstructured objects covers many more resource types, including the `apps` API
group and all the custom resources.

However, the apply-unstructured method brings several issues for the work agent:

#### Too many update calls

The kube-apiserver will commonly set default values for resources, so there will always be a diff when the work agent compares the existing with the
required resource. This causes many unnecessary update calls. For example, when a `deployment` is created, the kube-apiserver will set a
default service account in the deployment spec. Since the manifest defined in the `ManifestWork` does not have this field, the work agent
will update the deployment constantly.

#### Apply failure

Some resource fields are both defaulted and immutable, which makes the work agent unable to update the resource. `Job` is an example: if the user does not set `spec.selector` of a job, the kube-apiserver will set a default selector and a label on the pod.
These fields are immutable, so the work agent will fail later when it compares the job manifest and tries to update the job.

#### Fight with other actors

A resource might be managed not only by the work agent but also by other actors. Another actor may want to mutate some field in the resource,
which will later be rolled back by the work agent. Some use cases need these actors, rather than the work agent, to control some fields of the resources.

A possible solution to the first two problems is for the work agent to record the hash of the required spec and the generation of the last applied resource; instead of comparing the required manifest to the existing resource, the work agent compares the hash (to detect required changes) and the generation (to detect existing changes). However, this cannot resolve the third problem involving multiple actors.
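
A compressed sketch of that idea (the names below are illustrative):

```go
package workagent

// appliedRecord is what the agent remembers about the last successful apply.
type appliedRecord struct {
	SpecHash   string // hash of the required manifest spec at apply time
	Generation int64  // metadata.generation of the resource after apply
}

// needsUpdate skips the expensive spec comparison: an update is only needed
// when the required manifest changed (hash differs) or someone modified the
// resource on the cluster (generation differs).
func needsUpdate(last appliedRecord, requiredHash string, currentGeneration int64) bool {
	return last.SpecHash != requiredHash || last.Generation != currentGeneration
}
```
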
## Motivation

Provide an interface for the user to control how a resource defined in the `ManifestWork` should be updated.

## Proposal

We propose to update the `ManifestWork` API so the user can specify an update strategy for each resource manifest in the `ManifestWork`. Possible strategies include:

1. By default, the work agent updates the resource as implemented today.
2. The user can choose not to update the resource; the work agent will only check the existence of the resource, but not keep its spec updated.
3. The user can choose to use server-side apply when the work agent updates the resource.

### Design Details

#### API change

We could add an updateStrategy for each manifest as below:

```go
type UpdateStrategy struct {
	// type defines the strategy to update this manifest, default value is Update.
	// Update type means to update resource by an update call.
	// CreateOnly type means do not update resource based on current manifest.
	// ServerSideApply type means to update resource using server side apply with work-controller as the field manager.
	// If there is a conflict, the related Applied condition of the manifest will be in the status of False with the
	// reason of ApplyConflict.
	// +kubebuilder:default=Update
	// +kubebuilder:validation:Enum=Update;CreateOnly;ServerSideApply
	// +kubebuilder:validation:Required
	// +required
	Type string `json:"type"`

	// serverSideApply defines the configuration for server side apply. It is honored only when
	// type of updateStrategy is ServerSideApply
	// +optional
	ServerSideApply *ServerSideApplyConfig `json:"serverSideApply,omitempty"`
}

type ServerSideApplyConfig struct {
	// Force represents to force apply the manifest.
	// +optional
	Force bool `json:"force"`
}
```

An example of the API would be:

```yaml
kind: ManifestWork
metadata:
  name: demo-work1
spec:
  workload:
    manifests:
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello
        namespace: default
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: hello
        template:
          metadata:
            labels:
              app: hello
          spec:
            containers:
            - name: hello
              image: quay.io/asmacdo/busybox
              command: ['sh', '-c', 'echo "Hello, World!" && sleep 3600']
  manifestConfigs:
  - resourceIdentifier:
      group: apps
      resource: deployments
      name: hello
      namespace: default
    updateStrategy:
      type: ServerSideApply
```

It means to create the deployment `hello` on the managed cluster if it does not exist, and to use server-side apply to update
the resource if it already exists on the managed cluster.

#### Server side apply

With server-side apply, it is possible to make the work agent coordinate with other actors on the spoke.
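
For reference, a server-side apply from the work agent boils down to a patch call along these lines (a sketch using the client-go dynamic client; the field manager name here is an assumption):

```go
package workagent

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// serverSideApply applies the manifest JSON with the agent as the field manager.
// force corresponds to spec.manifestConfigs[].updateStrategy.serverSideApply.force.
func serverSideApply(ctx context.Context, client dynamic.Interface, gvr schema.GroupVersionResource,
	namespace, name string, manifestJSON []byte, force bool) error {
	_, err := client.Resource(gvr).Namespace(namespace).Patch(
		ctx, name, types.ApplyPatchType, manifestJSON,
		metav1.PatchOptions{FieldManager: "work-agent", Force: &force},
	)
	return err
}
```

A conflict error returned by this call is what would surface as the `ApplyManifestConflict` condition shown below.
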
Take the above `ManifestWork` as an
example: if there is an HPA controller on the spoke that takes ownership of the `spec.replicas` field, the `ManifestWork`
should show a failed applied status condition as below:

```yaml
status:
  resourceStatus:
    manifests:
    - conditions:
      - lastTransitionTime: "2021-10-14T14:59:09Z"
        message: Manifest ownership conflict
        reason: ApplyManifestConflict
        status: "False"
        type: Applied
      - lastTransitionTime: "2021-10-14T14:59:09Z"
        message: Resource is available
        reason: ResourceAvailable
        status: "True"
        type: Available
      resourceMeta:
        group: apps
        kind: Deployment
        name: hello
        namespace: default
        ordinal: 0
        resource: deployments
        version: v1
```

When the user sees this status condition, the user can update the `ManifestWork` by removing the replicas field, which will turn the applied status condition to true again. The message of the `Applied` condition should have the details to tell the user which field's update conflicts with which field manager. It should also show a specific message when the spoke does not support server-side apply.

```yaml
kind: ManifestWork
metadata:
  name: demo-work1
spec:
  workload:
    manifests:
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello
        namespace: default
      spec:
        selector:
          matchLabels:
            app: hello
        template:
          metadata:
            labels:
              app: hello
          spec:
            containers:
            - name: hello
              image: quay.io/asmacdo/busybox
              command: ['sh', '-c', 'echo "Hello, World!" && sleep 3600']
  manifestConfigs:
  - resourceIdentifier:
      group: apps
      resource: deployments
      name: hello
      namespace: default
    updateStrategy:
      type: ServerSideApply
```

Server-side apply has some limitations compared with the legacy update strategy. The user cannot remove a field in the resources with
server-side apply in some scenarios. For example, if the merge strategy defined for a certain field in the API schema is `merge`,
the user will not be able to remove an item from the list in this field.

#### Create Only

The create-only strategy defines that the work agent will only ensure the existence of the resource on the managed
cluster, and ignore updates to the resource upon any change in the manifestwork. Compared with the server-side apply
strategy, the create-only strategy gives ownership of all the fields in the resource to the admin or controllers
on the managed cluster, while the server-side apply strategy has finer-grained control over resource field ownership.
The create-only strategy is useful on older-version clusters where server-side apply is not supported,
but the user on the hub still wants the controllers on the managed cluster to be able to manage resources
created by the manifestwork.


### Test Plan
- test different update strategies
- test updating the strategy in the manifestwork
- test the resource apply conflict case and ensure that the message returned is correct

### Graduation Criteria
N/A

### Upgrade Strategy
It will need an upgrade of the ManifestWork CRD on the hub cluster and an upgrade of the work agent on the managed cluster.

### Version Skew Strategy
- The UpdateStrategy field is optional, and if it is not set, the manifestwork will be updated as is by a work agent of an older version.
- An older version of the work agent will ignore the UpdateStrategy field.

## Alternatives
--------------------------------------------------------------------------------
/enhancements/sig-policy/134-standalone-hub-templates/README.md:
--------------------------------------------------------------------------------
# Standalone Hub Templates

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in
  [website](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/)

## Summary

To support Open Cluster Management (OCM) hub templates without the policy framework, this feature provides an additional
`governance-standalone-hub-templating` addon which generates a user per cluster on the hub to be leveraged by the
Configuration Policy controller for hub template resolution. The OCM hub administrator will be responsible for granting
`list` and `watch` permissions to the resources needed by the hub templates. A sample `ConfigurationPolicy` can be a way
to automate this for a cluster set.

## Motivation

When an Open Cluster Management (OCM) policy user deploys their policies with external tools such as Argo CD, the hub
templating functionality is lost. This is often a critical feature for large scale deployments and prevents users from
choosing an external tool if they require an alternative to the OCM policy framework.

### Goals

1. Support hub templates for `ConfigurationPolicy` and `OperatorPolicy` when deployed through an external tool such as
   Argo CD.
1. Hub templates should resolve when referenced objects are updated on the hub.
1. Policies can still be evaluated if the managed cluster is disconnected from the hub and the policy definition remains
   the same.
1. Permissions to the hub are per managed cluster.
1. Provide a hub-only `ConfigurationPolicy` to provide permissions to cluster sets.
1. It must work without the rest of the policy framework on the managed cluster.

### Non-Goals

1. `CertificatePolicy` and Gatekeeper constraints will not be directly supported. They can be wrapped in a
   `ConfigurationPolicy` if hub templates are needed for those.
1. Updating a `ConfigurationPolicy` definition with hub templates while disconnected from the hub is not explicitly
   supported. This will almost always work based on the proposed design as long as there aren't new hub template API
   queries, but if the update adds a hub template API query that isn't in the Configuration Policy controller cache, it
   will not work.

## Proposal

### Design

#### New governance-standalone-hub-templating Addon

Currently, hub templates are resolved by the Governance Policy Propagator as part of the replicated `Policy` creation.
In standalone mode, the Governance Policy Propagator and the Governance Policy Framework are not part of the equation.
Because of this, the simplest and most reliable approach is to add hub template support in the Configuration Policy
controller.

A way to achieve this is to create an additional addon that is **disabled** by default called
`governance-standalone-hub-templating`. Enabling this addon will cause the following:

- The addon framework creates a user/subject represented by a certificate on the hub for each managed cluster. The
  initial permissions will be `list` and `watch` on the managed cluster's `ManagedCluster` object on the hub. This will
  allow the hub template variable of `.ManagedClusterLabels` to be populated.
- The addon framework will create a `Secret` in the addon namespace on the managed cluster with the hub user kubeconfig
  that will be leveraged by the Configuration Policy controller.
- The Governance Policy Addon controller should configure the `config-policy-controller` to mount the generated
  kubeconfig when the `governance-standalone-hub-templating` addon is enabled and pass a flag to enable the standalone
  hub templating mode.
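
As an illustrative sketch of how the hub administrator might then extend the permissions for one cluster's hub templates (all names below, including the subject's identity format, are assumptions of this sketch and not defined by this proposal):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: standalone-hub-templating-cluster1  # hypothetical name
  namespace: app-config                     # hypothetical namespace read by hub templates
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: standalone-hub-templating-cluster1
  namespace: app-config
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: standalone-hub-templating-cluster1
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  # assumed identity format; check what the addon framework actually generates
  name: system:open-cluster-management:cluster:cluster1:addon:governance-standalone-hub-templating
```
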
#### Configuration Policy changes

When the Configuration Policy controller encounters a hub template while standalone hub templating mode is disabled, it
should mark the policy as `NonCompliant` with a message such as
`The governance-standalone-hub-templating addon must be enabled to resolve hub templates from the cluster`. The
`ConfigurationPolicy` objects created on the managed cluster through the policy framework will never have hub templates
since the policy framework marks these policies as `NonCompliant`, assuming the hub templates failed to resolve on the
hub. With this in mind, the user's intentions are safe to assume when a hub template is encountered.

When the Configuration Policy controller has standalone hub templating mode enabled, it should instantiate a separate
`TemplateResolver` with the `governance-standalone-hub-templating` hub user and resolve hub templates (i.e.
`{{hub ... hub}}`) prior to managed cluster templates (i.e. `{{ ... }}`). By default, this `TemplateResolver` will have
no permissions on the hub cluster. The hub administrator must grant `list` and `watch` permissions on the objects
accessed by hub templates. The error messages for lack of permissions are clear due to the work done in the "Expand Hub
Template Access on Policies" feature.

If the changes stopped here, the case where the managed cluster is always connected to the hub would work well. Short
disconnects would be okay since objects referenced by the hub templates are cached. However, once the Configuration Policy
controller restarts, and thus the cache is cleared, the policy could no longer be evaluated due to hub template
resolution failing.

The proposed solution is for the Configuration Policy controller to create a `Secret` named `<name>-last-resolved`
in the managed cluster namespace with owner references to the policy. This `Secret` would contain a stripped-down
`ConfigurationPolicy`/`OperatorPolicy` after hub templates are resolved. This must include at least the `generation` and
the `spec`. In the event the Configuration Policy controller restarts and fails to resolve hub templates due to a
"connection refused" error, the Configuration Policy controller will fall back to the value in the
`<name>-last-resolved` `Secret` if the `generation` matches the current `ConfigurationPolicy`/`OperatorPolicy`.
The `generation` detects if the `ConfigurationPolicy`/`OperatorPolicy` was updated since it was last resolved. In
other words, modifying a `ConfigurationPolicy`/`OperatorPolicy` with hub templates while disconnected from the hub will
not be supported. It may work depending on what's in the `TemplateResolver` cache, but this feature will not guarantee
any resilience to this, as noted in the "Non-Goals" section. The reason it's stored in a `Secret` is to leverage etcd
encryption in the event a hub template references a `Secret`.
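
A rough sketch of what such a `Secret` might look like (the namespace, key name, and content layout are assumptions of this sketch, and the owner reference is omitted):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: install-nginx-last-resolved  # <name>-last-resolved for a policy named install-nginx
  namespace: open-cluster-management-agent-addon  # assumed: the namespace of the policy
stringData:
  # assumed key name; stores the stripped-down policy after hub template resolution
  last-resolved.json: |
    {
      "generation": 3,
      "spec": { "...": "spec with hub templates already resolved" }
    }
```
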
### User Stories

#### Story 1

As an Argo CD user, I'd like to leverage `ConfigurationPolicy` to securely copy secrets from the Open Cluster Management
hub to the managed cluster.

#### Story 2

As a `ConfigurationPolicy` user, I'd like to leverage a centralized `ConfigMap` on the hub for dynamic configuration
parameters for each managed cluster.

#### Story 3

As a hub template user without the policy framework, I'd like to have an easy way to grant permissions to groups of
clusters for hub templates. This will be a sample ConfigurationPolicy to grant a `Role` to a cluster set.

#### Story 4

As a policy framework user, I'd like to disable standalone hub templates support since I want to decrease the attack
surface of the Open Cluster Management hub.

### Implementation Details/Notes/Constraints

#### Event Driven Aspect

Let's continue using the [go-template-utils](https://github.com/stolostron/go-template-utils) library, which uses the
[kubernetes-dependency-watches](https://github.com/stolostron/kubernetes-dependency-watches) library under the hood for
resolving this style of hub templates. The difference is that the API watches are against a different cluster than the
cluster the Configuration Policy controller is running on. In the case that the managed cluster is disconnected from the
hub, the [kubernetes-dependency-watches](https://github.com/stolostron/kubernetes-dependency-watches) library's use of
`RetryWatcher` causes it to continuously retry without a backoff. Not only does this spam the logs, it can keep the CPU
unnecessarily busy.

Proposed solution:

1. Modify
   [watchLatest](https://github.com/stolostron/kubernetes-dependency-watches/blob/85c50006641c8c5a951bdd35bea3d48708c7e9a5/client/client.go#L725-L791)
   to have an exponential backoff (see the sketch below).
1. Lower the log level for connection refused errors in client-go in the
   [RetryWatcher doReceive](https://github.com/kubernetes/kubernetes/blob/5864a4677267e6adeae276ad85882a8714d69d9d/staging/src/k8s.io/client-go/tools/watch/retrywatcher.go#L123-L134)
   method.
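
For the first item, a sketch of the kind of backoff that could wrap the watch retries (the parameter values are placeholders):

```go
package dependencywatches

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// watchBackoff is a placeholder configuration: retries start at one second,
// double on each attempt with some jitter, and are capped at five minutes so
// a long hub disconnect no longer spins the CPU or floods the logs.
var watchBackoff = wait.Backoff{
	Duration: time.Second,
	Factor:   2.0,
	Jitter:   0.1,
	Steps:    10,
	Cap:      5 * time.Minute,
}

// retryWatch re-establishes the watch with exponential backoff instead of
// retrying immediately on "connection refused" errors.
func retryWatch(connect func() error) error {
	return wait.ExponentialBackoff(watchBackoff, func() (bool, error) {
		if err := connect(); err != nil {
			return false, nil // not done yet; back off and retry
		}
		return true, nil
	})
}
```
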
#### Enabling the Addon For All Clusters

We'll need to document that a `ClusterManagementAddOn` can be tied to a global `Placement` leveraging the addon
`installStrategy`.

### Risks and Mitigation

- Increased hub attack surface due to an additional hub user. The addon is disabled by default and the user must
  explicitly grant permissions to the hub user.

### Test Plan

**Note:** _Section not required until targeted at a release._

### Graduation Criteria

N/A

### Upgrade / Downgrade Strategy

A downgrade would cause the `ConfigurationPolicy` to enforce unresolved hub templates as the actual values. The user
must remove these policies before downgrading.

### Version Skew Strategy

## Implementation History

N/A

## Drawbacks

- Requires a lot of effort by the user to configure the permissions in a large environment. A provided
  `ConfigurationPolicy` example should help.

## Alternatives

- We could consider reusing the Configuration Policy hub user rather than a separate addon, as it only has access to
  manage leases on the hub. This would greatly reduce the complexity of the setup, but it would violate the least
  privilege principle, and the risk could be greater if the Configuration Policy controller requires additional
  permissions on the hub in the future.

## Infrastructure Needed [optional]

N/A
--------------------------------------------------------------------------------