├── License.md
├── README.md
├── bin
│   ├── exa-csi-metrics-exporter.tar
│   ├── exascaler-csi-file-driver
│   └── exascaler-csi-file-driver.tar
├── deploy
│   ├── helm-chart
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   ├── config.yaml
│   │   │   ├── controller-driver.yaml
│   │   │   ├── metrics
│   │   │   │   ├── crb.yaml
│   │   │   │   ├── daemonset.yaml
│   │   │   │   ├── lease.yaml
│   │   │   │   ├── prometheus-rule.yaml
│   │   │   │   ├── rbac.yaml
│   │   │   │   ├── service.yaml
│   │   │   │   └── servicemonitor.yaml
│   │   │   └── node-driver.yaml
│   │   └── values.yaml
│   ├── kubernetes
│   │   ├── exascaler-csi-file-driver-config.yaml
│   │   ├── exascaler-csi-file-driver.yaml
│   │   ├── metrics
│   │   │   ├── crb.yaml
│   │   │   ├── daemonset.yaml
│   │   │   ├── lease.yaml
│   │   │   ├── prometheus
│   │   │   │   └── prometheus-config.yaml
│   │   │   ├── rbac.yaml
│   │   │   ├── service.yaml
│   │   │   └── servicemonitor.yaml
│   │   └── snapshots
│   │       ├── crds.yaml
│   │       └── snapshotter.yaml
│   └── openshift
│       ├── exascaler-csi-file-driver-config.yaml
│       ├── exascaler-csi-file-driver.yaml
│       ├── lustre-module
│       │   ├── ko2iblnd-mod.yaml
│       │   ├── ksocklnd-mod.yaml
│       │   ├── lnet-configuration-ds.yaml
│       │   ├── lnet-mod.yaml
│       │   ├── lustre-dockerfile-configmap.yaml
│       │   └── lustre-mod.yaml
│       └── snapshots
│           ├── crds.yaml
│           └── snapshotter.yaml
└── examples
    ├── exa-dynamic-nginx-sub.yaml
    ├── exa-dynamic-nginx-zone.yaml
    ├── exa-dynamic-nginx.yaml
    ├── exa-nginx-replicaset.yaml
    ├── nginx-combined-volumes.yaml
    ├── nginx-from-snapshot.yaml
    ├── nginx-persistent-volume.yaml
    ├── pvc-bindmount.yaml
    ├── pvc-configname.yaml
    ├── sc-override-config-dynamic.yaml
    ├── snapshot-class.yaml
    ├── snapshot-from-dynamic.yaml
    └── snapshot-from-persistent.yaml

/License.md:
--------------------------------------------------------------------------------
DDN licenses the EXA CSI driver under its license at: https://www.ddn.com/resources/legal/ddn-end-user-license-agreement/

The DDN license is subject to 3rd party licenses applicable to modules of the exa-csi-driver as listed below.
Here is the list of 3rd party modules' licenses:

|Components |License Link|
|--- |--- |
|github.com/antonfisher/nested-logrus-formatter|https://github.com/antonfisher/nested-logrus-formatter/blob/master/LICENSE|
|github.com/container-storage-interface/spec|https://github.com/container-storage-interface/spec/blob/master/LICENSE|
|github.com/educlos/testrail|https://github.com/educlos/testrail/blob/master/LICENSE|
|google.golang.org/protobuf|https://pkg.go.dev/google.golang.org/protobuf?tab=licenses|
|github.com/kubernetes-csi/csi-lib-utils|https://github.com/kubernetes-csi/csi-lib-utils/blob/master/LICENSE|
|github.com/sirupsen/logrus|https://github.com/sirupsen/logrus/blob/master/LICENSE|
|golang.org/x/net|https://pkg.go.dev/golang.org/x/net?tab=licenses|
|google.golang.org/grpc|https://pkg.go.dev/google.golang.org/grpc?tab=licenses|
|gopkg.in/yaml.v2|https://github.com/go-yaml/yaml/blob/v3/LICENSE|
|k8s.io/kubernetes|https://github.com/kubernetes/registry.k8s.io/blob/main/LICENSE|
|k8s.io/utils|https://github.com/kubernetes/registry.k8s.io/blob/main/LICENSE|
|base image - alpine linux|https://github.com/alpinelinux/alpine-wiki/blob/master/LICENSE|

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Exascaler-csi-file-driver

Releases can be found here - https://github.com/DDNStorage/exa-csi-driver/releases

## Compatibility matrix
|CSI driver version|EXAScaler client version|EXAScaler server version|
|--- |---|---|
|>=v2.3.0|>=2.14.0-ddn182|>=6.3.2|

## Feature List
|Feature|Feature Status|CSI Driver Version|CSI Spec Version|Kubernetes Version|Openshift Version|
|--- |--- |--- |--- |--- |--- |
|Static Provisioning|GA|>= 1.0.0|>= 1.0.0|>=1.18|>=4.13|
|Dynamic Provisioning|GA|>= 1.0.0|>= 1.0.0|>=1.18|>=4.13|
|RW mode|GA|>= 1.0.0|>= 1.0.0|>=1.18|>=4.13|
|RO mode|GA|>= 1.0.0|>= 1.0.0|>=1.18|>=4.13|
|Expand volume|GA|>= 1.0.0|>= 1.1.0|>=1.18|>=4.13|
|StorageClass Secrets|GA|>= 1.0.0|>=1.0.0|>=1.18|>=4.13|
|Mount options|GA|>= 1.0.0|>= 1.0.0|>=1.18|>=4.13|
|Topology|GA|>= 2.0.0|>= 1.0.0|>=1.17|>=4.13|
|Snapshots|GA|>= 2.2.6|>= 1.0.0|>=1.17|>=4.13|
|Exascaler Hot Nodes|GA|>= 2.3.0|>= 1.0.0|>=1.18|Not supported yet|

## Access Modes support
|Access mode|Supported in version|
|--- |--- |
|ReadWriteOnce|>=1.0.0|
|ReadOnlyMany|>=2.2.3|
|ReadWriteMany|>=1.0.0|
|ReadWriteOncePod|>=2.2.3|

## OpenShift Certification
|OpenShift Version|CSI driver Version|EXA Version|
|---|---|---|
|v4.13|>=v2.2.3|>=v6.3.0|
|v4.14|>=v2.2.4|>=v6.3.0|
|v4.15|>=v2.2.4|>=v6.3.0|

## OpenShift
### Prerequisites
The internal OpenShift image registry needs to be patched to allow building lustre modules with KMM.
```bash
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}'
```

### Building lustre rpms
You will need a VM with a kernel version matching that of the OpenShift nodes. To check on the nodes:
```bash
oc get nodes
oc debug node/c1-pk6k4-worker-0-2j6w8
uname -r
```

On the builder VM, install the matching kernel and boot into it, example for RHEL 9.2:
```bash
# login to subscription-manager
subscription-manager register --username <username> --password <password> --auto-attach
# list available kernels
yum --showduplicates list available kernel
yum install kernel-<version>-<release>.<arch>
# e.g.: yum install kernel-5.14.0-284.25.1.el9_2.x86_64
grubby --info=ALL | grep title
title="Red Hat Enterprise Linux (5.14.0-284.11.1.el9_2.x86_64) 9.2 (Plow)" <---- 0
title="Red Hat Enterprise Linux (5.14.0-284.25.1.el9_2.x86_64) 9.2 (Plow)" <---- 1

grub2-set-default 1
reboot
```

Copy the EXAScaler client tar from the EXAScaler server:
```bash
scp root@<exa_server>:/scratch/EXAScaler-<version>/exa-client-<version>.tar.gz .
tar -xvf exa-client-<version>.tar.gz
cd exa-client
./exa_client_deploy.py -i
```

This will build the rpms and install the client.
Upload the rpms to any repository available from the cluster and change deploy/openshift/lustre-module/lustre-dockerfile-configmap.yaml lines 12-13 accordingly.
Make sure that the `kmod-lustre-client-*.rpm`, `lustre-client-*.rpm` and `lustre-client-devel-*.rpm` packages are present.
```
RUN git clone https://github.com/Qeas/rpms.git # change this to your repo with matching rpms
RUN yum -y install rpms/*.rpm
```

### Loading lustre modules in OpenShift

Before loading the lustre modules, make sure to install OpenShift Kernel Module Management (KMM) via the OpenShift console.

```bash
oc create -n openshift-kmm -f deploy/openshift/lustre-module/lustre-dockerfile-configmap.yaml
oc apply -n openshift-kmm -f deploy/openshift/lustre-module/lnet-mod.yaml
```
Wait for the builder pod (e.g. `lnet-build-5f265-build`) to finish. After the builder finishes,
you should have a module-loader pod (e.g. `lnet-8b72w-6fjwh`) running on each worker node.
```bash
# run ko2iblnd-mod if you are using an Infiniband network
oc apply -n openshift-kmm -f deploy/openshift/lustre-module/ko2iblnd-mod.yaml
```

Make changes to `deploy/openshift/lustre-module/lnet-configuration-ds.yaml` line 38 according to the cluster's network:
```
lnetctl net add --net tcp --if br-ex # change interface according to your cluster
```
Configure lnet and install lustre:
```
oc apply -n openshift-kmm -f deploy/openshift/lustre-module/lnet-configuration-ds.yaml
oc apply -n openshift-kmm -f deploy/openshift/lustre-module/lustre-mod.yaml
```

### Installing the driver

Make sure that `openshift: true` is set in `deploy/openshift/exascaler-csi-file-driver-config.yaml`.
Create a secret from the config file and apply the driver yaml.

```bash
oc create -n openshift-kmm secret generic exascaler-csi-file-driver-config --from-file=deploy/openshift/exascaler-csi-file-driver-config.yaml
oc apply -n openshift-kmm -f deploy/openshift/exascaler-csi-file-driver.yaml
```

### Uninstall

```bash
oc delete -n openshift-kmm secret exascaler-csi-file-driver-config
oc delete -n openshift-kmm -f deploy/openshift/exascaler-csi-file-driver.yaml
oc delete -n openshift-kmm -f deploy/openshift/lustre-module/lustre-mod.yaml
oc delete -n openshift-kmm -f deploy/openshift/lustre-module/lnet-configuration-ds.yaml
oc delete -n openshift-kmm -f deploy/openshift/lustre-module/ko2iblnd-mod.yaml
oc delete -n openshift-kmm -f deploy/openshift/lustre-module/lnet-mod.yaml
oc delete -n openshift-kmm -f deploy/openshift/lustre-module/lustre-dockerfile-configmap.yaml
oc get images | grep lustre-client-moduleloader | awk '{print $1}' | xargs oc delete image
```


### Snapshots
To use CSI snapshots, the snapshot CRDs along with the csi-snapshotter must be installed.
```bash
oc apply -f deploy/openshift/snapshots/
```

After that, the snapshot class for EXA CSI must be created.
Snapshot parameters can be passed through the snapshot class, as can be seen in `examples/snapshot-class.yaml`.
List of available snapshot parameters:

| Name | Description | Example |
|----------------|-------------------------------------------------------------------|--------------------------------------|
| `snapshotFolder` | [Optional] Folder on the EXAScaler filesystem where the snapshots will be created. | `csi-snapshots` |
| `snapshotUtility` | [Optional] Either `tar` or `dtar`. Default is `tar`. | `dtar` |
| `snapshotMd5Verify` | [Optional] Defines whether the driver should do an md5sum check on the snapshot. Ensures that the snapshot is not corrupt but reduces performance. Default is `false`. | `true` |
| `exaFS` | Same parameter as in the storage class. If the original volume was created using the storage class parameter `exaFS`, this MUST match the storage class value. | `10.3.3.200@tcp:/csi-fs` |
| `mountPoint` | Same parameter as in the storage class. If the original volume was created using the storage class parameter `mountPoint`, this MUST match the storage class value. | `/exa-csi-mnt` |

```bash
oc apply -f examples/snapshot-class.yaml
```

Now a snapshot can be created, for example:
```bash
oc apply -f examples/snapshot-from-dynamic.yaml
```

We can create a volume using the snapshot:
```bash
oc apply -f examples/nginx-from-snapshot.yaml
```

The EXAScaler CSI driver supports two snapshot modes: `tar` and `dtar`.
The default mode is `tar`; `dtar` is much faster.
To enable `dtar`, set `snapshotUtility: dtar` in the config.
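Putting the parameters above together, a snapshot class similar to `examples/snapshot-class.yaml` could look like the following sketch (the class name and parameter values here are illustrative, not the exact contents of the example file):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: exascaler-csi-snapshot-class   # illustrative name
driver: exa.csi.ddn.com
deletionPolicy: Delete
parameters:
  snapshotFolder: csi-snapshots    # optional folder on the EXAScaler filesystem
  snapshotUtility: tar             # or dtar (requires dtar on all nodes)
  snapshotMd5Verify: "false"       # md5 check of the snapshot; safer but slower when "true"
```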

## Kubernetes
### Requirements

- Required API server and kubelet feature gates for k8s version < 1.16 (skip this step for k8s >= 1.16)
([instructions](https://github.com/kubernetes-csi/docs/blob/735f1ef4adfcb157afce47c64d750b71012c8151/book/src/Setup.md#enabling-features)):
```
--feature-gates=ExpandInUsePersistentVolumes=true,ExpandCSIVolumes=true,ExpandPersistentVolumes=true
```
- Mount propagation must be enabled; the Docker daemon for the cluster must allow shared mounts
([instructions](https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation))

- MpiFileUtils `dtar` must be installed on all Kubernetes nodes in order to use `dtar` as the snapshot utility. Not required if `tar` is used for snapshots (default).

### Prerequisites
The EXAScaler client must be installed and configured on all Kubernetes nodes. Please refer to the EXAScaler Installation and Administration Guide.

### Installation
Clone or untar the driver (depending on where you get the driver from):
```bash
git clone -b <version> https://github.com/DDNStorage/exa-csi-driver.git /opt/exascaler-csi-file-driver
```
e.g.:
```bash
git clone -b 2.2.4 https://github.com/DDNStorage/exa-csi-driver.git /opt/exascaler-csi-file-driver
```
or
```bash
rpm -Uvh exa-csi-driver-1.0-1.el7.x86_64.rpm
```

### Using helm chart

Pull the latest helm chart configuration before installing or upgrading.

#### Install
- Make changes to `deploy/helm-chart/values.yaml` according to your Kubernetes and EXAScaler clusters environment.

- If the metrics exporter is required, make changes to `deploy/helm-chart/values.yaml` according to your environment.
refer to [metrics exporter configuration](#metrics-exporter-configuration).

- Run `helm install -n ${namespace} exascaler-csi-file-driver deploy/helm-chart/`

#### Uninstall
`helm uninstall -n ${namespace} exascaler-csi-file-driver`

#### Upgrade

- Make any necessary changes to the chart, for example a new driver version: `tag: "v2.3.4"` in `deploy/helm-chart/values.yaml`.
- If the metrics exporter is required, make changes to `deploy/helm-chart/values.yaml` according to your environment; refer to [metrics exporter configuration](#metrics-exporter-configuration).
- Run `helm upgrade -n ${namespace} exascaler-csi-file-driver deploy/helm-chart/`


### Metrics exporter configuration

List of exported metrics:
- exa_csi_pvc_capacity_bytes - Total capacity in bytes of PVs with provisioner exa.csi.ddn.com
- exa_csi_pvc_used_bytes - Used bytes of PVs with provisioner exa.csi.ddn.com
- exa_csi_pvc_available_bytes - Available bytes of PVs with provisioner exa.csi.ddn.com
- exa_csi_pvc_pod_count - Number of pods using a PVC with provisioner exa.csi.ddn.com

Each of these metrics reports with 3 labels: "pvc", "storage_class", "exported_namespace".
These labels can be used to group the metrics by Kubernetes storage class and namespace, as shown below in [Add Prometheus Rules for StorageClass and Namespace Aggregation](#add-prometheus-rules-for-storageclass-and-namespace-aggregation).

```yaml
metrics:
  enabled: true # Enable metrics exporter
  exporter:
    repository: quay.io/ddn/exa-csi-metrics-exporter
    tag: master # use latest version same as driver version e.g - tag: "2.3.4"
    pullPolicy: Always
  containerPort: "9200" # Metrics Exporter port
  servicePort: "9200" # This port will be used to create the service for the exporter
  namespace: default # Namespace where the metrics exporter will be deployed
  logLevel: info # Log level. Debug is not recommended for production, default is info.
  collectTimeout: 10 # Metrics refresh timeout in seconds. Default is 10.
  serviceMonitor:
    enabled: false # set to true if using prometheus operator
    interval: 30s
    namespace: monitoring # Namespace where prometheus operator is deployed
  prometheus:
    releaseLabel: prometheus # release label for prometheus operator
    createClusterRole: false # set to false if using an existing ClusterRole
    createClustrerRoleName: exa-prometheus-cluster-role # Name of the ClusterRole to create; this name will be used to bind the cluster role to the prometheus service account
    clusterRoleName: cluster-admin # Name of the ClusterRole to bind to the prometheus service account if createClusterRole is set to false, otherwise `createClustrerRoleName` will be used
    serviceAccountName: prometheus-kube-prometheus-prometheus # Service account name of prometheus
```

## Prometheus Server Integration

This section describes how to:
- Add the EXAScaler CSI metrics exporter to Prometheus' scrape configuration.
- Apply PrometheusRule definitions for metrics aggregation by StorageClass and Namespace.

### 1. Add EXAScaler CSI Metrics Exporter to Prometheus

#### Step 1.1 – Create a Secret with Scrape Config

Get the hostPort number of the EXAScaler CSI metrics exporter:
```bash
kubectl get daemonset exa-csi-metrics-exporter -o jsonpath='{.spec.template.spec.containers[*].ports[*].hostPort}{"\n"}'

32666
```

Create a Kubernetes secret that holds your additional Prometheus scrape configuration:
```bash
PORT=32666
cat <<EOF > additional-scrape-configs.yaml
- job_name: 'exa-csi-metrics-exporter-remote'
  static_configs:
    - targets:
      - 10.20.30.1:$PORT
      - 10.20.30.2:$PORT
      - 10.20.30.3:$PORT
EOF
kubectl create secret generic additional-scrape-configs \
  --from-file=additional-scrape-configs.yaml -n monitoring
```
Note: Replace the targets value with the actual IPs and port of your EXAScaler CSI metrics exporter.

#### Step 1.2 – Patch the Prometheus Custom Resource

Edit the Prometheus CR (custom resource) and add the additionalScrapeConfigs reference:
```bash
kubectl edit prometheus prometheus-kube-prometheus-prometheus -n monitoring
```
Add the following under spec:
```yaml
additionalScrapeConfigs:
  name: additional-scrape-configs
  key: additional-scrape-configs.yaml
```

#### Step 1.3 – Restart Prometheus

After editing the CR, restart Prometheus so it reloads the configuration:
```bash
kubectl delete pod -l app.kubernetes.io/name=prometheus -n monitoring
```

### 2. Add Prometheus Rules for StorageClass and Namespace Aggregation

Apply a PrometheusRule resource to aggregate PVC metrics by StorageClass and Namespace:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: exa-csi-rules
  namespace: monitoring
  labels:
    release: prometheus
spec:
  groups:
    - name: exa-storageclass-rules
      interval: 15s
      rules:
        - record: exa_csi_sc_capacity_bytes
          expr: sum(exa_csi_pvc_capacity_bytes) by (storage_class)
        - record: exa_csi_sc_used_bytes
          expr: sum(exa_csi_pvc_used_bytes) by (storage_class)
        - record: exa_csi_sc_available_bytes
          expr: sum(exa_csi_pvc_available_bytes) by (storage_class)
        - record: exa_csi_sc_pvc_count
          expr: count(exa_csi_pvc_capacity_bytes) by (storage_class)

    - name: exa-namespace-rules
      interval: 15s
      rules:
        - record: exa_csi_namespace_capacity_bytes
          expr: sum(exa_csi_pvc_capacity_bytes) by (exported_namespace)
        - record: exa_csi_namespace_used_bytes
          expr: sum(exa_csi_pvc_used_bytes) by (exported_namespace)
        - record: exa_csi_namespace_available_bytes
          expr: sum(exa_csi_pvc_available_bytes) by (exported_namespace)
        - record: exa_csi_namespace_pvc_count
          expr: count(exa_csi_pvc_capacity_bytes) by (exported_namespace)
```
Apply the rule:
```bash
kubectl apply -f exa-csi-rules.yaml
```
Ensure that Prometheus is watching the monitoring namespace for PrometheusRule CRDs.



### Using docker load and kubectl commands

1. Load the driver image:
```bash
docker load -i /opt/exascaler-csi-file-driver/bin/exascaler-csi-file-driver.tar
```
#### Verify version
```bash
kubectl get deploy/exascaler-csi-controller -o jsonpath="{..image}"
```

2. Copy /opt/exascaler-csi-file-driver/deploy/kubernetes/exascaler-csi-file-driver-config.yaml to /etc/exascaler-csi-file-driver-v1.0/exascaler-csi-file-driver-config.yaml
```
cp /opt/exascaler-csi-file-driver/deploy/kubernetes/exascaler-csi-file-driver-config.yaml /etc/exascaler-csi-file-driver-v1.0/exascaler-csi-file-driver-config.yaml
```
Edit the `/etc/exascaler-csi-file-driver-v1.0/exascaler-csi-file-driver-config.yaml` file. Driver configuration example:
```yaml
exascaler_map:
  exa1:
    mountPoint: /exaFS # mountpoint on the host where the exaFS will be mounted
    exaFS: 192.168.88.114@tcp2:192.168.98.114@tcp2:/testfs # default path to exa filesystem where the PVCs will be stored
    managementIp: 10.204.86.114@tcp # network for management operations, such as create/delete volume
    zone: zone-1

  exa2:
    mountPoint: /exaFS-zone-2 # mountpoint on the host where the exaFS will be mounted
    exaFS: 192.168.78.112@tcp2:/testfs/zone-2 # default path to exa filesystem where the PVCs will be stored
    managementIp: 10.204.86.114@tcp # network for management operations, such as create/delete volume
    zone: zone-2

  exa3:
    mountPoint: /exaFS-zone-3 # mountpoint on the host where the exaFS will be mounted
    exaFS: 192.168.98.113@tcp2:192.168.88.113@tcp2:/testfs/zone-3 # default path to exa filesystem where the PVCs will be stored
    managementIp: 10.204.86.114@tcp # network for management operations, such as create/delete volume
    zone: zone-3

debug: true
```

3. Create a Kubernetes secret from the file:
```bash
kubectl create secret generic exascaler-csi-file-driver-config --from-file=/etc/exascaler-csi-file-driver-v1.0/exascaler-csi-file-driver-config.yaml
```

4. Register the driver to Kubernetes:
```bash
kubectl apply -f /opt/exascaler-csi-file-driver/deploy/kubernetes/exascaler-csi-file-driver.yaml
```

## Usage

### Dynamically provisioned volumes

For dynamic volume provisioning, the administrator needs to set up a _StorageClass_ pointing to the driver (see /opt/exascaler-csi-file-driver/examples/exa-dynamic-nginx.yaml for this example).
For dynamically provisioned volumes, Kubernetes generates the volume name automatically (for example `pvc-ns-cfc67950-fe3c-11e8-a3ca-005056b857f8-projectId-1001`).

A basic storage class example, using a config entry to access EXAScaler:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: exascaler-csi-file-driver-sc-nginx-dynamic
provisioner: exa.csi.ddn.com
parameters:
  configName: exa1
```

The following example shows how to use a storage class to override config values:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: exascaler-csi-file-driver-sc-nginx-dynamic
provisioner: exa.csi.ddn.com
allowedTopologies:
- matchLabelExpressions:
  - key: topology.exa.csi.ddn.com/zone
    values:
    - zone-1
mountOptions: # list of options for `mount -o ...` command
  # - noatime
parameters:
  exaMountUid: "1001" # Uid which will be used to access the volume in the pod. Should be synced between EXA server and clients.
  exaMountGid: "1002" # Gid which will be used to access the volume in the pod. Should be synced between EXA server and clients.
  bindMount: "true" # Determines whether the volume will be bind-mounted or mounted as a separate lustre mount.
  exaFS: "10.204.86.114@tcp:/testfs" # Overrides the exaFS value from config. Use this to support multiple EXA filesystems.
  mountPoint: /exaFS # Overrides the mountPoint value from config. Use this to support multiple EXA filesystems.
  mountOptions: ro,noflock
  minProjectId: 10001 # project id range for this storage class
  maxProjectId: 20000
```

#### Example

Run an Nginx pod with a dynamically provisioned volume:

```bash
kubectl apply -f /opt/exascaler-csi-file-driver/examples/exa-dynamic-nginx.yaml

# to delete this pod:
kubectl delete -f /opt/exascaler-csi-file-driver/examples/exa-dynamic-nginx.yaml
```

### Static (pre-provisioned) volumes

The driver can use an already existing EXAScaler filesystem;
in this case, _StorageClass_, _PersistentVolume_ and _PersistentVolumeClaim_ should be configured.

A quota can be manually assigned for static volumes. First we need to associate a project with the volume folder:
```bash
lfs project -p 1000001 -s /mountPoint-csi/nginx-persistent
```

Then a quota can be set for that project:
```bash
lfs setquota -p 1000001 -B 2G /mountPoint-csi/
```

#### _StorageClass_ configuration

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: exascaler-csi-driver-sc-nginx-persistent
provisioner: exa.csi.ddn.com
allowedTopologies:
- matchLabelExpressions:
  - key: topology.exa.csi.ddn.com/zone
    values:
    - zone-1
mountOptions: # list of options for `mount -o ...` command
  # - noatime
```

#### _PersistentVolume_ configuration

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: exascaler-csi-driver-pv-nginx-persistent
  labels:
    name: exascaler-csi-driver-pv-nginx-persistent
spec:
  storageClassName: exascaler-csi-driver-sc-nginx-persistent
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: exa.csi.ddn.com
    volumeHandle: exa1:10.3.3.200@tcp;/exaFS:/mountPoint-csi:/nginx-persistent
    volumeAttributes: # volumeAttributes are the alternative of storageClass params for static (precreated) volumes.
      exaMountUid: "1001" # Uid which will be used to access the volume in the pod.
      #mountOptions: ro, flock # list of options for `mount` command
      projectId: "1000001"
```

### Topology configuration
In order to configure the CSI driver with Kubernetes topology, use the `zone` parameter in the driver config or storageClass parameters. Example config file with zones:
```yaml
exascaler_map:
  exa1:
    exaFS: 10.3.196.24@tcp:/csi
    mountPoint: /exaFS
    zone: us-west
  exa2:
    exaFS: 10.3.196.24@tcp:/csi-2
    mountPoint: /exaFS-zone2
    zone: us-east
```

This will assign volumes to be created on the EXAScaler cluster that corresponds with the zones requested by `allowedTopologies` values.

For topology-aware scheduling:

Each node must have the `topology.exa.csi.ddn.com/zone` label with the zone configuration.

Label a node with a topology zone:
```bash
kubectl label node <node_name> topology.exa.csi.ddn.com/zone=us-west
```
For removing the label:
```bash
kubectl label node <node_name> topology.exa.csi.ddn.com/zone-
```

Important: If topology labels are added after driver installation, the node driver must be restarted on the affected nodes.

Restart the node driver pod on a specific node:
```bash
kubectl delete pod -l app=exascaler-csi-node -n <namespace> --field-selector spec.nodeName=<node_name>
```

or restart all node driver pods:
```bash
kubectl rollout restart daemonset exascaler-csi-node -n <namespace>
```

For using `volumeBindingMode: WaitForFirstConsumer`, the topology must be configured.
555 | 556 | ```yaml 557 | apiVersion: storage.k8s.io/v1 558 | kind: StorageClass 559 | metadata: 560 | name: exascaler-csi-file-driver-sc-nginx-dynamic 561 | provisioner: exa.csi.ddn.com 562 | volumeBindingMode: WaitForFirstConsumer 563 | parameters: 564 | configName: exa1 565 | ``` 566 | 567 | ### CSI Parameters 568 | If a parameter is available for both config and storage class, storage class parameter will override config values 569 | 570 | |Config|Storage class| Description | Example | 571 | |--------|------------|------------------------------------------------------------------|--------------------------------------| 572 | | `exaFS` | `exaFS` | [required] Full path to EXAScaler filesystem | `10.3.3.200@tcp:/csi-fs` | 573 | | `mountPoint` |`mountPoint` | [required] Mountpoint on Kubernetes host where the exaFS will be mounted | `/exa-csi-mnt` | 574 | | - | `driver` [required] | Installed driver name " exa.csi.ddn.com" | `exa.csi.ddn.com` | 575 | | - | `volumeHandle` | [required for static volumes] The format is <configName>:<NID>;<Exascaler filesystem path>:<mountPoint>:<volumeHandle>. **Note:** The NID and Exascaler filesystem path are separated by a semicolon ( ; ), all other fields are delimited by a colon ( : ). | `exa1:10.3.3.200@tcp;/exaFS:/mountPoint-csi:/nginx-persistent` | 576 | | - | `exaMountUid` | Uid which will be used to access the volume from the pod. | `1015` | 577 | | - | `exaMountGid` | Gid which will be used to access the volume from the pod. | `1015` | 578 | | - | `projectId` | Points to EXA project id to be used to set volume quota. Automatically generated by the driver if not provided. | `100001` | 579 | | `managementIp` | `managementIp` | Should be used if there is a separate network configured for management operations, such as create/delete volumes. 
This network should have access to all Exa filesystems in a isolated zones environment | `192.168.10.20@tcp2` | 580 | | `bindMount` | `bindMount` | Determines, whether volume will bind mounted or as a separate lustre mount. Default is `true` | `true` | 581 | | `defaultMountOptions` | `mountOptions` | Options that will be passed to mount command (-o ) | `ro,flock` | 582 | | - | `minProjectId` | Minimum project ID number for automatic generation. Only used when projectId is not provided. | 10000 | 583 | | - | `maxProjectId` | Maximum project ID number for automatic generation. Only used when projectId is not provided. | 4294967295 | 584 | | - | `generateProjectIdRetries` | Maximum retry count for generating random project ID. Only used when projectId is not provided. | `5` | 585 | | `zone` | `zone` | Topology zone to control where the volume should be created. Should match topology.exa.csi.ddn.com/zone label on node(s). | `us-west` | 586 | | `v1xCompatible` | - | [Optional] Only used when upgrading the driver from v1.x.x to v2.x.x. Provides compatibility for volumes that were created beore the upgrade. Set it to `true` to point to the Exa cluster that was configured before the upgrade | `false` | 587 | |`tempMountPoint` | `tempMountPoint` | [Optional] Used when `exaFS` points to a subdirectory that does not exist on Exascaler and will be automatically created by the driver. This parameter sets the directory where Exascaler filesystem will be temporarily mounted to create the subdirectory. | `/tmp/exafs-mnt` | 588 | | `volumeDirPermissions` | `volumeDirPermissions` | [Optional] Defines file permissions for mounted volumes. | `0777` | 589 | | `hotNodes` | `hotNodes` | Determines whether `HotNodes` feature should be used. This feature can only be used by the driver when Hot Nodes (PCC) service is disabled and not used manually on the kubernetes workers. | `false` | 590 | | `pccCache` | `pccCache` | Directory for cached files of the file system. 
Note that lpcc does not recognize directories with a 591 | trailing slash (“/” at the end). | `/csi-pcc` | 592 | | `pccAutocache` | `pccAutocache` | Condition for automatic file attachment (caching) | `projid={500}` | 593 | | `pccPurgeHighUsage` | `pccPurgeHighUsage` | If the disk usage of cache device is higher than high_usage, start detaching the files. Defaults to 90 (90% disk/inode usage). | `90` | 594 | | `pccPurgeLowUsage` | `pccPurgeLowUsage` | If the disk usage of cache device is lower than low_usage, stop detaching the files. Defaults to 75 (75% disk/inode usage). | `70` | 595 | | `pccPurgeScanThreads` | `pccPurgeScanThreads` | Threads to use for scanning cache device in parallel. Defaults to 1. | `1` | 596 | | `pccPurgeInterval` | `pccPurgeInterval` | Interval for lpcc_purge to check cache device usage, in seconds. Defaults to 30. | `30` | 597 | | `pccPurgeLogLevel` | `pccPurgeLogLevel` | Log level for lpcc_purge: either “fatal”, “error”, “warn”, “normal”, “info” (default), or “debug”. | `info` | 598 | | `pccPurgeForceScanInterval` | `pccPurgeForceScanInterval` | Scan PCC backends forcefully after this number of seconds to refresh statistic data. | `30` | 599 | | - | `compression` | Algorithm ["lz4", "gzip", "lzo"] to use for data compression. 
Defaults to `false` (no compression). | `false` | 600 | | - | `configName` | Config entry name to use from the config map | `exa1` | 601 | 602 | #### _PersistentVolumeClaim_ (pointing to created _PersistentVolume_) 603 | 604 | ```yaml 605 | apiVersion: v1 606 | kind: PersistentVolumeClaim 607 | metadata: 608 | name: exascaler-csi-driver-pvc-nginx-persistent 609 | spec: 610 | storageClassName: exascaler-csi-driver-cs-nginx-persistent 611 | accessModes: 612 | - ReadWriteMany 613 | resources: 614 | requests: 615 | storage: 1Gi 616 | selector: 617 | matchLabels: 618 | # to create a 1-1 pod-to-persistent-volume relationship, use unique labels 619 | name: exascaler-csi-file-driver-pv-nginx-persistent 620 | ``` 621 | 622 | #### Example 623 | 624 | Run an nginx server using a PersistentVolume. 625 | 626 | **Note:** A pre-configured filesystem must exist on the EXAScaler: 627 | `/exaFS/nginx-persistent`. 628 | 629 | ```bash 630 | kubectl apply -f /opt/exascaler-csi-file-driver/examples/nginx-persistent-volume.yaml 631 | 632 | # to delete this pod: 633 | kubectl delete -f /opt/exascaler-csi-file-driver/examples/nginx-persistent-volume.yaml 634 | ``` 635 | 636 | ### Snapshots 637 | To use CSI snapshots, the snapshot CRDs along with the csi-snapshotter must be installed. 638 | ```bash 639 | kubectl apply -f deploy/kubernetes/snapshots/ 640 | ``` 641 | 642 | After that, the snapshot class for EXA CSI must be created. 643 | Snapshot parameters can be passed through the snapshot class, as shown in `examples/snapshot-class.yaml`. 644 | List of available snapshot parameters: 645 | 646 | | Name | Description | Example | 647 | |----------------|-------------------------------------------------------------------|--------------------------------------| 648 | | `snapshotFolder` | [Optional] Folder on the ExaScaler filesystem where the snapshots will be created. | `csi-snapshots` | 649 | | `snapshotUtility` | [Optional] Either `tar` or `dtar`. `dtar` is faster but requires `dtar` to be installed on all k8s nodes. 
Default is `tar`. | `dtar` | 650 | | `dtarPath` | [Optional] If `snapshotUtility` is `dtar`, points to the path where the `dtar` utility is installed | `/opt/ddn/mpifileutils/bin/dtar` | 651 | | `snapshotMd5Verify` | [Optional] Defines whether the driver should perform an md5sum check on the snapshot. Ensures the snapshot is not corrupt, but reduces performance. Default is `false` | `true` | 652 | | `exaFS` | Same parameter as in the storage class. If the original volume was created using the storage class `exaFS` parameter, this MUST match the storage class value. | `10.3.3.200@tcp:/csi-fs` | 653 | | `mountPoint` | Same parameter as in the storage class. If the original volume was created using the storage class `mountPoint` parameter, this MUST match the storage class value. | `/exa-csi-mnt` | 654 | 655 | ```bash 656 | kubectl apply -f examples/snapshot-class.yaml 657 | ``` 658 | 659 | Now a snapshot can be created, for example: 660 | ```bash 661 | kubectl apply -f examples/snapshot-from-dynamic.yaml 662 | ``` 663 | 664 | We can create a volume using the snapshot: 665 | ```bash 666 | kubectl apply -f examples/nginx-from-snapshot.yaml 667 | ``` 668 | 669 | The Exascaler CSI driver supports two snapshot modes: `tar` and `dtar`. 670 | Default mode is `tar`. 671 | To enable `dtar`, set `snapshotUtility: dtar` in the config. 672 | `dtar` is much faster, but requires the mpifileutils `dtar` binary to be installed on all nodes as a prerequisite. 673 | Installing `dtar` might differ depending on your OS. 
Here is an example for Ubuntu 22: 674 | 675 | ```bash 676 | # Install build dependencies for mpifileutils 677 | cd 678 | mkdir -p /opt/ddn/mpifileutils 679 | cd /opt/ddn/mpifileutils 680 | sudo apt-get install -y cmake libarchive-dev libbz2-dev libcap-dev libssl-dev openmpi-bin libopenmpi-dev libattr1-dev 681 | 682 | export INSTALL_DIR=/opt/ddn/mpifileutils 683 | 684 | git clone https://github.com/LLNL/lwgrp.git 685 | cd lwgrp 686 | ./autogen.sh 687 | ./configure --prefix=$INSTALL_DIR 688 | make -j 16 689 | make install 690 | cd .. 691 | 692 | git clone https://github.com/LLNL/dtcmp.git 693 | cd dtcmp 694 | ./autogen.sh 695 | ./configure --prefix=$INSTALL_DIR --with-lwgrp=$INSTALL_DIR 696 | make -j 16 697 | make install 698 | cd .. 699 | 700 | git clone https://github.com/hpc/libcircle.git 701 | cd libcircle 702 | sh ./autogen.sh 703 | ./configure --prefix=$INSTALL_DIR 704 | make -j 16 705 | make install 706 | cd .. 707 | 708 | git clone https://github.com/hpc/mpifileutils.git mpifileutils-clone 709 | cd mpifileutils-clone 710 | cmake ./ -DWITH_DTCMP_PREFIX=$INSTALL_DIR -DWITH_LibCircle_PREFIX=$INSTALL_DIR -DCMAKE_INSTALL_PREFIX=$INSTALL_DIR -DENABLE_LUSTRE=ON -DENABLE_XATTRS=ON 711 | make -j 16 712 | make install 713 | cd .. 714 | ``` 715 | 716 | If you use a different `INSTALL_DIR` path, pass it in the config using the `dtarPath` parameter. 717 | 718 | 719 | ## Updating the driver version 720 | To update to a new driver version, follow these steps: 721 | 722 | 1. Remove the old driver version 723 | ```bash 724 | kubectl delete -f /opt/exascaler-csi-file-driver/deploy/kubernetes/exascaler-csi-file-driver.yaml 725 | kubectl delete secrets exascaler-csi-file-driver-config 726 | rpm -evh exa-csi-driver 727 | ``` 728 | 2. Download the new driver version (git clone or new ISO) 729 | 3. 
Copy and edit the config file 730 | ```bash 731 | cp /opt/exascaler-csi-file-driver/deploy/kubernetes/exascaler-csi-file-driver-config.yaml /etc/exascaler-csi-file-driver-v1.1/exascaler-csi-file-driver-config.yaml 732 | ``` 733 | 734 | If you are upgrading from v1.x.x to v2.x.x, the config file structure changes to a map of Exascaler clusters instead of a flat structure. To support old volumes that were created using v1.x.x, the old config should be put into the map with `v1xCompatible: true`. For example: 735 | 736 | v1.x.x config 737 | ```yaml 738 | exaFS: 10.3.196.24@tcp:/csi 739 | mountPoint: /exaFS 740 | debug: true 741 | ``` 742 | v2.x.x config with support for previously created volumes 743 | ```yaml 744 | 745 | exascaler_map: 746 | exa1: 747 | exaFS: 10.3.196.24@tcp:10.3.199.24@tcp:/csi 748 | mountPoint: /exaFS 749 | v1xCompatible: true 750 | 751 | exa2: 752 | exaFS: 10.3.1.200@tcp:10.3.2.200@tcp:10.3.3.200@tcp:/csi-fs 753 | mountPoint: /mnt2 754 | 755 | debug: true 756 | ``` 757 | 758 | Only one of the Exascaler clusters can have `v1xCompatible: true`, since the old config supported only one cluster. 759 | 760 | 4. Update the version in /opt/exascaler-csi-file-driver/deploy/kubernetes/exascaler-csi-file-driver.yaml 761 | ``` 762 | image: exascaler-csi-file-driver:v2.2.5 763 | ``` 764 | 5. Load the new image 765 | ```bash 766 | docker load -i /opt/exascaler-csi-file-driver/bin/exascaler-csi-file-driver.tar 767 | ``` 768 | 6. Apply the new driver 769 | ```bash 770 | kubectl create secret generic exascaler-csi-file-driver-config --from-file=/etc/exascaler-csi-file-driver-v1.1/exascaler-csi-file-driver-config.yaml 771 | kubectl apply -f /opt/exascaler-csi-file-driver/deploy/kubernetes/exascaler-csi-file-driver.yaml 772 | ``` 773 | 7. Verify the version 774 | ```bash 775 | kubectl get deploy/exascaler-csi-controller -o jsonpath="{..image}" 776 | ``` 777 | 778 | ## Troubleshooting 779 | ### Driver logs 780 | To collect all driver-related logs, you can use the `kubectl logs` command. 
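If the logs lack detail, verbose logging can be enabled first. A minimal config fragment (the `debug` flag is the same one shown in the config examples above; it sits at the top level of the driver config, next to `exascaler_map`):

```yaml
# Driver config fragment: enable verbose (debug) logging.
# The existing exascaler_map entries stay as configured.
debug: true
```

After re-creating the `exascaler-csi-file-driver-config` secret with the updated file, the driver pods may need to be restarted to pick up the change.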
781 | All in one command: 782 | ```bash 783 | mkdir exa-csi-logs 784 | for name in $(kubectl get pod -owide | grep exascaler | awk '{print $1}'); do kubectl logs $name --all-containers > exa-csi-logs/$name; done 785 | ``` 786 | 787 | To get logs from all containers of a single pod: 788 | ```bash 789 | kubectl logs <pod-name> --all-containers 790 | ``` 791 | 792 | Logs from a single container of a pod: 793 | ```bash 794 | kubectl logs <pod-name> -c driver 795 | ``` 796 | 797 | #### Driver secret 798 | ```bash 799 | kubectl get secret exascaler-csi-file-driver-config -o json | jq '.data | map_values(@base64d)' > exa-csi-logs/exascaler-csi-file-driver-config 800 | ``` 801 | 802 | #### PV/PVC/Pod data 803 | ```bash 804 | kubectl get pvc > exa-csi-logs/pvcs 805 | kubectl get pv > exa-csi-logs/pvs 806 | kubectl get pod > exa-csi-logs/pods 807 | ``` 808 | 809 | #### Extended info about a PV/PVC/Pod: 810 | ```bash 811 | kubectl describe pvc <pvc-name> > exa-csi-logs/<pvc-name> 812 | kubectl describe pv <pv-name> > exa-csi-logs/<pv-name> 813 | kubectl describe pod <pod-name> > exa-csi-logs/<pod-name> 814 | ``` 815 | -------------------------------------------------------------------------------- /bin/exa-csi-metrics-exporter.tar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DDNStorage/exa-csi-driver/029ca41423c889ca6aeb5ec26cfbd7a9302b65e3/bin/exa-csi-metrics-exporter.tar -------------------------------------------------------------------------------- /bin/exascaler-csi-file-driver: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/DDNStorage/exa-csi-driver/029ca41423c889ca6aeb5ec26cfbd7a9302b65e3/bin/exascaler-csi-file-driver -------------------------------------------------------------------------------- /bin/exascaler-csi-file-driver.tar: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/DDNStorage/exa-csi-driver/029ca41423c889ca6aeb5ec26cfbd7a9302b65e3/bin/exascaler-csi-file-driver.tar -------------------------------------------------------------------------------- /deploy/helm-chart/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: exa-csi 3 | description: A Helm chart for ExaScaler CSI file driver. 4 | 5 | # A chart can be either an 'application' or a 'library' chart. 6 | # 7 | # Application charts are a collection of templates that can be packaged into versioned archives 8 | # to be deployed. 9 | # 10 | # Library charts provide useful utilities or functions for the chart developer. They're included as 11 | # a dependency of application charts to inject those utilities and functions into the rendering 12 | # pipeline. Library charts do not define any templates and therefore cannot be deployed. 13 | type: application 14 | 15 | # This is the chart version. This version number should be incremented each time you make changes 16 | # to the chart and its templates, including the app version. 17 | # Versions are expected to follow Semantic Versioning (https://semver.org/) 18 | version: 2.3.5 19 | 20 | # This is the version number of the application being deployed. This version number should be 21 | # incremented each time you make changes to the application. Versions are not expected to 22 | # follow Semantic Versioning. They should reflect the version the application is using. 23 | # It is recommended to use it with quotes. 24 | appVersion: 2.3.5 25 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/config.yaml: -------------------------------------------------------------------------------- 1 | # This secret is used to set the initial credentials of the node container. 
2 | apiVersion: v1 3 | kind: Secret 4 | metadata: 5 | name: {{ .Values.secretName }} 6 | namespace: {{ .Release.Namespace }} 7 | type: "Opaque" 8 | data: 9 | exascaler-csi-file-driver-config.yaml: {{ .Values.config | toYaml | b64enc }} 10 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/controller-driver.yaml: -------------------------------------------------------------------------------- 1 | # ---------------------- 2 | # Exascaler CSI Driver 3 | # ---------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: CSIDriver 7 | metadata: 8 | name: exa.csi.ddn.com 9 | spec: 10 | attachRequired: false 11 | podInfoOnMount: false 12 | fsGroupPolicy: File 13 | --- 14 | 15 | # --------------------------------- 16 | # Exascaler CSI Controller Server 17 | # --------------------------------- 18 | # 19 | # Runs single driver controller server (driver + provisioner + attacher + snapshotter) on one of the nodes 20 | # 21 | 22 | apiVersion: v1 23 | kind: ServiceAccount 24 | metadata: 25 | name: exascaler-csi-controller-service-account 26 | namespace: {{ .Release.Namespace }} 27 | --- 28 | 29 | kind: ClusterRole 30 | apiVersion: rbac.authorization.k8s.io/v1 31 | metadata: 32 | name: exascaler-csi-controller-cluster-role 33 | rules: 34 | - apiGroups: [''] 35 | resources: ['secrets'] 36 | verbs: ['get', 'list', "watch"] 37 | - apiGroups: [''] 38 | resources: ['persistentvolumes'] 39 | verbs: ['get', 'list', 'watch', 'create', 'update', 'delete'] # "update" for attacher 40 | - apiGroups: [''] 41 | resources: ['persistentvolumeclaims'] 42 | verbs: ['get', 'list', 'watch', 'update'] 43 | - apiGroups: ['storage.k8s.io'] 44 | resources: ['storageclasses'] 45 | verbs: ['get', 'list', 'watch'] 46 | - apiGroups: [''] 47 | resources: ['events'] 48 | verbs: ['list', 'watch', 'create', 'update', 'patch'] 49 | - apiGroups: ['snapshot.storage.k8s.io'] 50 | resources: ['volumesnapshots'] 51 | verbs: ['get', 'list'] 52 | - 
apiGroups: ['snapshot.storage.k8s.io'] 53 | resources: ['volumesnapshotcontents'] 54 | verbs: ['get', 'list'] 55 | # attacher specific 56 | - apiGroups: [''] 57 | resources: ['nodes', 'pods'] 58 | verbs: ['get', 'list', 'watch'] 59 | - apiGroups: ['csi.storage.k8s.io'] 60 | resources: ['csinodeinfos'] 61 | verbs: ['get', 'list', 'watch'] 62 | - apiGroups: ['storage.k8s.io'] 63 | resources: ['volumeattachments'] 64 | verbs: ['get', 'list', 'watch', 'update'] 65 | # snapshotter specific 66 | - apiGroups: ['snapshot.storage.k8s.io'] 67 | resources: ['volumesnapshotclasses'] 68 | verbs: ['get', 'list', 'watch'] 69 | - apiGroups: ['snapshot.storage.k8s.io'] 70 | resources: ['volumesnapshotcontents'] 71 | verbs: ['create', 'get', 'list', 'watch', 'update', 'delete', 'patch'] 72 | - apiGroups: ['snapshot.storage.k8s.io'] 73 | resources: ['volumesnapshots'] 74 | verbs: ['get', 'list', 'watch', 'update'] 75 | - apiGroups: ["snapshot.storage.k8s.io"] 76 | resources: ["volumesnapshots/status"] 77 | verbs: ["update"] 78 | - apiGroups: ["snapshot.storage.k8s.io"] 79 | resources: ["volumesnapshotcontents/status"] 80 | verbs: ["update"] 81 | - apiGroups: ['apiextensions.k8s.io'] 82 | resources: ['customresourcedefinitions'] 83 | verbs: ['create', 'list', 'watch', 'delete'] 84 | - apiGroups: [""] 85 | resources: ["persistentvolumeclaims/status"] 86 | verbs: ["update", "patch"] 87 | # CSINode specific 88 | - apiGroups: ["storage.k8s.io"] 89 | resources: ["csinodes"] 90 | verbs: ["watch", "list", "get"] 91 | --- 92 | 93 | kind: ClusterRoleBinding 94 | apiVersion: rbac.authorization.k8s.io/v1 95 | metadata: 96 | name: exascaler-csi-controller-cluster-role-binding 97 | subjects: 98 | - kind: ServiceAccount 99 | name: exascaler-csi-controller-service-account 100 | namespace: {{ .Release.Namespace }} 101 | roleRef: 102 | kind: ClusterRole 103 | name: exascaler-csi-controller-cluster-role 104 | apiGroup: rbac.authorization.k8s.io 105 | --- 106 | 107 | # External Resizer 108 | kind: 
ClusterRole 109 | apiVersion: rbac.authorization.k8s.io/v1 110 | metadata: 111 | name: csi-resizer-role 112 | rules: 113 | # The following rule should be uncommented for plugins that require secrets 114 | # for provisioning. 115 | - apiGroups: [""] 116 | resources: ["secrets"] 117 | verbs: ["get", "list", "watch"] 118 | - apiGroups: [""] 119 | resources: ["persistentvolumes"] 120 | verbs: ["get", "list", "watch", "update", "patch"] 121 | - apiGroups: [""] 122 | resources: ["persistentvolumeclaims"] 123 | verbs: ["get", "list", "watch"] 124 | - apiGroups: [""] 125 | resources: ["persistentvolumeclaims/status"] 126 | verbs: ["update", "patch"] 127 | - apiGroups: ["storage.k8s.io"] 128 | resources: ["storageclasses"] 129 | verbs: ["get", "list", "watch"] 130 | - apiGroups: [""] 131 | resources: ["events"] 132 | verbs: ["list", "watch", "create", "update", "patch"] 133 | 134 | --- 135 | kind: ClusterRoleBinding 136 | apiVersion: rbac.authorization.k8s.io/v1 137 | metadata: 138 | name: csi-resizer-binding 139 | subjects: 140 | - kind: ServiceAccount 141 | name: exascaler-csi-controller-service-account 142 | namespace: {{ .Release.Namespace }} 143 | roleRef: 144 | kind: ClusterRole 145 | name: csi-resizer-role 146 | apiGroup: rbac.authorization.k8s.io 147 | 148 | --- 149 | kind: Role 150 | apiVersion: rbac.authorization.k8s.io/v1 151 | metadata: 152 | namespace: {{ .Release.Namespace }} 153 | name: external-resizer-cfg 154 | rules: 155 | - apiGroups: ["coordination.k8s.io"] 156 | resources: ["leases"] 157 | verbs: ["get", "watch", "list", "delete", "update", "create"] 158 | 159 | --- 160 | kind: RoleBinding 161 | apiVersion: rbac.authorization.k8s.io/v1 162 | metadata: 163 | name: csi-resizer-role-cfg 164 | namespace: {{ .Release.Namespace }} 165 | subjects: 166 | - kind: ServiceAccount 167 | name: exascaler-csi-controller-service-account 168 | namespace: {{ .Release.Namespace }} 169 | roleRef: 170 | kind: Role 171 | name: external-resizer-cfg 172 | apiGroup: 
rbac.authorization.k8s.io 173 | --- 174 | 175 | kind: Service 176 | apiVersion: v1 177 | metadata: 178 | name: exascaler-csi-controller-service 179 | labels: 180 | app: exascaler-csi-controller 181 | spec: 182 | selector: 183 | app: exascaler-csi-controller 184 | ports: 185 | - name: dummy 186 | port: 12345 187 | --- 188 | 189 | kind: Deployment 190 | apiVersion: apps/v1 191 | metadata: 192 | name: exascaler-csi-controller 193 | namespace: {{ .Release.Namespace }} 194 | spec: 195 | # serviceName: exascaler-csi-controller-service 196 | selector: 197 | matchLabels: 198 | app: exascaler-csi-controller # has to match .spec.template.metadata.labels 199 | template: 200 | metadata: 201 | labels: 202 | app: exascaler-csi-controller 203 | spec: 204 | serviceAccount: exascaler-csi-controller-service-account 205 | priorityClassName: {{ .Values.priorityClassName }} 206 | containers: 207 | # csi-provisioner: sidecar container that watches Kubernetes PersistentVolumeClaim objects 208 | # and triggers CreateVolume/DeleteVolume against a CSI endpoint 209 | - name: csi-provisioner 210 | resources: {{ .Values.resources | default .Values.provisioner.resources | toYaml | nindent 12 }} 211 | image: {{ .Values.provisioner.repository }}:{{ .Values.provisioner.tag }} 212 | imagePullPolicy: {{ .Values.provisioner.pullPolicy }} 213 | args: 214 | - --csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock 215 | - --volume-name-prefix={{ .Values.volumeNamePrefix }} 216 | - --strict-topology 217 | - --immediate-topology=false 218 | - --feature-gates=Topology=true 219 | - --timeout={{ .Values.provisioner.timeout }} 220 | env: 221 | - name: GOMEMLIMIT 222 | valueFrom: 223 | resourceFieldRef: 224 | divisor: "0" 225 | resource: limits.memory 226 | - name: GOMAXPROCS 227 | valueFrom: 228 | resourceFieldRef: 229 | divisor: "0" 230 | resource: limits.cpu 231 | volumeMounts: 232 | - name: socket-dir 233 | mountPath: /var/lib/csi/sockets/pluginproxy 234 | - name: csi-attacher 235 | resources: {{ 
.Values.resources | default .Values.attacher.resources | toYaml | nindent 12 }} 236 | image: {{ .Values.attacher.repository }}:{{ .Values.attacher.tag }} 237 | imagePullPolicy: {{ .Values.attacher.pullPolicy }} 238 | args: 239 | - --csi-address=$(ADDRESS) 240 | - --v=2 241 | - --leader-election=true 242 | env: 243 | - name: ADDRESS 244 | value: /var/lib/csi/sockets/pluginproxy/csi.sock 245 | - name: GOMEMLIMIT 246 | valueFrom: 247 | resourceFieldRef: 248 | divisor: "0" 249 | resource: limits.memory 250 | - name: GOMAXPROCS 251 | valueFrom: 252 | resourceFieldRef: 253 | divisor: "0" 254 | resource: limits.cpu 255 | volumeMounts: 256 | - name: socket-dir 257 | mountPath: /var/lib/csi/sockets/pluginproxy/ 258 | - name: csi-snapshotter 259 | resources: {{ .Values.resources | default .Values.csi_snapshotter.resources | toYaml | nindent 12 }} 260 | image: {{ .Values.csi_snapshotter.repository }}:{{ .Values.csi_snapshotter.tag }} 261 | imagePullPolicy: {{ .Values.csi_snapshotter.pullPolicy }} 262 | args: 263 | - -v=3 264 | - --csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock 265 | - --timeout={{ .Values.csi_snapshotter.timeout }} 266 | volumeMounts: 267 | - name: socket-dir 268 | mountPath: /var/lib/csi/sockets/pluginproxy 269 | env: 270 | - name: GOMEMLIMIT 271 | valueFrom: 272 | resourceFieldRef: 273 | divisor: "0" 274 | resource: limits.memory 275 | - name: GOMAXPROCS 276 | valueFrom: 277 | resourceFieldRef: 278 | divisor: "0" 279 | resource: limits.cpu 280 | - name: csi-resizer 281 | resources: {{ .Values.resources | default .Values.resizer.resources | toYaml | nindent 12 }} 282 | image: {{ .Values.resizer.repository }}:{{ .Values.resizer.tag }} 283 | imagePullPolicy: {{ .Values.resizer.pullPolicy }} 284 | args: 285 | - "--csi-address=$(ADDRESS)" 286 | env: 287 | - name: ADDRESS 288 | value: /var/lib/csi/sockets/pluginproxy/csi.sock 289 | - name: GOMEMLIMIT 290 | valueFrom: 291 | resourceFieldRef: 292 | divisor: "0" 293 | resource: limits.memory 294 | - name: 
GOMAXPROCS 295 | valueFrom: 296 | resourceFieldRef: 297 | divisor: "0" 298 | resource: limits.cpu 299 | volumeMounts: 300 | - name: socket-dir 301 | mountPath: /var/lib/csi/sockets/pluginproxy/ 302 | - name: driver 303 | resources: {{ .Values.resources | default .Values.image.resources | toYaml | nindent 12 }} 304 | securityContext: 305 | privileged: true 306 | capabilities: 307 | add: ['SYS_ADMIN'] 308 | allowPrivilegeEscalation: true 309 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 310 | imagePullPolicy: "{{ .Values.image.pullPolicy }}" 311 | args: 312 | - --nodeid=$(KUBE_NODE_NAME) 313 | - --endpoint=unix://csi/csi.sock 314 | - --role=controller 315 | env: 316 | - name: KUBE_NODE_NAME 317 | valueFrom: 318 | fieldRef: 319 | fieldPath: spec.nodeName 320 | - name: GOMEMLIMIT 321 | valueFrom: 322 | resourceFieldRef: 323 | divisor: "0" 324 | resource: limits.memory 325 | - name: GOMAXPROCS 326 | valueFrom: 327 | resourceFieldRef: 328 | divisor: "0" 329 | resource: limits.cpu 330 | volumeMounts: 331 | - name: socket-dir 332 | mountPath: /csi 333 | - name: secret 334 | mountPath: /config 335 | readOnly: true 336 | - name: host 337 | mountPath: /host 338 | mountPropagation: Bidirectional 339 | volumes: 340 | - name: socket-dir 341 | emptyDir: 342 | - name: secret 343 | secret: 344 | secretName: {{ .Values.secretName }} 345 | - name: host 346 | hostPath: 347 | path: / 348 | type: Directory 349 | --- 350 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/metrics/crb.yaml: -------------------------------------------------------------------------------- 1 | {{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }} 2 | 3 | {{- if .Values.metrics.prometheus.createClusterRole }} 4 | apiVersion: rbac.authorization.k8s.io/v1 5 | kind: ClusterRole 6 | metadata: 7 | name: {{ .Values.metrics.prometheus.createClustrerRoleName }} 8 | rules: 9 | - apiGroups: [""] 10 | resources: 11 | - 
pods 12 | - pods/metrics 13 | - pods/status 14 | - services 15 | - endpoints 16 | - nodes 17 | - nodes/metrics 18 | - nodes/proxy 19 | - nodes/stats 20 | - namespaces 21 | - configmaps 22 | - secrets 23 | - serviceaccounts 24 | - events 25 | verbs: ["get", "list", "watch"] 26 | - apiGroups: ["apps"] 27 | resources: 28 | - deployments 29 | - daemonsets 30 | - replicasets 31 | - statefulsets 32 | verbs: ["get", "list", "watch"] 33 | - apiGroups: ["monitoring.coreos.com"] 34 | resources: 35 | - servicemonitors 36 | - prometheusrules 37 | - podmonitors 38 | - alerts 39 | - alertmanagers 40 | - prometheuses 41 | - thanosrulers 42 | - probes 43 | verbs: ["get", "list", "watch"] 44 | - apiGroups: ["networking.k8s.io"] 45 | resources: 46 | - ingresses 47 | - networkpolicies 48 | verbs: ["get", "list", "watch"] 49 | 50 | {{- end }} 51 | 52 | --- 53 | apiVersion: rbac.authorization.k8s.io/v1 54 | kind: ClusterRoleBinding 55 | metadata: 56 | name: exa-prometheus-clustre-role-binding 57 | subjects: 58 | - kind: ServiceAccount 59 | name: {{ .Values.metrics.prometheus.serviceAccountName }} 60 | namespace: {{ .Values.metrics.serviceMonitor.namespace }} 61 | roleRef: 62 | kind: ClusterRole 63 | {{- if .Values.metrics.prometheus.createClusterRole }} 64 | name: {{ .Values.metrics.prometheus.createClustrerRoleName }} 65 | {{- else }} 66 | name: {{ .Values.metrics.prometheus.clusterRoleName }} 67 | {{- end }} 68 | apiGroup: rbac.authorization.k8s.io 69 | {{- end }} 70 | 71 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/metrics/daemonset.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.metrics.enabled }} 2 | apiVersion: apps/v1 3 | kind: DaemonSet 4 | metadata: 5 | name: exa-csi-metrics-exporter 6 | namespace: {{ .Values.metrics.exporter.namespace }} 7 | spec: 8 | selector: 9 | matchLabels: 10 | app: exa-csi-metrics-exporter 11 | template: 12 | metadata: 13 | 
labels: 14 | app: exa-csi-metrics-exporter 15 | spec: 16 | serviceAccountName: exa-csi-metrics-sa 17 | automountServiceAccountToken: true # Ensure token is mounted 18 | containers: 19 | - name: exporter 20 | imagePullPolicy: {{ .Values.metrics.exporter.pullPolicy }} 21 | image: {{ .Values.metrics.exporter.repository }}:{{ .Values.metrics.exporter.tag }} 22 | securityContext: 23 | privileged: true 24 | capabilities: 25 | add: ['SYS_ADMIN'] 26 | allowPrivilegeEscalation: true 27 | ports: 28 | - containerPort: {{ .Values.metrics.exporter.containerPort }} 29 | name: http 30 | hostPort: {{ .Values.metrics.exporter.servicePort }} 31 | env: 32 | - name: COLLECT_INTERVAL 33 | value: "{{ .Values.metrics.exporter.collectInterval }}" 34 | - name: LOG_LEVEL 35 | value: "{{ .Values.metrics.exporter.logLevel }}" 36 | - name: PORT 37 | value: "{{ .Values.metrics.exporter.containerPort }}" 38 | - name: POD_NAME 39 | valueFrom: 40 | fieldRef: 41 | fieldPath: metadata.name 42 | - name: POD_NAMESPACE 43 | valueFrom: 44 | fieldRef: 45 | fieldPath: metadata.namespace 46 | - name: NODE_NAME 47 | valueFrom: 48 | fieldRef: 49 | fieldPath: spec.nodeName 50 | volumeMounts: 51 | - name: secret 52 | mountPath: /config 53 | readOnly: true 54 | - name: ca-certificates 55 | mountPath: /var/run/secrets/kubernetes.io/serviceaccount 56 | readOnly: true 57 | - name: host 58 | mountPath: /host 59 | mountPropagation: Bidirectional 60 | volumes: 61 | - name: secret 62 | secret: 63 | secretName: {{ .Values.secretName }} 64 | - name: host 65 | hostPath: 66 | path: / 67 | type: Directory 68 | - name: ca-certificates 69 | projected: 70 | sources: 71 | - serviceAccountToken: 72 | path: token 73 | expirationSeconds: 3600 74 | - configMap: 75 | name: kube-root-ca.crt # Mount CA cert 76 | items: 77 | - key: ca.crt 78 | path: ca.crt 79 | 80 | {{- end }} 81 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/metrics/lease.yaml: 
-------------------------------------------------------------------------------- 1 | {{- if .Values.metrics.enabled }} 2 | apiVersion: coordination.k8s.io/v1 3 | kind: Lease 4 | metadata: 5 | name: exa-csi-metrics-exporter-leader-election 6 | namespace: {{ .Values.metrics.exporter.namespace }} 7 | spec: 8 | holderIdentity: "" 9 | leaseDurationSeconds: 15 10 | renewTime: null 11 | acquireTime: null 12 | leaseTransitions: 0 13 | {{- end }} 14 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/metrics/prometheus-rule.yaml: -------------------------------------------------------------------------------- 1 | {{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }} 2 | 3 | apiVersion: monitoring.coreos.com/v1 4 | kind: PrometheusRule 5 | metadata: 6 | name: exa-csi-rules 7 | namespace: {{ .Values.metrics.serviceMonitor.namespace }} # Ensure this is the correct namespace where Prometheus is running 8 | labels: 9 | release: {{ .Values.metrics.prometheus.releaseLabel }} 10 | spec: 11 | groups: 12 | - name: exa-storageclass-rules 13 | interval: 15s 14 | rules: 15 | - record: exa_csi_sc_capacity_bytes 16 | expr: sum(exa_csi_pvc_capacity_bytes) by (storage_class) 17 | - record: exa_csi_sc_used_bytes 18 | expr: sum(exa_csi_pvc_used_bytes) by (storage_class) 19 | - record: exa_csi_sc_available_bytes 20 | expr: sum(exa_csi_pvc_available_bytes) by (storage_class) 21 | - record: exa_csi_sc_pvc_count 22 | expr: count(exa_csi_pvc_capacity_bytes) by (storage_class) 23 | 24 | - name: exa-namespace-rules 25 | interval: 15s 26 | rules: 27 | - record: exa_csi_namespace_capacity_bytes 28 | expr: sum(exa_csi_pvc_capacity_bytes) by (exported_namespace) 29 | - record: exa_csi_namespace_used_bytes 30 | expr: sum(exa_csi_pvc_used_bytes) by (exported_namespace) 31 | - record: exa_csi_namespace_available_bytes 32 | expr: sum(exa_csi_pvc_available_bytes) by (exported_namespace) 33 | - record: 
exa_csi_namespace_pvc_count 34 | expr: count(exa_csi_pvc_capacity_bytes) by (exported_namespace) 35 | 36 | {{- end }} 37 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/metrics/rbac.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.metrics.enabled }} 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: exa-csi-metrics-sa 6 | namespace: {{ .Values.metrics.exporter.namespace }} 7 | --- 8 | apiVersion: rbac.authorization.k8s.io/v1 9 | kind: ClusterRole 10 | metadata: 11 | name: exa-csi-metrics-clusterrole 12 | rules: 13 | - apiGroups: [""] 14 | resources: ["persistentvolumes", "persistentvolumeclaims", "nodes", "pods"] 15 | verbs: ["get", "list"] 16 | - apiGroups: ["storage.k8s.io"] 17 | resources: ["storageclasses"] 18 | verbs: ["get", "list", "watch"] 19 | --- 20 | apiVersion: rbac.authorization.k8s.io/v1 21 | kind: ClusterRoleBinding 22 | metadata: 23 | name: exa-csi-metrics-clusterrole-binding 24 | subjects: 25 | - kind: ServiceAccount 26 | name: exa-csi-metrics-sa 27 | namespace: {{ .Values.metrics.exporter.namespace }} 28 | roleRef: 29 | kind: ClusterRole 30 | name: exa-csi-metrics-clusterrole 31 | apiGroup: rbac.authorization.k8s.io 32 | --- 33 | apiVersion: rbac.authorization.k8s.io/v1 34 | kind: Role 35 | metadata: 36 | name: exa-csi-leader-election 37 | namespace: {{ .Values.metrics.exporter.namespace }} 38 | rules: 39 | - apiGroups: ["coordination.k8s.io"] 40 | resources: ["leases"] 41 | verbs: ["get", "watch", "list", "create", "update", "patch"] 42 | --- 43 | apiVersion: rbac.authorization.k8s.io/v1 44 | kind: RoleBinding 45 | metadata: 46 | name: exa-csi-leader-election 47 | namespace: {{ .Values.metrics.exporter.namespace }} 48 | roleRef: 49 | apiGroup: rbac.authorization.k8s.io 50 | kind: Role 51 | name: exa-csi-leader-election 52 | subjects: 53 | - kind: ServiceAccount 54 | name: exa-csi-metrics-sa 55 | namespace: {{ 
.Values.metrics.exporter.namespace }} 56 | {{- end }} 57 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/metrics/service.yaml: -------------------------------------------------------------------------------- 1 | {{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }} 2 | apiVersion: v1 3 | kind: Service 4 | metadata: 5 | name: exa-csi-metrics-exporter 6 | namespace: {{ .Values.metrics.exporter.namespace }} 7 | labels: 8 | app: exa-csi-metrics-exporter 9 | spec: 10 | type: ClusterIP 11 | ports: 12 | - port: {{ .Values.metrics.exporter.servicePort }} 13 | targetPort: http 14 | protocol: TCP 15 | name: http 16 | selector: 17 | app: exa-csi-metrics-exporter 18 | {{- end }} 19 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/metrics/servicemonitor.yaml: -------------------------------------------------------------------------------- 1 | {{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }} 2 | apiVersion: monitoring.coreos.com/v1 3 | kind: ServiceMonitor 4 | metadata: 5 | name: exa-csi-metrics-exporter 6 | namespace: {{ .Values.metrics.serviceMonitor.namespace }} 7 | labels: 8 | release: {{ .Values.metrics.prometheus.releaseLabel }} 9 | spec: 10 | endpoints: 11 | - interval: {{ .Values.metrics.serviceMonitor.interval }} 12 | path: /metrics 13 | port: http # Ensure this matches exporter port http 14 | scheme: http 15 | namespaceSelector: 16 | matchNames: 17 | - {{ .Values.metrics.exporter.namespace }} 18 | selector: 19 | matchLabels: 20 | app: exa-csi-metrics-exporter 21 | {{- end }} 22 | -------------------------------------------------------------------------------- /deploy/helm-chart/templates/node-driver.yaml: -------------------------------------------------------------------------------- 1 | 2 | # --------------------------- 3 | # exascaler CSI Node Server 4 | # --------------------------- 5 | # 6 
| # Runs driver node server (driver + registrar) on each node 7 | # 8 | 9 | apiVersion: v1 10 | kind: ServiceAccount 11 | metadata: 12 | name: exascaler-csi-node-service-account 13 | namespace: {{ .Release.Namespace }} 14 | --- 15 | 16 | kind: ClusterRole 17 | apiVersion: rbac.authorization.k8s.io/v1 18 | metadata: 19 | name: exascaler-csi-node-cluster-role 20 | rules: 21 | - apiGroups: [''] 22 | resources: ['events'] 23 | verbs: ['get', 'list', 'watch', 'create', 'update', 'patch'] 24 | - apiGroups: [''] 25 | resources: ['nodes'] 26 | verbs: ['get'] 27 | --- 28 | 29 | kind: ClusterRoleBinding 30 | apiVersion: rbac.authorization.k8s.io/v1 31 | metadata: 32 | name: exascaler-csi-node-cluster-role-binding 33 | subjects: 34 | - kind: ServiceAccount 35 | name: exascaler-csi-node-service-account 36 | namespace: {{ .Release.Namespace }} 37 | roleRef: 38 | kind: ClusterRole 39 | name: exascaler-csi-node-cluster-role 40 | apiGroup: rbac.authorization.k8s.io 41 | --- 42 | 43 | # exascaler Node Server as a daemon 44 | 45 | kind: DaemonSet 46 | apiVersion: apps/v1 47 | metadata: 48 | name: exascaler-csi-node 49 | namespace: {{ .Release.Namespace }} 50 | spec: 51 | selector: 52 | matchLabels: 53 | app: exascaler-csi-node 54 | template: 55 | metadata: 56 | labels: 57 | app: exascaler-csi-node 58 | spec: 59 | serviceAccount: exascaler-csi-node-service-account 60 | priorityClassName: {{ .Values.priorityClassName }} 61 | hostNetwork: true 62 | containers: 63 | # driver-registrar: sidecar container that: 64 | # 1) registers the CSI driver with kubelet 65 | # 2) adds the driver's custom NodeId to a label on the Kubernetes Node API Object 66 | - name: driver-registrar 67 | resources: {{ .Values.resources | default .Values.registrar.resources | toYaml | nindent 12 }} 68 | image: {{ .Values.registrar.repository }}:{{ .Values.registrar.tag }} 69 | imagePullPolicy: {{ .Values.registrar.pullPolicy }} 70 | args: 71 | - --v=5 72 | - --csi-address=/csi/csi.sock 73 | -
--kubelet-registration-path=/var/lib/kubelet/plugins/exa.csi.ddn.com/csi.sock 74 | livenessProbe: 75 | exec: 76 | command: 77 | - /csi-node-driver-registrar 78 | - --kubelet-registration-path=/var/lib/kubelet/plugins/exa.csi.ddn.com/csi.sock 79 | - --mode=kubelet-registration-probe 80 | initialDelaySeconds: 30 81 | timeoutSeconds: 15 82 | env: 83 | - name: KUBE_NODE_NAME 84 | valueFrom: 85 | fieldRef: 86 | fieldPath: spec.nodeName 87 | - name: GOMEMLIMIT 88 | valueFrom: 89 | resourceFieldRef: 90 | divisor: "0" 91 | resource: limits.memory 92 | - name: GOMAXPROCS 93 | valueFrom: 94 | resourceFieldRef: 95 | divisor: "0" 96 | resource: limits.cpu 97 | volumeMounts: 98 | - name: socket-dir 99 | mountPath: /csi 100 | - name: registration-dir 101 | mountPath: /registration 102 | - name: driver 103 | resources: {{ .Values.resources | default .Values.image.resources | toYaml | nindent 12 }} 104 | securityContext: 105 | privileged: true 106 | capabilities: 107 | add: ['SYS_ADMIN'] 108 | allowPrivilegeEscalation: true 109 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 110 | imagePullPolicy: "{{ .Values.image.pullPolicy }}" 111 | args: 112 | - --nodeid=$(KUBE_NODE_NAME) 113 | - --endpoint=unix://csi/csi.sock 114 | - --role=node 115 | env: 116 | - name: KUBE_NODE_NAME 117 | valueFrom: 118 | fieldRef: 119 | fieldPath: spec.nodeName 120 | - name: GOMEMLIMIT 121 | valueFrom: 122 | resourceFieldRef: 123 | divisor: "0" 124 | resource: limits.memory 125 | - name: GOMAXPROCS 126 | valueFrom: 127 | resourceFieldRef: 128 | divisor: "0" 129 | resource: limits.cpu 130 | volumeMounts: 131 | - name: socket-dir 132 | mountPath: /csi 133 | - name: secret 134 | mountPath: /config 135 | - name: host 136 | mountPath: /host 137 | mountPropagation: Bidirectional 138 | - name: pods-mount-dir 139 | mountPath: /var/lib/kubelet/pods 140 | mountPropagation: Bidirectional 141 | volumes: 142 | - name: socket-dir 143 | hostPath: 144 | path: /var/lib/kubelet/plugins/exa.csi.ddn.com 145 
| type: DirectoryOrCreate 146 | - name: registration-dir 147 | hostPath: 148 | path: /var/lib/kubelet/plugins_registry/ 149 | type: Directory 150 | - name: pods-mount-dir 151 | hostPath: 152 | path: /var/lib/kubelet/pods 153 | type: Directory 154 | - name: host 155 | hostPath: 156 | path: / 157 | type: Directory 158 | - name: secret 159 | secret: 160 | secretName: {{ .Values.secretName }} 161 | --- 162 | -------------------------------------------------------------------------------- /deploy/helm-chart/values.yaml: -------------------------------------------------------------------------------- 1 | # Default values for exa-csi. 2 | # This is a YAML-formatted file. 3 | # Declare variables to be passed into your templates. 4 | 5 | volumeNamePrefix: "pvc-exa" 6 | 7 | image: 8 | repository: "quay.io/ddn/exascaler-csi-file-driver" 9 | tag: v2.3.5 10 | pullPolicy: "Always" 11 | # Overrides the image tag whose default is the chart appVersion. 12 | resources: 13 | 14 | registrar: 15 | repository: registry.k8s.io/sig-storage/csi-node-driver-registrar 16 | tag: v2.10.1 17 | pullPolicy: IfNotPresent 18 | resources: 19 | 20 | provisioner: 21 | repository: registry.k8s.io/sig-storage/csi-provisioner 22 | tag: v4.0.1 23 | pullPolicy: IfNotPresent 24 | timeout: 120m 25 | resources: 26 | 27 | attacher: 28 | repository: registry.k8s.io/sig-storage/csi-attacher 29 | tag: v4.5.1 30 | pullPolicy: IfNotPresent 31 | resources: 32 | 33 | resizer: 34 | repository: registry.k8s.io/sig-storage/csi-resizer 35 | tag: v1.10.1 36 | pullPolicy: IfNotPresent 37 | resources: 38 | 39 | csi_snapshotter: 40 | repository: registry.k8s.io/sig-storage/csi-snapshotter 41 | tag: v5.0.1 42 | pullPolicy: IfNotPresent 43 | timeout: 120m 44 | resources: 45 | 46 | resources: 47 | limits: 48 | cpu: 8 49 | memory: 512Mi 50 | requests: 51 | cpu: 2 52 | memory: 256Mi 53 | 54 | priorityClassName: system-cluster-critical 55 | 56 | secretName: exascaler-csi-file-driver-config 57 | 58 | # exascaler driver specific 
config 59 | config: 60 | exascaler_map: 61 | exa1: 62 | mountPoint: /exaFS # mountpoint on the host where the exaFS will be mounted 63 | exaFS: 10.204.86.217@tcp:/testfs # default path to exa filesystem 64 | zone: zone-1 65 | # exa2: 66 | # mountPoint: /exaFS-zone-2 # mountpoint on the host where the exaFS will be mounted 67 | # exaFS: 10.204.86.114@tcp:/testfs/zone-2 # default path to exa filesystem 68 | # zone: zone-2 69 | # exa3: 70 | # mountPoint: /exaFS-zone-3 # mountpoint on the host where the exaFS will be mounted 71 | # exaFS: 10.204.86.114@tcp:/testfs/zone-3 # default path to exa filesystem 72 | # zone: zone-3 73 | debug: true 74 | openshift: false 75 | 76 | # metrics exporter 77 | metrics: 78 | enabled: false 79 | exporter: 80 | repository: quay.io/ddn/exa-csi-metrics-exporter 81 | tag: v2.3.5 82 | pullPolicy: Always 83 | containerPort: "9200" 84 | servicePort: "32666" 85 | namespace: default 86 | logLevel: info 87 | collectInterval: 30 88 | serviceMonitor: 89 | enabled: false # set to true if using local prometheus operator 90 | interval: 30s 91 | namespace: monitoring 92 | prometheus: # prometheus configuration if using prometheus operator 93 | releaseLabel: prometheus # release label for prometheus operator 94 | createClusterRole: false # set to false if using existing ClusterRole 95 | createClustrerRoleName: exa-prometheus-cluster-role # Name of ClusterRole to create, this name will be used to bind cluster role to prometheus service account 96 | clusterRoleName: cluster-admin # Name of ClusterRole to bind to prometheus service account if createClusterRole is set to false otherwise `createClustrerRoleName` will be used 97 | serviceAccountName: prometheus-kube-prometheus-prometheus # Service account name of prometheus 98 | -------------------------------------------------------------------------------- /deploy/kubernetes/exascaler-csi-file-driver-config.yaml: -------------------------------------------------------------------------------- 1 | # 
exascaler-csi-file-driver config file to create k8s secret 2 | # 3 | # $ kubectl create secret generic exascaler-csi-file-driver-config \ 4 | # --from-file=deploy/kubernetes/exascaler-csi-file-driver-config.yaml 5 | # 6 | 7 | exascaler_map: 8 | exa1: 9 | mountPoint: /exaFS # mountpoint on the host where the exaFS will be mounted 10 | exaFS: 10.204.86.114@tcp:/testfs # default path to exa filesystem 11 | zone: zone-1 12 | v1xCompatible: true # Optional. Can only be true for one of the Exa clusters 13 | volumeDirPermissions: 0777 # Optional. Defines file permissions for mounted volumes. 14 | 15 | exa2: 16 | mountPoint: /exaFS-zone-2 # mountpoint on the host where the exaFS will be mounted 17 | exaFS: 10.204.86.114@tcp:/testfs/zone-2 # default path to exa filesystem 18 | zone: zone-2 19 | 20 | exa3: 21 | mountPoint: /exaFS-zone-3 # mountpoint on the host where the exaFS will be mounted 22 | exaFS: 10.204.86.114@tcp:/testfs/zone-3 # default path to exa filesystem 23 | zone: zone-3 24 | 25 | debug: true # more logs 26 | -------------------------------------------------------------------------------- /deploy/kubernetes/exascaler-csi-file-driver.yaml: -------------------------------------------------------------------------------- 1 | # Exascaler CSI Driver (2.2.4) 2 | # 3 | # This driver version works with Kubernetes version >=1.14 4 | # 5 | # In production, each CSI driver deployment has to be customized to avoid conflicts, 6 | # use non-default namespace and different names for non-namespaced entities like the ClusterRole 7 | # 8 | # Install to Kubernetes: 9 | # $ kubectl apply -f deploy/kubernetes/exascaler-csi-file-driver.yaml 10 | # 11 | 12 | 13 | # ---------------------- 14 | # Exascaler CSI Driver 15 | # ---------------------- 16 | 17 | apiVersion: storage.k8s.io/v1 18 | kind: CSIDriver 19 | metadata: 20 | name: exa.csi.ddn.com 21 | spec: 22 | attachRequired: false 23 | podInfoOnMount: false 24 | fsGroupPolicy: File 25 | --- 26 | 27 | 28 | # ---- 29 | # RBAC 30 | 
# ---- 31 | 32 | apiVersion: v1 33 | kind: ServiceAccount 34 | metadata: 35 | name: exascaler-csi-controller-service-account 36 | namespace: default # replace with non-default namespace name if needed 37 | --- 38 | 39 | kind: ClusterRole 40 | apiVersion: rbac.authorization.k8s.io/v1 41 | metadata: 42 | name: exascaler-csi-controller-cluster-role 43 | rules: 44 | - apiGroups: [''] 45 | resources: ['secrets'] 46 | verbs: ['get', 'list', "watch"] 47 | - apiGroups: [''] 48 | resources: ['persistentvolumes'] 49 | verbs: ['get', 'list', 'watch', 'create', 'update', 'delete'] # "update" for attacher 50 | - apiGroups: [''] 51 | resources: ['persistentvolumeclaims'] 52 | verbs: ['get', 'list', 'watch', 'update'] 53 | - apiGroups: ['storage.k8s.io'] 54 | resources: ['storageclasses'] 55 | verbs: ['get', 'list', 'watch'] 56 | - apiGroups: [''] 57 | resources: ['events'] 58 | verbs: ['list', 'watch', 'create', 'update', 'patch'] 59 | # attacher specific 60 | - apiGroups: [''] 61 | resources: ['nodes', 'pods'] 62 | verbs: ['get', 'list', 'watch'] 63 | - apiGroups: ['csi.storage.k8s.io'] 64 | resources: ['csinodeinfos'] 65 | verbs: ['get', 'list', 'watch'] 66 | - apiGroups: ['storage.k8s.io'] 67 | resources: ['volumeattachments'] 68 | verbs: ['get', 'list', 'watch', 'update'] 69 | - apiGroups: ['storage.k8s.io'] 70 | resources: ['volumeattachments/status'] 71 | verbs: ['get', 'list', 'watch', 'update', 'patch'] 72 | # snapshotter specific 73 | - apiGroups: ['snapshot.storage.k8s.io'] 74 | resources: ['volumesnapshotclasses'] 75 | verbs: ['get', 'list', 'watch'] 76 | - apiGroups: ['snapshot.storage.k8s.io'] 77 | resources: ['volumesnapshotcontents'] 78 | verbs: ['create', 'get', 'list', 'watch', 'update', 'delete', 'patch'] 79 | - apiGroups: ['snapshot.storage.k8s.io'] 80 | resources: ['volumesnapshots'] 81 | verbs: ['get', 'list', 'watch', 'update'] 82 | - apiGroups: ["snapshot.storage.k8s.io"] 83 | resources: ["volumesnapshots/status"] 84 | verbs: ["update"] 85 | - apiGroups: 
["snapshot.storage.k8s.io"] 86 | resources: ["volumesnapshotcontents/status"] 87 | verbs: ["update"] 88 | - apiGroups: ['apiextensions.k8s.io'] 89 | resources: ['customresourcedefinitions'] 90 | verbs: ['create', 'list', 'watch', 'delete'] 91 | - apiGroups: [""] 92 | resources: ["persistentvolumeclaims/status"] 93 | verbs: ["update", "patch"] 94 | # CSINode specific 95 | - apiGroups: ["storage.k8s.io"] 96 | resources: ["csinodes"] 97 | verbs: ["watch", "list", "get"] 98 | --- 99 | 100 | kind: ClusterRoleBinding 101 | apiVersion: rbac.authorization.k8s.io/v1 102 | metadata: 103 | name: exascaler-csi-controller-cluster-role-binding 104 | subjects: 105 | - kind: ServiceAccount 106 | name: exascaler-csi-controller-service-account 107 | namespace: default # replace with non-default namespace name if needed 108 | roleRef: 109 | kind: ClusterRole 110 | name: exascaler-csi-controller-cluster-role 111 | apiGroup: rbac.authorization.k8s.io 112 | --- 113 | 114 | # External Resizer 115 | kind: ClusterRole 116 | apiVersion: rbac.authorization.k8s.io/v1 117 | metadata: 118 | name: csi-resizer-role 119 | rules: 120 | # The following rule should be uncommented for plugins that require secrets 121 | # for provisioning. 
122 | - apiGroups: [""] 123 | resources: ["secrets"] 124 | verbs: ["get", "list", "watch"] 125 | - apiGroups: [""] 126 | resources: ["persistentvolumes"] 127 | verbs: ["get", "list", "watch", "update", "patch"] 128 | - apiGroups: [""] 129 | resources: ["persistentvolumeclaims"] 130 | verbs: ["get", "list", "watch"] 131 | - apiGroups: [""] 132 | resources: ["persistentvolumeclaims/status"] 133 | verbs: ["update", "patch"] 134 | - apiGroups: ["storage.k8s.io"] 135 | resources: ["storageclasses"] 136 | verbs: ["get", "list", "watch"] 137 | - apiGroups: [""] 138 | resources: ["events"] 139 | verbs: ["list", "watch", "create", "update", "patch"] 140 | 141 | --- 142 | kind: ClusterRoleBinding 143 | apiVersion: rbac.authorization.k8s.io/v1 144 | metadata: 145 | name: csi-resizer-binding 146 | subjects: 147 | - kind: ServiceAccount 148 | name: exascaler-csi-controller-service-account 149 | namespace: default 150 | roleRef: 151 | kind: ClusterRole 152 | name: csi-resizer-role 153 | apiGroup: rbac.authorization.k8s.io 154 | 155 | --- 156 | kind: Role 157 | apiVersion: rbac.authorization.k8s.io/v1 158 | metadata: 159 | namespace: default 160 | name: external-resizer-cfg 161 | rules: 162 | - apiGroups: ["coordination.k8s.io"] 163 | resources: ["leases"] 164 | verbs: ["get", "watch", "list", "delete", "update", "create"] 165 | 166 | --- 167 | kind: RoleBinding 168 | apiVersion: rbac.authorization.k8s.io/v1 169 | metadata: 170 | name: csi-resizer-role-cfg 171 | namespace: default 172 | subjects: 173 | - kind: ServiceAccount 174 | name: exascaler-csi-controller-service-account 175 | namespace: default 176 | roleRef: 177 | kind: Role 178 | name: external-resizer-cfg 179 | apiGroup: rbac.authorization.k8s.io 180 | --- 181 | 182 | kind: Service 183 | apiVersion: v1 184 | metadata: 185 | name: exascaler-csi-controller-service 186 | labels: 187 | app: exascaler-csi-controller 188 | spec: 189 | selector: 190 | app: exascaler-csi-controller 191 | ports: 192 | - name: dummy 193 | port: 
12345 194 | --- 195 | 196 | # --------------------------------- 197 | # Exascaler CSI Controller Server 198 | # --------------------------------- 199 | # 200 | # Runs single driver controller server (driver + provisioner + attacher + snapshotter) on one of the nodes. 201 | # Controller driver deployment does not support running multiple replicas. 202 | # 203 | kind: Deployment 204 | apiVersion: apps/v1 205 | metadata: 206 | name: exascaler-csi-controller 207 | spec: 208 | # serviceName: exascaler-csi-controller-service 209 | selector: 210 | matchLabels: 211 | app: exascaler-csi-controller # has to match .spec.template.metadata.labels 212 | template: 213 | metadata: 214 | labels: 215 | app: exascaler-csi-controller 216 | spec: 217 | serviceAccount: exascaler-csi-controller-service-account 218 | containers: 219 | # csi-provisioner: sidecar container that watches Kubernetes PersistentVolumeClaim objects 220 | # and triggers CreateVolume/DeleteVolume against a CSI endpoint 221 | - name: csi-provisioner 222 | image: k8s.gcr.io/sig-storage/csi-provisioner:v3.0.0 223 | imagePullPolicy: IfNotPresent 224 | args: 225 | - --csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock 226 | - --volume-name-prefix=pvc-exa 227 | - --strict-topology 228 | - --immediate-topology=false 229 | - --feature-gates=Topology=true 230 | - --timeout=120m 231 | volumeMounts: 232 | - name: socket-dir 233 | mountPath: /var/lib/csi/sockets/pluginproxy 234 | - name: csi-attacher 235 | image: k8s.gcr.io/sig-storage/csi-attacher:v3.5.0 236 | imagePullPolicy: IfNotPresent 237 | args: 238 | - --csi-address=$(ADDRESS) 239 | - --v=2 240 | - --leader-election=true 241 | env: 242 | - name: ADDRESS 243 | value: /var/lib/csi/sockets/pluginproxy/csi.sock 244 | volumeMounts: 245 | - name: socket-dir 246 | mountPath: /var/lib/csi/sockets/pluginproxy/ 247 | - name: csi-snapshotter 248 | image: registry.k8s.io/sig-storage/csi-snapshotter:v5.0.1 249 | imagePullPolicy: IfNotPresent 250 | args: 251 | - -v=3 252 | - 
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock 253 | - --timeout=120m 254 | volumeMounts: 255 | - name: socket-dir 256 | mountPath: /var/lib/csi/sockets/pluginproxy 257 | - name: csi-resizer 258 | image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0 259 | args: 260 | - "--csi-address=$(ADDRESS)" 261 | env: 262 | - name: ADDRESS 263 | value: /var/lib/csi/sockets/pluginproxy/csi.sock 264 | imagePullPolicy: "IfNotPresent" 265 | volumeMounts: 266 | - name: socket-dir 267 | mountPath: /var/lib/csi/sockets/pluginproxy/ 268 | - name: driver 269 | securityContext: 270 | privileged: true 271 | capabilities: 272 | add: ['SYS_ADMIN'] 273 | allowPrivilegeEscalation: true 274 | image: quay.io/ddn/exascaler-csi-file-driver:v2.3.5 275 | imagePullPolicy: Always 276 | args: 277 | - --nodeid=$(KUBE_NODE_NAME) 278 | - --endpoint=unix://csi/csi.sock 279 | - --role=controller 280 | env: 281 | - name: KUBE_NODE_NAME 282 | valueFrom: 283 | fieldRef: 284 | fieldPath: spec.nodeName 285 | volumeMounts: 286 | - name: socket-dir 287 | mountPath: /csi 288 | - name: secret 289 | mountPath: /config 290 | readOnly: true 291 | - name: host 292 | mountPath: /host 293 | mountPropagation: Bidirectional 294 | volumes: 295 | - name: socket-dir 296 | emptyDir: 297 | - name: secret 298 | secret: 299 | secretName: exascaler-csi-file-driver-config 300 | - name: host 301 | hostPath: 302 | path: / 303 | type: Directory 304 | --- 305 | 306 | 307 | # --------------------------- 308 | # exascaler CSI Node Server 309 | # --------------------------- 310 | # 311 | # Runs driver node server (driver + registrar) on each node 312 | # 313 | 314 | apiVersion: v1 315 | kind: ServiceAccount 316 | metadata: 317 | name: exascaler-csi-node-service-account 318 | namespace: default # replace with non-default namespace name if needed 319 | --- 320 | 321 | kind: ClusterRole 322 | apiVersion: rbac.authorization.k8s.io/v1 323 | metadata: 324 | name: exascaler-csi-node-cluster-role 325 | rules: 326 | - apiGroups: [''] 327 | 
resources: ['events'] 328 | verbs: ['get', 'list', 'watch', 'create', 'update', 'patch'] 329 | - apiGroups: [''] 330 | resources: ['nodes'] 331 | verbs: ['get'] 332 | --- 333 | 334 | kind: ClusterRoleBinding 335 | apiVersion: rbac.authorization.k8s.io/v1 336 | metadata: 337 | name: exascaler-csi-node-cluster-role-binding 338 | subjects: 339 | - kind: ServiceAccount 340 | name: exascaler-csi-node-service-account 341 | namespace: default # replace with non-default namespace name if needed 342 | roleRef: 343 | kind: ClusterRole 344 | name: exascaler-csi-node-cluster-role 345 | apiGroup: rbac.authorization.k8s.io 346 | --- 347 | 348 | # exascaler Node Server as a daemon 349 | 350 | kind: DaemonSet 351 | apiVersion: apps/v1 352 | metadata: 353 | name: exascaler-csi-node 354 | spec: 355 | selector: 356 | matchLabels: 357 | app: exascaler-csi-node 358 | template: 359 | metadata: 360 | labels: 361 | app: exascaler-csi-node 362 | spec: 363 | serviceAccount: exascaler-csi-node-service-account 364 | hostNetwork: true 365 | containers: 366 | # driver-registrar: sidecar container that: 367 | # 1) registers the CSI driver with kubelet 368 | # 2) adds the driver's custom NodeId to a label on the Kubernetes Node API Object 369 | - name: driver-registrar 370 | image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.0 371 | imagePullPolicy: IfNotPresent 372 | args: 373 | - --v=5 374 | - --csi-address=/csi/csi.sock 375 | - --kubelet-registration-path=/var/lib/kubelet/plugins/exa.csi.ddn.com/csi.sock 376 | livenessProbe: 377 | exec: 378 | command: 379 | - /csi-node-driver-registrar 380 | - --kubelet-registration-path=/var/lib/kubelet/plugins/exa.csi.ddn.com/csi.sock 381 | - --mode=kubelet-registration-probe 382 | initialDelaySeconds: 30 383 | timeoutSeconds: 15 384 | env: 385 | - name: KUBE_NODE_NAME 386 | valueFrom: 387 | fieldRef: 388 | fieldPath: spec.nodeName 389 | volumeMounts: 390 | - name: socket-dir 391 | mountPath: /csi 392 | - name: registration-dir 393 | mountPath: 
/registration 394 | - name: driver 395 | securityContext: 396 | privileged: true 397 | capabilities: 398 | add: ['SYS_ADMIN'] 399 | allowPrivilegeEscalation: true 400 | image: quay.io/ddn/exascaler-csi-file-driver:v2.3.5 401 | imagePullPolicy: Always 402 | args: 403 | - --nodeid=$(KUBE_NODE_NAME) 404 | - --endpoint=unix://csi/csi.sock 405 | - --role=node 406 | env: 407 | - name: KUBE_NODE_NAME 408 | valueFrom: 409 | fieldRef: 410 | fieldPath: spec.nodeName 411 | volumeMounts: 412 | - name: socket-dir 413 | mountPath: /csi 414 | - name: secret 415 | mountPath: /config 416 | - name: host 417 | mountPath: /host 418 | mountPropagation: Bidirectional 419 | - name: pods-mount-dir 420 | mountPath: /var/lib/kubelet/pods 421 | mountPropagation: Bidirectional 422 | volumes: 423 | - name: socket-dir 424 | hostPath: 425 | path: /var/lib/kubelet/plugins/exa.csi.ddn.com 426 | type: DirectoryOrCreate 427 | - name: registration-dir 428 | hostPath: 429 | path: /var/lib/kubelet/plugins_registry/ 430 | type: Directory 431 | - name: pods-mount-dir 432 | hostPath: 433 | path: /var/lib/kubelet/pods 434 | type: Directory 435 | - name: host 436 | hostPath: 437 | path: / 438 | type: Directory 439 | - name: secret 440 | secret: 441 | secretName: exascaler-csi-file-driver-config 442 | --- 443 | -------------------------------------------------------------------------------- /deploy/kubernetes/metrics/crb.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRoleBinding 3 | metadata: 4 | name: prometheus-endpoints-access 5 | subjects: 6 | - kind: ServiceAccount 7 | name: monitoring-kube-prometheus-prometheus 8 | namespace: monitoring 9 | roleRef: 10 | kind: ClusterRole 11 | name: cluster-admin 12 | apiGroup: rbac.authorization.k8s.io 13 | 14 | -------------------------------------------------------------------------------- /deploy/kubernetes/metrics/daemonset.yaml: 
-------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: DaemonSet 3 | metadata: 4 | name: exa-csi-metrics-exporter 5 | namespace: default 6 | spec: 7 | selector: 8 | matchLabels: 9 | app: exa-csi-metrics-exporter 10 | template: 11 | metadata: 12 | labels: 13 | app: exa-csi-metrics-exporter 14 | spec: 15 | serviceAccountName: exa-csi-metrics-sa 16 | automountServiceAccountToken: true # Ensure token is mounted 17 | containers: 18 | - name: exporter 19 | imagePullPolicy: Always 20 | image: quay.io/ddn/exa-csi-metrics-exporter:v2.3.5 21 | securityContext: 22 | privileged: true 23 | capabilities: 24 | add: ['SYS_ADMIN'] 25 | allowPrivilegeEscalation: true 26 | ports: 27 | - containerPort: 9200 28 | name: http 29 | hostPort: 32666 30 | env: 31 | - name: NODE_NAME 32 | valueFrom: 33 | fieldRef: 34 | fieldPath: spec.nodeName 35 | - name: PORT 36 | value: "9200" 37 | - name: POD_NAME 38 | valueFrom: 39 | fieldRef: 40 | fieldPath: metadata.name 41 | - name: POD_NAMESPACE 42 | valueFrom: 43 | fieldRef: 44 | fieldPath: metadata.namespace 45 | - name: COLLECT_INTERVAL 46 | value: "30" 47 | - name: LOG_LEVEL 48 | value: "info" 49 | volumeMounts: 50 | - name: secret 51 | mountPath: /config 52 | readOnly: true 53 | - name: ca-certificates 54 | mountPath: /var/run/secrets/kubernetes.io/serviceaccount 55 | readOnly: true 56 | - name: host 57 | mountPath: /host 58 | mountPropagation: Bidirectional 59 | volumes: 60 | - name: secret 61 | secret: 62 | secretName: exascaler-csi-file-driver-config 63 | - name: host 64 | hostPath: 65 | path: / 66 | type: Directory 67 | - name: ca-certificates 68 | projected: 69 | sources: 70 | - serviceAccountToken: 71 | path: token 72 | expirationSeconds: 3600 73 | - configMap: 74 | name: kube-root-ca.crt # Mount CA cert 75 | items: 76 | - key: ca.crt 77 | path: ca.crt 78 | -------------------------------------------------------------------------------- /deploy/kubernetes/metrics/lease.yaml: 
-------------------------------------------------------------------------------- 1 | apiVersion: coordination.k8s.io/v1 2 | kind: Lease 3 | metadata: 4 | name: exa-csi-metrics-exporter-leader-election 5 | namespace: default 6 | spec: 7 | holderIdentity: "" 8 | leaseDurationSeconds: 15 9 | renewTime: null 10 | acquireTime: null 11 | leaseTransitions: 0 12 | -------------------------------------------------------------------------------- /deploy/kubernetes/metrics/prometheus/prometheus-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: prometheus-config 5 | namespace: monitoring 6 | data: 7 | prometheus.yml: | 8 | global: 9 | scrape_interval: 15s 10 | 11 | rule_files: 12 | - "exa.rules.yaml" 13 | 14 | scrape_configs: 15 | - job_name: "kubernetes-nodes" 16 | kubernetes_sd_configs: 17 | - role: node 18 | 19 | - job_name: "exa-csi-metrics-exporter" 20 | kubernetes_sd_configs: 21 | - role: pod 22 | relabel_configs: 23 | - source_labels: [__meta_kubernetes_pod_label_app] 24 | action: keep 25 | regex: exa-csi-metrics-exporter 26 | metrics_path: /metrics 27 | scheme: http 28 | static_configs: 29 | - targets: ["exa-csi-metrics-exporter.default.svc.cluster.local:9200"] 30 | 31 | exa.rules.yaml: | 32 | groups: 33 | - name: exa-storageclass-rules 34 | interval: 15s 35 | rules: 36 | - record: exa_csi_sc_capacity_bytes 37 | expr: sum (exa_csi_pvc_capacity_bytes) by (storage_class) 38 | - record: exa_csi_sc_used_bytes 39 | expr: sum (exa_csi_pvc_used_bytes) by (storage_class) 40 | - record: exa_csi_sc_available_bytes 41 | expr: sum (exa_csi_pvc_available_bytes) by (storage_class) 42 | - record: exa_csi_sc_pvc_count 43 | expr: count(exa_csi_pvc_capacity_bytes) by (storage_class) 44 | 45 | - name: exa-namespace-rules 46 | interval: 15s 47 | rules: 48 | - record: exa_csi_namespace_capacity_bytes 49 | expr: sum(exa_csi_pvc_capacity_bytes) by (exported_namespace) 50 | - record: 
exa_csi_namespace_used_bytes 51 | expr: sum(exa_csi_pvc_used_bytes) by (exported_namespace) 52 | - record: exa_csi_namespace_available_bytes 53 | expr: sum(exa_csi_pvc_available_bytes) by (exported_namespace) 54 | - record: exa_csi_namespace_pvc_count 55 | expr: count(exa_csi_pvc_capacity_bytes) by (exported_namespace) 56 | -------------------------------------------------------------------------------- /deploy/kubernetes/metrics/rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: exa-csi-metrics-sa 5 | namespace: default 6 | --- 7 | apiVersion: rbac.authorization.k8s.io/v1 8 | kind: ClusterRole 9 | metadata: 10 | name: metrics-reader 11 | rules: 12 | - apiGroups: [""] 13 | resources: ["persistentvolumes", "persistentvolumeclaims", "nodes", "pods"] 14 | verbs: ["get", "list"] 15 | - apiGroups: ["storage.k8s.io"] 16 | resources: ["storageclasses"] 17 | verbs: ["get", "list", "watch"] 18 | --- 19 | apiVersion: rbac.authorization.k8s.io/v1 20 | kind: ClusterRoleBinding 21 | metadata: 22 | name: metrics-reader-binding 23 | subjects: 24 | - kind: ServiceAccount 25 | name: exa-csi-metrics-sa 26 | namespace: default 27 | roleRef: 28 | kind: ClusterRole 29 | name: metrics-reader 30 | apiGroup: rbac.authorization.k8s.io 31 | --- 32 | apiVersion: rbac.authorization.k8s.io/v1 33 | kind: Role 34 | metadata: 35 | name: exa-csi-leader-election 36 | namespace: default 37 | rules: 38 | - apiGroups: ["coordination.k8s.io"] 39 | resources: ["leases"] 40 | verbs: ["get", "watch", "list", "create", "update", "patch"] 41 | --- 42 | apiVersion: rbac.authorization.k8s.io/v1 43 | kind: RoleBinding 44 | metadata: 45 | name: exa-csi-leader-election 46 | namespace: default 47 | roleRef: 48 | apiGroup: rbac.authorization.k8s.io 49 | kind: Role 50 | name: exa-csi-leader-election 51 | subjects: 52 | - kind: ServiceAccount 53 | name: exa-csi-metrics-sa 54 | namespace: default 55 | 
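# Sanity check (optional sketch): after applying this file, the bindings above
# can be verified with `kubectl auth can-i`, impersonating the metrics service
# account. The commands assume the `default` namespace used in this file:
#
#   kubectl auth can-i list persistentvolumes \
#     --as=system:serviceaccount:default:exa-csi-metrics-sa
#   kubectl auth can-i create leases.coordination.k8s.io -n default \
#     --as=system:serviceaccount:default:exa-csi-metrics-sa
#
# Both should answer "yes" once the ClusterRoleBinding and RoleBinding exist.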
-------------------------------------------------------------------------------- /deploy/kubernetes/metrics/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: exa-csi-metrics-exporter 5 | namespace: default 6 | labels: 7 | app: exa-csi-metrics-exporter 8 | spec: 9 | type: ClusterIP 10 | ports: 11 | - port: 32666 12 | targetPort: http 13 | protocol: TCP 14 | name: http 15 | selector: 16 | app: exa-csi-metrics-exporter 17 | -------------------------------------------------------------------------------- /deploy/kubernetes/metrics/servicemonitor.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: monitoring.coreos.com/v1 2 | kind: ServiceMonitor 3 | metadata: 4 | name: exa-csi-metrics-exporter 5 | namespace: monitoring 6 | labels: 7 | release: prometheus 8 | spec: 9 | endpoints: 10 | - interval: 30s 11 | path: /metrics 12 | port: http # Must match the name of the Service port above ("http") 13 | namespaceSelector: 14 | matchNames: 15 | - default 16 | selector: 17 | matchLabels: 18 | app: exa-csi-metrics-exporter 19 | -------------------------------------------------------------------------------- /deploy/kubernetes/snapshots/crds.yaml: -------------------------------------------------------------------------------- 1 | # kubectl create -f deploy/kubernetes/snapshots/crds.yaml 2 | 3 | --- 4 | apiVersion: apiextensions.k8s.io/v1 5 | kind: CustomResourceDefinition 6 | metadata: 7 | annotations: 8 | controller-gen.kubebuilder.io/version: v0.4.0 9 | api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/419" 10 | creationTimestamp: null 11 | name: volumesnapshotclasses.snapshot.storage.k8s.io 12 | spec: 13 | group: snapshot.storage.k8s.io 14 | names: 15 | kind: VolumeSnapshotClass 16 | listKind: VolumeSnapshotClassList 17 | plural: volumesnapshotclasses 18 | singular: volumesnapshotclass 19 | scope: 
Cluster 20 | versions: 21 | - additionalPrinterColumns: 22 | - jsonPath: .driver 23 | name: Driver 24 | type: string 25 | - description: Determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. 26 | jsonPath: .deletionPolicy 27 | name: DeletionPolicy 28 | type: string 29 | - jsonPath: .metadata.creationTimestamp 30 | name: Age 31 | type: date 32 | name: v1 33 | schema: 34 | openAPIV3Schema: 35 | description: VolumeSnapshotClass specifies parameters that a underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced 36 | properties: 37 | apiVersion: 38 | description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' 39 | type: string 40 | deletionPolicy: 41 | description: deletionPolicy determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. Required. 42 | enum: 43 | - Delete 44 | - Retain 45 | type: string 46 | driver: 47 | description: driver is the name of the storage driver that handles this VolumeSnapshotClass. Required. 48 | type: string 49 | kind: 50 | description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 51 | type: string 52 | parameters: 53 | additionalProperties: 54 | type: string 55 | description: parameters is a key-value map with storage driver specific parameters for creating snapshots. These values are opaque to Kubernetes. 56 | type: object 57 | required: 58 | - deletionPolicy 59 | - driver 60 | type: object 61 | served: true 62 | storage: false 63 | subresources: {} 64 | - additionalPrinterColumns: 65 | - jsonPath: .driver 66 | name: Driver 67 | type: string 68 | - description: Determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. 69 | jsonPath: .deletionPolicy 70 | name: DeletionPolicy 71 | type: string 72 | - jsonPath: .metadata.creationTimestamp 73 | name: Age 74 | type: date 75 | name: v1beta1 76 | schema: 77 | openAPIV3Schema: 78 | description: VolumeSnapshotClass specifies parameters that an underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced. 79 | properties: 80 | apiVersion: 81 | description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' 82 | type: string 83 | deletionPolicy: 84 | description: deletionPolicy determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. 
"Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. Required. 85 | enum: 86 | - Delete 87 | - Retain 88 | type: string 89 | driver: 90 | description: driver is the name of the storage driver that handles this VolumeSnapshotClass. Required. 91 | type: string 92 | kind: 93 | description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 94 | type: string 95 | parameters: 96 | additionalProperties: 97 | type: string 98 | description: parameters is a key-value map with storage driver specific parameters for creating snapshots. These values are opaque to Kubernetes. 99 | type: object 100 | required: 101 | - deletionPolicy 102 | - driver 103 | type: object 104 | served: true 105 | storage: true 106 | subresources: {} 107 | status: 108 | acceptedNames: 109 | kind: "" 110 | plural: "" 111 | conditions: [] 112 | storedVersions: [] 113 | 114 | --- 115 | apiVersion: apiextensions.k8s.io/v1 116 | kind: CustomResourceDefinition 117 | metadata: 118 | annotations: 119 | controller-gen.kubebuilder.io/version: v0.4.0 120 | api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/419" 121 | creationTimestamp: null 122 | name: volumesnapshotcontents.snapshot.storage.k8s.io 123 | spec: 124 | group: snapshot.storage.k8s.io 125 | names: 126 | kind: VolumeSnapshotContent 127 | listKind: VolumeSnapshotContentList 128 | plural: volumesnapshotcontents 129 | singular: volumesnapshotcontent 130 | scope: Cluster 131 | versions: 132 | - additionalPrinterColumns: 133 | - description: Indicates if the snapshot is ready to be used to restore a volume. 
134 | jsonPath: .status.readyToUse 135 | name: ReadyToUse 136 | type: boolean 137 | - description: Represents the complete size of the snapshot in bytes 138 | jsonPath: .status.restoreSize 139 | name: RestoreSize 140 | type: integer 141 | - description: Determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. 142 | jsonPath: .spec.deletionPolicy 143 | name: DeletionPolicy 144 | type: string 145 | - description: Name of the CSI driver used to create the physical snapshot on the underlying storage system. 146 | jsonPath: .spec.driver 147 | name: Driver 148 | type: string 149 | - description: Name of the VolumeSnapshotClass to which this snapshot belongs. 150 | jsonPath: .spec.volumeSnapshotClassName 151 | name: VolumeSnapshotClass 152 | type: string 153 | - description: Name of the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. 154 | jsonPath: .spec.volumeSnapshotRef.name 155 | name: VolumeSnapshot 156 | type: string 157 | - jsonPath: .metadata.creationTimestamp 158 | name: Age 159 | type: date 160 | name: v1 161 | schema: 162 | openAPIV3Schema: 163 | description: VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system 164 | properties: 165 | apiVersion: 166 | description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' 167 | type: string 168 | kind: 169 | description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 170 | type: string 171 | spec: 172 | description: spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. 173 | properties: 174 | deletionPolicy: 175 | description: deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the "DeletionPolicy" field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required. 176 | enum: 177 | - Delete 178 | - Retain 179 | type: string 180 | driver: 181 | description: driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required. 182 | type: string 183 | source: 184 | description: source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. 185 | properties: 186 | snapshotHandle: 187 | description: snapshotHandle specifies the CSI "snapshot_id" of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable. 
188 | type: string 189 | volumeHandle: 190 | description: volumeHandle specifies the CSI "volume_id" of the volume from which a snapshot should be dynamically taken. This field is immutable. 191 | type: string 192 | type: object 193 | oneOf: 194 | - required: ["snapshotHandle"] 195 | - required: ["volumeHandle"] 196 | volumeSnapshotClassName: 197 | description: name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with a different set of values, and as such, should not be referenced post-snapshot creation. 198 | type: string 199 | volumeSnapshotRef: 200 | description: volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. 201 | properties: 202 | apiVersion: 203 | description: API version of the referent. 204 | type: string 205 | fieldPath: 206 | description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' 207 | type: string 208 | kind: 209 | description: 'Kind of the referent. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 210 | type: string 211 | name: 212 | description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' 213 | type: string 214 | namespace: 215 | description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' 216 | type: string 217 | resourceVersion: 218 | description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' 219 | type: string 220 | uid: 221 | description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' 222 | type: string 223 | type: object 224 | required: 225 | - deletionPolicy 226 | - driver 227 | - source 228 | - volumeSnapshotRef 229 | type: object 230 | status: 231 | description: status represents the current information of a snapshot. 232 | properties: 233 | creationTime: 234 | description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. 235 | format: int64 236 | type: integer 237 | error: 238 | description: error is the last observed error during snapshot creation, if any. 
Upon success after retry, this error field will be cleared. 239 | properties: 240 | message: 241 | description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' 242 | type: string 243 | time: 244 | description: time is the timestamp when the error was encountered. 245 | format: date-time 246 | type: string 247 | type: object 248 | readyToUse: 249 | description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. 250 | type: boolean 251 | restoreSize: 252 | description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 253 | format: int64 254 | minimum: 0 255 | type: integer 256 | snapshotHandle: 257 | description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. 
258 | type: string 259 | type: object 260 | required: 261 | - spec 262 | type: object 263 | served: true 264 | storage: false 265 | subresources: 266 | status: {} 267 | - additionalPrinterColumns: 268 | - description: Indicates if the snapshot is ready to be used to restore a volume. 269 | jsonPath: .status.readyToUse 270 | name: ReadyToUse 271 | type: boolean 272 | - description: Represents the complete size of the snapshot in bytes 273 | jsonPath: .status.restoreSize 274 | name: RestoreSize 275 | type: integer 276 | - description: Determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. 277 | jsonPath: .spec.deletionPolicy 278 | name: DeletionPolicy 279 | type: string 280 | - description: Name of the CSI driver used to create the physical snapshot on the underlying storage system. 281 | jsonPath: .spec.driver 282 | name: Driver 283 | type: string 284 | - description: Name of the VolumeSnapshotClass to which this snapshot belongs. 285 | jsonPath: .spec.volumeSnapshotClassName 286 | name: VolumeSnapshotClass 287 | type: string 288 | - description: Name of the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. 289 | jsonPath: .spec.volumeSnapshotRef.name 290 | name: VolumeSnapshot 291 | type: string 292 | - jsonPath: .metadata.creationTimestamp 293 | name: Age 294 | type: date 295 | name: v1beta1 296 | schema: 297 | openAPIV3Schema: 298 | description: VolumeSnapshotContent represents the actual "on-disk" snapshot object in the underlying storage system 299 | properties: 300 | apiVersion: 301 | description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' 302 | type: string 303 | kind: 304 | description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 305 | type: string 306 | spec: 307 | description: spec defines properties of a VolumeSnapshotContent created by the underlying storage system. Required. 308 | properties: 309 | deletionPolicy: 310 | description: deletionPolicy determines whether this VolumeSnapshotContent and its physical snapshot on the underlying storage system should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on underlying storage system are deleted. For dynamically provisioned snapshots, this field will automatically be filled in by the CSI snapshotter sidecar with the "DeletionPolicy" field defined in the corresponding VolumeSnapshotClass. For pre-existing snapshots, users MUST specify this field when creating the VolumeSnapshotContent object. Required. 311 | enum: 312 | - Delete 313 | - Retain 314 | type: string 315 | driver: 316 | description: driver is the name of the CSI driver used to create the physical snapshot on the underlying storage system. This MUST be the same as the name returned by the CSI GetPluginName() call for that driver. Required. 317 | type: string 318 | source: 319 | description: source specifies whether the snapshot is (or should be) dynamically provisioned or already exists, and just requires a Kubernetes object representation. This field is immutable after creation. Required. 
320 | properties: 321 | snapshotHandle: 322 | description: snapshotHandle specifies the CSI "snapshot_id" of a pre-existing snapshot on the underlying storage system for which a Kubernetes object representation was (or should be) created. This field is immutable. 323 | type: string 324 | volumeHandle: 325 | description: volumeHandle specifies the CSI "volume_id" of the volume from which a snapshot should be dynamically taken. This field is immutable. 326 | type: string 327 | type: object 328 | volumeSnapshotClassName: 329 | description: name of the VolumeSnapshotClass from which this snapshot was (or will be) created. Note that after provisioning, the VolumeSnapshotClass may be deleted or recreated with a different set of values, and as such, should not be referenced post-snapshot creation. 330 | type: string 331 | volumeSnapshotRef: 332 | description: volumeSnapshotRef specifies the VolumeSnapshot object to which this VolumeSnapshotContent object is bound. VolumeSnapshot.Spec.VolumeSnapshotContentName field must reference this VolumeSnapshotContent's name for the bidirectional binding to be valid. For a pre-existing VolumeSnapshotContent object, name and namespace of the VolumeSnapshot object MUST be provided for binding to happen. This field is immutable after creation. Required. 333 | properties: 334 | apiVersion: 335 | description: API version of the referent. 336 | type: string 337 | fieldPath: 338 | description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). 
This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' 339 | type: string 340 | kind: 341 | description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 342 | type: string 343 | name: 344 | description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' 345 | type: string 346 | namespace: 347 | description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' 348 | type: string 349 | resourceVersion: 350 | description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' 351 | type: string 352 | uid: 353 | description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' 354 | type: string 355 | type: object 356 | required: 357 | - deletionPolicy 358 | - driver 359 | - source 360 | - volumeSnapshotRef 361 | type: object 362 | status: 363 | description: status represents the current information of a snapshot. 364 | properties: 365 | creationTime: 366 | description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it indicates the creation time is unknown. The format of this field is a Unix nanoseconds time encoded as an int64. 
On Unix, the command `date +%s%N` returns the current time in nanoseconds since 1970-01-01 00:00:00 UTC. 367 | format: int64 368 | type: integer 369 | error: 370 | description: error is the last observed error during snapshot creation, if any. Upon success after retry, this error field will be cleared. 371 | properties: 372 | message: 373 | description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' 374 | type: string 375 | time: 376 | description: time is the timestamp when the error was encountered. 377 | format: date-time 378 | type: string 379 | type: object 380 | readyToUse: 381 | description: readyToUse indicates if a snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. 382 | type: boolean 383 | restoreSize: 384 | description: restoreSize represents the complete size of the snapshot in bytes. In dynamic snapshot creation case, this field will be filled in by the CSI snapshotter sidecar with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 
385 | format: int64 386 | minimum: 0 387 | type: integer 388 | snapshotHandle: 389 | description: snapshotHandle is the CSI "snapshot_id" of a snapshot on the underlying storage system. If not specified, it indicates that dynamic snapshot creation has either failed or it is still in progress. 390 | type: string 391 | type: object 392 | required: 393 | - spec 394 | type: object 395 | served: true 396 | storage: true 397 | subresources: 398 | status: {} 399 | status: 400 | acceptedNames: 401 | kind: "" 402 | plural: "" 403 | conditions: [] 404 | storedVersions: [] 405 | 406 | --- 407 | apiVersion: apiextensions.k8s.io/v1 408 | kind: CustomResourceDefinition 409 | metadata: 410 | annotations: 411 | controller-gen.kubebuilder.io/version: v0.4.0 412 | api-approved.kubernetes.io: "https://github.com/kubernetes-csi/external-snapshotter/pull/419" 413 | creationTimestamp: null 414 | name: volumesnapshots.snapshot.storage.k8s.io 415 | spec: 416 | group: snapshot.storage.k8s.io 417 | names: 418 | kind: VolumeSnapshot 419 | listKind: VolumeSnapshotList 420 | plural: volumesnapshots 421 | singular: volumesnapshot 422 | scope: Namespaced 423 | versions: 424 | - additionalPrinterColumns: 425 | - description: Indicates if the snapshot is ready to be used to restore a volume. 426 | jsonPath: .status.readyToUse 427 | name: ReadyToUse 428 | type: boolean 429 | - description: If a new snapshot needs to be created, this contains the name of the source PVC from which this snapshot was (or will be) created. 430 | jsonPath: .spec.source.persistentVolumeClaimName 431 | name: SourcePVC 432 | type: string 433 | - description: If a snapshot already exists, this contains the name of the existing VolumeSnapshotContent object representing the existing snapshot. 434 | jsonPath: .spec.source.volumeSnapshotContentName 435 | name: SourceSnapshotContent 436 | type: string 437 | - description: Represents the minimum size of volume required to rehydrate from this snapshot. 
438 | jsonPath: .status.restoreSize 439 | name: RestoreSize 440 | type: string 441 | - description: The name of the VolumeSnapshotClass requested by the VolumeSnapshot. 442 | jsonPath: .spec.volumeSnapshotClassName 443 | name: SnapshotClass 444 | type: string 445 | - description: Name of the VolumeSnapshotContent object to which the VolumeSnapshot object intends to bind to. Please note that verification of binding actually requires checking both VolumeSnapshot and VolumeSnapshotContent to ensure both are pointing at each other. Binding MUST be verified prior to usage of this object. 446 | jsonPath: .status.boundVolumeSnapshotContentName 447 | name: SnapshotContent 448 | type: string 449 | - description: Timestamp when the point-in-time snapshot was taken by the underlying storage system. 450 | jsonPath: .status.creationTime 451 | name: CreationTime 452 | type: date 453 | - jsonPath: .metadata.creationTimestamp 454 | name: Age 455 | type: date 456 | name: v1 457 | schema: 458 | openAPIV3Schema: 459 | description: VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot. 460 | properties: 461 | apiVersion: 462 | description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' 463 | type: string 464 | kind: 465 | description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 466 | type: string 467 | spec: 468 | description: 'spec defines the desired characteristics of a snapshot requested by a user. 
More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required.' 469 | properties: 470 | source: 471 | description: source specifies where a snapshot will be created from. This field is immutable after creation. Required. 472 | properties: 473 | persistentVolumeClaimName: 474 | description: persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exist, and needs to be created. This field is immutable. 475 | type: string 476 | volumeSnapshotContentName: 477 | description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. 478 | type: string 479 | type: object 480 | oneOf: 481 | - required: ["persistentVolumeClaimName"] 482 | - required: ["volumeSnapshotContentName"] 483 | volumeSnapshotClassName: 484 | description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exists for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' 
485 | type: string 486 | required: 487 | - source 488 | type: object 489 | status: 490 | description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. 491 | properties: 492 | boundVolumeSnapshotContentName: 493 | description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' 494 | type: string 495 | creationTime: 496 | description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. 497 | format: date-time 498 | type: string 499 | error: 500 | description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers (i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. 
Upon success, this error field will be cleared. 501 | properties: 502 | message: 503 | description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' 504 | type: string 505 | time: 506 | description: time is the timestamp when the error was encountered. 507 | format: date-time 508 | type: string 509 | type: object 510 | readyToUse: 511 | description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. 512 | type: boolean 513 | restoreSize: 514 | type: string 515 | description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 
516 | pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ 517 | x-kubernetes-int-or-string: true 518 | type: object 519 | required: 520 | - spec 521 | type: object 522 | served: true 523 | storage: false 524 | subresources: 525 | status: {} 526 | - additionalPrinterColumns: 527 | - description: Indicates if the snapshot is ready to be used to restore a volume. 528 | jsonPath: .status.readyToUse 529 | name: ReadyToUse 530 | type: boolean 531 | - description: If a new snapshot needs to be created, this contains the name of the source PVC from which this snapshot was (or will be) created. 532 | jsonPath: .spec.source.persistentVolumeClaimName 533 | name: SourcePVC 534 | type: string 535 | - description: If a snapshot already exists, this contains the name of the existing VolumeSnapshotContent object representing the existing snapshot. 536 | jsonPath: .spec.source.volumeSnapshotContentName 537 | name: SourceSnapshotContent 538 | type: string 539 | - description: Represents the minimum size of volume required to rehydrate from this snapshot. 540 | jsonPath: .status.restoreSize 541 | name: RestoreSize 542 | type: string 543 | - description: The name of the VolumeSnapshotClass requested by the VolumeSnapshot. 544 | jsonPath: .spec.volumeSnapshotClassName 545 | name: SnapshotClass 546 | type: string 547 | - description: Name of the VolumeSnapshotContent object to which the VolumeSnapshot object intends to bind. Please note that verification of binding actually requires checking both VolumeSnapshot and VolumeSnapshotContent to ensure both are pointing at each other. Binding MUST be verified prior to usage of this object. 548 | jsonPath: .status.boundVolumeSnapshotContentName 549 | name: SnapshotContent 550 | type: string 551 | - description: Timestamp when the point-in-time snapshot was taken by the underlying storage system.
552 | jsonPath: .status.creationTime 553 | name: CreationTime 554 | type: date 555 | - jsonPath: .metadata.creationTimestamp 556 | name: Age 557 | type: date 558 | name: v1beta1 559 | schema: 560 | openAPIV3Schema: 561 | description: VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot. 562 | properties: 563 | apiVersion: 564 | description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' 565 | type: string 566 | kind: 567 | description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 568 | type: string 569 | spec: 570 | description: 'spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required.' 571 | properties: 572 | source: 573 | description: source specifies where a snapshot will be created from. This field is immutable after creation. Required. 574 | properties: 575 | persistentVolumeClaimName: 576 | description: persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exist and needs to be created. This field is immutable.
577 | type: string 578 | volumeSnapshotContentName: 579 | description: volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. 580 | type: string 581 | type: object 582 | volumeSnapshotClassName: 583 | description: 'VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default VolumeSnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exists for a given CSI Driver and more than one has been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field.' 584 | type: string 585 | required: 586 | - source 587 | type: object 588 | status: 589 | description: status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. 590 | properties: 591 | boundVolumeSnapshotContentName: 592 | description: 'boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet.
NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object.' 593 | type: string 594 | creationTime: 595 | description: creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. 596 | format: date-time 597 | type: string 598 | error: 599 | description: error is the last observed error during snapshot creation, if any. This field could be helpful to upper-level controllers (i.e., application controller) to decide whether they should continue waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during snapshot creation. Upon success, this error field will be cleared. 600 | properties: 601 | message: 602 | description: 'message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information.' 603 | type: string 604 | time: 605 | description: time is the timestamp when the error was encountered. 606 | format: date-time 607 | type: string 608 | type: object 609 | readyToUse: 610 | description: readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call.
For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. 611 | type: boolean 612 | restoreSize: 613 | type: string 614 | description: restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. 615 | pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ 616 | x-kubernetes-int-or-string: true 617 | type: object 618 | required: 619 | - spec 620 | type: object 621 | served: true 622 | storage: true 623 | subresources: 624 | status: {} 625 | status: 626 | acceptedNames: 627 | kind: "" 628 | plural: "" 629 | conditions: [] 630 | storedVersions: [] 631 | 632 | -------------------------------------------------------------------------------- /deploy/kubernetes/snapshots/snapshotter.yaml: -------------------------------------------------------------------------------- 1 | # RBAC file for the snapshot controller. 2 | # 3 | # The snapshot controller implements the control loop for CSI snapshot functionality. 4 | # It should be installed as part of the base Kubernetes distribution in an appropriate 5 | # namespace for components implementing base system functionality. 
For installing with 6 | # Vanilla Kubernetes, kube-system makes sense for the namespace. 7 | 8 | apiVersion: v1 9 | kind: ServiceAccount 10 | metadata: 11 | name: snapshot-controller 12 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. kube-system 13 | 14 | --- 15 | kind: ClusterRole 16 | apiVersion: rbac.authorization.k8s.io/v1 17 | metadata: 18 | # rename if there are conflicts 19 | name: snapshot-controller-runner 20 | rules: 21 | - apiGroups: [""] 22 | resources: ["persistentvolumes"] 23 | verbs: ["get", "list", "watch"] 24 | - apiGroups: [""] 25 | resources: ["persistentvolumeclaims"] 26 | verbs: ["get", "list", "watch", "update"] 27 | - apiGroups: ["storage.k8s.io"] 28 | resources: ["storageclasses"] 29 | verbs: ["get", "list", "watch"] 30 | - apiGroups: [""] 31 | resources: ["events"] 32 | verbs: ["list", "watch", "create", "update", "patch"] 33 | - apiGroups: ["snapshot.storage.k8s.io"] 34 | resources: ["volumesnapshotclasses"] 35 | verbs: ["get", "list", "watch"] 36 | - apiGroups: ["snapshot.storage.k8s.io"] 37 | resources: ["volumesnapshotcontents"] 38 | verbs: ["create", "get", "list", "watch", "update", "delete"] 39 | - apiGroups: ["snapshot.storage.k8s.io"] 40 | resources: ["volumesnapshots"] 41 | verbs: ["get", "list", "watch", "update"] 42 | - apiGroups: ["snapshot.storage.k8s.io"] 43 | resources: ["volumesnapshots/status"] 44 | verbs: ["update"] 45 | 46 | --- 47 | kind: ClusterRoleBinding 48 | apiVersion: rbac.authorization.k8s.io/v1 49 | metadata: 50 | name: snapshot-controller-role 51 | subjects: 52 | - kind: ServiceAccount 53 | name: snapshot-controller 54 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. 
kube-system 55 | roleRef: 56 | kind: ClusterRole 57 | # change the name also here if the ClusterRole gets renamed 58 | name: snapshot-controller-runner 59 | apiGroup: rbac.authorization.k8s.io 60 | 61 | --- 62 | kind: Role 63 | apiVersion: rbac.authorization.k8s.io/v1 64 | metadata: 65 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. kube-system 66 | name: snapshot-controller-leaderelection 67 | rules: 68 | - apiGroups: ["coordination.k8s.io"] 69 | resources: ["leases"] 70 | verbs: ["get", "watch", "list", "delete", "update", "create"] 71 | 72 | --- 73 | kind: RoleBinding 74 | apiVersion: rbac.authorization.k8s.io/v1 75 | metadata: 76 | name: snapshot-controller-leaderelection 77 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. kube-system 78 | subjects: 79 | - kind: ServiceAccount 80 | name: snapshot-controller 81 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. kube-system 82 | roleRef: 83 | kind: Role 84 | name: snapshot-controller-leaderelection 85 | apiGroup: rbac.authorization.k8s.io 86 | 87 | # This YAML file shows how to deploy the snapshot controller 88 | 89 | # The snapshot controller implements the control loop for CSI snapshot functionality. 90 | # It should be installed as part of the base Kubernetes distribution in an appropriate 91 | # namespace for components implementing base system functionality. For installing with 92 | # Vanilla Kubernetes, kube-system makes sense for the namespace. 93 | 94 | --- 95 | kind: StatefulSet 96 | apiVersion: apps/v1 97 | metadata: 98 | name: snapshot-controller 99 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. 
kube-system 100 | spec: 101 | serviceName: "snapshot-controller" 102 | replicas: 1 103 | selector: 104 | matchLabels: 105 | app: snapshot-controller 106 | template: 107 | metadata: 108 | labels: 109 | app: snapshot-controller 110 | spec: 111 | serviceAccount: snapshot-controller 112 | containers: 113 | - name: snapshot-controller 114 | image: k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0 115 | args: 116 | - "--v=5" 117 | - "--leader-election=false" 118 | imagePullPolicy: IfNotPresent -------------------------------------------------------------------------------- /deploy/openshift/exascaler-csi-file-driver-config.yaml: -------------------------------------------------------------------------------- 1 | # exascaler-csi-file-driver config file to create k8s secret 2 | # 3 | # $ oc create secret generic exascaler-csi-file-driver-config \ 4 | # --from-file=deploy/openshift/exascaler-csi-file-driver-config.yaml 5 | # 6 | 7 | exascaler_map: 8 | exa1: 9 | bindMount: true 10 | mountPoint: /exaFS # mountpoint on the host where the exaFS will be mounted 11 | exaFS: 10.204.86.217@tcp:/testfs # default path to exa filesystem 12 | zone: zone-1 13 | v1xCompatible: true # Optional. Can only be true for one of the Exa clusters 14 | volumeDirPermissions: 0777 # Optional. Defines file permissions for mounted volumes. 
15 | 16 | exa2: 17 | mountPoint: /exaFS-zone-2 # mountpoint on the host where the exaFS will be mounted 18 | exaFS: 10.204.86.217@tcp:/testfs/zone-2 # default path to exa filesystem 19 | zone: zone-2 20 | 21 | exa3: 22 | mountPoint: /exaFS-zone-3 # mountpoint on the host where the exaFS will be mounted 23 | exaFS: 10.204.86.217@tcp:/testfs/zone-3 # default path to exa filesystem 24 | zone: zone-3 25 | 26 | debug: true # more logs 27 | openshift: true 28 | -------------------------------------------------------------------------------- /deploy/openshift/exascaler-csi-file-driver.yaml: -------------------------------------------------------------------------------- 1 | # Exascaler CSI Driver (1.0.0) 2 | # 3 | # In production, each CSI driver deployment has to be customized to avoid conflicts, 4 | # use non-default namespace and different names for non-namespaced entities like the ClusterRole 5 | # 6 | # Install to Openshift: 7 | # $ oc apply -f deploy/openshift/exascaler-csi-file-driver.yaml 8 | # 9 | 10 | # ---------------------- 11 | # Exascaler CSI Driver 12 | # ---------------------- 13 | 14 | apiVersion: storage.k8s.io/v1 15 | kind: CSIDriver 16 | metadata: 17 | name: exa.csi.ddn.com 18 | spec: 19 | attachRequired: false 20 | podInfoOnMount: false 21 | --- 22 | 23 | 24 | # ---- 25 | # RBAC 26 | # ---- 27 | 28 | apiVersion: v1 29 | kind: ServiceAccount 30 | metadata: 31 | name: sa-driver 32 | namespace: openshift-kmm 33 | --- 34 | 35 | apiVersion: rbac.authorization.k8s.io/v1 36 | kind: Role 37 | metadata: 38 | name: sa-driver 39 | rules: 40 | - apiGroups: 41 | - security.openshift.io 42 | resources: 43 | - securitycontextconstraints 44 | verbs: 45 | - use 46 | resourceNames: 47 | - sa-driver 48 | --- 49 | 50 | apiVersion: rbac.authorization.k8s.io/v1 51 | kind: RoleBinding 52 | metadata: 53 | name: sa-driver 54 | roleRef: 55 | apiGroup: rbac.authorization.k8s.io 56 | kind: Role 57 | name: sa-driver 58 | subjects: 59 | - kind: ServiceAccount 60 | name: sa-driver 
61 | userNames: 62 | - system:serviceaccount:openshift-kmm:sa-driver 63 | --- 64 | 65 | allowHostDirVolumePlugin: true 66 | allowHostIPC: true 67 | allowHostNetwork: true 68 | allowHostPID: true 69 | allowHostPorts: true 70 | allowPrivilegeEscalation: true 71 | allowPrivilegedContainer: true 72 | allowedCapabilities: null 73 | apiVersion: security.openshift.io/v1 74 | defaultAddCapabilities: null 75 | fsGroup: 76 | type: RunAsAny 77 | groups: [] 78 | kind: SecurityContextConstraints 79 | metadata: 80 | annotations: 81 | kubernetes.io/description: this is a custom SCC which combines the allowances of hostnetwork and hostmount-anyuid 82 | name: sa-driver 83 | priority: null 84 | readOnlyRootFilesystem: false 85 | runAsUser: 86 | type: RunAsAny 87 | seLinuxContext: 88 | type: MustRunAs 89 | supplementalGroups: 90 | type: RunAsAny 91 | users: 92 | - system:serviceaccount:openshift-kmm:sa-driver 93 | volumes: 94 | - configMap 95 | - downwardAPI 96 | - emptyDir 97 | - hostPath 98 | - nfs 99 | - persistentVolumeClaim 100 | - projected 101 | - secret 102 | --- 103 | 104 | apiVersion: rbac.authorization.k8s.io/v1 105 | kind: ClusterRole 106 | metadata: 107 | name: exascaler-driver-role 108 | rules: 109 | - apiGroups: 110 | - "" 111 | resources: 112 | - persistentvolumes 113 | verbs: 114 | - get 115 | - list 116 | - watch 117 | - create 118 | - delete 119 | - patch 120 | - apiGroups: 121 | - "" 122 | resources: 123 | - pods 124 | verbs: 125 | - get 126 | - list 127 | - watch 128 | - create 129 | - delete 130 | - apiGroups: 131 | - "storage.k8s.io" 132 | resources: 133 | - volumeattachments 134 | verbs: 135 | - get 136 | - list 137 | - watch 138 | - create 139 | - delete 140 | - update 141 | - patch 142 | - apiGroups: 143 | - "storage.k8s.io" 144 | resources: 145 | - volumeattachments/status 146 | verbs: 147 | - get 148 | - list 149 | - watch 150 | - create 151 | - delete 152 | - update 153 | - patch 154 | - apiGroups: 155 | - "" 156 | resources: 157 | - 
persistentvolumeclaims 158 | verbs: 159 | - get 160 | - list 161 | - watch 162 | - update 163 | - patch 164 | - apiGroups: 165 | - "" 166 | resources: 167 | - persistentvolumeclaims/status 168 | verbs: 169 | - get 170 | - list 171 | - watch 172 | - update 173 | - patch 174 | - apiGroups: 175 | - storage.k8s.io 176 | resources: 177 | - storageclasses 178 | verbs: 179 | - get 180 | - list 181 | - watch 182 | - apiGroups: 183 | - "" 184 | resources: 185 | - events 186 | verbs: 187 | - list 188 | - watch 189 | - create 190 | - update 191 | - patch 192 | - apiGroups: 193 | - storage.k8s.io 194 | resources: 195 | - csinodes 196 | verbs: 197 | - get 198 | - list 199 | - watch 200 | - apiGroups: 201 | - "" 202 | resources: 203 | - nodes 204 | - pods 205 | verbs: 206 | - get 207 | - list 208 | - watch 209 | - apiGroups: 210 | - coordination.k8s.io 211 | resources: 212 | - leases 213 | verbs: 214 | - get 215 | - watch 216 | - list 217 | - delete 218 | - update 219 | - create 220 | - apiGroups: 221 | - snapshot.storage.k8s.io 222 | resources: 223 | - volumesnapshotclasses 224 | verbs: 225 | - get 226 | - list 227 | - watch 228 | - apiGroups: 229 | - snapshot.storage.k8s.io 230 | resources: 231 | - volumesnapshotcontents 232 | verbs: 233 | - create 234 | - get 235 | - list 236 | - watch 237 | - update 238 | - delete 239 | - patch 240 | - apiGroups: 241 | - snapshot.storage.k8s.io 242 | resources: 243 | - volumesnapshots 244 | verbs: 245 | - get 246 | - list 247 | - watch 248 | - update 249 | - apiGroups: 250 | - snapshot.storage.k8s.io 251 | resources: 252 | - volumesnapshots/status 253 | verbs: 254 | - update 255 | - apiGroups: 256 | - snapshot.storage.k8s.io 257 | resources: 258 | - volumesnapshotcontents/status 259 | verbs: 260 | - update 261 | - apiGroups: 262 | - apiextensions.k8s.io 263 | resources: 264 | - customresourcedefinitions 265 | verbs: 266 | - get 267 | - list 268 | - watch 269 | - update 270 | --- 271 | 272 | apiVersion: rbac.authorization.k8s.io/v1 273 | 
kind: ClusterRoleBinding 274 | metadata: 275 | name: exascaler-driver-role-binding 276 | roleRef: 277 | apiGroup: rbac.authorization.k8s.io 278 | kind: ClusterRole 279 | name: exascaler-driver-role 280 | subjects: 281 | - kind: ServiceAccount 282 | name: sa-driver 283 | namespace: openshift-kmm 284 | --- 285 | 286 | # --------------------------------- 287 | # Exascaler CSI Controller Server 288 | # --------------------------------- 289 | # 290 | # Runs single driver controller server (driver + provisioner + attacher + snapshotter) on one of the nodes. 291 | # Controller driver deployment does not support running multiple replicas. 292 | # 293 | apiVersion: apps/v1 294 | kind: Deployment 295 | metadata: 296 | name: exascaler-csi-controller 297 | namespace: openshift-kmm 298 | spec: 299 | selector: 300 | matchLabels: 301 | app: exascaler-csi-controller 302 | template: 303 | metadata: 304 | labels: 305 | app: exascaler-csi-controller 306 | spec: 307 | serviceAccount: sa-driver 308 | hostNetwork: true 309 | securityContext: 310 | runAsUser: 0 311 | containers: 312 | - name: csi-provisioner 313 | image: registry.k8s.io/sig-storage/csi-provisioner:v3.6.3 314 | imagePullPolicy: IfNotPresent 315 | args: 316 | - --csi-address=/csi/csi.sock 317 | - --volume-name-prefix=pvc-exa 318 | - --strict-topology 319 | - --immediate-topology=false 320 | - --feature-gates=Topology=true 321 | - --timeout=120m 322 | volumeMounts: 323 | - name: socket-dir 324 | mountPath: /csi 325 | - name: csi-attacher 326 | image: registry.k8s.io/sig-storage/csi-attacher:v4.4.0 327 | imagePullPolicy: IfNotPresent 328 | args: 329 | - --csi-address=$(ADDRESS) 330 | - --v=2 331 | - --leader-election=true 332 | env: 333 | - name: ADDRESS 334 | value: /csi/csi.sock 335 | volumeMounts: 336 | - name: socket-dir 337 | mountPath: /csi/ 338 | - name: csi-snapshotter 339 | image: registry.k8s.io/sig-storage/csi-snapshotter:v5.0.1 340 | imagePullPolicy: IfNotPresent 341 | args: 342 | - -v=3 343 | - 
--csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock 344 | - --timeout=120m 345 | volumeMounts: 346 | - name: socket-dir 347 | mountPath: /var/lib/csi/sockets/pluginproxy 348 | - name: csi-resizer 349 | image: k8s.gcr.io/sig-storage/csi-resizer:v1.4.0 350 | args: 351 | - "--csi-address=$(ADDRESS)" 352 | env: 353 | - name: ADDRESS 354 | value: /csi/csi.sock 355 | imagePullPolicy: IfNotPresent 356 | volumeMounts: 357 | - name: socket-dir 358 | mountPath: /csi/ 359 | - image: quay.io/ddn/exascaler-openshift-file-driver:v2.3.5 360 | imagePullPolicy: Always 361 | args: 362 | - --nodeid=$(KUBE_NODE_NAME) 363 | - --endpoint=unix:///csi/csi.sock 364 | - --role=controller 365 | env: 366 | - name: KUBE_NODE_NAME 367 | valueFrom: 368 | fieldRef: 369 | fieldPath: spec.nodeName 370 | securityContext: 371 | allowPrivilegeEscalation: true 372 | privileged: true 373 | name: driver 374 | volumeMounts: 375 | - mountPath: /csi 376 | name: socket-dir 377 | - name: secret 378 | mountPath: /config 379 | readOnly: true 380 | - name: dev 381 | mountPath: /dev 382 | 383 | volumes: 384 | - name: dev 385 | hostPath: 386 | path: /dev 387 | type: Directory 388 | - name: socket-dir 389 | hostPath: 390 | path: /var/lib/kubelet/plugins/exa.csi.ddn.com/ 391 | type: DirectoryOrCreate 392 | - name: secret 393 | secret: 394 | secretName: exascaler-csi-file-driver-config 395 | --- 396 | 397 | # --------------------------- 398 | # Exascaler CSI Node Server 399 | # --------------------------- 400 | # 401 | # Runs driver node server (driver + registrar) on each node 402 | # 403 | kind: DaemonSet 404 | apiVersion: apps/v1 405 | metadata: 406 | name: exascaler-csi-node 407 | spec: 408 | selector: 409 | matchLabels: 410 | app: exascaler-csi-node 411 | template: 412 | metadata: 413 | labels: 414 | app: exascaler-csi-node 415 | spec: 416 | serviceAccount: sa-driver 417 | hostNetwork: true 418 | containers: 419 | # driver-registrar: sidecar container that: 420 | # 1) registers the CSI driver with kubelet 
421 | # 2) adds the driver's custom NodeId to a label on the Kubernetes Node API Object 422 | - name: driver-registrar 423 | image: registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.9.0 424 | imagePullPolicy: IfNotPresent 425 | args: 426 | - --v=5 427 | - --csi-address=/csi/csi.sock 428 | - --kubelet-registration-path=/var/lib/kubelet/plugins_registry/exa.csi.ddn.com/csi.sock 429 | livenessProbe: 430 | exec: 431 | command: 432 | - /csi-node-driver-registrar 433 | - --kubelet-registration-path=/var/lib/kubelet/plugins_registry/exa.csi.ddn.com/csi.sock 434 | - --mode=kubelet-registration-probe 435 | initialDelaySeconds: 30 436 | timeoutSeconds: 15 437 | env: 438 | - name: KUBE_NODE_NAME 439 | valueFrom: 440 | fieldRef: 441 | fieldPath: spec.nodeName 442 | volumeMounts: 443 | - name: socket-dir 444 | mountPath: /csi 445 | - name: registration-dir 446 | mountPath: /registration 447 | - name: driver 448 | securityContext: 449 | privileged: true 450 | allowPrivilegeEscalation: true 451 | image: quay.io/ddn/exascaler-openshift-file-driver:v2.3.5 452 | imagePullPolicy: Always 453 | args: 454 | - --nodeid=$(KUBE_NODE_NAME) 455 | - --endpoint=unix:///csi/csi.sock 456 | - --role=node 457 | env: 458 | - name: KUBE_NODE_NAME 459 | valueFrom: 460 | fieldRef: 461 | fieldPath: spec.nodeName 462 | volumeMounts: 463 | - name: kubelet-dir 464 | mountPath: /var/lib/kubelet 465 | mountPropagation: "Bidirectional" 466 | - name: dev 467 | mountPath: /dev 468 | - name: socket-dir 469 | mountPath: /csi 470 | - name: secret 471 | mountPath: /config 472 | volumes: 473 | - name: kubelet-dir 474 | hostPath: 475 | path: /var/lib/kubelet 476 | type: Directory 477 | - name: dev 478 | hostPath: 479 | path: /dev 480 | type: Directory 481 | - name: socket-dir 482 | hostPath: 483 | path: /var/lib/kubelet/plugins_registry/exa.csi.ddn.com/ 484 | type: DirectoryOrCreate 485 | - name: registration-dir 486 | hostPath: 487 | path: /var/lib/kubelet/plugins_registry/ 488 | type: Directory 489 | - name:
secret 490 | secret: 491 | secretName: exascaler-csi-file-driver-config 492 | --- 493 | -------------------------------------------------------------------------------- /deploy/openshift/lustre-module/ko2iblnd-mod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kmm.sigs.x-k8s.io/v1beta1 2 | kind: Module 3 | metadata: 4 | name: ko2iblnd 5 | spec: 6 | moduleLoader: 7 | container: 8 | modprobe: 9 | moduleName: ko2iblnd 10 | dirName: /opt 11 | 12 | kernelMappings: # At least one item is required 13 | # For any other kernel, build the image using the Dockerfile in the my-kmod ConfigMap. 14 | - regexp: '^.+$' 15 | containerImage: "image-registry.openshift-image-registry.svc:5000/openshift-kmm/lustre-client-moduleloader:5.14.0-284.25.1.el9_2.x86_64" 16 | build: 17 | baseImageRegistryTLS: 18 | # Optional and not recommended! If true, the build will be allowed to pull the image in the Dockerfile's 19 | # FROM instruction using plain HTTP. 20 | insecure: false 21 | # Optional and not recommended! If true, the build will skip any TLS server certificate validation when 22 | # pulling the image in the Dockerfile's FROM instruction using plain HTTP. 23 | insecureSkipTLSVerify: false 24 | dockerfileConfigMap: # Required 25 | name: lustre-ci-dockerfile 26 | registryTLS: 27 | # Optional and not recommended! If true, KMM will be allowed to check if the container image already exists 28 | # using plain HTTP. 29 | insecure: false 30 | # Optional and not recommended! If true, KMM will skip any TLS server certificate validation when checking if 31 | # the container image already exists. 
32 | insecureSkipTLSVerify: false 33 | 34 | selector: 35 | node-role.kubernetes.io/worker: "" 36 | -------------------------------------------------------------------------------- /deploy/openshift/lustre-module/ksocklnd-mod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kmm.sigs.x-k8s.io/v1beta1 2 | kind: Module 3 | metadata: 4 | name: ksocklnd 5 | spec: 6 | moduleLoader: 7 | container: 8 | modprobe: 9 | moduleName: ksocklnd 10 | dirName: /opt 11 | 12 | kernelMappings: # At least one item is required 13 | # For any other kernel, build the image using the Dockerfile in the my-kmod ConfigMap. 14 | - regexp: '^.+$' 15 | containerImage: "image-registry.openshift-image-registry.svc:5000/openshift-kmm/lustre-client-moduleloader:5.14.0-284.25.1.el9_2.x86_64" 16 | build: 17 | baseImageRegistryTLS: 18 | # Optional and not recommended! If true, the build will be allowed to pull the image in the Dockerfile's 19 | # FROM instruction using plain HTTP. 20 | insecure: false 21 | # Optional and not recommended! If true, the build will skip any TLS server certificate validation when 22 | # pulling the image in the Dockerfile's FROM instruction using plain HTTP. 23 | insecureSkipTLSVerify: false 24 | dockerfileConfigMap: # Required 25 | name: lustre-ci-dockerfile 26 | registryTLS: 27 | # Optional and not recommended! If true, KMM will be allowed to check if the container image already exists 28 | # using plain HTTP. 29 | insecure: false 30 | # Optional and not recommended! If true, KMM will skip any TLS server certificate validation when checking if 31 | # the container image already exists. 
32 | insecureSkipTLSVerify: false 33 | 34 | selector: 35 | node-role.kubernetes.io/worker: "" 36 | -------------------------------------------------------------------------------- /deploy/openshift/lustre-module/lnet-configuration-ds.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: DaemonSet 4 | metadata: 5 | name: lnet-configuration 6 | spec: 7 | selector: 8 | matchLabels: 9 | name: lnet-configuration 10 | template: 11 | metadata: 12 | labels: 13 | name: lnet-configuration 14 | spec: 15 | nodeSelector: 16 | kmm.node.kubernetes.io/openshift-kmm.lnet.ready: 17 | hostNetwork: true 18 | tolerations: 19 | # this toleration is to have the daemonset runnable on master nodes 20 | # remove it if your masters can't run pods 21 | - effect: NoSchedule 22 | key: node-role.kubernetes.io/master 23 | serviceAccount: kmm-operator-module-loader 24 | serviceAccountName: kmm-operator-module-loader 25 | containers: 26 | - command: 27 | - sleep 28 | - infinity 29 | image: "image-registry.openshift-image-registry.svc:5000/openshift-kmm/lustre-client-moduleloader:5.14.0-284.25.1.el9_2.x86_64" 30 | imagePullPolicy: IfNotPresent 31 | lifecycle: 32 | postStart: 33 | exec: 34 | command: 35 | - /bin/sh 36 | - -c 37 | - lnetctl lnet configure; 38 | lnetctl net add --net tcp --if br-ex # change interface according to your cluster 39 | preStop: 40 | exec: 41 | command: 42 | - /bin/sh 43 | - -c 44 | - lnetctl lnet unconfigure && lustre_rmmod && yum remove -y lustre-client kmod-lustre-client && rm -rf /opt/lib/modules/$(uname -r)/extra/lustre-client 45 | name: lnet-configuration 46 | resources: {} 47 | securityContext: 48 | allowPrivilegeEscalation: true 49 | privileged: true 50 | runAsUser: 0 51 | seLinuxOptions: 52 | type: spc_t 53 | terminationMessagePath: /dev/termination-log 54 | terminationMessagePolicy: File 55 | 56 | -------------------------------------------------------------------------------- 
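The lnet-configuration DaemonSet above brings LNet up over TCP in its postStart hook (`lnetctl lnet configure; lnetctl net add --net tcp --if br-ex`), with a comment telling you to change the interface to match your cluster. For an InfiniBand fabric — the case the commented-out ko2iblnd module is intended for — the hook would add an o2ib network instead. A sketch of that variant; the interface name `ib0` is an assumption and must match your nodes:

```yaml
# Hypothetical postStart hook for an InfiniBand (o2ib) LNet network.
# The "ib0" interface name is an assumption — replace it with your IB interface.
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - -c
        - lnetctl lnet configure;
          lnetctl net add --net o2ib --if ib0
```

Note this variant also requires the ko2iblnd module to be loaded (it is listed but commented out in the lnet Module's modulesLoadingOrder) before LNet can attach an o2ib network.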
/deploy/openshift/lustre-module/lnet-mod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kmm.sigs.x-k8s.io/v1beta1 2 | kind: Module 3 | metadata: 4 | name: lnet 5 | spec: 6 | moduleLoader: 7 | container: 8 | modprobe: 9 | moduleName: lnet 10 | dirName: /opt 11 | modulesLoadingOrder: 12 | - lnet 13 | - ksocklnd 14 | # - ko2iblnd # for Infiniband network 15 | 16 | kernelMappings: # At least one item is required 17 | # For any other kernel, build the image using the Dockerfile in the my-kmod ConfigMap. 18 | - regexp: '^.+$' 19 | containerImage: "image-registry.openshift-image-registry.svc:5000/openshift-kmm/lustre-client-moduleloader:5.14.0-284.25.1.el9_2.x86_64" 20 | build: 21 | baseImageRegistryTLS: 22 | # Optional and not recommended! If true, the build will be allowed to pull the image in the Dockerfile's 23 | # FROM instruction using plain HTTP. 24 | insecure: false 25 | # Optional and not recommended! If true, the build will skip any TLS server certificate validation when 26 | # pulling the image in the Dockerfile's FROM instruction using plain HTTP. 27 | insecureSkipTLSVerify: false 28 | dockerfileConfigMap: # Required 29 | name: lustre-ci-dockerfile 30 | registryTLS: 31 | # Optional and not recommended! If true, KMM will be allowed to check if the container image already exists 32 | # using plain HTTP. 33 | insecure: false 34 | # Optional and not recommended! If true, KMM will skip any TLS server certificate validation when checking if 35 | # the container image already exists. 
36 | insecureSkipTLSVerify: false 37 | 38 | selector: 39 | node-role.kubernetes.io/worker: "" 40 | -------------------------------------------------------------------------------- /deploy/openshift/lustre-module/lustre-dockerfile-configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: lustre-ci-dockerfile 5 | data: 6 | dockerfile: | 7 | ARG DTK_AUTO 8 | FROM ${DTK_AUTO} as builder 9 | ARG KERNEL_VERSION 10 | WORKDIR /build/ 11 | ENV SMDEV_CONTAINER_OFF=1 12 | RUN git clone -b ${KERNEL_VERSION} https://github.com/Qeas/rpms.git # change this to your repo with matching rpms 13 | RUN yum -y install rpms/*.rpm 14 | RUN mkdir -p /opt/lib && \ 15 | cp -r /lib/modules /opt/lib/modules 16 | RUN depmod -b /opt 17 | -------------------------------------------------------------------------------- /deploy/openshift/lustre-module/lustre-mod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kmm.sigs.x-k8s.io/v1beta1 2 | kind: Module 3 | metadata: 4 | name: lustre 5 | spec: 6 | moduleLoader: 7 | container: 8 | modprobe: 9 | moduleName: lustre 10 | dirName: /opt 11 | modulesLoadingOrder: 12 | - lustre 13 | - mgc 14 | 15 | kernelMappings: # At least one item is required 16 | # For any other kernel, build the image using the Dockerfile in the my-kmod ConfigMap. 17 | - regexp: '^.+$' 18 | containerImage: "image-registry.openshift-image-registry.svc:5000/openshift-kmm/lustre-client-moduleloader:5.14.0-284.25.1.el9_2.x86_64" 19 | build: 20 | baseImageRegistryTLS: 21 | # Optional and not recommended! If true, the build will be allowed to pull the image in the Dockerfile's 22 | # FROM instruction using plain HTTP. 23 | insecure: false 24 | # Optional and not recommended! If true, the build will skip any TLS server certificate validation when 25 | # pulling the image in the Dockerfile's FROM instruction using plain HTTP. 
26 | insecureSkipTLSVerify: false 27 | dockerfileConfigMap: # Required 28 | name: lustre-ci-dockerfile 29 | registryTLS: 30 | # Optional and not recommended! If true, KMM will be allowed to check if the container image already exists 31 | # using plain HTTP. 32 | insecure: false 33 | # Optional and not recommended! If true, KMM will skip any TLS server certificate validation when checking if 34 | # the container image already exists. 35 | insecureSkipTLSVerify: false 36 | 37 | selector: 38 | node-role.kubernetes.io/worker: "" 39 | -------------------------------------------------------------------------------- /deploy/openshift/snapshots/snapshotter.yaml: -------------------------------------------------------------------------------- 1 | # RBAC file for the snapshot controller. 2 | # 3 | # The snapshot controller implements the control loop for CSI snapshot functionality. 4 | # It should be installed as part of the base Kubernetes distribution in an appropriate 5 | # namespace for components implementing base system functionality. For installing with 6 | # Vanilla Kubernetes, kube-system makes sense for the namespace. 7 | 8 | apiVersion: v1 9 | kind: ServiceAccount 10 | metadata: 11 | name: snapshot-controller 12 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. 
kube-system 13 | 14 | --- 15 | kind: ClusterRole 16 | apiVersion: rbac.authorization.k8s.io/v1 17 | metadata: 18 | # rename if there are conflicts 19 | name: snapshot-controller-runner 20 | rules: 21 | - apiGroups: [""] 22 | resources: ["persistentvolumes"] 23 | verbs: ["get", "list", "watch"] 24 | - apiGroups: [""] 25 | resources: ["persistentvolumeclaims"] 26 | verbs: ["get", "list", "watch", "update"] 27 | - apiGroups: ["storage.k8s.io"] 28 | resources: ["storageclasses"] 29 | verbs: ["get", "list", "watch"] 30 | - apiGroups: [""] 31 | resources: ["events"] 32 | verbs: ["list", "watch", "create", "update", "patch"] 33 | - apiGroups: ["snapshot.storage.k8s.io"] 34 | resources: ["volumesnapshotclasses"] 35 | verbs: ["get", "list", "watch"] 36 | - apiGroups: ["snapshot.storage.k8s.io"] 37 | resources: ["volumesnapshotcontents"] 38 | verbs: ["create", "get", "list", "watch", "update", "delete"] 39 | - apiGroups: ["snapshot.storage.k8s.io"] 40 | resources: ["volumesnapshots"] 41 | verbs: ["get", "list", "watch", "update"] 42 | - apiGroups: ["snapshot.storage.k8s.io"] 43 | resources: ["volumesnapshots/status"] 44 | verbs: ["update"] 45 | 46 | --- 47 | kind: ClusterRoleBinding 48 | apiVersion: rbac.authorization.k8s.io/v1 49 | metadata: 50 | name: snapshot-controller-role 51 | subjects: 52 | - kind: ServiceAccount 53 | name: snapshot-controller 54 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. kube-system 55 | roleRef: 56 | kind: ClusterRole 57 | # change the name also here if the ClusterRole gets renamed 58 | name: snapshot-controller-runner 59 | apiGroup: rbac.authorization.k8s.io 60 | 61 | --- 62 | kind: Role 63 | apiVersion: rbac.authorization.k8s.io/v1 64 | metadata: 65 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. 
kube-system 66 | name: snapshot-controller-leaderelection 67 | rules: 68 | - apiGroups: ["coordination.k8s.io"] 69 | resources: ["leases"] 70 | verbs: ["get", "watch", "list", "delete", "update", "create"] 71 | 72 | --- 73 | kind: RoleBinding 74 | apiVersion: rbac.authorization.k8s.io/v1 75 | metadata: 76 | name: snapshot-controller-leaderelection 77 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. kube-system 78 | subjects: 79 | - kind: ServiceAccount 80 | name: snapshot-controller 81 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. kube-system 82 | roleRef: 83 | kind: Role 84 | name: snapshot-controller-leaderelection 85 | apiGroup: rbac.authorization.k8s.io 86 | 87 | # This YAML file shows how to deploy the snapshot controller 88 | 89 | # The snapshot controller implements the control loop for CSI snapshot functionality. 90 | # It should be installed as part of the base Kubernetes distribution in an appropriate 91 | # namespace for components implementing base system functionality. For installing with 92 | # Vanilla Kubernetes, kube-system makes sense for the namespace. 93 | 94 | --- 95 | kind: StatefulSet 96 | apiVersion: apps/v1 97 | metadata: 98 | name: snapshot-controller 99 | namespace: default # TODO: replace with the namespace you want for your controller, e.g. 
kube-system 100 | spec: 101 | serviceName: "snapshot-controller" 102 | replicas: 1 103 | selector: 104 | matchLabels: 105 | app: snapshot-controller 106 | template: 107 | metadata: 108 | labels: 109 | app: snapshot-controller 110 | spec: 111 | serviceAccount: snapshot-controller 112 | containers: 113 | - name: snapshot-controller 114 | image: k8s.gcr.io/sig-storage/snapshot-controller:v4.0.0 115 | args: 116 | - "--v=5" 117 | - "--leader-election=false" 118 | imagePullPolicy: IfNotPresent -------------------------------------------------------------------------------- /examples/exa-dynamic-nginx-sub.yaml: -------------------------------------------------------------------------------- 1 | # -------------------------------------- 2 | # Exascaler CSI Driver - Storage Class 3 | # -------------------------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: StorageClass 7 | metadata: 8 | name: exascaler-csi-file-driver-sc-sub 9 | provisioner: exa.csi.ddn.com 10 | allowVolumeExpansion: true 11 | parameters: 12 | exaMountUid: "1001" # Uid which will be used to access the volume in pod. Should be synced between EXA server and clients. 13 | exaMountGid: "1001" # Gid which will be used to access the volume in pod. Should be synced between EXA server and clients. 
14 | --- 15 | 16 | # ------------------------------------------------ 17 | # Exascaler CSI Driver - Persistent Volume Claim 18 | # ------------------------------------------------ 19 | 20 | apiVersion: v1 21 | kind: PersistentVolumeClaim 22 | metadata: 23 | name: exascaler-csi-file-driver-pvc-sub 24 | spec: 25 | storageClassName: exascaler-csi-file-driver-sc-sub 26 | accessModes: 27 | - ReadWriteMany 28 | resources: 29 | requests: 30 | storage: 1Gi 31 | --- 32 | 33 | # --------- 34 | # Nginx pod 35 | # --------- 36 | 37 | apiVersion: v1 38 | kind: Pod 39 | metadata: 40 | name: nginx-dynamic-volume-sub 41 | spec: 42 | securityContext: 43 | runAsUser: 1001 44 | containers: 45 | - image: nginxinc/nginx-unprivileged 46 | imagePullPolicy: IfNotPresent 47 | name: nginx 48 | command: [ "/bin/bash", "-c", "--" ] 49 | args: [ "while true; do echo $(date) > /data/timefile; sleep 5; sync; done;" ] 50 | ports: 51 | - containerPort: 80 52 | protocol: TCP 53 | volumeMounts: 54 | - mountPath: /data 55 | name: exascaler-csi-file-driver-data 56 | - image: nginxinc/nginx-unprivileged 57 | imagePullPolicy: IfNotPresent 58 | name: nginx-sub1 59 | command: [ "/bin/bash", "-c", "--" ] 60 | args: [ "while true; do echo $(date) > /data/timefile; sleep 5; sync; done;" ] 61 | ports: 62 | - containerPort: 81 63 | protocol: TCP 64 | volumeMounts: 65 | - mountPath: /data 66 | subPath: sub1 67 | name: exascaler-csi-file-driver-data 68 | - image: nginxinc/nginx-unprivileged 69 | imagePullPolicy: IfNotPresent 70 | name: nginx-sub2 71 | command: [ "/bin/bash", "-c", "--" ] 72 | args: [ "while true; do echo $(date) > /data/timefile; sleep 5; sync; done;" ] 73 | ports: 74 | - containerPort: 82 75 | protocol: TCP 76 | volumeMounts: 77 | - mountPath: /data 78 | subPath: sub2 79 | name: exascaler-csi-file-driver-data 80 | volumes: 81 | - name: exascaler-csi-file-driver-data 82 | persistentVolumeClaim: 83 | claimName: exascaler-csi-file-driver-pvc-sub 84 | readOnly: false 85 | 
-------------------------------------------------------------------------------- /examples/exa-dynamic-nginx-zone.yaml: -------------------------------------------------------------------------------- 1 | # -------------------------------------- 2 | # Exascaler CSI Driver - Storage Class 3 | # -------------------------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: StorageClass 7 | metadata: 8 | name: exascaler-csi-file-driver-sc-zone 9 | provisioner: exa.csi.ddn.com 10 | allowedTopologies: 11 | - matchLabelExpressions: 12 | - key: topology.exa.csi.ddn.com/zone 13 | values: 14 | - zone-1 15 | allowVolumeExpansion: true 16 | parameters: 17 | exaMountUid: "1001" # Uid which will be used to access the volume in pod. Should be synced between EXA server and clients. 18 | --- 19 | 20 | # ------------------------------------------------ 21 | # Exascaler CSI Driver - Persistent Volume Claim 22 | # ------------------------------------------------ 23 | 24 | apiVersion: v1 25 | kind: PersistentVolumeClaim 26 | metadata: 27 | name: exascaler-csi-file-driver-pvc-zone 28 | spec: 29 | storageClassName: exascaler-csi-file-driver-sc-zone 30 | accessModes: 31 | - ReadWriteMany 32 | resources: 33 | requests: 34 | storage: 1Gi 35 | --- 36 | 37 | # --------- 38 | # Nginx pod 39 | # --------- 40 | 41 | apiVersion: v1 42 | kind: Pod 43 | metadata: 44 | name: nginx-dynamic-volume-zone 45 | spec: 46 | topologySpreadConstraints: 47 | - maxSkew: 1 48 | topologyKey: topology.exa.csi.ddn.com/zone 49 | whenUnsatisfiable: DoNotSchedule 50 | labelSelector: 51 | matchLabels: 52 | topology.exa.csi.ddn.com/zone: zone-1 53 | securityContext: 54 | runAsUser: 1001 55 | containers: 56 | - image: nginxinc/nginx-unprivileged 57 | imagePullPolicy: IfNotPresent 58 | name: nginx 59 | ports: 60 | - containerPort: 80 61 | protocol: TCP 62 | volumeMounts: 63 | - mountPath: /data 64 | name: exascaler-csi-file-driver-data 65 | volumes: 66 | - name: exascaler-csi-file-driver-data 67 | 
persistentVolumeClaim: 68 | claimName: exascaler-csi-file-driver-pvc-zone 69 | readOnly: false 70 | -------------------------------------------------------------------------------- /examples/exa-dynamic-nginx.yaml: -------------------------------------------------------------------------------- 1 | # -------------------------------------- 2 | # Exascaler CSI Driver - Storage Class 3 | # -------------------------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: StorageClass 7 | metadata: 8 | name: exascaler-csi-file-driver-sc 9 | provisioner: exa.csi.ddn.com 10 | allowVolumeExpansion: true 11 | --- 12 | 13 | # ------------------------------------------------ 14 | # Exascaler CSI Driver - Persistent Volume Claim 15 | # ------------------------------------------------ 16 | 17 | apiVersion: v1 18 | kind: PersistentVolumeClaim 19 | metadata: 20 | name: exascaler-csi-file-driver-pvc 21 | spec: 22 | storageClassName: exascaler-csi-file-driver-sc 23 | accessModes: 24 | - ReadWriteMany 25 | resources: 26 | requests: 27 | storage: 1Gi 28 | --- 29 | 30 | # --------- 31 | # Nginx pod 32 | # --------- 33 | 34 | apiVersion: v1 35 | kind: Pod 36 | metadata: 37 | name: nginx-dynamic-volume 38 | spec: 39 | containers: 40 | - image: nginx 41 | imagePullPolicy: IfNotPresent 42 | name: nginx 43 | ports: 44 | - containerPort: 80 45 | protocol: TCP 46 | volumeMounts: 47 | - mountPath: /data 48 | name: exascaler-csi-file-driver-data 49 | volumes: 50 | - name: exascaler-csi-file-driver-data 51 | persistentVolumeClaim: 52 | claimName: exascaler-csi-file-driver-pvc 53 | readOnly: false 54 | -------------------------------------------------------------------------------- /examples/exa-nginx-replicaset.yaml: -------------------------------------------------------------------------------- 1 | # -------------------------------------- 2 | # Exascaler CSI Driver - Storage Class 3 | # -------------------------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: StorageClass 7
| metadata: 8 | name: exascaler-csi-file-driver-sc-replicaset 9 | provisioner: exa.csi.ddn.com 10 | allowedTopologies: 11 | - matchLabelExpressions: 12 | - key: topology.exa.csi.ddn.com/zone 13 | values: 14 | - zone-1 15 | allowVolumeExpansion: true 16 | parameters: 17 | exaMountUid: "1000730000" # Uid which will be used to access the volume in pod. Should be synced between EXA server and clients. 18 | --- 19 | 20 | # ------------------------------------------------ 21 | # Exascaler CSI Driver - Persistent Volume Claim 22 | # ------------------------------------------------ 23 | 24 | apiVersion: v1 25 | kind: PersistentVolumeClaim 26 | metadata: 27 | name: exascaler-csi-file-driver-pvc-replicaset 28 | spec: 29 | storageClassName: exascaler-csi-file-driver-sc-replicaset 30 | accessModes: 31 | - ReadWriteMany 32 | resources: 33 | requests: 34 | storage: 1Gi 35 | --- 36 | 37 | 38 | # --------- 39 | # Nginx pods 40 | # --------- 41 | 42 | apiVersion: apps/v1 43 | kind: ReplicaSet 44 | metadata: 45 | name: nginx-dynamic-volume 46 | labels: 47 | app: nginx-service 48 | spec: 49 | replicas: 2 50 | selector: 51 | matchLabels: 52 | app: nginx-service 53 | template: 54 | metadata: 55 | labels: 56 | app: nginx-service 57 | spec: 58 | securityContext: 59 | runAsUser: 1000730000 60 | containers: 61 | - image: nginxinc/nginx-unprivileged 62 | imagePullPolicy: IfNotPresent 63 | name: nginx 64 | ports: 65 | - containerPort: 80 66 | protocol: TCP 67 | volumeMounts: 68 | - mountPath: /data 69 | name: exa-csi-driver-data 70 | volumes: 71 | - name: exa-csi-driver-data 72 | persistentVolumeClaim: 73 | claimName: exascaler-csi-file-driver-pvc-replicaset 74 | readOnly: false 75 | --- 76 | 77 | 78 | # ----------------- 79 | # Service for nginx 80 | # ----------------- 81 | 82 | kind: Service 83 | apiVersion: v1 84 | metadata: 85 | name: nginx-service-dynamic-volume 86 | spec: 87 | selector: 88 | app: nginx-service 89 | ports: 90 | - protocol: TCP 91 | port: 8888 92 | targetPort: 80 93 | 
-------------------------------------------------------------------------------- /examples/nginx-combined-volumes.yaml: -------------------------------------------------------------------------------- 1 | # -------------------------------------- 2 | # Exascaler CSI Driver - Storage Class 3 | # -------------------------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: StorageClass 7 | metadata: 8 | name: exascaler-csi-file-driver-sc-dynamic 9 | provisioner: exa.csi.ddn.com 10 | allowVolumeExpansion: true 11 | parameters: 12 | exaMountUid: "1001" # Uid which will be used to access the volume in pod. 13 | --- 14 | 15 | # ------------------------------------------------ 16 | # Exascaler CSI Driver - Persistent Volume Claim 17 | # ------------------------------------------------ 18 | 19 | apiVersion: v1 20 | kind: PersistentVolumeClaim 21 | metadata: 22 | name: exascaler-csi-file-driver-pvc-dynamic 23 | spec: 24 | storageClassName: exascaler-csi-file-driver-sc-dynamic 25 | accessModes: 26 | - ReadWriteMany 27 | resources: 28 | requests: 29 | storage: 1Gi 30 | --- 31 | 32 | # -------------------------------------- 33 | # Exascaler CSI Driver - Storage Class 34 | # -------------------------------------- 35 | 36 | apiVersion: storage.k8s.io/v1 37 | kind: StorageClass 38 | metadata: 39 | name: exascaler-csi-file-driver-sc-nginx-persistent 40 | provisioner: exa.csi.ddn.com 41 | allowVolumeExpansion: true 42 | --- 43 | 44 | # ------------------------------------------ 45 | # Exascaler CSI Driver - Persistent Volume 46 | # ------------------------------------------ 47 | 48 | apiVersion: v1 49 | kind: PersistentVolume 50 | metadata: 51 | name: exascaler-csi-file-driver-pv-nginx-persistent 52 | labels: 53 | name: exascaler-csi-file-driver-pv-nginx-persistent 54 | spec: 55 | storageClassName: exascaler-csi-file-driver-sc-nginx-persistent 56 | accessModes: 57 | - ReadWriteMany 58 | capacity: 59 | storage: 1Gi 60 | csi: 61 | driver: exa.csi.ddn.com 62 | volumeHandle: 
exa1:10.3.3.200@tcp;/exaFS:/mountPoint-csi:/nginx-persistent 63 | volumeAttributes: # volumeAttributes are the alternative to storageClass params for static (pre-created) volumes. 64 | exaMountUid: "1001" # Uid which will be used to access the volume from the pod. 65 | --- 66 | 67 | # ------------------------------------------------ 68 | # Exascaler CSI Driver - Persistent Volume Claim 69 | # ------------------------------------------------ 70 | 71 | apiVersion: v1 72 | kind: PersistentVolumeClaim 73 | metadata: 74 | name: exascaler-csi-file-driver-pvc-nginx-persistent 75 | spec: 76 | storageClassName: exascaler-csi-file-driver-sc-nginx-persistent 77 | accessModes: 78 | - ReadWriteMany 79 | resources: 80 | requests: 81 | storage: 1Gi 82 | selector: 83 | matchLabels: 84 | # to create 1-1 relationship for pod - persistent volume use unique labels 85 | name: exascaler-csi-file-driver-pv-nginx-persistent 86 | --- 87 | 88 | # --------- 89 | # Nginx pod 90 | # --------- 91 | 92 | apiVersion: v1 93 | kind: Pod 94 | metadata: 95 | name: nginx-dynamic-volume-combined 96 | spec: 97 | securityContext: 98 | runAsUser: 1001 99 | containers: 100 | - image: nginxinc/nginx-unprivileged 101 | imagePullPolicy: IfNotPresent 102 | name: nginx 103 | ports: 104 | - containerPort: 80 105 | protocol: TCP 106 | volumeMounts: 107 | - mountPath: /usr/share/nginx/html 108 | name: exascaler-persistent-volume 109 | - mountPath: /data 110 | name: exascaler-dynamic-volume 111 | volumes: 112 | - name: exascaler-persistent-volume 113 | persistentVolumeClaim: 114 | claimName: exascaler-csi-file-driver-pvc-nginx-persistent 115 | readOnly: false 116 | - name: exascaler-dynamic-volume 117 | persistentVolumeClaim: 118 | claimName: exascaler-csi-file-driver-pvc-dynamic 119 | readOnly: false 120 | -------------------------------------------------------------------------------- /examples/nginx-from-snapshot.yaml: -------------------------------------------------------------------------------- 1 | # 
-------------------------------------- 2 | # Exascaler CSI Driver - Storage Class 3 | # -------------------------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: StorageClass 7 | metadata: 8 | name: exascaler-csi-file-driver-sc 9 | provisioner: exa.csi.ddn.com 10 | allowVolumeExpansion: true 11 | parameters: 12 | exaMountUid: "1001" # Uid which will be used to access the volume in pod. Should be synced between EXA server and clients. 13 | --- 14 | 15 | # ------------------------------------------------ 16 | # Exascaler CSI Driver - Persistent Volume Claim 17 | # ------------------------------------------------ 18 | 19 | apiVersion: v1 20 | kind: PersistentVolumeClaim 21 | metadata: 22 | name: exascaler-csi-file-driver-pvc-from-snapshot-1 23 | spec: 24 | storageClassName: exascaler-csi-file-driver-sc 25 | dataSource: 26 | kind: VolumeSnapshot 27 | apiGroup: snapshot.storage.k8s.io 28 | name: snapshot-test # snapshot created by ./snapshot-from-dynamic.yaml 29 | accessModes: 30 | - ReadWriteMany 31 | resources: 32 | requests: 33 | storage: 3Gi 34 | --- 35 | 36 | # --------- 37 | # Nginx pod 38 | # --------- 39 | 40 | apiVersion: v1 41 | kind: Pod 42 | metadata: 43 | name: nginx-dynamic-volume-from-snapshot 44 | spec: 45 | securityContext: 46 | runAsUser: 1001 47 | containers: 48 | - image: nginxinc/nginx-unprivileged 49 | imagePullPolicy: IfNotPresent 50 | name: nginx 51 | command: [ "/bin/bash", "-c", "--" ] 52 | args: [ "while true; do echo $(date) > /data/timefile; sleep 5; sync; done;" ] 53 | ports: 54 | - containerPort: 80 55 | protocol: TCP 56 | volumeMounts: 57 | - mountPath: /data 58 | name: exascaler-csi-file-driver-data 59 | volumes: 60 | - name: exascaler-csi-file-driver-data 61 | persistentVolumeClaim: 62 | claimName: exascaler-csi-file-driver-pvc-from-snapshot-1 63 | readOnly: false 64 | -------------------------------------------------------------------------------- /examples/nginx-persistent-volume.yaml: 
-------------------------------------------------------------------------------- 1 | # Nginx pod with pre-provisioned storage using Exascaler CSI driver 2 | # 3 | # $ kubectl apply -f examples/nginx-persistent-volume.yaml 4 | # 5 | 6 | 7 | # -------------------------------------- 8 | # Exascaler CSI Driver - Storage Class 9 | # -------------------------------------- 10 | 11 | apiVersion: storage.k8s.io/v1 12 | kind: StorageClass 13 | metadata: 14 | name: exascaler-csi-file-driver-sc-nginx-persistent 15 | provisioner: exa.csi.ddn.com 16 | allowVolumeExpansion: true 17 | --- 18 | 19 | 20 | # ------------------------------------------ 21 | # Exascaler CSI Driver - Persistent Volume 22 | # ------------------------------------------ 23 | 24 | apiVersion: v1 25 | kind: PersistentVolume 26 | metadata: 27 | name: exascaler-csi-file-driver-pv-nginx-persistent 28 | labels: 29 | name: exascaler-csi-file-driver-pv-nginx-persistent 30 | spec: 31 | storageClassName: exascaler-csi-file-driver-sc-nginx-persistent 32 | accessModes: 33 | - ReadWriteMany 34 | capacity: 35 | storage: 1Gi 36 | csi: 37 | driver: exa.csi.ddn.com 38 | volumeHandle: exa1:10.204.86.113@tcp;/testfs:/mnt:/nginx-persistent # 39 | volumeAttributes: # volumeAttributes are the alternative to storageClass params for static (pre-created) volumes. 40 | exaMountUid: "1001" # Uid which will be used to access the volume in pod. Should be synced between EXA server and clients. 
41 | projectId: "4184814" 42 | --- 43 | 44 | 45 | # ------------------------------------------------ 46 | # Exascaler CSI Driver - Persistent Volume Claim 47 | # ------------------------------------------------ 48 | 49 | apiVersion: v1 50 | kind: PersistentVolumeClaim 51 | metadata: 52 | name: exascaler-csi-file-driver-pvc-nginx-persistent 53 | spec: 54 | storageClassName: exascaler-csi-file-driver-sc-nginx-persistent 55 | accessModes: 56 | - ReadWriteMany 57 | resources: 58 | requests: 59 | storage: 1Gi 60 | selector: 61 | matchLabels: 62 | # to create 1-1 relationship for pod - persistent volume use unique labels 63 | name: exascaler-csi-file-driver-pv-nginx-persistent 64 | --- 65 | 66 | 67 | # --------- 68 | # Nginx pod 69 | # --------- 70 | 71 | apiVersion: v1 72 | kind: Pod 73 | metadata: 74 | name: nginx-persistent-volume 75 | spec: 76 | securityContext: 77 | runAsUser: 1001 78 | containers: 79 | - image: nginxinc/nginx-unprivileged 80 | imagePullPolicy: IfNotPresent 81 | name: nginx 82 | ports: 83 | - containerPort: 80 84 | protocol: TCP 85 | volumeMounts: 86 | - mountPath: /usr/share/nginx/html 87 | name: exascaler-csi-file-driver-data 88 | volumes: 89 | - name: exascaler-csi-file-driver-data 90 | persistentVolumeClaim: 91 | claimName: exascaler-csi-file-driver-pvc-nginx-persistent 92 | readOnly: false -------------------------------------------------------------------------------- /examples/pvc-bindmount.yaml: -------------------------------------------------------------------------------- 1 | # -------------------------------------- 2 | # Exascaler CSI Driver - Storage Class 3 | # -------------------------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: StorageClass 7 | metadata: 8 | name: exascaler-csi-file-driver-sc 9 | provisioner: exa.csi.ddn.com 10 | allowVolumeExpansion: true 11 | parameters: 12 | bindMount: "true" # Determines whether the volume will be bind-mounted or mounted as a separate Lustre mount. 
13 | --- 14 | 15 | # ------------------------------------------------ 16 | # Exascaler CSI Driver - Persistent Volume Claim 17 | # ------------------------------------------------ 18 | 19 | apiVersion: v1 20 | kind: PersistentVolumeClaim 21 | metadata: 22 | name: exascaler-csi-file-driver-pvc 23 | spec: 24 | storageClassName: exascaler-csi-file-driver-sc 25 | accessModes: 26 | - ReadWriteMany 27 | resources: 28 | requests: 29 | storage: 5Gi 30 | --- 31 | -------------------------------------------------------------------------------- /examples/pvc-configname.yaml: -------------------------------------------------------------------------------- 1 | # -------------------------------------- 2 | # Exascaler CSI Driver - Storage Class 3 | # -------------------------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: StorageClass 7 | metadata: 8 | name: exascaler-csi-file-driver-sc-exa1 9 | provisioner: exa.csi.ddn.com 10 | allowVolumeExpansion: true 11 | parameters: 12 | configName: "exa1" 13 | --- 14 | 15 | # ------------------------------------------------ 16 | # Exascaler CSI Driver - Persistent Volume Claim 17 | # ------------------------------------------------ 18 | 19 | apiVersion: v1 20 | kind: PersistentVolumeClaim 21 | metadata: 22 | name: exascaler-csi-file-driver-pvc-exa1 23 | spec: 24 | storageClassName: exascaler-csi-file-driver-sc-exa1 25 | accessModes: 26 | - ReadWriteMany 27 | resources: 28 | requests: 29 | storage: 5Gi 30 | --- 31 | -------------------------------------------------------------------------------- /examples/sc-override-config-dynamic.yaml: -------------------------------------------------------------------------------- 1 | # -------------------------------------- 2 | # Exascaler CSI Driver - Storage Class 3 | # -------------------------------------- 4 | 5 | apiVersion: storage.k8s.io/v1 6 | kind: StorageClass 7 | metadata: 8 | name: exascaler-csi-file-driver-sc-override-config 9 | provisioner: exa.csi.ddn.com 10 | 
allowVolumeExpansion: true 11 | parameters: 12 | exaMountUid: "1001" # Uid which will be used to access the volume in pod. Should be synced between EXA server and clients. 13 | exaMountGid: "1001" # Gid which will be used to access the volume in pod. Should be synced between EXA server and clients. 14 | volumeDirPermissions: "1750" 15 | mountPoint: /exa-sc1 # mountpoint on the host where the exaFS will be mounted 16 | exaFS: 10.204.86.217@tcp:/testfs/sc1 # default path to exa filesystem 17 | --- 18 | 19 | # ------------------------------------------------ 20 | # Exascaler CSI Driver - Persistent Volume Claim 21 | # ------------------------------------------------ 22 | 23 | apiVersion: v1 24 | kind: PersistentVolumeClaim 25 | metadata: 26 | name: exascaler-csi-file-driver-pvc-sc-override-1 27 | spec: 28 | storageClassName: exascaler-csi-file-driver-sc-override-config 29 | accessModes: 30 | - ReadWriteMany 31 | resources: 32 | requests: 33 | storage: 1Gi 34 | --- 35 | 36 | # --------- 37 | # Nginx pod 38 | # --------- 39 | 40 | apiVersion: v1 41 | kind: Pod 42 | metadata: 43 | name: nginx-dynamic-volume-sc-override-1 44 | spec: 45 | securityContext: 46 | runAsUser: 1001 47 | runAsGroup: 1001 48 | containers: 49 | - image: nginxinc/nginx-unprivileged 50 | imagePullPolicy: IfNotPresent 51 | name: nginx 52 | command: [ "/bin/bash", "-c", "--" ] 53 | args: [ "while true; do echo $(date) > /data/timefile; sleep 5; sync; done;" ] 54 | ports: 55 | - containerPort: 80 56 | protocol: TCP 57 | volumeMounts: 58 | - mountPath: /data 59 | name: exascaler-csi-file-driver-data 60 | volumes: 61 | - name: exascaler-csi-file-driver-data 62 | persistentVolumeClaim: 63 | claimName: exascaler-csi-file-driver-pvc-sc-override-1 64 | readOnly: false 65 | -------------------------------------------------------------------------------- /examples/snapshot-class.yaml: -------------------------------------------------------------------------------- 1 | # Create a new snapshot class 2 | 3 | # $ 
kubectl apply -f examples/snapshot-class.yaml 4 | # 5 | 6 | apiVersion: snapshot.storage.k8s.io/v1 7 | kind: VolumeSnapshotClass 8 | metadata: 9 | name: exascaler-csi-snapshot-class 10 | driver: exa.csi.ddn.com 11 | deletionPolicy: Delete 12 | parameters: 13 | snapshotUtility: tar 14 | snapshotFolder: csi-snapshots 15 | # dtarPath: /opt/ddn/mpifileutils/bin/dtar 16 | snapshotMd5Verify: "false" 17 | -------------------------------------------------------------------------------- /examples/snapshot-from-dynamic.yaml: -------------------------------------------------------------------------------- 1 | # Take a new snapshot 2 | # 3 | # !!! Make sure to run exa-dynamic-nginx.yaml before running this example 4 | # !!! Make sure to run snapshot-class.yaml before running this example 5 | # 6 | # $ kubectl apply -f examples/snapshot-from-dynamic.yaml 7 | # 8 | 9 | apiVersion: snapshot.storage.k8s.io/v1 10 | kind: VolumeSnapshot 11 | metadata: 12 | name: snapshot-test 13 | spec: 14 | volumeSnapshotClassName: exascaler-csi-snapshot-class 15 | source: 16 | persistentVolumeClaimName: exascaler-csi-file-driver-pvc 17 | -------------------------------------------------------------------------------- /examples/snapshot-from-persistent.yaml: -------------------------------------------------------------------------------- 1 | # Take a new snapshot 2 | # 3 | # !!! Make sure to run nginx-persistent-volume.yaml before running this example 4 | # !!! Make sure to run snapshot-class.yaml before running this example 5 | # 6 | # $ kubectl apply -f examples/snapshot-from-persistent.yaml 7 | # 8 | 9 | apiVersion: snapshot.storage.k8s.io/v1 10 | kind: VolumeSnapshot 11 | metadata: 12 | name: snapshot-from-persistent 13 | spec: 14 | volumeSnapshotClassName: exascaler-csi-snapshot-class 15 | source: 16 | persistentVolumeClaimName: exascaler-csi-file-driver-pvc-nginx-persistent 17 | --------------------------------------------------------------------------------
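
Note on the static-provisioning examples above: judging from `examples/nginx-persistent-volume.yaml` and `examples/nginx-combined-volumes.yaml`, the `volumeHandle` of a pre-created PersistentVolume appears to pack four colon-separated fields: a driver config name, the EXAScaler filesystem (Lustre NID plus fsname, joined by `;`), the host mountpoint, and the volume directory. The sketch below splits a handle on that inferred layout; the field names are illustrative only and are not taken from the driver source.

```python
# Hedged sketch: decompose an EXA CSI volumeHandle such as
#   exa1:10.204.86.113@tcp;/testfs:/mnt:/nginx-persistent
# on the apparent <configName>:<exaFS>:<mountPoint>:<volumeDir> layout.
# The colon is safe to split on because the exaFS field uses ';'
# (not ':') between the Lustre NID and the filesystem path.

def parse_volume_handle(handle: str) -> dict:
    """Split a volumeHandle into its four apparent components."""
    config_name, exa_fs, mount_point, volume_dir = handle.split(":")
    return {
        "configName": config_name,   # which driver config entry to use, e.g. "exa1"
        "exaFS": exa_fs,             # Lustre NID + fsname, e.g. "10.204.86.113@tcp;/testfs"
        "mountPoint": mount_point,   # host mountpoint for the filesystem, e.g. "/mnt"
        "volumeDir": volume_dir,     # directory on exaFS backing this PV
    }

if __name__ == "__main__":
    print(parse_volume_handle("exa1:10.204.86.113@tcp;/testfs:/mnt:/nginx-persistent"))
```

This is only a convenience for checking that a hand-written `volumeHandle` has the expected shape before applying a static PV manifest; the driver itself performs its own parsing.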