├── MIT-LICENSE.txt ├── Readme.md ├── load-test ├── .helmignore ├── Chart.yaml ├── templates │ ├── NOTES.txt │ ├── aggregator-pod.yml │ ├── aggregator-role.yml │ ├── aggregator-rolebinding.yml │ ├── aggregator-sa.yml │ ├── loadbots-rc.yml │ ├── webserver-rc.yml │ └── webserver-svc.yml └── values.yaml └── network-test ├── .helmignore ├── Chart.yaml ├── templates ├── NOTES.txt ├── netperf-orchestrator-rc.yml ├── netperf-orchestrator-svc.yml ├── netperf-w1-rc.yml ├── netperf-w2-rc.yml ├── netperf-w2-svc.yml └── netperf-w3-rc.yml └── values.yaml /MIT-LICENSE.txt: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Mahmoud Rahbar Azad 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /Readme.md: -------------------------------------------------------------------------------- 1 | ## Test suite for Kubernetes 2 | 3 | This test suite consists of two Helm charts for network bandwidth testing and load testing a Kubernetes cluster. 4 | The structure of the included charts looks as follows: 5 | ```` 6 | k8s-testsuite/ 7 | - load-test/ 8 | |____Chart.yaml 9 | |____values.yaml 10 | |____.helmignore 11 | |____templates/ 12 | - network-test/ 13 | |____Chart.yaml 14 | |____values.yaml 15 | |____.helmignore 16 | |____templates/ 17 | ```` 18 | 19 | You can install a test by running: 20 | ```` 21 | > git clone https://github.com/mrahbar/k8s-testsuite.git 22 | > cd k8s-testsuite 23 | > helm install --namespace load-test ./load-test 24 | > helm install --namespace network-test ./network-test 25 | ```` 26 | 27 | 28 | ---------- 29 | 30 | 31 | ### Load test 32 | 33 | This Helm chart deploys a full load test suite in Kubernetes. It consists of 3 microservices: 34 | 1. A webserver based on [simple-webserver](https://github.com/mrahbar/simple-webserver) 35 | 2. A loadbot client which is based on [this](https://github.com/kubernetes/contrib/tree/master/scale-demo/vegeta) and [that](https://github.com/tsenart/vegeta) 36 | 3. An [aggregator](https://github.com/mrahbar/kubespector/tree/master/resources/scaletest) which orchestrates the test run 37 | 38 | ##### Aggregator 39 | 40 | Since the webservers and the loadbots work autonomously, the task of the aggregator is to orchestrate the test run. 41 | It does this by using the Kubernetes API via the [client-go](https://github.com/kubernetes/client-go) library to set up the desired replicas of each unit.
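As a rough sketch of what this orchestration amounts to, the snippet below derives per-scenario replica counts from the configurable maximum (`aggregator.maxReplicas`, default 100). This is an illustration in Python, not the aggregator's actual Go implementation, and the percentage split is an assumption inferred from the scenario table; the real logic lives in the aggregator source linked above.

```python
# Illustrative sketch only -- NOT the aggregator's actual Go code.
# The percentage split per scenario is an assumption inferred from
# the scenario table in this Readme.

def scenario_replicas(max_replicas=100):
    """Map each load scenario to (loadbot, webserver) replica counts."""
    def pct(p):
        # Scale a percentage of max_replicas, but never drop below 1 pod.
        return max(1, max_replicas * p // 100)

    return {
        "Idle":       (pct(1),   pct(1)),
        "Under load": (pct(1),   pct(10)),
        "Equal load": (pct(10),  pct(10)),
        "Over load":  (pct(100), pct(10)),
        "High load":  (pct(100), pct(100)),
    }

print(scenario_replicas())  # defaults reproduce the scenario table
```

With `--set aggregator.maxReplicas=50` the same split would yield, e.g., 50 loadbots against 5 webservers in the "Over load" scenario.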
42 | The test run consists of the following test scenarios: 43 | 44 | | Scenario | Loadbots | Webserver | 45 | | ------------- |:-------------:| -----:| 46 | |Idle|1|1| 47 | |Under load | 1|10| 48 | |Equal load| 10|10| 49 | |Over load|100|10| 50 | |High load|100|100| 51 | 52 | The maximum count of replicas (default 100) can be set with `--set aggregator.maxReplicas=...`. 53 | 54 | ##### Loadbots 55 | The loadbots' task is to run a predefined level of queries per second. Vegeta publishes [detailed statistics](https://github.com/tsenart/vegeta#json) which will be fetched and evaluated by the aggregator. These metrics are: 56 | - Queries-Per-Second (QPS) 57 | - Success-Rate 58 | - Mean latency 59 | - 99th percentile latency 60 | 61 | #### Test results 62 | When all tests are finished, the aggregator will print the following summary to its logs: 63 | 64 | > GENERATING SUMMARY OUTPUT 65 | > Summary of load scenarios: 66 | > 0. Idle : QPS: 10037 Success: 100.00 % Latency: 949.82µs (mean) 3.004154ms (99th) 67 | > 1. Under load: QPS: 10014 Success: 100.00 % Latency: 965.549µs (mean) 1.985838ms (99th) 68 | > 2. Equal load: QPS: 50078 Success: 100.00 % Latency: 982.519µs (mean) 7.213018ms (99th) 69 | > 3. Over load : QPS: 501302 Success: 100.00 % Latency: 198.21451ms (mean) 859.504601ms (99th) 70 | > 4. High load : QPS: 502471 Success: 100.00 % Latency: 239.26364ms (mean) 1.018523444s (99th) 71 | > END SUMMARY DATA 72 | 73 | ##### Configuration 74 | 75 | The following table lists the configurable parameters of the chart and their default values. 76 | 77 | Parameter | Description | Default 78 | --------- | ----------- | ------- 79 | `cpuRequests.webserver` | CPU request of each webserver pod | 100m 80 | `cpuRequests.loadbot` | CPU request of each loadbot pod | 100m 81 | `aggregator.maxReplicas` | Maximum replicas for ReplicationController | 100 82 | `loadbot.rate` | QPS of each loadbot.
[Docs](https://github.com/tsenart/vegeta#-rate) | 1000 83 | `loadbot.workers` | Initial number of workers used in the attack. [Docs](https://github.com/tsenart/vegeta#-workers) | 10 84 | `loadbot.duration` | Duration of each attack. [Docs](https://github.com/tsenart/vegeta#-duration) | 1s 85 | `images.*Version` | Image version for *loadbot*, *webserver* and *aggregator* | 1.0 86 | `imagePullPolicy` | Whether to Always pull images or only IfNotPresent | IfNotPresent 87 | `rbac.create` | Create RBAC rules for aggregator | true 88 | `rbac.serviceAccountName` | rbac.create should be false to use this serviceAccount | 89 | 90 | ---------- 91 | 92 | 93 | ### Network test 94 | 95 | This Helm chart deploys a network test suite in Kubernetes. It consists of 2 microservices: 96 | 1. An orchestrator 97 | 2. A worker launched three times 98 | 99 | Both services are run from the same image either as `--mode=orchestrator` or `--mode=worker`. The services are based on [k8s-nptest](https://github.com/mrahbar/k8s-nptest) and use iperf3 and netperf-2.7.0 internally. 100 | 101 | ##### Orchestrator 102 | The orchestrator pod coordinates the worker pods to run tests in serial order for the five scenarios described below. Using pod affinity rules, Worker Pods 1 and 2 are placed on the same Kubernetes node, and Worker Pod 3 is placed on a different node. The worker pods all communicate with the orchestrator pod service using simple Golang RPCs and request work items. **A minimum of two Kubernetes worker nodes is necessary for this test.** 103 | 104 | ##### Test scenario 105 | The five major network traffic paths are combinations of Pod IP vs. Virtual IP and of pods co-located on the same node vs. pods located on remote nodes. 106 | 107 | 1. Same VM using Pod IP: Same VM Pod to Pod traffic tests from Worker 1 to Worker 2 using its Pod IP. 108 | 109 | 2.
Same VM using Cluster/Virtual IP: Same VM Pod to Pod traffic tests from Worker 1 to Worker 2 using its Service IP (also known as its Cluster IP or Virtual IP). 110 | 111 | 3. Remote VM using Pod IP: Worker 3 to Worker 2 traffic tests using Worker 2 Pod IP. 112 | 113 | 4. Remote VM using Cluster/Virtual IP: Worker 3 to Worker 2 traffic tests using Worker 2 Cluster/Virtual IP. 114 | 115 | 5. Same VM Pod Hairpin: Worker 2 to itself using its Cluster IP. 116 | 117 | For each test, the MTU (MSS tuning for TCP and direct packet size tuning for UDP) will be linearly increased from 96 to 1460 in steps of 64. 118 | 119 | 120 | #### Output Raw CSV data 121 | The orchestrator and worker pods run independently of the initiator script, with the orchestrator pod sending work items to workers until the testcase schedule is complete. The iperf output (both TCP and UDP modes) and the netperf TCP output from all worker nodes are uploaded to the orchestrator pod, where they are filtered and the results are written to the output file as well as to the stdout log. Default file locations are `/tmp/result.csv` and `/tmp/output.txt` for the raw results. 122 | 123 | **All units in the CSV file are in Gbits/second** 124 | ```console 125 | ALL TESTCASES AND MSS RANGES COMPLETE - GENERATING CSV OUTPUT 126 | the output for each MSS testpoint is a single value in Gbits/sec 127 | MSS , Maximum, 96, 160, 224, 288, 352, 416, 480, 544, 608, 672, 736, 800, 864, 928, 992, 1056, 1120, 1184, 1248, 1312, 1376, 1460 128 | 1 iperf TCP. Same VM using Pod IP ,24252.000000,22650,23224,24101,23724,23532,23092,23431,24102,24072,23431,23871,23897,23275,23146,23535,24252,23662,22133,,23514,23796,24008, 129 | 2 iperf TCP. Same VM using Virtual IP ,26052.000000,26052,0,25382,23702,0,22703,22549,0,23085,22074,0,22366,23516,0,23059,22991,0,23231,22603,0,23255,23605, 130 | 3 iperf TCP. Remote VM using Pod IP ,910.000000,239,426,550,663,708,742,769,792,811,825,838,849,859,866,874,883,888,894,898,903,907,910, 131 | 4 iperf TCP.
Remote VM using Virtual IP ,906.000000,0,434,546,0,708,744,0,791,811,0,837,849,0,868,875,0,888,892,0,903,906,0, 132 | 5 iperf TCP. Hairpin Pod to own Virtual IP ,23493.000000,22798,21629,0,22159,21132,0,22900,21816,0,21775,21425,0,22172,21611,21869,22865,22003,22562,23493,22684,217872, 133 | 6 iperf UDP. Same VM using Pod IP ,6647.000000,6647, 134 | 7 iperf UDP. Same VM using Virtual IP ,6554.000000,6554, 135 | 8 iperf UDP. Remote VM using Pod IP ,1877.000000,1877, 136 | 9 iperf UDP. Remote VM using Virtual IP ,1695.000000,1695, 137 | 10 netperf. Same VM using Pod IP ,7003.430000,7003.43, 138 | 11 netperf. Same VM using Virtual IP ,0.000000,0.00, 139 | 12 netperf. Remote VM using Pod IP ,908.460000,908.46, 140 | 13 netperf. Remote VM using Virtual IP ,0.000000,0.00, 141 | END CSV DATA 142 | ``` 143 | 144 | ##### Configuration 145 | 146 | The following table lists the configurable parameters of the chart and their default values. 147 | 148 | Parameter | Description | Default 149 | --------- | ----------- | ------- 150 | `imagePullPolicy` | Whether to Always pull images or only IfNotPresent | IfNotPresent 151 | `images.orchestratorVersion` | Image version for the orchestrator | 1.1 152 | `images.workerVersion` | Image version for the worker | 1.1 153 | `debug.orchestrator` | Debug mode for the orchestrator | false 154 | `debug.worker` | Debug mode for the worker | false 155 | -------------------------------------------------------------------------------- /load-test/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line.
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /load-test/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | description: Load test Helm chart for Kubernetes 3 | name: loadtest 4 | version: 0.1.0 5 | -------------------------------------------------------------------------------- /load-test/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | Load test deployed. You can watch the test with: 2 | kubectl --namespace {{ .Release.Namespace }} logs -f aggregator 3 | -------------------------------------------------------------------------------- /load-test/templates/aggregator-pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: aggregator 5 | labels: 6 | app: aggregator 7 | spec: 8 | serviceAccountName: {{ if .Values.rbac.create }}aggregator{{ else }}"{{ .Values.rbac.serviceAccountName }}"{{ end }} 9 | containers: 10 | - name: aggregator 11 | image: endianogino/aggregator:{{ .Values.images.aggregatorVersion }} 12 | imagePullPolicy: {{ .Values.imagePullPolicy }} 13 | args: 14 | - -max-replicas={{ .Values.aggregator.maxReplicas }} 15 | - -logtostderr 16 | - -v={{ .Values.aggregator.logLevel }} -------------------------------------------------------------------------------- /load-test/templates/aggregator-role.yml: -------------------------------------------------------------------------------- 1 | {{- if .Values.rbac.create -}} 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | kind: Role 4 | metadata: 5 | name: aggregator 6 | rules: 7 | - apiGroups: 8 | - "" 9 | resources: 10 | - 
pods 11 | verbs: 12 | - get 13 | - list 14 | - apiGroups: 15 | - "" 16 | resources: 17 | - replicationcontrollers 18 | verbs: 19 | - get 20 | - update 21 | {{- end -}} -------------------------------------------------------------------------------- /load-test/templates/aggregator-rolebinding.yml: -------------------------------------------------------------------------------- 1 | {{- if .Values.rbac.create -}} 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | kind: RoleBinding 4 | metadata: 5 | name: aggregator 6 | roleRef: 7 | apiGroup: rbac.authorization.k8s.io 8 | kind: Role 9 | name: aggregator 10 | subjects: 11 | - kind: ServiceAccount 12 | name: aggregator 13 | namespace: {{ .Release.Namespace }} 14 | {{- end -}} -------------------------------------------------------------------------------- /load-test/templates/aggregator-sa.yml: -------------------------------------------------------------------------------- 1 | {{- if .Values.rbac.create -}} 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: aggregator 6 | {{- end -}} -------------------------------------------------------------------------------- /load-test/templates/loadbots-rc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: loadbots 5 | spec: 6 | replicas: 1 7 | selector: 8 | app: loadbots 9 | template: 10 | metadata: 11 | name: loadbots 12 | labels: 13 | app: loadbots 14 | spec: 15 | containers: 16 | - name: loadbots 17 | image: endianogino/vegeta-server:{{ .Values.images.loadbotVersion }} 18 | imagePullPolicy: {{ .Values.imagePullPolicy }} 19 | args: 20 | - -host=webserver 21 | - -address=:8080 22 | - -rate={{ .Values.loadbot.rate }} 23 | - -workers={{ .Values.loadbot.workers}} 24 | - -duration={{ .Values.loadbot.duration}} 25 | ports: 26 | - name: http-port 27 | protocol: TCP 28 | containerPort: 8080 29 | resources: 30 | requests: 31 | cpu: {{ .Values.cpuRequests.loadbot 
}} -------------------------------------------------------------------------------- /load-test/templates/webserver-rc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: webserver 5 | spec: 6 | replicas: 1 7 | selector: 8 | app: webserver 9 | template: 10 | metadata: 11 | name: webserver 12 | labels: 13 | app: webserver 14 | spec: 15 | containers: 16 | - name: webserver 17 | image: endianogino/simple-webserver:{{ .Values.images.webserverVersion }} 18 | imagePullPolicy: {{ .Values.imagePullPolicy }} 19 | args: 20 | - -port=80 21 | ports: 22 | - name: http-port 23 | protocol: TCP 24 | containerPort: 80 25 | resources: 26 | requests: 27 | cpu: {{ .Values.cpuRequests.webserver }} -------------------------------------------------------------------------------- /load-test/templates/webserver-svc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: webserver 5 | labels: 6 | app: webserver 7 | spec: 8 | type: ClusterIP 9 | ports: 10 | - name: http-port 11 | protocol: TCP 12 | port: 80 13 | targetPort: 80 14 | selector: 15 | app: webserver 16 | -------------------------------------------------------------------------------- /load-test/values.yaml: -------------------------------------------------------------------------------- 1 | imagePullPolicy: IfNotPresent 2 | images: 3 | loadbotVersion: "1.0" 4 | webserverVersion: "1.0" 5 | aggregatorVersion: "1.0" 6 | 7 | cpuRequests: 8 | loadbot: "100m" 9 | webserver: "100m" 10 | 11 | rbac: 12 | create: true 13 | serviceAccountName: default 14 | 15 | aggregator: 16 | maxReplicas: 100 17 | logLevel: 2 18 | 19 | loadbot: 20 | rate: 1000 21 | workers: 10 22 | duration: 1s -------------------------------------------------------------------------------- /network-test/.helmignore: 
-------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /network-test/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | description: Network test Helm chart for Kubernetes 3 | name: networktest 4 | version: 0.1.0 -------------------------------------------------------------------------------- /network-test/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | Network test deployed. 
You can watch the test with: 2 | export POD_NAME=$(kubectl --namespace {{ .Release.Namespace }} get po -l app=orchestrator -o jsonpath="{.items[0].metadata.name}") 3 | kubectl --namespace {{ .Release.Namespace }} logs -f $POD_NAME -------------------------------------------------------------------------------- /network-test/templates/netperf-orchestrator-rc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: orchestrator 5 | spec: 6 | replicas: 1 7 | selector: 8 | app: orchestrator 9 | template: 10 | metadata: 11 | name: orchestrator 12 | labels: 13 | app: orchestrator 14 | spec: 15 | containers: 16 | - name: orchestrator 17 | image: endianogino/netperf:{{ .Values.images.orchestratorVersion }} 18 | imagePullPolicy: {{ .Values.imagePullPolicy }} 19 | args: 20 | - --mode=orchestrator 21 | {{- if .Values.debug.orchestrator }} 22 | - -debug 23 | {{- end}} 24 | ports: 25 | - name: service-port 26 | protocol: TCP 27 | containerPort: 5202 -------------------------------------------------------------------------------- /network-test/templates/netperf-orchestrator-svc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: orchestrator 5 | labels: 6 | app: orchestrator 7 | spec: 8 | type: ClusterIP 9 | ports: 10 | - name: orchestrator 11 | protocol: TCP 12 | port: 5202 13 | targetPort: 5202 14 | selector: 15 | app: orchestrator 16 | -------------------------------------------------------------------------------- /network-test/templates/netperf-w1-rc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: worker1 5 | spec: 6 | replicas: 1 7 | selector: 8 | app: worker 9 | worker: w1 10 | template: 11 | metadata: 12 | name: worker1 13 | labels: 14 | app: worker 15 | worker: w1 
16 | spec: 17 | containers: 18 | - name: w1 19 | image: endianogino/netperf:{{ .Values.images.workerVersion }} 20 | imagePullPolicy: {{ .Values.imagePullPolicy }} 21 | args: 22 | - --mode=worker 23 | {{- if .Values.debug.worker }} 24 | - -debug 25 | {{- end}} 26 | ports: 27 | - name: iperf3-port 28 | protocol: UDP 29 | containerPort: 5201 30 | - name: netperf-port 31 | protocol: TCP 32 | containerPort: 12865 33 | env: 34 | - name: workerName 35 | value: "netperf-w1" 36 | - name: workerPodIP 37 | valueFrom: 38 | fieldRef: 39 | fieldPath: status.podIP 40 | - name: orchestratorPort 41 | value: "5202" 42 | - name: orchestratorPodIP 43 | value: orchestrator -------------------------------------------------------------------------------- /network-test/templates/netperf-w2-rc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: worker2 5 | spec: 6 | replicas: 1 7 | selector: 8 | app: worker 9 | worker: w2 10 | template: 11 | metadata: 12 | name: worker2 13 | labels: 14 | app: worker 15 | worker: w2 16 | spec: 17 | affinity: 18 | podAffinity: 19 | requiredDuringSchedulingIgnoredDuringExecution: 20 | - labelSelector: 21 | matchExpressions: 22 | - key: worker 23 | operator: In 24 | values: 25 | - w1 26 | topologyKey: kubernetes.io/hostname 27 | containers: 28 | - name: w2 29 | image: endianogino/netperf:{{ .Values.images.workerVersion }} 30 | imagePullPolicy: {{ .Values.imagePullPolicy }} 31 | args: 32 | - --mode=worker 33 | {{- if .Values.debug.worker }} 34 | - -debug 35 | {{- end}} 36 | ports: 37 | - name: iperf3-port 38 | protocol: UDP 39 | containerPort: 5201 40 | - name: netperf-port 41 | protocol: TCP 42 | containerPort: 12865 43 | env: 44 | - name: workerName 45 | value: "netperf-w2" 46 | - name: workerPodIP 47 | valueFrom: 48 | fieldRef: 49 | fieldPath: status.podIP 50 | - name: orchestratorPort 51 | value: "5202" 52 | - name: orchestratorPodIP 53 | value: 
orchestrator -------------------------------------------------------------------------------- /network-test/templates/netperf-w2-svc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: netperf-w2 5 | labels: 6 | app: w2 7 | spec: 8 | type: ClusterIP 9 | ports: 10 | - name: netperf-w2 11 | protocol: TCP 12 | port: 5201 13 | targetPort: 5201 14 | - name: netperf-w2-udp 15 | protocol: UDP 16 | port: 5201 17 | targetPort: 5201 18 | - name: netperf-w2-netperf 19 | protocol: TCP 20 | port: 12865 21 | targetPort: 12865 22 | selector: 23 | app: worker 24 | worker: w2 25 | -------------------------------------------------------------------------------- /network-test/templates/netperf-w3-rc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: worker3 5 | spec: 6 | replicas: 1 7 | selector: 8 | app: worker 9 | worker: w3 10 | template: 11 | metadata: 12 | name: worker3 13 | labels: 14 | app: worker 15 | worker: w3 16 | spec: 17 | affinity: 18 | podAntiAffinity: 19 | requiredDuringSchedulingIgnoredDuringExecution: 20 | - labelSelector: 21 | matchExpressions: 22 | - key: worker 23 | operator: In 24 | values: 25 | - w1 26 | topologyKey: kubernetes.io/hostname 27 | containers: 28 | - name: w3 29 | image: endianogino/netperf:{{ .Values.images.workerVersion }} 30 | imagePullPolicy: {{ .Values.imagePullPolicy }} 31 | args: 32 | - --mode=worker 33 | {{- if .Values.debug.worker }} 34 | - -debug 35 | {{- end}} 36 | ports: 37 | - name: iperf3-port 38 | protocol: UDP 39 | containerPort: 5201 40 | - name: netperf-port 41 | protocol: TCP 42 | containerPort: 12865 43 | env: 44 | - name: workerName 45 | value: "netperf-w3" 46 | - name: workerPodIP 47 | valueFrom: 48 | fieldRef: 49 | fieldPath: status.podIP 50 | - name: orchestratorPort 51 | value: "5202" 52 | - name: orchestratorPodIP 53 | 
value: orchestrator -------------------------------------------------------------------------------- /network-test/values.yaml: -------------------------------------------------------------------------------- 1 | imagePullPolicy: IfNotPresent 2 | images: 3 | orchestratorVersion: 1.1 4 | workerVersion: 1.1 5 | 6 | debug: 7 | orchestrator: false 8 | worker: false --------------------------------------------------------------------------------