├── open-port.png
├── nodeport.yaml
├── service.yaml
├── scripts-cm.yaml
├── slave-deployment.yaml
├── master-deployment.yaml
└── README.md
/open-port.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mosesliao/kubernetes-locust/HEAD/open-port.png
--------------------------------------------------------------------------------
/nodeport.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: locust-service
spec:
  type: NodePort
  selector:
    app: locust-master
  ports:
    - protocol: TCP
      port: 8089
      targetPort: 8089
--------------------------------------------------------------------------------
/service.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  labels:
    role: locust-master
  name: locust-master
spec:
  type: ClusterIP
  ports:
    - port: 5557
      name: communication
    - port: 5558
      name: communication-plus-1
    - port: 8089
      targetPort: 8089
      name: web-ui
  selector:
    role: locust-master
    app: locust-master
--------------------------------------------------------------------------------
/scripts-cm.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: scripts-cm
data:
  locustfile.py: |
    import time
    from locust import HttpUser, task

    class QuickstartUser(HttpUser):
        @task
        def hello_world(self):
            self.client.get("/hello")
            self.client.get("/world")

        @task(3)
        def view_item(self):
            for item_id in range(10):
                self.client.get(f"/item?id={item_id}", name="/item")
                time.sleep(1)

        def on_start(self):
            self.client.post("/login", json={"username":"foo", "password":"bar"})
--------------------------------------------------------------------------------
/slave-deployment.yaml:
--------------------------------------------------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    role: locust-worker
    app: locust-worker
  name: locust-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      role: locust-worker
      app: locust-worker
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        role: locust-worker
        app: locust-worker
    spec:
      containers:
        - image: locustio/locust
          imagePullPolicy: Always
          name: worker
          args: ["--worker", "--master-host=locust-master"]
          volumeMounts:
            - mountPath: /home/locust
              name: locust-scripts
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: locust-scripts
          configMap:
            name: scripts-cm
--------------------------------------------------------------------------------
/master-deployment.yaml:
--------------------------------------------------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    role: locust-master
    app: locust-master
  name: locust-master
spec:
  replicas: 1
  selector:
    matchLabels:
      role: locust-master
      app: locust-master
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        role: locust-master
        app: locust-master
    spec:
      containers:
        - image: locustio/locust
          imagePullPolicy: Always
          name: master
          args: ["--master"]
          volumeMounts:
            - mountPath: /home/locust
              name: locust-scripts
          ports:
            - containerPort: 5557
              name: comm
            - containerPort: 5558
              name: comm-plus-1
            - containerPort: 8089
              name: web-ui
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: locust-scripts
          configMap:
            name: scripts-cm
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Introduction

Distributed load testing using cloud computing is an attractive option for a variety of test scenarios.

Cloud platforms provide a high degree of infrastructure elasticity, making it easy to test applications and services with large numbers of simulated clients, each generating traffic patterned after users or devices.

Additionally, the pricing model of cloud computing fits the elastic nature of load testing very well.

Locust supports running load tests on multiple machines. It's a perfect fit for Kubernetes, which makes distributed deployments, container orchestration and scaling easy.

In this installment of my Locust experiments, I'll prepare Kubernetes manifests for deploying and managing a Locust cluster in AWS EKS and see if there are any surprises along the way.

I will also *NOT* be creating my own Locust Docker image; instead I'll use the official [locustio/locust](https://hub.docker.com/r/locustio/locust/) image to make sure I'm running the latest Locust version.

# Locust distributed mode

Running Locust in distributed mode is described in its documentation.
In short:
* you run one master and multiple worker nodes
* you need to tell the workers where the master is (give them its address and ports)
* you need to supply each worker with the test code (locustfiles)
* the workers need to be able to connect to the master.

That's pretty much it.

# Kubernetes components

Let's dive in and go through the building blocks of the Locust cluster to be deployed.

Kubernetes *configmaps* will help configure Locust.
[One configmap](./scripts-cm.yaml) holds the locustfile and is mounted as a volume into the master and worker pods.
A second configmap (not included in this repository) could hold further configuration settings (the URL of the tested host/service, for example) to be injected into the running Locust nodes as environment variables; see the sketch at the end of this section.

I'll use *deployments* to ask K8s to keep the [master](./master-deployment.yaml) and the [workers](./slave-deployment.yaml) up and running.
[The service](./service.yaml) makes the master addressable within the cluster.
[The nodeport](./nodeport.yaml) exposes the master's web UI on an external port of every node.
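Here is a minimal sketch of what such a second configmap could look like; the name `locust-cm`, the variable and the URL are placeholders rather than files in this repository. Locust picks up configuration from `LOCUST_*` environment variables, so `LOCUST_HOST` would set the default host under test:

```yaml
# Hypothetical locust-cm.yaml (illustration only; not shipped with this repository).
apiVersion: v1
kind: ConfigMap
metadata:
  name: locust-cm
data:
  # Locust reads LOCUST_* environment variables as configuration options;
  # LOCUST_HOST sets the default host to load test (placeholder URL below).
  LOCUST_HOST: "https://example.com"
```

To actually use it, the containers in both deployments would additionally need an `envFrom` entry with a `configMapRef` pointing at `locust-cm`.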
# Cluster deployment

To set up the cluster:

1) Create an EKS cluster via the CLI. If you already have an EKS cluster, run `aws eks update-kubeconfig --name <cluster-name>`.

2) Then run the following:

> git clone git@github.com:mosesliao/kubernetes-locust.git

> cd kubernetes-locust

> kubectl create -f nodeport.yaml -f scripts-cm.yaml -f master-deployment.yaml -f service.yaml -f slave-deployment.yaml

The *aws eks* command points *kubectl* at your EKS cluster, and *kubectl create* creates the components described above.
When run for the first time, this may take a while to complete (if the Locust Docker image is not yet present on the cluster nodes, it has to be pulled first).
To see whether the Locust nodes are running, check that the pods are up:

> kubectl get -w pods
NAME                             READY   STATUS    RESTARTS   AGE
locust-master-6dd5cc46d4-xrqt6   1/1     Running   0          26h
locust-worker-bc7464db8-bs857    1/1     Running   0          26h
locust-worker-bc7464db8-z84kp    1/1     Running   0          26h

From the output we can pick up the master pod's name and take a look at its logs.
`kubectl logs locust-master-6dd5cc46d4-xrqt6` should include the following information:

```
[2020-11-13 01:38:05,978] locust-master-6dd5cc46d4-xrqt6/INFO/locust.main: Starting web interface at http://:8089
[2020-11-13 01:38:05,989] locust-master-6dd5cc46d4-xrqt6/INFO/locust.main: Starting Locust 1.1
[2020-11-13 01:38:06,837] locust-master-6dd5cc46d4-xrqt6/INFO/locust.runners: Client 'locust-worker-bc7464db8-z84kp_324ebbc0df6f49c98c8198c8333195e1' reported as ready. Currently 1 clients ready to swarm.
[2020-11-13 01:38:07,220] locust-master-6dd5cc46d4-xrqt6/INFO/locust.runners: Client 'locust-worker-bc7464db8-bs857_03b4f012581b4af2be62cf9912f45538' reported as ready. Currently 2 clients ready to swarm.
```

We can see that the master has started (lines 1 and 2) and the workers have "volunteered" to do some work (lines 3 and 4).

# Find the public IP and port

You need to know the nodes' external IP addresses so that you can reach the web UI:

> kubectl get nodes -o wide | awk '{print $1" "$2" "$7}' | column -t
NAME                                        STATUS  EXTERNAL-IP
ip-x-x-x-x.ap-southeast-1.compute.internal  Ready   x.x.x.x
ip-y-y-y-y.ap-southeast-1.compute.internal  Ready   y.y.y.y

You also need to know which port the NodePort service is exposed on:

> kubectl get service/locust-service
NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
locust-service   NodePort   10.100.3.131   <none>        8089:32535/TCP   26h

Go to your EKS cluster's security group and open that port (32535 in this example) for external access.
![](open-port.png)

From there you can access the Locust web UI at `http://x.x.x.x:32535` or `http://y.y.y.y:32535`.
--------------------------------------------------------------------------------
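As a small refinement to the setup above, rather than looking up the randomly assigned port after deployment, the NodePort could be pinned in nodeport.yaml so the security group rule can be created once, up front. A minimal sketch of that variation (the value 30089 is just an example; any free port in the default 30000-32767 NodePort range would do):

```yaml
# Variation on nodeport.yaml with a fixed NodePort (example value, not used in the repo above).
apiVersion: v1
kind: Service
metadata:
  name: locust-service
spec:
  type: NodePort
  selector:
    app: locust-master
  ports:
    - protocol: TCP
      port: 8089
      targetPort: 8089
      nodePort: 30089   # must fall within the cluster's NodePort range (30000-32767 by default)
```

With that in place, the web UI is always reachable at `http://<node-external-ip>:30089`.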