├── README.md
├── clusterv2.yaml
├── makeArangoDBkube.sh
└── LICENSE

/README.md: --------------------------------------------------------------------------------

# ArangoDB on Kubernetes

## New Project from the Mothership

As of March 20, 2018, ArangoDB has released a Kubernetes project of their own. It can be found at https://github.com/arangodb/kube-arangodb. For those of you using this project, I'd encourage you to check out theirs. While it is not yet production ready, it will inevitably surpass this project in terms of robustness and support.

## New Support for StatefulSets

There is now a new file, `clusterv2.yaml`, that allows you to quickly set up an ArangoDB cluster on Kubernetes. It should work with Kubernetes 1.5 and above. If you can, I'd recommend using this method to get your cluster up and running.

## Old Method

To give it a try, just run:

    ./makeArangoDBkube.sh

and deploy the resulting file to Kubernetes like so:

    kubectl create -f arangodb_cluster.yaml

Fully automatic scaling is not supported yet, but you can scale up the arangodb-coordinator and arangodb-dbserver deployments. The arangodb-coordinator deployment can safely be scaled down.

See also

    ./makeArangoDBkube.sh --help

## Kubernetes Quick Start

There are many tutorials out there on how to get Kubernetes up and running on local hardware, but the method I used, and the one I found works best, comes straight from the source: https://kubernetes.io/docs/getting-started-guides/kubeadm/

My local Kubernetes environment consists of one bare-metal server running nine Ubuntu 16.04 LTS VMs under VirtualBox. I have one master, five "compute" slaves, and three "storage" slaves (I'll talk more about those below). The installation of Kubernetes using the instructions linked above works pretty flawlessly.

## Using GlusterFS as Persistent Volume Storage on Kubernetes

As with any database, the important part of the equation is the data. You don't want to lose your data just because the pod running your ArangoDB instance(s) goes down. That problem is rather easily solved by mounting a persistent volume into the pod at /var/lib/arangodb3. Most people, at least for development/testing purposes, use host volumes (i.e. the underlying hard disk of the pod host).

The problem comes when the host itself goes down (e.g. a hardware failure). While Kubernetes is more than happy to reschedule your pod on a new host, your data is stuck on the failed one. This can be solved by storing your data on a resilient file system, like GlusterFS (or similar systems offered by cloud providers). Getting GlusterFS set up and integrated with Kubernetes is not a trivial task; fortunately, someone else has done most of the work for you: https://github.com/gluster/gluster-kubernetes

While the script works fairly well, there are a few caveats to be aware of before running it (a prep sketch follows the list):

1. All of the nodes in your Kubernetes cluster need to have the GlusterFS client installed.
2. You need to `modprobe dm_thin_pool` (or be sure it is loaded) on all of the nodes that will host GlusterFS.
3. You need to initialize the physical volumes on each GlusterFS node by running `pvcreate /dev/XXX -ff`, where XXX is the block device you are going to let GlusterFS use for storage. Beware that GlusterFS will destroy any existing data on that disk!
4. The script was built for OpenShift, but works fine on plain old Kubernetes. You do need to make sure you tell it the namespace (probably "default"), otherwise it will fail right away. My command line looked like: `./gk-deploy -vg topology.json -n default`
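Those caveats boil down to a few shell commands. Here is a minimal prep sketch; `/dev/sdb` is just an example, so substitute the block device you are actually giving to GlusterFS, and your own namespace if it isn't "default":

    # Run on every node that will host GlusterFS:
    sudo modprobe dm_thin_pool          # caveat 2: thin-provisioning kernel module
    sudo pvcreate /dev/sdb -ff          # caveat 3: destroys existing data on the device!

    # Then, from the gluster-kubernetes checkout, pass the namespace explicitly (caveat 4):
    ./gk-deploy -vg topology.json -n default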
If the script executes successfully, you should be provided with a URL for the Heketi REST API. You'll need that to create a storage class that auto-provisions volumes on GlusterFS as needed. Here's a link to a more manual method of installing GlusterFS (it uses a lot of the same yaml files), with some troubleshooting ideas as well: http://blog.lwolf.org/post/how-i-deployed-glusterfs-cluster-to-kubernetes/

At the end of the above blog post, you'll see a section on creating a storage class. Use that example, substituting your actual Heketi REST API URL.

The last thing you (optionally) need to do is mark your new storage class as the default. Now when a pod makes a persistent volume claim, the creation of the persistent volume will happen automagically in the background, and it will be correctly bound to the pod.

## Stateful Sets in Kubernetes

With the new script, StatefulSets are now fully supported. You can use GlusterFS (see above) for persistent volume storage, or your favorite cloud provider's persistent storage mechanism. Either way, every node in the cluster has its own persistent storage that will follow it around the cluster if/when it gets rescheduled onto a different host.

You can scale your nodes down using Kubernetes, but be aware of two things:

1. If you haven't set a replication factor >= 2, you will lose data!
2. The built-in ArangoDB cluster health view will continue to show the node as down.

Both problems can be overcome by using the ArangoDB REST API: take a server offline first (to solve problem #1), and remove a permanently downed server from the cluster (to solve problem #2); see the sketch below.

The above two issues are next up on the development roadmap for this project.
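To make those two REST calls concrete, here is a hedged sketch. It assumes a coordinator reachable at localhost:8529 with authentication disabled (as in `clusterv2.yaml`), and a DB server whose internal ID is `DBServer0001`; look the real ID up via `GET /_admin/cluster/health`, and be aware that the exact endpoint paths can vary between ArangoDB versions:

    # 1. Drain the server's shards before scaling the StatefulSet down (solves #1):
    curl -X POST http://localhost:8529/_admin/cluster/cleanOutServer \
         -d '{"server": "DBServer0001"}'

    # 2. Once the pod is permanently gone, remove it from the cluster view (solves #2):
    curl -X POST http://localhost:8529/_admin/cluster/removeServer \
         -d '"DBServer0001"'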
-------------------------------------------------------------------------------- /clusterv2.yaml: --------------------------------------------------------------------------------

---
apiVersion: v1
kind: Service
metadata:
  name: arangodb-agents
spec:
  ports:
    - port: 8529
      targetPort: 8529
  clusterIP: None
  selector:
    name: arangodb
    role: agent
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: arangodb-agent
spec:
  serviceName: "arangodb-agents"
  replicas: 3
  template:
    metadata:
      labels:
        name: arangodb
        role: agent
    spec:
      containers:
        - name: arangodb
          image: arangodb/arangodb:latest
          env:
            - name: IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ARANGO_NO_AUTH
              value: "1"
          ports:
            - containerPort: 8529
          volumeMounts:
            - mountPath: /var/lib/arangodb3
              name: arangodb-agency-data
          args:
            - --server.authentication
            - "false"
            - --server.endpoint
            - tcp://0.0.0.0:8529
            - --agency.activate
            - "true"
            - --agency.size
            - "3"
            - --agency.supervision
            - "true"
            - --agency.my-address
            - tcp://$(IP):8529
            - --agency.endpoint
            - tcp://arangodb-agent-0.arangodb-agents.default.svc.cluster.local:8529
  volumeClaimTemplates:
    - metadata:
        name: arangodb-agency-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: arangodb-coords
spec:
  ports:
    - port: 8529
      targetPort: 8529
  type: LoadBalancer
  selector:
    name: arangodb
    role: coordinator
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: arangodb-coord
spec:
  serviceName: "arangodb-coords"
  replicas: 2
  template:
    metadata:
      labels:
        name: arangodb
        role: coordinator
    spec:
      containers:
        - name: arangodb
          image: arangodb/arangodb:latest
          env:
            - name: IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ARANGO_NO_AUTH
              value: "1"
          ports:
            - containerPort: 8529
          volumeMounts:
            - mountPath: /var/lib/arangodb3
              name: arangodb-coords-data
          args:
            - --server.authentication
            - "false"
            - --server.endpoint
            - tcp://0.0.0.0:8529
            - --cluster.my-role
            - COORDINATOR
            - --cluster.my-local-info
            - "$(IP)"
            - --cluster.my-address
            - tcp://$(IP):8529
            - --cluster.agency-endpoint
            - tcp://arangodb-agent-0.arangodb-agents.default.svc.cluster.local:8529
  volumeClaimTemplates:
    - metadata:
        name: arangodb-coords-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
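# How the addressing in this file works: the Downward API exposes each pod's
# IP as the IP environment variable, and Kubernetes expands $(IP) references
# inside args, so every instance advertises its own pod address to the
# cluster. All members bootstrap against the first agent through its stable
# headless-service DNS name,
# arangodb-agent-0.arangodb-agents.default.svc.cluster.local; adjust the
# "default" segment if you deploy into a different namespace.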
---
apiVersion: v1
kind: Service
metadata:
  name: arangodb-dbs
spec:
  ports:
    - port: 8529
      targetPort: 8529
  type: LoadBalancer
  selector:
    name: arangodb
    role: dbserver
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: arangodb-db
spec:
  serviceName: "arangodb-dbs"
  replicas: 3
  template:
    metadata:
      labels:
        name: arangodb
        role: dbserver
    spec:
      containers:
        - name: arangodb
          image: arangodb/arangodb:latest
          env:
            - name: IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: ARANGO_NO_AUTH
              value: "1"
          ports:
            - containerPort: 8529
          volumeMounts:
            - mountPath: /var/lib/arangodb3
              name: arangodb-db-data
            - mountPath: /var/lib/arangodb3-apps
              name: arangodb-db-apps
          args:
            - --server.authentication
            - "false"
            - --server.endpoint
            - tcp://0.0.0.0:8529
            - --cluster.my-role
            - PRIMARY
            - --cluster.my-local-info
            - "$(IP)"
            - --cluster.my-address
            - tcp://$(IP):8529
            - --cluster.agency-endpoint
            - tcp://arangodb-agent-0.arangodb-agents.default.svc.cluster.local:8529
  volumeClaimTemplates:
    - metadata:
        name: arangodb-db-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
    - metadata:
        name: arangodb-db-apps
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi

-------------------------------------------------------------------------------- /makeArangoDBkube.sh: --------------------------------------------------------------------------------

#!/bin/bash

# This script creates a yaml file to deploy an ArangoDB cluster to Kubernetes.

NAME=arangodb
NRAGENTS=3
NRCOORDINATORS=1
NRDBSERVERS=2
DOCKERIMAGE=registry.arangodb.com/arangodb/arangodb:3.1-devel
OUTPUT=arangodb_cluster.yaml
PROTO=tcp

function help() {
  echo "Usage: makeArangoDBkube.sh [options]"
  echo ""
  echo "Options:"
  echo "  -a/--agents NUMBER       (odd integer, default: $NRAGENTS)"
  echo "  -c/--coordinators NUMBER (integer >= 1, default: $NRCOORDINATORS)"
  echo "  -d/--dbservers NUMBER    (integer >= 2, default: $NRDBSERVERS)"
  echo "  -i/--image NAME          (name of Docker image, default: $DOCKERIMAGE)"
  echo "  -n/--name NAME           (name of ArangoDB cluster, default: $NAME)"
  echo "  -o/--output FILENAME     (name of output file, default: $OUTPUT)"
  echo "  -t/--tls BOOL            (if given, use TLS, default: false)"
}

while [[ ${1} ]] ; do
  case "${1}" in
    -a|--agents)
      NRAGENTS=${2}
      shift
      ;;
    -c|--coordinators)
      NRCOORDINATORS=${2}
      shift
      ;;
    -d|--dbservers)
      NRDBSERVERS=${2}
      shift
      ;;
    -i|--image)
      DOCKERIMAGE=${2}
      shift
      ;;
    -n|--name)
      NAME=${2}
      shift
      ;;
    -o|--output)
      OUTPUT=${2}
      shift
      ;;
    -t|--tls)
      if [ "${2}" = "true" ] ; then
        PROTO=ssl
      fi
      shift
      ;;
    -h|--help)
      help
      exit 1
      ;;
  esac

  if ! shift; then
    echo 'Missing parameter argument.' >&2
    exit 1
  fi
done

if [[ $(( $NRAGENTS % 2 )) == 0 ]]; then
  echo "**ERROR: Number of agents must be odd! Bailing out."
  exit 1
fi

echo ======================
echo Planned configuration:
echo ======================
echo
echo "  Number of agents:       $NRAGENTS"
echo "  Number of coordinators: $NRCOORDINATORS"
echo "  Number of dbservers:    $NRDBSERVERS"
echo "  Docker image:           $DOCKERIMAGE"
echo "  ArangoDB cluster name:  $NAME"
echo "  Output file:            $OUTPUT"
echo
echo Getting to work...
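# The remainder of the script assembles $OUTPUT from heredoc templates: for
# each of the $NRAGENTS agents it appends a Deployment and a Service, and
# judging from the inner loop below (im=$(expr $i - 1)), each agent after the
# first is presumably also wired to the endpoints of the agents generated
# before it. Deployments and Services for the coordinators and dbservers
# follow. Note that the heredoc bodies holding the YAML templates are
# truncated in this listing, so only the control flow survives below.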
rm -f $OUTPUT
touch $OUTPUT

# Write out the deployments and services for the agents:
for i in $(seq 1 $NRAGENTS) ; do
  cat >>$OUTPUT <
  if [[ $i > 1 ]] ; then
    im=$(expr $i - 1)
    for j in $(seq 1 $im) ; do
      cat >>$OUTPUT <
  cat >>$OUTPUT <
  cat >>$OUTPUT <
  cat >>$OUTPUT <
  cat >>$OUTPUT <
  cat >>$OUTPUT <
  cat >>$OUTPUT <
  cat >>$OUTPUT <
  cat >>$OUTPUT <