├── .gitignore
├── README-original.md
├── README.md
├── config
│   └── sync-gateway.config
├── docs
│   ├── couchbase-kubernetes.monopic
│   └── couchbase-kubernetes.txt
├── pods
│   └── app-etcd.yaml
├── replication-controllers
│   ├── couchbase-admin-server.yaml
│   ├── couchbase-server.yaml
│   └── sync-gateway.yaml
├── scripts
│   ├── README.md
│   ├── cleanup.sh
│   ├── k8s-aws-register.sh
│   ├── k8s-gce-cleanup.sh
│   ├── k8s-gce-register.sh
│   ├── nodes.js
│   └── register.sh
└── services
    ├── app-etcd.yaml
    ├── couchbase-admin-service.yaml
    ├── couchbase-service.yaml
    └── sync-gateway.yaml

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | \#*
2 | *~
3 | .#*
4 | \#*\#
5 | 
6 | *.class
7 | 
8 | .idea
9 | *.iml
10 | .gradle
11 | local.properties
12 | /.DS_Store
13 | build
14 | cli/config.json
15 | kubernetes
--------------------------------------------------------------------------------
/README-original.md:
--------------------------------------------------------------------------------
1 | Here are instructions on getting Couchbase Server and Couchbase Sync Gateway running under Kubernetes on GKE (Google Container Engine).
2 | 
3 | To get a bird's-eye view of what is being created, see the following [Architecture Diagrams](https://github.com/couchbase/kubernetes/wiki/Architecture-Diagrams)
4 | 
5 | # Kubernetes cluster setup
6 | 
7 | First you need to set up Kubernetes itself before running Couchbase on it. Those instructions are specific to your particular environment (bare metal or cloud provider).
8 | 
9 | # Couchbase Server
10 | 
11 | ## Install Dependencies
12 | 
13 | * [Google Container Engine tools](https://github.com/couchbase/kubernetes/wiki/Running-on-Google-Container-Engine-(GKE))
14 | 
15 | ## Clone couchbase-kubernetes
16 | 
17 | ```
18 | $ git clone https://github.com/couchbase/kubernetes.git couchbase-kubernetes
19 | $ cd couchbase-kubernetes
20 | ```
21 | 
22 | ## Start etcd
23 | 
24 | Although Kubernetes runs its own etcd, it is not accessible to applications running within Kubernetes. The Couchbase sidekick containers require etcd to discover Couchbase Server nodes and bootstrap the cluster.
25 | 
26 | The current recommended approach is to either:
27 | 
28 | 1. Run your own etcd cluster *outside* the Kubernetes cluster, and set up secure networking between the two (you don't want to expose your etcd cluster publicly)
29 | 1. Start up a single-node etcd within the Kubernetes cluster.
30 | 
31 | Running your own separate etcd cluster is outside the scope of this document, so we'll focus on the second option here (see the note below for the one change an external cluster would require).
32 | 
33 | Running a single etcd node within Kubernetes has the major disadvantage of being a single point of failure, and it won't survive restarts of the etcd pod -- if that pod is restarted and gets a new IP address, any Couchbase nodes started afterwards won't be able to find etcd and auto-join the cluster.
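If you do choose to run an external etcd cluster, the only change this repo needs is the `--etcd-servers` URL passed to the `couchbase-sidekick` container in the replication controller definitions. A minimal sketch, assuming a hypothetical external member reachable at `10.1.0.5` (replace with your own endpoint):

```
# in replication-controllers/couchbase-server.yaml (and the admin variant),
# point --etcd-servers at your external endpoint instead of the app-etcd
# service environment variables:
- update-wrapper --skip-etcd-check couchbase-cluster start-couchbase-sidekick --discover-local-ip --etcd-servers http://10.1.0.5:2379
```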
34 | 
35 | Here's how to start the app-etcd service and pod:
36 | 
37 | ```
38 | $ kubectl create -f services/app-etcd.yaml
39 | $ kubectl create -f pods/app-etcd.yaml
40 | ```
41 | 
42 | Get the pod IP:
43 | 
44 | ```
45 | $ kubectl describe pod app-etcd
46 | ```
47 | 
48 | You should see:
49 | 
50 | ```
51 | Name: app-etcd
52 | Namespace: default
53 | Image(s): tleyden5iwx/etcd-discovery
54 | Node: gke-couchbase-server-648006db-node-qgu2/10.240.158.17
55 | Labels: name=app-etcd
56 | Status: Running
57 | Reason: 
58 | Message: 
59 | IP: 10.248.1.5
60 | ```
61 | 
62 | Make a note of the Node it's running on (e.g., gke-couchbase-server-648006db-node-qgu2) as well as the Pod IP (10.248.1.5).
63 | 
64 | ## Add Couchbase Server Admin credentials in etcd
65 | 
66 | 
67 | First, you will need to SSH into the host node where the app-etcd pod is running (or any other node in the cluster):
68 | 
69 | 
70 | (On Linux and Mac you can get the node name with this one-liner:)
71 | 
72 | ```
73 | kubectl describe pod app-etcd | grep Node | awk '{print $2}' | sed 's/\/.*//'
74 | ```
75 | 
76 | ```
77 | $ gcloud compute ssh gke-couchbase-server-648006db-node-qgu2
78 | ```
79 | 
80 | Replace `gke-couchbase-server-648006db-node-qgu2` with the node name found in the previous step.
81 | 
82 | Next, use curl to add a value for the `/couchbase.com/userpass` key in etcd. Use the Pod IP found above.
83 | 
84 | (On Linux and Mac you can get the IP with this one-liner:)
85 | 
86 | ```
87 | kubectl describe pod app-etcd | grep IP: | awk '{print $2}'
88 | ```
89 | 
90 | ```
91 | root@k8s~$ curl -L http://10.248.1.5:2379/v2/keys/couchbase.com/userpass -X PUT -d value="user:passw0rd"
92 | ```
93 | 
94 | Replace `user:passw0rd` with the actual values you want to use.
95 | 
96 | After you run the command, exit the SSH session to get back to your workstation.
97 | 
98 | ```
99 | $ exit
100 | ```
101 | 
102 | ## Kick off Services and Replication Controllers for couchbase-server
103 | 
104 | First, the replication controllers:
105 | ```
106 | $ kubectl create -f replication-controllers/couchbase-admin-server.yaml
107 | $ kubectl create -f replication-controllers/couchbase-server.yaml
108 | ```
109 | 
110 | Then the services:
111 | 
112 | ```
113 | $ kubectl create -f services/couchbase-service.yaml
114 | $ kubectl create -f services/couchbase-admin-service.yaml
115 | ```
116 | 
117 | The `couchbase-admin` pod and service create a Couchbase Server node with an externally accessible admin UI. The admin replication controller (named `couchbase-admin-controller`) should never be scaled past 1. Instead, the `couchbase-controller` can be scaled to any desired number of replicas, as shown below. The `couchbase-service` is configured to route traffic to both the `couchbase-admin-server` pod and the `couchbase-server` pods.
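For example, a minimal sketch of scaling the data tier out to three Couchbase Server pods (the replica count here is arbitrary):

```
$ kubectl scale rc couchbase-controller --replicas=3
$ kubectl get pods
```

Each new pod's sidekick discovers etcd and joins the existing cluster automatically, as the sequence diagram in the next section shows.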
118 | 
119 | ## Setup interaction
120 | 
121 | Here is what happens under the hood when the Couchbase sidekicks bootstrap the cluster:
122 | 
123 | ```
124 | ┌─────────────┐              ┌─────────────┐                  ┌─────────────┐            ┌─────────────┐
125 | │  Couchbase  │              │  OS / libc  │                  │  Couchbase  │            │  Couchbase  │
126 | │  Sidekick   │              │             │                  │    Etcd     │            │   Server    │
127 | └──────┬──────┘              └──────┬──────┘                  └──────┬──────┘            └──────┬──────┘
128 |        │                            │                                │                          │
129 |        │                            │                                │                          │
130 |        │      Get IP of first       │                                │                          │
131 |        ├────non-loopback iface──────▶                                │                          │
132 |        │                            │                                │                          │
133 |        │          Pod's IP          │                                │                          │
134 |        ◀─────────address────────────┤                                │                          │
135 |        │                            │                                │                          │
136 |        │                            │             Create             │                          │
137 |        ├────────────────────────────┼──────/couchbase-node-state─────▶                          │
138 |        │                            │              dir               │                          │
139 |        │                            │                                │                          │
140 |        │                            │           Success OR           │                          │
141 |        ◀────────────────────────────┼──────────────Fail──────────────┤                          │
142 |        │                            │                                │                          │
143 |        │                            │                                │        Create OR         │
144 |        ├────────────────────────────┼────────────────────────────────┼────────────Join ─────────▶
145 |        │                            │                                │         Cluster          │
146 |        │                            │                                │                          │
147 |        │                            │                                │   Add my pod IP under    │
148 |        ├────────────────────────────┼────────────────────────────────┼───────cbs-node-state─────▶
149 |        │                            │                                │                          │
150 |        │                            │                                │                          │
151 |        ▼                            ▼                                ▼                          ▼
152 | 
153 | ```
154 | 
155 | 
156 | ## View container logs
157 | 
158 | First find the names of the pods that the replication controllers spawned:
159 | 
160 | ```
161 | $ kubectl get pods
162 | ```
163 | 
164 | Under the POD column in the resulting table-formatted output, you should see pods similar to:
165 | 
166 | ```
167 | couchbase-admin-controller-ho6ta
168 | couchbase-controller-j7yzf
169 | ```
170 | 
171 | View the logs for all of the containers via:
172 | 
173 | ```
174 | $ kubectl logs couchbase-admin-controller-ho6ta couchbase-server
175 | $ kubectl logs couchbase-admin-controller-ho6ta couchbase-sidekick
176 | $ kubectl logs couchbase-controller-j7yzf couchbase-server
177 | $ kubectl logs couchbase-controller-j7yzf couchbase-sidekick
178 | ```
179 | 
180 | * Expected [couchbase-server logs](https://gist.github.com/tleyden/b9677515952fa054ddd2)
181 | * Expected [couchbase-sidekick logs](https://gist.github.com/tleyden/269679e71131b7e8536e)
182 | 
183 | 
184 | ## Connect to Couchbase Server Admin UI
185 | 
186 | This is platform-specific.
187 | 
188 | Currently there are only instructions for [Google Container Engine](https://github.com/couchbase/kubernetes/wiki/Running-on-Google-Container-Engine-(GKE))
189 | 
190 | # Sync Gateway
191 | 
192 | ## Create a Sync Gateway replication set
193 | 
194 | Sync Gateway is a server-side component of Couchbase Mobile that provides a REST API in front of Couchbase Server; Couchbase Lite-enabled mobile apps connect to it in order to sync their data.
195 | 
196 | It provides a good example of setting up an application tier on top of Couchbase Server. If you were creating a tier of web servers that used a Couchbase SDK to store data in Couchbase Server, your architecture would be very similar to this.
197 | 
198 | To kick off a Sync Gateway replica set, run:
199 | 
200 | ```
201 | $ kubectl create -f replication-controllers/sync-gateway.yaml
202 | ```
203 | 
204 | By default, it will use the Sync Gateway config in [`config/sync-gateway.config`](https://github.com/couchbase/kubernetes/blob/master/config/sync-gateway.config) -- note that for the address of Couchbase Server, it uses the **DNS service** address in the `default` namespace: `http://couchbase-service.default.svc.cluster.local:8091`.
SkyDNS is enabled by default on GKE/GCE, but if you are not running SkyDNS, you will need to change the config to use the service IP shown by `kubectl get service couchbase-service`.
205 | 
206 | ## Create a publicly exposed Sync Gateway service
207 | 
208 | ```
209 | $ kubectl create -f services/sync-gateway.yaml
210 | ```
211 | 
212 | To find the IP address once the service is running, run:
213 | 
214 | ```
215 | $ kubectl describe service sync-gateway
216 | ```
217 | 
218 | and you should see:
219 | 
220 | ```
221 | ...
222 | LoadBalancer Ingress: 104.197.15.37
223 | ...
224 | ```
225 | 
226 | where `104.197.15.37` is a publicly accessible IP. To verify, from your local workstation or any machine connected to the internet, wait a few minutes to give it a chance to start up, and then run:
227 | 
228 | ```
229 | $ curl 104.197.15.37:4984
230 | ```
231 | 
232 | and you should see:
233 | 
234 | ```
235 | {"couchdb":"Welcome","vendor":{"name":"Couchbase Sync Gateway","version":1},"version":"Couchbase Sync Gateway/HEAD(nobranch)(04138fd)"}
236 | ```
237 | 
238 | Congrats! You are now running Couchbase Server and Sync Gateway on Kubernetes.
239 | 
240 | ## TODO
241 | 
242 | * Documentation on how to run on Kubernetes environments other than GKE (e.g., AWS)
243 | * Improve the story when pods go down. Currently some manual intervention is needed to rebalance the cluster; ideally this would be fully automated (possibly via a pod shutdown hook). Today:
244 |   * New pod comes up with a different IP
245 |   * Rebalance fails because there are now 3 Couchbase Server nodes, one of which is unreachable
246 |   * To fix manually: fail over the downed Couchbase Server node, then kick off a rebalance
247 | * Improve the story surrounding etcd
248 | * Look into host-mounted volumes for persistent data storage
249 | 
250 | ## Related Work
251 | 
252 | * [tophatch/CouchbaseMobileWithKubernetes](https://github.com/tophatch/CouchbaseMobileWithKubernetes)
253 | 
254 | ## References
255 | 
256 | * [Couchbase Docker image on Dockerhub](https://hub.docker.com/u/couchbase/server)
257 | 
258 | * [Google cloud sdk](https://registry.hub.docker.com/u/google/cloud-sdk/)
259 | 
260 | * https://cloud.google.com/container-engine/docs/hello-wordpress
261 | 
262 | * https://cloud.google.com/container-engine/docs/guestbook
263 | 
264 | * [Google Groups post regarding the etcd service](https://groups.google.com/d/msg/google-containers/rFIFD6Y0_Ew/GeDa8ZuPWd8J)
265 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | 🚨 **_This project is deprecated_** 🚨
2 | 
3 | ➡️ Couchbase is now natively supported on Kubernetes using the [Couchbase Autonomous Operator](https://www.couchbase.com/products/cloud/kubernetes)!
⬅️ 4 | -------------------------------------------------------------------------------- /config/sync-gateway.config: -------------------------------------------------------------------------------- 1 | { 2 | "log": ["*"], 3 | "databases": { 4 | "db": { 5 | "server": "http://couchbase-service.default.svc.cluster.local:8091", 6 | "bucket": "default", 7 | "users": { "GUEST": { "disabled": false, "admin_channels": ["*"] } } 8 | } 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /docs/couchbase-kubernetes.monopic: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/couchbase/kubernetes/6d51e1cd83779fe7517a0d8497963273070e3f17/docs/couchbase-kubernetes.monopic -------------------------------------------------------------------------------- /docs/couchbase-kubernetes.txt: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | ┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ 5 | │ Kubernetes Cluster │ 6 | │ │ 7 | │ ┌──────────────────────────────────────────────────────────────────────┐ ┌──────────────────────────────────────────────────┐ │ 8 | │ │ Kubernetes Node 1 │ │ Kubernetes Node 2 │ │ 9 | │ │ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┌ ─ ─ ─ ─ ─ ─ ─ ─ │ │ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ │ │ 10 | │ │ Couchbase ReplicaSet (count=2) │ CB etcd service │ │ │ Couchbase ReplicaSet (count=2) │ │ │ 11 | │ │ │ ┌──────────────────────────────────────────┐ │ │ │ │ ┌──────────────────────────────────────────┐ │ │ 12 | │ │ │ couchbase-replicaset-pod-1 │ │ ┌────────────┐ │ │ │ │ couchbase-replicaset-pod-2 │ │ │ │ 13 | │ │ │ │ ┌─────────────────┐ ┌───────────────────┐│ │ │etcd pod │ │ │ │ │ ┌─────────────────┐ ┌───────────────────┐│ │ │ 14 | │ │ │ │couchbase-server │ │couchbase-sidekick ││ │ │ │ │ │ │ │ │couchbase-server │ │couchbase-sidekick ││ │ │ │ 15 | │ │ │ │ │ container │ │ container ││ │ │ ┌────────┐ │ │ │ │ │ │ container │ │ container ││ │ │ 16 | │ │ │ └─────────────────┘ └───────────────────┘│ │ │ │etcd │ │ │ │ │ │ └─────────────────┘ └───────────────────┘│ │ │ │ 17 | │ │ │ └──────────────────────────────────────────┘ │ │ │containe│ │ │ │ │ └──────────────────────────────────────────┘ │ │ 18 | │ │ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ │ │r │ │ │ │ │ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ │ │ 19 | │ │ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ │ │ └────────┘ │ │ │ ┌ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ │ │ 20 | │ │ Sync Gateway ReplicaSet (count=2) │ └────────────┘ │ │ │ Sync Gateway ReplicaSet (count=2) │ │ │ 21 | │ │ │ ┌──────────────────────────────────────────┐ └ ─ ─ ─ ─ ─ ─ ─ ─ │ │ │ ┌──────────────────────────────────────────┐ │ │ 22 | │ │ │ sync-gw-replicaset-pod-1 │ │ │ │ │ sync-gw-replicaset-pod-2 │ │ │ │ 23 | │ │ │ │ ┌──────────────────────────────────────┐ │ │ │ │ │ ┌──────────────────────────────────────┐ │ │ │ 24 | │ │ │ │ sync gateway │ │ │ │ │ │ │ sync gateway │ │ │ │ │ 25 | │ │ │ │ │ container │ │ │ │ │ │ │ container │ │ │ │ 26 | │ │ │ └──────────────────────────────────────┘ │ │ │ │ │ └──────────────────────────────────────┘ │ │ │ │ 27 | │ │ │ └──────────────────────────────────────────┘ │ │ │ └──────────────────────────────────────────┘ │ │ 28 | │ │ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ │ │ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┘ │ │ 29 | │ 
└──────────────────────────────────────────────────────────────────────┘ └──────────────────────────────────────────────────┘ │ 30 | │ │ 31 | └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ 32 | 33 | 34 | 35 | ┌────────────────────────────────────────────────────────────────────┐ 36 | │ Google Container Engine (GKE) │ 37 | │ │ 38 | │ │ 39 | │ │ 40 | ┌────────┐ │ ┌──────────────────────────────────────────────────┐ │ 41 | │ REST ├───┐ │ │ Kubernetes Cluster │ │ 42 | │ Client │ │ │ │ │ │ 43 | └────────┘ │ │ ┌──────────┐ │ ┌─────────────┐ ┌─────────┐ ┌────────┐ │ │ 44 | └──┼─▶ external │ │ │sync-gateway │ │couchbase│ │ etcd │ │ │ 45 | │ │ load ──┼──┼─▶│ service ├────▶│ service ├─────▶│service │ │ │ 46 | ┌────────┐ ┌──┼─▶ balancer │ │ │ │ │ │ │ │ │ │ 47 | │ REST │ │ │ └──────────┘ │ └─────────────┘ └─────────┘ └────────┘ │ │ 48 | │ Client ├───┘ │ │ │ │ 49 | └────────┘ │ └──────────────────────────────────────────────────┘ │ 50 | │ │ 51 | └────────────────────────────────────────────────────────────────────┘ 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ 61 | │ Couchbase │ │ OS / libc │ │ Couchbase │ │ Couchbase │ 62 | │ Sidekick │ │ │ │ Etcd │ │ Server │ 63 | └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ 64 | │ │ │ │ 65 | │ │ │ │ 66 | │ Get IP of first │ │ │ 67 | ├────non-loopback iface──────▶ │ │ 68 | │ │ │ │ 69 | │ Pod's IP │ │ │ 70 | ◀─────────address────────────┤ │ │ 71 | │ │ │ │ 72 | │ │ Create │ │ 73 | ├────────────────────────────┼──────/couchbase-node-state─────▶ │ 74 | │ │ dir │ │ 75 | │ │ │ │ 76 | │ │ Success OR │ │ 77 | ◀────────────────────────────┼──────────────Fail──────────────┤ │ 78 | │ │ │ │ 79 | │ │ │ Create OR │ 80 | ├────────────────────────────┼────────────────────────────────┼────────────Join ─────────▶ 81 | │ │ │ Cluster │ 82 | │ │ │ │ 83 | │ │ │ Add my pod IP under │ 84 | ├────────────────────────────┼────────────────────────────────┼───────cbs-node-state─────▶ 85 | │ │ │ │ 86 | │ │ │ │ 87 | ▼ ▼ ▼ ▼ -------------------------------------------------------------------------------- /pods/app-etcd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: app-etcd 5 | labels: 6 | name: app-etcd 7 | spec: 8 | containers: 9 | - name: app-etcd 10 | image: tleyden5iwx/etcd-discovery 11 | command: 12 | - etcdisco 13 | - -listen-client-urls 14 | - http://0.0.0.0:2379 15 | - -advertise-client-urls 16 | - http://{{.LOCAL_IP}}:2379 17 | ports: 18 | - name: client 19 | containerPort: 2379 20 | - name: peer 21 | containerPort: 2380 -------------------------------------------------------------------------------- /replication-controllers/couchbase-admin-server.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: couchbase-admin-controller 5 | spec: 6 | replicas: 1 7 | # selector identifies the set of Pods that this 8 | # replicaController is responsible for managing 9 | selector: 10 | name: couchbase-server 11 | role: admin 12 | # podTemplate defines the 'cookie cutter' used for creating 13 | # new pods when necessary 14 | template: 15 | metadata: 16 | labels: 17 | # Important: these labels need to match the selector above 18 | # The api server enforces this constraint. 
19 | name: couchbase-server 20 | role: admin 21 | spec: 22 | containers: 23 | - name: couchbase-server 24 | image: couchbase/server:enterprise-4.0.0-beta 25 | ports: 26 | - name: admin 27 | containerPort: 8091 28 | - name: views 29 | containerPort: 8092 30 | - name: couchbase-sidekick 31 | image: tleyden5iwx/couchbase-cluster-go:latest 32 | command: 33 | - /bin/sh 34 | - -c 35 | - update-wrapper --skip-etcd-check couchbase-cluster start-couchbase-sidekick --discover-local-ip --etcd-servers http://$APP_ETCD_SERVICE_SERVICE_HOST:$APP_ETCD_SERVICE_SERVICE_PORT_CLIENT 36 | -------------------------------------------------------------------------------- /replication-controllers/couchbase-server.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: couchbase-controller 5 | spec: 6 | replicas: 1 7 | # selector identifies the set of Pods that this 8 | # replicaController is responsible for managing 9 | selector: 10 | name: couchbase-server 11 | role: nodes 12 | # podTemplate defines the 'cookie cutter' used for creating 13 | # new pods when necessary 14 | template: 15 | metadata: 16 | labels: 17 | # Important: these labels need to match the selector above 18 | # The api server enforces this constraint. 19 | name: couchbase-server 20 | role: nodes 21 | spec: 22 | containers: 23 | - name: couchbase-server 24 | image: couchbase/server:enterprise-4.0.0-beta 25 | ports: 26 | - name: admin 27 | containerPort: 8091 28 | - name: views 29 | containerPort: 8092 30 | - name: couchbase-sidekick 31 | image: tleyden5iwx/couchbase-cluster-go:latest 32 | command: 33 | - /bin/sh 34 | - -c 35 | - update-wrapper --skip-etcd-check couchbase-cluster start-couchbase-sidekick --discover-local-ip --etcd-servers http://$APP_ETCD_SERVICE_SERVICE_HOST:$APP_ETCD_SERVICE_SERVICE_PORT_CLIENT 36 | # Can also use http://app-etcd-service.kubernetes.local 37 | -------------------------------------------------------------------------------- /replication-controllers/sync-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: sync-gateway 5 | spec: 6 | replicas: 2 7 | # selector identifies the set of Pods that this 8 | # replicaController is responsible for managing 9 | selector: 10 | name: sync-gateway 11 | # podTemplate defines the 'cookie cutter' used for creating 12 | # new pods when necessary 13 | template: 14 | metadata: 15 | labels: 16 | # Important: these labels need to match the selector above 17 | # The api server enforces this constraint. 18 | name: sync-gateway 19 | spec: 20 | containers: 21 | - name: sync-gateway 22 | image: couchbase/sync-gateway 23 | command: 24 | - sync_gateway 25 | - https://raw.githubusercontent.com/couchbase/kubernetes/master/config/sync-gateway.config 26 | ports: 27 | - containerPort: 4984 28 | - containerPort: 4985 -------------------------------------------------------------------------------- /scripts/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | * **k8s-gce-register.sh** -- this stands up the Couchbase cluster if you are running on raw GCE (as opposed to GKE, which has Kubernetes running already). 
Assumes you have a running GCE cluster started with `curl -sS https://get.k8s.io | bash`, as per the comments in the script itself
4 | * **k8s-gce-cleanup.sh** -- cleanup/teardown for k8s-gce-register.sh
5 | * **register.sh** -- this stands up the Couchbase cluster if you are running on GKE
6 | * **cleanup.sh** -- cleanup/teardown for register.sh
7 | * **nodes.js** -- helper that reads `gcloud compute instances list --format json` on stdin and prints the names of the node (minion) instances; used by the register scripts
8 | 
--------------------------------------------------------------------------------
/scripts/cleanup.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | PROJECT_ID=[your project id]
4 | ZONE=us-central1-f
5 | REGION=us-central1
6 | CLUSTER=[your cluster name]
7 | 
8 | # config
9 | gcloud components update alpha
10 | gcloud config set project $PROJECT_ID
11 | gcloud config set compute/zone $ZONE
12 | gcloud config set compute/region $REGION
13 | 
14 | gcloud compute forwarding-rules delete --quiet k8s-$CLUSTER
15 | gcloud compute firewall-rules delete --quiet cbs-8091
16 | gcloud compute firewall-rules delete --quiet cbs2-8091
17 | gcloud compute firewall-rules delete --quiet k8s-$CLUSTER-all
18 | gcloud compute firewall-rules delete --quiet k8s-$CLUSTER-master-https
19 | gcloud compute firewall-rules delete --quiet k8s-$CLUSTER-vms
20 | gcloud compute target-pools delete --quiet k8s-$CLUSTER
21 | gcloud alpha container clusters delete --quiet $CLUSTER
22 | 
--------------------------------------------------------------------------------
/scripts/k8s-aws-register.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # Adds Couchbase to an existing Kubernetes cluster on AWS
4 | #
5 | # Assumes you have a running kubernetes cluster
6 | # created with `export KUBERNETES_PROVIDER=aws; export MASTER_SIZE=t2.medium; export MINION_SIZE=t2.medium; curl -sS https://get.k8s.io | bash`
7 | # or a local Kubernetes source folder where
8 | # kubernetes/cluster/kube-up.sh was run
9 | 
10 | 
11 | CBUSER=user
12 | CBPASSWORD=passw0rd
13 | 
14 | SKYDNS_DOMAIN=default.svc.cluster.local
15 | AWS_SSH_KEY=${AWS_SSH_KEY:-$HOME/.ssh/kube_aws_rsa}
16 | 
17 | SSH_USER=ubuntu
18 | AWS_CMD="aws --output json ec2"
19 | 
20 | function get_instanceid_from_name {
21 |   local tagName=$1
22 |   $AWS_CMD --output text describe-instances \
23 |     --filters Name=tag:Name,Values=${tagName} \
24 |               Name=instance-state-name,Values=running \
25 |     --query Reservations[].Instances[].InstanceId
26 | }
27 | 
28 | function get_instance_public_ip {
29 |   local instance_id=$1
30 |   $AWS_CMD --output text describe-instances \
31 |     --instance-ids ${instance_id} \
32 |     --query Reservations[].Instances[].NetworkInterfaces[0].Association.PublicIp
33 | }
34 | 
35 | MASTER_NAME=kubernetes-master
36 | if [[ -z "${KUBE_MASTER_ID-}" ]]; then
37 |   KUBE_MASTER_ID=$(get_instanceid_from_name ${MASTER_NAME})
38 | fi
39 | if [[ -z "${KUBE_MASTER_ID-}" ]]; then
40 |   echo "Could not detect Kubernetes master node. Make sure you've launched a cluster with 'kube-up.sh'"
41 |   exit 1
42 | fi
43 | if [[ -z "${KUBE_MASTER_IP-}" ]]; then
44 |   KUBE_MASTER_IP=$(get_instance_public_ip ${KUBE_MASTER_ID})
45 | fi
46 | 
47 | 
48 | # pods
49 | printf "\nCreating pods ...\n"
50 | kubernetes/cluster/kubectl.sh create -f pods/app-etcd.yaml
51 | 
52 | printf "\nWaiting for etcd cluster to initialise ...\n"
53 | sleep 20
54 | kubernetes/cluster/kubectl.sh create -f services/app-etcd.yaml
55 | 
56 | # config file adjustments
57 | printf "\nAdjusting config files ..."
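# The commented-out gcloud line below is the GCE equivalent of the ssh
# command that follows: both store the Couchbase admin credentials in etcd
# by PUTting the key through the apiserver's pod proxy, which listens on
# localhost:8080 on the master.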
58 | 
59 | # gcloud compute ssh kubernetes-master --command "curl --silent -L http://localhost:8080/api/v1/proxy/namespaces/default/pods/app-etcd:2379/v2/keys/couchbase.com/userpass -X PUT -d value='$CBUSER:$CBPASSWORD'"
60 | 
61 | ssh -oStrictHostKeyChecking=no -i "${AWS_SSH_KEY}" ${SSH_USER}@${KUBE_MASTER_IP} "curl --silent -L http://localhost:8080/api/v1/proxy/namespaces/default/pods/app-etcd:2379/v2/keys/couchbase.com/userpass -X PUT -d value='$CBUSER:$CBPASSWORD'"
62 | 
63 | # Best practice to create pods/RC's before services
64 | # replication-controllers
65 | printf "\nCreating replication-controllers ...\n"
66 | kubernetes/cluster/kubectl.sh create -f replication-controllers/couchbase-server.yaml
67 | kubernetes/cluster/kubectl.sh create -f replication-controllers/couchbase-admin-server.yaml
68 | 
69 | # services
70 | printf "\nCreating services ...\n"
71 | kubernetes/cluster/kubectl.sh create -f services/couchbase-service.yaml
72 | kubernetes/cluster/kubectl.sh create -f services/couchbase-admin-service.yaml
73 | 
74 | # firewall and forwarding-rules
75 | # printf "\nCreating firewall and forwarding-rules ...\n"
76 | 
77 | # Done.
78 | CBADMINIP=$(kubernetes/cluster/kubectl.sh get -o json service couchbase-admin-service | jsawk 'return this.status.loadBalancer.ingress[0].hostname')
79 | 
80 | # Can go to any k8s minion and reach it there, or go through the master
81 | printf "\nDone.\n\n Go to http://$CBADMINIP:8091\n or \n http://$KUBE_MASTER_IP:8080/api/v1/proxy/namespaces/default/services/couchbase-admin-service:8091/\n"
82 | 
--------------------------------------------------------------------------------
/scripts/k8s-gce-cleanup.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | gcloud compute firewall-rules delete --quiet cbs-8091
4 | 
--------------------------------------------------------------------------------
/scripts/k8s-gce-register.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # Adds couchbase to an existing cluster on GCE
4 | #
5 | # Assumes you have a running kubernetes cluster
6 | # created with `curl -sS https://get.k8s.io | bash`
7 | # or a local Kubernetes source folder where
8 | # kubernetes/cluster/kube-up.sh was run
9 | 
10 | CBUSER=user
11 | CBPASSWORD=passw0rd
12 | 
13 | SKYDNS_DOMAIN=default.svc.cluster.local
14 | 
15 | # pods
16 | printf "\nCreating pods ...\n"
17 | kubernetes/cluster/kubectl.sh create -f pods/app-etcd.yaml
18 | 
19 | printf "\nWaiting for cluster to initialise ...\n"
20 | sleep 20
21 | kubernetes/cluster/kubectl.sh create -f services/app-etcd.yaml
22 | 
23 | # config file adjustments
24 | printf "\nAdjusting config files ..."
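# Store the Couchbase admin credentials in etcd: gcloud SSHes into the
# kubernetes-master instance and PUTs the key through the apiserver's pod
# proxy on localhost:8080.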
25 | 
26 | gcloud compute ssh kubernetes-master --command "curl --silent -L http://localhost:8080/api/v1/proxy/namespaces/default/pods/app-etcd:2379/v2/keys/couchbase.com/userpass -X PUT -d value='$CBUSER:$CBPASSWORD'"
27 | 
28 | NODE1=$(gcloud compute instances list --format json | ./nodes.js | awk 'NR==1{print $1}')
29 | NODE2=$(gcloud compute instances list --format json | ./nodes.js | awk 'NR==2{print $1}')
30 | 
31 | # Best practice to create pods/RC's before services
32 | # replication-controllers
33 | printf "\nCreating replication-controllers ...\n"
34 | kubernetes/cluster/kubectl.sh create -f replication-controllers/couchbase-server.yaml
35 | kubernetes/cluster/kubectl.sh create -f replication-controllers/couchbase-admin-server.yaml
36 | 
37 | # services
38 | printf "\nCreating services ...\n"
39 | kubernetes/cluster/kubectl.sh create -f services/couchbase-service.yaml
40 | kubernetes/cluster/kubectl.sh create -f services/couchbase-admin-service.yaml
41 | 
42 | # firewall and forwarding-rules
43 | printf "\nCreating firewall and forwarding-rules ...\n"
44 | gcloud compute firewall-rules create cbs-8091 --allow tcp:8091 --target-tags kubernetes-minion
45 | gcloud compute firewall-rules create cbs-4984 --allow tcp:4984 --target-tags kubernetes-minion
46 | 
47 | # Done.
48 | CBADMINIP=$(kubernetes/cluster/kubectl.sh get -o json service couchbase-admin-service | jsawk 'return this.status.loadBalancer.ingress[0].ip')
49 | 
50 | # CBADMINIP=kubernetes/cluster/kubectl.sh get -o template service couchbase-admin-service --template={{.status.loadBalancer.ingress}}
51 | 
52 | # Can go to any k8s minion and reach it there, or go through the master
53 | printf "\nDone.\n\n Go to http://$CBADMINIP:8091\n or \n http://<master-ip>:8080/api/v1/proxy/namespaces/default/services/couchbase-admin-service:8091/\n"
54 | 
--------------------------------------------------------------------------------
/scripts/nodes.js:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env node
2 | 
3 | // Reads `gcloud compute instances list --format json` on stdin and prints
4 | // the names of the instances with 'node' in their name (the k8s minions).
5 | var data = '';
6 | 
7 | function withPipe(data) {
8 |   var j = JSON.parse(data);
9 | 
10 |   for (var k in j) {
11 |     if (j[k].name.indexOf('node') !== -1) {
12 |       console.log(j[k].name);
13 |     }
14 |   }
15 | }
16 | 
17 | var self = process.stdin;
18 | self.on('readable', function() {
19 |   var chunk = this.read();
20 |   if (chunk !== null) {
21 |     data += chunk;
22 |   }
23 | });
24 | 
25 | self.on('end', function() {
26 |   withPipe(data);
27 | });
28 | 
--------------------------------------------------------------------------------
/scripts/register.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | PROJECT_ID=[your project id]
4 | ZONE=us-central1-f
5 | REGION=us-central1
6 | CLUSTER=[your cluster name]
7 | 
8 | CBUSER=Administrator
9 | CBPASSWORD=passw0rd
10 | 
11 | # config
12 | gcloud components update beta
13 | gcloud config set project $PROJECT_ID
14 | gcloud config set compute/zone $ZONE
15 | gcloud config set compute/region $REGION
16 | 
17 | gcloud beta container clusters create $CLUSTER \
18 |   --num-nodes 2 \
19 |   --quiet \
20 |   --machine-type g1-small
21 | 
22 | gcloud config set container/cluster $CLUSTER
23 | 
24 | 
25 | # pods
26 | printf "\nCreating pods ...\n"
27 | kubectl create -f services/app-etcd.yaml
28 | kubectl create -f pods/app-etcd.yaml
29 | 
30 | printf "\nWaiting for cluster to initialise ...\n"
31 | sleep 300
32 | 
33 | # config file adjustments
34 | printf "\nAdjusting config files ..."
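# Look up the app-etcd pod's IP and the node it was scheduled onto
# (requires jsawk, a JSON-aware awk-like filter, on the PATH).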
35 | PODIP=$(kubectl get -o json pod app-etcd | jsawk 'return this.status.podIP')
36 | sleep 5
37 | ETCDHOST=$(kubectl get -o json pod app-etcd | jsawk 'return this.spec.nodeName')
38 | 
39 | sed -i'.bak' "s/etcd.pod.ip/$PODIP/" replication-controllers/couchbase.controller.json
40 | 
41 | gcloud --quiet compute ssh $ETCDHOST --command "curl --silent -L http://$PODIP:2379/v2/keys/couchbase.com/userpass -X PUT -d value='$CBUSER:$CBPASSWORD'"
42 | 
43 | NODE1=$(gcloud compute instances list --format json | ./nodes.js | awk 'NR==1{print $1}')
44 | NODE2=$(gcloud compute instances list --format json | ./nodes.js | awk 'NR==2{print $1}')
45 | 
46 | # replication-controllers
47 | printf "\nCreating replication-controllers ...\n"
48 | kubectl create -f replication-controllers/couchbase.controller.json
49 | kubectl create -f replication-controllers/couchbase-admin.controller.json
50 | 
51 | # services
52 | printf "\nCreating services ...\n"
53 | kubectl create -f services/couchbase.service.json
54 | kubectl create -f services/couchbase-admin.service.json
55 | 
56 | # firewall and forwarding-rules
57 | printf "\nCreating firewall and forwarding-rules ...\n"
58 | gcloud compute instances add-tags $NODE1 --tags cb1
59 | gcloud compute firewall-rules create cbs-8091 --allow tcp:8091 --target-tags cb1
60 | gcloud compute instances add-tags $NODE2 --tags cb2
61 | gcloud compute firewall-rules create cbs2-8091 --allow tcp:8091 --target-tags cb2
62 | 
63 | # reset config file for next run
64 | printf "\nResetting config files for next run ...\n"
65 | rm replication-controllers/couchbase.controller.json
66 | mv replication-controllers/couchbase.controller.json.bak replication-controllers/couchbase.controller.json
67 | 
68 | # Done.
69 | CBNODEIP=$(kubectl get -o json service couchbase-admin-service | jsawk 'return this.status.loadBalancer.ingress[0].ip')
70 | 
71 | printf "\nDone.\n\nGo to http://$CBNODEIP:8091\n\n"
72 | 
--------------------------------------------------------------------------------
/services/app-etcd.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 |   name: app-etcd-service
5 | spec:
6 |   ports:
7 |   - port: 2379
8 |     name: client
9 |     targetPort: client
10 |   - port: 2380
11 |     name: peer
12 |     targetPort: peer
13 |   selector:
14 |     name: app-etcd
15 | 
--------------------------------------------------------------------------------
/services/couchbase-admin-service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 |   name: couchbase-admin-service
5 | spec:
6 |   ports: # see http://docs.couchbase.com/admin/admin/Install/install-networkPorts.html
7 |   - port: 8091
8 |     name: admin
9 |     targetPort: admin
10 |   - port: 8092
11 |     name: views
12 |     targetPort: views
13 |   type: "LoadBalancer"
14 |   # just like the selector in the replication controller,
15 |   # but this time it identifies the set of pods to load balance
16 |   # traffic to.
17 | selector: 18 | name: couchbase-server 19 | role: admin 20 | -------------------------------------------------------------------------------- /services/couchbase-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: couchbase-service 5 | spec: 6 | ports: # see http://docs.couchbase.com/admin/admin/Install/install-networkPorts.html 7 | - port: 8091 8 | name: admin 9 | targetPort: admin 10 | - port: 8092 11 | name: views 12 | targetPort: views 13 | # just like the selector in the replication controller, 14 | # but this time it identifies the set of pods to load balance 15 | # traffic to. 16 | selector: 17 | name: couchbase-server 18 | -------------------------------------------------------------------------------- /services/sync-gateway.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: sync-gateway-service 5 | spec: 6 | ports: 7 | - port: 4984 8 | name: apiport 9 | targetPort: 4984 10 | type: "LoadBalancer" 11 | # just like the selector in the replication controller, 12 | # but this time it identifies the set of pods to load balance 13 | # traffic to. 14 | selector: 15 | name: sync-gateway 16 | --------------------------------------------------------------------------------