├── database
│   ├── deployment.yml
│   ├── pvc.yml
│   └── service.yml
├── license.md
├── namespace.yml
├── prisma
│   ├── configmap.yml
│   ├── deployment.yml
│   └── service.yml
└── readme.md

/database/deployment.yml:
--------------------------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: database
  namespace: prisma
  labels:
    stage: production
    name: database
    app: mysql
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        stage: production
        name: database
        app: mysql
    spec:
      containers:
        - name: mysql
          image: 'mysql:5.7'
          args:
            - --ignore-db-dir=lost+found
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "prisma"
          ports:
            - name: mysql-3306
              containerPort: 3306
          volumeMounts:
            - name: database-disk
              readOnly: false
              mountPath: /var/lib/mysql
      volumes:
        - name: database-disk
          persistentVolumeClaim:
            claimName: database-disk
--------------------------------------------------------------------------------
/database/pvc.yml:
--------------------------------------------------------------------------------
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-disk
  namespace: prisma
  labels:
    stage: production
    name: database
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
--------------------------------------------------------------------------------
/database/service.yml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: database
  namespace: prisma
spec:
  ports:
    - port: 3306
      targetPort: 3306
      protocol: TCP
  selector:
    stage: production
    name: database
    app: mysql
--------------------------------------------------------------------------------
/license.md:
--------------------------------------------------------------------------------
Copyright 2018 André König

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/namespace.yml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Namespace
metadata:
  name: prisma
--------------------------------------------------------------------------------
/prisma/configmap.yml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: prisma-configmap
  namespace: prisma
  labels:
    stage: production
    name: prisma
    app: prisma
data:
  PRISMA_CONFIG: |
    port: 4466
    # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
    # managementApiSecret: my-secret
    databases:
      default:
        connector: mysql
        host: database
        port: 3306
        user: root
        password: prisma
        migrations: true
--------------------------------------------------------------------------------
/prisma/deployment.yml:
--------------------------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prisma
  namespace: prisma
  labels:
    stage: production
    name: prisma
    app: prisma
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        stage: production
        name: prisma
        app: prisma
    spec:
      containers:
        - name: prisma
          image: 'prismagraphql/prisma:1.8'
          ports:
            - name: prisma-4466
              containerPort: 4466
          env:
            - name: PRISMA_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: prisma-configmap
                  key: PRISMA_CONFIG
--------------------------------------------------------------------------------
/prisma/service.yml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: prisma
  namespace: prisma
spec:
  ports:
    - port: 4466
      targetPort: 4466
      protocol: TCP
  selector:
    stage: production
    name: prisma
    app: prisma
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
# Prisma – Kubernetes Deployment Demo

Repository for demonstrating how to deploy a [Prisma](https://www.prismagraphql.com/) server to a [Kubernetes](https://kubernetes.io/) cluster.

## Motivation

In this tutorial, you will learn how to deploy a Prisma server on Kubernetes.

[Kubernetes](https://kubernetes.io/) is a container orchestrator that helps with deploying and scaling your containerized applications.

The setup in this tutorial assumes that you have a running Kubernetes cluster in place. There are several providers that give you the ability to establish and maintain a production-grade cluster. This tutorial aims to be provider agnostic, since Kubernetes itself acts as the abstraction layer. The only part that differs slightly between providers is the mechanism for provisioning `persistent volumes`. For demonstration purposes, we use the [Kubernetes Engine](https://cloud.google.com/kubernetes-engine) on the [Google Cloud Platform](https://cloud.google.com/) in this tutorial.
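If you don't have a cluster yet, one possible way to create one is the Kubernetes Engine together with the `gcloud` CLI. The following is only a minimal sketch, assuming the Google Cloud SDK is installed and a project is configured; the cluster name `prisma-demo`, the zone and the node count are arbitrary examples, and any other managed Kubernetes offering works just as well:

```sh
# Create a small GKE cluster (name, zone and size are just examples) ...
gcloud container clusters create prisma-demo --zone europe-west3-a --num-nodes 2

# ... and configure kubectl to talk to it.
gcloud container clusters get-credentials prisma-demo --zone europe-west3-a
```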
## Prerequisites

Before you can deploy a Prisma server on Kubernetes, you need to fulfill the following prerequisites. You need ...

* ... a running Kubernetes cluster (e.g. on the Google Cloud Platform)
* ... a local version of [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) which is configured to communicate with your running Kubernetes cluster

You can go ahead now and create a new directory on your local machine – call it `kubernetes-demo`. This will be the reference directory for our journey.

## Creating a separate namespace

As you may know, Kubernetes comes with a primitive called `namespace`, which allows you to group your workloads logically. Before applying the actual namespace on the cluster, we have to write the definition file for it. Inside our project directory, create a file called `namespace.yml` with the following content:

```yml(path="kubernetes-demo/namespace.yml")
apiVersion: v1
kind: Namespace
metadata:
  name: prisma
```

This definition creates a new namespace called `prisma`. Now, with the help of `kubectl`, you can apply the namespace by executing:

```sh
kubectl apply -f namespace.yml
```

Afterwards, you can run `kubectl get namespaces` to check whether the namespace has been created. You should see the following on a fresh Kubernetes cluster:

```
❯ kubectl get namespaces
NAME          STATUS    AGE
default       Active    1d
kube-public   Active    1d
kube-system   Active    1d
prisma        Active    2s
```

## MySQL

Prisma supports a good range of different database systems. Although we use MySQL for this tutorial, the steps can easily be adapted to a different database system, such as PostgreSQL.

### Disk provisioning

Now that we have a namespace to work in, it is time to deploy MySQL. Kubernetes distinguishes between stateless and stateful deployments. A database is by nature stateful and needs a disk to actually store its data. So how do we tell our cluster to create a new disk? By using a [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims):

```yml(path="kubernetes-demo/database/pvc.yml")
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-disk
  namespace: prisma
  labels:
    stage: production
    name: database
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```

Here we request a disk with a storage capacity of 20 GB. You can apply this PVC by executing:

```sh
kubectl apply -f database/pvc.yml
```

You should see a new disk in the [Disk Overview](https://console.cloud.google.com/compute/disks) on the Google Cloud Platform after a couple of seconds.
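Whether or not your provider has a disk console, you can always inspect the claim itself with `kubectl`. A quick sanity check could look like this (the exact output columns depend on your cluster version):

```sh
kubectl get pvc --namespace prisma
```

Once the `STATUS` column shows `Bound`, the claim has been satisfied and a persistent volume has been provisioned for it.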
### Deploying the Pod

Now that we have our disk for the database, it is time to create the actual deployment definition for our MySQL instance. A short reminder: Kubernetes comes with the primitives of `Pods` and `ReplicationControllers`.

A `Pod` is like a "virtual machine" in which a containerized application runs. It gets its own internal IP address and (if configured) disks attached to it. The `ReplicationController` is responsible for scheduling your `Pods` on cluster nodes and ensuring that they are running and scaled as configured.

In older releases of Kubernetes it was necessary to configure these separately. In recent versions, there is a newer resource, called `Deployment`. In such a definition you specify which container image you want to use, how many replicas should run and, in our case, which disk should be mounted.

The deployment definition of our MySQL database looks like this:

```yml(path="kubernetes-demo/database/deployment.yml")
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: database
  namespace: prisma
  labels:
    stage: production
    name: database
    app: mysql
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        stage: production
        name: database
        app: mysql
    spec:
      containers:
        - name: mysql
          image: 'mysql:5.7'
          args:
            - --ignore-db-dir=lost+found
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "prisma"
          ports:
            - name: mysql-3306
              containerPort: 3306
          volumeMounts:
            - name: database-disk
              readOnly: false
              mountPath: /var/lib/mysql
      volumes:
        - name: database-disk
          persistentVolumeClaim:
            claimName: database-disk
```

When applied, this definition schedules one Pod (`replicas: 1`) running a container based on the image `mysql:5.7`, configures the environment (it sets the password of the `root` user to `prisma`) and mounts the disk `database-disk` at the path `/var/lib/mysql`.

To actually apply that definition, execute:

```sh
kubectl apply -f database/deployment.yml
```

You can check whether the Pod has been scheduled by executing:

```sh
kubectl get pods --namespace prisma

NAME                        READY     STATUS    RESTARTS   AGE
database-3199294884-93hw4   1/1       Running   0          1m
```

It runs!
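If you want to go one step further and verify that MySQL actually accepts connections, you can run the MySQL client inside the container. This is only a quick sanity check; substitute the pod name from the output above, and expect a warning because the password is passed on the command line:

```sh
kubectl exec -it database-3199294884-93hw4 --namespace prisma -- \
  mysql -uroot -pprisma -e 'SELECT 1;'
```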
## Deploying the Service

Before diving into this section, here's a short recap.

Our MySQL database Pod is now running and available within the cluster internally. Remember, Kubernetes assigns an internal IP address to the `Pod` so that another application can access the database.

Now imagine a scenario in which your database crashes. The cluster management system takes care of that situation and schedules the `Pod` again. In this case, Kubernetes assigns a different IP address, which breaks the applications that are communicating with the database.

To avoid such a situation, the cluster manager provides an internal DNS resolution mechanism. You have to use a different primitive, called `Service`, to benefit from it. A service is an internal load balancer that is reachable via the `service name`. Its task is to forward the traffic to your `Pod(s)` and make them reachable across the cluster by that name.

A service definition for our MySQL database looks like this:

```yml(path="kubernetes-demo/database/service.yml")
apiVersion: v1
kind: Service
metadata:
  name: database
  namespace: prisma
spec:
  ports:
    - port: 3306
      targetPort: 3306
      protocol: TCP
  selector:
    stage: production
    name: database
    app: mysql
```

This definition creates an internal load balancer with the name `database`. The service is then reachable by this name within the `prisma` namespace. A little explanation of the `spec` section:

* **ports:** Here you map the service port to the actual container port. In this case the mapping is `3306` to `3306`.
* **selector:** A kind of query. The load balancer identifies the `Pods` it forwards traffic to by selecting the ones with the specified labels.

After creating this file, you can apply it with:

```sh
kubectl apply -f database/service.yml
```

To verify that the service is up, execute:

```sh
kubectl get services --namespace prisma

NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
database   ClusterIP   10.3.241.165   <none>        3306/TCP   1m
```

## Prisma

Okay, fair enough, the database is deployed. Next up: deploying the actual Prisma server, which serves as the endpoint for the Prisma CLI.

This application communicates with the already deployed `database` service and uses it as its storage backend. The Prisma server itself is a stateless application; it doesn't need any additional disk storage.

### Deploying the ConfigMap

The Prisma server needs some configuration, such as the database connection information and which connector Prisma should use. We will deploy this configuration as a so-called `ConfigMap`, which acts like an ordinary configuration file, but whose content can be injected into an environment variable:

```yml(path="kubernetes-demo/prisma/configmap.yml")
apiVersion: v1
kind: ConfigMap
metadata:
  name: prisma-configmap
  namespace: prisma
  labels:
    stage: production
    name: prisma
    app: prisma
data:
  PRISMA_CONFIG: |
    port: 4466
    # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
    # managementApiSecret: my-secret
    databases:
      default:
        connector: mysql
        host: database
        port: 3306
        user: root
        password: prisma
        migrations: true
```

After defining the file, you can apply it via:

```sh
kubectl apply -f prisma/configmap.yml
```
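If you want to double-check what ended up in the `ConfigMap` before wiring it into a deployment, you can simply print it. A quick sanity check, nothing more:

```sh
kubectl get configmap prisma-configmap --namespace prisma -o yaml
```

The `data.PRISMA_CONFIG` field should contain exactly the configuration block from above.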
### Deploying the Pod

Deploying the actual Prisma server in a Pod is pretty straightforward. First of all, you have to write the deployment definition:

```yml(path="kubernetes-demo/prisma/deployment.yml")
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prisma
  namespace: prisma
  labels:
    stage: production
    name: prisma
    app: prisma
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        stage: production
        name: prisma
        app: prisma
    spec:
      containers:
        - name: prisma
          image: 'prismagraphql/prisma:1.8'
          ports:
            - name: prisma-4466
              containerPort: 4466
          env:
            - name: PRISMA_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: prisma-configmap
                  key: PRISMA_CONFIG
```

This configuration looks similar to the deployment definition of the MySQL database. We tell Kubernetes to schedule one replica of the server and populate the `PRISMA_CONFIG` environment variable from the previously deployed `ConfigMap`.

Afterwards, we are ready to apply that deployment definition:

```sh
kubectl apply -f prisma/deployment.yml
```

As in the previous sections, check that the Prisma server has been scheduled on the Kubernetes cluster by executing:

```sh
kubectl get pods --namespace prisma

NAME                        READY     STATUS    RESTARTS   AGE
database-3199294884-93hw4   1/1       Running   0          5m
prisma-1733176504-zlphg     1/1       Running   0          1m
```

Yay! The Prisma server is running! Off to our next and last step:

### Deploying the Service

Okay, cool: the database `Pod` is running and has an internal load balancer in front of it, and the Prisma server `Pod` is also running, but it is still missing its load balancer, a.k.a. `Service`. Let's fix that:

```yml(path="kubernetes-demo/prisma/service.yml")
apiVersion: v1
kind: Service
metadata:
  name: prisma
  namespace: prisma
spec:
  ports:
    - port: 4466
      targetPort: 4466
      protocol: TCP
  selector:
    stage: production
    name: prisma
    app: prisma
```

Apply it via:

```sh
kubectl apply -f prisma/service.yml
```

Okay, done! The Prisma server is now reachable within the Kubernetes cluster via its name `prisma`.
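To confirm that both internal load balancers are in place, you can list the services once more. The cluster IPs will of course differ in your cluster:

```sh
kubectl get services --namespace prisma
```

You should now see two entries of type `ClusterIP`: `database` on port `3306` and `prisma` on port `4466`.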
That's all. Prisma is running on Kubernetes!

The last step is to configure your local `Prisma CLI` so that you can communicate with the instance on the Kubernetes cluster.

This last step is also necessary if you want to integrate `prisma deploy` into your CI/CD process.

## Configuration of the Prisma CLI

The Prisma server is running on the Kubernetes cluster and sits behind an internal load balancer. This is a sane security default, because you don't want to expose the Prisma server to the public directly. Instead, you would develop a GraphQL API and deploy it to the Kubernetes cluster as well.

You may ask: "Okay, but how do I execute `prisma deploy` in order to populate my data model when I'm not able to communicate with the Prisma server directly?" That is indeed a very good question! `kubectl` comes with a mechanism that allows forwarding a local port to an application that lives on the Kubernetes cluster.

So every time you want to communicate with your Prisma server on the Kubernetes cluster, you have to perform the following steps:

1. `kubectl get pods --namespace prisma` to identify the name of the Prisma pod
2. `kubectl port-forward --namespace prisma <prisma-pod-name> 4467:4466` – this forwards `127.0.0.1:4467` -> port `4466` of the Prisma pod inside the cluster

The Prisma server is now reachable via `http://localhost:4467`. This is the `endpoint` you have to specify in your `prisma.yml`. So if your service is named `myservice` and you want to deploy it to the stage `production`, your endpoint URL would look like this: `http://localhost:4467/myservice/production`.

An example `prisma.yml` could look like this:

```yml
endpoint: http://localhost:4467/myservice/production
datamodel: datamodel.graphql
```

With this in place, you can deploy the Prisma service via the Prisma CLI (`prisma deploy`) as long as your port forwarding to the cluster is active.

Okay, you made it! Congratulations, you have successfully deployed a Prisma server to a production Kubernetes cluster environment.

## Author

MIT © [André König](https://andrekoenig.de)
--------------------------------------------------------------------------------