├── LICENSE
├── README.md
└── images
    └── crud.png

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2019 Alex Ellis

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Guide to building a CRUD API with Postgresql, Node.js and OpenFaaS

In this guide you will learn how to create a CRUD API using Postgresql for storage, Node.js for application code, and OpenFaaS to automate Kubernetes. Since we are building on top of Kubernetes, we also get portability, scaling, and self-healing infrastructure out of the box.

## Conceptual architecture

![Conceptual architecture](/images/crud.png)

We'll use the CRUD API for fleet management of a number of Raspberry Pis distributed around the globe. Postgresql will provide the backing datastore, OpenFaaS will provide scale-out compute on Kubernetes, and our code will be written in Node.js using an OpenFaaS template and the `pg` library from npm.

## The guide

These are the component parts of the guide:

* [Kubernetes cluster](https://kubernetes.io/) - "Production-grade container orchestration"

  Create it either on Civo using managed k3s (#KUBE100), by using k3sup to provision k3s onto one or more instances, or by using any other Kubernetes service.

* [Postgresql](https://www.postgresql.org) - "The World's Most Advanced Database"

  We can install Postgresql using its helm chart, or use a separate VM or a managed service.

* [OpenFaaS](https://www.openfaas.com) - "Serverless Made Simple for Kubernetes"

  Once installed, OpenFaaS makes application development on Kubernetes simple. You can get an endpoint into production in a couple of minutes.

* [Docker](https://docker.com/) - container runtime

  You will also need Docker on your computer to build Docker images for use with OpenFaaS.

* [Node.js](https://nodejs.org) - server-side JavaScript

  We'll build the CRUD API using the OpenFaaS `node12` template, which uses Node.js. Node.js is fast and well suited to I/O and networking.
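
Before we start on the infrastructure, here is roughly the call each Raspberry Pi will end up making against the finished API: an HTTP POST carrying its uptime and temperature, authenticated with a per-device key. The snippet is purely illustrative (the real client we build later is written in Python) and assumes Node.js 18+ so that `fetch` is available globally; the URL, key, and readings are placeholder values that match the examples later in the guide.

```js
// Illustrative only: the shape of a device's "call home" request.
async function reportStatus() {
  const res = await fetch("http://127.0.0.1:8080/function/device-status", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Device-Key": "4dcf92826314c9c3308b643fa0e579b87f7afe37", // per-device key (placeholder)
      "X-Device-ID": "1",                                          // device row ID (placeholder)
    },
    body: JSON.stringify({ uptime: 2294567, temperature: 38459 }),
  });
  console.log(res.status, await res.json());
}

reportStatus().catch(console.error);
```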

### 1) Get your Kubernetes cluster

#### For KUBE100 users

If you're part of the [#KUBE100 program](https://www.civo.com/blog/kube100-is-here), then create a new cluster in your Civo dashboard and configure your `kubectl` to point at the new cluster.

> For a full walk-through of Civo k3s, see my blog post - [The World's First Managed k3s](https://blog.alexellis.io/the-worlds-first-managed-k3s/)

#### For everyone else

Alternatively, you can use any other Kubernetes cluster, or if you are on Civo already but not in #KUBE100, create a new Small or Medium Instance and use [k3sup ('ketchup')](https://k3sup.dev) to install k3s.

Before going any further, check that you are pointing at the correct cluster:

```sh
kubectl config get-contexts

kubectl get node -o wide
```

### 2) Install Postgresql

If you're a KUBE100 user, then you can add Postgresql as an application in your Civo dashboard.

For everyone else, run the following.

* Install arkade, which can install both Postgresql and OpenFaaS

```sh
curl -sLfS https://dl.get-arkade.dev | sudo sh
```

> If you prefer, you can also run this command without `sh`, and then move the arkade binary into your PATH afterwards.

* Install Postgresql

```sh
arkade app install postgresql
```

You should see output like this:

```
=======================================================================
=                  postgresql has been installed.                    =
=======================================================================

PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:

postgresql.default.svc.cluster.local - Read/Write connection

To get the password for "postgres" run:

export POSTGRES_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)

To connect to your database run the following command:

kubectl run postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host postgresql -U postgres -d postgres -p 5432

To connect to your database from outside the cluster execute the following commands:

kubectl port-forward --namespace default svc/postgresql 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432

# Find out more at: https://github.com/helm/charts/tree/master/stable/postgresql
```

Note the output carefully, it shows you the following, which you will need in later steps:

* how to get your password
* how to connect with the Postgresql administrative CLI, `psql`

### 3) Install OpenFaaS

* Install the OpenFaaS CLI

```sh
curl -sLSf https://cli.openfaas.com | sudo sh
```

If you're a KUBE100 user, then you can add OpenFaaS as an application in your Civo dashboard.

For everyone else, run the following.

* Install OpenFaaS using arkade

```sh
arkade app install openfaas
```

Again, note the output, because it shows you how to connect and fetch your password.

```
=======================================================================
=                  OpenFaaS has been installed.                      =
=======================================================================

# Get the faas-cli
curl -SLsf https://cli.openfaas.com | sudo sh

# Forward the gateway to your machine
kubectl rollout status -n openfaas deploy/gateway
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# If basic auth is enabled, you can now log into your gateway:
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)
echo -n $PASSWORD | faas-cli login --username admin --password-stdin

faas-cli store deploy figlet
faas-cli list

# For Raspberry Pi
faas-cli store list \
 --platform armhf

faas-cli store deploy figlet \
 --platform armhf

# Find out more at:
# https://github.com/openfaas/faas
```

### 4) Install Docker

If you do not have Docker on your local machine, then install it.

* [Docker Homepage](https://www.docker.com)

Docker images provide a secure, repeatable, and portable way to build and ship code between environments.

* Sign up for a [Docker Hub account](https://hub.docker.com)

The Docker Hub is an easy way to share our Docker images between our laptop and our cluster.

Run `docker login` and use your new username and password.

Now set your Docker username for use with OpenFaaS:

```sh
export OPENFAAS_PREFIX="alexellis2"
```

### 5) Node.js

Now let's build the CRUD application that can store the status, uptime, and temperature of any number of Raspberry Pis.

* Install Node.js 12 on your local computer

Get Node.js [here](https://nodejs.org/en/)

* Create a function using Node.js:

```sh
mkdir -p $HOME/crud/
cd $HOME/crud/

faas-cli new --lang node12 device-status

mv device-status.yml stack.yml
```

This creates three files for us:

```sh
├── device-status
│   ├── handler.js
│   └── package.json
└── stack.yml
```

* Add the `pg` npm module

Add the Postgresql library for Node.js (`pg`). We'll enable pooling of connections, so that each HTTP request to our CRUD API does not have to create a new connection.

Here's what the Node.js library says about pooling:

> The PostgreSQL server can only handle a limited number of clients at a time. Depending on the available memory of your PostgreSQL server you may even crash the server if you connect an unbounded number of clients.

> PostgreSQL can only process one query at a time on a single connected client in a first-in first-out manner. If your multi-tenant web application is using only a single connected client all queries among all simultaneous requests will be pipelined and executed serially, one after the other.

The good news is that we can access pooling easily, and you'll see that in action below.

```
cd $HOME/crud/device-status/
npm install --save pg
```

This step will update our `package.json` file.
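
To make the pooling idea concrete, here is a minimal standalone sketch of how the `pg` library's `Pool` is typically used: the pool is created once and reused, and every query borrows a connection from it and hands it back when done. The connection settings below are placeholders — in our function they will come from a Kubernetes secret and environment variables in the next steps.

```js
const { Pool } = require("pg");

// Create the pool once, at module load time, and reuse it for every request.
const pool = new Pool({
  host: "127.0.0.1",     // placeholder - the function reads this from a secret
  user: "postgres",
  password: "example",   // placeholder
  database: "postgres",
  port: 5432,
  max: 10,               // upper bound on concurrent connections held by this process
});

async function main() {
  // pool.query() checks a client out of the pool and returns it automatically.
  const { rows } = await pool.query("SELECT now() AS server_time");
  console.log(rows[0].server_time);
  await pool.end();
}

main().catch(console.error);
```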

### 6) Build the database schema

We'll have two tables: one will hold the devices and the key that each device uses to access the CRUD API, and the other will hold the status data.

```sql
-- Each device
CREATE TABLE device (
    device_id INT GENERATED ALWAYS AS IDENTITY,
    device_key text NOT NULL,
    device_name text NOT NULL,
    device_desc text NOT NULL,
    device_location point NOT NULL,
    created_at timestamp with time zone default now()
);

-- Set the primary key for device
ALTER TABLE device ADD CONSTRAINT device_id_key PRIMARY KEY(device_id);

-- Status of the device
CREATE TABLE device_status (
    status_id INT GENERATED ALWAYS AS IDENTITY,
    device_id integer NOT NULL references device(device_id),
    uptime bigint NOT NULL,
    temperature_c int NOT NULL,
    created_at timestamp with time zone default now()
);

-- Set the primary key for device_status
ALTER TABLE device_status ADD CONSTRAINT device_status_key PRIMARY KEY(status_id);
```

Create both tables using a `psql` prompt - you took a note of how to open one in the earlier step when you installed Postgresql with `arkade`. The command given there runs the `psql` CLI in a Kubernetes container.

Simply paste the text into the prompt.

* Provision a device

We'll provision our devices manually and use the CRUD API for the devices to call home.

Create an API key using bash:

```sh
export KEY=$(head -c 16 /dev/urandom | shasum | cut -d" " -f "1")
echo $KEY
```

Now insert the device using your new key (the value below is just an example):

```sql
INSERT INTO device (device_key, device_name, device_desc, device_location) values
('4dcf92826314c9c3308b643fa0e579b87f7afe37', 'k4s-1', 'OpenFaaS ARM build machine', POINT(35,-52.3));
```

At this point you'll get a new row. You'll need two parts for your device: its `device_id`, which identifies it, and its `device_key`, which it will use to access the CRUD API.

```sql
SELECT device_id, device_key FROM device WHERE device_name = 'k4s-1';
```

Keep track of this for use with the CRUD API later:

```sql
 device_id |                device_key
-----------+------------------------------------------
         1 | 4dcf92826314c9c3308b643fa0e579b87f7afe37
(1 row)
```

### 7) Build the CREATE operation

The C in CRUD stands for CREATE, and we will use it to insert a row into the `device_status` table. The Raspberry Pi will run some code on a periodic basis using cron, and access our API that way.

The connection pool code in the `pg` library needs several pieces of data to connect our function to the database:

* DB host (confidential)
* DB user (confidential)
* DB password (confidential)
* DB name (non-confidential)
* DB port (non-confidential)

All confidential data will be stored in a Kubernetes secret; the non-confidential data will be set in `stack.yml` using environment variables.
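
Inside the function, OpenFaaS mounts each secret as a plain file under `/var/openfaas/secrets/<name>`, which you will see the handler read below. If you want to guard against a stray trailing newline creeping into a secret's value, a small helper like this can be used — an optional sketch, not part of the handler in this guide:

```js
const fs = require("fs");

// Read an OpenFaaS secret mounted at /var/openfaas/secrets/<name>,
// trimming any trailing whitespace or newline from the value.
function readSecret(name) {
  return fs.readFileSync(`/var/openfaas/secrets/${name}`, "utf-8").trim();
}
```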

* Create a secret named `db`

Remember that the instructions for getting the Postgresql password were given in the helm output above.

```sh
export USER="postgres"
export PASS=""
export HOST="postgresql.default.svc.cluster.local"

kubectl create secret generic -n openfaas-fn db \
  --from-literal db-username="$USER" \
  --from-literal db-password="$PASS" \
  --from-literal db-host="$HOST"

secret/db created
```

* Attach the secret and set the environment variables in `stack.yml`

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080

functions:
  device-status:
    lang: node12
    handler: ./device-status
    image: alexellis2/device-status:latest
    environment:
      db_port: 5432
      db_name: postgres
    secrets:
      - db
```

* Edit `handler.js` and connect to the database

```js
"use strict"

const { Client } = require('pg')
const Pool = require('pg').Pool
const fs = require('fs')
var pool;

module.exports = async (event, context) => {
  // Create the connection pool on the first invocation and reuse it afterwards
  if(!pool) {
    const poolConf = {
      user: fs.readFileSync("/var/openfaas/secrets/db-username", "utf-8"),
      host: fs.readFileSync("/var/openfaas/secrets/db-host", "utf-8"),
      database: process.env["db_name"],
      password: fs.readFileSync("/var/openfaas/secrets/db-password", "utf-8"),
      port: process.env["db_port"],
    };
    pool = new Pool(poolConf)
    await pool.connect()
  }

  let deviceKey = event.headers["x-device-key"]
  let deviceID = event.headers["x-device-id"]

  if(deviceKey && deviceID) {
    // Check that the device exists and that the key matches before inserting
    const { rows } = await pool.query("SELECT device_id, device_key FROM device WHERE device_id = $1 and device_key = $2", [deviceID, deviceKey]);
    if(rows.length) {
      await insertStatus(event, pool);
      return context.status(200).succeed({"status": "OK"});
    } else {
      return context.status(401).fail({"status": "invalid authorization or device"});
    }
  }

  return context.status(200).succeed({"status": "No action"});
}

async function insertStatus(event, pool) {
  let id = event.headers["x-device-id"];
  let uptime = event.body.uptime;
  let temperature = event.body.temperature;

  try {
    let res = await pool.query('INSERT INTO device_status (device_id, uptime, temperature_c) values($1, $2, $3)',
      [id, uptime, temperature]);
    console.log(res)
  } catch(e) {
    console.error(e)
  }
}
```

The code checks that the `X-Device-Key` header contains the correct `device_key` for the device given in the `X-Device-ID` header, and only then inserts the status row. Note that the queries use parameterized placeholders (`$1`, `$2`), so values taken from the request are never concatenated into the SQL string.

Deploy the code with `faas-cli up`:

```
faas-cli up
```

* Now test it with `curl`:

```
curl 127.0.0.1:8080/function/device-status \
  --data '{"uptime": 2294567, "temperature": 38459}' \
  -H "X-Device-Key: 4dcf92826314c9c3308b643fa0e579b87f7afe37" \
  -H "X-Device-ID: 1" \
  -H "Content-Type: application/json"
```

Use `psql` to see if the row was inserted:

```sql
SELECT * from device_status;

 status_id | device_id | uptime  | temperature_c |          created_at
-----------+-----------+---------+---------------+-------------------------------
         1 |         1 | 2294567 |         38459 | 2019-12-11 11:56:04.380975+00
(1 row)
```

If you ran into any errors then you can use a simple debugging approach:

* `faas-cli logs device-status` - this shows anything printed to stdout or stderr
* Add a log statement with `console.log("message")` and run `faas-cli up` again

### 8) Build the RETRIEVE operation

Let's allow users or devices to select data when a valid `device_key` and `device_id` combination is given.

```js
"use strict"

const { Client } = require('pg')
const Pool = require('pg').Pool
const fs = require('fs')

const pool = initPool()

module.exports = async (event, context) => {
  // Borrow a client from the pool for the duration of this request
  let client = await pool.connect()

  let deviceKey = event.headers["x-device-key"]
  let deviceID = event.headers["x-device-id"]

  if(deviceKey && deviceID) {
    const { rows } = await client.query("SELECT device_id, device_key FROM device WHERE device_id = $1 and device_key = $2", [deviceID, deviceKey]);
    if(rows.length) {

      if(event.method == "POST") {
        await insertStatus(deviceID, event, client);
        client.release()
        return context.status(200).succeed({"status": "OK"});
      } else if(event.method == "GET") {
        let rows = await getStatus(deviceID, event, client);
        client.release()
        return context.status(200).succeed({"status": "OK", "data": rows});
      }
      client.release()
      return context.status(405).fail({"status": "method not allowed"});
    } else {
      client.release()
      return context.status(401).fail({"status": "invalid authorization or device"});
    }
  }

  client.release()
  return context.status(200).succeed({"status": "No action"});
}

async function insertStatus(deviceID, event, client) {
  let uptime = event.body.uptime;
  let temperature = event.body.temperature;

  try {
    let res = await client.query('INSERT INTO device_status (device_id, uptime, temperature_c) values($1, $2, $3)',
      [deviceID, uptime, temperature]);
    console.log(res)
  } catch(e) {
    console.error(e)
  }
}

async function getStatus(deviceID, event, client) {
  let {rows} = await client.query('SELECT * FROM device_status WHERE device_id = $1',
    [deviceID]);
  return rows
}

function initPool() {
  return new Pool({
    user: fs.readFileSync("/var/openfaas/secrets/db-username", "utf-8"),
    host: fs.readFileSync("/var/openfaas/secrets/db-host", "utf-8"),
    database: process.env["db_name"],
    password: fs.readFileSync("/var/openfaas/secrets/db-password", "utf-8"),
    port: process.env["db_port"],
  });
}
```
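
If a device reports every 15 minutes, the `device_status` table will grow steadily, so you may eventually want to order and cap what the RETRIEVE path returns. Here is an optional variation of `getStatus` — not part of the handler above, and the 100-row limit is an arbitrary choice:

```js
// Optional variation: newest readings first, capped at 100 rows.
async function getStatus(deviceID, event, client) {
  const { rows } = await client.query(
    'SELECT * FROM device_status WHERE device_id = $1 ORDER BY created_at DESC LIMIT 100',
    [deviceID]);
  return rows;
}
```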

Deploy the code with `faas-cli up`:

```
faas-cli up
```

* Now test it with `curl`:

```
curl -s 127.0.0.1:8080/function/device-status \
  -H "X-Device-Key: 4dcf92826314c9c3308b643fa0e579b87f7afe37" \
  -H "X-Device-ID: 1" \
  -H "Content-Type: application/json"
```

If you have `jq` installed, you can pipe the output through it to format the response. Here's an example:

```
{
  "status": "OK",
  "data": [
    {
      "status_id": 2,
      "device_id": 1,
      "uptime": "2294567",
      "temperature_c": 38459,
      "created_at": "2019-12-11T11:56:04.380Z"
    }
  ]
}
```

The response body is JSON and mirrors the underlying table schema. This can be "prettified" by altering the SELECT statement, or by mapping each row to new field names in code.

### 9) Build a client

We can now create a client for our CRUD API using any programming language, or even by using `curl` as above. Python is the best-supported language in the Raspberry Pi ecosystem, so let's use it to create a simple HTTP client.

* Log into your [Raspberry Pi](https://www.raspberrypi.org) running [Raspbian](https://www.raspberrypi.org/downloads/raspbian/)

> Note: you could also use a regular computer for this step, but you must remove the temperature reading, since that will only work on a Raspberry Pi.

* Install Python 3 and `pip`

```sh
sudo apt update -qy
sudo apt install -qy python3 python3-pip
```

* Add the Python `requests` module, which is used to make HTTP requests

```sh
sudo pip3 install requests
```

* Create a folder for the code

```sh
mkdir -p $HOME/client/
```

* Create a `client/requirements.txt` file

```
requests
```

* Create `client/app.py`

```python
import requests
import os

ip = os.getenv("HOST_IP")

port = "31112"

deviceKey = os.getenv("DEVICE_KEY")
deviceID = os.getenv("DEVICE_ID")

uptimeSecs = 0
with open("/proc/uptime", "r") as f:
    data = f.read()
    uptimeSecs = int(data[:str.find(data, ".")])

tempC = 0
with open("/sys/class/thermal/thermal_zone0/temp", "r") as f:
    data = f.read()
    tempC = int(data)

payload = {"temperature": tempC, "uptime": uptimeSecs}
headers = {"X-Device-Key": deviceKey, "X-Device-ID": deviceID, "Content-Type": "application/json"}

r = requests.post("http://{}:{}/function/device-status".format(ip, port), headers=headers, json=payload)

print("Temp: {}\tUptime: {} mins\tStatus: {}".format(tempC/1000, uptimeSecs/60, r.status_code))
```

* Create a script to run the client at `client/run_client.sh`

```
#!/bin/bash

export HOST_IP=91.211.152.145
export DEVICE_KEY=4dcf92826314c9c3308b643fa0e579b87f7afe37
export DEVICE_ID=1

cd /home/pi/client
python3 ./app.py
```

Set `HOST_IP` to the public IP address of your cluster or gateway; port `31112` is the default NodePort published by the OpenFaaS gateway.

* Set up a schedule for the client using `cron`

```sh
chmod +x ./client/run_client.sh
sudo systemctl enable cron
sudo systemctl start cron

crontab -e
```

Enter the following to create a reading every 15 minutes:

```
*/15 * * * * /home/pi/client/run_client.sh
```

Test the script manually, then check the database table again:

```
/home/pi/client/run_client.sh
```

Here are three consecutive runs I made using the command prompt:

```sql
postgres=# SELECT * from device_status;
 status_id | device_id | uptime  | temperature_c |          created_at
-----------+-----------+---------+---------------+-------------------------------
         8 |         1 | 2297470 |         38459 | 2019-12-11 12:28:16.28936+00
         9 |         1 | 2297472 |         38946 | 2019-12-11 12:28:18.809582+00
        10 |         1 | 2297475 |         38459 | 2019-12-11 12:28:21.321489+00
(3 rows)
```

## Now, over to you

Let's stop there and look at what we've built:

* An architecture on Kubernetes with a database and scalable compute
* A CRUD API with CREATE and RETRIEVE implemented in Node.js
* Security at a per-device level using symmetric keys
* A working client that can be used today on any Raspberry Pi

It's over to you now to continue learning and building out the CRUD solution with OpenFaaS.

### Keep learning

From here, you should be able to build the DELETE and UPDATE operations on your own, if you think they suit this use-case. Perhaps when a device is de-provisioned, it should be able to delete its own rows?

The `device_key` enables access to read the records for a single device, but should there be a set of administrative keys, so that users can query the `device_status` records for any device?

What else would you like to record from your fleet of Raspberry Pis? Perhaps you could add the external IP address of each host to the table and update the Python client to collect it? There are several free websites that can provide this data.

--------------------------------------------------------------------------------
/images/crud.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/alexellis/crud-postgresql-nodejs-openfaas/ff8d4e422ea680448cd7a31f722643ed0b030e16/images/crud.png
--------------------------------------------------------------------------------