├── .gitignore ├── .gitmodules ├── LICENSE ├── README-20181213.md ├── README-20190131.md ├── README-20190402.md ├── README.md ├── adobe-xd ├── README.md ├── intro-adobe-xd.pptx ├── ui-kits │ ├── apple-ui-elements-ipad.xd │ ├── apple-ui-elements-iphone8.xd │ ├── apple-ui-elements-iphoneX.xd │ └── google-stickersheet-components.xd └── xd-tutorial.xd ├── cypress ├── README.md ├── koa │ ├── .gitignore │ ├── Dockerfile │ ├── app.js │ ├── docker-compose.yml │ ├── modules │ │ ├── accommodations │ │ │ ├── accommodationsController.js │ │ │ └── accommodationsSchema.js │ │ ├── auth │ │ │ ├── authController.js │ │ │ └── helpers.js │ │ └── users │ │ │ ├── usersController.js │ │ │ └── usersSchema.js │ ├── package.json │ ├── seed.js │ └── yarn.lock └── react │ ├── .eslintignore │ ├── .eslintrc.json │ ├── .gitignore │ ├── .prettierrc │ ├── .vscode │ └── components.code-snippets │ ├── README.md │ ├── cypress.json │ ├── cypress │ ├── _integration │ │ └── examples │ │ │ ├── actions.spec.js │ │ │ ├── aliasing.spec.js │ │ │ ├── assertions.spec.js │ │ │ ├── connectors.spec.js │ │ │ ├── cookies.spec.js │ │ │ ├── cypress_api.spec.js │ │ │ ├── files.spec.js │ │ │ ├── local_storage.spec.js │ │ │ ├── location.spec.js │ │ │ ├── misc.spec.js │ │ │ ├── navigation.spec.js │ │ │ ├── network_requests.spec.js │ │ │ ├── querying.spec.js │ │ │ ├── spies_stubs_clocks.spec.js │ │ │ ├── traversal.spec.js │ │ │ ├── utilities.spec.js │ │ │ ├── viewport.spec.js │ │ │ ├── waiting.spec.js │ │ │ └── window.spec.js │ ├── fixtures │ │ └── example.json │ ├── integration │ │ ├── appbar.spec.js │ │ └── dashboard.spec.js │ ├── plugins │ │ └── index.js │ └── support │ │ ├── commands.js │ │ ├── helpers.js │ │ └── index.js │ ├── package-lock.json │ ├── package.json │ ├── public │ ├── favicon.ico │ ├── index.html │ └── manifest.json │ ├── src │ ├── App.js │ ├── App.test.js │ ├── common │ │ ├── auth.js │ │ ├── context.js │ │ └── fetch.js │ ├── components │ │ ├── AccommodationCard │ │ │ └── AccommodationCard.js │ │ ├── 
Accommodations │ │ │ └── Accommodations.js │ │ ├── AccommodationsDetail │ │ │ └── AccommodationsDetail.js │ │ ├── Appbar │ │ │ └── Appbar.js │ │ ├── CreateAccommodation │ │ │ └── CreateAccommodation.js │ │ ├── Error │ │ │ └── Page404.js │ │ ├── Frame │ │ │ └── Frame.js │ │ └── Login │ │ │ ├── LoginDialog.js │ │ │ └── Register.js │ ├── configureStore.js │ ├── index.css │ ├── index.js │ ├── resources │ │ ├── CaretLeftIcon.js │ │ ├── CaretRightIcon.js │ │ └── HouseIcon.js │ └── store │ │ └── accommodations │ │ ├── actions.js │ │ └── reducer.js │ └── yarn.lock ├── expose-localclient-ngrok-localtunnel └── README.md ├── jsonata-query-and-transform-json-documents ├── Dockerfile ├── README.md ├── countries-module.js ├── explore-countries.js ├── explore-oracleopenworld-catalog.js ├── json-server.js ├── oow2018-sessions-catalog.json ├── package-lock.json └── package.json ├── kafka-connect-workshop ├── Introduction Kafka Connect.pptx ├── LICENSE ├── README.md ├── Workshop Kafka Connect.docx └── files │ ├── Dockerfile │ ├── docker-compose.yml │ ├── elasticsearchsink.json │ ├── filesource.json │ ├── load.ps1 │ ├── load.sh │ ├── loadelasticsearchsink.ps1 │ ├── loadelasticsearchsink.sh │ ├── loadfilesource.ps1 │ ├── loadfilesource.sh │ ├── loadoutputasindex.ps1 │ ├── loadoutputasindex.sh │ ├── outputas.txt │ ├── outputasindex.json │ ├── readme.txt │ ├── restart.ps1 │ └── restart.sh ├── katacoda ├── README.md ├── assets │ └── docker-compose.yml ├── docker-compose.yml ├── finish.md ├── index.json ├── intro.md ├── step1.md ├── step2.md ├── step3.md └── step4.md ├── kibana └── introductie kibana.md ├── lastPass-password-manager └── README.md ├── linux-and-docker-host-on-windows-machine ├── README.md └── Vagrantfile ├── neo4j-graphdatabase ├── README.md ├── neo4j-node.js ├── package-lock.json └── package.json ├── powershell └── Introducing PowerShell.md ├── quick-sql └── README.md ├── ssh-tunnels ├── Basics of SSH tunnels.docx ├── README.md └── SSHTunnels.pptx ├── traefik ├── 
README.md ├── docker-backend │ └── docker-compose.yml └── search │ ├── docker-compose.yml │ └── traefik.toml └── zsh-and-ohmyzsh └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Logs 2 | logs 3 | *.log 4 | npm-debug.log* 5 | yarn-debug.log* 6 | yarn-error.log* 7 | 8 | # Runtime data 9 | pids 10 | *.pid 11 | *.seed 12 | *.pid.lock 13 | 14 | # Directory for instrumented libs generated by jscoverage/JSCover 15 | lib-cov 16 | 17 | # Coverage directory used by tools like istanbul 18 | coverage 19 | 20 | # nyc test coverage 21 | .nyc_output 22 | 23 | # Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files) 24 | .grunt 25 | 26 | # Bower dependency directory (https://bower.io/) 27 | bower_components 28 | 29 | # node-waf configuration 30 | .lock-wscript 31 | 32 | # Compiled binary addons (https://nodejs.org/api/addons.html) 33 | build/Release 34 | 35 | # Dependency directories 36 | node_modules/ 37 | jspm_packages/ 38 | 39 | # TypeScript v1 declaration files 40 | typings/ 41 | 42 | # Optional npm cache directory 43 | .npm 44 | 45 | # Optional eslint cache 46 | .eslintcache 47 | 48 | # Optional REPL history 49 | .node_repl_history 50 | 51 | # Output of 'npm pack' 52 | *.tgz 53 | 54 | # Yarn Integrity file 55 | .yarn-integrity 56 | 57 | # dotenv environment variables file 58 | .env 59 | 60 | # next.js build output 61 | .next 62 | -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "code-cafe-20190520"] 2 | path = code-cafe-20190520 3 | url = https://github.com/AMIS-Services/code-cafe-20190520 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 AMIS Services BV 4 | 5 | Permission is 
hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README-20181213.md: -------------------------------------------------------------------------------- 1 | # Code Cafe Thursday 13th December 2018 2 | The Code Café is a series of low key, relaxed meetups where we explore various technologies, tools, programming aids and other fun stuff in a relaxed setting. This repository contains artifacts for the Code Café sessions. 3 | 4 | Note: in the Code Café, we can work with various technologies - browser based or local (typically in containers) - and across the stack: UI, (micro)service, database, code editing and programming utilities and general productivity boosters. 5 | 6 | Code Café sessions will usually not work with slides and presentations. Items are introduced with a brief introduction and a demonstration - followed by trying out! 
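One of the items on the menu below is JSONata, an expression language for querying and transforming JSON documents. As a point of reference for what such expressions do, here is a plain-Node sketch of two typical JSONata-style queries written out in ordinary JavaScript; the sample document and the JSONata expressions shown in the comments are illustrative only, not taken from the repository's jsonata module:

```javascript
// Plain-JavaScript equivalents of two typical JSONata queries.
// The catalog document below is made up for illustration.
const catalog = {
  sessions: [
    { title: "Intro to Kafka Connect", room: "A", minutes: 45 },
    { title: "Cypress hands-on", room: "B", minutes: 90 },
    { title: "Neo4j basics", room: "A", minutes: 60 }
  ]
};

// JSONata: sessions[room="A"].title   (a path with a predicate)
const titlesInRoomA = catalog.sessions
  .filter(s => s.room === "A")
  .map(s => s.title);

// JSONata: $sum(sessions.minutes)     (an aggregate over a path)
const totalMinutes = catalog.sessions.reduce((sum, s) => sum + s.minutes, 0);

console.log(titlesInRoomA); // [ 'Intro to Kafka Connect', 'Neo4j basics' ]
console.log(totalMinutes); // 195
```

With the actual JSONata library, each of these is a one-line expression instead of a filter/map or reduce chain - which is the point of the session.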
7 | 8 | ## On the Menu for the first Code Café 9 | 10 | * Vagrant (& VirtualBox) - tool for efficient management of VMs (and a local Docker Engine); especially useful for running Docker on Windows laptops: https://github.com/AMIS-Services/code-cafe/tree/master/linux-and-docker-host-on-windows-machine 11 | * JSONata - XPath & XSLT/XQuery-like expression language for retrieving data from and transforming JSON documents - https://github.com/AMIS-Services/code-cafe/tree/master/jsonata-query-and-transform-json-documents 12 | * ngrok (& localtunnel) - for exposing locally running services on the internet - https://github.com/AMIS-Services/code-cafe/tree/master/expose-localclient-ngrok-localtunnel 13 | * QuickSQL - free cloud service for rapid generation of SQL DDL and DML scripts for tables, generated test data, constraints, views, triggers and APIs: https://github.com/AMIS-Services/code-cafe/blob/master/quick-sql/README.md 14 | * Neo4j - open source Graph Database: https://github.com/AMIS-Services/code-cafe/tree/master/neo4j-graphdatabase 15 | -------------------------------------------------------------------------------- /README-20190131.md: -------------------------------------------------------------------------------- 1 | # Code Cafe Thursday 31st January 2019 2 | The Code Café is a series of low-key, relaxed meetups where we explore various technologies, tools, programming aids and other fun stuff in a relaxed setting. This repository contains artifacts for the Code Café sessions. 3 | 4 | Note: in the Code Café, we can work with various technologies - browser based or local (typically in containers) - and across the stack: UI, (micro)service, database, code editing and programming utilities and general productivity boosters. 5 | 6 | Code Café sessions will usually not work with slides and presentations. Items are introduced with a brief introduction and a demonstration - followed by trying out!
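The menu below includes Traefik, which is driven by a TOML configuration file (the repository's traefik directory contains a traefik.toml for the hands-on part). As a rough idea of the shape of such a file, here is a minimal, hedged sketch for the Traefik 1.x that was current at the time; the entry point and Docker provider settings are illustrative and not the repo's actual file:

```toml
# Illustrative Traefik 1.x configuration sketch (not the repo's traefik.toml).
# Traefik listens on port 80 and discovers backends from the Docker daemon.
defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":80"

# Docker provider: watch the Docker socket and route to running containers.
[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
```

With this in place, containers started on the same Docker host are picked up automatically and routed by Traefik without restarting the proxy - the behavior the session demonstrates.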
7 | 8 | ## On the Menu for this Code Café session 9 | 10 | * [Cypress Testing](https://github.com/AMIS-Services/code-cafe/tree/master/cypress) (Bram Kaashoek) - automated unit testing of browser applications (Selenium++): https://www.cypress.io/ 11 | * [ZSH and OhMyZSH](https://github.com/AMIS-Services/code-cafe/tree/master/zsh-and-ohmyzsh) (Nathan Breuring) - https://ohmyz.sh/ Linux Shell 12 | * [PowerShell](https://github.com/AMIS-Services/code-cafe/tree/master/powershell) (Robert van den Nieuwendijk) - task automation and configuration management framework from Microsoft, consisting of a command-line shell and associated scripting language - https://docs.microsoft.com/en-us/powershell/?view=powershell-6 13 | * [Traefik](https://github.com/AMIS-Services/code-cafe/tree/master/traefik) (Lucas Jellema) - Load Balancer and Proxy ("Cloud Native Edge Router") https://traefik.io/ 14 | * [Katacoda](https://github.com/AMIS-Services/code-cafe/tree/master/katacoda) (Lucas Jellema) - Browser based Playgrounds & Tutorials; also for creating your own tutorials - https://www.katacoda.com/ 15 | 16 | 17 | -------------------------------------------------------------------------------- /README-20190402.md: -------------------------------------------------------------------------------- 1 | # Code Cafe Tuesday 2nd April 2019 2 | The Code Café is a series of low key, relaxed meetups where we explore various technologies, tools, programming aids and other fun stuff in a relaxed setting. This repository contains artifacts for the Code Café sessions. 3 | 4 | Note: in the Code Café, we can work with various technologies - browser based or local (typically in containers) - and across the stack: UI, (micro)service, database, code editing and programming utilities and general productivity boosters. 5 | 6 | Code Café sessions will usually not work with slides and presentations. Items are introduced with a brief introduction and a demonstration - followed by trying out! 
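The menu below includes Kafka Connect, where each connector is defined by a small JSON document POSTed to the Connect REST API (the workshop's files directory contains real examples such as filesource.json and elasticsearchsink.json). As a hedged sketch of the general shape - the connector name, file path and topic here are illustrative, not the workshop's actual settings - a file source connector looks like this:

```json
{
  "name": "file-source-demo",
  "config": {
    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
    "tasks.max": "1",
    "file": "/tmp/input.txt",
    "topic": "demo-topic"
  }
}
```

Loading such a definition is a single REST call against the Connect worker, e.g. `curl -X POST -H "Content-Type: application/json" --data @filesource.json http://localhost:8083/connectors` (8083 is Kafka Connect's default REST port).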
7 | 8 | ## On the Menu for this Code Café session 9 | 10 | * Node-RED - Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways - (Ron Hendriks) - https://nodered.org/ 11 | * [Adobe XD](adobe-xd) - design tool for web and mobile apps (Robert van Mölken) - https://www.adobe.com/products/xd.html 12 | * [LastPass](lastPass-password-manager) – auto-pilot for all your passwords (Kjettil Hennis) https://www.lastpass.com/ 13 | * [Kafka Connect](kafka-connect-workshop) – adapters and connectors between Kafka Topics and sources and sinks of events (Maarten Smeets) - https://kafka.apache.org/documentation/#connect 14 | * Kibana Dashboards – (Bram de Beer) 15 | 16 | ## LastPass 17 | There are several ways to manage passwords: memorize them, write them down with pen and paper, or store them electronically. With the LastPass Password Manager you store them electronically. In this workshop we will look at what LastPass is and which possibilities it offers you. 18 | 19 | 20 | ## Node-RED 21 | Do you have an IoT project and want to build a quick POC? Then Node-RED is the tool that can help you! It is an open-source, lightweight environment with a graphical development environment. Based on Node.js, it offers facilities for sending, receiving and transforming messages. Beyond that, things like UIs, hardware monitoring and databases are no problem either. In this session we walk through the various capabilities of the package. 22 | 23 | 24 | ## Kafka Connect 25 | Kafka is a distributed streaming platform that can be used (for example) as an enterprise messaging solution. Kafka Connect is a framework for connecting Kafka to external systems such as databases, key-value stores and file systems. Connectors implement the Kafka Connect framework and many of them are available (see https://www.confluent.io/hub/).
During this workshop you will get acquainted with using and monitoring these integration components through an IoT example. #integration #kafka #IoT 26 | 27 | ## Adobe XD 28 | Adobe XD is a free vector-based tool for designing and prototyping user experiences for web and mobile apps. The software is available for macOS and Windows and supports vector design and wireframing of websites. With this tool you can easily create interactive, clickable prototypes. There are also many plugins, such as one for React to generate code. During this short workshop you will get to know the tool and we will build a simple web app. A free Adobe account is required. 29 | 30 | ## Kibana Dashboards 31 | The larger an automation project becomes, the more data it typically processes. Kibana offers a way to make trends and anomalies in a large stream of information visible, creating clarity for developers, operations and business about the state and health of a project. During this workshop we will see how a dizzying amount of data can be made comprehensible in a relatively simple way. 32 | 33 | 34 | 35 | 36 | -------------------------------------------------------------------------------- /adobe-xd/README.md: -------------------------------------------------------------------------------- 1 | # Adobe XD 2 | 3 | When you start building a new front-end application, it really helps communication with your end customer to show them at an early stage what the application is going to look like and how the flow within the app feels. 4 | 5 | In many cases a UI/UX designer creates drawings or wireframes. Adobe XD is the next-level application for creating app designs and prototypes. 6 | 7 | ## What is XD? 8 | 9 | Adobe Experience Design (XD) is a cross-platform app for designing and prototyping websites, mobile apps, etc. 10 | 11 | ### Who is it for?
12 | 13 | XD is a single app for UX/UI and digital designers to design and create wireframes and app proposals. 14 | 15 | ### What can you do with XD? 16 | 17 | - Design and prototype in a single desktop app. 18 | - Preview your design live on mobile devices. 19 | - XD’s minimal interface is focused on UX/UI design, so you can work faster. 20 | - A fast app with small files so projects can be easily edited. 21 | - Object-oriented (so you can quickly navigate, move, or adjust elements). 22 | - Vector-based, so it’s easy to draw icons, etc. 23 | 24 | ## Getting started 25 | 26 | ### Requirements 27 | 28 | You will need: 29 | 30 | - a free Adobe Creative Cloud account 31 | - the Creative Cloud desktop app 32 | - Adobe XD CC installed: https://www.adobe.com/nl/products/xd.html 33 | 34 | For the demo you will also need: 35 | 36 | 1. the XD tutorial file 37 | 2. (optional) a UI kit for iOS or Google Material 38 | 39 | ### Try out XD: 40 | 41 | - open Adobe XD 42 | - for option 1: start the main tutorial or open the file `xd-tutorial.xd` 43 | - for option 2: open one of the UI kit files `apple-***.xd` or `google-***.xd` 44 | 45 | ### Other tutorials: 46 | - [create an interactive presentation](https://helpx.adobe.com/xd/how-to/create-interactive-presentation.html?playlist=/ccx/v1/collection/product/xd/topics/xd-projects-more/collection.ccx.js&ref=helpx.adobe.com) 47 | - [create a slideshow](https://helpx.adobe.com/xd/how-to/create-slideshow.html?playlist=/content/help/en/ccx/v1/collection/product/xd/topics/xd-projects-more/collection.ccx.js) 48 | 49 | ## Resources 50 | 51 | main site: https://www.adobe.com/nl/products/xd.html 52 | docs: https://helpx.adobe.com/nl/xd/user-guide.html 53 | tutorials: https://helpx.adobe.com/xd/tutorials.html 54 | -------------------------------------------------------------------------------- /adobe-xd/intro-adobe-xd.pptx: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/adobe-xd/intro-adobe-xd.pptx -------------------------------------------------------------------------------- /adobe-xd/ui-kits/apple-ui-elements-ipad.xd: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/adobe-xd/ui-kits/apple-ui-elements-ipad.xd -------------------------------------------------------------------------------- /adobe-xd/ui-kits/apple-ui-elements-iphone8.xd: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/adobe-xd/ui-kits/apple-ui-elements-iphone8.xd -------------------------------------------------------------------------------- /adobe-xd/ui-kits/apple-ui-elements-iphoneX.xd: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/adobe-xd/ui-kits/apple-ui-elements-iphoneX.xd -------------------------------------------------------------------------------- /adobe-xd/ui-kits/google-stickersheet-components.xd: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/adobe-xd/ui-kits/google-stickersheet-components.xd -------------------------------------------------------------------------------- /adobe-xd/xd-tutorial.xd: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/adobe-xd/xd-tutorial.xd -------------------------------------------------------------------------------- /cypress/README.md: 
-------------------------------------------------------------------------------- 1 | # Cypress.io 2 | 3 | In order to deploy an app to production with confidence, automated testing is a necessity. 4 | Browser-based integration testing has some major advantages: 5 | 6 | - large swathes of code are hit with small amounts of testing 7 | - whereas unit tests and snapshot tests for the front end often test implementation, Cypress specs test functionality 8 | - allows testing for browser oddities and for changes brought on by different resolutions 9 | 10 | Up until now Selenium was the tool of choice, but it is being challenged by Cypress. 11 | Cypress is very simple to use, comes with a bundled set of tools and is completely standalone, alleviating the need to fiddle with WebDriver. 12 | Some of the included tools are Mocha with helpers such as beforeEach, describe, context etc., Chai for assertions and Sinon for stubs and spies. 13 | 14 | ## Getting started 15 | 16 | ### Requirements 17 | 18 | You will need: 19 | 20 | - node (8+) 21 | - a node package manager (yarn / npm) 22 | 23 | For the demo you will also need: 24 | 25 | - docker and docker compose 26 | 27 | ### Running the demo: 28 | 29 | - cd into koa 30 | - `yarn` 31 | - `yarn start` 32 | - `yarn seed` 33 | - cd into react 34 | - `yarn` 35 | - `yarn start` 36 | - open a new terminal window at the same path 37 | - `yarn cypress` 38 | 39 | The test files are found in code-cafe/react/cypress/integration 40 | 41 | ### Try it for yourself: 42 | 43 | In an existing project, run `npm install cypress` and then run `npx cypress open`. 44 | This will add several folders and files, including some examples of what's possible with Cypress. 45 | 46 | ## Features 47 | 48 | - open source 49 | - fast 50 | - easy to write and debug 51 | - built for responsiveness testing 52 | - spies, stubs, clocks 53 | - direct manipulation of data management solutions (e.g.
redux) 54 | - screenshots and video of failing tests 55 | - excellent documentation 56 | 57 | ## Resources 58 | 59 | main site: https://www.cypress.io/ 60 | 61 | docs: https://docs.cypress.io/ 62 | 63 | source code: https://github.com/cypress-io/cypress 64 | 65 | npm page: https://www.npmjs.com/package/cypress 66 | -------------------------------------------------------------------------------- /cypress/koa/.gitignore: -------------------------------------------------------------------------------- 1 | # See https://help.github.com/ignore-files/ for more about ignoring files. 2 | 3 | # dependencies 4 | /node_modules 5 | 6 | -------------------------------------------------------------------------------- /cypress/koa/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:8 2 | WORKDIR /var/www/app 3 | COPY . . 4 | RUN yarn 5 | EXPOSE 3030 -------------------------------------------------------------------------------- /cypress/koa/app.js: -------------------------------------------------------------------------------- 1 | import Koa from "koa"; 2 | import Router from "koa-router"; 3 | import mongoose from "mongoose"; 4 | import bodyParser from "koa-bodyparser"; 5 | import cors from "@koa/cors"; 6 | 7 | import { router as accommodationsRouter } from "./modules/accommodations/accommodationsController"; 8 | import { router as authRouter } from "./modules/auth/authController"; 9 | import { router as usersRouter } from "./modules/users/usersController"; 10 | 11 | const koa = new Koa(); 12 | const app = new Router(); 13 | mongoose.connect( 14 | "mongodb://mongo:27017/AMISBnB", 15 | { useNewUrlParser: true } 16 | ); 17 | 18 | koa.use(cors()); 19 | app.use(bodyParser()); 20 | 21 | app.get("/", async ctx => { 22 | console.log("GET /"); 23 | ctx.body = "Koa running"; 24 | }); 25 | 26 | app.use("/accommodations", accommodationsRouter.routes()); 27 | app.use("/auth", authRouter.routes()); 28 | app.use("/users",
usersRouter.routes()); 29 | koa.use(app.routes()); 30 | koa.use(app.allowedMethods()); 31 | 32 | const server = koa.listen(3030); 33 | console.log(`Koa up at ${server.address().address}:${server.address().port}`); 34 | -------------------------------------------------------------------------------- /cypress/koa/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | services: 3 | mongo: 4 | image: mongo 5 | ports: 6 | - 27017:27017 7 | koa: 8 | image: node:8 9 | build: ./ 10 | volumes: 11 | - ./:/var/www/app 12 | ports: 13 | - 3030:3030 14 | depends_on: 15 | - mongo 16 | command: yarn start-koa 17 | -------------------------------------------------------------------------------- /cypress/koa/modules/accommodations/accommodationsController.js: -------------------------------------------------------------------------------- 1 | import Router from "koa-router"; 2 | import Accommodation from "./accommodationsSchema"; 3 | import { isAuth, getUserId } from "../auth/helpers"; 4 | 5 | export const router = new Router(); 6 | 7 | router.get("/", async ctx => { 8 | console.log("GET /accommodations"); 9 | const accommodations = await Accommodation.find(); 10 | 11 | const accommodationsOverview = accommodations.map(a => ({ 12 | _id: a._id, 13 | location: a.location, 14 | name: a.name, 15 | description: a.description, 16 | image: a.images[0] 17 | })); 18 | ctx.body = accommodationsOverview; 19 | }); 20 | 21 | router.get("/:id", async ctx => { 22 | const id = ctx.params.id; 23 | console.log(`GET /accommodations/${id}`); 24 | if (!id.match(/^[0-9a-fA-F]{24}$/)) { 25 | ctx.throw(404); 26 | } 27 | const accommodation = await Accommodation.findById(id); 28 | ctx.body = accommodation; 29 | }); 30 | 31 | router.use(isAuth).post("/", async ctx => { 32 | console.log("POST /accommodations"); 33 | const accommodation = ctx.request.body; 34 | const userId = getUserId(ctx.request); 35 | accommodation.createdBy = userId; 36 | 
const savedAccommodation = await Accommodation.create(accommodation); 37 | ctx.body = savedAccommodation; 38 | }); 39 | 40 | router.use(isAuth).put("/:id", async ctx => { 41 | const id = ctx.params.id; 42 | console.log(`PUT /accommodations/${id}`); 43 | await Accommodation.update({ _id: id }, ctx.request.body); 44 | const accommodation = await Accommodation.findById(id); 45 | ctx.body = accommodation; 46 | }); 47 | -------------------------------------------------------------------------------- /cypress/koa/modules/accommodations/accommodationsSchema.js: -------------------------------------------------------------------------------- 1 | import mongoose from "mongoose"; 2 | 3 | const AccommodationSchema = new mongoose.Schema( 4 | { 5 | name: { type: String }, 6 | location: { type: String }, 7 | images: { type: [String] }, 8 | amenities: { type: [String] }, 9 | description: { type: String }, 10 | createdBy: { type: mongoose.Schema.Types.ObjectId, ref: "User" } 11 | }, 12 | { timestamps: true } 13 | ); 14 | 15 | const Accommodation = mongoose.model("Accommodation", AccommodationSchema); 16 | 17 | export default Accommodation; 18 | -------------------------------------------------------------------------------- /cypress/koa/modules/auth/authController.js: -------------------------------------------------------------------------------- 1 | import Router from "koa-router"; 2 | import User from "../users/usersSchema"; 3 | import bcrypt from "bcrypt"; 4 | import jsonwebtoken from "jsonwebtoken"; 5 | 6 | export const router = new Router(); 7 | 8 | router.post("/", async ctx => { 9 | const email = ctx.request.body.email; 10 | const password = ctx.request.body.password; 11 | console.log(`POST /auth for ${email}`); 12 | const user = await User.findOne({ email }); 13 | if (!user) { 14 | const error = `User not found`; 15 | console.log(error); 16 | ctx.throw(404, error); 17 | } 18 | if (!(await bcrypt.compare(password, user.password))) { 19 | const error = "Password incorrect";
20 | console.log(error); 21 | ctx.throw(401, error); 22 | } 23 | const jwt = jsonwebtoken.sign({ id: user.id }, "super veilige key"); 24 | ctx.body = { jwt }; 25 | }); 26 | -------------------------------------------------------------------------------- /cypress/koa/modules/auth/helpers.js: -------------------------------------------------------------------------------- 1 | import jwt from "koa-jwt"; 2 | 3 | export const isAuth = jwt({ secret: "super veilige key" }); 4 | 5 | export const getUserId = request => { 6 | const authHeader = request.header.authorization; 7 | const base64 = authHeader 8 | .split(" ")[1] 9 | .split(".")[1] 10 | .replace(/-/g, "+") 11 | .replace(/_/g, "/"); 12 | const user = Buffer.from(base64, "base64").toString(); 13 | return JSON.parse(user).id; 14 | }; 15 | -------------------------------------------------------------------------------- /cypress/koa/modules/users/usersController.js: -------------------------------------------------------------------------------- 1 | import Router from "koa-router"; 2 | import User from "./usersSchema"; 3 | import bcrypt from "bcrypt"; 4 | import { isAuth, getUserId } from "../auth/helpers"; 5 | 6 | export const router = new Router(); 7 | 8 | router.post("/", async ctx => { 9 | console.log(`POST /users`); 10 | const user = ctx.request.body; 11 | const existingUser = await User.findOne({ $or: [{ email: user.email }, { username: user.username }] }); 12 | if (!user.email || !user.username || !user.password) { 13 | const error = `please provide email, username and password`; 14 | console.log(error); 15 | ctx.throw(400, error); 16 | } 17 | if (existingUser) { 18 | const error = "This email and/or username are already in use"; 19 | console.log(error); 20 | ctx.throw(418, error); 21 | } 22 | user.password = await bcrypt.hash(user.password, 10); 23 | const savedUser = await User.create(user); 24 | ctx.body = savedUser; 25 | }); 26 | 27 | router.use(isAuth).get("/:id", async ctx => { 28 | const id = ctx.params.id;
29 | console.log(`GET user ${id}`); 30 | const user = await User.findById(id); 31 | ctx.body = user; 32 | }); 33 | 34 | router.use(isAuth).put("/:id", async ctx => { 35 | const id = ctx.params.id; 36 | console.log(`PUT user ${id}`); 37 | const userId = getUserId(ctx.request); 38 | if (id !== userId) { 39 | const error = `you are not allowed to change another user's info`; 40 | console.log(error); 41 | ctx.throw(401, error); 42 | } 43 | await User.update({ _id: id }, ctx.request.body); 44 | const user = await User.findById(id); 45 | ctx.body = user; 46 | }); 47 | -------------------------------------------------------------------------------- /cypress/koa/modules/users/usersSchema.js: -------------------------------------------------------------------------------- 1 | import mongoose from "mongoose"; 2 | 3 | const UserSchema = new mongoose.Schema( 4 | { 5 | email: { type: String, unique: true }, 6 | username: { type: String, unique: true }, 7 | password: { type: String }, 8 | favoriteAccommodations: [{ type: mongoose.Schema.Types.ObjectId, ref: "Accommodation" }] 9 | }, 10 | { timestamps: true } 11 | ); 12 | 13 | UserSchema.set("toJSON", { 14 | transform: (doc, ret, opt) => { 15 | delete ret["password"]; 16 | } 17 | }); 18 | 19 | const User = mongoose.model("User", UserSchema); 20 | 21 | export default User; 22 | -------------------------------------------------------------------------------- /cypress/koa/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "scripts": { 3 | "start": "docker-compose up -d", 4 | "stop": "docker-compose down", 5 | "seed": "node seed", 6 | "start-koa": "nodemon app.js --exec babel-node --presets es2015", 7 | "seed-db": "node seed", 8 | "stop-db": "docker rm -f mongoSIG" 9 | }, 10 | "dependencies": { 11 | "@koa/cors": "^2.2.2", 12 | "bcrypt": "^3.0.2", 13 | "jsonwebtoken": "^8.3.0", 14 | "koa": "^2.6.1", 15 | "koa-bodyparser": "^4.2.1", 16 | "koa-jwt": "^3.5.1", 17 | "koa-router":
"^7.4.0", 18 | "mongoose": "^5.2.5", 19 | "nodemon": "^1.18.3", 20 | "request": "^2.88.0", 21 | "request-promise": "^4.2.2" 22 | }, 23 | "devDependencies": { 24 | "babel-cli": "^6.26.0", 25 | "babel-preset-es2015": "^6.24.1" 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /cypress/koa/seed.js: -------------------------------------------------------------------------------- 1 | const request = require("request-promise"); 2 | 3 | const users = [ 4 | { 5 | email: "bram.kaashoek@amis.nl", 6 | password: "123", 7 | username: "Bram", 8 | favoriteAccommodations: [] 9 | } 10 | ]; 11 | 12 | const accommodations = [ 13 | { 14 | name: "Romantische vuurtoren", 15 | location: "Scheveningen", 16 | favorite: false, 17 | images: [ 18 | "http://infomarken.com/wp-content/uploads/2016/03/tourPaard1.jpg", 19 | "http://www.nieman.nl/wp-content/uploads/2017/08/Interieur-De-Vuurtoren-Kop-Noordereiland-Harderwijk.jpg", 20 | "https://static.webshopapp.com/shops/186977/files/089588126/image.jpg" 21 | ], 22 | amenities: [ 23 | "stofzuiger", 24 | "strijkijzer", 25 | "WiFi", 26 | "droger", 27 | "verwarming", 28 | "TV", 29 | "wieg", 30 | "airconditioning", 31 | "verduisterende gordijnen", 32 | "centrale verwarming", 33 | "beddegoed", 34 | "parkeerplek", 35 | "feun", 36 | "shampoo en conditioner", 37 | "wasmachine", 38 | "droger", 39 | "zeep", 40 | "tuin", 41 | "schommel", 42 | "EHBO kit" 43 | ], 44 | description: 45 | "Een zeer romantische lokatie aan de zee. Verblijf in een authentieke vuurtoren en overzie vanaf grote hoogte de zee. Enkel voor gasten met een goede conditie." 
46 | }, 47 | { 48 | name: "Bezemkast in Amsterdam", 49 | location: "Amsterdam", 50 | favorite: false, 51 | images: [ 52 | "https://i.pinimg.com/originals/db/2b/20/db2b207c1bb61e5b7d1bafe1c81455ff.jpg", 53 | "https://ronafischman.com/wp-content/uploads/2015/10/Understairs.bedroom.jpg" 54 | ], 55 | amenities: ["WiFi", "verwarming", "beddegoed", "zeep", "EHBO kit"], 56 | description: 57 | "De beste woning die je in Amsterdam gaat vinden onder de 500 euro per nacht. Accepteer de realiteit van de woningnood en zie de positieve kant in: het is erg knus." 58 | }, 59 | { 60 | name: "Hutje op de hei", 61 | location: "De Veluwe", 62 | favorite: false, 63 | images: [ 64 | "https://i.pinimg.com/originals/b6/e8/12/b6e812e996f90dc575acb6207235adf5.jpg", 65 | "https://roomed.nl/wp-content/uploads/2017/06/roomed-amsterdamse-loft4.jpg", 66 | "https://cdn.shopify.com/s/files/1/2954/9184/files/slow-cabins-interieur5_2048x2048.jpg?v=1520013611" 67 | ], 68 | amenities: [ 69 | "stofzuiger", 70 | "strijkijzer", 71 | "verwarming", 72 | "wieg", 73 | "beddegoed", 74 | "parkeerplek", 75 | "kookgerei", 76 | "fornuis", 77 | "koffiezet apparaat", 78 | "eetgerei", 79 | "koelkast", 80 | "feun", 81 | "shampoo en conditioner", 82 | "zeep", 83 | "tuin", 84 | "schommel", 85 | "EHBO kit" 86 | ], 87 | description: 88 | "Wat is nou een klassiekere Nederlands verblijfsplaats dan een hutje op de hei? 
Als je even weg wil van de drukte en belachelijkhe huizenprijzen van de stad is dit een perfecte locatie" 89 | }, 90 | { 91 | name: "Vakantievilla Vacuna", 92 | location: "Domburg", 93 | favorite: true, 94 | images: [ 95 | "http://www.zeelandrelais.com/files/DSC00204.jpg", 96 | "https://static.ferienhausmiete.de/pictures/106343/bilder_original/106343_1462960274.jpg", 97 | "https://static.ferienhausmiete.de/pictures/106343/bilder_original/106343_1340009402.jpg" 98 | ], 99 | amenities: [ 100 | "stofzuiger", 101 | "WiFi", 102 | "droger", 103 | "verwarming", 104 | "TV", 105 | "wieg", 106 | "centrale verwarming", 107 | "beddegoed", 108 | "parkeerplek", 109 | "kookgerei", 110 | "fornuis", 111 | "koffiezet apparaat", 112 | "eetgerei", 113 | "magnetron/oven combinatie", 114 | "koelkast", 115 | "vaatwasser", 116 | "feun", 117 | "shampoo en conditioner", 118 | "wasmachine", 119 | "droger", 120 | "zeep", 121 | "tuin", 122 | "EHBO kit" 123 | ], 124 | description: 125 | "Een rustieke vakantievilla gelegen in Domburg aan de zeeuwse kust. Uitstekende beheersing van de Duitse taal is helaas wel een vereist voor communicatie in de omgeving." 126 | }, 127 | { 128 | name: "Appartement voor twee personen", 129 | location: "Utrecht", 130 | favorite: false, 131 | images: [ 132 | "http://www.interieur-inrichting.net/afbeeldingen/knus-scandinavisch-appartement-645x484.jpg" 133 | ], 134 | amenities: [ 135 | "WiFi", 136 | "droger", 137 | "verwarming", 138 | "TV", 139 | "verduisterende gordijnen", 140 | "centrale verwarming", 141 | "beddegoed", 142 | "kookgerei", 143 | "fornuis", 144 | "koffiezet apparaat", 145 | "eetgerei", 146 | "magnetron/oven combinatie", 147 | "koelkast", 148 | "vaatwasser" 149 | ], 150 | description: 151 | "Een appartement voor twee personen in Utrecht. Ingericht in typische vtwonen stijl. Mooi op het plaatje maar in de realiteit veel te wit." 
152 | } 153 | ]; 154 | 155 | const getOptions = (path, body, token = undefined) => { 156 | const options = { 157 | url: `http://localhost:3030/${path}`, 158 | headers: { "Content-Type": "application/json" }, 159 | method: "POST", 160 | body: JSON.stringify(body) 161 | }; 162 | if (token) options.headers["Authorization"] = `Bearer ${token}`; 163 | return options; 164 | }; 165 | 166 | const seed = async options => { 167 | await request(options, (err, res) => { 168 | if (err) console.log(err); 169 | if (res && res.statusCode === 200) console.log(`seeded on ${options.url}`); 170 | if (res && res.statusCode !== 200) 171 | console.log(`error: statuscode ${res.statusCode}`); 172 | }); 173 | }; 174 | 175 | // seed users 176 | const userPromises = users.map(async user => { 177 | const options = getOptions("users", user); 178 | await seed(options); 179 | }); 180 | 181 | // wait for all users to be seeded 182 | let token = undefined; 183 | Promise.all(userPromises).then(async () => { 184 | // get auth token for the first user 185 | await request( 186 | getOptions("auth", { 187 | ...users[0] 188 | }), 189 | (_, res) => { 190 | console.log(`got auth token`); 191 | token = JSON.parse(res.body).jwt; 192 | accommodations.map(acc => { 193 | const options = getOptions("accommodations", acc, token); 194 | seed(options); 195 | }); 196 | } 197 | ); 198 | }); 199 | -------------------------------------------------------------------------------- /cypress/react/.eslintignore: -------------------------------------------------------------------------------- 1 | src/registerServiceWorker.js -------------------------------------------------------------------------------- /cypress/react/.eslintrc.json: -------------------------------------------------------------------------------- 1 | { 2 | "extends": "airbnb", 3 | "parser": "babel-eslint", 4 | "rules": { 5 | "quotes": "off", 6 | "no-underscore-dangle": "off", 7 | "import/prefer-default-export": 0, 8 | "object-curly-newline": 0, 9 | 
"react/destructuring-assignment": 0, 10 | "react/prop-types": 0, 11 | "react/jsx-filename-extension": 0, 12 | "react/jsx-one-expression-per-line": 0, 13 | "react/prefer-stateless-function": 0, 14 | "arrow-parens": 0, 15 | "max-len": 0, 16 | "arrow-body-style": 0, 17 | "no-alert": 0, 18 | "prefer-template": 0, 19 | "jsx-a11y/click-events-have-key-events": 0, 20 | "jsx-a11y/no-static-element-interactions": 0, 21 | "operator-linebreak": 0, 22 | "react/no-array-index-key": 0, 23 | "no-console": 0, 24 | "implicit-arrow-linebreak": 0, 25 | "comma-dangle": 0 26 | }, 27 | "globals": { 28 | "window": true, 29 | "document": true, 30 | "it": true 31 | } 32 | } 33 | -------------------------------------------------------------------------------- /cypress/react/.gitignore: -------------------------------------------------------------------------------- 1 | # See https://help.github.com/ignore-files/ for more about ignoring files. 2 | 3 | # dependencies 4 | /node_modules 5 | 6 | # testing 7 | /coverage 8 | 9 | # production 10 | /build 11 | 12 | # misc 13 | .DS_Store 14 | .env.local 15 | .env.development.local 16 | .env.test.local 17 | .env.production.local 18 | 19 | npm-debug.log* 20 | yarn-debug.log* 21 | yarn-error.log* 22 | -------------------------------------------------------------------------------- /cypress/react/.prettierrc: -------------------------------------------------------------------------------- 1 | { 2 | "printWidth": 120, 3 | "trailingComma": "es5" 4 | } 5 | -------------------------------------------------------------------------------- /cypress/react/.vscode/components.code-snippets: -------------------------------------------------------------------------------- 1 | { 2 | "component": { 3 | "prefix": "component", 4 | "body": [ 5 | "import * as React from 'react';", 6 | "", 7 | "class ${1:ComponentName} extends React.Component {", 8 | "render(){", 9 | "return
<div>Hello World!</div>
", 10 | "}", 11 | "}", 12 | "", 13 | "export default ${1:ComponentName}" 14 | ] 15 | }, 16 | "styledComponent": { 17 | "prefix": "styledComponent", 18 | "body": [ 19 | "import * as React from 'react';", 20 | "import withStyles from '@material-ui/core/styles/withStyles';", 21 | "", 22 | "const styles = () => ({ root: {}})", 23 | "", 24 | "class ${1:ComponentName} extends React.Component {", 25 | "render() {", 26 | "return(<>hello world)", 27 | "}", 28 | "}", 29 | "", 30 | "export default withStyles(styles)(${1:ComponentName})" 31 | ] 32 | }, 33 | "connectedComponent": { 34 | "prefix": "connectedComponent", 35 | "body": [ 36 | "import * as React from 'react';", 37 | "import { connect } from 'react-redux';", 38 | "", 39 | "class ${1:ComponentName} extends React.Component {", 40 | "render() {", 41 | "return(<>hello world)", 42 | "}", 43 | "}", 44 | "", 45 | "const mapStateToProps = ({}:{}) => ({})", 46 | "", 47 | "export default connect(mapStateToProps, {})(${1:ComponentName})" 48 | ] 49 | } 50 | } 51 | -------------------------------------------------------------------------------- /cypress/react/cypress.json: -------------------------------------------------------------------------------- 1 | {} 2 | -------------------------------------------------------------------------------- /cypress/react/cypress/_integration/examples/actions.spec.js: -------------------------------------------------------------------------------- 1 | /// 2 | 3 | context('Actions', () => { 4 | beforeEach(() => { 5 | cy.visit('https://example.cypress.io/commands/actions') 6 | }) 7 | 8 | // https://on.cypress.io/interacting-with-elements 9 | 10 | it('.type() - type into a DOM element', () => { 11 | // https://on.cypress.io/type 12 | cy.get('.action-email') 13 | .type('fake@email.com').should('have.value', 'fake@email.com') 14 | 15 | // .type() with special character sequences 16 | .type('{leftarrow}{rightarrow}{uparrow}{downarrow}') 17 | .type('{del}{selectall}{backspace}') 18 | 19 | // .type() 
with key modifiers 20 | .type('{alt}{option}') //these are equivalent 21 | .type('{ctrl}{control}') //these are equivalent 22 | .type('{meta}{command}{cmd}') //these are equivalent 23 | .type('{shift}') 24 | 25 | // Delay each keypress by 0.1 sec 26 | .type('slow.typing@email.com', { delay: 100 }) 27 | .should('have.value', 'slow.typing@email.com') 28 | 29 | cy.get('.action-disabled') 30 | // Ignore error checking prior to type 31 | // like whether the input is visible or disabled 32 | .type('disabled error checking', { force: true }) 33 | .should('have.value', 'disabled error checking') 34 | }) 35 | 36 | it('.focus() - focus on a DOM element', () => { 37 | // https://on.cypress.io/focus 38 | cy.get('.action-focus').focus() 39 | .should('have.class', 'focus') 40 | .prev().should('have.attr', 'style', 'color: orange;') 41 | }) 42 | 43 | it('.blur() - blur off a DOM element', () => { 44 | // https://on.cypress.io/blur 45 | cy.get('.action-blur').type('About to blur').blur() 46 | .should('have.class', 'error') 47 | .prev().should('have.attr', 'style', 'color: red;') 48 | }) 49 | 50 | it('.clear() - clears an input or textarea element', () => { 51 | // https://on.cypress.io/clear 52 | cy.get('.action-clear').type('Clear this text') 53 | .should('have.value', 'Clear this text') 54 | .clear() 55 | .should('have.value', '') 56 | }) 57 | 58 | it('.submit() - submit a form', () => { 59 | // https://on.cypress.io/submit 60 | cy.get('.action-form') 61 | .find('[type="text"]').type('HALFOFF') 62 | cy.get('.action-form').submit() 63 | .next().should('contain', 'Your form has been submitted!') 64 | }) 65 | 66 | it('.click() - click on a DOM element', () => { 67 | // https://on.cypress.io/click 68 | cy.get('.action-btn').click() 69 | 70 | // You can click on 9 specific positions of an element: 71 | // ----------------------------------- 72 | // | topLeft top topRight | 73 | // | | 74 | // | | 75 | // | | 76 | // | left center right | 77 | // | | 78 | // | | 79 | // | | 80 | // | 
bottomLeft bottom bottomRight | 81 | // ----------------------------------- 82 | 83 | // clicking in the center of the element is the default 84 | cy.get('#action-canvas').click() 85 | 86 | cy.get('#action-canvas').click('topLeft') 87 | cy.get('#action-canvas').click('top') 88 | cy.get('#action-canvas').click('topRight') 89 | cy.get('#action-canvas').click('left') 90 | cy.get('#action-canvas').click('right') 91 | cy.get('#action-canvas').click('bottomLeft') 92 | cy.get('#action-canvas').click('bottom') 93 | cy.get('#action-canvas').click('bottomRight') 94 | 95 | // .click() accepts an x and y coordinate 96 | // that controls where the click occurs :) 97 | 98 | cy.get('#action-canvas') 99 | .click(80, 75) // click 80px on x coord and 75px on y coord 100 | .click(170, 75) 101 | .click(80, 165) 102 | .click(100, 185) 103 | .click(125, 190) 104 | .click(150, 185) 105 | .click(170, 165) 106 | 107 | // click multiple elements by passing multiple: true 108 | cy.get('.action-labels>.label').click({ multiple: true }) 109 | 110 | // Ignore error checking prior to clicking 111 | cy.get('.action-opacity>.btn').click({ force: true }) 112 | }) 113 | 114 | it('.dblclick() - double click on a DOM element', () => { 115 | // https://on.cypress.io/dblclick 116 | 117 | // Our app has a listener on 'dblclick' event in our 'scripts.js' 118 | // that hides the div and shows an input on double click 119 | cy.get('.action-div').dblclick().should('not.be.visible') 120 | cy.get('.action-input-hidden').should('be.visible') 121 | }) 122 | 123 | it('.check() - check a checkbox or radio element', () => { 124 | // https://on.cypress.io/check 125 | 126 | // By default, .check() will check all 127 | // matching checkbox or radio elements in succession, one after another 128 | cy.get('.action-checkboxes [type="checkbox"]').not('[disabled]') 129 | .check().should('be.checked') 130 | 131 | cy.get('.action-radios [type="radio"]').not('[disabled]') 132 | .check().should('be.checked') 133 | 134 | // 
.check() accepts a value argument 135 | cy.get('.action-radios [type="radio"]') 136 | .check('radio1').should('be.checked') 137 | 138 | // .check() accepts an array of values 139 | cy.get('.action-multiple-checkboxes [type="checkbox"]') 140 | .check(['checkbox1', 'checkbox2']).should('be.checked') 141 | 142 | // Ignore error checking prior to checking 143 | cy.get('.action-checkboxes [disabled]') 144 | .check({ force: true }).should('be.checked') 145 | 146 | cy.get('.action-radios [type="radio"]') 147 | .check('radio3', { force: true }).should('be.checked') 148 | }) 149 | 150 | it('.uncheck() - uncheck a checkbox element', () => { 151 | // https://on.cypress.io/uncheck 152 | 153 | // By default, .uncheck() will uncheck all matching 154 | // checkbox elements in succession, one after another 155 | cy.get('.action-check [type="checkbox"]') 156 | .not('[disabled]') 157 | .uncheck().should('not.be.checked') 158 | 159 | // .uncheck() accepts a value argument 160 | cy.get('.action-check [type="checkbox"]') 161 | .check('checkbox1') 162 | .uncheck('checkbox1').should('not.be.checked') 163 | 164 | // .uncheck() accepts an array of values 165 | cy.get('.action-check [type="checkbox"]') 166 | .check(['checkbox1', 'checkbox3']) 167 | .uncheck(['checkbox1', 'checkbox3']).should('not.be.checked') 168 | 169 | // Ignore error checking prior to unchecking 170 | cy.get('.action-check [disabled]') 171 | .uncheck({ force: true }).should('not.be.checked') 172 | }) 173 | 174 | it('.select() - select an option in a } 85 | > 86 | {amenities.map(amenity => ( 87 | 88 | {amenity} 89 | 90 | ))} 91 | 92 | 93 |
94 | 97 |
98 | 99 | ); 100 | } 101 | } 102 | 103 | const mapStateToProps = accommodation => accommodation; 104 | const mapDispatchToProps = { createAccommodation }; 105 | 106 | export default connect( 107 | mapStateToProps, 108 | mapDispatchToProps 109 | )(withStyles(styles)(withRouter(CreateAccommodation))); 110 | -------------------------------------------------------------------------------- /cypress/react/src/components/Error/Page404.js: -------------------------------------------------------------------------------- 1 | import * as React from "react"; 2 | import { withStyles } from "@material-ui/core/styles"; 3 | 4 | const styles = { 5 | root: { 6 | display: "flex", 7 | flexDirection: "column", 8 | width: "100%", 9 | height: "70vh", 10 | alignItems: "center", 11 | justifyContent: "center", 12 | "& h1": { 13 | width: "fit-content", 14 | }, 15 | }, 16 | }; 17 | 18 | class Page404 extends React.Component { 19 | render() { 20 | return ( 21 |
22 | <div className={this.props.classes.root}>
<h1>Oops!</h1>
23 | <h1>Page not found!</h1>
24 | </div>
25 | ); 26 | } 27 | } 28 | 29 | export default withStyles(styles)(Page404); 30 | -------------------------------------------------------------------------------- /cypress/react/src/components/Frame/Frame.js: -------------------------------------------------------------------------------- 1 | import * as React from "react"; 2 | import { Route, Switch } from "react-router-dom"; 3 | import { login, checkSession, logout } from "../../common/auth"; 4 | import Appbar from "../Appbar/Appbar"; 5 | import Accommodations from "../Accommodations/Accommodations"; 6 | import CreateAccommodation from "../CreateAccommodation/CreateAccommodation"; 7 | import Page404 from "../Error/Page404"; 8 | import AccommodationsDetail from "../AccommodationsDetail/AccommodationsDetail"; 9 | import { UserContext } from "../../common/context"; 10 | import Register from "../Login/Register"; 11 | 12 | class Frame extends React.Component { 13 | state = { 14 | user: undefined, 15 | login: async (email, password) => { 16 | const user = await login(email, password); 17 | this.setState({ user }); 18 | return user; 19 | }, 20 | logout: () => { 21 | logout(); 22 | this.setState({ user: undefined }); 23 | }, 24 | updateUser: user => { 25 | this.setState({ user }); 26 | }, 27 | }; 28 | 29 | async componentDidMount() { 30 | const user = await checkSession(); 31 | if (user) this.setState({ user }); 32 | } 33 | 34 | render() { 35 | return ( 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | ); 49 | } 50 | } 51 | 52 | export default Frame; 53 | -------------------------------------------------------------------------------- /cypress/react/src/components/Login/LoginDialog.js: -------------------------------------------------------------------------------- 1 | import * as React from "react"; 2 | import { withStyles, Dialog, DialogTitle, Input, Button } from "@material-ui/core"; 3 | import { UserContext } from "../../common/context"; 4 | import { Link } from "react-router-dom"; 5 | 6 | const styles 
= { 7 | title: { 8 | width: 320, 9 | backgroundColor: "red", 10 | "& h2": { 11 | color: "white", 12 | fontWeight: "bold", 13 | }, 14 | }, 15 | userFields: { 16 | display: "flex", 17 | flexDirection: "column", 18 | margin: 16, 19 | "& div:nth-child(2n)": { 20 | marginTop: 16, 21 | }, 22 | }, 23 | buttons: { 24 | display: "flex", 25 | justifyContent: "flex-end", 26 | margin: 16, 27 | }, 28 | link: { 29 | marginLeft: 16, 30 | color: "black", 31 | fontSize: 12, 32 | }, 33 | }; 34 | 35 | class LoginDialog extends React.Component { 36 | constructor(props) { 37 | super(props); 38 | this.emailRef = React.createRef(); 39 | this.passwordRef = React.createRef(); 40 | } 41 | 42 | handleLogin = () => { 43 | const user = this.context.login(this.emailRef.value, this.passwordRef.value); 44 | if (user) this.props.handleClose(); 45 | }; 46 | 47 | render() { 48 | return ( 49 | 50 | Login 51 |
52 | { 55 | this.emailRef = input; 56 | }} 57 | placeholder="Email" 58 | /> 59 | { 63 | this.passwordRef = input; 64 | }} 65 | placeholder="Password" 66 | /> 67 |
68 |
69 | 70 | Nog geen account? 71 | 72 |
73 |
74 | 77 | 78 |
79 |
80 | ); 81 | } 82 | } 83 | 84 | LoginDialog.contextType = UserContext; 85 | 86 | export default withStyles(styles)(LoginDialog); 87 | -------------------------------------------------------------------------------- /cypress/react/src/components/Login/Register.js: -------------------------------------------------------------------------------- 1 | import * as React from "react"; 2 | import { withStyles } from "@material-ui/core/styles"; 3 | import { TextField, Button } from "@material-ui/core"; 4 | import { UserContext } from "../../common/context"; 5 | import { fetch } from "../../common/fetch"; 6 | 7 | const styles = { 8 | root: { 9 | margin: 32, 10 | width: 320, 11 | }, 12 | inputs: { 13 | display: "flex", 14 | flexDirection: "column", 15 | }, 16 | buttons: { 17 | display: "flex", 18 | justifyContent: "flex-end", 19 | margin: "16px 0 16px 16px", 20 | }, 21 | }; 22 | 23 | class Register extends React.Component { 24 | state = { 25 | email: "", 26 | password: "", 27 | username: "", 28 | }; 29 | 30 | handleChange = event => { 31 | this.setState({ [event.target.id]: event.target.value }); 32 | }; 33 | 34 | submit = async () => { 35 | const { ...user } = this.state; 36 | await fetch("users", { method: "POST", body: user }); 37 | this.context.login(this.state.email, this.state.password); 38 | this.props.history.push("/"); 39 | }; 40 | 41 | render() { 42 | return ( 43 | 44 | {context => ( 45 |
46 |
47 | 54 | 62 | 71 | 72 |
73 | 74 |
75 |
76 | )} 77 |
78 | ); 79 | } 80 | } 81 | 82 | Register.contextType = UserContext; 83 | 84 | export default withStyles(styles)(Register); 85 | -------------------------------------------------------------------------------- /cypress/react/src/configureStore.js: -------------------------------------------------------------------------------- 1 | import { createStore, applyMiddleware, combineReducers } from "redux"; 2 | import { composeWithDevTools } from "redux-devtools-extension"; 3 | import thunk from "redux-thunk"; 4 | 5 | import { accommodationsReducer } from "./store/accommodations/reducer"; 6 | 7 | const reducers = combineReducers({ accommodations: accommodationsReducer }); 8 | 9 | export const configureStore = () => { 10 | return createStore(reducers, composeWithDevTools(applyMiddleware(thunk))); 11 | }; 12 | -------------------------------------------------------------------------------- /cypress/react/src/index.css: -------------------------------------------------------------------------------- 1 | body { 2 | margin: 0; 3 | padding: 0; 4 | font-family: sans-serif; 5 | background-color: lightgray; 6 | } 7 | -------------------------------------------------------------------------------- /cypress/react/src/index.js: -------------------------------------------------------------------------------- 1 | import React from "react"; 2 | import ReactDOM from "react-dom"; 3 | import { Provider } from "react-redux"; 4 | import { configureStore } from "./configureStore"; 5 | import "./index.css"; 6 | import App from "./App"; 7 | 8 | const store = configureStore(); 9 | 10 | ReactDOM.render( 11 | 12 | 13 | , 14 | document.getElementById("root") 15 | ); 16 | 17 | if (window.Cypress) { 18 | window.store = store; 19 | } 20 | -------------------------------------------------------------------------------- /cypress/react/src/resources/CaretLeftIcon.js: -------------------------------------------------------------------------------- 1 | import * as React from "react"; 2 | import SvgIcon from 
"@material-ui/core/SvgIcon"; 3 | 4 | export function CaretLeftIcon(props) { 5 | return ( 6 | 7 | 8 | 9 | ); 10 | } 11 | -------------------------------------------------------------------------------- /cypress/react/src/resources/CaretRightIcon.js: -------------------------------------------------------------------------------- 1 | import * as React from "react"; 2 | import SvgIcon from "@material-ui/core/SvgIcon"; 3 | 4 | export function CaretRightIcon(props) { 5 | return ( 6 | 7 | 8 | 9 | ); 10 | } 11 | -------------------------------------------------------------------------------- /cypress/react/src/resources/HouseIcon.js: -------------------------------------------------------------------------------- 1 | import * as React from 'react'; 2 | import SvgIcon from '@material-ui/core/SvgIcon'; 3 | 4 | export function HouseIcon(props) { 5 | return ( 6 | 7 | 8 | 9 | ); 10 | } 11 | -------------------------------------------------------------------------------- /cypress/react/src/store/accommodations/actions.js: -------------------------------------------------------------------------------- 1 | import { fetch } from "../../common/fetch"; 2 | 3 | export const REQUEST_ACCOMMODATIONS = "REQUEST_ACCOMMODATIONS"; 4 | export const RECEIVED_ACCOMMODATIONS = "RECEIVED_ACCOMMODATIONS"; 5 | export const CREATING_ACCOMMODATION = "CREATING_ACCOMMODATION"; 6 | export const CREATED_ACCOMMODATION = "CREATED_ACCOMMODATION"; 7 | export const ERROR_ACCOMMODATION = "ERROR_ACCOMMODATION"; 8 | 9 | export const getAllAccommodations = () => { 10 | return async dispatch => { 11 | dispatch({ type: REQUEST_ACCOMMODATIONS }); 12 | 13 | const result = await fetch("accommodations"); 14 | 15 | if (!result) { 16 | dispatch({ type: ERROR_ACCOMMODATION }); 17 | } else { 18 | dispatch({ type: RECEIVED_ACCOMMODATIONS, payload: result }); 19 | } 20 | }; 21 | }; 22 | 23 | export const createAccommodation = accommodation => { 24 | return async dispatch => { 25 | dispatch({ type: CREATING_ACCOMMODATION 
}); 26 | 27 | const result = await fetch("accommodations", { method: "POST", body: accommodation }); 28 | 29 | if (!result) { 30 | dispatch({ type: ERROR_ACCOMMODATION }); 31 | } else { 32 | dispatch({ type: CREATED_ACCOMMODATION, payload: result }); 33 | } 34 | }; 35 | }; 36 | -------------------------------------------------------------------------------- /cypress/react/src/store/accommodations/reducer.js: -------------------------------------------------------------------------------- 1 | import { 2 | RECEIVED_ACCOMMODATIONS, 3 | REQUEST_ACCOMMODATIONS, 4 | CREATED_ACCOMMODATION, 5 | ERROR_ACCOMMODATION, 6 | CREATING_ACCOMMODATION, 7 | } from "./actions"; 8 | 9 | export const accommodationsReducer = (state = [], action = {}) => { 10 | switch (action.type) { 11 | case REQUEST_ACCOMMODATIONS: 12 | return state; 13 | case RECEIVED_ACCOMMODATIONS: 14 | return [...action.payload]; 15 | case CREATING_ACCOMMODATION: 16 | return state; 17 | case CREATED_ACCOMMODATION: 18 | return [...state, action.payload]; 19 | case ERROR_ACCOMMODATION: 20 | return state; 21 | default: 22 | return state; 23 | } 24 | }; 25 | -------------------------------------------------------------------------------- /expose-localclient-ngrok-localtunnel/README.md: -------------------------------------------------------------------------------- 1 | # Expose local Docker Container services on the Internet 2 | 3 | The challenge: you are running a service, API or web application locally on your laptop, in a Docker container. You would like to provide access to external consumers - yourself on your smart phone, a piece of code running in a cloud environment, a colleague on your local network or on the other side of the world. 4 | 5 | We will look at how ngrok - a tool and a cloud service - makes this happen. 
It generates a public URL and ensures that all requests sent to that URL are forwarded to a local agent (running in its own, stand-alone Docker container) that can then pass them on to the local service. See https://technology.amis.nl/2016/12/07/publicly-exposing-a-local-service-to-nearby-and-far-away-consumer-on-the-internet-using-ngrok/ for an introduction to ngrok. 6 | 7 | 8 | ## First steps with ngrok and Docker 9 | 10 | Define a logical network `myngroknet` to link two or more containers together: 11 | 12 | ``` 13 | docker network create myngroknet 14 | ``` 15 | 16 | Run a Docker container called www based on the nginx image and associate it with the `myngroknet` network: 17 | ``` 18 | docker run -d -p 80 --net myngroknet --name www nginx 19 | ``` 20 | 21 | Run a container called ngrok based on the ngrok container image. Associate the container with the myngroknet network; this enables the container to access container www using its container name as hostname (for example http://www). Expose port 4040 - where the ngrok inspection interface is accessed. Specify that ngrok should open a tunnel (expose a public URL) for HTTP requests to port 80 on container www: 22 | 23 | ``` 24 | docker run -d -p 4040:4040 --net myngroknet --name ngrok wernight/ngrok ngrok http www:80 25 | ``` 26 | 27 | Access the ngrok monitor at port 4040 on the Docker host. If you are running with Vagrant that is probably this URL: `http://192.168.188.142:4040` 28 | 29 | From either the ngrok monitor or the command line on the Docker host (using `curl $(docker port ngrok 4040)/api/tunnels`), get the public URL that has been assigned to the ngrok session. 30 | 31 | Access that URL from any browser on any machine anywhere in the world. The request from the browser should be handled by the Docker container, in this case the www container running nginx.
32 | 33 | 34 | ### Expose a local Node Application on the Internet 35 | 36 | On the Docker host - for example the Ubuntu Linux VM created with Vagrant - clone the code-cafe GitHub repository: 37 | ``` 38 | git clone https://github.com/AMIS-Services/code-cafe 39 | ``` 40 | 41 | Then navigate to the directory that contains the Node application that we will expose on the internet: 42 | ``` 43 | cd code-cafe/jsonata-query-and-transform-json-documents 44 | ``` 45 | And run a Docker container called json-server with a Node runtime; the current directory ($PWD) is mapped into the container at /usr/src/app. The container is associated with the myngroknet network, which makes it accessible later on to the container running ngrok. 46 | ``` 47 | docker run -it --rm -p 8080:8080 -v "$PWD":/usr/src/app --net myngroknet --name json-server node:10 bash 48 | ``` 49 | Once the container is started, you will find yourself in a shell in the container. Perform the following steps to copy the sources, install dependencies and run the Node application: 50 | ``` 51 | cp -r /usr/src/app /app 52 | 53 | cd /app 54 | 55 | npm install 56 | 57 | node json-server 58 | ``` 59 | 60 | The Node application is up and listening at port 8080. You can verify this from the Docker host at `http://localhost:8080/?region=Europe` (or possibly from the Windows host: `http://<docker-host-ip>:8080/?region=Europe`). 61 | 62 | Run the ngrok Docker container to create a tunnel from a newly assigned public URL to port 8080 on the json-server container (at which the Node application is handling requests): 63 | ``` 64 | docker run -d -p 4040:4040 --net myngroknet --name ngrok wernight/ngrok ngrok http json-server:8080 65 | ``` 66 | 67 | Inspect ngrok at port 4040 (for example `http://192.168.188.142:4040`) and learn about the public URL - or use `curl $(docker port ngrok 4040)/api/tunnels` to get that URL.
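The `/api/tunnels` endpoint returns a JSON document describing the active tunnels. Rather than reading it by eye, the public URL can be extracted programmatically. A minimal sketch in Node follows; the `response` object is a hand-written sample abridged to the `tunnels[].public_url` fields (a real response carries more metadata, and in practice you would fetch it from the monitor port):

```javascript
// Extract the public URL(s) from an ngrok /api/tunnels response.
// `response` is a hand-written, abridged sample -- in practice you would
// fetch this JSON from http://<docker-host>:4040/api/tunnels.
const response = {
  tunnels: [
    { proto: "https", public_url: "https://1a2b3c4d.ngrok.io" },
    { proto: "http", public_url: "http://1a2b3c4d.ngrok.io" }
  ]
};

// Each tunnel entry exposes its externally reachable address as public_url.
const publicUrls = response.tunnels.map(t => t.public_url);
console.log(publicUrls.join("\n"));
```

The same one-liner works on the output of the `curl` command above when piped through a small Node script.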
68 | 69 | Access the Node application from any client anywhere in the world (for example your mobile device) at the URL: `http://<tunnel-id>.ngrok.io/?region=Europe` 70 | 71 | 72 | ## Localtunnel 73 | 74 | localtunnel exposes your localhost to the world for easy testing and sharing! No need to mess with DNS or deploy just to have others test out your changes. 75 | 76 | Check out: https://github.com/localtunnel/localtunnel 77 | 78 | Localtunnel is available in a Docker container, very similar to the ngrok solution discussed above: 79 | https://hub.docker.com/r/efrecon/localtunnel/. 80 | 81 | Note: localtunnel can use localtunnel.me as its server - or you can run your own server to handle all requests (see: https://github.com/localtunnel/server) 82 | 83 | ## Vagrant Share 84 | 85 | Vagrant Share allows you to share your Vagrant environment with anyone in the world, enabling collaboration directly in your Vagrant environment in almost any network environment with just a single command: `vagrant share`. 86 | 87 | See https://www.vagrantup.com/docs/share/ for details and http://www.gizmola.com/blog/archives/121-Vagrant-Share-and-Ngrok.html for more background.
88 | 89 | 90 | ## Resources 91 | 92 | Ngrok: https://ngrok.com/ 93 | 94 | Details on Docker container image docker-ngrok: https://github.com/wernight/docker-ngrok 95 | 96 | Vagrant Share: https://www.vagrantup.com/docs/share/ 97 | 98 | LocalTunnel: https://localtunnel.me/ and https://github.com/localtunnel/localtunnel 99 | (LocalTunnel custom server: https://github.com/localtunnel/server) 100 | 101 | Alternative solution: http://serveo.net/ (no client agent required) - also with the option to self-host -------------------------------------------------------------------------------- /jsonata-query-and-transform-json-documents/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM node:10-alpine 2 | 3 | EXPOSE 8080 4 | 5 | RUN wget https://raw.githubusercontent.com/mledoze/countries/master/countries.json 6 | RUN npm install jsonata request 7 | 8 | COPY countries-module.js . 9 | COPY json-server.js . 10 | COPY package.json . 11 | 12 | CMD ["node","json-server.js"] 13 | 14 | # docker build -t msm-countries . 15 | # docker run -d -p 8080:8080 msm-countries 16 | 17 | # http://localhost:8080/?name=ra&region=Europe 18 | 19 | # use docker ps to find the id of the msm-countries container to stop 20 | # docker stop <container id> 21 | -------------------------------------------------------------------------------- /jsonata-query-and-transform-json-documents/README.md: -------------------------------------------------------------------------------- 1 | # Explore JSONata 2 | 3 | JSONata is a lightweight query and transformation language for JSON data. It reminds me of XPath and XSLT or XQuery. Queries and transformations can be expressed in JSONata using declarative and intuitive syntax (for example: the expression `Account.Order[0].OrderID` to query the OrderID property of the first Order element in an Account object). The npm module `jsonata` provides a JavaScript implementation that can be used in Node JS and in client-side browser code.
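To make the expression syntax concrete, here is a plain-JavaScript rendering of two of the JSONata expressions used in this README, run against a hand-rolled miniature of the familiar Account example document (this sample data is an assumption for illustration, not the full document from the JSONata documentation):

```javascript
// A miniature, hand-written stand-in for the JSONata "Account" example.
const data = {
  Account: {
    Order: [
      {
        OrderID: "order103",
        Product: [
          { "Product Name": "Bowler Hat", Price: 34.45, Quantity: 2 },
          { "Product Name": "Trilby hat", Price: 21.67, Quantity: 1 }
        ]
      },
      {
        OrderID: "order104",
        Product: [{ "Product Name": "Bowler Hat", Price: 34.45, Quantity: 4 }]
      }
    ]
  }
};

// JSONata: Account.Order[0].Product[0]."Product Name"
const firstProductName = data.Account.Order[0].Product[0]["Product Name"];

// JSONata: $sum(Account.Order.Product.(Price * Quantity))
// Plain JS equivalent: flatten all products across all orders, then sum.
const allProducts = [].concat(...data.Account.Order.map(o => o.Product));
const totalCost = allProducts.reduce((sum, p) => sum + p.Price * p.Quantity, 0);

console.log(firstProductName, totalCost.toFixed(2));
```

The JSONata one-liners do exactly this navigation and aggregation, without the explicit flattening and reducing.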
4 | 5 | ## Try Out JSONata in browser-based Explorer tool 6 | 7 | Try out JSONata in the live browser-based JSONata explorer: http://try.jsonata.org/ 8 | 9 | (13 Dec 2018: an alternative instance is available at: http://docs.jsonata.org/jsonata-exerciser/) 10 | 11 | Note: see below for running the JSONata Exerciser locally (for example in case the site is not available) 12 | 13 | Some expressions on the example Account document: 14 | 15 | Get the name of the first product in the first order: `{"name": Account.Order[0].Product[0]."Product Name"}` (or: `Account.Order[0].Product[0]."Product Name"`) 16 | 17 | Get a list of all names of ordered products: `{"products": Account.Order.Product.( $."Product Name")}` or: `Account.Order.Product.( $."Product Name")` 18 | 19 | Total number of items in all orders in the account: `Account.$sum(Order.Product[]."Quantity")` 20 | 21 | Average height of all items: `$average(Account.Order.Product."Description"."Height")` 22 | 23 | A document with some aggregate values: 24 | ``` 25 | { "total number of items": Account.$sum(Order.Product[]."Quantity") 26 | , "total cost": $sum(Account.Order.Product.(Price * Quantity)) 27 | , "average height":$average(Account.Order.Product."Description"."Height") 28 | , "longest product name": $max(Account.Order.Product."Description".$length("Product Name")) 29 | } 30 | ``` 31 | 32 | A nicely transformed orders document: 33 | ``` 34 | Account.Order. 35 | {"order": { "id": OrderID 36 | , "products": $.Product[]. {"name": $."Product Name" 37 | ,"price": $."Price" 38 | ,"colour" : $."Description".Colour 39 | } 40 | ,"numberOfItems" : $sum($.Product[]."Quantity") 41 | } 42 | } 43 | ``` 44 | 45 | ### Run JSONata Exerciser locally 46 | 47 | The web application to try out JSONata is available on GitHub at: https://github.com/jsonata-js/jsonata-exerciser. 48 | 49 | You can run this application locally with the following steps: 50 | 1. 
Run a container with Node runtime: 51 | ``` 52 | docker run -it --rm -p 8080:3000 node:10 bash 53 | ``` 54 | 2. Clone the GitHub Repository for JSONata Exerciser: 55 | ``` 56 | git clone https://github.com/jsonata-js/jsonata-exerciser 57 | ``` 58 | 3. Start the application: 59 | ``` 60 | cd jsonata-exerciser 61 | rm package-lock.json 62 | npm install 63 | npm start 64 | ``` 65 | 66 | The JSONata Exerciser can then be accessed in a browser on your laptop at port 8080: `127.0.0.1:8080` or at the IP address assigned to the VM running the Docker engine: `http://192.168.188.142:8080` 67 | 68 | ## Use JSONata from Node JS 69 | JSONata has been implemented in JavaScript and can be used in a browser client as well as in a server-side Node JS application. Here we will take a look at the latter. 70 | 71 | To run a clean Node environment, execute the following command: 72 | `docker run -it --rm -p 8080:8080 -v "$PWD":/usr/src/app node:10 bash` 73 | 74 | This runs a container with the Node 10 runtime environment, with a mapping of the current working directory into the directory /usr/src/app inside the container and with port 8080 in the container exposed at port 8080 on the Docker host. This allows us to run a Node application that can handle HTTP requests at port 8080. 75 | 76 | Note: if you are working in a `vagrant ssh` shell, you may want to copy the directory jsonata-query-and-transform-json-documents into the directory that contains the Vagrantfile. This makes the directory available inside the Linux environment under /vagrant. If you run the docker node container from this /vagrant directory, then you will have the Node application sources available inside the container - in the mounted /usr/src/app directory. Copy this mounted, read-only directory to a read-write container-owned directory: `cp -r /usr/src/app /app` and work in the /app directory.
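As a sketch of the API the sample scripts use: create a compiled expression with `jsonata(...)` and call `evaluate(document)` on it (synchronous in the jsonata 1.x versions used here). The session list below is a made-up stand-in for the OOW catalog, and the `require` is guarded so the snippet also runs before `npm install` has been executed:

```javascript
// Sketch of the jsonata API used by the sample scripts (jsonata@1.x: evaluate is synchronous).
// The require is guarded so this also runs where `npm install jsonata` has not been done;
// in that case a plain-JavaScript equivalent of the expression is used instead.
let jsonata = null;
try { jsonata = require("jsonata"); } catch (e) { /* module not installed yet */ }

// Made-up stand-in for the oow2018-sessions-catalog.json document
const catalog = [
  { title: "Getting Started with Cloud Native Java" },
  { title: "Advanced SQL Tuning" },
  { title: "Moving Your Workloads to the Cloud" }
];

// JSONata: count the sessions whose title contains "Cloud"
const expression = '$count($[][$contains(title,"Cloud")])';
const cloudSessionCount = jsonata
  ? jsonata(expression).evaluate(catalog)
  : catalog.filter(s => s.title.includes("Cloud")).length;
console.log(cloudSessionCount); // 2
```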
77 | 78 | Check out how jsonata is used in `explore-oracleopenworld-catalog.js` to extract data from the JSON document `oow2018-sessions-catalog.json`. 79 | 80 | Before you can run explore-oracleopenworld-catalog.js, you first need to run `npm install` to have the npm module jsonata installed. 81 | 82 | ### Exploring Country Data 83 | The document https://github.com/mledoze/countries/blob/master/countries.json contains country data in JSON format. We can explore this data using JSONata. 84 | 85 | Check out the contents of `explore-countries.js`. You can run this file with `node explore-countries.js`. It will load the JSON document and perform a number of data extractions from that document. 86 | 87 | Note for example how enrichment can be done - by composing a source document from multiple data sets and transforming that composite source with a single (or multiple) JSONata expression(s). 88 | 89 | Also note the use of variables that can help manage complex queries and transformations in an elegant, understandable way; see the comparison between the Netherlands and Belgium. 90 | 91 | ### Country Server 92 | The Node files `json-server.js` and `countries-module.js` together create a very simple HTTP service for providing country details. The service accepts HTTP GET requests with query parameters region (Europe, Asia, Africa, Americas, ...) and name (optional). The service returns a JSON document with the name and capital city of all countries in the indicated region whose name satisfies the optional name filter. 93 | 94 | To start the HTTP service, simply run `node json-server.js`. The HTTP server listens at port 8080, the port that is exposed from the container and mapped to port 8080 on the Linux server.
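The JSONata expression inside `countries-module.js` filters on region and an optional name fragment, and projects name and capital. A plain-JavaScript rendering of that logic - on a made-up two-country sample instead of the full countries.json - shows the shape of the result the service returns:

```javascript
// Plain-JavaScript equivalent of the JSONata expression in countries-module.js:
//   $[][region='<region>' and $contains(name.common,'<nameFilter>')]
//     .{"name": name.common, "capital": capital[0]}
// The two-country sample below is a made-up stand-in for the full countries.json.
const countries = [
  { name: { common: "France" }, region: "Europe", capital: ["Paris"] },
  { name: { common: "Brazil" }, region: "Americas", capital: ["Brasilia"] }
];

function countriesQuery(region, nameFilter) {
  return countries
    .filter(c => c.region === region &&
                 (!nameFilter || c.name.common.includes(nameFilter)))
    .map(c => ({ name: c.name.common, capital: c.capital[0] }));
}

// Corresponds to GET /?region=Europe&name=ra
console.log(JSON.stringify(countriesQuery("Europe", "ra")));
// [{"name":"France","capital":"Paris"}]
```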
95 | 96 | Access the service from Linux (for example with curl) or in your browser - at the IP address of the Linux Docker host (when on Windows, that is the IP address specified in the Vagrantfile, for example: 192.168.188.142) 97 | 98 | For example on Linux: `curl 'http://localhost:8080/?name=ra&region=Europe'` 99 | 100 | From the browser: `http://192.168.188.142:8080/?name=x&region=Americas` 101 | 102 | ### Use Dockerfile 103 | 104 | Build and run the Docker container: 105 | ``` 106 | docker build -t msm-countries . 107 | docker run -d -p 8080:8080 msm-countries 108 | ``` 109 | Request in browser: 110 | ``` 111 | http://localhost:8080/?name=ra&region=Europe 112 | ``` 113 | Stop the container: 114 | ``` 115 | docker ps # look up the CONTAINER ID of msm-countries 116 | docker stop <container-id> 117 | ``` 118 | 119 | ## Resources 120 | Homepage for JSONata: http://jsonata.org/ 121 | Documentation for JSONata: http://docs.jsonata.org/ 122 | npm module for JSONata: https://www.npmjs.com/package/jsonata 123 | GitHub Repo: https://github.com/jsonata-js/jsonata 124 | 125 | Support for JSONata in elastic.io: https://www.elastic.io/jsonata-transformation-language-building-complex-workflows/ 126 | -------------------------------------------------------------------------------- /jsonata-query-and-transform-json-documents/countries-module.js: -------------------------------------------------------------------------------- 1 | const jsonata = require("jsonata"); 2 | const request = require('request'); 3 | 4 | var countriesDocumentURL = "https://raw.githubusercontent.com/mledoze/countries/master/countries.json" 5 | var countries={}; 6 | request(countriesDocumentURL, function (error, response, body) { 7 | countries = JSON.parse(body) 8 | }) 9 | 10 | module.exports.countriesQuery = function (region, nameFilter) { 11 | var expression = ` $[][region='${region}' ${nameFilter ?
' and $contains( name.common, \''+nameFilter+'\')':''} ].{"name":name.common, "capital":capital[0]}` 12 | var expressionCountries = jsonata(expression); 13 | return expressionCountries.evaluate(countries); 14 | } 15 | 16 | -------------------------------------------------------------------------------- /jsonata-query-and-transform-json-documents/explore-countries.js: -------------------------------------------------------------------------------- 1 | const jsonata = require("jsonata"); 2 | const request = require('request'); 3 | 4 | var countriesDocumentURL = "https://raw.githubusercontent.com/mledoze/countries/master/countries.json" 5 | request(countriesDocumentURL, function (error, response, body) { 6 | var countries = JSON.parse(body) 7 | 8 | var expressionAllCaribbeanCountries = jsonata( 9 | ` $[][subregion='Caribbean'].{"name":name.common, "capital":capital[0]} 10 | `); 11 | console.log("All Caribbean Countries\n" + JSON.stringify(expressionAllCaribbeanCountries.evaluate(countries))) 12 | 13 | var expressionAllCountriesBorderingWithChina = jsonata( 14 | `$[][$contains($join(borders),'CHN')].name.common 15 | `); 16 | console.log("All Countries [land]bordering with China \n" + JSON.stringify(expressionAllCountriesBorderingWithChina.evaluate(countries))) 17 | 18 | var enrichedCountries = { 19 | "countries": countries 20 | , "continents": 21 | [ { "name": "Asia", "population": 4436224000 } 22 | , { "name": "Europe", "population": 738849000 } 23 | , { "name": "Africa", "population": 1216130000 } 24 | , { "name": "Americas", "population": 10015590000 } 25 | ] 26 | } 27 | 28 | var expressionEnrichedCountriesWithY = jsonata( 29 | `$.countries[][$contains(name.common,'y')].{ 30 | "name":name.common 31 | , "capital":capital[0] 32 | , "continent":region 33 | , "continentPopulation": ($var:= region; $$.continents[name=$var].population) 34 | } 35 | 36 | `); 37 | console.log("Enriched Countries with Y in name\n" +
JSON.stringify(expressionEnrichedCountriesWithY.evaluate(enrichedCountries))) 38 | 39 | var expressionCompareNedAndBel = jsonata( 40 | `( 41 | $ned:= $[][name.common='Netherlands']; 42 | $bel:= $[][name.common='Belgium']; 43 | {"Comparison": "Netherlands and Belgium" 44 | , "Region": $ned.region & ' vs ' & $bel.region 45 | , "Capital": $ned.capital & ' vs ' & $bel.capital 46 | , "Languages": $ned.languages & ' vs ' & $bel.languages 47 | , "Area": $ned.area & ' vs ' & $bel.area 48 | }) 49 | `); 50 | console.log("Comparison between Netherlands and Belgium\n" 51 | + JSON.stringify(expressionCompareNedAndBel.evaluate(countries))) 52 | 53 | }); -------------------------------------------------------------------------------- /jsonata-query-and-transform-json-documents/explore-oracleopenworld-catalog.js: -------------------------------------------------------------------------------- 1 | var jsonata = require("jsonata"); 2 | var fs = require("fs"); 3 | 4 | 5 | var oow2018Filename = 'oow2018-sessions-catalog.json'; 6 | 7 | var buf = fs.readFileSync(oow2018Filename); 8 | var catalog = JSON.parse(buf) 9 | 10 | var expressionAllSQLSessions = jsonata( 11 | ` $[][$contains(title," SQL ")]."title" 12 | `); 13 | console.log("All SQL Sessions\n"+ JSON.stringify(expressionAllSQLSessions.evaluate(catalog))) 14 | 15 | var expressionCountAllCloudSessions = jsonata( 16 | `$count($[][$contains(title,"Cloud")]) 17 | `); 18 | console.log("Number of Cloud Sessions\n"+ JSON.stringify(expressionCountAllCloudSessions.evaluate(catalog))) 19 | 20 | var expressionCountSessionsPresentedByArchitect = jsonata( 21 | `$count($[][speakers.$contains(jobTitle,"Architect")]) 22 | `); 23 | console.log("Number of Sessions presented by Architect\n"+ JSON.stringify(expressionCountSessionsPresentedByArchitect.evaluate(catalog))) 24 | 25 | 26 | var expressionSessionsInSpecificRoom = jsonata( 27 | `$[][slots.room='Moscone West - Room 2003'].{"title":title, "date":slots.date,"time":slots.time} 28 | `); 29 | 
console.log("All Sessions in Moscone West - Room 2003\n"+ JSON.stringify(expressionSessionsInSpecificRoom.evaluate(catalog))) 30 | -------------------------------------------------------------------------------- /jsonata-query-and-transform-json-documents/json-server.js: -------------------------------------------------------------------------------- 1 | const http = require('http'); 2 | const querystring = require('querystring'); 3 | 4 | const countriesModule = require('./countries-module.js') 5 | const server = http.createServer(); 6 | server.on('request', (request, response) => { 7 | if (request.method === 'GET') { 8 | 9 | var query = querystring.parse(request.url.split('?')[1]); 10 | response.setHeader('Content-Type', 'application/json'); 11 | response.setHeader('X-Powered-By', 'code-cafe'); 12 | response.end(JSON.stringify(countriesModule.countriesQuery(query.region, query.name))); 13 | }//if GET 14 | }); 15 | server.listen(8080); 16 | console.log('JSON Server is running and listening at port 8080' ) -------------------------------------------------------------------------------- /jsonata-query-and-transform-json-documents/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "node-explore-oow18-with-jsonata", 3 | "version": "0.0.0", 4 | "private": true, 5 | "scripts": { 6 | "start": "node explore-oracleopenworld-catalog.js" 7 | }, 8 | "dependencies": { 9 | "jsonata": "^1.5.4", 10 | "request": "^2.88.0" 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /kafka-connect-workshop/Introduction Kafka Connect.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/kafka-connect-workshop/Introduction Kafka Connect.pptx -------------------------------------------------------------------------------- /kafka-connect-workshop/LICENSE: 
-------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Maarten Smeets 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /kafka-connect-workshop/README.md: -------------------------------------------------------------------------------- 1 | # kafka-connect-workshop -------------------------------------------------------------------------------- /kafka-connect-workshop/Workshop Kafka Connect.docx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/kafka-connect-workshop/Workshop Kafka Connect.docx -------------------------------------------------------------------------------- /kafka-connect-workshop/files/Dockerfile: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright 2018 Confluent Inc. 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | 16 | FROM confluentinc/cp-kafka-connect:5.1.2 17 | COPY outputas.txt /tmp 18 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/docker-compose.yml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '2' 3 | services: 4 | zookeeper: 5 | image: confluentinc/cp-zookeeper:5.1.2 6 | hostname: zookeeper 7 | container_name: zookeeper 8 | ports: 9 | - "2181:2181" 10 | environment: 11 | ZOOKEEPER_CLIENT_PORT: 2181 12 | ZOOKEEPER_TICK_TIME: 2000 13 | 14 | broker: 15 | image: confluentinc/cp-enterprise-kafka:5.1.2 16 | hostname: broker 17 | container_name: broker 18 | depends_on: 19 | - zookeeper 20 | ports: 21 | - "9092:9092" 22 | - "29092:29092" 23 | environment: 24 | KAFKA_BROKER_ID: 1 25 | KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181' 26 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 27 | KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092,PLAINTEXT_HOST://localhost:29092 28 | KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter 29 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 30 | KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0 31 | CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker:9092 32 | CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181 33 | CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1 34 | CONFLUENT_METRICS_ENABLE: 'true' 35 | CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous' 36 | 37 | schema-registry: 38 | image: confluentinc/cp-schema-registry:5.1.2 39 | hostname: schema-registry 40 | container_name: schema-registry 41 | depends_on: 42 | - zookeeper 43 | - broker 44 | ports: 45 | - "8081:8081" 46 | environment: 47 | SCHEMA_REGISTRY_HOST_NAME: schema-registry 48 | SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181' 49 | 50 | connect: 51 | image: confluentinc/kafka-connect:latest 52 | build: 53 | context: . 
54 | dockerfile: Dockerfile 55 | hostname: connect 56 | container_name: connect 57 | depends_on: 58 | - zookeeper 59 | - broker 60 | - schema-registry 61 | ports: 62 | - "8083:8083" 63 | environment: 64 | CONNECT_BOOTSTRAP_SERVERS: 'broker:9092' 65 | CONNECT_REST_ADVERTISED_HOST_NAME: connect 66 | CONNECT_REST_PORT: 8083 67 | CONNECT_GROUP_ID: compose-connect-group 68 | CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs 69 | CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1 70 | CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000 71 | CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets 72 | CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1 73 | CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status 74 | CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1 75 | CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter 76 | CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.storage.StringConverter 77 | CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081 78 | CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter" 79 | CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter" 80 | CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181' 81 | # Assumes image is based on confluentinc/kafka-connect-datagen:latest which is pulling 5.1.2 Connect image 82 | CLASSPATH: /usr/share/java/monitoring-interceptors/monitoring-interceptors-5.1.2.jar 83 | CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor" 84 | CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor" 85 | CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components" 86 | CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR 87 | 88 | control-center: 89 | image: confluentinc/cp-enterprise-control-center:5.1.2 90 | hostname: control-center 91 | container_name: control-center 92 | depends_on: 93 | - zookeeper 94 
| - broker 95 | - schema-registry 96 | - connect 97 | ports: 98 | - "9021:9021" 99 | environment: 100 | CONTROL_CENTER_BOOTSTRAP_SERVERS: 'broker:9092' 101 | CONTROL_CENTER_ZOOKEEPER_CONNECT: 'zookeeper:2181' 102 | CONTROL_CENTER_CONNECT_CLUSTER: 'connect:8083' 103 | CONTROL_CENTER_KSQL_URL: "http://ksql-server:8088" 104 | CONTROL_CENTER_KSQL_ADVERTISED_URL: "http://localhost:8088" 105 | CONTROL_CENTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081" 106 | CONTROL_CENTER_REPLICATION_FACTOR: 1 107 | CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1 108 | CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1 109 | CONFLUENT_METRICS_TOPIC_REPLICATION: 1 110 | PORT: 9021 111 | 112 | rest-proxy: 113 | image: confluentinc/cp-kafka-rest:5.1.2 114 | depends_on: 115 | - zookeeper 116 | - broker 117 | - schema-registry 118 | ports: 119 | - 8082:8082 120 | hostname: rest-proxy 121 | container_name: rest-proxy 122 | environment: 123 | KAFKA_REST_HOST_NAME: rest-proxy 124 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:9092' 125 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 126 | KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081' 127 | 128 | elasticsearch: 129 | image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1 130 | container_name: elasticsearch 131 | environment: 132 | - node.name=es01 133 | - cluster.name=docker-cluster 134 | - bootstrap.memory_lock=true 135 | - "ES_JAVA_OPTS=-Xms512m -Xmx512m" 136 | ulimits: 137 | nproc: 65535 138 | memlock: 139 | soft: -1 140 | hard: -1 141 | cap_add: 142 | - ALL 143 | privileged: true 144 | ports: 145 | - 9200:9200 146 | - 9300:9300 147 | 148 | kibana: 149 | image: docker.elastic.co/kibana/kibana-oss:6.6.1 150 | container_name: kibana 151 | environment: 152 | SERVER_NAME: localhost 153 | ELASTICSEARCH_URL: http://elasticsearch:9200/ 154 | ports: 155 | - 5601:5601 156 | ulimits: 157 | nproc: 65535 158 | memlock: 159 | soft: -1 160 | hard: -1 161 | cap_add: 162 | - ALL 163 | 
-------------------------------------------------------------------------------- /kafka-connect-workshop/files/elasticsearchsink.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "elasticsearch-sink", 3 | "config":{ 4 | "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector", 5 | "topics":"connect-test", 6 | "key.ignore":"true", 7 | "schema.ignore":"true", 8 | "connection.url":"http://elasticsearch:9200", 9 | "type.name":"doc", 10 | "key.converter":"org.apache.kafka.connect.json.JsonConverter", 11 | "value.converter":"org.apache.kafka.connect.json.JsonConverter", 12 | "key.converter.schemas.enable":"false", 13 | "value.converter.schemas.enable":"false" 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/filesource.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "local-file-source", 3 | "config":{ 4 | "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector", 5 | "file": "/tmp/outputas.txt", 6 | "topic": "connect-test", 7 | "key.converter":"org.apache.kafka.connect.storage.StringConverter", 8 | "value.converter":"org.apache.kafka.connect.storage.StringConverter", 9 | "batch.size":200000 10 | } 11 | } 12 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/load.ps1: -------------------------------------------------------------------------------- 1 | .\loadfilesource.ps1 2 | sleep 20 3 | .\loadoutputasindex.ps1 4 | sleep 20 5 | .\loadelasticsearchsink.ps1 6 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/load.sh: -------------------------------------------------------------------------------- 1 | ./loadfilesource.sh 2 | sleep 20 3 | ./loadoutputasindex.sh 4 | sleep 20 5 | ./loadelasticsearchsink.sh 6 | 
-------------------------------------------------------------------------------- /kafka-connect-workshop/files/loadelasticsearchsink.ps1: -------------------------------------------------------------------------------- 1 | New-Variable -Name "dockernatip" -Visibility Public -Value ((Get-NetIPAddress | Where-Object { $_.InterfaceAlias -eq "vEthernet (DockerNAT)" -and $_.AddressFamily -eq "IPv4"}).IPAddress) 2 | New-Variable -Name "uri" -Visibility Public -Value (-join("http://",$dockernatip,":8083/connectors")) 3 | Invoke-WebRequest -uri $uri -Method Post -ContentType 'application/json' -Infile elasticsearchsink.json 4 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/loadelasticsearchsink.sh: -------------------------------------------------------------------------------- 1 | curl -X POST -H "Content-Type: application/json" --data @elasticsearchsink.json http://localhost:8083/connectors 2 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/loadfilesource.ps1: -------------------------------------------------------------------------------- 1 | New-Variable -Name "dockernatip" -Visibility Public -Value ((Get-NetIPAddress | Where-Object { $_.InterfaceAlias -eq "vEthernet (DockerNAT)" -and $_.AddressFamily -eq "IPv4"}).IPAddress) 2 | New-Variable -Name "uri" -Visibility Public -Value (-join("http://",$dockernatip,":8083/connectors")) 3 | Invoke-WebRequest -uri $uri -Method Post -ContentType 'application/json' -Infile filesource.json 4 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/loadfilesource.sh: -------------------------------------------------------------------------------- 1 | curl -X POST -H "Content-Type: application/json" --data @filesource.json http://localhost:8083/connectors 2 | -------------------------------------------------------------------------------- 
/kafka-connect-workshop/files/loadoutputasindex.ps1: -------------------------------------------------------------------------------- 1 | New-Variable -Name "dockernatip" -Visibility Public -Value ((Get-NetIPAddress | Where-Object { $_.InterfaceAlias -eq "vEthernet (DockerNAT)" -and $_.AddressFamily -eq "IPv4"}).IPAddress) 2 | New-Variable -Name "uri" -Visibility Public -Value (-join("http://",$dockernatip,":9200/connect-test")) 3 | Invoke-WebRequest -uri $uri -Method Put -ContentType 'application/json' -Infile outputasindex.json 4 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/loadoutputasindex.sh: -------------------------------------------------------------------------------- 1 | curl -X PUT -H "Content-Type: application/json" --data @outputasindex.json http://localhost:9200/connect-test 2 | 3 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/outputasindex.json: -------------------------------------------------------------------------------- 1 | { 2 | "settings": { 3 | "number_of_replicas": 1, 4 | "number_of_shards": 3, 5 | "analysis": {}, 6 | "refresh_interval": "1s" 7 | }, 8 | "mappings": { 9 | "doc": { 10 | "properties": { 11 | "actions": { 12 | "type": "text" 13 | }, 14 | "eventtime": { 15 | "type": "date", 16 | "format": "epoch_millis" 17 | }, 18 | "filename": { 19 | "type": "text" 20 | }, 21 | "owner": { 22 | "type": "text" 23 | }, 24 | "watch": { 25 | "type": "text" 26 | } 27 | } 28 | } 29 | } 30 | } -------------------------------------------------------------------------------- /kafka-connect-workshop/files/readme.txt: -------------------------------------------------------------------------------- 1 | Docker desktop on Windows 2 | docker-machine ssh 3 | sudo sysctl -w vm.max_map_count=262144 4 | exit 5 | 6 | Linux 7 | sysctl -w vm.max_map_count=262144 8 | 
-------------------------------------------------------------------------------- /kafka-connect-workshop/files/restart.ps1: -------------------------------------------------------------------------------- 1 | docker-compose stop 2 | docker-compose rm -f 3 | docker-compose up -d 4 | 5 | -------------------------------------------------------------------------------- /kafka-connect-workshop/files/restart.sh: -------------------------------------------------------------------------------- 1 | docker-compose stop 2 | docker-compose rm -f 3 | docker-compose up -d 4 | 5 | -------------------------------------------------------------------------------- /katacoda/assets/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | networks: 4 | cafenet: 5 | 6 | services: 7 | proxy: 8 | image: traefik 9 | command: --web --web.metrics.prometheus --web.metrics.prometheus.buckets="0.1,0.3,1.2,5.0" --docker --docker.domain=docker.localhost --logLevel=DEBUG 10 | ports: 11 | - "80:80" 12 | - "8080:8080" 13 | - "443:443" 14 | volumes: 15 | - /var/run/docker.sock:/var/run/docker.sock 16 | - /dev/null:/traefik.toml 17 | networks: 18 | - cafenet 19 | -------------------------------------------------------------------------------- /katacoda/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | networks: 4 | cafenet: 5 | 6 | services: 7 | proxy: 8 | image: traefik 9 | command: --web --web.metrics.prometheus --web.metrics.prometheus.buckets="0.1,0.3,1.2,5.0" --docker --docker.domain=docker.localhost --logLevel=DEBUG 10 | ports: 11 | - "80:80" 12 | - "8080:8080" 13 | - "443:443" 14 | volumes: 15 | - /var/run/docker.sock:/var/run/docker.sock 16 | - /dev/null:/traefik.toml 17 | networks: 18 | - cafenet 19 | -------------------------------------------------------------------------------- /katacoda/finish.md: 
-------------------------------------------------------------------------------- 1 | # Wow! Well Done! 2 | 3 | You have seen Traefik in action - as edge router and load balancer, able to dynamically adapt to new containers and additional container instances. 4 | 5 | There is much more to discover. 6 | 7 | ### Other fun topics 8 | * Health Checks 9 | * Circuit Breaking (take a badly performing backend server out of the pool) 10 | * Header Manipulation 11 | * Timeout 12 | * IP White Listing 13 | * Canary Release (using load balancing with uneven weights for the new release endpoint vs the existing release) 14 | * Throttling (rate limiting) 15 | 16 | ## Resources 17 | Traefik Homepage: https://traefik.io/ 18 | Traefik Documentation: https://docs.traefik.io/ 19 | 20 | Getting Started Scenario: https://docs.traefik.io/ 21 | 22 | Also check out this other Katacoda scenario: [Load Balance Containers using Traefik](https://www.katacoda.com/courses/traefik/deploy-load-balancer) 23 | 24 | The sources for this Katacoda scenario are also available on [GitHub](https://github.com/lucasjellema/katacoda-scenarios/tree/master/httpsgithubcomlucasjellemakatacoda-scenarios). 25 | -------------------------------------------------------------------------------- /katacoda/index.json: -------------------------------------------------------------------------------- 1 | { 2 | "title": "Traefik and Docker Compose in action", 3 | "description": "In this scenario, we will use `docker-compose` to start several containers that are behind a single Traefik edge router instance that performs routing and load balancing. 
Note that Traefik itself also runs in a Docker container.", 4 | "difficulty": "beginner", 5 | "time": "10-15 minutes", 6 | "details": { 7 | "steps": [ 8 | { 9 | "title": "Run Traefik using Docker Compose", 10 | "text": "step1.md" 11 | }, 12 | { 13 | "title": "Step 2 - Start a Docker Container that Traefik Routes traffic to", 14 | "text": "step2.md" 15 | }, 16 | { 17 | "title": "Step 3 - Start a second Docker Container that Traefik also Routes traffic to", 18 | "text": "step3.md" 19 | }, 20 | { 21 | "title": "Step 4 - Scale Up one of the Docker Containers and watch how Traefik starts load balancing", 22 | "text": "step4.md" 23 | } 24 | ], 25 | "intro": { 26 | "text": "intro.md" 27 | }, 28 | "finish": { 29 | "text": "finish.md" 30 | }, 31 | "assets": { 32 | "client": [ 33 | { 34 | "file": "docker-compose.yml", 35 | "target": "~/" 36 | } 37 | ] 38 | } 39 | }, 40 | "environment": { 41 | "showdashboard": true, 42 | "dashboards": [ 43 | { 44 | "name": "Traefik Dashboard", 45 | "port": 8080 46 | } 47 | ], 48 | "uilayout": "editor-terminal", 49 | "uimessage1": "\u001b[32mYour Interactive Bash Terminal. A safe place to learn and execute commands.\u001b[m\r\n" 50 | }, 51 | "backend": { 52 | "imageid": "docker" 53 | } 54 | } -------------------------------------------------------------------------------- /katacoda/intro.md: -------------------------------------------------------------------------------- 1 | # Traefik 2 | 3 | TL;DR; 4 | Traefik is an edge router, first released in 2015. It does load balancing, routing, IP white listing, SSL offloading, header rewriting, health checking, circuit breaking and service discovery. It supports dynamic reconfiguration, exposes metrics for monitoring and can be run in cluster configuration. Traefik plays very nicely with orchestration platforms - such as Kubernetes, Mesos, Swarm and Docker Engine - as well as service discovery components and configuration tools such as Consul, Eureka, etcd, Zookeeper. 
 5 | 6 | Traefik compares with HAProxy, NGINX, Apache HTTP Server, AWS Elastic Load Balancing, Kong, Zuul and even Varnish (a web application accelerator, also known as a caching HTTP reverse proxy) and hardware load balancers. 7 | 8 | [traefik.io](https://traefik.io/) 9 | 10 | In this scenario, you will spin up a number of Docker containers: one for Traefik and two more for services that Traefik fronts, routing requests to them and load balancing requests across them. 11 | 12 | You will get a feel for the bare functionality of Traefik - and how to get it going. -------------------------------------------------------------------------------- /katacoda/step1.md: -------------------------------------------------------------------------------- 1 | Execute this command to run docker-compose: `docker-compose up -d`{{execute}} 2 | 3 | When the previous command is complete, check the running Docker containers: `docker ps`{{execute}} 4 | 5 | Check out the Traefik dashboard - by clicking on the tab labeled *Traefik Dashboard* or by clicking on this link: 6 | https://[[HOST_SUBDOMAIN]]-8080-[[KATACODA_HOST]].environments.katacoda.com/dashboard/ 7 | 8 | The Traefik metrics are available at: 9 | https://[[HOST_SUBDOMAIN]]-8080-[[KATACODA_HOST]].environments.katacoda.com/metrics 10 | 11 | The Traefik API is exposed at: 12 | https://[[HOST_SUBDOMAIN]]-8080-[[KATACODA_HOST]].environments.katacoda.com/api 13 | -------------------------------------------------------------------------------- /katacoda/step2.md: -------------------------------------------------------------------------------- 1 | Click on the file docker-compose.yml to open the editor. 2 | 3 | Copy this service definition into the docker-compose.yml file: 4 |
 5 |   code-cafe-machine:
 6 |     image: katacoda/docker-http-server
 7 |     labels:
 8 |       - "traefik.backend=code-cafe-machine-echo"
 9 |       - "traefik.frontend.rule=Host:machine.code.cafe"
10 |     networks:
11 |       - cafenet
12 | 
13 | 
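The two labels above are how Traefik discovers this service: `traefik.backend` names the backend pool and `traefik.frontend.rule` declares which requests it receives. As a rough sketch of what the labels express (an illustrative simplification, not Traefik's actual parsing code):

```javascript
// Sketch: turn Traefik-style container labels into a routing entry.
// Illustrative only - Traefik's real configuration model is richer.
function parseTraefikLabels(labels) {
  const route = {};
  for (const label of labels) {
    const eq = label.indexOf("=");
    const key = label.slice(0, eq);
    const value = label.slice(eq + 1);
    if (key === "traefik.backend") route.backend = value;
    if (key === "traefik.frontend.rule" && value.startsWith("Host:")) {
      route.host = value.slice("Host:".length);
    }
  }
  return route;
}

console.log(parseTraefikLabels([
  "traefik.backend=code-cafe-machine-echo",
  "traefik.frontend.rule=Host:machine.code.cafe",
]));
// → { backend: 'code-cafe-machine-echo', host: 'machine.code.cafe' }
```

In other words: requests whose Host header is machine.code.cafe are handed to the code-cafe-machine-echo backend.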
 14 | 15 | Now execute this command to stop and tear down all Docker artifacts created before, using docker-compose: `docker-compose down`{{execute}} 16 | 17 | Let's restart Traefik as well as the newly defined container, using docker-compose: `docker-compose up -d`{{execute}} 18 | 19 | Now check the Traefik dashboard - by clicking on the tab labeled *Traefik Dashboard* or by clicking on this link: 20 | https://[[HOST_SUBDOMAIN]]-8080-[[KATACODA_HOST]].environments.katacoda.com/dashboard/ 21 | 22 | You should see a new frontend (rule) and backend - based on the Docker container added in this step through the Traefik labels. 23 | 24 | Let's try to invoke the new service: 25 | `curl -H Host:machine.code.cafe http://host01`{{execute}} 26 | -------------------------------------------------------------------------------- /katacoda/step3.md: -------------------------------------------------------------------------------- 1 | Click on the file docker-compose.yml to open the editor. 2 | 3 | We will start up another container that will result in one more frontend associated with a new backend. 4 | 5 | Copy this service definition into the docker-compose.yml file: 6 |
 7 |   code-cafe-echo:
 8 |     image: katacoda/docker-http-server:v2
 9 |     labels:
10 |       - "traefik.backend=code-cafe-echo"
11 |       - "traefik.frontend.rule=Host:echo.code.cafe"
12 |     networks:
13 |       - cafenet
14 | 
15 | 
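With this second service defined, Traefik holds two Host rules side by side. The per-request routing decision can be sketched like this (a hypothetical routing table mirroring the two services in this scenario, not Traefik's implementation):

```javascript
// Sketch: pick a backend based on the request's Host header.
// The table mirrors the two frontend rules defined in steps 2 and 3.
const routes = {
  "machine.code.cafe": "code-cafe-machine-echo",
  "echo.code.cafe": "code-cafe-echo",
};

function routeByHost(hostHeader) {
  // When no frontend rule matches, Traefik answers with a 404 itself.
  return routes[hostHeader] || "404 page not found";
}

console.log(routeByHost("echo.code.cafe"));    // → code-cafe-echo
console.log(routeByHost("machine.code.cafe")); // → code-cafe-machine-echo
```

This is why the `curl -H Host:...` invocations in this scenario work: the Host header, not the URL, selects the backend.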
 16 | 17 | Now execute this command to stop and tear down all Docker artifacts created before, using docker-compose: `docker-compose down`{{execute}} 18 | 19 | Let's restart Traefik as well as the newly defined container, using docker-compose: `docker-compose up -d`{{execute}} 20 | 21 | Now check the Traefik dashboard - by clicking on the tab labeled *Traefik Dashboard* or by clicking on this link: 22 | https://[[HOST_SUBDOMAIN]]-8080-[[KATACODA_HOST]].environments.katacoda.com/dashboard/ 23 | 24 | You should see a new frontend (rule) and backend - based on the Docker container *code-cafe-echo* added in this step through the Traefik labels. 25 | 26 | Let's try to invoke the new service: 27 | `curl -H Host:echo.code.cafe http://host01`{{execute}} 28 | -------------------------------------------------------------------------------- /katacoda/step4.md: -------------------------------------------------------------------------------- 1 | Now execute this command to scale the code-cafe-machine service up to three container instances: `docker-compose scale code-cafe-machine=3`{{execute}} 2 | 3 | Now check the Traefik dashboard - by clicking on the tab labeled *Traefik Dashboard* or by clicking on this link: 4 | https://[[HOST_SUBDOMAIN]]-8080-[[KATACODA_HOST]].environments.katacoda.com/dashboard/ 5 | 6 | You should see three servers for the code-cafe-machine backend. 7 | 8 | Now try to invoke the code-cafe-machine service - several times: 9 | `curl -H Host:machine.code.cafe http://host01`{{execute}} 10 | 11 | The load balancing act performed by Traefik should result in the requests being routed to different containers in a round-robin pattern. 
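The rotation over the three code-cafe-machine containers can be sketched as follows (a minimal round-robin selector for illustration; Traefik's load balancer supports more strategies, such as weighted round robin):

```javascript
// Sketch: round-robin selection over the scaled-up backend instances.
// The names machine-1..machine-3 are placeholders for the three containers.
function makeRoundRobin(servers) {
  let next = 0;
  return function pick() {
    const server = servers[next];
    next = (next + 1) % servers.length; // wrap around after the last server
    return server;
  };
}

const pick = makeRoundRobin(["machine-1", "machine-2", "machine-3"]);
console.log([pick(), pick(), pick(), pick()]);
// → [ 'machine-1', 'machine-2', 'machine-3', 'machine-1' ]
```

Repeated curl requests should cycle through the three containers in the same way.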
 12 | -------------------------------------------------------------------------------- /kibana/introductie kibana.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | 3 | Kibana is a simple but powerful tool to quickly gain reusable insight into large amounts of data, once doing so by hand is no longer feasible. Kibana builds on Elasticsearch, a library for searching large volumes of text. About Elasticsearch: data in JSON, access via a REST API, free-form document structure. Together with Logstash they form the 'Elastic Stack', formerly also known as the 'ELK stack'. 4 | 5 | ## Installing Kibana 6 | 7 | Hosted: 8 | `https://www.elastic.co/cloud/elasticsearch-service/signup` 9 | 10 | Windows: 11 | `https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.7.0.zip` 12 | `https://artifacts.elastic.co/downloads/kibana/kibana-6.7.0-windows-x86_64.zip` 13 | Download the zips, unzip them and start .\bin\elasticsearch.bat and .\bin\kibana.bat 14 | 15 | RPM: 16 | `https://www.elastic.co/guide/en/kibana/current/rpm.html` 17 | 18 | ## Loading sample data 19 | 20 | - On the home page, click the link next to 'Sample data'. 21 | - On the eCommerce sample data card, click 'add'. 22 | - Click 'view data'. 23 | 24 | # Why visualize? 25 | Integration is anything but glamorous. Anyone working between back end and front end knows that the most elegant solutions often go unseen. Visualizing makes that invisible work visible. The 'business' often wants insight into how the development and maintenance of an application are going, regardless of how that work is done. By making things visual you break through the 'language barrier' between IT people and non-IT people. A visual interface also makes monitoring much easier, for yourself but also for other parties. That way, someone who does not know your application well can still see what is happening with it, and act on that. 
Visualizations are also ideal for demos and weekly/monthly reports. 26 | 27 | # What to do, and what to avoid? 28 | ## Do: 29 | - Shape your data with the goal in mind: think about what you ultimately want to display or monitor, and let that determine what you store and how. 30 | - Think in indexes, not in tables; it is not about a uniform dataset. 31 | - Apply proper mappings and tokenizations to give your input meaning. 32 | ## Don't: 33 | - Relations between entries. It may take some getting used to, but anything goes and anything is allowed, without any relation between entries. 34 | - Calculating values afterwards. This actually conflicts with the first 'do' as well. Kibana is not a calculation tool: sorting, counting, averaging and the like work fine, of course, but think about your data up front. 35 | 36 | # What does Kibana offer? 37 | - **Discover**: Elasticsearch (with the Lucene Query String syntax), an entry point into your data to filter, sort and actually see your entries. 38 | - **Visualize**: turning one or more properties into a visual. 39 | - **Dashboarding**: multiple visualizations and tables at a glance. 40 | - **Timelion**: time series to show how data develops over time. 41 | - **Canvas**: reusable reports, filled with live data. 42 | - **Machine learning**: simple detection of anomalies and the like, to get fewer worthless alerts and an actual heads-up when something 'odd' happens. 43 | 44 | # Examples from practice 45 | MISA at Medux uses Kibana for: 46 | * Event reporting and monitoring (success rate, error types, API performance, etc.) 47 | * Data tracking across APIs 48 | * Troubleshooting 49 | * 'Giveaway dashboards': dashboards that are useful for feedback by customer service / front end / back end / business 50 | * Machine-learned anomaly detection 51 | 52 | # Get started yourself! 
 53 | - Install Kibana 54 | - On the Discover page, select the eCommerce sample data 55 | - Try to get a feel for the data on the Discover page 56 | - Create a new visualization 57 | - Create a dashboard and add your visualization to it 58 | - Make the dashboard interactive using input elements 59 | -------------------------------------------------------------------------------- /lastPass-password-manager/README.md: -------------------------------------------------------------------------------- 1 | # Lastpass Password Manager 2 | 3 | ## Install and Sign-up/Sign-in 4 | 5 | The first thing to do is install the Lastpass extension. I've provided the URL for either [Google Chrome](https://chrome.google.com/webstore/detail/lastpass-free-password-ma/hdokiejnpimakedhajhdlcegeplioahd?hl=nl) or [Mozilla Firefox](https://addons.mozilla.org/en-US/firefox/addon/lastpass-password-manager/). Although you can use only the web interface of Lastpass, I found using the extensions much easier. After the installation a grey icon with three dots in it will appear on the right side next to the address bar. 6 | 7 | Click on the icon. A pop-up appears from which you can either log in or create a user account. Go ahead and create your user account, or log in if you already have one. When logged in, your "vault" will open. The vault is where all your accounts are stored. In the Vault paragraph I will explain what the vault is and show its possibilities. 8 | 9 | ### Vault 10 | 11 | All sites are stored in what's called the `Vault`. By default sites aren't stored in a folder. It is possible to create folders and organize sites in folders. You can see the credentials and more information when you click on a site in the Vault. It is also possible to create form-fills and to do a `Security Challenge`. Those possibilities will be explained in the Optional paragraph. 12 | 13 | ## Add a website 14 | 15 | You can add websites to your vault manually. 
The easiest option, though, is to add websites with the extension. To do this, go to a website and enter your credentials there. When you log in, Lastpass will ask you if you want to add the account to your Vault. Click `add` and the website is added. 16 | 17 | ## Login with Lastpass 18 | 19 | When you have added a website, go to the login page of that website. The Lastpass icon in the toolbar shows a number which lets you know how many accounts Lastpass has for this URL. By default Lastpass autofills your login credentials. This setting can be disabled (see [edit sites](#edit-sites)). The Lastpass icon appears inside the input fields as well. If you click on it you get the option to fill in your credentials or edit them. 20 | 21 | ## Generate Password 22 | 23 | Lastpass gives you the option to generate passwords as well. Click on the Lastpass icon and you'll see the option `Generate Secure Password`. When you generate a password you have some options. Those are: 24 | 25 | - Length 26 | - Uppercase and lowercase 27 | - Numbers 28 | - Special Characters 29 | - Pronounceable 30 | 31 | You can use up to 100 characters in length. You have basic options like uppercase, lowercase, numbers and special characters. Besides those options you also have the option to make your password pronounceable. This generates a password which supposedly is easier to pronounce and thus easier to remember. When using this option you can't use numbers or special characters. 32 | 33 | Click on `Use password` to use it on the current website or click on `Copy Password` to copy it. 34 | 35 | ## Optional 36 | 37 | Lastpass has more features which you can use. You can try these out if you are interested or if you are done with the rest. 38 | 39 | ### Forms 40 | 41 | When you have to fill in forms, most of the time you fill in the same information over and over again: name, address, email address, hometown, birthday and more. 
With forms you have the possibility to enter this kind of information with Lastpass. In the menu on the left side you see the option Form Fills. Press the plus sign if a form doesn't already exist. You can create a form with all the information you want Lastpass to fill in automatically. 42 | 43 | Besides name, age and hometown, you can also enter more private information like credit card or bank information. I personally don't have this information filled in except for name and surname, but it's up to you if you want to. 44 | 45 | To use the form, go to a website where you can enter this kind of information. Take the website [LinkedIn](https://www.linkedin.com/), for example. When you try to log in you also have the option to join now. This information can be filled in automatically. 46 | 47 | ### Edit Sites 48 | 49 | You can also edit a website. This means you can change the name, URL, username, password and folder. You also have advanced settings. These give you the possibility to use reprompt and auto-login, and to disable auto-fill. 50 | 51 | - Reprompt: You must enter your master password before you enter your credentials, as an extra security layer. Not only for entering the credentials but also when you want to edit some information. 52 | - Auto-login: You'll automatically log in with your credentials when accessing the website. 53 | - Auto-fill: automatically enters your credentials when you are on the login page. 54 | 55 | ### Security Challenge 56 | 57 | The security challenge evaluates how secure your passwords are. You get three scores: 58 | 59 | - Your Security Score: This is a combined rating of how strong your passwords generally are, meaning their overall length and complexity, with the highest possible score being 100 points. 60 | 61 | - Your Lastpass Standing: This compares your scores against all other LastPass users who have run the Security Challenge. 
You are placed in a percentile according to your current security score. The lower the percentage, the better your ranking. 62 | 63 | - Master Password: This rates how strong your Master Password is, based on length and complexity. 64 | 65 | After this score you see an overview with steps on how to improve your score. This is split up into four steps: 66 | 67 | 1. Change compromised passwords 68 | 2. Change weak passwords 69 | 3. Change reused passwords 70 | 4. Change old passwords 71 | 72 | You also see a detailed overview with all your passwords and how strong they are. Here you can also filter passwords based on one of the four steps. 73 | -------------------------------------------------------------------------------- /linux-and-docker-host-on-windows-machine/README.md: -------------------------------------------------------------------------------- 1 | # Linux and Docker Host on Windows machine 2 | In the Code Café, most explorations we do locally will take place inside a Docker container or at least on a Linux machine. This host or container could run anywhere - including on the cloud. However, it will probably be most convenient to simply run the Linux environment with a Docker engine on your laptop. 3 | 4 | If you happen to have a Windows laptop, this requires a little preparation. For Docker containers only, using the Docker Quickstart Terminal/Docker Desktop is one way of doing so, and to some extent that works fine. But whenever I want to have more control over the Linux environment that runs the Docker host, or I want to run multiple such environments in parallel, I like to just run one or more VMs under my own control and use them to run Docker inside. 5 | 6 | The easiest way to create and run a Linux environment with a Docker engine inside is using a combination of Vagrant and VirtualBox. VirtualBox runs the VM and takes care of networking to and from the VM, as well as mapping local directories on the Windows host machine into the VM. 
Vagrant runs on the Windows machine as a command line tool. It interacts with the VirtualBox APIs to create, start, pause, resume and stop the VMs. Based on simple declarative definitions (text files) it will configure the VM and take care of it. 7 | 8 | The steps to go through in order to get a Linux VM with a Docker engine on your Windows laptop are these: 9 | 10 | 1. Download and install VirtualBox - https://www.virtualbox.org/wiki/Downloads 11 | 2. Download and install Vagrant - https://www.vagrantup.com/docs/installation/ and https://www.vagrantup.com/downloads.html 12 | 3. Open a command line, navigate to the (this) directory that contains the Vagrantfile and run `vagrant up`. This will create a VirtualBox VM with Ubuntu 18.04 as operating system and Docker Engine including Docker Compose installed and running. Note that the directory containing the Vagrantfile is mapped into the VM as /vagrant. You can ping the VM: `ping 192.168.188.142` (or the IP address that you have configured in the Vagrantfile) 13 | 4. Use `vagrant ssh` to open a shell session into the VM. You can then execute `docker ps` to verify which containers are running (none) and that docker is installed in the VM. Note: you can run `vagrant ssh` from multiple command line windows to have several parallel shell sessions on the Linux VM. 14 | 5. With `docker run hello-world` you can quickly see Docker in action; this will pull and run a container image, show an output message and stop the container (see https://hub.docker.com/_/hello-world/ for details). With `docker run -it ubuntu bash` you can start an Ubuntu container and open a shell session into it; in this shell session you can run Linux commands such as `ls -l` and `ps -ef`. Type `exit` to abandon the container session (this also stops the container) 15 | 6. Use `exit` to leave the vagrant SSH shell session. Use `vagrant halt` on the Windows command line to shut down the VM. 
Note that the state of the VM is retained; if you restart using `vagrant up`, any changes to the file system are still intact. 17 | 18 | With `vagrant pause` and `vagrant resume` we can create a snapshot of the VM in mid-flight and at a later moment (which can be after a restart of the host system) continue where we left off. 19 | 20 | Using `vagrant destroy` you can completely remove the VM, releasing the host (disk) resources that were consumed by it. 21 | 22 | See this blog article for a little more detail on how to configure and run a Linux and Docker Host VM on a Windows machine: https://technology.amis.nl/2018/05/21/rapidly-spinning-up-a-vm-with-ubuntu-and-docker-on-my-windows-machine-using-vagrant-and-virtualbox/ -------------------------------------------------------------------------------- /linux-and-docker-host-on-windows-machine/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure(2) do |config| 5 | # The most common configuration options are documented and commented below. 6 | # For a complete reference, please see the online documentation at 7 | # https://docs.vagrantup.com. 8 | 9 | # Every Vagrant development environment requires a box. You can search for 10 | # boxes at https://atlas.hashicorp.com/search. 11 | # alternatives to this Ubuntu 18.04 box are "v0rtex/xenial64" with the 16.04 LTS and ubuntu/trusty64 with 14.04 LTS 12 | config.vm.box = "bento/ubuntu-18.04" 13 | # access a port on your host machine (via localhost) and have all data forwarded to a port on the guest machine. 14 | config.vm.network "forwarded_port", guest: 9092, host: 9192 15 | config.vm.network "forwarded_port", guest: 4040, host: 5050 16 | config.vm.network "forwarded_port", guest: 8080, host: 8180 17 | # Create a private network, which allows host-only access to the machine 18 | # using a specific IP - I have arbitrarily decided on 192.168.188.142. 
Feel free to change this 19 | config.vm.network "private_network", ip: "192.168.188.142" 20 | 21 | # define a larger than default (40GB) disksize 22 | # note: this requires the Vagrant plugin vagrant-disksize (see https://github.com/sprotheroe/vagrant-disksize) using "vagrant plugin install vagrant-disksize" 23 | # config.disksize.size = '50GB' 24 | 25 | config.vm.provider "virtualbox" do |vb| 26 | vb.name = 'ubuntu18-docker-vm' 27 | vb.memory = 4096 28 | vb.cpus = 1 29 | vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"] 30 | vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"] 31 | end 32 | 33 | # set up Docker in the new VM: 34 | config.vm.provision :docker 35 | # install docker-compose into the VM 36 | # note: this next line requires a plugin to be installed - using "vagrant plugin install vagrant-docker-compose" 37 | # config.vm.provision :docker_compose 38 | # install docker-compose into the VM and run the docker-compose.yml file in the local directory - if it exists - whenever the VM starts (https://github.com/leighmcculloch/vagrant-docker-compose) 39 | # config.vm.provision :docker_compose, yml: "/vagrant/docker-compose.yml", run:"always" 40 | 41 | end -------------------------------------------------------------------------------- /neo4j-graphdatabase/neo4j-node.js: -------------------------------------------------------------------------------- 1 | const neo4j = require('neo4j-driver').v1; 2 | const request = require('request'); 3 | 4 | // SET YOUR VALUE FOR THE PASSWORD AND THE IP ADDRESS WHERE THE NEO4J SERVER CAN BE ACCESSED!! 
5 | var user = "neo4j", password = "neo4j1", uri = "bolt://192.168.188.142:7687" 6 | 7 | const driver = neo4j.driver(uri, neo4j.auth.basic(user, password)); 8 | const session = driver.session(neo4j.WRITE); 9 | 10 | var countriesDocumentURL = "https://raw.githubusercontent.com/mledoze/countries/master/countries.json" 11 | async function addConstraints(tx) { 12 | console.log(`Adding Constraints`) 13 | return tx.run('CREATE CONSTRAINT ON (l:Language) ASSERT l.name IS UNIQUE').then(tx.run('CREATE CONSTRAINT ON (c:Country) ASSERT c.name IS UNIQUE').then(tx.run('CREATE CONSTRAINT ON (r:Region) ASSERT r.name IS UNIQUE').then(tx.run('CREATE CONSTRAINT ON (c:Country) ASSERT c.code IS UNIQUE')))); 14 | } 15 | 16 | 17 | async function addLanguages(tx, languages) { 18 | console.log(`Adding languages`) 19 | var s = ''; 20 | languages.forEach(element => s = s.concat(`CREATE (:Language {name: '${element}'}) 21 | `)) 22 | console.log("Statement " + s) 23 | return tx.run(s); 24 | } 25 | async function addRegions(tx, regions) { 26 | console.log(`Adding regions`) 27 | var s = ''; 28 | regions.forEach(element => s = s.concat(`CREATE (:Region {name: '${element}'}) 29 | `)) 30 | console.log("Statement " + s) 31 | return tx.run(s); 32 | } 33 | async function addSubRegions(tx, subregions) { 34 | console.log(`Adding subregions`) 35 | var s = ''; 36 | subregions.forEach(element => s = s.concat(`CREATE (:SubRegion {name: '${element}'}) 37 | `)) 38 | console.log("Statement " + s) 39 | return tx.run(s); 40 | } 41 | 42 | // Create a country node 43 | async function addCountry(tx, country) { 44 | console.log(`Adding country ${country.name.common}`) 45 | var createCountry = `CREATE (c:Country {name: $name, capital: $capital, area: $area, code: $code})` 46 | var updateCountry = `MATCH (sub:SubRegion{name:$subregion}), (reg:Region{name:$region}), (c:Country{code: $code}) 47 | ` 48 | for (var language in country.languages) { 49 | updateCountry = updateCountry.concat(` 50 | , 
(${language}:Language{name:'${country.languages[language]}'}) 51 | `) 52 | } 53 | 54 | updateCountry = updateCountry.concat(` 55 | MERGE (c)-[:PART_OF]->(sub) 56 | MERGE (c)-[:IN]->(reg) 57 | `) 58 | for (var language in country.languages) { 59 | updateCountry = updateCountry.concat(` 60 | MERGE (c)-[:SPEAKS]->(${language}) 61 | `) 62 | } 63 | console.log(`create country ${createCountry}`) 64 | console.log(`update country ${updateCountry}`) 65 | return tx.run(createCountry 66 | , { 67 | 'name': country.name.common, 'capital': country.capital, 'region': country.region 68 | , 'code': country.cca3, 'subregion': country.subregion 69 | , 'area': country.area 70 | }).then(tx.run(updateCountry 71 | , { 72 | 'name': country.name.common, 'capital': country.capital, 'region': country.region 73 | , 'code': country.cca3, 'subregion': country.subregion 74 | , 'area': country.area 75 | })); 76 | } 77 | 78 | // Create a country node 79 | async function addBorderingCountries(tx, country) { 80 | console.log(`Adding bordering countries for ${country.name.common}`) 81 | var matchCountry = `MATCH (c:Country { code: $code}) 82 | ` 83 | country.borders.forEach(borderingCountryCode => matchCountry = matchCountry.concat(` 84 | MATCH (${borderingCountryCode}:Country{code:'${borderingCountryCode}'}) 85 | `)) 86 | 87 | country.borders.forEach(borderingCountryCode => matchCountry = matchCountry.concat(` 88 | MERGE (c)-[:BORDERS_WITH]->(${borderingCountryCode}) 89 | `)) 90 | 91 | console.log(`add bordering countries ${matchCountry}`) 92 | return tx.run(matchCountry 93 | , { 94 | 'code': country.cca3 95 | }); 96 | } 97 | 98 | 99 | request(countriesDocumentURL, async function (error, response, body) { 100 | 101 | var countries = JSON.parse(body) 102 | // get unique region values (see: https://codeburst.io/javascript-array-distinct-5edc93501dc4) 103 | // take all elements in the countries array, for each of them: take the region element; create a s Set of all the resulting region values (a Set 
contains unique elements) 104 | var regions = [...new Set(countries.map(country => country.region))] 105 | var subregions = [...new Set(countries.map(country => country.subregion))] 106 | // see https://stackoverflow.com/questions/39837678/why-no-array-prototype-flatmap-in-javascript for this flatMap function 107 | const flatMap = (f, xs) => 108 | xs.reduce((acc, x) => 109 | acc.concat(f(x)), []) 110 | 111 | // take all elements in the countries array, for each of them: take the array of languages ); create one big array of all small arrays of languages (this is what the flatmap does) and turn that big array into a Set (of unique language values) 112 | var languages = [...new Set(flatMap(country => Object.values(country.languages), countries))] 113 | 114 | // prepare constraints in the Neo4J database 115 | await session.writeTransaction(tx => addConstraints(tx)) 116 | //now create objects in Neo4J for each category 117 | await session.writeTransaction(tx => addLanguages(tx, languages)) 118 | await session.writeTransaction(tx => addRegions(tx, regions)) 119 | await session.writeTransaction(tx => addSubRegions(tx, subregions)) 120 | 121 | for (var i = 0; i < countries.length; i++) { 122 | var country = countries[i] 123 | await session.writeTransaction(tx => addCountry(tx, country)) 124 | } 125 | // add bordering countries 126 | for (var i = 0; i < countries.length; i++) { 127 | var country = countries[i] 128 | if (country.borders && country.borders.length > 0) 129 | await session.writeTransaction(tx => addBorderingCountries(tx, country)) 130 | } 131 | 132 | const fromCountry = 'France', toCountry = 'India' 133 | 134 | session.run("MATCH (c:Country) -[:IN]-> (r) RETURN c,r") 135 | .then(result => { 136 | return result.records.map(record => { // Iterate through records 137 | var country = record._fields[0].properties; 138 | var region = record._fields[1].properties; 139 | console.log(`${country.name} (Capital: ${country.capital}, Region: ${region.name})`); // Access 
the name property from the RETURN statement 140 | }); 141 | }) 142 | .then(() => session.run(`MATCH path = shortestpath( (f:Country{name:$from}) -[:BORDERS_WITH *1..15]-(p:Country{name:$to})) RETURN path 143 | ` 144 | , { from: fromCountry, to: toCountry } 145 | ) 146 | .then(result => { 147 | return result.records.map(record => { // Iterate through records 148 | var segments = record._fields[0].segments; 149 | var path = ''; 150 | segments.forEach(segment => path = path.concat(segment.start.properties.name + '|')) 151 | path = path.concat(segments[segments.length - 1].end.properties.name) 152 | console.log(`Path from ${fromCountry} to ${toCountry}: ${path}`) 153 | }); 154 | }) 155 | .then(() => { 156 | session.close(); 157 | driver.close(); 158 | console.log("Done - closed Neo4J Session and Driver") 159 | })); // Always remember to close the session 160 | }); 161 | -------------------------------------------------------------------------------- /neo4j-graphdatabase/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "neo4j-node", 3 | "version": "0.0.0", 4 | "private": true, 5 | "scripts": { 6 | "start": "node neo4j-node.js" 7 | }, 8 | "dependencies": { 9 | "neo4j-driver": "^1.5.4", 10 | "request": "^2.88.0" 11 | } 12 | } 13 | -------------------------------------------------------------------------------- /powershell/Introducing PowerShell.md: -------------------------------------------------------------------------------- 1 | # Introducing PowerShell 2 | 3 | PowerShell is a command-line shell and scripting language. There are two versions of PowerShell: Windows PowerShell and PowerShell Core. Windows PowerShell is installed on Windows 10 and Windows Server. PowerShell Core is built on .NET Core and runs on Windows, Linux, macOS and in Docker containers. 4 | 5 | PowerShell uses commands named cmdlets (pronounced command-lets) that have a verb-noun naming convention. 
For example, the `Get-Command` cmdlet retrieves all of the cmdlets available in your PowerShell session. 6 | 7 | ## Downloading PowerShell 8 | 9 | Windows PowerShell is installed on Windows 10 and Windows Server. You can download PowerShell Core from: [https://www.github.com/PowerShell/PowerShell](https://www.github.com/PowerShell/PowerShell). 10 | 11 | To run PowerShell in a Docker container, use the following command: 12 | 13 | ```cmd 14 | docker run -it microsoft/powershell 15 | ``` 16 | 17 | ## Using PowerShell cmdlets 18 | 19 | The `Get-Command` cmdlet gets all commands that are installed on the computer, including cmdlets, aliases, functions, workflows, filters, scripts, and applications. 20 | 21 | The `Update-Help` cmdlet downloads the newest help files for PowerShell and installs them on your computer. 22 | 23 | The `Get-Help` cmdlet displays information about Windows PowerShell concepts and commands, including cmdlets, functions, CIM commands, workflows, providers, aliases and scripts. To show the output in a separate window, use the following command. 24 | 25 | ```powershell 26 | Get-Help -ShowWindow 27 | ``` 28 | 29 | The `Get-Member` cmdlet gets the members, the properties and methods, of objects. For example, the following command shows the methods and properties of the `System.Management.Automation.CmdletInfo` object. 30 | 31 | ```powershell 32 | Get-Command -Name Get-Command | Get-Member 33 | ``` 34 | 35 | The `Where-Object` cmdlet selects objects from a collection based on their property values. For example, the following command shows all the processes with more than 1000 handles. 36 | 37 | ```powershell 38 | Get-Process | Where-Object Handles -gt 1000 39 | ``` 40 | 41 | The `Select-Object` cmdlet selects objects or object properties. The following command shows the status and name of all of the services. 
42 | 43 | ```powershell 44 | Get-Service | Select-Object -Property Status,Name 45 | ``` 46 | 47 | The `Sort-Object` cmdlet sorts objects by property values. The following command sorts the processes by the Handles property in descending order. 48 | 49 | ```powershell 50 | Get-Process | Sort-Object -Property Handles -Descending 51 | ``` 52 | 53 | The `ForEach-Object` cmdlet performs an operation against each item in a collection of input objects. For example, the following command shows the even numbers from 2 to 20. 54 | 55 | ```powershell 56 | 1..10 | ForEach-Object {$_ * 2} 57 | ``` 58 | 59 | The `Measure-Object` cmdlet calculates the numeric properties of objects, and the characters, words, and lines in string objects, such as files of text. The following command displays the minimum, maximum, and average sizes of the working sets of the processes on the computer. 60 | 61 | ```powershell 62 | Get-Process | Measure-Object -Property WorkingSet -Minimum -Maximum -Average 63 | ``` 64 | 65 | ## Using variables 66 | 67 | In PowerShell, variables are represented by text strings that begin with a dollar sign `$`, such as `$name` or `$service`. The following command assigns the string `"C:\Windows\System32"` to the variable `$path`. 68 | 69 | ```powershell 70 | $path = "C:\Windows\System32" 71 | ``` 72 | 73 | Variable names are case-insensitive. The `Get-Variable` cmdlet gets the variables in the current console. 74 | 75 | You can clear the value of a variable with the `Clear-Variable` cmdlet. 76 | 77 | ```powershell 78 | Clear-Variable -Name path 79 | ``` 80 | 81 | To remove a variable, use the `Remove-Variable` cmdlet. 82 | 83 | ```powershell 84 | Remove-Variable -Name path 85 | ``` 86 | 87 | You can get help about using variables with the following command. 88 | 89 | ```powershell 90 | Get-Help about_Variables 91 | ``` 92 | 93 | ## Using aliases 94 | 95 | Aliases are alternate names for cmdlets and functions in PowerShell. 
For example, `dir` and `ls` are aliases for the `Get-ChildItem` cmdlet. `cat` and `type` are aliases for the `Get-Content` cmdlet. 96 | 97 | The `Get-Alias` cmdlet retrieves a list of the aliases in the current PowerShell session. 98 | 99 | You can use the `New-Alias` cmdlet to create a new alias. The following command creates a new alias named `List` for the `Get-ChildItem` cmdlet. 100 | 101 | ```powershell 102 | New-Alias -Name List -Value Get-ChildItem 103 | ``` 104 | 105 | You can get help about aliases with the following command. 106 | 107 | ```powershell 108 | Get-Help about_aliases 109 | ``` 110 | 111 | ## Using functions 112 | 113 | A function is a list of Windows PowerShell statements that has a name that you assign. To run a function, type its name. The statements in the function then run as if you had typed them at the command prompt. 114 | 115 | The following function, `Add-Int`, adds two integers and returns their sum. 116 | 117 | ```powershell 118 | function Add-Int { 119 | param( 120 | [int]$i1, 121 | [int]$i2 122 | ) 123 | $i1 + $i2 124 | } 125 | ``` 126 | 127 | You can call the `Add-Int` function with the following command to add 1 and 2. 128 | 129 | ```powershell 130 | Add-Int -I1 1 -I2 2 131 | ``` 132 | 133 | Because the `Add-Int` parameters are positional, you can also call the function without the parameter names using the following command. 134 | 135 | ```powershell 136 | Add-Int 1 2 137 | ``` 138 | 139 | You can get help about using functions with the following command. 140 | 141 | ```powershell 142 | Get-Help about_Functions 143 | ``` 144 | 145 | ## Using pipelines 146 | 147 | In PowerShell, whole objects are passed through the pipeline. The `$_` variable represents the current object in the pipeline. For example, the following command retrieves all of the process names on your computer. 
148 | 149 | ```powershell 150 | Get-Process | ForEach-Object {$_.Name} 151 | ``` 152 | 153 | You can also use the `PipelineVariable` common parameter to assign the current pipeline object to a variable. The following command retrieves all of the process names on your computer using the `PipelineVariable` parameter. 154 | 155 | ```powershell 156 | Get-Process -PipelineVariable Process | ForEach-Object {$Process.Name} 157 | ``` 158 | 159 | You can get help about using pipelines with the following command. 160 | 161 | ```powershell 162 | Get-Help about_pipelines 163 | ``` 164 | 165 | ## Using modules 166 | 167 | Modules are packages that contain PowerShell commands such as cmdlets, providers, functions, workflows, variables, and aliases. You can write your own modules and you can also use modules written by other people. The PowerShell Gallery is a central repository for sharing and acquiring PowerShell code, including PowerShell modules, scripts, and Desired State Configuration resources. 168 | 169 | To find modules that are installed in a default module location, but not yet imported into your session, type the following command. 170 | 171 | ```powershell 172 | Get-Module -ListAvailable 173 | ``` 174 | 175 | The `Get-Module` cmdlet retrieves the modules that have already been imported into your session. 176 | 177 | Use the `Find-Module` cmdlet to get a list of all of the modules in the PowerShell Gallery. 178 | 179 | The `Install-Module` cmdlet downloads a module from an online gallery and installs it on your computer. The following command downloads and installs the `Azure` module from the PowerShell Gallery. 180 | 181 | ```powershell 182 | Find-Module -Name Azure | Install-Module 183 | ``` 184 | 185 | The `Import-Module` cmdlet imports a module into your PowerShell session. The following command imports the `Azure` module into your session. 
186 | 187 | ```powershell 188 | Import-Module -Name Azure 189 | ``` 190 | 191 | The following command shows you all of the cmdlets and functions in the Azure module. 192 | 193 | ```powershell 194 | Get-Command -Module Azure 195 | ``` 196 | 197 | You can get help about using modules with the following command. 198 | 199 | ```powershell 200 | Get-Help about_Modules 201 | ``` -------------------------------------------------------------------------------- /ssh-tunnels/Basics of SSH tunnels.docx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/ssh-tunnels/Basics of SSH tunnels.docx -------------------------------------------------------------------------------- /ssh-tunnels/README.md: -------------------------------------------------------------------------------- 1 | # Welcome to this SSH tunnel tutorial 2 | 3 | -------------------------------------------------------------------------------- /ssh-tunnels/SSHTunnels.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/AMIS-Services/code-cafe/b205b7288e84a082c01330a32171bfba72e21591/ssh-tunnels/SSHTunnels.pptx -------------------------------------------------------------------------------- /traefik/docker-backend/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | proxy: 5 | image: traefik 6 | command: --web --web.metrics.prometheus --web.metrics.prometheus.buckets="0.1,0.3,1.2,5.0" --docker --docker.domain=docker.localhost --logLevel=DEBUG 7 | ports: 8 | - "80:80" 9 | - "8080:8080" 10 | - "443:443" 11 | volumes: 12 | - /var/run/docker.sock:/var/run/docker.sock 13 | - /dev/null:/traefik.toml 14 | networks: 15 | - cafenet 16 | 17 | code-cafe-machine: 18 | image: katacoda/docker-http-server 19 | labels: 20 | - 
"traefik.backend=code-cafe-machine-echo" 21 | - "traefik.frontend.rule=Host:machine.code.cafe" 22 | networks: 23 | - cafenet 24 | 25 | code-cafe-echo: 26 | image: katacoda/docker-http-server:v2 27 | labels: 28 | - "traefik.backend=code-cafe-echo" 29 | - "traefik.frontend.rule=Host:echo.code.cafe" 30 | networks: 31 | - cafenet 32 | 33 | networks: 34 | cafenet: 35 | -------------------------------------------------------------------------------- /traefik/search/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | proxy: 5 | image: traefik 6 | ports: 7 | - "80:80" 8 | - "8080:8080" 9 | volumes: 10 | - /var/run/docker.sock:/var/run/docker.sock 11 | - $PWD/traefik.toml:/etc/traefik/traefik.toml 12 | 13 | -------------------------------------------------------------------------------- /traefik/search/traefik.toml: -------------------------------------------------------------------------------- 1 | # traefik.toml 2 | logLevel = "DEBUG" 3 | defaultEntryPoints = ["http"] 4 | [entryPoints] 5 | [entryPoints.http] 6 | address = ":80" 7 | 8 | # Ping definition 9 | [ping] 10 | # Name of the related entry point 11 | # 12 | # Optional 13 | # Default: "traefik" 14 | # 15 | entryPoint = "traefik" 16 | # Metrics definition 17 | # https://docs.traefik.io/configuration/metrics/ 18 | 19 | [metrics] 20 | # To enable Traefik to export internal metrics to Prometheus 21 | [metrics.prometheus] 22 | 23 | # Name of the related entry point 24 | entryPoint = "traefik" 25 | 26 | # Buckets for latency metrics 27 | buckets = [0.1,0.3,1.2,5.0] 28 | [api] 29 | # Name of the related entry point 30 | entryPoint = "traefik" 31 | 32 | # Enable Dashboard 33 | dashboard = true 34 | 35 | [file] 36 | watch = true 37 | 38 | # rules 39 | [backends] 40 | # the backend called backendsearch represents multiple round robin load balanced search engines 41 | # requests forwarded to this backend will be distributed over the servers 
specified in this backend 42 | [backends.backendsearch] 43 | [backends.backendsearch.servers.server1] 44 | url = "http://www.google.com" 45 | weight = 1 46 | [backends.backendsearch.servers.server2] 47 | url = "http://www.bing.com:80" 48 | weight = 1 49 | [backends.backendsearch.servers.server3] 50 | url = "http://www.duckduckgo.com:80" 51 | weight = 1 52 | #[backends.backendsearch.servers.server4] 53 | #url = "http://www.yahoo.com:80" 54 | #weight = 1 55 | 56 | [frontends] 57 | [frontends.frontendsearch] 58 | backend = "backendsearch" 59 | [frontends.frontendsearch.routes.search-com] 60 | rule = "Host:search.com" 61 | -------------------------------------------------------------------------------- /zsh-and-ohmyzsh/README.md: -------------------------------------------------------------------------------- 1 | # ZSH and Oh My Zsh 2 | 3 | ## Why would you use them? 4 | 5 | Because your productivity will skyrocket! Zsh is targeted towards a developer audience, and installing the Oh My Zsh framework on top of it lets you install plugins for the programs and features you use often. For instance, with the git plugin you no longer need to type out entire git commands: almost any git command is available by typing just three characters. After installing Zsh there is no need to navigate folders by typing `cd`; just type the path and press enter. You also don't need to type out the entire path: type the first letter of each directory and, as long as the combination is unique, you can autocomplete it by pressing tab. Worried about learning all the commands of a new shell? Don't be! You won't have to learn new commands to replace bash; you can still use 99% of those. 6 | 7 | ### Prerequisites 8 | 9 | In order to experience Zsh and Oh My Zsh yourself, you will need a Unix-like operating system (Linux or macOS). 
If you are reading this on a Windows machine, refer to https://github.com/AMIS-Services/code-cafe/tree/master/linux-and-docker-host-on-windows-machine to get instructions on how to set up an Ubuntu environment using VirtualBox. 10 | 11 | - curl or wget should be installed 12 | - git should be installed 13 | 14 | #### Installing Zsh 15 | 16 | Ubuntu: 17 | 18 | ```console 19 | apt install zsh 20 | ``` 21 | 22 | For other Linux distros or macOS, see https://github.com/robbyrussell/oh-my-zsh/wiki/Installing-ZSH 23 | 24 | Make Zsh your default shell: 25 | 26 | ```console 27 | chsh -s $(which zsh) 28 | ``` 29 | 30 | After installation, your version should be greater than 5.1.1: 31 | 32 | ```console 33 | zsh --version 34 | ``` 35 | 36 | ### The big Zsh features 37 | 38 | 1. Autocd option -- "executing" a directory will "cd" to it instead 39 | 2. Show exit status of last command 40 | 3. Filename correction during completion 41 | 4. Shorthand notation for unique combinations of directories: if dir1/x exists and dir2 exists, then "dir/x" `tab` completes to dir1/x 42 | 5. `some-command` press `tab` -- expand output of some-command right in your shell line 43 | 6. Suffix aliases: Associate files with specific extensions with default applications. E.g.: 44 | ```console 45 | $ alias -s cpp=vim 46 | ``` 47 | Executing test.cpp will open the file using vim 48 | 7. Auto-complete: when typing `kill` and pressing `tab` you will see all the processes one can kill 49 | 8. 
Enables you to use Oh My Zsh :) 50 | 51 | These are just some of my favorites; for more, see: 52 | https://gist.github.com/ashrithr/5793891 and https://code.joejag.com/2014/why-zsh.html 53 | 54 | #### Installing Oh My Zsh 55 | 56 | via curl 57 | 58 | ```console 59 | sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)" 60 | ``` 61 | 62 | via wget 63 | 64 | ```console 65 | sh -c "$(wget https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh -O -)" 66 | ``` 67 | 68 | ### The big Oh My Zsh features 69 | 70 | 1. Plugins - You can install plugins for programs such as Git, AWS, Docker, JHipster. The git plugin is by far my favorite. Take a look at all the shortcuts the git plugin provides: 71 | https://github.com/robbyrussell/oh-my-zsh/wiki/Plugin:git 72 | By default you can see the status of your repo at a glance. 73 | Green: no changes 74 | Yellow with circle icon: untracked files 75 | Yellow with plus icon: tracked files, etc. 76 | 2. Themes - There is a whole community sharing themes that alter what your console looks like. Or you could create your own theme. 77 | 3. A whole lot of aliases 78 | `..` instead of `cd ..` 79 | `...` instead of `cd ../..` 80 | `....` instead of `cd ../../..` 81 | `rd` instead of `rmdir` 82 | `_` instead of `sudo` 83 | .. and the list continues; type `alias` to list them all 84 | 4. Further configuration of Zsh, enabling advanced features 85 | 86 | #### Configuring plugins 87 | 88 | There are lots of plugins available. Browse https://github.com/robbyrussell/oh-my-zsh/wiki/Plugins. Once you see one you like, add it to your .zshrc file. 89 | 90 | ```console 91 | vi ~/.zshrc 92 | ``` 93 | 94 | #### Themes 95 | 96 | See https://github.com/robbyrussell/oh-my-zsh/wiki/Themes 97 | --------------------------------------------------------------------------------
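#### Putting it together

As a concrete example of the two sections above, enabling plugins and picking a theme both come down to editing a line in `~/.zshrc`. The names below (`git`, `docker`, `robbyrussell`) all ship with Oh My Zsh, but treat this particular selection as a sketch to adapt to your own workflow:

```console
# ~/.zshrc -- the relevant lines after installing Oh My Zsh

# Theme: "robbyrussell" is the default; any name from the Themes wiki page works
ZSH_THEME="robbyrussell"

# Plugins to load from ~/.oh-my-zsh/plugins (space separated, inside the parentheses)
plugins=(git docker)

# Added by the installer; this line actually loads the framework
source $ZSH/oh-my-zsh.sh
```

After saving, run `source ~/.zshrc` (or open a new terminal) and the plugin shortcuts become available, e.g. `gst` for `git status` from the git plugin.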