├── .editorconfig ├── .github ├── CODEOWNERS ├── CONTRIBUTING.md ├── ISSUE_TEMPLATE │ ├── bug_report.md │ └── feature_request.md ├── NOTUSED-ISSUE_TEMPLATE.md ├── PULL_REQUEST_TEMPLATE.md ├── dependabot.yml └── workflows │ └── publish-to-github-packages.yml ├── .gitignore ├── CHANGELOG.md ├── LICENCE ├── README.md ├── bulk-git.sh ├── bulk-npm.sh ├── docker-devenv ├── .gitignore ├── README.md ├── docker-compose-comms │ ├── README.md │ ├── c1 │ │ └── docker-compose.yml │ └── c2 │ │ └── docker-compose.yml ├── kafka │ ├── README.md │ └── docker-compose.yml ├── keycloak │ ├── README.md │ ├── docker-compose.yml │ └── realm-export.json ├── mongodb │ ├── README.md │ └── docker-compose.yml ├── mysql │ └── docker-compose.yml ├── opensearch │ ├── README.md │ └── docker-compose.yml ├── postgres │ └── docker-compose.yml ├── rabbitmq │ └── docker-compose.yml ├── redis │ └── docker-compose.yml └── vault │ ├── README.md │ └── docker-compose.yml ├── docs ├── backend.md ├── cloud.md ├── db-trigger-audit.md ├── deployment.md ├── devtools.md ├── git │ ├── git.md │ ├── github.md │ ├── merge-from-upstream.md │ └── sparse-checkout.md ├── hashicorp-vault.md ├── home.md ├── js.md ├── mongodb.md ├── nodejs.md ├── npm.md ├── sqa-nodejs.md ├── team-guidance.md ├── vue3reactivity-demo.html ├── web-identity-forms.md └── web.md ├── git-hooks └── pre-commit ├── package.json ├── recipes └── README.md └── sandbox ├── README.md ├── aws ├── .env.sample ├── README.md ├── index.ts ├── package.json ├── test.js └── utils.ts ├── jsdoc-ts ├── .gitignore ├── README.md ├── package.json ├── src │ ├── index.js │ ├── types-address.ts │ └── types.ts └── tsconfig.json ├── serialserver ├── .env.example ├── .gitignore ├── README.md ├── logger.js ├── package.json └── serial-server.js ├── services ├── .env.example ├── .gitignore ├── README.md ├── aes.js ├── csv.js ├── ga.js ├── gen-pems.js ├── kafkaRx.js ├── kafkaTx.js ├── knex_trx.js ├── my-tcp-app.js ├── net-retries.js ├── net.js ├── package.json ├── 
process-cron.js ├── process-long.js ├── scaled-ws.js ├── tcpClient.js ├── tcpServer.js ├── tcp_server.js ├── test-exc-handler.js └── todo.js └── worker-threads ├── README.md ├── index-pool.js ├── index.js ├── nodejs-worker-performance ├── .gitignore ├── README.md ├── package.json ├── project_modules │ ├── calculus.js │ ├── cpuIntensive.js │ ├── poolifierWorker.js │ ├── routing.js │ └── worker.js ├── server.js └── test │ ├── 10thread-10size.PNG │ ├── 10thread-12size.PNG │ ├── 1thread-10size.PNG │ ├── 1thread-14size.PNG │ └── jmeter.jmx └── worker.js /.editorconfig: -------------------------------------------------------------------------------- 1 | root = true 2 | 3 | [*] 4 | charset = utf-8 5 | indent_style = space 6 | indent_size = 2 7 | end_of_line = lf 8 | insert_final_newline = true 9 | trim_trailing_whitespace = true 10 | 11 | -------------------------------------------------------------------------------- /.github/CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @ais-one -------------------------------------------------------------------------------- /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # cookbook Contributing Guide 2 | Hello and thank you for your interest in helping make cookbook better. Please take a few moments to review the following guidelines: 3 | 4 | ## IMPORTANT INFORMATION 5 | * For general questions, please join [Our Discussion Board](https://github.com/ais-one/cookbook/discussions). 6 | 7 | ## Reporting Issues 8 | * The issue list of this repo is **exclusively** for Bug Reports and Feature Requests. 9 | * Bug reproductions should be as **concise** as possible. 10 | * **Search** for your issue; it _may_ have been answered. 11 | * See if the error is **reproducible** with the latest version.
12 | * If reproducible, please provide a [Codepen](https://codepen.io/) or public repository that can be cloned to reproduce the issue. It is preferred that you create an initial commit with no changes first, then another one that will cause the issue. 13 | * **Never** comment "+1" or "me too!" on issues without leaving additional information; use the :+1: button in the top right instead. 14 | 15 | ## Pull Requests 16 | * Always work on a new branch. Making changes on your fork's `dev` or `master` branch can cause problems. (See [The beginner's guide to contributing to a GitHub project](https://akrabat.com/the-beginners-guide-to-contributing-to-a-github-project/)) 17 | * Bug fixes should be submitted to the `master` branch. 18 | * New features and breaking changes should be submitted to the `dev` branch. 19 | * Use a descriptive title no more than 64 characters long. This will be used as the commit message when your PR is merged. 20 | * For changes and feature requests, please include an example of what you are trying to solve and an example of the markup. It is preferred that you create an issue first, however, as that will allow the team to review your proposal before you start.
21 | * Please reference the issue # that the PR resolves, something like `Fixes #1234` or `Resolves #6458` (See [closing issues using keywords](https://help.github.com/articles/closing-issues-using-keywords/)) 22 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/bug_report.md: -------------------------------------------------------------------------------- 1 | 5 | 6 | ## Description 7 | 8 | **Environment:** 9 | 10 | 11 | **Current behavior:** 12 | 13 | 14 | **Expected behavior:** 15 | 16 | 17 | **Steps to reproduce:** 18 | 19 | 20 | **Related code:** 21 | 32 | 33 | ``` 34 | insert short code snippets here 35 | ``` 36 | 37 | ## Other information 38 | 39 | **npm, node, OS, Browser** 40 | ``` 41 | 46 | ``` 47 | 48 | **Angular, Nebular** 49 | ``` 50 | 53 | ``` 54 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/feature_request.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Feature Request 3 | about: Suggest new idea 4 | labels: feature 5 | --- 6 | 7 | ## Summary 8 | 9 | Brief explanation of the feature 10 | 11 | ### Basic example 12 | 13 | Include specs, document, code examples, screenshots, etc. 14 | 15 | ### Motivation 16 | 17 | - Why is this needed? 18 | - What use cases does it support? 19 | - What is the expected outcome? 
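As a worked example of the branch-and-PR flow described in the contributing guide above — the project, branch name, commit message, and issue number below are all hypothetical, and the session runs against a throwaway local repository:

```bash
#!/usr/bin/env bash
# Hypothetical walkthrough of the CONTRIBUTING.md flow; names and
# issue number are made up for illustration.
set -euo pipefail

repo="$(mktemp -d)"                  # stand-in for your fork's clone
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
echo "init" > README.md
git add . && git commit -q -m "initial commit"

# 1. Always work on a new branch, never on your fork's master/dev
git checkout -q -b fix/1234-empty-config-crash

# 2. Commit the fix; a closing keyword links the eventual PR to the issue
echo "fix" >> README.md
git add . && git commit -q -m "Fix crash on empty config. Fixes #1234"

# 3. Push the branch and open the PR against master (bug fix) or dev (feature):
#    git push origin fix/1234-empty-config-crash
git log -1 --pretty=%s   # → Fix crash on empty config. Fixes #1234
```

The commit subject stays under the 64-character guideline, so it can be reused verbatim as the PR title.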
-------------------------------------------------------------------------------- /.github/NOTUSED-ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | 5 | 6 | ### Issue type 7 | 8 | **I'm submitting a ...** (check one with "x") 9 | 10 | * [ ] bug report 11 | * [ ] feature request 12 | * [ ] question about the decisions made in the repository 13 | 14 | ### Issue description 15 | 16 | **Environment:** 17 | 18 | 19 | **Current behavior:** 20 | 21 | 22 | **Expected behavior:** 23 | 24 | 25 | **Steps to reproduce:** 26 | 27 | 28 | **Related code:** 29 | 40 | 41 | ``` 42 | insert short code snippets here 43 | ``` 44 | 45 | ### Other information: 46 | 47 | **npm, node, OS, Browser** 48 | ``` 49 | 54 | ``` 55 | 56 | **Angular, Nebular** 57 | ``` 58 | 61 | ``` 62 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | ### Please read and mark the following check list before creating a pull request (check one with "x"): 2 | 3 | - [ ] I read and followed the [Contribution guide](https://github.com/ais-one/cookbook/blob/main/.github/CONTRIBUTING.md) 4 | 5 | #### Short description of what this resolves: 6 | -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | updates: 3 | # Enable version updates for npm 4 | - package-ecosystem: 'npm' 5 | # Look for `package.json` and `lock` files in the `root` directory 6 | directory: '/' 7 | # Check the npm registry for updates every week 8 | schedule: 9 | interval: 'weekly' -------------------------------------------------------------------------------- /.github/workflows/publish-to-github-packages.yml: -------------------------------------------------------------------------------- 1 | name: 
Publish package to GitHub Packages 2 | on: 3 | release: 4 | types: [created] 5 | jobs: 6 | build: 7 | runs-on: ubuntu-latest 8 | permissions: 9 | contents: read 10 | packages: write 11 | steps: 12 | - uses: actions/checkout@v3 13 | # Setup .npmrc file to publish to GitHub Packages 14 | - uses: actions/setup-node@v3 15 | with: 16 | node-version: '16.x' 17 | registry-url: 'https://npm.pkg.github.com' 18 | # Defaults to the user or organization that owns the workflow file 19 | scope: '@octocat' 20 | - run: npm ci 21 | - run: npm publish 22 | env: 23 | NODE_AUTH_TOKEN: ${{ secrets.GITHUB_TOKEN }} -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | node_modules 2 | package-lock.json 3 | 4 | # Log files 5 | npm-debug.log* 6 | 7 | # apple stuff 8 | .DS_Store 9 | 10 | # Editor directories and files 11 | .idea 12 | .vscode 13 | 14 | .env -------------------------------------------------------------------------------- /LICENCE: -------------------------------------------------------------------------------- 1 | 2 | The MIT License 3 | 4 | Copyright (c) 2018 Aaron Gong 5 | 6 | Permission is hereby granted, free of charge, to any person obtaining a copy 7 | of this software and associated documentation files (the "Software"), to deal 8 | in the Software without restriction, including without limitation the rights 9 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 10 | copies of the Software, and to permit persons to whom the Software is 11 | furnished to do so, subject to the following conditions: 12 | 13 | The above copyright notice and this permission notice shall be included in 14 | all copies or substantial portions of the Software. 
15 | 16 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 19 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 20 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 21 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 22 | THE SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![master commit](https://badgen.net/github/last-commit/ais-one/cookbook/master) 2 | ![release](https://img.shields.io/github/v/release/ais-one/cookbook) 3 | [![npm version](https://badge.fury.io/js/cookbook.svg)](https://badge.fury.io/js/cookbook) 4 | [![npm](https://img.shields.io/npm/dm/cookbook.svg)](https://www.npmjs.com/package/cookbook) 5 | [![Sonarcloud Status](https://sonarcloud.io/api/project_badges/measure?project=com.lapots.breed.judge:judge-rule-engine&metric=alert_status)](https://sonarcloud.io/dashboard?id=com.lapots.breed.judge:judge-rule-engine) 6 | [![Known Vulnerabilities](https://snyk.io/test/github/ais-one/cookbook/badge.svg)](https://snyk.io/test/github/ais-one/cookbook) 7 | [![MadeWithVueJs.com shield](https://madewithvuejs.com/storage/repo-shields/823-shield.svg)](https://madewithvuejs.com/p/cookbook/shield-link) 8 | 9 | ### 1 - IMPORTANT - Read Me First! 10 | 11 | The `templates` (express and vuejs template) and `libraries` (shareable libraries and tools) projects referenced in the [Recipes](recipes/README.md) are based on the two principles below. 12 | 13 | ### 1.1 - Updateable Templates 14 | 15 | Your project is created using a template. If the template is updated, can upstream changes be merged with minimal impact on userland code?
16 | 17 | Yes, and it is achieved through: 18 | - Design 19 | - Create a folder where all userland code is placed; the template must NOT touch this folder 20 | - the template should not be part of a monorepo 21 | - Process 22 | - clone the template and create a remote called `upstream` pointing to the template 23 | - update the framework when necessary by merging `upstream` into `origin` 24 | 25 | ### 1.2 - Manageable Sharing 26 | 27 | You have code shared between multiple projects and libraries. If the code is updated, is breaking dependents and dependencies avoidable? 28 | 29 | Yes, based on the following principles: 30 | - Shared libraries should be isolated and versioned. Use the last-known-good version and update when ready 31 | - Isolation and versioning can be extended to `types` (for TypeScript) and `contracts` (for API) 32 | - minimize inter & nested dependencies, and technical debt 33 | 34 | --- 35 | 36 | ### 2 - General Requirements 37 | 38 | - git, github (for actions, secrets, etc) & IDE (e.g. vscode), Docker 39 | - unix shell (Windows use git-bash or WSL2) 40 | - node 20+ LTS & npm 9+ (`npm i -g npm@latest` to update) 41 | 42 | ### 3 - Sandbox 43 | 44 | Research and exploration [Sandbox](sandbox/README.md) 45 | 46 | ### 4 - Docker Dev Env 47 | 48 | Container setups for supporting apps for local development and testing [docker-devenv](docker-devenv/README.md) 49 | 50 | ### 5 - Documentation 51 | 52 | The [docs](docs/home.md) folder contains useful information and is in the midst of a major cleanup 53 | 54 | ### 6 - Useful scripts - For Use By Maintainers 55 | 56 | - `bulk-git.sh`: script to diff, pull, push git (for repos in `recipes`) 57 | - `bulk-npm.sh`: script to check for and/or update dependencies in package.json (for repos in `recipes`) 58 | -------------------------------------------------------------------------------- /bulk-git.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # number of arguments...
$# 3 | echo "Git push across multiple projects - A script to save time" 4 | echo 5 | 6 | PS3="Please enter your choice: " 7 | options=("Confirm All" "Confirm Single" "Quit") 8 | select opt in "${options[@]}" 9 | do 10 | case $opt in 11 | "Confirm Single") 12 | echo "Confirm files individually" 13 | break 14 | ;; 15 | "Confirm All") 16 | echo "Confirm all files" 17 | break 18 | ;; 19 | "Quit") 20 | exit 0 21 | ;; 22 | *) echo "invalid option $opt";; 23 | esac 24 | done 25 | 26 | echo 27 | 28 | PS3="Please enter your choice: " 29 | options2=("Diff" "Pull" "Push") 30 | select opt2 in "${options2[@]}" 31 | do 32 | case $opt2 in 33 | "Diff") 34 | echo "Do git diff" 35 | break 36 | ;; 37 | "Pull") 38 | echo "Do git pull" 39 | break 40 | ;; 41 | "Push") 42 | echo "Do git push" 43 | break 44 | ;; 45 | *) echo "invalid option $opt2";; 46 | esac 47 | done 48 | 49 | declare -a PACKAGES=( 50 | "cookbook" 51 | "jscommon" 52 | "express-template" 53 | "vue-antd-template" 54 | ) 55 | 56 | BASEPATH=`cd .. && pwd` 57 | # BASEPATH=`pwd` 58 | 59 | # TO IMPROVE GET CHOICE TO CHOOSE INDIVIDUALLY OR DO FOR ALL 60 | echo "Running pushes..." 61 | echo 62 | 63 | for package in "${PACKAGES[@]}" 64 | do 65 | DOIT="Y" 66 | packagePath="${BASEPATH}/${package}" 67 | 68 | if [ "$opt" == "Confirm Single" ]; then 69 | echo -n "Run for... ${package} (y/n)? " 70 | read DOIT 71 | fi 72 | 73 | if [ "$DOIT" != "${DOIT#[Yy]}" ]; then 74 | # or do whatever with individual element of the array 75 | echo "Running for... 
${package}" 76 | cd "$packagePath" 77 | 78 | if [ "$opt2" == "Diff" ]; then 79 | git diff 80 | elif [ "$opt2" == "Push" ]; then 81 | git add . 82 | git commit -m "update" 83 | if [ "$package" == "cookbook" ]; then 84 | echo "Push for cookbook project manually" 85 | else 86 | git push origin main 87 | fi 88 | elif [ "$opt2" == "Pull" ]; then 89 | if [ "$package" == "cookbook" ]; then 90 | echo "Pull for cookbook project manually" 91 | else 92 | git pull origin main 93 | fi 94 | fi 95 | else 96 | echo "Skipped... ${packagePath}" 97 | fi 98 | done 99 | 100 | # echo checking app 101 | 102 | echo done... 103 | 104 | read 105 | -------------------------------------------------------------------------------- /bulk-npm.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # number of arguments... $# 3 | echo "Maintain npm packages across multiple projects - A script to save time" 4 | echo "supported commands: npm outdated, npm update, npm i [options] [package], npm r [package]" 5 | echo "IF RUNNING ON WINDOWS PLEASE ENSURE YOU INTEGRATE GIT BASH TO CMD SHELL WHEN INSTALLING GIT ON WINDOWS" 6 | 7 | 8 | PS3="Please enter your choice: " 9 | options=("Confirm All" "Confirm Single" "Quit") 10 | select opt in "${options[@]}" 11 | do 12 | case $opt in 13 | "Confirm Single") 14 | echo "Confirm files individually" 15 | break 16 | ;; 17 | "Confirm All") 18 | echo "Confirm all files" 19 | break 20 | ;; 21 | "Quit") 22 | exit 0 23 | ;; 24 | *) echo "invalid option $REPLY";; 25 | esac 26 | done 27 | 28 | echo 29 | 30 | declare -a PACKAGES=( 31 | "/cookbook" 32 | "/jscommon" 33 | "/express-template" 34 | "/express-template/apps" 35 | "/vue-antd-template" 36 | "/vue-antd-template/src/apps" 37 | ) 38 | 39 | BASEPATH=`cd .. && pwd` 40 | # BASEPATH=`pwd` 41 | 42 | # TO IMPROVE GET CHOICE TO CHOOSE INDIVIDUALLY OR DO FOR ALL 43 | echo "Running npm ${*}..."
44 | echo 45 | 46 | for package in "${PACKAGES[@]}" 47 | do 48 | DOIT="Y" 49 | packagePath="${BASEPATH}${package}" 50 | 51 | if [ "$opt" == "Confirm Single" ]; then 52 | echo -n "Run for ${packagePath} (y/n)? " 53 | read DOIT 54 | fi 55 | 56 | if [ "$DOIT" != "${DOIT#[Yy]}" ]; then 57 | # run only when the answer starts with Y/y (prefix-strip test) 58 | echo "Running for... ${packagePath}" 59 | cd "$packagePath" 60 | if [ "$*" == "update" ]; then 61 | npm $* --save 62 | else 63 | npm $* 64 | fi 65 | else 66 | echo "Skipped... ${packagePath}" 67 | fi 68 | done 69 | 70 | # echo checking app 71 | 72 | echo done... 73 | 74 | read 75 | -------------------------------------------------------------------------------- /docker-devenv/.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | mongodb/data 3 | mysql/data 4 | redis/data 5 | keycloak/data 6 | rabbitmq/data 7 | -------------------------------------------------------------------------------- /docker-devenv/README.md: -------------------------------------------------------------------------------- 1 | # Docker Development Environment 2 | 3 | Set up a consistent stack for local development using docker / docker-compose 4 | 5 | ## Network 6 | 7 | network: my-test-net 8 | 9 | ## Applications 10 | 11 | applications: 12 | - keycloak 13 | - mongodb 14 | - mysql 15 | - redis 16 | - vault 17 | - rabbitmq (in progress) 18 | - kafka (in progress) 19 | - opensearch (in progress) 20 | - docker-compose-comms (example of communication between 2 or more docker-compose) 21 | 22 | ## Persistent Data Folder 23 | 24 | Refer to mongodb example 25 | -------------------------------------------------------------------------------- /docker-devenv/docker-compose-comms/README.md: -------------------------------------------------------------------------------- 1 | ## Network Between Containers - Docker Compose 2 | 3 | How to access applications created by multiple docker compose files 4 |
5 | They must be attached to the same named network 6 | 7 | ```bash 8 | # create a network 9 | docker network create blah-example 10 | ``` 11 | 12 | ```yaml 13 | # in docker-compose.yml add the network 14 | 15 | networks: 16 | default: 17 | external: 18 | name: blah-example 19 | ``` 20 | 21 | 22 | ## Test Out Example 23 | 24 | ### Run the containers 25 | 26 | 1. create network 27 | 2. set the network names in 28 | - c1/docker-compose.yml 29 | - c2/docker-compose.yml 30 | 3. run docker-compose up for both files 31 | 32 | ### Test 33 | 34 | ```bash 35 | # container 1 ping container 2 app 36 | docker exec -it compose1_service1_1 ping service2 37 | 38 | # container 2 ping container 1 app 39 | docker exec -it compose2_service2_1 ping service1 40 | ``` 41 | -------------------------------------------------------------------------------- /docker-devenv/docker-compose-comms/c1/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.9" 2 | 3 | services: 4 | service1: 5 | image: busybox 6 | command: sleep infinity 7 | 8 | networks: 9 | default: 10 | external: 11 | name: redis_default 12 | -------------------------------------------------------------------------------- /docker-devenv/docker-compose-comms/c2/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3.9" 2 | 3 | services: 4 | service2: 5 | image: busybox 6 | command: sleep infinity 7 | 8 | networks: 9 | default: 10 | external: 11 | name: redis_default 12 | -------------------------------------------------------------------------------- /docker-devenv/kafka/README.md: -------------------------------------------------------------------------------- 1 | https://github.com/codingharbour/kafka-docker-compose 2 | 3 | https://dev.to/de_maric/how-to-get-started-with-apache-kafka-in-5-minutes-18k5 4 | 5 | 6 | docker-compose up -d 7 | 8 | docker-compose ps 9 | 10 | docker exec -it sn-kafka /bin/bash 11 | 12 | ```bash 13 |
kafka-topics --bootstrap-server localhost:9092 --create --topic test-topic --partitions 1 --replication-factor 1 14 | kafka-topics --bootstrap-server localhost:9092 --list 15 | kafka-console-producer --broker-list localhost:9092 --topic test-topic 16 | kafka-console-consumer --bootstrap-server localhost:9092 --topic test-topic 17 | kafka-console-consumer --bootstrap-server localhost:9092 --topic test-topic --from-beginning 18 | ``` 19 | 20 | 21 | 22 | 23 | 24 | --- 25 | 26 | 27 | version: '2.1' 28 | services: 29 | zookeeper-1: 30 | image: confluentinc/cp-zookeeper:latest 31 | environment: 32 | ZOOKEEPER_SERVER_ID: 1 33 | ZOOKEEPER_CLIENT_PORT: 22181 34 | ZOOKEEPER_TICK_TIME: 2000 35 | ZOOKEEPER_INIT_LIMIT: 5 36 | ZOOKEEPER_SYNC_LIMIT: 2 37 | ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888 38 | network_mode: host 39 | extra_hosts: 40 | - "moby:127.0.0.1" 41 | 42 | zookeeper-2: 43 | image: confluentinc/cp-zookeeper:latest 44 | environment: 45 | ZOOKEEPER_SERVER_ID: 2 46 | ZOOKEEPER_CLIENT_PORT: 32181 47 | ZOOKEEPER_TICK_TIME: 2000 48 | ZOOKEEPER_INIT_LIMIT: 5 49 | ZOOKEEPER_SYNC_LIMIT: 2 50 | ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888 51 | network_mode: host 52 | extra_hosts: 53 | - "moby:127.0.0.1" 54 | 55 | zookeeper-3: 56 | image: confluentinc/cp-zookeeper:latest 57 | environment: 58 | ZOOKEEPER_SERVER_ID: 3 59 | ZOOKEEPER_CLIENT_PORT: 42181 60 | ZOOKEEPER_TICK_TIME: 2000 61 | ZOOKEEPER_INIT_LIMIT: 5 62 | ZOOKEEPER_SYNC_LIMIT: 2 63 | ZOOKEEPER_SERVERS: localhost:22888:23888;localhost:32888:33888;localhost:42888:43888 64 | network_mode: host 65 | extra_hosts: 66 | - "moby:127.0.0.1" 67 | kafka-1: 68 | image: confluentinc/cp-kafka:latest 69 | network_mode: host 70 | depends_on: 71 | - zookeeper-1 72 | - zookeeper-2 73 | - zookeeper-3 74 | environment: 75 | KAFKA_BROKER_ID: 1 76 | KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181 77 | KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:19092 78 | ports: 79 | - "19092:19092" 80 | extra_hosts: 81 | - "moby:127.0.0.1" 82 | 83 | kafka-2: 84 | image: confluentinc/cp-kafka:latest 85 | network_mode: host 86 | depends_on: 87 | - zookeeper-1 88 | - zookeeper-2 89 | - zookeeper-3 90 | environment: 91 | KAFKA_BROKER_ID: 2 92 | KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181 93 | KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:29092 94 | ports: 95 | - "29092:29092" 96 | extra_hosts: 97 | - "moby:127.0.0.1" 98 | 99 | kafka-3: 100 | image: confluentinc/cp-kafka:latest 101 | network_mode: host 102 | depends_on: 103 | - zookeeper-1 104 | - zookeeper-2 105 | - zookeeper-3 106 | environment: 107 | KAFKA_BROKER_ID: 3 108 | KAFKA_ZOOKEEPER_CONNECT: localhost:22181,localhost:32181,localhost:42181 109 | KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:39092 110 | ports: 111 | - "39092:39092" 112 | extra_hosts: 113 | - "moby:127.0.0.1" 114 | 115 | 116 | 124 | 125 | https://medium.com/better-programming/kafka-docker-run-multiple-kafka-brokers-and-zookeeper-services-in-docker-3ab287056fd5 126 | 127 | docker logs 128 | docker run --net=host --rm confluentinc/cp-zookeeper:latest bash -c "echo stat | nc localhost | grep Mode" 129 | docker logs 130 | docker run --net=host --rm confluentinc/cp-kafka:latest kafka-topics --create --topic --partitions --replication-factor --if-not-exists --zookeeper localhost:32181 131 | docker run --net=host --rm confluentinc/cp-kafka:latest kafka-topics --describe --topic testTopic --zookeeper localhost:32181 132 | docker run --net=host --rm confluentinc/cp-kafka:latest bash -c "seq 42 | kafka-console-producer --broker-list localhost:29092 --topic testTopic && echo 'Producer 42 message.'" 133 | docker run --net=host --rm confluentinc/cp-kafka:latest kafka-console-consumer --bootstrap-server localhost:29092 --topic testTopic --new-consumer --from-beginning --max-messages 42 134 | 135 | Kafka_host = "0.0.0.0:19092,0.0.0.0:29092,0.0.0.0:39092"
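One practical wrinkle with the `kafka-topics` commands above: `docker-compose up -d` returns before the broker's listener is actually accepting connections. A small retry helper can gate the topic commands on the port being reachable — this is a sketch, not part of the repo; it assumes bash (for the built-in `/dev/tcp` redirection and `SECONDS`), and the host/port are whatever your compose file exposes, e.g. `localhost:9092`:

```bash
#!/usr/bin/env bash
# Wait until a TCP port accepts connections, with a timeout in seconds.
# Uses bash's /dev/tcp so it also works inside minimal containers without nc.
wait_for_port() {
  local host="$1" port="$2" timeout="${3:-30}"
  local deadline=$(( SECONDS + timeout ))
  while (( SECONDS < deadline )); do
    # the subshell opens (and on exit closes) a probe connection on fd 3
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for ${host}:${port}" >&2
  return 1
}

# usage (hypothetical): gate topic creation on the broker being up
# wait_for_port localhost 9092 60 && \
#   kafka-topics --bootstrap-server localhost:9092 --list
```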
-------------------------------------------------------------------------------- /docker-devenv/kafka/docker-compose.yml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '2' 3 | services: 4 | zookeeper: 5 | image: confluentinc/cp-zookeeper:5.5.0 6 | ports: 7 | - 2181:2181 8 | environment: 9 | ZOOKEEPER_CLIENT_PORT: 2181 10 | ZOOKEEPER_TICK_TIME: 2000 11 | container_name: sn-zookeeper 12 | kafka: 13 | # "`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,- 14 | # An important note about accessing Kafka from clients on other machines: 15 | # ----------------------------------------------------------------------- 16 | # 17 | # The config used here exposes port 9092 for _external_ connections to the broker 18 | # i.e. those from _outside_ the docker network. This could be from the host machine 19 | # running docker, or maybe further afield if you've got a more complicated setup. 20 | # If the latter is true, you will need to change the value 'localhost' in 21 | # KAFKA_ADVERTISED_LISTENERS to one that is resolvable to the docker host from those 22 | # remote clients 23 | # 24 | # For connections _internal_ to the docker network, such as from other services 25 | # and components, use kafka:29092. 
26 | # 27 | # See https://rmoff.net/2018/08/02/kafka-listeners-explained/ for details 28 | # "`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,-'"`-._,- 29 | # 30 | image: confluentinc/cp-kafka:5.5.0 31 | depends_on: 32 | - zookeeper 33 | ports: 34 | - 9092:9092 35 | container_name: sn-kafka 36 | environment: 37 | KAFKA_BROKER_ID: 1 38 | KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 39 | KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092 40 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 41 | KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT 42 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 -------------------------------------------------------------------------------- /docker-devenv/keycloak/README.md: -------------------------------------------------------------------------------- 1 | # Keycloak docker-compose 2 | ## References 3 | 4 | - https://openidconnect.net/ 5 | - https://github.com/keycloak/keycloak-containers/tree/master/docker-compose-examples 6 | - https://www.redpill-linpro.com/techblog/2022/10/20/into_to_oidc_with_keycloak_and_vertx.html 7 | 8 | ## Notes 9 | 10 | Current Keycloak Version: 22.0.1 11 | 12 | ## Login 13 | 14 | - http://127.0.0.1:8081 15 | - select administration console 16 | - admin / admin 17 | 18 | ## Quick Setup 19 | 20 | Add a new realm by importing the file [realm-export.json](realm-export.json)... Might Not Work... 21 | 22 | 23 | ## Setting Up SSO (Working Example) 24 | 25 | ### Realm 26 | 27 | - Create `dev` realm 28 | - Set Require SSL to `None` 29 | 30 | ### Realm User 31 | 32 | - Create User named: test 33 | - Add password credentials named: test 34 | 35 | ### Realm Client - OIDC 36 | 37 | Select Clients in the left menu and click Create client. 38 | - Client type: OpenID Connect. 39 | - Client ID: dev-client-oidc. 40 | (Next page - capability config) 41 | - Set Client authentication to On.
42 | (Next page - login settings) 43 | - Root URL: http://127.0.0.1:3000 44 | - Home URL: http://127.0.0.1:3000/sso.html 45 | - Valid Redirect URLS: http://127.0.0.1:3000/api/oidc/auth 46 | (Save) 47 | 48 | Explore Keycloak OpenID Connect discovery endpoint on 127.0.0.1:8081/realms/dev/.well-known/openid-configuration 49 | 50 | Setup the app env variables 51 | 52 | ``` 53 | OIDC_OPTIONS='{ 54 | "URL": "http://127.0.0.1:8081/realms/dev/protocol/openid-connect", 55 | "CLIENT_ID": "dev-client-oidc", 56 | "CLIENT_SECRET": " Signature and Encryption -> sign documents = On 74 | - sha256, none, exclusive 75 | - Settings -> Signature and Encryption -> sign assertions = Off 76 | - Keys -> Signing keys config = Off 77 | - Keys -> Encryption keys config = Off 78 | - Settings -> SAML capabilities -> Name ID format = username 79 | - Settings -> SAML capabilities -> Force POST binding = On 80 | - Settings -> SAML capabilities -> Include AuthnStatement = On 81 | - NOTE: there is no client secret created 82 | 83 | Setup the app env variables 84 | 85 | ``` 86 | SAML_OPTIONS='{ 87 | "cert": "", 88 | "callbackUrl": "http://127.0.0.1:3000/api/saml/callback", 89 | "entryPoint": "http://127.0.0.1:8081/realms/dev/protocol/saml", 90 | "issuer": "dev-client-saml" 91 | }' 92 | 93 | ``` 94 | 95 | 96 | ## Kubernetes - FYI Only 97 | 98 | https://hkiang01.github.io/kubernetes/keycloak/ 99 | 100 | --- 101 | 102 | ## To Document Better 103 | 104 | - http://127.0.0.1:8081/realms/test/.well-known/openid-configuration 105 | - http://127.0.0.1:8081/realms/test/protocol/openid-connect/certs 106 | - https://developer.okta.com/docs/guides/refresh-tokens/refresh-token-rotation/ 107 | - https://datatracker.ietf.org/doc/html/rfc6749 108 | - https://openid.net/specs/openid-connect-core-1_0.html#TokenRequest 109 | 110 | 111 | ### OIDC HTTP Commands 112 | 113 | ``` 114 | POST /token HTTP/1.1 115 | Host: server.example.com 116 | Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW 117 | Content-Type: 
application/x-www-form-urlencoded 118 | 119 | grant_type=refresh_token&refresh_token=tGzv3JOkF0XG5Qx2TlKWIA 120 | 121 | http --form POST https://${yourOktaDomain}/oauth2/default/v1/token \ 122 | accept:application/json \ 123 | authorization:'Basic MG9hYmg3M...' \ 124 | cache-control:no-cache \ 125 | content-type:application/x-www-form-urlencoded \ 126 | grant_type=refresh_token \ 127 | redirect_uri=http://localhost:8080 \ 128 | scope=offline_access%20openid \ 129 | refresh_token=MIOf-U1zQbyfa3MUfJHhvnUqIut9ClH0xjlDXGJAyqo 130 | 131 | { 132 | "access_token": "eyJhbGciOiJ[...]K1Sun9bA", 133 | "token_type": "Bearer", 134 | "expires_in": 3600, 135 | "scope": "offline_access%20openid", 136 | "refresh_token": "MIOf-U1zQbyfa3MUfJHhvnUqIut9ClH0xjlDXGJAyqo", 137 | "id_token": "eyJraWQiO[...]hMEJQX6WRQ" 138 | } 139 | ``` 140 | -------------------------------------------------------------------------------- /docker-devenv/keycloak/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | 3 | # volumes: 4 | # mysql_data: 5 | # driver: local 6 | 7 | services: 8 | # mysql: 9 | # platform: linux/arm64 10 | # image: mysql:5.7 11 | # volumes: 12 | # # - mysql_data:/var/lib/mysql 13 | # - ./data:/var/lib/mysql 14 | # environment: 15 | # MYSQL_ROOT_PASSWORD: root 16 | # MYSQL_DATABASE: keycloak 17 | # MYSQL_USER: keycloak 18 | # MYSQL_PASSWORD: password 19 | keycloak: 20 | platform: linux/arm64 21 | image: quay.io/keycloak/keycloak:latest 22 | environment: 23 | # DB_VENDOR: MYSQL 24 | # DB_ADDR: mysql 25 | # DB_DATABASE: keycloak 26 | # DB_USER: keycloak 27 | # DB_PASSWORD: password 28 | KEYCLOAK_USER: admin 29 | KEYCLOAK_PASSWORD: admin 30 | # Uncomment the line below if you want to specify JDBC parameters. The parameter below is just an example, and it shouldn't be used in production without knowledge. It is highly recommended that you read the MySQL JDBC driver documentation in order to use it. 
31 | #JDBC_PARAMS: "connectTimeout=30000" 32 | # latest 33 | KEYCLOAK_ADMIN: admin 34 | KEYCLOAK_ADMIN_PASSWORD: admin 35 | # latest 36 | command: 37 | - start-dev 38 | ports: 39 | - 8081:8080 40 | -------------------------------------------------------------------------------- /docker-devenv/mongodb/README.md: -------------------------------------------------------------------------------- 1 | ## MongoDB 2 | 3 | > pre-requisite: you must have run docker-compose up -d 4 | 5 | If you are on VS Code, use the Docker extension 6 | 7 | **NOTE:** Remember to initiate replica set in new mongodb installs 8 | 9 | ### Restore 10 | - copy my-mongo-dump.gz to OS folder data/db1 11 | - attach shell to mongo1 container and cd /data/db 12 | - mongorestore --archive=my-mongo-dump.gz --gzip 13 | 14 | ### Backup 15 | - attach shell to mongo1 container and cd /data/db 16 | - mongodump --archive=my-mongo-dump.gz --gzip 17 | - copy my-mongo-dump.gz from OS folder data/db1 18 | 19 | ### fixing lost primary 20 | 21 | Run the following command in mongo shell 22 | 23 | ``` 24 | cfg = rs.conf(); 25 | cfg.members[0].host = "127.0.0.1:27017"; 26 | rs.reconfig(cfg, { force: true }); 27 | ``` 28 | 29 | ### Client Connection String 30 | 31 | ``` 32 | mongodb://127.0.0.1:27017/testdb-development?readPreference=primary&ssl=false 33 | ``` 34 | -------------------------------------------------------------------------------- /docker-devenv/mongodb/docker-compose.yml: -------------------------------------------------------------------------------- 1 | # Reference https://gitlab.com/Tjth-ltd/docker-mongodb-replicaset 2 | # 3 | # sudo docker-compose up -d 4 | # rs.initiate( { _id : "test", members: [ { _id: 0, host: "mongodb1:27017" }, { _id: 1, host: "mongodb2:27018" }, { _id: 2, host: "mongodb3:27019" },] }) 5 | # rs.initiate() # if you are only initiating with 1 member 6 | # rs.status() 7 | # rs.add( { host: "mongodb4:27020", priority: 0, votes: 0 } ) # add new member 8 | # Making a "Hidden" 
Replicaset member 9 | # rs.add( { host: "mongodb5:27021", priority: 0, votes: 0 } ) 10 | # Now use rs.status() to get the "id" of the new replicaset member - Make a note of this _id: 11 | # Now edit the replicaset configuration to set this new member to "hidden", replacing *id* with the value you noted before (Eg 4) 12 | # cfg = rs.conf() 13 | # cfg.members[4].priority = 0 14 | # cfg.members[4].hidden = true 15 | # rs.reconfig(cfg) 16 | # rs.remove("mongodb4:27020") # Removing a Replicaset Member - run command on current Mongodb PRIMARY server 17 | version: '3.5' 18 | services: 19 | # nodejs: 20 | # build: 21 | # context: . 22 | # dockerfile: Dockerfile 23 | # image: nodejs # image name 24 | # container_name: nodejs # container name 25 | # restart: unless-stopped 26 | # environment: 27 | # - NODE_ENV=$NODE_ENV 28 | # ports: 29 | # - '80:3000' # "host:container" 30 | # depends_on: 31 | # - mongodb1: 32 | # condition: service_healthy 33 | # env_file: .env 34 | # volumes: 35 | # - .:/home/node/app # "host:container" 36 | # - node_modules:/home/node/app/node_modules # "host:container" 37 | # networks: 38 | # - app-network 39 | # command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js 40 | # links: 41 | # - mongo 42 | 43 | # app: 44 | # image: 'workflowengine:0.6.0' 45 | # depends_on: 46 | # - mongodb1: 47 | # condition: service_healthy 48 | # environment: 49 | # - NODE_ENV=uat 50 | # ports: 51 | # - '8080:8080' 52 | mongodb1: 53 | networks: 54 | - my-network-name 55 | image: 'mongo:4' # not ready to move to Mongo5 yet 56 | healthcheck: 57 | test: echo 'db.runCommand("ping").ok' | mongo localhost:27017/test --quiet 58 | container_name: 'mongodb1' 59 | environment: 60 | - 'MONGO_DATA_DIR=/data/db' 61 | ports: 62 | - '27017:27017' 63 | volumes: 64 | - './data/db1:/data/db' 65 | command: 'mongod --replSet "test" --bind_ip 0.0.0.0 --port 27017' 66 | restart: 'unless-stopped' # no (default), on-failure, always, unless-stopped 67 | # mongodb2: 68 | 
# image: 'mongo:latest' 69 | # container_name: 'mongodb2' 70 | # environment: 71 | # - 'MONGO_DATA_DIR=/data/db' 72 | # ports: 73 | # - '27018:27018' 74 | # volumes: 75 | # - './data/db2:/data/db' 76 | # command: 'mongod --replSet "test" --bind_ip 0.0.0.0 --port 27018' 77 | # restart: 'always' 78 | # mongodb3: 79 | # image: 'mongo:latest' 80 | # container_name: 'mongodb3' 81 | # environment: 82 | # - 'MONGO_DATA_DIR=/data/db' 83 | # ports: 84 | # - '27019:27019' 85 | # volumes: 86 | # - './data/db3:/data/db' 87 | # command: 'mongod --replSet "test" --bind_ip 0.0.0.0 --port 27019' 88 | # restart: 'always' 89 | # mongodb4: 90 | # image: 'mongo:latest' 91 | # container_name: 'mongodb4' 92 | # environment: 93 | # - 'MONGO_DATA_DIR=/data/db' 94 | # ports: 95 | # - '27020:27020' 96 | # volumes: 97 | # - './data/db4:/data/db' 98 | # command: 'mongod --replSet "test" --bind_ip 0.0.0.0 --port 27020' 99 | # restart: 'always' 100 | # mongodb5: 101 | # image: 'mongo:latest' 102 | # container_name: 'mongodb5' 103 | # environment: 104 | # - 'MONGO_DATA_DIR=/data/db' 105 | # ports: 106 | # - '27021:27021' 107 | # volumes: 108 | # - './data/db5:/data/db' 109 | # command: 'mongod --replSet "test" --bind_ip 0.0.0.0 --port 27021' 110 | # restart: 'always' 111 | 112 | networks: 113 | my-network-name: 114 | name: my-test-net 115 | -------------------------------------------------------------------------------- /docker-devenv/mysql/docker-compose.yml: -------------------------------------------------------------------------------- 1 | # Use root/root as user/password credentials 2 | # use HeidiSQL instead of adminer 3 | version: '3.1' 4 | services: 5 | db: 6 | image: mysql 7 | command: --default-authentication-plugin=mysql_native_password 8 | restart: 'unless-stopped' 9 | environment: 10 | MYSQL_ROOT_PASSWORD: root 11 | volumes: 12 | - './data:/var/lib/mysql' 13 | ports: 14 | - 3306:3306 15 | # adminer: 16 | # image: adminer 17 | # restart: unless-stopped 18 | # ports: 19 | # - 
8080:8080 20 | -------------------------------------------------------------------------------- /docker-devenv/opensearch/README.md: -------------------------------------------------------------------------------- 1 | ## Docker compose for opensearch 2 | 3 | https://opensearch.org/docs/opensearch/install/docker/ -------------------------------------------------------------------------------- /docker-devenv/opensearch/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | services: 3 | opensearch-node1: 4 | image: opensearchproject/opensearch:1.0.1 5 | container_name: opensearch-node1 6 | environment: 7 | - cluster.name=opensearch-cluster 8 | - node.name=opensearch-node1 9 | - discovery.seed_hosts=opensearch-node1,opensearch-node2 10 | - cluster.initial_master_nodes=opensearch-node1,opensearch-node2 11 | - bootstrap.memory_lock=true # along with the memlock settings below, disables swapping 12 | - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # minimum and maximum Java heap size, recommend setting both to 50% of system RAM 13 | ulimits: 14 | memlock: 15 | soft: -1 16 | hard: -1 17 | nofile: 18 | soft: 65536 # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems 19 | hard: 65536 20 | volumes: 21 | - opensearch-data1:/usr/share/opensearch/data 22 | ports: 23 | - 9200:9200 24 | - 9600:9600 # required for Performance Analyzer 25 | networks: 26 | - opensearch-net 27 | opensearch-node2: 28 | image: opensearchproject/opensearch:1.0.1 29 | container_name: opensearch-node2 30 | environment: 31 | - cluster.name=opensearch-cluster 32 | - node.name=opensearch-node2 33 | - discovery.seed_hosts=opensearch-node1,opensearch-node2 34 | - cluster.initial_master_nodes=opensearch-node1,opensearch-node2 35 | - bootstrap.memory_lock=true 36 | - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" 37 | ulimits: 38 | memlock: 39 | soft: -1 40 | hard: -1 41 | nofile: 42 | soft: 65536 43 | hard: 65536 
44 | volumes: 45 | - opensearch-data2:/usr/share/opensearch/data 46 | networks: 47 | - opensearch-net 48 | opensearch-dashboards: 49 | image: opensearchproject/opensearch-dashboards:1.0.1 50 | container_name: opensearch-dashboards 51 | ports: 52 | - 5601:5601 53 | expose: 54 | - "5601" 55 | environment: 56 | OPENSEARCH_HOSTS: '["https://opensearch-node1:9200","https://opensearch-node2:9200"]' # must be a string with no spaces when specified as an environment variable 57 | networks: 58 | - opensearch-net 59 | 60 | volumes: 61 | opensearch-data1: 62 | opensearch-data2: 63 | 64 | networks: 65 | opensearch-net: -------------------------------------------------------------------------------- /docker-devenv/postgres/docker-compose.yml: -------------------------------------------------------------------------------- 1 | # https://hub.docker.com/_/postgres 2 | # Use postgres/example user/password credentials 3 | version: '3.1' 4 | 5 | services: 6 | db: 7 | image: postgres 8 | # restart: always 9 | restart: 'unless-stopped' 10 | environment: 11 | POSTGRES_PASSWORD: example 12 | volumes: 13 | - './data:/var/lib/postgresql/data' 14 | ports: 15 | - 5432:5432 16 | # adminer: 17 | # image: adminer 18 | # restart: always 19 | # ports: 20 | # - 8080:8080 -------------------------------------------------------------------------------- /docker-devenv/rabbitmq/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | 3 | services: 4 | rabbitmq: 5 | image: rabbitmq:management-alpine 6 | ports: 7 | - '4369:4369' 8 | - '5672:5672' 9 | - '25672:25672' 10 | - '15672:15672' 11 | - '35197:35197' 12 | environment: 13 | - RABBITMQ_SECURE_PASSWORD=yes 14 | - RABBITMQ_DEFAULT_USER=user 15 | - RABBITMQ_DEFAULT_PASS=pass 16 | - RABBITMQ_DEFAULT_VHOST=vhost 17 | volumes: 18 | - './data/lib:/var/lib/rabbitmq' 19 | - './data/log:/var/log/rabbitmq' 20 | hostname: rabbit 21 | # volumes: 22 | # data: 23 | # driver: local 24 | 
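# NOTE (added sketch, assumptions: the amqplib Node.js client; host ports as mapped above):
#   const amqp = require('amqplib')
#   const conn = await amqp.connect('amqp://user:pass@localhost:5672/vhost')
# The management UI is then available at http://localhost:15672 (login user/pass).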
-------------------------------------------------------------------------------- /docker-devenv/redis/docker-compose.yml: -------------------------------------------------------------------------------- 1 | # Use root/root as user/password credentials 2 | # use HeidiSQL instead of adminer 3 | version: '3.1' 4 | services: 5 | cache: 6 | image: redis:alpine 7 | container_name: redis 8 | entrypoint: redis-server --appendonly yes 9 | restart: unless-stopped 10 | volumes: 11 | - ./data:/data 12 | ports: 13 | - 6379:6379 14 | 15 | # services: 16 | # app: 17 | # image: lagden/cep_consulta:5.0.0 18 | # command: ["node", "index.js"] 19 | # environment: 20 | # - NODE_ENV=production 21 | # - RHOST=redis 22 | # ports: 23 | # - 1235:3000 24 | # networks: 25 | # - redis-net 26 | # depends_on: 27 | # - redis 28 | 29 | # redis: 30 | # image: redis:4.0.5-alpine 31 | # command: ["redis-server", "--appendonly", "yes"] 32 | # hostname: redis 33 | # networks: 34 | # - redis-net 35 | # volumes: 36 | # - redis-data:/data 37 | 38 | # networks: 39 | # redis-net: 40 | 41 | # volumes: 42 | # redis-data: 43 | -------------------------------------------------------------------------------- /docker-devenv/vault/README.md: -------------------------------------------------------------------------------- 1 | 1. use VAULT_DEV_ROOT_TOKEN_ID to access UI and API 2 | 3 | 2. use secret/ instead of cubbyhole/ 4 | 5 | 3. create test in secret 6 | 7 | 4. 
add your secrets 8 | 9 | { 10 | "test1": "111", 11 | "test2": "222", 12 | "test3": "'{\n \"a\": \"name\",\n \"b\": 123\n}'" 13 | } -------------------------------------------------------------------------------- /docker-devenv/vault/docker-compose.yml: -------------------------------------------------------------------------------- 1 | # you can access ui on http://127.0.0.1:8200 2 | # https://learn.hashicorp.com/tutorials/vault/versioned-kv?in=vault/secrets-management 3 | version: '3.3' 4 | services: 5 | vault-dev: 6 | environment: 7 | - VAULT_DEV_ROOT_TOKEN_ID=roottoken 8 | - VAULT_DEV_LISTEN_ADDRESS=0.0.0.0:8200 9 | ports: 10 | - 8200:8200 11 | image: vault 12 | -------------------------------------------------------------------------------- /docs/backend.md: -------------------------------------------------------------------------------- 1 | # Backend Services 2 | 3 | ## Kafka 4 | 5 | Install the dependencies first 6 | 7 | From cookbook folder 8 | 9 | ``` 10 | npm i 11 | ``` 12 | 13 | ### Kafka Broker 14 | 15 | Start up using docker-compose 16 | 17 | From cookbook folder 18 | 19 | ``` 20 | cd docker-devenv 21 | cd kafka 22 | docker-compose up 23 | ``` 24 | 25 | Enter the running container shell.
Create & list topic 26 | 27 | ```bash 28 | docker exec -it sn-kafka /bin/bash 29 | 30 | kafka-topics --bootstrap-server localhost:9092 --create --topic test-topic --partitions 1 --replication-factor 1 31 | kafka-topics --bootstrap-server localhost:9092 --list 32 | 33 | ``` 34 | 35 | When you are done testing, run 36 | 37 | ``` 38 | docker-compose down 39 | ``` 40 | 41 | ### Kafka Consumer 42 | 43 | From cookbook folder 44 | 45 | ``` 46 | cd sandbox/services 47 | node kafkaRx.js 48 | ``` 49 | 50 | You will see a message from a running producer every 2 seconds on the console 51 | 52 | Kafka Consumer Example Usage file location [../sandbox/services/kafkaRx.js]() 53 | 54 | ### Kafka Producer 55 | 56 | From cookbook folder 57 | 58 | ``` 59 | cd sandbox/services 60 | node kafkaTx.js 61 | ``` 62 | 63 | The producer will send a message every 2 seconds 64 | 65 | Kafka Producer Example Usage file location [../sandbox/services/kafkaTx.js]() 66 | 67 | ## TCP Server 68 | 69 | Running the example 70 | 71 | ``` 72 | npm i 73 | cd sandbox/services 74 | node tcpServer.js 75 | ``` 76 | 77 | A TCP server will listen on port 7070. Use PuTTY (on Windows) as a client to connect (Raw TCP) and test 78 | 79 | Example Usage file location [../sandbox/services/tcpServer.js]() 80 | 81 | More Samples 82 | 83 | - More TCP Usage [../sandbox/services/net.js]() 84 | - TCP Retries [../sandbox/services/net-retries.js]() 85 | 86 | # Micro Service 87 | 88 | ## URL 89 | 90 | 1. service.domain 91 | 2. domain/service 92 | 3. context.domain/service 93 | 94 | - SSL certificates (you might need a wildcard in 1, but not 2) 95 | - if using an API gateway, 2 will probably be better; you can abstract the microservice architecture behind the API gateway and split or merge services without impacting routes. To consumers it will look like a huge monolith with many resources exposed 96 | - if using a flat architecture with consumers calling each microservice directly (i.e.
no API gateway) 1 will probably be better as each microservice acts as a separate application (each with its own SSL certificate, etc.) 97 | - you can also mix both approaches (see 3), for instance app.eg.com, auth.eg.com and admin.eg.com as large "bounded contexts", then a finer split into microservices once inside the boundary (app.eg.com/cart and app.eg.com/billing) 98 | 99 | - 1 allows you to use DNS to route requests, but updating DNS can be slow. 100 | - 2 is just a standard proxy 101 | - you specify the correct upstream server via URL pattern matching 102 | - but now you have to maintain an HA proxy server on your own 103 | - if you use DNS, you are using the cloud provider's infrastructure 104 | -------------------------------------------------------------------------------- /docs/cloud.md: -------------------------------------------------------------------------------- 1 | # Cloud 2 | 3 | Whether GCP, AWS, Azure, or Aliyun, most concepts are the same 4 | 5 | Try to use terraform (TODO refer to DSO project) 6 | 7 | ## Install CLI 8 | 9 | - GCP 10 | - https://cloud.google.com/sdk/ 11 | - gcloud 12 | - gsutil (for Storage) 13 | - AWS 14 | - AWS-cli 15 | - Others 16 | - kubectl (for kubernetes) 17 | - openshift 18 | - etc. 19 | 20 | https://cloud.google.com/storage/docs/gsutil_install 21 | 22 | ## Using Google Cloud SDK 23 | 24 | ### auth, change users, set projects, etc. 25 | 26 | ```bash 27 | gcloud init 28 | 29 | gcloud auth login # login 30 | gcloud auth revoke --all # logout from all the accounts 31 | gcloud auth revoke # logout from a specific account 32 | 33 | gcloud projects list 34 | gcloud config set project [PROJECT_ID] 35 | gcloud config set compute/zone [ZONE] 36 | gcloud config list 37 | 38 | ### components 39 | gcloud components list 40 | gcloud components install [COMPONENT_ID] (e.g.
kubectl) 41 | gcloud components update 42 | gcloud components remove [COMPONENT_ID] 43 | ``` 44 | 45 | ## IAM and Role Policy and Service Keys 46 | 47 | gcloud auth activate-service-account --key-file=service_key.json 48 | 49 | Associate the appropriate policy and/or role with the IAM user 50 | 51 | ## Minimal Permissions And Components Used 52 | 53 | - Object Storage 54 | - Storage Admin 55 | - Storage Object Admin 56 | - Cloud Run 57 | - Service Account User 58 | - Cloud Run Admin 59 | - Cloud Function 60 | - ECI instance (aliyun) 61 | - Container Registry 62 | - Load Balancer 63 | - Resource Group 64 | - VPC + VSwitch, Elastic IP, Security Group 65 | - Logger 66 | 67 | ## Hosting Static Website on GCP 68 | 69 | https://cloud.google.com/storage/docs/hosting-static-website 70 | 71 | NAME TYPE DATA 72 | www.example.com CNAME c.storage.googleapis.com 73 | 74 | Verify domain ownership/control using a TXT record 75 | 76 | gsutil rsync -R spa/dist gs://uat.mybot.live 77 | 78 | grant public read permissions 79 | 80 | set the website index page and error page 81 | 82 | gsutil web set -m index.html -e 404.html gs://www.example.com 83 | gsutil rm -r gs://www.example.com 84 | 85 | ## Docker Build And Push To gcr.io 86 | 87 | Prerequisites: Dockerfile prepared 88 | 89 | See [../deployment/deployment-container.md#backend-application](../deployment/deployment-container.md#backend-application) 90 | 91 | Reference: 92 | 93 | - https://levelup.gitconnected.com/dockerizing-and-autoscaling-node-js-on-google-cloud-ef8db3b99486 94 | - https://codelabs.developers.google.com/codelabs/cloud-running-a-nodejs-container 95 | - https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git (nodejs-docs-samples/containerengine/hello-world) 96 | 97 | ```bash 98 | # If using Cloud Build - https://console.cloud.google.com/cloud-build 99 | # gcloud builds submit --tag gcr.io/viow-270002/viow-node-app . 100 | 101 | docker build -t gcr.io/[PROJECT_ID]/[your-app-name]:latest .
102 | gcloud auth configure-docker 103 | docker push gcr.io/[PROJECT_ID]/[your-app-name]:latest 104 | ``` 105 | 106 | ```bash 107 | # run in wsl 108 | sudo docker build -t gcr.io/mybot-live/vcx-app:latest --build-arg ARG_API_PORT=8080 . 109 | 110 | # run in windows - slow in wsl 111 | gcloud config set project mybot-live 112 | gcloud auth configure-docker 113 | docker push gcr.io/mybot-live/vcx-app:latest 114 | gcloud run deploy vcx-app-service --image gcr.io/mybot-live/vcx-app:latest --platform managed --region asia-east1 --allow-unauthenticated 115 | 116 | gcloud run deploy helloworld --image gcr.io/cloudrun/hello --platform managed --region asia-east1 --allow-unauthenticated --port=3000 117 | gcloud container images delete gcr.io/cloudrun/helloworld 118 | gcloud run services delete helloworld --platform managed --region asia-east1 119 | ``` 120 | 121 | # Cloud Run 122 | 123 | ## Pre-Requisites 124 | 125 | See [basic.md](basic.md) -> Docker Build And Push To gcr.io 126 | 127 | ## Cloud Run Contract 128 | 129 | https://cloud.google.com/run/docs/reference/container-contract 130 | 131 | ## Some Commands 132 | 133 | see deploy.sh **deploy-cr** on manually deploying to cloud run 134 | 135 | ``` 136 | gcloud services enable run.googleapis.com 137 | gcloud run deploy [service name] --image gcr.io/$GOOGLE_CLOUD_PROJECT/helloworld --platform managed --region asia-southeast1 --allow-unauthenticated 138 | gcloud container images delete gcr.io/$GOOGLE_CLOUD_PROJECT/helloworld 139 | gcloud run services delete [service name] --platform managed --region asia-southeast1 140 | gcloud container images list 141 | ``` 142 | 143 | Optional, setup custom domain (requires your own domain name) 144 | 145 | **Notes:** 146 | 147 | - Need to check if service deployed properly 148 | - Clean up 149 | - service revisions 150 | - container images 151 | 152 | ## Example 153 | 154 | ```bash 155 | gcloud run deploy helloworld --image gcr.io/cloudrun/hello --platform managed --region asia-east1 
--allow-unauthenticated --port=3000 156 | # gcloud container images delete gcr.io/cloudrun/helloworld 157 | gcloud run services delete helloworld --platform managed --region asia-east1 158 | ``` 159 | 160 | ## Environment Variables And Security Note 161 | 162 | https://cloud.google.com/run/docs/configuring/environment-variables#command-line 163 | 164 | ## Cloudflare 165 | 166 | 1. Use Full SSL 167 | 168 | https://serverfault.com/questions/995010/putting-google-cloud-platform-cloud-run-behind-cloud-flare 169 | 170 | 2. Set SSL Edge Certificate Flag (Not Really Needed) 171 | 172 | https://cloud.google.com/run/docs/mapping-custom-domains#dns_update 173 | 174 | 3. Firewall -> Tools -> Rate Limiting 175 | 176 | Set a filter for the login API 177 | 178 | https://community.cloudflare.com/t/same-type-of-harmful-requests-slow-the-server/188520/4 179 | 180 | # Firebase Getting Started 181 | 182 | Go to https://firebase.google.com/ 183 | 184 | Click on get started and register. **Important** Add your credit card details and enable billing 185 | 186 | ## Create User 187 | 188 | 1. Go to Firebase Authentication 189 | 2. Enable the Email/Password Sign-in Method 190 | 3.
Create a user in Firebase Auth with Email/Password login 191 | 192 | https://firebase.google.com/docs/auth/web/password-auth 193 | 194 | ## Firebase Web Client Credentials (subject to change) 195 | 196 | Get your firebase web-client credentials from Project -> Settings -> General 197 | 198 | Under Your apps, select the icon for a web application 199 | 200 | Client credentials should look something like the below: 201 | 202 | ``` 203 | 204 | 216 | ``` 217 | 218 | ## Firebase Backend Credentials (subject to change) 219 | 220 | Get your firebase backend credentials from Project -> Settings -> Service Accounts 221 | 222 | ## Messaging 223 | 224 | Use Firebase Messaging for Push Notifications 225 | 226 | ## Cloud Storage 231 | 232 | ## Hosting To Firebase 227 | 228 | https://firebase.google.com/docs/hosting/quickstart 229 | 230 | Upload & Download Files 233 | 234 | 1. For public read access, enable permissions 235 | 2. For uploads, use a signed URL 236 | 3. For private download/read access, use a signed URL 237 | -------------------------------------------------------------------------------- /docs/db-trigger-audit.md: -------------------------------------------------------------------------------- 1 | 2 | ## References 3 | 4 | - https://dba.stackexchange.com/questions/278943/how-to-pass-application-user-names-to-the-database-server-for-audit-purposes 5 | - https://dbfiddle.uk/n8cJjMvU 6 | 7 | 8 | ## Postgresql 9 | 10 | https://medium.com/israeli-tech-radar/postgresql-trigger-based-audit-log-fd9d9d5e412c 11 | 12 | ``` 13 | CREATE TABLE IF NOT EXISTS audit_log ( 14 | id serial PRIMARY KEY, 15 | table_name TEXT, 16 | record_id TEXT, 17 | operation_type TEXT, 18 | changed_at TIMESTAMP DEFAULT now(), 19 | changed_by TEXT, 20 | original_values jsonb, 21 | new_values jsonb 22 | ); 23 | ``` 24 | 25 | ``` 26 | CREATE OR REPLACE FUNCTION audit_trigger() RETURNS TRIGGER AS $$ 27 | DECLARE 28 | new_data jsonb; 29 | old_data jsonb; 30 | key text; 31 | new_values jsonb; 32 | old_values jsonb; 33 | user_id text; 34
| BEGIN 35 | 36 | user_id := current_setting('audit.user_id', true); 37 | 38 | IF user_id IS NULL THEN 39 | user_id := current_user; 40 | END IF; 41 | 42 | new_values := '{}'; 43 | old_values := '{}'; 44 | 45 | IF TG_OP = 'INSERT' THEN 46 | new_data := to_jsonb(NEW); 47 | new_values := new_data; 48 | 49 | ELSIF TG_OP = 'UPDATE' THEN 50 | new_data := to_jsonb(NEW); 51 | old_data := to_jsonb(OLD); 52 | 53 | FOR key IN SELECT jsonb_object_keys(new_data) INTERSECT SELECT jsonb_object_keys(old_data) 54 | LOOP 55 | IF new_data ->> key != old_data ->> key THEN 56 | new_values := new_values || jsonb_build_object(key, new_data ->> key); 57 | old_values := old_values || jsonb_build_object(key, old_data ->> key); 58 | END IF; 59 | END LOOP; 60 | 61 | ELSIF TG_OP = 'DELETE' THEN 62 | old_data := to_jsonb(OLD); 63 | old_values := old_data; 64 | 65 | FOR key IN SELECT jsonb_object_keys(old_data) 66 | LOOP 67 | old_values := old_values || jsonb_build_object(key, old_data ->> key); 68 | END LOOP; 69 | 70 | END IF; 71 | 72 | IF TG_OP = 'INSERT' OR TG_OP = 'UPDATE' THEN 73 | INSERT INTO audit_log (table_name, record_id, operation_type, changed_by, original_values, new_values) 74 | VALUES (TG_TABLE_NAME, NEW.id, TG_OP, user_id, old_values, new_values); 75 | 76 | RETURN NEW; 77 | ELSE 78 | INSERT INTO audit_log (table_name, record_id, operation_type, changed_by, original_values, new_values) 79 | VALUES (TG_TABLE_NAME, OLD.id, TG_OP, user_id, old_values, new_values); 80 | 81 | RETURN OLD; 82 | END IF; 83 | END; 84 | $$ LANGUAGE plpgsql; 85 | ``` 86 | 87 | ``` 88 | SELECT set_config('audit.user_id', 'test user', true); 89 | ``` 90 | 91 | 92 | 93 | 94 | ## MySQL 95 | 96 | https://medium.com/@rajeshkumarraj82/mysql-table-audit-trail-using-triggers-bd32b772cce5 97 | 98 | ``` 99 | create table persons_audit_trail(id int NOT NULL AUTO_INCREMENT, 100 | Personid int NOT NULL, 101 | column_name varchar(255), 102 | old_value varchar(255), 103 | new_value varchar(255), 104 | done_by varchar(255) 
NOT NULL, 105 | done_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, 106 | PRIMARY KEY (id)); 107 | ``` 108 | 109 | Insert 110 | 111 | ``` 112 | DELIMITER $$ 113 | CREATE TRIGGER persons_create 114 | AFTER INSERT 115 | ON persons FOR EACH ROW 116 | BEGIN 117 | insert into persons_audit_trail(Personid, column_name, new_value, done_by) values(NEW.Personid,'FirstName',NEW.FirstName,NEW.created_by); 118 | insert into persons_audit_trail(Personid, column_name, new_value, done_by) values(NEW.Personid,'LastName',NEW.LastName,NEW.created_by); 119 | insert into persons_audit_trail(Personid, column_name, new_value, done_by) values(NEW.Personid,'Age',NEW.Age,NEW.created_by); 120 | 121 | END$$ 122 | DELIMITER ; 123 | ``` 124 | 125 | ``` 126 | DELIMITER $$ 127 | CREATE TRIGGER persons_update 128 | AFTER UPDATE 129 | ON persons FOR EACH ROW 130 | BEGIN 131 | IF OLD.FirstName <> new.FirstName THEN 132 | insert into persons_audit_trail(Personid, column_name, old_value, new_value, done_by) values(NEW.Personid,'FirstName',OLD.FirstName,NEW.FirstName,NEW.updated_by); 133 | END IF; 134 | IF OLD.LastName <> new.LastName THEN 135 | insert into persons_audit_trail(Personid, column_name, old_value, new_value, done_by) values(NEW.Personid,'LastName',OLD.LastName,NEW.LastName,NEW.updated_by); 136 | END IF; 137 | IF OLD.Age <> new.Age THEN 138 | insert into persons_audit_trail(Personid, column_name, old_value, new_value, done_by) values(NEW.Personid,'Age',OLD.Age,NEW.Age,NEW.updated_by); 139 | END IF; 140 | IF OLD.is_deleted <> new.is_deleted THEN 141 | insert into persons_audit_trail(Personid, column_name, old_value, new_value, done_by) values(NEW.Personid,'is_deleted',OLD.is_deleted,NEW.is_deleted,NEW.updated_by); 142 | END IF; 143 | END$$ 144 | DELIMITER ; 145 | ``` -------------------------------------------------------------------------------- /docs/deployment.md: -------------------------------------------------------------------------------- 1 | # Deployment Sample 2 | 3 | The following are 
the environments 4 | 5 | - development (used for local development) 6 | - uat & higher (e.g production) 7 | 8 | ### development environment 9 | 10 | The development environment is on a local machine used by developers. 11 | 12 | Docker compose can be used to set up supporting applications such as Redis, ElasticSearch, Kafka, etc. 13 | 14 | ## Deployment On Single VM 15 | 16 | For FIXED scale deployments / demos 17 | 18 | - Deploy on single VM everything - GCP GCE, AWS EC2, Digital Ocean, Linode 19 | - nodejs/npm, pm2 or systemd, local upload folder, mongodb/sqlite,mysql, redis, nginx 20 | 21 | ### uat (and higher) environment 22 | 23 | The UAT, production and (optional staging) environments are on the cloud provider. 24 | 25 | Use services that are common/similar in multiple cloud providers 26 | 27 | - Domain name verification 28 | - cloudflare 29 | - DNS (for API, for frontend) 30 | - full SSL (can be self-signed at server side) 31 | - frontend - GCP object storage, https 32 | - backend - docker-> Google Cloud Run, https 33 | - OPTION deploy to GCP Group Instances (need to set load balancer and networking) **WIP** 34 | - OPTION deploy to GKE **WIP** 35 | - Mongodb - Mongo Atlas 36 | - files - Google object storage 37 | - Mysql/Postgres - RDS 38 | - user_session - same as Mongodb 39 | 40 | ## Current Manual Deployment Script 41 | 42 | In GCP 43 | 44 | - setup service account in IAM with appropriate permissions 45 | - enable and setup a storage bucket to serve webpage 46 | - you may need to have a domain name 47 | - enable Cloud Run and Container Registry 48 | 49 | --- 50 | 51 | ## Cloud Provider Specific SDKs 52 | 53 | You need to know about SDK from each cloud provider to use them effectively 54 | 55 | Google Cloud Platform - See [../gcp/home.md](../gcp/home.md) 56 | 57 | ## Scalable Deployment On Cloud (GCP) 58 | 59 | - Database 60 | - RDS / CloudSQL 61 | - MongoDB Atlas https://www.mongodb.com/cloud/atlas (GCP/AWS/Azure) 62 | - Memory Key-Value Store 63 | - 
https://cloud.google.com/memorystore/docs/redis 64 | - Redis Labs - https://app.redislabs.com/ 65 | - MQ 66 | - better-queue (simple) 67 | - agenda (uses MongoDB) 68 | - Google Pubsub https://cloud.google.com/pubsub/docs 69 | 70 | --- 71 | 72 | 4. CloudFlare 73 | 74 | - We do not use https://cloud.google.com/load-balancing/docs/ssl-certificates 75 | - SSL Strategies 76 | - Flexible: User --SSL--> CF --HTTP--> GCP 77 | - Full: User --SSL--> CF --SSL--> GCP 78 | - Redirect 79 | - always redirect http to https 80 | 81 | ## CORS 82 | 83 | CORS Origin settings will follow the frontend name - e.g. https://app.mybot.live 84 | 85 | # DEPLOYMENT ON CLOUD (OPTIONAL) 86 | 87 | ## SSH Keys For use with ssh, scp 88 | 89 | generate private and public keys 90 | 91 | ```bash 92 | # -t dsa|rsa|ecdsa|ed25519 93 | # -b 521|2048... 94 | ssh-keygen -f ./id_rsa -t rsa -b 2048 95 | ``` 96 | 97 | ## SSL Certs For use with web applications 98 | 99 | 1. generate self-signed cert 100 | 101 | ```bash 102 | # private key: privkey.pem 103 | # public cert self-signed: fullchain.pem 104 | # -nodes = no DES encryption, i.e. no pass phrase on the key 105 | # -subj "/C=SG/ST=Singapore/L=Singapore/O=My Group/OU=My Unit/CN=127.0.0.1" 106 | openssl req -x509 -newkey rsa:2048 -keyout privkey.pem -out fullchain.pem -days 3650 -nodes -subj "/CN=127.0.0.1" 107 | ``` 108 | 109 | 2. generate private key and CSR 110 | 111 | ```bash 112 | openssl req -new -newkey rsa:2048 -nodes -keyout id_rsa -out id_rsa.csr -days 3650 -subj "/CN=127.0.0.1" 113 | ``` 114 | 115 | alternatively, generate only the private key 116 | 117 | ```bash 118 | # linux 119 | openssl genrsa -des3 -out id_rsa 2048 120 | 121 | # windows 122 | openssl genrsa -out id_rsa 2048 123 | ``` 124 | 125 | 3.
generate CSR from existing private key 126 | 127 | ```bash 128 | openssl req -new -key id_rsa -out id_rsa.csr -subj "/CN=127.0.0.1" 129 | ``` 130 | 131 | ## Generate SSL Keys using Certbot 132 | 133 | https://certbot.eff.org/ 134 | 135 | ```bash 136 | $ sudo apt-get update 137 | $ sudo apt-get install -y software-properties-common 138 | $ sudo add-apt-repository ppa:certbot/certbot 139 | $ sudo apt-get update 140 | $ sudo apt-get install certbot 141 | ``` 142 | 143 | https://certbot.eff.org/docs/using.html#renewing-certificates 144 | 145 | ```bash 146 | sudo service apache2 stop 147 | # sudo certbot certonly --standalone -d .example.com 148 | sudo certbot certonly --standalone --preferred-challenges http -d 149 | sudo service apache2 start 150 | ``` 151 | 152 | ### Test Renewal 153 | 154 | ```bash 155 | sudo certbot renew --dry-run --pre-hook "sudo service apache2 stop" --post-hook "sudo service apache2 start" 156 | ``` 157 | 158 | ### Live Renewal in crontab 159 | 160 | ```bash 161 | sudo certbot renew --pre-hook "sudo service apache2 stop" --post-hook "sudo service apache2 start" 162 | ``` 163 | 164 | ### so you can read renewed certs 165 | 166 | ```bash 167 | sudo su 168 | chmod +rx /etc/letsencrypt/live 169 | chmod +rx /etc/letsencrypt/archive 170 | ``` 171 | 172 | ## Using ports below 1024 173 | 174 | ### Method 1 - Use Authbind (Preferred) 175 | 176 | http://pm2.keymetrics.io/docs/usage/specifics/#listening-on-port-80-w-o-root 177 | 178 | ```bash 179 | sudo apt-get install authbind 180 | sudo touch /etc/authbind/byport/443 181 | sudo chown ubuntu /etc/authbind/byport/443 182 | sudo chmod 755 /etc/authbind/byport/443 183 | ``` 184 | 185 | ```bash 186 | ~/.bashrc 187 | alias pm2='authbind --deep pm2' 188 | ``` 189 | 190 | source ~/.bashrc 191 | 192 | **package.json** 193 | "production-start": "ssh -i ../../test.pem ubuntu@127.0.0.1 \"cd ~/api; authbind --deep pm2 start --only api --env production;\"", 194 | "production-stop": "ssh -i ../../test.pem
ubuntu@127.0.0.1 \"cd ~/api; pm2 delete api;\"", 195 | 196 | ### Method 2 - Create Safe User 197 | 198 | https://www.digitalocean.com/community/tutorials/how-to-use-pm2-to-setup-a-node-js-production-environment-on-an-ubuntu-vps 199 | 200 | ## Running on VM (e.g. Droplet / EC2 / GCE) 201 | 202 | ### node / nodemon 203 | 204 | ```bash 205 | NODE_ENV=development nohup node index.js >> /dev/null 2>&1 & 206 | ``` 207 | 208 | ```bash 209 | ps ax | grep 'node index.js' | grep -v grep | awk '{print $1}' | xargs kill # kill the process 210 | ``` 211 | 212 | ### Use pm2 213 | 214 | Look for `ecosystem.config.js` 215 | 216 | # Startup on VM using SystemD 217 | 218 | https://nodesource.com/blog/running-your-node-js-app-with-systemd-part-1/ 219 | https://www.freedesktop.org/software/systemd/man/systemd.service.html 220 | 221 | ```bash 222 | cat << EOF > /lib/systemd/system/hello.service 223 | [Unit] 224 | Description=hello.js - your node application as a service 225 | Documentation=https://example.com 226 | After=network.target 227 | 228 | [Service] 229 | Environment=NODE_ENV=production 230 | Environment=REST_PORT=3000 231 | Environment=WS_PORT=3001 232 | Type=simple 233 | User=ubuntu 234 | ExecStart=/usr/bin/node /home/ubuntu/hello.js 235 | # Restart=always would restart no matter what 236 | Restart=on-failure 237 | # ExecStopPost= 238 | 239 | [Install] 240 | WantedBy=multi-user.target 241 | EOF 242 | ``` 243 | 244 | ```bash 245 | sudo systemctl daemon-reload 246 | sudo systemctl start|stop|status|restart hello 247 | sudo systemctl enable|disable hello 248 | ``` 249 | 250 | ### Note 251 | 252 | https://stackoverflow.com/questions/2953081/how-can-i-write-a-heredoc-to-a-file-in-bash-script 253 | 254 | ```bash 255 | cat << 'EOF' > /tmp/yourfilehere 256 | The variable $FOO will not be interpreted. 
257 | EOF 258 | ``` 259 | ```bash 260 | # curl -G "https://api.telegram.org/bot000000000:XXXXxxxxxxxxxxXXXXXXXXXX_xxxxxxxxxx/sendMessage?chat_id=183XXXXXX" --data-urlencode "text=Login IP `date`" > /dev/null 261 | 262 | #!/bin/bash 263 | 264 | TCP_CONN=`netstat -on | grep 3005 | wc -l` 265 | 266 | echo $TCP_CONN 267 | 268 | if [ "$TCP_CONN" -gt 2048 ]; then 269 | echo "LHS: GT5 $TCP_CONN" 270 | curl -X GET "https://sms.era.sg/tg_out.php?user=aaronjxz&msg=TCP%20overlimit%20${TCP_CONN}" 271 | else 272 | echo "LHS: LTE5 $TCP_CONN" 273 | fi 274 | 275 | #!/bin/bash 276 | #free && sync 277 | #sudo sh -c 'echo 3 >/proc/sys/vm/drop_caches' 278 | #free 279 | 280 | HI="LHS: CPU `LC_ALL=C top -bn1 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk '{print 100 - $1}'`% RAM `free -m | awk '/Mem:/ { printf("%3.1f%%", $3/$2*100) }'` HDD `df -h / | awk '/\// {print $(NF-1)}'`" 281 | 282 | # echo $HI 283 | 284 | TCP_CONN=`netstat -on | grep 3005 | wc -l` 285 | 286 | curl -G "https://sms.era.sg/tg_out.php?user=aaronjxz" --data-urlencode "msg=$HI TCP:$TCP_CONN" 287 | 288 | #CPU=`grep 'cpu ' /proc/stat | awk '{usage=($2+$4)*100/($2+$4+$5)} END {print usage "%"}'` 289 | #echo $CPU 290 | ``` 291 | # Configuring systemd to Run Multiple Instances 292 | 293 | https://nodesource.com/blog/running-your-node-js-app-with-systemd-part-2 294 | 295 | ```bash 296 | cat << 'EOF' > /lib/systemd/system/hello_env@.service 297 | [Unit] 298 | Description=hello_env.js - making your environment variables rad 299 | Documentation=https://example.com 300 | After=network.target 301 | 302 | [Service] 303 | # Environment=NODE_PORT=%i 304 | Type=simple 305 | User=chl 306 | ExecStart=/usr/bin/node /home/chl/hello_env.js --config /home/ubuntu/%i 307 | Restart=on-failure 308 | 309 | [Install] 310 | WantedBy=multi-user.target 311 | EOF 312 | ``` 313 | 314 | ```bash 315 | for port in $(seq 3001 3004); do sudo systemctl start hello_env@$port; done 316 | ``` 317 | 318 | ```bash 319 | #!/bin/bash -e 320 | 
PORTS="3001 3002 3003 3004" 322 | 323 | for port in ${PORTS}; do 324 | systemctl stop hello_env@${port} 325 | done 326 | 327 | exit 0 328 | ``` 329 | 330 | ## nginx 331 | 332 | ```bash 333 | sudo apt-get update 334 | sudo apt-get -y install nginx-full 335 | sudo rm -fv /etc/nginx/sites-enabled/default 336 | 337 | cat << 'EOF' > /etc/nginx/sites-enabled/hello_env.conf 338 | upstream hello_env { 339 | server 127.0.0.1:3001; 340 | server 127.0.0.1:3002; 341 | server 127.0.0.1:3003; 342 | server 127.0.0.1:3004; 343 | } 344 | server { 345 | listen 80 default_server; 346 | server_name _; 347 | 348 | location / { 349 | proxy_pass http://hello_env; 350 | proxy_set_header Host $host; 351 | } 352 | } 353 | EOF 354 | 355 | sudo systemctl restart nginx 356 | ``` 357 | 358 | ## Fail2ban 359 | 360 | https://www.techrepublic.com/article/how-to-install-fail2ban-on-ubuntu-server-18-04/ 361 | 362 | ```bash 363 | sudo apt-get install -y fail2ban 364 | sudo systemctl start fail2ban 365 | sudo systemctl enable fail2ban 366 | 367 | cat << 'EOF' > /etc/fail2ban/jail.local 368 | [sshd] 369 | enabled = true 370 | port = 22 371 | filter = sshd 372 | logpath = /var/log/auth.log 373 | maxretry = 3 374 | EOF 375 | 376 | sudo systemctl restart fail2ban 377 | 378 | sudo fail2ban-client set sshd unbanip IP_ADDRESS 379 | ``` 380 | 381 | ## Installation - NodeJS 382 | 383 | ```bash 384 | #!/bin/bash 385 | 386 | # Install NodeJS 12.x 387 | curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash - 388 | sudo apt-get install -y nodejs 389 | # sudo apt-get install -y build-essential 390 | 391 | # Allow lower ports 392 | sudo apt-get install -y authbind 393 | sudo touch /etc/authbind/byport/80 394 | sudo chown ubuntu /etc/authbind/byport/80 395 | sudo chmod 755 /etc/authbind/byport/80 396 | sudo touch /etc/authbind/byport/443 397 | sudo chown ubuntu /etc/authbind/byport/443 398 | sudo chmod 755 /etc/authbind/byport/443 399 | ``` 400 | 401 | # Docker 402 | 403 | ## Backend Application 404 | 405 | 
Reference - https://nodejs.org/de/docs/guides/nodejs-docker-webapp/ 406 | 407 | ```bash 408 | # build the image 409 | # replace "ais-one/node-web-app" with your own image name 410 | # -f is path from cookbook folder 411 | docker build -t ais-one/node-web-app:latest -f js-node/expressjs/Dockerfile . 412 | 413 | ## NOT USED: docker run -it -d ais-one/node-web-app /bin/bash 414 | # run the container 415 | docker run -p 3000:3000 -p 3001:3001 -d ais-one/node-web-app 416 | 417 | # check running container id 418 | docker ps 419 | 420 | # to check logs 421 | docker logs 422 | ``` 423 | 424 | To access the container command line: 425 | 426 | ```bash 427 | docker exec -it /bin/bash 428 | 429 | # Example should be Running on http://localhost:3000 430 | ``` 431 | 432 | ```bash 433 | # to save an image 434 | docker save image:tag | gzip > image-tag.tar.gz 435 | 436 | # load an image from tar.gz: docker image load -i image-tag.tar.gz 437 | ``` 438 | 439 | ## Docker Commands 440 | 441 | ```bash 442 | docker ps 443 | docker container ls -a 444 | docker container rm 445 | docker container run 446 | docker container stop 447 | docker container start 448 | docker container port 449 | docker container prune 450 | docker image ls 451 | docker image rm 452 | 453 | # build with 454 | docker build -t /node-web-app:latest . 455 | # OR 456 | docker build -t node-web-app:latest . 
457 | 458 | # create a container and run the image 459 | docker run -p 49160:8080 -d /node-web-app 460 | 461 | 462 | docker stop $(docker ps -a -q) 463 | docker rm $(docker ps -a -q) 464 | docker rmi $(docker images -q) 465 | 466 | # save image to file 467 | docker save $IMAGE:$TAG | gzip > $IMAGE-$TAG.tar.gz 468 | 469 | # create image from file 470 | docker image load -i $IMAGE-$TAG.tar.gz 471 | 472 | # If using WSL2 for Docker, make sure the time is in sync 473 | # https://stackoverflow.com/questions/65086856/wsl2-clock-is-out-of-sync-with-windows 474 | sudo hwclock -s 475 | ``` 476 | 477 | ## Secrets Management 478 | 479 | ### Considerations 480 | 481 | It is important to manage your secrets properly. 482 | 483 | Take the following into consideration: 484 | 485 | - CICD environment 486 | - Auto scaling in container orchestration; it is bad practice to keep replicating secrets 487 | - Need encryption at rest 488 | - Purpose, e.g. separation of GCP keys for deployment from GCP keys for service usage 489 | - Where to place them: sidecar, secret manager, etc. 490 | 491 | ### Secret Managers 492 | 493 | - Hashicorp Vault 494 | - self-hosted, docker 495 | - helm, sidecar 496 | - vault + mongoatlas 497 | - Google Secrets 498 | - AWS Secrets 499 | - www.cyberark.com 500 | - DIY? 
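Whichever manager is chosen, the "encryption at rest" consideration above can be sketched with openssl, which this document already uses elsewhere. This is an illustrative sketch only: the file name and variable are examples, and in practice the passphrase itself would come from a secret manager or KMS, never be hardcoded as below.

```shell
# encrypt a secrets file at rest with AES-256 (names and passphrase are illustrative only)
echo "DB_PASSWORD=s3cret" > .env
openssl enc -aes-256-cbc -pbkdf2 -salt -in .env -out .env.enc -pass pass:changeme
rm .env   # keep only the encrypted copy at rest

# decrypt to stdout at startup, without writing the plaintext back to disk
openssl enc -d -aes-256-cbc -pbkdf2 -in .env.enc -pass pass:changeme
```

The `-pbkdf2` flag (OpenSSL 1.1.1+) derives the key with a proper KDF instead of the legacy derivation.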
501 | 502 | ### Deployment Environment Variables 503 | 504 | Environment variables to consider if `$CI === true` (i.e. running in CI). 505 | 506 | If not running in CI, just set the following: 507 | 508 | ```bash 509 | 510 | # not used for CI; passed in as $1 in deploy.sh 511 | NODE_ENV= 512 | 513 | APP_NAME= # backend only: docker image name 514 | 515 | # GCP Storage Bucket Name for frontend deployment 516 | BUCKET_NAME= 517 | 518 | # Cloud Provider 519 | GCP_PROJECT_ID= 520 | 521 | # Service key (from deploy folder if local deploy, from CICD env otherwise) 522 | GCP_PROJECT_KEY= 523 | 524 | # Vault info (use config files if no vault); if VAULT=unused, do not call vault 525 | VAULT= 526 | ``` 527 | 528 | ### Use Of Files Or Not 529 | 530 | - Cloud Service Keys - Deployment 531 | - Local - file 532 | - cloud 533 | - manual 534 | - file 535 | - cicd 536 | - env 537 | - Knexfile - Database Migration 538 | - local - file 539 | - cloud 540 | - manual 541 | - file 542 | - cicd 543 | - na 544 | 545 | ### Serving Configs 546 | 547 | - RSA public and private keys for JWT 548 | 549 | - should be served from an authentication sidecar 550 | - should be served from secrets manager (JSON/JS) 551 | - served from config file (JSON/JS) 552 | - self-generated 553 | 554 | - SSL certificates 555 | 556 | - should use cloudflare or similar service for HTTPS 557 | - should be served from secrets manager (JSON/JS) 558 | - served from config file (JSON/JS) 559 | - self-generated 560 | 561 | - Cloud Service Keys, Knexfile, Other Configs 562 | - should be served from secrets manager (JSON/JS) 563 | - served from config file (JSON/JS) 564 | -------------------------------------------------------------------------------- /docs/devtools.md: -------------------------------------------------------------------------------- 1 | ## Optional VS Code Plugins 2 | 3 | **NOTE** Useful plugins if using VS Code: 4 | 5 | - Essentials 6 | - Docker 7 | - Live Server 8 | - REST Client 9 | - SFTP 10 | - MongoDB Client (official) / SQLite Viewer 11 | - JS Language Specific 
12 | - es6-string-html 13 | - ESLint 14 | - Volar (for VueJS) 15 | - Prettier 16 | - Recommended 17 | - SonarLint (requires java) 18 | - GitLens 19 | 20 | ## Chrome Extensions 21 | 22 | - Web Server 23 | - https://chrome.google.com/webstore/detail/web-server-for-chrome/ofhbbkphhbklhfoeikjpcbhemlocgigb/related?hl=en 24 | - SAML / OIDC 25 | - https://chrome.google.com/webstore/detail/saml-ws-federation-and-oa/hkodokikbjolckghdnljbkbhacbhpnkb?hl=en 26 | - React & Vue Dev tools 27 | - https://chrome.google.com/webstore/detail/react-developer-tools/fmkadmapgofadopljbjfkapdkoienihi?hl=en 28 | - https://chrome.google.com/webstore/detail/vuejs-devtools/nhdogjmejiglipccpnnnanhbledajbpd?hl=en 29 | - MetaMask 30 | - https://chrome.google.com/webstore/detail/metamask/nkbihfbeogaeaoehlefnkodbefgpgknn?hl=en 31 | 32 | ## Other Utilities 33 | 34 | - DB clients 35 | - dbeaver (mac / windows) 36 | - heidisql (windows) 37 | 38 | ## Apps Setup 39 | 40 | ### OpenJDK Setup 41 | 42 | https://stackoverflow.com/questions/52511778/how-to-install-openjdk-11-on-windows 43 | 44 | Extract the zip file into a folder, e.g. C:\Program Files\Java\ and it will create a jdk-11 folder (where the bin folder is a direct sub-folder). You may need Administrator privileges to extract the zip file to this location. 45 | 46 | Set a PATH: 47 | 48 | Select Control Panel and then System. 49 | Click Advanced and then Environment Variables. 50 | Add the location of the bin folder of the JDK installation to the PATH variable in System Variables. 51 | The following is a typical value for the PATH variable: C:\WINDOWS\system32;C:\WINDOWS;"C:\Program Files\Java\jdk-11\bin" 52 | Set JAVA_HOME: 53 | 54 | Under System Variables, click New. 55 | Enter the variable name as JAVA_HOME. 56 | Enter the variable value as the installation path of the JDK (without the bin sub-folder). 57 | Click OK. 58 | Click Apply Changes. 59 | Configure the JDK in your IDE (e.g. IntelliJ or Eclipse). 60 | You are set. 
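On Linux or WSL, the same setup is just two environment variables; a minimal sketch, assuming the JDK was extracted to a folder in the home directory (the path below is an example):

```shell
# point JAVA_HOME at the extracted JDK folder (path is an example)
export JAVA_HOME="$HOME/jdk-11"
# prepend its bin folder to PATH so the java binary resolves
export PATH="$JAVA_HOME/bin:$PATH"
echo "$JAVA_HOME"
```

Put the two `export` lines in `~/.bashrc` to make them permanent; switching JDKs then only needs a change to `JAVA_HOME`.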
61 | 62 | To see if it worked, open up the Command Prompt, type `java -version` and see if it prints your newly installed JDK. 63 | 64 | If you want to uninstall - just undo the above steps. 65 | 66 | Note: You can also point JAVA_HOME to the folder of your JDK installations and then set the PATH variable to %JAVA_HOME%\bin. So when you want to change the JDK you change only the JAVA_HOME variable and leave PATH as it is. 67 | -------------------------------------------------------------------------------- /docs/git/git.md: -------------------------------------------------------------------------------- 1 | ## git 2 | 3 | ### Commands 4 | 5 | ```bash 6 | git config --list 7 | git add . 8 | git commit -m "msg" 9 | git push origin 10 | git checkout 11 | git checkout -b 12 | git branch 13 | git branch -D 14 | git log 15 | git status 16 | git merge 17 | git diff 18 | git pull origin 19 | git remote -v 20 | git remote set-url origin git@hostname:USERNAME/REPOSITORY.git 21 | git remote set-url httpsorigin https://hostname/USERNAME/REPOSITORY.git 22 | git remote add neworigin https://github.com/user/repo.git 23 | git remote rm someorigin 24 | git reset --hard 25 | git commit -am "some message" 26 | 27 | # remove file or folder from tracking 28 | git rm --cached 29 | git rm -r --cached 30 | ``` 31 | 32 | ### Clone git repo without history 33 | ```bash 34 | $ git clone ... 35 | $ cd path/to/repo 36 | $ rm -rf .git 37 | $ git init 38 | ``` 39 | Alternatively, shallow clone with `git clone --depth 1` 40 | 41 | https://www.atlassian.com/git/tutorials/comparing-workflows 42 | 43 | ## Branch Name 44 | 45 | - Keep it short: `<prefix>-<short-description>` 46 | - prefix = feat, fix 47 | 48 | ## Commit messages 49 | 50 | https://www.alibabacloud.com/blog/how-can-we-standardize-git-commits_597372 51 | 52 | Format of commit message: 53 | `<type>(<scope>): <subject>` 54 | 55 | ### Type (required) 56 | 57 | This indicates the type of the git commit. Only the following types are allowed: 58 | 59 | - feat: Introduces a new feature. 60 | - fix/to: Fixes a bug found by QA or developers. 
61 | - fix: Generates diff and automatically fixes a bug. This is used to fix bugs with a single commit. 62 | - to: Only generates a diff instead of automatically fixing the bug. This is used for multiple commits. Then, you can use fix for the commit when you finally fix the bug. 63 | - docs: Commits documentation only changes. 64 | - style: Commits format changes that do not affect code running. 65 | - refactor: Commits a code change that neither fixes a bug nor adds a feature. 66 | - perf: Commits optimization changes, such as to improve performance and experience. 67 | - test: Adds tests. 68 | - chore: Commits changes to the build process or supporting tools. 69 | - revert: Rolls back to the previous version. 70 | - merge: Merges code. 71 | - sync: Synchronizes bugs of the main thread or a branch. 72 | 73 | ### Scope (optional) 74 | 75 | Scope is used to describe the scope of the impact of a commit, such as the data layer, control layer, or view layer, depending on the project. 76 | 77 | For example, in Angular, it can be location, browser, compile, rootScope, ngHref, ngClick, or ngView. If your modification affects more than one scope, you can use \* instead. 78 | 79 | ### Subject (required) 80 | 81 | The subject is a brief description of the purpose of a commit. It can be up to 50 characters in length. Do not end with a period or other punctuation marks. 82 | 83 | Therefore, the git commit message will be in the following formats: 84 | 85 | ``` 86 | Fix(DAO): User query missing the username attribute 87 | feat(Controller): Development of user query interfaces 88 | ``` 89 | 90 | https://gist.github.com/robertpainsi/b632364184e70900af4ab688decf6f53 91 | 92 | References in commit messages 93 | If the commit refers to an issue, add this information to the commit message header or body. e.g. the GitHub web platform automatically converts issue ids (e.g. #123) to links referring to the related issue. 
For issue trackers like Jira there are plugins which also convert Jira tickets, e.g. Jirafy. 94 | 95 | In header: 96 | 97 | ``` 98 | [#123] Refer to GitHub issue… 99 | CAT-123 Refer to Jira ticket with project identifier CAT… 100 | ``` 101 | 102 | In body: 103 | 104 | ``` 105 | Fixes #123, #124 106 | ``` 107 | 108 | https://wiki.openstack.org/wiki/GitCommitMessages 109 | 110 | ## GitHub 111 | 112 | ### GitHub pages 113 | 114 | https://dev.to/yuribenjamin/how-to-deploy-react-app-in-github-pages-2a1f 115 | 116 | ```bash 117 | npm i -D gh-pages 118 | ``` 119 | 120 | **package.json** 121 | 122 | ```json 123 | { 124 | "homepage": "http://.github.io/", 125 | "scripts": { 126 | "predeploy": "npm run build", 127 | "deploy": "gh-pages -d " 128 | } 129 | } 130 | ``` 131 | 132 | ## Git Hooks 133 | 134 | - https://dev.to/krzysztofkaczy9/do-you-really-need-husky-247b 135 | - https://dev.to/azu/git-hooks-without-extra-dependencies-like-husky-in-node-js-project-jjp 136 | 137 | ### commitizen 138 | 139 | ```bash 140 | npx commitizen init cz-conventional-changelog --save-dev --save-exact 141 | ``` 142 | 143 | `.git/hooks/prepare-commit-msg` 144 | 145 | ```bash 146 | #!/bin/bash 147 | exec < /dev/tty && node_modules/.bin/cz --hook || true 148 | ``` 149 | 150 | ### semantic-release 151 | 152 | - https://github.com/semantic-release/semantic-release 153 | - https://github.com/semantic-release/semantic-release/tree/master/docs/recipes 154 | -------------------------------------------------------------------------------- /docs/git/github.md: -------------------------------------------------------------------------------- 1 | https://github.com/ais-one/cookbook/settings/security_analysis 2 | 3 | - secret scanning: enable 4 | - push protection: enable 5 | 6 | https://github.com/ais-one/cookbook/settings/branches 7 | 8 | - branch protection 9 | 10 | https://github.com/settings/tokens 11 | 12 | - Setup PAT and allow for repo, workflow scopes 13 | 14 | https://github.com/settings/developers 15 | 
16 | - Setup OAuth Apps 17 | - Callback: http://127.0.0.1:3000/api/oauth/callback 18 | 19 | # Github Wiki Sidebar Example 20 | 21 | - [Home](../wiki/Home) 22 | - [Page A](../wiki/Page-A) 23 | - [Page B](../wiki/Page-B) 24 | - Another 25 | - [Page C](../wiki/Page-C) 26 | - [Page D](../wiki/Page-D) 27 | - [Page E](../wiki/Page-E) 28 | -------------------------------------------------------------------------------- /docs/git/merge-from-upstream.md: -------------------------------------------------------------------------------- 1 | # Merge updates from upstream 2 | 3 | ## From Clone 4 | 5 | ### ref 6 | 7 | - https://medium.com/geekculture/how-to-use-git-to-downstream-changes-from-a-template-9f0de9347cc2 8 | - https://www.mslinn.com/git/700-propagating-git-template-changes.html 9 | 10 | ### steps 11 | 12 | **initial setup** 13 | 14 | ```bash 15 | git clone