├── .github └── FUNDING.yml ├── CNAME ├── LICENSE ├── README.md ├── _config.yml ├── alertmanager ├── README.md ├── example_configs │ ├── opsgenie │ │ ├── alertmanager.yml │ │ └── host_alert_rules.yml │ └── pushover │ │ ├── alertmanager.yml │ │ └── host_alert_rules.yml └── rules │ ├── blackbox_alerts.yml │ ├── host_alerts.yml │ └── http_alerts.yaml ├── ansible ├── README.md └── ansible.cfg ├── atlantis └── README.md ├── awk └── README.md ├── aws-cli ├── autoscaling │ └── README.md ├── codepipeline │ └── README.md ├── ec2 │ └── README.md ├── ecr │ └── README.md ├── ecs │ └── README.md ├── eks │ └── README.md ├── elasticache │ └── README.md ├── iam │ └── README.md ├── notes.md ├── secretsmanager │ └── README.md ├── ssm │ └── README.md └── sts │ └── README.md ├── aws-codebuild ├── README.md └── buildspec-with-ssm.yml ├── aws-ecr └── allow_codebuild.json ├── aws-iam ├── ec2 │ └── AllowStartStopEC2.json ├── ecr │ ├── CodeBuildECR.json │ └── GitlabCiECR.json ├── ecs │ └── GitlabCiECSDeployPipeline.json ├── iam │ ├── LambdaVPCExecution.json │ └── SelfManageIAM.json ├── s3 │ ├── BackupsToS3WithKMS.json │ └── BucketLevelPermissions.json ├── sqs │ └── SQSUsage.json └── ssm │ └── AllowGetSSMWithKMS.json ├── aws-python ├── cloudwatch-logs │ └── update_log_retention.py ├── ec2 │ ├── README.md │ └── examples │ │ └── ec2_list_instances.py ├── kms │ └── README.md ├── s3 │ └── put_object.py └── sns │ └── send_sms.py ├── bash └── README.md ├── benchmarking └── README.md ├── bitcoin ├── cli │ └── README.md ├── curl │ └── README.md ├── python │ └── README.md └── setup-guides │ ├── bitcoin-amd.md │ └── bitcoin-arm.md ├── blackbox-exporter └── README.md ├── cadvisor └── README.md ├── cassandra └── README.md ├── cat └── README.md ├── ceph └── README.md ├── cloudwatch └── logs │ └── README.md ├── codedeploy └── appspec │ └── versioned │ ├── appspec.yml │ └── scripts │ ├── after_install.sh │ ├── before_install.sh │ ├── start_server.sh │ ├── stop_server.sh │ └── validate_service.sh ├── concourse └── README.md ├── dd └── README.md ├── docker └── README.md ├── dogecoin ├── cli │ └── README.md └── curl │ └── README.md ├── drone-ci ├── README.md ├── localstack-service-terraform │ ├── .drone.yml │ ├── init.sh │ ├── main.tf │ ├── outputs.tf │ ├── providers.tf │ └── remote-state.tf ├── skeleton │ └── .drone.yml ├── triggers-pull-request │ └── .drone.yml └── using-terraform-in-drone │ └── .drone.yml ├── dynamodb ├── README.md ├── python-dynamodb.md └── python │ ├── conditional_put.py │ ├── create_table.py │ ├── delete_item.py │ ├── get_item.py │ ├── put_item.py │ ├── scan.py │ ├── script.py │ └── update_item.py ├── ec2-metadata └── README.md ├── ecs └── task-definitions │ ├── alertmanager_fargate_loki.json │ ├── cadvisor_taskdef.json │ ├── cloudwatch_logs.json │ ├── datadog_logs.json │ ├── efs_storage.json │ ├── env_and_secrets.json │ ├── firelens_loki.json │ ├── grafana_taskdef.json │ ├── sidecar_taskdef.json │ ├── statping_taskdef.json │ └── yopass_taskdef.json ├── eks └── README.md ├── elasticsearch ├── README.md └── python-elasticsearch.md ├── etcdctl └── README.md ├── ethereum-jsonrpc └── README.md ├── find └── README.md ├── fluent-bit ├── README.md └── example-configs │ └── loki-fluent-bit.conf ├── font-awesome └── README.md ├── git └── README.md ├── github-actions ├── README.md └── examples │ └── if-success-failure-slack.yml ├── gitlab-ci ├── README.md ├── auto-retry-jobs │ └── .gitlab-ci.yml ├── aws-build-push-ecr │ ├── .gitlab-ci.yml │ ├── Dockerfile │ └── configuration.md ├── basic-shell │ └── .gitlab-ci.yml ├── 
default-runner │ └── .gitlab-ci.yml ├── docker-helm-deploy │ └── .gitlab-ci.yml ├── docker-runner │ └── .gitlab-ci.yml ├── extends-docker │ ├── .gitlab-ci.yml │ ├── Dockerfile │ └── templates │ │ └── jobs.yml ├── gitlab-runner-config │ └── config.toml ├── interruptable-jobs │ └── .gitlab-ci.yml ├── manual-destroy-step │ └── .gitlab-ci.yml ├── multiple-executors │ └── .gitlab-ci.yml ├── parallel-jobs │ └── .gitlab-ci.yml ├── reusable-jobs │ └── .gitlab-ci.yml ├── services │ └── .gitlab-ci-mysql.yml └── terraform-pipeline │ └── .gitlab-ci.yml ├── golang ├── README.md ├── environment │ └── README.md ├── go-web-logs │ ├── Dockerfile │ ├── docker-compose.yml │ └── src │ │ ├── app.go │ │ ├── go.mod │ │ └── go.sum └── snippets │ ├── http-api-with-http-request │ └── main.go │ ├── http-requests-return-statuscode │ └── main.go │ ├── random-fake-word │ └── main.go │ ├── random-float │ └── main.go │ ├── random-integer │ └── main.go │ └── webserver-requestpath │ └── main.go ├── grafana └── README.md ├── grok └── README.md ├── helm-2 └── README.md ├── helm └── README.md ├── html-css └── center-page │ └── index.html ├── influxdb └── README.md ├── install └── README.md ├── iptables └── README.md ├── iterm └── README.md ├── javascript ├── README.md └── redis │ └── server.js ├── jq └── README.md ├── jsonnet └── README.md ├── k3s └── README.md ├── k9s └── README.md ├── kafka └── README.md ├── keybase-cli └── README.md ├── kotlin └── README.md ├── kubectl └── README.md ├── kubernetes ├── LAB.md ├── LEARN_GUIDE.md ├── NOTES.md ├── README.md ├── SNIPPETS.md ├── TROUBLESHOOTING.md └── snippets │ ├── attach-pvc-to-debug-pod.yaml │ ├── cronjob.yaml │ ├── define-command-in-deployment.yml │ ├── dockerd-sidecar-deployment.yml │ ├── pod-node-selectory-tolerations.yaml │ ├── secret-as-env-var.yaml │ ├── secret-mount-pod.yaml │ └── security-context-in-deployments.yml ├── litecoin └── curl │ └── README.md ├── loki ├── README.md ├── logcli │ └── README.md ├── logql │ └── README.md ├── loki-config │ └── loki-config_aws.yml ├── nginx-reverse-proxy │ ├── conf.d │ │ └── loki.conf │ └── nginx.conf └── promtail │ ├── README.md │ ├── docker-compose.yml │ ├── docker-example │ └── configs │ │ ├── alertmanager.yml │ │ ├── datasource.yml │ │ ├── fluent-bit.conf │ │ ├── host_alert_rules.yml │ │ ├── loki-rules.yml │ │ ├── loki.yml │ │ ├── prometheus.yml │ │ └── promtail_config.yml │ ├── drop-loglines-promtail.yml │ ├── ec2_instance_sd_discovery.yml │ ├── java_example-promtail-config.yml │ ├── labs │ └── custom-log-metrics-from-promtail.md │ ├── nginx_example-promtail-config.yml │ ├── relabel-convert-to-capitals-promtail.yml │ └── relabel-stdout-to-info-promtail.yml ├── makefiles ├── README.md ├── docker-compose │ └── Makefile └── with-help-section │ └── Makefile ├── mongodb ├── python │ ├── README.md │ ├── code-examples │ │ └── auth_make_connection.py │ ├── docker-compose-rs.yml │ ├── docker-compose.yml │ └── docker │ │ └── docker-compose.yml └── shell │ └── README.md ├── mysql └── README.md ├── mysqldump └── README.md ├── neo4j-cypher └── README.md ├── netstat └── README.md ├── nginx └── README.md ├── openssl └── README.md ├── packer └── hcl │ └── aws-ansible-ami.pkr.hcl ├── php-composer └── 7.19.3.Dockerfile ├── php └── hostname.php ├── postgresql └── README.md ├── powershell └── README.md ├── prometheus ├── README.md ├── alert-examples │ └── README.md └── metric_examples │ ├── CONTAINER_METRICS.md │ ├── KUBERNETES.md │ └── NODE_METRICS.md ├── pushgateway ├── README.md └── scripts │ ├── bash_cpu_exporter.sh │ └── 
bash_memory_exporter.sh ├── pygame └── README.md ├── python-flask ├── README.md ├── basic-hello-world │ └── app.py └── sqlalchemy-sqlite │ └── app.py ├── python ├── README.md ├── docker │ └── README.md └── sqlalchemy │ └── README.md ├── redis ├── README.md ├── redis-cli │ ├── README.md │ └── docker-compose.yaml └── redis-python │ ├── README.md │ └── docker-compose.yaml ├── regex └── README.md ├── rsync └── README.md ├── ruby └── webrick │ ├── basic-api.rb │ ├── basic-web.rb │ └── read-from-html.rb ├── samba └── README.md ├── sealedsecrets └── README.md ├── sed └── README.md ├── sftp └── README.md ├── slack └── python │ └── slack_helper.py ├── ssh-keygen └── README.md ├── ssh └── README.md ├── stern └── README.md ├── stress └── README.md ├── sudo └── README.md ├── symlinks └── README.md ├── sysadmin └── README.md ├── systemd ├── README.md ├── pre_start_example.service └── specify_logfile.service ├── tar └── README.md ├── terraform ├── README.md ├── snippets │ └── for_each.tf └── variables.md ├── tmux └── README.md ├── vagrant └── README.md ├── vim ├── README.md └── config │ └── .vimrc ├── xq └── README.md ├── yq └── README.md └── zipkin └── README.md /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | ko_fi: ruanbekker 2 | -------------------------------------------------------------------------------- /CNAME: -------------------------------------------------------------------------------- 1 | cheatsheets.ruanbekker.com -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Ruan Bekker 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /_config.yml: -------------------------------------------------------------------------------- 1 | theme: jekyll-theme-cayman -------------------------------------------------------------------------------- /alertmanager/example_configs/opsgenie/host_alert_rules.yml: -------------------------------------------------------------------------------- 1 | groups: 2 | - name: host_alert_rules.yml 3 | rules: 4 | 5 | # Alert for any node that is unreachable for > 1 minute. 
6 | - alert: node_down 7 | expr: up{job="node-exporter"} == 0 8 | for: 1m 9 | labels: 10 | severity: critical 11 | environment: foobar-production 12 | annotations: 13 | summary: "Job {{ $labels.job }} is down on {{ $labels.instance }}" 14 | description: "Failed to scrape {{ $labels.job }} on {{ $labels.instance }} for more than 1 minute. Node might be down." 15 | impact: "Any metrics from {{ $labels.job }} on {{ $labels.instance }} will be missing" 16 | action: "Check on {{ $labels.instance }} if {{ $labels.job }} is running" 17 | dashboard: https://grafana.localdns.xyz/d/pdfrTcGnQ/host-metrics 18 | runbook: https://mydocs.localdns.xyz/wiki/runbooks/1 19 | priority: P2 20 | -------------------------------------------------------------------------------- /alertmanager/example_configs/pushover/alertmanager.yml: -------------------------------------------------------------------------------- 1 | global: 2 | resolve_timeout: 5m 3 | 4 | route: 5 | group_by: ['alertname', 'cluster', 'job', 'env'] 6 | repeat_interval: 24h 7 | group_interval: 5m 8 | receiver: 'default' 9 | 10 | receivers: 11 | - name: 'default' 12 | pushover_configs: 13 | - token: x 14 | user_key: x 15 | #title: '{{ template "pushover.default.title" . }}' 16 | title: '{{ if eq .Status "firing" }}ALARM{{ else }}OK{{ end }} [{{ .Status | toUpper }}] {{ .CommonAnnotations.summary }}' 17 | message: '{{ template "pushover.default.message" . }}' 18 | url: '{{ template "pushover.default.url" . }}' 19 | priority: '{{ if eq .Status "firing" }}2{{ else }}0{{ end }}' 20 | -------------------------------------------------------------------------------- /alertmanager/example_configs/pushover/host_alert_rules.yml: -------------------------------------------------------------------------------- 1 | groups: 2 | - name: host_alert_rules.yml 3 | rules: 4 | 5 | # Alert for any node that is unreachable for > 1 minute. 6 | - alert: node_down 7 | expr: up{job="node-exporter"} == 0 8 | for: 1m 9 | labels: 10 | severity: critical 11 | environment: env-production 12 | annotations: 13 | summary: "Job {{ $labels.job }} is down on {{ $labels.instance }}" 14 | description: "Failed to scrape {{ $labels.job }} on {{ $labels.instance }} for more than 1 minute. Node might be down." 
15 | impact: "Any metrics from {{ $labels.job }} on {{ $labels.instance }} will be missing" 16 | action: "Check on {{ $labels.instance }} if {{ $labels.job }} is running" 17 | dashboard: https://grafana.localdns.xyz 18 | runbook: https://runbooks.localdns.xyz 19 | -------------------------------------------------------------------------------- /alertmanager/rules/blackbox_alerts.yml: -------------------------------------------------------------------------------- 1 | # references: 2 | # https://medium.com/@yitaek/practical-monitoring-with-prometheus-grafana-part-ii-5020be20ebf6 3 | groups: 4 | - name: blackbox_alerts.yml 5 | rules: 6 | - alert: blackbox_exporter_down 7 | expr: up{job="blackbox-exporter"} == 0 8 | for: 5m 9 | labels: 10 | severity: warning 11 | annotations: 12 | summary: "Blackbox exporter is down" 13 | description: "Blackbox exporter is down or not being scraped correctly" 14 | 15 | - alert: probe_failing 16 | expr: probe_success{job="blackbox-exporter"} == 0 17 | for: 5m 18 | labels: 19 | severity: page 20 | priority: P1 21 | annotations: 22 | summary: "Endpoints are down" 23 | description: "Endpoint {{ $labels.instance }} is unresponsive for more than 5m" 24 | -------------------------------------------------------------------------------- /alertmanager/rules/host_alerts.yml: -------------------------------------------------------------------------------- 1 | groups: 2 | - name: host_alerts.yml 3 | rules: 4 | 5 | # alert when disk has 10% of available space left 6 | - alert: host_disk_space_low 7 | expr: (node_filesystem_avail_bytes{mountpoint="/"} * 100) / node_filesystem_size_bytes{mountpoint="/"} < 10 8 | for: 1m 9 | labels: 10 | severity: warning 11 | alert_channel: slack 12 | team: devops 13 | annotations: 14 | title: "Disk Space is Low on {{ $labels.instance }}" 15 | description: "Instance {{ $labels.instance }} available disk space for {{ $labels.mountpoint }} is at {{ humanize $value}}%."
16 | summary: "\n- Node: {{ $labels.instance }} \n- Available Disk Space: {{ humanize $value}}%" 17 | -------------------------------------------------------------------------------- /alertmanager/rules/http_alerts.yaml: -------------------------------------------------------------------------------- 1 | groups: 2 | - name: http_alerts.yaml 3 | rules: 4 | - alert: high_4xx_response_rate 5 | expr: sum(rate(http_server_requests_seconds_count{status=~"^4.."}[1m])) by (service) / sum(rate(http_server_requests_seconds_count[1m])) by (service) * 100 > 90 6 | for: 1m 7 | labels: 8 | severity: warning 9 | alert_channel: slack 10 | team: devops 11 | annotations: 12 | title: "High Rate of 4xx Response Status {{ $labels.service }}" 13 | description: "Too many HTTP requests with status 4xx (> 90%)\n VALUE = {{ humanize $value}}%\n SERVICE = {{ $labels.service }}" 14 | summary: "High HTTP 4xx error rate (instance {{ $labels.instance }})" 15 | - alert: high_actuator_rate 16 | expr: sum(irate(http_server_requests_seconds_count{uri="/actuator/prometheus"}[1m])) > 60 17 | for: 1m 18 | labels: 19 | severity: warning 20 | alert_channel: slack 21 | team: devops 22 | annotations: 23 | title: "High Rate of Actuator Scrapes {{ $labels.service }}" 24 | description: "Too many HTTP requests against the actuator endpoint (> 60/s)\n VALUE = {{ $value}}\n SERVICE = {{ $labels.service }}" 25 | summary: "High HTTP actuator endpoint rate (instance {{ $labels.instance }})" 26 | -------------------------------------------------------------------------------- /ansible/ansible.cfg: -------------------------------------------------------------------------------- 1 | # https://docs.ansible.com/ansible/2.4/intro_configuration.html 2 | [defaults] 3 | inventory = inventory.ini 4 | host_key_checking = False 5 | remote_user = ubuntu 6 | private_key_file = ~/.ssh/id_rsa 7 | -------------------------------------------------------------------------------- /atlantis/README.md: -------------------------------------------------------------------------------- 1 | # atlantis 2 | 3 | Terraform Pull Request Automation - [atlantis](https://www.runatlantis.io/) 4 | 5 | ## Examples 6 | 7 | Plan in the test environment: 8 | 9 | ```bash 10 | atlantis plan -d 'environments/test' 11 | ``` 12 | 13 | If you had a different workspace: 14 | 15 | ```bash 16 | atlantis plan -d 'environments/test' -w workspacename 17 | ``` 18 | 19 | Plan against a target: 20 | 21 | ```bash 22 | atlantis plan -d environments/test -w workspacename -- -target=module.environment.module.vpc 23 | ``` 24 | 25 | Plan a destroy: 26 | 27 | ```bash 28 | atlantis plan -d environments/test -w workspacename -- -destroy 29 | ``` 30 | 31 | Plan a targeted destroy: 32 | 33 | ```bash 34 | atlantis plan -d environments/test -w workspacename --auto-merge-disabled -- -destroy -target=module.eks -target=module.vpc 35 | ``` 36 | 37 | Run an apply without merging: 38 | 39 | ```bash 40 | atlantis apply --auto-merge-disabled 41 | ``` 42 | 43 | Remove state: 44 | 45 | ```bash 46 | atlantis state -d 'environments/test' rm 'module.acm[0].aws_acm_certificate.this[0]' 47 | ``` 48 | 49 | Atlantis import: 50 | 51 | ```bash 52 | atlantis import -d 'environments/test' 'module.acm[0].aws_acm_certificate.this[0]' arn:aws:acm:us-east-2:000000000000:certificate/00000000-0000-0000-0000-000000000000 53 | ``` 54 | 55 | Atlantis replace (taint): 56 | 57 | ```bash 58 | atlantis plan -d environments/test -w workspacename -- -replace='module.env.some_resource.this' 59 | ``` 60 |
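61 | Release the locks on a pull request (assuming your Atlantis version supports the `unlock` comment command): 62 | 63 | ```bash 64 | atlantis unlock 65 | ``` 66 |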
-------------------------------------------------------------------------------- /awk/README.md: -------------------------------------------------------------------------------- 1 | # awk 2 | 3 | ## Examples 4 | 5 | ### Second word in a string 6 | 7 | If you only want to display a certain part of a string, such as the second word: 8 | 9 | ```bash 10 | echo "one two three" | awk '{print $2}' 11 | ``` 12 | 13 | Will return `two` 14 | 15 | ### Removing the second column 16 | 17 | If we have a csv file: 18 | 19 | ```csv 20 | 2023-08-31 13:40:44,19.90,66.30 21 | 2023-08-31 13:41:45,19.90,66.10 22 | 2023-08-31 13:42:46,19.90,66.10 23 | ``` 24 | 25 | And we want only the first and third columns: 26 | 27 | ```bash 28 | cat data.csv | awk -F, 'BEGIN {OFS=FS} {print $1, $3}' 29 | ``` 30 | -------------------------------------------------------------------------------- /aws-cli/autoscaling/README.md: -------------------------------------------------------------------------------- 1 | # AWS CLI Autoscaling Cheatsheet 2 | 3 | Userdata: 4 | 5 | ``` 6 | $ cat userdata.txt 7 | #!/bin/bash 8 | CLUSTER_NAME="aws-qa-ecs" 9 | ENVIRONMENT_NAME="qa" 10 | MY_HOSTNAME="$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)" 11 | INSTANCE_ID="$(curl -s http://instance-data/latest/meta-data/instance-id)" 12 | INSTANCE_LIFECYCLE="$(curl -s http://169.254.169.254/latest/meta-data/instance-life-cycle)" 13 | REGION="$(curl -s http://instance-data/latest/meta-data/placement/availability-zone | rev | cut -c 2- | rev)" 14 | 15 | echo "ECS_CLUSTER=${CLUSTER_NAME}" >> /etc/ecs/ecs.config 16 | echo "ECS_AVAILABLE_LOGGING_DRIVERS=[\"json-file\",\"awslogs\"]" >> /etc/ecs/ecs.config 17 | echo "ECS_INSTANCE_ATTRIBUTES={\"environment\":\"${ENVIRONMENT_NAME}\"}" >> /etc/ecs/ecs.config 18 | ``` 19 | 20 | I am multiplying the current spot price by 2 to set my maximum bid price, so alter according to your needs: 21 | 22 | ``` 23 | #!/usr/bin/env bash 24 | 25 | launch_config_name="ecs-dev-cap-spot-lc.v1" 26 | instance_type="t2.medium" 27 | ssh_key_name="infra" 28 | userdata="userdata.txt" 29 | security_group="sg-00000000000000000" 30 | instance_profile="ecs-instance-role" 31 | 32 | ecs_ami_id="$(aws --profile dev ssm get-parameter --name '/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id' | jq -r '.Parameter.Value')" 33 | spot_price=$(aws --profile dev ec2 describe-spot-price-history --instance-types ${instance_type} --product-descriptions "Linux/UNIX" --max-items 1 | jq -r '.SpotPriceHistory[].SpotPrice') 34 | 35 | get_bid_price(){ 36 | spot_price=$(aws --profile dev ec2 describe-spot-price-history --instance-types ${instance_type} --product-descriptions "Linux/UNIX" --max-items 1 | jq -r '.SpotPriceHistory[].SpotPrice') 37 | echo ${spot_price} 2 | awk '{printf "%4.3f\n",$1*$2}' 38 | } 39 | 40 | aws --profile dev autoscaling create-launch-configuration \ 41 | --launch-configuration-name ${launch_config_name} \ 42 | --image-id ${ecs_ami_id} \ 43 | --instance-type ${instance_type} \ 44 | --key-name ${ssh_key_name} \ 45 | --user-data file://${userdata} \ 46 | --security-groups ${security_group} \ 47 | --instance-monitoring Enabled=true \ 48 | --iam-instance-profile ${instance_profile} \ 49 | --spot-price "$(get_bid_price)" 50 | ``` 51 | 52 | Create the Auto Scaling Group: 53 | 54 | ``` 55 | asg_name="ecs-dev-cap-spot" 56 | subnets="subnet-00000000000000000,subnet-11111111111111111,subnet-22222222222222222" 57 | 58 | aws --profile dev autoscaling create-auto-scaling-group \ 59 | --auto-scaling-group-name
${asg_name} \ 60 | --launch-configuration-name ${launch_config_name} \ 61 | --min-size 0 \ 62 | --max-size 3 \ 63 | --vpc-zone-identifier "${subnets}" 64 | ``` 65 | 66 | 67 | -------------------------------------------------------------------------------- /aws-cli/codepipeline/README.md: -------------------------------------------------------------------------------- 1 | # AWS CodePipeline CLI 2 | 3 | ## Export a Pipeline 4 | 5 | ``` 6 | $ aws codepipeline get-pipeline --name my-pipeline > pipeline.json 7 | ``` 8 | 9 | ## Create Pipeline from JSON 10 | 11 | ``` 12 | $ aws codepipeline create-pipeline --cli-input-json file://pipeline.json 13 | ``` 14 | 15 | ## View Pipeline Source 16 | 17 | ``` 18 | $ aws codepipeline get-pipeline --name my-pipeline | jq -r '.pipeline.stages[] | select(.name == "Source") .actions[].configuration.Branch' 19 | ``` 20 | -------------------------------------------------------------------------------- /aws-cli/eks/README.md: -------------------------------------------------------------------------------- 1 | # eks 2 | 3 | ## kubeconfig 4 | 5 | To update your AWS EKS kubeconfig: 6 | 7 | ```bash 8 | aws --profile default eks update-kubeconfig --name my-cluster --alias my-cluster 9 | ``` 10 | -------------------------------------------------------------------------------- /aws-cli/elasticache/README.md: -------------------------------------------------------------------------------- 1 | ## Elasticache AWS CLI Cheatsheet 2 | 3 | List Clusters: 4 | 5 | ``` 6 | $ aws --profile dev elasticache describe-cache-clusters --max-items 5 7 | { 8 | "CacheClusters": [ 9 | { 10 | "CacheClusterId": "test-cluster-dev-0001-001", 11 | ... 12 | } 13 | ] 14 | } 15 | ``` 16 | 17 | Describe Cluster: 18 | 19 | ``` 20 | $ aws --profile eu-dev elasticache describe-cache-clusters --cache-cluster-id "test-cluster-dev-0001-001" 21 | { 22 | "CacheClusters": [ 23 | { 24 | "CacheClusterId": "test-cluster-dev-0001-001", 25 | ...
26 | } 27 | ] 28 | } 29 | ``` 30 | -------------------------------------------------------------------------------- /aws-cli/iam/README.md: -------------------------------------------------------------------------------- 1 | ## AWS CLI / IAM Cheatsheet 2 | 3 | View Policy ARN: 4 | 5 | ``` 6 | $ aws --profile dev iam list-attached-user-policies --user-name my-user | jq -r '.AttachedPolicies[].PolicyArn' 7 | arn:aws:iam::000000000000:policy/my-policy 8 | ``` 9 | 10 | Detach Role Policy and Delete Role: 11 | 12 | ``` 13 | export iam_profile=dev 14 | export role_name=MyRole 15 | export policy_arn=arn:aws:iam::aws:policy/ReadOnlyAccessX 16 | 17 | aws --profile ${iam_profile} iam detach-role-policy --role-name ${role_name} --policy-arn ${policy_arn} 18 | aws --profile ${iam_profile} iam delete-role --role-name ${role_name} 19 | ``` 20 | -------------------------------------------------------------------------------- /aws-cli/notes.md: -------------------------------------------------------------------------------- 1 | Other Examples: 2 | 3 | - https://gist.github.com/avoidik/de015c0841aabec5e2d6c9fd6092d206 4 | -------------------------------------------------------------------------------- /aws-cli/secretsmanager/README.md: -------------------------------------------------------------------------------- 1 | # aws secretsmanager 2 | 3 | ## View Secret by Secret Name 4 | 5 | ```bash 6 | aws --profile default secretsmanager get-secret-value --secret-id my-db-secret --query SecretString --output text 7 | ``` 8 | -------------------------------------------------------------------------------- /aws-cli/ssm/README.md: -------------------------------------------------------------------------------- 1 | ## SSM AWS CLI Cheatsheet 2 | 3 | Put SSM Parameter: 4 | 5 | ``` 6 | $ aws --profile dev ssm put-parameter --type 'String' --name "/my-service/dev/DATABASE_NAME" --value "test" 7 | ``` 8 | 9 | Get SSM Parameters by Path: 10 | 11 | ``` 12 | $ aws --profile dev --region eu-west-1 ssm get-parameters-by-path --path '/my-service/dev/' | jq '.Parameters[]' | jq -r '.Name' 13 | /my-service/dev/DATABASE_HOST 14 | /my-service/dev/DATABASE_NAME 15 | ``` 16 | 17 | Decrypt and View SSM Parameter Value (using `jq`): 18 | 19 | ``` 20 | $ aws --profile dev ssm get-parameters --names '/my-service/dev/DATABASE_NAME' --with-decryption | jq -r '.Parameters[]' | jq -r '.Value' 21 | test 22 | ``` 23 | 24 | Decrypt and View SSM Parameter Value (using `--query`): 25 | 26 | ``` 27 | $ aws ssm get-parameter --name '/my-service/dev/DATABASE_PASSWORD' --with-decryption --query "Parameter.Value" --output text 28 | superSecureSecret 29 | ``` 30 | -------------------------------------------------------------------------------- /aws-cli/sts/README.md: -------------------------------------------------------------------------------- 1 | # sts cheatsheet 2 | 3 | ## Caller Identity 4 | 5 | ```bash 6 | aws --profile default sts get-caller-identity 7 | ``` 8 | 9 | ## Assume Role 10 | 11 | Get temporary credentials: 12 | 13 | ```bash 14 | aws --profile default sts assume-role \ 15 | --role-arn arn:aws:iam::000000000000:role/my-aws-role \ 16 | --role-session-name my-aws-role \ 17 | --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \ 18 | --output text 19 | ``` 20 | 21 | One-liner to export to environment: 22 | 23 | ```bash 24 | export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" $(aws --profile default sts assume-role --role-arn arn:aws:iam::000000000000:role/my-aws-role --role-session-name my-aws-role
--query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" --output text)) 25 | ``` 26 | -------------------------------------------------------------------------------- /aws-codebuild/README.md: -------------------------------------------------------------------------------- 1 | # AWS CodeBuild 2 | 3 | ## Resources 4 | 5 | - https://kb.novaordis.com/index.php/AWS_CodeBuild_Buildspec 6 | -------------------------------------------------------------------------------- /aws-codebuild/buildspec-with-ssm.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | env: 4 | variables: 5 | aws_region: "eu-west-1" 6 | container_name: "test" 7 | repository_url: "xxxxxxxxxxxx" 8 | parameter-store: 9 | dockerhub_username: "/devops/dev/DOCKERHUB_USERNAME" 10 | dockerhub_password: "/devops/dev/DOCKERHUB_PASSWORD" 11 | 12 | phases: 13 | pre_build: 14 | commands: 15 | - echo logging into Dockerhub as upstream not yet using gallery.ecr.aws 16 | - docker login -u $dockerhub_username -p $dockerhub_password 17 | - echo logging into ECR 18 | - $(aws ecr get-login --region $aws_region --no-include-email) 19 | - REPOSITORY_URI=${repository_url} 20 | - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7) 21 | build: 22 | commands: 23 | - echo build started on $(date) 24 | - docker build -t $REPOSITORY_URI:latest . 25 | - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG 26 | post_build: 27 | commands: 28 | - echo build completed on $(date) 29 | - echo pushing the docker images 30 | - docker push $REPOSITORY_URI:latest 31 | - docker push $REPOSITORY_URI:$IMAGE_TAG 32 | - echo writing image definitions file for deployment 33 | - printf '[{"name":"%s","imageUri":"%s"}]' $container_name $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json 34 | 35 | artifacts: 36 | files: imagedefinitions.json 37 | -------------------------------------------------------------------------------- /aws-ecr/allow_codebuild.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2008-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "CodeBuildAccess", 6 | "Effect": "Allow", 7 | "Principal": { 8 | "AWS": [ 9 | "arn:aws:iam::xxxxxxxxxxxx:root" 10 | ], 11 | "Service": "codebuild.amazonaws.com" 12 | }, 13 | "Action": [ 14 | "ecr:BatchCheckLayerAvailability", 15 | "ecr:BatchGetImage", 16 | "ecr:GetDownloadUrlForLayer" 17 | ] 18 | } 19 | ] 20 | } 21 | -------------------------------------------------------------------------------- /aws-iam/ec2/AllowStartStopEC2.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "AllowStartStopEc2", 6 | "Effect": "Allow", 7 | "Action": [ 8 | "ec2:StartInstances", 9 | "ec2:StopInstances" 10 | ], 11 | "Resource": [ 12 | "arn:aws:ec2:eu-west-1:xxxxxxxxxxxx:instance/i-00000000000000000", 13 | "arn:aws:ec2:eu-west-1:xxxxxxxxxxxx:instance/i-00000000000000001" 14 | ] 15 | } 16 | ] 17 | } 18 | -------------------------------------------------------------------------------- /aws-iam/ecr/CodeBuildECR.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "ecr:GetAuthorizationToken", 8 | "ecr:InitiateLayerUpload", 9 | "ecr:UploadLayerPart", 10 | "ecr:CompleteLayerUpload", 11 | "ecr:BatchCheckLayerAvailability", 12 | "ecr:PutImage" 13 | ], 14 | "Resource": [ 15 |
"arn:aws:ecr:eu-west-1:xxxxxxxxxxxx:repository/myrepo" 16 | ] 17 | } 18 | ] 19 | } 20 | -------------------------------------------------------------------------------- /aws-iam/ecr/GitlabCiECR.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "ECRAllowAuthToken", 6 | "Effect": "Allow", 7 | "Action": [ 8 | "ecr:GetAuthorizationToken" 9 | ], 10 | "Resource": [ 11 | "*" 12 | ] 13 | }, 14 | { 15 | "Sid": "ECRAllowUploads", 16 | "Effect": "Allow", 17 | "Action": [ 18 | "ecr:BatchCheckLayerAvailability", 19 | "ecr:InitiateLayerUpload", 20 | "ecr:UploadLayerPart", 21 | "ecr:CompleteLayerUpload", 22 | "ecr:PutImage" 23 | ], 24 | "Resource": [ 25 | "arn:aws:ecr:eu-west-1:xxxxxxxxxxxx:repository/*" 26 | ] 27 | }, 28 | { 29 | "Sid": "ECRAllowPull", 30 | "Effect": "Allow", 31 | "Action": [ 32 | "ecr:BatchGetImage", 33 | "ecr:GetDownloadUrlForLayer" 34 | ], 35 | "Resource": [ 36 | "arn:aws:ecr:eu-west-1:xxxxxxxxxxxx:repository/*" 37 | ] 38 | } 39 | ] 40 | } 41 | -------------------------------------------------------------------------------- /aws-iam/ecs/GitlabCiECSDeployPipeline.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "ECSReadAccess", 6 | "Effect": "Allow", 7 | "Action": [ 8 | "ecs:DescribeTaskDefinition" 9 | ], 10 | "Resource": [ 11 | "*" 12 | ] 13 | }, 14 | { 15 | "Sid": "ECSWriteAccess", 16 | "Effect": "Allow", 17 | "Action": [ 18 | "ecs:RegisterTaskDefinition" 19 | ], 20 | "Resource": [ 21 | "*" 22 | ] 23 | }, 24 | { 25 | "Sid": "ECSDeployAccess", 26 | "Effect": "Allow", 27 | "Action": [ 28 | "ecs:UpdateService" 29 | ], 30 | "Resource": [ 31 | "arn:aws:ecs:eu-west-1:xxxxxxxxxxxx:service/teamname-env-cluster/servicename" 32 | ] 33 | }, 34 | { 35 | "Sid": "IAMPassRole", 36 | "Effect": "Allow", 37 | "Action": [ 38 | "iam:PassRole" 39 | ], 40 | "Resource": [ 41 | "arn:aws:iam::xxxxxxxxxxxx:role/ecs-taskrole-teamname-clustername", 42 | "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole" 43 | ] 44 | } 45 | ] 46 | } 47 | -------------------------------------------------------------------------------- /aws-iam/iam/LambdaVPCExecution.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "ec2:CreateNetworkInterface", 8 | "ec2:DeleteNetworkInterface", 9 | "ec2:DescribeNetworkInterfaces" 10 | ], 11 | "Resource": "*" 12 | } 13 | ] 14 | } 15 | -------------------------------------------------------------------------------- /aws-iam/s3/BackupsToS3WithKMS.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "AllowListBucket", 6 | "Effect": "Allow", 7 | "Action": [ 8 | "s3:ListBucket" 9 | ], 10 | "Resource": [ 11 | "arn:aws:s3:::my-backups-bucket" 12 | ] 13 | }, 14 | { 15 | "Sid": "AllowPutAndGet", 16 | "Effect": "Allow", 17 | "Action": [ 18 | "s3:PutObject", 19 | "s3:GetObject" 20 | ], 21 | "Resource": [ 22 | "arn:aws:s3:::my-backups-bucket/*" 23 | ] 24 | }, 25 | { 26 | "Sid": "AllowEncryptionAndDecryption", 27 | "Effect": "Allow", 28 | "Action": [ 29 | "kms:Decrypt", 30 | "kms:Encrypt" 31 | ], 32 | "Resource": [ 33 | "arn:aws:kms:eu-west-1:xxxxxxxxxxxx:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" 34 | ] 35 | } 36 | ] 37 | } 38 | 
-------------------------------------------------------------------------------- /aws-iam/s3/BucketLevelPermissions.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "BucketLevelAccess", 6 | "Effect": "Allow", 7 | "Action": [ 8 | "s3:PutObject", 9 | "s3:GetObject", 10 | "s3:AbortMultipartUpload", 11 | "s3:DeleteObjectVersion", 12 | "s3:ListBucket", 13 | "s3:DeleteObject" 14 | ], 15 | "Resource": [ 16 | "arn:aws:s3:::my-s3-bucket", 17 | "arn:aws:s3:::my-s3-bucket/*" 18 | ] 19 | } 20 | ] 21 | } 22 | -------------------------------------------------------------------------------- /aws-iam/sqs/SQSUsage.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "RequiredSQS", 6 | "Effect": "Allow", 7 | "Action": [ 8 | "sqs:GetQueueUrl", 9 | "sqs:ChangeMessageVisibility", 10 | "sqs:ReceiveMessage", 11 | "sqs:SendMessage", 12 | "sqs:GetQueueAttributes" 13 | ], 14 | "Resource": [ 15 | "arn:aws:sqs:eu-west-1:xxxxxxxxxxxx:dev-request-queue", 16 | "arn:aws:sqs:eu-west-1:xxxxxxxxxxxx:dev-response-queue" 17 | ] 18 | }, 19 | { 20 | "Sid": "ListQueues", 21 | "Effect": "Allow", 22 | "Action": [ 23 | "sqs:ListQueues" 24 | ], 25 | "Resource": "*" 26 | } 27 | ] 28 | } 29 | -------------------------------------------------------------------------------- /aws-iam/ssm/AllowGetSSMWithKMS.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "AllowSSM", 6 | "Effect": "Allow", 7 | "Action": [ 8 | "ssm:GetParametersByPath", 9 | "ssm:GetParameters" 10 | ], 11 | "Resource": [ 12 | "arn:aws:ssm:eu-west-1:xxxxxxxxxxxx:parameter/my-application/dev/DATABASE_*", 13 | "arn:aws:ssm:eu-west-1:xxxxxxxxxxxx:parameter/codebuild/dev/DOCKER_USER", 14 | "arn:aws:ssm:eu-west-1:xxxxxxxxxxxx:parameter/codebuild/dev/DOCKER_PASSWORD" 15 | ] 16 | }, 17 | { 18 | "Sid": "AllowKMSDecrypt", 19 | "Effect": "Allow", 20 | "Action": [ 21 | "kms:Decrypt", 22 | "kms:GenerateDataKey" 23 | ], 24 | "Resource": [ 25 | "arn:aws:kms:eu-west-1:xxxxxxxxxxxx:key/*" 26 | ] 27 | } 28 | ] 29 | } 30 | -------------------------------------------------------------------------------- /aws-python/cloudwatch-logs/update_log_retention.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import boto3 4 | import time 5 | 6 | aws_profile = 'default' 7 | aws_region = 'eu-west-1' 8 | retention_in_days_number = 5 9 | 10 | cwlogs = boto3.Session(profile_name=aws_profile, region_name=aws_region).client('logs') 11 | 12 | def list_log_groups(limit_number): 13 | response = cwlogs.describe_log_groups(limit=limit_number) 14 | return response['logGroups'] 15 | 16 | def update_retention(log_group_name, retention_in_days): 17 | response = cwlogs.put_retention_policy( 18 | logGroupName=log_group_name, 19 | retentionInDays=retention_in_days 20 | ) 21 | return response 22 | 23 | paginator = cwlogs.get_paginator('describe_log_groups') 24 | 25 | updated_log_groups = 0 26 | 27 | for response in paginator.paginate(): 28 | for log_group in response['logGroups']: 29 | if 'retentionInDays' not in log_group.keys(): 30 | print("X {loggroup} has no retention policy set, setting it to {num} days".format(loggroup=log_group['logGroupName'], num=retention_in_days_number)) 31 | update_response = update_retention(log_group['logGroupName'],
retention_in_days_number) 32 | print("RequestID: {rid}, StatusCode: {sc}".format(rid=update_response['ResponseMetadata']['RequestId'], sc=update_response['ResponseMetadata']['HTTPStatusCode'])) 33 | updated_log_groups += 1 34 | time.sleep(1) 35 | 36 | print("Updated {count} log groups to a retention of {num} days".format(count=updated_log_groups, num=retention_in_days_number)) 37 | 38 | -------------------------------------------------------------------------------- /aws-python/ec2/README.md: -------------------------------------------------------------------------------- 1 | ## List Instances 2 | 3 | Get instance details from a filter, [examples/ec2_list_instances.py](examples/ec2_list_instances.py) 4 | 5 | ``` 6 | >>> import boto3 7 | >>> ec2 = boto3.Session(profile_name='prod', region_name='eu-west-1').resource('ec2') 8 | >>> instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']},{'Name': 'tag:Name', 'Values': ['my-instance-group-name']}]) 9 | >>> for instance in instances: 10 | ... print(instance.id, instance.instance_type, instance.private_ip_address) 11 | ... 12 | ('i-00bceb55c1cec0c00', 'c5.large', '172.30.34.253') 13 | ('i-007f2ef27779f3f00', 'c5.large', '172.30.34.245') 14 | ('i-00228d357d1ddd200', 'c5.large', '172.30.36.188') 15 | ('i-00a0087392f7ebe00', 'c5.large', '172.30.37.192') 16 | ('i-008895f213ae84000', 'c5.large', '172.30.38.170') 17 | ``` 18 | -------------------------------------------------------------------------------- /aws-python/ec2/examples/ec2_list_instances.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | profile_name = '' 4 | region_name = '' 5 | 6 | ec2 = boto3.Session(profile_name=profile_name, region_name=region_name).resource('ec2') 7 | instances = ec2.instances.filter( 8 | Filters=[ 9 | {'Name': 'instance-state-name', 'Values': ['running']}, 10 | {'Name': 'tag:Name', 'Values': ['my-instance-group-name']} 11 | ] 12 | ) 13 | 14 | for instance in instances: 15 | print(instance.id, instance.instance_type, instance.private_ip_address) 16 | -------------------------------------------------------------------------------- /aws-python/kms/README.md: -------------------------------------------------------------------------------- 1 | # aws kms cheatsheet in python 2 | 3 | Ensure you have a KMS key that you have permissions to use. 4 | 5 | ## Encryption 6 | 7 | ```python 8 | import boto3 9 | import base64 10 | session = boto3.Session(region_name='us-east-1', profile_name='default') 11 | kms = session.client('kms') 12 | plaintext = 'hello' 13 | ciphertext = kms.encrypt(KeyId='alias/mykey', Plaintext=plaintext) 14 | encoded_ciphertext = base64.b64encode(ciphertext["CiphertextBlob"]) 15 | result = encoded_ciphertext.decode('utf-8') 16 | ``` 17 | 18 | ## Decryption 19 | 20 | ```python 21 | import boto3 22 | import base64 23 | session = boto3.Session(region_name='us-east-1', profile_name='default') 24 | kms = session.client('kms') 25 | # encoded_ciphertext is the value produced in the encryption step above 26 | decoded_ciphertext = base64.b64decode(encoded_ciphertext) 27 | plaintext = kms.decrypt(CiphertextBlob=bytes(decoded_ciphertext)) 28 | result = plaintext['Plaintext'].decode('utf-8') 29 | ``` 30 | 31 | ## Full Example 32 | 33 | ```python 34 | import boto3 35 | import base64 36 | session = boto3.Session(region_name='us-east-1', profile_name='default') 37 | kms = session.client('kms') 38 | 39 | def encrypt(plaintext): 40 | ciphertext = kms.encrypt(KeyId='alias/mykey', Plaintext=plaintext) 41 | encoded_ciphertext = base64.b64encode(ciphertext["CiphertextBlob"])
42 | return encoded_ciphertext.decode('utf-8') 43 | 44 | def decrypt(encoded_ciphertext): 45 | decoded_ciphertext = base64.b64decode(encoded_ciphertext) 46 | plaintext = kms.decrypt(CiphertextBlob=bytes(decoded_ciphertext)) 47 | return plaintext['Plaintext'].decode('utf-8') 48 | 49 | """ 50 | >>> a = encrypt('hello') 51 | >>> a 52 | 'AQICAHgQYMmngPUi9lcJeng2A12tVdu[shortened]2XY1wT3t1zreJg2KEF8vZmYykJBc8g==' 53 | 54 | >>> b = decrypt(a) 55 | >>> b 56 | 'hello' 57 | """ 58 | ``` 59 | -------------------------------------------------------------------------------- /aws-python/s3/put_object.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | s3 = boto3.client('s3') 3 | 4 | with open('file.txt', 'rb') as file_content: 5 | s3.put_object(Bucket='my-bucket-name', Key='testfolder/file.txt', Body=file_content.read()) 6 | -------------------------------------------------------------------------------- /aws-python/sns/send_sms.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | SMS_NUMBER='+27000000000' 4 | 5 | sns = boto3.Session(profile_name='prod', region_name='eu-west-1').client('sns') 6 | 7 | response = sns.publish( 8 | PhoneNumber=SMS_NUMBER, 9 | Message='testing', 10 | MessageAttributes={ 11 | 'AWS.SNS.SMS.SenderID': { 12 | 'DataType': 'String', 13 | 'StringValue': '123' 14 | }, 15 | 'AWS.SNS.SMS.SMSType': { 16 | 'DataType': 'String', 17 | 'StringValue': 'Transactional' 18 | } 19 | } 20 | ) 21 | -------------------------------------------------------------------------------- /benchmarking/README.md: -------------------------------------------------------------------------------- 1 | # Benchmarking 2 | 3 | ## Tools 4 | 5 | - [stress-ng](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) 6 | -------------------------------------------------------------------------------- /bitcoin/cli/README.md: -------------------------------------------------------------------------------- 1 | # Bitcoin CLI 2 | 3 | ## Get Balance 4 | 5 | ```bash 6 | bitcoin-cli -conf=/blockchain/config/bitcoin.conf getbalance 7 | ``` 8 | 9 | ## List Transactions 10 | 11 | ```bash 12 | bitcoin-cli -conf=/blockchain/config/bitcoin.conf listtransactions 13 | ``` 14 | 15 | ## Send to Address 16 | 17 | ```bash 18 | bitcoin-cli -conf=/blockchain/config/bitcoin.conf sendtoaddress "btc-destination-address" 0.12345678 19 | ``` 20 | -------------------------------------------------------------------------------- /bitcoin/python/README.md: -------------------------------------------------------------------------------- 1 | # Bitcoin Python 2 | 3 | ## Get Balance 4 | 5 | ```python 6 | import os 7 | import requests 8 | import json 9 | from dotenv import load_dotenv 10 | 11 | load_dotenv('.env') 12 | 13 | RPC_USERNAME = os.getenv('RPC_USERNAME') 14 | RPC_PASSWORD = os.getenv('RPC_PASSWORD') 15 | 16 | def get_btc_balance(): 17 | headers = {'content-type': 'text/plain'} 18 | request_payload = {"jsonrpc": "1.0", "id":"requests", "method": "getbalance", "params": ["*", 6]} 19 | response = requests.post('http://127.0.0.1:8332/', json=request_payload, auth=(RPC_USERNAME, RPC_PASSWORD), headers=headers) 20 | return response.json() 21 | 22 | current_balance = get_btc_balance()['result'] 23 | 24 | print('[DEBUG] current balance is: {}'.format(current_balance)) 25 | ``` 26 | -------------------------------------------------------------------------------- /blackbox-exporter/README.md: -------------------------------------------------------------------------------- 1
| # Blackbox Exporter 2 | 3 | Blackbox Exporter by Prometheus 4 | 5 | ## Debugging 6 | 7 | TCP Checks, host: `test.mydomain.com`, port: `443` 8 | 9 | ``` 10 | curl "https://blackbox-exporter.mydomain.com/probe?target=test.mydomain.com:443&module=tcp_connect&debug=true" 11 | ``` 12 | 13 | HTTP Check: `https://test.mydomain.com` 14 | 15 | ``` 16 | curl "https://blackbox-exporter.mydomain.com/probe?target=https://test.mydomain.com&module=http_2xx&debug=true" 17 | ``` 18 | 19 | SSH Check: `test.mydomain.com:22` 20 | 21 | ``` 22 | curl "https://blackbox-exporter.mydomain.com/probe?target=test.mydomain.com:22&module=ssh_banner&debug=true" 23 | ``` 24 | -------------------------------------------------------------------------------- /cadvisor/README.md: -------------------------------------------------------------------------------- 1 | # cAdvisor Cheatsheet 2 | 3 | ### Labels 4 | 5 | ``` 6 | - instance 7 | - environment 8 | - cluster_name 9 | ``` 10 | 11 | For AWS, we will configure Prometheus as: 12 | 13 | ``` 14 | scrape_configs: 15 | - job_name: container-metrics 16 | scrape_interval: 15s 17 | ec2_sd_configs: 18 | - region: eu-west-1 19 | role_arn: 'arn:aws:iam::000000000000:role/prometheus-ec2-role' 20 | port: 9100 21 | filters: 22 | - name: tag:PrometheusContainerScrape 23 | values: 24 | - Enabled 25 | relabel_configs: 26 | - source_labels: [__meta_ec2_private_ip] 27 | replacement: '${1}:8080' 28 | target_label: __address__ 29 | - source_labels: [__meta_ec2_tag_Name] 30 | target_label: instance 31 | - source_labels: [__meta_ec2_tag_ECSClusterName] 32 | target_label: cluster_name 33 | - source_labels: [__meta_ec2_tag_Environment] 34 | target_label: environment 35 | ``` 36 | -------------------------------------------------------------------------------- /cassandra/README.md: -------------------------------------------------------------------------------- 1 | # cassandra-cheatsheet 2 | 3 | ## Cassandra Client 4 | 5 | ### Installing cqlsh 6 | 7 | To install the `cqlsh` client on Alpine Linux: 8 | 9 | ```bash 10 | apk --no-cache add python3 py3-pip 11 | pip3 install cqlsh 12 | ``` 13 | 14 | ### Connecting to Cassandra 15 | 16 | From the same node: 17 | 18 | ```bash 19 | cqlsh -u user -p password 20 | ``` 21 | 22 | Over the network: 23 | 24 | ```bash 25 | CQLSH_HOST=cassandra.databases CQLSH_PORT=9042 cqlsh -u user -p password 26 | ``` 27 | 28 | ### Describe Keyspaces 29 | 30 | ```sql 31 | DESCRIBE keyspaces; 32 | ``` 33 | -------------------------------------------------------------------------------- /cat/README.md: -------------------------------------------------------------------------------- 1 | # cat 2 | 3 | ## Examples 4 | 5 | Show hidden characters like extra spaces, carriage returns or BOM markers, etc: 6 | 7 | ```bash 8 | cat -A data.csv | sed -n '26p' 9 | ``` 10 | -------------------------------------------------------------------------------- /ceph/README.md: -------------------------------------------------------------------------------- 1 | # ceph cheatsheet 2 | 3 | ## View Status: 4 | 5 | ``` 6 | $ ceph -s 7 | cluster: 8 | id: uuid-x-x-x-x 9 | health: HEALTH_OK 10 | 11 | services: 12 | mon: 1 daemons, quorum ceph:10.20.30.40:16789 13 | mgr: 1f2c207d5ec9(active) 14 | osd: 3 osds: 3 up, 3 in 15 | 16 | data: 17 | pools: 2 pools, 200 pgs 18 | objects: 16 objects, 21 MiB 19 | usage: 3.0 GiB used, 27 GiB / 30 GiB avail 20 | pgs: 200 active+clean 21 | ``` 22 | 23 | ## Pools: 24 | 25 | List pools: 26 | 27 | ``` 28 | $ ceph osd lspools 29 | 1 default 30 | 2 volumes 31 | ``` 32 | 33 | List
objects in pool: 34 | 35 | ``` 36 | $ rados -p volumes ls 37 | rbd_header.10516b8b4567 38 | journal_data.2.10516b8b4567.1 39 | journal_data.2.10516b8b4567.2 40 | ``` 41 | 42 | View disk space of a pool: 43 | 44 | ``` 45 | $ rados df -p volumes 46 | POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR 47 | volumes 21 MiB 16 0 48 0 0 0 665 11 MiB 794 16 MiB 48 | 49 | total_objects 16 50 | total_used 3.0 GiB 51 | total_avail 27 GiB 52 | total_space 30 GiB 53 | ``` 54 | 55 | ## Get PGNum: 56 | 57 | ``` 58 | $ ceph osd pool get volumes pg_num 59 | pg_num: 100 60 | ``` 61 | 62 | ## Set Dashboard Username/Password: 63 | 64 | ``` 65 | ceph dashboard set-login-credentials <username> <password> 66 | ``` 67 | 68 | ## Ceph Docker Volumes 69 | 70 | ### Dockerized Ceph: 71 | - https://github.com/flaviostutz/ceph-osd 72 | 73 | ### Docker Volume Plugin: 74 | - https://github.com/flaviostutz/cepher 75 | 76 | ## Resources: 77 | - http://docs.ceph.com/docs/mimic/mgr/dashboard/ 78 | - https://wiki.nix-pro.com/view/Ceph_FAQ/Tweaks/Howtos -------------------------------------------------------------------------------- /cloudwatch/logs/README.md: -------------------------------------------------------------------------------- 1 | ## CloudWatch Logs Insights 2 | 3 | - https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax-examples.html 4 | 5 | Loglines: 6 | 7 | ``` 8 | 172.31.37.134 - - [07/Jul/2020 13:18:34] "GET / HTTP/1.1" 200 - 9 | 172.31.37.134 - - [07/Jul/2020 13:18:34] "GET /status HTTP/1.1" 200 - 10 | ``` 11 | 12 | Show all logs: 13 | 14 | ``` 15 | fields @message 16 | ``` 17 | 18 | Show the 25 most recent log entries: 19 | 20 | ``` 21 | fields @timestamp, @message | sort @timestamp desc | limit 25 22 | ``` 23 | 24 | Show all logs and include parsed fields: 25 | 26 | ``` 27 | fields @message, @log, @logStream, @ingestionTime, @timestamp 28 | ``` 29 | 30 | Only show logs containing `/status`: 31 | 32 | ``` 33 | fields @message | filter @message like '/status' 34 | ``` 35 | 36 | View EKS audit logs for delete verbs: 37 | 38 | ``` 39 | fields @timestamp, @message, @logStream, @log 40 | | filter objectRef.namespace = 'dev' and objectRef.resource like /service.*/ and verb = 'delete' 41 | | sort @timestamp desc 42 | | limit 20 43 | ``` 44 | 45 | Select the logstream and filter on string content: 46 | 47 | ``` 48 | fields @timestamp, @message, @logStream 49 | | sort @timestamp desc 50 | | filter @logStream = 'cb2a300000000000000000003b3' 51 | | filter @message like 'msg=' 52 | ``` 53 | 54 | Select the logstream and filter out string content: 55 | 56 | ``` 57 | fields @timestamp, @message, @logStream | sort @timestamp desc 58 | | filter @logStream = 'cb2a300000000000000000003b3' 59 | | filter @message not like "Something I dont want to see" 60 | ``` 61 | 62 | Filter out multiple strings: 63 | 64 | ``` 65 | fields @timestamp, @message, @logStream | sort @timestamp desc 66 | | filter @logStream = 'cb2a300000000000000000003b3' 67 | and not ( 68 | @message like "Something I dont want to see" or 69 | @message like "also dont want to see this" or 70 | @message like "or even this" 71 | ) 72 | ``` 73 | -------------------------------------------------------------------------------- /codedeploy/appspec/versioned/appspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.0 2 | os: linux 3 | files: 4 | - source: / 5 | destination: /home/snake/_target 6 | hooks: 7 | # This deployment lifecycle event occurs even before the application
revision 8 | # is downloaded. You can specify scripts for this event to gracefully stop the 9 | # application or remove currently installed packages in preparation for a deployment. 10 | # The AppSpec file and scripts used for this deployment lifecycle event are from the 11 | # previous successfully deployed application revision. 12 | ApplicationStop: 13 | - location: scripts/stop_server.sh 14 | timeout: 300 15 | runas: root 16 | 17 | # You can use this deployment lifecycle event for preinstall tasks, 18 | # such as decrypting files and creating a backup of the current version. 19 | BeforeInstall: 20 | - location: scripts/before_install.sh 21 | timeout: 300 22 | runas: root 23 | 24 | # You can use this deployment lifecycle event for tasks such as configuring 25 | # your application or changing file permissions. 26 | AfterInstall: 27 | - location: scripts/after_install.sh 28 | timeout: 300 29 | runas: root 30 | 31 | # You typically use this deployment lifecycle event to restart services that 32 | # were stopped during ApplicationStop 33 | ApplicationStart: 34 | - location: scripts/start_server.sh 35 | #- location: scripts/notify_post_start.sh 36 | timeout: 300 37 | runas: root 38 | 39 | # This is the last deployment lifecycle event. It is used to verify the 40 | # deployment was completed successfully. 41 | ValidateService: 42 | - location: scripts/validate_service.sh 43 | timeout: 300 44 | runas: root 45 | 46 | -------------------------------------------------------------------------------- /codedeploy/appspec/versioned/scripts/after_install.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -ex 3 | APP_USER=snake 4 | APP_GROUP=snake 5 | DATESTAMP="$(date +%F)" 6 | CD_INSTALL_TARGET=/home/snake/_target 7 | 8 | function systemd_unit_file_check() { 9 | echo "copying systemd unit file in place" 10 | sudo cp "${CD_INSTALL_TARGET}/configs/python-app.service" /etc/systemd/system/python-app.service 11 | sudo systemctl daemon-reload 12 | } 13 | 14 | function remove_symlink(){ 15 | if [ -d /home/snake/app/current ] 16 | then 17 | sudo rm -rf /home/snake/app/current 18 | sudo mkdir -p /home/snake/app 19 | fi 20 | } 21 | 22 | function symlink_release() { 23 | sudo mkdir -p "/home/${APP_USER}/app/${DEPLOYMENT_ID}/configs" 24 | sudo mkdir -p "/home/${APP_USER}/app/${DEPLOYMENT_ID}/dependencies" 25 | sudo cp ${CD_INSTALL_TARGET}/configs/sample.env /home/${APP_USER}/app/${DEPLOYMENT_ID}/.env 26 | sudo cp ${CD_INSTALL_TARGET}/configs/hypercorn.toml /home/${APP_USER}/app/${DEPLOYMENT_ID}/configs/hypercorn.toml 27 | sudo cp ${CD_INSTALL_TARGET}/dependencies/requirements.pip /home/${APP_USER}/app/${DEPLOYMENT_ID}/dependencies/requirements.pip 28 | sudo cp -r ${CD_INSTALL_TARGET}/src/* /home/${APP_USER}/app/${DEPLOYMENT_ID}/ 29 | sudo ln -s /home/${APP_USER}/app/${DEPLOYMENT_ID} /home/${APP_USER}/app/current 30 | } 31 | 32 | function install_dependencies(){ 33 | sudo python3 -m pip install -r /home/${APP_USER}/app/current/dependencies/requirements.pip 34 | } 35 | 36 | function set_permissions() { 37 | sudo chown -R ${APP_USER}:${APP_GROUP} /home/${APP_USER}/ 38 | } 39 | 40 | function log_status(){ 41 | echo "[${DATESTAMP}] after install step completed" 42 | } 43 | 44 | # copy systemd unit file if not in place 45 | systemd_unit_file_check 46 | 47 | # remove symlink and version the installed target 48 | remove_symlink 49 | symlink_release 50 | 51 | # install the dependencies 52 | install_dependencies 53 | 54 | # set permissions 55 | 
set_permissions 56 | 57 | # log status 58 | log_status 59 | -------------------------------------------------------------------------------- /codedeploy/appspec/versioned/scripts/before_install.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -ex 3 | 4 | # global variables 5 | APP_USER=snake 6 | APP_GROUP=snake 7 | DATESTAMP="$(date +%F)" 8 | TIMESTAMP="$(date +%s)" 9 | 10 | function detect_previous_version(){ 11 | if [ -f /opt/codedeploy-agent/deployment-root/${DEPLOYMENT_GROUP_ID}/.version ] 12 | then 13 | PREVIOUS_VERSION="$(cat /opt/codedeploy-agent/deployment-root/${DEPLOYMENT_GROUP_ID}/.version)" 14 | else 15 | PREVIOUS_VERSION="initial" 16 | fi 17 | } 18 | 19 | function debug_env(){ 20 | echo "LIFECYCLE_EVENT=${LIFECYCLE_EVENT}" > /tmp/codedeploy.env 21 | echo "DEPLOYMENT_ID=${DEPLOYMENT_ID}" >> /tmp/codedeploy.env 22 | echo "APPLICATION_NAME=${APPLICATION_NAME}" >> /tmp/codedeploy.env 23 | echo "DEPLOYMENT_GROUP_NAME=${DEPLOYMENT_GROUP_NAME}" >> /tmp/codedeploy.env 24 | echo "DEPLOYMENT_GROUP_ID=${DEPLOYMENT_GROUP_ID}" >> /tmp/codedeploy.env 25 | } 26 | 27 | # functions 28 | function user_and_group_check(){ 29 | id -u ${APP_USER} &> /dev/null && EXIT_CODE=${?} || EXIT_CODE=${?} 30 | if [ ${EXIT_CODE} == 1 ] 31 | then 32 | sudo groupadd --gid 1002 ${APP_GROUP} 33 | sudo useradd --create-home --gid 1002 --shell /bin/bash ${APP_USER} 34 | fi 35 | } 36 | 37 | function create_backup() { 38 | sudo mkdir -p "/opt/backups/${DATESTAMP}" 39 | # on initial deploy skip backups 40 | if [ -d "/home/snake/app/current" ] 41 | then 42 | TARGET_DIR=$(readlink -f /home/snake/app/current) 43 | sudo tar -zcf "/opt/backups/${DATESTAMP}/app-backup_${PREVIOUS_VERSION}.tar.gz" "${TARGET_DIR}/" 44 | fi 45 | } 46 | 47 | function log_status(){ 48 | echo "[${DATESTAMP}] before install step completed" 49 | } 50 | 51 | if [ "$DEPLOYMENT_GROUP_NAME" == "Staging" ] 52 | then 53 | echo "Staging Environment" 54 | fi 55 | 56 | # detect previous version 57 | detect_previous_version 58 | # debug env vars for codedeploy 59 | debug_env 60 | # ensure the user exists 61 | user_and_group_check 62 | # create a backup 63 | #create_backup 64 | # log status 65 | log_status 66 | -------------------------------------------------------------------------------- /codedeploy/appspec/versioned/scripts/start_server.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | DATESTAMP="$(date +%FT%H:%M)" 3 | 4 | sudo systemctl restart python-app 5 | echo "[${DATESTAMP}] application started" 6 | -------------------------------------------------------------------------------- /codedeploy/appspec/versioned/scripts/stop_server.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | DATESTAMP="$(date +%FT%H:%M)" 3 | 4 | if [ -f "/etc/systemd/system/python-app.service" ] 5 | then 6 | sudo systemctl stop python-app 7 | sleep 5 8 | while [ "$(sudo systemctl is-active python-app)" == "active" ] 9 | do 10 | sleep 5 11 | done 12 | echo "[${DATESTAMP}] application stopped" 13 | fi 14 | -------------------------------------------------------------------------------- /codedeploy/appspec/versioned/scripts/validate_service.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -ex 3 | DATESTAMP="$(date +%FT%H:%M)" 4 | 5 | # Verify if the service is healthy 6 | while !
curl -sf http://localhost:8000/health; do sleep 5; done 7 | 8 | # log 9 | echo "[${DATESTAMP}] application passing health checks" 10 | 11 | # write current version to disk 12 | echo "${DEPLOYMENT_ID}" > /opt/codedeploy-agent/deployment-root/${DEPLOYMENT_GROUP_ID}/.version 13 | -------------------------------------------------------------------------------- /concourse/README.md: -------------------------------------------------------------------------------- 1 | ## Concourse Cheatsheets 2 | 3 | - https://cheatsheet.dennyzhang.com/cheatsheet-concourse-a4 4 | 5 | 6 | Log in to your Team: 7 | 8 | ``` 9 | $ fly -t ci-teamx login -c https://ci.domain.com -n teamx 10 | ``` 11 | 12 | List your targets: 13 | 14 | ``` 15 | $ fly targets 16 | name url team expiry 17 | ci https://ci.domain.com teamy n/a 18 | ci-teamx https://ci.domain.com teamx Tue, 15 Oct 2019 20:42:34 UTC 19 | ``` 20 | 21 | Get a Pipeline's Config: 22 | 23 | ``` 24 | $ fly -t ci-teamx gp -p prod-pipeline 25 | ``` 26 | 27 | Delete a Pipeline: 28 | 29 | ``` 30 | $ fly -t ci-teamx destroy-pipeline -p prod-pipeline 31 | ``` 32 | -------------------------------------------------------------------------------- /dd/README.md: -------------------------------------------------------------------------------- 1 | # dd cheatsheet 2 | 3 | ## Create Files 4 | 5 | To create a 1GB file: 6 | 7 | ```bash 8 | $ dd if=/dev/zero of=1g.bin bs=1G count=1 9 | ``` 10 | 11 | ## Benchmarking 12 | 13 | Server Throughput (Streaming I/O): 14 | 15 | ```bash 16 | $ dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=dsync 17 | ``` 18 | 19 | Server Latency: 20 | 21 | ```bash 22 | $ dd if=/dev/zero of=/root/testfile bs=512 count=1000 oflag=dsync 23 | ``` 24 | 25 | Testing Write Speed: 26 | 27 | ```bash 28 | # Set the block size to 1MB and copy 1000 blocks 29 | dd if=/dev/urandom of=testfile bs=1M count=1000 30 | ``` 31 | 32 | Testing Read Speed: 33 | 34 | ```bash 35 | dd if=testfile of=/dev/null bs=1M 36 | ``` 37 | -------------------------------------------------------------------------------- /dogecoin/cli/README.md: -------------------------------------------------------------------------------- 1 | # Dogecoin CLI 2 | 3 | ## Get Balance 4 | 5 | ```bash 6 | dogecoin-cli -conf=/blockchain/config/dogecoin.conf getbalance 7 | ``` 8 | 9 | ## List Transactions 10 | 11 | ```bash 12 | dogecoin-cli -conf=/blockchain/config/dogecoin.conf listtransactions 13 | ``` 14 | 15 | ## Send to Address 16 | 17 | ```bash 18 | dogecoin-cli -conf=/blockchain/config/dogecoin.conf sendtoaddress "xxxxxxxxxxxx" 12.12345678 "donation" "james donation" 19 | ``` 20 | -------------------------------------------------------------------------------- /dogecoin/curl/README.md: -------------------------------------------------------------------------------- 1 | # Dogecoin JSON-RPC Examples 2 | 3 | To get help for a JSON-RPC method, run for example `dogecoin-cli help sendtoaddress`. 
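The same help text is also available over JSON-RPC via the `help` method, for example (assuming the same credentials and port used in the examples below):

```bash
curl -u "user:pass" -d '{"jsonrpc": "1.0", "id": "curl", "method": "help", "params": ["sendtoaddress"]}' -H 'content-type: text/plain;' http://127.0.0.1:44555/
```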
4 | 5 | ## Commands 6 | 7 | - `getblockchaininfo` 8 | 9 | ```bash 10 | curl -u "user:pass" -d '{"jsonrpc": "1.0", "id": "curl", "method": "getblockchaininfo", "params": []}' -H 'content-type: text/plain;' http://127.0.0.1:44555/ 11 | ``` 12 | 13 | - `getinfo` 14 | 15 | ```bash 16 | curl -u "user:pass" -d '{"jsonrpc": "1.0", "id": "curl", "method": "getinfo", "params": []}' -H 'content-type: text/plain;' http://127.0.0.1:44555/ 17 | ``` 18 | 19 | - `getnewaddress` 20 | 21 | ```bash 22 | curl -u "user:pass" -d '{"jsonrpc": "1.0", "id": "curl", "method": "getnewaddress", "params": ["main"]}' -H 'content-type: text/plain;' http://127.0.0.1:44555/ 23 | ``` 24 | 25 | - `getaccountaddress` 26 | 27 | ```bash 28 | curl -u "user:pass" -d '{"jsonrpc": "1.0", "id": "curl", "method": "getaccountaddress", "params": ["main"]}' -H 'content-type: text/plain;' http://127.0.0.1:44555/ 29 | ``` 30 | 31 | - `getaddressesbyaccount` 32 | 33 | ```bash 34 | curl -u "user:pass" -d '{"jsonrpc": "1.0", "id": "curl", "method": "getaddressesbyaccount", "params": ["main"]}' -H 'content-type: text/plain;' http://127.0.0.1:44555/ 35 | ``` 36 | 37 | - `listaccounts` 38 | 39 | ```bash 40 | curl -u "user:pass" -d '{"jsonrpc": "1.0", "id": "curl", "method": "listaccounts", "params": []}' -H 'content-type: text/plain;' http://127.0.0.1:44555/ 41 | ``` 42 | 43 | - `getbalance` 44 | 45 | ```bash 46 | curl -s -u "user:pass" -d '{"jsonrpc": "1.0", "id": "curl", "method": "getbalance", "params": ["*", 6]}' -H 'content-type: text/plain;' http://127.0.0.1:44555/ 47 | ``` 48 | 49 | -------------------------------------------------------------------------------- /drone-ci/README.md: -------------------------------------------------------------------------------- 1 | # drone-cheatsheets 2 | 3 | Blog posts: 4 | 5 | - [Promoting to Production and Restoring Cache with k3s](https://vitobotta.com/2019/10/09/ci-cd-with-drone-for-deployment-to-kubernetes-with-helm/) 6 | -------------------------------------------------------------------------------- /drone-ci/localstack-service-terraform/.drone.yml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: pipeline 3 | type: docker 4 | name: default 5 | trigger: 6 | event: 7 | - pull_request 8 | steps: 9 | - name: dumps-env 10 | image: alpine 11 | commands: 12 | - env 13 | 14 | - name: wait-for-localstack 15 | image: ruanbekker/awscli 16 | environment: 17 | AWS_ACCESS_KEY_ID: 123 18 | AWS_SECRET_ACCESS_KEY: xyz 19 | AWS_DEFAULT_REGION: eu-west-1 20 | commands: 21 | - while ! 
aws --endpoint-url=http://localstack:4566 kinesis list-streams; do sleep 1; done 22 | 23 | - name: pre-list-tables 24 | image: ruanbekker/awscli 25 | environment: 26 | AWS_ACCESS_KEY_ID: 123 27 | AWS_SECRET_ACCESS_KEY: xyz 28 | AWS_DEFAULT_REGION: eu-west-1 29 | commands: 30 | - aws --endpoint-url=http://localstack:4566 dynamodb list-tables 31 | 32 | - name: terraform-step 33 | image: hashicorp/terraform:light 34 | environment: 35 | AWS_ACCESS_KEY_ID: 36 | from_secret: AWS_ACCESS_KEY_ID 37 | AWS_SECRET_ACCESS_KEY: 38 | from_secret: AWS_SECRET_ACCESS_KEY 39 | AWS_DEFAULT_REGION: us-east-1 40 | commands: 41 | - sh init.sh 42 | - terraform plan 43 | - terraform apply -auto-approve 44 | volumes: 45 | - name: cache 46 | path: /tmp 47 | 48 | - name: post-list-tables 49 | image: ruanbekker/awscli 50 | environment: 51 | AWS_ACCESS_KEY_ID: 123 52 | AWS_SECRET_ACCESS_KEY: xyz 53 | AWS_DEFAULT_REGION: eu-west-1 54 | commands: 55 | - aws --endpoint-url=http://localstack:4566 dynamodb list-tables 56 | 57 | volumes: 58 | - name: cache 59 | temp: {} 60 | - name: localstack-vol 61 | host: 62 | path: /tmp/localstack-vol 63 | 64 | services: 65 | - name: localstack 66 | image: localstack/localstack:0.12.17 67 | environment: 68 | DOCKER_HOST: unix:///var/run/docker.sock 69 | EDGE_PORT: 4566 70 | volumes: 71 | - name: docker-socket 72 | path: /var/run/docker.sock 73 | - name: localstack-vol 74 | path: /tmp/localstack 75 | -------------------------------------------------------------------------------- /drone-ci/localstack-service-terraform/init.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env sh 2 | set -x 3 | terraform init \ 4 | -backend-config access_key=$AWS_ACCESS_KEY_ID \ 5 | -backend-config secret_key=$AWS_SECRET_ACCESS_KEY \ 6 | -backend-config region=$AWS_DEFAULT_REGION \ 7 | -backend-config "bucket=terraform-remote-state" \ 8 | -backend-config "key=$CI_REPO_NAME/$CI_COMMIT_BRANCH" \ 9 | -backend-config "endpoint=https://minio.domain.com" \ 10 | -backend-config "force_path_style=true" \ 11 | -backend-config "skip_credentials_validation=true" \ 12 | -backend-config "skip_metadata_api_check=true" \ 13 | -backend-config "skip_region_validation=true" 14 | -------------------------------------------------------------------------------- /drone-ci/localstack-service-terraform/main.tf: -------------------------------------------------------------------------------- 1 | resource "random_string" "userid" { 2 | length = 16 3 | special = false 4 | upper = false 5 | } 6 | 7 | resource "aws_dynamodb_table" "users" { 8 | name = "users" 9 | read_capacity = "2" 10 | write_capacity = "1" 11 | hash_key = "userid" 12 | 13 | attribute { 14 | name = "userid" 15 | type = "S" 16 | } 17 | } 18 | 19 | resource "aws_dynamodb_table" "countries" { 20 | name = "countries" 21 | read_capacity = "1" 22 | write_capacity = "1" 23 | hash_key = "country" 24 | 25 | attribute { 26 | name = "country" 27 | type = "S" 28 | } 29 | } 30 | 31 | resource "aws_kinesis_stream" "registrations" { 32 | name = "registration-stream" 33 | shard_count = 1 34 | retention_period = 30 35 | 36 | shard_level_metrics = [ 37 | "IncomingBytes", 38 | "OutgoingBytes", 39 | ] 40 | } 41 | -------------------------------------------------------------------------------- /drone-ci/localstack-service-terraform/outputs.tf: -------------------------------------------------------------------------------- 1 | output "userid" { 2 | value = random_string.userid.result 3 | } 4 | 5 | output "users_dynamodb_table_arn" { 6 | 
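  # these outputs are printed after `terraform apply`, so the drone step logs
  # show the ARNs of the resources that were created in localstack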
value = aws_dynamodb_table.users.arn 7 | } 8 | 9 | output "countries_dynamodb_table_arn" { 10 | value = aws_dynamodb_table.countries.arn 11 | } 12 | 13 | output "registrations_kinesis_stream_arn" { 14 | value = aws_kinesis_stream.registrations.arn 15 | } 16 | -------------------------------------------------------------------------------- /drone-ci/localstack-service-terraform/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | random = { 4 | version = "~> 3.0" 5 | } 6 | aws = { 7 | version = "~> 3.27" 8 | source = "hashicorp/aws" 9 | } 10 | } 11 | } 12 | 13 | provider "random" {} 14 | 15 | provider "aws" { 16 | region = "eu-west-1" 17 | access_key = "fake" 18 | secret_key = "fake" 19 | skip_credentials_validation = true 20 | skip_metadata_api_check = true 21 | skip_requesting_account_id = true 22 | 23 | endpoints { 24 | dynamodb = "http://localstack:4566" 25 | kinesis = "http://localstack:4566" 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /drone-ci/localstack-service-terraform/remote-state.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | backend "s3" { 3 | } 4 | } 5 | -------------------------------------------------------------------------------- /drone-ci/skeleton/.drone.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # https://docs.drone.io/pipeline/docker/syntax/trigger/ 3 | # https://docs.drone.io/pipeline/docker/syntax/platform/ 4 | # https://docs.drone.io/pipeline/docker/syntax/workspace/ 5 | # https://docs.drone.io/pipeline/docker/syntax/steps/ 6 | # http://plugins.drone.io/drone-plugins/drone-webhook/ 7 | # https://github.com/drone-plugins/drone-webhook 8 | 9 | kind: pipeline 10 | type: docker 11 | name: default 12 | 13 | trigger: 14 | branch: 15 | - master 16 | event: 17 | - push 18 | 19 | platform: 20 | os: linux 21 | arch: amd64 22 | 23 | workspace: 24 | path: /drone/src 25 | 26 | steps: 27 | - name: greeting 28 | image: busybox 29 | environment: 30 | OWNER: Ruan 31 | commands: 32 | - echo "Hi $OWNER" 33 | 34 | - name: send-success 35 | image: busybox 36 | when: 37 | status: [ success ] 38 | commands: 39 | - echo "build succeeded" 40 | 41 | - name: send-failure 42 | image: busybox 43 | when: 44 | status: [ failure ] 45 | commands: 46 | - echo "build failed" 47 | -------------------------------------------------------------------------------- /drone-ci/triggers-pull-request/.drone.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # https://docs.drone.io/pipeline/triggers/ 3 | 4 | kind: pipeline 5 | type: docker 6 | name: default 7 | 8 | trigger: 9 | event: 10 | - pull_request 11 | 12 | platform: 13 | os: linux 14 | arch: amd64 15 | 16 | workspace: 17 | path: /drone/src 18 | 19 | steps: 20 | - name: greeting 21 | image: busybox 22 | environment: 23 | OWNER: Ruan 24 | commands: 25 | - echo "dumps env" 26 | - env 27 | 28 | - name: send-success 29 | image: busybox 30 | when: 31 | status: [ success ] 32 | commands: 33 | - echo "build succeeded" 34 | 35 | - name: send-failure 36 | image: busybox 37 | when: 38 | status: [ failure ] 39 | commands: 40 | - echo "build failed" 41 | -------------------------------------------------------------------------------- /drone-ci/using-terraform-in-drone/.drone.yml: -------------------------------------------------------------------------------- 1 | --- 2 
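# pipeline flow: decode AWS credentials -> write a test file -> terraform init/plan/apply,
# with /tmp shared between steps via the temp cache volume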
| kind: pipeline 3 | type: docker 4 | name: default 5 | 6 | steps: 7 | - name: setup-aws-credentials 8 | image: busybox 9 | environment: 10 | AWS_CREDENTIALS: 11 | from_secret: AWS_CREDENTIALS 12 | commands: 13 | - mkdir -p $$DRONE_WORKSPACE/.aws 14 | - echo $${AWS_CREDENTIALS} | base64 -d > $$DRONE_WORKSPACE/.aws/credentials 15 | - chmod 0400 $$DRONE_WORKSPACE/.aws/credentials 16 | volumes: 17 | - name: cache 18 | path: /tmp 19 | 20 | - name: create-test-file 21 | image: busybox 22 | commands: 23 | - echo $$DRONE_COMMIT > infra/test.txt 24 | volumes: 25 | - name: cache 26 | path: /tmp 27 | 28 | - name: terraform-init 29 | image: hashicorp/terraform:light 30 | commands: 31 | - terraform -chdir=./infra init 32 | volumes: 33 | - name: cache 34 | path: /tmp 35 | 36 | - name: terraform-plan 37 | image: hashicorp/terraform:light 38 | commands: 39 | - terraform -chdir=./infra plan 40 | volumes: 41 | - name: cache 42 | path: /tmp 43 | 44 | # to promote step see: 45 | # https://vitobotta.com/2019/10/09/ci-cd-with-drone-for-deployment-to-kubernetes-with-helm/ 46 | - name: terraform-apply 47 | image: hashicorp/terraform:light 48 | commands: 49 | - terraform -chdir=./infra apply -input=false -auto-approve 50 | volumes: 51 | - name: cache 52 | path: /tmp 53 | 54 | volumes: 55 | - name: cache 56 | temp: {} 57 | -------------------------------------------------------------------------------- /dynamodb/README.md: -------------------------------------------------------------------------------- 1 | # dynamodb 2 | -------------------------------------------------------------------------------- /dynamodb/python-dynamodb.md: -------------------------------------------------------------------------------- 1 | ## Using Python with DynamoDB 2 | 3 | Start a Local Server: 4 | 5 | ``` 6 | $ docker run -it -p 4567:4567 ruanbekker/dynamodb 7 | ``` 8 | 9 | Install boto3: 10 | 11 | ``` 12 | $ pip install boto3 13 | ``` 14 | 15 | See the examples in [python](python/) 16 | -------------------------------------------------------------------------------- /dynamodb/python/conditional_put.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | from botocore import exceptions 3 | import time 4 | 5 | client = boto3.Session(region_name='eu-west-1').client('dynamodb', aws_access_key_id='', aws_secret_access_key='', endpoint_url='http://localhost:4567') 6 | 7 | try: 8 | response = client.put_item( 9 | TableName='gamescores', 10 | Item={ 11 | 'event': {'S': 'gaming_nationals_zaf'}, 12 | 'timestamp': {'S': '2019-02-08T14:53'}, 13 | 'score': {'N': '11885'}, 14 | 'name': {'S': 'will'}, 15 | 'gamerid': {'S': 'wilson9335'}, 16 | 'game': {'S': 'counter strike'}, 17 | 'age': {'N': '27'}, 18 | 'rank': {'S': 'professional'}, 19 | 'location': {'S': 'sweden'} 20 | }, 21 | ConditionExpression='attribute_not_exists(gamerid)' 22 | ) 23 | print(response) 24 | 25 | except exceptions.ClientError as e: 26 | if e.response['Error']['Code'] == 'ConditionalCheckFailedException': 27 | print('ConditionalCheckFailedException') 28 | -------------------------------------------------------------------------------- /dynamodb/python/create_table.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import time 3 | 4 | client = boto3.Session(region_name='eu-west-1').resource('dynamodb', aws_access_key_id='', aws_secret_access_key='', endpoint_url='http://localhost:4567') 5 | 6 | response = client.create_table( 7 | AttributeDefinitions=[{ 8 | 'AttributeName': 'event', 9 
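# DynamoDB attribute types: 'S' = string, 'N' = number, 'B' = binary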
| 'AttributeType': 'S' 10 | }, 11 | { 12 | 'AttributeName': 'timestamp', 13 | 'AttributeType': 'S' 14 | }], 15 | TableName='gamescores', 16 | KeySchema=[{ 17 | 'AttributeName': 'event', 18 | 'KeyType': 'HASH' 19 | }, 20 | { 21 | 'AttributeName': 'timestamp', 22 | 'KeyType': 'RANGE' 23 | }], 24 | ProvisionedThroughput={ 25 | 'ReadCapacityUnits': 1, 26 | 'WriteCapacityUnits': 10 27 | } 28 | ) 29 | 30 | time.sleep(2) 31 | print(response) 32 | -------------------------------------------------------------------------------- /dynamodb/python/delete_item.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | client = boto3.Session(region_name='eu-west-1').client('dynamodb', aws_access_key_id='', aws_secret_access_key='', endpoint_url='http://localhost:4567') 4 | 5 | 6 | response = client.delete_item( 7 | Key={ 8 | 'event': {'S': 'gaming_nationals_zaf'}, 9 | 'timestamp': {'S': '2019-02-08T14:53'} 10 | }, 11 | TableName='gamescores' 12 | ) 13 | 14 | print(response) 15 | -------------------------------------------------------------------------------- /dynamodb/python/get_item.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | client = boto3.Session(region_name='eu-west-1').client('dynamodb', aws_access_key_id='', aws_secret_access_key='', endpoint_url='http://localhost:4567') 4 | 5 | response = client.get_item( 6 | Key={ 7 | 'event': {'S': 'gaming_nationals_zaf'}, 8 | 'timestamp': {'S': '2019-02-08T14:53'} 9 | }, 10 | TableName='gamescores' 11 | ) 12 | 13 | print(response) 14 | -------------------------------------------------------------------------------- /dynamodb/python/put_item.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import time 3 | 4 | client = boto3.Session(region_name='eu-west-1').client('dynamodb', aws_access_key_id='', aws_secret_access_key='', endpoint_url='http://localhost:4567') 5 | 6 | response = client.put_item( 7 | TableName='gamescores', 8 | Item={ 9 | 'event': {'S': 'gaming_nationals_zaf'}, 10 | 'timestamp': {'S': '2019-02-08T14:53'}, 11 | 'score': {'N': '11885'}, 12 | 'name': {'S': 'will'}, 13 | 'gamerid': {'S': 'wilson9335'}, 14 | 'game': {'S': 'counter strike'}, 15 | 'age': {'N': '27'}, 16 | 'rank': {'S': 'professional'}, 17 | 'location': {'S': 'sweden'} 18 | } 19 | ) 20 | 21 | time.sleep(2) 22 | print(response) 23 | -------------------------------------------------------------------------------- /dynamodb/python/scan.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | client = boto3.Session(region_name='eu-west-1').client('dynamodb', aws_access_key_id='', aws_secret_access_key='', endpoint_url='http://localhost:4567') 4 | 5 | response = client.scan(TableName='gamescores') 6 | 7 | print(response) 8 | -------------------------------------------------------------------------------- /dynamodb/python/update_item.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | client = boto3.Session(region_name='eu-west-1').client('dynamodb', aws_access_key_id='', aws_secret_access_key='', endpoint_url='http://localhost:4567') 4 | 5 | response = client.update_item( 6 | TableName='gamescores', 7 | Key={ 8 | 'event': {'S': 'gaming_nationals_zaf'}, 9 | 'timestamp': {'S': '2019-02-08T14:53'} 10 | }, 11 | AttributeUpdates={ 12 | 'gamerid': {'Value': {'S': 'willx9335'}} 13 | } 14 | ) 15 | 16 | print(response) 17 
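-------------------------------------------------------------------------------- /dynamodb/python/query.py (illustrative addition, not in the original repo): --------------------------------------------------------------------------------
# Illustrative sketch: querying items by partition key with a key-condition
# expression, against the same local DynamoDB endpoint and the 'gamescores'
# table created by create_table.py.
import boto3

client = boto3.Session(region_name='eu-west-1').client('dynamodb', aws_access_key_id='', aws_secret_access_key='', endpoint_url='http://localhost:4567')

# return every item that shares the 'gaming_nationals_zaf' partition key
response = client.query(
    TableName='gamescores',
    KeyConditionExpression='#ev = :event',
    ExpressionAttributeNames={'#ev': 'event'},
    ExpressionAttributeValues={':event': {'S': 'gaming_nationals_zaf'}}
)

print(response)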
| -------------------------------------------------------------------------------- /ec2-metadata/README.md: -------------------------------------------------------------------------------- 1 | # EC2 Metadata 2 | 3 | ## Examples 4 | 5 | Get the instance id: 6 | 7 | ``` 8 | $ curl http://instance-data/latest/meta-data/instance-id 9 | i-xxxxxxxxxxx 10 | ``` 11 | 12 | Get the private ipv4 address: 13 | 14 | ``` 15 | $ curl -s http://169.254.169.254/latest/meta-data/local-ipv4 16 | 172.31.50.37 17 | ``` 18 | 19 | Get the region: 20 | 21 | ``` 22 | $ curl -s http://instance-data/latest/meta-data/placement/availability-zone | rev | cut -c 2- | rev 23 | eu-west-1 24 | ``` 25 | 26 | Get the EC2 Tag Name Value: 27 | 28 | ``` 29 | TAG_NAME="Name" 30 | INSTANCE_ID="$(curl -s http://instance-data/latest/meta-data/instance-id)" 31 | REGION="$(curl -s http://instance-data/latest/meta-data/placement/availability-zone | rev | cut -c 2- | rev)" 32 | TAG_VALUE="$(aws ec2 describe-tags --filters "Name=resource-id,Values=${INSTANCE_ID}" "Name=key,Values=${TAG_NAME}" --region ${REGION} --output=text | cut -f5)" 33 | 34 | $ echo ${TAG_VALUE} 35 | my-instance 36 | ``` 37 | 38 | For authenticated (IMDSv2) requests: 39 | 40 | ```bash 41 | token=$(curl -s -XPUT -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600' http://169.254.169.254/latest/api/token) 42 | curl -s -XGET -H "X-aws-ec2-metadata-token: $token" http://169.254.169.254/latest/meta-data/instance-id 43 | ``` 44 | -------------------------------------------------------------------------------- /ecs/task-definitions/cadvisor_taskdef.json: -------------------------------------------------------------------------------- 1 | { 2 | "family": "cadvisor", 3 | "containerDefinitions": [ 4 | { 5 | "name": "cadvisor", 6 | "image": "google/cadvisor", 7 | "memoryReservation": 256, 8 | "portMappings":[ 9 | { 10 | "protocol":"tcp", 11 | "containerPort":8080, 12 | "hostPort":8080 13 | } 14 | ], 15 | "essential": true, 16 | "privileged": true, 17 | "mountPoints": [ 18 | { 19 | "sourceVolume": "root", 20 | "containerPath": "/rootfs", 21 | "readOnly": true 22 | }, 23 | { 24 | "sourceVolume": "var_run", 25 | "containerPath": "/var/run", 26 | "readOnly": false 27 | }, 28 | { 29 | "sourceVolume": "sys", 30 | "containerPath": "/sys", 31 | "readOnly": true 32 | }, 33 | { 34 | "sourceVolume": "var_lib_docker", 35 | "containerPath": "/var/lib/docker", 36 | "readOnly": true 37 | }, 38 | { 39 | "sourceVolume": "dev_disk", 40 | "containerPath": "/dev/disk", 41 | "readOnly": true 42 | }, 43 | { 44 | "sourceVolume": "cgroup", 45 | "containerPath": "/sys/fs/cgroup", 46 | "readOnly": true 47 | } 48 | ] 49 | } 50 | ], 51 | "volumes": [ 52 | { 53 | "host" : { 54 | "sourcePath" : "/" 55 | }, 56 | "name" : "root" 57 | }, 58 | { 59 | "host" : { 60 | "sourcePath" : "/var/run" 61 | }, 62 | "name" : "var_run" 63 | }, 64 | { 65 | "host" : { 66 | "sourcePath" : "/sys" 67 | }, 68 | "name" : "sys" 69 | }, 70 | { 71 | "host" : { 72 | "sourcePath" : "/var/lib/docker" 73 | }, 74 | "name" : "var_lib_docker" 75 | }, 76 | { 77 | "host" : { 78 | "sourcePath" : "/dev/disk" 79 | }, 80 | "name" : "dev_disk" 81 | }, 82 | { 83 | "host" : { 84 | "sourcePath" : "/cgroup" 85 | }, 86 | "name" : "cgroup" 87 | } 88 | ] 89 | } 90 | -------------------------------------------------------------------------------- /ecs/task-definitions/cloudwatch_logs.json: -------------------------------------------------------------------------------- 1 | { 2 | "family": "nginx-with-cloudwatch", 3 | "executionRoleArn": 
"arn:aws:iam::xxxxxxxxxxxx:role/ecs-exec-role", 4 | "taskRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecs-task-role", 5 | "requiresCompatibilities":[ 6 | "EC2" 7 | ], 8 | "containerDefinitions": [ 9 | { 10 | "name": "nginx-json", 11 | "image": "ruanbekker/nginx-demo:json", 12 | "memory": 128, 13 | "essential": true, 14 | "portMappings": [ 15 | { 16 | "hostPort": 0, 17 | "containerPort": 80, 18 | "protocol": "tcp" 19 | } 20 | ], 21 | "logConfiguration": { 22 | "logDriver": "awslogs", 23 | "options": { 24 | "awslogs-group": "/ecs/tools/nginx-json", 25 | "awslogs-region": "eu-west-1", 26 | "awslogs-stream-prefix": "logs", 27 | "awslogs-create-group": "true" 28 | } 29 | } 30 | } 31 | ] 32 | } 33 | -------------------------------------------------------------------------------- /ecs/task-definitions/efs_storage.json: -------------------------------------------------------------------------------- 1 | { 2 | "family": "nginx-with-efs", 3 | "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecs-exec-role", 4 | "taskRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecs-task-role", 5 | "requiresCompatibilities":[ 6 | "EC2" 7 | ], 8 | "containerDefinitions": [ 9 | { 10 | "name": "nginx-json", 11 | "image": "ruanbekker/nginx-demo:json", 12 | "memory": 128, 13 | "essential": true, 14 | "portMappings": [ 15 | { 16 | "hostPort": 0, 17 | "containerPort": 80, 18 | "protocol": "tcp" 19 | } 20 | ], 21 | "mountPoints": [ 22 | { 23 | "containerPath": "/usr/share/nginx/html", 24 | "sourceVolume": "efs-html" 25 | } 26 | ], 27 | "logConfiguration": { 28 | "logDriver": "awslogs", 29 | "options": { 30 | "awslogs-group": "/ecs/tools/efs-nginx-json", 31 | "awslogs-region": "eu-west-1", 32 | "awslogs-stream-prefix": "logs", 33 | "awslogs-create-group": "true" 34 | } 35 | } 36 | } 37 | ], 38 | "volumes": [ 39 | { 40 | "name": "efs-html", 41 | "efsVolumeConfiguration": { 42 | "fileSystemId": "fs-xxxxxxxx", 43 | "rootDirectory": "/efs-html" 44 | } 45 | } 46 | ] 47 | } 48 | -------------------------------------------------------------------------------- /ecs/task-definitions/env_and_secrets.json: -------------------------------------------------------------------------------- 1 | { 2 | "family": "app-with-secrets", 3 | "executionRoleArn":"arn:aws:iam::xxxxxxxxxxxx:role/ecs-exec-role", 4 | "taskRoleArn":"arn:aws:iam::xxxxxxxxxxxx:role/ecs-task-role", 5 | "containerDefinitions": [ 6 | { 7 | "name": "nginx", 8 | "image": "nginx:latest", 9 | "memoryReservation": 256, 10 | "portMappings":[ 11 | { 12 | "protocol":"tcp", 13 | "containerPort":3000, 14 | "hostPort":0 15 | } 16 | ], 17 | "environment": [ 18 | { 19 | "name": "AWS_DEFAULT_REGION", 20 | "value": "eu-west-1" 21 | } 22 | ], 23 | "secrets": [ 24 | { 25 | "name": "ACCESS_KEY_ID", 26 | "valueFrom": "arn:aws:ssm:eu-west-1:xxxxxxxxxxxx:parameter/myapp/prod/AWS_ACCESS_KEY_ID" 27 | } 28 | ], 29 | "essential": true, 30 | "privileged": true 31 | } 32 | ] 33 | } 34 | -------------------------------------------------------------------------------- /ecs/task-definitions/firelens_loki.json: -------------------------------------------------------------------------------- 1 | { 2 | "family": "nginx-loki-logs", 3 | "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsTaskExecutionRole", 4 | "taskRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecs-task-role", 5 | "containerDefinitions": [ 6 | { 7 | "name": "nginx", 8 | "image": "nginx:latest", 9 | "portMappings": [ 10 | { 11 | "hostPort": 0, 12 | "protocol": "tcp", 13 | "containerPort": 80 14 | } 15 | ], 16 | "memoryReservation": 256, 17 | 
"stopTimeout": 30, 18 | "startTimeout": 60, 19 | "essential": true, 20 | "logConfiguration": { 21 | "logDriver": "awsfirelens", 22 | "options": { 23 | "RemoveKeys": "container_id,ecs_task_arn", 24 | "LineFormat": "key_value", 25 | "Labels": "{job=\"prod/dockerlogs\", service=\"nginx-web\", environment=\"prod\"}", 26 | "LabelKeys": "container_name,ecs_task_definition,source,ecs_cluster", 27 | "Url": "https://x:x@loki.mydomain.com/loki/api/v1/push", 28 | "Name": "grafana-loki" 29 | } 30 | } 31 | }, 32 | { 33 | "name": "log_router", 34 | "memoryReservation": 50, 35 | "image": "grafana/fluent-bit-plugin-loki:latest", 36 | "firelensConfiguration": { 37 | "type": "fluentbit", 38 | "options": { 39 | "enable-ecs-log-metadata": "true" 40 | } 41 | }, 42 | "essential": true 43 | } 44 | ], 45 | "requiresCompatibilities": [ 46 | "EC2" 47 | ] 48 | } 49 | -------------------------------------------------------------------------------- /ecs/task-definitions/sidecar_taskdef.json: -------------------------------------------------------------------------------- 1 | { 2 | "family": "app-with-sidecar-container", 3 | "taskRoleArn": "arn:aws:iam::000000000000:role/aws-dev-ecs-task-role", 4 | "executionRoleArn": "arn:aws:iam::000000000000:role/aws-dev-ecs-exec-role", 5 | "requiresCompatibilities": [ 6 | "EC2" 7 | ], 8 | "containerDefinitions": [ 9 | { 10 | "name": "proxy", 11 | "image": "proxy-image:latest", 12 | "portMappings": [ 13 | { 14 | "hostPort": 0, 15 | "protocol": "tcp", 16 | "containerPort": 80 17 | } 18 | ], 19 | "environment": [ 20 | { 21 | "name": "APP_URL", 22 | "value": "http://app:5000" 23 | } 24 | ], 25 | "secrets": [ 26 | { 27 | "valueFrom": "arn:aws:ssm:eu-west-1:000000000000:parameter/app-with-sidecar-container/dev/APP_SECRET", 28 | "name": "APP_SECRET" 29 | } 30 | ], 31 | "memoryReservation": 256, 32 | "stopTimeout": 30, 33 | "startTimeout": 30, 34 | "essential": true, 35 | "links": [ 36 | "app" 37 | ] 38 | }, 39 | { 40 | "name": "app", 41 | "image": "app-image:latest", 42 | "memoryReservation": 128, 43 | "essential": true, 44 | "secrets": [ 45 | { 46 | "valueFrom": "arn:aws:ssm:eu-west-1:000000000000:parameter/app-with-sidecar-container/dev/DB_DATABASE", 47 | "name": "DB_DATABASE" 48 | }, 49 | { 50 | "valueFrom": "arn:aws:ssm:eu-west-1:000000000000:parameter/app-with-sidecar-container/dev/DB_HOST", 51 | "name": "DB_HOST" 52 | }, 53 | { 54 | "valueFrom": "arn:aws:ssm:eu-west-1:000000000000:parameter/app-with-sidecar-container/dev/DB_PASSWORD", 55 | "name": "DB_PASSWORD" 56 | }, 57 | { 58 | "valueFrom": "arn:aws:ssm:eu-west-1:000000000000:parameter/app-with-sidecar-container/dev/DB_USERNAME", 59 | "name": "DB_USERNAME" 60 | } 61 | ] 62 | } 63 | ] 64 | } 65 | -------------------------------------------------------------------------------- /ecs/task-definitions/statping_taskdef.json: -------------------------------------------------------------------------------- 1 | { 2 | "family": "statping", 3 | "executionRoleArn":"arn:aws:iam::000000000000:role/ecs-exec-role", 4 | "taskRoleArn":"arn:aws:iam::000000000000:role/ecs-task-role", 5 | "containerDefinitions": [ 6 | { 7 | "name": "statping", 8 | "image": "statping/statping:latest", 9 | "memoryReservation": 256, 10 | "portMappings":[ 11 | { 12 | "protocol":"tcp", 13 | "containerPort":8080, 14 | "hostPort":0 15 | } 16 | ], 17 | "environment": [ 18 | { 19 | "name": "DB_CONN", 20 | "value": "mysql" 21 | }, 22 | { 23 | "name": "SAMPLE_DATA", 24 | "value": "false" 25 | }, 26 | { 27 | "name": "IS_DOCKER", 28 | "value": "true" 29 | }, 30 | { 31 | "name": 
"STATPING_DIR", 32 | "value": "/app" 33 | }, 34 | { 35 | "name": "PORT", 36 | "value": "8080" 37 | }, 38 | { 39 | "name": "SASS", 40 | "value": "/usr/local/bin/sassc" 41 | } 42 | ], 43 | "secrets": [ 44 | { 45 | "valueFrom": "arn:aws:ssm:eu-west-1:000000000000:parameter/statping/prod/DATABASE_HOSTNAME", 46 | "name": "DB_HOST" 47 | }, 48 | { 49 | "valueFrom": "arn:aws:ssm:eu-west-1:000000000000:parameter/statping/prod/DATABASE_USERNAME", 50 | "name": "DB_USER" 51 | }, 52 | { 53 | "valueFrom": "arn:aws:ssm:eu-west-1:000000000000:parameter/statping/prod/DATABASE_NAME", 54 | "name": "DB_DATABASE" 55 | }, 56 | { 57 | "valueFrom": "arn:aws:ssm:eu-west-1:000000000000:parameter/statping/prod/DATABASE_PASSWORD", 58 | "name": "DB_PASS" 59 | } 60 | ], 61 | "essential": true, 62 | "privileged": true, 63 | "mountPoints": [ 64 | { 65 | "containerPath": "/app", 66 | "sourceVolume": "statping-data", 67 | "readOnly": false 68 | } 69 | ] 70 | } 71 | ], 72 | "volumes": [ 73 | { 74 | "name": "statping-data", 75 | "efsVolumeConfiguration": { 76 | "fileSystemId": "fs-00000000", 77 | "rootDirectory": "/statping/data" 78 | } 79 | } 80 | ] 81 | } 82 | -------------------------------------------------------------------------------- /ecs/task-definitions/yopass_taskdef.json: -------------------------------------------------------------------------------- 1 | { 2 | "family": "yopass", 3 | "executionRoleArn":"arn:aws:iam::000000000000:role/ecs-exec-role", 4 | "taskRoleArn":"arn:aws:iam::000000000000:role/ecs-task-role", 5 | "containerDefinitions": [ 6 | { 7 | "name": "yopass-ui", 8 | "image": "jhaals/yopass:latest", 9 | "memoryReservation": 256, 10 | "portMappings":[ 11 | { 12 | "protocol":"tcp", 13 | "containerPort":1337, 14 | "hostPort":0 15 | } 16 | ], 17 | "essential": true, 18 | "privileged": true, 19 | "links": [ 20 | "yopass-cache" 21 | ], 22 | "command": [ 23 | "--memcached=yopass-cache:11211" 24 | ] 25 | }, 26 | { 27 | "name": "yopass-cache", 28 | "image": "memcached:latest", 29 | "memoryReservation": 256, 30 | "essential": true, 31 | "privileged": true 32 | } 33 | ] 34 | } 35 | -------------------------------------------------------------------------------- /eks/README.md: -------------------------------------------------------------------------------- 1 | # eks 2 | 3 | ## kubeconfig 4 | 5 | To update your AWS EKS kubeconfig: 6 | 7 | ```bash 8 | aws --profile default eks update-kubeconfig --name my-cluster --alias my-cluster 9 | ``` 10 | -------------------------------------------------------------------------------- /etcdctl/README.md: -------------------------------------------------------------------------------- 1 | # etcdctl cheatsheet 2 | 3 | ## deployment 4 | 5 | Deploy with kubernetes: 6 | 7 | ```bash 8 | helm repo add bitnami https://charts.bitnami.com/bitnami 9 | helm install default-etcd bitnami/etcd --set auth.rbac.create=False --set replicaCount=3 --namespace kube-system 10 | ``` 11 | 12 | To create a pod that you can use as a etcd client run the following command: 13 | 14 | ```bash 15 | kubectl run default-etcd-client --restart='Never' --image docker.io/bitnami/etcd:3.5.9-debian-11-r146 --env ETCDCTL_ENDPOINTS="default-etcd.kube-system.svc.cluster.local:2379" --namespace kube-system --command -- sleep infinity 16 | ``` 17 | 18 | Exec into the client pod: 19 | 20 | ```bash 21 | kubectl exec --namespace kube-system -it default-etcd-client -- bash 22 | ``` 23 | 24 | ## commands 25 | 26 | List members: 27 | 28 | ```bash 29 | etcdctl member list --write-out=table 30 | ``` 31 | 32 | Write a value to 
a key: 33 | 34 | ```bash 35 | etcdctl put /message Hello 36 | ``` 37 | 38 | View a key's value: 39 | 40 | ```bash 41 | etcdctl get /message 42 | ``` 43 | 44 | Same as above, but only view the value: 45 | 46 | ```bash 47 | etcdctl get /message --print-value-only 48 | ``` 49 | 50 | View all the keys: 51 | 52 | ```bash 53 | etcdctl get "" --prefix --keys-only 54 | ``` 55 | 56 | View the keys and values: 57 | 58 | ```bash 59 | etcdctl get "" --prefix 60 | ``` 61 | 62 | To return only the values, append `--print-value-only` to any of the `get` commands above. 63 | 64 | ## resources 65 | 66 | More cheatsheets: 67 | - https://lzone.de/cheat-sheet/etcd 68 | -------------------------------------------------------------------------------- /ethereum-jsonrpc/README.md: -------------------------------------------------------------------------------- 1 | # ethereum json rpc 2 | 3 | ## Resources 4 | 5 | - [Ethereum Wiki](https://github.com/ethereum/wiki/wiki/JSON-RPC/e8e0771b9f3677693649d945956bc60e886ceb2b) 6 | 7 | ## JSON RPC 8 | 9 | - `eth_syncing`: 10 | 11 | ```bash 12 | curl -XPOST -H 'Content-Type: application/json' -d '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' http://127.0.0.1:8545 13 | ``` 14 | 15 | - `eth_chainId` 16 | 17 | ```bash 18 | curl -s -XPOST -H 'Content-Type: application/json' -d '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' localhost:8545 | jq -r '.result' | tr -d '\n' | xargs -0 printf "%d" 19 | ``` 20 | 21 | - `eth_blockNumber`: 22 | 23 | ```bash 24 | curl -s -XPOST -H "Content-type: application/json" -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' localhost:8545 | jq -r ".result" | tr -d '\n' | xargs -0 printf "%d" 25 | ``` 26 | 27 | - `eth_getBlockByNumber` - by blocknumber: 28 | 29 | ```bash 30 | curl -s -H "Content-type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x3a5d74", false],"id":1}' localhost:8545 | jq -r '.result.number' | tr -d '\n' | xargs -0 printf "%d" 31 | ``` 32 | 33 | - `eth_getBlockByNumber` - latest: 34 | 35 | ```bash 36 | curl -s -H "Content-type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1}' localhost:8545 | jq -r '.result.number' | tr -d '\n' | xargs -0 printf "%d" 37 | ``` 38 | 39 | - `eth_getBlockByNumber` - timestamp of the latest block: 40 | 41 | ```bash 42 | curl -s -H "Content-type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["latest", false],"id":1}' localhost:8545 | jq -r '.result.timestamp' | tr -d '\n' | xargs -0 printf "%d" 43 | ``` 44 | 45 | - `personal_newAccount` 46 | 47 | ```bash 48 | # this is not recommended - rather create an account offline with a private key 49 | curl -s -XPOST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"personal_newAccount","params":["securepassword"],"id":1}' localhost:8545 50 | ``` 51 | 52 | - `eth_getBalance` 53 | 54 | ```bash 55 | curl -XPOST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x9a070e582ef891ead3e9b92478df38dd17b4489e", "latest"],"id":1}' localhost:8545 56 | ``` 57 | 58 | - `eth_getBalance` - convert from wei to ether units 59 | 60 | ```bash 61 | # testnet 62 | printf %.2f $(echo $(curl -s -XPOST -H 'Content-Type: application/json' -d '{"jsonrpc":"2.0","method":"eth_getBalance","params":["0x746C9474d98C8A99280fC0E9DA7d706B647163DE", "latest"],"id":1}' http://localhost:8545 | jq -r '.result' | tr -d '\n' | xargs -0 printf "%d") / 1000000000000000000 | bc -l) 63 | ``` 64 | 
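An added example in the same style (assuming the same local node on `localhost:8545`), to read the current gas price in wei:

- `eth_gasPrice`

```bash
curl -s -XPOST -H 'Content-Type: application/json' -d '{"jsonrpc":"2.0","method":"eth_gasPrice","params":[],"id":1}' localhost:8545 | jq -r '.result' | tr -d '\n' | xargs -0 printf "%d"
```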
-------------------------------------------------------------------------------- /find/README.md: -------------------------------------------------------------------------------- 1 | # find cheatsheet 2 | 3 | ## Examples 4 | 5 | To find a file with the name `hello.txt` somewhere in the `~/workspace` directory: 6 | 7 | ```bash 8 | find ~/workspace -type f -name 'hello.txt' 9 | ``` 10 | 11 | To find any directories with the name `temp` somewhere in the `~/workspace` directory: 12 | 13 | ```bash 14 | find ~/workspace -type d -name 'temp' 15 | ``` 16 | 17 | To find `.terraform` directories and delete their `providers` sub-directories: 18 | 19 | ```bash 20 | find . -type d -name '.terraform' -exec bash -c 'if [ -d "$1/providers" ]; then rm -rf "$1/providers"; fi' bash {} \; 21 | ``` 22 | -------------------------------------------------------------------------------- /fluent-bit/README.md: -------------------------------------------------------------------------------- 1 | # Fluent Bit 2 | 3 | ## Resources 4 | 5 | - [Official Docker Image](https://hub.docker.com/r/fluent/fluent-bit/) 6 | - [Docker Metadata in Docker Logs using Fluent Bit](https://github.com/fluent/fluent-bit/issues/1499) 7 | - [Command line examples for fluent-bit and stdout/es](https://github.com/fluent/fluent-bit/issues/185#issuecomment-279114301) 8 | - [Fluent Bit Guide by coralogix.com](https://coralogix.com/log-analytics-blog/fluent-bit-guide/) 9 | 10 | 11 | ## Basic Example 12 | 13 | Run fluent-bit: 14 | 15 | ``` 16 | $ docker run -p 127.0.0.1:24224:24224 fluent/fluent-bit:1.5 /fluent-bit/bin/fluent-bit -i forward -o stdout -p format=json_lines -f 1 17 | ``` 18 | 19 | Run a container and specify the log driver: 20 | 21 | ``` 22 | $ docker run --log-driver=fluentd -t ubuntu echo "Testing a log message" 23 | Testing a log message 24 | ``` 25 | 26 | Stdout from fluent-bit: 27 | 28 | ``` 29 | { 30 | "date":1601638488, 31 | "container_id":"45eccdf719dc28629bded52c8b409d0b10d0efb6d4b72452fc369a256e31be97", 32 | "container_name":"/epic_tharp", 33 | "source":"stdout", 34 | "log":"Testing a log message\r" 35 | } 36 | ``` 37 | -------------------------------------------------------------------------------- /fluent-bit/example-configs/loki-fluent-bit.conf: -------------------------------------------------------------------------------- 1 | [INPUT] 2 | Name forward 3 | Listen 0.0.0.0 4 | Port 24224 5 | [Output] 6 | Name grafana-loki 7 | Match * 8 | Url ${LOKI_URL} 9 | RemoveKeys source,container_id 10 | Labels {job="fluent-bit"} 11 | LabelKeys container_name 12 | BatchWait 1s 13 | BatchSize 1001024 14 | LineFormat json 15 | LogLevel info 16 | -------------------------------------------------------------------------------- /font-awesome/README.md: -------------------------------------------------------------------------------- 1 | # Font Awesome Cheatsheet 2 | 3 | Resources: 4 | 5 | - https://gist.github.com/anthonykozak/84e07a2cf8c27d3e5a8f181742ca293d 6 | -------------------------------------------------------------------------------- /github-actions/README.md: -------------------------------------------------------------------------------- 1 | # Github Actions 2 | 3 | My scratchpad test repo for github actions is located at [ruanbekker/test-actions](https://github.com/ruanbekker/test-actions) 4 | 5 | ## External Repos 6 | 7 | - [@sdras awesome-actions](https://github.com/sdras/awesome-actions) 8 | 9 | ## Documentation 10 | 11 | - [Environment 
Variables](https://docs.github.com/en/actions/configuring-and-managing-workflows/using-environment-variables) 12 | - [Job Status Check Functions](https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#job-status-check-functions) 13 | - [Steps](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idsteps) 14 | 15 | ## Providers / Plugins 16 | 17 | ### Notifications 18 | 19 | - [slack-notify](https://github.com/marketplace/actions/slack-notify) 20 | - [github-actions-for-slack](https://github.com/marketplace/actions/github-action-for-slack) 21 | - [slack-action](https://github.com/abinoda/slack-action) 22 | - [actionable-notifications](https://github.com/slackapi/template-actionable-notifications) 23 | - [slack-pr-notifications](https://github.com/marketplace/actions/slack-pr-open-notification) 24 | -------------------------------------------------------------------------------- /github-actions/examples/if-success-failure-slack.yml: -------------------------------------------------------------------------------- 1 | # this workflow will only trigger when the alertmanager directory is modified 2 | # and will only be triggered when merged to master 3 | name: alertmanager-deploys 4 | on: 5 | push: 6 | branches: 7 | - master 8 | paths: 9 | - 'alertmanager/*' 10 | 11 | jobs: 12 | master: 13 | if: "!contains(github.event.head_commit.message, '[skip-ci]')" 14 | runs-on: ubuntu-latest 15 | steps: 16 | - name: Checkout 17 | uses: actions/checkout@v2 18 | 19 | - name: run a single line 20 | run: echo "${GITHUB_RUN_ID}" 21 | env: 22 | DUMMY_SECRET: ${{ secrets.DUMMY_SECRET }} 23 | 24 | - name: run a multi line 25 | run: | 26 | echo "starting" 27 | echo "finished" 28 | 29 | # https://github.com/marketplace/actions/github-action-for-slack 30 | - name: Slack Notification on Success 31 | # https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#job-status-check-functions 32 | # https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idsteps 33 | if: ${{ success() }} 34 | env: 35 | SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }} 36 | SLACK_USERNAME: AlertManager 37 | SLACK_CHANNEL: builds-notifications 38 | SLACK_AVATAR: repository # Optional. can be (repository, sender, a URL) (defaults to webhook app avatar) 39 | SLACK_CUSTOM_PAYLOAD: '{"text":"[*SUCCESS*] Alertmanager was deployed\n *Repo*: [{{ GITHUB_REPOSITORY }}]({{ GITHUB_REPOSITORY }})\n *User*: `{{ GITHUB_ACTOR }}`","username": "{{ GITHUB_ACTOR }}"}' 40 | uses: Ilshidur/action-slack@master 41 | 42 | - name: Slack Notification on Failure 43 | if: ${{ failure() }} 44 | env: 45 | SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK }} 46 | SLACK_USERNAME: AlertManager 47 | SLACK_CHANNEL: builds-notifications 48 | SLACK_AVATAR: repository # Optional. 
can be (repository, sender, a URL) (defaults to webhook app avatar) 49 | SLACK_CUSTOM_PAYLOAD: '{"text":"[*FAILURE*] Alertmanager was not deployed\n *Repo*: [{{ GITHUB_REPOSITORY }}]({{ GITHUB_REPOSITORY }})\n *User*: `{{ GITHUB_ACTOR }}`","username": "{{ GITHUB_ACTOR }}"}' 50 | uses: Ilshidur/action-slack@master 51 | -------------------------------------------------------------------------------- /gitlab-ci/README.md: -------------------------------------------------------------------------------- 1 | # GitlabCI Cheatsheet 2 | 3 | ## Examples 4 | 5 | - [SSH Keys with Private Repos](https://gitlab.com/gitlab-examples/ssh-private-key/-/blob/master/.gitlab-ci.yml) 6 | - [Deploy to AWS ECS](https://docs.gitlab.com/ee/ci/cloud_deployment/ecs/deploy_to_aws_ecs.html) 7 | - [GitOps - Infrastructure as Code](https://about.gitlab.com/topics/gitops/gitlab-enables-infrastructure-as-code/) 8 | 9 | ## Resources 10 | 11 | - https://www.perforce.com/manuals/gitswarm/ci/yaml/README.html 12 | - https://docs.gitlab.com/ee/user/project/deploy_keys/ 13 | - https://docs.gitlab.com/ee/ci/ssh_keys/ 14 | - https://docs.gitlab.com/ee/ci/variables/predefined_variables.html 15 | - https://docs.gitlab.com/ee/ci/yaml/ 16 | - https://bag.org.tr/proje/help/ci/yaml/README.md 17 | -------------------------------------------------------------------------------- /gitlab-ci/auto-retry-jobs/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | --- 2 | stages: 3 | - test 4 | 5 | test-job: 6 | stage: test 7 | interruptible: true 8 | script: 9 | - echo "run this" 10 | retry: 11 | max: 2 # runs 3 at max -> https://gitlab.com/gitlab-org/gitlab/-/issues/28088 12 | when: 13 | - runner_system_failure 14 | - api_failure 15 | - stuck_or_timeout_failure 16 | - scheduler_failure 17 | - unknown_failure 18 | -------------------------------------------------------------------------------- /gitlab-ci/aws-build-push-ecr/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | variables: 2 | AWS_ACCOUNT_ID: $AWS_ACCOUNT_ID 3 | ECR_REGISTRY: $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com 4 | ECR_REPO: devops-alertmanager 5 | ALERTMANAGER_VERSION: 0.24.0 6 | DOCKER_HOST: tcp://docker:2375 7 | DOCKER_TLS_CERTDIR: "" 8 | 9 | stages: 10 | - build 11 | 12 | publish: 13 | stage: build 14 | image: 15 | name: amazon/aws-cli:2.3.2 16 | entrypoint: [""] 17 | tags: 18 | - dind 19 | services: 20 | - docker:19.03.12-dind 21 | before_script: 22 | - amazon-linux-extras install docker -y 23 | - aws --version 24 | - docker --version 25 | script: 26 | - docker build --build-arg ALERTMANAGER_VERSION=$ALERTMANAGER_VERSION --build-arg GIT_COMMIT=$CI_COMMIT_SHA -t $ECR_REPO:$ALERTMANAGER_VERSION . 
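# tag the image for the registry, both pinned to the alertmanager version and as :latest, before logging in and pushing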
27 | - docker tag $ECR_REPO:$ALERTMANAGER_VERSION $ECR_REGISTRY/$ECR_REPO:$ALERTMANAGER_VERSION 28 | - docker tag $ECR_REPO:$ALERTMANAGER_VERSION $ECR_REGISTRY/$ECR_REPO:latest 29 | - aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_REGISTRY 30 | - docker push $ECR_REGISTRY/$ECR_REPO:$ALERTMANAGER_VERSION 31 | - docker push $ECR_REGISTRY/$ECR_REPO:latest 32 | - echo "pushed to $ECR_REGISTRY/$ECR_REPO:$ALERTMANAGER_VERSION and $ECR_REGISTRY/$ECR_REPO:latest" 33 | only: 34 | - master 35 | -------------------------------------------------------------------------------- /gitlab-ci/aws-build-push-ecr/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine as builder 2 | 3 | ARG ALERTMANAGER_VERSION 4 | ENV VERSION=$ALERTMANAGER_VERSION 5 | 6 | WORKDIR /tmp 7 | ADD https://github.com/prometheus/alertmanager/releases/download/v${VERSION}/alertmanager-${VERSION}.linux-amd64.tar.gz /tmp/alertmanager.tar.gz 8 | RUN tar -xf alertmanager.tar.gz && mv alertmanager-${VERSION}.linux-amd64 alertmanager 9 | 10 | FROM alpine 11 | COPY --from=builder /tmp/alertmanager/alertmanager /bin/alertmanager 12 | COPY --from=builder /tmp/alertmanager/amtool /bin/amtool 13 | COPY --from=builder /tmp/alertmanager/alertmanager.yml /etc/alertmanager/alertmanager.yml 14 | COPY config/alertmanager.yml /etc/alertmanager/alertmanager.yml 15 | COPY bin/promboot.sh /bin/promboot.sh 16 | 17 | RUN chmod +x /bin/promboot.sh && mkdir -p /alertmanager && \ 18 | chown -R nobody:nogroup /etc/alertmanager /alertmanager 19 | 20 | ARG GIT_COMMIT 21 | LABEL git-ref=$GIT_COMMIT 22 | 23 | USER nobody 24 | EXPOSE 9093 25 | VOLUME [ "/alertmanager" ] 26 | WORKDIR /alertmanager 27 | CMD [ "/bin/promboot.sh" ] 28 | -------------------------------------------------------------------------------- /gitlab-ci/aws-build-push-ecr/configuration.md: -------------------------------------------------------------------------------- 1 | ## gitlab configuration 2 | 3 | Add the following variables on "settings" -> "ci/cd" -> "variables": 4 | 5 | - `AWS_ACCESS_KEY_ID` 6 | - `AWS_SECRET_ACCESS_KEY` 7 | - `AWS_DEFAULT_REGION` 8 | - `AWS_ACCOUNT_ID` 9 | -------------------------------------------------------------------------------- /gitlab-ci/basic-shell/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | stages: 2 | - build 3 | - test 4 | - deploy 5 | 6 | before_script: 7 | - echo "::before script section::" 8 | 9 | after_script: 10 | - echo "::after script section::" 11 | 12 | build_job_a: 13 | stage: build 14 | script: 15 | - echo "building job a" 16 | 17 | test_job_a: 18 | stage: test 19 | script: 20 | - echo "testing job a" 21 | 22 | test_job_b: 23 | stage: test 24 | script: 25 | - echo "testing job b" 26 | 27 | deploy_job: 28 | stage: deploy 29 | script: 30 | - echo "deploy job" 31 | - uname -a 32 | - uptime 33 | - docker ps 34 | -------------------------------------------------------------------------------- /gitlab-ci/default-runner/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | default: 2 | tags: 3 | - shell-runner 4 | 5 | stages: 6 | - test 7 | 8 | test_job: 9 | stage: test 10 | script: 11 | - echo test 12 | -------------------------------------------------------------------------------- /gitlab-ci/docker-helm-deploy/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | image: docker:latest 
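# the docker:dind service below provides the docker daemon that the build job's docker commands talk to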
2 | services: 3 | - docker:dind 4 | 5 | stages: 6 | - build 7 | - deploy 8 | 9 | variables: 10 | CONTAINER_IMAGE: anaisurlichs/react-example-app:${CI_COMMIT_SHORT_SHA} 11 | 12 | build: 13 | stage: build 14 | script: 15 | - docker login -u ${DOCKER_USER} -p ${DOCKER_PASSWORD} 16 | - docker build -t ${CONTAINER_IMAGE} . 17 | - docker tag ${CONTAINER_IMAGE} anaisurlichs/react-example-app:latest 18 | - docker push ${CONTAINER_IMAGE} 19 | - docker push anaisurlichs/react-example-app:latest 20 | 21 | deploy: 22 | stage: deploy 23 | image: dtzar/helm-kubectl 24 | script: 25 | - kubectl config set-cluster example-cluster --server="${SERVER}" 26 | - kubectl config set-cluster example-cluster --embed-certs --certificate-authority=${CERTIFICATE_AUTHORITY_DATA} 27 | - kubectl config set-credentials gitlab --token="${USER_TOKEN}" 28 | - kubectl config set-context default --cluster=example-cluster --user=gitlab 29 | - kubectl config use-context default 30 | - sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" manifests/deployment.yaml # assumes <VERSION> is the image tag placeholder in the manifest 31 | - kubectl apply -f manifests/deployment.yaml 32 | -------------------------------------------------------------------------------- /gitlab-ci/docker-runner/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | stages: 2 | - run 3 | 4 | deploy: 5 | stage: run 6 | image: 7 | name: busybox:latest 8 | entrypoint: 9 | - "/usr/bin/env" 10 | - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" 11 | script: 12 | - echo hi 13 | -------------------------------------------------------------------------------- /gitlab-ci/extends-docker/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | include: '/templates/jobs.yml' 2 | 3 | stages: 4 | - push-image 5 | 6 | push: 7 | stage: push-image 8 | extends: .docker 9 | script: 10 | - docker buildx build --platform "linux/amd64,linux/arm64" -f Dockerfile -t user/myimage:tag --push . 
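# A minimal sketch (assumption: user/myimage is a placeholder image name) of a
# single-arch job reusing the same .docker template, using --load instead of
# --push so the image can be smoke-tested locally before publishing:
#
# test-build:
#   stage: push-image
#   extends: .docker
#   script:
#     - docker buildx build --platform "linux/amd64" -f Dockerfile -t user/myimage:test --load .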
11 | -------------------------------------------------------------------------------- /gitlab-ci/extends-docker/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine 2 | RUN apk --no-cache add curl 3 | -------------------------------------------------------------------------------- /gitlab-ci/extends-docker/templates/jobs.yml: -------------------------------------------------------------------------------- 1 | variables: 2 | DOCKER_HOST: tcp://docker:2376 3 | DOCKER_TLS_CERTDIR: "/certs" 4 | DOCKER_TLS_VERIFY: 1 5 | DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client" 6 | 7 | services: 8 | - docker:dind 9 | 10 | .docker: 11 | image: docker:20.10.17-cli 12 | before_script: 13 | - docker login -u "$DOCKERHUB_USERNAME" -p "$DOCKERHUB_PASSWORD" 14 | - docker --version 15 | - docker context create ci-environment 16 | - docker buildx create --name multi-arch --platform "linux/arm64,linux/amd64" --driver "docker-container" ci-environment 17 | -------------------------------------------------------------------------------- /gitlab-ci/gitlab-runner-config/config.toml: -------------------------------------------------------------------------------- 1 | # https://docs.gitlab.com/runner/executors/docker.html 2 | # https://docs.gitlab.com/runner/configuration/advanced-configuration.html#volumes-in-the-runnersdocker-section 3 | concurrent = 1 4 | check_interval = 0 5 | 6 | [session_server] 7 | session_timeout = 1800 8 | [[runners]] 9 | name = "ip-172-31-33-166-docker" 10 | url = "https://ci.domain.com/gitlab/" 11 | token = "xxxxxxxxxx" 12 | executor = "docker" 13 | [runners.custom_build_dir] 14 | [runners.cache] 15 | [runners.cache.s3] 16 | [runners.cache.gcs] 17 | [runners.cache.azure] 18 | [runners.docker] 19 | tls_verify = false 20 | image = "docker:dind" 21 | privileged = false 22 | disable_entrypoint_overwrite = false 23 | oom_kill_disable = false 24 | disable_cache = false 25 | volumes = ["/cache", "/home/gitlab-runner/.ssh:/root/.ssh:ro"] 26 | shm_size = 0 27 | -------------------------------------------------------------------------------- /gitlab-ci/interruptable-jobs/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # interruptible will cancel the job if another job in the same merge request is started 3 | stages: 4 | - test 5 | 6 | test: 7 | stage: test 8 | image: busybox 9 | interruptible: true 10 | script: 11 | - echo "run tests" 12 | -------------------------------------------------------------------------------- /gitlab-ci/manual-destroy-step/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | image: busybox:latest 2 | 3 | stages: 4 | - build 5 | - test 6 | - deploy 7 | - destroy 8 | 9 | before_script: 10 | - echo "Before script section" 11 | 12 | after_script: 13 | - echo "After script section" 14 | 15 | build1: 16 | stage: build 17 | script: 18 | - echo "building" 19 | 20 | test1: 21 | stage: test 22 | script: 23 | - echo "testing" 24 | 25 | test2: 26 | stage: test 27 | script: 28 | - echo "parallel test" 29 | 30 | deploy1: 31 | stage: deploy 32 | script: 33 | - echo "deploying" 34 | - deploytime=$(( ( RANDOM % 10 ) + 1 )) 35 | - sleep $deploytime 36 | 37 | destroy: 38 | stage: destroy 39 | when: manual 40 | script: 41 | - echo "destroy --dry-run" 42 | 43 | destroy-confirmation: 44 | stage: destroy 45 | script: echo "destroy --force" 46 | when: manual 47 | needs: 48 | - destroy 49 | 
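# A sketch (assuming destroys should only be possible from a protected main
# branch) of how the destroy jobs above could additionally be gated with rules:
#
# destroy:
#   stage: destroy
#   rules:
#     - if: '$CI_COMMIT_BRANCH == "main"'
#       when: manual
#   script:
#     - echo "destroy --dry-run"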
-------------------------------------------------------------------------------- /gitlab-ci/multiple-executors/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | --- 2 | stages: 3 | - default 4 | - shell 5 | - docker 6 | 7 | default-run: 8 | stage: default 9 | script: 10 | - echo $CI_PROJECT_DIR 11 | - echo "Running on the default runner" 12 | - date 13 | - hostname 14 | 15 | shell-run: 16 | stage: shell 17 | tags: 18 | - shell 19 | script: 20 | - echo $CI_PROJECT_DIR 21 | - echo "Running on the shell runner" 22 | - date 23 | - hostname 24 | 25 | docker-run: 26 | stage: docker 27 | image: busybox:latest 28 | tags: 29 | - docker 30 | script: 31 | - echo $CI_PROJECT_DIR 32 | - echo "Running on the docker runner" 33 | - date 34 | - hostname 35 | -------------------------------------------------------------------------------- /gitlab-ci/parallel-jobs/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | variables: 2 | RANDOM_WORD: hello 3 | 4 | image: busybox:latest 5 | 6 | stages: 7 | - single-job 8 | - parallel-jobs 9 | - parallel-jobs-with-dependencies 10 | - results 11 | 12 | job-one: 13 | stage: single-job 14 | script: 15 | - HOSTNAME=$(hostname) 16 | - echo "hostname=${HOSTNAME}, random-number=${RANDOM}, pipeline-id=${CI_PIPELINE_ID}" > file.txt 17 | artifacts: 18 | paths: 19 | - file.txt 20 | 21 | job-two: 22 | stage: parallel-jobs 23 | script: 24 | - HOSTNAME=$(hostname) 25 | - echo "hostname=${HOSTNAME}, random-number=${RANDOM}, pipeline-id=${CI_PIPELINE_ID}" >> file.txt 26 | artifacts: 27 | paths: 28 | - file.txt 29 | 30 | job-three: 31 | stage: parallel-jobs 32 | script: 33 | - HOSTNAME=$(hostname) 34 | - echo "hostname=${HOSTNAME}, random-number=${RANDOM}, pipeline-id=${CI_PIPELINE_ID}" >> file.txt 35 | artifacts: 36 | paths: 37 | - file.txt 38 | 39 | job-four: 40 | stage: parallel-jobs-with-dependencies 41 | needs: [job-three] 42 | script: 43 | - HOSTNAME=$(hostname) 44 | - echo "hostname=${HOSTNAME}, random-number=${RANDOM}, pipeline-id=${CI_PIPELINE_ID}" >> file.txt 45 | artifacts: 46 | paths: 47 | - file.txt 48 | 49 | job-five: 50 | stage: parallel-jobs-with-dependencies 51 | needs: [job-four] 52 | dependencies: 53 | - job-four 54 | script: 55 | - HOSTNAME=$(hostname) 56 | - echo "hostname=${HOSTNAME}, random-number=${RANDOM}, pipeline-id=${CI_PIPELINE_ID}" >> file.txt 57 | artifacts: 58 | paths: 59 | - file.txt 60 | 61 | job-six: 62 | stage: parallel-jobs-with-dependencies 63 | needs: [job-four] 64 | dependencies: 65 | - job-four 66 | script: 67 | - HOSTNAME=$(hostname) 68 | - echo "hostname=${HOSTNAME}, random-number=${RANDOM}, pipeline-id=${CI_PIPELINE_ID}" >> file.txt 69 | artifacts: 70 | paths: 71 | - file.txt 72 | 73 | view-results: 74 | stage: results 75 | script: 76 | - cat file.txt 77 | -------------------------------------------------------------------------------- /gitlab-ci/reusable-jobs/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Resource: https://dnsmichi.at/2021/09/17/reusable-job-attributes-in-other-jobs-in-gitlab-ci-cd-with-reference/ 3 | stages: 4 | - setup 5 | 6 | .python-requirements: 7 | script: 8 | - python3 -m venv venv 9 | - source venv/bin/activate 10 | - pip install -r requirements.txt 11 | - pip install pylint 12 | 13 | cache: 14 | key: ${CI_COMMIT_REF_SLUG} 15 | paths: 16 | - venv 17 | - .cache/pip 18 | 19 | setup:dev: 20 | extends: .python-requirements 21 | stage: setup 22 | image: 
python:3.8 23 | script: 24 | - python -V 25 | - echo "[SETUP DEV]" 26 | - pushd dev 27 | - !reference [.python-requirements, script] 28 | - deactivate 29 | - popd 30 | rules: 31 | - if: '$CI_COMMIT_BRANCH' 32 | 33 | setup:staging: 34 | extends: .python-requirements 35 | stage: setup 36 | image: python:3.8 37 | script: 38 | - python -V 39 | - echo "[SETUP STAGING]" 40 | - pushd staging 41 | - !reference [.python-requirements, script] 42 | - pylint --version 43 | - deactivate 44 | - popd 45 | - echo "some other stuff" 46 | rules: 47 | - if: '$CI_COMMIT_BRANCH' 48 | 49 | # The reusable job becomes this ultimately: 50 | # 51 | # script: 52 | # ... 53 | # - pushd staging 54 | # - python3 -m venv venv 55 | # - source venv/bin/activate 56 | # - pip install -r requirements.txt 57 | # - pip install pylint 58 | # - pylint --version 59 | -------------------------------------------------------------------------------- /gitlab-ci/services/.gitlab-ci-mysql.yml: -------------------------------------------------------------------------------- 1 | --- 2 | stages: 3 | - healthcheck 4 | - run 5 | 6 | services: 7 | - mysql:5.7 8 | 9 | variables: 10 | # Configure mysql service (https://hub.docker.com/_/mysql/) 11 | MYSQL_DATABASE: test 12 | MYSQL_ROOT_PASSWORD: testpassword 13 | 14 | connect: 15 | image: mysql:5.7 16 | stage: healthcheck 17 | tags: 18 | - docker 19 | script: 20 | - echo "SELECT 'OK';" | mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql "$MYSQL_DATABASE" 21 | 22 | post: 23 | image: mysql:5.7 24 | stage: run 25 | dependencies: 26 | - connect 27 | tags: 28 | - docker 29 | script: 30 | - echo done 31 | -------------------------------------------------------------------------------- /gitlab-ci/terraform-pipeline/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | # terraform pipeline to plan automatically and manual apply 2 | 3 | image: 4 | name: hashicorp/terraform:1.2.5 5 | entrypoint: [""] 6 | 7 | stages: 8 | - validate 9 | - plan 10 | - apply 11 | 12 | .terraform: 13 | artifacts: 14 | paths: 15 | - '**/deploy.tfplan' 16 | - '**/.terraform.lock.hcl' 17 | 18 | before_script: 19 | - cd environments/dev 20 | - terraform --version 21 | - terraform init 22 | 23 | validate: 24 | stage: validate 25 | script: 26 | - terraform validate 27 | only: 28 | - branches 29 | except: 30 | - main 31 | 32 | plan-branch: 33 | environment: 34 | name: dev 35 | action: prepare 36 | extends: .terraform 37 | stage: plan 38 | script: 39 | - terraform plan --var-file $TERRAFORM_VARS_FILE -input=false 40 | only: 41 | - branches 42 | except: 43 | - main 44 | 45 | plan-dev: 46 | environment: 47 | name: dev 48 | action: prepare 49 | extends: .terraform 50 | stage: plan 51 | script: 52 | - terraform plan --var-file $TERRAFORM_VARS_FILE -input=false -out deploy.tfplan 53 | only: 54 | - main 55 | 56 | apply-dev: 57 | extends: .terraform 58 | environment: 59 | name: dev 60 | action: start 61 | stage: apply 62 | script: 63 | - terraform apply -input=false -auto-approve deploy.tfplan 64 | when: manual 65 | allow_failure: false 66 | only: 67 | - main 68 | -------------------------------------------------------------------------------- /golang/README.md: -------------------------------------------------------------------------------- 1 | # Go Cheatsheet 2 | 3 | -------------------------------------------------------------------------------- /golang/environment/README.md: -------------------------------------------------------------------------------- 1 | ## Environment 
Setup 2 | 3 | ### Environment on Linux 4 | 5 | - [Setup Go Environment](https://www.callicoder.com/golang-installation-setup-gopath-workspace/) 6 | 7 | ### Environment on Docker 8 | 9 | Go build environment on Alpine: 10 | 11 | ``` 12 | $ docker run -it alpine sh 13 | GO_VERSION=1.15.2 14 | apk add --no-cache ca-certificates 15 | echo 'hosts: files dns' > /etc/nsswitch.conf 16 | apk add --no-cache --virtual .build-deps bash gcc musl-dev openssl go 17 | go env GOROOT 18 | GOROOT_BOOTSTRAP="$(go env GOROOT)" 19 | GOOS="$(go env GOOS)" 20 | GOARCH="$(go env GOARCH)" 21 | GOHOSTOS="$(go env GOHOSTOS)" 22 | GOHOSTARCH="$(go env GOHOSTARCH)" 23 | apkArch="$(apk --print-arch)" 24 | wget -O go.tgz "https://golang.org/dl/go$GO_VERSION.src.tar.gz" 25 | tar -C /usr/local -xzf go.tgz 26 | rm go.tgz 27 | cd /usr/local/go/src 28 | ./make.bash 29 | rm -rf /usr/local/go/pkg/bootstrap /usr/local/go/pkg/obj 30 | apk del .build-deps 31 | export PATH="/usr/local/go/bin:$PATH" 32 | export GOPATH=/go 33 | export PATH=$GOPATH/bin:/usr/local/go/bin:$PATH 34 | mkdir -p "$GOPATH/src" "$GOPATH/bin" 35 | cd $GOPATH 36 | go version 37 | ``` 38 | 39 | To get an environment where you can download from git, append the following: 40 | 41 | ``` 42 | $ apk add --no-cache gcc musl-dev git 43 | $ go get github.com/digitalocean/godo 44 | $ ls $GOPATH/ 45 | bin pkg src 46 | 47 | $ ls $GOPATH/src/github.com/digitalocean 48 | godo 49 | ``` 50 | 51 | Test: 52 | 53 | ``` 54 | $ mkdir $GOPATH/src/github.com/ruanbekker/hello 55 | $ cat $GOPATH/src/github.com/ruanbekker/hello/main.go 56 | package main 57 | 58 | import "fmt" 59 | 60 | func main() { 61 | fmt.Println("Hello, World!") 62 | } 63 | 64 | $ go run $GOPATH/src/github.com/ruanbekker/hello/main.go 65 | Hello, World! 66 | 67 | 68 | $ cd $GOPATH/src/github.com/ruanbekker/hello 69 | 70 | $ go install 71 | 72 | $ which hello 73 | /go/bin/hello 74 | ``` 75 | 76 | Use a dependency to test: 77 | 78 | ``` 79 | $ cat $GOPATH/src/github.com/ruanbekker/randomnumz/main.go 80 | package main 81 | 82 | import ( 83 | "fmt" 84 | "github.com/bxcodec/faker" 85 | ) 86 | 87 | func main() { 88 | randomDay := faker.DayOfWeek() 89 | fmt.Println("Hi:", randomDay) 90 | } 91 | 92 | $ go get -u github.com/bxcodec/faker 93 | # or 94 | $ cd $GOPATH/src/github.com/ruanbekker/randomnumz 95 | $ go get 96 | 97 | $ go run main.go 98 | Hi: Sunday 99 | 100 | $ which randomnumz 101 | /go/bin/randomnumz 102 | 103 | $ randomnumz 104 | Hi: Wednesday 105 | 106 | # or: 107 | 108 | $ go build -o random -v main.go 109 | ./random 110 | Hi: Thursday 111 | ``` 112 | -------------------------------------------------------------------------------- /golang/go-web-logs/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:alpine AS builder 2 | WORKDIR /go/src/hello 3 | RUN apk add --no-cache gcc libc-dev 4 | ADD src/go.* /go/src/hello/ 5 | RUN go mod download 6 | ADD src/app.go . 7 | RUN GOOS=linux GOARCH=amd64 go build -tags=netgo app.go 8 | 9 | FROM scratch 10 | COPY --from=builder /go/src/hello/app /app 11 | CMD ["/app"] 12 | -------------------------------------------------------------------------------- /golang/go-web-logs/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | 3 | services: 4 | go-app: 5 | build: .
6 | container_name: go-app 7 | ports: 8 | - 8080:8080 9 | networks: 10 | - appnet 11 | logging: 12 | driver: "json-file" 13 | options: 14 | max-size: "1m" 15 | 16 | networks: 17 | appnet: 18 | name: appnet 19 | -------------------------------------------------------------------------------- /golang/go-web-logs/src/app.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "net/http" 5 | 6 | log "github.com/sirupsen/logrus" 7 | ) 8 | 9 | func main() { 10 | log.SetFormatter(&log.JSONFormatter{}) 11 | log.Info("starting server") 12 | http.HandleFunc("/", hostnameHandler) 13 | http.ListenAndServe("0.0.0.0:8080", nil) 14 | } 15 | 16 | func hostnameHandler(w http.ResponseWriter, r *http.Request) { 17 | log.SetFormatter(&log.JSONFormatter{}) 18 | log.WithFields(log.Fields{"health": "ok"}).Info("service is healthy") 19 | } 20 | -------------------------------------------------------------------------------- /golang/go-web-logs/src/go.mod: -------------------------------------------------------------------------------- 1 | module hellologs 2 | 3 | go 1.16 4 | 5 | require github.com/sirupsen/logrus v1.8.1 // indirect 6 | -------------------------------------------------------------------------------- /golang/go-web-logs/src/go.sum: -------------------------------------------------------------------------------- 1 | github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 2 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 3 | github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE= 4 | github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0= 5 | github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= 6 | golang.org/x/sys v0.0.0-20191026070338-33540a1f6037 h1:YyJpGZS1sBuBCzLAR1VEpK193GlqGZbnPFnPV/5Rsb4= 7 | golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 8 | -------------------------------------------------------------------------------- /golang/snippets/http-api-with-http-request/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "io/ioutil" 5 | "encoding/json" 6 | "log" 7 | "net/http" 8 | ) 9 | 10 | type Response struct { 11 | Data string `json:"value"` 12 | } 13 | 14 | func getDataFromExternalEndpoint() (*Response, error) { 15 | url := "https://api.chucknorris.io/jokes/random" 16 | resp, err := http.Get(url) 17 | if err != nil { 18 | return nil, err 19 | } 20 | defer resp.Body.Close() 21 | body, err := ioutil.ReadAll(resp.Body) 22 | if err != nil { 23 | return nil, err 24 | } 25 | var response Response 26 | err = json.Unmarshal(body, &response) 27 | if err != nil { 28 | return nil, err 29 | } 30 | return &response, nil 31 | } 32 | 33 | 34 | func handler(w http.ResponseWriter, r *http.Request) { 35 | response, err := getDataFromExternalEndpoint() 36 | if err != nil { 37 | http.Error(w, err.Error(), http.StatusInternalServerError) 38 | return 39 | } 40 | json.NewEncoder(w).Encode(response) 41 | } 42 | 43 | 44 | func main() { 45 | http.HandleFunc("/", handler) 46 | log.Fatal(http.ListenAndServe(":8080", nil)) 47 | } 48 | 49 | 50 | -------------------------------------------------------------------------------- /golang/snippets/http-requests-return-statuscode/main.go: 
-------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "net/http" 6 | ) 7 | 8 | func main() { 9 | url := "https://ruan.dev" 10 | resp, err := http.Get(url) 11 | if err != nil { 12 | fmt.Println("Error while fetching the URL:", err) 13 | return 14 | } 15 | defer resp.Body.Close() 16 | fmt.Println(resp.StatusCode) 17 | } 18 | -------------------------------------------------------------------------------- /golang/snippets/random-fake-word/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "github.com/bxcodec/faker" 6 | ) 7 | 8 | func main() { 9 | randomDay := faker.DayOfWeek() 10 | fmt.Println("Hi:", randomDay) 11 | } 12 | -------------------------------------------------------------------------------- /golang/snippets/random-float/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "time" 6 | "math/rand" 7 | ) 8 | 9 | func floatrandom() float64 { 10 | rand.Seed(time.Now().UnixNano()) 11 | return rand.Float64() 12 | } 13 | 14 | func main() { 15 | 16 | res1 := floatrandom() 17 | res2 := floatrandom() 18 | res3 := floatrandom() 19 | 20 | // Displaying results 21 | fmt.Println("Result 1: ", res1) 22 | fmt.Println("Result 2: ", res2) 23 | fmt.Println("Result 3: ", res3) 24 | } 25 | -------------------------------------------------------------------------------- /golang/snippets/random-integer/main.go: -------------------------------------------------------------------------------- 1 | // https://golang.cafe/blog/golang-random-number-generator.html 2 | // https://golang.org/pkg/math/rand/ 3 | // Top-level functions, such as Float64 and Int, use a default shared Source that produces a deterministic sequence of values each time a program is run. 
Use the Seed function to initialize the default Source if different behavior is required for each run 4 | 5 | package main 6 | 7 | import ( 8 | "fmt" 9 | "math/rand" 10 | "time" 11 | ) 12 | 13 | func main() { 14 | rand.Seed(time.Now().UnixNano()) 15 | min := 1 16 | max := 30 17 | fmt.Println(rand.Intn(max - min + 1) + min) 18 | } 19 | -------------------------------------------------------------------------------- /golang/snippets/webserver-requestpath/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "html" 6 | "log" 7 | "net/http" 8 | ) 9 | 10 | func main() { 11 | 12 | http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { 13 | fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path)) 14 | }) 15 | 16 | log.Println("Listening on localhost:8080") 17 | log.Fatal(http.ListenAndServe(":8080", nil)) 18 | } 19 | -------------------------------------------------------------------------------- /grok/README.md: -------------------------------------------------------------------------------- 1 | ## Datadog Grok Parser 2 | 3 | ### Example 1 4 | 5 | Message: 6 | 7 | ``` 8 | Endpoints not available for default/team-app-service-foobar 9 | ``` 10 | 11 | Pattern: 12 | 13 | ``` 14 | warning_endpoint_rule %{regex("[endpoints not available for a-zA-Z]*"):message_line}/%{regex("[a-zA-Z0-9-]*"):service} 15 | ``` 16 | 17 | Result: 18 | 19 | ``` 20 | { 21 | "message_line": "Endpoints not available for default", 22 | "service": "team-app-service-foobar" 23 | } 24 | ``` 25 | 26 | ### Example 2 27 | 28 | Message: 29 | 30 | ``` 31 | [2019-12-10 00:00:07,890: INFO/ForkPoolWorker-10] Task api.tasks.handle_job[000000a0-1a2a-12a3-4a56-d12dd3456789] succeeded in 0.02847545174881816s: None 32 | ``` 33 | 34 | Pattern: 35 | 36 | ``` 37 | my_rule \[%{date("yyyy-MM-dd HH:mm:ss,SSS"):timestamp}: %{word:severity}/%{regex("[a-zA-Z0-9-]*"):process}\] %{data:details} 38 | ``` 39 | 40 | Result: 41 | 42 | ``` 43 | { 44 | "timestamp": 1575982567890, 45 | "severity": "INFO", 46 | "process": "ForkPoolWorker-10", 47 | "details": "Task api.tasks.handle_job[000000a0-1a2a-12a3-4a56-d12dd3456789] succeeded in 0.02847545174881816s: None" 48 | } 49 | ``` 50 | 51 | ### Example 3 52 | 53 | Message: 54 | 55 | ``` 56 | 2019-12-05 11:00:08,921 INFO module=trace, process_id=13, Task apps_dir.health.queue.tasks.add[000000a0-1a2a-12a3-4a56-d12dd3456789] succeeded in 0.0001603253185749054s: 8 57 | ``` 58 | 59 | Pattern: 60 | 61 | ``` 62 | my_rule .*%{date("yyyy-MM-dd HH:mm:ss,SSS"):date} %{word:status} .* 63 | ``` 64 | 65 | Result: 66 | 67 | ``` 68 | { 69 | "date": 1575982567890, 70 | "status": "INFO" 71 | } 72 | ``` 73 | -------------------------------------------------------------------------------- /helm-2/README.md: -------------------------------------------------------------------------------- 1 | # helm-2 2 | 3 | Legacy Helm v2 Cheatsheet 4 | 5 | ## List Releases 6 | 7 | ```bash 8 | helm2 --tiller-namespace kube-system list 9 | ``` 10 | 11 | ## History 12 | 13 | ```bash 14 | helm2 --tiller-namespace kube-system history my-app 15 | ``` 16 | 17 | ## Get Values 18 | 19 | ```bash 20 | helm2 --tiller-namespace kube-system get values my-app --output yaml 21 | ``` 22 | 23 | ## Get Manifest 24 | 25 | ```bash 26 | helm2 --tiller-namespace kube-system get manifest my-app --revision 188 27 | ``` 28 | 29 | ## Rollback 30 | 31 | ```bash 32 | helm2 --tiller-namespace kube-system rollback my-app 5 33 | ``` 34 | 35 |
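A related command that pairs with the rollback above (a hedged addition, not in the original cheatsheet; verify against your Helm 2 client):

```bash
# inspect the current state of a release before rolling it back
helm2 --tiller-namespace kube-system status my-app
```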
-------------------------------------------------------------------------------- /html-css/center-page/index.html: -------------------------------------------------------------------------------- 1 | <!DOCTYPE html> 2 | <html> 3 | <head> 4 | <meta charset="utf-8"> 5 | <title>Jokes</title> 6 | <style> /* lines 7-36: page-centering styles, lost in extraction */ 37 | </style> 38 | </head> 39 | <body> 40 | <div>The Joke</div> 41 | <div>The punchline.</div> 42 | <div>Gimme another one</div> 43 |
44 | </body> 45 | </html> 46 | -------------------------------------------------------------------------------- /influxdb/README.md: -------------------------------------------------------------------------------- 1 | ## InfluxDB Cheatsheet 2 | 3 | ### Connect to InfluxDB: 4 | 5 | ``` 6 | $ influx 7 | ``` 8 | 9 | ### Create DB: 10 | 11 | ``` 12 | > create database test 13 | ``` 14 | 15 | ### List Databases: 16 | 17 | ``` 18 | > show databases 19 | ``` 20 | 21 | ### Select a DB: 22 | 23 | ``` 24 | > use test 25 | ``` 26 | 27 | ### List Measurements 28 | 29 | ``` 30 | > show measurements 31 | ``` 32 | 33 | ### Show Measurements for name: bar 34 | 35 | ``` 36 | > select * from bar 37 | ``` 38 | 39 | ### Drop bar Measurements 40 | 41 | ``` 42 | > drop measurement bar 43 | ``` 44 | 45 | ### Show field keys 46 | 47 | ``` 48 | > show field keys from "bar-A1" 49 | ``` 50 | -------------------------------------------------------------------------------- /install/README.md: -------------------------------------------------------------------------------- 1 | # install 2 | 3 | ## Usage 4 | 5 | Download a binary from the internet: 6 | 7 | ```bash 8 | curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" 9 | ``` 10 | 11 | Use the install command to apply executable permissions and move the binary in place: 12 | 13 | ```bash 14 | sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl 15 | ``` 16 | -------------------------------------------------------------------------------- /iptables/README.md: -------------------------------------------------------------------------------- 1 | # iptables cheatsheet 2 | 3 | A couple of iptables examples 4 | 5 | ## Prerouting 6 | 7 | client -> host (10.20.1.1:2098) -> forward connection to a destination ip and port -> (10.22.23.4:22) 8 | 9 | ``` 10 | # create and insert at the top of the chain 11 | iptables -t nat -I PREROUTING -p tcp --dport 2098 -j DNAT --to-destination 10.22.23.4:22 12 | 13 | # create and append 14 | iptables -t nat -A PREROUTING -p tcp --dport 2098 -j DNAT --to-destination 10.22.23.4:22 15 | 16 | # deletes rule 17 | iptables -t nat -D PREROUTING -p tcp --dport 2098 -j DNAT --to-destination 10.22.23.4:22 18 | ``` 19 | 20 | Resources: 21 | - [difference-beetween-dnat-and-redirect-in-iptables](https://serverfault.com/questions/179200/difference-beetween-dnat-and-redirect-in-iptables) 22 | -------------------------------------------------------------------------------- /iterm/README.md: -------------------------------------------------------------------------------- 1 | # iterm cheatsheet 2 | 3 | ## External Resources 4 | 5 | - [iTerm2 Cheatsheet](https://gist.github.com/squarism/ae3613daf5c01a98ba3a) -------------------------------------------------------------------------------- /javascript/redis/server.js: -------------------------------------------------------------------------------- 1 | redis = require('redis') 2 | client = redis.createClient({url: 'redis://redis:6379'}) 3 | client.set('foo', 'bar') 4 | 5 | // 127.0.0.1:6379> get foo 6 | // "bar" 7 | -------------------------------------------------------------------------------- /jsonnet/README.md: -------------------------------------------------------------------------------- 1 | # jsonnet cheatsheet 2 | 3 | A data templating language for app and tool developers.
[website](https://jsonnet.org/) 3 | 4 | ## Installation 5 | 6 | Install via brew: 7 | 8 | ```bash 9 | brew install jsonnet 10 | ``` 11 | 12 | ## Basic Example 13 | 14 | In `basic.jsonnet`: 15 | 16 | ``` 17 | local host = '10.0.0.120'; // Standard local variable ends with semicolon(;). 18 | local http_port = 8080; 19 | 20 | { 21 | local db_port = 3128, // A local variable next to JSON fields ends with comma(,). 22 | app_protocol:: 'http', // A special hidden variable next to fields use (::) instead of (=) or (:). 23 | environment_config: { 24 | app: { 25 | name: 'Sample app', 26 | url: $.app_protocol + '://' + host + ':' + http_port + '/app/' 27 | }, 28 | database: { 29 | name: "mysql database", 30 | username: "user", 31 | password: "password", 32 | host: host, 33 | port: db_port, 34 | }, 35 | rest_api: $.app_protocol + '://' + host + ':' + http_port + '/v2/api/' 36 | } 37 | } 38 | ``` 39 | 40 | Parse it with: 41 | 42 | ```bash 43 | jsonnet basic.jsonnet 44 | { 45 | "environment_config": { 46 | "app": { 47 | "name": "Sample app", 48 | "url": "http://10.0.0.120:8080/app/" 49 | }, 50 | "database": { 51 | "host": "10.0.0.120", 52 | "name": "mysql database", 53 | "password": "password", 54 | "port": 3128, 55 | "username": "user" 56 | }, 57 | "rest_api": "http://10.0.0.120:8080/v2/api/" 58 | } 59 | } 60 | ``` 61 | -------------------------------------------------------------------------------- /k3s/README.md: -------------------------------------------------------------------------------- 1 | # k3s 2 | 3 | Install a Cluster: 4 | 5 | ``` 6 | curl https://get.k3s.io | INSTALL_K3S_VERSION="v1.18.13+k3s1" INSTALL_K3S_EXEC="server --write-kubeconfig-mode 0644 --tls-san foo.bar" sh -s - 7 | ``` 8 | -------------------------------------------------------------------------------- /k9s/README.md: -------------------------------------------------------------------------------- 1 | # k9s 2 | 3 | ## Themes 4 | 5 | Catppuccin themes: 6 | - https://github.com/catppuccin/k9s 7 | -------------------------------------------------------------------------------- /kafka/README.md: -------------------------------------------------------------------------------- 1 | # kafka cheatsheet 2 | 3 | Kafka is a distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol.
4 | 5 | ## List Topics 6 | 7 | ```bash 8 | kafka-topics --list --bootstrap-server kafka-broker:9092 9 | ``` 10 | 11 | ## Create Topic 12 | 13 | ```bash 14 | kafka-topics --create --topic test-topic --bootstrap-server kafka-broker:9092 15 | ``` 16 | 17 | ## Describe Topic 18 | 19 | ```bash 20 | kafka-topics --describe --topic test-topic --bootstrap-server kafka-broker:9092 21 | ``` 22 | 23 | ## Produce Messages 24 | 25 | ```bash 26 | echo "hello" | kafka-console-producer --bootstrap-server kafka-broker:9092 --topic test-topic 27 | ``` 28 | 29 | Produce JSON Messages: 30 | 31 | ```bash 32 | cat > file.json << EOF 33 | { "id": 1, "first_name": "John"} 34 | { "id": 2, "first_name": "Peter"} 35 | { "id": 3, "first_name": "Nate"} 36 | { "id": 4, "first_name": "Frank"} 37 | EOF 38 | 39 | kafka-console-producer --bootstrap-server kafka-broker:9092 --topic test-topic < file.json 40 | ``` 41 | 42 | ## Consume Messages 43 | 44 | Read messages as they arrive: 45 | 46 | ```bash 47 | kafka-console-consumer --bootstrap-server kafka-broker:9092 --topic test-topic 48 | ``` 49 | 50 | Reading messages from the beginning: 51 | 52 | ```bash 53 | kafka-console-consumer --bootstrap-server kafka-broker:9092 --topic test-topic --from-beginning 54 | ``` 55 | 56 | ## Count Messages in Topic 57 | 58 | ```bash 59 | kafka-run-class kafka.tools.GetOffsetShell --bootstrap-server kafka-broker:9092 --topic test-topic | awk -F ":" '{sum += $3} END {print "Result: "sum}' 60 | ``` 61 | -------------------------------------------------------------------------------- /keybase-cli/README.md: -------------------------------------------------------------------------------- 1 | # keybase-cli 2 | 3 | ## GPG Decrypt 4 | 5 | ```bash 6 | pbpaste | base64 -d | keybase pgp decrypt 7 | ``` 8 | -------------------------------------------------------------------------------- /kotlin/README.md: -------------------------------------------------------------------------------- 1 | # Kotlin Cheatsheets 2 | 3 | External Resources: 4 | 5 | - https://github.com/BestFinderGit/KotlinExample 6 | -------------------------------------------------------------------------------- /kubernetes/LAB.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Lab 2 | 3 | ## Create a Cluster 4 | 5 | Create a cluster with k3s: 6 | 7 | ``` 8 | $ curl https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s - 9 | ``` 10 | 11 | View the nodes: 12 | 13 | ``` 14 | $ kubectl get nodes --output wide 15 | NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME 16 | primary Ready master 4m43s v1.19.3+k3s3 192.168.64.10 <none> Ubuntu 20.04.1 LTS 5.4.0-54-generic containerd://1.4.1-k3s1 17 | ``` 18 | 19 | ## Create a Basic Deployment 20 | 21 | Run a basic deployment for a web service that returns the hostname: 22 | 23 | ``` 24 | $ kubectl create deployment hostname --image ruanbekker/hostname 25 | deployment.apps/hostname created 26 | ``` 27 | 28 | View the deployment status: 29 | 30 | ``` 31 | $ kubectl get deployment/hostname --output wide 32 | NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR 33 | hostname 1/1 1 1 71s hostname ruanbekker/hostname app=hostname 34 | ``` 35 | 36 | View the pods using the `app=hostname` selector: 37 | 38 | ``` 39 | $ kubectl get pods --selector app=hostname --output wide 40 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 41 | hostname-6cc46b9766-bvrcs 1/1 Running 0 101s 10.42.0.8 primary <none> <none> 42 | ``` 43 | 44 | Create a service and expose port
8000 to the container: 45 | 46 | ``` 47 | $ kubectl expose deployment/hostname --type NodePort --port 8000 48 | service/hostname exposed 49 | ``` 50 | 51 | View the service details: 52 | 53 | ``` 54 | $ kubectl get service/hostname --output wide 55 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR 56 | hostname NodePort 10.43.155.228 <none> 8000:30033/TCP 8s app=hostname 57 | ``` 58 | 59 | From outside your cluster, view the application: 60 | 61 | ``` 62 | $ curl http://192.168.64.10:30033 63 | Hostname: hostname-6cc46b9766-bvrcs 64 | ``` 65 | -------------------------------------------------------------------------------- /kubernetes/LEARN_GUIDE.md: -------------------------------------------------------------------------------- 1 | ## Kubernetes Learn Guide 2 | 3 | ## Pods 4 | 5 | - [Define a command in a container](https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/) 6 | 7 | ## Deployments 8 | 9 | - [Declarative updates for Pods and ReplicaSets](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) 10 | -------------------------------------------------------------------------------- /kubernetes/SNIPPETS.md: -------------------------------------------------------------------------------- 1 | ## Pod Anti-Affinity 2 | 3 | Ensures pods don't run on the same node. 4 | 5 | ``` 6 | affinity: 7 | podAntiAffinity: 8 | requiredDuringSchedulingIgnoredDuringExecution: 9 | - labelSelector: 10 | matchExpressions: 11 | - key: app 12 | operator: In 13 | values: 14 | - bitcoin 15 | topologyKey: "kubernetes.io/hostname" 16 | ``` 17 | 18 | ``` 19 | affinity: 20 | podAntiAffinity: 21 | requiredDuringSchedulingIgnoredDuringExecution: 22 | - labelSelector: 23 | matchLabels: 24 | app: bitcoind 25 | topologyKey: "kubernetes.io/hostname" 26 | 27 | ``` 28 | 29 | Soft Anti-Affinity 30 | 31 | ``` 32 | podAntiAffinity: 33 | preferredDuringSchedulingIgnoredDuringExecution: 34 | - weight: 100 35 | podAffinityTerm: 36 | labelSelector: 37 | matchLabels: 38 | app: bitcoind 39 | topologyKey: "kubernetes.io/hostname" 40 | 41 | ``` 42 | 43 | ## Mounting a Docker Socket: 44 | 45 | ``` 46 | - image: docker:stable-dind 47 | name: docker-in-docker 48 | volumeMounts: 49 | - name: dockersock 50 | mountPath: "/var/run" 51 | #mountPath: "/var/run/docker.sock" 52 | securityContext: 53 | privileged: true 54 | allowPrivilegeEscalation: true 55 | volumes: 56 | - name: dockersock 57 | hostPath: 58 | path: /var/run/docker.sock 59 | #type: File 60 | ``` 61 | -------------------------------------------------------------------------------- /kubernetes/TROUBLESHOOTING.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Troubleshooting 2 | 3 | ## Daemonset not running on all nodes 4 | 5 | 1. Describe the daemonset and check: 6 | - Events 7 | - Selectors, Node-Selectors and Tolerations 8 | 2. Identify the nodes that the pods are not running in: 9 | - `kubectl get pods -A --field-selector spec.nodeName=ip-10-254-1-20` 10 | 3. 
Describe the node where the pods are not running for Taints: 11 | - `kubectl describe node ip-10-254-1-20| grep Taints` 12 | - if you see something like `Taints: application=monitoring:NoSchedule` you need to add tolerations to the daemonset: 13 | 14 | ```yaml 15 | tolerations: 16 | - key: "application" 17 | operator: "Equal" 18 | value: "monitoring" 19 | effect: "NoSchedule" 20 | ``` 21 | -------------------------------------------------------------------------------- /kubernetes/snippets/attach-pvc-to-debug-pod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pvc-debug-pod 5 | spec: 6 | containers: 7 | - name: alpine 8 | image: alpine:latest 9 | command: ['sleep', 'infinity'] 10 | volumeMounts: 11 | - name: mypvc 12 | mountPath: /data 13 | volumes: 14 | - name: mypvc 15 | persistentVolumeClaim: 16 | claimName: mypvc 17 | -------------------------------------------------------------------------------- /kubernetes/snippets/cronjob.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: batch/v1 2 | kind: CronJob 3 | metadata: 4 | name: minutely-job 5 | namespace: default 6 | spec: 7 | schedule: "*/1 * * * *" 8 | jobTemplate: 9 | spec: 10 | template: 11 | spec: 12 | containers: 13 | - name: hello 14 | image: busybox 15 | args: 16 | - /bin/sh 17 | - -c 18 | - date; echo "hi from $(hostname)" 19 | restartPolicy: OnFailure 20 | -------------------------------------------------------------------------------- /kubernetes/snippets/define-command-in-deployment.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: run-command-with-deployment 6 | spec: 7 | replicas: 1 8 | selector: 9 | matchLabels: 10 | app: debug 11 | template: 12 | metadata: 13 | labels: 14 | app: debug 15 | spec: 16 | containers: 17 | - name: test 18 | image: busybox 19 | command: ["/bin/sh"] 20 | args: ["-c", "while true; do echo hello; sleep 10;done"] 21 | -------------------------------------------------------------------------------- /kubernetes/snippets/dockerd-sidecar-deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: dind 5 | labels: 6 | app: dind 7 | spec: 8 | replicas: 1 9 | selector: 10 | matchLabels: 11 | app: dind 12 | template: 13 | metadata: 14 | labels: 15 | app: dind 16 | spec: 17 | containers: 18 | - name: docker-cmds 19 | image: docker:19.03.14 20 | command: ['docker', 'run', 'alpine', 'tail', '-f', '/dev/null'] 21 | securityContext: 22 | privileged: true 23 | resources: 24 | requests: 25 | cpu: "50m" 26 | memory: "256Mi" 27 | env: 28 | - name: DOCKER_HOST 29 | value: unix:///var/run/docker.sock 30 | - name: DOCKER_TLS_CERTDIR 31 | value: "" 32 | volumeMounts: 33 | - name: docker-socket-dir 34 | mountPath: /var/run 35 | - name: dind-daemon 36 | image: docker:stable-dind 37 | resources: 38 | limits: 39 | cpu: "1" 40 | memory: "512Mi" 41 | requests: 42 | cpu: 500m 43 | memory: "128Mi" 44 | securityContext: 45 | privileged: true 46 | volumeMounts: 47 | - name: docker-graph-storage 48 | mountPath: /var/lib/docker 49 | - name: docker-socket-dir 50 | mountPath: /var/run 51 | volumes: 52 | - name: docker-graph-storage 53 | emptyDir: {} 54 | - name: docker-socket-dir 55 | emptyDir: {} 56 | 
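A quick way to verify the sidecar wiring after applying the deployment above (a minimal sketch; `dind` and `docker-cmds` are the deployment and container names from that manifest):

```bash
# run a docker command in the client container; it reaches the dind
# sidecar through the docker socket shared via the /var/run emptyDir
kubectl exec deploy/dind -c docker-cmds -- docker ps
```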
-------------------------------------------------------------------------------- /kubernetes/snippets/pod-node-selectory-tolerations.yaml: -------------------------------------------------------------------------------- 1 | # https://medium.com/kubernetes-tutorials/learn-how-to-assign-pods-to-nodes-in-kubernetes-using-nodeselector-and-affinity-features-e62c437f3cf8 2 | apiVersion: v1 3 | kind: Pod 4 | metadata: 5 | name: debug-pod 6 | namespace: default 7 | spec: 8 | containers: 9 | - name: debug 10 | image: alpine 11 | imagePullPolicy: IfNotPresent 12 | command: ["sleep"] 13 | args: ["100000"] 14 | nodeSelector: 15 | node: cpu 16 | tolerations: 17 | - key: application 18 | operator: Equal 19 | value: myapp 20 | effect: NoSchedule 21 | -------------------------------------------------------------------------------- /kubernetes/snippets/secret-as-env-var.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # kubectl create secret generic app-secret --from-literal=username=admin --from-literal=password=admin --dry-run=client --output=yaml 3 | apiVersion: v1 4 | kind: Secret 5 | metadata: 6 | name: app-secret 7 | type: Opaque 8 | data: 9 | username: YWRtaW4= # base64 encoded value of "admin" 10 | password: YWRtaW4= # base64 encoded value of "admin" 11 | --- 12 | apiVersion: v1 13 | kind: Pod 14 | metadata: 15 | name: my-pod 16 | spec: 17 | containers: 18 | - name: container 19 | image: busybox:latest 20 | env: 21 | - name: AUTHENTICATION_ENABLED 22 | value: "true" 23 | - name: AUTHENTICATION_PASSWORD 24 | valueFrom: 25 | secretKeyRef: 26 | key: password 27 | name: app-secret 28 | -------------------------------------------------------------------------------- /kubernetes/snippets/secret-mount-pod.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # kubectl create secret generic app-secret --from-literal=username=admin --from-literal=password=admin --dry-run=client --output=yaml 3 | apiVersion: v1 4 | kind: Secret 5 | metadata: 6 | name: app-secret 7 | type: Opaque 8 | data: 9 | username: YWRtaW4= # base64 encoded value of "admin" 10 | password: YWRtaW4= # base64 encoded value of "admin" 11 | --- 12 | apiVersion: v1 13 | kind: Pod 14 | metadata: 15 | name: my-pod 16 | spec: 17 | containers: 18 | - name: container 19 | image: busybox:latest 20 | volumeMounts: 21 | - name: secret-volume 22 | mountPath: "/etc/secret" 23 | readOnly: true 24 | volumes: 25 | - name: secret-volume 26 | secret: 27 | secretName: app-secret 28 | -------------------------------------------------------------------------------- /kubernetes/snippets/security-context-in-deployments.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: dind 5 | labels: 6 | app: dind 7 | spec: 8 | replicas: 1 9 | selector: 10 | matchLabels: 11 | app: dind 12 | template: 13 | metadata: 14 | labels: 15 | app: dind 16 | spec: 17 | containers: 18 | - name: dind-daemon 19 | image: docker:stable-dind 20 | resources: 21 | limits: 22 | cpu: "1" 23 | memory: "512Mi" 24 | requests: 25 | cpu: 500m 26 | memory: "128Mi" 27 | securityContext: 28 | privileged: true 29 | volumeMounts: 30 | - name: docker-graph-storage 31 | mountPath: /var/lib/docker 32 | - name: docker-socket-dir 33 | mountPath: /var/run 34 | volumes: 35 | - name: docker-graph-storage 36 | emptyDir: {} 37 | - name: docker-socket-dir 38 | emptyDir: {} 39 | 
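For contrast with the privileged docker-in-docker examples above, a locked-down securityContext for an ordinary workload could look like the following (a hedged sketch, not taken from this repo):

```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: 1000                  # assumes the image can run as uid 1000
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]                  # drop all Linux capabilities
```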
-------------------------------------------------------------------------------- /loki/loki-config/loki-config_aws.yml: -------------------------------------------------------------------------------- 1 | auth_enabled: false 2 | 3 | server: 4 | http_listen_port: 3100 5 | http_listen_address: 127.0.0.1 6 | http_server_read_timeout: 1000s 7 | http_server_write_timeout: 1000s 8 | http_server_idle_timeout: 1000s 9 | log_level: "info" 10 | 11 | ingester: 12 | lifecycler: 13 | address: 127.0.0.1 14 | ring: 15 | kvstore: 16 | store: inmemory 17 | replication_factor: 1 18 | final_sleep: 0s 19 | chunk_encoding: snappy 20 | chunk_idle_period: 5m 21 | chunk_retain_period: 30s 22 | max_transfer_retries: 0 23 | 24 | # https://grafana.com/docs/loki/latest/configuration/#schema_config 25 | schema_config: 26 | configs: 27 | - from: 2020-05-15 28 | store: aws 29 | object_store: s3 30 | schema: v11 31 | index: 32 | prefix: loki-logging-index 33 | #period: 168h 34 | 35 | storage_config: 36 | aws: 37 | http_config: 38 | idle_conn_timeout: 90s 39 | response_header_timeout: 0s 40 | s3: s3://x:x@eu-west-1/loki-logs-datastore 41 | 42 | dynamodb: 43 | dynamodb_url: dynamodb://x:x@eu-west-1 44 | 45 | limits_config: 46 | enforce_metric_name: false 47 | reject_old_samples: true 48 | reject_old_samples_max_age: 168h 49 | ingestion_rate_mb: 30 50 | ingestion_burst_size_mb: 60 51 | 52 | # https://grafana.com/docs/loki/latest/operations/storage/retention/ 53 | # To avoid querying of data beyond the retention period, max_look_back_period config in chunk_store_config 54 | # must be set to a value less than or equal to what is set in table_manager.retention_period 55 | chunk_store_config: 56 | #max_look_back_period: 0s 57 | max_look_back_period: 720h 58 | 59 | # https://grafana.com/docs/loki/latest/operations/storage/retention/ 60 | table_manager: 61 | #retention_deletes_enabled: false 62 | retention_deletes_enabled: true 63 | #retention_period: 0s 64 | retention_period: 720h 65 | chunk_tables_provisioning: 66 | inactive_read_throughput: 10 67 | inactive_write_throughput: 10 68 | provisioned_read_throughput: 50 69 | provisioned_write_throughput: 20 70 | index_tables_provisioning: 71 | inactive_read_throughput: 10 72 | inactive_write_throughput: 10 73 | provisioned_read_throughput: 50 74 | provisioned_write_throughput: 20 75 | -------------------------------------------------------------------------------- /loki/nginx-reverse-proxy/conf.d/loki.conf: -------------------------------------------------------------------------------- 1 | upstream loki { 2 | server 127.0.0.1:3100; 3 | keepalive 15; 4 | } 5 | 6 | server { 7 | listen 80; 8 | server_name loki.mydomain.com; 9 | 10 | auth_basic "loki auth"; 11 | auth_basic_user_file /etc/nginx/passwords; 12 | 13 | location / { 14 | proxy_read_timeout 1800s; 15 | proxy_connect_timeout 1600s; 16 | proxy_pass http://loki; 17 | proxy_http_version 1.1; 18 | proxy_set_header Upgrade $http_upgrade; 19 | proxy_set_header Connection $connection_upgrade; 20 | proxy_set_header Connection "Keep-Alive"; 21 | proxy_set_header Proxy-Connection "Keep-Alive"; 22 | proxy_redirect off; 23 | #proxy_connect_timeout 1800; 24 | #proxy_send_timeout 1800; 25 | #proxy_read_timeout 1800; 26 | #send_timeout 1800; 27 | } 28 | 29 | location /ready { 30 | proxy_pass http://loki; 31 | proxy_http_version 1.1; 32 | proxy_set_header Connection "Keep-Alive"; 33 | proxy_set_header Proxy-Connection "Keep-Alive"; 34 | proxy_redirect off; 35 | auth_basic "off"; 36 | } 37 | } 38 | 
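The `auth_basic_user_file /etc/nginx/passwords` referenced in loki.conf above has to exist on the proxy host. One common way to create it (an assumption, not part of this repo: `htpasswd` ships with apache2-utils on Debian/Ubuntu, and `lokiuser` is a placeholder username):

```bash
# create the basic-auth credentials file; prompts for a password
sudo htpasswd -c /etc/nginx/passwords lokiuser
```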
-------------------------------------------------------------------------------- /loki/nginx-reverse-proxy/nginx.conf: -------------------------------------------------------------------------------- 1 | # https://gist.github.com/ruanbekker/5f3bd5a2a4289f3c2218b55ea1549ecc 2 | # https://www.nginx.com/blog/websocket-nginx/ 3 | 4 | user www-data; 5 | worker_processes auto; 6 | pid /run/nginx.pid; 7 | include /etc/nginx/modules-enabled/*.conf; 8 | 9 | worker_rlimit_nofile 100000; 10 | 11 | events { 12 | #worker_connections 768; 13 | worker_connections 4000; 14 | use epoll; 15 | multi_accept on; 16 | } 17 | 18 | http { 19 | 20 | # Basics 21 | sendfile on; 22 | tcp_nopush on; 23 | tcp_nodelay on; 24 | keepalive_timeout 65; 25 | types_hash_max_size 2048; 26 | open_file_cache_valid 30s; 27 | open_file_cache_min_uses 2; 28 | open_file_cache_errors on; 29 | 30 | include /etc/nginx/mime.types; 31 | default_type application/octet-stream; 32 | 33 | ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE 34 | ssl_prefer_server_ciphers on; 35 | 36 | # websockets upgrade for loki tail 37 | map $http_upgrade $connection_upgrade { 38 | default upgrade; 39 | '' close; 40 | } 41 | 42 | # Logging Settings 43 | access_log off; 44 | access_log /var/log/nginx/access.log; 45 | error_log /var/log/nginx/error.log; 46 | 47 | # Gzip Settings 48 | gzip on; 49 | gzip_min_length 10240; 50 | gzip_comp_level 1; 51 | gzip_vary on; 52 | gzip_disable msie6; 53 | gzip_proxied expired no-cache no-store private auth; 54 | gzip_types 55 | # text/html is always compressed by HttpGzipModule 56 | text/css 57 | text/javascript 58 | text/xml 59 | text/plain 60 | text/x-component 61 | application/javascript 62 | application/x-javascript 63 | application/json 64 | application/xml 65 | application/rss+xml 66 | application/atom+xml 67 | font/truetype 68 | font/opentype 69 | application/vnd.ms-fontobject 70 | image/svg+xml; 71 | reset_timedout_connection on; 72 | client_body_timeout 10; 73 | send_timeout 2; 74 | keepalive_requests 100000; 75 | 76 | # Virtual Host Configs 77 | include /etc/nginx/conf.d/loki.conf; 78 | } 79 | -------------------------------------------------------------------------------- /loki/promtail/docker-example/configs/datasource.yml: -------------------------------------------------------------------------------- 1 | apiVersion: 1 2 | 3 | deleteDatasources: 4 | - name: prometheus 5 | - name: loki 6 | 7 | datasources: 8 | - name: prometheus 9 | type: prometheus 10 | access: proxy 11 | url: http://prometheus:9090 12 | isDefault: true 13 | editable: true 14 | - name: loki 15 | type: loki 16 | access: proxy 17 | orgId: 1 18 | url: http://loki:3100 19 | basicAuth: false 20 | isDefault: false 21 | version: 1 22 | editable: true 23 | -------------------------------------------------------------------------------- /loki/promtail/docker-example/configs/fluent-bit.conf: -------------------------------------------------------------------------------- 1 | [INPUT] 2 | Name forward 3 | Listen 0.0.0.0 4 | Port 24224 5 | [Output] 6 | Name grafana-loki 7 | Match * 8 | Url ${LOKI_URL} 9 | RemoveKeys source,container_id 10 | Labels {job="fluent-bit"} 11 | LabelKeys container_name 12 | BatchWait 1s 13 | BatchSize 1001024 14 | LineFormat json 15 | LogLevel info 16 | -------------------------------------------------------------------------------- /loki/promtail/docker-example/configs/host_alert_rules.yml: -------------------------------------------------------------------------------- 1 | # examples: 2 | # 
https://rtfm.co.ua/en/prometheus-alertmanagers-alerts-receivers-and-routing-based-on-severity-level-and-tags/ 3 | # https://awesome-prometheus-alerts.grep.to/rules.html 4 | groups: 5 | - name: host_alert_rules.yml 6 | rules: 7 | 8 | # Alert for any node that is unreachable for > 1 minute. 9 | - alert: node_down 10 | expr: up{job="node-exporter"} == 0 11 | for: 1m 12 | labels: 13 | severity: warning 14 | environment: prod 15 | alert_target: "{{ $labels.host }}" 16 | annotations: 17 | summary: "Job {{ $labels.job }} is down on {{ $labels.instance }}" 18 | description: "Failed to scrape {{ $labels.job }} on {{ $labels.instance }} for more than 1 minute. Node might be down." 19 | impact: "Any metrics from {{ $labels.job }} on {{ $labels.instance }} will be missing" 20 | action: "Check on {{ $labels.instance }} if {{ $labels.job }} is running" 21 | dashboard: http://grafana.localdns.xyz/d/pjhLJOzmk/infrastructure-hosts-stats 22 | runbook: http://wiki.localdns.xyz 23 | priority: P2 24 | 25 | # test alert on debug instance 26 | - alert: debug_instance_hard_disk_low 27 | expr: (node_filesystem_avail_bytes{mountpoint="/"} * 100) / node_filesystem_size_bytes{mountpoint="/"} < 20 28 | for: 1m 29 | labels: 30 | severity: warning 31 | alert_channel: notifications 32 | environment: prod 33 | team: devops 34 | aws_region: eu-west-1 35 | annotations: 36 | title: "[TEST] Disk Usage is Low in {{ $labels.instance }}" 37 | description: "Instance {{ $labels.instance }} has less than {{ humanize $value}}% available on mount {{ $labels.mountpoint }} " 38 | summary: "Low Disk Space Available" 39 | dashboard: http://grafana.localdns.xyz/d/pjhLJOzmk/infrastructure-hosts-stats 40 | runbook: http://wiki.localdns.xyz 41 | -------------------------------------------------------------------------------- /loki/promtail/docker-example/configs/loki-rules.yml: -------------------------------------------------------------------------------- 1 | groups: 2 | - name: example 3 | rules: 4 | - alert: HighThroughputLogStreams 5 | expr: sum by (container_name) (count_over_time({container_name=~".*"} |regexp`(?P<msg>.*)` [1h])>0) 6 | for: 20s 7 | labels: 8 | severity: "2" 9 | annotations: 10 | description: '{{ $labels.instance }} {{ $labels.msg }} memory.'
11 | -------------------------------------------------------------------------------- /loki/promtail/docker-example/configs/loki.yml: -------------------------------------------------------------------------------- 1 | auth_enabled: false 2 | 3 | server: 4 | http_listen_port: 3100 5 | 6 | ingester: 7 | lifecycler: 8 | address: 127.0.0.1 9 | ring: 10 | kvstore: 11 | store: inmemory 12 | replication_factor: 1 13 | final_sleep: 0s 14 | chunk_idle_period: 5m 15 | chunk_retain_period: 30s 16 | max_transfer_retries: 0 17 | 18 | schema_config: 19 | configs: 20 | - from: 2018-04-15 21 | store: boltdb 22 | object_store: filesystem 23 | schema: v11 24 | index: 25 | prefix: index_ 26 | period: 168h 27 | 28 | storage_config: 29 | boltdb: 30 | directory: /tmp/loki/index 31 | 32 | filesystem: 33 | directory: /tmp/loki/chunks 34 | 35 | limits_config: 36 | enforce_metric_name: false 37 | reject_old_samples: true 38 | reject_old_samples_max_age: 168h 39 | 40 | chunk_store_config: 41 | #max_look_back_period: 0s 42 | max_look_back_period: 168h 43 | 44 | table_manager: 45 | retention_deletes_enabled: true 46 | #retention_period: 0s 47 | retention_period: 168h 48 | 49 | ruler: 50 | storage: 51 | type: local 52 | local: 53 | directory: /etc/loki/rules 54 | rule_path: /tmp/loki/rules-temp 55 | alertmanager_url: http://alertmanager:9093 56 | ring: 57 | kvstore: 58 | store: inmemory 59 | enable_api: true 60 | -------------------------------------------------------------------------------- /loki/promtail/docker-example/configs/prometheus.yml: -------------------------------------------------------------------------------- 1 | global: 2 | scrape_interval: 15s 3 | evaluation_interval: 15s 4 | external_labels: 5 | cluster: 'cheatsheets-promtail' 6 | 7 | rule_files: 8 | - '/etc/prometheus/rules/host_alert_rules.yml' 9 | - '/etc/prometheus/rules/healtcheck_alert_rules.yml' 10 | 11 | alerting: 12 | alertmanagers: 13 | - scheme: http 14 | static_configs: 15 | - targets: ['alertmanager:9093'] 16 | 17 | scrape_configs: 18 | - job_name: 'prometheus' 19 | scrape_interval: 5s 20 | static_configs: 21 | - targets: ['localhost:9090'] 22 | 23 | - job_name: 'traefik' 24 | scrape_interval: 15s 25 | static_configs: 26 | - targets: ['traefik:8080'] 27 | -------------------------------------------------------------------------------- /loki/promtail/docker-example/configs/promtail_config.yml: -------------------------------------------------------------------------------- 1 | server: 2 | http_listen_port: 9080 3 | grpc_listen_port: 0 4 | 5 | positions: 6 | filename: /tmp/positions.yaml 7 | 8 | clients: 9 | - url: http://loki:3100/loki/api/v1/push 10 | 11 | scrape_configs: 12 | - job_name: syslog 13 | syslog: 14 | listen_address: 0.0.0.0:1514 15 | labels: 16 | job: "syslog" 17 | relabel_configs: 18 | - source_labels: ['__syslog_connection_ip_address'] 19 | target_label: 'instance' 20 | - source_labels: ['__syslog_message_app_name'] 21 | target_label: 'app' 22 | - source_labels: ['__syslog_message_severity'] 23 | target_label: 'severity' 24 | 25 | pipeline_stages: 26 | - match: 27 | selector: '{app="dockerd"}' 28 | stages: 29 | - regex: 30 | expression: "Health check for container (?P<containerid>\\w+) (?P<msglevel>\\S+:).*" 31 | #expression: "\\[shard (?P<containerid>\\d+)\\] (?P<msglevel>\\S+).*" 32 | - labels: 33 | containerid: 34 | msglevel: 35 | - match: 36 | selector: '{severity="warning"}' 37 | stages: 38 | - metrics: 39 | warning_total: 40 | type: Counter 41 | description: "total count of warnings" 42 | prefix: homepc_logs_ 43 | config: 44 | match_all: true 45 | action: inc
46 | -------------------------------------------------------------------------------- /loki/promtail/drop-loglines-promtail.yml: -------------------------------------------------------------------------------- 1 | server: 2 | http_listen_port: 9080 3 | grpc_listen_port: 0 4 | positions: 5 | filename: /var/lib/promtail/positions.yaml 6 | clients: 7 | - url: https://<user>:<password>@<loki-endpoint>/loki/api/v1/push 8 | 9 | scrape_configs: 10 | - job_name: nginx-info 11 | static_configs: 12 | - targets: 13 | - localhost 14 | labels: 15 | job: prod/nginx 16 | environment: production 17 | host: demo-app-prod 18 | level: info 19 | service_name: demo-app-prod 20 | __path__: /var/log/nginx/access.log 21 | 22 | pipeline_stages: 23 | # https://grafana.com/docs/loki/latest/clients/promtail/stages/drop/ 24 | - drop: 25 | expression: "(.*/health-check.*)|(.*/health.*)" 26 | - drop: 27 | older_than: 24h 28 | drop_counter_reason: "line_too_old" 29 | - drop: 30 | longer_than: 8kb 31 | drop_counter_reason: "line_too_long" 32 | # https://grafana.com/docs/loki/latest/clients/promtail/stages/match/ 33 | - match: 34 | selector: '{app="promtail"} |~ ".*noisy error.*"' 35 | action: drop 36 | drop_counter_reason: promtail_noisy_error 37 | - match: 38 | selector: '{app="loki", component="gateway"}' 39 | action: drop 40 | drop_counter_reason: loki_gateway_logs 41 | - match: 42 | selector: '{app="loki", component="querier"} |= "level=info"' 43 | action: drop 44 | drop_counter_reason: loki_querier_info_logs 45 | # https://github.com/cyriltovena/loki/blob/master/docs/clients/promtail/stages/match.md#example 46 | # drop healthcheck logs 47 | - match: 48 | pipeline_name: 'drop_elb_healthchecks' 49 | selector: '{job="prod/nginx"} |= "ELB-HealthChecker"' 50 | action: drop 51 | -------------------------------------------------------------------------------- /loki/promtail/ec2_instance_sd_discovery.yml: -------------------------------------------------------------------------------- 1 | # https://grafana.com/docs/loki/latest/clients/promtail/scraping/ 2 | # https://grafana.com/blog/2020/07/13/loki-tutorial-how-to-set-up-promtail-on-aws-ec2-to-find-and-analyze-your-logs/ 3 | server: 4 | http_listen_port: 3100 5 | grpc_listen_port: 0 6 | 7 | clients: 8 | - url: https://user:pass@loki.domain.com/loki/api/v1/push 9 | 10 | positions: 11 | filename: /opt/promtail/positions.yaml 12 | 13 | scrape_configs: 14 | - job_name: prod/ec2-logs 15 | ec2_sd_configs: 16 | - region: eu-west-1 17 | #access_key: REDACTED 18 | #secret_key: REDACTED 19 | #role_arn: arn:aws:iam::000000000000:role/PrometheusEC2DynamicScrapeRole 20 | relabel_configs: 21 | - source_labels: [__meta_ec2_architecture] 22 | regex: "(.*)" 23 | replacement: "prod/server-logs" 24 | target_label: job 25 | - source_labels: [__meta_ec2_tag_Name] 26 | target_label: name 27 | action: replace 28 | - source_labels: [__meta_ec2_instance_id] 29 | target_label: instance 30 | action: replace 31 | - source_labels: [__meta_ec2_availability_zone] 32 | target_label: zone 33 | action: replace 34 | - action: replace 35 | replacement: /var/log/**.log 36 | target_label: __path__ 37 | - source_labels: [__meta_ec2_private_dns_name] 38 | regex: "(.*)\\.(.*)\\.compute\\.internal" 39 | replacement: '${1}' 40 | target_label: __host__ 41 | 42 | - job_name: prod/journal 43 | journal: 44 | json: false 45 | max_age: 12h 46 | path: /var/log/journal 47 | labels: 48 | job: prod/systemd-journal 49 | name: my-ec2-instance 50 | relabel_configs: 51 | - source_labels: ['__journal__systemd_unit'] 52 | target_label: 'unit' 53 | - 
source_labels: ['__journal__hostname'] 54 | target_label: __host__ 55 | - source_labels: ['__journal_syslog_identifier'] 56 | target_label: syslog_identifier 57 | 58 | # prod/ec2-logs produces: 59 | # job="prod/server-logs" 60 | # instance="i-00000000000" 61 | # name="my-ec2-instance" 62 | # zone="eu-west-1a" 63 | # filename="/var/log/auth.log" 64 | 65 | # prod/journal produces: 66 | # job="prod/systemd-journal" 67 | # name="my-ec2-instance" 68 | # syslog_identifier="promtail" 69 | # unit="promtail.service" 70 | -------------------------------------------------------------------------------- /loki/promtail/java_example-promtail-config.yml: -------------------------------------------------------------------------------- 1 | # Example: promtail to collect syslog and java logs from linux os 2 | # Application called myapp running in production 3 | # 4 | # job: prod/myapp 5 | # environment: production 6 | # host: myapp-prod.domain (or hostname) 7 | # service_name: myapp-prod 8 | 9 | server: 10 | http_listen_port: 9080 11 | grpc_listen_port: 0 12 | positions: 13 | filename: /var/lib/promtail/positions.yaml 14 | clients: 15 | - url: https://<user>:<password>@<loki-endpoint>/loki/api/v1/push 16 | 17 | scrape_configs: 18 | - job_name: syslog 19 | pipeline_stages: 20 | static_configs: 21 | - targets: 22 | - localhost 23 | labels: 24 | job: prod/syslog 25 | host: myapp-prod.domain 26 | environment: production 27 | __path__: /var/log/syslog 28 | 29 | - job_name: myapp 30 | static_configs: 31 | - targets: 32 | - localhost 33 | labels: 34 | job: prod/myapp 35 | environment: production 36 | host: myapp-prod.domain 37 | service_name: myapp-prod 38 | __path__: /var/log/myapp/myapp-logs_*.log 39 | 40 | # remaps INFO to info for specified selector 41 | pipeline_stages: 42 | # https://github.com/cyriltovena/loki/blob/master/docs/clients/promtail/stages/match.md#example 43 | - match: 44 | selector: '{service_name="myapp-prod",environment="production"}' 45 | # selector: '{service_name="myapp-prod",environment="production"} |~ "GET|POST"' <- if you only want specific logs to be matched by the pipeline stage 46 | stages: 47 | - regex: 48 | expression: "(?P<level>(INFO|WARNING|ERROR))(.*)" 49 | - template: 50 | source: level 51 | template: '{{ ToLower .Value }}' 52 | - labels: 53 | level: 54 | -------------------------------------------------------------------------------- /loki/promtail/nginx_example-promtail-config.yml: -------------------------------------------------------------------------------- 1 | # Example: promtail to collect journal, syslog and nginx logs 2 | # Application called demo-app running in production 3 | # 4 | # job: prod/nginx 5 | # environment: production 6 | # host: demo-app-prod (or hostname) 7 | # service_name: demo-app-prod 8 | 9 | server: 10 | http_listen_port: 9080 11 | grpc_listen_port: 0 12 | positions: 13 | filename: /var/lib/promtail/positions.yaml 14 | clients: 15 | - url: https://<user>:<password>@<loki-endpoint>/loki/api/v1/push 16 | 17 | scrape_configs: 18 | - job_name: journal 19 | journal: 20 | max_age: 1h 21 | path: /var/log/journal 22 | labels: 23 | job: prod/journal 24 | environment: production 25 | host: demo-app-prod 26 | relabel_configs: 27 | - source_labels: ['__journal__systemd_unit'] 28 | target_label: 'unit' 29 | 30 | - job_name: syslog 31 | pipeline_stages: 32 | static_configs: 33 | - targets: 34 | - localhost 35 | labels: 36 | job: prod/syslog 37 | host: demo-app-prod 38 | environment: production 39 | __path__: /var/log/syslog 40 | 41 | - job_name: nginx-info 42 | static_configs: 43 | - targets: 44 | - localhost 45 | labels: 46 |
job: prod/nginx 47 | environment: production 48 | host: demo-app-prod 49 | level: info 50 | service_name: demo-app-prod 51 | __path__: /var/log/nginx/access.log 52 | 53 | - job_name: nginx-error 54 | static_configs: 55 | - targets: 56 | - localhost 57 | labels: 58 | job: prod/nginx 59 | environment: production 60 | host: demo-app-prod 61 | level: error 62 | service_name: demo-app-prod 63 | __path__: /var/log/nginx/error.log 64 | -------------------------------------------------------------------------------- /loki/promtail/relabel-convert-to-capitals-promtail.yml: -------------------------------------------------------------------------------- 1 | server: 2 | http_listen_port: 9080 3 | grpc_listen_port: 0 4 | positions: 5 | filename: /var/lib/promtail/positions.yaml 6 | clients: 7 | - url: https://<user>:<password>@<loki-endpoint>/loki/api/v1/push 8 | 9 | scrape_configs: 10 | - job_name: nginx-info 11 | static_configs: 12 | - targets: 13 | - localhost 14 | labels: 15 | job: prod/nginx 16 | environment: production 17 | host: demo-app-prod 18 | level: info 19 | service_name: demo-app-prod 20 | __path__: /var/log/nginx/access.log 21 | 22 | pipeline_stages: 23 | # convert capital log levels to lower case 24 | - regex: 25 | expression: "(?P<level>(INFO|WARNING|ERROR))(.*)" 26 | # set captured values to lowercase 27 | - template: 28 | source: level 29 | template: '{{ ToLower .Value }}' 30 | # set the renamed values to level label 31 | - labels: 32 | level: 33 | -------------------------------------------------------------------------------- /loki/promtail/relabel-stdout-to-info-promtail.yml: -------------------------------------------------------------------------------- 1 | server: 2 | http_listen_port: 9080 3 | grpc_listen_port: 0 4 | positions: 5 | filename: /var/lib/promtail/positions.yaml 6 | clients: 7 | - url: https://<user>:<password>@<loki-endpoint>/loki/api/v1/push 8 | 9 | scrape_configs: 10 | - job_name: nginx-info 11 | static_configs: 12 | - targets: 13 | - localhost 14 | labels: 15 | job: prod/nginx 16 | environment: production 17 | host: demo-app-prod 18 | level: info 19 | service_name: demo-app-prod 20 | __path__: /var/log/nginx/access.log 21 | 22 | pipeline_stages: 23 | # capture stdout|stderr to level 24 | - regex: 25 | expression: '(?P<level>(stdout|stderr))' 26 | # rename stdout to info, stderr to error 27 | - template: 28 | source: level 29 | template: '{{ if eq .Value "stdout" }}{{ Replace .Value "stdout" "info" -1 }}{{ else if eq .Value "stderr" }}{{ Replace .Value "stderr" "error" -1 }}{{ else if eq .Value "errorstderr" }}{{ Replace .Value "errorstderr" "error" -1 }}{{ .Value }}{{ end }}' 30 | # set the renamed values to level label 31 | - labels: 32 | level: 33 | 34 | -------------------------------------------------------------------------------- /makefiles/README.md: -------------------------------------------------------------------------------- 1 | # makefiles cheatsheet 2 | 3 | ## External Examples 4 | - [@nicor88 - Terraform](https://github.com/nicor88/aws-ecs-airflow/blob/master/Makefile) 5 | - [yankeexe.medium.com](https://yankeexe.medium.com/streamline-your-projects-using-makefile-744ebbc69cc1) 6 | -------------------------------------------------------------------------------- /makefiles/docker-compose/Makefile: -------------------------------------------------------------------------------- 1 | # Thanks: https://gist.github.com/mpneuried/0594963ad38e68917ef189b4e6a269db 2 | .PHONY: help 3 | 4 | HAS_DOCKER_COMPOSE := $(shell command -v docker-compose 2> /dev/null) 5 | HAS_DOCKER_COMPOSE_V2 := $(shell command -v docker 2> /dev/null) 6 |
7 | ifeq ($(strip $(HAS_DOCKER_COMPOSE)),) 8 | ifeq ($(strip $(HAS_DOCKER_COMPOSE_V2)),) 9 | $(error No compatible command found) 10 | else 11 | DOCKER_COMPOSE_BINARY := docker compose 12 | endif 13 | else 14 | DOCKER_COMPOSE_BINARY := docker-compose 15 | endif 16 | 17 | help: ## This help. 18 | @awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST) 19 | 20 | .DEFAULT_GOAL := help 21 | 22 | # DOCKER TASKS 23 | up: ## Runs the containers in detached mode 24 | @$(DOCKER_COMPOSE_BINARY) up -d --build 25 | 26 | clean: ## Stops and removes all containers 27 | @$(DOCKER_COMPOSE_BINARY) down 28 | 29 | logs: ## View the logs from the containers 30 | @$(DOCKER_COMPOSE_BINARY) logs -f 31 | 32 | open: ## Opens the app in a browser tab 33 | open http://localhost:3000/ 34 |
-------------------------------------------------------------------------------- /makefiles/with-help-section/Makefile: -------------------------------------------------------------------------------- 1 | # Thanks: https://gist.github.com/mpneuried/0594963ad38e68917ef189b4e6a269db 2 | .PHONY: help deps up all stop logs open-tabs 3 | 4 | help: ## This help. 5 | @awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST) 6 | 7 | .DEFAULT_GOAL := help 8 | 9 | # DOCKER TASKS 10 | deps: ## Pulls and builds the images 11 | docker-compose pull 12 | docker-compose build 13 | 14 | up: ## Runs the containers in detached mode 15 | docker-compose up -d --build 16 | 17 | all: deps up ## Pulls, builds and runs the containers in detached mode 18 | 19 | stop: ## Stops and removes all containers 20 | docker-compose down 21 | 22 | logs: ## View the logs from the containers 23 | docker-compose logs -f 24 | 25 | open-tabs: ## Opens the app in a browser tab 26 | open http://localhost:8080/ 27 |
-------------------------------------------------------------------------------- /mongodb/python/code-examples/auth_make_connection.py: -------------------------------------------------------------------------------- 1 | # https://pymongo.readthedocs.io/en/stable/examples/authentication.html?highlight=authentication 2 | # example server: 3 | # - https://github.com/ruanbekker/cheatsheets/blob/master/mongodb/python/docker/docker-compose.yml 4 | 5 | from pymongo import MongoClient 6 | client = MongoClient("192.168.0.8:27017", username="root", password="pass", authSource="admin", authMechanism="SCRAM-SHA-256") 7 | client.list_database_names() 8 | 9 | # uri = "mongodb://root:pass@192.168.0.8:27017/?authSource=admin&authMechanism=SCRAM-SHA-256" 10 | # client = MongoClient(uri) 11 | # client.list_database_names() 12 |
-------------------------------------------------------------------------------- /mongodb/python/docker-compose.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # https://hub.docker.com/_/mongo 3 | version: '3.1' 4 | 5 | services: 6 | mongo: 7 | image: mongo 8 | restart: always 9 | environment: 10 | MONGO_INITDB_ROOT_USERNAME: root 11 | MONGO_INITDB_ROOT_PASSWORD: example 12 | 13 | mongo-express: 14 | image: mongo-express 15 | restart: always 16 | ports: 17 | - 8081:8081 18 | environment: 19 | ME_CONFIG_MONGODB_ADMINUSERNAME: root 20 | ME_CONFIG_MONGODB_ADMINPASSWORD: example 21 | ME_CONFIG_MONGODB_URL: mongodb://root:example@mongo:27017/ 22 |
-------------------------------------------------------------------------------- /mongodb/python/docker/docker-compose.yml: -------------------------------------------------------------------------------- 1
| version: "3.7" 2 | 3 | services: 4 | mongodb-eph: 5 | image: mongo:4.2 6 | container_name: mongodb-eph 7 | restart: unless-stopped 8 | command: ["--bind_ip_all", "--port", "27017"] 9 | environment: 10 | - MONGO_INITDB_ROOT_USERNAME=root 11 | - MONGO_INITDB_ROOT_PASSWORD=pass 12 | volumes: 13 | - ./data/db:/data/db 14 | - ./data/backups:/dump 15 | ports: 16 | - 27017:27017 17 | networks: 18 | - public 19 | healthcheck: 20 | test: echo 'db.runCommand("ping").ok' | mongo -u root -p pass localhost:27017/admin --quiet 21 | interval: 15s 22 | timeout: 10s 23 | start_period: 30s 24 | retries: 3 25 | logging: 26 | driver: "json-file" 27 | options: 28 | max-size: "1m" 29 | 30 | networks: 31 | public: 32 | name: public 33 |
-------------------------------------------------------------------------------- /mysqldump/README.md: -------------------------------------------------------------------------------- 1 | # mysqldump cheatsheet 2 | 3 | ## Backup One Database 4 | 5 | ``` 6 | mysqldump -h 127.0.0.1 -u admin -padmin --triggers --routines --events mydb > mydb_$(date +%F).sql 7 | ``` 8 | 9 | If you get an error with MySQL 8 about the unknown `COLUMN_STATISTICS` table, disable column statistics: 10 | 11 | ``` 12 | mysqldump --column-statistics=0 -h 127.0.0.1 --user admin -prootpassword mydb > mydb_$(date +%F).sql 13 | ``` 14 | 15 | ## Backup All Databases 16 | 17 | ``` 18 | mysqldump -h 127.0.0.1 -u admin -padmin --triggers --routines --events --all-databases > alldbs_$(date +%F).sql 19 | ``` 20 | 21 |
-------------------------------------------------------------------------------- /neo4j-cypher/README.md: -------------------------------------------------------------------------------- 1 | # Neo4j Cypher Cheatsheet 2 | - https://gist.github.com/DaniSancas/1d5265fc159a95ff457b940fc5046887 3 |
-------------------------------------------------------------------------------- /netstat/README.md: -------------------------------------------------------------------------------- 1 | # netstat cheatsheet 2 | 3 | ## Listening Ports 4 | 5 | Command: 6 | 7 | ```bash 8 | netstat -tulpn 9 | ``` 10 | 11 |
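The flags: `-t` TCP sockets, `-u` UDP sockets, `-l` listening sockets only, `-p` the owning PID/program, `-n` numeric addresses and ports instead of resolved names.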
12 | Response: 13 | 14 | ```bash 15 | Active Internet connections (only servers) 16 | Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name 17 | tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 420/sshd 18 | tcp6 0 0 :::9100 :::* LISTEN 402/node_exporter 19 | tcp6 0 0 :::35955 :::* LISTEN 405/promtail 20 | tcp6 0 0 :::22 :::* LISTEN 420/sshd 21 | tcp6 0 0 :::9080 :::* LISTEN 405/promtail 22 | udp 0 0 0.0.0.0:36345 0.0.0.0:* 279/avahi-daemon: r 23 | udp 0 0 0.0.0.0:68 0.0.0.0:* 401/dhcpcd 24 | udp 0 0 0.0.0.0:5353 0.0.0.0:* 279/avahi-daemon: r 25 | udp6 0 0 :::59354 :::* 279/avahi-daemon: r 26 | udp6 0 0 :::5353 :::* 279/avahi-daemon: r 27 | ``` 28 | 29 |
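Two related lookups with the same tool (standard net-tools flags):

```bash
# kernel routing table, numeric
netstat -rn

# per-protocol statistics (TCP retransmits, UDP errors, etc.)
netstat -s
```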
30 | 31 |
-------------------------------------------------------------------------------- /nginx/README.md: -------------------------------------------------------------------------------- 1 | # nginx cheatsheet 2 | 3 | ## External 4 | 5 | - https://rtfm.co.ua/en/http-redirects-post-and-get-requests-and-lost-data/ 6 | - [Nginx Location Priority](https://stackoverflow.com/questions/5238377/nginx-location-priority) 7 | 8 | ## Configs 9 | 10 | Bare bones website: 11 | 12 | ``` 13 | server { 14 | listen 80; 15 | server_name localhost; 16 | 17 | location / { 18 | root /usr/share/nginx/html; 19 | index index.html index.htm; 20 | } 21 | 22 | error_page 500 502 503 504 /50x.html; 23 | location = /50x.html { 24 | root /usr/share/nginx/html; 25 | } 26 | 27 | } 28 | ``` 29 | 30 | Redirect a non-existing webpage to home: 31 | 32 | ``` 33 | # define the error page 34 | error_page 404 = @notfound; 35 | 36 | # 301 redirect to / for defined error page 37 | location @notfound { 38 | return 301 /; 39 | } 40 | ``` 41 | 42 | Redirect an old request URL to a new path on disk: 43 | 44 | ``` 45 | # redirect old urls 46 | location /content/images/2019/10/logo.png { 47 | rewrite ^/content/images/2019/10/logo.png /assets/img/logo.png ; 48 | } 49 | ``` 50 |
-------------------------------------------------------------------------------- /openssl/README.md: -------------------------------------------------------------------------------- 1 | # openssl cheatsheet 2 | 3 | ## Random Strings 4 | 5 | Generates a random string of bytes and outputs in hexadecimal format: 6 | 7 | ```bash 8 | openssl rand -hex 16 9 | ``` 10 | 11 | Generates a random string of bytes and outputs in base64 format: 12 | 13 | ```bash 14 | openssl rand -base64 24 15 | ``` 16 | 17 | ## Generate RSA Private Keys 18 | 19 | Generate a new RSA private key of 2048 bits, convert the RSA private key to PKCS#8 format, encrypt using the DES3 algorithm and store the private key in PEM format: 20 | 21 | ```bash 22 | openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 des3 -inform PEM -out my_rsa_key.p8 23 | ``` 24 | 25 | Create a public key from the private key: 26 | 27 | ```bash 28 | openssl rsa -in my_rsa_key.p8 -pubout -out my_rsa_key.pub 29 | ``` 30 | 31 |
-------------------------------------------------------------------------------- /php-composer/7.19.3.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM php:7.3.19-cli 2 | 3 | RUN apt-get update \ 4 | && apt-get install \ 5 | default-mysql-client \ 6 | default-libmysqlclient-dev \ 7 | curl git libfreetype6-dev \ 8 | libjpeg62-turbo-dev \ 9 | libmcrypt-dev libpng-dev \ 10 | libzip-dev -y 11 | 12 | RUN docker-php-ext-install iconv \ 13 | && pecl install mcrypt-1.0.3 \ 14 | && docker-php-ext-enable mcrypt \ 15 | && docker-php-ext-configure gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ \ 16 | && docker-php-ext-install gd \ 17 | && docker-php-ext-install zip \ 18 | && docker-php-ext-install mysqli \ 19 | && docker-php-ext-install opcache \ 20 | && docker-php-ext-install mbstring \ 21 | && docker-php-ext-install bcmath \ 22 | && docker-php-ext-install pcntl 23 | 24 | #RUN curl -sS https://getcomposer.org/installer | php && mv composer.phar /usr/local/bin/composer 25 | RUN curl -O "https://getcomposer.org/download/1.10.7/composer.phar" \ 26 | && chmod a+x composer.phar \ 27 | && mv composer.phar /usr/bin/composer 28 | RUN composer global require phpunit/phpunit "^7.5" 29 | 30 | ENV PATH /root/.composer/vendor/bin:$PATH 31 | RUN ln -s
/root/.composer/vendor/bin/phpunit /usr/bin/phpunit 32 |
-------------------------------------------------------------------------------- /php/hostname.php: -------------------------------------------------------------------------------- 1 | <?php 2 | $hostname = gethostname(); 3 | echo $hostname; 4 | ?> 5 |
-------------------------------------------------------------------------------- /postgresql/README.md: -------------------------------------------------------------------------------- 1 | # postgresql cheatsheet 2 | 3 | ## External Resources 4 | 5 | - https://www.codementor.io/engineerapart/getting-started-with-postgresql-on-mac-osx-are8jcopb 6 | - https://www.tutorialspoint.com/postgresql/postgresql_schema.htm 7 | 8 | ## Setting Up 9 | 10 | Install: 11 | 12 | ``` 13 | brew install postgresql 14 | ``` 15 | 16 | Start: 17 | 18 | ``` 19 | brew services start postgresql 20 | ``` 21 | 22 | Access Postgres: 23 | 24 | ``` 25 | psql postgres 26 | ``` 27 | 28 | ## Cheatsheet 29 | 30 | Create database: 31 | 32 | ``` 33 | CREATE database foo; 34 | ``` 35 | 36 | Create role: 37 | 38 | ``` 39 | CREATE ROLE user1 WITH LOGIN PASSWORD 'secret'; 40 | ``` 41 | 42 | List roles: 43 | 44 | ``` 45 | \du 46 | ``` 47 | 48 | Allow user to create databases: 49 | 50 | ``` 51 | ALTER ROLE user1 CREATEDB; 52 | ``` 53 | 54 | Exit with `\q` and log on as `user1`: 55 | 56 | ``` 57 | psql postgres -U user1 58 | ``` 59 | 60 | Grant all privileges on the database: 61 | 62 | ``` 63 | GRANT ALL PRIVILEGES ON DATABASE "foo" to user1; 64 | ``` 65 | 66 | Create user: 67 | 68 | ``` 69 | CREATE USER testuser with encrypted password 'sekretpw'; 70 | ``` 71 | 72 | Grant privileges for user: 73 | 74 | ``` 75 | GRANT ALL PRIVILEGES ON database foo TO testuser; 76 | ``` 77 | 78 | Create an auto-incrementing column: 79 | 80 | ``` 81 | CREATE TABLE fruits(id SERIAL PRIMARY KEY, name VARCHAR NOT NULL); 82 | INSERT INTO fruits(id,name) VALUES(DEFAULT,'Apple'); 83 | ``` 84 | 85 | List databases: 86 | 87 | ``` 88 | \l 89 | ``` 90 | 91 | Switch to database: 92 | 93 | ``` 94 | \c dbname 95 | ``` 96 | 97 | List tables: 98 | 99 | ``` 100 | \dt 101 | \dt+ 102 | ``` 103 | 104 | Backup Database: 105 | 106 | ``` 107 | pg_dump -h 127.0.0.1 -U postgres -p 5432 dbname > dbname.bak 108 | ``` 109 | 110 | Restore Database: 111 | 112 | ``` 113 | psql -h dbname.x.eu-west-1.rds.amazonaws.com -U postgres dbname < dbname.bak 114 | ``` 115 | 116 | Resources: 117 | 118 | - https://www.digitalocean.com/community/tutorials/how-to-use-roles-and-manage-grant-permissions-in-postgresql-on-a-vps--2 119 | - https://www.postgresqltutorial.com/postgresql-cheat-sheet/ 120 | - https://www.postgresqltutorial.com/postgresql-serial/ 121 | - https://blog.ruanbekker.com/blog/2019/03/06/create-users-databases-and-granting-access-for-users-on-postgresql/ 122 |
-------------------------------------------------------------------------------- /powershell/README.md: -------------------------------------------------------------------------------- 1 | # powershell cheatsheet 2 | 3 | ## Create 4 | 5 | Create a file named `config`: 6 | 7 | ```powershell 8 | New-Item config -type file 9 | ``` 10 | 11 | ## Delete 12 | 13 | Delete everything inside the folder and subfolders (equivalent to `rm -rf`): 14 | 15 | ```powershell 16 | Remove-Item .\helm-charts\* -Recurse -Force 17 | ``` 18 | 19 | ## Environment Variables 20 | 21 | Show current path environment variable: 22 | 23 | ```powershell 24 | $env:Path 25 | ``` 26 | 27 | Append to path environment variable: 28 | 29 | ```powershell 30 | $env:Path += ';C:\Users\<username>\scoop\shims' 31 | ``` 32 | 33 | Making
[changes permanent](https://stackoverflow.com/a/714918): 34 | 35 | ```powershell 36 | code $PROFILE 37 | ``` 38 |
-------------------------------------------------------------------------------- /prometheus/alert-examples/README.md: -------------------------------------------------------------------------------- 1 | # alerts 2 | 3 | ### Http Requests 4 | 5 | The average rate per second of 5xx errors during that minute, aggregated by service, pod, and uri: 6 | 7 | ```promql 8 | sum(rate(http_server_requests_seconds_count{status=~"5[0-9][0-9]"}[1m])) by (service, pod, uri) 9 | ``` 10 | 11 | Total number of 5xx errors in the last minute for each group: 12 | 13 | ```promql 14 | sum(rate(http_server_requests_seconds_count{status=~"5[0-9][0-9]"}[1m])) by (service, pod, uri) * 60 15 | ``` 16 | 17 | Request latencies - How long, on average, each (2xx, 4xx, 5xx) response took: 18 | 19 | ```promql 20 | sum by (service, uri) ( 21 | rate(http_server_requests_seconds_sum{status=~"[2-5][0-9][0-9]", service=~".*"}[1m]) 22 | / 23 | rate(http_server_requests_seconds_count{status=~"[2-5][0-9][0-9]", service=~".*"}[1m]) 24 | ) 25 | ``` 26 | 27 | Request latencies - the same average response time scaled by 60; note that for the true total time spent serving requests per minute you would multiply `rate(http_server_requests_seconds_sum[1m])` by 60 instead: 28 | 29 | ```promql 30 | sum by (service, uri) ( 31 | rate(http_server_requests_seconds_sum{status=~"[2-5][0-9][0-9]", service=~".*"}[1m]) 32 | / 33 | rate(http_server_requests_seconds_count{status=~"[2-5][0-9][0-9]", service=~".*"}[1m]) 34 | ) * 60 35 | ``` 36 |
-------------------------------------------------------------------------------- /prometheus/metric_examples/NODE_METRICS.md: -------------------------------------------------------------------------------- 1 | # Node Metrics 2 | 3 | Examples for Prometheus focused on Node Level Metrics.
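Before relying on the queries below, it helps to confirm the node is actually being scraped — a minimal check, assuming the same `node-exporter` job and instance labels used in the examples throughout this doc:

```
up{job="node-exporter", instance="my-instance-name"} == 1
```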
4 | 5 | ## System Load 6 | 7 | System Load as a Percentage Value: 8 | 9 | ``` 10 | avg(node_load1{instance="my-instance-name",job="node-exporter"}) / count(count(node_cpu_seconds_total{instance="my-instance-name",job="node-exporter"}) by (cpu)) * 100 11 | ``` 12 | 13 | ## CPU 14 | 15 | CPU Utilization: 16 | 17 | ``` 18 | 100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle", instance="my-instance-name"}[5m])) * 100) 19 | ``` 20 | 21 | ## Memory 22 | 23 | Memory Available in %: 24 | 25 | ``` 26 | node_memory_MemAvailable_bytes{instance="my-instance-name"} / node_memory_MemTotal_bytes{instance="my-instance-name"} * 100 27 | ``` 28 | 29 | Memory Pressure: 30 | 31 | ``` 32 | rate(node_vmstat_pgmajfault{instance="my-instance-name"}[1m]) 33 | ``` 34 | 35 | ## Disk 36 | 37 | Disk Space Available in bytes: 38 | 39 | ``` 40 | node_filesystem_avail_bytes{instance=~"my-ec2-instance",job=~"node-exporter",mountpoint="/"} 41 | ``` 42 | 43 | Disk Space Available in Percentage: 44 | 45 | ``` 46 | (node_filesystem_avail_bytes{mountpoint="/", instance=~"my-ec2-instance"} * 100) / node_filesystem_size_bytes{mountpoint="/", instance=~"my-ec2-instance"} 47 | ``` 48 | 49 | Disk Latencies: 50 | 51 | ``` 52 | rate(node_disk_read_time_seconds_total{instance="my-instance-name"}[1m]) / rate(node_disk_reads_completed_total{instance="my-instance-name"}[1m]) 53 | rate(node_disk_write_time_seconds_total{instance="my-instance-name"}[1m]) / rate(node_disk_writes_completed_total{instance="my-instance-name"}[1m]) 54 | ``` 55 | 56 | ## Network 57 | 58 | Network Throughput: 59 | 60 | ``` 61 | irate(node_network_receive_bytes_total{instance="my-instance-name"}[5m]) * 8 62 | irate(node_network_transmit_bytes_total{instance="my-instance-name"}[5m]) * 8 63 | ``` 64 | 65 | ## Uptime 66 | 67 | Node Uptime: 68 | 69 | ``` 70 | node_time_seconds{instance="my-ec2-instance",job="node-exporter"} - node_boot_time_seconds{instance="my-ec2-instance",job="node-exporter"} 71 | ``` 72 |
-------------------------------------------------------------------------------- /pushgateway/README.md: -------------------------------------------------------------------------------- 1 | # PushGateway 2 | 3 | ## Start Pushgateway 4 | 5 | Run using docker: 6 | 7 | ``` 8 | $ docker run -it -p 9091:9091 prom/pushgateway 9 | ``` 10 | 11 | ## Pushing to Pushgateway 12 | 13 | Using curl: 14 | 15 | ``` 16 | $ duration=0.01 17 | $ echo "job_duration_seconds ${duration}" | curl --data-binary @- http://localhost:9091/metrics/job/mysqldump/instance/db01 18 | ``` 19 | 20 | Using curl with multiline: 21 | 22 | ``` 23 | $ cat <<EOF | curl --data-binary @- http://localhost:9091/metrics/job/mysqldump/instance/db01
# TYPE job_duration_seconds gauge
job_duration_seconds ${duration}
EOF
```
--------------------------------------------------------------------------------
```python
>>> import sqlalchemy 10 | >>> from sqlalchemy import create_engine 11 | >>> from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey 12 | >>> from sqlalchemy import inspect 13 | ``` 14 | 15 | ```python 16 | >>> metadata = MetaData() 17 | >>> books = Table('book', metadata, Column('id', Integer, primary_key=True), Column('title', String), Column('primary_author', String)) 18 | >>> engine = create_engine('sqlite:///books.db') 19 | >>> metadata.create_all(engine) 20 | ``` 21 | 22 | ```python 23 | >>> inspector = inspect(engine) 24 | >>> inspector.get_columns('book') 25 | [ 26 | {'name': 'id', 'type': INTEGER(), 'nullable': False, 'default': None, 'autoincrement': 'auto', 'primary_key': 1}, 27 | {'name': 'title', 'type': VARCHAR(), 'nullable': True, 'default': None, 'autoincrement': 'auto', 'primary_key': 0}, 28 | {'name': 'primary_author', 'type': VARCHAR(), 'nullable': True, 'default': None, 'autoincrement': 'auto',
'primary_key': 0} 29 | ] 30 | ``` 31 | 32 | ```python 33 | >>> from sqlalchemy.sql import text 34 | >>> with engine.connect() as con: 35 | ... data = ( { "id": 1, "title": "Crushing It", "primary_author": "Gary Vaynerchuck" },{ "id": 2, "title": "Start with Why", "primary_author": "Simon Sinek" }) 36 | ... statement = text("""INSERT INTO book(id, title, primary_author) VALUES(:id, :title, :primary_author)""") 37 | ... for line in data: 38 | ... con.execute(statement, **line) 39 | ... 40 | <sqlalchemy.engine.result.ResultProxy object at 0x...> 41 | <sqlalchemy.engine.result.ResultProxy object at 0x...> 42 | ``` 43 | 44 | ```python 45 | >>> with engine.connect() as con: 46 | ... rs = con.execute('SELECT * FROM book') 47 | ... for row in rs: 48 | ... print(row) 49 | ... 50 | (1, 'Crushing It', 'Gary Vaynerchuck') 51 | (2, 'Start with Why', 'Simon Sinek') 52 | ``` 53 | 54 | ## External Resources 55 | 56 | - https://auth0.com/blog/sqlalchemy-orm-tutorial-for-python-developers/ 57 | 58 |
-------------------------------------------------------------------------------- /redis/README.md: -------------------------------------------------------------------------------- 1 | # redis cheatsheet 2 | 3 |
-------------------------------------------------------------------------------- /redis/redis-cli/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | 3 | services: 4 | redis-server: 5 | image: redis:7.0 6 | container_name: redis-server 7 | environment: 8 | - DOCKERHUB_DOCS=https://hub.docker.com/_/redis 9 | networks: 10 | - app 11 | ports: 12 | - 6379:6379 13 | logging: 14 | driver: "json-file" 15 | options: 16 | max-size: "1m" 17 | 18 | networks: 19 | app: 20 | name: app
-------------------------------------------------------------------------------- /redis/redis-python/README.md: -------------------------------------------------------------------------------- 1 | # redis-python-cheatsheet
-------------------------------------------------------------------------------- /redis/redis-python/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | version: "3.7" 2 | 3 | services: 4 | redis-server: 5 | image: redis:7.0 6 | container_name: redis-server 7 | environment: 8 | - DOCKERHUB_DOCS=https://hub.docker.com/_/redis 9 | networks: 10 | - app 11 | ports: 12 | - 6379:6379 13 | logging: 14 | driver: "json-file" 15 | options: 16 | max-size: "1m" 17 | 18 | networks: 19 | app: 20 | name: app
-------------------------------------------------------------------------------- /rsync/README.md: -------------------------------------------------------------------------------- 1 | # rsync cheatsheet 2 | 3 | Sync folder in-progress to final: 4 | 5 | ``` 6 | rsync -avz ./in-progress /final 7 | ``` 8 | 9 | Sync the content from in-progress to final: 10 | 11 | ``` 12 | rsync -avz ./in-progress/ /final 13 | ``` 14 | 15 | Sync the content from in-progress to final and exclude the contents from in-progress/junk/: 16 | 17 | ``` 18 | rsync -avz --exclude=junk/* ./in-progress/ /final 19 | ``` 20 | 21 | Rsync over SSH: 22 | 23 | ``` 24 | rsync -avz /var/www/ user@1.2.3.4:/var/www/ 25 | ``` 26 | 27 | More examples: 28 | - [resource](https://devhints.io/rsync) 29 |
-------------------------------------------------------------------------------- /ruby/webrick/basic-api.rb: -------------------------------------------------------------------------------- 1 | require 'webrick' 2 | require 'json' 3 | 4 | class MyApi < WEBrick::HTTPServlet::AbstractServlet 5 | def do_GET(request, response) 6 | # Handle GET request 7 |
response.status = 200 8 | response['Content-Type'] = 'application/json' 9 | response.body = '{"message": "Hello, World!"}' 10 | end 11 | 12 | def do_POST(request, response) 13 | # Handle POST request 14 | request_body = JSON.parse(request.body) 15 | response.status = 200 16 | response['Content-Type'] = 'application/json' 17 | response.body = '{"message": "Received POST request with data: ' + request_body.to_s + '"}' 18 | end 19 | 20 | def do_PUT(request, response) 21 | # Handle PUT request 22 | request_body = JSON.parse(request.body) 23 | response.status = 200 24 | response['Content-Type'] = 'application/json' 25 | response.body = '{"message": "Received PUT request with data: ' + request_body.to_s + '"}' 26 | end 27 | 28 | def do_DELETE(request, response) 29 | # Handle DELETE request 30 | response.status = 200 31 | response['Content-Type'] = 'application/json' 32 | response.body = '{"message": "Received DELETE request"}' 33 | end 34 | 35 | end 36 | 37 | server = WEBrick::HTTPServer.new(Port: 8080) 38 | server.mount '/api', MyApi 39 | trap('INT') { server.shutdown } 40 | server.start 41 |
-------------------------------------------------------------------------------- /ruby/webrick/basic-web.rb: -------------------------------------------------------------------------------- 1 | require 'webrick' 2 | 3 | server = WEBrick::HTTPServer.new( 4 | :Port => 80, 5 | :SSLEnable => false, 6 | :DocumentRoot => '/var/www/app', 7 | :ServerAlias => 'localhost' 8 | ) 9 | 10 | server.mount_proc '/' do |request, response| 11 | response.status = 200 12 | response.content_type = 'text/html; charset=utf-8' 13 | response.body = 'Hello, World!' 14 | end 15 | 16 | trap 'INT' do server.shutdown end 17 | 18 | server.start 19 |
-------------------------------------------------------------------------------- /ruby/webrick/read-from-html.rb: -------------------------------------------------------------------------------- 1 | require 'webrick' 2 | 3 | server = WEBrick::HTTPServer.new( 4 | :Port => 80, 5 | :SSLEnable => false, 6 | :DocumentRoot => '/var/www/app' 7 | ) 8 | 9 | server.mount_proc('/') do |request, response| 10 | response.content_type = 'text/html; charset=utf-8' 11 | response.body = File.read('/var/www/app/index.html').sub("HEADER_TEXT", "Hello") 12 | end 13 | 14 | trap 'INT' do server.shutdown end 15 | 16 | server.start 17 |
-------------------------------------------------------------------------------- /samba/README.md: -------------------------------------------------------------------------------- 1 | # Samba 2 | 3 | ## External Resources 4 | 5 | - https://confluence.jaytaala.com/display/TKB/Create+samba+share+writeable+by+all%2C+group%2C+or+only+a+user 6 | 7 | ## Setup Shares 8 | 9 | Create user: 10 | 11 | ``` 12 | useradd --system me 13 | chown -R me /disk/share 14 | ``` 15 | 16 | Create a Group: 17 | 18 | ``` 19 | sudo groupadd mygroup 20 | ``` 21 | 22 | Add the existing user to the group: 23 | 24 | ``` 25 | sudo usermod -aG mygroup me 26 | ``` 27 | 28 | Set permissions on the directory: 29 | 30 | ``` 31 | chgrp -R mygroup /disk/share 32 | chmod g+s /disk/share 33 | ``` 34 | 35 | Allow all users to read and write to your share: 36 | 37 | ``` 38 | [share] 39 | path = /disk/share 40 | writeable = yes 41 | browseable = yes 42 | public = yes 43 | create mask = 0644 44 | directory mask = 0755 45 | force user = me 46 | ``` 47 | 48 | Allow all Linux users that are part of a group to read and write to your share: 49 | 50 | ``` 51 | [share] 52 | path = /disk/share 53 | valid users = @mygroup 54 | writeable = yes 55 |
browseable = yes 56 | create mask = 0644 57 | directory mask = 0755 58 | force user = me 59 | ``` 60 | 61 | To allow only one user to access our share, we need to assign a Samba password: 62 | 63 | ``` 64 | sudo smbpasswd -a me 65 | ``` 66 | 67 | Then we can specify in our config that only the `me` user can access our share with read/write permissions: 68 | 69 | ``` 70 | [share] 71 | path = /disk/share 72 | valid users = me 73 | writeable = yes 74 | browseable = yes 75 | create mask = 0644 76 | directory mask = 0755 77 | force user = me 78 | ``` 79 | 80 | ## Other examples 81 | 82 | ``` 83 | # read to some, write to some 84 | [share] 85 | comment = Ubuntu Share 86 | path = /your/samba/share 87 | browsable = yes 88 | guest ok = yes 89 | read only = no 90 | read list = guest nobody 91 | write list = user1 user2 user3 92 | create mask = 0755 93 | 94 | # read to all, write to some 95 | [share] 96 | comment = Ubuntu Share 97 | path = /your/samba/share 98 | browsable = yes 99 | guest ok = yes 100 | read only = yes 101 | write list = user1 user2 user3 102 | create mask = 0755 103 | ``` 104 |
-------------------------------------------------------------------------------- /sealedsecrets/README.md: -------------------------------------------------------------------------------- 1 | # sealed secrets 2 | 3 | [Bitnami Sealed Secrets](https://github.com/bitnami-labs/sealed-secrets) is a Kubernetes controller and tool for one-way encrypted Secrets. 4 | 5 | ## Pre-Requisites 6 | 7 | You will need [kubectl](https://kubernetes.io/docs/tasks/tools/) and [kubeseal](https://github.com/bitnami-labs/sealed-secrets/releases) 8 | 9 | ## Installing Kubeseal 10 | 11 | Get the latest version from their [releases](https://github.com/bitnami-labs/sealed-secrets/releases), then for Linux: 12 | 13 | ```bash 14 | curl -sSL https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.19.1/kubeseal-0.19.1-linux-amd64.tar.gz | tar -xz 15 | sudo install -o root -g root -m 0755 kubeseal /usr/local/bin/kubeseal 16 | ``` 17 | 18 | ## Create a Sealed Secret 19 | 20 | ### From stdin 21 | 22 | Create a Kubernetes secret: 23 | 24 | ```bash 25 | echo -n pass123 | kubectl create secret generic app-secret --dry-run=client --from-file=foo=/dev/stdin -o yaml > app-secret.yaml 26 | ``` 27 | 28 | Encrypt the secret: 29 | 30 | ```bash 31 | kubeseal --controller-name=sealed-secrets --controller-namespace=kube-system --format yaml < app-secret.yaml > app-sealedsecret.yaml 32 | ``` 33 | 34 | Create the sealed secret: 35 | 36 | ```bash 37 | kubectl create -f app-sealedsecret.yaml 38 | ``` 39 | 40 | ## Master Key 41 | 42 | To backup the master key: 43 | 44 | ```bash 45 | kubectl get secret -n kube-system -l sealedsecrets.bitnami.com/sealed-secrets-key -o yaml > sealedsecret-master.key 46 | ``` 47 | 48 | To restore the master key: 49 | 50 | ```bash 51 | kubectl apply -f sealedsecret-master.key 52 | kubectl delete pod -n kube-system -l name=sealed-secrets-controller 53 | ``` 54 |
-------------------------------------------------------------------------------- /sed/README.md: -------------------------------------------------------------------------------- 1 | # sed cheatsheet 2 | 3 | ## Replace content 4 | 5 | Let's say you want to change values in a file, assuming the file is: 6 | 7 | ```bash 8 | $ cat .env 9 | DB_USER=admin 10 | DB_PASS=__DB_PASS__ 11 | ``` 12 | 13 | And we want to substitute `__DB_PASS__` with a value, we would do: 14 | 15 | ```bash 16 | $ sed 's/__DB_PASS__/secret/g' .env 17 | ``` 18 | 19 | We ran it without `-i`, which won't make changes to the file, as it
will only show the proposed changes. When you want to write them to the file: 20 | 21 | ```bash 22 | $ sed -i 's/__DB_PASS__/secret/g' .env 23 | ``` 24 | 25 | If we had the value in our environment as `MYPASSWORD=secret`, we can do: 26 | 27 | ```bash 28 | $ sed -i "s/__DB_PASS__/${MYPASSWORD}/g" .env 29 | ``` 30 | 31 | If you are on macOS, you will need to pass `-i ''`: 32 | 33 | ```bash 34 | $ sed -i '' "s/__DB_PASS__/${MYPASSWORD}/g" .env 35 | ``` 36 | 37 | ## Remove blank lines 38 | 39 | Assume you have a file like: 40 | 41 | ```bash 42 | $ cat file.cfg 43 | [defaults] 44 | key1=val1 45 | key2=val2 46 | 47 | key3=val3 48 | ``` 49 | 50 | And you want to remove the blank line: 51 | 52 | ```bash 53 | $ sed -i '/^$/d' file.cfg 54 | ``` 55 |
-------------------------------------------------------------------------------- /sftp/README.md: -------------------------------------------------------------------------------- 1 | # SFTP Cheatsheet 2 | 3 | Connect to server: 4 | 5 | ``` 6 | $ sftp -i ~/.ssh/id_rsa me@sftp.mydomain.com 7 | ``` 8 | 9 | Local working directory: 10 | 11 | ``` 12 | sftp> lpwd 13 | ``` 14 | 15 | Remote working directory: 16 | 17 | ``` 18 | sftp> pwd 19 | ``` 20 | 21 | Listing local files: 22 | 23 | ``` 24 | sftp> lls 25 | ``` 26 | 27 | Listing remote files: 28 | 29 | ``` 30 | sftp> ls 31 | ``` 32 | 33 | Upload a single file: 34 | 35 | ``` 36 | sftp> put file.json 37 | ``` 38 | 39 | Upload a single file to a specific path: 40 | 41 | ``` 42 | sftp> put file.json path/to/file.json 43 | ``` 44 | 45 | Upload multiple files: 46 | 47 | ``` 48 | sftp> mput *.json 49 | ``` 50 | 51 | Download a single file: 52 | 53 | ``` 54 | sftp> get file.json 55 | ``` 56 | 57 | Download multiple files: 58 | 59 | ``` 60 | sftp> mget *.json 61 | ``` 62 | 63 | Switching the local directory: 64 | 65 | ``` 66 | sftp> lcd 67 | ``` 68 | 69 | Switching the remote directory: 70 | 71 | ``` 72 | sftp> cd 73 | ``` 74 | 75 | Creating local directory: 76 | 77 | ``` 78 | sftp> lmkdir data 79 | ``` 80 | 81 | Creating remote directory: 82 | 83 | ``` 84 | sftp> mkdir data 85 | ``` 86 | 87 | Deleting remote directory: 88 | 89 | ``` 90 | sftp> rmdir data 91 | ``` 92 |
-------------------------------------------------------------------------------- /slack/python/slack_helper.py: -------------------------------------------------------------------------------- 1 | # pip install slackclient 2 | # https://github.com/JPStrydom/Crypto-Trading-Bot/issues/10 3 | 4 | from slack import WebClient as SlackClient 5 | legacy_token = "" # https://api.slack.com/custom-integrations/legacy-tokens 6 | slack_client = SlackClient(legacy_token) 7 | 8 | def list_channels(): 9 | name_to_id = {} 10 | res = slack_client.api_call( 11 | "groups.list", # groups are private channels, conversations are public channels. Different API.
12 | ) 13 | channels = {"private_channels": []} 14 | for channel in res['groups']: 15 | channels['private_channels'].append({channel['name']: channel['id']}) 16 | 17 | return channels 18 |
-------------------------------------------------------------------------------- /ssh-keygen/README.md: -------------------------------------------------------------------------------- 1 | # SSH Keygen Cheatsheet 2 | 3 | ## Usage 4 | 5 | Create an SSH Private Key: 6 | 7 | ``` 8 | $ ssh-keygen -f ~/.ssh/mykey -t rsa -C "MyKey" -q -N "" 9 | ``` 10 | 11 | Generate an SSH Public Key from a Private Key: 12 | 13 | ``` 14 | $ ssh-keygen -y -f ~/.ssh/id_rsa > ~/.ssh/id_rsa.pub 15 | ``` 16 | 17 | Convert a multiline (newline) public ssh key to a normal public key: 18 | 19 | ``` 20 | $ ssh-keygen -i -f ~/Downloads/key.multiline_pub > ~/Downloads/key.pub 21 | ``` 22 | 23 | View the Public SSH Key from a Private Key: 24 | 25 | ``` 26 | $ ssh-keygen -y -f ~/.ssh/id_rsa 27 | ``` 28 |
-------------------------------------------------------------------------------- /stern/README.md: -------------------------------------------------------------------------------- 1 | # stern 2 | 3 | [Stern](https://github.com/stern/stern) is a utility that allows you to specify both pod and container ID as regular expressions. 4 | 5 | ## Installation 6 | 7 | Installation on Mac: 8 | 9 | ```bash 10 | brew install stern 11 | ``` 12 | 13 | ## Examples 14 | 15 | Tail pod logs starting with `my-pods-` in the `default` namespace: 16 | 17 | ```bash 18 | stern -n default my-pods 19 | ``` 20 | 21 | Tail the same, but only for the `web` container: 22 | 23 | ```bash 24 | stern -n default my-pods --container web 25 | ``` 26 | 27 | Do the same but only from 1 minute ago: 28 | 29 | ```bash 30 | stern -n default my-pods --container web --since 1m 31 | ``` 32 | 33 | ## Documentation 34 | 35 | - [github.com/stern/stern](https://github.com/stern/stern) 36 | - [kubernetes.io/blog/2016/10/tail-kubernetes-with-stern/](https://kubernetes.io/blog/2016/10/tail-kubernetes-with-stern/) 37 |
-------------------------------------------------------------------------------- /stress/README.md: -------------------------------------------------------------------------------- 1 | # stress cheatsheet 2 | 3 | CLI tool to put stress on an operating system 4 | 5 | ## Installation and Usage 6 | 7 | Installation: 8 | 9 | ```bash 10 | $ sudo apt install stress -y 11 | ``` 12 | 13 | Usage: 14 | 15 | ``` 16 | $ stress --cpu 8 --io 4 --vm 4 --vm-bytes 1024M --timeout 10s 17 | ``` 18 | 19 | ## Stress-ng 20 | 21 | Installation: 22 | 23 | ```bash 24 | sudo apt install stress-ng -y 25 | ``` 26 | 27 | Use `stress-ng` to simulate CPU load: 28 | 29 | ```bash 30 | # This command will run 4 CPU stress workers for 60 seconds; adjust the --cpu parameter to the number of CPU cores you want to stress 31 | stress-ng --cpu 4 --timeout 60s 32 | ``` 33 | 34 | Use `stress-ng` to simulate memory load: 35 | 36 | ```bash 37 | # This command will spawn 2 virtual-memory stressor workers, each allocating 1GB of memory, for 60 seconds; adjust the --vm and --vm-bytes parameters to control the memory stress 38 | stress-ng --vm 2 --vm-bytes 1G --timeout 60s 39 | ``` 40 | 41 | Use `dd` to simulate I/O load: 42 | 43 | ```bash 44 | # This command will create a 1GB test file filled with zeros in /tmp while synchronizing data to the disk; this will put stress on the I/O subsystem 45 | dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 conv=fdatasync 46 | ``` 47 |
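While any of the above is running, it helps to watch the effect from a second terminal; `uptime` and `vmstat` are standard on most distributions:

```bash
# refresh the load averages every second
watch -n1 uptime

# sample CPU, memory and I/O counters once per second
vmstat 1
```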
-------------------------------------------------------------------------------- /sudo/README.md: -------------------------------------------------------------------------------- 1 | # Sudo Cheatsheet 2 | 3 | ## Usage 4 | 5 | Allow a user to sudo without a password: 6 | 7 | ``` 8 | $ echo "${USER} ALL=(ALL:ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/no-sudo-password-for-${USER} 9 | ``` 10 | 11 | Run a command as a user: 12 | 13 | ``` 14 | $ sudo -H -u ubuntu bash -c 'echo "I am: $USER"' 15 | ``` 16 |
-------------------------------------------------------------------------------- /symlinks/README.md: -------------------------------------------------------------------------------- 1 | # symlinks 2 | 3 | ## Example 4 | 5 | Scenario: 6 | 7 | - We have an NFS mounted at `/mnt` 8 | - We want to persist `/data/shares/teams` to `/mnt/teams` 9 | - The directory `teams` on the local fs should not exist 10 | - The directory `/mnt/teams` on the NFS fs should exist 11 | 12 | The commands: 13 | 14 | ```bash 15 | mkdir /mnt/teams 16 | mv /data/shares/teams /data/shares/bak-teams 17 | sudo ln -s /mnt/teams /data/shares/teams 18 | mv /data/shares/bak-teams/* /data/shares/teams/ 19 | ``` 20 |
-------------------------------------------------------------------------------- /systemd/README.md: -------------------------------------------------------------------------------- 1 | # systemd cheatsheet 2 | 3 | ## systemctl 4 | 5 | To view a status of the service: 6 | 7 | ```bash 8 | sudo systemctl status nginx 9 | ``` 10 | 11 | To restart a service: 12 | 13 | ```bash 14 | sudo systemctl restart nginx 15 | ``` 16 | 17 | ## journalctl 18 | 19 | To tail the logs of a unit: 20 | 21 | ```bash 22 | sudo journalctl -fu nginx 23 | ``` 24 | 25 | To tail and view the last 100 logs: 26 | 27 | ```bash 28 | sudo journalctl -fu nginx -n 100 --no-pager 29 | ``` 30 |
-------------------------------------------------------------------------------- /systemd/pre_start_example.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Promtail 3 | 4 | [Service] 5 | User=root 6 | WorkingDirectory=/opt/promtail/ 7 | ExecStartPre=/bin/sleep 30 8 | ExecStart=/opt/promtail/promtail-linux-amd64 --config.file=./ec2-promtail.yaml 9 | SuccessExitStatus=143 10 | TimeoutStopSec=10 11 | Restart=on-failure 12 | RestartSec=5 13 | 14 | [Install] 15 | WantedBy=multi-user.target 16 |
-------------------------------------------------------------------------------- /systemd/specify_logfile.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Promtail 3 | 4 | [Service] 5 | User=root 6 | WorkingDirectory=/opt/promtail/ 7 | ExecStart=/opt/promtail/promtail-linux-amd64 --config.file=./ec2-promtail.yaml 8 | #StandardOutput=file:/var/log/service.log 9 | #StandardError=file:/var/log/service.err 10 | StandardOutput=append:/var/log/service.log 11 | StandardError=append:/var/log/service.err 12 | 13 | [Install] 14 | WantedBy=multi-user.target 15 |
-------------------------------------------------------------------------------- /tar/README.md: -------------------------------------------------------------------------------- 1 | # tar cheatsheet 2 | 3 | Compress: 4 | 5 | ``` 6 | $ tar -zcvf my-archive.tar.gz ~/path/to/compress 7 | ``` 8 | 9 | Extract: 10 | 11 | ``` 12 | $ tar -xvf my-archive.tar.gz 13 | ``` 14 | 15 | Exclude and Compress: 16 | 17 | ``` 18 | $ tar -zcvf backup-$(date +%F).tar.gz --exclude "~/personal/project/*/*.dat" --exclude "*.ldb"
--exclude "*/.git/*" --exclude "*/.terraform/*" --exclude "*/site-packages/*" --exclude "*/node_modules/*" ~/workspace ~/Documents 19 | ``` 20 | 21 | Archive with gzip and follow symlinks: 22 | 23 | ``` 24 | $ tar -czvhf archive.tar.gz /opt/app/current 25 | ``` 26 | 27 | Tar combined with wget: 28 | 29 | ```bash 30 | $ wget -q -O - https://github.com/sibprogrammer/xq/releases/download/v1.1.4/xq_1.1.4_linux_amd64.tar.gz | tar zxv 31 | ``` 32 |
-------------------------------------------------------------------------------- /terraform/snippets/for_each.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | users = { 3 | "james.dean" = "administrators" 4 | } 5 | } 6 | 7 | module "some-module" { 8 | source = "../../modules/some-module" 9 | 10 | # for_each exposes the current map entry as each.key / each.value 11 | username = each.key 12 | group = each.value 13 | 14 | for_each = local.users 15 | } 16 |
-------------------------------------------------------------------------------- /terraform/variables.md: -------------------------------------------------------------------------------- 1 | Define variables: 2 | 3 | ``` 4 | $ cat variables.tf 5 | variable "environment" { 6 | default = "prod" 7 | } 8 | 9 | variable "name" { 10 | default = "web" 11 | } 12 | ``` 13 | 14 | Call a variable: 15 | 16 | ``` 17 | $ terraform console 18 | 19 | > "${var.name}" 20 | web 21 | 22 | > "${var.environment}" 23 | prod 24 | ``` 25 | 26 | Join strings: 27 | - https://www.terraform.io/docs/configuration/functions/format.html 28 | 29 | ``` 30 | > format("%s-%s-server", var.name, var.environment) 31 | web-prod-server 32 | ``` 33 | 34 | ## Validation 35 | 36 | Regex for choices: 37 | 38 | ```terraform 39 | variable "instance_type" { 40 | description = "Instance type for EC2." 41 | type = string 42 | default = "t2.small" 43 | 44 | validation { 45 | condition = can(regex("^[Tt][2-3]\\.(nano|micro|small)", var.instance_type)) 46 | error_message = "Instance type options are limited to t2/t3 nano, micro and small." 47 | } 48 | } 49 | ```
-------------------------------------------------------------------------------- /tmux/README.md: -------------------------------------------------------------------------------- 1 | # tmux-cheatsheet 2 | 3 | ## Installation 4 | 5 | ### Ubuntu 6 | 7 | Install tmux: 8 | 9 | ```bash 10 | sudo apt install tmux -y 11 | ``` 12 | 13 | Install tmux plugin manager: 14 | 15 | ```bash 16 | git clone https://github.com/tmux-plugins/tpm ~/.tmux/plugins/tpm 17 | ``` 18 | 19 | Create the tmux configuration: 20 | 21 | ```bash 22 | mkdir -p ~/.config/tmux 23 | touch ~/.config/tmux/tmux.conf 24 | ``` 25 | 26 | The content of `tmux.conf`: 27 | 28 | ``` 29 | set -g @plugin 'tmux-plugins/tpm' 30 | set -g @plugin 'tmux-plugins/tmux-sensible' 31 | 32 | run '~/.tmux/plugins/tpm/tpm' 33 | ``` 34 | 35 | Then you can source it with: 36 | 37 | ```bash 38 | tmux source ~/.config/tmux/tmux.conf 39 | ``` 40 | 41 | ## Cheatsheets 42 | 43 | The prefix key is `ctrl + b` 44 | 45 | To create a new window, prefix key + `c` 46 | 47 | To navigate between windows, use the prefix key and the window number, or prefix key + `n` (next) or `p` (previous) 48 | 49 | To split your current window into a horizontal pane, press the prefix key + `%` and to split the pane vertically, the prefix key + `"` 50 | 51 | To switch between panes, use the prefix key + the direction arrows 52 | 53 | The prefix key and `q` will show the pane numbers, and you can switch to a pane by pressing its number 54 | 55 | To zoom into a pane, you can use the prefix key and `z` (matching session-level shell commands are shown below) 56 | 57 |
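From the shell, the matching session-level commands (standard tmux CLI; `demo` is just an example session name):

```bash
# start a new named session
tmux new -s demo

# list running sessions
tmux ls

# re-attach to the named session (detach from inside with prefix + d)
tmux attach -t demo
```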
To close a pane, prefix key + `x` 58 | 59 | ## Resources 60 | 61 | - [Dreams of Code - TMUX](https://www.youtube.com/watch?v=DzNmUNvnB04) 62 |
-------------------------------------------------------------------------------- /vagrant/README.md: -------------------------------------------------------------------------------- 1 | # vagrant cheatsheet 2 | 3 | External: 4 | - https://peteris.rocks/blog/vagrantfile-for-linux/ 5 | - https://phoenhex.re/2018-03-25/not-a-vagrant-bug 6 | - https://www.laurentiupancescu.com/blog/5913767b/ 7 |
-------------------------------------------------------------------------------- /vim/README.md: -------------------------------------------------------------------------------- 1 | ## External Resources: 2 | - https://devhints.io/vim 3 | 4 | ## Copy Paste 5 | 6 | Copying Lines: 7 | 8 | ``` 9 | on the line that you want to duplicate 10 | yyp 11 | ``` 12 | 13 | Copy a Block: 14 | 15 | ``` 16 | v (for visual mode) 17 | direction arrows to highlight 18 | y (to copy) 19 | move to the desired area 20 | p (to paste) 21 | ``` 22 | 23 | ## Delete 24 | 25 | Deleting a Line: 26 | 27 | ``` 28 | On the line that you want to delete 29 | dd 30 | ``` 31 | 32 | Deleting everything below the cursor: 33 | 34 | ``` 35 | dG 36 | ``` 37 | 38 | Deleting everything above the cursor: 39 | 40 | ``` 41 | dgg 42 | ``` 43 | 44 | Deleting 10 lines starting at the cursor: 45 | 46 | ``` 47 | 10dd 48 | ``` 49 | 50 | Delete the first 4 characters of every line: 51 | 52 | ``` 53 | :%s/^.\{0,4\}// 54 | 55 | eg. 56 | 57 | >>> class HashTable: 58 | ... def __init__(self): 59 | ... self.size = 256 60 | ... self.slots = [None for i in range(self.size)] 61 | ... self.count = 0 62 | 63 | after: 64 | 65 | class HashTable: 66 | def __init__(self): 67 | self.size = 256 68 | self.slots = [None for i in range(self.size)] 69 | self.count = 0 70 | ``` 71 | 72 | ## Replace Characters: 73 | 74 | Norm Function: Find/Replace 75 | 76 | The dataset: 77 | 78 | ``` 79 | 123. 123 80 | 233.
123 82 | 83 | Apply: 84 | 85 | :%norm f.C, 86 | ``` 87 | 88 | Result: 89 | 90 | ``` 91 | 123, 92 | 233, 93 | ``` 94 | 95 | Commenting Lines: 96 | 97 | ``` 98 | ctrl + v (visual block mode) 99 | move down until the line you want to comment out 100 | shift + I 101 | Enter the character (#) 102 | Esc 103 | ``` 104 | 105 | Search and Replace (replace true with false; add `/g` at the end to replace every occurrence on each line): 106 | 107 | ``` 108 | :%s/true/false/ 109 | press enter 110 | ``` 111 |
-------------------------------------------------------------------------------- /vim/config/.vimrc: -------------------------------------------------------------------------------- 1 | colorscheme default 2 | syntax on 3 | set mouse-=a 4 | 5 | filetype on 6 | filetype indent plugin on 7 | set noexpandtab " tabs ftw 8 | set smarttab " tab respects 'tabstop', 'shiftwidth', and 'softtabstop' 9 | set tabstop=4 " the visible width of tabs 10 | set softtabstop=4 " edit as if the tabs are 4 characters wide 11 | set shiftwidth=4 " number of spaces to use for indent and unindent 12 | set shiftround " round indent to a multiple of 'shiftwidth' 13 | 14 | autocmd FileType yml setlocal ts=2 sts=2 sw=2 expandtab 15 | autocmd FileType yaml setlocal ts=2 sts=2 sw=2 expandtab 16 |
-------------------------------------------------------------------------------- /xq/README.md: -------------------------------------------------------------------------------- 1 | # xq 2 | 3 | ## Download 4 | 5 | Get xq: 6 | 7 | ```bash 8 | wget -q -O - "https://github.com/sibprogrammer/xq/releases/download/v1.1.4/xq_1.1.4_linux_amd64.tar.gz" | tar zxv 9 | install -o root -g root -m 0755 xq /usr/local/bin/xq 10 | ``` 11 | 12 | ## Usage 13 | 14 | Prepare an XML file named `test.xml`: 15 | 16 | ```xml 17 | <?xml version="1.0" encoding="UTF-8"?> 18 | <ListBucketResult> 19 | <Name>fullnode-backup-2</Name> 20 | <Contents> 21 | <Key>archive-v1.tgz</Key> 22 | <LastModified>2023-04-20T15:13:12.000Z</LastModified> 23 | <Size>1493252335919</Size> 24 | <StorageClass>STANDARD</StorageClass> 25 | </Contents> 26 | <Contents> 27 | <Key>archive-v2.tgz</Key> 28 | <LastModified>2023-04-21T15:10:56.000Z</LastModified> 29 | <Size>1495321500561</Size> 30 | <StorageClass>STANDARD</StorageClass> 31 | </Contents> 32 | </ListBucketResult> 33 | ``` 34 | 35 | To return the key names: 36 | 37 | ```bash 38 | cat test.xml | xq -x '/ListBucketResult/Contents/Key' 39 | ``` 40 | 41 | Which will return: 42 | 43 | ```bash 44 | archive-v1.tgz 45 | archive-v2.tgz 46 | ``` 47 |
-------------------------------------------------------------------------------- /yq/README.md: -------------------------------------------------------------------------------- 1 | # yq 2 | 3 | ## Cheatsheets 4 | 5 | Render the entire yaml file: 6 | 7 | ```bash 8 | yq -r '.'
9 | ``` 10 | 11 | Access the `microservices` key: 12 | 13 | ```bash 14 | yq -r '.microservices' 15 | ``` 16 | 17 | Append a new top-level key to the file: 18 | 19 | ```bash 20 | yq -i '.external_secrets = {"enabled": false}' test.yaml 21 | ``` 22 | 23 | Delete a key with its contents: 24 | 25 | ```bash 26 | yq -i 'del(.microservice.secrets)' test.yaml 27 | ``` 28 | 29 | Inject values from another file: 30 | 31 | ```bash 32 | yq -i ".microservice.env = load(\"staging.yaml\") | .microservice.env" test.yaml 33 | ``` 34 | 35 | Add new data: 36 | 37 | ```bash 38 | yq -i '.microservice.aws = {"accountId": "000000000000","region": "eu-west-1","eks": {"clusterId": "xxxxxxxxxxxxxxx"}}' test.yaml 39 | ``` 40 | 41 | Sort keys under `microservice`: 42 | 43 | ```bash 44 | yq -i '.microservice |= (to_entries | sort_by(.key) | from_entries)' test.yaml 45 | ``` 46 |
-------------------------------------------------------------------------------- /zipkin/README.md: -------------------------------------------------------------------------------- 1 | # Zipkin Cheatsheet 2 | 3 | ## Resources 4 | - [Use Zipkin to Trace Requests in Flask Application](https://medium.com/@eng.mohamed.m.saeed/use-zipkin-to-trace-requests-in-flask-application-68886f02e46) 5 | --------------------------------------------------------------------------------