├── .all-contributorsrc ├── .github ├── FUNDING.yml ├── dependabot.yml ├── pull_request_template.md └── workflows │ ├── greeting.yml │ ├── link-checker.yml │ ├── main.yaml │ ├── run-excercise.yml │ ├── run-shell.yml │ ├── shell-check.yml │ └── stale-issue.yml ├── .gitignore ├── CNAME ├── CODEOWNERS ├── CONTRIBUTING.md ├── LICENSE ├── README.md ├── assets └── images │ ├── argocd │ └── guestbook-ui-demo.png │ ├── elk │ └── elk_architecture.png │ └── prometheus │ └── prometheus-architecture.png ├── docs ├── GOODREAD.md └── README.md ├── getting-started └── README.md ├── projects ├── README.md └── ci-cd-observability │ ├── README.md │ └── setup_stack.sh ├── templates ├── README.md ├── TOPIC_STRUCTURE.md └── TOPIC_TEMPLATE.md ├── tools └── generate-main-readme.sh ├── topics ├── README.md ├── agile │ └── README.md ├── ansible │ ├── README.md │ ├── advanced │ │ ├── check-stats.yaml │ │ ├── ping-google.yaml │ │ ├── projects │ │ │ └── sample_project │ │ │ │ ├── README.md │ │ │ │ ├── inventory.init │ │ │ │ ├── main.yml │ │ │ │ └── roles │ │ │ │ ├── check_up_time │ │ │ │ └── tasks │ │ │ │ │ └── main.yml │ │ │ │ ├── ping_google │ │ │ │ ├── defaults │ │ │ │ │ └── main.yml │ │ │ │ └── tasks │ │ │ │ │ └── main.yml │ │ │ │ └── sample_role │ │ │ │ ├── defaults │ │ │ │ └── main.yml │ │ │ │ ├── handlers │ │ │ │ └── README.md │ │ │ │ ├── tasks │ │ │ │ └── main.yml │ │ │ │ └── templates │ │ │ │ └── README.md │ │ └── with-docker │ │ │ ├── AnsibleEnv.dockerfile │ │ │ └── ansible-practice.sh │ ├── basic │ │ ├── README.md │ │ └── helloworld │ │ │ ├── README.md │ │ │ ├── ansible-helloworld.sh │ │ │ ├── first-inventory.ini │ │ │ └── first-playbook.yml │ └── docs │ │ └── dev-to-blog-ansible-01.md ├── apache-httpd │ ├── README.md │ └── basic │ │ └── README.md ├── apachetomcat │ ├── README.md │ └── basic │ │ └── README.md ├── architecture │ └── README.md ├── argocd │ ├── README.md │ └── basic │ │ ├── README.md │ │ └── install_argocd.sh ├── aws │ ├── README.md │ └── basic │ │ └── README.md ├── azure │ ├── README.md │ └── basic │ │ └── README.md ├── azuredevops │ ├── README.md │ ├── advanced │ │ └── README.md │ └── basic │ │ └── first-azure-pipelines.yml ├── cloudflare │ ├── README.md │ └── basic │ │ └── README.md ├── coding │ ├── README.md │ └── practice.md ├── docker │ ├── README.md │ ├── advanced │ │ └── python-sample │ │ │ ├── Dockerfile │ │ │ ├── README.md │ │ │ ├── basic.py │ │ │ ├── practice.sh │ │ │ ├── random.py │ │ │ └── requirements.txt │ └── basic │ │ ├── docker-helloworld.sh │ │ └── top-docker-cmd.md ├── dynatrace │ ├── README.md │ └── basic │ │ └── README.md ├── elk │ ├── README.md │ ├── advanced │ │ └── REAMDE.md │ └── basic │ │ └── helloworld │ │ ├── README.md │ │ ├── elasalert │ │ ├── README.md │ │ ├── config.yaml │ │ ├── deploy.sh │ │ ├── docker-compose.yml │ │ ├── rules │ │ │ └── metricbeat_cpu_rule.yaml │ │ ├── tests │ │ │ ├── data │ │ │ │ └── json_debug.json │ │ │ └── test_rules.sh │ │ └── wip_rules │ │ │ ├── example_single_metric_agg.yaml │ │ │ └── metricbeat_disk_space_rule.yaml │ │ ├── installation │ │ ├── docker-compose │ │ │ ├── .custom-env │ │ │ ├── install.sh │ │ │ └── uninstall.sh │ │ └── helm │ │ │ ├── install.sh │ │ │ ├── uninstall.sh │ │ │ └── values.yml │ │ └── metric-beat │ │ ├── README.md │ │ ├── docker-compose.yaml │ │ ├── metricbeat.docker.yml │ │ ├── ubuntu_host_on_docker │ │ ├── Dockerfile │ │ ├── README.md │ │ └── metricbeat.yml │ │ └── wsl2_ubuntu │ │ ├── install_metricbeat.yml │ │ ├── install_via_ansible.sh │ │ └── metricbeat.yml ├── git │ ├── README.md │ ├── TIPS.md │ └── basic 
│ │ └── hello-world │ │ └── git-helloworld.sh ├── github-action │ ├── README.md │ └── basic │ │ └── README.md ├── gitlabci │ ├── README.md │ ├── advanced │ │ └── REAME.md │ └── basic │ │ └── .gitlab-ci.yml ├── groovy │ ├── README.md │ └── basic │ │ ├── README.md │ │ └── basic-concept.groovy ├── haproxy │ ├── README.md │ ├── advanced │ │ └── README.md │ └── basic │ │ ├── README.md │ │ ├── assets │ │ ├── server1.png │ │ └── server2.png │ │ ├── docker-compose.yaml │ │ ├── haproxy │ │ ├── Dockerfile │ │ └── haproxy.cfg │ │ └── nginx-webserver │ │ ├── nginx-webserver1 │ │ ├── Dockerfile │ │ └── index.html │ │ └── nginx-webserver2 │ │ ├── Dockerfile │ │ └── index.html ├── helm │ ├── README.md │ ├── advanced │ │ ├── hands-on │ │ │ └── deploy-jenkins │ │ │ │ ├── README.md │ │ │ │ ├── cleanup.sh │ │ │ │ ├── deploy.sh │ │ │ │ ├── jenkins-sa.yaml │ │ │ │ ├── jenkins-values.yaml │ │ │ │ ├── jenkins-volume.yaml │ │ │ │ └── local-debug.sh │ │ └── tungleo-chart │ │ │ ├── .helmignore │ │ │ ├── Chart.yaml │ │ │ ├── templates │ │ │ ├── NOTES.txt │ │ │ ├── _helpers.tpl │ │ │ ├── deployment.yaml │ │ │ ├── hpa.yaml │ │ │ ├── ingress.yaml │ │ │ ├── service.yaml │ │ │ ├── serviceaccount.yaml │ │ │ └── tests │ │ │ │ └── test-connection.yaml │ │ │ └── values.yaml │ └── basic │ │ └── helm-helloworld.sh ├── iis │ ├── README.md │ └── basic │ │ └── README.md ├── istio │ └── README.md ├── jenkins │ ├── README.md │ ├── advanced │ │ └── README.md │ └── basic │ │ ├── Jenkins-Hello-World.md │ │ ├── MyFirstPipeline.groovy │ │ ├── PipelineWithParallelStages.groovy │ │ └── deploy │ │ └── docker-compose │ │ ├── README.md │ │ └── docker-compose.yml ├── k8s │ ├── README.md │ ├── advanced │ │ └── play-around │ │ │ └── install-jenkins │ │ │ ├── README.md │ │ │ ├── jenkins-sa.yaml │ │ │ ├── jenkins-values.yaml │ │ │ └── jenkins-volume.yaml │ └── basic │ │ ├── beginner │ │ ├── 90daysofdevops │ │ │ └── nginx.yaml │ │ ├── GOOD-READ.md │ │ ├── gitea-deployment-service.yml │ │ ├── gitea-deployment-with-port.yaml │ │ ├── gitea-deployment.yaml │ │ ├── gitea.yaml │ │ └── mysql.yaml │ │ └── helloworld │ │ ├── k8s-helloworld-cleanup.sh │ │ ├── k8s-helloworld.sh │ │ ├── nginx-deployment.yaml │ │ └── nginx-service.yaml ├── kafka │ ├── README.md │ └── basic │ │ ├── README.md │ │ └── docker-compose.yml ├── microservices │ ├── README.md │ ├── assets │ │ └── first-demo-microservices-result.png │ └── basic │ │ ├── README.md │ │ ├── cleanup-hello-microservices.sh │ │ └── hello-microservices.sh ├── nginx │ ├── README.md │ ├── advanced │ │ └── README.md │ └── basic │ │ ├── README.md │ │ ├── assets │ │ └── demo_nginx_basic_ok.png │ │ ├── docker-compose.yaml │ │ ├── html │ │ └── index.html │ │ └── nginx.conf ├── openstack │ ├── README.md │ └── basic │ │ ├── README.md │ │ ├── cleanup.sh │ │ └── openstack-helm.sh ├── packer │ ├── README.md │ └── basic │ │ ├── README.md │ │ ├── assets │ │ └── ami-on-aws.png │ │ └── aws-ubuntu.pkr.hcl ├── prometheus │ ├── README.md │ ├── advanced │ │ └── README.md │ └── basic │ │ ├── prometheus-helloworld-cleanup.sh │ │ └── prometheus-helloworld.sh ├── python │ ├── README.md │ ├── advanced │ │ └── examples │ │ │ ├── 01-factorial-calculator.py │ │ │ ├── 02-parse-json-file.py │ │ │ ├── 03-oop-with-animal.py │ │ │ ├── 04-api-call.py │ │ │ ├── README.md │ │ │ └── sample_files │ │ │ └── persional_info.json │ └── basic │ │ └── helloworld.py ├── shell │ ├── README.md │ ├── advanced │ │ ├── examples │ │ │ ├── README.md │ │ │ └── list.sh │ │ └── excercise │ │ │ ├── README.md │ │ │ └── answers │ │ │ ├── 01_system_health_check.sh │ │ 
│ └── 02_password_generator.sh │ └── basic │ │ ├── basic.sh │ │ └── data │ │ ├── example.json │ │ ├── grep_example.txt │ │ ├── one.txt │ │ ├── three.txt │ │ └── two.txt ├── snyk │ ├── README.md │ └── basic │ │ └── README.md ├── sql │ ├── README.md │ ├── mysql-advanced.md │ └── mysql-basics.md ├── terraform │ ├── .gitignore │ ├── README.md │ ├── advanced │ │ ├── aws-three-tier │ │ │ ├── dev │ │ │ │ ├── .terraform.lock.hcl │ │ │ │ ├── install_apache.sh │ │ │ │ ├── install_node.sh │ │ │ │ ├── main.tf │ │ │ │ ├── outputs.tf │ │ │ │ └── variables.tf │ │ │ └── modules │ │ │ │ └── three-tier-deployment │ │ │ │ ├── compute │ │ │ │ ├── main.tf │ │ │ │ ├── outputs.tf │ │ │ │ └── variables.tf │ │ │ │ ├── database │ │ │ │ ├── main.tf │ │ │ │ ├── outputs.tf │ │ │ │ └── variables.tf │ │ │ │ ├── loadbalancing │ │ │ │ ├── main.tf │ │ │ │ ├── outputs.tf │ │ │ │ └── variables.tf │ │ │ │ └── networking │ │ │ │ ├── main.tf │ │ │ │ ├── outputs.tf │ │ │ │ └── variables.tf │ │ ├── docker │ │ │ ├── .terraform.lock.hcl │ │ │ └── main.tf │ │ └── terraform-up-and-running │ │ │ ├── chap02-getting-started │ │ │ ├── cluster-web-server │ │ │ │ ├── .terraform.lock.hcl │ │ │ │ ├── main.tf │ │ │ │ ├── outputs.tf │ │ │ │ └── variables.tf │ │ │ └── single-web-server │ │ │ │ ├── .terraform.lock.hcl │ │ │ │ └── main.tf │ │ │ ├── chap03-terraform-state │ │ │ ├── global │ │ │ │ └── s3-dynamo │ │ │ │ │ ├── .terraform.lock.hcl │ │ │ │ │ ├── main.tf │ │ │ │ │ ├── outputs.tf │ │ │ │ │ └── variables.tf │ │ │ └── stage │ │ │ │ ├── datastore │ │ │ │ └── mysql │ │ │ │ │ ├── .terraform.lock.hcl │ │ │ │ │ ├── main.tf │ │ │ │ │ ├── outputs.tf │ │ │ │ │ └── variables.tf │ │ │ │ └── service │ │ │ │ └── webserver-cluster │ │ │ │ ├── .terraform.lock.hcl │ │ │ │ ├── main.tf │ │ │ │ ├── outputs.tf │ │ │ │ ├── user-data.sh │ │ │ │ └── variables.tf │ │ │ ├── chap04-reusable-module │ │ │ ├── modules │ │ │ │ └── services │ │ │ │ │ ├── asg-webserver-cluster │ │ │ │ │ ├── asg-user-data.sh │ │ │ │ │ ├── main.tf │ │ │ │ │ ├── outputs.tf │ │ │ │ │ └── variables.tf │ │ │ │ │ └── webserver-cluster │ │ │ │ │ ├── main.tf │ │ │ │ │ ├── outputs.tf │ │ │ │ │ ├── user-data.sh │ │ │ │ │ └── variables.tf │ │ │ ├── prod │ │ │ │ └── services │ │ │ │ │ └── webserver-cluster │ │ │ │ │ └── main.tf │ │ │ └── stage │ │ │ │ ├── datastore │ │ │ │ └── mysql │ │ │ │ │ ├── .terraform.lock.hcl │ │ │ │ │ ├── main.tf │ │ │ │ │ ├── outputs.tf │ │ │ │ │ └── variables.tf │ │ │ │ └── services │ │ │ │ ├── alb-webserver │ │ │ │ ├── .terraform.lock.hcl │ │ │ │ ├── main.tf │ │ │ │ └── outputs.tf │ │ │ │ └── webserver-cluster │ │ │ │ ├── .terraform.lock.hcl │ │ │ │ └── main.tf │ │ │ └── practice-scripts │ │ │ ├── README.md │ │ │ ├── chap04-alb-instance │ │ │ ├── asg-destroy-practice-resource.sh │ │ │ └── asg-start-practice-resource.sh │ │ │ ├── chap04-single-instance │ │ │ ├── destroy-practice-resource.sh │ │ │ └── start-practice-resource.sh │ │ │ └── tf-backend │ │ │ ├── create-tf-backend.sh │ │ │ └── destroy-tf-backend.sh │ └── basic │ │ ├── README.md │ │ ├── aws-ec2 │ │ ├── .terraform.lock.hcl │ │ ├── main.tf │ │ ├── outputs.tf │ │ └── variables.tf │ │ └── terraform-helloworld.sh └── virtualbox │ ├── README.md │ └── basic │ └── README.md └── troubleshooting ├── common-issues.md ├── installation ├── groovy-with-sdk-missing-java.md └── jenkins-pod-issues.md └── k8s-notes.md /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | # These are supported funding model platforms 2 | 3 | github: tungbq # Replace with up to 4 GitHub 
Sponsors-enabled usernames e.g., [user1, user2] 4 | patreon: # Replace with a single Patreon username 5 | open_collective: # Replace with a single Open Collective username 6 | ko_fi: # Replace with a single Ko-fi username 7 | tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel 8 | community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry 9 | liberapay: # Replace with a single Liberapay username 10 | issuehunt: # Replace with a single IssueHunt username 11 | lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry 12 | polar: # Replace with a single Polar username 13 | buy_me_a_coffee: # Replace with a single Buy Me a Coffee username 14 | custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2'] 15 | -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | --- 2 | version: 2 3 | updates: 4 | - package-ecosystem: "github-actions" 5 | directory: "/" 6 | schedule: 7 | interval: "weekly" 8 | -------------------------------------------------------------------------------- /.github/pull_request_template.md: -------------------------------------------------------------------------------- 1 | 5 | 6 | Closes: #XXX 7 | > *Be noted that XXX is the issue ID* 8 | 9 | ## What is the purpose of the change 10 | 11 | > Add a description of the overall background and high level changes that this PR introduces 12 | 13 | *(E.g.: This pull request improves documentation of area A by adding ....)* 14 | -------------------------------------------------------------------------------- /.github/workflows/greeting.yml: -------------------------------------------------------------------------------- 1 | name: Greetings 2 | 3 | on: 4 | pull_request_target: 5 | types: [opened] 6 | issues: 7 | types: [opened] 8 | 9 | jobs: 10 | greeting: 11 | runs-on: ubuntu-latest 12 | if: ${{ github.actor != 'all-contributors[bot]' && github.event.pull_request.user.login != 'tungbq' }} 13 | permissions: 14 | issues: write 15 | pull-requests: write 16 | steps: 17 | - uses: actions/first-interaction@v1 18 | with: 19 | repo-token: ${{ secrets.GITHUB_TOKEN }} 20 | issue-message: "Hi, thanks for opening an issue! 🎉 Your contribution is greatly appreciated." 21 | pr-message: "Hi, thanks for your first pull request! 🎉 Your contribution is greatly appreciated." -------------------------------------------------------------------------------- /.github/workflows/link-checker.yml: -------------------------------------------------------------------------------- 1 | name: Links Checker 2 | 3 | on: 4 | workflow_dispatch: 5 | schedule: 6 | - cron: "0 0 1,15 * *" # twice a month 7 | 8 | jobs: 9 | linkChecker: 10 | runs-on: ubuntu-latest 11 | steps: 12 | - uses: actions/checkout@v4 13 | 14 | - name: Link Checker 15 | id: lychee 16 | uses: lycheeverse/lychee-action@v2.4.1 17 | with: 18 | args: --accept '100..=103,200..=299,403,429' --exclude-all-private . 
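          # --accept marks the listed HTTP status codes as successes (403/429 commonly come from
          # sites that rate-limit or block automated checkers); --exclude-all-private skips
          # links that point at private or loopback addresses.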
19 | 20 | - name: Create Issue From File 21 | if: env.lychee_exit_code != 0 22 | uses: peter-evans/create-issue-from-file@v5 23 | with: 24 | title: Broken Link Detected (Help Wanted) 25 | content-filepath: ./lychee/out.md 26 | labels: report, automated issue, good first issue 27 | -------------------------------------------------------------------------------- /.github/workflows/main.yaml: -------------------------------------------------------------------------------- 1 | name: Generate Readme (WIP) 2 | on: 3 | schedule: 4 | - cron: '0 0 * * *' 5 | 6 | jobs: 7 | list_folders_job: 8 | runs-on: ubuntu-latest 9 | steps: 10 | - name: Checkout code 11 | uses: actions/checkout@v4 12 | 13 | - name: Run generate-main-readme.sh (WIP) 14 | run: | 15 | cd tools 16 | ls -la 17 | chmod +x generate-main-readme.sh 18 | ./generate-main-readme.sh 19 | 20 | -------------------------------------------------------------------------------- /.github/workflows/run-excercise.yml: -------------------------------------------------------------------------------- 1 | name: Check Excercise Answers 2 | 3 | on: 4 | pull_request: 5 | branches: ['main'] 6 | paths: 7 | - 'topics/shell/excercise/answers/*.sh' 8 | 9 | jobs: 10 | run-scripts: 11 | runs-on: ubuntu-latest 12 | 13 | steps: 14 | - name: Checkout repository 15 | uses: actions/checkout@v4 16 | 17 | - name: Find and run shell scripts 18 | run: | 19 | for file in topics/shell/excercise/answers/*.sh; do 20 | if [ -f "$file" ]; then 21 | echo "Running $file..." 22 | chmod +x "$file" 23 | bash "$file" 24 | echo "Completed $file." 25 | fi 26 | done 27 | -------------------------------------------------------------------------------- /.github/workflows/run-shell.yml: -------------------------------------------------------------------------------- 1 | name: shell-script 2 | 3 | on: 4 | pull_request: 5 | branches: ['main'] 6 | paths: 7 | - 'topics/shell/' 8 | jobs: 9 | build: 10 | runs-on: ubuntu-latest 11 | 12 | steps: 13 | - uses: actions/checkout@v4 14 | - run: ls 15 | - run: echo "Running basic.sh script" 16 | - run: cd topics/shell/basic; bash ./basic.sh 17 | -------------------------------------------------------------------------------- /.github/workflows/shell-check.yml: -------------------------------------------------------------------------------- 1 | name: ShellCheck 2 | 3 | on: 4 | workflow_dispatch: 5 | ## Temporary disable run on main (avoid spamming notifications), only run on user trigger, to fix all existing issues first before adding to CI 6 | # pull_request: 7 | # branches: ["main"] 8 | 9 | jobs: 10 | shellcheck: 11 | name: ShellCheck 12 | runs-on: ubuntu-latest 13 | steps: 14 | - uses: actions/checkout@v4 15 | - name: Run ShellCheck 16 | uses: ludeeus/action-shellcheck@master 17 | 18 | -------------------------------------------------------------------------------- /.github/workflows/stale-issue.yml: -------------------------------------------------------------------------------- 1 | name: Close inactive issues and prs 2 | on: 3 | schedule: 4 | - cron: "30 1 * * *" 5 | workflow_dispatch: 6 | 7 | jobs: 8 | close-issues: 9 | runs-on: ubuntu-latest 10 | permissions: 11 | issues: write 12 | pull-requests: write 13 | steps: 14 | - uses: actions/stale@v9 15 | with: 16 | # Issues 17 | days-before-issue-stale: 90 18 | days-before-issue-close: 60 19 | stale-issue-label: "stale" 20 | stale-issue-message: "This issue is stale because it has been open for 90 days with no activity." 
21 | close-issue-message: "This issue was closed because it has been inactive for 60 days since being marked as stale." 22 | # PR 23 | days-before-pr-stale: 90 24 | days-before-pr-close: 60 25 | stale-pr-label: "stale" 26 | stale-pr-message: "This PR is stale because it has been open for 90 days with no activity." 27 | close-pr-message: "This PR was closed because it has been inactive for 60 days since being marked as stale." 28 | repo-token: ${{ secrets.GITHUB_TOKEN }} 29 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | nohup.out 2 | workspace 3 | -------------------------------------------------------------------------------- /CNAME: -------------------------------------------------------------------------------- 1 | devops-basics.thedevopshub.org -------------------------------------------------------------------------------- /CODEOWNERS: -------------------------------------------------------------------------------- 1 | * @tungbq 2 | -------------------------------------------------------------------------------- /assets/images/argocd/guestbook-ui-demo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/assets/images/argocd/guestbook-ui-demo.png -------------------------------------------------------------------------------- /assets/images/elk/elk_architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/assets/images/elk/elk_architecture.png -------------------------------------------------------------------------------- /assets/images/prometheus/prometheus-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/assets/images/prometheus/prometheus-architecture.png -------------------------------------------------------------------------------- /docs/GOODREAD.md: -------------------------------------------------------------------------------- 1 | # URLs 2 | - https://devopscube.com/devops-projects/ 3 | - https://aws.amazon.com/blogs/devops/ 4 | -------------------------------------------------------------------------------- /docs/README.md: -------------------------------------------------------------------------------- 1 | Collection of DevOps related docs. 2 | # Terraform 3 | - Terraform + Ansible: https://www.hashicorp.com/resources/ansible-terraform-better-together 4 | 5 | # Tech guide 6 | - Linode: https://www.linode.com/docs/guides/ 7 | 8 | # AWS DevOps Blog 9 | - https://aws.amazon.com/blogs/devops/ 10 | 11 | # More... 12 | - https://github.com/tungbq/devops-basic/blob/main/docs/GOODREAD.md -------------------------------------------------------------------------------- /getting-started/README.md: -------------------------------------------------------------------------------- 1 | # What is DevOps? 
2 | 3 | - https://aws.amazon.com/devops/what-is-devops (from AWS) 4 | - https://www.atlassian.com/devops (from Atlassian) 5 | - https://learn.microsoft.com/en-us/devops/what-is-devops (from Microsoft) 6 | 7 | # DevOps Roadmap 8 | 9 | - https://roadmap.sh/devops 10 | 11 | # More resource 12 | 13 | - **90DaysOfDevOps**: https://github.com/MichaelCade/90DaysOfDevOps 14 | - **devops-exercises**: https://github.com/bregman-arie/devops-exercises 15 | - **devops-resources**: https://github.com/bregman-arie/devops-resources 16 | -------------------------------------------------------------------------------- /projects/README.md: -------------------------------------------------------------------------------- 1 | # DevOps project 2 | - [ci-cd-observability](./ci-cd-observability) 3 | - [tungbq/devops-project](https://github.com/tungbq/devops-project) 4 | -------------------------------------------------------------------------------- /projects/ci-cd-observability/README.md: -------------------------------------------------------------------------------- 1 | # ci-cd-observability 2 | - Doc: https://www.elastic.co/guide/en/observability/current/ci-cd-observability.html 3 | -------------------------------------------------------------------------------- /projects/ci-cd-observability/setup_stack.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | CUR_DIR=$(pwd) 3 | 4 | 5 | echo "Start Jenkins..." 6 | cd "$CUR_DIR/../../topics/helm/hands-on/deploy-jenkins/" 7 | ls -la 8 | ./deploy.sh 9 | 10 | # echo "Start ELK" 11 | # cd "$CUR_DIR/../../topics/elk/helloworld/installation/docker-compose" 12 | # ls -la 13 | # ./install.sh 14 | -------------------------------------------------------------------------------- /templates/README.md: -------------------------------------------------------------------------------- 1 | # Templates for Various Content Categories 2 | 3 | ## Topic Template 4 | 5 | - View: [TOPIC_TEMPLATE.md](./TOPIC_TEMPLATE.md) 6 | -------------------------------------------------------------------------------- /templates/TOPIC_STRUCTURE.md: -------------------------------------------------------------------------------- 1 | # The topic tree 2 | 3 | - Create a `basic` and `advanced` folder under topic, for example `python` topic, we will have folder structure below: 4 | 5 | ``` 6 | ├── python 7 | │ ├── README.md 8 | │ ├── advanced 9 | │ │ └── examples 10 | │ │ ├── 01-factorial-calculator.py 11 | │ │ ├── 02-parse-json-file.py 12 | │ │ ├── 03-oop-with-animal.py 13 | │ │ ├── 04-api-call.py 14 | │ │ ├── README.md 15 | │ │ └── sample_files 16 | │ │ └── persional_info.json 17 | │ └── basic 18 | │ └── helloworld.py 19 | ``` 20 | -------------------------------------------------------------------------------- /templates/TOPIC_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # YOUR_TOPIC 4 | 5 | ## 1. What is YOUR_TOPIC? 6 | 7 | ### Overview 8 | 9 | - Describe YOUR_TOPIC. 10 | 11 | ### YOUR_TOPIC Architecture (Nice to have) 12 | 13 | - Include an architecture diagram for deeper understanding. 14 | 15 | ### Official website documentation of YOUR_TOPIC 16 | 17 | - Provide a link to the official documentation for YOUR_TOPIC. 18 | 19 | ## 2. Prerequisites (Optional) 20 | 21 | - Highlight any essential prerequisites necessary for understanding YOUR_TOPIC. 22 | - For instance, in the context of Kubernetes, prior knowledge of Docker might be required for comprehending containerization technology. 
23 | 24 | ## 3. Installation 25 | 26 | ### How to install YOUR_TOPIC? 27 | 28 | - Share installation steps or provide a link to detailed installation documentation. 29 | - Consider including instructions for both local and production environments. 30 | 31 | ## 4. Basics of YOUR_TOPIC 32 | 33 | ### Getting started with YOUR_TOPIC 34 | 35 | - To get started visit **topics//basic** 36 | (Checkout the [TOPIC_STRUCTURE.md](./TOPIC_STRUCTURE.md) to create the topic directory structure) 37 | 38 | 39 | 40 | ## 5. Beyond the Basics 41 | 42 | ### Exploring Advanced Examples 43 | 44 | - To get more advanced examples/hands on visit **topics//advanced** 45 | (Checkout the [TOPIC_STRUCTURE.md](./TOPIC_STRUCTURE.md) to create the topic directory structure) 46 | 47 | - Link to official advanced examples (if any) 48 | 49 | ## 6. More... 50 | 51 | ### Cheatsheet (Nice to have) 52 | 53 | - Offer an official cheatsheet link or create one for quick reference to features and functionalities. 54 | 55 | ### Recommended Books 56 | 57 | - Suggest quality reads related to YOUR_TOPIC for further learning and understanding. 58 | -------------------------------------------------------------------------------- /tools/generate-main-readme.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | pwd 3 | # Todo: Auto update content of main README.md file 4 | echo "hehe, to be implemented" 5 | -------------------------------------------------------------------------------- /topics/README.md: -------------------------------------------------------------------------------- 1 | # DevOps Topics 2 | 3 | - Explore a variety of DevOps topics in this section. 4 | - Utilize the [topic content template](../templates/TOPIC_TEMPLATE.md) for consistency. 5 | - Use [topic content structure](../templates/TOPIC_STRUCTURE.md) for consistency. 6 | -------------------------------------------------------------------------------- /topics/agile/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Agile? 2 | 3 | - The Agile methodology is a project management approach that involves breaking the project into phases and emphasizes continuous collaboration and improvement. Teams follow a cycle of planning, executing, and evaluating. 4 | - See: https://www.atlassian.com/agile 5 | - Wiki: https://en.wikipedia.org/wiki/Agile_software_development 6 | 7 | ## 2. Agile learning resource 8 | 9 | - Concept: https://www.simplilearn.com/tutorials/agile-scrum-tutorial/what-is-agile 10 | - Scrum: https://www.atlassian.com/agile/scrum 11 | - Kanban: https://www.atlassian.com/agile/kanban 12 | 13 | ## 3. Scrum 14 | 15 | - Concept: https://www.atlassian.com/agile/scrum 16 | - Sprints: https://www.atlassian.com/agile/scrum/sprints 17 | - Sprint planning: https://www.atlassian.com/agile/scrum/sprint-planning 18 | - Sprint reviews: https://www.atlassian.com/agile/scrum/sprint-reviews 19 | - Backlog: https://www.atlassian.com/agile/scrum/backlogs 20 | - Standup: https://www.atlassian.com/agile/scrum/standups 21 | - Scrum master: https://www.atlassian.com/agile/scrum/scrum-master 22 | - Retrospectives: https://www.atlassian.com/agile/scrum/retrospectives 23 | - ...More at: https://www.atlassian.com/agile/scrum 24 | 25 | ## 4. How do Agile and DevOps interrelate? 
26 | 27 | - https://www.atlassian.com/agile/devops 28 | -------------------------------------------------------------------------------- /topics/ansible/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Ansible 2 | 3 | ### Overview 4 | 5 | - See: https://opensource.com/resources/what-ansible 6 | 7 | ### Ansible Diagram 8 | 9 | ![ansible-diagram](https://www.interviewbit.com/blog/wp-content/uploads/2022/06/Why-use-Ansible-768x449.png) 10 | 11 | (Image source provided by https://www.interviewbit.com/blog/ansible-architecture/) 12 | 13 | ### Official website documentation of Ansible 14 | 15 | - Visit https://www.ansible.com/ 16 | 17 | ## 2. Installation 18 | 19 | ### How to install Ansible? 20 | 21 | - Follow this guide: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible 22 | 23 | ## 3. Basics of Ansible 24 | 25 | ### Getting started with Ansible 26 | 27 | - https://docs.ansible.com/ansible/latest/network/getting_started/basic_concepts.html#basic-concepts 28 | 29 | ### Ansible Helloworld ⭐ 30 | 31 | - Visit [ansible/basic/helloworld](./basic/helloworld/) 32 | 33 | ## 4. Beyond the Basics 34 | 35 | ### Exploring Advanced Examples 36 | 37 | - Checkout [advanced](./advanced/) 38 | 39 | ## 5. More 40 | 41 | ### Ansible playbook cheatsheet 42 | 43 | - https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#ansible-playbook 44 | 45 | ### Recommended Books 46 | 47 | - N/A 48 | -------------------------------------------------------------------------------- /topics/ansible/advanced/check-stats.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Check CPU and RAM usage 3 | hosts: localhost 4 | gather_facts: yes 5 | tasks: 6 | - name: Get system stats 7 | stat: 8 | path: "/proc/{{ item }}/" 9 | loop: 10 | - "cpuinfo" 11 | - "meminfo" 12 | 13 | - name: Print system stats 14 | debug: 15 | var: item.stat.content 16 | with_items: "{{ ansible_loop }}" 17 | when: item.stat.exists 18 | -------------------------------------------------------------------------------- /topics/ansible/advanced/ping-google.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Ping google.com 3 | hosts: localhost 4 | gather_facts: no 5 | tasks: 6 | - name: Ping google.com 7 | ping: 8 | data: "google.com" 9 | -------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/README.md: -------------------------------------------------------------------------------- 1 | # Sample project looks like: 2 | ``` 3 | my_ansible_project/ 4 | ├── roles/ 5 | │ ├── role1/ 6 | │ │ ├── tasks/ 7 | │ │ │ └── main.yml 8 | │ │ ├── handlers/ 9 | │ │ │ └── main.yml 10 | │ │ ├── templates/ 11 | │ │ ├── defaults/ 12 | │ │ └── meta/ 13 | │ └── role2/ 14 | │ ├── tasks/ 15 | │ ├── handlers/ 16 | │ ├── templates/ 17 | │ ├── defaults/ 18 | │ └── meta/ 19 | ├── main.yml (your main playbook file) 20 | ``` 21 | 22 | # How to run 23 | - Execute this command: `ansible-playbook -i inventory.ini -v main.yml` (tested on WSL2 Ubuntu) 24 | -------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/inventory.init: -------------------------------------------------------------------------------- 1 | [my-localhost] 2 | 127.0.0.1 ansible_connection=local 3 | 
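# ansible_connection=local runs tasks directly on the control node, so no SSH
# connection or remote credentials are needed for this sample project.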
-------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Main Playbook 3 | hosts: my-localhost 4 | tasks: 5 | - name: Include sample_role 6 | import_role: 7 | name: sample_role 8 | - name: Include ping_google 9 | import_role: 10 | name: ping_google 11 | - name: Include check_up_time 12 | import_role: 13 | name: check_up_time 14 | -------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/roles/check_up_time/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Execute uptime command 3 | command: uptime 4 | -------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/roles/ping_google/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | google_server: "google.com" 3 | -------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/roles/ping_google/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Ping google.com 3 | ping: 4 | data: "{{ google_server }}" 5 | -------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/roles/sample_role/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | test_default: "demo" 3 | -------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/roles/sample_role/handlers/README.md: -------------------------------------------------------------------------------- 1 | # What is Handlers? 2 | - In Ansible, handlers are a way to define a list of tasks that should be executed only if certain conditions are met, typically triggered by a notify directive in other tasks. Handlers are often used for actions that need to be taken only when specific changes occur during the playbook run, such as restarting services after configuration changes. -------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/roles/sample_role/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Greetings! 3 | ansible.builtin.command: echo "Hello from sample_role" 4 | -------------------------------------------------------------------------------- /topics/ansible/advanced/projects/sample_project/roles/sample_role/templates/README.md: -------------------------------------------------------------------------------- 1 | # Template? 2 | - In Ansible, templates are used to dynamically generate configuration files by inserting variable values and other dynamic content into a template file. 3 | - Templates are a powerful way to manage configuration files for various services and applications across different hosts. 
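- As a minimal sketch of the idea (the file name `motd.j2` and the `greeting_name` variable below are made up for illustration and are not part of this role), a Jinja2 template can be rendered with the `ansible.builtin.template` module:

```bash
# Create a tiny Jinja2 template and render it ad hoc with the template module.
echo "Welcome, {{ greeting_name }}!" > motd.j2
ansible localhost -m ansible.builtin.template \
  -a "src=motd.j2 dest=/tmp/motd.rendered" \
  -e greeting_name=DevOps
cat /tmp/motd.rendered   # -> Welcome, DevOps!
```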
4 | -------------------------------------------------------------------------------- /topics/ansible/advanced/with-docker/AnsibleEnv.dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:latest 2 | 3 | RUN apt-get update && \ 4 | apt-get install -y software-properties-common && \ 5 | apt-add-repository --yes --update ppa:ansible/ansible && \ 6 | apt-get install -y ansible 7 | CMD ["/bin/bash"] 8 | -------------------------------------------------------------------------------- /topics/ansible/advanced/with-docker/ansible-practice.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo ">>> Building ansible runner..." 3 | docker build -t ansible-runner -f AnsibleEnv.dockerfile . 4 | echo ">>> Execute ansible playbook (ping google)..." 5 | docker run --rm -v "$(pwd)/basic":/basic -w /basic ansible-runner ansible-playbook ping-google.yaml 6 | echo ">>> Execute ansible playbook (check stats)..." 7 | docker run --rm -v "$(pwd)/basic":/basic -w /basic ansible-runner ansible-playbook check-stats.yaml 8 | -------------------------------------------------------------------------------- /topics/ansible/basic/README.md: -------------------------------------------------------------------------------- 1 | # Initialize Ansible learning place 2 | ## Install ansible: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html 3 | -------------------------------------------------------------------------------- /topics/ansible/basic/helloworld/README.md: -------------------------------------------------------------------------------- 1 | # Ansible helloworld example 2 | 3 | - Inventory file: [first-inventory.ini](./first-inventory.ini) 4 | - First playbook file: [first-playbook.yml](./first-playbook.yml) 5 | 6 | - Run the first playbook 7 | 8 | ```bash 9 | # Navigate to code location under `devops-basics` repo 10 | cd devops-basics/topics/ansible/basic/helloworld 11 | 12 | # Run playbook 13 | ansible-playbook -i first-inventory.ini first-playbook.yml 14 | ``` 15 | 16 | - Or use the demo script if you want: [ansible-helloworld.sh](./ansible-helloworld.sh) 17 | -------------------------------------------------------------------------------- /topics/ansible/basic/helloworld/ansible-helloworld.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | console_log() { 4 | echo ">>> [Ansible] $1" 5 | } 6 | 7 | check_tool_exist() { 8 | tool_name="ansible" 9 | if command -v $tool_name &>/dev/null; then 10 | echo "$tool_name is installed." 11 | else 12 | echo "$tool_name is not installed. 
Please install first" 13 | fi 14 | } 15 | 16 | console_log "Checking if Ansile is installed" 17 | check_tool_exist 18 | 19 | console_log "Checking Ansible version" 20 | ansible --version 21 | 22 | console_log "Checking inventory host" 23 | ansible all --list-hosts -i first-inventory.ini 24 | 25 | console_log "Send ping command to the host" 26 | ansible all -m ping -i first-inventory.ini 27 | 28 | console_log "Run the first playbook - get uptime and OS release on localhost" 29 | # Note: `-v` flag if you want to verbose the ansible execution result 30 | ansible-playbook -i first-inventory.ini first-playbook.yml 31 | -------------------------------------------------------------------------------- /topics/ansible/basic/helloworld/first-inventory.ini: -------------------------------------------------------------------------------- 1 | [my-localhost] 2 | 127.0.0.1 ansible_connection=local 3 | -------------------------------------------------------------------------------- /topics/ansible/basic/helloworld/first-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Basic tasks 3 | hosts: my-localhost 4 | tasks: 5 | - name: Execute uptime command 6 | command: uptime 7 | register: uptime_result 8 | - debug: var=uptime_result.stdout_lines 9 | 10 | - name: Check OS release 11 | command: cat /etc/os-release 12 | register: os_result 13 | - debug: var=os_result.stdout_lines 14 | -------------------------------------------------------------------------------- /topics/apachetomcat/basic/README.md: -------------------------------------------------------------------------------- 1 | # Apache Tomcat Basics 2 | 3 | This section covers the fundamental concepts and steps to get started with Apache Tomcat. Learn how to set up, configure, and deploy web applications. 4 | 5 | --- 6 | 7 | ## 1. Getting Started with Apache Tomcat 8 | 9 | ### Starting the Server 10 | 11 | 1. Open the terminal and navigate to the `bin` directory of your Tomcat installation. 12 | 2. Run the startup script: 13 | - **Windows**: `startup.bat` 14 | - **Linux/Mac**: `./startup.sh` 15 | 16 | ### Stopping the Server 17 | 18 | 1. Navigate to the `bin` directory. 19 | 2. Run the shutdown script: 20 | - **Windows**: `shutdown.bat` 21 | - **Linux/Mac**: `./shutdown.sh` 22 | 23 | --- 24 | 25 | ## 2. Configuring Apache Tomcat 26 | 27 | ### Change the Default Port 28 | 29 | 1. Open the `server.xml` file located in the `conf` directory. 30 | 2. Locate the `` element and change the `port` attribute: 31 | 32 | ```xml 33 | 36 | ``` 37 | 38 | --- 39 | 40 | ## 3. Deploying a Sample Application 41 | 42 | - Deploy a Pre-Built `.war` File 43 | - Download a sample `.war` file 44 | - Download the sample application Sample Web Application at https://tomcat.apache.org/tomcat-10.1-doc/appdev/sample/ 45 | - Deploy to Tomcat 46 | - Copy the `.war` file into the webapps directory of your Tomcat installation. 47 | - Access the Application: Open a browser and navigate to http://localhost:8080/sample. 48 | -------------------------------------------------------------------------------- /topics/architecture/README.md: -------------------------------------------------------------------------------- 1 | # Architecture Center 2 | 3 | ## 1. AWS Architecture Center 4 | 5 | - Architecture Center: https://aws.amazon.com/architecture 6 | - AWS official youtube channel: [This is my architecture series](https://youtube.com/playlist?list=PLhr1KZpdzukdeX8mQ2qO73bg6UKQHYsHb&si=ztggdByRdqW9tKvl) 7 | 8 | ## 2. 
Azure Architecture Center 9 | 10 | - Architecture Center: https://learn.microsoft.com/en-us/azure/architecture/ 11 | - Browse Architecture Center: https://learn.microsoft.com/en-us/azure/architecture/browse/ 12 | 13 | ## 3. Trunk Based Development 14 | 15 | A source-control branching model, where developers collaborate on code in a single branch called ‘trunk’ \*, resist any pressure to create other long-lived development branches by employing documented techniques. They therefore avoid merge hell, do not break the build, and live happily ever after. 16 | 17 | - https://trunkbaseddevelopment.com/ 18 | 19 | ## 4. Deployment 20 | 21 | - Deployment Choice: Code Promotion vs Artifact Promotion: https://hackernoon.com/deployment-choice-code-promotion-vs-artifact-promotion 22 | 23 | ## 5. Versioning 24 | 25 | - Kubernetes Release Versioning: https://github.com/kubernetes/sig-release/blob/master/release-engineering/versioning.md#kubernetes-release-versioning 26 | - Semantic Versioning 2.0.0: https://semver.org/ 27 | -------------------------------------------------------------------------------- /topics/argocd/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is ArgoCD? 2 | 3 | - https://argo-cd.readthedocs.io/en/stable/#what-is-argo-cd 4 | 5 | ### Overview 6 | 7 | Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. 8 | 9 | ### ArgoCD Architecture 10 | 11 | ![argocd_architecture](https://argo-cd.readthedocs.io/en/stable/assets/argocd_architecture.png) 12 | 13 | ### Official website documentation of docker 14 | 15 | - Access the complete [official ArgoCD repo](https://github.com/argoproj/argo-cd) for detailed information and references. 16 | 17 | ## 2. Prerequisites 18 | 19 | - Familiarity with containerization concepts and basic Linux command-line usage would be beneficial for understanding ArgoCD. 20 | 21 | ## 3. Installation 22 | 23 | ### How to install ArgoCD? 24 | 25 | - Follow the steps outlined in the [ArgoCD installation documentation](https://argo-cd.readthedocs.io/en/stable/operator-manual/installation/) for both local development and production environments. 26 | 27 | ## 4. Basics of ArgoCD 28 | 29 | ### Getting started with ArgoCD 30 | 31 | - Refer to the [official ArgoCD getting started documentation](https://argo-cd.readthedocs.io/en/stable/getting_started/) for a comprehensive introduction. 32 | 33 | ### ArgoCD Hello World 34 | 35 | - Run the [basic/](./basic/install_argocd.sh) script to execute a simple ArgoCD "Hello World" demonstration. 36 | 37 | ## 5. Beyond the Basics 38 | 39 | ### Hands-On Example 40 | 41 | - Explore a practical hands-on example in the [argocd-example-apps repo](https://github.com/argoproj/argocd-example-apps) to quickly start using ArgoCD. 42 | 43 | ## 6. 
More 44 | 45 | ### ArgoCD Cheatsheet 46 | 47 | - [commands/argocd](https://argo-cd.readthedocs.io/en/stable/user-guide/commands/argocd/) 48 | 49 | ### Recommended Books 50 | 51 | - N/A 52 | -------------------------------------------------------------------------------- /topics/argocd/basic/README.md: -------------------------------------------------------------------------------- 1 | # Welcome to Argo CD 2 | 3 | ## Install Argo CD and the CLI 4 | 5 | - Run `./install_argocd.sh` 6 | 7 | ## Access the Web UI 8 | 9 | ### Initial password 10 | 11 | - Run `argocd admin initial-password -n argocd` to get the initial password 12 | - Login with admin and the above password 13 | 14 | ## Deloy your first application Via UI 15 | 16 | ### Create An Application From A Git Repository 17 | 18 | #### Check service 19 | 20 | `kubectl get services` 21 | 22 | #### Port forwarding to check the app 23 | 24 | - Syntax: (kubectl port-forward service/ :) 25 | - Run cmd: `kubectl port-forward service/guestbook-ui 8082:80` 26 | - NOTE: You can replace 8082 by your own port depends on your enviroment 27 | 28 | #### Verify the result 29 | 30 | - Visit http://localhost:8082 31 | - Or via cmd: `curl localhost:8082` 32 | - Once the app is deployed successfully and service up and running with port-forwarding, we should see something like 33 | ![guestbook-ui-demo](../../../assets/images/argocd/guestbook-ui-demo.png) 34 | 35 | ## Working with the ArgoCD CLI 36 | 37 | - Check out: https://argo-cd.readthedocs.io/en/stable/getting_started/#creating-apps-via-cli 38 | -------------------------------------------------------------------------------- /topics/argocd/basic/install_argocd.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | install_argocd_cli() { 4 | pushd /tmp 5 | curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64 6 | ls -la 7 | sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd 8 | rm argocd-linux-amd64 9 | popd 10 | } 11 | # Deploy argocd 12 | kubectl create namespace argocd 13 | kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml 14 | 15 | # Check pods 16 | kubectl get pods -n argocd 17 | 18 | # Check service 19 | kubectl get svc -n argocd 20 | 21 | # Install the CLI 22 | install_argocd_cli 23 | 24 | # Change the argocd-server service type to LoadBalancer 25 | kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}' 26 | 27 | # Port forwarding 28 | kubectl port-forward svc/argocd-server -n argocd 8080:443 29 | -------------------------------------------------------------------------------- /topics/aws/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is AWS? 2 | 3 | - https://aws.amazon.com/what-is-aws/ 4 | 5 | ### Overview 6 | 7 | Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud, offering over 200 fully featured services from data centers globally. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, become more agile, and innovate faster. 8 | 9 | ### AWS Architecture 10 | 11 | - N/A 12 | 13 | ### Official website documentation of AWS 14 | 15 | - https://docs.aws.amazon.com/ 16 | 17 | ## 2. Prerequisites 18 | 19 | - Familiarity with cloud concepts and basic Linux command-line usage would be beneficial for understanding AWS. 
20 | 21 | ## 3. Installation 22 | 23 | ### How to install AWS? 24 | 25 | - No need to install AWS, it's cloud environment 26 | 27 | ## 4. Basics of AWS 28 | 29 | ### 1. Getting started with AWS 30 | 31 | - Refer to the [official AWS getting started documentation](https://aws.amazon.com/getting-started/) for a comprehensive introduction. 32 | 33 | ### 2. AWS Hello World 34 | 35 | - Check the [basic/](./basic/) directory to create a simple AWS EC2. 36 | 37 | ## 5. Beyond the Basics 38 | 39 | ### Hands-On Example 40 | 41 | - Explore a practical hands-on example in the [AWS hands-on](https://aws.amazon.com/getting-started/hands-on) to quickly start using AWS. 42 | 43 | ## 6. More 44 | 45 | ### AWS learning resource 46 | 47 | - https://github.com/tungbq/AWS-LearningResource 48 | 49 | ### Recommended Books 50 | 51 | - N/A 52 | -------------------------------------------------------------------------------- /topics/aws/basic/README.md: -------------------------------------------------------------------------------- 1 | # AWS Basic 2 | 3 | ## Get started with Amazon EC2 Linux instances 4 | 5 | - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html 6 | -------------------------------------------------------------------------------- /topics/azure/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Azure? 2 | 3 | - https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-azure/ 4 | 5 | ### Overview 6 | 7 | The Azure cloud platform is more than 200 products and cloud services designed to help you bring new solutions to life—to solve today’s challenges and create the future. Build, run, and manage applications across multiple clouds, on-premises, and at the edge, with the tools and frameworks of your choice. 8 | 9 | 10 | ### Official website documentation of Azure 11 | 12 | - https://learn.microsoft.com/en-us/azure/?product=popular 13 | 14 | ## 2. Prerequisites 15 | 16 | - Familiarity with cloud concepts and basic Linux command-line usage would be beneficial for understanding Azure. 17 | 18 | ## 3. Installation 19 | 20 | ### How to install Azure? 21 | 22 | - No need to install Azure, it's cloud environment 23 | 24 | ## 4. Basics of Azure 25 | 26 | ### 1. Getting started with Azure 27 | 28 | - https://portal.azure.com/?quickstart=true#view/Microsoft_Azure_Resources/QuickstartCenterBlade 29 | 30 | ### 2. Azure Hello World 31 | 32 | - Check the [**basic/**](./basic/) directory to create some Azure resources 33 | 34 | ## 5. Beyond the Basics 35 | 36 | ### Azure Architecture Center 37 | 38 | - https://learn.microsoft.com/en-us/azure/architecture/ 39 | 40 | ## 6. More 41 | 42 | ### Azure learning resource 43 | 44 | - https://github.com/TheDevOpsHub/AzureHub 45 | 46 | ### Recommended Books 47 | 48 | - N/A 49 | -------------------------------------------------------------------------------- /topics/azure/basic/README.md: -------------------------------------------------------------------------------- 1 | # Getting started with Azure 2 | - Portal: https://portal.azure.com/ 3 | - Create VM: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/quick-create-portal 4 | - Create blob storage: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal 5 | -------------------------------------------------------------------------------- /topics/azuredevops/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Azure DevOps? 
2 | 3 | ### Overview 4 | 5 | Azure DevOps supports a collaborative culture and set of processes that bring together developers, project managers, and contributors to develop software. 6 | It allows organizations to create and improve products at a faster pace than they can with traditional software development approaches. 7 | 8 | ### Azure DevOps workflow 9 | 10 | N/A 11 | 12 | ### Official website documentation of Azure DevOps 13 | 14 | - https://learn.microsoft.com/en-us/azure/devops 15 | 16 | ## 2. Prerequisites 17 | 18 | - Understanding the basic CICD concept would be helpful 19 | 20 | ## 3. Installation 21 | 22 | ### How to install Azure DevOps? 23 | 24 | - No need to install, just use: https://dev.azure.com/ 25 | 26 | ## 4. Basics of Azure DevOps 27 | 28 | ### Azure DevOps quick start 29 | 30 | - See [Create first pipeline](https://learn.microsoft.com/en-us/azure/devops/pipelines/create-first-pipeline) 31 | 32 | ### Azure DevOps Hello World 33 | 34 | - Check the [basic/](./basic/) directory to create a simple Azure DevOps pipeline. 35 | 36 | ## 5. Beyond the Basics 37 | 38 | ### Hands-On Example 39 | 40 | - Check the [advanced/](./advanced/) directory for more Azure DevOps examples. 41 | 42 | ## 6. More... 43 | 44 | ### Recommended Books 45 | 46 | - TODO 47 | -------------------------------------------------------------------------------- /topics/azuredevops/advanced/README.md: -------------------------------------------------------------------------------- 1 | # To be implemented 2 | -------------------------------------------------------------------------------- /topics/azuredevops/basic/first-azure-pipelines.yml: -------------------------------------------------------------------------------- 1 | # Starter pipeline 2 | # Start with a minimal pipeline that you can customize to build and deploy your code. 3 | # Add steps that build, run tests, deploy, and more: 4 | # https://aka.ms/yaml 5 | 6 | trigger: 7 | - main 8 | 9 | pool: 10 | vmImage: ubuntu-latest 11 | 12 | steps: 13 | - script: echo Hello, world! 14 | displayName: "Run a one-line script" 15 | 16 | - script: | 17 | echo Add other tasks to build, test, and deploy your project. 18 | echo See https://aka.ms/yaml 19 | displayName: "Run a multi-line script" 20 | -------------------------------------------------------------------------------- /topics/cloudflare/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Cloudflare? 2 | 3 | ### Overview 4 | 5 | - Cloudflare is one of the biggest networks operating on the Internet. People use Cloudflare services for the purposes of increasing the security and performance of their web sites and services. 6 | 7 | ### Official website of Cloudflare 8 | 9 | - https://www.cloudflare.com/ 10 | 11 | ### Official documentation website of Cloudflare 12 | 13 | - https://developers.cloudflare.com/ 14 | 15 | ## 2. Prerequisites 16 | 17 | - Basic networking, DNS, webserver knowledge are helpful. 18 | 19 | ## 3. Installation 20 | 21 | ### How to use Cloudflare? 22 | 23 | - Start using Cloudflare service at: https://dash.cloudflare.com/ 24 | 25 | ## 4. Basics of Cloudflare 26 | 27 | ### Cloudflare getting started 28 | 29 | - Begginner's Guide: https://developers.cloudflare.com/learning-paths/get-started/ 30 | 31 | ### Cloudflare Hands on 32 | 33 | - See: [basic](./basic/) 34 | 35 | ## 5. More... 
36 | 37 | ### Cloudflare cheatsheet 38 | 39 | - N/A 40 | 41 | ### Reference Architecture 42 | 43 | - https://developers.cloudflare.com/reference-architecture/ 44 | 45 | ### Recommended Books 46 | 47 | - N/A 48 | -------------------------------------------------------------------------------- /topics/cloudflare/basic/README.md: -------------------------------------------------------------------------------- 1 | ## Cloudflare basics practice 2 | 3 | - https://developers.cloudflare.com/learning-paths/get-started/ 4 | -------------------------------------------------------------------------------- /topics/coding/README.md: -------------------------------------------------------------------------------- 1 | # Coding 2 | 3 | Coding resources for DevOps 4 | 5 | ## 1. Some resource to level-up your coding skill and mindset 6 | 7 | ### Design Pattern 8 | 9 | - https://refactoring.guru/design-patterns/catalog 10 | 11 | ### The Twelve-Factor App 12 | 13 | - https://12factor.net/ 14 | 15 | ### OOP Concepts 16 | 17 | - https://docs.oracle.com/javase/tutorial/java/concepts/ 18 | -------------------------------------------------------------------------------- /topics/coding/practice.md: -------------------------------------------------------------------------------- 1 | # Coding practice 2 | - Leetcode: https://leetcode.com/ 3 | -------------------------------------------------------------------------------- /topics/docker/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Docker? 2 | 3 | ### Overview 4 | 5 | Docker is a platform designed to make it easier to create, deploy, and run applications using containers. It allows for packaging applications and their dependencies into containers. 6 | 7 | ### Docker Architecture 8 | 9 | For a deeper understanding, refer to the [Docker Architecture documentation](https://docs.docker.com/get-started/overview/#docker-architecture). 10 | 11 | ### Official website documentation of docker 12 | 13 | - Access the complete [official Docker documentation](https://docs.docker.com) for detailed information and references. 14 | 15 | ## 2. Prerequisites 16 | 17 | - Familiarity with containerization concepts and basic Linux command-line usage would be beneficial for understanding Docker. 18 | 19 | ## 3. Installation 20 | 21 | ### How to install Docker? 22 | 23 | - Follow the steps outlined in the [Docker installation documentation](https://docs.docker.com/engine/install/) for both local development and production environments. 24 | 25 | ## 4. Basics of Docker 26 | 27 | ### Getting started with Docker 28 | 29 | - Refer to the [official Docker getting started documentation](https://docs.docker.com/get-started/) for a comprehensive introduction. 30 | 31 | ### Docker Hello World 32 | 33 | - Run the [basic/docker-helloworld.sh](./basic/docker-helloworld.sh) script to execute a simple Docker "Hello World" demonstration. 34 | 35 | ### Top Docker commands 36 | 37 | - Checkout [basic/top-docker-cmd.md](./basic/top-docker-cmd.md) 38 | 39 | ## 5. Beyond the Basics 40 | 41 | ### Hands-On Example 42 | 43 | - Explore a practical hands-on example in the [advanced directory](./advanced/) to quickly start using Docker. 44 | 45 | ## 6. More 46 | 47 | ### Docker Cheatsheet 48 | 49 | - Use the [Docker cheatsheet](https://docs.docker.com/get-started/docker_cheatsheet.pdf) as a quick reference guide for Docker commands and functionalities. 
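- A few everyday commands as a quick sampler (placeholders like `<container>` are illustrative; see the cheatsheet and [basic/top-docker-cmd.md](./basic/top-docker-cmd.md) for more):

```bash
docker ps -a                     # list containers, including stopped ones
docker images                    # list local images
docker logs <container>          # show a container's stdout/stderr
docker exec -it <container> sh   # open a shell inside a running container
docker system prune              # remove stopped containers, unused networks, and dangling images
```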
50 | 51 | ### Recommended Books 52 | 53 | - _Docker in Action, Second Edition_ by Jeff Nickoloff (Author), Stephen Kuenzli (Author). Link [Docker in Action](https://www.amazon.com/Docker-Action-Jeff-Nickoloff/dp/1617294764) 54 | -------------------------------------------------------------------------------- /topics/docker/advanced/python-sample/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.8 2 | WORKDIR / 3 | COPY requirements.txt . 4 | # RUN pip install -r requirements.txt 5 | COPY . . 6 | CMD [ "python", "random.py" ] 7 | -------------------------------------------------------------------------------- /topics/docker/advanced/python-sample/README.md: -------------------------------------------------------------------------------- 1 | [Link to practice script](practice.sh) 2 | -------------------------------------------------------------------------------- /topics/docker/advanced/python-sample/basic.py: -------------------------------------------------------------------------------- 1 | print("Hello") -------------------------------------------------------------------------------- /topics/docker/advanced/python-sample/practice.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Running python script without building docker image 4 | echo "Running python script without building docker image" 5 | docker run --rm -v $(pwd):/app -w /app python:3.9-slim-buster python basic.py 6 | echo $? 7 | # Build and run docker container 8 | echo "Build and run docker container" 9 | IMAGE_NAME="random_python:latest" 10 | docker build -t $IMAGE_NAME . 11 | docker run --rm $IMAGE_NAME 12 | -------------------------------------------------------------------------------- /topics/docker/advanced/python-sample/random.py: -------------------------------------------------------------------------------- 1 | import datetime 2 | now = datetime.datetime.now() 3 | print ("Current date and time : ") 4 | print (now.strftime("%Y-%m-%d %H:%M:%S")) 5 | -------------------------------------------------------------------------------- /topics/docker/advanced/python-sample/requirements.txt: -------------------------------------------------------------------------------- 1 | random 2 | -------------------------------------------------------------------------------- /topics/docker/basic/docker-helloworld.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | console_log () { 4 | echo ">>> [Docker] $1" 5 | } 6 | 7 | console_log "Welcome to docker!" 8 | 9 | console_log "Pull latest ubuntu docker image" 10 | docker pull ubuntu:latest 11 | 12 | console_log "Check docker images" 13 | docker images | grep ubuntu 14 | 15 | console_log "Run docker container and check OS release" 16 | docker run --rm ubuntu:latest cat /etc/os-release 17 | 18 | -------------------------------------------------------------------------------- /topics/dynatrace/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Dynatrace? 2 | 3 | ### Overview 4 | 5 | - Dynatrace is a revolutionary platform that delivers analytics and automation for unified observability and security. 
6 | 7 | - Source: https://docs.dynatrace.com/docs/get-started/what-is-dynatrace 8 | 9 | ### Official Website of Dynatrace 10 | 11 | - https://www.dynatrace.com/ 12 | 13 | ### Official Documentation of Dynatrace 14 | 15 | - https://docs.dynatrace.com/docs 16 | 17 | ### What you can do with Dynatrace 18 | 19 | drawing 20 | 21 | - See details at: https://docs.dynatrace.com/docs/shortlink/intro#get-started-with-the-platform 22 | 23 | ## 2. Prerequisites 24 | 25 | - Basic knowledge of application and infrastructure monitoring. 26 | 27 | ## 3. Installation 28 | 29 | ### How to Install Dynatrace? 30 | 31 | 1. **Sign up for Dynatrace** if you don't already have an account, sign up for a free trial 32 | 33 | - https://www.dynatrace.com/trial 34 | 35 | 2. **Install OneAgent** by following the official steps to enable monitoring for hosts, applications, and containers: 36 | - [Dynatrace OneAgent Installation Guide](https://docs.dynatrace.com/docs/setup-and-configuration/dynatrace-oneagent/installation-and-operation) 37 | 38 | ## 4. Basics of Dynatrace 39 | 40 | ### Get started with Dynatrace 41 | 42 | - https://docs.dynatrace.com/docs/get-started 43 | 44 | ### Dynatrace quick start guide 45 | 46 | To start using Dynatrace, just create a free trial account, install OneAgent on a host, and see how Dynatrace immediately shows you the health and performance of that host. 47 | 48 | - Follow this useful guide: https://docs.dynatrace.com/docs/get-started/get-started 49 | 50 | ### Dynatrace Hands-On 51 | 52 | - See: [basic](./basic/) 53 | 54 | ## 5. More... 55 | 56 | ### Dynatrace Cheatsheet 57 | 58 | - N/A 59 | 60 | ### Recommended Books 61 | 62 | - N/A 63 | -------------------------------------------------------------------------------- /topics/dynatrace/basic/README.md: -------------------------------------------------------------------------------- 1 | ## Dynatrace Basics 2 | You can get started and hands on with Dynatrace via following guide: 3 | - Dynatrace quick start guide: https://docs.dynatrace.com/docs/discover-dynatrace/get-started/get-started 4 | - Create an alerting profile: https://docs.dynatrace.com/docs/analyze-explore-automate/notifications-and-alerting/alerting-profiles#create-an-alerting-profile 5 | - Send Dynatrace notifications via email: https://docs.dynatrace.com/docs/analyze-explore-automate/notifications-and-alerting/problem-notifications/email-integration 6 | - Identity and access management: https://docs.dynatrace.com/docs/manage/identity-access-management 7 | -------------------------------------------------------------------------------- /topics/elk/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is ELK? 2 | 3 | - https://www.elastic.co/what-is/elk-stack 4 | 5 | ### Overview 6 | 7 | - Elasticsearch is the heart of the free and open Elastic Stack 8 | - Discover, iterate, and resolve with ES|QL on Kibana 9 | 10 | ### ELK Architecture 11 | 12 | ![elk_architecture](../../assets/images/elk/elk_architecture.png) 13 | 14 | ### Official website documentation of ELK 15 | 16 | - https://www.elastic.co/guide/index.html 17 | 18 | ## 2. Prerequisites 19 | 20 | - N/A 21 | 22 | ## 3. Installation 23 | 24 | ### How to install ELK? 25 | 26 | - Installing the Elastic Stack: https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html 27 | 28 | ## 4. 
Basics of ELK 29 | 30 | ### Getting started with ELK 31 | 32 | - Refer to the [Official ELK getting started documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html) for a comprehensive introduction. 33 | 34 | ### ELK Hello World 35 | 36 | - Check the [helloworld/](./basic/helloworld/) directory to create a simple ELK demo. 37 | 38 | ## 5. Beyond the Basics 39 | 40 | ### Hands-On Example 41 | 42 | - Explore a practical hands-on example in the [ELK hands-on](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html) for more ELK concepts 43 | 44 | ## 6. More 45 | 46 | ### ELK learning resource 47 | 48 | - N/A 49 | 50 | ### Recommended Books 51 | 52 | - N/A 53 | -------------------------------------------------------------------------------- /topics/elk/advanced/REAMDE.md: -------------------------------------------------------------------------------- 1 | # TODO 2 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/README.md: -------------------------------------------------------------------------------- 1 | # Install ELK stack 2 | 3 | ## On docker - via Docker compose 4 | 5 | - See run `cd installation/docker-compose; ./install.sh` 6 | 7 | ## On K8s - via Helm (TODO: Move to helm folder?) 8 | 9 | - Work in-progress 10 | 11 | # Practice with ELK 12 | 13 | - Install ELK, via [docker-compose](./installation/docker-compose/) 14 | - Ingest metrics, via [metric-beat](./metric-beat/) 15 | - Explore the data via Kibana, via http://localhost:5601 16 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/elasalert/README.md: -------------------------------------------------------------------------------- 1 | # Elastalert 2 | 3 | ## Prerequisite 4 | 5 | - The Elastic stack must be installed first, see [elk](../../../../elk/) 6 | 7 | ## Elastalert install 8 | 9 | - Via docker compose, run `./deploy.sh` 10 | 11 | ## Elastalert rules 12 | 13 | - Find current working rule at [rules](./rules/) 14 | - `wip_rules` is the folder to store the rules that are not completed 15 | 16 | ## Run rules test 17 | 18 | - `cd tests; ./test_rules.sh` 19 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/elasalert/deploy.sh: -------------------------------------------------------------------------------- 1 | 2 | echo "up docker stack..." 
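# Note: recent Docker installations ship Compose v2 as "docker compose" (no hyphen);
# if the "docker-compose" binary used below is not available, swap it for "docker compose".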
3 | docker-compose up -d 4 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/elasalert/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | services: 3 | elastalert: 4 | image: ghcr.io/jertel/elastalert2/elastalert2 5 | container_name: elastalert 6 | restart: always 7 | network_mode: host 8 | volumes: 9 | - ./config.yaml:/opt/elastalert/config.yaml 10 | - ./rules:/opt/elastalert/rules 11 | command: ['--verbose'] 12 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/elasalert/rules/metricbeat_cpu_rule.yaml: -------------------------------------------------------------------------------- 1 | es_host: localhost 2 | es_port: 9200 3 | 4 | name: Metricbeat CPU Usage Alert # Updated rule name 5 | type: metric_aggregation 6 | 7 | index: metricbeat-* 8 | 9 | buffer_time: 10 | minutes: 2 11 | 12 | filter: 13 | - query_string: 14 | query: "event.dataset:system.cpu" # Update query to monitor CPU usage 15 | 16 | metric_agg_key: "system.cpu.total.norm.pct" # Update metric_agg_key for CPU usage 17 | metric_agg_type: max 18 | metric_agg_bucket_path: "system.cpu.total.norm.pct" # Update bucket path for CPU usage 19 | metric_agg_query_key: "system.cpu.total.norm.pct" # Update query key for CPU usage 20 | 21 | bucket_interval: 22 | seconds: 10 23 | 24 | max_threshold: 0.1 # You can adjust this threshold as needed 25 | 26 | # (Required) 27 | # The alert is used when a match is found 28 | alert: 29 | - "debug" # You can replace "debug" with the desired action for the alert 30 | # alert: 31 | # - name: "log_alert" 32 | # webhook: 33 | # method: POST 34 | # host: localhost 35 | # port: 9200 36 | # path: /_ingest/pipeline/default 37 | # scheme: http 38 | # body: '{"message": "High CPU Usage Alert: {{ctx.metadata.name}} - {{ctx.metadata.description}}"}' 39 | # headers: 40 | # Content-Type: application/json 41 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/elasalert/tests/test_rules.sh: -------------------------------------------------------------------------------- 1 | # document: https://elastalert2.readthedocs.io/en/latest/ruletypes.html#testing-your-rule 2 | echo "Running rules check..." 3 | docker exec -it elastalert elastalert-test-rule "/opt/elastalert/rules/metricbeat_cpu_rule.yaml" 4 | 5 | echo "Running rules check with local data..." 
6 | echo "--Copy test data file" 7 | docker cp data/json_debug.json elastalert:/tmp/json_debug.json 8 | echo "--Execute test" 9 | docker exec -it elastalert elastalert-test-rule --data /tmp/json_debug.json "/opt/elastalert/rules/metricbeat_cpu_rule.yaml" 10 | 11 | echo "--Execute test with time range" 12 | docker exec -it elastalert elastalert-test-rule --data /tmp/json_debug.json "/opt/elastalert/rules/metricbeat_cpu_rule.yaml" --start 2023-09-25T14:49:08.682Z --end 2023-09-26T00:59:10.759Z 13 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/elasalert/wip_rules/example_single_metric_agg.yaml: -------------------------------------------------------------------------------- 1 | name: Metricbeat CPU Spike Rule 2 | type: metric_aggregation 3 | 4 | index: metricbeat-* 5 | 6 | buffer_time: 7 | hours: 1 8 | 9 | metric_agg_key: host.cpu.usage 10 | metric_agg_type: avg 11 | query_key: host.name 12 | 13 | bucket_interval: 14 | minutes: 5 15 | 16 | sync_bucket_interval: true 17 | 18 | min_threshold: 0.1 # Change this threshold to 0.1 (10%) 19 | max_threshold: 1.0 # Set the max threshold to 1.0 to always trigger the alert 20 | 21 | filter: 22 | - term: 23 | metricset.name: cpu 24 | 25 | # (Required) 26 | # The alert is used when a match is found 27 | alert: 28 | - "debug" 29 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/elasalert/wip_rules/metricbeat_disk_space_rule.yaml: -------------------------------------------------------------------------------- 1 | es_host: localhost 2 | es_port: 9200 3 | 4 | name: Metricbeat Disk Space Alert 5 | type: metric_aggregation 6 | 7 | index: metricbeat-* 8 | 9 | buffer_time: 10 | minutes: 5 11 | 12 | filter: 13 | - query_string: 14 | query: "event.dataset:system.diskio" 15 | 16 | metric_agg_key: "system.filesystem.used.pct" 17 | metric_agg_type: max 18 | metric_agg_bucket_path: "system.filesystem.used.pct" 19 | metric_agg_query_key: "system.filesystem.device_name.keyword" 20 | 21 | 22 | bucket_interval: 23 | minutes: 1 24 | 25 | max_threshold: 0.8 26 | 27 | # (Required) 28 | # The alert is used when a match is found 29 | alert: 30 | - "debug" 31 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/installation/docker-compose/.custom-env: -------------------------------------------------------------------------------- 1 | ELASTIC_VERSION=8.9.2 2 | 3 | ## Passwords for stack users 4 | # 5 | 6 | # User 'elastic' (built-in) 7 | # 8 | # Superuser role, full access to cluster management and data indices. 9 | # https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html 10 | ELASTIC_PASSWORD='changeme123' 11 | 12 | # User 'logstash_internal' (custom) 13 | # 14 | # The user Logstash uses to connect and send data to Elasticsearch. 15 | # https://www.elastic.co/guide/en/logstash/current/ls-security.html 16 | LOGSTASH_INTERNAL_PASSWORD='changeme123' 17 | 18 | # User 'kibana_system' (built-in) 19 | # 20 | # The user Kibana uses to connect and communicate with Elasticsearch. 21 | # https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html 22 | KIBANA_SYSTEM_PASSWORD='changeme123' 23 | 24 | # Users 'metricbeat_internal', 'filebeat_internal' and 'heartbeat_internal' (custom) 25 | # 26 | # The users Beats use to connect and send data to Elasticsearch. 
27 | # https://www.elastic.co/guide/en/beats/metricbeat/current/feature-roles.html 28 | METRICBEAT_INTERNAL_PASSWORD='' 29 | FILEBEAT_INTERNAL_PASSWORD='' 30 | HEARTBEAT_INTERNAL_PASSWORD='' 31 | 32 | # User 'monitoring_internal' (custom) 33 | # 34 | # The user Metricbeat uses to collect monitoring data from stack components. 35 | # https://www.elastic.co/guide/en/elasticsearch/reference/current/how-monitoring-works.html 36 | MONITORING_INTERNAL_PASSWORD='' 37 | 38 | # User 'beats_system' (built-in) 39 | # 40 | # The user the Beats use when storing monitoring information in Elasticsearch. 41 | # https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html 42 | BEATS_SYSTEM_PASSWORD='' 43 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/installation/docker-compose/install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Get the current directory 4 | CUR_DIR=$(pwd) 5 | 6 | # Check if the script is executed from its own directory 7 | if [[ $(basename "$0") != "install.sh" ]]; then 8 | echo "Not in the script's directory, exiting..." 9 | exit 1 10 | fi 11 | 12 | # Define workspace variables 13 | WS_NAME="workspace/ws_elk" 14 | WS_ABS_PATH="$CUR_DIR/$WS_NAME" 15 | 16 | # Create the workspace directory if it doesn't exist 17 | mkdir -p "$WS_ABS_PATH" 18 | 19 | # Change to the workspace directory 20 | cd "$WS_ABS_PATH" || exit 21 | 22 | # Clone source code if it doesn't exist 23 | if [ ! -d "docker-elk" ]; then 24 | git clone git@github.com:deviantony/docker-elk.git 25 | fi 26 | 27 | # Change to the docker-elk directory 28 | cd "docker-elk" || exit 29 | 30 | # Uninstall if needed 31 | if [ -f "$CUR_DIR/uninstall.sh" ]; then 32 | echo "Uninstalling previous installation..." 33 | sh "$CUR_DIR/uninstall.sh" 34 | fi 35 | 36 | # # Override local config env 37 | # if [ -f "$CUR_DIR/.custom-env" ]; then 38 | # echo "Overwriting local config env..." 39 | # cp "$CUR_DIR/.custom-env" .env 40 | # else 41 | # echo "Warning: .custom-env file not found." 42 | # fi 43 | 44 | # Deploy 45 | echo "Deploying..." 46 | docker-compose up setup 47 | docker-compose up -d 48 | 49 | # Verify 50 | echo "Verifying..." 51 | echo "Verifying elasticsearch..." 52 | docker ps | grep elk-elasticsearch 53 | echo "Verifying logstash..." 54 | docker ps | grep elk-logstash 55 | echo "Verifying kibana..." 56 | docker ps | grep elk-kibana 57 | 58 | echo "View Elastic at:" 59 | echo "http://localhost:9200" 60 | echo 61 | echo "View Kibana at:" 62 | echo "http://localhost:5601" -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/installation/docker-compose/uninstall.sh: -------------------------------------------------------------------------------- 1 | # Un-deploy 2 | echo "Un-deploying..." 3 | docker-compose down -v 4 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/installation/helm/install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | GREEN='\033[0;32m' 3 | # ANSI escape code to reset text color to default 4 | RESET='\033[0m' 5 | console_log() { 6 | echo -e "${GREEN}>>> [Elk] $1${RESET}" 7 | } 8 | 9 | command -v kubectl 10 | 11 | console_log "Configure Helm" 12 | helm repo add elastic https://helm.elastic.co 13 | helm repo update 14 | helm search hub elasticsearch 15 | 16 | console_log "Cleanup prev run!!!" 
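# Best-effort teardown of any previous release and port-forwards so the re-install starts clean;
# errors from uninstall.sh are expected (and harmless) on a first run.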
17 | ./uninstall.sh 18 | 19 | console_log "install elasticsearch" 20 | helm upgrade --install elasticsearch elastic/elasticsearch -f values.yml 21 | 22 | console_log "waiting for service up and running..." 23 | sleep 30 24 | 25 | console_log "Port forwarding..." 26 | kubectl port-forward svc/elasticsearch-master 9200 & 27 | console_log "Waiting 15s for port forward process completed..." 28 | sleep 15 29 | # login URL 30 | elk_url="http://localhost:9200" 31 | curl $elk_url 32 | console_log $elk_url 33 | 34 | 35 | console_log "install metricbeat" 36 | # helm upgrade --install filebeat elastic/filebeat 37 | helm upgrade --install metricbeat elastic/metricbeat 38 | 39 | 40 | console_log "install kibana" 41 | helm upgrade --install kibana elastic/kibana 42 | sleep 30 43 | 44 | console_log "Port forwarding..." 45 | kubectl port-forward svc/elasticsearch-master 5061:5061 & 46 | console_log "Waiting 15s for port forward process completed..." 47 | sleep 15 48 | # login URL 49 | kibana_url="http://localhost:5061" 50 | curl $kibana_url 51 | console_log $kibana_url 52 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/installation/helm/uninstall.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | ORANGE='\033[0;33m' 3 | # ANSI escape code to reset text color to default 4 | RESET='\033[0m' 5 | console_log() { 6 | echo -e "${ORANGE}>>> [CLEANUP] [Elk] $1${RESET}" 7 | } 8 | 9 | console_log "Delete related kibana" 10 | kubectl delete configmap kibana-kibana-helm-scripts 11 | kubectl delete serviceaccount pre-install-kibana-kibana 12 | kubectl delete serviceaccount post-delete-kibana-kibana 13 | kubectl delete roles pre-install-kibana-kibana 14 | kubectl delete rolebindings pre-install-kibana-kibana 15 | kubectl delete job pre-install-kibana-kibana 16 | 17 | console_log "Uninstall" 18 | helm delete elasticsearch 19 | helm delete metricbeat 20 | helm delete kibana 21 | 22 | console_log "Elastic - Kill prev port" 23 | ### Run the ps -ef command and use grep to filter the output for 'port-forward' 24 | process_line=$(ps -ef | grep 'port-forward' | grep "9200:9200" | grep -v grep) 25 | ### Extract the PID from the process_line using awk or cut 26 | PID=$(echo "$process_line" | awk '{print $2}') # Using awk 27 | console_log "Killing $PID" 28 | kill -9 $PID 29 | 30 | console_log "Kibana - Kill prev port" 31 | ### Run the ps -ef command and use grep to filter the output for 'port-forward' 32 | process_line=$(ps -ef | grep 'port-forward' | grep "5061:5061" | grep -v grep) 33 | ### Extract the PID from the process_line using awk or cut 34 | PID=$(echo "$process_line" | awk '{print $2}') # Using awk 35 | console_log "Killing $PID" 36 | kill -9 $PID 37 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/installation/helm/values.yml: -------------------------------------------------------------------------------- 1 | clusterName: "elasticsearch" 2 | nodeGroup: "master" 3 | 4 | esConfig: 5 | elasticsearch.yml: | 6 | network.host: 127.0.0.1 7 | 8 | master: 9 | replicas: 3 10 | heapSize: "512m" 11 | 12 | data: 13 | replicas: 2 14 | heapSize: "512m" 15 | persistence: 16 | enabled: true -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/metric-beat/README.md: -------------------------------------------------------------------------------- 1 | # Doc 2 | - 
https://www.elastic.co/guide/en/beats/metricbeat/current/running-on-docker.html 3 | 4 | # Deploy 5 | - Run 6 | ``` 7 | docker compose up -d 8 | ``` 9 | 10 | # Deploy altenatives method 11 | - See: wsl2_ubuntu 12 | - See also: ubuntu_host_on_docker (WIP) 13 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/metric-beat/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | services: 3 | metricbeat: 4 | container_name: metricbeat 5 | user: root 6 | image: docker.elastic.co/beats/metricbeat:8.9.2 7 | restart: always 8 | network_mode: 'host' # Use the host network 9 | command: > 10 | metricbeat -e 11 | -E output.elasticsearch.hosts=["http://localhost:9200"] 12 | -E output.elasticsearch.username="elastic" 13 | -E output.elasticsearch.password="changeme" 14 | --strict.perms=false 15 | volumes: 16 | - ./metricbeat.docker.yml:/usr/share/metricbeat/metricbeat.yml:ro 17 | - /var/run/docker.sock:/var/run/docker.sock:ro 18 | - /sys/fs/cgroup:/hostfs/sys/fs/cgroup:ro 19 | - /proc:/hostfs/proc:ro 20 | - /:/hostfs:ro 21 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/metric-beat/metricbeat.docker.yml: -------------------------------------------------------------------------------- 1 | # Reference: https://raw.githubusercontent.com/elastic/beats/8.9/deploy/docker/metricbeat.docker.yml 2 | 3 | metricbeat.config: 4 | modules: 5 | path: ${path.config}/modules.d/*.yml 6 | # Reload module configs as they change: 7 | reload.enabled: false 8 | 9 | metricbeat.autodiscover: 10 | providers: 11 | - type: docker 12 | hints.enabled: true 13 | 14 | metricbeat.modules: 15 | - module: docker 16 | metricsets: 17 | - "container" 18 | - "cpu" 19 | - "diskio" 20 | - "healthcheck" 21 | - "info" 22 | #- "image" 23 | - "memory" 24 | - "network" 25 | hosts: ["unix:///var/run/docker.sock"] 26 | period: 3s 27 | enabled: true 28 | 29 | processors: 30 | - add_cloud_metadata: ~ 31 | 32 | output.elasticsearch: 33 | hosts: "${ELASTICSEARCH_HOSTS:elasticsearch:9200}" 34 | username: "${ELASTICSEARCH_USERNAME:}" 35 | password: "${ELASTICSEARCH_PASSWORD:}" 36 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/metric-beat/ubuntu_host_on_docker/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:20.04 2 | 3 | # Install Metricbeat 4 | RUN apt-get update && apt-get install -y wget apt-transport-https && \ 5 | wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add - && \ 6 | echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-8.x.list && \ 7 | apt-get update && apt-get install metricbeat && \ 8 | systemctl enable metricbeat 9 | 10 | # Configure Metricbeat 11 | COPY metricbeat.yml /etc/metricbeat/metricbeat.yml 12 | 13 | CMD ["metricbeat", "-e", "--strict.perms=false"] 14 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/metric-beat/ubuntu_host_on_docker/README.md: -------------------------------------------------------------------------------- 1 | # TODO 2 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/metric-beat/ubuntu_host_on_docker/metricbeat.yml: -------------------------------------------------------------------------------- 1 | 
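# Metricbeat config for the Ubuntu-on-Docker experiment: enable the "system" module
# (cpu, memory, network, diskio and filesystem metricsets, collected every 10s) and ship
# the results to the Elasticsearch instance listening on localhost:9200.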
metricbeat.modules: 2 | - module: system 3 | metricsets: 4 | - cpu 5 | - memory 6 | - network 7 | - diskio 8 | - filesystem 9 | enabled: true 10 | period: 10s 11 | processes: ['.*'] 12 | 13 | output.elasticsearch: 14 | hosts: ["localhost:9200"] 15 | # index: "metricbeat-%{[agent.version]}-%{+yyyy.MM.dd}" 16 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/metric-beat/wsl2_ubuntu/install_metricbeat.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Install and configure Metricbeat 3 | hosts: localhost 4 | become: yes 5 | become_method: sudo 6 | 7 | tasks: 8 | - name: Import Elasticsearch GPG key 9 | shell: wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add - 10 | changed_when: false 11 | 12 | - name: Install apt-transport-https 13 | apt: 14 | name: apt-transport-https 15 | state: present 16 | 17 | - name: Add Elastic APT repository 18 | lineinfile: 19 | dest: /etc/apt/sources.list.d/elastic-8.x.list 20 | line: "deb https://artifacts.elastic.co/packages/8.x/apt stable main" 21 | notify: Update apt cache 22 | 23 | - name: Update apt cache 24 | apt: 25 | update_cache: yes 26 | 27 | - name: Install Metricbeat 28 | apt: 29 | name: metricbeat 30 | state: present 31 | 32 | - name: Copy Metricbeat configuration file 33 | copy: 34 | src: metricbeat.yml 35 | dest: /etc/metricbeat/metricbeat.yml 36 | notify: Start Metricbeat service 37 | 38 | - name: Enable Metricbeat service 39 | systemd: 40 | name: metricbeat 41 | enabled: yes 42 | 43 | - name: Start Metricbeat service 44 | systemd: 45 | name: metricbeat 46 | state: started 47 | 48 | - name: Check Metricbeat service status 49 | systemd: 50 | name: metricbeat 51 | register: metricbeat_status 52 | 53 | - name: Display Metricbeat service status 54 | debug: 55 | var: metricbeat_status.stdout 56 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/metric-beat/wsl2_ubuntu/install_via_ansible.sh: -------------------------------------------------------------------------------- 1 | 2 | sudo ansible-playbook -v install_metricbeat.yml 3 | -------------------------------------------------------------------------------- /topics/elk/basic/helloworld/metric-beat/wsl2_ubuntu/metricbeat.yml: -------------------------------------------------------------------------------- 1 | metricbeat.modules: 2 | - module: system 3 | metricsets: 4 | - cpu 5 | - memory 6 | - network 7 | - diskio 8 | - filesystem 9 | enabled: true 10 | period: 10s 11 | processes: ['.*'] 12 | 13 | output.elasticsearch: 14 | hosts: ["localhost:9200"] 15 | # index: "metricbeat-%{[agent.version]}-%{+yyyy.MM.dd}" 16 | -------------------------------------------------------------------------------- /topics/git/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Git? 2 | 3 | - https://git-scm.com/book/en/v2/Getting-Started-What-is-Git%3F 4 | 5 | ### Overview 6 | 7 | Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. 8 | 9 | ### Git workflow 10 | 11 | - ![Git workflow](https://github.com/kubernetes/community/blob/master/contributors/guide/git_workflow.png) 12 | 13 | ### Official website documentation of Git 14 | 15 | - https://git-scm.com/ 16 | - https://github.com/git-guides 17 | 18 | ## 2. 
Prerequisites 19 | 20 | - Basic linux command line skill 21 | 22 | ## 3. Installation 23 | 24 | ### How to install Git? 25 | 26 | - https://github.com/git-guides/install-git 27 | 28 | ## 4. Basics of Git 29 | 30 | ### Getting started with Git 31 | 32 | - Visit https://git-scm.com/video/get-going for a comprehensive introduction. 33 | 34 | ### Git Hello World 35 | 36 | - Check the [helloworld/](./basic/hello-world/) directory to create a simple Git demo. 37 | 38 | ## 5. Beyond the Basics 39 | 40 | ### Hands-On Example 41 | 42 | - Explore a practical hands-on example in the [Git hands-on](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html) for more Git concepts 43 | 44 | ## 6. More 45 | 46 | ### Git guides page 47 | 48 | - https://github.com/git-guides 49 | 50 | ### Git cheatsheet 51 | 52 | - https://ndpsoftware.com/git-cheatsheet.html 53 | - https://education.github.com/git-cheat-sheet-education.pdf 54 | 55 | ### Recommended Books 56 | 57 | - N/A 58 | -------------------------------------------------------------------------------- /topics/git/TIPS.md: -------------------------------------------------------------------------------- 1 | # How do I make Git forget about a file that was tracked, but is now in .gitignore? 2 | - https://stackoverflow.com/questions/1274057/how-do-i-make-git-forget-about-a-file-that-was-tracked-but-is-now-in-gitignore 3 | -------------------------------------------------------------------------------- /topics/git/basic/hello-world/git-helloworld.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | console_log() { 4 | echo "\e[32m>>> [Git] $1\e[0m" 5 | } 6 | 7 | WORKSPACE="/tmp/devops-basic/git/demo" 8 | 9 | console_log "Cleanup previous run data" 10 | rm -rf $WORKSPACE 11 | 12 | console_log "Prepare new data" 13 | mkdir -p $WORKSPACE 14 | 15 | console_log "Clone repository" 16 | cd $WORKSPACE 17 | git clone git@github.com:tungbq/devops-basic.git 18 | 19 | console_log "Navigate to the cloned repo" 20 | cd devops-basic 21 | ls -la 22 | 23 | console_log "Switch to main branch" 24 | git checkout main 25 | 26 | console_log "Create new development branch, from main branch" 27 | git checkout -b my-dev-branch 28 | 29 | console_log "Make change - Update file" 30 | echo "Testing README.md" >>README.md 31 | 32 | echo "Make change - New file" 33 | console_log "Hello-world" >hello-world.txt 34 | 35 | console_log "Check git status" 36 | git status 37 | 38 | #### We should see result like this: 39 | # Changes not staged for commit: 40 | # (use "git add ..." to update what will be committed) 41 | # (use "git restore ..." to discard changes in working directory) 42 | # modified: README.md 43 | 44 | # Untracked files: 45 | # (use "git add ..." to include in what will be committed) 46 | # hello-world.txt 47 | 48 | console_log "Add all the changes" 49 | git add . 50 | 51 | console_log "Commit the changes" 52 | git commit -m "dev: this is my first commit" 53 | 54 | console_log "Push the change (dry-run)" 55 | git push origin my-dev-branch --dry-run 56 | console_log "Just for demo! In real world, we will not use the --dry-run option" 57 | 58 | console_log "Check git log (last 3 commits)" 59 | git --no-pager log -n 3 60 | 61 | console_log "Check git status - again" 62 | git status 63 | -------------------------------------------------------------------------------- /topics/github-action/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is GitHub Action? 
2 | 3 | - https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions 4 | 5 | ### Overview 6 | 7 | GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production. 8 | 9 | ### GitHub Action workflow 10 | 11 | - N/A 12 | 13 | ### Official website documentation of GitHub Action 14 | 15 | - https://docs.github.com/en/enterprise-cloud@latest/actions 16 | 17 | ## 2. Prerequisites 18 | 19 | - Basic linux command line skill, CICD, YAML 20 | 21 | ## 3. Installation 22 | 23 | ### How to install GitHub Action? 24 | 25 | - No need to install, it's built along with GitHub server 26 | 27 | ## 4. Basics of GitHub Action 28 | 29 | ### Getting started with GitHub Action 30 | 31 | - Visit https://docs.github.com/en/enterprise-cloud@latest/actions/quickstart for a comprehensive introduction. 32 | 33 | ### GitHub Action Hello World 34 | 35 | - Check the [basic/](./basic/) directory to create a simple GitHub Action demo. 36 | 37 | ## 5. Beyond the Basics 38 | 39 | ### Hands-On Example 40 | 41 | - Explore a practical hands-on example in the [learn-github-actions](https://docs.github.com/en/enterprise-cloud@latest/actions/learn-github-actions) for more GitHub Action concepts 42 | 43 | ## 6. More... 44 | 45 | ### Awesome GitHub workflow 46 | 47 | - Visit [awesome-workflow](https://github.com/tungbq/awesome-workflow) 48 | 49 | ### Recommended Books 50 | 51 | - N/A 52 | -------------------------------------------------------------------------------- /topics/github-action/basic/README.md: -------------------------------------------------------------------------------- 1 | ## Basics of GitHub Actrion 2 | 3 | ## Github Action - Helloworld 4 | 5 | - Create new workflow 6 | 7 | ``` 8 | name: GitHub Actions Helloworld 9 | on: [push] 10 | jobs: 11 | Welcome-GitHub-Actions: 12 | runs-on: ubuntu-latest 13 | steps: 14 | - run: echo " Hello world! 🎉 The job was automatically triggered by a ${{ Github.event_name }} event." 15 | ``` 16 | -------------------------------------------------------------------------------- /topics/gitlabci/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Gitlab CI? 2 | 3 | ### Overview 4 | 5 | GitLab CI/CD is a software development tool that allows organizations to implement “continuous” methodologies, including continuous integration (CI), continuous delivery (CD), and continuous deployment (also abbreviated to CD). 6 | 7 | ### Gitlab CI workflow 8 | 9 | - N/A 10 | 11 | ### Official website documentation of Gitlab CI 12 | 13 | - https://docs.gitlab.com/ (CI/CD page) 14 | 15 | ## 2. Prerequisites 16 | 17 | - Basic linux command line skill, CICD, YAML 18 | 19 | ## 3. Installation 20 | 21 | ### How to install Gitlab CI? 22 | 23 | #### Gitlab public 24 | 25 | - Use https://gitlab.com/ (No need to install) 26 | 27 | #### Gitlab self deployment 28 | 29 | - https://docs.gitlab.com/ee/install/install_methods.html 30 | 31 | ## 4. Basics of Gitlab CI 32 | 33 | ### Getting started with Gitlab CI 34 | 35 | - Visit https://docs.gitlab.com/ee/ci/quick_start/ for a comprehensive introduction. 36 | 37 | ### Gitlab CI Hello World 38 | 39 | - Check the [basic/](./basic/) directory to create a simple Gitlab CI demo. 40 | 41 | ## 5. 
Beyond the Basics 42 | 43 | ### Hands-On Example 44 | 45 | - Explore a practical hands-on example in the [Gitlab CI examples](https://docs.gitlab.com/ee/ci/examples/) for more Gitlab CI concepts 46 | - Check the [advanced/](./advanced//) for more Gitlab CI concepts 47 | 48 | ## 6. More... 49 | 50 | ### Gitlab CI YAML syntax reference 51 | 52 | - https://docs.gitlab.com/ee/ci/yaml/ 53 | 54 | ### Recommended Books 55 | 56 | - N/A 57 | -------------------------------------------------------------------------------- /topics/gitlabci/advanced/REAME.md: -------------------------------------------------------------------------------- 1 | ## Using GitLab CI/CD with a GitHub repository 2 | 3 | - https://docs.gitlab.com/ee/ci/ci_cd_for_external_repos/github_integration.html 4 | -------------------------------------------------------------------------------- /topics/gitlabci/basic/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | build-job: 2 | stage: build 3 | script: 4 | - echo "Hello, $GITLAB_USER_LOGIN!" 5 | 6 | test-job1: 7 | stage: test 8 | script: 9 | - echo "This job tests something" 10 | 11 | test-job2: 12 | stage: test 13 | script: 14 | - echo "This job tests something, but takes more time than test-job1." 15 | - echo "After the echo commands complete, it runs the sleep command for 20 seconds" 16 | - echo "which simulates a test that runs 20 seconds longer than test-job1" 17 | - sleep 20 18 | 19 | deploy-prod: 20 | stage: deploy 21 | script: 22 | - echo "This job deploys something from the $CI_COMMIT_BRANCH branch." 23 | environment: production 24 | -------------------------------------------------------------------------------- /topics/groovy/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Groovy? 2 | 3 | ### Overview 4 | 5 | - Groovy is a powerful, optionally typed and dynamic language, with static-typing and static compilation capabilities, for the Java platform aimed at improving developer productivity thanks to a concise, familiar and easy to learn . 6 | - Groovy is being used for developing the Jenkins pipeline, so it better if we have the knowledge about this language 7 | 8 | ### Groovy workflow 9 | 10 | - N/A 11 | 12 | ### Official website documentation of Groovy 13 | 14 | - https://groovy-lang.org/documentation.html 15 | 16 | ## 2. Prerequisites 17 | 18 | - N/A 19 | 20 | ## 3. Installation 21 | 22 | ### How to install Groovy? 23 | 24 | - See https://groovy-lang.org/install.html (I prefer using SDK man) 25 | - Facing missing java issue while installing: Visit: [groovy-with-sdk-missing-java.md](.././../troubleshooting/installation/groovy-with-sdk-missing-java.md) 26 | 27 | ## 4. Basics of Groovy 28 | 29 | ### Groovy Hello World 30 | 31 | - Check the [basic/](./basic/) directory to create a simple Groovy demo. 32 | 33 | ## 5. Beyond the Basics 34 | 35 | ### Hands-On Example 36 | 37 | - TODO 38 | 39 | ## 6. More... 
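### Groovy from the command line

Once Groovy is installed (for example via SDKMAN!, as suggested above), the quickest way to experiment is from the shell. The sketch below shows a few hedged examples; the script name matches the one used in [basic/](./basic/).

```bash
#!/bin/bash
# Quick ways to try Groovy from the shell once it is on your PATH.
groovy -e 'println "Hello from Groovy ${GroovySystem.version}"'  # evaluate a one-liner
groovy basic-concept.groovy                                      # run a script file, as in basic/
groovysh                                                         # start the interactive Groovy shell
```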
40 | 41 | ### Groovy extra resources 42 | 43 | - TODO 44 | 45 | ### Recommended Books 46 | 47 | - TODO 48 | -------------------------------------------------------------------------------- /topics/groovy/basic/README.md: -------------------------------------------------------------------------------- 1 | # Install groovy 2 | - See: [groovy guide](../../groovy/README.md) 3 | 4 | # Run the example 5 | - E.g: `groovy basic-concept.groovy` 6 | -------------------------------------------------------------------------------- /topics/haproxy/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is HAProxy? 2 | 3 | ### Overview 4 | 5 | HAProxy is a free and open source software that provides a high availability load balancer and Proxy for TCP and HTTP-based applications that spreads requests across multiple servers. It is written in C and has a reputation for being fast and efficient. 6 | 7 | ### Official website documentation of HAProxy 8 | 9 | - https://HAProxy.org/ 10 | 11 | ## 2. Prerequisites 12 | 13 | - Basic networking, HTTP, Linux 14 | 15 | ## 3. Installation 16 | 17 | ### How to install HAProxy? 18 | 19 | - https://github.com/haproxy/haproxy/tree/master?tab=readme-ov-file#installation 20 | 21 | ### Install HAProxy with Docker 22 | 23 | - https://hub.docker.com/_/haproxy 24 | 25 | ## 4. Basics of HAProxy 26 | 27 | ### HAProxy lab 28 | 29 | - See: [basic](./basic/) 30 | 31 | ## 5. Beyond the Basics 32 | 33 | ### Hands-On Example 34 | 35 | - Check the [advanced/](./advanced/) directory for more HAProxy examples. 36 | 37 | ## 6. More... 38 | 39 | ### Admin guide 40 | 41 | - https://docs.haproxy.org/dev/management.html 42 | 43 | ### HAProxy cheatsheet 44 | 45 | - https://docs.haproxy.org/dev/configuration.html 46 | 47 | ### Recommended Books 48 | 49 | - N/A 50 | -------------------------------------------------------------------------------- /topics/haproxy/advanced/README.md: -------------------------------------------------------------------------------- 1 | # TODO 2 | -------------------------------------------------------------------------------- /topics/haproxy/basic/README.md: -------------------------------------------------------------------------------- 1 | # HA proxy basics demo 2 | 3 | ## 1. Labs stack 4 | 5 | - [nginx-webserver1](https://nginx.org/): An Ubuntu VM running in nginx webserver. 6 | - [nginx-webserver2](https://nginx.org/): An Ubuntu VM running in nginx webserver. 7 | - [haproxy](https://www.haproxy.org/): HA proxy points to 2 these web servers. 8 | 9 | ## 2. Setup 10 | 11 | ### Prerequisites 12 | 13 | - Docker + Docker Compose 14 | 15 | ### Build and run the containers 16 | 17 | - Option-1: Build and run in background (Recommend) 18 | 19 | ```bash 20 | cd devops-basics/topics/haproxy/basic/ 21 | docker-compose up --build -d 22 | 23 | # To stop and remove contaienr, run: 24 | docker compose down 25 | ``` 26 | 27 | - Option-2: Run and verbose the logs 28 | 29 | ```bash 30 | cd devops-basics/topics/haproxy/basic/ 31 | docker-compose up --build 32 | 33 | # To stop, press 'Ctrl + C' 34 | ``` 35 | 36 | ## 3. Explore the HA proxy 37 | 38 | - Access the HA Proxy at http://localhost:6081 (You can replace 6081 by the port work on your machine!) 39 | - Refresh the page multiple time and you would see that the HA Proxy route to `nginx-webserver1` and `nginx-webserver2` in Round Robin mode. 
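You can also watch the rotation from the terminal; the small loop below hits the frontend a few times and extracts the greeting returned by each backend (it assumes the stack is exposed on port 6081, as configured in this example):

```bash
#!/bin/bash
# Hit the HAProxy frontend repeatedly; the greeting should alternate between the two backends.
for i in $(seq 1 6); do
  curl -s http://localhost:6081 | grep -o 'Hello from Server [12]!'
done
```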
40 | 41 | ![nginx-webserver1](./assets/server1.png) 42 | ![nginx-webserver2](./assets/server2.png) 43 | 44 | - Now try to stop the `nginx-webserver1` and refresh the page http://localhost:6081, it will check and only route to `nginx-webserver2` 45 | 46 | ```bash 47 | docker stop nginx-webserver1 48 | ``` 49 | 50 | ![nginx-webserver1](./assets/server2.png) 51 | ![nginx-webserver2](./assets/server2.png) 52 | -------------------------------------------------------------------------------- /topics/haproxy/basic/assets/server1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/topics/haproxy/basic/assets/server1.png -------------------------------------------------------------------------------- /topics/haproxy/basic/assets/server2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/topics/haproxy/basic/assets/server2.png -------------------------------------------------------------------------------- /topics/haproxy/basic/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | 3 | services: 4 | haproxy: 5 | build: 6 | context: ./haproxy 7 | container_name: haproxy 8 | ports: 9 | # You can replace 6081 by the port work on your machine! 10 | - '6081:80' 11 | depends_on: 12 | - nginx-webserver1 13 | - nginx-webserver2 14 | networks: 15 | - haproxy-network 16 | 17 | nginx-webserver1: 18 | build: 19 | context: ./nginx-webserver/nginx-webserver1 20 | container_name: nginx-webserver1 21 | networks: 22 | - haproxy-network 23 | 24 | nginx-webserver2: 25 | build: 26 | context: ./nginx-webserver/nginx-webserver2 27 | container_name: nginx-webserver2 28 | networks: 29 | - haproxy-network 30 | 31 | networks: 32 | haproxy-network: 33 | driver: bridge 34 | -------------------------------------------------------------------------------- /topics/haproxy/basic/haproxy/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM haproxy:latest 2 | COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg 3 | -------------------------------------------------------------------------------- /topics/haproxy/basic/haproxy/haproxy.cfg: -------------------------------------------------------------------------------- 1 | global 2 | log stdout format raw local0 3 | 4 | defaults 5 | log global 6 | mode http 7 | option httplog 8 | timeout connect 5000ms 9 | timeout client 50000ms 10 | timeout server 50000ms 11 | 12 | frontend http_front 13 | bind *:80 14 | default_backend http_back 15 | 16 | backend http_back 17 | balance roundrobin 18 | server webserver1 nginx-webserver1:80 check 19 | server webserver2 nginx-webserver2:80 check 20 | -------------------------------------------------------------------------------- /topics/haproxy/basic/nginx-webserver/nginx-webserver1/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:alpine 2 | COPY index.html /usr/share/nginx/html/index.html 3 | -------------------------------------------------------------------------------- /topics/haproxy/basic/nginx-webserver/nginx-webserver1/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Server 1 5 | 6 | 7 |

Hello from Server 1!

8 | 9 | 10 | -------------------------------------------------------------------------------- /topics/haproxy/basic/nginx-webserver/nginx-webserver2/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:alpine 2 | COPY index.html /usr/share/nginx/html/index.html 3 | -------------------------------------------------------------------------------- /topics/haproxy/basic/nginx-webserver/nginx-webserver2/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Server 2 5 | 6 | 7 |

Hello from Server 2!

8 | 9 | 10 | -------------------------------------------------------------------------------- /topics/helm/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Helm? 2 | 3 | ### Overview 4 | 5 | - Helm is the package manager for Kubernetes 6 | 7 | ### Helm workflow 8 | 9 | ![helm_workflow](https://v2.helm.sh/img/chart-illustration.png) 10 | 11 | ### Official website documentation of Helm 12 | 13 | - https://helm.sh/docs/ 14 | 15 | ## 2. Prerequisites 16 | 17 | - K8s, docker, linux 18 | 19 | ## 3. Installation 20 | 21 | ### How to install Helm? 22 | 23 | - https://helm.sh/docs/intro/install/ 24 | 25 | ## 4. Basics of Helm 26 | 27 | ### Helm quick start 28 | 29 | - https://helm.sh/docs/intro/quickstart/ 30 | 31 | ### Helm Hello World 32 | 33 | - Check the [basic/](./basic/) directory to create a simple Helm demo. 34 | 35 | ## 5. Beyond the Basics 36 | 37 | ### Hands-On Example 38 | 39 | - Check the [advanced/](./advanced/) directory for more Helm examples. 40 | 41 | ## 6. More... 42 | 43 | ### Helm cheatsheet 44 | 45 | - https://helm.sh/docs/intro/cheatsheet/ 46 | 47 | ### Recommended Books 48 | 49 | - TODO 50 | -------------------------------------------------------------------------------- /topics/helm/advanced/hands-on/deploy-jenkins/README.md: -------------------------------------------------------------------------------- 1 | # To deploy Jenkins using Helm v3 2 | # Docs 3 | - https://www.jenkins.io/doc/book/installing/kubernetes/#install-jenkins-with-helm-v3 4 | # Step 5 | - Run `deploy.sh` to automate the deployment 6 | - Run `cleanup.sh` if you wanna teardown the resource 7 | -------------------------------------------------------------------------------- /topics/helm/advanced/hands-on/deploy-jenkins/cleanup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Delete 4 | helm delete jenkins -n jenkins 5 | 6 | # Consider always deleting this part or not? 7 | kubectl delete -f jenkins-volume.yaml 8 | kubectl delete -f jenkins-sa.yaml 9 | -------------------------------------------------------------------------------- /topics/helm/advanced/hands-on/deploy-jenkins/deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | log() { 4 | local msg=$1 5 | echo "-------------------------" 6 | echo "> $msg" 7 | echo "-------------------------" 8 | } 9 | 10 | # Init and configure Helm 11 | log "Adding Jenkins repo and get latest updates" 12 | helm repo add jenkinsci https://charts.jenkins.io 13 | helm repo update 14 | 15 | # Create Persistent Volume 16 | log "Create the volume" 17 | kubectl apply -f jenkins-volume.yaml 18 | log "Check the volume" 19 | kubectl get pv 20 | 21 | # Create a service account 22 | kubectl apply -f jenkins-sa.yaml 23 | 24 | # Update the values manually: Done. TODO: Automate this step 25 | 26 | # Deploy 27 | log "Start deploying..." 
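# "helm upgrade --install" is idempotent: it installs the release on the first run and upgrades it on
# subsequent runs, which is why the plain "helm install" variant below is kept only as a commented-out reference.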
28 | chart="jenkinsci/jenkins" 29 | # helm install jenkins -n jenkins -f jenkins-values.yaml $chart 30 | helm upgrade --install jenkins -n jenkins -f jenkins-values.yaml $chart 31 | 32 | # Check deployment 33 | log "Check deployment" 34 | kubectl get pods -n jenkins 35 | 36 | # Waiting for port up and running 37 | pod_name_from_helm="jenkins-0" 38 | while [[ $(kubectl get pods -n jenkins $pod_name_from_helm -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]]; do echo "waiting for pod - $pod_name_from_helm" && sleep 1; done 39 | 40 | # Get metadata 41 | log "Get 'admin' password" 42 | jsonpath="{.data.jenkins-admin-password}" 43 | secret=$(kubectl get secret -n jenkins jenkins -o jsonpath="$jsonpath" | xargs -0) 44 | log $(echo $secret | base64 --decode) 45 | 46 | # Portforward 47 | ## kill prev 48 | log "Kill prev port" 49 | ### Run the pgrep command to search for the process running on the port 8090 50 | PID=$(pgrep -f "port-forward 8090:8080") 51 | if [[ "$PID" != "" ]]; then 52 | log "Killing $PID" 53 | kill -9 $PID 54 | fi 55 | 56 | log "Port forwarding..." 57 | nohup kubectl port-forward service/jenkins 8090:8080 -n jenkins & 58 | log "Waiting 15s for port forward process completed..." 59 | sleep 15 60 | 61 | # login URL 62 | login_url="http://localhost:8090/login" 63 | log $login_url 64 | 65 | curl $login_url -------------------------------------------------------------------------------- /topics/helm/advanced/hands-on/deploy-jenkins/jenkins-sa.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Namespace 4 | metadata: 5 | name: jenkins 6 | labels: 7 | name: jenkins 8 | --- 9 | apiVersion: v1 10 | kind: ServiceAccount 11 | metadata: 12 | name: jenkins 13 | namespace: jenkins 14 | --- 15 | apiVersion: rbac.authorization.k8s.io/v1 16 | kind: ClusterRole 17 | metadata: 18 | annotations: 19 | rbac.authorization.kubernetes.io/autoupdate: "true" 20 | labels: 21 | kubernetes.io/bootstrapping: rbac-defaults 22 | name: jenkins 23 | rules: 24 | - apiGroups: 25 | - "*" 26 | resources: 27 | - statefulsets 28 | - services 29 | - replicationcontrollers 30 | - replicasets 31 | - podtemplates 32 | - podsecuritypolicies 33 | - pods 34 | - pods/log 35 | - pods/exec 36 | - podpreset 37 | - poddisruptionbudget 38 | - persistentvolumes 39 | - persistentvolumeclaims 40 | - jobs 41 | - endpoints 42 | - deployments 43 | - deployments/scale 44 | - daemonsets 45 | - cronjobs 46 | - configmaps 47 | - namespaces 48 | - events 49 | - secrets 50 | verbs: 51 | - create 52 | - get 53 | - watch 54 | - delete 55 | - list 56 | - patch 57 | - update 58 | - apiGroups: 59 | - "" 60 | resources: 61 | - nodes 62 | verbs: 63 | - get 64 | - list 65 | - watch 66 | - update 67 | --- 68 | apiVersion: rbac.authorization.k8s.io/v1 69 | kind: ClusterRoleBinding 70 | metadata: 71 | annotations: 72 | rbac.authorization.kubernetes.io/autoupdate: "true" 73 | labels: 74 | kubernetes.io/bootstrapping: rbac-defaults 75 | name: jenkins 76 | roleRef: 77 | apiGroup: rbac.authorization.k8s.io 78 | kind: ClusterRole 79 | name: jenkins 80 | subjects: 81 | - apiGroup: rbac.authorization.k8s.io 82 | kind: Group 83 | name: system:serviceaccounts:jenkins 84 | -------------------------------------------------------------------------------- /topics/helm/advanced/hands-on/deploy-jenkins/jenkins-volume.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: 
jenkins-pv 5 | namespace: jenkins 6 | spec: 7 | storageClassName: jenkins-pv 8 | accessModes: 9 | - ReadWriteOnce 10 | capacity: 11 | storage: 20Gi 12 | persistentVolumeReclaimPolicy: Retain 13 | hostPath: 14 | path: /run/desktop/mnt/host/c/data/k8s/jenkins-volume 15 | --- 16 | apiVersion: storage.k8s.io/v1 17 | kind: StorageClass 18 | metadata: 19 | name: jenkins-pv 20 | provisioner: kubernetes.io/no-provisioner 21 | volumeBindingMode: WaitForFirstConsumer 22 | -------------------------------------------------------------------------------- /topics/helm/advanced/hands-on/deploy-jenkins/local-debug.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | kubectl logs -n jenkins jenkins-0 -c init -------------------------------------------------------------------------------- /topics/helm/advanced/tungleo-chart/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *.orig 18 | *~ 19 | # Various IDEs 20 | .project 21 | .idea/ 22 | *.tmproj 23 | .vscode/ 24 | -------------------------------------------------------------------------------- /topics/helm/advanced/tungleo-chart/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: tungleo-chart 3 | description: A Helm chart for Kubernetes 4 | 5 | # A chart can be either an 'application' or a 'library' chart. 6 | # 7 | # Application charts are a collection of templates that can be packaged into versioned archives 8 | # to be deployed. 9 | # 10 | # Library charts provide useful utilities or functions for the chart developer. They're included as 11 | # a dependency of application charts to inject those utilities and functions into the rendering 12 | # pipeline. Library charts do not define any templates and therefore cannot be deployed. 13 | type: application 14 | 15 | # This is the chart version. This version number should be incremented each time you make changes 16 | # to the chart and its templates, including the app version. 17 | # Versions are expected to follow Semantic Versioning (https://semver.org/) 18 | version: 0.1.0 19 | 20 | # This is the version number of the application being deployed. This version number should be 21 | # incremented each time you make changes to the application. Versions are not expected to 22 | # follow Semantic Versioning. They should reflect the version the application is using. 23 | # It is recommended to use it with quotes. 24 | appVersion: "1.16.0" 25 | -------------------------------------------------------------------------------- /topics/helm/advanced/tungleo-chart/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 1. 
Get the application URL by running these commands: 2 | {{- if .Values.ingress.enabled }} 3 | {{- range $host := .Values.ingress.hosts }} 4 | {{- range .paths }} 5 | http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }} 6 | {{- end }} 7 | {{- end }} 8 | {{- else if contains "NodePort" .Values.service.type }} 9 | export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "tungleo-chart.fullname" . }}) 10 | export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") 11 | echo http://$NODE_IP:$NODE_PORT 12 | {{- else if contains "LoadBalancer" .Values.service.type }} 13 | NOTE: It may take a few minutes for the LoadBalancer IP to be available. 14 | You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "tungleo-chart.fullname" . }}' 15 | export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "tungleo-chart.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") 16 | echo http://$SERVICE_IP:{{ .Values.service.port }} 17 | {{- else if contains "ClusterIP" .Values.service.type }} 18 | export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "tungleo-chart.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") 19 | export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") 20 | echo "Visit http://127.0.0.1:8080 to use your application" 21 | kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT 22 | {{- end }} 23 | -------------------------------------------------------------------------------- /topics/helm/advanced/tungleo-chart/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "tungleo-chart.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} 6 | {{- end }} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | If release name contains chart name it will be used as a full name. 12 | */}} 13 | {{- define "tungleo-chart.fullname" -}} 14 | {{- if .Values.fullnameOverride }} 15 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} 16 | {{- else }} 17 | {{- $name := default .Chart.Name .Values.nameOverride }} 18 | {{- if contains $name .Release.Name }} 19 | {{- .Release.Name | trunc 63 | trimSuffix "-" }} 20 | {{- else }} 21 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} 22 | {{- end }} 23 | {{- end }} 24 | {{- end }} 25 | 26 | {{/* 27 | Create chart name and version as used by the chart label. 28 | */}} 29 | {{- define "tungleo-chart.chart" -}} 30 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} 31 | {{- end }} 32 | 33 | {{/* 34 | Common labels 35 | */}} 36 | {{- define "tungleo-chart.labels" -}} 37 | helm.sh/chart: {{ include "tungleo-chart.chart" . }} 38 | {{ include "tungleo-chart.selectorLabels" . 
}} 39 | {{- if .Chart.AppVersion }} 40 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 41 | {{- end }} 42 | app.kubernetes.io/managed-by: {{ .Release.Service }} 43 | {{- end }} 44 | 45 | {{/* 46 | Selector labels 47 | */}} 48 | {{- define "tungleo-chart.selectorLabels" -}} 49 | app.kubernetes.io/name: {{ include "tungleo-chart.name" . }} 50 | app.kubernetes.io/instance: {{ .Release.Name }} 51 | {{- end }} 52 | 53 | {{/* 54 | Create the name of the service account to use 55 | */}} 56 | {{- define "tungleo-chart.serviceAccountName" -}} 57 | {{- if .Values.serviceAccount.create }} 58 | {{- default (include "tungleo-chart.fullname" .) .Values.serviceAccount.name }} 59 | {{- else }} 60 | {{- default "default" .Values.serviceAccount.name }} 61 | {{- end }} 62 | {{- end }} 63 | -------------------------------------------------------------------------------- /topics/helm/advanced/tungleo-chart/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ include "tungleo-chart.fullname" . }} 5 | labels: 6 | {{- include "tungleo-chart.labels" . | nindent 4 }} 7 | spec: 8 | {{- if not .Values.autoscaling.enabled }} 9 | replicas: {{ .Values.replicaCount }} 10 | {{- end }} 11 | selector: 12 | matchLabels: 13 | {{- include "tungleo-chart.selectorLabels" . | nindent 6 }} 14 | template: 15 | metadata: 16 | {{- with .Values.podAnnotations }} 17 | annotations: 18 | {{- toYaml . | nindent 8 }} 19 | {{- end }} 20 | labels: 21 | {{- include "tungleo-chart.selectorLabels" . | nindent 8 }} 22 | spec: 23 | {{- with .Values.imagePullSecrets }} 24 | imagePullSecrets: 25 | {{- toYaml . | nindent 8 }} 26 | {{- end }} 27 | serviceAccountName: {{ include "tungleo-chart.serviceAccountName" . }} 28 | securityContext: 29 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 30 | containers: 31 | - name: {{ .Chart.Name }} 32 | securityContext: 33 | {{- toYaml .Values.securityContext | nindent 12 }} 34 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" 35 | imagePullPolicy: {{ .Values.image.pullPolicy }} 36 | ports: 37 | - name: http 38 | containerPort: 80 39 | protocol: TCP 40 | livenessProbe: 41 | httpGet: 42 | path: / 43 | port: http 44 | readinessProbe: 45 | httpGet: 46 | path: / 47 | port: http 48 | resources: 49 | {{- toYaml .Values.resources | nindent 12 }} 50 | {{- with .Values.nodeSelector }} 51 | nodeSelector: 52 | {{- toYaml . | nindent 8 }} 53 | {{- end }} 54 | {{- with .Values.affinity }} 55 | affinity: 56 | {{- toYaml . | nindent 8 }} 57 | {{- end }} 58 | {{- with .Values.tolerations }} 59 | tolerations: 60 | {{- toYaml . | nindent 8 }} 61 | {{- end }} 62 | -------------------------------------------------------------------------------- /topics/helm/advanced/tungleo-chart/templates/hpa.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.autoscaling.enabled }} 2 | apiVersion: autoscaling/v2beta1 3 | kind: HorizontalPodAutoscaler 4 | metadata: 5 | name: {{ include "tungleo-chart.fullname" . }} 6 | labels: 7 | {{- include "tungleo-chart.labels" . | nindent 4 }} 8 | spec: 9 | scaleTargetRef: 10 | apiVersion: apps/v1 11 | kind: Deployment 12 | name: {{ include "tungleo-chart.fullname" . 
}} 13 | minReplicas: {{ .Values.autoscaling.minReplicas }} 14 | maxReplicas: {{ .Values.autoscaling.maxReplicas }} 15 | metrics: 16 | {{- if .Values.autoscaling.targetCPUUtilizationPercentage }} 17 | - type: Resource 18 | resource: 19 | name: cpu 20 | targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }} 21 | {{- end }} 22 | {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }} 23 | - type: Resource 24 | resource: 25 | name: memory 26 | targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }} 27 | {{- end }} 28 | {{- end }} 29 | -------------------------------------------------------------------------------- /topics/helm/advanced/tungleo-chart/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ include "tungleo-chart.fullname" . }} 5 | labels: 6 | {{- include "tungleo-chart.labels" . | nindent 4 }} 7 | spec: 8 | type: {{ .Values.service.type }} 9 | ports: 10 | - port: {{ .Values.service.port }} 11 | targetPort: http 12 | protocol: TCP 13 | name: http 14 | selector: 15 | {{- include "tungleo-chart.selectorLabels" . | nindent 4 }} 16 | -------------------------------------------------------------------------------- /topics/helm/advanced/tungleo-chart/templates/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.serviceAccount.create -}} 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: {{ include "tungleo-chart.serviceAccountName" . }} 6 | labels: 7 | {{- include "tungleo-chart.labels" . | nindent 4 }} 8 | {{- with .Values.serviceAccount.annotations }} 9 | annotations: 10 | {{- toYaml . | nindent 4 }} 11 | {{- end }} 12 | {{- end }} 13 | -------------------------------------------------------------------------------- /topics/helm/advanced/tungleo-chart/templates/tests/test-connection.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: "{{ include "tungleo-chart.fullname" . }}-test-connection" 5 | labels: 6 | {{- include "tungleo-chart.labels" . | nindent 4 }} 7 | annotations: 8 | "helm.sh/hook": test 9 | spec: 10 | containers: 11 | - name: wget 12 | image: busybox 13 | command: ['wget'] 14 | args: ['{{ include "tungleo-chart.fullname" . }}:{{ .Values.service.port }}'] 15 | restartPolicy: Never 16 | -------------------------------------------------------------------------------- /topics/helm/basic/helm-helloworld.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | console_log() { 4 | echo ">>> [Helm] $1" 5 | } 6 | 7 | CLEANUP=$1 8 | HELM_SQL_NAME="my-nginx-demo" 9 | 10 | command -v helm 11 | 12 | console_log "Initialize a Helm Chart Repository" 13 | helm repo add bitnami https://charts.bitnami.com/bitnami 14 | 15 | console_log "Install Nginx Chart" 16 | helm repo update 17 | helm install $HELM_SQL_NAME bitnami/nginx 18 | 19 | console_log "Check what has been released" 20 | helm list 21 | 22 | if [[ "$CLEANUP" == "true" ]]; then 23 | console_log "Uninstall MySQL Chart" 24 | helm uninstall $HELM_SQL_NAME 25 | fi 26 | -------------------------------------------------------------------------------- /topics/istio/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Istio? 2 | 3 | ### What is a Service Mesh? 
4 | 5 | - A service mesh is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code 6 | - Source: [what-is-a-service-mesh](https://istio.io/latest/about/service-mesh/#what-is-a-service-mesh) 7 | 8 | ### Overview 9 | 10 | - Istio is a service mesh 11 | - Istio extends Kubernetes to establish a programmable, application-aware network using the powerful Envoy service proxy. 12 | - Working with both Kubernetes and traditional workloads, Istio brings standard, universal traffic management, telemetry, and security to complex deployments. 13 | 14 | ### Istio Architecture 15 | 16 | ![istio-architecture](https://istio.io/latest/docs/ops/deployment/architecture/arch.svg) 17 | (Source image: https://istio.io/latest/docs/ops/deployment/architecture/) 18 | 19 | ### Official website documentation of Istio 20 | 21 | - Visit https://istio.io/latest/ 22 | 23 | ## 2. Installation 24 | 25 | ### How to install Istio? 26 | 27 | - https://istio.io/latest/docs/setup/install/ 28 | 29 | ## 3. Basics of Istio 30 | 31 | ### Getting started with Istio 32 | 33 | - https://istio.io/latest/docs/setup/getting-started/ 34 | 35 | ## 4. Beyond the Basics 36 | 37 | ### Exploring Advanced Examples 38 | 39 | - TODO 40 | 41 | ## 5. More... 42 | 43 | ### Istio cheatsheet 44 | 45 | - https://istio.io/latest/docs/reference/commands/ 46 | 47 | ### Istio with Azure 48 | 49 | - https://github.com/Azure-Samples/aks-istio-addon-bicep 50 | - https://learn.microsoft.com/en-us/azure/aks/istio-about 51 | 52 | ### Google Cloud Platform Istio Demo 53 | 54 | - [service-mesh-istio](https://github.com/GoogleCloudPlatform/microservices-demo/blob/main/kustomize/components/service-mesh-istio/README.md) 55 | 56 | ### Recommended Books 57 | 58 | - N/A 59 | -------------------------------------------------------------------------------- /topics/jenkins/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Jenkins? 2 | 3 | ### Overview 4 | 5 | - The leading open source automation server, Jenkins provides hundreds of plugins to support building, deploying and automating any project. 6 | 7 | ### Jenkins workflow 8 | 9 | - N/A 10 | 11 | ### Official website documentation of Jenkins 12 | 13 | - https://www.jenkins.io/doc/ 14 | 15 | ## 2. Prerequisites 16 | 17 | - K8s, docker, linux 18 | 19 | ## 3. Installation 20 | 21 | ### How to install Jenkins? 22 | 23 | - https://www.jenkins.io/doc/book/installing/ 24 | 25 | ### Install Jenkins with Docker 26 | 27 | - See [deploy-jenkins/README.md](../helm/advanced/hands-on/deploy-jenkins/README.md) 28 | 29 | ## 4. Basics of Jenkins 30 | 31 | ### Jenkins getting started 32 | 33 | - https://www.jenkins.io/doc/book/pipeline/getting-started/ 34 | 35 | ### Jenkins Hello World 36 | 37 | - See: [Jenkins Hello world](./basic/Jenkins-Hello-World.md) 38 | 39 | ## 5. Beyond the Basics 40 | 41 | ### Hands-On Example 42 | 43 | - Check the [advanced/](./advanced/) directory for more Jenkins examples. 44 | 45 | ## 6. More... 
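A small practical note to complement the Docker-based installation pointers above: after the `jenkins/jenkins:lts` container starts (for example via the compose file under `basic/deploy/docker-compose/`), the setup wizard asks for an initial admin password that lives inside the container. A minimal sketch, assuming the container is named `jenkins` as in that compose file:

```bash
# Print the one-time password the Jenkins setup wizard asks for on first login.
# Assumes a running container named "jenkins" based on the jenkins/jenkins:lts image.
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword
```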
46 | 47 | ### Jenkins cheatsheet 48 | 49 | - N/A 50 | 51 | ### Recommended Books 52 | 53 | - N/A 54 | -------------------------------------------------------------------------------- /topics/jenkins/advanced/README.md: -------------------------------------------------------------------------------- 1 | # TODO 2 | -------------------------------------------------------------------------------- /topics/jenkins/basic/Jenkins-Hello-World.md: -------------------------------------------------------------------------------- 1 | # Hello world Jenkins 2 | 3 | ## Install Jenkins 4 | 5 | - https://www.jenkins.io/doc/book/installing/ 6 | - Or install Jenkins via Helm hands on example of this DevOps repo, see [helm/hands-on/deploy-jenkins](../../helm/advanced/hands-on/deploy-jenkins/) 7 | 8 | ## Create and run your first pipeline 9 | 10 | - Follow [Official Getting Started](https://www.jenkins.io/doc/book/pipeline/getting-started/) section to create your first pipeline 11 | - The pipeline content look like: [MyFirstPipeline.groovy](./MyFirstPipeline.groovy) 12 | -------------------------------------------------------------------------------- /topics/jenkins/basic/MyFirstPipeline.groovy: -------------------------------------------------------------------------------- 1 | pipeline { 2 | agent any 3 | stages { 4 | stage('Stage Hello') { 5 | steps { 6 | echo 'Hello world!' 7 | } 8 | } 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /topics/jenkins/basic/PipelineWithParallelStages.groovy: -------------------------------------------------------------------------------- 1 | pipeline { 2 | agent any 3 | stages { 4 | stage('ParallelStage1') { 5 | steps { 6 | // Print a message 7 | echo "This is ParallelStage1" 8 | } 9 | } 10 | stage('ParallelStage2') { 11 | steps { 12 | // Print a message 13 | echo "This is ParallelStage2" 14 | } 15 | } 16 | } 17 | parallel { 18 | // Run the ParallelStage1 and ParallelStage2 stages in parallel 19 | stage 'ParallelStage1' 20 | stage 'ParallelStage2' 21 | } 22 | } 23 | -------------------------------------------------------------------------------- /topics/jenkins/basic/deploy/docker-compose/README.md: -------------------------------------------------------------------------------- 1 | # To deploy 2 | - Run `docker-compose up -d` 3 | -------------------------------------------------------------------------------- /topics/jenkins/basic/deploy/docker-compose/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.7' 2 | name: jenkins 3 | services: 4 | jenkins: 5 | image: jenkins/jenkins:lts 6 | restart: always 7 | privileged: true 8 | user: root 9 | ports: 10 | - 8081:8080 11 | - 50001:50000 12 | container_name: jenkins 13 | volumes: 14 | - ~/jenkins:/var/jenkins_home 15 | - /var/run/docker.sock:/var/run/docker.sock 16 | - /usr/local/bin/docker:/usr/local/bin/docker 17 | -------------------------------------------------------------------------------- /topics/k8s/advanced/play-around/install-jenkins/README.md: -------------------------------------------------------------------------------- 1 | # Following: https://www.jenkins.io/doc/book/installing/kubernetes/#install-jenkins-with-helm-v3 2 | -------------------------------------------------------------------------------- /topics/k8s/advanced/play-around/install-jenkins/jenkins-sa.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | 
metadata: 5 | name: jenkins 6 | namespace: jenkins 7 | --- 8 | apiVersion: rbac.authorization.k8s.io/v1 9 | kind: ClusterRole 10 | metadata: 11 | annotations: 12 | rbac.authorization.kubernetes.io/autoupdate: "true" 13 | labels: 14 | kubernetes.io/bootstrapping: rbac-defaults 15 | name: jenkins 16 | rules: 17 | - apiGroups: 18 | - '*' 19 | resources: 20 | - statefulsets 21 | - services 22 | - replicationcontrollers 23 | - replicasets 24 | - podtemplates 25 | - podsecuritypolicies 26 | - pods 27 | - pods/log 28 | - pods/exec 29 | - podpreset 30 | - poddisruptionbudget 31 | - persistentvolumes 32 | - persistentvolumeclaims 33 | - jobs 34 | - endpoints 35 | - deployments 36 | - deployments/scale 37 | - daemonsets 38 | - cronjobs 39 | - configmaps 40 | - namespaces 41 | - events 42 | - secrets 43 | verbs: 44 | - create 45 | - get 46 | - watch 47 | - delete 48 | - list 49 | - patch 50 | - update 51 | - apiGroups: 52 | - "" 53 | resources: 54 | - nodes 55 | verbs: 56 | - get 57 | - list 58 | - watch 59 | - update 60 | --- 61 | apiVersion: rbac.authorization.k8s.io/v1 62 | kind: ClusterRoleBinding 63 | metadata: 64 | annotations: 65 | rbac.authorization.kubernetes.io/autoupdate: "true" 66 | labels: 67 | kubernetes.io/bootstrapping: rbac-defaults 68 | name: jenkins 69 | roleRef: 70 | apiGroup: rbac.authorization.k8s.io 71 | kind: ClusterRole 72 | name: jenkins 73 | subjects: 74 | - apiGroup: rbac.authorization.k8s.io 75 | kind: Group 76 | name: system:serviceaccounts:jenkins 77 | -------------------------------------------------------------------------------- /topics/k8s/advanced/play-around/install-jenkins/jenkins-volume.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: jenkins-pv 5 | namespace: jenkins 6 | spec: 7 | storageClassName: jenkins-pv 8 | accessModes: 9 | - ReadWriteOnce 10 | capacity: 11 | storage: 20Gi 12 | persistentVolumeReclaimPolicy: Retain 13 | hostPath: 14 | path: /E/K8S-DATA/jenkins-volume 15 | -------------------------------------------------------------------------------- /topics/k8s/basic/beginner/90daysofdevops/nginx.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: nginx 5 | "labels": { 6 | "name": "nginx" 7 | } 8 | --- 9 | apiVersion: apps/v1 10 | kind: Deployment 11 | metadata: 12 | name: nginx-deployment 13 | namespace: nginx 14 | spec: 15 | selector: 16 | matchLabels: 17 | app: nginx 18 | replicas: 1 19 | template: 20 | metadata: 21 | labels: 22 | app: nginx 23 | spec: 24 | containers: 25 | - name: nginx 26 | image: nginx 27 | ports: 28 | - containerPort: 80 29 | --- 30 | apiVersion: v1 31 | kind: Service 32 | metadata: 33 | name: nginx-service 34 | namespace: nginx 35 | spec: 36 | selector: 37 | app: nginx-deployment 38 | ports: 39 | - protocol: TCP 40 | port: 80 41 | targetPort: 80 -------------------------------------------------------------------------------- /topics/k8s/basic/beginner/GOOD-READ.md: -------------------------------------------------------------------------------- 1 | # Links: 2 | ## https://medium.com/google-cloud/kubernetes-110-your-first-deployment-bf123c1d3f8 3 | 4 | ## https://medium.com/google-cloud/kubernetes-120-networking-basics-3b903f13093a 5 | 6 | ## https://medium.com/google-cloud/kubernetes-120-networking-basics-3b903f13093a 7 | -------------------------------------------------------------------------------- 
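One thing worth flagging in the `90daysofdevops/nginx.yaml` manifest above: the Deployment's Pod template is labeled `app: nginx`, while `nginx-service` selects `app: nginx-deployment`, so the Service ends up with no endpoints and the app is unreachable through it. A minimal sketch of how to confirm and patch this on a live cluster (a suggested check, not a file in this repo):

```bash
# A Service selector must match the Pod template labels, otherwise the Service has no endpoints.
kubectl -n nginx get endpoints nginx-service   # an empty ENDPOINTS column indicates the mismatch
# One possible in-place fix: point the selector at the label the Pods actually carry.
kubectl -n nginx patch service nginx-service -p '{"spec":{"selector":{"app":"nginx"}}}'
```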
/topics/k8s/basic/beginner/gitea-deployment-service.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: gitea-deployment 5 | spec: 6 | replicas: 1 7 | selector: 8 | matchLabels: 9 | app: gitea 10 | template: 11 | metadata: 12 | labels: 13 | app: gitea 14 | spec: 15 | containers: 16 | - name: gitea-container 17 | image: gitea/gitea:1.4 18 | ports: 19 | - containerPort: 3000 20 | name: http 21 | - containerPort: 22 22 | name: ssh 23 | --- 24 | kind: Service #+ 25 | apiVersion: v1 #+ 26 | metadata: #+ 27 | name: gitea-service #+ 28 | spec: #+ 29 | selector: #+ 30 | app: gitea #+ 31 | ports: #+ 32 | - protocol: TCP #+ 33 | targetPort: 3000 #+ 34 | port: 80 #+ 35 | name: http #+ 36 | - protocol: TCP #+ 37 | targetPort: 22 #+ 38 | port: 22 #+ 39 | name: ssh #+ 40 | type: NodePort #+ -> Nodeport for local custer, for cloud provider use LoadBalancer option 41 | -------------------------------------------------------------------------------- /topics/k8s/basic/beginner/gitea-deployment-with-port.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: gitea-deployment 5 | spec: 6 | replicas: 3 7 | selector: 8 | matchLabels: 9 | app: gitea 10 | template: 11 | metadata: 12 | labels: 13 | app: gitea 14 | spec: 15 | containers: 16 | - name: gitea-container 17 | image: gitea/gitea:1.4 18 | ports: #+ 19 | - containerPort: 3000 #+ 20 | name: http #+ 21 | - containerPort: 22 #+ 22 | name: ssh #+ -------------------------------------------------------------------------------- /topics/k8s/basic/beginner/gitea-deployment.yaml: -------------------------------------------------------------------------------- 1 | 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: gitea-deployment 6 | spec: 7 | replicas: 3 8 | selector: 9 | matchLabels: 10 | app: gitea 11 | template: 12 | metadata: 13 | labels: 14 | app: gitea 15 | spec: 16 | containers: 17 | - name: gitea-container 18 | image: gitea/gitea:1.4 19 | -------------------------------------------------------------------------------- /topics/k8s/basic/beginner/gitea.yaml: -------------------------------------------------------------------------------- 1 | 2 | apiVersion: v1 3 | kind: Pod 4 | metadata: 5 | name: gitea-pod 6 | spec: 7 | containers: 8 | - name: gitea-container-tung 9 | image: gitea/gitea:1.4 10 | -------------------------------------------------------------------------------- /topics/k8s/basic/beginner/mysql.yaml: -------------------------------------------------------------------------------- 1 | 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: mysql-deployment 6 | spec: 7 | replicas: 1 8 | selector: 9 | matchLabels: 10 | app: mysql 11 | template: 12 | metadata: 13 | labels: 14 | app: mysql 15 | spec: 16 | containers: 17 | - name: mysql 18 | image: mysql:5.6 19 | ports: 20 | - containerPort: 3306 21 | # Ignore this for now. 
It will be explained in the next article 22 | env: 23 | - name: MYSQL_ALLOW_EMPTY_PASSWORD 24 | value: "true" 25 | --- 26 | kind: Service 27 | apiVersion: v1 28 | metadata: 29 | name: mysql-service 30 | spec: 31 | selector: 32 | app: mysql 33 | ports: 34 | - protocol: TCP 35 | port: 3306 36 | type: ClusterIP -------------------------------------------------------------------------------- /topics/k8s/basic/helloworld/k8s-helloworld-cleanup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | console_log() { 4 | echo ">>> [Kubernetes] $1" 5 | } 6 | 7 | kill_port() { 8 | ### Run the pgrep command to search for the process running on the specified port 9 | PID=$(pgrep 'port-forward') 10 | 11 | # If the pgrep command does not find any processes, exit the function 12 | if [ -z "$PID" ]; then 13 | return 14 | fi 15 | 16 | console_log "Killing $PID" 17 | kill -9 "$PID" 18 | } 19 | 20 | console_log "Cleanup Kubernetes Demo!" 21 | 22 | kill_port "$@" 23 | 24 | kubectl delete -f "hello-world/nginx-deployment.yaml" 25 | kubectl delete -f "hello-world/nginx-service.yaml" 26 | -------------------------------------------------------------------------------- /topics/k8s/basic/helloworld/k8s-helloworld.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | console_log() { 4 | echo ">>> [Kubernetes] $1" 5 | } 6 | 7 | local_port="9080" 8 | service_port="8081" 9 | port_fwd="$local_port:$service_port" 10 | 11 | console_log "Cleanup previous run if any" 12 | ./k8s-helloworld-cleanup.sh $port_fwd 13 | 14 | console_log "Welcome to Kubernetes!" 15 | 16 | console_log "Deploying your first app on Kubernetes" 17 | kubectl apply -f ./nginx-deployment.yaml 18 | 19 | console_log "Check deployment rollout status" 20 | kubectl rollout status deployment/nginx-deployment 21 | 22 | console_log "Check deployemnt" 23 | kubectl get deployments 24 | 25 | console_log "Check pod" 26 | kubectl get pods 27 | 28 | console_log "Apply service" 29 | kubectl apply -f ./nginx-service.yaml 30 | 31 | console_log "Check services" 32 | kubectl get services 33 | 34 | kubectl port-forward service/nginx-service $port_fwd & 35 | console_log "Waiting for port forward completed" 36 | sleep 10 37 | 38 | nginx_welcome_title="Welcome to nginx!" 
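# The curl | grep below is the actual smoke test: it fetches the forwarded service and looks for
# the nginx welcome title. If the match fails, the port-forward or the Deployment is most likely
# not ready yet, so waiting a little longer before retrying is a reasonable next step.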
39 | curl "localhost:$local_port" | grep "$nginx_welcome_title" 40 | -------------------------------------------------------------------------------- /topics/k8s/basic/helloworld/nginx-deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx-deployment 5 | labels: 6 | app: nginx 7 | spec: 8 | replicas: 3 9 | selector: 10 | matchLabels: 11 | app: nginx 12 | template: 13 | metadata: 14 | labels: 15 | app: nginx 16 | spec: 17 | containers: 18 | - name: nginx 19 | image: nginx:1.14.2 20 | ports: 21 | - containerPort: 80 22 | -------------------------------------------------------------------------------- /topics/k8s/basic/helloworld/nginx-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nginx-service 5 | spec: 6 | selector: 7 | app: nginx 8 | ports: 9 | - protocol: TCP 10 | port: 8081 11 | targetPort: 80 12 | -------------------------------------------------------------------------------- /topics/kafka/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Apache Kafka? 2 | 3 | - [Introduction to Apache Kafka](https://kafka.apache.org/documentation/) 4 | - [Youtube - What is Apache Kafka?](https://youtu.be/vHbvbwSEYGo?si=SbouSV-0NZzigsXV) 5 | 6 | ### Overview 7 | 8 | - Apache Kafka is a distributed event streaming platform capable of handling trillions of events a day. It is used for building real-time data pipelines and streaming applications. Kafka is horizontally scalable, fault-tolerant, and fast. 9 | 10 | - Kafka allows you to publish, subscribe to, store, and process streams of records in real-time. It is often used in scenarios where data needs to be processed or moved between systems efficiently, such as log aggregation, real-time analytics, or as a backbone for microservices. 11 | 12 | ### Kafka Architecture 13 | 14 | - [Understanding Kafka Architecture](https://kafka.apache.org/10/documentation/streams/architecture) 15 | 16 | ### Official Website Documentation for Apache Kafka 17 | 18 | - [Apache Kafka Documentation](https://kafka.apache.org/documentation/) 19 | 20 | ## 2. Prerequisites 21 | 22 | - Basic Linux command line skills 23 | - Understanding of distributed systems and event streaming concepts 24 | 25 | ## 3. Installation 26 | 27 | ### How to install Apache Kafka? 28 | 29 | - [Kafka Quickstart Guide](https://kafka.apache.org/quickstart) 30 | 31 | ## 4. Basics of Apache Kafka 32 | 33 | ### Getting Started with Kafka 34 | 35 | - [Kafka 101: Getting Started with Kafka](https://kafka.apache.org/quickstart) 36 | 37 | ### Kafka Basics 👋 38 | 39 | - See: [**basic**](./basic/) 40 | 41 | ## 5. Beyond the Basics 42 | 43 | - TODO 44 | 45 | ## 6. More... 46 | 47 | ### Kafka cheatsheet 48 | 49 | - https://www.redpanda.com/guides/kafka-tutorial-kafka-cheat-sheet 50 | 51 | ### Recommended Books 52 | 53 | - N/A 54 | -------------------------------------------------------------------------------- /topics/kafka/basic/README.md: -------------------------------------------------------------------------------- 1 | ## Kafka Basics 2 | 3 | Here's a basic "Hello World" example for Apache Kafka using Docker and Docker Compose. This will set up a Kafka broker and a Zookeeper instance, allowing you to produce and consume messages. 4 | 5 | ### 1. 
Create a `docker-compose.yml` File 6 | 7 | Create a [docker-compose.yml](./docker-compose.yml) file that defines the services for Zookeeper and Kafka. 8 | 9 | ### 2. Start Kafka and Zookeeper 10 | 11 | Run the following command to start the Kafka and Zookeeper containers: 12 | 13 | ```bash 14 | cd devops-basics/topics/kafka/basic 15 | docker-compose up -d 16 | ``` 17 | 18 | This command will start Zookeeper and Kafka in the background. 19 | 20 | ### 3. Create a Kafka Topic 21 | 22 | Once the containers are running, create a Kafka topic named `helloworld`. 23 | 24 | ```bash 25 | docker exec kafka kafka-topics.sh --create --topic helloworld --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1 26 | ``` 27 | 28 | ### 4. Produce Messages to the Kafka Topic 29 | 30 | To send a message to the `helloworld` topic: 31 | 32 | ```bash 33 | docker exec -it kafka kafka-console-producer.sh --topic helloworld --bootstrap-server localhost:9092 34 | ``` 35 | 36 | Type a message (e.g., `Hello, Kafka!`) and press Enter. This sends the message to the Kafka topic. 37 | 38 | ### 5. Consume Messages from the Kafka Topic 39 | 40 | To read the message from the `helloworld` topic: 41 | 42 | ```bash 43 | docker exec -it kafka kafka-console-consumer.sh --topic helloworld --bootstrap-server localhost:9092 --from-beginning 44 | ``` 45 | 46 | You should see the message you produced earlier. 47 | 48 | ### 6. Cleanup 49 | 50 | To stop and remove the Kafka and Zookeeper containers, run: 51 | 52 | ```bash 53 | cd devops-basics/topics/kafka/basic 54 | docker-compose down 55 | ``` 56 | 57 | This basic setup allows you to get hands-on experience with Kafka using Docker and Docker Compose. You can extend this setup to explore more advanced Kafka features. 58 | -------------------------------------------------------------------------------- /topics/kafka/basic/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | services: 3 | zookeeper: 4 | image: bitnami/zookeeper:latest 5 | container_name: zookeeper 6 | environment: 7 | - ZOO_ENABLE_AUTH=no 8 | - ALLOW_ANONYMOUS_LOGIN=yes 9 | ports: 10 | - '2181:2181' 11 | 12 | kafka: 13 | image: bitnami/kafka:latest 14 | container_name: kafka 15 | ports: 16 | - '9092:9092' 17 | environment: 18 | - KAFKA_BROKER_ID=1 19 | - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 20 | - KAFKA_LISTENERS=PLAINTEXT://:9092 21 | - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 22 | depends_on: 23 | - zookeeper 24 | -------------------------------------------------------------------------------- /topics/microservices/README.md: -------------------------------------------------------------------------------- 1 | # Docs 2 | 3 | - https://www.nginx.com/blog/deploying-microservices/ 4 | 5 | ## 1. Microservices Demo 6 | 7 | - Check out [GoogleCloudPlatform/microservices-demo](https://github.com/GoogleCloudPlatform/microservices-demo) 8 | - Also check out [Azure-Samples/aks-store-demo](https://github.com/Azure-Samples/aks-store-demo/tree/main) 9 | 10 | ## 2. Microservices architecture design (by Azure) 11 | 12 | - Microservices architecture design: https://learn.microsoft.com/en-us/azure/architecture/microservices/ 13 | - aks-microservices: [aks-microservices](https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/containers/aks-microservices/aks-microservices) 14 | - https://dotnet.microsoft.com/en-us/learn/aspnet/microservices-architecture 15 | 16 | ## 3. 
Hands-on 17 | 18 | ### Basics 19 | 20 | - Checkout [basic](./basic/) content 21 | -------------------------------------------------------------------------------- /topics/microservices/assets/first-demo-microservices-result.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/topics/microservices/assets/first-demo-microservices-result.png -------------------------------------------------------------------------------- /topics/microservices/basic/README.md: -------------------------------------------------------------------------------- 1 | # Demo microservices 2 | 3 | - Using GCP demo: https://github.com/GoogleCloudPlatform/microservices-demo 4 | - Manifest: https://github.com/GoogleCloudPlatform/microservices-demo/blob/main/release/kubernetes-manifests.yaml 5 | 6 | ## 1. Provision K8s cluster 7 | 8 | - Find the installation via [k8s](../../k8s/) content 9 | 10 | ## 2. Run hello microservices script 11 | 12 | Prerequisite: 13 | 14 | - A k8s cluster up and running (step 1) 15 | 16 | Run: 17 | 18 | ```bash 19 | ./hello-microservices.sh 20 | ``` 21 | 22 | This will deploy the application then forward the service to port `8080` (or you could adjust to another port works with you machine) 23 | 24 | ## 3. Check the result 25 | 26 | Visit localhost:8080, you should get the similar result like this: 27 | 28 | ![first-demo-microservices-result](../assets/first-demo-microservices-result.png) 29 | 30 | ## 4. Cleanup 31 | 32 | Run: 33 | 34 | ```bash 35 | ./cleanup-hello-microservices.sh 36 | ``` 37 | -------------------------------------------------------------------------------- /topics/microservices/basic/cleanup-hello-microservices.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | NAME_SPACE="demo-app" 4 | 5 | kubectl delete namespace $NAME_SPACE 6 | -------------------------------------------------------------------------------- /topics/microservices/basic/hello-microservices.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | console_log() { 4 | echo ">>> [Microservices] $1" 5 | } 6 | 7 | # Variables 8 | NAME_SPACE="demo-app" 9 | 10 | # Deploy 11 | console_log "Deploying the services" 12 | console_log "NAME_SPACE: $NAME_SPACE" 13 | 14 | if kubectl get namespace "$NAME_SPACE" &>/dev/null; then 15 | console_log "Namespace $NAME_SPACE already exists." 16 | else 17 | console_log "Namespace $NAME_SPACE does not exist. Creating..." 18 | kubectl create namespace "$NAME_SPACE" 19 | fi 20 | 21 | kubectl apply -n $NAME_SPACE -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/main/release/kubernetes-manifests.yaml 22 | kubectl get pods -n $NAME_SPACE 23 | 24 | # Wait for services up 25 | console_log "Waiting for all services are up. Sleeping 120s..." 
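# An alternative to a fixed sleep (assumes the applied manifests create standard Deployments):
# kubectl wait --for=condition=available deployment --all -n $NAME_SPACE --timeout=180s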
26 | # sleep 120 27 | 28 | console_log "Checking all resouces in namespace '$NAME_SPACE'" 29 | kubectl get all -n $NAME_SPACE 30 | 31 | console_log "Checking pods" 32 | kubectl get pods -n $NAME_SPACE 33 | 34 | # Port forward 35 | console_log "Forward the 'service/frontend' service" 36 | ## NOTE: You can change the '8080' port to whatever works on your machine 37 | kubectl port-forward service/frontend -n $NAME_SPACE 8080:80 38 | 39 | # Access the application 40 | ## Visit http://localhost:8080 to check your application 41 | 42 | -------------------------------------------------------------------------------- /topics/nginx/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Nginx? 2 | 3 | ### Overview 4 | 5 | - The leading open source automation server, Nginx provides hundreds of plugins to support building, deploying and automating any project. 6 | 7 | ### Nginx workflow 8 | 9 | - N/A 10 | 11 | ### Official website documentation of Nginx 12 | 13 | - https://nginx.org/ 14 | 15 | ## 2. Prerequisites 16 | 17 | - Basic networking, HTTP, Linux 18 | 19 | ## 3. Installation 20 | 21 | ### How to install Nginx? 22 | 23 | - https://nginx.org/en/docs/install.html 24 | 25 | ### Install Nginx with Docker 26 | 27 | - TODO 28 | 29 | ## 4. Basics of Nginx 30 | 31 | ### Nginx getting started 32 | 33 | - Begginner's Guide: https://nginx.org/en/docs/beginners_guide.html 34 | 35 | ### Nginx Hello World 36 | 37 | - See: [basic](./basic/) 38 | 39 | ## 5. Beyond the Basics 40 | 41 | ### Hands-On Example 42 | 43 | - Check the [advanced/](./advanced/) directory for more Nginx examples. 44 | 45 | ## 6. More... 46 | 47 | ### Admin guide 48 | 49 | - https://docs.nginx.com/nginx/admin-guide/ 50 | 51 | ### Nginx cheatsheet 52 | 53 | - N/A 54 | 55 | ### Recommended Books 56 | 57 | - N/A 58 | -------------------------------------------------------------------------------- /topics/nginx/advanced/README.md: -------------------------------------------------------------------------------- 1 | # TODO 2 | -------------------------------------------------------------------------------- /topics/nginx/basic/README.md: -------------------------------------------------------------------------------- 1 | # Nginx demo with docker compose 2 | 3 | - Prerequisites: 4 | - Docker + Docker compose installed 5 | - This setup will include two services: one for NGINX and another for a simple web server (e.g., an HTTP server running in a Python container). 6 | - Main files: 7 | 8 | - [nginx.conf](./nginx.conf): Contains basic NGINX configuration 9 | - [html](./html/): Contains HTML file for web server running with python 10 | - [docker-compose.yaml](./docker-compose.yaml): To deploy 2 separated containers for this demo (Nginx + HTTP web server) 11 | 12 | - Run the hands on: 13 | 14 | ```bash 15 | cd devops-basics/topics/nginx/basic 16 | docker-compose up -d 17 | ``` 18 | 19 | - Now you'll have an NGINX server acting as a reverse proxy to another web server running in a separate Docker container. 20 | - Visit: http://localhost:7080/ you could see: 21 | 22 | ![demo_nginx_basic_ok](./assets/demo_nginx_basic_ok.png) 23 | 24 | _NOTE_: You can change the localhost port from `7080` to any port works on your machine, and update the port definition in `docker-compose.yaml` as well. 
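A quick way to confirm from the command line that the reverse proxy described above is serving the page (assuming the default `7080` host port from `docker-compose.yaml` is kept):

```bash
# Request the demo page through the NGINX reverse proxy; the demo text should show up in the output.
curl -s http://localhost:7080/ | grep -i "web server"
```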
25 | 26 | - To cleanup resouce, run: 27 | 28 | ```bash 29 | cd devops-basics/topics/nginx/basic 30 | docker-compose down 31 | ``` 32 | -------------------------------------------------------------------------------- /topics/nginx/basic/assets/demo_nginx_basic_ok.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/topics/nginx/basic/assets/demo_nginx_basic_ok.png -------------------------------------------------------------------------------- /topics/nginx/basic/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | 3 | services: 4 | # Simple web server in python 5 | web: 6 | image: python:3.9-slim 7 | container_name: web-server 8 | command: python -m http.server 8000 9 | volumes: 10 | - ./html:/app 11 | working_dir: /app 12 | expose: 13 | - '8000' 14 | 15 | # Nginx reverse proxy 16 | nginx: 17 | image: nginx:latest 18 | container_name: nginx-proxy 19 | ports: 20 | # Repace 7080 by your desired port 21 | - '7080:80' 22 | volumes: 23 | - ./nginx.conf:/etc/nginx/nginx.conf 24 | depends_on: 25 | - web 26 | -------------------------------------------------------------------------------- /topics/nginx/basic/html/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Welcome to the Web Server! 5 | 6 | 7 |

  <h1>Success! The web server is working!</h1>

8 | 9 | 10 | -------------------------------------------------------------------------------- /topics/nginx/basic/nginx.conf: -------------------------------------------------------------------------------- 1 | events {} 2 | 3 | http { 4 | server { 5 | listen 80; 6 | 7 | location / { 8 | proxy_pass http://web:8000; 9 | proxy_set_header Host $host; 10 | proxy_set_header X-Real-IP $remote_addr; 11 | proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 12 | proxy_set_header X-Forwarded-Proto $scheme; 13 | } 14 | } 15 | } 16 | -------------------------------------------------------------------------------- /topics/openstack/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Openstack? 2 | 3 | ### Overview 4 | 5 | OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. 6 | 7 | ### Openstack Architecture 8 | 9 | For a deeper understanding, refer to the [Openstack Architecture documentation](https://www.openstack.org/openstack-map). 10 | 11 | ### Official website documentation of Openstack 12 | 13 | - Access the complete [Official Openstack documentation](https://docs.openstack.org/2023.2/) for detailed information and references. 14 | 15 | ## 2. Prerequisites 16 | 17 | - OS/Linux and Cloud concepts 18 | 19 | ## 3. Installation 20 | 21 | ### How to install Openstack? 22 | 23 | - Follow the steps outlined in the [Openstack installation documentation](https://docs.openstack.org/2023.2/install/) for both local development and production environments. 24 | - Or use the installation script in [basic](./basic/) 25 | 26 | ## 4. Basics of Openstack 27 | 28 | ### Getting started with Openstack 29 | 30 | - Refer to the [official Openstack getting started documentation](https://docs.openstack.org/install-guide/get-started-with-openstack.html) for a comprehensive introduction. 31 | 32 | ### Openstack Hello World 33 | 34 | - Run the [basic/openstack-helm.sh](./basic/openstack-helm.sh) script to execute a simple Openstack "Hello World" demonstration. 35 | 36 | ## 5. Beyond the Basics 37 | 38 | ### Hands-On Example 39 | 40 | - TODO 41 | 42 | ## 6. More 43 | 44 | ### Openstack Cheatsheet 45 | 46 | - Use the [Openstack cheatsheet](https://ubuntu.com/openstack/openstack-cheat-sheet) as a quick reference guide for Openstack commands and functionalities. 47 | 48 | ### Recommended Books 49 | 50 | - [OpenStack Cloud Computing Cookbook - Fourth Edition](https://a.co/d/34FukGa) 51 | -------------------------------------------------------------------------------- /topics/openstack/basic/README.md: -------------------------------------------------------------------------------- 1 | # Getting started with Openstack 2 | 3 | Deploy openstack on Kubernetes. 
4 | Documentation: https://docs.openstack.org/openstack-helm/latest/install/index.html 5 | 6 | ## Deploy on k8s cluster (with Helm) 7 | 8 | - Run command: 9 | 10 | ``` 11 | cd ./basic 12 | chmod +x cleanup.sh 13 | ./cleanup.sh 14 | 15 | chmod +x openstack-helm.sh 16 | ./openstack-helm.sh 17 | ``` 18 | -------------------------------------------------------------------------------- /topics/openstack/basic/cleanup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | 5 | console_log() { 6 | echo -e "${GREEN}>>> [Openstack] [Cleanup] $1${RESET}" 7 | } 8 | 9 | check_and_delete() { 10 | namespace_name=$1 11 | # Check if the namespace exists 12 | if kubectl get namespace "$namespace_name" &>/dev/null; then 13 | echo "Namespace '$namespace_name' exists. Deleting resources..." 14 | kubectl delete all --all --namespace "$namespace_name" || true 15 | kubectl delete namespace "$namespace_name" || true 16 | else 17 | echo "Namespace '$namespace_name' does not exist." 18 | # Add any handling or exit commands here if needed 19 | fi 20 | } 21 | 22 | cleanup_namespaces() { 23 | console_log "Cleanup previous k8s resources" 24 | check_and_delete ceph 25 | check_and_delete openstack 26 | check_and_delete osh-infra 27 | check_and_delete rook-ceph 28 | } 29 | 30 | # Set colors for console output 31 | GREEN='\033[0;32m' 32 | RESET='\033[0m' 33 | 34 | # Call cleanup function 35 | cleanup_namespaces 36 | 37 | console_log "Cleanup completed!" 38 | -------------------------------------------------------------------------------- /topics/packer/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Packer? 2 | 3 | ### Overview 4 | 5 | - Packer is a tool that lets you create identical machine images for multiple platforms from a single source template. 6 | - Packer can create golden images to use in image pipelines. 7 | 8 | ### Official website documentation of Packer 9 | 10 | - https://www.packer.io/ 11 | 12 | ## 2. Prerequisites 13 | 14 | - N/A 15 | 16 | ## 3. Installation 17 | 18 | ### How to install Packer? 19 | 20 | - https://developer.hashicorp.com/packer/install 21 | 22 | ## 4. Basics of Packer 23 | 24 | ### Packer getting started 25 | 26 | - Begginner's Guide: https://developer.hashicorp.com/packer/tutorials 27 | 28 | ### Packer Hands on 29 | 30 | - See: [basic](./basic/) 31 | 32 | ## 5. More... 33 | 34 | ### Packer cheatsheet 35 | 36 | - N/A 37 | 38 | ### Recommended Books 39 | 40 | - N/A 41 | -------------------------------------------------------------------------------- /topics/packer/basic/README.md: -------------------------------------------------------------------------------- 1 | # Build an Ubuntu machine image on AWS with Packer 2 | 3 | ## Prerequisites 4 | 5 | - AWS account 6 | - Packer installed 7 | - Authenticate to AWS 8 | ```bash 9 | export AWS_ACCESS_KEY_ID="" 10 | export AWS_SECRET_ACCESS_KEY="" 11 | ``` 12 | - Doc: https://developer.hashicorp.com/packer/integrations/hashicorp/amazon#iam-task-or-instance-role 13 | 14 | ## Init 15 | 16 | ```bash 17 | packer init . 18 | ``` 19 | 20 | ## Build 21 | 22 | ```bash 23 | packer build aws-ubuntu.pkr.hcl 24 | 25 | # ... 26 | # ==> Wait completed after 5 minutes 15 seconds 27 | # ==> Builds finished. 
The artifacts of successful builds are: 28 | # --> learn-packer.amazon-ebs.ubuntu: AMIs were created: 29 | # us-west-2: ami-xxxxyyyyzzzztttt 30 | ``` 31 | 32 | ## Verify 33 | 34 | - Go to AWS Console `us-west-2` (The region we build packer AMI): https://us-west-2.console.aws.amazon.com/ec2/home?region=us-west-2#Images:visibility=owned-by-me 35 | - You now can see your AMI with name: `learn-packer-linux-aws-redis-` 36 | ![](./assets/ami-on-aws.png) 37 | 38 | ## Cleanup 39 | 40 | - Once you dont want to use the AMI anymore, follow https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/deregister-ami.html to delete it. 41 | -------------------------------------------------------------------------------- /topics/packer/basic/assets/ami-on-aws.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/topics/packer/basic/assets/ami-on-aws.png -------------------------------------------------------------------------------- /topics/packer/basic/aws-ubuntu.pkr.hcl: -------------------------------------------------------------------------------- 1 | packer { 2 | required_plugins { 3 | amazon = { 4 | version = ">= 1.2.8" 5 | source = "github.com/hashicorp/amazon" 6 | } 7 | } 8 | } 9 | 10 | source "amazon-ebs" "ubuntu" { 11 | ami_name = "${var.ami_prefix}-${local.timestamp}" 12 | instance_type = "t2.micro" 13 | region = "us-west-2" 14 | source_ami_filter { 15 | filters = { 16 | name = "ubuntu/images/*ubuntu-jammy-22.04-amd64-server-*" 17 | root-device-type = "ebs" 18 | virtualization-type = "hvm" 19 | } 20 | most_recent = true 21 | owners = ["099720109477"] 22 | } 23 | ssh_username = "ubuntu" 24 | } 25 | 26 | build { 27 | name = "learn-packer" 28 | sources = [ 29 | "source.amazon-ebs.ubuntu" 30 | ] 31 | 32 | provisioner "shell" { 33 | environment_vars = [ 34 | "FOO=hello world", 35 | ] 36 | inline = [ 37 | "echo Installing Redis", 38 | "sleep 30", 39 | "sudo apt-get update", 40 | "sudo apt-get install -y redis-server", 41 | "echo \"FOO is $FOO\" > example.txt", 42 | ] 43 | } 44 | 45 | provisioner "shell" { 46 | inline = ["echo This provisioner runs last"] 47 | } 48 | } 49 | 50 | variable "ami_prefix" { 51 | type = string 52 | default = "learn-packer-linux-aws-redis" 53 | } 54 | 55 | locals { 56 | timestamp = regex_replace(timestamp(), "[- TZ:]", "") 57 | } 58 | -------------------------------------------------------------------------------- /topics/prometheus/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Prometheus? 2 | 3 | - https://prometheus.io/docs/introduction/overview/#what-is-prometheus 4 | 5 | ### Overview 6 | 7 | - Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. 8 | 9 | ### Prometheus architecture 10 | 11 | 12 | 13 | - (Image source provided by https://prometheus.io/docs/introduction/overview/#architecture) 14 | 15 | ### Official website documentation of Prometheus 16 | 17 | - https://prometheus.io/docs/introduction/overview/ 18 | 19 | ## 2. Prerequisites 20 | 21 | - Linux, Helm, k8s 22 | 23 | ## 3. Installation 24 | 25 | ### How to install Prometheus? 26 | 27 | - https://prometheus.io/docs/prometheus/latest/installation/ 28 | 29 | ## 4. 
Basics of Prometheus 30 | 31 | ### Prometheus getting started 32 | 33 | - https://prometheus.io/docs/prometheus/latest/getting_started/ 34 | 35 | ### Prometheus Hello World 36 | 37 | - Required knowledge in [helm](../../topics/helm/) | [k8s](../../topics/k8s/) first for better understanding. Because we will deploy our own Prometheus to K8s using Helm 38 | - Run the demo scipt: `cd basic; ./prometheus-helloworld.sh` 39 | - (Optional) Run the demo scipt and cleanup right after the demo: `cd basic; ./prometheus-helloworld.sh true` 40 | 41 | ## 5. Beyond the Basics 42 | 43 | ### Hands-On Example 44 | 45 | - Check the [advanced/](./advanced/) directory for more Prometheus examples. 46 | 47 | ## 6. More... 48 | 49 | ### Prometheus cheatsheet 50 | 51 | - https://promlabs.com/promql-cheat-sheet/ 52 | 53 | ### Recommended Books 54 | 55 | - N/A 56 | -------------------------------------------------------------------------------- /topics/prometheus/advanced/README.md: -------------------------------------------------------------------------------- 1 | # TODO 2 | -------------------------------------------------------------------------------- /topics/prometheus/basic/prometheus-helloworld-cleanup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | console_log() { 4 | echo ">>> [Prometheus] $1" 5 | } 6 | 7 | kill_port() { 8 | local port_fwd=$1 9 | ### Run the pgrep command to find the process ID for 'port-forward' 10 | process_line=$(pgrep -f "port-forward.*$port_fwd") 11 | ### Extract the PID from the process_line using awk or cut 12 | PID="$process_line" # Using awk 13 | console_log "Killing $PID" 14 | kill -9 $PID 15 | } 16 | 17 | uninstall_chart() { 18 | local char_name=$1 19 | helm uninstall "$char_name" 20 | } 21 | 22 | console_log "Cleanup Prometheus Demo!" 23 | kill_port "$1" 24 | uninstall_chart "$2" 25 | -------------------------------------------------------------------------------- /topics/python/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Python? 2 | 3 | ### Overview 4 | 5 | - Python's combination of simplicity, power, extensive libraries, community support, and adaptability to various DevOps tasks makes it a go-to language for many professionals in this field. 6 | - Its effectiveness in automating workflows, managing infrastructure, and integrating with a plethora of tools solidifies its role as a key player in the DevOps landscape. 7 | 8 | ### Python workflow 9 | 10 | - N/A 11 | 12 | ### Official website documentation of Python 13 | 14 | - https://www.python.org/doc/ 15 | 16 | ## 2. Prerequisites 17 | 18 | - N/A 19 | 20 | ## 3. Installation 21 | 22 | ### How to install Python? 23 | 24 | - Visit the Python Downloads Page to access the latest version of Python suitable for your operating system. 25 | - https://www.python.org/downloads/ 26 | 27 | ## 4. Basics of Python 28 | 29 | ### Python getting started 30 | 31 | - If you're new to Python, the Python Official Getting Started Guide provides comprehensive insights into setting up and beginning your Python journey. 32 | - https://www.python.org/about/gettingstarted/ 33 | 34 | ### Python Hello World 35 | 36 | - Explore the [helloworld.py](./basic/helloworld.py) file in the helloworld directory to get a basic introduction to running a Python script. 37 | - Run `cd helloworld; python3 helloworld.py` 38 | 39 | ## 5. 
Beyond the Basics 40 | 41 | ### Hands-On Example 42 | 43 | - Find more examples at [advanced](./advanced/) 44 | 45 | ## 6. More... 46 | 47 | ### Python cheatsheet 48 | 49 | - https://www.pythoncheatsheet.org/ 50 | 51 | ### Recommended Books 52 | 53 | - N/A 54 | -------------------------------------------------------------------------------- /topics/python/advanced/examples/01-factorial-calculator.py: -------------------------------------------------------------------------------- 1 | # Example Python Script: Factorial Calculator 2 | 3 | # Function to calculate factorial 4 | def calculate_factorial(number): 5 | result = 1 6 | for i in range(1, number + 1): 7 | result *= i 8 | return result 9 | 10 | # Function to get user input and validate 11 | def get_valid_input(): 12 | while True: 13 | try: 14 | user_input = int(input("Enter a positive integer: ")) 15 | if user_input > 0: 16 | return user_input 17 | else: 18 | print("Please enter a positive integer.") 19 | except ValueError: 20 | print("Invalid input. Please enter a valid integer.") 21 | 22 | # Main program 23 | print("Factorial Calculator") 24 | 25 | # Get user input 26 | number = get_valid_input() 27 | 28 | # Calculate and display the factorial 29 | result = calculate_factorial(number) 30 | print(f"The factorial of {number} is: {result}") 31 | -------------------------------------------------------------------------------- /topics/python/advanced/examples/02-parse-json-file.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | # Read the JSON file 4 | with open('sample_files/persional_info.json') as json_file: 5 | info = json.load(json_file) 6 | 7 | # Display the parsed JSON data 8 | print("Name:", info['name']) 9 | print("Age:", info['age']) 10 | print("Email:", info['email']) 11 | print("Address:") 12 | print("Street:", info['address']['street']) 13 | print("City:", info['address']['city']) 14 | print("Zipcode:", info['address']['zipcode']) 15 | print("Interests:", ', '.join(info['interests'])) 16 | -------------------------------------------------------------------------------- /topics/python/advanced/examples/03-oop-with-animal.py: -------------------------------------------------------------------------------- 1 | class Animal: 2 | def __init__(self, species, sound): 3 | self.species = species 4 | self.sound = sound 5 | 6 | def make_sound(self): 7 | return f"The {self.species} makes a {self.sound} sound." 8 | 9 | 10 | class Dog(Animal): 11 | def __init__(self, name): 12 | super().__init__('dog', 'bark') 13 | self.name = name 14 | 15 | def wag_tail(self): 16 | return f"{self.name} wags its tail happily." 17 | 18 | 19 | class Cat(Animal): 20 | def __init__(self, name): 21 | super().__init__('cat', 'meow') 22 | self.name = name 23 | 24 | def purr(self): 25 | return f"{self.name} purrs softly while being pet." 26 | 27 | 28 | # Creating instances of animals 29 | dog = Dog('Buddy') 30 | cat = Cat('Whiskers') 31 | 32 | # Using methods of the instances 33 | print(dog.make_sound()) # Output: "The dog makes a bark sound." 34 | print(dog.wag_tail()) # Output: "Buddy wags its tail happily." 35 | 36 | print(cat.make_sound()) # Output: "The cat makes a meow sound." 37 | print(cat.purr()) # Output: "Whiskers purrs softly while being pet." 
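# Usage: run `python3 03-oop-with-animal.py` from topics/python/advanced/examples/ to see the output above.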
38 | -------------------------------------------------------------------------------- /topics/python/advanced/examples/04-api-call.py: -------------------------------------------------------------------------------- 1 | import requests 2 | from requests.exceptions import RequestException 3 | 4 | def print_advise(): 5 | """ 6 | This function prints a random advice by calling the `adviceslip` endpoint 7 | """ 8 | url = "https://api.adviceslip.com/advice" 9 | try: 10 | # Call the api endpoint, skip the SSL verification 11 | response = requests.get(url, verify=False) 12 | 13 | # Raise an exception for HTTP errors 14 | response.raise_for_status() 15 | 16 | # Get the data from respose 17 | advice_data = response.json() 18 | 19 | # Print the advice 20 | print("Random Advice:", advice_data["slip"]["advice"]) 21 | except RequestException as e: 22 | # Print the error and status code if any error occurred 23 | print(f"Error: {e}") 24 | if hasattr(e, "response") and e.response: 25 | print(f"Status Code: {e.response.status_code}") 26 | 27 | 28 | if __name__ == "__main__": 29 | print_advise() 30 | -------------------------------------------------------------------------------- /topics/python/advanced/examples/README.md: -------------------------------------------------------------------------------- 1 | # Python script examples 2 | 3 | ## List 4 | 5 | | ID | Example | URL | Status | 6 | | :-- | :------------------- | :--------------------------------------------------------- | :------ | 7 | | 01 | Factorial Calculator | [01-factorial-calculator.py](./01-factorial-calculator.py) | ✔️ Done | 8 | | 02 | Parse Json file | [02-parse-json-file.py](./02-parse-json-file.py) | ✔️ Done | 9 | | 03 | OOP with animal | [03-oop-with-animal.py](./03-oop-with-animal.py) | ✔️ Done | 10 | | 04 | Api call Example | [04-api-call.py](./04-api-call.py) | ✔️ Done | 11 | -------------------------------------------------------------------------------- /topics/python/advanced/examples/sample_files/persional_info.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "John Doe", 3 | "age": 30, 4 | "email": "johndoe@example.com", 5 | "address": { 6 | "street": "123 Main St", 7 | "city": "Anytown", 8 | "zipcode": "12345" 9 | }, 10 | "interests": ["programming", "hiking", "reading"] 11 | } 12 | -------------------------------------------------------------------------------- /topics/python/basic/helloworld.py: -------------------------------------------------------------------------------- 1 | # Basic Python Script: Greeting User 2 | 3 | # Ask the user for their name 4 | name = input("Enter your name: ") 5 | 6 | # Greet the user based on their input 7 | if name: 8 | print(f"Hello, {name}! Welcome to Python.") 9 | else: 10 | print("Hello, anonymous! Welcome to Python.") 11 | -------------------------------------------------------------------------------- /topics/shell/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Shell? 2 | 3 | ### Overview 4 | 5 | - A shell script is a computer program designed to be run by a Unix shell, a command-line interpreter. The various dialects of shell scripts are considered to be scripting languages. 6 | - Typical operations performed by shell scripts include file manipulation, program execution, and printing text. A script which sets up the environment, runs the program, and does any necessary cleanup or logging, is called a wrapper. 
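To make the wrapper idea above concrete, here is a minimal sketch (the wrapped program, lock file, and log path are placeholders, not files from this repo):

```bash
#!/bin/bash
# Minimal wrapper script: prepare the environment, run the program, then log and clean up.
export APP_ENV="demo"                              # environment setup (placeholder variable)
./my-program "$@" >> /tmp/my-program.log 2>&1      # run the wrapped program, capturing its output
status=$?                                          # remember the program's exit code
rm -f /tmp/my-program.lock                         # cleanup step (placeholder lock file)
exit "$status"                                     # propagate the wrapped program's exit code
```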
7 | 8 | ### Shell workflow 9 | 10 | - N/A 11 | 12 | ### Official website documentation of Shell 13 | 14 | - https://en.wikipedia.org/wiki/Shell_script 15 | 16 | ## 2. Prerequisites 17 | 18 | - Linux, Docker, K8s 19 | 20 | ## 3. Installation 21 | 22 | ### How to install Shell? 23 | 24 | - Just install Linux and you will have a shell environment available as well 25 | 26 | ## 4. Basics of Shell 27 | 28 | ### Shell getting started 29 | 30 | - https://www.shellscript.sh/ 31 | 32 | ### Shell Hello World 33 | 34 | - See: [basic](./basic/) 35 | 36 | ## 5. Beyond the Basics 37 | 38 | ### Hands-On Example 39 | 40 | - Do more practice exercises at [advanced](./advanced/) 41 | 42 | ## 6. More... 43 | 44 | ### Shell cheatsheet 45 | 46 | - N/A 47 | 48 | ### Recommended Books 49 | 50 | - N/A 51 | -------------------------------------------------------------------------------- /topics/shell/advanced/examples/README.md: -------------------------------------------------------------------------------- 1 | # Collection of hands-on shell examples 2 | -------------------------------------------------------------------------------- /topics/shell/advanced/examples/list.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Define the output file name 4 | # In this script, we first define the output file name as file_list.txt. 5 | output_file="file_list.txt" 6 | 7 | # Then, we use the ls command with the *.txt pattern to list all .txt files in the ../data directory. The > operator redirects the output of the ls command to the specified output file. 8 | ls ../data/*.txt > "$output_file" 9 | 10 | # Finally, we display a message indicating the completion of the task, specifying the name of the output file. 11 | echo "List of .txt files has been saved to $output_file" 12 | 13 | # After running the script, you will find the list of .txt files from ../data stored in the file_list.txt file in the current directory. 14 | -------------------------------------------------------------------------------- /topics/shell/advanced/excercise/README.md: -------------------------------------------------------------------------------- 1 | # Exercise 2 | 3 | ## A practical collection to improve your shell scripting skills 4 | 5 | ### 01-System Health Check: Develop a script that checks system health by monitoring CPU load, memory usage, and disk space.
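One possible shape for such a check, with an illustrative disk-usage warning added on top of the basic commands (the 90% cutoff is an arbitrary example; the repository's own answer is linked just below):

```bash
#!/bin/bash
# Report CPU load, memory usage, and disk space, and warn if the root filesystem is nearly full.
echo "CPU Load:"
uptime
echo -e "\nMemory Usage:"
free -h
echo -e "\nDisk Space:"
df -h

# Warn when root filesystem usage crosses an (arbitrary) threshold.
usage=$(df --output=pcent / | tail -1 | tr -dc '0-9')
if [ "${usage}" -ge 90 ]; then
  echo -e "\nWARNING: root filesystem is ${usage}% full"
fi
```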
6 | 7 | - Answer: [answers/01_system_health_check.sh](./answers/01_system_health_check.sh) 8 | -------------------------------------------------------------------------------- /topics/shell/advanced/excercise/answers/01_system_health_check.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "CPU Load:" 3 | uptime 4 | echo -e "\nMemory Usage:" 5 | free -h 6 | echo -e "\nDisk Space:" 7 | df -h 8 | -------------------------------------------------------------------------------- /topics/shell/advanced/excercise/answers/02_password_generator.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Function to generate random password 4 | generate_password() { 5 | # Define the characters to use for generating password 6 | characters="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_+=" 7 | 8 | # Initialize variable to store generated password 9 | password="" 10 | 11 | # Generate random password 12 | for i in {1..30}; do 13 | # Get a random character from the list of characters 14 | random_char=${characters:RANDOM % ${#characters}:1} 15 | # Append the random character to the password 16 | password="${password}${random_char}" 17 | done 18 | 19 | # Print the generated password 20 | echo "$password" 21 | } 22 | 23 | # Call the function to generate password 24 | generate_password -------------------------------------------------------------------------------- /topics/shell/basic/data/example.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "John Doe", 3 | "age": 30, 4 | "occupation": "Software Engineer" 5 | } 6 | -------------------------------------------------------------------------------- /topics/shell/basic/data/grep_example.txt: -------------------------------------------------------------------------------- 1 | This is an example file. 2 | It contains some text for demonstration. 3 | We will use grep to search for specific patterns. 4 | 5 | It will search for the pattern "grep" within the grep_example.txt file and display any lines that contain the pattern. 6 | 7 | Here the output will have line 3 and line 5 as both have the given keyword in them. 8 | 9 | //////////// 10 | sample text it 11 | 12 | 13 | ///////////// -------------------------------------------------------------------------------- /topics/shell/basic/data/one.txt: -------------------------------------------------------------------------------- 1 | 1st sample file. -------------------------------------------------------------------------------- /topics/shell/basic/data/three.txt: -------------------------------------------------------------------------------- 1 | 3rd sample file. -------------------------------------------------------------------------------- /topics/shell/basic/data/two.txt: -------------------------------------------------------------------------------- 1 | 2nd sample file. -------------------------------------------------------------------------------- /topics/snyk/basic/README.md: -------------------------------------------------------------------------------- 1 | Coming soon 2 | -------------------------------------------------------------------------------- /topics/sql/README.md: -------------------------------------------------------------------------------- 1 | ## 1. 
What is MySQL? 2 | 3 | ### Overview 4 | 5 | - MySQL is a popular open-source relational database management system (RDBMS) that stores data in tables and is queried with SQL. 6 | 7 | ### Official website documentation of MySQL 8 | 9 | - Visit https://dev.mysql.com/doc/ 10 | 11 | ## 2. Installation 12 | 13 | ### How to install MySQL? 14 | 15 | - https://dev.mysql.com/doc/mysql-installation-excerpt/5.7/en/ 16 | 17 | ## 3. Basics of MySQL 18 | 19 | ### Getting started with MySQL 20 | 21 | - https://dev.mysql.com/doc/mysql-getting-started/en/ 22 | 23 | ### MySQL Hello World ⭐ 24 | 25 | - Visit [mysql-basics](./mysql-basics.md) 26 | 27 | ## 4. Beyond the Basics 28 | 29 | ### Exploring Advanced Examples 30 | 31 | - Check out [mysql-advanced](./mysql-advanced.md) 32 | 33 | ## 5. More 34 | 35 | ### CS50x SQL 36 | 37 | - [CS50x 2023 - Lecture 7 - SQL](https://www.youtube.com/live/zrCLRC3Ci1c?si=yCsB6cSRY5FqyOXd) 38 | 39 | ### Recommended Books 40 | 41 | - N/A 42 | -------------------------------------------------------------------------------- /topics/sql/mysql-advanced.md: -------------------------------------------------------------------------------- 1 | # More MySQL hands-on 2 | 3 | TODO 4 | -------------------------------------------------------------------------------- /topics/sql/mysql-basics.md: -------------------------------------------------------------------------------- 1 | # MySQL basic hands-on 2 | 3 | This hands-on will: 4 | 5 | - Provision a MySQL instance with Docker 6 | - Create a database named `sqldemodb` and a table named `users` with the specified columns, and show how to verify the database and table creation. 7 | 8 | ## Run and explore SQL in a Docker container 9 | 10 | ```bash 11 | docker run --name demo-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:latest 12 | 13 | docker exec -it demo-mysql bash 14 | # You are now in the `bash-5.1#` container terminal 15 | 16 | # Connect to MySQL 17 | mysql -uroot -p 18 | 19 | # Creating a new DB 20 | CREATE DATABASE sqldemodb; 21 | 22 | # Using the database 23 | USE sqldemodb; 24 | 25 | 26 | # Creating a new table 27 | CREATE TABLE users ( 28 | id INT AUTO_INCREMENT PRIMARY KEY, 29 | name VARCHAR(100) NOT NULL, 30 | email VARCHAR(100) NOT NULL 31 | ); 32 | 33 | # Verifying the creation 34 | SHOW DATABASES; 35 | SHOW TABLES; 36 | DESCRIBE users; 37 | ``` 38 | -------------------------------------------------------------------------------- /topics/terraform/.gitignore: -------------------------------------------------------------------------------- 1 | # Local .terraform directories 2 | **/.terraform/* 3 | 4 | # .tfstate files 5 | *.tfstate 6 | *.tfstate.* 7 | 8 | # Crash log files 9 | crash.log 10 | crash.*.log 11 | 12 | # Exclude all .tfvars files, which are likely to contain sensitive data, such as 13 | # passwords, private keys, and other secrets. These should not be part of version 14 | # control as they are data points which are potentially sensitive and subject 15 | # to change depending on the environment.
16 | *.tfvars 17 | *.tfvars.json 18 | 19 | # Ignore override files as they are usually used to override resources locally and so 20 | # are not checked in 21 | override.tf 22 | override.tf.json 23 | *_override.tf 24 | *_override.tf.json 25 | 26 | # Include override files you do wish to add to version control using negated pattern 27 | # !example_override.tf 28 | 29 | # Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan 30 | # example: *tfplan* 31 | 32 | # Ignore CLI configuration files 33 | .terraformrc 34 | terraform.rc 35 | 36 | # *.lock.hcl 37 | -------------------------------------------------------------------------------- /topics/terraform/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is Terraform? 2 | 3 | - https://developer.hashicorp.com/terraform/intro 4 | 5 | ### Overview 6 | 7 | - HashiCorp Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. 8 | - You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components like compute, storage, and networking resources, as well as high-level components like DNS entries and SaaS features. 9 | 10 | ### Terraform workflow 11 | 12 | - https://developer.hashicorp.com/terraform/intro#how-does-terraform-work 13 | 14 | ### Official website documentation of Terraform 15 | 16 | - https://developer.hashicorp.com/terraform/docs 17 | 18 | ## 2. Prerequisites 19 | 20 | - Basic linux command line skill and IaC concepts 21 | - Cloud (if working with cloud provider) 22 | 23 | ## 3. Installation 24 | 25 | ### How to install Terraform? 26 | 27 | - https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli 28 | 29 | ## 4. Basics of Terraform 30 | 31 | ### Terraform getting started 32 | 33 | - https://developer.hashicorp.com/terraform/tutorials/aws-get-started 34 | 35 | ### Terraform Hello World 36 | 37 | - See: [basic](./basic/) 38 | 39 | ## 5. Beyond the Basics 40 | 41 | ### Hands-On Example 42 | 43 | - For more hands-on examples, visit [aws-lab-with-terraform projects](https://github.com/tungbq/aws-lab-with-terraform) 44 | 45 | ## 6. More... 46 | 47 | ### Looking for a Terraform sample project with best practice? 48 | 49 | - Check out: [terraform-sample-project](https://github.com/tungbq/terraform-sample-project) 50 | 51 | ### Terraform cheatsheet 52 | 53 | - N/A 54 | 55 | ### Recommended Books 56 | 57 | - N/A 58 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/dev/install_apache.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | yum update -y 3 | yum install -y httpd 4 | systemctl start httpd 5 | systemctl enable httpd 6 | echo "Hello World from $(hostname -f)" > /var/www/html/index.html 7 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/dev/install_node.sh: -------------------------------------------------------------------------------- 1 | #! 
/bin/bash 2 | yum update -y 3 | yum -y install curl 4 | yum install -y gcc-c++ make 5 | curl -sL https://rpm.nodesource.com/setup_16.x | bash - 6 | yum install -y nodejs 7 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/dev/outputs.tf: -------------------------------------------------------------------------------- 1 | 2 | output "load_balancer_endpoint" { 3 | value = module.loadbalancing.lb_endpoint 4 | } 5 | 6 | output "database_endpoint" { 7 | value = module.database.db_endpoint 8 | } 9 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/dev/variables.tf: -------------------------------------------------------------------------------- 1 | variable "access_ip" { 2 | type = string 3 | } 4 | 5 | variable "db_name" { 6 | type = string 7 | } 8 | 9 | variable "dbuser" { 10 | type = string 11 | sensitive = true 12 | } 13 | 14 | variable "dbpassword" { 15 | type = string 16 | sensitive = true 17 | } 18 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/compute/outputs.tf: -------------------------------------------------------------------------------- 1 | output "app_asg" { 2 | value = aws_autoscaling_group.three_tier_app 3 | } 4 | 5 | output "app_backend_asg" { 6 | value = aws_autoscaling_group.three_tier_backend 7 | } 8 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/compute/variables.tf: -------------------------------------------------------------------------------- 1 | variable "bastion_sg" {} 2 | variable "frontend_app_sg" {} 3 | variable "backend_app_sg" {} 4 | variable "private_subnets" {} 5 | variable "public_subnets" {} 6 | variable "key_name" {} 7 | variable "lb_tg_name" {} 8 | variable "lb_tg" {} 9 | 10 | variable "bastion_instance_count" { 11 | type = number 12 | } 13 | 14 | variable "instance_type" { 15 | type = string 16 | } 17 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/database/main.tf: -------------------------------------------------------------------------------- 1 | # --- database/main.tf --- 2 | 3 | resource "aws_db_instance" "three_tier_db" { 4 | allocated_storage = var.db_storage 5 | engine = "mysql" 6 | engine_version = var.db_engine_version 7 | instance_class = var.db_instance_class 8 | db_name = var.db_name 9 | username = var.dbuser 10 | password = var.dbpassword 11 | db_subnet_group_name = var.db_subnet_group_name 12 | identifier = var.db_identifier 13 | skip_final_snapshot = var.skip_db_snapshot 14 | vpc_security_group_ids = [var.rds_sg] 15 | 16 | tags = { 17 | Name = "three-tier-db" 18 | } 19 | } -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/database/outputs.tf: -------------------------------------------------------------------------------- 1 | # --- database/outputs.tf --- 2 | 3 | output "db_endpoint" { 4 | value = aws_db_instance.three_tier_db.endpoint 5 | } 6 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/database/variables.tf: 
-------------------------------------------------------------------------------- 1 | # --- database/variables.tf --- 2 | 3 | variable "db_storage" {} 4 | variable "db_instance_class" {} 5 | variable "db_name" {} 6 | variable "dbuser" {} 7 | variable "dbpassword" {} 8 | variable "db_subnet_group_name" {} 9 | variable "db_engine_version" {} 10 | variable "db_identifier" {} 11 | variable "skip_db_snapshot" {} 12 | variable "rds_sg" {} 13 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/loadbalancing/main.tf: -------------------------------------------------------------------------------- 1 | # INTERNET FACING LOAD BALANCER 2 | 3 | resource "aws_lb" "three_tier_lb" { 4 | name = "three-tier-loadbalancer" 5 | security_groups = [var.lb_sg] 6 | subnets = var.public_subnets 7 | idle_timeout = 400 8 | 9 | depends_on = [ 10 | var.app_asg 11 | ] 12 | } 13 | 14 | resource "aws_lb_target_group" "three_tier_tg" { 15 | name = "three-tier-lb-tg-${substr(uuid(), 0, 3)}" 16 | port = var.tg_port 17 | protocol = var.tg_protocol 18 | vpc_id = var.vpc_id 19 | 20 | lifecycle { 21 | ignore_changes = [name] 22 | create_before_destroy = true 23 | } 24 | } 25 | 26 | resource "aws_lb_listener" "three_tier_lb_listener" { 27 | load_balancer_arn = aws_lb.three_tier_lb.arn 28 | port = var.listener_port 29 | protocol = var.listener_protocol 30 | default_action { 31 | type = "forward" 32 | target_group_arn = aws_lb_target_group.three_tier_tg.arn 33 | } 34 | } 35 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/loadbalancing/outputs.tf: -------------------------------------------------------------------------------- 1 | output "alb_dns" { 2 | value = aws_lb.three_tier_lb.dns_name 3 | } 4 | 5 | output "lb_endpoint" { 6 | value = aws_lb.three_tier_lb.dns_name 7 | } 8 | 9 | output "lb_tg_name" { 10 | value = aws_lb_target_group.three_tier_tg.name 11 | } 12 | 13 | output "lb_tg" { 14 | value = aws_lb_target_group.three_tier_tg.arn 15 | } 16 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/loadbalancing/variables.tf: -------------------------------------------------------------------------------- 1 | variable "lb_sg" {} 2 | variable "public_subnets" {} 3 | variable "app_asg" {} 4 | variable "tg_port" {} 5 | variable "tg_protocol" {} 6 | variable "vpc_id" {} 7 | variable "listener_port" {} 8 | variable "listener_protocol" {} 9 | variable "azs" {} 10 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/networking/outputs.tf: -------------------------------------------------------------------------------- 1 | output "vpc_id" { 2 | value = aws_vpc.three_tier_vpc.id 3 | } 4 | 5 | output "db_subnet_group_name" { 6 | value = aws_db_subnet_group.three_tier_rds_subnetgroup.*.name 7 | } 8 | 9 | output "rds_db_subnet_group" { 10 | value = aws_db_subnet_group.three_tier_rds_subnetgroup.*.id 11 | } 12 | 13 | output "rds_sg" { 14 | value = aws_security_group.three_tier_rds_sg.id 15 | } 16 | 17 | output "frontend_app_sg" { 18 | value = aws_security_group.three_tier_frontend_app_sg.id 19 | } 20 | 21 | output "backend_app_sg" { 22 | value = aws_security_group.three_tier_backend_app_sg.id 23 | } 24 | 25 | output "bastion_sg" { 26 | 
value = aws_security_group.three_tier_bastion_sg.id 27 | } 28 | 29 | output "lb_sg" { 30 | value = aws_security_group.three_tier_lb_sg.id 31 | } 32 | 33 | output "public_subnets" { 34 | value = aws_subnet.three_tier_public_subnets.*.id 35 | } 36 | 37 | output "private_subnets" { 38 | value = aws_subnet.three_tier_private_subnets.*.id 39 | } 40 | 41 | output "private_subnets_db" { 42 | value = aws_subnet.three_tier_private_subnets_db.*.id 43 | } 44 | -------------------------------------------------------------------------------- /topics/terraform/advanced/aws-three-tier/modules/three-tier-deployment/networking/variables.tf: -------------------------------------------------------------------------------- 1 | variable "vpc_cidr" { 2 | type = string 3 | } 4 | 5 | variable "public_sn_count" { 6 | type = number 7 | } 8 | 9 | variable "private_sn_count" { 10 | type = number 11 | } 12 | 13 | variable "access_ip" { 14 | type = string 15 | } 16 | 17 | variable "db_subnet_group" { 18 | type = bool 19 | } 20 | 21 | variable "availabilityzone" {} 22 | 23 | variable "azs" {} 24 | -------------------------------------------------------------------------------- /topics/terraform/advanced/docker/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/kreuzwerker/docker" { 5 | version = "2.13.0" 6 | constraints = "~> 2.13.0" 7 | hashes = [ 8 | "h1:3r/gPhfPCl4mxazpfg0S/qgxmt+QWuvYT3CXTxUz9fs=", 9 | "zh:0df685adc7b5740ae0def7235a44e1bce2f71beaf155319c2464ad2fba5cb321", 10 | "zh:2cf4b4f840fa84f1b906f4cca58c9782375e9988ad354afcd85b0180cd784205", 11 | "zh:347b189655afdc0df1919a26fb64cb745bb02d8fa2006a087cb6679a1b62319d", 12 | "zh:441521c85fecad348ca012db7b9d14544cbe0a237012f8a03d5660c73e9a32a6", 13 | "zh:462a1f67d26182fbb5ee78bb8d4764a2983804fa5f9971ca006da439e9e97055", 14 | "zh:53822eb743cd487cabbed3360221cc0404b80f933b746d80426a4e10fa2f958a", 15 | "zh:55c6eda01dd3d3f877aad16de6bf91e84bfa9c93f852869581429640be19d472", 16 | "zh:690bb327398f800f7945bab35b1ad2c6ec1c0fa7f8a1e5696b0bc4597540e3af", 17 | "zh:6c55a9a761596ca974a9cbaeee3179fb8f50916fad18d2422a2d818c3f4dc241", 18 | "zh:6efd9e6ffa4c4c73fd39c856456022aad6a3a0b176c550409345e894475bbf4f", 19 | "zh:811a37e3a66d5e99a81e0e66c817363205b030962fcec68bb96ab53b029ffeac", 20 | "zh:aacb4ab8dd11e834952877390bc19beabf9fb0591c101e96da559201f4b284ca", 21 | "zh:cecdf49f9488a10ac9416be354e7de3ed45114a25235cebc4ec6771696d980e9", 22 | ] 23 | } 24 | -------------------------------------------------------------------------------- /topics/terraform/advanced/docker/main.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | docker = { 4 | source = "kreuzwerker/docker" 5 | version = "~> 2.13.0" 6 | } 7 | } 8 | } 9 | 10 | provider "docker" {} 11 | 12 | resource "docker_image" "nginx" { 13 | name = "nginx:latest" 14 | keep_locally = false 15 | } 16 | 17 | resource "docker_container" "nginx" { 18 | image = docker_image.nginx.latest 19 | name = "tutorial" 20 | ports { 21 | internal = 80 22 | external = 8000 23 | } 24 | } 25 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap02-getting-started/cluster-web-server/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file 
is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "2.70.1" 6 | constraints = "~> 2.0" 7 | hashes = [ 8 | "h1:xb0+6Ciez8k//X73O4EvaoM2fidw9sFVP6xaAXV/88c=", 9 | "zh:04137cdf128cf21dcd190bbba4d4bba43c7868c52ad646b0eaa54a8b8b8160a7", 10 | "zh:30c9f956133a102b4a426d76dd3ef1a42332d9875261a06aa877409aa6b2b556", 11 | "zh:3107a43647454a3d6d847fba6aa593650af0f6a353272c04450408af5f4d353a", 12 | "zh:3f17285478313af822447b453fa4e37f30ef221f0b0e8f2e4655f1ac9f9de1a2", 13 | "zh:5a626f7a3c4a9fea3bdfde63aedbf6eea73760f3b228f776f1132b61d00c7ff2", 14 | "zh:6aafc9dd79b511b9e3d0ec49f7df1d1fd697c3c873d1d70a2be1a12475b50206", 15 | "zh:6fb29b48ccc85f7e9dfde3867ce99d6d65fb76bea68c97d404fae431758a8f03", 16 | "zh:c47be92e1edf2e8675c932030863536c1a79decf85b2baa4232e5936c5f7088f", 17 | "zh:cd0a4b28c5e4b5092043803d17fd1d495ecb926c2688603c4cdab4c20f3a91f4", 18 | "zh:fb0ff763cb5d7a696989e58e0e4b88b1faed2a62b9fb83f4f7c2400ad6fabb84", 19 | ] 20 | } 21 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap02-getting-started/cluster-web-server/outputs.tf: -------------------------------------------------------------------------------- 1 | output "alb_dns_name" { 2 | value = aws_lb.example.dns_name 3 | description = "The domain name of the load balancer" 4 | } -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap02-getting-started/cluster-web-server/variables.tf: -------------------------------------------------------------------------------- 1 | variable "server_port" { 2 | description = "The port the server will use for HTTP requests" 3 | type = number 4 | default = 80 5 | } 6 | 7 | variable "alb_name" { 8 | description = "The name of the ALB" 9 | type = string 10 | default = "terraform-asg-example" 11 | } 12 | 13 | variable "instance_security_group_name" { 14 | description = "The name of the security group for the EC2 Instances" 15 | type = string 16 | default = "terraform-example-instance" 17 | } 18 | 19 | variable "alb_security_group_name" { 20 | description = "The name of the security group for the ALB" 21 | type = string 22 | default = "terraform-example-alb" 23 | } 24 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap02-getting-started/single-web-server/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "4.38.0" 6 | hashes = [ 7 | "h1:bhDPZioOF9Uz9mavezCHfYbD5YJ3fEPsixLpcWgV/kU=", 8 | "zh:0ae61458acf7acecf47f7a02e08da1f7adeee9532e053c0d80432f16197e4799", 9 | "zh:1ece9bcef41ffc75e0955419d7f8b1708ab7ffe4518bc9a2afe3bc5c79a9e79b", 10 | "zh:302065a7c3ae798345b92a465b650b025d9c4e9abc3e78421ecc69a17b8c3d6a", 11 | "zh:52d61f6a3ed6726b821a78f1fb78df818cf24a4d2378cc16afded297b37d4b7b", 12 | "zh:6c365ed0cae031acdbcca04560997589a94629269cb456d468cbe51a3a020386", 13 | "zh:70987a51d782f3458f124efea320157a48453864c420421051c56d41e463a948", 14 | "zh:8b5a5f30240c67e596a89ccd76aa81133e6ae253c8a06a932b8901ef2b4a7486", 15 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 16 | "zh:d672167515ece7c2db4663faf180dfb6cfc6dbf5e149f868d05c39bb54b9ca03", 17 | "zh:df1bc9926674b2e1246c9ebffd8bf8c4e380f50910a7f0b3ded957e8768ae27a", 18 | "zh:e304b6e2bd66e7992326aa0446152547eb97e8f77d00bc1a9096022ac37e5d71", 19 | "zh:f033690f11446af1383ad74149f429fae19e2784af5e151a22f46965dff21b29", 20 | ] 21 | } 22 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap02-getting-started/single-web-server/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | 5 | variable "server_port" { 6 | description = "The port the server will use for HTTP requests" 7 | type = number 8 | default = 80 9 | } 10 | 11 | variable "ssh_port" { 12 | description = "The SSH port to instance (might helpful for debugging purpose)" 13 | type = number 14 | default = 22 15 | } 16 | 17 | # Create security group 18 | resource "aws_security_group" "instance" { 19 | name = "terraform-example-instance" 20 | 21 | # Inbound rules 22 | ingress { 23 | from_port = var.server_port 24 | to_port = var.server_port 25 | protocol = "tcp" 26 | cidr_blocks = ["0.0.0.0/0"] 27 | } 28 | 29 | 30 | # Outbound rules 31 | egress { 32 | from_port = 0 33 | to_port = 0 34 | protocol = "-1" 35 | cidr_blocks = ["0.0.0.0/0"] 36 | ipv6_cidr_blocks = ["::/0"] 37 | } 38 | } 39 | 40 | # Create SSH security group 41 | resource "aws_security_group" "ssh_instance" { 42 | name = "terraform-example-instance-SSH" 43 | 44 | # Inbound rules 45 | ingress { 46 | from_port = var.ssh_port 47 | to_port = var.ssh_port 48 | protocol = "tcp" 49 | cidr_blocks = ["0.0.0.0/0"] 50 | } 51 | } 52 | 53 | # Launch an EC2 instance 54 | resource "aws_instance" "example" { 55 | ami = "ami-0c4e4b4eb2e11d1d4" 56 | instance_type = "t2.micro" 57 | vpc_security_group_ids = [aws_security_group.instance.id, aws_security_group.ssh_instance.id] 58 | 59 | user_data = <<-EOF 60 | #!/bin/bash 61 | yum update -y 62 | yum install -y httpd 63 | systemctl enable httpd.service 64 | systemctl start httpd.service 65 | # Create the HTML file 66 | touch /var/www/html/index.html 67 | echo "[tungbq] This is $(hostname)" > /var/www/html/index.html 68 | EOF 69 | tags = { 70 | Name = "terraform-example-leo" 71 | } 72 | } 73 | 74 | output "public_ip" { 75 | value = aws_instance.example.public_ip 76 | description = "The public IP address of the web server" 77 | } 78 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/global/s3-dynamo/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by 
"terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "4.38.0" 6 | constraints = "~> 4.0" 7 | hashes = [ 8 | "h1:bhDPZioOF9Uz9mavezCHfYbD5YJ3fEPsixLpcWgV/kU=", 9 | "zh:0ae61458acf7acecf47f7a02e08da1f7adeee9532e053c0d80432f16197e4799", 10 | "zh:1ece9bcef41ffc75e0955419d7f8b1708ab7ffe4518bc9a2afe3bc5c79a9e79b", 11 | "zh:302065a7c3ae798345b92a465b650b025d9c4e9abc3e78421ecc69a17b8c3d6a", 12 | "zh:52d61f6a3ed6726b821a78f1fb78df818cf24a4d2378cc16afded297b37d4b7b", 13 | "zh:6c365ed0cae031acdbcca04560997589a94629269cb456d468cbe51a3a020386", 14 | "zh:70987a51d782f3458f124efea320157a48453864c420421051c56d41e463a948", 15 | "zh:8b5a5f30240c67e596a89ccd76aa81133e6ae253c8a06a932b8901ef2b4a7486", 16 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 17 | "zh:d672167515ece7c2db4663faf180dfb6cfc6dbf5e149f868d05c39bb54b9ca03", 18 | "zh:df1bc9926674b2e1246c9ebffd8bf8c4e380f50910a7f0b3ded957e8768ae27a", 19 | "zh:e304b6e2bd66e7992326aa0446152547eb97e8f77d00bc1a9096022ac37e5d71", 20 | "zh:f033690f11446af1383ad74149f429fae19e2784af5e151a22f46965dff21b29", 21 | ] 22 | } 23 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/global/s3-dynamo/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | terraform { 5 | required_version = ">= 1.0.0, < 2.0.0" 6 | 7 | required_providers { 8 | aws = { 9 | source = "hashicorp/aws" 10 | version = "~> 4.0" 11 | } 12 | } 13 | } 14 | 15 | resource "aws_s3_bucket" "terraform_state" { 16 | 17 | bucket = var.bucket_name 18 | 19 | // This is only here so we can destroy the bucket as part of automated tests. 
You should not copy this for production 20 | // usage 21 | force_destroy = true 22 | 23 | } 24 | 25 | # Enable versioning so you can see the full revision history of your 26 | # state files 27 | resource "aws_s3_bucket_versioning" "enabled" { 28 | bucket = aws_s3_bucket.terraform_state.id 29 | versioning_configuration { 30 | status = "Enabled" 31 | } 32 | } 33 | 34 | # Enable server-side encryption by default 35 | resource "aws_s3_bucket_server_side_encryption_configuration" "default" { 36 | bucket = aws_s3_bucket.terraform_state.id 37 | 38 | rule { 39 | apply_server_side_encryption_by_default { 40 | sse_algorithm = "AES256" 41 | } 42 | } 43 | } 44 | 45 | # Explicitly block all public access to the S3 bucket 46 | resource "aws_s3_bucket_public_access_block" "public_access" { 47 | bucket = aws_s3_bucket.terraform_state.id 48 | block_public_acls = true 49 | block_public_policy = true 50 | ignore_public_acls = true 51 | restrict_public_buckets = true 52 | } 53 | 54 | resource "aws_dynamodb_table" "terraform_locks" { 55 | name = var.table_name 56 | billing_mode = "PAY_PER_REQUEST" 57 | hash_key = "LockID" 58 | 59 | attribute { 60 | name = "LockID" 61 | type = "S" 62 | } 63 | } -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/global/s3-dynamo/outputs.tf: -------------------------------------------------------------------------------- 1 | output "s3_bucket_arn" { 2 | value = aws_s3_bucket.terraform_state.arn 3 | description = "The ARN of the S3 bucket" 4 | } 5 | 6 | output "dynamodb_table_name" { 7 | value = aws_dynamodb_table.terraform_locks.name 8 | description = "The name of the DynamoDB table" 9 | } 10 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/global/s3-dynamo/variables.tf: -------------------------------------------------------------------------------- 1 | variable "bucket_name" { 2 | description = "The name of the S3 bucket. Must be globally unique." 3 | type = string 4 | default = "tungleo-terraform-state-s3" 5 | } 6 | 7 | variable "table_name" { 8 | description = "The name of the DynamoDB table. Must be unique in this AWS account." 9 | type = string 10 | default = "terraform-up-and-running-locks" 11 | } 12 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/stage/datastore/mysql/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "4.38.0" 6 | hashes = [ 7 | "h1:bhDPZioOF9Uz9mavezCHfYbD5YJ3fEPsixLpcWgV/kU=", 8 | "zh:0ae61458acf7acecf47f7a02e08da1f7adeee9532e053c0d80432f16197e4799", 9 | "zh:1ece9bcef41ffc75e0955419d7f8b1708ab7ffe4518bc9a2afe3bc5c79a9e79b", 10 | "zh:302065a7c3ae798345b92a465b650b025d9c4e9abc3e78421ecc69a17b8c3d6a", 11 | "zh:52d61f6a3ed6726b821a78f1fb78df818cf24a4d2378cc16afded297b37d4b7b", 12 | "zh:6c365ed0cae031acdbcca04560997589a94629269cb456d468cbe51a3a020386", 13 | "zh:70987a51d782f3458f124efea320157a48453864c420421051c56d41e463a948", 14 | "zh:8b5a5f30240c67e596a89ccd76aa81133e6ae253c8a06a932b8901ef2b4a7486", 15 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 16 | "zh:d672167515ece7c2db4663faf180dfb6cfc6dbf5e149f868d05c39bb54b9ca03", 17 | "zh:df1bc9926674b2e1246c9ebffd8bf8c4e380f50910a7f0b3ded957e8768ae27a", 18 | "zh:e304b6e2bd66e7992326aa0446152547eb97e8f77d00bc1a9096022ac37e5d71", 19 | "zh:f033690f11446af1383ad74149f429fae19e2784af5e151a22f46965dff21b29", 20 | ] 21 | } 22 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/stage/datastore/mysql/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | 5 | terraform { 6 | backend "s3" { 7 | bucket = "tungleo-terraform-state-s3" 8 | key = "stage/data-stores/mysql/terraform.tfstate" 9 | region = "us-east-1" 10 | # Replace this with your DynamoDB table name! 11 | dynamodb_table = "terraform-up-and-running-locks" 12 | encrypt = true 13 | } 14 | } 15 | 16 | resource "aws_db_instance" "example" { 17 | identifier_prefix = "example-db" 18 | engine = "mysql" 19 | allocated_storage = 10 20 | instance_class = "db.t2.micro" 21 | db_name = "example_database" 22 | username = "admin" 23 | password = "admin0123456" 24 | skip_final_snapshot = true 25 | # This is a better way, but we dont use for practice as it costs money 26 | # password = data.aws_secretsmanager_secret_version.db_password.secret_string 27 | } 28 | 29 | # This is a better way, but we dont use for practice as it costs money 30 | # data "aws_secretsmanager_secret_version" "db_password" { 31 | # secret_id = "mysql-master-password-stage" 32 | # } 33 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/stage/datastore/mysql/outputs.tf: -------------------------------------------------------------------------------- 1 | output "address" { 2 | value = aws_db_instance.example.address 3 | description = "Connect to the database at this endpoint" 4 | } 5 | output "port" { 6 | value = aws_db_instance.example.port 7 | description = "The port the database is listening on" 8 | } -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/stage/datastore/mysql/variables.tf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/stage/datastore/mysql/variables.tf -------------------------------------------------------------------------------- 
/topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/stage/service/webserver-cluster/main.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 1.0.0, < 2.0.0" 3 | 4 | required_providers { 5 | aws = { 6 | source = "hashicorp/aws" 7 | version = "~> 4.0" 8 | } 9 | } 10 | 11 | backend "s3" { 12 | 13 | # This backend configuration is filled in automatically at test time by Terratest. If you wish to run this example 14 | # manually, uncomment and fill in the config below. 15 | 16 | bucket = "tungleo-terraform-state-s3" 17 | key = "global/s3/terraform.tfstate" 18 | region = "us-east-1" 19 | dynamodb_table = "terraform-up-and-running-locks" 20 | encrypt = true 21 | 22 | } 23 | } 24 | 25 | data "terraform_remote_state" "db" { 26 | backend = "s3" 27 | config = { 28 | bucket = "tungleo-terraform-state-s3" 29 | key = "stage/data-stores/mysql/terraform.tfstate" 30 | region = "us-east-1" 31 | } 32 | } 33 | 34 | provider "aws" { 35 | region = "us-east-1" 36 | } 37 | 38 | resource "aws_instance" "example_chap03" { 39 | ami = "ami-0c4e4b4eb2e11d1d4" 40 | instance_type = "t2.micro" 41 | vpc_security_group_ids = [aws_security_group.instance_chap03.id] 42 | 43 | # Render the User Data script as a template 44 | user_data = templatefile("user-data.sh", { 45 | server_port = var.server_port 46 | db_address = data.terraform_remote_state.db.outputs.address 47 | db_port = data.terraform_remote_state.db.outputs.port 48 | }) 49 | 50 | user_data_replace_on_change = true 51 | 52 | tags = { 53 | Name = "terraform-example" 54 | } 55 | } 56 | 57 | 58 | # Create security group 59 | resource "aws_security_group" "instance_chap03" { 60 | name = "terraform-example-instance-chap03" 61 | 62 | # Inbound rules 63 | ingress { 64 | from_port = var.server_port 65 | to_port = var.server_port 66 | protocol = "tcp" 67 | cidr_blocks = ["0.0.0.0/0"] 68 | } 69 | 70 | # Outbound rules 71 | egress { 72 | from_port = 0 73 | to_port = 0 74 | protocol = "-1" 75 | cidr_blocks = ["0.0.0.0/0"] 76 | ipv6_cidr_blocks = ["::/0"] 77 | } 78 | } 79 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/stage/service/webserver-cluster/outputs.tf: -------------------------------------------------------------------------------- 1 | output "public_ip" { 2 | value = aws_instance.example_chap03.public_ip 3 | description = "The public IP address of the web server" 4 | } 5 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/stage/service/webserver-cluster/user-data.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | yum update -y 3 | yum install -y httpd 4 | systemctl enable httpd.service 5 | systemctl start httpd.service 6 | # Create the HTML file 7 | touch /var/www/html/index.html 8 | 9 | cat > /var/www/html/index.html <Hello, World! My name is Tung 11 |

DB address: ${db_address} 12 | DB port: ${db_port}
13 | EOF 14 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap03-terraform-state/stage/service/webserver-cluster/variables.tf: -------------------------------------------------------------------------------- 1 | 2 | variable "server_port" { 3 | description = "The port the server will use for HTTP requests" 4 | type = number 5 | default = 80 6 | } 7 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/modules/services/asg-webserver-cluster/asg-user-data.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | yum update -y 3 | yum install -y httpd 4 | systemctl enable httpd.service 5 | systemctl start httpd.service 6 | # Create the HTML file 7 | touch /var/www/html/index.html 8 | 9 | cat > /var/www/html/index.html <Hello, World! My name is Tung 11 |

DB address: TO-BE-ADDED 12 | Hostname: $(hostname -f) 13 | Date: $(date)
14 | EOF 15 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/modules/services/asg-webserver-cluster/outputs.tf: -------------------------------------------------------------------------------- 1 | output "alb_dns_name" { 2 | value = aws_lb.terramino.dns_name 3 | description = "The domain name of the load balancer" 4 | } 5 | 6 | output "lb_endpoint" { 7 | value = "http://${aws_lb.terramino.dns_name}" 8 | } 9 | 10 | output "application_endpoint" { 11 | value = "http://${aws_lb.terramino.dns_name}" 12 | } 13 | 14 | output "asg_name" { 15 | value = aws_autoscaling_group.terramino.name 16 | } 17 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/modules/services/asg-webserver-cluster/variables.tf: -------------------------------------------------------------------------------- 1 | variable "alb_name" { 2 | description = "ALB name to be created" 3 | type = string 4 | } 5 | 6 | variable "environment" { 7 | description = "ALB name to be created" 8 | type = string 9 | } 10 | 11 | variable "asg_min_size" { 12 | description = "ASG min size" 13 | type = number 14 | default = 1 15 | } 16 | 17 | variable "asg_max_size" { 18 | description = "ASG max size" 19 | type = number 20 | default = 2 21 | } 22 | 23 | variable "asg_desired_capacity" { 24 | description = "ASG desired capacity" 25 | type = number 26 | default = 1 27 | } 28 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/modules/services/webserver-cluster/main.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_version = ">= 1.0.0, < 2.0.0" 3 | 4 | required_providers { 5 | aws = { 6 | source = "hashicorp/aws" 7 | version = "~> 4.0" 8 | } 9 | } 10 | } 11 | 12 | data "terraform_remote_state" "db" { 13 | backend = "s3" 14 | config = { 15 | bucket = var.db_remote_state_bucket 16 | key = var.db_remote_state_key 17 | region = "us-east-1" 18 | } 19 | } 20 | 21 | resource "aws_instance" "example_chap03" { 22 | ami = "ami-0c4e4b4eb2e11d1d4" 23 | instance_type = "t2.micro" 24 | vpc_security_group_ids = [aws_security_group.instance_chap03.id] 25 | 26 | # Render the User Data script as a template 27 | user_data = templatefile("${path.module}/user-data.sh", { 28 | server_port = var.server_port 29 | db_address = data.terraform_remote_state.db.outputs.address 30 | db_port = data.terraform_remote_state.db.outputs.port 31 | }) 32 | 33 | user_data_replace_on_change = true 34 | 35 | tags = { 36 | Name = "terraform-example" 37 | } 38 | } 39 | 40 | 41 | # Create security group 42 | resource "aws_security_group" "instance_chap03" { 43 | name = "terraform-example-instance-chap03" 44 | 45 | # Inbound rules 46 | ingress { 47 | from_port = var.server_port 48 | to_port = var.server_port 49 | protocol = "tcp" 50 | cidr_blocks = ["0.0.0.0/0"] 51 | } 52 | 53 | # Outbound rules 54 | egress { 55 | from_port = 0 56 | to_port = 0 57 | protocol = "-1" 58 | cidr_blocks = ["0.0.0.0/0"] 59 | ipv6_cidr_blocks = ["::/0"] 60 | } 61 | } 62 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/modules/services/webserver-cluster/outputs.tf: 
-------------------------------------------------------------------------------- 1 | output "public_ip" { 2 | value = aws_instance.example_chap03.public_ip 3 | description = "The public IP address of the web server" 4 | } 5 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/modules/services/webserver-cluster/user-data.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | yum update -y 3 | yum install -y httpd 4 | systemctl enable httpd.service 5 | systemctl start httpd.service 6 | # Create the HTML file 7 | touch /var/www/html/index.html 8 | 9 | cat > /var/www/html/index.html <Hello, World! My name is Tung 11 |

DB address: ${db_address} 12 | DB port: ${db_port}
13 | EOF 14 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/modules/services/webserver-cluster/variables.tf: -------------------------------------------------------------------------------- 1 | 2 | variable "server_port" { 3 | description = "The port the server will use for HTTP requests" 4 | type = number 5 | default = 80 6 | } 7 | 8 | variable "cluster_name" { 9 | description = "The name to use for all the cluster resources" 10 | type = string 11 | } 12 | 13 | variable "db_remote_state_bucket" { 14 | description = "The name of the S3 bucket for the database's remote state" 15 | type = string 16 | } 17 | 18 | variable "db_remote_state_key" { 19 | description = "The path for the database's remote state in S3" 20 | type = string 21 | } -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/prod/services/webserver-cluster/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | module "webserver_cluster" { 5 | source = "../../../modules/services/webserver-cluster" 6 | } 7 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/datastore/mysql/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "4.45.0" 6 | hashes = [ 7 | "h1:J/XjRsEJIpxi+mczXQfnH3nvfACv3LRDtrthQJCIibY=", 8 | "zh:22da03786f25658a000d1bcc28c780816a97e7e8a1f59fff6eee7d452830e95e", 9 | "zh:2543be56eee0491eb0c79ca1c901dcbf71da26625961fe719f088263fef062f4", 10 | "zh:31a1da1e3beedfd88c3c152ab505bdcf330427f26b75835885526f7bb75c4857", 11 | "zh:4409afe50f225659d5f378fe9303a45052953a1219f7f1acc82b69d07528b7ba", 12 | "zh:4dadec3b783f10d2f8eef3dab5e817baae9c932a7967d45fe3d77fcbcbdaa438", 13 | "zh:55be80d6e24828dcb0db7a0226fb275415c1c0ad63dd2f33b76f3ac0cd64e6a6", 14 | "zh:560bba29efb7dbe0bfcc937369d88817aa31a8d18aa25395b1afe2576cb04495", 15 | "zh:6caacc202e83438ff63d5d96733e283f44e349668d96c6b1c5c7df463ebf85cc", 16 | "zh:6cabab83a61d5b4ac801c5a5d57556a0e76ec8dc879d28cf777509db5f6a657e", 17 | "zh:96c4528bf9c16edb8841b68479ec51c499ed7fa680462fa28caeab3fc168bb43", 18 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 19 | "zh:cdc0b47ff840d708fbf75abfe86d23dc7f1dffdd233a771822a17b5c637f4769", 20 | "zh:d9a9583e82776d1ebb6cf6c3d47acc2b302f8778f470ceffe7579dc794eb1feb", 21 | "zh:e9367ca9f6f6418a23cdf8d01f29dd0c4f614e78499f52a767a422e4c334b915", 22 | "zh:f6d355a2fb3bcebb597f68bbca4fa2aaa364efd29240236c582375e219d77656", 23 | ] 24 | } 25 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/datastore/mysql/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | 5 | terraform { 6 | backend "s3" { 7 | bucket = "tungleo-terraform-state-s3" 8 | key = "stage/data-stores/mysql/terraform.tfstate" 9 | region = "us-east-1" 10 | # Replace this with your DynamoDB table name! 
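# Note: the DynamoDB table configured below is what gives the S3 backend state locking,
# so two `terraform apply` runs cannot modify the same state at the same time.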
11 | dynamodb_table = "terraform-up-and-running-locks" 12 | encrypt = true 13 | } 14 | } 15 | 16 | resource "aws_db_instance" "example" { 17 | identifier_prefix = "example-db" 18 | engine = "mysql" 19 | allocated_storage = 10 20 | instance_class = "db.t2.micro" 21 | db_name = "example_database" 22 | username = "admin" 23 | password = "admin0123456" 24 | skip_final_snapshot = true 25 | # This is a better way, but we dont use for practice as it costs money 26 | # password = data.aws_secretsmanager_secret_version.db_password.secret_string 27 | } 28 | 29 | # This is a better way, but we dont use for practice as it costs money 30 | # data "aws_secretsmanager_secret_version" "db_password" { 31 | # secret_id = "mysql-master-password-stage" 32 | # } 33 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/datastore/mysql/outputs.tf: -------------------------------------------------------------------------------- 1 | output "address" { 2 | value = aws_db_instance.example.address 3 | description = "Connect to the database at this endpoint" 4 | } 5 | output "port" { 6 | value = aws_db_instance.example.port 7 | description = "The port the database is listening on" 8 | } -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/datastore/mysql/variables.tf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/tungbq/devops-basics/4533f8345a366f7dd623800859f0ebab797e4b59/topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/datastore/mysql/variables.tf -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/services/alb-webserver/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "4.48.0" 6 | constraints = ">= 2.70.0, ~> 4.0" 7 | hashes = [ 8 | "h1:Fz26mWZmM9syrY91aPeTdd3hXG4DvMR81ylWC9xE2uA=", 9 | "zh:08f5e3c5256a4fbd5c988863d10e5279172b2470fec6d4fb13c372663e7f7cac", 10 | "zh:2a04376b7fa84681bd2938973c7d0822c8c0f0656a4e7661a2f50ac4d852d4a3", 11 | "zh:30d6cdf321aaba874934cbde505333d89d172d8d5ffcf40b6e66626c57bc6ab2", 12 | "zh:364639ee19cf4cfaa65de84a2a71d32725d5b728b71dd88d01ccb639c006c1cf", 13 | "zh:4e02252cd88b6f59f556f49c5ce46a358046c98f069230358ac15f4030ae1e76", 14 | "zh:611717320f20b3512ceb90abddd5198a85e1093965ce59e3ef8183188c84f8c3", 15 | "zh:630be3b9ba5b3a95ecb2ce2f3523714ab37cd8bcd7479c879a769e6a446ab5ed", 16 | "zh:6701f9d3ae1ffadb3ebefbe75c9d82668cc5495b8f826e498adb8530e202b652", 17 | "zh:6dc6fdfa7469c9de7b405c68b2f6a09a3438db1ef09d348e49c7ceff4300b01a", 18 | "zh:84c8140d8af6965fa9cd80e52eb2ee3d273e3ab7762719a8d1af665c08fab748", 19 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 20 | "zh:9b6b4f7d4cea37ba7a42a47d506115498858bcd6440ad97dfb214c13a688ba90", 21 | "zh:a7f876af20f5c5dae8e333ec0dfc901e26aa801137e7df65fb365565637bbfe2", 22 | "zh:ad107b8e11dd0609b856584ce70ae6621aa4f1f946da51f7c792f1259e3f9c27", 23 | "zh:d5dc1683693a5fe2652952f50dbbeccd02716799c26c6d1a1378b226cf845e9b", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/services/alb-webserver/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | module "webserver_alb" { 5 | source = "../../../modules/services/asg-webserver-cluster" 6 | 7 | alb_name = "testing-alb" 8 | environment = "staging" 9 | asg_min_size = 1 10 | asg_max_size = 3 11 | asg_desired_capacity = 2 12 | } 13 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/services/alb-webserver/outputs.tf: -------------------------------------------------------------------------------- 1 | output "alb_dns_name" { 2 | value = module.webserver_alb.alb_dns_name 3 | description = "The domain name of the load balancer" 4 | } 5 | 6 | 7 | output "lb_endpoint" { 8 | value = module.webserver_alb.lb_endpoint 9 | } 10 | 11 | output "application_endpoint" { 12 | value = module.webserver_alb.application_endpoint 13 | } 14 | 15 | output "asg_name" { 16 | value = module.webserver_alb.asg_name 17 | } 18 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/services/webserver-cluster/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 
3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "4.45.0" 6 | constraints = "~> 4.0" 7 | hashes = [ 8 | "h1:J/XjRsEJIpxi+mczXQfnH3nvfACv3LRDtrthQJCIibY=", 9 | "zh:22da03786f25658a000d1bcc28c780816a97e7e8a1f59fff6eee7d452830e95e", 10 | "zh:2543be56eee0491eb0c79ca1c901dcbf71da26625961fe719f088263fef062f4", 11 | "zh:31a1da1e3beedfd88c3c152ab505bdcf330427f26b75835885526f7bb75c4857", 12 | "zh:4409afe50f225659d5f378fe9303a45052953a1219f7f1acc82b69d07528b7ba", 13 | "zh:4dadec3b783f10d2f8eef3dab5e817baae9c932a7967d45fe3d77fcbcbdaa438", 14 | "zh:55be80d6e24828dcb0db7a0226fb275415c1c0ad63dd2f33b76f3ac0cd64e6a6", 15 | "zh:560bba29efb7dbe0bfcc937369d88817aa31a8d18aa25395b1afe2576cb04495", 16 | "zh:6caacc202e83438ff63d5d96733e283f44e349668d96c6b1c5c7df463ebf85cc", 17 | "zh:6cabab83a61d5b4ac801c5a5d57556a0e76ec8dc879d28cf777509db5f6a657e", 18 | "zh:96c4528bf9c16edb8841b68479ec51c499ed7fa680462fa28caeab3fc168bb43", 19 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 20 | "zh:cdc0b47ff840d708fbf75abfe86d23dc7f1dffdd233a771822a17b5c637f4769", 21 | "zh:d9a9583e82776d1ebb6cf6c3d47acc2b302f8778f470ceffe7579dc794eb1feb", 22 | "zh:e9367ca9f6f6418a23cdf8d01f29dd0c4f614e78499f52a767a422e4c334b915", 23 | "zh:f6d355a2fb3bcebb597f68bbca4fa2aaa364efd29240236c582375e219d77656", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/chap04-reusable-module/stage/services/webserver-cluster/main.tf: -------------------------------------------------------------------------------- 1 | provider "aws" { 2 | region = "us-east-1" 3 | } 4 | module "webserver_cluster" { 5 | source = "../../../modules/services/webserver-cluster" 6 | 7 | cluster_name = "webservers-stage" 8 | db_remote_state_bucket = "tungleo-terraform-state-s3" 9 | db_remote_state_key = "stage/data-stores/mysql/terraform.tfstate" 10 | } 11 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/practice-scripts/README.md: -------------------------------------------------------------------------------- 1 | # Step 1: Init TF backend ./tf-backend 2 | 3 | # Step 2: Alb/single 4 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/practice-scripts/chap04-alb-instance/asg-destroy-practice-resource.sh: -------------------------------------------------------------------------------- 1 | # Define vars 2 | CURRENT_PATH=$(pwd) 3 | ROOT_PATH="$CURRENT_PATH/../../" 4 | CHAP03_PATH="$ROOT_PATH/chap03-terraform-state" 5 | CHAP04_PATH="$ROOT_PATH/chap04-reusable-module" 6 | 7 | # Action 8 | # Tasks 9 | ## Destroy staging env 10 | echo ">>>>>>> Destroy staging environment (webserver-cluster)" 11 | cd $CHAP04_PATH/stage/services/alb-webserver 12 | terraform destroy -lock=false -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/practice-scripts/chap04-alb-instance/asg-start-practice-resource.sh: -------------------------------------------------------------------------------- 1 | # Define vars 2 | CURRENT_PATH=$(pwd) 3 | ROOT_PATH="$CURRENT_PATH/../../" 4 | CHAP03_PATH="$ROOT_PATH/chap03-terraform-state" 5 | CHAP04_PATH="$ROOT_PATH/chap04-reusable-module" 6 | 7 | # Action 8 | cd $CHAP03_PATH 9 | ls -la 10 | 11 | # Tasks 12 | ## Init staging env 13 | echo ">>>>>>> Init staging environment 
(webserver-cluster)" 14 | cd $CHAP04_PATH/stage/services/alb-webserver 15 | pwd 16 | ls -la 17 | terraform init 18 | terraform plan -lock=false 19 | terraform apply -lock=false 20 | 21 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/practice-scripts/chap04-single-instance/destroy-practice-resource.sh: -------------------------------------------------------------------------------- 1 | # Define vars 2 | CURRENT_PATH=$(pwd) 3 | ROOT_PATH="$CURRENT_PATH/../../" 4 | CHAP03_PATH="$ROOT_PATH/chap03-terraform-state" 5 | CHAP04_PATH="$ROOT_PATH/chap04-reusable-module" 6 | 7 | # Action 8 | # Tasks 9 | ## Destroy staging env 10 | echo ">>>>>>> Destroy staging environment (webserver-cluster)" 11 | cd $CHAP04_PATH/stage/services/webserver-cluster 12 | terraform destroy -lock=false 13 | 14 | ## Destroy DB 15 | echo ">>>>>>> Destroy DB" 16 | cd $CHAP04_PATH/stage/datastore/mysql 17 | terraform destroy -lock=false -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/practice-scripts/chap04-single-instance/start-practice-resource.sh: -------------------------------------------------------------------------------- 1 | # Define vars 2 | CURRENT_PATH=$(pwd) 3 | ROOT_PATH="$CURRENT_PATH/../../" 4 | CHAP03_PATH="$ROOT_PATH/chap03-terraform-state" 5 | CHAP04_PATH="$ROOT_PATH/chap04-reusable-module" 6 | 7 | # Action 8 | cd $CHAP03_PATH 9 | ls -la 10 | 11 | # Tasks 12 | # Init DB 13 | echo ">>>>>>> Init DB" 14 | cd $CHAP04_PATH/stage/datastore/mysql 15 | pwd 16 | ls -la 17 | terraform init 18 | terraform plan -lock=false 19 | terraform apply -lock=false 20 | 21 | ## Init staging env 22 | echo ">>>>>>> Init staging environment (webserver-cluster)" 23 | cd $CHAP04_PATH/stage/services/webserver-cluster 24 | pwd 25 | ls -la 26 | terraform init 27 | terraform plan -lock=false 28 | terraform apply -lock=false 29 | 30 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/practice-scripts/tf-backend/create-tf-backend.sh: -------------------------------------------------------------------------------- 1 | # Define vars 2 | CURRENT_PATH=$(pwd) 3 | ROOT_PATH="$CURRENT_PATH/../../" 4 | CHAP03_PATH="$ROOT_PATH/chap03-terraform-state" 5 | CHAP04_PATH="$ROOT_PATH/chap04-reusable-module" 6 | 7 | # Action 8 | cd $CHAP03_PATH 9 | ls -la 10 | 11 | # Tasks 12 | ## Init s3-dynamoDB for tf backend 13 | echo ">>>>>>> Init backend s3-dynamo" 14 | cd $CHAP03_PATH/global/s3-dynamo 15 | terraform init 16 | terraform plan 17 | terraform apply 18 | 19 | -------------------------------------------------------------------------------- /topics/terraform/advanced/terraform-up-and-running/practice-scripts/tf-backend/destroy-tf-backend.sh: -------------------------------------------------------------------------------- 1 | # Define vars 2 | CURRENT_PATH=$(pwd) 3 | ROOT_PATH="$CURRENT_PATH/../../" 4 | CHAP03_PATH="$ROOT_PATH/chap03-terraform-state" 5 | CHAP04_PATH="$ROOT_PATH/chap04-reusable-module" 6 | 7 | # Action 8 | cd $CHAP03_PATH 9 | ls -la 10 | 11 | # Tasks 12 | ## Destroy s3-dynamoDB for tf backend 13 | echo ">>>>>>> Destroy backend s3-dynamo" 14 | cd "$CHAP03_PATH/global/s3-dynamo" 15 | 16 | ls -la 17 | terraform destroy 18 | -------------------------------------------------------------------------------- /topics/terraform/basic/README.md: 
-------------------------------------------------------------------------------- 1 | # Basics of Terraform 2 | 3 | ## Demo 4 | 5 | Run `./terraform-helloworld.sh` 6 | -------------------------------------------------------------------------------- /topics/terraform/basic/aws-ec2/.terraform.lock.hcl: -------------------------------------------------------------------------------- 1 | # This file is maintained automatically by "terraform init". 2 | # Manual edits may be lost in future updates. 3 | 4 | provider "registry.terraform.io/hashicorp/aws" { 5 | version = "5.74.0" 6 | constraints = "~> 5.34" 7 | hashes = [ 8 | "h1:HMaN/L2hf1PN2YLdlQRbE49f4RF7VuqEVpqxNtJ2+18=", 9 | "zh:1e2d65add4d63af5b396ae33d55c48303eca6c86bd1be0f6fae13267a9b47bc4", 10 | "zh:20ddec3dac3d06a188f12e58b6428854949b1295e937c5d4dca4866dc1c937af", 11 | "zh:35b72de4e6a3e3d69efc07184fb413406262fe447b2d82d57eaf8c787a068a06", 12 | "zh:44eada24a50cd869aadc4b29f9e791fdf262d7f426921e9ac2893bbb86013176", 13 | "zh:455e666e3a9a2312b3b9f434b87a404b6515d64a8853751e20566a6548f9df9e", 14 | "zh:58b3ae74abfca7b9b61f42f0c8b10d97f9b01aff18bd1d4ab091129c9d203707", 15 | "zh:840a8a32d5923f9e7422f9c80d165c3f89bb6ea370b8283095081e39050a8ea8", 16 | "zh:87cb6dbbdbc1b73bdde4b8b5d6d780914a3e8f1df0385da4ea7323dc1a68468f", 17 | "zh:8b8953e39b0e6e6156c5570d1ca653450bfa0d9b280e2475f01ee5c51a6554db", 18 | "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", 19 | "zh:9bd750262e2fb0187a8420a561e55b0a1da738f690f53f5c7df170cb1f380459", 20 | "zh:9d2474c1432dfa5e1db197e2dd6cd61a6a15452e0bc7acd09ca86b3cdb228871", 21 | "zh:b763ecaf471c7737a5c6e4cf257b5318e922a6610fd83b36ed8eb68582a8642e", 22 | "zh:c1344cd8fe03ff7433a19b14b14a1898c2ca5ba22a468fb8e1687f0a7f564d52", 23 | "zh:dc0e0abf3be7402d0d022ced82816884356115ed27646df9c7222609e96840e6", 24 | ] 25 | } 26 | -------------------------------------------------------------------------------- /topics/terraform/basic/aws-ec2/main.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "~> 5.34" 6 | } 7 | } 8 | 9 | required_version = ">= 1.2.0" 10 | } 11 | 12 | provider "aws" { 13 | region = var.deploy_region 14 | } 15 | 16 | resource "aws_instance" "app_server" { 17 | ami = var.ami_template 18 | instance_type = "t2.micro" 19 | 20 | tags = { 21 | Name = var.instance_name 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /topics/terraform/basic/aws-ec2/outputs.tf: -------------------------------------------------------------------------------- 1 | output "instance_id" { 2 | description = "ID of the EC2 instance" 3 | value = aws_instance.app_server.id 4 | } 5 | 6 | output "instance_public_ip" { 7 | description = "Public IP address of the EC2 instance" 8 | value = aws_instance.app_server.public_ip 9 | } 10 | -------------------------------------------------------------------------------- /topics/terraform/basic/aws-ec2/variables.tf: -------------------------------------------------------------------------------- 1 | variable "instance_name" { 2 | description = "Value of the Name tag for the EC2 instance" 3 | type = string 4 | default = "ExampleAppServerInstance" 5 | } 6 | 7 | variable "deploy_region" { 8 | description = "Value of the region to provision the EC2 instance" 9 | type = string 10 | default = "us-west-1" 11 | } 12 | 13 | 14 | variable "ami_template" { 15 | description = "Value of the AMI to provision the EC2 instance" 16 | 
type = string 17 | default = "ami-018d291ca9ffc002f" 18 | } 19 | -------------------------------------------------------------------------------- /topics/terraform/basic/terraform-helloworld.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | console_log() { 4 | echo ">>> [Terraform] $1" 5 | } 6 | 7 | console_log "Demo: creating an EC2 instance on AWS using Terraform" 8 | 9 | console_log "Navigating to the aws-ec2 example" 10 | cd aws-ec2 11 | 12 | console_log "Checking the AWS EC2 example structure" 13 | ls -la 14 | 15 | console_log "Checking the variables content" 16 | cat variables.tf 17 | 18 | console_log "Initializing Terraform" 19 | terraform init 20 | 21 | console_log "Planning the Terraform code" 22 | terraform plan 23 | 24 | console_log "Applying the Terraform code" 25 | # --auto-approve applies the plan without the interactive prompt, so there is no need to type 'yes' 26 | terraform apply --auto-approve 27 | 28 | console_log "Navigate to your AWS account to verify the created EC2 instance" 29 | console_log "Keep exploring from AWS..." 30 | 31 | console_log "IMPORTANT! Please terminate the resources after the hands-on to avoid unexpected AWS costs!" 32 | console_log "Once verification is complete, run terraform destroy and type 'yes' to confirm" 33 | terraform destroy 34 | 35 | console_log "Done! Congratulations, you've just created an EC2 instance on AWS using Terraform!" 36 | -------------------------------------------------------------------------------- /topics/virtualbox/README.md: -------------------------------------------------------------------------------- 1 | ## 1. What is VirtualBox? 2 | 3 | ### Overview 4 | 5 | - VirtualBox is open-source virtualization software developed by Oracle. It allows users to run multiple operating systems simultaneously on a single machine by creating virtual machines (VMs). It's widely used for testing, development, and learning purposes. 6 | 7 | ### Official Website of VirtualBox 8 | 9 | - https://www.virtualbox.org/ 10 | 11 | ### Official Documentation of VirtualBox 12 | 13 | - https://www.virtualbox.org/wiki/Documentation 14 | 15 | --- 16 | 17 | ## 2. Prerequisites 18 | 19 | - Familiarity with operating systems (Linux/Windows/macOS). 20 | - Understanding of virtualization concepts and networking basics. 21 | 22 | --- 23 | 24 | ## 3. Installation 25 | 26 | ### How to Install VirtualBox? 27 | 28 | 1. **Download** VirtualBox from the official website: 29 | https://www.virtualbox.org/wiki/Downloads 30 | 31 | 2. **Select OS Package** (Windows, macOS, Linux distributions). 32 | 33 | 3. **Install** the downloaded package by following the installer steps for your operating system. 34 | 35 | 4. **Optional**: Install the VirtualBox Extension Pack for additional features like USB 3.0 support and RDP access: 36 | - https://www.virtualbox.org/wiki/Downloads 37 | 38 | --- 39 | 40 | ## 4. Basics of VirtualBox 41 | 42 | ### VirtualBox Getting Started 43 | 44 | - Official Beginner’s Guide: 45 | https://www.virtualbox.org/manual/ch01.html 46 | 47 | ### VirtualBox Hands-On 48 | 49 | - See: [basic setup and usage](./basic/) 50 | 51 | --- 52 | 53 | ## 5. More...
54 | 55 | ### VirtualBox Cheatsheet 56 | 57 | - N/A 58 | 59 | ### Recommended Books 60 | 61 | - N/A 62 | -------------------------------------------------------------------------------- /topics/virtualbox/basic/README.md: -------------------------------------------------------------------------------- 1 | ## VirtualBox Basics 2 | You can get started and hands-on with VirtualBox via the following guides: 3 | - Create a Virtual Machine: https://www.virtualbox.org/manual/ch01.html#create-vm-wizard 4 | - Create an Ubuntu Virtual Machine: 5 | - https://ubuntu.com/tutorials/how-to-run-ubuntu-desktop-on-a-virtual-machine-using-virtualbox#1-overview 6 | - https://devopscube.com/virtual-box-tutorial/ 7 | -------------------------------------------------------------------------------- /troubleshooting/common-issues.md: -------------------------------------------------------------------------------- 1 | # /bin/bash^M: no such file or directory 2 | 3 | - Run `sed -i -e 's/\r$//' scriptname.sh` to strip the Windows carriage returns 4 | 5 | # OpenStack install 6 | 7 | - Error: create: failed to create: secrets "sh.helm.release.v1.rook-ceph-cluster.v1" is forbidden: unable to create new content in namespace ceph because it is being terminated (Unresolved) 8 | 9 | # Namespace "stuck" as Terminating, How I removed it 10 | 11 | - https://stackoverflow.com/questions/52369247/namespace-stuck-as-terminating-how-i-removed-it 12 | - https://www.ibm.com/docs/en/cloud-private/3.1.1?topic=console-namespace-is-stuck-in-terminating-state 13 | -------------------------------------------------------------------------------- /troubleshooting/installation/groovy-with-sdk-missing-java.md: -------------------------------------------------------------------------------- 1 | # Groovy 2 | - Error `groovy: JAVA_HOME not set and cannot find javac to deduce location, please set JAVA_HOME.`: 3 | - Try installing Java by running `sdk install java` 4 | ``` 5 | ➜ ~ curl -s "https://get.sdkman.io" | bash 6 | ➜ ~ source "$HOME/.sdkman/bin/sdkman-init.sh" 7 | ➜ ~ source "$HOME/.sdkman/bin/sdkman-init.sh" 8 | ➜ ~ sdk install groovy 9 | 10 | Downloading: groovy 4.0.14 11 | 12 | In progress... 13 | 14 | ##################################################################################################################################################################################### 100.0% 15 | 16 | Installing: groovy 4.0.14 17 | Done installing! 18 | 19 | 20 | Setting groovy 4.0.14 as default. 21 | ➜ ~ groovy 22 | groovy: JAVA_HOME not set and cannot find javac to deduce location, please set JAVA_HOME. 23 | ➜ ~ groovy --version 24 | groovy: JAVA_HOME not set and cannot find javac to deduce location, please set JAVA_HOME. 25 | ➜ ~ sdk install java 26 | 27 | Downloading: java 17.0.8.1-tem 28 | 29 | In progress... 30 | 31 | ##################################################################################################################################################################################### 100.0% 32 | 33 | Repackaging Java 17.0.8.1-tem... 34 | 35 | Done repackaging... 36 | 37 | Installing: java 17.0.8.1-tem 38 | Done installing! 39 | 40 | 41 | Setting java 17.0.8.1-tem as default.
42 | ➜ ~ groovy --version 43 | Groovy Version: 4.0.14 JVM: 17.0.8.1 Vendor: Eclipse Adoptium OS: Linux 44 | ➜ ~ 45 | ``` 46 | -------------------------------------------------------------------------------- /troubleshooting/k8s-notes.md: -------------------------------------------------------------------------------- 1 | # Check all pods from all namespaces 2 | - Run `kubectl get pod -A` 3 | 4 | # Delete all resources from a namespace 5 | - Run `kubectl delete -n sock-shop --all all` (here the target namespace is `sock-shop`) 6 | --------------------------------------------------------------------------------