├── .github └── workflows │ └── generate-pdf.yml ├── CONTRIBUTING.md ├── Security ├── AquaSec.md ├── SonarQube.md ├── Trivy.md └── HashiCorp-Vault.md ├── CI-CD ├── GitLab-CI.md ├── GitHub-Actions.md ├── Jenkins.md └── CircleCI.md ├── Containerization ├── Podman.md ├── Helm.md ├── CRI-O.md └── Docker.md ├── Networking ├── Linkerd.md ├── Envoy.md ├── Consul.md └── Istio.md ├── Monitoring ├── ELK-Stack.md ├── Nagios.md ├── Grafana.md ├── Prometheus.md └── CloudWatch.md ├── README.md ├── cloud ├── Ansible.md ├── Terraform.md ├── GCP.md ├── AWS.md ├── Azure.md └── Kubernetes-on-AWS.md └── Version-Control ├── Github.md ├── Bitbucket.md └── GitLab.md /.github/workflows/generate-pdf.yml: -------------------------------------------------------------------------------- 1 | name: Generate PDFs 2 | 3 | on: 4 | push: 5 | branches: 6 | - master 7 | pull_request: 8 | branches: 9 | - master 10 | workflow_dispatch: 11 | 12 | jobs: 13 | pdf: 14 | runs-on: ubuntu-latest 15 | 16 | steps: 17 | - name: Checkout code 18 | uses: actions/checkout@v4 19 | 20 | - name: Install pandoc and LaTeX 21 | run: | 22 | sudo apt-get update 23 | sudo apt-get install -y pandoc 24 | sudo apt-get install -y texlive-xetex 25 | 26 | - name: Generate PDF 27 | run: | 28 | for file in $(find . -name '*.md'); do 29 | pandoc "$file" -o "${file%.md}.pdf" --pdf-engine=xelatex 30 | done 31 | 32 | - name: Upload PDF artifacts 33 | uses: actions/upload-artifact@v4 34 | with: 35 | name: pdfs 36 | path: '**/*.pdf' 37 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to DevOps Tools Cheatsheet Collection 2 | 3 | Thank you for considering contributing to the **DevOps Tools Cheatsheet Collection**! Your contributions help make this project a valuable resource for the DevOps community. 
4 | 5 | ## 📜 Code of Conduct 6 | 7 | By participating in this project, you agree to abide by our [Code of Conduct](./CODE_OF_CONDUCT.md). Please read it to understand the expected behavior when contributing to the project. 8 | 9 | ## 🛠️ How to Contribute 10 | 11 | > [!TIP] 12 | > We welcome various types of contributions, including but not limited to: 13 | > 14 | > - **Adding New Cheatsheets**: Share your knowledge about a tool not yet covered. 15 | > - **Improving Existing Cheatsheets**: Update or enhance the content in existing files. 16 | > - **Fixing Issues**: Help resolve any bugs or errors found in the repository. 17 | > - **Providing Feedback**: Suggest new features, improvements, or corrections. 18 | 19 | ### 📝 Submitting a Pull Request 20 | 21 | 1. **Fork the Repository**: Start by forking this repository to your GitHub account. 22 | 2. **Create a New Branch**: Create a new branch in your fork for your contribution. For example: 23 | 24 | ```bash 25 | git checkout -b add-toolname-cheatsheet 26 | ``` 27 | 28 | 3. **Make Your Changes**: Edit or add files as necessary. Follow the format of existing cheatsheets for consistency. 29 | 4. **Commit Your Changes**: Write a clear and descriptive commit message. For example: 30 | 31 | ```bash 32 | git commit -m "Add cheatsheet for ToolName" 33 | ``` 34 | 35 | 5. **Push Your Changes**: Push the changes to your forked repository: 36 | 37 | ```bash 38 | git push origin add-toolname-cheatsheet 39 | ``` 40 | 41 | 6. **Submit a Pull Request**: Go to the original repository and submit a pull request. Provide a clear description of your changes and why they are beneficial. 42 | 43 | ### 🧐 Review Process 44 | 45 | > [!NOTE] 46 | > 47 | > - Your pull request will be reviewed by the maintainers of the project. 48 | > - Please be patient, as reviews can take some time depending on the complexity of the changes. 49 | > - You may be asked to make changes before your pull request is accepted. 
50 | 51 | ## 📂 Directory Structure 52 | 53 | When adding a new cheatsheet, please ensure it is placed in the correct directory based on its category (e.g., `CI-CD`, `Containerization`, `Monitoring`, etc.). This helps maintain an organized structure for easy navigation. 54 | 55 | ### ✏️ Cheatsheet Format 56 | 57 | > [!TIP] 58 | > For consistency, please follow this basic format for new cheatsheets: 59 | > 60 | > - **Tool Name**: Title the file with the tool name (e.g., `Docker.md`). 61 | > - **Sections**: Include sections such as Basic Commands, Tips, Configuration, etc. 62 | > - **Examples**: Provide examples wherever possible. 63 | > - **Formatting**: Use Markdown for formatting (headings, bullet points, code blocks). 64 | 65 | ## 🤝 Community Guidelines 66 | 67 | > [!IMPORTANT] 68 | > 69 | > - **Be Respectful**: Keep interactions respectful and constructive. 70 | > - **Ask for Help**: If you're unsure about anything, feel free to ask in the discussion section. 71 | > - **Stay on Topic**: Make sure your contributions align with the purpose of this repository. 72 | 73 | ## 📝 License 74 | 75 | By contributing to this repository, you agree that your contributions will be licensed under the MIT License. 76 | 77 | --- 78 | 79 | ### Thank you for your contribution! 🚀 80 | -------------------------------------------------------------------------------- /Security/AquaSec.md: -------------------------------------------------------------------------------- 1 | # AquaSec Cheatsheet 2 | 3 | ![text](https://imgur.com/8MBLV6G.png) 4 | 5 | **1. Introduction:** 6 | 7 | - **AquaSec** (Aqua Security) is a comprehensive security platform for securing containers, Kubernetes, and cloud-native applications throughout the CI/CD pipeline. 8 | 9 | **2. Installation:** 10 | 11 | - **Installing AquaSec:** 12 | - AquaSec is usually deployed as a Kubernetes application. 
13 | - Download AquaSec from the [Aqua website](https://www.aquasec.com/) and follow the installation instructions for your environment. 14 | 15 | - **Dockerized Installation:** 16 | - AquaSec components can also be installed using Docker images available on Docker Hub. 17 | 18 | **3. Basic Configuration:** 19 | 20 | - **Aqua Console:** 21 | - The Aqua Console is the central management interface for configuring and monitoring AquaSec. 22 | - Access the Aqua Console at `http://:8080`. 23 | 24 | - **User Management:** 25 | - Create users and assign roles in the Aqua Console under the **Users** section. 26 | 27 | **4. Container Security:** 28 | 29 | - **Image Scanning:** 30 | - AquaSec automatically scans container images for vulnerabilities, malware, and misconfigurations. 31 | - Scans can be initiated via the Aqua Console or automated in CI/CD pipelines. 32 | 33 | - **Runtime Protection:** 34 | - AquaSec provides real-time monitoring of running containers, blocking unauthorized activities based on predefined policies. 35 | 36 | **5. Kubernetes Security:** 37 | 38 | - **Kubernetes Admission Control:** 39 | - AquaSec integrates with Kubernetes admission controllers to enforce security policies during the pod creation process. 40 | - Policies can prevent the deployment of vulnerable or misconfigured containers. 41 | 42 | - **Network Segmentation:** 43 | - AquaSec can segment Kubernetes network traffic using microsegmentation to restrict communication between pods. 44 | 45 | **6. Advanced Features:** 46 | 47 | - **Secrets Management:** 48 | - AquaSec integrates with secrets management tools like HashiCorp Vault to secure sensitive data in containers and Kubernetes clusters. 49 | 50 | - **Compliance Auditing:** 51 | - AquaSec provides auditing capabilities to ensure compliance with standards like PCI-DSS, HIPAA, and NIST. 52 | 53 | **7. 
AquaSec in CI/CD Pipelines:** 54 | 55 | - **Integrating with Jenkins:** 56 | - Use the AquaSec Jenkins plugin to scan images as part of the build process and fail builds that do not meet security criteria. 57 | 58 | - **Automating Policies:** 59 | - Define security policies that are automatically enforced across all stages of the pipeline. 60 | 61 | **8. Monitoring and Reporting:** 62 | 63 | - **Dashboards:** 64 | - AquaSec provides detailed dashboards for monitoring vulnerabilities, policy violations, and runtime security events. 65 | 66 | - **Custom Alerts:** 67 | - Configure alerts for specific security events, such as the detection of high-severity vulnerabilities or unauthorized access attempts. 68 | 69 | **9. Scaling AquaSec:** 70 | 71 | - **High Availability:** 72 | - Deploy AquaSec in a high-availability configuration with multiple Aqua Consoles and databases to ensure resilience. 73 | 74 | - **Integrating with SIEMs:** 75 | - AquaSec integrates with Security Information and Event Management (SIEM) systems like Splunk and IBM QRadar for centralized monitoring. 76 | 77 | **10. Troubleshooting AquaSec:** 78 | 79 | - **Common Issues:** 80 | - **Failed Scans:** Ensure that the Aqua scanner is properly configured and has access to the image registry. 81 | - **Policy Enforcement Issues:** Review policy definitions and ensure they are correctly applied. 82 | 83 | - **Debugging:** 84 | - Check AquaSec logs for detailed error information and troubleshooting steps. 85 | -------------------------------------------------------------------------------- /CI-CD/GitLab-CI.md: -------------------------------------------------------------------------------- 1 | # GitLab CI Cheatsheet 2 | 3 | ![](https://imgur.com/dbufti0.png) 4 | 5 | **1. Introduction:** 6 | 7 | - GitLab CI/CD is a part of GitLab, a complete DevOps platform, allowing you to define CI/CD pipelines directly within your GitLab repository using the `.gitlab-ci.yml` file. 8 | 9 | **2. 
Key Concepts:** 10 | 11 | - **Pipeline:** A series of stages that run jobs sequentially or in parallel. 12 | - **Job:** An individual unit of work, such as running tests or deploying code. 13 | - **Stage:** A group of jobs that run in parallel. 14 | - **Runner:** The agent that executes jobs, can be GitLab-hosted or self-hosted. 15 | 16 | **3. Basic `.gitlab-ci.yml` Example:** 17 | 18 | - **YAML Syntax:** 19 | 20 | ```yaml 21 | stages: 22 | - build 23 | - test 24 | - deploy 25 | 26 | build-job: 27 | stage: build 28 | script: 29 | - echo "Building the project..." 30 | - make 31 | 32 | test-job: 33 | stage: test 34 | 35 | 36 | script: 37 | - echo "Running tests..." 38 | - make test 39 | 40 | deploy-job: 41 | stage: deploy 42 | script: 43 | - echo "Deploying the project..." 44 | - make deploy 45 | ``` 46 | 47 | **4. Runners:** 48 | 49 | - **Shared Runners:** Provided by GitLab and available to all projects. 50 | - **Specific Runners:** Custom runners registered to a specific project or group. 51 | - **Tags:** Use tags to specify which runner should execute a job. 52 | 53 | **5. Artifacts and Caching:** 54 | 55 | - **Artifacts:** Save job outputs and make them available to subsequent jobs. 56 | 57 | ```yaml 58 | artifacts: 59 | paths: 60 | - build/ 61 | expire_in: 1 week 62 | ``` 63 | 64 | - **Caching:** Speed up jobs by reusing previously downloaded dependencies. 65 | 66 | ```yaml 67 | cache: 68 | paths: 69 | - node_modules/ 70 | ``` 71 | 72 | **6. Environments and Deployments:** 73 | 74 | - **Environments:** Define environments to organize and manage deployments. 75 | 76 | ```yaml 77 | deploy-job: 78 | stage: deploy 79 | environment: 80 | name: production 81 | url: https://myapp.com 82 | script: 83 | - echo "Deploying to production..." 84 | - ./deploy.sh 85 | ``` 86 | 87 | - **Manual Deployments:** Require manual approval before a job runs. 88 | 89 | ```yaml 90 | deploy-job: 91 | stage: deploy 92 | script: 93 | - ./deploy.sh 94 | when: manual 95 | ``` 96 | 97 | **7. 
Advanced `.gitlab-ci.yml` Features:** 98 | 99 | - **YAML Anchors:** Reuse parts of your YAML configuration. 100 | 101 | ```yaml 102 | .default-job: &default-job 103 | script: 104 | - echo "Default job script" 105 | 106 | job1: 107 | <<: *default-job 108 | 109 | job2: 110 | <<: *default-job 111 | ``` 112 | 113 | - **Includes:** Include other YAML files to organize your configuration. 114 | 115 | ```yaml 116 | include: 117 | - local: '/templates/.gitlab-ci-template.yml' 118 | ``` 119 | 120 | **8. Security and Compliance:** 121 | 122 | - **Secret Variables:** Store sensitive data securely in GitLab CI/CD. 123 | 124 | ```yaml 125 | deploy-job: 126 | script: 127 | - deploy --token $CI_DEPLOY_TOKEN 128 | ``` 129 | 130 | - **Protected Branches:** Restrict certain jobs to run only on protected branches. 131 | 132 | **9. Troubleshooting:** 133 | 134 | - **Pipeline Logs:** Access detailed logs for each job to troubleshoot failures. 135 | - **Retrying Jobs:** Use the GitLab UI to manually retry failed jobs. 136 | 137 | **10. Best Practices:** 138 | 139 | - **Modular Pipelines:** Break down your pipeline into stages for better organization. 140 | - **Use CI/CD Templates:** Leverage GitLab’s built-in templates for common CI/CD tasks. 141 | - **Optimize Runner Usage:** Use caching, artifacts, and parallel jobs to reduce pipeline runtime. 142 | -------------------------------------------------------------------------------- /Security/SonarQube.md: -------------------------------------------------------------------------------- 1 | # SonarQube Cheatsheet 2 | 3 | ![text](https://imgur.com/l49w71S.png) 4 | 5 | **1. Introduction:** 6 | 7 | - **SonarQube** is a popular open-source platform for continuous inspection of code quality, performing automatic reviews with static analysis of code to detect bugs, code smells, and security vulnerabilities. 8 | 9 | **2. 
Installation:** 10 | 11 | - **Installing SonarQube:** 12 | - On Docker: 13 | 14 | ```bash 15 | docker run -d --name sonarqube -p 9000:9000 sonarqube 16 | ``` 17 | 18 | - Manual Installation on Linux: 19 | 20 | ```bash 21 | wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.9.0.43852.zip 22 | unzip sonarqube-8.9.0.43852.zip 23 | cd sonarqube-8.9.0.43852/bin/linux-x86-64 24 | ./sonar.sh start 25 | ``` 26 | 27 | - **Starting SonarQube:** 28 | - Access SonarQube at `http://localhost:9000`. 29 | - Default credentials: `admin/admin`. 30 | 31 | **3. Configuring SonarQube:** 32 | 33 | - **Database Configuration:** 34 | - SonarQube requires a database like PostgreSQL, MySQL, or Oracle. 35 | - Configure the database connection in the `sonar.properties` file: 36 | 37 | ```properties 38 | sonar.jdbc.url=jdbc:postgresql://localhost/sonarqube 39 | sonar.jdbc.username=sonar 40 | sonar.jdbc.password=sonar 41 | ``` 42 | 43 | - **Configuring Quality Profiles:** 44 | - Quality profiles define the set of rules SonarQube uses for code analysis. 45 | - Create or customize profiles in the **Quality Profiles** section of the UI. 46 | 47 | **4. Running Analysis:** 48 | 49 | - **Using SonarQube Scanner:** 50 | - Install the scanner: 51 | 52 | ```bash 53 | npm install -g sonarqube-scanner 54 | ``` 55 | 56 | - Run a scan: 57 | 58 | ```bash 59 | sonar-scanner \ 60 | -Dsonar.projectKey=my-project \ 61 | -Dsonar.sources=. \ 62 | -Dsonar.host.url=http://localhost:9000 \ 63 | -Dsonar.login=admin \ 64 | -Dsonar.password=admin 65 | ``` 66 | 67 | - **Integrating with CI/CD:** 68 | - Integrate SonarQube with Jenkins, GitLab CI, or other CI/CD tools to automate code analysis. 69 | 70 | **5. SonarQube Plugins:** 71 | 72 | - **Installing Plugins:** 73 | - Navigate to **Administration > Marketplace** in SonarQube and search for plugins. 74 | - Popular plugins include SonarLint, SonarCSS, and SonarTS. 
75 | 76 | - **SonarQube and IDE Integration:** 77 | - **SonarLint** is a plugin that integrates with IDEs like IntelliJ, Eclipse, and VS Code for real-time code quality feedback. 78 | 79 | **6. Advanced Features:** 80 | 81 | - **Code Coverage:** 82 | - SonarQube integrates with code coverage tools like Jacoco for Java and Istanbul for JavaScript to report on test coverage. 83 | 84 | - **Security Vulnerabilities:** 85 | - SonarQube detects vulnerabilities and provides remediation guidance based on OWASP and SANS standards. 86 | 87 | **7. Managing Users and Permissions:** 88 | 89 | - **User Management:** 90 | - Add users and groups in the **Security** section. 91 | - Assign roles such as **Admin**, **User**, or **Code Viewer**. 92 | 93 | - **LDAP/SSO Integration:** 94 | - Configure LDAP or SSO in `sonar.properties` for centralized user authentication. 95 | 96 | **8. Monitoring and Reporting:** 97 | 98 | - **Project Dashboards:** 99 | - SonarQube provides detailed dashboards for each project, showing metrics like code coverage, duplications, and issues over time. 100 | 101 | - **Custom Reports:** 102 | - Generate custom reports with detailed metrics and trends for management or compliance purposes. 103 | 104 | **9. Scaling SonarQube:** 105 | 106 | - **High Availability:** 107 | - Run SonarQube in a cluster mode by configuring multiple nodes and a load balancer. 108 | - Configure the cluster settings in the `sonar.properties` file. 109 | 110 | - **Optimizing Performance:** 111 | - Use a separate database for larger SonarQube deployments and allocate sufficient resources to the server. 112 | 113 | **10. Troubleshooting SonarQube:** 114 | 115 | - **Common Issues:** 116 | - **Out of Memory:** Increase JVM heap size in `sonar.properties`. 117 | - **Failed Scans:** Check the logs in `logs/` directory for detailed error messages. 
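- **Heap Size Example:** As a sketch of the out-of-memory fix above, heap limits live in `sonar.properties`; the keys below are the standard SonarQube JVM-option properties, but the sizes are illustrative — tune them to your host's memory:

  ```properties
  # Illustrative heap settings; adjust -Xmx/-Xms to available memory
  sonar.web.javaOpts=-Xmx1G -Xms256m
  sonar.ce.javaOpts=-Xmx2G -Xms512m
  sonar.search.javaOpts=-Xmx1G -Xms1G
  ```
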
118 | 119 | - **Debugging:** 120 | - Enable debug logging in `sonar.properties`: 121 | 122 | ```properties 123 | sonar.log.level=DEBUG 124 | ``` 125 | -------------------------------------------------------------------------------- /CI-CD/GitHub-Actions.md: -------------------------------------------------------------------------------- 1 | # GitHub Actions Cheatsheet 2 | 3 | ![](https://imgur.com/GMwRo18.png) 4 | 5 | **1. Introduction:** 6 | 7 | - GitHub Actions is a powerful CI/CD and automation tool integrated directly into GitHub repositories, allowing you to build, test, and deploy your code. 8 | 9 | **2. Key Concepts:** 10 | 11 | - **Workflow:** An automated process defined in YAML that is triggered by events like `push`, `pull_request`, etc. 12 | - **Job:** A set of steps that runs on the same runner. 13 | - **Step:** An individual task, such as running a script or installing a dependency. 14 | - **Runner:** A server that runs the jobs in a workflow, can be GitHub-hosted or self-hosted. 15 | 16 | **3. Basic Workflow Example:** 17 | 18 | - **YAML Syntax:** 19 | 20 | ```yaml 21 | name: CI Workflow 22 | 23 | on: 24 | push: 25 | branches: 26 | - main 27 | pull_request: 28 | branches: 29 | - main 30 | 31 | jobs: 32 | build: 33 | runs-on: ubuntu-latest 34 | steps: 35 | - uses: actions/checkout@v3 36 | - name: Set up Node.js 37 | uses: actions/setup-node@v3 38 | with: 39 | node-version: '14' 40 | - run: npm install 41 | - run: npm test 42 | ``` 43 | 44 | **4. Common Actions:** 45 | 46 | - **actions/checkout:** Checks out your repository under `$GITHUB_WORKSPACE`. 47 | - **actions/setup-node:** Sets up a Node.js environment. 48 | - **actions/upload-artifact:** Uploads build artifacts for later use. 49 | - **actions/cache:** Caches dependencies like `node_modules` or `Maven`. 50 | 51 | **5. Triggers:** 52 | 53 | - **on: push:** Trigger a workflow when a push occurs. 54 | - **on: pull_request:** Trigger a workflow when a pull request is opened. 
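- **Combining Triggers:** A single `on:` block can list several events, and branch or path filters narrow when each one fires. A minimal sketch (branch and path names are placeholders):

  ```yaml
  on:
    push:
      branches: [main]
      paths: ['src/**']   # only run when source files change
    pull_request:
      branches: [main]
  ```
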
55 | - **on: schedule:** Schedule a workflow to run at specific times using cron syntax. 56 | 57 | **6. Environment Variables:** 58 | 59 | - **Set environment variables:** 60 | 61 | ```yaml 62 | env: 63 | NODE_ENV: production 64 | DEBUG: true 65 | ``` 66 | 67 | - **Access secrets:** 68 | 69 | ```yaml 70 | env: 71 | MY_SECRET: ${{ secrets.MY_SECRET }} 72 | ``` 73 | 74 | **7. Matrix Builds:** 75 | 76 | - **Example:** 77 | 78 | ```yaml 79 | jobs: 80 | build: 81 | runs-on: ubuntu-latest 82 | strategy: 83 | matrix: 84 | node-version: [12, 14, 16] 85 | steps: 86 | - uses: actions/checkout@v3 87 | - name: Set up Node.js 88 | uses: actions/setup-node@v3 89 | with: 90 | node-version: ${{ matrix.node-version }} 91 | - run: npm install 92 | - run: npm test 93 | ``` 94 | 95 | **8. Artifacts and Caching:** 96 | 97 | - **Upload Artifacts:** 98 | 99 | ```yaml 100 | - name: Upload build artifacts 101 | uses: actions/upload-artifact@v3 102 | with: 103 | name: my-artifact 104 | path: ./build 105 | ``` 106 | 107 | - **Caching Dependencies:** 108 | 109 | ```yaml 110 | - name: Cache Node.js modules 111 | uses: actions/cache@v3 112 | with: 113 | path: node_modules 114 | key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }} 115 | restore-keys: | 116 | ${{ runner.os }}-node- 117 | ``` 118 | 119 | **9. 
Reusable Workflows:** 120 | 121 | - **Define a reusable workflow:** 122 | 123 | ```yaml 124 | name: Reusable CI Workflow 125 | 126 | on: 127 | workflow_call: 128 | inputs: 129 | node-version: 130 | required: true 131 | type: string 132 | 133 | jobs: 134 | build: 135 | runs-on: ubuntu-latest 136 | steps: 137 | - uses: actions/checkout@v3 138 | - name: Set up Node.js 139 | uses: actions/setup-node@v3 140 | with: 141 | node-version: ${{ inputs.node-version }} 142 | - run: npm install 143 | - run: npm test 144 | ``` 145 | 146 | - **Call a reusable workflow:** 147 | 148 | ```yaml 149 | jobs: 150 | call-workflow: 151 | uses: ./.github/workflows/reusable-workflow.yml 152 | with: 153 | node-version: '14' 154 | ``` 155 | 156 | **10. Best Practices:** 157 | 158 | - **Modular Workflows:** Break down complex workflows into smaller, reusable pieces. 159 | - **Use Environments:** Leverage environments in GitHub Actions for deployments with manual approvals. 160 | - **Secret Management:** Always use GitHub Secrets for sensitive information and never hard-code them. 161 | -------------------------------------------------------------------------------- /CI-CD/Jenkins.md: -------------------------------------------------------------------------------- 1 | # **Jenkins Cheatsheet** 2 | 3 | ![](https://imgur.com/jWGs9lH.png) 4 | 5 | **1. Introduction:** 6 | 7 | - Jenkins is an open-source automation server that helps automate parts of software development related to building, testing, and deploying, facilitating continuous integration and delivery. 8 | 9 | **2. 
Installation:** 10 | 11 | - **On Ubuntu:** 12 | 13 | ```bash 14 | sudo apt update 15 | sudo apt install openjdk-11-jre 16 | wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add - 17 | sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list' 18 | sudo apt update 19 | sudo apt install jenkins 20 | sudo systemctl start jenkins 21 | sudo systemctl status jenkins 22 | ``` 23 | 24 | - **Access Jenkins:** 25 | - Visit `http://localhost:8080` in your web browser. 26 | 27 | **3. Jenkins Pipeline:** 28 | 29 | - **Declarative Pipeline:** 30 | 31 | ```groovy 32 | pipeline { 33 | agent any 34 | environment { 35 | MY_VAR = "value" 36 | } 37 | stages { 38 | stage('Checkout') { 39 | steps { 40 | checkout scm 41 | } 42 | } 43 | stage('Build') { 44 | steps { 45 | sh 'make' 46 | } 47 | } 48 | stage('Test') { 49 | steps { 50 | sh 'make test' 51 | } 52 | } 53 | stage('Deploy') { 54 | steps { 55 | sh 'make deploy' 56 | } 57 | } 58 | } 59 | post { 60 | success { 61 | echo 'Pipeline completed successfully!' 62 | } 63 | failure { 64 | echo 'Pipeline failed.' 65 | } 66 | } 67 | } 68 | ``` 69 | 70 | - **Scripted Pipeline:** 71 | 72 | ```groovy 73 | node { 74 | stage('Checkout') { 75 | checkout scm 76 | } 77 | stage('Build') { 78 | sh 'make' 79 | } 80 | stage('Test') { 81 | sh 'make test' 82 | } 83 | stage('Deploy') { 84 | sh 'make deploy' 85 | } 86 | } 87 | ``` 88 | 89 | **4. Common Jenkins Commands:** 90 | 91 | - **Restart Jenkins:** 92 | 93 | ```bash 94 | sudo systemctl restart jenkins 95 | ``` 96 | 97 | - **Manage Jenkins from CLI:** 98 | 99 | ```bash 100 | java -jar jenkins-cli.jar -s http://localhost:8080/ list-jobs 101 | ``` 102 | 103 | **5. Useful Jenkins Plugins:** 104 | 105 | - **Blue Ocean:** Modern UI for Jenkins pipelines. 106 | - **Git:** Integrate Git version control into Jenkins. 107 | - **Pipeline:** Enables Pipeline as Code. 108 | - **Credentials Binding:** Securely manage credentials. 
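- **Credentials Binding Example:** A sketch of using a bound secret inside a pipeline step; `deploy-token` is a hypothetical credential ID and `./deploy.sh` a placeholder script — substitute your own:

  ```groovy
  // Exposes the secret as $TOKEN only within the enclosed block
  withCredentials([string(credentialsId: 'deploy-token', variable: 'TOKEN')]) {
      sh './deploy.sh --token "$TOKEN"'
  }
  ```
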
109 | - **SonarQube Scanner:** Integrate code quality checks. 110 | - **Slack Notification:** Send pipeline status notifications to Slack. 111 | 112 | **6. Best Practices:** 113 | 114 | - **Pipeline as Code:** Always use Jenkins Pipelines defined in `Jenkinsfile` for consistent and version-controlled builds. 115 | - **Use Parameters:** Use parameters to make your pipelines flexible and reusable. 116 | 117 | ```groovy 118 | parameters { 119 | string(name: 'ENV', defaultValue: 'dev', description: 'Environment') 120 | } 121 | ``` 122 | 123 | - **Secure Jenkins:** Regularly update plugins, use RBAC, and secure the Jenkins instance with HTTPS. 124 | 125 | **7. Jenkins Configuration:** 126 | 127 | - **Manage Jenkins:** 128 | - Manage and configure global settings from the Jenkins dashboard under **Manage Jenkins**. 129 | - **Configure Tools:** Set up JDK, Maven, and other tools globally in **Global Tool Configuration**. 130 | - **Jenkinsfile Configuration:** 131 | - Define your pipeline stages, environment, and agents within a `Jenkinsfile` stored in your repository. 132 | 133 | **8. Advanced Jenkins:** 134 | 135 | - **Parallel Stages:** 136 | 137 | ```groovy 138 | pipeline { 139 | agent any 140 | stages { 141 | stage('Parallel') { 142 | parallel { 143 | stage('Unit Tests') { 144 | steps { 145 | sh 'make test' 146 | } 147 | } 148 | stage('Integration Tests') { 149 | steps { 150 | sh 'make integration-test' 151 | } 152 | } 153 | } 154 | } 155 | } 156 | } 157 | ``` 158 | 159 | - **Shared Libraries:** Centralize and reuse pipeline code across projects using Shared Libraries. 160 | -------------------------------------------------------------------------------- /Containerization/Podman.md: -------------------------------------------------------------------------------- 1 | # Podman Cheatsheet 2 | 3 | ![text](https://imgur.com/6x1bZIJ.png) 4 | 5 | **1. 
Introduction:** 6 | 7 | - **Podman** is an open-source container engine that performs much like Docker but without the daemon dependency. It supports the Open Container Initiative (OCI) standards for both containers and container images. 8 | 9 | **2. Key Concepts:** 10 | 11 | - **Pod:** A group of containers that run together and share resources, similar to a Kubernetes Pod. 12 | - **Rootless Containers:** Podman can run containers as a non-root user. 13 | - **Docker Compatibility:** Podman commands are similar to Docker, making it easy to switch between the two. 14 | 15 | **3. Installation:** 16 | 17 | - **On Fedora:** 18 | 19 | ```bash 20 | sudo dnf install podman 21 | ``` 22 | 23 | - **On Ubuntu:** 24 | 25 | ```bash 26 | sudo apt-get -y install podman 27 | ``` 28 | 29 | **4. Basic Podman Commands:** 30 | 31 | - **Run a Container:** 32 | 33 | ```bash 34 | podman run -dt -p 8080:80 nginx 35 | ``` 36 | 37 | - **List Running Containers:** 38 | 39 | ```bash 40 | podman ps 41 | ``` 42 | 43 | - **Stop a Container:** 44 | 45 | ```bash 46 | podman stop container_id 47 | ``` 48 | 49 | - **Remove a Container:** 50 | 51 | ```bash 52 | podman rm container_id 53 | ``` 54 | 55 | - **Build an Image:** 56 | 57 | ```bash 58 | podman build -t my-image:latest . 59 | ``` 60 | 61 | **5. Podman vs Docker:** 62 | 63 | - **No Daemon:** Podman does not rely on a central daemon; each container is an isolated process. 64 | - **Rootless Mode:** Allows running containers without root privileges, enhancing security. 65 | - **Podman Pods:** Group containers under a single network namespace. 66 | 67 | **6. 
Pods in Podman:** 68 | 69 | - **Create a Pod:** 70 | 71 | ```bash 72 | podman pod create --name mypod -p 8080:80 73 | ``` 74 | 75 | - **Run a Container in a Pod:** 76 | 77 | ```bash 78 | podman run -dt --pod mypod nginx 79 | ``` 80 | 81 | - **Inspect a Pod:** 82 | 83 | ```bash 84 | podman pod inspect mypod 85 | ``` 86 | 87 | - **Stop a Pod:** 88 | 89 | ```bash 90 | podman pod stop mypod 91 | ``` 92 | 93 | **7. Networking:** 94 | 95 | - **Podman Network Command:** 96 | 97 | ```bash 98 | podman network create mynetwork 99 | ``` 100 | 101 | - **Attaching a Container to a Network:** 102 | 103 | ```bash 104 | podman run -dt --network mynetwork nginx 105 | ``` 106 | 107 | **8. Storage Management:** 108 | 109 | - **Mount a Volume:** 110 | 111 | ```bash 112 | podman run -dt -v /host/data:/container/data nginx 113 | ``` 114 | 115 | - **List Volumes:** 116 | 117 | ```bash 118 | podman volume ls 119 | ``` 120 | 121 | - **Create a Volume:** 122 | 123 | ```bash 124 | podman volume create myvolume 125 | ``` 126 | 127 | **9. Rootless Containers:** 128 | 129 | - **Running Rootless:** 130 | 131 | ```bash 132 | podman --rootless run -dt -p 8080:80 nginx 133 | ``` 134 | 135 | - **Inspect Rootless Mode:** 136 | 137 | ```bash 138 | podman info --format '{{.Host.Rootless}}' 139 | ``` 140 | 141 | **10. Podman Compose:** 142 | 143 | - **Install Podman Compose:** 144 | 145 | ```bash 146 | pip3 install podman-compose 147 | ``` 148 | 149 | - **Using Docker Compose with Podman:** 150 | 151 | ```bash 152 | podman-compose up 153 | ``` 154 | 155 | **11. Troubleshooting Podman:** 156 | 157 | - **Check Podman Logs:** 158 | 159 | ```bash 160 | podman logs container_id 161 | ``` 162 | 163 | - **Check Network Configuration:** 164 | 165 | ```bash 166 | podman network inspect mynetwork 167 | ``` 168 | 169 | - **Debugging Podman Containers:** 170 | 171 | ```bash 172 | podman exec -it container_id /bin/bash 173 | ``` 174 | 175 | **12. 
Podman in CI/CD:** 176 | 177 | - **Using Podman in GitLab CI:** 178 | 179 | ```yaml 180 | image: quay.io/podman/stable 181 | 182 | build: 183 | script: 184 | - podman build -t myimage . 185 | - podman push myimage registry.example.com/myimage:latest 186 | ``` 187 | 188 | **13. Security Best Practices:** 189 | 190 | - **Run Containers as Non-Root:** 191 | - Use rootless mode or specify a non-root user in the container. 192 | 193 | ```bash 194 | podman run -dt -u 1001 nginx 195 | ``` 196 | 197 | - **Use SELinux:** 198 | - Enable SELinux for added security on supported systems. 199 | 200 | ```bash 201 | podman run -dt --security-opt label=type:container_runtime_t nginx 202 | ``` 203 | 204 | **14. Migrating from Docker to Podman:** 205 | 206 | - **Docker Compatibility Mode:** 207 | 208 | ```bash 209 | alias docker=podman 210 | ``` 211 | 212 | - **Importing Docker Images:** 213 | 214 | ```bash 215 | podman pull docker-daemon:nginx:latest 216 | ``` 217 | 218 | **15. Podman on Kubernetes:** 219 | 220 | - **CRI-O Integration:** 221 | - Podman can be used with CRI-O as a runtime for Kubernetes, allowing seamless integration with Kubernetes clusters. 222 | -------------------------------------------------------------------------------- /Security/Trivy.md: -------------------------------------------------------------------------------- 1 | # Trivy Cheatsheet 2 | 3 | ![text](https://imgur.com/TYu7qw7.png) 4 | 5 | **1. Introduction:** 6 | 7 | - **Trivy** is a comprehensive and easy-to-use security scanner for container images, file systems, and Git repositories, detecting vulnerabilities, misconfigurations, and secrets. 8 | 9 | **2. 
Installation:** 10 | 11 | - **Installing Trivy:** 12 | - On macOS using Homebrew: 13 | 14 | ```bash 15 | brew install aquasecurity/trivy/trivy 16 | ``` 17 | 18 | - On Linux: 19 | 20 | ```bash 21 | sudo apt-get install wget apt-transport-https gnupg lsb-release 22 | wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add - 23 | echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list 24 | sudo apt-get update 25 | sudo apt-get install trivy 26 | ``` 27 | 28 | - On Windows: 29 | - Download the binary from the [GitHub releases](https://github.com/aquasecurity/trivy/releases). 30 | 31 | **3. Basic Usage:** 32 | 33 | - **Scanning a Docker Image:** 34 | 35 | ```bash 36 | trivy image nginx:latest 37 | ``` 38 | 39 | - This command scans the `nginx:latest` Docker image for known vulnerabilities. 40 | 41 | - **Scanning a File System:** 42 | 43 | ```bash 44 | trivy fs /path/to/directory 45 | ``` 46 | 47 | - This command scans the specified directory for vulnerabilities and misconfigurations. 48 | 49 | - **Scanning a Git Repository:** 50 | 51 | ```bash 52 | trivy repo https://github.com/user/repository 53 | ``` 54 | 55 | - This command scans the entire GitHub repository for vulnerabilities. 56 | 57 | **4. Scanning Options:** 58 | 59 | - **Severity Levels:** 60 | - Filter results based on severity: 61 | 62 | ```bash 63 | trivy image --severity HIGH,CRITICAL nginx:latest 64 | ``` 65 | 66 | - This command limits the output to high and critical vulnerabilities only. 67 | 68 | - **Ignore Unfixed Vulnerabilities:** 69 | 70 | ```bash 71 | trivy image --ignore-unfixed nginx:latest 72 | ``` 73 | 74 | - Excludes vulnerabilities that have no known fixes. 75 | 76 | - **Output Formats:** 77 | - JSON: 78 | 79 | ```bash 80 | trivy image -f json -o results.json nginx:latest 81 | ``` 82 | 83 | - Table (default): 84 | 85 | ```bash 86 | trivy image -f table nginx:latest 87 | ``` 88 | 89 | **5. 
Advanced Usage:**
90 | 
91 | - **Customizing Vulnerability Database Update:**
92 | 
93 |   ```bash
94 |   trivy image --skip-update nginx:latest
95 |   ```
96 | 
97 |   - Skips updating the vulnerability database before scanning.
98 | 
99 | - **Using Trivy with Docker:**
100 |   - Running Trivy as a Docker container:
101 | 
102 |     ```bash
103 |     docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image nginx:latest
104 |     ```
105 | 
106 |   - Scanning an image in a private registry (Trivy reads the credentials from the `TRIVY_USERNAME` and `TRIVY_PASSWORD` environment variables):
107 | 
108 |     ```bash
109 |     TRIVY_USERNAME=<username> TRIVY_PASSWORD=<password> trivy image myregistry.com/myimage:tag
110 |     ```
111 | 
112 | - **Trivy in CI/CD Pipelines:**
113 |   - Integrate Trivy into CI/CD workflows to automate vulnerability scanning during build stages.
114 | 
115 | **6. Trivy Misconfiguration Detection:**
116 | 
117 | - **Scanning for Misconfigurations:**
118 | 
119 |   ```bash
120 |   trivy config /path/to/configuration/files
121 |   ```
122 | 
123 |   - Scans configuration files (e.g., Kubernetes, Terraform) for security misconfigurations.
124 | 
125 | **7. Trivy and Secrets Detection:**
126 | 
127 | - **Scanning for Secrets:**
128 | 
129 |   ```bash
130 |   trivy fs --security-checks secrets /path/to/code
131 |   ```
132 | 
133 |   - Detects hardcoded secrets like passwords, API keys, and tokens within the codebase.
134 | 
135 | **8. Integration with Other Tools:**
136 | 
137 | - **Trivy and Harbor:**
138 |   - Trivy can be used as a vulnerability scanner within [Harbor](https://goharbor.io/), a cloud-native registry.
139 | 
140 | - **Trivy and Kubernetes:**
141 |   - Trivy can scan Kubernetes resources for vulnerabilities and misconfigurations.
142 | 
143 | **9. 
Trivy Reports:**
144 | 
145 | - **Generating Reports:**
146 |   - HTML Report (Trivy has no separate `report` subcommand; use the bundled HTML template, whose path can vary by install):
147 | 
148 |     ```bash
149 |     trivy image --format template --template "@contrib/html.tpl" -o report.html nginx:latest
150 |     ```
151 | 
152 |   - Detailed Reports with Severity Breakdown:
153 | 
154 |     ```bash
155 |     trivy image --severity HIGH,CRITICAL --format table nginx:latest
156 |     ```
157 | 
158 | **10. Troubleshooting Trivy:**
159 | 
160 | - **Common Issues:**
161 |   - **Slow Scans:** Consider skipping database updates if they are not necessary.
162 |   - **Network Issues:** Ensure your network allows access to Trivy’s vulnerability database.
163 | 
164 | - **Debugging:**
165 |   - Use the `--debug` flag to see detailed logs:
166 | 
167 |     ```bash
168 |     trivy image --debug nginx:latest
169 |     ```
170 | 
--------------------------------------------------------------------------------
/CI-CD/CircleCI.md:
--------------------------------------------------------------------------------
1 | # CircleCI Cheatsheet
2 | 
3 | ![](https://imgur.com/s6aXKl9.png)
4 | 
5 | **1. Introduction:**
6 | 
7 | - CircleCI is a continuous integration and delivery platform that automates the build, test, and deploy processes, allowing for quick and efficient development workflows.
8 | 
9 | **2. Key Concepts:**
10 | 
11 | - **Job:** A collection of steps to be executed in a build.
12 | - **Step:** A single command or script within a job.
13 | - **Workflow:** Defines the order of jobs and their dependencies.
14 | - **Executor:** Specifies the environment in which the job runs (e.g., Docker, Linux VM, macOS).
15 | 
16 | **3. 
Basic `.circleci/config.yml` Example:**
17 | 
18 | - **YAML Syntax:**
19 | 
20 |   ```yaml
21 |   version: 2.1
22 | 
23 |   jobs:
24 |     build:
25 |       docker:
26 |         - image: circleci/node:14
27 |       steps:
28 |         - checkout
29 |         - run: npm install
30 |         - run: npm test
31 | 
32 |     deploy:
33 |       docker:
34 |         - image: circleci/node:14
35 |       steps:
36 |         - checkout
37 |         - run: npm run deploy
38 | 
39 |   workflows:
40 |     version: 2
41 |     build_and_deploy:
42 |       jobs:
43 |         - build
44 |         - deploy:
45 |             requires: [build]
46 |   ```
47 | 
48 | **4. Executors:**
49 | 
50 | - **Docker:** Run jobs in Docker containers.
51 | 
52 |   ```yaml
53 |   docker:
54 |     - image: circleci/node:14
55 |   ```
56 | 
57 | - **Machine:** Run jobs in a Linux VM.
58 | 
59 |   ```yaml
60 |   machine:
61 |     image: ubuntu-2004:202101-01
62 |   ```
63 | 
64 | - **macOS:** Run jobs on macOS for iOS builds.
65 | 
66 |   ```yaml
67 |   macos:
68 |     xcode: "12.4.0"
69 |   ```
70 | 
71 | **5. Reusable Configurations:**
72 | 
73 | - **Commands:** Reuse steps across multiple jobs.
74 | 
75 |   ```yaml
76 |   commands:
77 |     setup:
78 |       steps:
79 |         - checkout
80 |         - run: npm install
81 | 
82 |   jobs:
83 |     build:
84 |       docker:
85 |         - image: circleci/node:14
86 |       steps:
87 |         - setup
88 |         - run: npm test
89 |   ```
90 | 
91 | - **Executors:** Reuse the environment configuration.
92 | 
93 |   ```yaml
94 |   executors:
95 |     node-executor:
96 |       docker:
97 |         - image: circleci/node:14
98 | 
99 |   jobs:
100 |     build:
101 |       executor: node-executor
102 |       steps:
103 |         - checkout
104 |         - run: npm install
105 |   ```
106 | 
107 | **6. Caching and Artifacts:**
108 | 
109 | - **Caching:** Speed up builds by caching dependencies.
110 | 
111 |   ```yaml
112 |   - restore_cache:
113 |       keys:
114 |         - v1-dependencies-{{ checksum "package-lock.json" }}
115 |   - save_cache:
116 |       paths:
117 |         - node_modules
118 |       key: v1-dependencies-{{ checksum "package-lock.json" }}
119 |   ```
120 | 
121 | - **Artifacts:** Save build outputs and other data for later use. 
121 | 
122 |   ```yaml
123 |   - store_artifacts:
124 |       path: ./build
125 |       destination: build_output
126 |   ```
127 | 
128 | **7. Workflows:**
129 | 
130 | - **Sequential Jobs:** Define jobs that run in sequence by declaring dependencies with `requires` (jobs listed without `requires` all start in parallel).
131 | 
132 |   ```yaml
133 |   workflows:
134 |     version: 2
135 |     build_and_deploy:
136 |       jobs:
137 |         - build
138 |         - deploy:
139 |             requires: [build]
140 |   ```
141 | 
142 | - **Parallel Jobs:** Run jobs in parallel to speed up pipeline execution.
143 | 
144 |   ```yaml
145 |   workflows:
146 |     version: 2
147 |     test-and-deploy:
148 |       jobs:
149 |         - test
150 |         - deploy
151 |   ```
152 | 
153 | **8. Environment Variables:**
154 | 
155 | - **Project-level Variables:** Set environment variables in the CircleCI project settings.
156 | - **Context Variables:** Use contexts to securely store and manage environment variables.
157 | - **Job-level Variables:**
158 | 
159 |   ```yaml
160 |   jobs:
161 |     build:
162 |       docker:
163 |         - image: circleci/node:14
164 |       environment:
165 |         NODE_ENV: production
166 |   ```
167 | 
168 | **9. Advanced CircleCI Features:**
169 | 
170 | - **Orbs:** Reusable packages of CircleCI configuration that make it easy to integrate with third-party tools.
171 | 
172 |   ```yaml
173 |   orbs:
174 |     aws-s3: circleci/aws-s3@4.2.0
175 | 
176 |   jobs:
177 |     deploy:
178 |       steps:
179 |         - aws-s3/copy:
180 |             from: "build/"
181 |             to: "s3://my-bucket/"
182 |   ```
183 | 
184 | - **Conditional Steps:** Run steps conditionally based on the success or failure of previous steps.
185 | 
186 |   ```yaml
187 |   - run:
188 |       name: Deploy only if tests pass
189 |       command: ./deploy.sh
190 |       when: on_success
191 |   ```
192 | 
193 | **10. Best Practices:**
194 | 
195 | - **Parallelism:** Use parallelism to reduce build times by running tests and other tasks simultaneously.
196 | - **Modular Configurations:** Break down your CircleCI configuration into reusable components with orbs, commands, and executors. 
196 | - **Effective Caching:** Cache dependencies effectively to reduce build times, but remember to invalidate caches when necessary to avoid stale dependencies. 197 | -------------------------------------------------------------------------------- /Networking/Linkerd.md: -------------------------------------------------------------------------------- 1 | # Linkerd Cheatsheet 2 | 3 | ![text](https://imgur.com/xyQcgGf.png) 4 | 5 | ## **Overview** 6 | 7 | Linkerd is a lightweight service mesh designed to be simple to operate while providing powerful features for observability, security, and reliability. Unlike some other service meshes, Linkerd focuses on minimal configuration and performance. 8 | 9 | ### **Basic Concepts** 10 | 11 | - **Service Mesh:** Linkerd provides an infrastructure layer that enables secure, reliable, and observable communication between microservices. It operates transparently, requiring minimal changes to your services. 12 | 13 | - **Control Plane:** Linkerd’s control plane manages the configuration and behavior of the service mesh. It includes components for managing policies, collecting telemetry, and issuing certificates. 14 | 15 | - **Data Plane:** The data plane consists of lightweight proxies deployed as sidecars to each service. These proxies handle all inbound and outbound traffic, providing features like mTLS, retries, and load balancing. 16 | 17 | ### **Traffic Management** 18 | 19 | - **Routing:** Linkerd automatically manages routing for service-to-service communication. It handles retries and timeouts, ensuring that requests are routed efficiently and reliably. 20 | 21 | - **Load Balancing:** Linkerd distributes traffic across available service instances to prevent any single instance from being overwhelmed. It uses algorithms like random and least-request to balance traffic effectively. 22 | 23 | - **Traffic Splitting:** Linkerd allows you to split traffic between different versions of a service. 
This is useful for canary deployments, where a small percentage of traffic is sent to a new version before full rollout. 24 | 25 | ### **Security** 26 | 27 | - **mTLS:** Linkerd provides out-of-the-box mutual TLS (mTLS) for all communication between services. This ensures that all traffic is encrypted and that both the client and server are authenticated. 28 | 29 | - **Identity Service:** Linkerd includes an identity service that issues and renews TLS certificates for the proxies. This service manages the cryptographic identities used for mTLS. 30 | 31 | - **Authorization:** Linkerd’s mTLS also acts as an authorization mechanism, ensuring that only authorized services can communicate with each other. This enhances security by preventing unauthorized access. 32 | 33 | ### **Observability** 34 | 35 | - **Metrics:** Linkerd automatically collects and exposes metrics such as latency, success rates, and request volumes. These metrics are essential for monitoring the health and performance of your services. 36 | 37 | - **Prometheus Integration:** Linkerd integrates seamlessly with Prometheus, allowing you to scrape and visualize metrics. Prometheus can be used to create alerts based on Linkerd’s metrics. 38 | 39 | - **Grafana Dashboards:** Linkerd provides pre-built Grafana dashboards for visualizing metrics. These dashboards offer insights into service performance and help in identifying issues. 40 | 41 | - **Distributed Tracing:** Linkerd supports distributed tracing, allowing you to track requests as they flow through different services. This helps in understanding the service interaction and diagnosing issues. 42 | 43 | ### **Advanced Concepts** 44 | 45 | - **Service Profiles:** Service profiles allow you to define expected behavior for services, such as retries, timeouts, and traffic shaping. They provide fine-grained control over how traffic is handled. 46 | 47 | - **Tap API:** The Tap API provides real-time visibility into live traffic. 
You can use it to inspect requests and responses, making it a powerful tool for debugging and monitoring. 48 | 49 | - **Traffic Shifting:** Linkerd supports traffic shifting, enabling you to gradually shift traffic from one version of a service to another. This is particularly useful for rolling out updates safely. 50 | 51 | - **Multicluster Support:** Linkerd can extend its service mesh across multiple Kubernetes clusters, allowing you to manage services that span different environments. This is useful for high availability and disaster recovery. 52 | 53 | - **Policy Enforcement:** Linkerd allows you to define policies that control traffic routing, access control, and rate limiting. These policies help ensure that services behave as expected under various conditions. 54 | 55 | ### **Example Use Case** 56 | 57 | Suppose you are managing a microservices application where you need a lightweight service mesh to provide observability and security with minimal overhead: 58 | 59 | 1. **Simplified Deployment:** Deploy Linkerd with minimal configuration and start benefiting from automatic mTLS and load balancing. 60 | 2. **Canary Releases:** Use traffic splitting to gradually route traffic to a new version of a service, reducing the risk of full deployment. 61 | 3. **Real-time Monitoring:** Utilize the Tap API to monitor live traffic and quickly identify any issues with requests. 62 | 4. **Secure Communication:** Rely on Linkerd’s mTLS to secure all service-to-service communication without the need for complex certificate management. 63 | 5. **Cross-Cluster Management:** Extend Linkerd’s service mesh across multiple Kubernetes clusters to ensure high availability and disaster recovery. 64 | -------------------------------------------------------------------------------- /Containerization/Helm.md: -------------------------------------------------------------------------------- 1 | # Helm Cheatsheet 2 | 3 | ![text](https://imgur.com/nDW9BHK.png) 4 | 5 | **1. 
Introduction:** 6 | 7 | - **Helm** is a package manager for Kubernetes, helping you define, install, and upgrade even the most complex Kubernetes applications. It uses charts to package Kubernetes resources. 8 | 9 | **2. Key Concepts:** 10 | 11 | - **Chart:** A collection of files that describe a set of Kubernetes resources. 12 | - **Release:** An instance of a chart running in a Kubernetes cluster. 13 | - **Repository:** A place where charts can be collected and shared. 14 | 15 | **3. Installing Helm:** 16 | 17 | - **Helm Installation:** 18 | 19 | ```bash 20 | curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash 21 | ``` 22 | 23 | - **Add a Helm Repository:** 24 | 25 | ```bash 26 | helm repo add stable https://charts.helm.sh/stable 27 | helm repo update 28 | ``` 29 | 30 | **4. Helm Commands:** 31 | 32 | - **Install a Chart:** 33 | 34 | ```bash 35 | helm install my-release stable/nginx 36 | ``` 37 | 38 | - **List Releases:** 39 | 40 | ```bash 41 | helm list 42 | ``` 43 | 44 | - **Upgrade a Release:** 45 | 46 | ```bash 47 | helm upgrade my-release stable/nginx 48 | ``` 49 | 50 | - **Uninstall a Release:** 51 | 52 | ```bash 53 | helm uninstall my-release 54 | ``` 55 | 56 | - **Search for Charts:** 57 | 58 | ```bash 59 | helm search repo nginx 60 | ``` 61 | 62 | **5. 
Chart Structure:**
62 | 
63 | - **Basic Chart Structure:**
64 | 
65 |   ```
66 |   my-chart/
67 |   ├── Chart.yaml
68 |   ├── values.yaml
69 |   ├── charts/
70 |   ├── templates/
71 |   │   ├── deployment.yaml
72 |   │   ├── service.yaml
73 |   │   └── _helpers.tpl
74 |   ```
75 | 
76 | - **Chart.yaml:**
77 | 
78 |   ```yaml
79 |   apiVersion: v2
80 |   name: my-chart
81 |   description: A Helm chart for Kubernetes
82 |   version: 0.1.0
83 |   ```
84 | 
85 | - **values.yaml:**
86 | 
87 |   ```yaml
88 |   replicaCount: 3
89 |   image:
90 |     repository: nginx
91 |     tag: stable
92 |   ```
93 | 
94 | - **Template Example (deployment.yaml):**
95 | 
96 |   ```yaml
97 |   apiVersion: apps/v1
98 |   kind: Deployment
99 |   metadata:
100 |     name: {{ .Release.Name }}-nginx
101 |   spec:
102 |     replicas: {{ .Values.replicaCount }}
103 |     selector:
104 |       matchLabels:
105 |         app: {{ .Release.Name }}-nginx
106 |     template:
107 |       metadata:
108 |         labels:
109 |           app: {{ .Release.Name }}-nginx
110 |       spec:
111 |         containers:
112 |           - name: nginx
113 |             image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
114 |   ```
115 | 
116 | **6. Helm Lifecycle:**
117 | 
118 | - **Creating a New Chart:**
119 | 
120 |   ```bash
121 |   helm create my-chart
122 |   ```
123 | 
124 | - **Templating:**
125 |   - **Render the chart templates locally:**
126 | 
127 |     ```bash
128 |     helm template my-release my-chart
129 |     ```
130 | 
131 | - **Lint a Chart:**
132 | 
133 |   ```bash
134 |   helm lint my-chart
135 |   ```
136 | 
137 | **7. Helm Repositories:**
138 | 
139 | - **Creating a Local Helm Repository:**
140 | 
141 |   ```bash
142 |   helm repo index ./charts --url http://example.com/charts
143 |   ```
144 | 
145 | - **Serving Charts:**
146 |   - `helm serve` was removed in Helm 3; host the indexed `charts/` directory with any static HTTP server instead, for example:
147 | 
148 |     ```bash
149 |     python3 -m http.server 8879 --directory ./charts
150 |     ```
151 | 
152 | **8. 
Helm Hooks:**
152 | 
153 | - **Example of a Pre-Install Hook:**
154 | 
155 |   ```yaml
156 |   apiVersion: batch/v1
157 |   kind: Job
158 |   metadata:
159 |     name: "{{ .Release.Name }}-preinstall"
160 |     annotations:
161 |       "helm.sh/hook": pre-install
162 |   spec:
163 |     template:
164 |       spec:
165 |         containers:
166 |           - name: preinstall
167 |             image: busybox
168 |             command: ['sh', '-c', 'echo Hello Helm']
169 |         restartPolicy: Never
170 |   ```
171 | 
172 | **9. Helm and CI/CD:**
173 | 
174 | - **Using Helm in Jenkins Pipeline:**
175 | 
176 |   ```groovy
177 |   pipeline {
178 |     agent any
179 |     stages {
180 |       stage('Deploy') {
181 |         steps {
182 |           script {
183 |             sh "helm upgrade --install my-release ./my-chart"
184 |           }
185 |         }
186 |       }
187 |     }
188 |   }
189 |   ```
190 | 
191 | **10. Advanced Helm Concepts:**
192 | 
193 | - **Subcharts:** Use subcharts to package related Kubernetes resources together.
194 | - **ChartMuseum:** A Helm repository server used to store and manage Helm charts.
195 | - **Helmfile:** A declarative spec for deploying Helm charts.
196 | 
197 | **11. Helm Security:**
198 | 
199 | - **Chart Signing:**
200 |   - Sign and verify Helm charts to ensure integrity.
201 | 
202 |   ```bash
203 |   helm package --sign --key '<key-name>' --keyring ~/.gnupg/secring.gpg ./my-chart
204 |   helm verify my-chart-0.1.0.tgz
205 |   ```
206 | 
207 | - **RBAC:** Control access to Helm releases with Kubernetes RBAC.
208 | 
209 | **12. 
Troubleshooting Helm:** 211 | 212 | - **Debugging a Chart Installation:** 213 | 214 | ```bash 215 | helm install --debug --dry-run my-release ./my-chart 216 | ``` 217 | 218 | - **Checking Helm Release History:** 219 | 220 | ```bash 221 | helm history my-release 222 | ``` 223 | 224 | - **Rollback a Release:** 225 | 226 | ```bash 227 | helm rollback my-release 1 228 | ``` 229 | -------------------------------------------------------------------------------- /Networking/Envoy.md: -------------------------------------------------------------------------------- 1 | # Envoy Cheatsheet 2 | 3 | ![text](https://imgur.com/iw5sG1a.png) 4 | 5 | ## **Overview** 6 | 7 | Envoy is a high-performance, open-source edge and service proxy. Originally developed by Lyft, Envoy is now widely adopted for managing microservices communication, especially within service meshes. Envoy handles tasks such as load balancing, security, observability, and routing. 8 | 9 | ### **Basic Concepts** 10 | 11 | - **Proxy:** Envoy acts as a proxy, sitting between services and managing all incoming and outgoing traffic. It intercepts, processes, and forwards requests based on predefined configurations. 12 | 13 | - **Listener:** A listener is a configuration that defines how Envoy should accept incoming connections. It specifies the port and protocols (e.g., HTTP, TCP) Envoy listens to. 14 | 15 | - **Cluster:** In Envoy, a cluster represents a group of upstream services that Envoy proxies traffic to. A cluster typically consists of multiple instances of a service, allowing Envoy to distribute requests across them. 16 | 17 | - **Route:** Routes define how requests are processed and forwarded by Envoy. A route maps incoming requests to the appropriate cluster based on various criteria like URL paths or headers. 18 | 19 | ### **Traffic Management** 20 | 21 | - **Load Balancing:** Envoy provides several load balancing algorithms to distribute traffic across service instances. 
Common algorithms include round-robin, least-request, and ring-hash. Load balancing ensures that no single instance is overwhelmed with too much traffic. 22 | 23 | - **Retries:** Envoy can automatically retry failed requests based on configurable policies. For example, if an upstream service fails to respond, Envoy can retry the request on a different instance. 24 | 25 | - **Circuit Breakers:** Circuit breakers prevent a service from becoming overwhelmed by limiting the number of concurrent connections or requests. If a service exceeds the defined thresholds, Envoy will stop sending traffic to it until it recovers. 26 | 27 | - **Rate Limiting:** Envoy allows you to define rate limits on incoming requests, controlling how many requests are allowed over a given period. This is useful for preventing abuse or overloading of services. 28 | 29 | ### **Security** 30 | 31 | - **TLS Termination:** Envoy can handle TLS termination, decrypting inbound traffic, and encrypting outbound traffic. This simplifies the management of secure communications within your services. 32 | 33 | - **mTLS (Mutual TLS):** Envoy supports mutual TLS for securing service-to-service communication. This ensures that both parties in a communication exchange authenticate each other and that their communication is encrypted. 34 | 35 | - **RBAC (Role-Based Access Control):** Envoy implements RBAC to control access to services based on predefined roles and permissions. This adds an additional layer of security, ensuring that only authorized services or users can access specific resources. 36 | 37 | ### **Observability** 38 | 39 | - **Metrics:** Envoy provides detailed metrics about network traffic, including request counts, latency, error rates, and more. These metrics are essential for monitoring the health and performance of your services. 40 | 41 | - **Access Logs:** Envoy generates detailed access logs for each request it handles. 
These logs include information about the request’s origin, the response status, and any errors encountered. Access logs are valuable for auditing and debugging.
42 | 
43 | - **Tracing:** Envoy integrates with distributed tracing systems like Jaeger and Zipkin. Tracing provides a detailed view of a request’s journey through various services, helping you identify bottlenecks and failures in your application.
44 | 
45 | ### **Advanced Concepts**
46 | 
47 | - **Filter Chains:** Envoy’s filter chains allow for complex request processing. Filters can modify, route, or reject requests based on various conditions. Common filters include authentication, rate limiting, and request transformation.
48 | 
49 | - **Dynamic Configuration with xDS APIs:** Envoy supports dynamic configuration through a set of APIs known as xDS (e.g., ADS, CDS, LDS, RDS, EDS). These APIs allow Envoy to update its configuration in real-time without restarting. This capability is crucial for environments where services are constantly changing.
50 | 
51 | - **Sidecar Proxy:** In a service mesh, Envoy is typically deployed as a sidecar proxy alongside each microservice. The sidecar intercepts all traffic to and from the service, providing security, observability, and traffic management features.
52 | 
53 | ### **Example Use Case**
54 | 
55 | Imagine you are running an e-commerce application with multiple microservices such as payment, inventory, and user services. Here’s how Envoy can help:
56 | 
57 | 1. **Secure Communication:** Use Envoy’s TLS termination to encrypt all traffic between the microservices.
58 | 2. **Load Balancing:** Distribute incoming requests evenly across multiple instances of the payment service using Envoy’s round-robin load balancing.
59 | 3. **Rate Limiting:** Protect the user service from abuse by setting a rate limit on login attempts.
60 | 4. **Observability:** Monitor the health of all microservices using Envoy’s metrics and integrate with Prometheus for alerting.
61 | 5. 
**Resilience:** Use circuit breakers to prevent the inventory service from becoming overwhelmed during high traffic periods. 64 | -------------------------------------------------------------------------------- /Monitoring/ELK-Stack.md: -------------------------------------------------------------------------------- 1 | # ELK Stack Cheatsheet 2 | 3 | ![text](https://imgur.com/wLayBA4.png) 4 | 5 | **1. Introduction:** 6 | 7 | - The **ELK Stack** is a powerful suite of open-source tools: **Elasticsearch** for search and analytics, **Logstash** for data processing, and **Kibana** for visualization. It's often extended with **Beats** for data collection and **X-Pack** for additional features. 8 | 9 | **2. Elasticsearch:** 10 | 11 | - **Installing Elasticsearch:** 12 | 13 | ```bash 14 | wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.2-x86_64.rpm 15 | sudo rpm -ivh elasticsearch-7.10.2-x86_64.rpm 16 | sudo systemctl start elasticsearch 17 | sudo systemctl enable elasticsearch 18 | ``` 19 | 20 | - **Basic Configuration:** 21 | - Edit `/etc/elasticsearch/elasticsearch.yml`: 22 | 23 | ```yaml 24 | network.host: localhost 25 | http.port: 9200 26 | ``` 27 | 28 | - **Basic Queries:** 29 | 30 | ```bash 31 | curl -X GET "localhost:9200/_cat/indices?v" 32 | curl -X GET "localhost:9200/my-index/_search?q=user:john" 33 | ``` 34 | 35 | - **Indexing Documents:** 36 | 37 | ```bash 38 | curl -X POST "localhost:9200/my-index/_doc/1" -H 'Content-Type: application/json' -d' 39 | { 40 | "user": "john", 41 | "message": "Hello, Elasticsearch!" 42 | }' 43 | ``` 44 | 45 | - **Elasticsearch Cluster:** 46 | - Configure multi-node clusters by setting `cluster.name`, `node.name`, and `discovery.seed_hosts` in `elasticsearch.yml`. 47 | 48 | **3. 
Logstash:**
49 | 
50 | - **Installing Logstash:**
51 | 
52 |   ```bash
53 |   wget https://artifacts.elastic.co/downloads/logstash/logstash-7.10.2.rpm
54 |   sudo rpm -ivh logstash-7.10.2.rpm
55 |   sudo systemctl start logstash
56 |   sudo systemctl enable logstash
57 |   ```
58 | 
59 | - **Logstash Configuration:**
60 |   - Pipeline definitions use Logstash’s own configuration syntax (not YAML); place them under `/etc/logstash/conf.d/`:
61 | 
62 |   ```conf
63 |   input {
64 |     file {
65 |       path => "/var/log/syslog"
66 |       start_position => "beginning"
67 |     }
68 |   }
69 |   filter {
70 |     grok {
71 |       match => { "message" => "%{SYSLOGLINE}" }
72 |     }
73 |   }
74 |   output {
75 |     elasticsearch {
76 |       hosts => ["localhost:9200"]
77 |       index => "syslog-%{+YYYY.MM.dd}"
78 |     }
79 |   }
80 |   ```
81 | 
82 | - **Running Logstash:**
83 | 
84 |   ```bash
85 |   sudo systemctl start logstash
86 |   ```
87 | 
88 | - **Using Beats with Logstash:**
89 |   - Use **Filebeat**, **Metricbeat**, or **Packetbeat** to ship data to Logstash for processing.
90 | 
91 | **4. Kibana:**
92 | 
93 | - **Installing Kibana:**
94 | 
95 |   ```bash
96 |   wget https://artifacts.elastic.co/downloads/kibana/kibana-7.10.2-x86_64.rpm
97 |   sudo rpm -ivh kibana-7.10.2-x86_64.rpm
98 |   sudo systemctl start kibana
99 |   sudo systemctl enable kibana
100 |   ```
101 | 
102 | - **Basic Configuration:**
103 |   - Edit `/etc/kibana/kibana.yml`:
104 | 
105 |     ```yaml
106 |     server.port: 5601
107 |     server.host: "localhost"
108 |     elasticsearch.hosts: ["http://localhost:9200"]
109 |     ```
110 | 
111 | - **Creating Visualizations:**
112 |   1. Navigate to **Visualize** in the Kibana interface.
113 |   2. Choose a visualization type (e.g., line chart, pie chart).
114 |   3. Select the data source and configure your queries.
115 |   4. Save and add the visualization to a dashboard.
116 | 
117 | - **Kibana Dashboards:**
118 |   - Use dashboards to combine multiple visualizations into a single view, useful for monitoring and analysis.
119 | 
120 | **5. 
Beats:**
121 | 
122 | - **Filebeat:**
123 |   - **Installing Filebeat:**
124 | 
125 |     ```bash
126 |     wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.2-x86_64.rpm
127 |     sudo rpm -ivh filebeat-7.10.2-x86_64.rpm
128 |     sudo systemctl start filebeat
129 |     sudo systemctl enable filebeat
130 |     ```
131 | 
132 |   - **Configuring Filebeat:**
133 | 
134 |     ```yaml
135 |     filebeat.inputs:
136 |       - type: log
137 |         paths:
138 |           - /var/log/syslog
139 | 
140 |     output.elasticsearch:
141 |       hosts: ["localhost:9200"]
142 |     ```
143 | 
144 |   - **Running Filebeat:**
145 | 
146 |     ```bash
147 |     sudo systemctl start filebeat
148 |     ```
149 | 
150 | - **Metricbeat:**
151 |   - Collects metrics from the system and services like MySQL, Docker, etc.
152 | 
153 | - **Packetbeat:**
154 |   - Captures network traffic and analyzes protocols.
155 | 
156 | **6. Security in ELK Stack:**
157 | 
158 | - **Enabling HTTPS in Elasticsearch:**
159 | 
160 |   ```yaml
161 |   xpack.security.enabled: true
162 |   xpack.security.http.ssl.enabled: true
163 |   xpack.security.http.ssl.keystore.path: /path/to/keystore.jks
164 |   ```
165 | 
166 | - **User Authentication:**
167 |   - Use **X-Pack** to manage users, roles, and permissions.
168 | 
169 | **7. ELK Stack in Kubernetes:**
170 | 
171 | - **Deploying ELK Stack:**
172 |   - Use Helm charts to deploy the ELK stack in Kubernetes for easier management and scaling.
173 | 
174 | **8. Troubleshooting ELK Stack:**
175 | 
176 | - **Common Issues:**
177 |   - **High Memory Usage:** Optimize the heap size in Elasticsearch.
178 |   - **Logstash Performance:** Tune pipeline workers and batch size.
179 | 
180 | - **Debugging:**
181 |   - Check logs for Elasticsearch (`/var/log/elasticsearch/`), Logstash (`/var/log/logstash/`), and Kibana (`/var/log/kibana/`).
182 |   - Use `curl` to test Elasticsearch endpoints and ensure services are running. 
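The troubleshooting advice above comes down to querying Elasticsearch's HTTP endpoints and reading the JSON they return. As a hedged illustration (not part of the original cheatsheet), here is a small Python sketch that classifies a `_cluster/health` response body; the field names follow Elasticsearch's cluster health API, and the sample payload below is invented for demonstration:

```python
import json

def classify_cluster_health(raw_body: str) -> str:
    """Map an Elasticsearch _cluster/health JSON body to a short action hint."""
    health = json.loads(raw_body)
    status = health.get("status", "unknown")
    if status == "green":
        return "ok"
    if status == "yellow":
        # Usually unassigned replica shards, e.g. on a single-node cluster.
        unassigned = health.get("unassigned_shards", 0)
        return f"check replica settings ({unassigned} unassigned shards)"
    if status == "red":
        return "primary shards missing - check node logs"
    return f"unexpected status: {status}"

# Illustrative body, as returned by: curl -s localhost:9200/_cluster/health
sample = '{"cluster_name": "elasticsearch", "status": "yellow", "unassigned_shards": 2}'
print(classify_cluster_health(sample))
```

The same pattern extends to other `_cluster`/`_cat` endpoints when scripting automated health checks.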
184 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # devops-cheatsheet 2 | # 🚀***DevOps Tools Cheatsheet Collection*** 3 | ![DevOps Tools Cheatsheet Collection](https://github.com/fareedmohamed11/devops-cheatsheet/blob/9b0e4a77e6766aa1f5ab6d9a81737da495de3071/0_laJvKi8DonJmv9zT.png) 4 | ### Welcome to the DevOps Tools Cheatsheet Collection – your go-to resource for mastering DevOps tools and technologies! 5 | 6 | 💡**Whether you're an experienced DevOps engineer, a sysadmin, a developer, or a newcomer looking to streamline workflows, these curated cheatsheets offer quick commands, best practices, and essential tips to supercharge your DevOps journey.** 7 | 8 | ## 📖 What makes this collection special? 9 | ## ✅ Comprehensive: Covers everything from CI/CD, Containerization, Cloud, Security, Monitoring, and more. 10 | ## ✅ Beginner-Friendly & Advanced: Useful for new learners as well as seasoned professionals. 11 | ## ✅ Structured & Easy to Navigate: Find what you need in seconds. 12 | ## ✅ Regularly Updated: We keep it fresh with new tools and best practices. 
13 | 14 | # 📂 Repository Overview 15 | 16 | /devops-cheatsheet/ 17 | - README.md 18 | - CONTRIBUTING.md 19 | - CI-CD/ 20 | - Jenkins.md 21 | - GitHub-Actions.md 22 | - GitLab-CI.md 23 | - CircleCI.md 24 | - Containerization/ 25 | - Docker.md 26 | - Kubernetes.md 27 | - CRI-O.md 28 | - OpenShift.md 29 | - Helm.md 30 | - Podman.md 31 | - Monitoring/ 32 | - Prometheus.md 33 | - Grafana.md 34 | - ELK-Stack.md 35 | - CloudWatch.md 36 | - Nagios.md 37 | - Security/ 38 | - Trivy.md 39 | - SonarQube.md 40 | - AquaSec.md 41 | - HashiCorp-Vault.md 42 | - Version-Control/ 43 | - GitLab.md 44 | - GitHub.md 45 | - Bitbucket.md 46 | - Cloud/ 47 | - AWS.md 48 | - Azure.md 49 | - Ansible.md 50 | - GCP.md 51 | - Kubernetes-on-AWS.md 52 | - Terraform.md 53 | - Networking/ 54 | - Istio.md 55 | - Envoy.md 56 | - Consul.md 57 | - Linkerd.md 58 | 59 | # 📚 Cheatsheet Categories 60 | 61 | ## 🔵 CI/CD Automation 62 | Boost deployment speed with continuous integration & continuous deployment: 63 | - 🚀 [Jenkins](https://www.jenkins.io/) 64 | - 🚀 [GitHub Actions](https://github.com/features/actions) 65 | - 🚀 [GitLab CI](https://docs.gitlab.com/ee/ci/) 66 | - 🚀 [CircleCI](https://circleci.com/) 67 | 68 | ## 🔵 Containerization & Orchestration 69 | Build, manage, and deploy containers effortlessly: 70 | - 🔹 [Docker](https://www.docker.com/) 71 | - 🔹 [Kubernetes](https://kubernetes.io/) 72 | - 🔹 [CRI-O](https://cri-o.io/) 73 | - 🔹 [OpenShift](https://www.openshift.com/) 74 | - 🔹 [Helm](https://helm.sh/) 75 | - 🔹 [Podman](https://podman.io/) 76 | 77 | ## 🔵 Monitoring 78 | Track and monitor your systems effectively: 79 | - 🔹 [Prometheus](https://prometheus.io/) 80 | - 🔹 [Grafana](https://grafana.com/) 81 | - 🔹 [ELK Stack](https://www.elastic.co/what-is/elk-stack) 82 | - 🔹 [CloudWatch](https://aws.amazon.com/cloudwatch/) 83 | - 🔹 [Nagios](https://www.nagios.org/) 84 | 85 | ## 🔵 Security 86 | Enhance security and compliance: 87 | - 🔹 [Trivy](https://aquasecurity.github.io/trivy/) 88 | - 🔹 
[SonarQube](https://www.sonarqube.org/) 89 | - 🔹 [AquaSec](https://www.aquasec.com/) 90 | - 🔹 [HashiCorp Vault](https://www.vaultproject.io/) 91 | 92 | ## 🔵 Version Control 93 | Manage your code repositories efficiently: 94 | - 🔹 [GitLab](https://about.gitlab.com/) 95 | - 🔹 [GitHub](https://github.com/) 96 | - 🔹 [Bitbucket](https://bitbucket.org/) 97 | 98 | ## 🔵 Cloud 99 | Leverage cloud platforms for scalability: 100 | - 🔹 [AWS](https://aws.amazon.com/) 101 | - 🔹 [Azure](https://azure.microsoft.com/) 102 | - 🔹 [Ansible](https://www.ansible.com/) 103 | - 🔹 [GCP](https://cloud.google.com/) 104 | - 🔹 [Kubernetes on AWS](https://aws.amazon.com/eks/) 105 | - 🔹 [Terraform](https://www.terraform.io/) 106 | 107 | ## 🔵 Networking 108 | Optimize and secure your network: 109 | - 🔹 [Istio](https://istio.io/) 110 | - 🔹 [Envoy](https://www.envoyproxy.io/) 111 | - 🔹 [Consul](https://www.consul.io/) 112 | - 🔹 [Linkerd](https://linkerd.io/) 113 | 114 | ## 👥 Who Should Use This? 115 | 116 | > ### 🔮 **Important** 117 | > - ✅ **DevOps Engineers** – Quick access to essential commands & tools 118 | > - ✅ **Sysadmins** – Simplify system management with structured cheatsheets 119 | > - ✅ **Developers** – Understand DevOps tools and workflows 120 | > - ✅ **Beginners** – Learn step-by-step with curated resources 121 | 122 | Whether you're **automating deployments**, **managing cloud infrastructure**, or **ensuring security compliance**, this collection is your **ultimate DevOps guide!** 🚀 123 | 124 | --- 125 | 126 | ## 🛠️ How to Use This Repository 127 | 128 | > ### ℹ️ **Note** 129 | > 1. **Explore the Categories**: Navigate through the folders to find the tool or technology you’re interested in. 130 | > 2. **Use the Cheatsheets**: Each cheatsheet is designed to provide quick access to the most important commands and concepts. 131 | > 3. **Contribute**: Found something missing? Want to share your own tips? Check out our [Contributing Guidelines](#). 132 | 133 | # 🤝 Contributions Welcome! 
134 | ## 💡 This is a community-driven project! If you have insights, fixes, or new tools to share, your contributions are highly valued. 135 | 136 | ## 🔥 Want to contribute? Check out the [CONTRIBUTING.md](https://github.com/fareedmohamed11/devops-cheatsheet/blob/29ec759a27f41c0e8a11a0c5e76dfc0f66ce7538/CONTRIBUTING.md) file. 137 | 138 | ## 👤 About Me 139 | 140 | Hi there! I'm **Fareed Mohamed**, passionate about DevOps, cloud infrastructure, and automation. Feel free to connect with me or check out my work: 141 | 142 | - 🌐 [LinkedIn Profile](https://www.linkedin.com/in/fareed-mohamed-412031282) 143 | - 💻 [GitHub Profile](https://github.com/fareedmohamed11) 144 | - 📞 Phone: 01011632634 145 | -------------------------------------------------------------------------------- /Security/HashiCorp-Vault.md: -------------------------------------------------------------------------------- 1 | # HashiCorp Vault Cheatsheet 2 | 3 | ![text](https://imgur.com/322q6Pi.png) 4 | 5 | **1. Introduction:** 6 | 7 | - **HashiCorp Vault** is a tool designed to securely store and access secrets. It can manage sensitive data such as passwords, API keys, and certificates. 8 | 9 | **2. Installation:** 10 | 11 | - **Installing Vault:** 12 | - On macOS using Homebrew: 13 | 14 | ```bash 15 | brew install vault 16 | ``` 17 | 18 | - On Linux: 19 | 20 | ```bash 21 | wget https://releases.hashicorp.com/vault/1.9.0/vault_1.9.0_linux_amd64.zip 22 | unzip vault_1.9.0_linux_amd64.zip 23 | sudo mv vault /usr/local/bin/ 24 | ``` 25 | 26 | - On Windows: 27 | - Download the binary from the [official HashiCorp releases](https://www.vaultproject.io/downloads). 28 | 29 | **3. Basic Usage:** 30 | 31 | - **Initializing Vault:** 32 | 33 | ```bash 34 | vault operator init 35 | ``` 36 | 37 | - This command initializes the Vault server, generating unseal keys and a root token. 
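The init defaults (five key shares, threshold of three) can be overridden at init time; a minimal sketch (the share counts below are illustrative, not recommendations):

```shell
# Split the root key into 7 shares, any 4 of which can unseal (defaults: 5/3).
vault operator init -key-shares=7 -key-threshold=4

# JSON output is easier to capture when scripting the initial setup.
# Guard this file carefully: it contains the unseal keys and root token.
vault operator init -format=json > init-output.json
```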
38 | 39 | - **Unsealing Vault:** 40 | 41 | ```bash 42 | vault operator unseal 43 | vault operator unseal 44 | vault operator unseal 45 | ``` 46 | 47 | - Unseal Vault using the keys provided during initialization. 48 | 49 | - **Storing Secrets:** 50 | 51 | ```bash 52 | vault kv put secret/my-secret password="mypassword" 53 | ``` 54 | 55 | - This command stores a secret in Vault at the path `secret/my-secret`. 56 | 57 | - **Retrieving Secrets:** 58 | 59 | ```bash 60 | vault kv get secret/my-secret 61 | ``` 62 | 63 | - Retrieves the secret stored at `secret/my-secret`. 64 | 65 | **4. Advanced Usage:** 66 | 67 | - **Dynamic Secrets:** 68 | - Vault can generate secrets dynamically, such as database credentials that are created on-demand. 69 | - Example: Generating MySQL credentials: 70 | 71 | ```bash 72 | vault write database/roles/my-role db_name=mydb creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}';" default_ttl="1h" max_ttl="24h" 73 | vault read database/creds/my-role 74 | ``` 75 | 76 | - **Secret Engines:** 77 | - Vault supports multiple secret engines like KV, AWS, GCP, and more. 78 | - Enable a secrets engine: 79 | 80 | ```bash 81 | vault secrets enable aws 82 | ``` 83 | 84 | - Configure and use the AWS secrets engine: 85 | 86 | ```bash 87 | vault write aws/config/root access_key=<ACCESS_KEY_ID> secret_key=<SECRET_ACCESS_KEY> 88 | vault write aws/roles/my-role credential_type=iam_user policy_arns=arn:aws:iam::aws:policy/ReadOnlyAccess 89 | ``` 90 | 91 | **5. Authentication Methods:** 92 | 93 | - **Enabling Authentication Methods:** 94 | - Vault supports various authentication methods, including AppRole, LDAP, and AWS. 
95 | - Enable an authentication method: 96 | 97 | ```bash 98 | vault auth enable approle 99 | ``` 100 | 101 | - **Configuring AppRole Authentication:** 102 | - Create a role: 103 | 104 | ```bash 105 | vault write auth/approle/role/my-role token_policies="default" token_ttl=1h token_max_ttl=4h 106 | ``` 107 | 108 | - Retrieve the role ID and secret ID: 109 | 110 | ```bash 111 | vault read auth/approle/role/my-role/role-id 112 | vault write -f auth/approle/role/my-role/secret-id 113 | ``` 114 | 115 | **6. Policies and Access Control:** 116 | 117 | - **Creating Policies:** 118 | - Define a policy to control access to secrets: 119 | 120 | ```hcl 121 | path "secret/data/*" { 122 | capabilities = ["create", "read", "update", "delete", "list"] 123 | } 124 | ``` 125 | 126 | - Apply the policy: 127 | 128 | ```bash 129 | vault policy write my-policy my-policy.hcl 130 | ``` 131 | 132 | **7. Vault in Production:** 133 | 134 | - **High Availability (HA):** 135 | - Vault supports HA configurations using storage backends like Consul. 136 | - Example Consul configuration: 137 | 138 | ```hcl 139 | storage "consul" { 140 | address = "127.0.0.1:8500" 141 | path = "vault/" 142 | } 143 | ``` 144 | 145 | - **Performance Replication:** 146 | - Vault Enterprise supports performance replication for scaling reads. 147 | 148 | **8. Integrations and Automation:** 149 | 150 | - **Terraform Integration:** 151 | - Use the [Terraform Vault provider](https://registry.terraform.io/providers/hashicorp/vault/latest/docs) to manage Vault resources. 152 | - Example Terraform configuration: 153 | 154 | ```hcl 155 | provider "vault" {} 156 | 157 | resource "vault_generic_secret" "example" { 158 | path = "secret/example" 159 | data_json = <<EOT 160 | {"example_key": "example_value"} 161 | EOT 162 | } 163 | ``` 
-------------------------------------------------------------------------------- /Monitoring/Nagios.md: -------------------------------------------------------------------------------- 100 | - Access the Nagios web interface at `http://<server-ip>/nagios`. 101 | - Default credentials: `nagiosadmin` and the password set during installation. 102 | 103 | - **Customizing the Interface:** 104 | - Modify the theme and layout by editing files in `/usr/local/nagios/share`. 105 | 106 | **7. 
Monitoring Remote Hosts:** 107 | 108 | - **NRPE (Nagios Remote Plugin Executor):** 109 | - **Installing NRPE:** 110 | 111 | ```bash 112 | sudo apt-get install nagios-nrpe-server nagios-plugins 113 | sudo systemctl start nagios-nrpe-server 114 | ``` 115 | 116 | - **Configuring NRPE:** 117 | - Edit `/etc/nagios/nrpe.cfg` to define allowed hosts and monitored services. 118 | 119 | ```cfg 120 | allowed_hosts=127.0.0.1,192.168.1.100 121 | command[check_disk]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/sda1 122 | ``` 123 | 124 | - **Monitoring with NRPE:** 125 | - Add a service in Nagios to monitor a remote host using NRPE. 126 | 127 | ```cfg 128 | define service { 129 | use generic-service 130 | host_name remotehost 131 | service_description Disk Usage 132 | check_command check_nrpe!check_disk 133 | } 134 | ``` 135 | 136 | **8. Nagios XI:** 137 | 138 | - **Introduction to Nagios XI:** 139 | - Nagios XI is the commercial version of Nagios Core, providing additional features like a more user-friendly interface, reporting, and advanced monitoring capabilities. 140 | 141 | - **Differences from Nagios Core:** 142 | - Built-in wizards, easier configuration, and more extensive support. 143 | 144 | **9. Advanced Nagios Concepts:** 145 | 146 | - **Passive Checks:** 147 | - Useful for monitoring systems where Nagios cannot initiate checks, but the system can send results to Nagios. 148 | 149 | - **Distributed Monitoring:** 150 | - Implement distributed monitoring by setting up multiple Nagios servers and configuring them to send data to a central Nagios server. 151 | 152 | **10. Securing Nagios:** 153 | 154 | - **Enabling HTTPS:** 155 | - Configure Apache to serve Nagios over HTTPS. 156 | 157 | ```bash 158 | sudo a2enmod ssl 159 | sudo service apache2 restart 160 | ``` 161 | 162 | - Update Nagios configuration in `/etc/apache2/sites-available/nagios.conf` to use SSL certificates. 
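The SSL stanza itself lives in the Apache vhost that serves Nagios; a minimal sketch, assuming the stock install paths (the hostname and certificate locations are placeholders, not values from the Nagios docs):

```apache
<VirtualHost *:443>
    ServerName nagios.example.com

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/nagios.crt
    SSLCertificateKeyFile /etc/ssl/private/nagios.key

    # Keep the aliases that the stock nagios.conf already defines
    ScriptAlias /nagios/cgi-bin "/usr/local/nagios/sbin"
    Alias /nagios "/usr/local/nagios/share"
</VirtualHost>
```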
163 | 164 | - **User Authentication:** 165 | - Use `.htpasswd` files to manage user access to the Nagios web interface. 166 | 167 | **11. Troubleshooting Nagios:** 168 | 169 | - **Common Issues:** 170 | - **Service Check Failing:** Ensure plugins are executable and paths are correct. 171 | - **Email Notifications Not Working:** Verify the mail server configuration and check the `maillog` for errors. 172 | 173 | - **Debugging:** 174 | - Use the Nagios log file at `/usr/local/nagios/var/nagios.log` to troubleshoot issues. 175 | - Run checks manually to verify plugin output. 176 | 177 | ```bash 178 | /usr/local/nagios/libexec/check_http -I 127.0.0.1 179 | ``` 180 | 181 | **12. Nagios and Docker:** 182 | 183 | - **Running Nagios in Docker:** 184 | 185 | ```bash 186 | docker run --name nagios -p 0.0.0.0:8080:80 jasonrivers/nagios 187 | ``` 188 | 189 | - **Customizing Dockerized Nagios:** 190 | - Mount volumes to add custom configurations and plugins. 191 | 192 | ```bash 193 | docker run --name nagios -v /path/to/nagios.cfg:/usr/local/nagios/etc/nagios.cfg jasonrivers/nagios 194 | ``` 195 | -------------------------------------------------------------------------------- /Monitoring/Grafana.md: -------------------------------------------------------------------------------- 1 | # Grafana Cheatsheet 2 | 3 | ![text](https://imgur.com/j07r4L6.png) 4 | 5 | **1. Introduction:** 6 | 7 | - **Grafana** is an open-source platform for monitoring and observability that allows you to query, visualize, and alert on metrics from multiple data sources like Prometheus, InfluxDB, Elasticsearch, and more. 8 | 9 | **2. Key Concepts:** 10 | 11 | - **Dashboard:** A collection of panels organized into a grid. 12 | - **Panel:** A visualization of data (graphs, charts, etc.) from a specific data source. 13 | - **Data Source:** The database or service that provides the metrics for Grafana to visualize. 
14 | - **Alerting:** Set up conditions to trigger notifications when metrics meet specific criteria. 15 | 16 | **3. Installation:** 17 | 18 | - **Running Grafana:** 19 | 20 | ```bash 21 | sudo apt-get install -y adduser libfontconfig1 22 | wget https://dl.grafana.com/oss/release/grafana_7.5.7_amd64.deb 23 | sudo dpkg -i grafana_7.5.7_amd64.deb 24 | sudo systemctl start grafana-server 25 | sudo systemctl enable grafana-server 26 | ``` 27 | 28 | - **Docker:** 29 | 30 | ```bash 31 | docker run -d -p 3000:3000 --name=grafana grafana/grafana 32 | ``` 33 | 34 | **4. Configuring Data Sources:** 35 | 36 | - **Adding Prometheus as a Data Source:** 37 | 1. Navigate to **Configuration > Data Sources**. 38 | 2. Click on **Add data source** and select **Prometheus**. 39 | 3. Enter the URL of your Prometheus server (e.g., `http://localhost:9090`). 40 | 4. Click **Save & Test** to verify the connection. 41 | 42 | - **Adding Elasticsearch as a Data Source:** 43 | 1. Navigate to **Configuration > Data Sources**. 44 | 2. Click on **Add data source** and select **Elasticsearch**. 45 | 3. Enter the URL, index name, and time field. 46 | 4. Click **Save & Test** to verify the connection. 47 | 48 | **5. Building Dashboards:** 49 | 50 | - **Creating a New Dashboard:** 51 | 1. Click the **+** icon in the sidebar and select **Dashboard**. 52 | 2. Click **Add new panel**. 53 | 3. Choose your data source and write a query (e.g., `rate(http_requests_total[5m])` for Prometheus). 54 | 4. Select a visualization type (e.g., **Graph**, **Stat**, **Gauge**). 55 | 5. Save the panel and the dashboard. 56 | 57 | - **Using Variables:** 58 | - **Creating a Variable:** 59 | 1. Go to **Dashboard settings** > **Variables** > **New**. 60 | 2. Set the **Name**, **Type** (e.g., **Query**), and **Query**. 61 | 3. Use the variable in panel queries by referencing it as **`$variable_name`**. 62 | 63 | **6. Alerting:** 64 | 65 | - **Creating Alerts:** 66 | 67 | 1. Add a panel to your dashboard. 68 | 2. 
In the **Alert** tab, click **Create Alert**. 69 | 3. Set the **Conditions** for triggering the alert (e.g., when a metric crosses a threshold). 70 | 4. Define the **Evaluation Interval** and **No Data** options. 71 | 5. Configure **Notifications** to send alerts via email, Slack, or other channels. 72 | 73 | - **Managing Alerts:** 74 | - Alerts can be managed centrally through the **Alerting** section in the sidebar. 75 | 76 | **7. Grafana Plugins:** 77 | 78 | - **Installing Plugins:** 79 | 80 | ```bash 81 | grafana-cli plugins install grafana-piechart-panel 82 | sudo systemctl restart grafana-server 83 | ``` 84 | 85 | - **Popular Plugins:** 86 | - **Pie Chart Panel:** Display metrics in a pie chart. 87 | - **Worldmap Panel:** Visualize data on a world map. 88 | - **Alert List Panel:** Display active alerts from multiple sources. 89 | 90 | **8. Dashboard Templating:** 91 | 92 | - **Using Templated Dashboards:** 93 | - Leverage variables to create dynamic dashboards that can change based on user input. 94 | 95 | - **Dynamic Panels:** 96 | - Create repeating panels or rows based on variable values (e.g., show metrics per host). 97 | 98 | **9. Customizing Grafana:** 99 | 100 | - **Themes:** 101 | - Switch between light and dark themes via **Preferences** in the dashboard settings. 102 | 103 | - **Custom Branding:** 104 | - Modify Grafana's appearance by adding custom logos and colors. Requires editing configuration files and CSS. 105 | 106 | **10. Securing Grafana:** 107 | 108 | - **User Management:** 109 | - Add users and assign them roles such as Viewer, Editor, or Admin. 110 | 111 | - **LDAP/SSO Integration:** 112 | - Configure Grafana to use LDAP or Single Sign-On (SSO) for user authentication. 113 | 114 | - **Enabling HTTPS:** 115 | 116 | ```yaml 117 | [server] 118 | protocol = https 119 | cert_file = /path/to/cert.crt 120 | cert_key = /path/to/cert.key 121 | ``` 122 | 123 | **11. 
Advanced Queries and Visualizations:** 124 | 125 | - **Grafana with PromQL:** 126 | - Use advanced PromQL queries for more complex visualizations. 127 | 128 | - **Annotations:** 129 | - Add annotations to mark specific events on graphs, useful for correlating issues with changes or incidents. 130 | 131 | **12. Grafana Loki:** 132 | 133 | - **Introduction to Loki:** 134 | - Grafana Loki is a horizontally scalable, highly available log aggregation system inspired by Prometheus. 135 | 136 | - **Setting up Loki:** 137 | 138 | ```bash 139 | docker run -d --name=loki -p 3100:3100 grafana/loki:2.2.0 -config.file=/etc/loki/local-config.yaml 140 | ``` 141 | 142 | - **Querying Logs in Grafana:** 143 | - Use **Loki** as a data source to query and visualize logs alongside metrics. 144 | 145 | **13. Grafana in Kubernetes:** 146 | 147 | - **Deploying Grafana in Kubernetes:** 148 | 149 | ```yaml 150 | apiVersion: apps/v1 151 | kind: Deployment 152 | metadata: 153 | name: grafana 154 | spec: 155 | replicas: 1 156 | selector: 157 | matchLabels: 158 | app: grafana 159 | template: 160 | metadata: 161 | labels: 162 | app: grafana 163 | spec: 164 | containers: 165 | - name: grafana 166 | image: grafana/grafana:7.5.7 167 | ports: 168 | - containerPort: 3000 169 | ``` 170 | 171 | **14. Troubleshooting Grafana:** 172 | 173 | - **Common Issues:** 174 | - **No Data:** Check data source configuration and queries. 175 | - **Slow Dashboards:** Optimize queries and reduce the time range. 176 | - **Plugin Errors:** Ensure plugins are compatible with your Grafana version. 177 | 178 | - **Debugging:** 179 | - View logs at `/var/log/grafana/grafana.log` for error details. 180 | - Use **`curl`** to test data source connectivity (e.g., `curl http://localhost:9090` for Prometheus). 
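A quick first check before digging into logs is Grafana's health endpoint (sketch; assumes a default local install on port 3000):

```shell
# Reports overall status plus database connectivity, e.g. {"database": "ok", ...}
curl -s http://localhost:3000/api/health
```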
181 | -------------------------------------------------------------------------------- /Monitoring/Prometheus.md: -------------------------------------------------------------------------------- 1 | # Prometheus Cheatsheet 2 | 3 | ![text](https://imgur.com/nthHFQk.png) 4 | 5 | **1. Introduction:** 6 | 7 | - **Prometheus** is an open-source systems monitoring and alerting toolkit, particularly well-suited for monitoring dynamic, cloud-native environments such as Kubernetes. It uses a pull-based model to scrape metrics from configured endpoints. 8 | 9 | **2. Key Concepts:** 10 | 11 | - **Metrics:** Data points collected over time, usually in the form of time series. 12 | - **PromQL:** Prometheus Query Language used to query the collected metrics. 13 | - **Exporters:** Components that expose metrics in a format that Prometheus can scrape. 14 | - **Alertmanager:** Manages alerts generated by Prometheus. 15 | 16 | **3. Installation:** 17 | 18 | - **Running Prometheus:** 19 | 20 | ```bash 21 | wget https://github.com/prometheus/prometheus/releases/download/v2.30.0/prometheus-2.30.0.linux-amd64.tar.gz 22 | tar xvfz prometheus-*.tar.gz 23 | cd prometheus-* 24 | ./prometheus --config.file=prometheus.yml 25 | ``` 26 | 27 | - **Docker:** 28 | 29 | ```bash 30 | docker run -p 9090:9090 prom/prometheus 31 | ``` 32 | 33 | **4. Prometheus Configuration:** 34 | 35 | - **Basic `prometheus.yml` Configuration:** 36 | 37 | ```yaml 38 | global: 39 | scrape_interval: 15s 40 | 41 | scrape_configs: 42 | - job_name: 'prometheus' 43 | static_configs: 44 | - targets: ['localhost:9090'] 45 | ``` 46 | 47 | - **Adding Targets:** 48 | 49 | ```yaml 50 | - job_name: 'node_exporter' 51 | static_configs: 52 | - targets: ['localhost:9100'] 53 | ``` 54 | 55 | **5. 
Prometheus Query Language (PromQL):** 56 | 57 | - **Basic Queries:** 58 | 59 | ```promql 60 | up 61 | rate(http_requests_total[5m]) 62 | ``` 63 | 64 | - **Aggregations:** 65 | 66 | ```promql 67 | sum(rate(http_requests_total[5m])) 68 | avg_over_time(http_requests_total[5m]) 69 | ``` 70 | 71 | - **Recording Rules:** 72 | 73 | ```yaml 74 | groups: 75 | - name: example 76 | rules: 77 | - record: job:http_inprogress_requests:sum 78 | expr: sum(http_inprogress_requests) by (job) 79 | ``` 80 | 81 | **6. Exporters:** 82 | 83 | - **Node Exporter:** Collects system-level metrics. 84 | 85 | ```bash 86 | wget https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz 87 | tar xvfz node_exporter-*.tar.gz 88 | ./node_exporter 89 | ``` 90 | 91 | - **Custom Exporter:** Writing a custom exporter using Python. 92 | 93 | ```python 94 | from prometheus_client import start_http_server, Gauge 95 | import random 96 | import time 97 | 98 | g = Gauge('random_number', 'A random number') 99 | 100 | def generate_random_number(): 101 | while True: 102 | g.set(random.random()) 103 | time.sleep(5) 104 | 105 | if __name__ == '__main__': 106 | start_http_server(8000) 107 | generate_random_number() 108 | ``` 109 | 110 | **7. Alerts and Alertmanager:** 111 | 112 | - **Alerting Rules:** 113 | 114 | ```yaml 115 | groups: 116 | - name: example 117 | rules: 118 | - alert: HighMemoryUsage 119 | expr: node_memory_Active_bytes / node_memory_MemTotal_bytes * 100 > 90 120 | for: 5m 121 | labels: 122 | severity: critical 123 | annotations: 124 | summary: "High memory usage detected on {{ $labels.instance }}" 125 | description: "Memory usage is above 90% for more than 5 minutes." 
126 | ``` 127 | 128 | - **Alertmanager Configuration:** 129 | 130 | ```yaml 131 | global: 132 | resolve_timeout: 5m 133 | 134 | route: 135 | group_by: ['alertname'] 136 | receiver: 'email' 137 | 138 | receivers: 139 | - name: 'email' 140 | email_configs: 141 | - to: 'your-email@example.com' 142 | from: 'prometheus@example.com' 143 | smarthost: 'smtp.example.com:587' 144 | auth_username: 'username' 145 | auth_password: 'password' 146 | ``` 147 | 148 | **8. Prometheus Federation:** 149 | 150 | - **Setting Up Federation:** 151 | 152 | ```yaml 153 | scrape_configs: 154 | - job_name: 'federate' 155 | honor_labels: true 156 | metrics_path: '/federate' 157 | params: 158 | match[]: 159 | - '{job="prometheus"}' 160 | static_configs: 161 | - targets: 162 | - 'prometheus-server-1:9090' 163 | - 'prometheus-server-2:9090' 164 | ``` 165 | 166 | **9. Monitoring Kubernetes with Prometheus:** 167 | 168 | - **Deploying Prometheus on Kubernetes:** 169 | 170 | ```yaml 171 | apiVersion: monitoring.coreos.com/v1 172 | kind: Prometheus 173 | metadata: 174 | name: prometheus 175 | spec: 176 | replicas: 1 177 | serviceAccountName: prometheus 178 | serviceMonitorSelector: 179 | matchLabels: 180 | team: frontend 181 | resources: 182 | requests: 183 | memory: 400Mi 184 | storage: 185 | volumeClaimTemplate: 186 | spec: 187 | storageClassName: standard 188 | resources: 189 | requests: 190 | storage: 50Gi 191 | ``` 192 | 193 | - **ServiceMonitor Example:** 194 | 195 | ```yaml 196 | apiVersion: monitoring.coreos.com/v1 197 | kind: ServiceMonitor 198 | metadata: 199 | name: example-monitor 200 | spec: 201 | selector: 202 | matchLabels: 203 | app: example 204 | endpoints: 205 | - port: web 206 | ``` 207 | 208 | **10. Advanced Prometheus Concepts:** 209 | 210 | - **Thanos:** Extends Prometheus with long-term storage, global querying, and downsampling. 211 | - **Cortex:** Multi-tenant, horizontally scalable Prometheus as a service. 212 | 213 | **11. 
Prometheus Security:** 214 | 215 | - **Basic Authentication:** 216 | 217 | ```yaml 218 | basic_auth: 219 | username: admin 220 | password: admin 221 | ``` 222 | 223 | - **TLS/SSL Configuration:** 224 | 225 | ```yaml 226 | tls_config: 227 | ca_file: /etc/prometheus/certs/ca.crt 228 | cert_file: /etc/prometheus/certs/prometheus.crt 229 | key_file: /etc/prometheus/certs/prometheus.key 230 | ``` 231 | 232 | **12. Troubleshooting Prometheus:** 233 | 234 | - **Common Issues:** 235 | - **High Cardinality Metrics:** Too many unique time series can overwhelm Prometheus. 236 | - **Slow Queries:** Optimize queries by avoiding high cardinality and using efficient aggregations. 237 | 238 | - **Debugging:** 239 | - Use the **`promtool`** command-line tool to check configuration files. 240 | - **Prometheus UI** provides an interface to debug queries and examine time series data. 241 | -------------------------------------------------------------------------------- /cloud/Ansible.md: -------------------------------------------------------------------------------- 1 | # 📜 **Ansible Cheatsheet** 2 | 3 | ![ansible](https://imgur.com/XwECXoK.png) 4 | 5 | ## **🔹 Introduction to Ansible** 6 | 7 | ### ✅ What is Ansible? 8 | 9 | Ansible is an **open-source automation tool** used for: 10 | ✅ **Configuration Management** (e.g., installing & managing software on servers) 11 | ✅ **Application Deployment** (e.g., deploying a web app on multiple servers) 12 | ✅ **Orchestration** (e.g., managing multi-tier applications like load balancer + DB) 13 | ✅ **Provisioning** (e.g., setting up cloud infrastructure with AWS, Azure, GCP) 14 | 15 | ### ✅ Why Use Ansible? 16 | 17 | 🔹 **Agentless:** No need to install agents on target machines (uses SSH & WinRM) 18 | 🔹 **Idempotent:** Runs multiple times without unwanted changes 19 | 🔹 **Human-Readable:** Uses YAML playbooks 20 | 🔹 **Cross-Platform:** Works on **Linux, Windows, macOS, Cloud Servers** 21 | 22 | --- 23 | 24 | ## **🛠️ 1. 
Installing & Setting Up Ansible** 25 | 26 | ### ✅ Installing Ansible on Linux 27 | 28 | ```bash 29 | # Ubuntu/Debian 30 | sudo apt update 31 | sudo apt install -y ansible 32 | 33 | # CentOS/RHEL 34 | sudo yum install -y ansible 35 | ``` 36 | 37 | ### ✅ Checking Installation 38 | 39 | ```bash 40 | ansible --version 41 | ``` 42 | 43 | ### ✅ Setting Up an Inventory File 44 | 45 | An **inventory file** (`/etc/ansible/hosts`) tells Ansible where to connect. 46 | Example: 47 | 48 | ```ini 49 | [webservers] 50 | server1 ansible_host=192.168.1.10 ansible_user=ubuntu 51 | server2 ansible_host=192.168.1.11 ansible_user=ubuntu 52 | 53 | [dbservers] 54 | db1 ansible_host=192.168.1.20 ansible_user=root 55 | ``` 56 | 57 | ### ✅ Testing Connectivity with `ping` 58 | 59 | ```bash 60 | ansible all -m ping 61 | ``` 62 | 63 | 📌 If successful, you'll see: 64 | 65 | ```bash 66 | server1 | SUCCESS => {"changed": false, "ping": "pong"} 67 | server2 | SUCCESS => {"changed": false, "ping": "pong"} 68 | ``` 69 | 70 | --- 71 | 72 | ## **🚀 2. Running Ad-Hoc Commands (Quick Tasks Without a Playbook)** 73 | 74 | ✅ **Check disk usage** 75 | 76 | ```bash 77 | ansible all -m command -a "df -h" 78 | ``` 79 | 80 | ✅ **Check system uptime** 81 | 82 | ```bash 83 | ansible all -m command -a "uptime" 84 | ``` 85 | 86 | ✅ **Create a directory on remote hosts** 87 | 88 | ```bash 89 | ansible all -m file -a "path=/opt/newdir state=directory" 90 | ``` 91 | 92 | ✅ **Copy files to remote servers** 93 | 94 | ```bash 95 | ansible all -m copy -a "src=/tmp/file.txt dest=/home/ubuntu/file.txt" 96 | ``` 97 | 98 | ✅ **Install a package (e.g., nginx) on all web servers** 99 | 100 | ```bash 101 | ansible webservers -m apt -a "name=nginx state=present" --become 102 | ``` 103 | 104 | ✅ **Restart a service (e.g., nginx)** 105 | 106 | ```bash 107 | ansible webservers -m service -a "name=nginx state=restarted" --become 108 | ``` 109 | 110 | --- 111 | 112 | ## **📜 3. 
Writing Ansible Playbooks (Automation Scripts)** 113 | 114 | ✅ **What is a Playbook?** 115 | A **playbook** is a YAML file that contains tasks to **automate configuration**. 116 | 117 | ### **🔹 Basic Playbook Example** 118 | 119 | ```yaml 120 | - name: Install and Start Nginx 121 | hosts: webservers 122 | become: yes # Run as sudo 123 | tasks: 124 | - name: Install Nginx 125 | apt: 126 | name: nginx 127 | state: present 128 | 129 | - name: Start Nginx 130 | service: 131 | name: nginx 132 | state: started 133 | ``` 134 | 135 | ✅ **Run the Playbook** 136 | 137 | ```bash 138 | ansible-playbook playbook.yml 139 | ``` 140 | 141 | --- 142 | 143 | ## **🔹 4. Using Variables in Ansible** 144 | 145 | ✅ **Define Variables in a Playbook** 146 | 147 | ```yaml 148 | - name: Install a Package with a Variable 149 | hosts: webservers 150 | vars: 151 | package_name: nginx 152 | tasks: 153 | - name: Install Package 154 | apt: 155 | name: "{{ package_name }}" 156 | state: present 157 | ``` 158 | 159 | ✅ **Use Built-in Ansible Facts** 160 | 161 | ```bash 162 | ansible all -m setup 163 | ``` 164 | 165 | Example Fact Usage in Playbook: 166 | 167 | ```yaml 168 | - name: Display System Information 169 | hosts: all 170 | tasks: 171 | - debug: 172 | msg: "This server is running {{ ansible_distribution }} {{ ansible_distribution_version }}" 173 | ``` 174 | 175 | --- 176 | 177 | ## **🔹 5. 
Loops & Conditionals** 178 | 179 | ✅ **Loop Example (Install Multiple Packages)** 180 | 181 | ```yaml 182 | - name: Install Multiple Packages 183 | hosts: webservers 184 | become: yes 185 | tasks: 186 | - name: Install Packages 187 | apt: 188 | name: "{{ item }}" 189 | state: present 190 | loop: 191 | - nginx 192 | - curl 193 | - unzip 194 | ``` 195 | 196 | ✅ **Conditional Execution** 197 | 198 | ```yaml 199 | - name: Restart Nginx Only If Needed 200 | hosts: webservers 201 | become: yes 202 | tasks: 203 | - name: Check if Nginx is Running 204 | shell: pgrep nginx 205 | register: nginx_running 206 | ignore_errors: yes 207 | 208 | - name: Restart Nginx 209 | service: 210 | name: nginx 211 | state: restarted 212 | when: nginx_running.rc == 0 213 | ``` 214 | 215 | --- 216 | 217 | ## **📂 6. Ansible Roles (Best Practices for Large Projects)** 218 | 219 | ✅ **Generate an Ansible Role Structure** 220 | 221 | ```bash 222 | ansible-galaxy init my_role 223 | ``` 224 | 225 | 📌 This creates a structured directory like: 226 | 227 | ```plaintext 228 | my_role/ 229 | ├── tasks/ 230 | │ └── main.yml 231 | ├── handlers/ 232 | │ └── main.yml 233 | ├── templates/ 234 | ├── files/ 235 | ├── vars/ 236 | │ └── main.yml 237 | ├── defaults/ 238 | │ └── main.yml 239 | ├── meta/ 240 | │ └── main.yml 241 | ├── README.md 242 | ``` 243 | 244 | ✅ **Use Roles in a Playbook** 245 | 246 | ```yaml 247 | - name: Deploy Web Server 248 | hosts: webservers 249 | roles: 250 | - nginx_role 251 | ``` 252 | 253 | --- 254 | 255 | ## **🔐 7. 
Ansible Vault (Encrypting Secrets)** 256 | 257 | ✅ **Create an Encrypted File** 258 | 259 | ```bash 260 | ansible-vault create secrets.yml 261 | ``` 262 | 263 | ✅ **Edit an Encrypted File** 264 | 265 | ```bash 266 | ansible-vault edit secrets.yml 267 | ``` 268 | 269 | ✅ **Use Vault in Playbooks** 270 | 271 | ```yaml 272 | - name: Deploy with Encrypted Secrets 273 | hosts: webservers 274 | vars_files: 275 | - secrets.yml 276 | tasks: 277 | - debug: 278 | msg: "The secret password is {{ secret_password }}" 279 | ``` 280 | 281 | ✅ **Run Playbook with Vault Password Prompt** 282 | 283 | ```bash 284 | ansible-playbook playbook.yml --ask-vault-pass 285 | ``` 286 | 287 | --- 288 | 289 | ## **🎯 8. Useful Ansible Commands** 290 | 291 | ✅ **Check Playbook Syntax** 292 | 293 | ```bash 294 | ansible-playbook playbook.yml --syntax-check 295 | ``` 296 | 297 | ✅ **Dry Run (Test Without Executing Changes)** 298 | 299 | ```bash 300 | ansible-playbook playbook.yml --check 301 | ``` 302 | 303 | ✅ **List All Available Modules** 304 | 305 | ```bash 306 | ansible-doc -l 307 | ``` 308 | 309 | ✅ **Get Help for a Specific Module** 310 | 311 | ```bash 312 | ansible-doc apt 313 | ``` 314 | 315 | --- 316 | 317 | ## 🎯 **Conclusion** 318 | 319 | This **Ansible Cheatsheet** provides a **step-by-step guide** from **beginner to advanced**. 
320 | 321 | 🚀 **Next Steps:** 322 | ✅ **Practice with real-world playbooks** 323 | ✅ **Use roles for better structuring** 324 | ✅ **Secure credentials with Ansible Vault** 325 | ✅ **Automate cloud infrastructure with Terraform + Ansible** 326 | 327 | 🔗 **Contribute to the Cheatsheet Collection:** [GitHub Repo](https://github.com/NotHarshhaa/devops-cheatsheet) 328 | -------------------------------------------------------------------------------- /Version-Control/Github.md: -------------------------------------------------------------------------------- 1 | # Github Cheatsheet 2 | 3 | ![text](https://imgur.com/c189VXy.png) 4 | 5 | **GitHub** is a powerful platform for version control, collaboration, CI/CD automation, and DevOps workflows. This cheatsheet provides an in-depth guide to using GitHub, covering basic operations to advanced features. 6 | 7 | --- 8 | 9 | ## 1. **Introduction to GitHub** 10 | 11 | ### What is GitHub? 12 | 13 | GitHub is a web-based platform that uses Git for version control and provides tools for: 14 | 15 | - Collaborative software development 16 | - CI/CD automation 17 | - Project management 18 | - Code review and DevOps integration 19 | 20 | ### Key Features 21 | 22 | - **Git Repositories**: Centralized code hosting with Git. 23 | - **Collaboration**: Pull requests, code reviews, and discussions. 24 | - **Actions**: Automate workflows with GitHub Actions. 25 | - **Project Management**: Boards, issues, and milestones for agile workflows. 26 | - **Security**: Dependabot alerts and code scanning for vulnerabilities. 27 | 28 | --- 29 | 30 | ## 2. **Getting Started** 31 | 32 | ### Creating an Account 33 | 34 | 1. Sign up at [GitHub](https://github.com/). 35 | 2. Create or join an organization for team collaboration. 36 | 37 | ### Adding SSH Keys 38 | 39 | 1. Generate an SSH key: 40 | 41 | ```bash 42 | ssh-keygen -t rsa -b 4096 -C "your_email@example.com" 43 | ``` 44 | 45 | 2. 
Add the key to your GitHub account: 46 | - Go to **Settings** → **SSH and GPG keys** → Add Key. 47 | 48 | ### Creating a Repository 49 | 50 | 1. Go to **Repositories** → **New**. 51 | 2. Configure repository name, description, and visibility. 52 | 3. Add a `.gitignore` file or license if needed. 53 | 54 | --- 55 | 56 | ## 3. **Basic GitHub Operations** 57 | 58 | ### Cloning a Repository 59 | 60 | ```bash 61 | git clone git@github.com:username/repository.git 62 | ``` 63 | 64 | ### Committing and Pushing Changes 65 | 66 | ```bash 67 | # Stage changes 68 | git add . 69 | # Commit changes 70 | git commit -m "Initial commit" 71 | # Push changes 72 | git push origin main 73 | ``` 74 | 75 | ### Pulling Changes 76 | 77 | ```bash 78 | git pull origin main 79 | ``` 80 | 81 | --- 82 | 83 | ## 4. **Branching and Merging** 84 | 85 | ### Creating and Switching Branches 86 | 87 | ```bash 88 | # Create a new branch 89 | git checkout -b feature-branch 90 | # Switch to an existing branch 91 | git checkout main 92 | ``` 93 | 94 | ### Pushing a Branch 95 | 96 | ```bash 97 | git push origin feature-branch 98 | ``` 99 | 100 | ### Merging Branches 101 | 102 | 1. Open a **Pull Request** on GitHub: 103 | - Navigate to the repository → **Pull Requests** → **New Pull Request**. 104 | 2. Review and merge changes. 105 | 106 | ### Deleting a Branch 107 | 108 | ```bash 109 | # Delete locally 110 | git branch -d feature-branch 111 | # Delete on remote 112 | git push origin --delete feature-branch 113 | ``` 114 | 115 | --- 116 | 117 | ## 5. **GitHub Issues and Project Boards** 118 | 119 | ### Creating an Issue 120 | 121 | 1. Go to **Issues** → **New Issue**. 122 | 2. Add title, description, and assign labels or assignees. 123 | 124 | ### Automating Project Boards 125 | 126 | - **Add Issues Automatically**: 127 | 1. Go to the project board. 128 | 2. Set up automation rules like "Add issues in progress." 
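The same board automation can also be driven from a workflow; a hedged sketch using the `actions/add-to-project` action (the project URL, token secret name, and version tag are assumptions to adapt):

```yaml
name: Add new issues to project

on:
  issues:
    types: [opened]

jobs:
  add-to-project:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/add-to-project@v1.0.2
        with:
          project-url: https://github.com/users/username/projects/1
          github-token: ${{ secrets.PROJECT_TOKEN }}
```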
129 | 130 | ### Linking Pull Requests to Issues 131 | 132 | Use keywords in PR descriptions: 133 | 134 | ```text 135 | Fixes #issue_number 136 | Closes #issue_number 137 | ``` 138 | 139 | --- 140 | 141 | ## 6. **GitHub Actions (CI/CD)** 142 | 143 | GitHub Actions is a workflow automation tool for CI/CD. 144 | 145 | ### Basics of `.github/workflows/<workflow-name>.yml` 146 | 147 | #### Example Workflow: 148 | 149 | ```yaml 150 | name: CI Pipeline 151 | 152 | on: 153 | push: 154 | branches: 155 | - main 156 | 157 | jobs: 158 | build: 159 | runs-on: ubuntu-latest 160 | steps: 161 | - name: Checkout Code 162 | uses: actions/checkout@v3 163 | - name: Install Dependencies 164 | run: npm install 165 | - name: Run Tests 166 | run: npm test 167 | ``` 168 | 169 | ### Workflow Triggers 170 | 171 | - **push**: Runs the workflow when a commit is pushed. 172 | - **pull_request**: Triggers on pull requests. 173 | - **schedule**: Triggers on a cron schedule. 174 | 175 | ### Managing Secrets 176 | 177 | 1. Go to **Settings** → **Secrets and variables** → **Actions**. 178 | 2. Add secrets such as `AWS_ACCESS_KEY_ID` or `DOCKER_PASSWORD`. 179 | 180 | ### Example with Secrets 181 | 182 | ```yaml 183 | jobs: 184 | deploy: 185 | runs-on: ubuntu-latest 186 | steps: 187 | - name: Deploy to AWS 188 | run: aws s3 sync ./build s3://my-bucket 189 | env: 190 | AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} 191 | AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} 192 | ``` 193 | 194 | --- 195 | 196 | ## 7. **GitHub Packages** 197 | 198 | ### Using GitHub as a Docker Registry 199 | 200 | 1. Authenticate (piping the token avoids leaving it in shell history): 201 | 202 | ```bash 203 | echo "$TOKEN" | docker login ghcr.io -u USERNAME --password-stdin 204 | ``` 205 | 206 | 2. Build and Push: 207 | 208 | ```bash 209 | docker build -t ghcr.io/username/image-name:tag .
210 | docker push ghcr.io/username/image-name:tag 211 | ``` 212 | 213 | ### Installing from GitHub Packages 214 | 215 | - Add dependency in `package.json` (Node.js): 216 | 217 | ```json 218 | "dependencies": { 219 | "package-name": "github:username/repository" 220 | } 221 | ``` 222 | 223 | --- 224 | 225 | ## 8. **Advanced GitHub Features** 226 | 227 | ### Protecting Branches 228 | 229 | 1. Go to **Settings** → **Branches**. 230 | 2. Enable branch protection rules (e.g., prevent force-pushes, require PR reviews). 231 | 232 | ### Code Review Automation 233 | 234 | - Use GitHub Apps like **CodeCov** or **LGTM** for automated code review. 235 | 236 | ### Dependency Management with Dependabot 237 | 238 | 1. Enable Dependabot under **Insights** → **Dependency Graph**. 239 | 2. Dependabot creates pull requests to update outdated dependencies. 240 | 241 | --- 242 | 243 | ## 9. **GitHub Security** 244 | 245 | ### Code Scanning 246 | 247 | 1. Enable **Code Scanning Alerts** under **Security**. 248 | 2. Include scanning actions in workflows: 249 | 250 | ```yaml 251 | - name: CodeQL Analysis 252 | uses: github/codeql-action/analyze@v2 253 | ``` 254 | 255 | ### Secret Scanning 256 | 257 | - GitHub scans public repositories for leaked secrets and alerts the repository owner. 258 | 259 | ### Enabling 2FA 260 | 261 | 1. Go to **Settings** → **Account Security** → Enable Two-Factor Authentication. 262 | 263 | --- 264 | 265 | ## 10. 
**GitHub CLI (gh)** 266 | 267 | ### Installing GitHub CLI 268 | 269 | ```bash 270 | brew install gh # macOS 271 | sudo apt install gh # Linux 272 | ``` 273 | 274 | ### Authenticating 275 | 276 | ```bash 277 | gh auth login 278 | ``` 279 | 280 | ### Common Commands 281 | 282 | - Clone a Repository: 283 | 284 | ```bash 285 | gh repo clone username/repository 286 | ``` 287 | 288 | - Create a Pull Request: 289 | 290 | ```bash 291 | gh pr create --title "Feature Update" --body "Details of PR" 292 | ``` 293 | 294 | - List Issues: 295 | 296 | ```bash 297 | gh issue list 298 | ``` 299 | 300 | --- 301 | 302 | ## 11. **GitHub API** 303 | 304 | ### Using the API 305 | 306 | Authenticate using a personal access token: 307 | 308 | ```bash 309 | curl -H "Authorization: token YOUR_TOKEN" https://api.github.com/user/repos 310 | ``` 311 | 312 | ### Example: Creating an Issue 313 | 314 | ```bash 315 | curl -X POST -H "Authorization: token YOUR_TOKEN" \ 316 | -H "Content-Type: application/json" \ 317 | -d '{"title": "Bug Report", "body": "Description of the bug"}' \ 318 | https://api.github.com/repos/username/repository/issues 319 | ``` 320 | 321 | --- 322 | 323 | ## 12. **GitHub Best Practices** 324 | 325 | - **Use Descriptive Commit Messages**: 326 | 327 | ```text 328 | Fix bug in login page #123 329 | ``` 330 | 331 | - **Enable Branch Protections** to enforce review processes. 332 | - **Automate Testing** using GitHub Actions for pull requests. 333 | - **Use Issues and Labels** for effective project tracking. 334 | 335 | --- 336 | 337 | ## References and Resources 338 | 339 | 1. [GitHub Documentation](https://docs.github.com/) 340 | 2. [GitHub CLI Documentation](https://cli.github.com/manual/) 341 | 3. 
[GitHub Actions Guide](https://docs.github.com/en/actions) 342 | -------------------------------------------------------------------------------- /cloud/Terraform.md: -------------------------------------------------------------------------------- 1 | # Terraform Cheatsheet 2 | 3 | ![text](https://imgur.com/FwmjyK1.png) 4 | 5 | #### **1. Introduction to Terraform** 6 | 7 | Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp. It allows you to define and provision infrastructure using a high-level configuration language. Terraform is cloud-agnostic, meaning it can manage infrastructure across various cloud providers like AWS, Azure, Google Cloud, and on-premise data centers. 8 | 9 | **Key Concepts:** 10 | 11 | - **IaC (Infrastructure as Code):** Managing and provisioning infrastructure through code rather than manual processes. 12 | - **HCL (HashiCorp Configuration Language):** The language used to write Terraform configurations. 13 | - **Providers:** Plugins that interact with APIs of cloud providers and other services. 14 | - **Resources:** The most basic building blocks of Terraform, representing infrastructure components. 15 | - **State:** Terraform keeps track of the real-world state of your infrastructure in a state file. 16 | 17 | --- 18 | 19 | #### **2. Terraform Basics** 20 | 21 | **2.1. Installing Terraform** 22 | 23 | - Terraform can be installed on various operating systems. 24 | - Download Terraform from the [official site](https://www.terraform.io/downloads.html) and add it to your system's PATH. 
25 | 26 | **Example:** 27 | 28 | ```bash 29 | # On Ubuntu 30 | sudo apt-get update && sudo apt-get install -y gnupg software-properties-common 31 | wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg 32 | echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list 33 | sudo apt update 34 | sudo apt install terraform 35 | ``` 36 | 37 | **2.2. Writing Your First Terraform Configuration** 38 | 39 | - Create a directory for your Terraform configuration files. 40 | - Define the provider and resources in a `.tf` file. 41 | 42 | **Example:** 43 | 44 | ```hcl 45 | # main.tf 46 | 47 | provider "aws" { 48 | region = "us-west-2" 49 | } 50 | 51 | resource "aws_instance" "example" { 52 | ami = "ami-0c55b159cbfafe1f0" 53 | instance_type = "t2.micro" 54 | 55 | tags = { 56 | Name = "TerraformExample" 57 | } 58 | } 59 | ``` 60 | 61 | **2.3. Initializing Terraform** 62 | 63 | - Use `terraform init` to initialize your Terraform project. This downloads the necessary provider plugins. 64 | 65 | **Example:** 66 | 67 | ```bash 68 | terraform init 69 | ``` 70 | 71 | **2.4. Planning and Applying Changes** 72 | 73 | - **terraform plan:** Generates an execution plan showing what actions Terraform will take. 74 | - **terraform apply:** Executes the actions proposed in the plan. 75 | 76 | **Example:** 77 | 78 | ```bash 79 | terraform plan 80 | terraform apply 81 | ``` 82 | 83 | **2.5. Managing Terraform State** 84 | 85 | - Terraform stores the state of your infrastructure in a file called `terraform.tfstate`. 86 | - The state file is critical for Terraform to manage your resources accurately. 87 | 88 | **Example:** 89 | 90 | ```bash 91 | # To view the current state 92 | terraform show 93 | 94 | # To refresh the state 95 | terraform refresh 96 | ``` 97 | 98 | --- 99 | 100 | #### **3. 
Intermediate Terraform** 101 | 102 | **3.1. Variables and Outputs** 103 | 104 | - **Variables:** Used to input dynamic values into Terraform configurations. 105 | - **Outputs:** Used to extract information from Terraform resources after they are created. 106 | 107 | **Example:** 108 | 109 | ```hcl 110 | # variables.tf 111 | 112 | variable "instance_type" { 113 | description = "Type of instance to create" 114 | default = "t2.micro" 115 | } 116 | 117 | # main.tf 118 | 119 | resource "aws_instance" "example" { 120 | ami = "ami-0c55b159cbfafe1f0" 121 | instance_type = var.instance_type 122 | 123 | tags = { 124 | Name = "TerraformExample" 125 | } 126 | } 127 | 128 | # outputs.tf 129 | 130 | output "instance_id" { 131 | description = "The ID of the instance" 132 | value = aws_instance.example.id 133 | } 134 | ``` 135 | 136 | **3.2. Managing Multiple Environments** 137 | 138 | - Use `terraform.workspace` to manage different environments like dev, staging, and production. 139 | 140 | **Example:** 141 | 142 | ```bash 143 | # Create a new workspace 144 | terraform workspace new dev 145 | 146 | # Switch to an existing workspace 147 | terraform workspace select dev 148 | ``` 149 | 150 | **3.3. Terraform Modules** 151 | 152 | - Modules are reusable pieces of Terraform code that group resources together. They promote reusability and maintainability. 153 | 154 | **Example:** 155 | 156 | ```hcl 157 | # Creating a module 158 | 159 | # Directory structure 160 | module/ 161 | ├── main.tf 162 | ├── variables.tf 163 | └── outputs.tf 164 | 165 | # Using a module 166 | module "vpc" { 167 | source = "./module" 168 | 169 | vpc_name = "example-vpc" 170 | } 171 | ``` 172 | 173 | --- 174 | 175 | #### **4. Advanced Terraform** 176 | 177 | **4.1. Terraform Provisioners** 178 | 179 | - Provisioners allow you to execute scripts or commands on a remote resource as part of the resource creation or destruction. 
180 | 181 | **Example:** 182 | 183 | ```hcl 184 | resource "aws_instance" "example" { 185 | ami = "ami-0c55b159cbfafe1f0" 186 | instance_type = "t2.micro" 187 | 188 | provisioner "remote-exec" { 189 | inline = [ 190 | "sudo apt-get update", 191 | "sudo apt-get install -y nginx" 192 | ] 193 | 194 | connection { 195 | type = "ssh" 196 | user = "ubuntu" 197 | private_key = file("~/.ssh/id_rsa") 198 | host = self.public_ip 199 | } 200 | } 201 | 202 | tags = { 203 | Name = "TerraformExample" 204 | } 205 | } 206 | ``` 207 | 208 | **4.2. Handling Secrets with Terraform** 209 | 210 | - Use HashiCorp Vault or AWS Secrets Manager to securely manage sensitive data like passwords and API keys in your Terraform configurations. 211 | 212 | **Example:** 213 | 214 | ```hcl 215 | provider "vault" { 216 | address = "https://vault.example.com" 217 | } 218 | 219 | data "vault_generic_secret" "example" { 220 | path = "secret/myapp" 221 | } 222 | 223 | resource "aws_db_instance" "example" { 224 | engine = "mysql" 225 | instance_class = "db.t2.micro" 226 | username = "admin" 227 | password = data.vault_generic_secret.example.data["password"] 228 | } 229 | ``` 230 | 231 | **4.3. Remote State Management** 232 | 233 | - Store Terraform state files remotely using backends like S3, Azure Blob Storage, or Google Cloud Storage. 234 | 235 | **Example:** 236 | 237 | ```hcl 238 | terraform { 239 | backend "s3" { 240 | bucket = "my-terraform-state" 241 | key = "path/to/my/key" 242 | region = "us-west-2" 243 | } 244 | } 245 | ``` 246 | 247 | **4.4. Terraform Enterprise** 248 | 249 | - Terraform Enterprise provides additional features like collaboration, policy enforcement, and enhanced security for teams. 250 | 251 | **Example:** 252 | 253 | ```hcl 254 | # Example configuration for Terraform Cloud 255 | terraform { 256 | cloud { 257 | organization = "my-org" 258 | 259 | workspaces { 260 | name = "my-workspace" 261 | } 262 | } 263 | } 264 | ``` 265 | 266 | **4.5. 
Custom Providers and Plugins** 267 | 268 | - You can write custom providers and plugins if Terraform’s built-in providers don’t meet your needs. 269 | 270 | **Example:** 271 | 272 | ```hcl 273 | # Example using a custom provider 274 | provider "custom" { 275 | # Configuration for the custom provider 276 | } 277 | 278 | resource "custom_resource" "example" { 279 | name = "example-resource" 280 | } 281 | ``` 282 | 283 | --- 284 | 285 | #### **5. Best Practices for Terraform** 286 | 287 | **5.1. Version Control** 288 | 289 | - Store your Terraform configurations in version control systems like Git. This ensures that changes are tracked and collaborative work is streamlined. 290 | 291 | **5.2. Use of Modules** 292 | 293 | - Break down complex infrastructure into modules. This enhances reusability and reduces code duplication. 294 | 295 | **5.3. State Management** 296 | 297 | - Use remote state for collaboration and ensure that state files are encrypted and secured. 298 | 299 | **5.4. Locking State** 300 | 301 | - Use state locking to prevent concurrent state modifications when using remote backends like S3 with DynamoDB. 302 | 303 | **5.5. Use `terraform validate`** 304 | 305 | - Always run `terraform validate` to check the syntax and validity of your Terraform configurations before applying them. 306 | 307 | **5.6. Avoid Hardcoding Values** 308 | 309 | - Use variables and environment-specific configurations to avoid hardcoding sensitive data or region-specific information. 310 | 311 | **5.7. Implement Proper Logging and Monitoring** 312 | 313 | - Implement logging and monitoring for your Terraform deployments to track changes, catch issues early, and maintain audit trails. 314 | 315 | **5.8. Implement Policy as Code** 316 | 317 | - Use tools like HashiCorp Sentinel or Open Policy Agent (OPA) to enforce policies on your Terraform configurations. 
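Practices 5.5 and 5.6 can be enforced in the configuration itself: since Terraform 0.13, a `variable` block may carry a `validation` rule, so a bad input fails at `terraform plan` time instead of being hardcoded or caught after apply. A minimal sketch — the variable name, allowed values, and bucket naming are illustrative:

```hcl
variable "environment" {
  description = "Deployment environment; must be one of the known stages"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "The environment must be dev, staging, or prod."
  }
}

resource "aws_s3_bucket" "example" {
  # Interpolate the validated variable instead of hardcoding a name
  bucket = "my-app-${var.environment}-assets"
}
```

Running `terraform plan -var="environment=qa"` is rejected with the error message above before any provider API is called.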
318 | -------------------------------------------------------------------------------- /Monitoring/CloudWatch.md: -------------------------------------------------------------------------------- 1 | # CloudWatch Cheatsheet 2 | 3 | ![text](https://imgur.com/BU5g7ce.png) 4 | 5 | Amazon CloudWatch is a comprehensive monitoring and management service designed for AWS and hybrid cloud applications. This guide covers everything from basic concepts to advanced configurations, helping you leverage CloudWatch for performance monitoring, troubleshooting, and operational insights. 6 | 7 | --- 8 | 9 | ## **1. Introduction to CloudWatch** 10 | 11 | ### What is CloudWatch? 12 | 13 | - Amazon CloudWatch is a monitoring and observability service for AWS resources and custom applications. 14 | - Provides actionable insights through metrics, logs, alarms, and dashboards. 15 | - Supports both infrastructure and application-level monitoring. 16 | 17 | ### Key Features: 18 | 19 | - **Metrics**: Collect and monitor key performance data. 20 | - **Logs**: Aggregate, analyze, and search logs. 21 | - **Alarms**: Set thresholds for metrics to trigger automated actions. 22 | - **Dashboards**: Visualize data in real time. 23 | - **CloudWatch Events**: Trigger actions based on changes in AWS resources. 24 | 25 | --- 26 | 27 | ## **2. CloudWatch Architecture Overview** 28 | 29 | - **Data Sources**: 30 | - AWS Services: EC2, RDS, Lambda, etc. 31 | - On-premises servers or hybrid setups using CloudWatch Agent. 32 | - **Core Components**: 33 | - **Metrics**: Quantifiable data points (e.g., CPU utilization). 34 | - **Logs**: Application and system logs. 35 | - **Alarms**: Notifications or automated responses. 36 | - **Dashboards**: Custom visualizations. 37 | - **Insights**: Advanced log analytics. 38 | 39 | --- 40 | 41 | ## **3. Setting Up CloudWatch** 42 | 43 | ### Accessing CloudWatch 44 | 45 | 1. Go to the **AWS Management Console**. 46 | 2. 
Navigate to **CloudWatch** under the **Management & Governance** section. 47 | 48 | ### CloudWatch Agent Installation 49 | 50 | To monitor custom metrics or on-premises resources: 51 | 52 | 1. Install the CloudWatch Agent on your instance: 53 | 54 | ```bash 55 | sudo yum install amazon-cloudwatch-agent 56 | ``` 57 | 58 | 2. Configure the agent: 59 | 60 | ```bash 61 | sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard 62 | ``` 63 | 64 | 3. Start the agent: 65 | 66 | ```bash 67 | sudo /opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent 68 | ``` 69 | 70 | ### Setting IAM Permissions 71 | 72 | Attach the **CloudWatchFullAccess** policy to the IAM role or user managing CloudWatch. 73 | 74 | --- 75 | 76 | ## **4. Metrics Monitoring** 77 | 78 | ### Viewing Metrics 79 | 80 | 1. In the CloudWatch console, go to **Metrics**. 81 | 2. Select a namespace (e.g., `AWS/EC2`, `AWS/Lambda`). 82 | 3. Choose metrics like `CPUUtilization`, `DiskWriteOps`, etc. 83 | 84 | ### Common Metrics: 85 | 86 | - **EC2**: 87 | - `CPUUtilization` 88 | - `DiskReadBytes` 89 | - `NetworkIn/Out` 90 | - **RDS**: 91 | - `DatabaseConnections` 92 | - `ReadIOPS` 93 | - `WriteLatency` 94 | - **Lambda**: 95 | - `Invocations` 96 | - `Duration` 97 | - `Errors` 98 | 99 | ### Custom Metrics 100 | 101 | To send custom metrics: 102 | 103 | 1. Install the AWS CLI. 104 | 2. Publish a metric: 105 | 106 | ```bash 107 | aws cloudwatch put-metric-data --namespace "CustomNamespace" --metric-name "MetricName" --value 100 108 | ``` 109 | 110 | --- 111 | 112 | ## **5. CloudWatch Logs** 113 | 114 | ### Setting Up Log Groups and Streams 115 | 116 | 1. Navigate to **Logs** in the CloudWatch console. 117 | 2. Create a **Log Group** (e.g., `/aws/lambda/my-function`). 118 | 3. Each application/service writes to a **Log Stream** under the group. 119 | 120 | ### Exporting Logs to S3 121 | 122 | 1. Go to **Logs** → Select a log group. 123 | 2. 
Click **Actions** → **Export data to Amazon S3**. 124 | 3. Configure the export with the desired time range. 125 | 126 | ### Querying Logs with CloudWatch Logs Insights 127 | 128 | 1. Navigate to **Logs Insights**. 129 | 2. Write queries for analysis: 130 | 131 | ```sql 132 | fields @timestamp, @message 133 | | filter @message like /ERROR/ 134 | | sort @timestamp desc 135 | | limit 20 136 | ``` 137 | 138 | --- 139 | 140 | ## **6. CloudWatch Alarms** 141 | 142 | ### Creating an Alarm 143 | 144 | 1. Go to **Alarms** in the CloudWatch console. 145 | 2. Click **Create Alarm**. 146 | 3. Select a metric (e.g., `CPUUtilization`). 147 | 4. Set a threshold (e.g., `> 80%` for 5 minutes). 148 | 5. Choose an action (e.g., send an SNS notification). 149 | 150 | ### Alarm States: 151 | 152 | - **OK**: Metric is within the defined threshold. 153 | - **ALARM**: Metric breaches the threshold. 154 | - **INSUFFICIENT DATA**: No data available. 155 | 156 | ### Advanced Alarm Configurations 157 | 158 | - Composite Alarms: Combine multiple alarms. 159 | - Actions: 160 | - Notify via SNS. 161 | - Trigger Lambda functions. 162 | - Stop/start EC2 instances. 163 | 164 | --- 165 | 166 | ## **7. CloudWatch Dashboards** 167 | 168 | ### Creating a Dashboard 169 | 170 | 1. Go to **Dashboards** in the CloudWatch console. 171 | 2. Click **Create Dashboard**. 172 | 3. Add widgets: 173 | - **Line** for metrics. 174 | - **Number** for single values. 175 | - **Text** for notes. 176 | 177 | ### Customizing Widgets 178 | 179 | - Choose metrics from different namespaces. 180 | - Configure time ranges and granularity. 181 | 182 | ### Example: Multi-Service Dashboard 183 | 184 | - **EC2 Metrics**: CPU, Disk, Network. 185 | - **RDS Metrics**: Connections, IOPS. 186 | - **Lambda Metrics**: Invocations, Errors. 187 | 188 | --- 189 | 190 | ## **8. CloudWatch Events (EventBridge)** 191 | 192 | ### Creating Rules 193 | 194 | 1. Navigate to **Rules** under **Events** in the CloudWatch console. 195 | 2. 
Create a rule with an event pattern (e.g., EC2 state change). 196 | 3. Add a target (e.g., SNS, Lambda, Step Functions). 197 | 198 | ### Example: Automate Instance Shutdown 199 | 200 | 1. Event Pattern: 201 | 202 | ```json 203 | { 204 | "source": ["aws.ec2"], 205 | "detail-type": ["EC2 Instance State-change Notification"], 206 | "detail": { 207 | "state": ["stopped"] 208 | } 209 | } 210 | ``` 211 | 212 | 2. Target: Send an SNS notification. 213 | 214 | --- 215 | 216 | ## **9. Advanced Configurations** 217 | 218 | ### Cross-Account Monitoring 219 | 220 | 1. Create a cross-account role with permissions to access CloudWatch in the target account. 221 | 2. Use the `CloudWatch:ListMetrics` and `CloudWatch:GetMetricData` APIs. 222 | 223 | ### Anomaly Detection 224 | 225 | Enable anomaly detection for metrics: 226 | 227 | 1. Go to **Metrics** → Select a metric. 228 | 2. Click **Actions** → **Enable anomaly detection**. 229 | 230 | ### Metric Math 231 | 232 | Perform calculations across metrics: 233 | 234 | - Example: Combine CPU utilization across instances. 235 | 236 | ```bash 237 | (m1+m2)/2 238 | ``` 239 | 240 | --- 241 | 242 | ## **10. Integration with Other Services** 243 | 244 | ### AWS Lambda 245 | 246 | - Use `console.log()` to write logs to CloudWatch. 247 | - Monitor Lambda-specific metrics like `Errors` and `Throttles`. 248 | 249 | ### ECS/EKS 250 | 251 | - Enable CloudWatch Container Insights for detailed monitoring. 252 | - Use `awslogs` driver to send container logs to CloudWatch. 253 | 254 | ### Integration with Third-Party Tools 255 | 256 | - Use **DataDog** or **Grafana** for enhanced visualization. 257 | - Integrate CloudWatch metrics into these platforms using APIs. 258 | 259 | --- 260 | 261 | ## **11. 
Security Best Practices** 262 | 263 | ### Log Retention 264 | 265 | - Set retention policies for logs to reduce costs: 266 | 267 | ```bash 268 | aws logs put-retention-policy --log-group-name "/aws/lambda/my-function" --retention-in-days 30 269 | ``` 270 | 271 | ### Fine-Grained Access Control 272 | 273 | - Use IAM policies to restrict access to specific metrics, logs, or dashboards. 274 | 275 | --- 276 | 277 | ## **12. CloudWatch Pricing** 278 | 279 | ### Pricing Model 280 | 281 | 1. **Metrics**: Charged per metric, per month. 282 | 2. **Logs**: 283 | - Ingestion: Cost per GB ingested. 284 | - Storage: Cost per GB stored. 285 | 3. **Dashboards**: Charged per dashboard, per month. 286 | 287 | ### Cost Optimization Tips 288 | 289 | - Use metric filters to limit data collection. 290 | - Set shorter retention periods for logs. 291 | 292 | --- 293 | 294 | ## **13. Best Practices** 295 | 296 | 1. **Organize Log Groups**: 297 | - Use consistent naming conventions (e.g., `/application/environment/service`). 298 | 299 | 2. **Use Alarms Wisely**: 300 | - Avoid too many alarms to prevent alert fatigue. 301 | - Use composite alarms to group related metrics. 302 | 303 | 3. **Automate Monitoring**: 304 | - Automate alert creation and dashboards using CloudFormation or Terraform. 305 | 306 | 4. **Optimize Log Storage**: 307 | - Export logs to S3 for long-term storage and analysis. 308 | 309 | 5. **Enable Anomaly Detection**: 310 | - Automate anomaly detection for critical metrics. 311 | 312 | --- 313 | 314 | ## **14. 
References and Resources** 315 | 316 | - [CloudWatch Documentation](https://docs.aws.amazon.com/cloudwatch/) 317 | - [Metric Math Syntax Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html) 318 | - [CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/) 319 | -------------------------------------------------------------------------------- /Version-Control/Bitbucket.md: -------------------------------------------------------------------------------- 1 | # BitBucket Cheatsheet 2 | 3 | ![text](https://imgur.com/7PDN0aZ.png) 4 | 5 | Bitbucket, developed by Atlassian, is a Git-based source code repository hosting service. It is designed for teams and provides strong integration with other Atlassian tools like Jira, Trello, and Confluence. This cheatsheet provides a detailed guide for mastering Bitbucket from basic operations to advanced features. 6 | 7 | --- 8 | 9 | ## **1. Introduction to Bitbucket** 10 | 11 | ### What is Bitbucket? 12 | 13 | - Bitbucket is a Git-based platform for version control, CI/CD pipelines, and project collaboration. 14 | - It supports both **private** and **public repositories**. 15 | - Known for its seamless integration with Atlassian tools (e.g., Jira) and in-built CI/CD pipelines. 16 | 17 | ### Key Features 18 | 19 | - Git repository hosting 20 | - In-built CI/CD via **Bitbucket Pipelines** 21 | - Jira integration for issue tracking 22 | - Branch permissions and code review tools 23 | - Supports Mercurial (deprecated) 24 | 25 | --- 26 | 27 | ## **2. Getting Started** 28 | 29 | ### Creating a Bitbucket Account 30 | 31 | 1. Go to [Bitbucket](https://bitbucket.org/) and sign up for an account. 32 | 2. Optionally, link your Atlassian account for better integration. 33 | 34 | ### Setting Up SSH Keys 35 | 36 | 1. Generate an SSH key: 37 | 38 | ```bash 39 | ssh-keygen -t rsa -b 4096 -C "your_email@example.com" 40 | ``` 41 | 42 | 2. 
Add the public key to Bitbucket: 43 | - Navigate to **Personal Settings** → **SSH Keys** → **Add Key**. 44 | 45 | ### Creating a Repository 46 | 47 | 1. Log in to Bitbucket. 48 | 2. Go to **Repositories** → **Create Repository**. 49 | 3. Configure: 50 | - Repository Name 51 | - Access level (Private/Public) 52 | - Repository Type (Git) 53 | 54 | --- 55 | 56 | ## **3. Basic Operations** 57 | 58 | ### Cloning a Repository 59 | 60 | ```bash 61 | git clone git@bitbucket.org:username/repository.git 62 | ``` 63 | 64 | ### Staging, Committing, and Pushing 65 | 66 | ```bash 67 | # Stage changes 68 | git add . 69 | # Commit changes 70 | git commit -m "Initial commit" 71 | # Push changes 72 | git push origin main 73 | ``` 74 | 75 | ### Pulling Changes 76 | 77 | ```bash 78 | git pull origin main 79 | ``` 80 | 81 | --- 82 | 83 | ## **4. Branching and Merging** 84 | 85 | ### Creating and Switching Branches 86 | 87 | ```bash 88 | # Create a new branch 89 | git checkout -b feature-branch 90 | # Switch to an existing branch 91 | git checkout main 92 | ``` 93 | 94 | ### Pushing a Branch 95 | 96 | ```bash 97 | git push origin feature-branch 98 | ``` 99 | 100 | ### Creating a Pull Request (PR) 101 | 102 | 1. Open Bitbucket and navigate to **Pull Requests**. 103 | 2. Click **Create Pull Request**. 104 | 3. Select branches, add reviewers, and provide a description. 105 | 106 | ### Merging Pull Requests 107 | 108 | 1. Approve the PR. 109 | 2. Merge using the options: 110 | - **Merge Commit**: Keeps all commits intact. 111 | - **Squash Merge**: Combines all commits into one. 112 | - **Rebase**: Rewrites commit history. 113 | 114 | --- 115 | 116 | ## **5. Bitbucket Pipelines (CI/CD)** 117 | 118 | ### Overview 119 | 120 | Bitbucket Pipelines is an integrated CI/CD service to automate builds, tests, and deployments. 121 | 122 | ### Enabling Pipelines 123 | 124 | 1. Go to the repository settings → **Pipelines**. 125 | 2. Enable Pipelines and configure the `bitbucket-pipelines.yml` file at the repository root (note: no leading dot).
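Pipeline definitions tend to accumulate repeated steps. Because the configuration file is plain YAML, standard YAML anchors declared under the `definitions` section let a step be written once and reused across triggers. A sketch with illustrative step names and scripts:

```yaml
definitions:
  steps:
    - step: &build-test          # reusable step, referenced below
        name: Build and Test
        image: node:14
        script:
          - npm install
          - npm test

pipelines:
  default:
    - step: *build-test          # runs on every push
  branches:
    main:
      - step: *build-test        # same step again on main...
      - step:                    # ...followed by a deploy
          name: Deploy
          script:
            - echo "Deploying..."
```

The anchor (`&build-test`) and alias (`*build-test`) are resolved by the YAML parser, so the `main` branch pipeline behaves exactly as if the step were copied inline.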
126 | 127 | ### Sample Pipeline Configuration 128 | 129 | ```yaml 130 | pipelines: 131 | default: 132 | - step: 133 | name: Build and Test 134 | image: node:14 135 | script: 136 | - npm install 137 | - npm test 138 | - step: 139 | name: Deploy to Production 140 | script: 141 | - echo "Deploying to production..." 142 | ``` 143 | 144 | ### Key Triggers 145 | 146 | - **default**: Runs on any branch when pushed. 147 | - **branches**: Customizes triggers for specific branches. 148 | - **tags**: Automates deployment for version tags. 149 | 150 | ### Variables and Secrets 151 | 152 | 1. Go to **Repository Settings** → **Pipelines** → **Environment Variables**. 153 | 2. Add sensitive variables like `AWS_ACCESS_KEY`. 154 | 155 | #### Using Variables in Pipelines 156 | 157 | ```yaml 158 | script: 159 | - echo "Using secret: $AWS_ACCESS_KEY" 160 | ``` 161 | 162 | --- 163 | 164 | ## **6. Branch Permissions and Access Control** 165 | 166 | ### Branch Permissions 167 | 168 | 1. Go to **Repository Settings** → **Branch Permissions**. 169 | 2. Add rules such as: 170 | - Prevent direct pushes to `main`. 171 | - Require at least 2 code reviews before merging. 172 | 173 | ### User Roles 174 | 175 | - **Admin**: Full control over repositories and permissions. 176 | - **Write**: Can push and pull code. 177 | - **Read**: Read-only access to repositories. 178 | 179 | --- 180 | 181 | ## **7. Integration with Jira** 182 | 183 | ### Linking a Repository to Jira 184 | 185 | 1. Go to **Repository Settings** → **Jira Settings**. 186 | 2. Connect the repository to a Jira project. 187 | 188 | ### Automating Issue Tracking 189 | 190 | - Add Jira issue keys in commit messages: 191 | 192 | ```text 193 | PROJ-123: Fix login page bug 194 | ``` 195 | 196 | - Jira automatically links commits, pull requests, and deployments. 197 | 198 | --- 199 | 200 | ## **8. Code Review and Quality** 201 | 202 | ### Using Pull Requests for Code Review 203 | 204 | 1. Assign reviewers while creating a pull request. 
205 | 2. Add comments inline to highlight issues. 206 | 207 | ### Integrating Code Quality Tools 208 | 209 | - Add tools like **SonarCloud** or **CodeClimate** to your pipelines for static code analysis. 210 | - Example: Adding SonarCloud to Bitbucket Pipelines: 211 | 212 | ```yaml 213 | - pipe: sonarsource/sonarcloud-scan:1.4.0 214 | variables: 215 | SONAR_TOKEN: $SONAR_TOKEN 216 | ``` 217 | 218 | --- 219 | 220 | ## **9. Bitbucket API** 221 | 222 | ### Authenticating 223 | 224 | Generate a personal access token: 225 | 226 | 1. Go to **Personal Settings** → **Access Management** → **Create App Password**. 227 | 228 | Use the token in API calls: 229 | 230 | ```bash 231 | curl -u username:app_password https://api.bitbucket.org/2.0/repositories 232 | ``` 233 | 234 | ### Common API Endpoints 235 | 236 | - List repositories: 237 | 238 | ```bash 239 | curl -X GET https://api.bitbucket.org/2.0/repositories/{username} 240 | ``` 241 | 242 | - Create an issue: 243 | 244 | ```bash 245 | curl -X POST -u username:app_password \ 246 | -H "Content-Type: application/json" \ 247 | -d '{"title": "Bug in Login Page", "content": {"raw": "Description"}}' \ 248 | https://api.bitbucket.org/2.0/repositories/{username}/{repo}/issues 249 | ``` 250 | 251 | --- 252 | 253 | ## **10. Advanced Features** 254 | 255 | ### Deployments with Bitbucket Pipelines 256 | 257 | Track deployment environments: 258 | 259 | 1. Go to **Deployments** → Configure environments (e.g., Dev, Staging, Prod). 
260 | 261 | Add deployment steps in `bitbucket-pipelines.yml`: 262 | 263 | ```yaml 264 | pipelines: 265 | branches: 266 | main: 267 | - step: 268 | name: Deploy to Staging 269 | deployment: staging 270 | script: 271 | - ./deploy.sh staging 272 | ``` 273 | 274 | ### Monorepo Support 275 | 276 | Host multiple services in one repository: 277 | 278 | - Use Pipelines for individual service builds: 279 | 280 | ```yaml 281 | pipelines: 282 | default: 283 | - step: 284 | name: Build Service A 285 | script: 286 | - cd services/service-a && npm install && npm test 287 | ``` 288 | 289 | ### Mirror Repositories 290 | 291 | Mirror a repository between Bitbucket and GitHub: 292 | 293 | ```bash 294 | git remote add bitbucket git@bitbucket.org:username/repo.git 295 | git push bitbucket --mirror 296 | ``` 297 | 298 | --- 299 | 300 | ## **11. Security and Best Practices** 301 | 302 | ### Enforcing Two-Factor Authentication (2FA) 303 | 304 | 1. Go to **Personal Settings** → **Security** → Enable 2FA. 305 | 306 | ### Secret Scanning 307 | 308 | Bitbucket scans for hard-coded credentials and alerts users. 309 | 310 | ### Dependency Scanning 311 | 312 | Use third-party integrations like **Snyk** to identify vulnerable dependencies. 313 | 314 | --- 315 | 316 | ## **12. Best Practices** 317 | 318 | 1. **Branch Naming Convention**: 319 | - Use prefixes like `feature/`, `bugfix/`, and `release/`. 320 | 321 | ```text 322 | feature/add-login-form 323 | bugfix/fix-authentication-error 324 | ``` 325 | 326 | 2. **Commit Messages**: 327 | - Follow a format like: 328 | 329 | ```text 330 | [PROJ-123] Fix bug in login functionality 331 | ``` 332 | 333 | - Reference Jira issues in commit messages. 334 | 335 | 3. **Automate Everything**: 336 | - Use Pipelines for CI/CD. 337 | - Automate linting, testing, and deployment. 338 | 339 | 4. **Use Pull Request Templates**: 340 | - Add `.bitbucket/pull_request_template.md` to standardize PR descriptions. 341 | 342 | --- 343 | 344 | ## **13.
References and Resources** 345 | 346 | - [Bitbucket Documentation](https://bitbucket.org/product/) 347 | - [Bitbucket API Documentation](https://developer.atlassian.com/bitbucket/api/2/reference/) 348 | - [Pipelines Guide](https://bitbucket.org/product/features/pipelines) 349 | -------------------------------------------------------------------------------- /cloud/GCP.md: -------------------------------------------------------------------------------- 1 | # GCP Cheatsheet 2 | 3 | ![text](https://imgur.com/2MpF0w5.png) 4 | 5 | **1. Introduction:** 6 | 7 | - **Google Cloud Platform (GCP)** is a suite of cloud computing services offered by Google. It provides a range of services including compute, storage, databases, machine learning, and more. 8 | 9 | **2. Core GCP Services:** 10 | 11 | - **Compute:** 12 | - **Google Compute Engine (GCE):** 13 | - Scalable virtual machines running on Google’s infrastructure. 14 | - Key Concepts: Machine Types, Images, Snapshots, Persistent Disks. 15 | - Example: 16 | 17 | ```bash 18 | gcloud compute instances create my-instance --zone=us-central1-a --machine-type=e2-medium --image-family=debian-10 --image-project=debian-cloud 19 | ``` 20 | 21 | - **Google Kubernetes Engine (GKE):** 22 | - Managed Kubernetes service for running containerized applications. 23 | - Key Concepts: Clusters, Nodes, Pods, Services, Deployments. 24 | - Example: 25 | 26 | ```bash 27 | gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 3 28 | ``` 29 | 30 | - **Cloud Functions:** 31 | - Serverless environment to execute code in response to events. 32 | - Key Concepts: Functions, Triggers, Event Sources. 33 | - Example: 34 | 35 | ```bash 36 | gcloud functions deploy my-function --runtime python39 --trigger-http --allow-unauthenticated 37 | ``` 38 | 39 | - **Storage:** 40 | - **Google Cloud Storage:** 41 | - Object storage service for storing and accessing data. 
42 | - Key Concepts: Buckets, Objects, Classes (Standard, Nearline, Coldline, Archive). 43 | - Example: 44 | 45 | ```bash 46 | gsutil mb gs://my-bucket 47 | gsutil cp my-file.txt gs://my-bucket/ 48 | ``` 49 | 50 | - **Persistent Disks:** 51 | - Durable, high-performance block storage for VM instances. 52 | - Key Concepts: Disk Types (Standard, SSD, Balanced), Snapshots, Zonal/Regional Disks. 53 | - Example: 54 | 55 | ```bash 56 | gcloud compute disks create my-disk --size=100GB --type=pd-ssd --zone=us-central1-a 57 | ``` 58 | 59 | - **Filestore:** 60 | - Fully managed file storage service for applications that require a file system interface. 61 | - Key Concepts: Instances, Tiers (Basic, High Scale, Enterprise). 62 | - Example: 63 | 64 | ```bash 65 | gcloud filestore instances create my-filestore-instance --zone=us-central1-a --tier=STANDARD --file-share=name="my-share",capacity=1TB --network=name="default" 66 | ``` 67 | 68 | - **Database:** 69 | - **Cloud SQL:** 70 | - Managed relational database service supporting MySQL, PostgreSQL, and SQL Server. 71 | - Key Concepts: Instances, Backups, Failover, Maintenance Windows. 72 | - Example: 73 | 74 | ```bash 75 | gcloud sql instances create my-instance --database-version=MYSQL_8_0 --tier=db-f1-micro --region=us-central1 76 | ``` 77 | 78 | - **Cloud Spanner:** 79 | - Scalable, globally-distributed, and strongly consistent database service. 80 | - Key Concepts: Instances, Databases, Schemas, Nodes. 81 | - Example: 82 | 83 | ```bash 84 | gcloud spanner instances create my-instance --config=regional-us-central1 --nodes=1 --description="My Spanner Instance" 85 | ``` 86 | 87 | - **Firestore:** 88 | - NoSQL document database for mobile, web, and server development. 89 | - Key Concepts: Collections, Documents, Queries, Indexes. 90 | - Example: 91 | 92 | ```bash 93 | gcloud firestore databases create --region=us-central 94 | ``` 95 | 96 | **3. 
Networking:** 97 | 98 | - **VPC (Virtual Private Cloud):** 99 | - Isolated network environments within GCP. 100 | - Key Concepts: Subnets, Routes, Firewalls, VPN, Interconnect. 101 | - Example: 102 | 103 | ```bash 104 | gcloud compute networks create my-vpc --subnet-mode=custom 105 | gcloud compute networks subnets create my-subnet --network=my-vpc --region=us-central1 --range=10.0.0.0/24 106 | ``` 107 | 108 | - **Cloud Load Balancing:** 109 | - Global load balancing service for distributing traffic across multiple instances. 110 | - Key Concepts: Frontends, Backends, URL Maps, Health Checks. 111 | - Example: 112 | 113 | ```bash 114 | gcloud compute forwarding-rules create my-rule --global --target-http-proxy=my-proxy --ports=80 115 | ``` 116 | 117 | - **Cloud DNS:** 118 | - Managed DNS service running on the same infrastructure as Google. 119 | - Key Concepts: Managed Zones, DNS Records, Policies. 120 | - Example: 121 | 122 | ```bash 123 | gcloud dns managed-zones create my-zone --dns-name="example.com." --description="My DNS zone" 124 | gcloud dns record-sets transaction start --zone=my-zone 125 | gcloud dns record-sets transaction add --zone=my-zone --name="www.example.com." --ttl=300 --type=A "1.2.3.4" 126 | gcloud dns record-sets transaction execute --zone=my-zone 127 | ``` 128 | 129 | - **Cloud CDN:** 130 | - Content delivery network for delivering web and video content globally. 131 | - Key Concepts: Backends, Cache Modes, Signed URLs. 132 | - Example: 133 | 134 | ```bash 135 | gcloud compute backend-buckets create my-backend-bucket --gcs-bucket-name=my-bucket --enable-cdn 136 | gcloud compute url-maps create my-url-map --default-backend-bucket=my-backend-bucket 137 | gcloud compute target-http-proxies create my-proxy --url-map=my-url-map 138 | ``` 139 | 140 | **4. Security and Identity:** 141 | 142 | - **Identity and Access Management (IAM):** 143 | - Manage access to resources with fine-grained control. 
144 | - Key Concepts: Roles, Permissions, Policies, Service Accounts. 145 | - Example: 146 | 147 | ```bash 148 | gcloud projects add-iam-policy-binding my-project --member="user:example@gmail.com" --role="roles/editor" 149 | ``` 150 | 151 | - **Cloud Identity:** 152 | - Identity management for users and groups across services. 153 | - Key Concepts: Directory, Groups, Security Settings, OAuth. 154 | - Example: 155 | - Managed via Google Admin Console. 156 | 157 | - **Cloud Key Management Service (KMS):** 158 | - Create, manage, and use cryptographic keys. 159 | - Key Concepts: Key Rings, Keys, Versions, Policies. 160 | - Example: 161 | 162 | ```bash 163 | gcloud kms keyrings create my-keyring --location=global 164 | gcloud kms keys create my-key --keyring=my-keyring --location=global --purpose=encryption 165 | ``` 166 | 167 | - **Cloud Security Command Center (SCC):** 168 | - Security and risk management platform for GCP. 169 | - Key Concepts: Findings, Assets, Sources, Security Health Analytics. 170 | - Example: 171 | - Managed via GCP Console. 172 | 173 | **5. Management Tools:** 174 | 175 | - **Deployment Manager:** 176 | - Infrastructure as code service for managing GCP resources. 177 | - Key Concepts: Templates, Deployments, Resources. 178 | - Example: 179 | 180 | ```bash 181 | gcloud deployment-manager deployments create my-deployment --config=config.yaml 182 | ``` 183 | 184 | - **Stackdriver (now part of Operations Suite):** 185 | - Monitoring, logging, and diagnostics tool for GCP. 186 | - Key Concepts: Metrics, Logs, Alerts, Dashboards. 187 | - Example: 188 | 189 | ```bash 190 | gcloud logging write my-log "This is a log entry" --severity=ERROR 191 | ``` 192 | 193 | - **Cloud Console:** 194 | - Web-based interface to manage GCP resources. 195 | - Key Concepts: Dashboards, Cloud Shell, Editor. 196 | 197 | - **Cloud Shell:** 198 | - Command-line interface with access to all GCP resources. 
199 | - Example: 200 | 201 | ```bash 202 | gcloud config set project my-project 203 | ``` 204 | 205 | **6. Advanced Topics:** 206 | 207 | - **Cost Management:** 208 | - Monitor and optimize your GCP costs using Billing Reports and Budgets. 209 | - Example: 210 | 211 | ```bash 212 | gcloud billing budgets create --billing-account=012345-67890A-BCDEF0 --display-name="My Budget" --budget-amount=500USD 213 | ``` 214 | 215 | - **Auto Scaling:** 216 | - Automatically adjust the number of VM instances based on demand. 217 | - Key Concepts: Instance Groups, Autoscaler, Metrics. 218 | - Example: 219 | 220 | ```bash 221 | gcloud compute instance-groups managed set-autoscaling my-group --max-num-replicas 10 --min-num-replicas 1 --target-cpu-utilization 0.6 222 | ``` 223 | 224 | - **Serverless Architectures:** 225 | - Use Cloud Functions, Cloud Run, and Pub/Sub for serverless solutions. 226 | - Key Concepts: Triggers, Events, Containers, Scaling. 227 | - Example: 228 | 229 | ```bash 230 | gcloud run deploy my-service --image=gcr.io/my-project/my-image --platform managed 231 | ``` 232 | 233 | **7. Best Practices:** 234 | 235 | - **Security:** 236 | - Use IAM policies, encrypt data, monitor with SCC, apply security best practices. 237 | 238 | - **Reliability:** 239 | - Use multiple zones/regions, set up failover, and implement backups. 240 | 241 | - **Performance Efficiency:** 242 | - Choose appropriate machine types, use caching, optimize databases. 243 | 244 | - **Cost Optimization:** 245 | - Use committed use discounts, monitor spend, and optimize resources. 246 | 247 | - **Operational Excellence:** 248 | - Automate deployments, monitor operations, and use infrastructure as code (IaC). 
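The automation point above can be made concrete with a small wrapper. The sketch below composes (but does not execute) a standardized `gcloud` command so every VM follows one naming and labeling convention; the function name, label keys, and default zone are illustrative assumptions, not from GCP documentation:

```shell
#!/usr/bin/env bash
# Sketch: build a standardized gcloud command string so every instance
# gets the same machine type and labels. Label keys and the default
# zone are placeholder conventions, not GCP requirements.
set -euo pipefail

build_instance_cmd() {
  local name="$1" env="$2" zone="${3:-us-central1-a}"
  printf 'gcloud compute instances create %s --zone=%s --machine-type=e2-medium --labels=env=%s,managed-by=script' \
    "$name" "$zone" "$env"
}

cmd=$(build_instance_cmd web-1 staging)
echo "$cmd"
```

Echoing the composed command first gives a reviewable dry run before piping it to `bash` or wiring it into CI.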
250 | -------------------------------------------------------------------------------- /Version-Control/GitLab.md: -------------------------------------------------------------------------------- 1 | # GitLab Cheatsheet 2 | 3 | ![text](https://imgur.com/QJ7J3qs.png) 4 | 5 | **GitLab** is a web-based DevOps platform that provides a robust set of tools for source code management, CI/CD, project management, and deployment automation. This cheatsheet covers everything from basic usage to advanced GitLab features. 6 | 7 | --- 8 | 9 | ## 1. **Introduction to GitLab** 10 | 11 | ### What is GitLab? 12 | 13 | GitLab is an open-source DevOps platform offering integrated tools for: 14 | 15 | - Source control (Git) 16 | - Continuous Integration/Continuous Deployment (CI/CD) 17 | - Issue tracking and project management 18 | - Container registry and DevSecOps 19 | 20 | ### Key Features 21 | 22 | - **Git Repository Management**: Handles distributed version control and code review. 23 | - **CI/CD Pipelines**: Automates testing, integration, and deployment. 24 | - **DevSecOps**: Built-in security scanning for dependencies, container images, and code. 25 | - **Container Registry**: Docker container management. 26 | 27 | --- 28 | 29 | ## 2. **Basic GitLab Setup** 30 | 31 | ### Signing Up and Creating a Project 32 | 33 | 1. **Sign up**: Visit [GitLab](https://gitlab.com/) and create an account. 34 | 2. **Create a Project**: 35 | - Go to **Projects** → **New Project**. 36 | - Choose **Blank Project**, **Import**, or **Template**. 37 | - Configure visibility (Private, Internal, or Public). 38 | 39 | ### Adding SSH Keys 40 | 41 | 1. Generate an SSH key: 42 | 43 | ```bash 44 | ssh-keygen -t rsa -b 4096 -C "your_email@example.com" 45 | ``` 46 | 47 | 2. Copy the public key: 48 | 49 | ```bash 50 | cat ~/.ssh/id_rsa.pub 51 | ``` 52 | 53 | 3. Add the key in GitLab: 54 | - Go to **User Settings** → **SSH Keys** → Paste the public key. 55 | 56 | --- 57 | 58 | ## 3. 
**GitLab Basics** 59 | 60 | ### Cloning a Repository 61 | 62 | ```bash 63 | git clone git@gitlab.com:username/projectname.git 64 | ``` 65 | 66 | ### Committing Changes 67 | 68 | ```bash 69 | # Stage files 70 | git add . 71 | # Commit files 72 | git commit -m "Initial commit" 73 | # Push changes 74 | git push origin main 75 | ``` 76 | 77 | ### Branching 78 | 79 | - Create a branch: 80 | 81 | ```bash 82 | git checkout -b feature-branch 83 | ``` 84 | 85 | - Push the branch: 86 | 87 | ```bash 88 | git push origin feature-branch 89 | ``` 90 | 91 | ### Merge Requests (MRs) 92 | 93 | 1. Go to your project on GitLab. 94 | 2. Navigate to **Merge Requests** → **New Merge Request**. 95 | 3. Select source and target branches and create an MR. 96 | 97 | --- 98 | 99 | ## 4. **Working with GitLab CI/CD** 100 | 101 | ### Basics of `.gitlab-ci.yml` 102 | 103 | The `.gitlab-ci.yml` file defines the CI/CD pipeline. 104 | 105 | #### Example File: 106 | 107 | ```yaml 108 | stages: 109 | - build 110 | - test 111 | - deploy 112 | 113 | build_job: 114 | stage: build 115 | script: 116 | - echo "Building the project" 117 | - ./build-script.sh 118 | 119 | test_job: 120 | stage: test 121 | script: 122 | - echo "Running tests" 123 | - ./test-script.sh 124 | 125 | deploy_job: 126 | stage: deploy 127 | script: 128 | - echo "Deploying to production" 129 | - ./deploy-script.sh 130 | ``` 131 | 132 | ### Pipeline Lifecycle 133 | 134 | 1. **Stages**: Define steps (e.g., `build`, `test`, `deploy`). 135 | 2. **Jobs**: Define tasks in each stage. 136 | 3. **Runners**: Execute pipeline jobs (shared or custom). 137 | 138 | ### Running a Pipeline 139 | 140 | - Push changes to a branch: 141 | 142 | ```bash 143 | git push origin branch-name 144 | ``` 145 | 146 | - Check pipelines: 147 | - Navigate to **CI/CD** → **Pipelines** in GitLab. 148 | 149 | --- 150 | 151 | ## 5. **Intermediate GitLab Features** 152 | 153 | ### GitLab Runners 154 | 155 | - Runners execute CI/CD jobs. 
156 | - **Shared Runners**: Provided by GitLab. 157 | - **Custom Runners**: Self-hosted. 158 | 159 | #### Register a Custom Runner: 160 | 161 | 1. Install GitLab Runner (add GitLab's official package repository first): 162 | 163 | ```bash 164 | sudo apt install gitlab-runner 165 | ``` 166 | 167 | 2. Register the Runner: 168 | 169 | ```bash 170 | gitlab-runner register 171 | ``` 172 | 173 | - Enter GitLab URL, registration token, executor (e.g., `shell`, `docker`), and tags. 174 | 175 | ### Managing Variables 176 | 177 | - **Set Environment Variables**: 178 | 1. Go to **Settings** → **CI/CD** → **Variables**. 179 | 2. Add variables (e.g., `AWS_ACCESS_KEY`, `DOCKER_PASSWORD`). 180 | 181 | - Use in `.gitlab-ci.yml`: 182 | 183 | ```yaml 184 | script: 185 | - echo $MY_VARIABLE 186 | ``` 187 | 188 | ### Artifacts 189 | 190 | Artifacts store job outputs. 191 | 192 | ```yaml 193 | test_job: 194 | stage: test 195 | script: 196 | - ./run-tests 197 | artifacts: 198 | paths: 199 | - test-results/ 200 | ``` 201 | 202 | --- 203 | 204 | ## 6. **Advanced GitLab Features** 205 | 206 | ### GitLab Pages 207 | 208 | Host static websites directly on GitLab. 209 | 210 | #### Example `.gitlab-ci.yml` for Pages: 211 | 212 | ```yaml 213 | pages: 214 | stage: deploy 215 | script: 216 | - mkdir .public 217 | - cp -r * .public 218 | - mv .public public 219 | artifacts: 220 | paths: 221 | - public 222 | ``` 223 | 224 | ### Container Registry 225 | 226 | - GitLab provides a built-in Docker registry for container storage. 227 | - **Push an Image**: 228 | 229 | ```bash 230 | docker build -t registry.gitlab.com/username/projectname:tag . 231 | docker login registry.gitlab.com 232 | docker push registry.gitlab.com/username/projectname:tag 233 | ``` 234 | 235 | ### GitLab Kubernetes Integration 236 | 237 | - Integrate Kubernetes clusters with GitLab for deployments. 238 | - Navigate to **Operations** → **Kubernetes** (**Infrastructure** → **Kubernetes clusters** in newer versions) to connect your cluster. 
238 | 239 | #### Deploy Using Helm: 240 | 241 | ```yaml 242 | deploy: 243 | stage: deploy 244 | script: 245 | - helm install my-app ./helm-chart 246 | ``` 247 | 248 | --- 249 | 250 | ## 7. **Security in GitLab** 251 | 252 | ### SAST (Static Application Security Testing) 253 | 254 | - Enable SAST to scan for vulnerabilities: 255 | 256 | ```yaml 257 | include: 258 | - template: Security/SAST.gitlab-ci.yml 259 | ``` 260 | 261 | ### DAST (Dynamic Application Security Testing) 262 | 263 | - Perform runtime vulnerability scans: 264 | 265 | ```yaml 266 | include: 267 | - template: Security/DAST.gitlab-ci.yml 268 | ``` 269 | 270 | ### Secret Detection 271 | 272 | - Detect hardcoded secrets: 273 | 274 | ```yaml 275 | include: 276 | - template: Security/Secret-Detection.gitlab-ci.yml 277 | ``` 278 | 279 | --- 280 | 281 | ## 8. **GitLab Monitoring and Analytics** 282 | 283 | ### Pipeline Analytics 284 | 285 | - Navigate to **Analytics** → **CI/CD** → **Pipelines** to review pipeline efficiency. 286 | 287 | ### Code Coverage 288 | 289 | - Enable coverage reports in `.gitlab-ci.yml`: 290 | 291 | ```yaml 292 | test_job: 293 | stage: test 294 | script: 295 | - ./run-tests 296 | coverage: '/Code Coverage: \d+%/' 297 | ``` 298 | 299 | ### Container Scanning 300 | 301 | - Scan Docker images for vulnerabilities: 302 | 303 | ```yaml 304 | include: 305 | - template: Security/Container-Scanning.gitlab-ci.yml 306 | ``` 307 | 308 | --- 309 | 310 | ## 9. **GitLab Backup and Recovery** 311 | 312 | ### Backing Up GitLab 313 | 314 | - For self-hosted GitLab, run: 315 | 316 | ```bash 317 | gitlab-backup create 318 | ``` 319 | 320 | - Backup includes repositories, CI/CD logs, uploads, and settings. 321 | 322 | ### Restoring GitLab 323 | 324 | - Restore a backup (pass the timestamp prefix of the backup file): 325 | 326 | ```bash 327 | gitlab-backup restore BACKUP=backup_timestamp 328 | ``` 329 | 330 | --- 331 | 332 | ## 10. 
**Troubleshooting GitLab** 333 | 334 | ### Common Errors 335 | 336 | - **Pipeline Failures**: 337 | - Check pipeline logs in **CI/CD** → **Jobs**. 338 | - **Runner Issues**: 339 | - Ensure the runner is active: `gitlab-runner status`. 340 | - **Permission Errors**: 341 | - Verify SSH key and repository access. 342 | 343 | ### Debugging CI/CD Pipelines 344 | 345 | - Add verbose logging: 346 | 347 | ```yaml 348 | script: 349 | - echo "Debugging info" 350 | - set -x 351 | - ./my-script.sh 352 | ``` 353 | 354 | --- 355 | 356 | ## 11. **GitLab Best Practices** 357 | 358 | - **Use Branching Strategies**: 359 | - Implement GitLab Flow or GitFlow for streamlined collaboration. 360 | - **Secure CI/CD Pipelines**: 361 | - Use environment variables to manage sensitive data. 362 | - **Automate Reviews**: 363 | - Use merge request templates and code owners. 364 | - **Leverage GitLab Templates**: 365 | - Use pre-built `.gitlab-ci.yml` templates to save time. 366 | - **Monitor Usage**: 367 | - Regularly check project and pipeline analytics. 368 | 369 | --- 370 | 371 | ## 12. **Useful GitLab CLI Commands** 372 | 373 | ### Basic Commands 374 | 375 | - **Login to GitLab CLI**: 376 | 377 | ```bash 378 | glab auth login 379 | ``` 380 | 381 | - **List Repositories**: 382 | 383 | ```bash 384 | glab repo list 385 | ``` 386 | 387 | - **Create an Issue**: 388 | 389 | ```bash 390 | glab issue create --title "Bug report" --description "Details here" 391 | ``` 392 | 393 | --- 394 | 395 | ## References and Resources 396 | 397 | 1. [GitLab Documentation](https://docs.gitlab.com/) 398 | 2. [GitLab CI/CD Examples](https://docs.gitlab.com/ee/ci/examples/) 399 | 3. [GitLab CLI](https://github.com/profclems/glab) 400 | -------------------------------------------------------------------------------- /cloud/AWS.md: -------------------------------------------------------------------------------- 1 | # AWS Cheatsheet 2 | 3 | ![text](https://imgur.com/DDbwilK.png) 4 | 5 | **1. 
Introduction:** 6 | 7 | - **Amazon Web Services (AWS)** is a comprehensive cloud platform offering over 200 fully featured services from data centers globally. AWS provides cloud solutions for compute, storage, databases, machine learning, security, and more. 8 | 9 | **2. Core AWS Services:** 10 | 11 | - **Compute:** 12 | - **EC2 (Elastic Compute Cloud):** 13 | - Virtual servers for running applications. 14 | - Instance types: General Purpose, Compute Optimized, Memory Optimized, etc. 15 | - Key Concepts: AMI, Instance Types, Key Pairs, Security Groups, EBS Volumes. 16 | - Example: 17 | 18 | ```bash 19 | aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --key-name MyKeyPair 20 | ``` 21 | 22 | - **Lambda:** 23 | - Serverless computing to run code without provisioning or managing servers. 24 | - Key Concepts: Functions, Event Sources, IAM Roles. 25 | - Example: 26 | 27 | ```bash 28 | aws lambda create-function --function-name my-function --runtime python3.8 --role arn:aws:iam::123456789012:role/execution_role --handler my_function.handler --zip-file fileb://my-deployment-package.zip 29 | ``` 30 | 31 | - **ECS/EKS (Elastic Container Service/Elastic Kubernetes Service):** 32 | - ECS: Fully managed container orchestration service. 33 | - EKS: Managed Kubernetes service for running Kubernetes on AWS. 34 | - Key Concepts: Clusters, Tasks, Services, Fargate. 35 | - Example: 36 | 37 | ```bash 38 | aws ecs create-cluster --cluster-name my-cluster 39 | ``` 40 | 41 | - **Storage:** 42 | - **S3 (Simple Storage Service):** 43 | - Scalable object storage service. 44 | - Key Concepts: Buckets, Objects, Storage Classes, Lifecycle Policies. 45 | - Example: 46 | 47 | ```bash 48 | aws s3 mb s3://my-bucket 49 | aws s3 cp my-file.txt s3://my-bucket/ 50 | ``` 51 | 52 | - **EBS (Elastic Block Store):** 53 | - Block storage for use with EC2 instances. 54 | - Key Concepts: Volumes, Snapshots, Volume Types (gp2, io1, st1, etc.). 
55 | - Example: 56 | 57 | ```bash 58 | aws ec2 create-volume --size 10 --region us-east-1 --availability-zone us-east-1a --volume-type gp2 59 | ``` 60 | 61 | - **Glacier:** 62 | - Long-term, secure, and durable storage for data archiving and backup. 63 | - Key Concepts: Vaults, Archives, Retrieval Policies. 64 | - Example: 65 | 66 | ```bash 67 | aws glacier create-vault --vault-name my-vault --account-id - 68 | ``` 69 | 70 | - **Database:** 71 | - **RDS (Relational Database Service):** 72 | - Managed relational database service supporting various engines (MySQL, PostgreSQL, Oracle, SQL Server, etc.). 73 | - Key Concepts: DB Instances, Snapshots, Security Groups, Multi-AZ. 74 | - Example: 75 | 76 | ```bash 77 | aws rds create-db-instance --db-instance-identifier mydbinstance --db-instance-class db.t2.micro --engine mysql --master-username admin --master-user-password password --allocated-storage 20 78 | ``` 79 | 80 | - **DynamoDB:** 81 | - Managed NoSQL database service. 82 | - Key Concepts: Tables, Items, Attributes, Primary Key, Global/Local Secondary Indexes. 83 | - Example: 84 | 85 | ```bash 86 | aws dynamodb create-table --table-name MyTable --attribute-definitions AttributeName=Id,AttributeType=N --key-schema AttributeName=Id,KeyType=HASH --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 87 | ``` 88 | 89 | - **Aurora:** 90 | - MySQL and PostgreSQL-compatible relational database built for the cloud, providing high performance and availability. 91 | - Key Concepts: Clusters, Replicas, Global Databases. 92 | - Example: 93 | 94 | ```bash 95 | aws rds create-db-cluster --db-cluster-identifier my-cluster --engine aurora-mysql --master-username admin --master-user-password password 96 | ``` 97 | 98 | **3. Networking:** 99 | 100 | - **VPC (Virtual Private Cloud):** 101 | - Isolated network environment to launch AWS resources. 102 | - Key Concepts: Subnets, Route Tables, Internet Gateways, NAT Gateways, Security Groups, NACLs. 
103 | - Example: 104 | 105 | ```bash 106 | aws ec2 create-vpc --cidr-block 10.0.0.0/16 107 | aws ec2 create-subnet --vpc-id vpc-12345678 --cidr-block 10.0.1.0/24 108 | ``` 109 | 110 | - **Route 53:** 111 | - Scalable DNS and domain name registration service. 112 | - Key Concepts: Hosted Zones, Record Sets, Health Checks, Traffic Policies. 113 | - Example: 114 | 115 | ```bash 116 | aws route53 create-hosted-zone --name example.com --caller-reference unique-string 117 | ``` 118 | 119 | - **CloudFront:** 120 | - Content delivery network (CDN) for delivering content globally with low latency. 121 | - Key Concepts: Distributions, Origins, Behaviors, Edge Locations. 122 | - Example: 123 | 124 | ```bash 125 | aws cloudfront create-distribution --origin-domain-name mybucket.s3.amazonaws.com 126 | ``` 127 | 128 | - **Elastic Load Balancing (ELB):** 129 | - Distributes incoming traffic across multiple targets, such as EC2 instances. 130 | - Key Concepts: Load Balancers (ALB, NLB, CLB), Target Groups, Listeners. 131 | - Example: 132 | 133 | ```bash 134 | aws elbv2 create-load-balancer --name my-load-balancer --subnets subnet-12345678 subnet-87654321 --security-groups sg-12345678 135 | ``` 136 | 137 | **4. Security and Identity:** 138 | 139 | - **IAM (Identity and Access Management):** 140 | - Manages users, groups, roles, and permissions. 141 | - Key Concepts: Users, Groups, Roles, Policies, MFA, Access Keys. 142 | - Example: 143 | 144 | ```bash 145 | aws iam create-user --user-name myuser 146 | aws iam attach-user-policy --user-name myuser --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess 147 | ``` 148 | 149 | - **KMS (Key Management Service):** 150 | - Managed service for creating and controlling encryption keys. 151 | - Key Concepts: CMKs (Customer Master Keys), Aliases, Grants, Key Policies. 
152 | - Example: 153 | 154 | ```bash 155 | aws kms create-key --description "My CMK" 156 | ``` 157 | 158 | - **CloudTrail:** 159 | - Tracks user activity and API usage across AWS accounts. 160 | - Key Concepts: Trails, Logs, S3 Buckets, Insights. 161 | - Example: 162 | 163 | ```bash 164 | aws cloudtrail create-trail --name MyTrail --s3-bucket-name my-bucket 165 | ``` 166 | 167 | **5. Management Tools:** 168 | 169 | - **CloudFormation:** 170 | - Infrastructure as Code service for modeling and setting up AWS resources. 171 | - Key Concepts: Templates, Stacks, Resources, Outputs, Parameters. 172 | - Example: 173 | 174 | ```bash 175 | aws cloudformation create-stack --stack-name my-stack --template-body file://template.json 176 | ``` 177 | 178 | - **CloudWatch:** 179 | - Monitoring and observability service for AWS resources and applications. 180 | - Key Concepts: Metrics, Alarms, Logs, Events, Dashboards. 181 | - Example: 182 | 183 | ```bash 184 | aws cloudwatch put-metric-alarm --alarm-name my-alarm --metric-name CPUUtilization --namespace AWS/EC2 --statistic Average --period 300 --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic 185 | ``` 186 | 187 | - **AWS Config:** 188 | - Service for assessing, auditing, and evaluating the configurations of AWS resources. 189 | - Key Concepts: Rules, Resources, Aggregators, Config Recorder. 190 | - Example: 191 | 192 | ```bash 193 | aws configservice put-configuration-recorder --configuration-recorder name=my-recorder,roleARN=arn:aws:iam::123456789012:role/my-role 194 | ``` 195 | 196 | - **Trusted Advisor:** 197 | - Provides real-time guidance to help you provision your resources following AWS best practices. 198 | - Key Concepts: Checks, Recommendations. 199 | - Example: 200 | - Access via AWS Management Console. 201 | 202 | **6. 
Advanced Topics:** 203 | 204 | - **Cost Management:** 205 | - Use AWS Cost Explorer, Budgets, and Cost & Usage Reports to monitor and optimize spending. 206 | - Example: 207 | 208 | ```bash 209 | aws ce get-cost-and-usage --time-period Start=2024-08-01,End=2024-08-31 --granularity MONTHLY --metrics "BlendedCost" 210 | ``` 211 | 212 | - **Auto Scaling:** 213 | - Automatically adjust the capacity of your resources based on demand. 214 | - Key Concepts: Auto Scaling Groups, Scaling Policies, Launch Configurations. 215 | - Example: 216 | 217 | ```bash 218 | aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg --launch-configuration-name my-lc --min-size 1 --max-size 10 --desired-capacity 2 --vpc-zone-identifier subnet-12345678 219 | ``` 220 | 221 | - **Serverless Architectures:** 222 | - Use AWS Lambda, API Gateway, and DynamoDB to build serverless applications. 223 | - Key Concepts: Functions, APIs, Tables, Events, Triggers. 224 | - Example: 225 | 226 | ```bash 227 | aws apigateway create-rest-api --name 'My API' 228 | ``` 229 | 230 | **7. Best Practices:** 231 | 232 | - **Security:** 233 | - Use IAM Roles and Policies, enable MFA, encrypt data at rest and in transit, monitor with CloudTrail, and apply the Principle of Least Privilege. 234 | 235 | - **Reliability:** 236 | - Design for failure, use multiple Availability Zones (AZs), implement backups, and set up auto-scaling. 237 | 238 | - **Performance Efficiency:** 239 | - Right-size instances, use appropriate storage classes, and leverage managed services. 240 | 241 | - **Cost Optimization:** 242 | - Use Reserved Instances (RIs), Spot Instances, and review billing regularly. 243 | 244 | - **Operational Excellence:** 245 | - Automate processes, monitor operations, and use infrastructure as code (IaC). 
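The least-privilege point above can be illustrated with a generated policy document. This is a sketch: instead of attaching the broad `AmazonS3FullAccess` managed policy (as in the IAM example earlier), it writes a policy scoped to a single placeholder bucket:

```shell
#!/usr/bin/env bash
# Sketch: emit a least-privilege IAM policy document scoped to one
# bucket. The bucket name is a placeholder; adjust the actions to the
# minimum your workload actually needs.
set -euo pipefail

bucket="my-bucket"   # placeholder
cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::${bucket}/*"
    }
  ]
}
EOF
echo "wrote policy.json for ${bucket}"
```

Attach it with `aws iam put-user-policy --user-name myuser --policy-name s3-scoped --policy-document file://policy.json` rather than granting full S3 access.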
248 | -------------------------------------------------------------------------------- /cloud/Azure.md: -------------------------------------------------------------------------------- 1 | # Azure Cheatsheet 2 | 3 | ![text](https://imgur.com/f7RWwnx.png) 4 | 5 | **1. Introduction:** 6 | 7 | - **Microsoft Azure** is a cloud computing platform offering a wide range of services, including compute, analytics, storage, and networking. 8 | 9 | **2. Core Azure Services:** 10 | 11 | - **Compute:** 12 | - **Azure Virtual Machines:** 13 | - Scalable virtual servers for running applications. 14 | - Key Concepts: VM Sizes, Resource Groups, Virtual Networks, Disks. 15 | - Example: 16 | 17 | ```bash 18 | az vm create --resource-group myResourceGroup --name myVM --image UbuntuLTS --admin-username azureuser --generate-ssh-keys 19 | ``` 20 | 21 | - **Azure Functions:** 22 | - Serverless compute service for running event-driven code. 23 | - Key Concepts: Functions, Triggers, Bindings. 24 | - Example: 25 | 26 | ```bash 27 | func init MyFunctionProj --dotnet 28 | func new --name MyHttpTrigger --template "HTTP trigger" --authlevel "anonymous" 29 | ``` 30 | 31 | - **Azure Kubernetes Service (AKS):** 32 | - Managed Kubernetes service for running containerized applications. 33 | - Key Concepts: Clusters, Nodes, Pods, Services. 34 | - Example: 35 | 36 | ```bash 37 | az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys 38 | ``` 39 | 40 | - **Storage:** 41 | - **Azure Blob Storage:** 42 | - Object storage solution for the cloud. 43 | - Key Concepts: Storage Accounts, Containers, Blobs, Access Tiers. 
44 | - Example: 45 | 46 | ```bash 47 | az storage account create --name mystorageaccount --resource-group myResourceGroup --location eastus --sku Standard_LRS 48 | az storage container create --name mycontainer --account-name mystorageaccount 49 | az storage blob upload --container-name mycontainer --file myfile.txt --name myfile.txt --account-name mystorageaccount 50 | ``` 51 | 52 | - **Azure Files:** 53 | - Managed file shares in the cloud using the SMB protocol. 54 | - Key Concepts: File Shares, Directories, Snapshots. 55 | - Example: 56 | 57 | ```bash 58 | az storage share create --name myshare --account-name mystorageaccount 59 | ``` 60 | 61 | - **Azure Disk Storage:** 62 | - High-performance disk storage for VMs. 63 | - Key Concepts: Managed Disks, Disk Types (Standard HDD, Standard SSD, Premium SSD). 64 | - Example: 65 | 66 | ```bash 67 | az disk create --resource-group myResourceGroup --name myDisk --size-gb 128 --sku Premium_LRS 68 | ``` 69 | 70 | - **Database:** 71 | - **Azure SQL Database:** 72 | - Managed relational database service. 73 | - Key Concepts: Databases, Servers, Elastic Pools, DTUs/vCores. 74 | - Example: 75 | 76 | ```bash 77 | az sql db create --resource-group myResourceGroup --server myServer --name myDatabase --service-objective S0 78 | ``` 79 | 80 | - **Cosmos DB:** 81 | - Globally distributed, multi-model database service. 82 | - Key Concepts: Databases, Containers, Partition Keys, Consistency Levels. 83 | - Example: 84 | 85 | ```bash 86 | az cosmosdb create --name myCosmosDBAccount --resource-group myResourceGroup --kind MongoDB --locations regionName=eastus 87 | ``` 88 | 89 | - **Azure Database for MySQL/PostgreSQL:** 90 | - Managed MySQL/PostgreSQL service. 91 | - Key Concepts: Servers, Databases, Backup Retention, Performance Tiers. 
92 | - Example: 93 | 94 | ```bash 95 | az mysql server create --resource-group myResourceGroup --name mydemoserver --location eastus --admin-user myadmin --admin-password mypassword --sku-name GP_Gen5_2 96 | ``` 97 | 98 | **3. Networking:** 99 | 100 | - **Azure Virtual Network (VNet):** 101 | - Provides an isolated network environment in Azure. 102 | - Key Concepts: Subnets, Network Security Groups, VPN Gateway, Peering. 103 | - Example: 104 | 105 | ```bash 106 | az network vnet create --resource-group myResourceGroup --name myVnet --address-prefix 10.0.0.0/16 --subnet-name mySubnet --subnet-prefix 10.0.1.0/24 107 | ``` 108 | 109 | - **Azure Load Balancer:** 110 | - Distributes inbound traffic across multiple VMs. 111 | - Key Concepts: Frontend IP, Backend Pools, Load Balancing Rules. 112 | - Example: 113 | 114 | ```bash 115 | az network lb create --resource-group myResourceGroup --name myLoadBalancer --frontend-ip-name myFrontEnd --backend-pool-name myBackEndPool 116 | ``` 117 | 118 | - **Azure Application Gateway:** 119 | - Web traffic load balancer for managing HTTP and HTTPS traffic. 120 | - Key Concepts: Listener, Rules, HTTP Settings, SSL Certificates. 121 | - Example: 122 | 123 | ```bash 124 | az network application-gateway create --name myAppGateway --resource-group myResourceGroup --capacity 2 --sku Standard_v2 --vnet-name myVnet --subnet mySubnet 125 | ``` 126 | 127 | - **Azure DNS:** 128 | - Hosts your DNS domains and provides name resolution using Microsoft Azure infrastructure. 129 | - Key Concepts: DNS Zones, Records, NS Records, A Records. 130 | - Example: 131 | 132 | ```bash 133 | az network dns zone create --resource-group myResourceGroup --name mydomain.com 134 | az network dns record-set a add-record --resource-group myResourceGroup --zone-name mydomain.com --record-set-name www --ipv4-address 10.0.0.4 135 | ``` 136 | 137 | **4. 
Security and Identity:** 138 | 139 | - **Azure Active Directory (AAD):** 140 | - Identity and access management service. 141 | - Key Concepts: Users, Groups, Roles, Managed Identities, Conditional Access. 142 | - Example: 143 | 144 | ```bash 145 | az ad user create --display-name "My User" --user-principal-name myuser@mydomain.com --password "P@ssw0rd!" 146 | ``` 147 | 148 | - **Azure Key Vault:** 149 | - Securely store and access secrets, keys, and certificates. 150 | - Key Concepts: Vaults, Secrets, Keys, Certificates, Access Policies. 151 | - Example: 152 | 153 | ```bash 154 | az keyvault create --name myKeyVault --resource-group myResourceGroup --location eastus 155 | az keyvault secret set --vault-name myKeyVault --name MySecret --value "MySecretValue" 156 | ``` 157 | 158 | - **Azure Security Center:** 159 | - Unified infrastructure security management system. 160 | - Key Concepts: Security Posture, Recommendations, Secure Score, Just-in-Time VM Access. 161 | - Example: 162 | 163 | ```bash 164 | az security assessment create --name myAssessment --status "Healthy" --description "This is a custom assessment." 165 | ``` 166 | 167 | - **Azure Policy:** 168 | - Enforce organizational standards and assess compliance at-scale. 169 | - Key Concepts: Definitions, Initiatives, Assignments. 170 | - Example: 171 | 172 | ```bash 173 | az policy assignment create --name myPolicyAssignment --scope /subscriptions/{subscription-id}/resourceGroups/{resource-group-name} --policy /subscriptions/{subscription-id}/providers/Microsoft.Authorization/policyDefinitions/{policyDefinitionName} 174 | ``` 175 | 176 | **5. Management Tools:** 177 | 178 | - **Azure Resource Manager (ARM):** 179 | - Azure’s deployment and management service. 180 | - Key Concepts: ARM Templates, Resources, Resource Groups, Deployments. 
181 | - Example: 182 | 183 | ```bash 184 | az group create --name myResourceGroup --location eastus 185 | az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json 186 | ``` 187 | 188 | - **Azure Monitor:** 189 | - Comprehensive monitoring service for collecting, analyzing, and acting on telemetry data. 190 | - Key Concepts: Metrics, Logs, Alerts, Application Insights, Log Analytics. 191 | - Example: 192 | 193 | ```bash 194 | az monitor alert create --name myAlert --resource-group myResourceGroup --target /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.Compute/virtualMachines/{vm-name} --condition "avg Percentage CPU > 75" 195 | ``` 196 | 197 | - **Azure Automation:** 198 | - Automate frequent, time-consuming, and error-prone cloud management tasks. 199 | - Key Concepts: Runbooks, Desired State Configuration (DSC), Hybrid Worker Groups. 200 | - Example: 201 | 202 | ```bash 203 | az automation account create --name myAutomationAccount --resource-group myResourceGroup --location eastus 204 | az automation runbook create --name myRunbook --automation-account-name myAutomationAccount --resource-group myResourceGroup --type PowerShellWorkflow 205 | ``` 206 | 207 | - **Azure Advisor:** 208 | - Personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. 209 | - Key Concepts: Recommendations, Cost, Performance, Security, High Availability. 210 | - Example: 211 | - Access via Azure Portal. 212 | 213 | **6. Advanced Topics:** 214 | 215 | - **Cost Management:** 216 | - Monitor and optimize your Azure costs using Cost Management + Billing. 
217 | - Example: 218 | 219 | ```bash 220 | az consumption budget create --amount 1000 --time-grain Monthly --start-date 2024-08-01 --end-date 2024-08-31 --name myBudget --resource-group myResourceGroup 223 | ``` 224 | 225 | - **Auto Scaling:** 226 | - Automatically adjust the number of VM instances based on demand. 227 | - Key Concepts: Scale Sets, Scaling Rules, Metrics. 228 | - Example: 229 | 230 | ```bash 231 | az vmss create --resource-group myResourceGroup --name myScaleSet --image UbuntuLTS --upgrade-policy-mode automatic --admin-username azureuser --generate-ssh-keys 232 | ``` 233 | 234 | - **Serverless Architectures:** 235 | - Utilize Azure Functions, Logic Apps, and Event Grid for serverless solutions. 236 | - Key Concepts: Triggers, Bindings, Workflows, Event Subscriptions. 237 | - Example: 238 | 239 | ```bash 240 | az eventgrid event-subscription create --name myEventSubscription --source-resource-id /subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Storage/storageAccounts/{storage-account-name} --endpoint https://myfunction.azurewebsites.net/runtime/webhooks/eventgrid?functionName=myfunction 241 | ``` 242 | 243 | **7. Best Practices:** 244 | 245 | - **Security:** 246 | - Use Azure Security Center, encrypt data, apply RBAC, monitor with Azure Monitor, and implement secure coding practices. 247 | 248 | - **Reliability:** 249 | - Use Availability Sets, Availability Zones, configure backups, and utilize disaster recovery services. 250 | 251 | - **Performance Efficiency:** 252 | - Choose appropriate VM sizes, use caching services, and optimize databases. 253 | 254 | - **Cost Optimization:** 255 | - Use Reserved Instances (RIs), monitor spend, and optimize resources. 256 | 257 | - **Operational Excellence:** 258 | - Automate deployments with ARM, monitor operations, and use infrastructure as code (IaC). 
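- **Example (ARM template sketch)**: the Operational Excellence bullet recommends infrastructure as code; a minimal `azuredeploy.json` that the `az deployment group create` command shown under Management Tools could consume looks like the sketch below. The storage account name, location, and `apiVersion` are illustrative placeholders, not values prescribed by this cheatsheet.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2022-09-01",
      "name": "mystorageaccount",
      "location": "eastus",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```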
259 | -------------------------------------------------------------------------------- /Containerization/CRI-O.md: -------------------------------------------------------------------------------- 1 | # CRI-O Cheatsheet 2 | 3 | ![text](https://imgur.com/iET0fW6.png) 4 | 5 | ## Table of Contents 6 | 7 | 1. **Introduction to CRI-O** 8 | - What is CRI-O? 9 | - Architecture Overview 10 | - Key Features 11 | 2. **Installation** 12 | - System Requirements 13 | - Installing CRI-O on Linux 14 | - Post-Installation Configuration 15 | 3. **Basic Commands** 16 | - CRI-O CLI Overview 17 | - Starting and Stopping CRI-O 18 | - Managing Containers 19 | - Viewing Logs 20 | 4. **Container Management** 21 | - Pulling Images 22 | - Running Containers 23 | - Stopping and Removing Containers 24 | - Viewing Running Containers 25 | 5. **Networking** 26 | - Default Networking Configuration 27 | - Configuring Custom Networks 28 | - Using CNI Plugins with CRI-O 29 | 6. **Storage** 30 | - Managing Container Storage 31 | - Configuring Storage Options 32 | - Persistent Storage Management 33 | 7. **Security** 34 | - Pod Security Policies (PSPs) 35 | - SELinux and CRI-O 36 | - Seccomp Profiles 37 | - AppArmor Integration 38 | 8. **Monitoring and Logging** 39 | - Integrating with Prometheus 40 | - Setting Up Log Collection 41 | - Debugging Containers 42 | 9. **Advanced Configuration** 43 | - CRI-O Configuration Files 44 | - Runtime Configuration 45 | - Resource Limits and Cgroups 46 | - Tuning for Performance 47 | 10. **Troubleshooting** 48 | - Common Issues and Fixes 49 | - Analyzing CRI-O Logs 50 | - Debugging Failed Containers 51 | 11. **Integration with Kubernetes** 52 | - Configuring CRI-O with Kubernetes 53 | - CRI-O as a Container Runtime for K8s 54 | - Multi-tenancy with CRI-O in Kubernetes 55 | 12. **Best Practices** 56 | - Security Best Practices 57 | - Performance Optimization 58 | - Efficient Resource Management 59 | 13. **FAQs** 60 | - Common Questions about CRI-O 61 | 14. 
**References** 62 | - Official Documentation 63 | - Community Resources 64 | 65 | --- 66 | 67 | ## 1. Introduction to CRI-O 68 | 69 | ### What is CRI-O? 70 | 71 | - **CRI-O** is an open-source, lightweight container runtime for Kubernetes. It is designed to provide a minimal and stable interface between Kubernetes and the container runtime, adhering to the Container Runtime Interface (CRI) specifications. 72 | 73 | ### Architecture Overview 74 | 75 | - **CRI-O** integrates directly with Kubernetes, using OCI-compatible runtimes (like runc) to handle container operations. It replaces the need for a full container engine like Docker in Kubernetes environments. 76 | 77 | ### Key Features 78 | 79 | - **Lightweight**: Minimal dependencies and a smaller footprint compared to full container engines. 80 | - **Compatibility**: Fully compliant with Kubernetes and the Open Container Initiative (OCI) specifications. 81 | - **Security**: Integrates with SELinux, AppArmor, and seccomp for enhanced security. 82 | - **Performance**: Optimized for performance with lower overhead. 83 | 84 | --- 85 | 86 | ## 2. Installation 87 | 88 | ### System Requirements 89 | 90 | - **Supported OS**: CRI-O supports various Linux distributions including Fedora, CentOS, and Ubuntu. 91 | - **Kernel Version**: Ensure that your Linux kernel is 4.19 or higher for optimal compatibility. 92 | 93 | ### Installing CRI-O on Linux 94 | 95 | - **Fedora/CentOS**: 96 | 97 | ```bash 98 | sudo dnf install -y cri-o 99 | ``` 100 | 101 | - **Ubuntu**: 102 | 103 | ```bash 104 | sudo apt-get install -y cri-o 105 | ``` 106 | 107 | ### Post-Installation Configuration 108 | 109 | - **Start and Enable CRI-O**: 110 | 111 | ```bash 112 | sudo systemctl start crio 113 | sudo systemctl enable crio 114 | ``` 115 | 116 | - **Verify Installation**: 117 | 118 | ```bash 119 | crio --version 120 | ``` 121 | 122 | --- 123 | 124 | ## 3. 
Basic Commands 125 | 126 | ### CRI-O CLI Overview 127 | 128 | - **`crio`**: The CRI-O daemon binary; the service is typically managed through systemd. 129 | - **`crictl`**: A CLI tool used to manage containers and images through CRI-O. 130 | 131 | ### Starting and Stopping CRI-O 132 | 133 | - **Start CRI-O**: 134 | 135 | ```bash 136 | sudo systemctl start crio 137 | ``` 138 | 139 | - **Stop CRI-O**: 140 | 141 | ```bash 142 | sudo systemctl stop crio 143 | ``` 144 | 145 | ### Managing Containers 146 | 147 | - **List Running Containers**: 148 | 149 | ```bash 150 | sudo crictl ps 151 | ``` 152 | 153 | - **Stop a Container**: 154 | 155 | ```bash 156 | sudo crictl stop <container-id> 157 | ``` 158 | 159 | - **Remove a Container**: 160 | 161 | ```bash 162 | sudo crictl rm <container-id> 163 | ``` 164 | 165 | ### Viewing Logs 166 | 167 | - **View CRI-O Logs**: 168 | 169 | ```bash 170 | sudo journalctl -u crio 171 | ``` 172 | 173 | --- 174 | 175 | ## 4. Container Management 176 | 177 | ### Pulling Images 178 | 179 | - **Pull an Image**: 180 | 181 | ```bash 182 | sudo crictl pull <image-name> 183 | ``` 184 | 185 | ### Running Containers 186 | 187 | - **Run a Container**: 188 | 189 | ```bash 190 | sudo crictl run <container-config.json> <pod-config.json> 191 | ``` 192 | 193 | ### Stopping and Removing Containers 194 | 195 | - **Stop a Container**: 196 | 197 | ```bash 198 | sudo crictl stop <container-id> 199 | ``` 200 | 201 | - **Remove a Container**: 202 | 203 | ```bash 204 | sudo crictl rm <container-id> 205 | ``` 206 | 207 | ### Viewing Running Containers 208 | 209 | - **List Containers**: 210 | 211 | ```bash 212 | sudo crictl ps 213 | ``` 214 | 215 | --- 216 | 217 | ## 5. Networking 218 | 219 | ### Default Networking Configuration 220 | 221 | - **Default Network**: CRI-O uses the `cni0` bridge for networking by default. 222 | 223 | ### Configuring Custom Networks 224 | 225 | - **CNI Plugins**: CRI-O can use various CNI plugins to configure custom network setups. 
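- **Example CNI configuration (sketch)**: a custom network is declared by dropping a CNI configuration file into `/etc/cni/net.d/`. The bridge configuration below is an illustrative sketch; the network name and subnet are placeholder values, not defaults guaranteed by this cheatsheet.

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{ "subnet": "10.85.0.0/16" }]
    ]
  }
}
```

By CNI convention, configuration files in that directory are read in lexicographic order, so the file name prefix (e.g., `10-`, `11-`) controls which network is picked up first.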
226 | 227 | ### Using CNI Plugins with CRI-O 228 | 229 | - **Install CNI Plugins**: 230 | 231 | ```bash 232 | sudo dnf install -y containernetworking-plugins 233 | ``` 234 | 235 | - **Configure Plugin**: Add your CNI plugin configuration in `/etc/cni/net.d/`. 236 | 237 | --- 238 | 239 | ## 6. Storage 240 | 241 | ### Managing Container Storage 242 | 243 | - **Default Storage**: CRI-O uses the `overlay` storage driver by default. 244 | 245 | ### Configuring Storage Options 246 | 247 | - **Modify Storage Driver**: Edit `/etc/containers/storage.conf` to change the storage driver. 248 | 249 | ### Persistent Storage Management 250 | 251 | - **Mount Volumes**: Use the `--mount` option to attach persistent storage volumes to containers. 252 | 253 | --- 254 | 255 | ## 7. Security 256 | 257 | ### Pod Security Policies (PSPs) 258 | 259 | - **Enable PSPs**: Configure PSPs in Kubernetes to apply security restrictions on CRI-O-managed containers. (Note: PSPs were deprecated in Kubernetes 1.21 and removed in 1.25; prefer Pod Security Admission on newer clusters.) 260 | 261 | ### SELinux and CRI-O 262 | 263 | - **SELinux Enforcement**: Ensure SELinux is enabled on the host system for better security. 264 | 265 | ### Seccomp Profiles 266 | 267 | - **Enable Seccomp**: CRI-O supports seccomp profiles to restrict system calls for containers. 268 | 269 | ### AppArmor Integration 270 | 271 | - **AppArmor Profiles**: Apply AppArmor profiles for CRI-O containers to enforce security policies. 272 | 273 | --- 274 | 275 | ## 8. Monitoring and Logging 276 | 277 | ### Integrating with Prometheus 278 | 279 | - **Prometheus Metrics**: CRI-O exposes metrics that can be scraped by Prometheus for monitoring. 280 | 281 | ### Setting Up Log Collection 282 | 283 | - **Log Rotation**: Configure log rotation in `/etc/crio/crio.conf` to manage container logs. 284 | 285 | ### Debugging Containers 286 | 287 | - **Container Logs**: 288 | 289 | ```bash 290 | sudo crictl logs <container-id> 291 | ``` 292 | 293 | --- 294 | 295 | ## 9. 
Advanced Configuration 296 | 297 | ### CRI-O Configuration Files 298 | 299 | - **Main Configuration File**: `/etc/crio/crio.conf` 300 | - **Modify Configurations**: Adjust settings for runtime, networking, and storage. 301 | 302 | ### Runtime Configuration 303 | 304 | - **Specify Runtime**: Use the `[crio.runtime]` section in `crio.conf` to set the container runtime (e.g., runc, kata). 305 | 306 | ### Resource Limits and Cgroups 307 | 308 | - **Set Resource Limits**: Define CPU and memory limits in the container configuration. 309 | 310 | ### Tuning for Performance 311 | 312 | - **Adjust Parameters**: Modify parameters like `pids_limit` and `log_size_max` in `crio.conf` for performance tuning. 313 | 314 | --- 315 | 316 | ## 10. Troubleshooting 317 | 318 | ### Common Issues and Fixes 319 | 320 | - **Containers Not Starting**: Check logs for errors related to runtime or configuration issues. 321 | - **Networking Issues**: Verify CNI plugin configurations and network settings. 322 | 323 | ### Analyzing CRI-O Logs 324 | 325 | - **View Logs**: 326 | 327 | ```bash 328 | sudo journalctl -u crio 329 | ``` 330 | 331 | ### Debugging Failed Containers 332 | 333 | - **Check Exit Code**: 334 | 335 | ```bash 336 | sudo crictl inspect <container-id> 337 | ``` 338 | 339 | --- 340 | 341 | ## 11. Integration with Kubernetes 342 | 343 | ### Configuring CRI-O with Kubernetes 344 | 345 | - **Set CRI-O as the Default Runtime**: Modify Kubernetes configuration to use CRI-O as the default container runtime. 346 | 347 | ### CRI-O as a Container Runtime for K8s 348 | 349 | - **Installation**: Ensure CRI-O is installed and configured on all Kubernetes nodes. 350 | 351 | ### Multi-tenancy with CRI-O in Kubernetes 352 | 353 | - **Namespace Isolation**: Use Kubernetes namespaces and CRI-O security features to ensure tenant isolation. 354 | 355 | --- 356 | 357 | ## 12. Best Practices 358 | 359 | ### Security Best Practices 360 | 361 | - **Use SELinux**: Enable SELinux for all nodes running CRI-O. 
362 | - **Limit Resource Usage**: Define CPU and memory limits to prevent resource exhaustion. 363 | 364 | ### Performance Optimization 365 | 366 | - **Tune Runtime**: Adjust runtime parameters for high-performance workloads. 367 | - **Log Management**: Set up proper log rotation to prevent disk space exhaustion. 368 | 369 | ### Efficient Resource Management 370 | 371 | - **Resource Limits**: Apply resource limits to containers to optimize cluster resource usage. 372 | 373 | --- 374 | 375 | ## 13. FAQs 376 | 377 | ### Common Questions about CRI-O 378 | 379 | - **Q**: How does CRI-O differ from Docker? 382 | **A**: CRI-O is a lightweight container runtime designed specifically for Kubernetes, whereas Docker is a full-featured container platform. 383 | 384 | - **Q**: Can CRI-O run standalone without Kubernetes? 385 | **A**: CRI-O is designed to run within Kubernetes environments, but it can also be used with tools like `crictl` for standalone operations. 386 | 387 | --- 388 | 389 | ## 14. References 390 | 391 | ### Official Documentation 392 | 393 | - [CRI-O GitHub Repository](https://github.com/cri-o/cri-o) 394 | - [CRI-O Documentation](https://crio.readthedocs.io/) 395 | 396 | ### Community Resources 397 | 398 | - [Kubernetes CRI-O Integration Guide](https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cri-o) 399 | -------------------------------------------------------------------------------- /cloud/Kubernetes-on-AWS.md: -------------------------------------------------------------------------------- 1 | # Kubernetes on AWS Cheatsheet 2 | 3 | ![text](https://imgur.com/lWOk4cE.png) 4 | 5 | ## 1. Introduction to Kubernetes on AWS 6 | 7 | ### What is Kubernetes? 8 | 9 | - **Kubernetes** is an open-source platform for automating containerized application deployment, scaling, and management. 
10 | 11 | ### Kubernetes on AWS 12 | 13 | - AWS offers managed Kubernetes services through **Amazon EKS** (Elastic Kubernetes Service), which simplifies the process of running Kubernetes clusters on AWS infrastructure. 14 | 15 | --- 16 | 17 | ## 2. Setting Up Kubernetes on AWS 18 | 19 | ### Amazon EKS Overview 20 | 21 | - **Amazon EKS** is a managed service that simplifies running Kubernetes on AWS by handling the control plane and providing integration with AWS services. 22 | 23 | ### Creating an EKS Cluster 24 | 25 | - **Using AWS Management Console**: 26 | 1. Go to the Amazon EKS console. 27 | 2. Click **Create cluster**. 28 | 3. Follow the wizard to configure cluster settings, including VPC, subnets, and IAM roles. 29 | 30 | - **Using AWS CLI**: 31 | 32 | ```bash 33 | aws eks create-cluster --name my-cluster --role-arn arn:aws:iam::123456789012:role/EKS-Cluster-Role --resources-vpc-config subnetIds=subnet-0bb1c79de4EXAMPLE,subnet-0bb1c79de4EXAMPLE 34 | ``` 35 | 36 | ### Configuring kubectl 37 | 38 | - **Update kubeconfig**: 39 | 40 | ```bash 41 | aws eks update-kubeconfig --name my-cluster 42 | ``` 43 | 44 | --- 45 | 46 | ## 3. EKS Cluster Configuration 47 | 48 | ### Node Groups 49 | 50 | - **Create Node Group Using Console**: 51 | 1. Go to the Amazon EKS console. 52 | 2. Select your cluster. 53 | 3. Navigate to the **Compute** tab and click **Add Node Group**. 54 | 4. Configure settings such as instance types, scaling options, and IAM roles. 55 | 56 | - **Create Node Group Using AWS CLI**: 57 | 58 | ```bash 59 | aws eks create-nodegroup --cluster-name my-cluster --nodegroup-name my-node-group --scaling-config minSize=1,maxSize=3,desiredSize=2 --disk-size 20 --subnets subnet-0bb1c79de4EXAMPLE,subnet-0bb1c79de4EXAMPLE --instance-types t3.medium --node-role arn:aws:iam::123456789012:role/EKS-Node-Role 60 | ``` 61 | 62 | ### IAM Roles and Policies 63 | 64 | - **Create IAM Roles**: 65 | - **EKS Cluster Role**: Grants EKS permissions to interact with AWS services. 
66 | - **Node Instance Role**: Grants permissions for the worker nodes. 67 | 68 | - **Attach Policies**: 69 | - **AmazonEKSClusterPolicy** 70 | - **AmazonEKSWorkerNodePolicy** 71 | - **AmazonEC2ContainerRegistryReadOnly** 72 | 73 | --- 74 | 75 | ## 4. Networking 76 | 77 | ### VPC and Subnet Configuration 78 | 79 | - **Create VPC**: 80 | 81 | ```bash 82 | aws ec2 create-vpc --cidr-block 10.0.0.0/16 83 | ``` 84 | 85 | - **Create Subnets**: 86 | 87 | ```bash 88 | aws ec2 create-subnet --vpc-id vpc-0bb1c79de4EXAMPLE --cidr-block 10.0.1.0/24 --availability-zone us-west-2a 89 | ``` 90 | 91 | - **Configure Security Groups**: 92 | - Allow inbound traffic on port 443 (Kubernetes API server). 93 | - Allow outbound traffic for node communication. 94 | 95 | ### Cluster Networking 96 | 97 | - **Use Amazon VPC CNI Plugin**: 98 | - Ensures that Kubernetes pods get IP addresses from the VPC network. 99 | 100 | ```bash 101 | kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.12/aws-k8s-cni.yaml 102 | ``` 103 | 104 | --- 105 | 106 | ## 5. 
Deploying Applications 107 | 108 | ### Deploying with kubectl 109 | 110 | - **Create a Deployment**: 111 | 112 | ```yaml 113 | apiVersion: apps/v1 114 | kind: Deployment 115 | metadata: 116 | name: my-deployment 117 | spec: 118 | replicas: 2 119 | selector: 120 | matchLabels: 121 | app: my-app 122 | template: 123 | metadata: 124 | labels: 125 | app: my-app 126 | spec: 127 | containers: 128 | - name: my-container 129 | image: my-image:latest 130 | ports: 131 | - containerPort: 80 132 | ``` 133 | 134 | - **Apply the Deployment**: 135 | 136 | ```bash 137 | kubectl apply -f deployment.yaml 138 | ``` 139 | 140 | ### Managing Services 141 | 142 | - **Create a Service**: 143 | 144 | ```yaml 145 | apiVersion: v1 146 | kind: Service 147 | metadata: 148 | name: my-service 149 | spec: 150 | selector: 151 | app: my-app 152 | ports: 153 | - protocol: TCP 154 | port: 80 155 | targetPort: 80 156 | type: LoadBalancer 157 | ``` 158 | 159 | - **Apply the Service**: 160 | 161 | ```bash 162 | kubectl apply -f service.yaml 163 | ``` 164 | 165 | ### Ingress Controllers 166 | 167 | - **Install NGINX Ingress Controller**: 168 | 169 | ```bash 170 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/aws/deploy.yaml 171 | ``` 172 | 173 | - **Create an Ingress Resource**: 174 | 175 | ```yaml 176 | apiVersion: networking.k8s.io/v1 177 | kind: Ingress 178 | metadata: 179 | name: my-ingress 180 | spec: 181 | rules: 182 | - host: myapp.example.com 183 | http: 184 | paths: 185 | - path: / 186 | pathType: Prefix 187 | backend: 188 | service: 189 | name: my-service 190 | port: 191 | number: 80 192 | ``` 193 | 194 | --- 195 | 196 | ## 6. 
Storage 197 | 198 | ### EBS Volumes 199 | 200 | - **Create and Attach an EBS Volume**: 201 | - **Create Volume**: 202 | 203 | ```bash 204 | aws ec2 create-volume --size 10 --availability-zone us-west-2a --volume-type gp2 205 | ``` 206 | 207 | - **Attach Volume**: 208 | 209 | ```bash 210 | aws ec2 attach-volume --volume-id vol-0bb1c79de4EXAMPLE --instance-id i-0bb1c79de4EXAMPLE --device /dev/xvdf 211 | ``` 212 | 213 | ### Persistent Volumes and Claims 214 | 215 | - **Create a Persistent Volume**: 216 | 217 | ```yaml 218 | apiVersion: v1 219 | kind: PersistentVolume 220 | metadata: 221 | name: my-pv 222 | spec: 223 | capacity: 224 | storage: 10Gi 225 | accessModes: 226 | - ReadWriteOnce 227 | hostPath: 228 | path: /mnt/data 229 | ``` 230 | 231 | - **Create a Persistent Volume Claim**: 232 | 233 | ```yaml 234 | apiVersion: v1 235 | kind: PersistentVolumeClaim 236 | metadata: 237 | name: my-pvc 238 | spec: 239 | accessModes: 240 | - ReadWriteOnce 241 | resources: 242 | requests: 243 | storage: 10Gi 244 | ``` 245 | 246 | --- 247 | 248 | ## 7. Monitoring and Logging 249 | 250 | ### CloudWatch Integration 251 | 252 | - **Install CloudWatch Agent**: 253 | 254 | ```bash 255 | kubectl apply -f https://s3.amazonaws.com/amazoncloudwatch-agent-kubernetes/amazon-cloudwatch-agent.yaml 256 | ``` 257 | 258 | - **Configure CloudWatch Logs**: 259 | - Create log groups and streams in CloudWatch. 260 | - Set up IAM roles to allow Kubernetes to push logs to CloudWatch. 
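- **Example IAM policy (sketch)**: the log-shipping permissions mentioned above can be granted with a minimal policy along these lines. The wildcard resource ARN is illustrative; scope it to specific log groups in practice.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```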
261 | 262 | ### Prometheus and Grafana 263 | 264 | - **Install Prometheus**: 265 | 266 | ```bash 267 | kubectl create namespace monitoring 268 | kubectl apply -f https://github.com/prometheus/prometheus/releases/download/v2.26.0/prometheus-2.26.0.yaml 269 | ``` 270 | 271 | - **Install Grafana**: 272 | 273 | ```bash 274 | kubectl apply -f https://raw.githubusercontent.com/grafana/grafana/main/deploy/kubernetes/grafana-deployment.yaml 275 | ``` 276 | 277 | - **Configure Prometheus and Grafana**: 278 | - Set up Prometheus as a data source in Grafana. 279 | - Import pre-built dashboards or create custom ones. 280 | 281 | --- 282 | 283 | ## 8. Security 284 | 285 | ### IAM Roles for Service Accounts 286 | 287 | - **Create IAM Role for Service Account**: 288 | 289 | ```bash 290 | aws iam create-role --role-name my-k8s-role --assume-role-policy-document file://trust-policy.json 291 | ``` 292 | 293 | - **Attach Policies**: 294 | 295 | ```bash 296 | aws iam attach-role-policy --role-name my-k8s-role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess 297 | ``` 298 | 299 | - **Associate IAM Role with Kubernetes Service Account**: 300 | 301 | ```yaml 302 | apiVersion: v1 303 | kind: ServiceAccount 304 | metadata: 305 | name: my-service-account 306 | annotations: 307 | eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-k8s-role 308 | ``` 309 | 310 | ### Network Policies 311 | 312 | - **Create a Network Policy**: 313 | 314 | ```yaml 315 | apiVersion: networking.k8s.io/v1 316 | kind: NetworkPolicy 317 | metadata: 318 | name: allow-front-end 319 | spec: 320 | podSelector: 321 | matchLabels: 322 | role: front-end 323 | ingress: 324 | - from: 325 | - podSelector: 326 | matchLabels: 327 | role: back-end 328 | ``` 329 | 330 | ### Secrets Management 331 | 332 | - **Create a Kubernetes Secret**: 333 | 334 | ```bash 335 | kubectl create secret generic my-secret --from-literal=password=my-password 336 | ``` 337 | 338 | - **Access Secret in Pods**: 339 | 340 | ```yaml 
341 | apiVersion: v1 342 | kind: Pod 343 | metadata: 344 | name: my-pod 345 | spec: 346 | containers: 347 | - name: my-container 348 | image: my-image 349 | env: 350 | - name: MY_SECRET 353 | valueFrom: 354 | secretKeyRef: 355 | name: my-secret 356 | key: password 357 | 358 | ``` 359 | 360 | --- 361 | 362 | ## 9. Auto-scaling and Load Balancing 363 | 364 | ### Horizontal Pod Autoscaler 365 | - **Create Horizontal Pod Autoscaler**: 366 | ```bash 367 | kubectl autoscale deployment my-deployment --cpu-percent=50 --min=1 --max=10 368 | ``` 369 | 370 | ### Cluster Autoscaler 371 | 372 | - **Install Cluster Autoscaler**: 373 | 374 | ```bash 375 | kubectl apply -f https://github.com/kubernetes/autoscaler/releases/download/<version>/cluster-autoscaler-v<version>.yaml 376 | ``` 377 | 378 | ### ELB Integration 379 | 380 | - **Configure ELB for Load Balancing**: 381 | - Ensure the service type is `LoadBalancer`. 382 | 383 | ```yaml 384 | apiVersion: v1 385 | kind: Service 386 | metadata: 387 | name: my-service 388 | spec: 389 | type: LoadBalancer 390 | selector: 391 | app: my-app 392 | ports: 393 | - protocol: TCP 394 | port: 80 395 | targetPort: 80 396 | ``` 397 | 398 | --- 399 | 400 | ## 10. Backup and Recovery 401 | 402 | ### EBS Snapshots 403 | 404 | - **Create Snapshot**: 405 | 406 | ```bash 407 | aws ec2 create-snapshot --volume-id vol-0bb1c79de4EXAMPLE --description "My snapshot" 408 | ``` 409 | 410 | - **Restore from Snapshot**: 411 | 412 | ```bash 413 | aws ec2 create-volume --snapshot-id snap-0bb1c79de4EXAMPLE --availability-zone us-west-2a 414 | ``` 415 | 416 | ### Backup Strategies 417 | 418 | - **Use Velero for Backup and Restore**: 419 | - **Install Velero**: 420 | 421 | ```bash 422 | velero install --provider aws --bucket <bucket-name> --secret-file <credentials-file> --backup-location-config region=<region> 423 | ``` 424 | 425 | - **Create a Backup**: 426 | 427 | ```bash 428 | velero backup create my-backup --include-namespaces my-namespace 429 | ``` 430 | 431 | --- 432 | 433 | ## 11. 
Upgrades and Maintenance 434 | 435 | ### Upgrading EKS Clusters 436 | 437 | - **Upgrade Control Plane**: 438 | - **Using Console**: Select your cluster and choose to upgrade. 439 | - **Using CLI**: 440 | 441 | ```bash 442 | aws eks update-cluster-version --name my-cluster --kubernetes-version 1.21 443 | ``` 444 | 445 | ### Upgrading Node Groups 446 | 447 | - **Update Node Groups**: 448 | 449 | ```bash 450 | aws eks update-nodegroup-version --cluster-name my-cluster --nodegroup-name my-node-group --release-version 1.21 451 | ``` 452 | 453 | ### Regular Maintenance 454 | 455 | - **Monitor Cluster Health**: Use AWS CloudWatch and Prometheus for monitoring. 456 | - **Check for Vulnerabilities**: Regularly scan images and clusters for security vulnerabilities. 457 | 458 | --- 459 | 460 | ## 12. References 461 | 462 | ### Official Documentation 463 | 464 | - [Amazon EKS Documentation](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) 465 | - [Kubernetes Documentation](https://kubernetes.io/docs/) 466 | 467 | ### Tools and Resources 468 | 469 | - [AWS CLI Documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) 470 | - [Kubectl Documentation](https://kubernetes.io/docs/reference/kubectl/) 471 | -------------------------------------------------------------------------------- /Containerization/Docker.md: -------------------------------------------------------------------------------- 1 | # Docker Cheatsheet 2 | 3 | ![text](https://imgur.com/XHwJp6U.png) 4 | 5 | ## Checkout detailed article on [Dev.to](https://dev.to/prodevopsguytech/docker-commands-from-beginner-to-advanced-for-devops-engineers-bb3) 6 | 7 | ## 1. Introduction to Docker 8 | 9 | ### What is Docker? 10 | 11 | - **Docker** is an open-source platform that automates the deployment, scaling, and management of applications by using containerization technology. 
Containers are lightweight, portable, and consistent environments that contain everything needed to run a piece of software, including the code, runtime, system tools, libraries, and settings. 12 | 13 | ### Key Concepts 14 | 15 | - **Docker Engine**: The core component of Docker, responsible for running containers. 16 | - **Image**: A lightweight, standalone, and executable software package that includes everything needed to run an application. 17 | - **Container**: A runtime instance of a Docker image that shares the host system's kernel. 18 | - **Dockerfile**: A script containing a series of commands to assemble a Docker image. 19 | - **Registry**: A storage and distribution system for Docker images, such as Docker Hub. 20 | - **Docker Compose**: A tool for defining and running multi-container Docker applications using a YAML file. 21 | 22 | --- 23 | 24 | ## 2. Installing Docker 25 | 26 | ### Install Docker on Linux 27 | 28 | - **Install Docker Engine**: 29 | 30 | ```bash 31 | sudo apt-get update 32 | sudo apt-get install docker-ce docker-ce-cli containerd.io 33 | ``` 34 | 35 | - **Start Docker Service**: 36 | 37 | ```bash 38 | sudo systemctl start docker 39 | sudo systemctl enable docker 40 | ``` 41 | 42 | ### Install Docker on macOS 43 | 44 | - **Install Docker Desktop**: 45 | - Download and install Docker Desktop from [Docker's official website](https://www.docker.com/products/docker-desktop). 46 | 47 | ### Install Docker on Windows 48 | 49 | - **Install Docker Desktop**: 50 | - Download and install Docker Desktop from [Docker's official website](https://www.docker.com/products/docker-desktop). 51 | 52 | --- 53 | 54 | ## 3. 
Basic Docker Operations 55 | 56 | ### Working with Docker Images 57 | 58 | - **Search for an Image**: 59 | 60 | ```bash 61 | docker search nginx 62 | ``` 63 | 64 | - **Pull an Image from Docker Hub**: 65 | 66 | ```bash 67 | docker pull nginx 68 | ``` 69 | 70 | - **List All Images**: 71 | 72 | ```bash 73 | docker images 74 | ``` 75 | 76 | - **Remove an Image**: 77 | 78 | ```bash 79 | docker rmi nginx 80 | ``` 81 | 82 | ### Working with Docker Containers 83 | 84 | - **Run a Container**: 85 | 86 | ```bash 87 | docker run -d -p 80:80 --name mynginx nginx 88 | ``` 89 | 90 | - **List Running Containers**: 91 | 92 | ```bash 93 | docker ps 94 | ``` 95 | 96 | - **List All Containers (including stopped)**: 97 | 98 | ```bash 99 | docker ps -a 100 | ``` 101 | 102 | - **Stop a Running Container**: 103 | 104 | ```bash 105 | docker stop mynginx 106 | ``` 107 | 108 | - **Remove a Container**: 109 | 110 | ```bash 111 | docker rm mynginx 112 | ``` 113 | 114 | ### Docker Networks 115 | 116 | - **List All Networks**: 117 | 118 | ```bash 119 | docker network ls 120 | ``` 121 | 122 | - **Create a New Network**: 123 | 124 | ```bash 125 | docker network create mynetwork 126 | ``` 127 | 128 | - **Connect a Container to a Network**: 129 | 130 | ```bash 131 | docker network connect mynetwork mynginx 132 | ``` 133 | 134 | - **Disconnect a Container from a Network**: 135 | 136 | ```bash 137 | docker network disconnect mynetwork mynginx 138 | ``` 139 | 140 | --- 141 | 142 | ## 4. Building Docker Images 143 | 144 | ### Dockerfile Basics 145 | 146 | - **Sample Dockerfile**: 147 | 148 | ```Dockerfile 149 | # Use an official Node.js runtime as a parent image 150 | FROM node:14 151 | 152 | # Set the working directory in the container 153 | WORKDIR /app 154 | 155 | # Copy the current directory contents into the container at /app 156 | COPY . 
/app 157 | 158 | # Install any needed packages specified in package.json 159 | RUN npm install 160 | 161 | # Make port 8080 available to the world outside this container 162 | EXPOSE 8080 163 | 164 | # Define environment variable 165 | ENV NODE_ENV production 166 | 167 | # Run app.js using node 168 | CMD ["node", "app.js"] 169 | ``` 170 | 171 | ### Building an Image from a Dockerfile 172 | 173 | - **Build the Image**: 174 | 175 | ```bash 176 | docker build -t mynodeapp . 177 | ``` 178 | 179 | ### Managing Image Tags 180 | 181 | - **Tag an Image**: 182 | 183 | ```bash 184 | docker tag mynodeapp myrepo/mynodeapp:v1.0 185 | ``` 186 | 187 | - **Push an Image to Docker Hub**: 188 | 189 | ```bash 190 | docker push myrepo/mynodeapp:v1.0 191 | ``` 192 | 193 | --- 194 | 195 | ## 5. Docker Compose 196 | 197 | ### Introduction to Docker Compose 198 | 199 | - **Docker Compose** is a tool for defining and running multi-container Docker applications. You use a YAML file to configure your application's services, and then use a single command to create and start all the services. 200 | 201 | ### Sample `docker-compose.yml` File 202 | 203 | ```yaml 204 | version: '3' 205 | services: 206 | web: 207 | image: nginx 208 | ports: 209 | - "8080:80" 210 | db: 211 | image: mysql:5.7 212 | environment: 213 | MYSQL_ROOT_PASSWORD: example 214 | ``` 215 | 216 | ### Docker Compose Commands 217 | 218 | - **Start Services**: 219 | 220 | ```bash 221 | docker-compose up 222 | ``` 223 | 224 | - **Stop Services**: 225 | 226 | ```bash 227 | docker-compose down 228 | ``` 229 | 230 | - **Scale Services** (note: with a fixed host port such as `8080:80`, only one `web` replica can bind it; remove the host port or use a port range before scaling): 231 | 232 | ```bash 233 | docker-compose up --scale web=3 234 | ``` 235 | 236 | ### Managing Volumes with Docker Compose 237 | 238 | - **Defining Volumes**: 239 | 240 | ```yaml 241 | services: 242 | web: 243 | image: nginx 244 | volumes: 245 | - ./webdata:/usr/share/nginx/html 246 | ``` 247 | 248 | --- 249 | 250 | ## 6. 
Docker Volumes and Storage 251 | 252 | ### Understanding Docker Volumes 253 | 254 | - **Volumes** are the preferred mechanism for persisting data generated and used by Docker containers. 255 | 256 | ### Managing Volumes 257 | 258 | - **Create a Volume**: 259 | 260 | ```bash 261 | docker volume create myvolume 262 | ``` 263 | 264 | - **List All Volumes**: 265 | 266 | ```bash 267 | docker volume ls 268 | ``` 269 | 270 | - **Inspect a Volume**: 271 | 272 | ```bash 273 | docker volume inspect myvolume 274 | ``` 275 | 276 | - **Remove a Volume**: 277 | 278 | ```bash 279 | docker volume rm myvolume 280 | ``` 281 | 282 | ### Mounting Volumes 283 | 284 | - **Mount a Volume to a Container**: 285 | 286 | ```bash 287 | docker run -d -p 80:80 --name mynginx -v myvolume:/usr/share/nginx/html nginx 288 | ``` 289 | 290 | ### Bind Mounts 291 | 292 | - **Use a Bind Mount**: 293 | 294 | ```bash 295 | docker run -d -p 80:80 --name mynginx -v /path/to/local/dir:/usr/share/nginx/html nginx 296 | ``` 297 | 298 | --- 299 | 300 | ## 7. Docker Networking 301 | 302 | ### Networking Modes 303 | 304 | - **Bridge Network**: The default network driver, which allows containers to communicate on the same host. 305 | - **Host Network**: Removes network isolation between the container and the Docker host. 306 | - **Overlay Network**: Enables networking between multiple Docker hosts in a swarm. 307 | 308 | ### Working with Networks 309 | 310 | - **Create a User-Defined Bridge Network**: 311 | 312 | ```bash 313 | docker network create mynetwork 314 | ``` 315 | 316 | - **Run a Container in a Network**: 317 | 318 | ```bash 319 | docker run -d --name mynginx --network=mynetwork nginx 320 | ``` 321 | 322 | - **Inspect a Network**: 323 | 324 | ```bash 325 | docker network inspect mynetwork 326 | ``` 327 | 328 | ### DNS in Docker 329 | 330 | - Docker containers can resolve each other's hostnames to IP addresses by using the embedded DNS server. 331 | 332 | --- 333 | 334 | ## 8. 
Docker Security 335 | 336 | ### Securing Docker 337 | 338 | - **Least Privileged User**: Always run containers as a non-root user. 339 | 340 | ```Dockerfile 341 | FROM nginx 342 | USER www-data 343 | ``` 344 | 345 | - **Use Trusted Images**: Use official images or images from trusted sources. 346 | - **Keep Docker Updated**: Regularly update Docker to the latest version to benefit from security patches. 347 | 348 | ### Docker Content Trust 349 | 350 | - **Enable Docker Content Trust (DCT)**: 351 | 352 | ```bash 353 | export DOCKER_CONTENT_TRUST=1 354 | ``` 355 | 356 | ### Managing Secrets 357 | 358 | - **Create a Secret in Docker Swarm**: 359 | 360 | ```bash 361 | echo "mysecretpassword" | docker secret create my_secret - 362 | ``` 363 | 364 | - **Use a Secret in a Service**: 365 | 366 | ```bash 367 | docker service create --name myservice --secret my_secret nginx 368 | ``` 369 | 370 | ### Securing Docker Daemon 371 | 372 | - **Use TLS to Secure Docker API**: 373 | - Generate TLS certificates and configure the Docker daemon to use them for secure communication. 374 | 375 | ### Limiting Container Resources 376 | 377 | - **Limit Memory**: 378 | 379 | ```bash 380 | docker run -d --name mynginx --memory="256m" nginx 381 | ``` 382 | 383 | - **Limit CPU**: 384 | 385 | ```bash 386 | docker run -d --name mynginx --cpus="1.0" nginx 387 | ``` 388 | 389 | --- 390 | 391 | ## 9. Advanced Docker Features 392 | 393 | ### Docker Swarm 394 | 395 | - **Initialize a Swarm**: 396 | 397 | ```bash 398 | docker swarm init 399 | ``` 400 | 401 | - **Join a Swarm** (the manager's address is required): 402 | 403 | ```bash 404 | docker swarm join --token SWMTKN-1-xxxx <manager-ip>:2377 405 | ``` 406 | 407 | - **Deploy a Stack**: 408 | 409 | ```bash 410 | docker stack deploy -c docker-compose.yml mystack 411 | ``` 412 | 413 | ### Multi-Stage Builds 414 | 415 | - **Example of a Multi-Stage Dockerfile**: 416 | 417 | ```Dockerfile 418 | # First Stage 419 | FROM golang:1.16 as builder 420 | WORKDIR /app 421 | COPY . . 
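# Only the binary built below is carried into the final stage, so the Go
# toolchain and sources never reach the shipped image. Disabling CGO
# (CGO_ENABLED=0) is a common way to get a fully static binary that runs
# on the musl-based alpine image used in the second stage.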
422 | RUN CGO_ENABLED=0 go build -o myapp 423 | 424 | # Second Stage 425 | FROM alpine:latest 426 | WORKDIR /app 427 | COPY --from=builder /app/myapp . 428 | CMD ["./myapp"] 429 | ``` 430 | 431 | ### Docker Plugins 432 | 433 | - **List Installed Plugins**: 434 | 435 | ```bash 436 | docker plugin ls 437 | ``` 438 | 439 | - **Install a Plugin**: 440 | 441 | 442 | 443 | ```bash 444 | docker plugin install vieux/sshfs 445 | ``` 446 | 447 | ### Docker Daemon Configuration 448 | 449 | - **Customizing Docker Daemon**: 450 | - Edit the `/etc/docker/daemon.json` file to configure the Docker daemon. 451 | 452 | ```json 453 | { 454 | "log-driver": "json-file", 455 | "log-level": "warn", 456 | "storage-driver": "overlay2" 457 | } 458 | ``` 459 | 460 | - **Restart to Apply the Configuration** (options such as `storage-driver` are not picked up by a reload): 461 | 462 | ```bash 463 | sudo systemctl restart docker 464 | ``` 465 | 466 | --- 467 | 468 | ## 10. Monitoring and Logging 469 | 470 | ### Docker Logs 471 | 472 | - **View Container Logs**: 473 | 474 | ```bash 475 | docker logs mynginx 476 | ``` 477 | 478 | - **Follow Logs**: 479 | 480 | ```bash 481 | docker logs -f mynginx 482 | ``` 483 | 484 | ### Monitoring Containers 485 | 486 | - **Inspect Resource Usage**: 487 | 488 | ```bash 489 | docker stats mynginx 490 | ``` 491 | 492 | - **Docker Events**: 493 | - Monitor Docker events in real-time. 494 | 495 | ```bash 496 | docker events 497 | ``` 498 | 499 | ### Integrating with Monitoring Tools 500 | 501 | - **Prometheus and Grafana**: Use cAdvisor and Prometheus Node Exporter to monitor Docker containers. 502 | 503 | ```bash 504 | docker run -d --name=cadvisor --volume=/:/rootfs:ro --volume=/var/run:/var/run:ro --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --volume=/dev/disk/:/dev/disk:ro --publish=8080:8080 google/cadvisor:latest 505 | ``` 506 | 507 | --- 508 | 509 | ## 11. Docker Best Practices 510 | 511 | ### Dockerfile Best Practices 512 | 513 | - **Minimize Image Size**: Use multi-stage builds and slim base images. 
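As a minimal sketch of the slim-base-image advice (`node:14-slim` is a real Docker Hub tag; the app layout mirrors the Node.js example in section 4), swapping the base image alone saves hundreds of megabytes:

```Dockerfile
# The -slim variant drops compilers, docs, and other build-time extras,
# shrinking the base from roughly 900 MB to under 200 MB
FROM node:14-slim
WORKDIR /app
# Copy the manifests first so the install layer stays cached
# when only application code changes
COPY package*.json ./
# --production skips devDependencies, trimming node_modules as well
RUN npm install --production
COPY . .
CMD ["node", "app.js"]
```

Pairing this with a `.dockerignore` that excludes `node_modules` and VCS metadata keeps the build context, and thus `COPY . .`, small.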
514 | - **Leverage Build Cache**: Organize Dockerfile instructions to maximize the use of cache layers. 515 | - **Use `.dockerignore`**: Exclude unnecessary files from the build context using a `.dockerignore` file. 516 | 517 | ### Container Management Best Practices 518 | 519 | - **Immutable Infrastructure**: Treat containers as immutable; replace rather than modify running containers. 520 | - **Keep Containers Stateless**: Design containers to be stateless, with external data persistence. 521 | - **Log to STDOUT/STDERR**: Ensure containers log to STDOUT/STDERR for easier aggregation and analysis. 522 | 523 | ### Security Best Practices 524 | 525 | - **Regularly Scan Images**: Use tools like `trivy` to scan images for vulnerabilities. 526 | - **Use Namespaces**: Use namespaces to isolate container resources and enhance security. 527 | - **Limit Capabilities**: Drop unnecessary capabilities from containers. 528 | 529 | ```bash 530 | docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx 531 | ``` 532 | 533 | --- 534 | 535 | ## 12. Troubleshooting Docker 536 | 537 | ### Common Issues 538 | 539 | - **Container Exits Immediately**: 540 | - Check the Docker logs for errors. 541 | 542 | ```bash 543 | docker logs <container_name> 544 | ``` 545 | 546 | - **Image Build Fails**: 547 | - Debug using the `--no-cache` option to rebuild the image without cache. 548 | 549 | ```bash 550 | docker build --no-cache -t myimage . 551 | ``` 552 | 553 | - **Networking Issues**: 554 | - Verify network settings and connectivity. 555 | 556 | ```bash 557 | docker network inspect <network_name> 558 | ``` 559 | 560 | ### Useful Docker Commands for Troubleshooting 561 | 562 | - **Inspect a Container**: 563 | 564 | ```bash 565 | docker inspect <container_name> 566 | ``` 567 | 568 | - **Enter a Running Container**: 569 | 570 | ```bash 571 | docker exec -it <container_name> /bin/bash 572 | ``` 573 | 574 | - **Check Resource Usage**: 575 | 576 | ```bash 577 | docker stats 578 | ``` 579 | 580 | --- 581 | 582 | ## 13. 
References 583 | 584 | ### Official Documentation 585 | 586 | - [Docker Documentation](https://docs.docker.com/) 587 | 588 | ### Community Resources 589 | 590 | - [Docker Hub](https://hub.docker.com/) 591 | - [Docker GitHub Repository](https://github.com/docker/docker-ce) 592 | - [Docker Forums](https://forums.docker.com/) 593 | --------------------------------------------------------------------------------