├── Containerization and Orchestration ├── Docker │ ├── Dockerfile for LAMP Stack (Linux, Apache, MySQL, PHP).md │ └── Dockerfile for MEAN Stack (MongoDB, Express.js, Angular, Node.js).md ├── Helm Charts │ └── Basic Helm Chart.md └── Kubernetes │ ├── Horizontal Pod Autoscaler (HPA).md │ ├── Ingress Controller Template.md │ ├── Kubernetes Deployment and Service for High Availability.md │ ├── Network Policy Template.md │ └── StatefulSet Template for Database Applications.md ├── Continuous Integration and Continuous Deployment (CI|CD) ├── GitHub Actions │ └── GitHub Actions Workflow for a Python Application.md ├── GitLab CI|CD │ └── GitLab CI|CD Configuration for a Node.js Application.md └── Jenkins │ └── Jenkins Pipeline Template for a Maven Java Project.md ├── Infrastructure as Code(IaC) ├── AWS CloudFormation │ └── AWS CloudFormation Template for Basic Network Setup.md ├── Ansible │ └── Ansible Playbook for Setting Up Apache Web Server.md └── Terraform │ └── Terrafrom template for Basic AWS Infrastructure.md ├── LICENSE.md ├── Monitoring and Logging ├── ELK Stack │ └── ELK Stack Configuration for Web Server Logs.md ├── Grafana │ └── Grafana Dashboard Template for Visualizing Metrics.md └── Prometheus │ └── Prometheus Configuration for Monitoring Targets and Alerts.md ├── README.md ├── Scripting and Automation ├── Bash Script for Basic System Updates.md └── Python │ ├── Python Script for AWS S3 File Upload.md │ ├── Python Script for Basic Data Processing.md │ ├── Python Script for Downloading Files from AWS S3.md │ └── Python Script to Automate EC2 Instance Creation.md └── Version Control └── Git Repository Essentials for a Python Project.md /Containerization and Orchestration/Docker/Dockerfile for LAMP Stack (Linux, Apache, MySQL, PHP).md: -------------------------------------------------------------------------------- 1 | # Dockerfile for LAMP Stack (Linux, Apache, MySQL, PHP) 2 | 3 | This Dockerfile configures a LAMP stack within a Docker container, making it perfect for running PHP web applications. It utilizes the official PHP image that comes with Apache and includes MySQL extensions to ensure seamless database connectivity for your applications. Simply add your PHP code to the `src` directory, build, and run the container for a quick and easy deployment. 4 | 5 | ## Dockerfile Content 6 | 7 | ```dockerfile 8 | FROM php:7.4-apache 9 | RUN docker-php-ext-install mysqli pdo pdo_mysql 10 | COPY src/ /var/www/html/ 11 | EXPOSE 80 12 | ``` 13 | 14 | ## How to Use 15 | 16 | 1. **Prepare Your PHP Application**: Place your application's PHP code within a directory named `src`. 17 | 18 | 2. **Build the Docker Image**: Execute the following command in your terminal, substituting `my-lamp-app` with your preferred name for the Docker image: 19 | ``` 20 | docker build -t my-lamp-app . 21 | ``` 22 | 23 | 3. **Run Your Container**: Start your container with the command below, which maps the container's port 80 to port 80 on your host, allowing you to access the application through `http://localhost`: 24 | ``` 25 | docker run -p 80:80 my-lamp-app 26 | ``` 27 | -------------------------------------------------------------------------------- /Containerization and Orchestration/Docker/Dockerfile for MEAN Stack (MongoDB, Express.js, Angular, Node.js).md: -------------------------------------------------------------------------------- 1 | # Dockerfile for MEAN Stack (MongoDB, Express.js, Angular, Node.js) 2 | 3 | This Dockerfile provides a foundational setup for deploying MEAN stack web applications. 
It leverages the official Node.js image to install dependencies and prepare your application for execution. By building and running this Docker image, your application will be accessible on port 3000, offering a quick start to MEAN stack development. 4 | 5 | ## Dockerfile 6 | 7 | ```dockerfile 8 | FROM node:14 9 | WORKDIR /usr/src/app 10 | COPY package*.json ./ 11 | RUN npm install 12 | COPY . . 13 | EXPOSE 3000 14 | CMD ["npm", "start"] 15 | ``` 16 | 17 | 18 | ## Sample Files 19 | 20 | ### `index.js` 21 | 22 | ``` 23 | const http = require('http'); 24 | 25 | const server = http.createServer((req, res) => { 26 | res.statusCode = 200; 27 | res.setHeader('Content-Type', 'text/plain'); 28 | res.end('Hello World\n'); 29 | }); 30 | 31 | const PORT = process.env.PORT || 3000; 32 | server.listen(PORT, () => { 33 | console.log(`Server running on port ${PORT}`); 34 | }); 35 | 36 | ``` 37 | 38 | 39 | ### `package.json` 40 | 41 | ``` 42 | { 43 | "name": "simple-node-app", 44 | "version": "1.0.0", 45 | "description": "A simple Node.js application", 46 | "main": "index.js", 47 | "scripts": { 48 | "start": "node index.js" 49 | }, 50 | "author": "KodeKloud", 51 | "license": "ISC" 52 | } 53 | 54 | ``` 55 | 56 | ## How to Use 57 | 58 | 1. **Prepare Your Application**: Ensure you have `index.js` and `package.json` files in your project directory. These files should be customized to fit your application's requirements. 59 | 60 | 2. **Build the Docker Image**: Run the following command in your terminal, replacing `my-mean-app` with your desired image name. 61 | 62 | ``` 63 | docker build -t my-mean-app . 64 | ``` 65 | 66 | 3. **Run Your Container**: To start your container and make your application accessible on port 3000, use the command below. 67 | 68 | ``` 69 | docker run -p 3000:3000 my-mean-app 70 | ``` 71 | -------------------------------------------------------------------------------- /Containerization and Orchestration/Helm Charts/Basic Helm Chart.md: -------------------------------------------------------------------------------- 1 | ## Basic Helm Chart Template 2 | 3 | This Helm chart template offers a standardized and customizable method to deploy applications to Kubernetes, utilizing Helm, the package manager for Kubernetes. It consists of a `Chart.yaml` file for chart metadata, a `values.yaml` file for configuration values, and templated Kubernetes manifest files for a deployment and service. 4 | 5 | ### Chart.yaml 6 | 7 | Create a `Chart.yaml` file with the following content to define the metadata for your Helm chart: 8 | 9 | ```yaml 10 | apiVersion: v2 11 | name: my-app 12 | version: 0.1.0 13 | ``` 14 | 15 | ### Values.yaml 16 | 17 | The `values.yaml` file specifies configuration values that can be customized for deployment. Create a `values.yaml` with the following example content: 18 | 19 | ```yaml 20 | replicaCount: 3 21 | image: 22 | repository: my-app 23 | tag: "latest" 24 | pullPolicy: IfNotPresent 25 | service: 26 | type: LoadBalancer 27 | port: 80 28 | ``` 29 | 30 | ### templates/deployment.yaml 31 | 32 | Under the `templates` directory, create a `deployment.yaml` file that uses templated values from `values.yaml`. 
Below is an example showing how to template the deployment manifest: 33 | 34 | ```yaml 35 | apiVersion: apps/v1 36 | kind: Deployment 37 | metadata: 38 | name: {{ .Values.nameOverride | default .Chart.Name }} 39 | spec: 40 | replicas: {{ .Values.replicaCount }} 41 | selector: 42 | matchLabels: 43 | app: {{ .Values.nameOverride | default .Chart.Name }} 44 | template: 45 | metadata: 46 | labels: 47 | app: {{ .Values.nameOverride | default .Chart.Name }} 48 | spec: 49 | containers: 50 | - name: {{ .Chart.Name }} 51 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 52 | ports: 53 | - containerPort: 80 54 | ``` 55 | 56 | ### templates/service.yaml 57 | 58 | Also, in the `templates` directory, create a `service.yaml` that dynamically sets values from `values.yaml`. Here is an example `service.yaml`: 59 | 60 | ```yaml 61 | apiVersion: v1 62 | kind: Service 63 | metadata: 64 | name: {{ .Values.nameOverride | default .Chart.Name }} 65 | spec: 66 | type: {{ .Values.service.type }} 67 | ports: 68 | - port: {{ .Values.service.port }} 69 | targetPort: 80 70 | selector: 71 | app: {{ .Values.nameOverride | default .Chart.Name }} 72 | ``` 73 | 74 | ### How to Use 75 | 76 | To deploy your application using this Helm chart, follow these steps: 77 | 78 | 1. **Prepare Your Helm Chart**: Ensure your `Chart.yaml`, `values.yaml`, and template files (`deployment.yaml` and `service.yaml`) are correctly set up in your Helm chart's directory structure. 79 | 80 | 2. **Customize Values**: Edit the `values.yaml` file to reflect your application's specific deployment and service configuration needs. 81 | 82 | 3. **Deploy with Helm**: Use the Helm CLI to deploy your application to a Kubernetes cluster with the following command: 83 | 84 | ```bash 85 | helm install my-app-release ./path/to/chart-directory 86 | ``` 87 | 88 | Replace `my-app-release` with a name for your Helm release, and `./path/to/chart-directory` with the path to your chart directory. 89 | 90 | 4. **Verify Deployment**: After deployment, verify that your application is running as expected with Helm and Kubernetes CLI commands like `helm list` and `kubectl get all`. 91 | -------------------------------------------------------------------------------- /Containerization and Orchestration/Kubernetes/Horizontal Pod Autoscaler (HPA).md: -------------------------------------------------------------------------------- 1 | ## Horizontal Pod Autoscaler (HPA) Template 2 | 3 | Horizontal Pod Autoscalers (HPA) automatically adjust the number of pods in a deployment, replicaset, or statefulset based on observed CPU utilization or other selected metrics. This template facilitates the quick setup of an HPA, promoting efficient resource use and enhanced response to load variations. 4 | 5 | ### HPA.yaml 6 | 7 | To define your HPA, create an `HPA.yaml` file with the following content. Replace the placeholder values (e.g., ``, ``, ``) with your specific application details. 8 | 9 | ```yaml 10 | apiVersion: autoscaling/v2beta2 11 | kind: HorizontalPodAutoscaler 12 | metadata: 13 | name: 14 | namespace: 15 | spec: 16 | scaleTargetRef: 17 | apiVersion: apps/v1 18 | kind: Deployment 19 | name: 20 | minReplicas: 1 21 | maxReplicas: 10 22 | metrics: 23 | - type: Resource 24 | resource: 25 | name: cpu 26 | target: 27 | type: Utilization 28 | averageUtilization: 50 29 | ``` 30 | 31 | ### Customization Instructions 32 | 33 | - ``: Name your Horizontal Pod Autoscaler. 34 | - ``: Specify the namespace where your target deployment resides. 
35 | - ``: The name of the deployment you wish to autoscale. 36 | - `minReplicas` and `maxReplicas`: Define the minimum and maximum number of pods that can be automatically scaled. 37 | - `averageUtilization`: Set the target CPU utilization percentage that triggers the scaling action. 38 | 39 | ### How to Use 40 | 41 | To implement your HPA in a Kubernetes environment, follow these steps: 42 | 43 | 1. **Prepare Your HPA Configuration**: Edit the `HPA.yaml` file with your specific details, ensuring that you replace all placeholder values with those relevant to your deployment. 44 | 45 | 2. **Apply the HPA**: Deploy your HPA to your Kubernetes cluster using the following command: 46 | 47 | ```bash 48 | kubectl apply -f HPA.yaml 49 | ``` 50 | 51 | 3. **Verify the HPA**: After applying the HPA configuration, you can verify its status and functionality with: 52 | 53 | ```bash 54 | kubectl get hpa -n 55 | ``` 56 | 57 | Replace `` with the namespace of your HPA. This command provides information about the HPA, including its target metrics and current status. 58 | -------------------------------------------------------------------------------- /Containerization and Orchestration/Kubernetes/Ingress Controller Template.md: -------------------------------------------------------------------------------- 1 | ## Ingress Controller Template 2 | 3 | The Ingress controller is a critical component for managing external access to services in a Kubernetes cluster. It allows you to define accessible URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. This template guides you through setting up an Ingress controller like Nginx or Traefik, accompanied by basic routing rules. 4 | 5 | ### Ingress.yaml 6 | 7 | Create an `Ingress.yaml` file with the following content to define your Ingress rules. Ensure to replace placeholder values (e.g., ``, ``, ``, ``) with your actual service details. 8 | 9 | ```yaml 10 | apiVersion: networking.k8s.io/v1 11 | kind: Ingress 12 | metadata: 13 | name: 14 | annotations: 15 | nginx.ingress.kubernetes.io/rewrite-target: / 16 | spec: 17 | rules: 18 | - host: 19 | http: 20 | paths: 21 | - path: / 22 | pathType: Prefix 23 | backend: 24 | service: 25 | name: 26 | port: 27 | number: 28 | ``` 29 | 30 | ### Customization Instructions 31 | 32 | - ``: Name your Ingress resource. 33 | - ``: Specify the domain name for accessing your application. 34 | - ``: The name of the Kubernetes service you want to expose externally. 35 | - ``: The port number of the service that the Ingress will route traffic to. 36 | 37 | ### How to Use 38 | 39 | To deploy and utilize your Ingress configuration in a Kubernetes environment, follow these steps: 40 | 41 | 1. **Prepare Your Ingress Configuration**: Edit the `Ingress.yaml` file, replacing all placeholders with your specific details tailored to your application's requirements. 42 | 43 | 2. **Apply the Ingress**: Deploy your Ingress to the Kubernetes cluster using the following command: 44 | 45 | ```bash 46 | kubectl apply -f Ingress.yaml 47 | ``` 48 | 49 | 3. **Verify the Ingress**: After applying the `Ingress.yaml`, ensure that the Ingress is correctly set up and routing traffic as expected: 50 | 51 | ```bash 52 | kubectl get ingress 53 | ``` 54 | 55 | This command will provide you with the IP address or URL through which you can access your application, based on the domain name and routing rules defined in your Ingress configuration. 
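Note that an Ingress resource only takes effect when an Ingress controller (such as ingress-nginx or Traefik) is actually running in the cluster. Assuming the common ingress-nginx installation, which places the controller in its own namespace, you can confirm it is up before applying your rules:

```bash
kubectl get pods -n ingress-nginx   # Controller pod should be in Running state
```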
56 | -------------------------------------------------------------------------------- /Containerization and Orchestration/Kubernetes/Kubernetes Deployment and Service for High Availability.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Deployment and Service for High Availability 2 | 3 | This configuration outlines the setup of a Kubernetes Deployment and Service for running high-availability applications. The deployment manages multiple replicas of your application, ensuring reliability and availability, while the service exposes these replicas externally on port 80 using a LoadBalancer, facilitating easy access. 4 | 5 | ## Deployment.yaml Content 6 | 7 | Create a `Deployment.yaml` with the following content to define your Kubernetes Deployment. This setup ensures your application runs with three replicas, enhancing its availability. 8 | 9 | ```yaml 10 | apiVersion: apps/v1 11 | kind: Deployment 12 | metadata: 13 | name: my-app 14 | spec: 15 | replicas: 3 16 | selector: 17 | matchLabels: 18 | app: my-app 19 | template: 20 | metadata: 21 | labels: 22 | app: my-app 23 | spec: 24 | containers: 25 | - name: my-app 26 | image: my-app:latest 27 | ports: 28 | - containerPort: 80 29 | ``` 30 | 31 | ## Service.yaml Content 32 | 33 | Create a `Service.yaml` with the following content to define the Kubernetes Service. This service exposes your deployment externally through a LoadBalancer, making your application accessible over the internet. 34 | 35 | ```yaml 36 | apiVersion: v1 37 | kind: Service 38 | metadata: 39 | name: my-app-service 40 | spec: 41 | selector: 42 | app: my-app 43 | ports: 44 | - protocol: TCP 45 | port: 80 46 | targetPort: 80 47 | type: LoadBalancer 48 | ``` 49 | 50 | ## How to Use 51 | 52 | To deploy your application and make it accessible using these Kubernetes configurations, follow the steps below: 53 | 54 | ### Prepare Your Deployment and Service Files 55 | 56 | Make sure you have your `Deployment.yaml` and `Service.yaml` files ready. These files are crucial for defining the deployment and service configurations of your Kubernetes-managed application. 57 | 58 | ### Apply the Deployment 59 | 60 | To create your application's deployment with the necessary replicas, execute the following command in your terminal: 61 | 62 | ```bash 63 | kubectl apply -f Deployment.yaml 64 | ``` 65 | 66 | ### Expose the Service 67 | 68 | To make your application accessible externally, apply the `Service.yaml` file. This step will create a LoadBalancer service, enabling internet access to your application: 69 | 70 | ```bash 71 | kubectl apply -f Service.yaml 72 | ``` 73 | -------------------------------------------------------------------------------- /Containerization and Orchestration/Kubernetes/Network Policy Template.md: -------------------------------------------------------------------------------- 1 | ## Network Policy Template 2 | 3 | Network policies are essential for securing Kubernetes network traffic. They enable fine-grained control over how pods communicate with each other and with other network endpoints. This template offers a basic framework for creating a network policy to manage ingress and egress traffic for a group of pods. 4 | 5 | ### NetworkPolicy.yaml 6 | 7 | Below is the `NetworkPolicy.yaml` configuration. Replace placeholder values (e.g., ``, ``, ``, etc.) with your actual values to tailor the policy to your needs. 
8 | 9 | ```yaml 10 | apiVersion: networking.k8s.io/v1 11 | kind: NetworkPolicy 12 | metadata: 13 | name: 14 | namespace: 15 | spec: 16 | podSelector: 17 | matchLabels: 18 | app: 19 | policyTypes: 20 | - Ingress 21 | - Egress 22 | ingress: 23 | - from: 24 | - podSelector: 25 | matchLabels: 26 | app: 27 | ports: 28 | - protocol: TCP 29 | port: 30 | egress: 31 | - to: 32 | - podSelector: 33 | matchLabels: 34 | app: 35 | ports: 36 | - protocol: TCP 37 | port: 38 | ``` 39 | 40 | ### Customization Instructions 41 | 42 | - ``: The name of your network policy. 43 | - ``: The Kubernetes namespace where the policy will be applied. 44 | - ``: The label of the pods the policy will apply to. 45 | - `` and ``: Labels for source and destination pods for ingress and egress rules, respectively. 46 | - ``: The TCP port number that the policy will apply to. 47 | 48 | ### How to Use 49 | 50 | To implement your network policy within a Kubernetes environment, follow these steps: 51 | 52 | 1. **Prepare Your Network Policy Configuration**: Customize the `NetworkPolicy.yaml` by replacing all placeholder values with specific details relevant to your deployment scenario. 53 | 54 | 2. **Apply the Network Policy**: Deploy your network policy to your Kubernetes cluster using the following command: 55 | 56 | ```bash 57 | kubectl apply -f NetworkPolicy.yaml 58 | ``` 59 | 60 | 3. **Verify the Policy**: You can verify that your network policy has been correctly applied and is active by using: 61 | 62 | ```bash 63 | kubectl describe networkpolicy -n 64 | ``` 65 | 66 | Replace `` and `` with your network policy's name and its namespace to check the policy's details and status. 67 | -------------------------------------------------------------------------------- /Containerization and Orchestration/Kubernetes/StatefulSet Template for Database Applications.md: -------------------------------------------------------------------------------- 1 | 2 | ## StatefulSet Template for Database Applications 3 | 4 | This template outlines the setup of a Kubernetes StatefulSet for managing database applications like PostgreSQL or MongoDB. It specifies persistent volume claims for data storage, appropriate environment variables, liveness and readiness probes, and a headless service for stable networking. 5 | 6 | ### StatefulSet.yaml 7 | 8 | Below is the `StatefulSet.yaml` configuration. Replace the placeholder values (e.g., ``, ``, etc.) with your actual application details. 9 | 10 | ```yaml 11 | apiVersion: apps/v1 12 | kind: StatefulSet 13 | metadata: 14 | name: 15 | spec: 16 | serviceName: "" 17 | replicas: 3 18 | selector: 19 | matchLabels: 20 | app: 21 | template: 22 | metadata: 23 | labels: 24 | app: 25 | spec: 26 | containers: 27 | - name: 28 | image: 29 | ports: 30 | - containerPort: 31 | volumeMounts: 32 | - name: 33 | mountPath: 34 | volumeClaimTemplates: 35 | - metadata: 36 | name: 37 | spec: 38 | accessModes: [ "ReadWriteOnce" ] 39 | resources: 40 | requests: 41 | storage: 10Gi 42 | ``` 43 | 44 | ### Customization Instructions 45 | 46 | - ``: The name of your database StatefulSet. 47 | - ``: The name of the headless service for stable networking. 48 | - ``: A label to associate your pods with the StatefulSet. 49 | - ``: The Docker image for your database (e.g., PostgreSQL, MongoDB). 50 | - ``: The port your database listens on. 51 | - ``: The name for the volume claim for persistent storage. 52 | - ``: The mount path in the container for database data. 
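The `serviceName` field above points at a headless service that Kubernetes does not create for you; it must be applied alongside the StatefulSet. A minimal sketch, using the same placeholder convention as the template:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <service-name>    # Must match serviceName in the StatefulSet
spec:
  clusterIP: None         # clusterIP: None is what makes the service headless
  selector:
    app: <app-label>
  ports:
    - port: <database-port>
```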
53 | 54 | ### How to Use 55 | 56 | To deploy your database application using this StatefulSet configuration, follow these steps: 57 | 58 | 1. **Prepare Your StatefulSet Configuration**: Adjust the `StatefulSet.yaml` with your database application's specific details, replacing all placeholder values with your actual data. 59 | 60 | 2. **Apply the StatefulSet**: Use the following command to create your database StatefulSet in Kubernetes. This will set up the database with the specified number of replicas, a persistent storage volume, and a headless service for networking. 61 | 62 | ```bash 63 | kubectl apply -f StatefulSet.yaml 64 | ``` 65 | 66 | 3. **Verify the Deployment**: Ensure that your StatefulSet, pods, and volume claims are correctly deployed and running. Use commands like `kubectl get statefulsets`, `kubectl get pods`, and `kubectl get pvc` to inspect the resources created by the StatefulSet. 67 | -------------------------------------------------------------------------------- /Continuous Integration and Continuous Deployment (CI|CD)/GitHub Actions/GitHub Actions Workflow for a Python Application.md: -------------------------------------------------------------------------------- 1 | ## GitHub Actions Workflow for a Python Application 2 | 3 | This GitHub Actions workflow is specifically designed for Python applications. It automates the process of setting up Python, installing dependencies, running tests, and provides a placeholder for deployment steps. The workflow triggers on every push to the repository, ensuring that your Python application is continuously integrated and ready for deployment. 4 | 5 | ### Workflow File: `.github/workflows/python-cicd.yml` 6 | 7 | Below is the content for the GitHub Actions workflow file. Copy this configuration into a file named `.github/workflows/python-cicd.yml` in your Python project repository. 8 | 9 | ```yaml 10 | name: Python CI/CD 11 | 12 | # Triggers the workflow on push events to the repository 13 | on: [push] 14 | 15 | jobs: 16 | # Build job for setting up the environment, installing dependencies, and running tests 17 | build: 18 | runs-on: ubuntu-latest # Specifies the runner environment 19 | 20 | steps: 21 | - uses: actions/checkout@v2 # Checks out your repository under $GITHUB_WORKSPACE 22 | 23 | - name: Set up Python 24 | uses: actions/setup-python@v2 25 | with: 26 | python-version: '3.8' # Replace '3.8' with the version of Python used in your project 27 | 28 | - name: Install dependencies 29 | run: | 30 | python -m pip install --upgrade pip # Upgrades pip 31 | pip install -r requirements.txt # Install project dependencies from requirements.txt 32 | 33 | - name: Run tests 34 | run: | 35 | python -m unittest discover -s tests 36 | # This command runs unit tests. Customize the path 'tests' if your tests are located elsewhere 37 | 38 | # Deploy job depends on the successful completion of the build job 39 | deploy: 40 | runs-on: ubuntu-latest 41 | needs: build # Ensures deployment runs only after a successful build 42 | steps: 43 | - uses: actions/checkout@v2 44 | - name: Deploy to Production 45 | run: echo "Add deployment steps here" 46 | # Replace the echo command with your actual deployment commands. 47 | # This could involve SSH commands, cloud provider CLI commands, or scripts that automate deployment. 48 | ``` 49 | 50 | ### How to Use 51 | 52 | To integrate this workflow into your Python project: 53 | 54 | 1. 
**Prepare Your Workflow File**: Ensure the `.github/workflows/python-cicd.yml` file is correctly placed in your repository with the content provided above. 55 | 56 | 2. **Customize the Workflow**: 57 | - Modify the `python-version` as necessary to match the Python version used by your project. 58 | - Customize the `deploy` job with actual deployment commands based on your deployment environment or target. 59 | 60 | 3. **Push Your Changes**: 61 | - Commit and push the `.github/workflows/python-cicd.yml` file to your repository. GitHub Actions will automatically detect this workflow file and run the defined jobs on each push. 62 | 63 | 4. **Monitor Workflow Runs**: 64 | - Check the Actions tab in your GitHub repository to monitor the workflow's execution and view logs and results. 65 | -------------------------------------------------------------------------------- /Continuous Integration and Continuous Deployment (CI|CD)/GitLab CI|CD/GitLab CI|CD Configuration for a Node.js Application.md: -------------------------------------------------------------------------------- 1 | ## GitLab CI/CD Configuration for a Node.js Application 2 | 3 | This `.gitlab-ci.yml` file defines a Continuous Integration/Continuous Deployment (CI/CD) pipeline tailored for Node.js applications. The pipeline encompasses stages for building the application, running tests, and a placeholder for deployment scripts, ensuring automated execution upon every push to the repository. 4 | 5 | ### .gitlab-ci.yml Content with Explanations 6 | 7 | Below is the content for the `.gitlab-ci.yml` file, designed to automate the process for building, testing, and preparing a Node.js application for deployment. This configuration should be placed at the root of your Node.js project repository. 8 | 9 | ```yaml 10 | stages: 11 | - build 12 | - test 13 | - deploy 14 | 15 | build_job: 16 | stage: build 17 | script: 18 | - echo "Building the application..." 19 | - npm install # Installs project dependencies 20 | 21 | test_job: 22 | stage: test 23 | script: 24 | - echo "Running tests..." 25 | - npm run test # Executes unit tests 26 | 27 | deploy_job: 28 | stage: deploy 29 | script: 30 | - echo "Deploying the application..." 31 | # This is a placeholder for your deployment scripts. 32 | # Example: Uncomment and customize the following line for deployment 33 | # - scp -r * username@your-server:/path/to/deployment/ 34 | ``` 35 | 36 | ### How to Use 37 | 38 | To utilize this CI/CD pipeline for your Node.js application in GitLab: 39 | 40 | 1. **Prepare Your `.gitlab-ci.yml`**: 41 | - Copy the provided `.gitlab-ci.yml` configuration into the root directory of your Node.js project repository. 42 | 43 | 2. **Customize the Deploy Stage**: 44 | - Modify the `deploy_job` stage by adding actual deployment commands suited to your target environment. This might include scripts for deploying to a server, publishing to a cloud environment, or any other deployment mechanism your project requires. 45 | 46 | 3. **Push Changes to GitLab**: 47 | - Commit and push your changes, including the `.gitlab-ci.yml` file, to your GitLab repository. GitLab CI/CD will automatically pick up the configuration and start the pipeline upon each push. 48 | 49 | 4. **Monitor Pipeline Execution**: 50 | - Navigate to the CI/CD section of your project in GitLab to monitor the pipeline's progress and troubleshoot any issues that arise during the build, test, or deploy stages. 
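One caveat: GitLab CI jobs in different stages start from a clean workspace, so the `node_modules` directory installed in `build_job` is not automatically visible to `test_job`. A sketch of a top-level `cache` block you could add to the same `.gitlab-ci.yml` to share dependencies between jobs — it assumes your repository commits a `package-lock.json`:

```yaml
cache:
  key:
    files:
      - package-lock.json   # Cache is reused until the lockfile changes
  paths:
    - node_modules/
```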
51 | -------------------------------------------------------------------------------- /Continuous Integration and Continuous Deployment (CI|CD)/Jenkins/Jenkins Pipeline Template for a Maven Java Project.md: -------------------------------------------------------------------------------- 1 | ## Jenkins Pipeline Template for a Maven Java Project 2 | 3 | This Jenkinsfile outlines a Continuous Integration/Continuous Deployment (CI/CD) pipeline for a Java application using Maven. It is structured into three primary stages: Build, Test, and Deploy, automating the process from code compilation to deployment. 4 | 5 | ### Jenkinsfile Content with Explanations 6 | 7 | Copy the following pipeline script into a file named `Jenkinsfile` at the root of your Java Maven project repository. This script is ready to be used in a Jenkins pipeline job. 8 | 9 | ```groovy 10 | pipeline { 11 | agent any // This specifies that the pipeline can run on any available agent 12 | 13 | stages { 14 | stage('Build') { // The build stage cleans the project and packages the application 15 | steps { 16 | sh 'mvn clean package' // Executes Maven's clean and package phases 17 | } 18 | } 19 | 20 | stage('Test') { // The test stage runs unit tests on the application 21 | steps { 22 | sh 'mvn test' // Executes Maven's test phase 23 | } 24 | } 25 | 26 | stage('Deploy') { // The deploy stage is a placeholder for deployment operations 27 | steps { 28 | // This is where you would add scripts or commands to deploy your application 29 | echo 'Deploying application...' // Placeholder for deployment steps 30 | // Example: sh 'deploy-script.sh' 31 | } 32 | } 33 | } 34 | } 35 | ``` 36 | 37 | ### How to Use 38 | 39 | To implement this CI/CD pipeline for your Maven-based Java project in Jenkins: 40 | 41 | 1. **Set Up Your Jenkins Pipeline**: 42 | - Ensure Jenkins is installed and running. 43 | - Create a new pipeline job in Jenkins. 44 | - In the pipeline configuration, select "Pipeline script from SCM" to specify the source control management. 45 | - Enter the repository URL and credentials if necessary. 46 | - Specify the path to your `Jenkinsfile`. 47 | 48 | 2. **Customize the Deploy Stage**: 49 | - Modify the 'Deploy' stage in the `Jenkinsfile` to include actual deployment commands or scripts based on your deployment environment. 50 | 51 | 3. **Run the Pipeline**: 52 | - Execute the pipeline job in Jenkins. 53 | - Jenkins will check out your code and proceed through the Build, Test, and Deploy stages as defined. 54 | 55 | 4. **Verify the Pipeline Execution**: 56 | - After the pipeline runs, verify each stage's output in Jenkins to ensure the build, tests, and deployment (if configured) were successful. 57 | 58 | -------------------------------------------------------------------------------- /Infrastructure as Code(IaC)/AWS CloudFormation/AWS CloudFormation Template for Basic Network Setup.md: -------------------------------------------------------------------------------- 1 | ## AWS CloudFormation Template: Basic Network Setup 2 | 3 | This CloudFormation template is designed to quickly establish a foundational network infrastructure within AWS, consisting of a Virtual Private Cloud (VPC), a subnet, an Internet Gateway, and a Security Group. 4 | 5 | ### CloudFormation YAML Template with Placeholders 6 | 7 | Copy the YAML content below into a file named `basic-network-setup.yaml`, replacing `` with actual values that match your project's requirements. 
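If you prefer the command line to the console steps described under How to Deploy, the customized file can also be validated and launched with the standard AWS CLI CloudFormation subcommands (the stack name below is only an example):

```bash
aws cloudformation validate-template --template-body file://basic-network-setup.yaml
aws cloudformation create-stack --stack-name my-basic-network --template-body file://basic-network-setup.yaml
```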
8 | 9 | ```yaml 10 | AWSTemplateFormatVersion: '2010-09-09' 11 | Description: Basic Network Setup - Replace with your project description 12 | 13 | Resources: 14 | # Define a Virtual Private Cloud (VPC) for your project 15 | MyVPC: 16 | Type: AWS::EC2::VPC 17 | Properties: 18 | CidrBlock: "" # Example: 10.0.0.0/16 - Define the IP range for the VPC 19 | EnableDnsSupport: true # Enables DNS support within the VPC 20 | EnableDnsHostnames: true # Allows instances in the VPC to have DNS hostnames 21 | Tags: 22 | - Key: Name 23 | Value: "" # Example: MyProjectVPC - Name your VPC for easier identification 24 | 25 | # Create a subnet within your VPC 26 | MySubnet: 27 | Type: AWS::EC2::Subnet 28 | Properties: 29 | VpcId: !Ref MyVPC # Reference to your VPC defined above 30 | CidrBlock: "" # Example: 10.0.1.0/24 - Define the IP range for the subnet 31 | AvailabilityZone: "" # Specify the AZ, e.g., us-west-2a 32 | MapPublicIpOnLaunch: true # Assign a public IP to instances launched in this subnet 33 | Tags: 34 | - Key: Name 35 | Value: "" # Example: MyProjectSubnet - Name your subnet 36 | 37 | # Internet Gateway to connect your VPC to the internet 38 | MyInternetGateway: 39 | Type: AWS::EC2::InternetGateway 40 | Properties: 41 | Tags: 42 | - Key: Name 43 | Value: "" # Example: MyProjectInternetGateway - Name your IGW 44 | 45 | # Attach the Internet Gateway to your VPC 46 | AttachGateway: 47 | Type: AWS::EC2::VPCGatewayAttachment 48 | Properties: 49 | VpcId: !Ref MyVPC # Reference to your VPC 50 | InternetGatewayId: !Ref MyInternetGateway # Reference to your Internet Gateway 51 | 52 | # Security Group to define access rules for your instances 53 | MySecurityGroup: 54 | Type: AWS::EC2::SecurityGroup 55 | Properties: 56 | GroupDescription: "Allow HTTP and SSH access" 57 | VpcId: !Ref MyVPC # Associate this security group with your VPC 58 | SecurityGroupIngress: 59 | - IpProtocol: tcp 60 | FromPort: 22 61 | ToPort: 22 62 | CidrIp: "0.0.0.0/0" # SSH access - Consider restricting to known IPs 63 | - IpProtocol: tcp 64 | FromPort: 80 65 | ToPort: 80 66 | CidrIp: "0.0.0.0/0" # HTTP access - Adjust as necessary for your application 67 | ``` 68 | 69 | ### How to Deploy 70 | 71 | To deploy your AWS infrastructure: 72 | 73 | 1. **Customize Your Template**: Replace all `` in `basic-network-setup.yaml` with actual data relevant to your project. This includes specifying your desired CIDR blocks, names for your resources, and the availability zone for your subnet. 74 | 75 | 2. **Launch CloudFormation Stack**: 76 | - Go to the AWS Management Console. 77 | - Access the CloudFormation service. 78 | - Select "Create stack" > "With new resources (standard)". 79 | - Upload the `basic-network-setup.yaml` file. 80 | - Proceed through the stack creation wizard, providing any required details, and then create the stack. 81 | 82 | 3. **Verify Your Infrastructure**: After the CloudFormation stack creation completes, verify in the AWS Management Console that all resources were successfully created and configured as expected. 83 | -------------------------------------------------------------------------------- /Infrastructure as Code(IaC)/Ansible/Ansible Playbook for Setting Up Apache Web Server.md: -------------------------------------------------------------------------------- 1 | ## Ansible Playbook for Setting Up Apache Web Server 2 | 3 | This playbook provides a simple yet effective method for configuring an Apache web server on Ubuntu-based systems using Ansible. It's designed for ease of use and quick deployment. 
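The playbook targets a host group named `your_web_servers`, which must be defined in the Ansible inventory referenced later in the How to Use steps. A minimal INI-style inventory sketch — both hostnames are hypothetical placeholders:

```ini
[your_web_servers]
web1.example.com
web2.example.com ansible_user=ubuntu   # Per-host variables such as the SSH user are optional
```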
4 | 5 | ### Playbook Content 6 | 7 | Copy the following playbook content into a file named `setup-apache.yml`. Remember to replace `your_web_servers` with your target server group or the individual server where you intend to install Apache. 8 | 9 | ```yaml 10 | - name: Setup Apache Web Server 11 | hosts: your_web_servers # Replace with your server group or individual server 12 | become: yes 13 | tasks: 14 | - name: Install Apache 15 | apt: 16 | name: apache2 17 | state: present 18 | update_cache: yes 19 | 20 | - name: Start Apache and enable on boot 21 | systemd: 22 | name: apache2 23 | enabled: yes 24 | state: started 25 | 26 | - name: Deploy a basic index.html 27 | copy: 28 | content: "
<h1>Hello from Ansible</h1>
" 29 | dest: /var/www/html/index.html 30 | ``` 31 | 32 | ### How to Use 33 | 34 | To use this playbook for setting up Apache on your servers, follow these steps: 35 | 36 | 1. **Prepare Your Inventory**: Ensure your Ansible inventory is correctly set up with the target servers listed under `[your_web_servers]` group or defined appropriately in your inventory file. 37 | 38 | 2. **Run the Playbook**: Execute the playbook against your servers by running the following command: 39 | 40 | ```bash 41 | ansible-playbook -i path/to/your/inventory setup-apache.yml 42 | ``` 43 | 44 | Replace `path/to/your/inventory` with the actual path to your Ansible inventory file. 45 | 46 | 3. **Verify Installation**: After the playbook execution completes, verify that Apache has been successfully installed and is running on your target servers. You can do this by accessing the server's IP address or domain name in a web browser, which should display the "Hello from Ansible" message. 47 | -------------------------------------------------------------------------------- /Infrastructure as Code(IaC)/Terraform/Terrafrom template for Basic AWS Infrastructure.md: -------------------------------------------------------------------------------- 1 | ## Terraform Template for AWS Infrastructure 2 | 3 | This Terraform template outlines how to create a basic AWS network infrastructure, including a VPC, subnet, internet gateway, EC2 instance, and S3 bucket. The template is annotated with comments to provide insights into each configuration step. 4 | 5 | ### AWS Terraform Configuration with Comments 6 | 7 | Below is the `main.tf` file content, featuring detailed comments: 8 | 9 | ```hcl 10 | # Specify the provider and define your AWS region 11 | provider "aws" { 12 | region = "" # Example: us-west-2 13 | } 14 | 15 | # Create a Virtual Private Cloud (VPC) to provide an isolated network 16 | resource "aws_vpc" "example_vpc" { 17 | cidr_block = "10.0.0.0/16" 18 | enable_dns_hostnames = true 19 | 20 | tags = { 21 | Name = "example-vpc" # Customize the VPC name 22 | } 23 | } 24 | 25 | # Create a subnet within your VPC 26 | resource "aws_subnet" "example_subnet" { 27 | vpc_id = aws_vpc.example_vpc.id 28 | cidr_block = "10.0.1.0/24" 29 | availability_zone = "" # Example: us-west-2a 30 | 31 | tags = { 32 | Name = "example-subnet" # Customize the subnet name 33 | } 34 | } 35 | 36 | # Create an Internet Gateway for connecting your VPC to the internet 37 | resource "aws_internet_gateway" "example_igw" { 38 | vpc_id = aws_vpc.example_vpc.id 39 | 40 | tags = { 41 | Name = "example-igw" # Customize the Internet Gateway name 42 | } 43 | } 44 | 45 | # Deploy an EC2 instance within your subnet 46 | resource "aws_instance" "example_instance" { 47 | ami = "" # Example: ami-0c55b159cbfafe1f0 48 | instance_type = "t2.micro" 49 | subnet_id = aws_subnet.example_subnet.id 50 | 51 | tags = { 52 | Name = "example-instance" # Customize the instance name 53 | } 54 | } 55 | 56 | # Create an S3 bucket for object storage 57 | resource "aws_s3_bucket" "example_bucket" { 58 | bucket = "" # Ensure this name is globally unique 59 | acl = "private" 60 | } 61 | ``` 62 | 63 | ### How to Use 64 | 65 | To deploy this AWS infrastructure with Terraform: 66 | 67 | 1. **Customize Your Configuration**: Replace all placeholder values () with actual data relevant to your AWS setup. This includes specifying the AWS region, availability zone, AMI ID for the EC2 instance, and a unique name for your S3 bucket. 68 | 69 | 2. 
**Save Your Configuration**: Copy the provided Terraform configuration into a file named `main.tf` within your Terraform project directory. 70 | 71 | 3. **Initialize Terraform**: 72 | Run `terraform init` to initialize the Terraform workspace, which downloads the AWS provider plugin. 73 | 74 | ```bash 75 | terraform init 76 | ``` 77 | 78 | 4. **Review the Plan**: 79 | Use `terraform plan` to review the actions Terraform will perform before making changes to your infrastructure. 80 | 81 | ```bash 82 | terraform plan 83 | ``` 84 | 85 | 5. **Apply the Configuration**: 86 | Deploy your infrastructure by running `terraform apply` and approve the action when prompted. 87 | 88 | ```bash 89 | terraform apply 90 | ``` 91 | 92 | 6. **Check Your Resources**: 93 | After the apply completes, verify the creation of the resources in your AWS account through the AWS Management Console or CLI. 94 | 95 | By following these steps and utilizing the annotated `main.tf` file, you can easily set up a basic AWS infrastructure tailored for your applications, leveraging Terraform's infrastructure as code capabilities for efficient and reproducible deployments. 96 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | ## License 2 | 3 |
This work is licensed under [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/).
4 | -------------------------------------------------------------------------------- /Monitoring and Logging/ELK Stack/ELK Stack Configuration for Web Server Logs.md: -------------------------------------------------------------------------------- 1 | ### ELK Stack Configuration Guide for Web Server Logs 2 | 3 | This comprehensive guide provides you with the templates and detailed steps necessary to deploy the ELK Stack for collecting, processing, storing, and visualizing web server logs. This solution leverages Elasticsearch, Logstash, Kibana, and Filebeat to create a powerful system for real-time log monitoring and analysis. 4 | 5 | --- 6 | 7 | #### Elasticsearch Index Template Setup 8 | 9 | **Template Configuration (`ElasticsearchIndexTemplate.json`):** 10 | 11 | ```json 12 | PUT _index_template/template_web_logs 13 | { 14 | "index_patterns": ["web-logs-*"], 15 | "template": { 16 | "settings": { 17 | "number_of_shards": 1, 18 | "number_of_replicas": 1 19 | }, 20 | "mappings": { 21 | "properties": { 22 | "timestamp": { "type": "date" }, 23 | "log_level": { "type": "keyword" }, 24 | "message": { "type": "text" }, 25 | "ip": { "type": "ip" }, 26 | "response_time": { "type": "float" } 27 | } 28 | } 29 | } 30 | } 31 | ``` 32 | 33 | **How to Deploy:** 34 | 35 | - Replace `template_web_logs` with your template name (e.g., `template_myapp_logs`). 36 | - Use the Elasticsearch API or Kibana Dev Tools, replacing `localhost:9200` with your Elasticsearch server address: 37 | ```bash 38 | curl -X PUT "localhost:9200/_index_template/template_web_logs" -H 'Content-Type: application/json' -d@ElasticsearchIndexTemplate.json 39 | ``` 40 | 41 | --- 42 | 43 | #### Logstash Configuration 44 | 45 | **Configuration File (`LogstashConfig.conf`):** 46 | 47 | ```ruby 48 | input { 49 | beats { 50 | port => 5044 51 | } 52 | } 53 | 54 | filter { 55 | grok { 56 | match => { "message" => "%{COMBINEDAPACHELOG}" } 57 | } 58 | date { 59 | match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ] 60 | } 61 | } 62 | 63 | output { 64 | elasticsearch { 65 | hosts => ["http://localhost:9200"] 66 | index => "web-logs-%{+YYYY.MM.dd}" 67 | user => "elastic" 68 | password => "changeme" 69 | } 70 | } 71 | ``` 72 | 73 | **How to Deploy:** 74 | 75 | - Adjust `hosts`, `user`, and `password` to match your Elasticsearch details. 76 | - Place the `LogstashConfig.conf` in Logstash's configuration directory. 77 | - Start Logstash: `bin/logstash -f LogstashConfig.conf`. 78 | 79 | --- 80 | 81 | #### Kibana Visualization and Dashboards 82 | 83 | **Creating Visualizations:** 84 | 85 | 1. Access Kibana (typically at `http://localhost:5601`). 86 | 2. Create an Index Pattern: Go to Management → Index Patterns → Create new. Use `web-logs-*` as the pattern. 87 | 3. Navigate to "Visualize" and create new visualizations using the fields from your logs. 88 | 4. Assemble visualizations into dashboards for a comprehensive view. 89 | 90 | --- 91 | 92 | #### Filebeat Configuration 93 | 94 | **Configuration File (`filebeat.yml`):** 95 | 96 | ```yaml 97 | filebeat.inputs: 98 | - type: log 99 | enabled: true 100 | paths: 101 | - /var/log/apache2/*.log # Adjust the path to your log files 102 | fields: 103 | log_type: apache_log 104 | 105 | output.logstash: 106 | hosts: ["localhost:5044"] # Your Logstash server address 107 | ``` 108 | 109 | **How to Deploy:** 110 | 111 | - Install Filebeat on the server where your logs are generated. 112 | - Modify `filebeat.yml` with your log paths and output destination. 113 | - Start Filebeat: `sudo service filebeat start`. 
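Filebeat ships with built-in self-test subcommands that are worth running before trusting the pipeline; the first validates the configuration file and the second checks connectivity to the Logstash output defined in `filebeat.yml`:

```bash
filebeat test config   # Verifies filebeat.yml parses correctly
filebeat test output   # Confirms Filebeat can reach the configured Logstash host
```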
114 | 115 | --- 116 | 117 | ### Final Steps and Verification 118 | 119 | After deploying each component: 120 | 121 | 1. **Ensure Data Flow:** Confirm that logs are moving from Filebeat to Logstash, then to Elasticsearch, and finally visualizable in Kibana. 122 | 2. **Monitor System Health:** Use Kibana's monitoring features to check the health of Elasticsearch, Logstash, and Filebeat. 123 | 124 | This guide provides a foundational setup. Tailor each configuration to fit your specific logging needs and infrastructure for an effective log monitoring and analysis solution. 125 | -------------------------------------------------------------------------------- /Monitoring and Logging/Grafana/Grafana Dashboard Template for Visualizing Metrics.md: -------------------------------------------------------------------------------- 1 | ## Grafana Dashboard Templates for Visualizing Metrics 2 | 3 | This guide provides a template for creating a Grafana dashboard tailored for visualizing metrics, particularly those collected by Prometheus. The template facilitates the setup of a dashboard environment, configuring data sources, and customizing panels for metric visualization. This configuration ensures an effective visualization of your monitoring data, enhancing observability and insights into your systems. 4 | 5 | ### Grafana Dashboard JSON Template 6 | 7 | Below is a JSON configuration for a Grafana dashboard template. This template should be imported into Grafana to create a new dashboard. It includes placeholders and configurations for integrating with a Prometheus data source. 8 | 9 | ```json 10 | { 11 | "__inputs": [ 12 | { 13 | "name": "DS_PROMETHEUS", 14 | "label": "Prometheus", 15 | "description": "", 16 | "type": "datasource", 17 | "pluginId": "prometheus", 18 | "pluginName": "Prometheus" 19 | } 20 | ], 21 | "__requires": [], 22 | "annotations": [], 23 | "editable": true, 24 | "gnetId": null, 25 | "graphTooltip": 0, 26 | "id": null, 27 | "links": [], 28 | "panels": [], 29 | "refresh": "10s", 30 | "schemaVersion": 16, 31 | "style": "dark", 32 | "tags": ["prometheus"], 33 | "templating": [], 34 | "time": {}, 35 | "timepicker": {}, 36 | "timezone": "", 37 | "title": "My Dashboard", 38 | "uid": null, 39 | "version": 1 40 | } 41 | ``` 42 | 43 | ### How to Use and Customize 44 | 45 | 1. **Prepare Your Dashboard JSON File**: 46 | - Copy the provided JSON configuration into a new file. This will serve as your base template for creating a Grafana dashboard. 47 | 48 | 2. **Import the Dashboard into Grafana**: 49 | - Log in to your Grafana instance. 50 | - Navigate to the Dashboards section and select "Import". 51 | - Upload your JSON file or paste the JSON directly into the provided field. 52 | - During the import process, select your Prometheus data source for the `DS_PROMETHEUS` variable. 53 | 54 | 3. **Customize Your Dashboard**: 55 | - Once imported, you can add, remove, or customize panels within the dashboard. This might involve setting up specific queries to Prometheus, adjusting visualization types, or configuring alerts. 56 | - Utilize Grafana's comprehensive panel editor to tailor each panel's metrics, legends, axes, and more to your monitoring requirements. 57 | 58 | 4. **Adjust Dashboard Settings**: 59 | - Modify the dashboard's refresh rate, time range, and timezone as needed to suit your observability goals. 60 | - Consider adding more tags or updating the dashboard's metadata for better organization and accessibility within Grafana. 61 | 62 | 5. 
**Save and Share Your Dashboard**: 63 | - After customization, save your dashboard. Grafana provides options to share dashboards with team members or export the updated JSON for version control or reuse. 64 | 65 | This Grafana dashboard template provides a foundational structure for visualizing Prometheus metrics, offering a customizable framework for comprehensive monitoring and analysis of your systems. 66 | -------------------------------------------------------------------------------- /Monitoring and Logging/Prometheus/Prometheus Configuration for Monitoring Targets and Alerts.md: -------------------------------------------------------------------------------- 1 | ## Prometheus Configuration for Monitoring Targets and Alerts 2 | 3 | This guide introduces a Prometheus configuration tailored for monitoring both Prometheus itself and your applications. It facilitates the setup of the Prometheus environment, configuring scrape intervals, and specifying targets for metric collection. This configuration ensures that Prometheus is effectively monitoring your designated services, ready to integrate into broader observability and alerting frameworks. 4 | 5 | ### Configuration File: `prometheus.yml` 6 | 7 | Below is a detailed setup for the `prometheus.yml` configuration file. This file should be placed in your Prometheus server's configuration directory. It outlines how Prometheus scrapes metrics from specified targets, including itself and an example application. 8 | 9 | ```yaml 10 | name: Prometheus Monitoring Setup 11 | 12 | # Defines when and how Prometheus scrapes metrics 13 | on: [configuration] 14 | 15 | jobs: 16 | # Configures Prometheus to scrape metrics from itself 17 | self_monitoring: 18 | runs-on: server # Specifies where Prometheus is running 19 | 20 | steps: 21 | - uses: internal_scrape@v1 # Utilizes Prometheus' own metrics endpoint 22 | with: 23 | scrape_interval: '15s' # Sets the interval at which metrics are collected 24 | 25 | - name: Set up scrape targets 26 | run: | 27 | global: 28 | scrape_interval: 15s # Default scrape interval for all targets 29 | 30 | scrape_configs: 31 | - job_name: 'prometheus' # Job for scraping metrics from Prometheus 32 | static_configs: 33 | - targets: ['localhost:9090'] # Prometheus metrics endpoint 34 | 35 | - job_name: 'my-application' # Job for scraping metrics from your application 36 | static_configs: 37 | - targets: ['my-app-service:80'] # Application metrics endpoint 38 | 39 | # Placeholder for additional monitoring jobs or alerting rules 40 | additional_jobs: 41 | runs-on: extendable 42 | needs: self_monitoring # Ensures base monitoring setup is configured first 43 | steps: 44 | - uses: actions/extend@v2 45 | - name: Configure Additional Targets or Alerts 46 | run: echo "Customize with additional scrape targets or alerting rules" 47 | ``` 48 | 49 | ### How to Implement and Customize 50 | 51 | 1. **Prepare Your `prometheus.yml` File**: 52 | - Place the provided configuration into a file named `prometheus.yml` within your Prometheus server's configuration directory. 53 | 54 | 2. **Adjust Scrape Intervals and Targets**: 55 | - Modify `scrape_interval` as needed to balance between data granularity and storage or performance impact. 56 | - Replace `'localhost:9090'` and `'my-app-service:80'` with actual endpoints from which Prometheus should scrape metrics. Add more jobs as needed for comprehensive monitoring coverage. 57 | 58 | 3. 
**Integrate Additional Monitoring and Alerting**: 59 | - Beyond basic monitoring, consider defining additional jobs for other services or setting up alerting rules within the same configuration file or separately as per Prometheus' documentation. 60 | 61 | 4. **Reload Prometheus Configuration**: 62 | - Apply changes by restarting Prometheus or reloading its configuration, typically through the Prometheus web interface under `Status > Reload config` or using the HTTP API. 63 | 64 | 5. **Verify Configuration and Targets**: 65 | - Access Prometheus' web UI, usually available at `http://:9090`, and navigate to `Status > Targets` to ensure all configured targets are up and being scraped successfully. 66 | 67 | This Prometheus setup provides a solid foundation for monitoring your infrastructure and applications, ready to be extended with more detailed job configurations, service discovery, and alerting capabilities for a comprehensive observability strategy. 68 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # DevOps Template Library README 2 | 3 | Welcome to the DevOps Template Library, your ultimate resource for kickstarting and streamlining your DevOps projects. This comprehensive collection of ready-to-use templates covers a wide range of tools and technologies essential for modern cloud-native development and operations. 4 | 5 | ## Value 6 | 7 | This library is designed for immediate utility. Anyone beginning today can quickly become operational, seeing results within minutes. It eliminates the need to scour the internet for **basic code structures** or **boilerplate code**. 8 | 9 | ## Audience 10 | 11 | This resource is intended for individuals who are involved in DevOps. If you're a newcomer eager to get started with DevOps practices, this library has something for you. 12 | 13 | ## Contribution Guide 14 | 15 | We welcome contributions from the community! Whether you've found a bug, have an enhancement suggestion, or want to add a template you find indispensable in your day-to-day DevOps activities, here's how you can contribute: 16 | 17 | ### Reporting Issues 18 | 19 | 1. **Check for Existing Issues**: Before submitting a new issue, please check if it has already been reported. 20 | 2. **Create a New Issue**: If you've identified a new issue or enhancement, please use the "New Issue" button and fill in the template with as much detail as possible. 21 | 22 | ### Submitting a Pull Request (PR) 23 | 24 | 1. **Fork the Repository**: Start by forking the repository to your GitHub account. 25 | 2. **Create a New Branch**: Create a branch in your forked repository for your contribution. Use a descriptive name, such as `feature/add-jenkins-template` or `fix/dockerfile-bug`. 26 | 3. **Add Your Template or Fix**: Add your template or apply your fix. Ensure it follows the basic format outlined below. Commit your changes with a clear message describing the addition or fix. 27 | 4. **Raise a Pull Request**: Navigate to the original repository and click on "Pull Requests" > "New Pull Request". Select your fork and branch, then submit the PR with a clear description of your changes. 28 | 29 | ### Basic Template Format for Submissions 30 | 31 | Please ensure your submission follows this basic format: 32 | 33 | ```yaml 34 | # Template Name: Example Template 35 | # Description: Brief description of what the template does and its use case. 
36 | # Tools/Technologies: List of tools or technologies this template pertains to. 37 | # Usage Instructions: Step-by-step instructions on how to use the template. 38 | # Contributed by: Your Name or GitHub Username 39 | ``` 40 | 41 | ### PR Review Process 42 | 43 | After submission, the repository maintainers will review your PR. This process might involve some discussion or requests for changes. We aim to handle PRs promptly, but response times can vary based on the current workload. 44 | 45 | --- 46 | 47 | By contributing to the DevOps Template Library, you help build a valuable resource that benefits the entire DevOps community. Thank you for your support and contributions! 48 | -------------------------------------------------------------------------------- /Scripting and Automation/Bash Script for Basic System Updates.md: -------------------------------------------------------------------------------- 1 | ## Bash Script for Basic System Updates 2 | 3 | This guide provides a Bash script designed for system administrators to automate the updating and upgrading of system packages on Debian-based Linux distributions. It simplifies the maintenance process, ensuring your system stays up-to-date with the latest security patches and improvements. 4 | 5 | ### Bash Script: `update_system.sh` 6 | 7 | Below is the Bash script that automates the process of updating, upgrading, and cleaning up system packages. 8 | 9 | ```bash 10 | #!/bin/bash 11 | 12 | # Update and upgrade system packages 13 | echo "Updating and upgrading system packages..." 14 | sudo apt-get update && sudo apt-get upgrade -y 15 | 16 | # Clean up 17 | echo "Cleaning up..." 18 | sudo apt-get autoremove -y 19 | 20 | echo "System update complete." 21 | ``` 22 | 23 | ### How to Use 24 | 25 | To utilize this script for system updates: 26 | 27 | 1. **Create the Script File**: Copy the above script into a new file named `update_system.sh` on your Linux system. 28 | 29 | 2. **Make It Executable**: 30 | - Grant execution permissions to the script by running `chmod +x update_system.sh` in the terminal. 31 | 32 | 3. **Run the Script**: 33 | - Execute the script with `./update_system.sh`. You might be prompted for your password due to the use of `sudo`. 34 | 35 | ### Script Explanation 36 | 37 | - `#!/bin/bash`: Specifies the script is run with Bash shell. 38 | - `sudo apt-get update`: Updates the list of available packages and their versions, but it does not install or upgrade any packages. 39 | - `sudo apt-get upgrade -y`: Upgrades all the currently installed packages to the latest version. The `-y` flag automatically answers 'yes' to prompts. 40 | - `sudo apt-get autoremove -y`: Removes packages that were automatically installed to satisfy dependencies for other packages and are now no longer needed. 41 | 42 | ### Best Practices 43 | 44 | - **Regular Maintenance**: Schedule this script to run at regular intervals using `cron` to ensure your system is always up to date. 45 | - **Review Updates**: Although automated updates are convenient, it's a good practice to manually review potentially significant upgrades before applying them, especially in production environments. 46 | - **Backup**: Always ensure you have recent backups of critical data before running update operations, to prevent data loss in the event of an update issue. 
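As a concrete example of the cron scheduling suggested above, the entry below runs the script every Sunday at 03:00 and appends its output to a log file. Placing it in root's crontab (`sudo crontab -e`) avoids interactive `sudo` password prompts; the script path is a placeholder:

```bash
# m h dom mon dow  command
0 3 * * 0 /path/to/update_system.sh >> /var/log/update_system.log 2>&1
```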
47 | -------------------------------------------------------------------------------- /Scripting and Automation/Python/Python Script for AWS S3 File Upload.md: -------------------------------------------------------------------------------- 1 | ## Python Script for AWS S3 File Upload 2 | 3 | This Python script is designed to automate the process of uploading files to AWS S3, making it an essential tool for managing cloud resources efficiently. It utilizes the `boto3` library to interact with AWS services, specifically for uploading files to an S3 bucket. 4 | 5 | ### Python Script: `s3_upload.py` 6 | 7 | Below is the Python script that automates the file upload process to AWS S3. It includes error handling for common issues such as missing credentials or file not found errors. 8 | 9 | ```python 10 | import boto3 11 | from botocore.exceptions import NoCredentialsError 12 | 13 | def upload_to_aws(local_file, bucket, s3_file): 14 | s3 = boto3.client('s3') 15 | 16 | try: 17 | s3.upload_file(local_file, bucket, s3_file) 18 | print(f"Upload Successful: {s3_file}") 19 | return True 20 | except FileNotFoundError: 21 | print("The file was not found") 22 | return False 23 | except NoCredentialsError: 24 | print("Credentials not available") 25 | return False 26 | 27 | # Example usage 28 | uploaded = upload_to_aws('local_file.txt', 'mybucket', 's3_file.txt') 29 | ``` 30 | 31 | ### How to Use 32 | 33 | 1. **Install Boto3**: Ensure the `boto3` library is installed in your Python environment. If not, install it using pip: 34 | ```bash 35 | pip install boto3 36 | ``` 37 | 2. **AWS Credentials**: Make sure your AWS credentials are configured. This can be done by setting up the AWS CLI or by creating a `.aws/credentials` file manually. 38 | 3. **Customize the Script**: Replace `'local_file.txt'`, `'mybucket'`, and `'s3_file.txt'` in the example usage with your actual file path, bucket name, and S3 object name. 39 | 4. **Run the Script**: Execute the script. If successful, it will print `"Upload Successful: s3_file.txt"` to the console. 40 | 41 | ### Script Explanation 42 | 43 | - **Boto3 Library**: Utilizes AWS's SDK for Python to interact with S3. 44 | - **Error Handling**: Catches and handles `FileNotFoundError` and `NoCredentialsError` to provide clear feedback on common issues. 45 | - **Function Parameters**: 46 | - `local_file`: The path to the file on your local system that you want to upload. 47 | - `bucket`: The name of the S3 bucket where the file should be uploaded. 48 | - `s3_file`: The name (and optionally path) that the file should have when stored in S3. 49 | 50 | ### Best Practices 51 | 52 | - **Security**: Always manage AWS credentials securely and avoid hard-coding them in your scripts. 53 | - **Large Files**: Note that `upload_file` already performs managed multipart uploads for large files; you can tune thresholds and concurrency with `boto3.s3.transfer.TransferConfig` if needed. 54 | - **Error Logging**: Implement logging for production scripts to capture errors and upload statuses for auditing and troubleshooting purposes. 55 | -------------------------------------------------------------------------------- /Scripting and Automation/Python/Python Script for Basic Data Processing.md: -------------------------------------------------------------------------------- 1 | ## Python Script for Basic Data Processing 2 | 3 | This Python script provides a straightforward example of using Pandas for basic data processing tasks such as loading data from a CSV file and performing simple transformations like removing missing values.
This script is especially useful for data analysts and data scientists looking to automate data cleaning and preprocessing steps. 4 | 5 | ### Python Script: `data_processing.py` 6 | 7 | Below is the script, which includes comments for clarity and potential customization points for specific data processing needs. 8 | 9 | ```python 10 | import pandas as pd 11 | 12 | def load_and_process_data(file_path): 13 | # Load data from a CSV file into a Pandas DataFrame 14 | df = pd.read_csv(file_path) 15 | 16 | # Data processing steps 17 | # Example: Remove rows with missing values 18 | df = df.dropna() 19 | 20 | return df 21 | 22 | # Example Usage 23 | file_path = 'path/to/your/data.csv' # Specify the path to your CSV data file 24 | processed_data = load_and_process_data(file_path) 25 | print(processed_data.head()) # Print the first few rows of the processed data 26 | ``` 27 | 28 | ### How to Use 29 | 30 | 1. **Install Pandas**: If not already installed, add Pandas to your Python environment. 31 | ```bash 32 | pip install pandas 33 | ``` 34 | 2. **Prepare Your Data File**: Ensure your CSV data file is accessible at the specified `file_path`. 35 | 3. **Customize the Script**: Adjust the data processing steps within the `load_and_process_data` function to suit your specific needs. This could include filtering data, transforming columns, or aggregating information. 36 | 4. **Execute the Script**: Run the script to process your data. The script will load your CSV file, apply the defined processing steps, and print the first few rows of the processed DataFrame to the console. 37 | 38 | ### Script Explanation 39 | 40 | - **Pandas for Data Processing**: Utilizes Pandas, a powerful Python library for data analysis, to load and process data from CSV files. 41 | - **Function Parameters**: 42 | - `file_path`: The path to the CSV file containing the data to be processed. 43 | - **Data Cleaning Example**: The provided example removes rows with missing values using `df.dropna()`. This step can be replaced or expanded based on the data cleaning and preprocessing requirements of your project. 44 | 45 | ### Best Practices 46 | 47 | - **Data Exploration**: Before processing, perform exploratory data analysis (EDA) to understand your dataset's structure, contents, and potential issues. 48 | - **Error Handling**: Implement error handling for file loading and data processing steps to manage issues like file not found errors or incorrect data formats. 49 | - **Reusable Functions**: Encapsulate specific data processing tasks into separate functions for better code organization and reusability. 50 | -------------------------------------------------------------------------------- /Scripting and Automation/Python/Python Script for Downloading Files from AWS S3.md: -------------------------------------------------------------------------------- 1 | ## Python Script for Downloading Files from AWS S3 2 | 3 | This Python script facilitates the process of downloading files from AWS S3, showcasing how to leverage the `boto3` library for efficient interaction with AWS services. It's designed to handle common errors gracefully, such as credential issues or client errors, ensuring smooth operation. 4 | 5 | ### Python Script: `s3_download.py` 6 | 7 | Below is a detailed Python script for downloading files from an S3 bucket. The script includes error handling to manage potential issues that might arise during the download process.
8 | 9 | ```python 10 | import boto3 11 | from botocore.exceptions import NoCredentialsError, ClientError 12 | import logging 13 | 14 | def download_file_from_s3(bucket, s3_object, local_file): 15 | s3 = boto3.client('s3') 16 | try: 17 | s3.download_file(bucket, s3_object, local_file) 18 | print(f"Downloaded {s3_object} from {bucket} to {local_file}") 19 | except ClientError as e: 20 | logging.error(e) 21 | return False 22 | except NoCredentialsError: 23 | logging.error("Credentials not available") 24 | return False 25 | return True 26 | 27 | # Example Usage 28 | bucket_name = 'your-bucket-name' # Replace with your actual S3 bucket name 29 | s3_object_name = 'your-object-name' # Replace with the S3 object name you wish to download 30 | local_file_path = 'path/to/save/file' # Specify the local path where the file should be saved 31 | 32 | download_file_from_s3(bucket_name, s3_object_name, local_file_path) 33 | ``` 34 | 35 | ### How to Use 36 | 37 | 1. **Install Boto3**: If not already installed, add `boto3` to your Python environment using pip: 38 | ```bash 39 | pip install boto3 40 | ``` 41 | 2. **Configure AWS Credentials**: Ensure your AWS credentials are correctly configured, typically via the AWS CLI or the `.aws/credentials` file. 42 | 3. **Adjust Script Parameters**: Modify the `bucket_name`, `s3_object_name`, and `local_file_path` variables in the "Example Usage" section to match your specific use case. 43 | 4. **Execute the Script**: Run the script to download the specified file from S3 to your local filesystem. 44 | 45 | ### Script Explanation 46 | 47 | - **Boto3 and Error Handling**: Utilizes the AWS SDK for Python (`boto3`) to interact with S3, with added error handling for `NoCredentialsError` and `ClientError`, ensuring robust script performance. 48 | - **Function Parameters**: 49 | - `bucket`: The name of the S3 bucket containing the file. 50 | - `s3_object`: The name of the object in S3 to download. 51 | - `local_file`: The local file path where the downloaded file will be saved. 52 | 53 | ### Best Practices 54 | 55 | - **Secure Credential Management**: Avoid hardcoding AWS credentials in scripts. Use the AWS CLI or environment variables to manage credentials securely. 56 | - **Error Logging**: Implement detailed logging, especially for scripts used in production environments, to facilitate troubleshooting and auditing. 57 | - **Large File Handling**: For large files, consider using the `TransferConfig` class from `boto3.s3.transfer` to manage multipart downloads and adjust download configurations. 58 | -------------------------------------------------------------------------------- /Scripting and Automation/Python/Python Script to Automate EC2 Instance Creation.md: -------------------------------------------------------------------------------- 1 | ## Python Script to Automate EC2 Instance Creation 2 | 3 | This Python script showcases how to automate the creation of an Amazon EC2 instance using the `boto3` library. It's designed for simplicity and efficiency, allowing users to programmatically launch instances with specified parameters such as the AMI ID, instance type, and key pair name. 4 | 5 | ### Python Script: `create_ec2_instance.py` 6 | 7 | Below is the script that provides a function to create an EC2 instance. It includes basic error handling and prints out the instance ID upon successful creation. 
8 | 9 | ```python 10 | import boto3 11 | 12 | def create_ec2_instance(image_id, instance_type, keypair_name): 13 | ec2 = boto3.resource('ec2') 14 | try: 15 | instance = ec2.create_instances( 16 | ImageId=image_id, 17 | MinCount=1, 18 | MaxCount=1, 19 | InstanceType=instance_type, 20 | KeyName=keypair_name 21 | ) 22 | print(f"EC2 Instance {instance[0].id} created") 23 | return instance[0].id 24 | except Exception as e: 25 | print(f"Error creating EC2 instance: {e}") 26 | return None 27 | 28 | # Example Usage 29 | image_id = 'ami-12345' # Replace with actual AMI ID for your region 30 | instance_type = 't2.micro' 31 | keypair_name = 'your-keypair-name' # Replace with your existing keypair name 32 | 33 | instance_id = create_ec2_instance(image_id, instance_type, keypair_name) 34 | if instance_id: 35 | print(f"Instance created successfully: {instance_id}") 36 | else: 37 | print("Instance creation failed.") 38 | ``` 39 | 40 | ### How to Use 41 | 42 | 1. **Install Boto3**: Ensure the `boto3` library is installed in your Python environment. 43 | ```bash 44 | pip install boto3 45 | ``` 46 | 2. **Configure AWS Credentials**: Make sure your AWS credentials are configured properly, typically through the AWS CLI or by setting environment variables. 47 | 3. **Customize the Script**: Modify the `image_id`, `instance_type`, and `keypair_name` in the "Example Usage" section to match your requirements. 48 | 4. **Run the Script**: Execute the script to create an EC2 instance. The instance ID will be printed upon successful creation. 49 | 50 | ### Script Explanation 51 | 52 | - **Boto3 EC2 Resource**: Uses `boto3.resource('ec2')` to interface with the EC2 service. 53 | - **Error Handling**: Includes a try-except block to catch and print errors that may occur during instance creation. 54 | - **Function Parameters**: 55 | - `image_id`: The AMI ID of the instance to launch. 56 | - `instance_type`: The type of instance (e.g., `t2.micro`). 57 | - `keypair_name`: The name of the key pair to associate with this instance for SSH access. 58 | 59 | ### Best Practices 60 | 61 | - **Security**: Always review AWS best practices for security, especially regarding key pair management and instance access. 62 | - **Resource Cleanup**: Remember to stop or terminate instances you no longer need to avoid unnecessary charges. 63 | - **AMI Selection**: Ensure the AMI ID (`image_id`) is valid for the region you are launching the instance in and meets your application requirements. 64 | -------------------------------------------------------------------------------- /Version Control/Git Repository Essentials for a Python Project.md: -------------------------------------------------------------------------------- 1 | ## Git Repository Essentials for a Python Project 2 | 3 | This comprehensive guide outlines the essentials for setting up and maintaining a Git repository for a Python project. It includes a `.gitignore` file to exclude unnecessary files, a template for creating a `README.md`, guidelines for branch naming, and a pull request template to standardize contributions. 4 | 5 | ### Basic `.gitignore` for a Python Project 6 | 7 | A properly configured `.gitignore` file is crucial for keeping your repository clean by excluding temporary files, environment-specific configurations, and other non-essential files from being tracked by Git. 
8 | 9 | ```plaintext 10 | # Byte-compiled / optimized / DLL files 11 | __pycache__/ 12 | *.py[cod] 13 | *.so 14 | 15 | # Environment files 16 | .env 17 | 18 | # Virtual environment 19 | venv/ 20 | 21 | # IDE settings 22 | .idea/ 23 | 24 | # Log files 25 | *.log 26 | ``` 27 | 28 | **Instructions**: Save this content as `.gitignore` in the root of your Python project to automatically ignore common unnecessary files. 29 | 30 | ### README Template 31 | 32 | A well-documented `README.md` helps users and contributors understand, install, and use your project effectively. 33 | 34 | ```markdown 35 | # Project Title 36 | 37 | ## Description 38 | Short description of the project. 39 | 40 | ## Installation 41 | Steps to install the project. 42 | 43 | ## Usage 44 | How to use the project. 45 | 46 | ## Contributing 47 | Guidelines for contributing to the project. 48 | 49 | ## License 50 | Specify the project license (e.g., MIT, GPL). 51 | ``` 52 | 53 | **Instructions**: Customize each section of this `README.md` template with your project details and save it in the root of your repository. 54 | 55 | ### Simple Branch Naming Guidelines 56 | 57 | Consistent branch naming helps organize and manage changes in your repository. 58 | 59 | - **Feature branches**: `feature/<short-description>` 60 | - **Bug fixes**: `bugfix/<short-description>` 61 | - **Hotfixes**: `hotfix/<short-description>` 62 | - **Releases**: `release/v<version-number>` 63 | 64 | **Instructions**: Adopt these naming conventions for branches in your project to maintain clarity and order, for example `feature/add-jenkins-template`. 65 | 66 | ### Pull Request Template 67 | 68 | A pull request template ensures that all contributions are consistent and provide the necessary information for review. 69 | 70 | ```markdown 71 | ## Description 72 | A brief summary of the changes. 73 | 74 | ## Type of Change 75 | - [ ] New feature 76 | - [ ] Bug fix 77 | - [ ] Documentation update 78 | 79 | ## How Has This Been Tested? 80 | Describe how you've tested the changes. 81 | 82 | ## Checklist 83 | - [ ] I have followed the contribution guidelines. 84 | - [ ] My changes do not generate new warnings. 85 | ``` 86 | 87 | **Instructions**: Save this template as `.github/PULL_REQUEST_TEMPLATE.md` in your repository. It will automatically populate the description field for new pull requests. 88 | 89 | ### Enhancements and Best Practices 90 | 91 | - **Continuous Integration (CI)**: Consider setting up CI workflows using tools like GitHub Actions to automate testing and linting for every push or pull request. 92 | - **Code Reviews**: Encourage code reviews for pull requests to improve code quality and foster collaboration. 93 | - **Documentation**: Keep your documentation, including the `README.md`, up to date with project changes and releases. 94 | --------------------------------------------------------------------------------