├── .github └── FUNDING.yml ├── .gitignore ├── CONTRIBUTION.md ├── LICENSE ├── README.md ├── ansible-and-docker └── README.md ├── ansible-and-vagrant └── README.md ├── ansible-tutorial └── README.md ├── ansiblesemaphore-tutorial ├── README.md ├── docker-compose.yaml └── playbooks │ └── update-apt-packages.yaml ├── argocd-tutorial ├── README.md ├── app-test-1 │ └── app-test-1.yml └── traefik-ingressroute.yml ├── bind9-docker ├── README.md ├── assets │ ├── dns-hide-internally.excalidraw.md │ ├── dns-lookup-chain-optimized.excalidraw.md │ ├── dns-lookup-chain.excalidraw.md │ ├── dns-setup-new.excalidraw.md │ ├── dns-setup-old.excalidraw.md │ └── dns-split-horizon.excalidraw.md └── files │ ├── config │ ├── demo-clcreative-de.zone │ └── named.conf │ └── docker-compose.yml ├── bind9-terraform-tutorial └── README.md ├── cloudflare-tunnel-tutorial ├── README.md ├── assets │ ├── cloudflare-tunnel-1.png │ └── cloudflare-tunnel-2.png └── cloudflaretunnel-demo-1 │ └── docker-compose.yml ├── docker-python-debugging-vscode └── README.md ├── docker-tutorial └── README.md ├── duplicati-tutorial ├── README.md └── docker-compose.yml ├── github-actions-tutorial ├── README.md └── assets │ ├── github-actions-tutorial-asset-1.png │ ├── github-actions-tutorial-asset-2.png │ ├── github-actions-tutorial-asset-3.png │ ├── github-actions-tutorial-asset-4.png │ ├── what-are-github-actions.png │ ├── where-do-github-actions-run.png │ └── why-use-self-hosted-runners.png ├── mailcow-tutorial └── README.md ├── nextcloud-tutorial ├── README.md └── docker-compose.yml ├── nginx-reverseproxy └── README.md ├── nginxproxymanager-tutorial └── README.md ├── passbolt-tutorial ├── README.md ├── conf │ ├── headers.conf │ └── tls.conf ├── docker-compose.yml └── traefik.yaml ├── podman-docker-alternative └── README.md ├── portainer-remote-tls └── README.md ├── portainer-tutorial └── README.md ├── self-signed-certificates-in-kubernetes ├── README.md ├── nginx1-ca-secret.yml ├── nginx1-cert.yml ├── nginx1-clusterissuer.yml ├── nginx1-deploy.yml ├── nginx1-ingress.yml ├── nginx1-mw.yml ├── nginx1-ns.yml └── nginx1-svc.yml ├── serverless-tutorial ├── README.md ├── assets │ └── serverless-tutorial-architecture.png └── files │ └── youtubify-demo-2 │ ├── .gitignore │ ├── packages │ └── youtubify │ │ └── getlatestvideo │ │ ├── __main__.py │ │ ├── build.sh │ │ └── requirements.txt │ └── project.yml ├── teleport-passwordless ├── README.md ├── config │ └── teleport.yml ├── demo-server │ ├── ansible.cfg │ ├── demo-server-civo.tf │ ├── demo-server-install-docker.yml │ └── inventory └── docker-compose.yml ├── teleport-tutorial ├── README.md ├── config │ └── teleport.yml └── docker-compose.yml ├── truenasscale-traefik └── README.md ├── vagrant-tutorial └── README.md ├── watchtower-tutorial └── README.md ├── webserver-on-linux ├── README.md └── docker-compose.yml ├── windows-terminal-powerlevel10k └── README.md ├── wireguard-docker └── README.md └── wireguard-on-linux └── README.md /.github/FUNDING.yml: -------------------------------------------------------------------------------- 1 | # These are supported funding model platforms 2 | 3 | patreon: christianlempa 4 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .terraform 2 | .terraform.lock.hcl 3 | .obsidian 4 | .final 5 | .wip 6 | .credentials 7 | .env 8 | .DS_Store -------------------------------------------------------------------------------- /CONTRIBUTION.md: 
-------------------------------------------------------------------------------- 1 | ## Welcome to contributing at The Digital Life.🤝 2 | 3 | - If you’d like to contribute to this project, reach out to me on social media or [Discord](https://discord.gg/bz2SN7d), or create a pull request for the necessary changes. 4 | 5 | ## Project overview 6 | This repository contains the steps corresponding to my videos on the 7 | **[YouTube Channel: The Digital Life](https://www.youtube.com/channel/UCZNhwA1B5YqiY1nLzmM0ZRg)**. 8 | 9 | ## Issues 10 | 11 | ### Create a new issue 12 | If you spot a problem with the any of the steps, search if that issue already exists. 13 | If a relevant issue doesn't exist, you can open a new issue in the issue's tab. 14 | 15 | ### Solve an issue 16 | Search through the existing issues to find one that interests you. You can refine the search using labels as filters. 17 | Once you find an issue to work on, you are welcome to open a PR with a fix. 18 | 19 | ## Your PR is merged! 20 | Once reviewed and accepted, your PR would be merged and available publicly. 21 | Congratulations 🎉🎉 and thank you for playing your part ✨. 22 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2022 Christian Lempa 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Christian's "Videos" 2 | 3 | [![Welcome](https://cnd-prod-1.s3.us-west-004.backblazeb2.com/new-banner4-scaled-for-github.jpg)](https://youtu.be/apgp9egIKK8) 4 | 5 | **Hey, there!** 6 | 7 | **I’m Christian, and I’m passionate about creating educational tech content for IT Pros and Homelab nerds.** 8 | 9 | This Repository **Video** is my video documentation. Here you'll find code-snippets, technical documentation, templates, command reference, and whatever is needed for all my YouTube Videos. 10 | 11 | > :warning: Be aware, products can change over time. I do my best to keep up with the latest changes and releases, but please understand that this won’t always be the case. 12 | 13 | I created them as free resources to be used in your specific use cases. 
If you're searching for detailed, in-depth tutorials on some tools or technologies, check out my [YouTube Channel](https://www.youtube.com/@christianlempa). 14 | 15 | ## Contribution 16 | 17 | If you’d like to contribute to this project, reach out to me on social media or [Discord](https://christianlempa.de/discord), or create a pull request for the necessary changes. 18 | 19 | ## Other Resources 20 | 21 | - [Dotfiles](https://github.com/christianlempa/dotfiles) - My personal configuration files on macOS 22 | - [Boilerplates](https://github.com/christianlempa/boilerplates) - Templates for various projects like Docker, K8S, Ansible, etc 23 | - [Cheat-Sheets](https://github.com/christianlempa/cheat-sheets) - Command Reference for various tools and technologies 24 | - [Videos](https://github.com/christianlempa/videos) - Documentation and project files for all my video tutorials on YouTube 25 | - [Homelab](https://github.com/christianlempa/homelab) - This is my entire Homelab documentation, and configurations for infrastructure, applications, networking, and more. 26 | 27 | ## Support me 28 | 29 | Creating high-quality videos and valuable resources that are accessible to everyone, free of charge, is a huge challenge. With your contribution, I can dedicate more time and effort into the creation process, which ultimately enhances the quality of the content. So, all your support, by becoming a member, truly makes a significant impact on what I do. And you’ll also get some cool benefits and perks in return, as a recognition of your support. 30 | 31 | Remember, ***supporting me is entirely optional.*** Your choice to become a member or not won't change your access to my videos and resources. You are also welcome to reach out to me on Discord, if you have any questions or feedback. 32 | 33 | [https://www.patreon.com/christianlempa](https://www.patreon.com/christianlempa) 34 | -------------------------------------------------------------------------------- /ansible-and-docker/README.md: -------------------------------------------------------------------------------- 1 | # Automate your Docker deployments with Ansible 2 | Ansible Docker Deployments for your Servers made easily! In this video, I'll show you how I've used Ansible to automate my webserver deployments. We're deploying Portainer, Watchtower, and a WordPress Blog. 3 | 4 | We will use the free and open-source software Ansible by Red Hat. 5 | 6 | Project Homepage: https://www.ansible.com/ 7 | Documentation: https://docs.ansible.com/ 8 | 9 | 10 | Video: https://youtu.be/CQk9AOPh5pw 11 | 12 | ## Prerequisites 13 | 14 | - Linux, macOS or Windows 10, 11 with WSL2 (Debian or Ubuntu) running Ansible 15 | - Linux Server for testing 16 | 17 | *To set up Linux, macOS or Windows 10, 11 with WSL2 running Ansible, follow my [Ansible Tutorial](https://github.com/xcad2k/videos/tree/main/ansible-tutorial)* 18 | 19 | ## 1. Installation and Configuration 20 | 21 | ### 1.1. Install the Docker Galaxy Extension for Ansible 22 | 23 | Now, we can install the Ansible Galaxy Extension to manage Docker containers on remote servers. Simply execute the following command. 24 | 25 | ```bash 26 | ansible-galaxy collection install community.docker 27 | ``` 28 | 29 | ### 1.2. Configure our Ansible Environment 30 | 31 | Let's start configuring our Ansible Environment. Because we need to set up an ansible.cfg and inventory file in our project folder to tell Ansible how to connect to our remote server. 
The inventory file is a simple text file that just contains the IP address of our server. 32 | 33 | **Example Ansible Configuration**: 34 | ```conf 35 | [defaults] 36 | inventory = inventory 37 | host_key_checking = False # optional: removes the SSH prompt 38 | deprecation_warnings=False # optional: removes deprecation warning in playbooks 39 | remote_user = 40 | private_key_file = 41 | ``` 42 | 43 | ### 1.3. (optional) Test Ansible Connection 44 | 45 | If everything is configured correctly, you can test the connection with the following command. 46 | 47 | ```bash 48 | ansible all -m ping 49 | ``` 50 | 51 | ## 2. Install Docker on our remote Server 52 | 53 | If we have configured our Ansible Environment, we can install all necessary components on our remote server. Because I installed a fresh new Ubuntu 20.04 LTS server, we need to install Docker first. And also the Docker Python SDK is required by Ansible to run containers on remote servers. Therefore, you have two options to install everything on the remote server. 54 | 55 | **Option 1: Manual installation of all components** 56 | 57 | When you want to install the components manually, just have a look at the Docker Installation Documentation. You also need to install Python, the Python Package Manager, and the Docker SDK. 58 | 59 | **Option 2: Install Docker with an Ansible Playbook** 60 | 61 | Since we already have configured Ansible to manage our remote server, why shouldn't we use it? Because I've already prepared an Ansible Playbook, you can just download and run it. You will find this Playbook in my GitHub Repository Ansible-Boilerplates, and it will install Docker and the Python Docker SDK for you. 62 | 63 | ## 3. Run Portainer with Ansible 64 | 65 | ### 3.1. Write Ansible Playbook 66 | 67 | Now we're ready to deploy our first Docker container with Ansible! Create a new Ansible Playbook YAML file in your project folder, that should look like this. 68 | 69 | ```yml 70 | - hosts: all 71 | become: yes 72 | tasks: 73 | 74 | - name: Deploy Portainer 75 | community.docker.docker_container: 76 | name: portainer 77 | image: portainer/portainer-ce 78 | ports: 79 | - "9000:9000" 80 | - "8000:8000" 81 | volumes: 82 | - /var/run/docker.sock:/var/run/docker.sock 83 | - portainer_data:/data 84 | restart_policy: always 85 | ``` 86 | 87 | ### 3.2. Run Ansible Playbook 88 | 89 | To run the Ansible Playbook, simply execute the following command in the shell. 90 | 91 | ```bash 92 | ansible-playbook 93 | ``` 94 | 95 | ## 4. Run Watchtower with Ansible 96 | 97 | ### 4.1. Write Ansible Playbook 98 | 99 | To run our second Docker container, we simply can just add another task inside the same Ansible Playbook. Because Ansible will take care of which Containers are already deployed and if there are any changes to be made. And it only re-deploys containers if there are changes being made. 100 | 101 | ```yml 102 | - name: Deploy Watchtower 103 | community.docker.docker_container: 104 | name: watchtower 105 | image: containrrr/watchtower 106 | command: --schedule "0 0 4 * * *" --debug 107 | volumes: 108 | - /var/run/docker.sock:/var/run/docker.sock 109 | restart_policy: always 110 | ``` 111 | 112 | Simply run the Playbook with the same command above again. You can see that the task "Portainer" is not executed again, only our new task "Watchtower". 113 | 114 | ## 5. Deploy a Wordpress Blog with Ansible and Docker 115 | 116 | ### 5.1. Write Ansible Playbook 117 | 118 | Let's also deploy two more containers, to automate the deployment of my webserver. 
Because I want to run a WordPress Blog on this server, we execute the following Playbook. 119 | 120 | We also need to create a new Network before running the Containers. Otherwise, WordPress will not be able to connect to the Database Container. Therefore we also need to attach them to the same Network. 121 | 122 | ```yml 123 | - hosts: all 124 | become: yes 125 | tasks: 126 | 127 | - name: Create Network 128 | community.docker.docker_network: 129 | name: wordpress 130 | 131 | - name: Deploy Wordpress 132 | community.docker.docker_container: 133 | name: wordpress 134 | image: wordpress:latest 135 | ports: 136 | - "80:80" 137 | networks: 138 | - name: wordpress 139 | volumes: 140 | - wordpress:/var/www/html 141 | env: 142 | WORDPRESS_DB_HOST: "db" 143 | WORDPRESS_DB_USER: "exampleuser" 144 | WORDPRESS_DB_PASSWORD: "examplepass" 145 | WORDPRESS_DB_NAME: "exampledb" 146 | restart_policy: always 147 | 148 | - name: Deploy MYSQL 149 | community.docker.docker_container: 150 | name: db 151 | image: mysql:5.7 152 | networks: 153 | - name: wordpress 154 | volumes: 155 | - db:/var/lib/mysql 156 | env: 157 | MYSQL_DATABASE: "exampledb" 158 | MYSQL_USER: "exampleuser" 159 | MYSQL_PASSWORD: "examplepass" 160 | MYSQL_RANDOM_ROOT_PASSWORD: '1' 161 | restart_policy: always 162 | ``` -------------------------------------------------------------------------------- /ansible-and-vagrant/README.md: -------------------------------------------------------------------------------- 1 | # Automate your virtual lab environment with Ansible and Vagrant 2 | I show you how you can easily automate a full lab environment containing multiple virtual machines using Vagrant and Ansible. 3 | 4 | We will use the free and open-source software Ansible by Red Hat, and the free and open-source software Vagrant by Hashicorp. 5 | 6 | **Ansible**: 7 | Project Homepage: https://www.ansible.com/ 8 | Documentation: https://docs.ansible.com/ 9 | 10 | **Vagrant**: 11 | Project Homepage: https://www.vagrantup.com/ 12 | Documentation: https://www.vagrantup.com/docså 13 | Find Boxes: https://app.vagrantup.com/boxes/search 14 | 15 | Video: https://youtu.be/7Di0twyxw1M 16 | 17 | ## Prerequisites 18 | 19 | - Linux, macOS or Windows 10, 11 with WSL2 (Debian or Ubuntu) 20 | - Linux Server for testing 21 | 22 | ## Installation and Configuration 23 | 24 | 1. Use Vagrant with Ansible on Windows 10, 11 25 | 26 | If you’re running Vagrant on Linux, you can skip this part. But if you’re running Vagrant on Windows with VirtualBox or Hyper-V, you have a problem. Because Ansible is not running on Windows, you will need to run Vagrant with Ansible scripts on a Linux machine. Luckily, you can do this pretty easily with the Windows Subsystem for Linux (WSL2). Because the trick is to install Vagrant on your WSL machine and on your Windows 10, too. Note, that it needs to be installed exactly in the same version and it’s still considered a beta version at this time. 27 | 28 | To install Vagrant on WSL just simply download the latest version at https://releases.hashicorp.com/vagrant/. 29 | 30 | ```bash 31 | wget https://releases.hashicorp.com/vagrant/2.2.10/vagrant_2.2.10_x86_64.deb 32 | 33 | sudo apt install ./vagrant_2.2.10_x86_64.deb 34 | ``` 35 | 36 | Next, you need to add a few environment variables according to https://www.vagrantup.com/docs/other/wsl.html. If you’re running bash, simply add them to your .bashrc file. On zsh you need to place them in your .zshrc file. 
37 | 38 | ``` 39 | VAGRANT_WSL_WINDOWS_ACCESS_USER_HOME_PATH=/mnt/c/Users/ 40 | VAGRANT_WSL_ENABLE_WINDOWS_ACCESS=1 41 | ``` 42 | 43 | On Windows 10, 11 with Hyper-V, set the default provider with the following environment variable. 44 | 45 | ``` 46 | VAGRANT_DEFAULT_PROVIDER=hyperv 47 | ``` 48 | 49 | Check if Vagrant is running on your WSL2 and can communicate to the Hypervisor on your Windows 10 by executing the vagrant command. If and error shows up, you probably haven’t loaded the environment variables correctly. 50 | 51 | 2. Install Ansible 52 | 53 | ``` 54 | sudo apt-add-repository ppa:ansible/ansible 55 | 56 | sudo apt update 57 | 58 | sudo apt install ansible 59 | ``` 60 | 61 | ## Set up a Vagrant machine 62 | 63 | Now let’s start with the creation of our Vagrantfile. If you’re not familiar with Vagrant yet, you should check out my tutorial about Vagrant, to learn the fundamentals. When you’re running Vagrant with VirtualBox, the configuration will look slightly different, because I’m using Hyper-V as my default provider. Any Virtualbox fans can just comment out the HyperV subconfiguration part. 64 | 65 | **Example Vagrantfile**: 66 | ```ruby 67 | # -*- mode: ruby -*- 68 | # vi: set ft=ruby : 69 | Vagrant.configure("2") do |config| 70 | config.vm.define "master" do |subconfig| 71 | subconfig.vm.box = "generic/ubuntu2004" 72 | subconfig.vm.hostname = "master" 73 | subconfig.vm.provider "hyperv" 74 | subconfig.vm.network "public_network", bridge: "BRIDGE" 75 | 76 | subconfig.vm.provider "hyperv" do |h| 77 | h.enable_virtualization_extensions = false 78 | h.linked_clone = false 79 | h.vmname = "ubuntu_cluster_master" 80 | end 81 | 82 | subconfig.vm.provision "ansible" do |a| 83 | a.verbose = "v" 84 | a.playbook = "master_playbook.yaml" 85 | end 86 | end 87 | end 88 | ``` 89 | 90 | If you later want to add more than one virtual machine, it’s useful to create a sub config for every single machine. 91 | 92 | ```ruby 93 | config.vm.define "master" do |subconfig| 94 | ``` 95 | 96 | We can automatically attach our virtual machine(s) to a virtual switch in Hyper-V with the following statement. Because the “BRIDGE” Interface is the name of my virtual switch that connects the virtual machine to my physical network adapter. 97 | 98 | ```ruby 99 | subconfig.vm.network "public_network", bridge: "BRIDGE" 100 | ``` 101 | 102 | This h configuration is part of the provider-specific configuration of the Hyper-V on Windows 10. For more information check out this blogpost. 103 | 104 | ```ruby 105 | subconfig.vm.provider "hyperv" do |h| 106 | h.enable_virtualization_extensions = false 107 | h.linked_clone = false 108 | h.vmname = "ubuntu_cluster_master" 109 | end 110 | ``` 111 | 112 | Now we will create another subconfiguration to provision the machine with an ansible-playbook. This will contain all ansible instructions to provision our virtual machine. Vagrant automatically executes the ansible-playbook once the virtual machine is created the first time. 113 | 114 | ```ruby 115 | subconfig.vm.provision "ansible" do |a| 116 | a.verbose = "v" 117 | a.playbook = "master_playbook.yaml" 118 | end 119 | ``` 120 | 121 | ## Create Ansible Playbook 122 | 123 | Now, we need to create your ansible-playbook that is used to provision our virtual machine. I’ve created an example of a playbook that will automatically install Docker. Of course, you can simply change the playbook to whatever your need is. By the way, you find useful examples on my GitHub repository ansible-boilerplates. 
Of course, you can simply use and modify them in a variety of different setups. 124 | 125 | Because Vagrant completely handles the provisioning and authentication part, you don’t need to enter any passwords or public SSH keys. 126 | 127 | ```yml 128 | - hosts: all 129 | become: yes 130 | tasks: 131 | - name: install prerequisites 132 | apt: 133 | name: 134 | - apt-transport-https 135 | - ca-certificates 136 | - curl 137 | - gnupg-agent 138 | - software-properties-common 139 | - name: add apt-key 140 | apt_key: 141 | url: https://download.docker.com/linux/ubuntu/gpg 142 | - name: add docker repo 143 | apt_repository: 144 | repo: deb https://download.docker.com/linux/ubuntu focal stable 145 | - name: install docker 146 | apt: 147 | name: 148 | - docker-ce 149 | - docker-ce-cli 150 | - containerd.io 151 | update_cache: yes 152 | - name: add userpermissions 153 | shell: "usermod -aG docker vagrant" 154 | ``` 155 | 156 | ## Start the Lab Environment 157 | 158 | Now, start the virtual lab environment. Just execute `vagrant up`, and Vagrant will create the virtual machine, install the box image, and provision it with the ansible-playbook. After that, you should see that all tasks are applied. 159 | 160 | ```bash 161 | vagrant up 162 | ``` 163 | 164 | ### Connect to the Lab Environment 165 | 166 | ``` 167 | vagrant ssh 168 | ``` 169 | 170 | -------------------------------------------------------------------------------- /ansible-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Simple automation for all your Linux servers with Ansible 2 | We take a look at how to automate Linux servers with Ansible. We also cover authentication via SSH user/password and private/public keys you should use in a production environment. 3 | 4 | We will use the free and open-source software Ansible by Red Hat. 5 | 6 | Project Homepage: https://www.ansible.com/ 7 | Documentation: https://docs.ansible.com/ 8 | 9 | Video: https://youtu.be/uR1_hlHxvhc 10 | 11 | ## Prerequisites 12 | 13 | - Linux, macOS or Windows 10, 11 with WSL2 14 | - Linux Server for testing 15 | 16 | ## Installation and Configuration 17 | 18 | 1. Install Ansible on your machine 19 | 20 | First, you need to install Ansible on your system. Just go to https://docs.ansible.com/ and follow the installation instructions for your Linux distribution. You can also set up this in a virtual machine or on Windows Subsystem for Linux. 21 | 22 | 2. Configure ansible.cfg 23 | 24 | Before we can connect to other machines, we need to set up a general Ansible configuration file. This is very helpful, otherwise, we would need to enter all settings as options in our ansible commands. You can either edit the general Ansible Configuration file under `/etc/ansible/ansible.cfg` or create a new `ansible.cfg` in a project folder. By default, Ansible will check if there is an `ansible.cfg` file in your current folder which will overwrite the default settings. 25 | 26 | **Example ansible.cfg**: 27 | ```config 28 | [defaults] 29 | inventory = inventory 30 | host_key_checking = False 31 | ``` 32 | 33 | 3. Configure your inventory 34 | 35 | You should always specify an **inventory** file, that you can place in your project folder. This inventory file contains all IP addresses and also configuration variables of the machines you want to control. 
36 | 37 | **Example inventory**: 38 | ``` 39 | [nodes] 40 | 192.168.0.139 41 | 192.168.0.140 42 | 43 | [master] 44 | 192.168.0.138 45 | 46 | [master:vars] 47 | ansible_ssh_user=master 48 | ansible_ssh_pass=master 49 | 50 | [nodes:vars] 51 | ansible_ssh_user=vagrant 52 | ansible_ssh_pass=vagrant 53 | ``` 54 | 55 | 4. Configure authentication 56 | 57 | Next, you should think about how Ansible authenticates to your machines. There are generally two different methods. The first method is via usernames and passwords, which you can simply define either in the general `ansible.cfg` file. If some machines in your inventory require a different username and/or password you can also configure that separately in your inventory file by using the `[:vars]` section. In this case, you also need to install the `sshpass` package on your Ansible machine and add `host_key_checking = False` in the default section of your ansible.cfg file. 58 | 59 | *Note it is not the best and most secure method and should not be used in production environments! In any production environment, you should create a corresponding private and public key pair for Ansible and upload the public key on all machines.* 60 | 61 | 5. (optinonal) More secure authentication with private and public ssh keys 62 | 63 | **Example Inventory**: 64 | ``` 65 | [nodes] 66 | 192.168.0.139 67 | 192.168.0.140 68 | 69 | [master] 70 | 192.168.0.138 71 | 72 | [nodes:vars] 73 | ansible_ssh_user=christian 74 | ansible_ssh_private_key_file=~/.ssh/ansible_id_rsa 75 | 76 | [master:vars] 77 | ansible_ssh_user=master 78 | ansible_ssh_private_key_file=~/.ssh/ansible_id_rsa 79 | ``` 80 | 81 | ## Test Connection 82 | 83 | Now that we have configured Ansible and can connect to our machines we can execute some test commands. This command will ping all machines that are in our inventory file. 84 | 85 | ```bash 86 | ansible all -m ping 87 | ``` 88 | 89 | *You can also change the pattern from `all` to a specific section you have defined in your inventory file. Let’s only ping the nodes, and exclude the master.* 90 | 91 | ```bash 92 | ansible nodes -m ping 93 | ``` 94 | 95 | ## Ansible Playbooks 96 | 97 | Ansible playbooks can describe even complex automation tasks simply and effectively. It uses the YAML (Yet another markup language) standard, which is easily readable by humans and interpreted by machines. You can define a state where you want all your machines to be in. Ansible will take care of the rest and perform the necessary actions on the machines. 98 | 99 | First, you need to start with a pattern of machines, you want to define a state for. You can also choose which user you want to use to execute any actions and if Ansible needs to execute commands with root privileges. For every pattern, you can describe one or more tasks to call Ansible modules. In the following example, we’re installing some software packages on the master server and nodes. 100 | 101 | **Example Playbook**: 102 | ```yml 103 | --- 104 | - hosts: nodes 105 | become: yes 106 | tasks: 107 | - name: make sure net-tools are installed on all nodes 108 | apt: 109 | name: net-tools 110 | state: present 111 | ``` 112 | 113 | If you want to execute an Ansible playbook you need to use the command `ansible-playbook`. Note, that for installing software packages you need to become a root user. This can be done by the option `become`, which requires you to provide a **sudo** password either via the default or inventory configuration, or arguments in the command. 
114 | 115 | ```bash 116 | ansible-playbook playbook.yaml -K 117 | ``` 118 | 119 | You can see that Ansible executed all tasks successfully, but it didn’t always change something. This is because some packages are already installed on the machines. Ansible only installs a package when it’s not installed already. -------------------------------------------------------------------------------- /ansiblesemaphore-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # This web UI for Ansible is so damn useful 2 | 3 | Ansible Semaphore is the perfect tool for automating your servers, applications, and clean up tasks with Ansible Playbooks. In this video, I'll quickly show you how to install it and some of the stuff you can do with it. If you're new to Ansible, I recommend checking out some of my older videos first to get some foundational knowledge. Let's get started automating your Homelab with Ansible Semaphore! 4 | 5 | Video: https://youtu.be/NyOSoLn5T5U 6 | 7 | 8 | --- 9 | ## Prerequisites 10 | 11 | - Linux Server with Docker installed 12 | - Some basic knowledge about Ansible 13 | 14 | --- 15 | ## Install Ansible-Semaphore 16 | 17 | Ansible-Semaphore can be easily installed on a Linux machine using a snap command, Linux packages, or Docker. To install Ansible-Semaphore, you can use the following Docker Compose file, as an example. 18 | 19 | ```yaml 20 | --- 21 | volumes: 22 | semaphore-mysql: 23 | driver: local 24 | services: 25 | mysql: 26 | image: mysql:8.0 27 | hostname: mysql 28 | volumes: 29 | - semaphore-mysql:/var/lib/mysql 30 | environment: 31 | - MYSQL_RANDOM_ROOT_PASSWORD=yes 32 | - MYSQL_DATABASE=semaphore 33 | - MYSQL_USER=semaphore 34 | - MYSQL_PASSWORD=secret-password # change! 35 | restart: unless-stopped 36 | semaphore: 37 | container_name: ansiblesemaphore 38 | image: semaphoreui/semaphore:v2.8.90 39 | user: "${UID}:${GID}" 40 | ports: 41 | - 3000:3000 42 | environment: 43 | - SEMAPHORE_DB_USER=semaphore 44 | - SEMAPHORE_DB_PASS=secret-password # change! 45 | - SEMAPHORE_DB_HOST=mysql 46 | - SEMAPHORE_DB_PORT=3306 47 | - SEMAPHORE_DB_DIALECT=mysql 48 | - SEMAPHORE_DB=semaphore 49 | - SEMAPHORE_PLAYBOOK_PATH=/tmp/semaphore/ 50 | - SEMAPHORE_ADMIN_PASSWORD=secret-admin-password # change! 51 | - SEMAPHORE_ADMIN_NAME=admin 52 | - SEMAPHORE_ADMIN_EMAIL=admin@localhost 53 | - SEMAPHORE_ADMIN=admin 54 | - SEMAPHORE_ACCESS_KEY_ENCRYPTION= # add to your access key encryption ! 55 | - ANSIBLE_HOST_KEY_CHECKING=false # (optional) change to true if you want to enable host key checking 56 | volumes: 57 | - ./inventory/:/inventory:ro 58 | - ./authorized-keys/:/authorized-keys:ro 59 | - ./config/:/etc/semaphore:rw 60 | restart: unless-stopped 61 | depends_on: 62 | - mysql 63 | ``` 64 | 65 | Modify the file by changing the database password, adding a strong admin password, and generating a new access key encryption. Ansible Semaphore stores sensitive information such as SSH keys or passwords in the database, so it's important to use a secret access key. 66 | 67 | You can generate this with the following command. 68 | 69 | ```sh 70 | head -c32 /dev/urandom | base64 71 | ``` 72 | 73 | Start the container with a `docker compose up -d` command. 
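If you want to make sure everything came up correctly, a quick sanity check could look like this (assuming the service name `semaphore` and port `3000` from the Compose file above):

```sh
# check that both containers are running
docker compose ps

# follow the Semaphore logs until the web server reports it is listening
docker compose logs -f semaphore
```

After that, the web UI should be reachable at `http://your-server-ip:3000`, where you can log in with the admin credentials you set in the environment variables.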
74 | 75 | --- 76 | ## References 77 | 78 | - [Ansible-Semaphore Documentation](https://docs.ansible-semaphore.com) -------------------------------------------------------------------------------- /ansiblesemaphore-tutorial/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | volumes: 3 | semaphore-mysql: 4 | driver: local 5 | services: 6 | mysql: 7 | image: mysql:8.0 8 | hostname: mysql 9 | volumes: 10 | - semaphore-mysql:/var/lib/mysql 11 | environment: 12 | - MYSQL_RANDOM_ROOT_PASSWORD=yes 13 | - MYSQL_DATABASE=semaphore 14 | - MYSQL_USER=semaphore 15 | - MYSQL_PASSWORD=secret-password # change! 16 | restart: unless-stopped 17 | semaphore: 18 | container_name: ansiblesemaphore 19 | image: semaphoreui/semaphore:v2.8.90 20 | user: "${UID}:${GID}" 21 | ports: 22 | - 3000:3000 23 | environment: 24 | - SEMAPHORE_DB_USER=semaphore 25 | - SEMAPHORE_DB_PASS=secret-password # change! 26 | - SEMAPHORE_DB_HOST=mysql 27 | - SEMAPHORE_DB_PORT=3306 28 | - SEMAPHORE_DB_DIALECT=mysql 29 | - SEMAPHORE_DB=semaphore 30 | - SEMAPHORE_PLAYBOOK_PATH=/tmp/semaphore/ 31 | - SEMAPHORE_ADMIN_PASSWORD=secret-admin-password # change! 32 | - SEMAPHORE_ADMIN_NAME=admin 33 | - SEMAPHORE_ADMIN_EMAIL=admin@localhost 34 | - SEMAPHORE_ADMIN=admin 35 | - SEMAPHORE_ACCESS_KEY_ENCRYPTION= # add to your access key encryption ! 36 | - ANSIBLE_HOST_KEY_CHECKING=false # (optional) change to true if you want to enable host key checking 37 | volumes: 38 | - ./inventory/:/inventory:ro 39 | - ./authorized-keys/:/authorized-keys:ro 40 | - ./config/:/etc/semaphore:rw 41 | restart: unless-stopped 42 | depends_on: 43 | - mysql -------------------------------------------------------------------------------- /ansiblesemaphore-tutorial/playbooks/update-apt-packages.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | 4 | become: true 5 | tasks: 6 | - name: update apt packages 7 | apt: 8 | upgrade: yes 9 | update_cache: yes -------------------------------------------------------------------------------- /argocd-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # ArgoCD Tutorial 2 | We start taking a look at CI-CD, especially the CD part with the free and open-source Argo CD. In this video, we will automate the deployment of a Nginx Web server on my Kubernetes Cluster. 3 | 4 | We will use the free and open-source software ArgoCD. 5 | 6 | Project Homepage: https://argoproj.github.io/cd/ 7 | Documentation: https://argo-cd.readthedocs.io/en/stable/ 8 | 9 | Video: https://youtu.be/Yb3_4PZX0B0 10 | 11 | ## Prerequisites 12 | 13 | - Kubernetes Cluster running k3s (v1.23.6 or newer) 14 | - Traefik (v2.5 or newer), Cert-Manager with ClusterIssuer configured 15 | - Kubectl configured 16 | - Public Git Repository on GitHub 17 | 18 | *You can still use ArgoCD on other Kubernetes Clusteres like AKS, EKS, GKE, etc. and other Git services, however, this is out-of-scope of this Tutorial!* 19 | 20 | ## Install and configure ArgoCD 21 | 22 | ### Install ArgoCD on Kubernetes 23 | 24 | Create a new namespace `argocd` and deploy ArgoCD with the web UI included. 25 | 26 | ```bash 27 | kubectl create namespace argocd 28 | kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml 29 | ``` 30 | 31 | ### Expose ArgoCD with Traefik 32 | 33 | Create a new IngressRoute object, follow the template described in `traefik-ingressroute.yml`. 
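Once you have adjusted the hostname and certificate settings in that template to match your environment, you can apply it with `kubectl`. This is just a sketch, assuming the file sits in your current working directory:

```bash
kubectl apply -f traefik-ingressroute.yml
```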
34 | 35 | ### Disable internal TLS 36 | 37 | First, to avoid internal redirection loops from HTTP to HTTPS, the API server should be run with TLS disabled. 38 | 39 | Edit the --insecure flag in the argocd-server command of the argocd-server deployment. 40 | 41 | ```bash 42 | kubectl -n argocd edit deployment.apps argocd-server 43 | ``` 44 | 45 | Change the container command from: 46 | ```yml 47 | ... 48 | containers: 49 | - command: 50 | - argocd-server 51 | - --staticassets 52 | - /shared/app 53 | ... 54 | ``` 55 | 56 | To: 57 | ```yml 58 | ... 59 | containers: 60 | - command: 61 | - argocd-server 62 | - --insecure 63 | - --staticassets 64 | - /shared/app 65 | ... 66 | ``` 67 | 68 | ### Log in to the ArgoCD web interface 69 | 70 | Log in to the ArgoCD web interface `https:///` by using the default username `admin` and the password, collected by the following command. 71 | 72 | ```bash 73 | kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d 74 | ``` 75 | 76 | ## Add a Git Repository 77 | 78 | Add your Git Repository in the **Settings -> Repositories** menu. 79 | 80 | ## Create an Application in ArgoCD 81 | 82 | Create an Application in ArgoCD to deploy your Git Repository in Kubernetes -------------------------------------------------------------------------------- /argocd-tutorial/app-test-1/app-test-1.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: app-test-1 5 | --- 6 | apiVersion: apps/v1 7 | kind: Deployment 8 | metadata: 9 | name: app-test-1-deploy 10 | namespace: app-test-1 11 | spec: 12 | replicas: 1 13 | selector: 14 | matchLabels: 15 | app: app-test-1 16 | template: 17 | metadata: 18 | labels: 19 | app: app-test-1 20 | spec: 21 | containers: 22 | - name: app-test-1 23 | image: nginx 24 | ports: 25 | - name: web 26 | containerPort: 80 27 | --- 28 | apiVersion: v1 29 | kind: Service 30 | metadata: 31 | name: app-test-1-svc 32 | namespace: app-test-1 33 | labels: 34 | app: app-test-1 35 | spec: 36 | type: ClusterIP 37 | ports: 38 | - name: http 39 | port: 80 40 | targetPort: 80 41 | protocol: TCP 42 | selector: 43 | app: app-test-1 44 | --- 45 | apiVersion: networking.k8s.io/v1 46 | kind: Ingress 47 | metadata: 48 | name: app-test-1-ingress 49 | namespace: app-test-1 50 | spec: 51 | rules: 52 | - host: "app-test-1.kube-demo-1.clcreative.home" 53 | http: 54 | paths: 55 | - path: / 56 | pathType: Prefix 57 | backend: 58 | service: 59 | name: app-test-1-svc 60 | port: 61 | number: 80 62 | tls: 63 | - hosts: 64 | - app-test-1.kube-demo-1.clcreative.home 65 | secretName: app-test-1-tls-secret 66 | --- 67 | apiVersion: cert-manager.io/v1 68 | kind: Certificate 69 | metadata: 70 | name: app-test-1-cert 71 | namespace: app-test-1 72 | spec: 73 | secretName: app-test-1-tls-secret 74 | issuerRef: 75 | name: ssl-issuer 76 | kind: ClusterIssuer 77 | dnsNames: 78 | - app-test-1.kube-demo-1.clcreative.home -------------------------------------------------------------------------------- /argocd-tutorial/traefik-ingressroute.yml: -------------------------------------------------------------------------------- 1 | apiVersion: traefik.containo.us/v1alpha1 2 | kind: IngressRoute 3 | metadata: 4 | name: argocd-server 5 | namespace: argocd 6 | spec: 7 | entryPoints: 8 | - websecure 9 | routes: 10 | - kind: Rule 11 | match: Host(`argocd.kube-demo-1.clcreative.home`) 12 | priority: 10 13 | services: 14 | - name: argocd-server 15 | port: 80 16 | - kind: Rule 
17 | match: Host(`argocd.kube-demo-1.clcreative.home`) && Headers(`Content-Type`, `application/grpc`) 18 | priority: 11 19 | services: 20 | - name: argocd-server 21 | port: 80 22 | scheme: h2c 23 | tls: 24 | secretName: argocd-tls-secret 25 | --- 26 | apiVersion: cert-manager.io/v1 27 | kind: Certificate 28 | metadata: 29 | name: argocd-cert 30 | namespace: argocd 31 | spec: 32 | secretName: argocd-tls-secret 33 | issuerRef: 34 | name: ssl-issuer 35 | kind: ClusterIssuer 36 | dnsNames: 37 | - argocd.kube-demo-1.clcreative.home 38 | -------------------------------------------------------------------------------- /bind9-docker/README.md: -------------------------------------------------------------------------------- 1 | # You want a real DNS Server at home? (bind9 + docker) 2 | 3 | In this Tutorial, we set up a free and open-source DNS server for your home labs. We will deploy bind9 on an Ubuntu Linux server running Docker and configure it to be an authoritative DNS server in my local network. And we also configure forwarders and access control lists to protect our internal networks. #homeserver #dns #docker 4 | 5 | Video: https://github.com/ChristianLempa/videos/tree/main/bind9-docker 6 | 7 | --- 8 | ## Prerequisites 9 | 10 | Before you can deploy Bind9 in Docker, you need a Linux Server that has **Docker**, and **Docker-Compose** installed. 11 | 12 | For further References, how to use **Docker**, and **Docker-Compose**, check out my previous videos: 13 | - [How to use Docker and migrate your existing Apps to your Linux Server?](https://www.youtube.com/watch?v=y0GGQ2F2tvs) 14 | - [Docker-Compose Tutorial](https://www.youtube.com/watch?v=qH4ZKfwbO8w) 15 | 16 | *You can still install Bind9 on a Linux Server that is not running Docker, however, this may require different commands!* 17 | 18 | --- 19 | ## Define your domain 20 | 21 | You have multiple options of defining a domain, you can use a so-called "fake-domain", which is not publicly resolvable, such as "your-domain.home". However, a "fake-domain", will not allow you to issue trusted SSL certificates, in this case it makes more sense to use a real "public domain". 22 | 23 | ### Example of public domain 24 | 25 | [Public Domain DNS](assets/dns-setup-new.excalidraw.md#drawing) 26 | 27 | ### Split-horizon DNS 28 | 29 | In computer networking, split-horizon DNS (also known as split-view DNS, split-brain DNS, or split DNS) is the facility of a Domain Name System (DNS) implementation to provide different sets of DNS information, usually selected by the source address of the DNS request. 30 | 31 | In our example, we can use the internal Bind9 Server to resolve to only internal IPs, while an external DNS Server might resolve to external IPs. 32 | 33 | [Split Horizon DNS](assets/dns-split-horizon.excalidraw#drawing) 34 | 35 | --- 36 | ## Install Bind9 in Docker 37 | 38 | ### Prepare Ubuntu Server 39 | 40 | Edit the config file of systemd-resolved, `/etc/systemd/resolved.conf`. 41 | 42 | Uncomment the line `DNSStubListener`, and set it to **no**. 43 | 44 | ```conf 45 | [Resolve] 46 | ... 47 | DNSStubListener=no 48 | ... 49 | ``` 50 | 51 | Restart the sytemd-resolved service using `sudo systemctl restart systemd-resolved`. 52 | 53 | ### Create Docker-Compose file 54 | 55 | Copy the example `docker-compose.yml` file in your project directory, and make sure you replace the `container_name` value with your desired container name. 
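Before starting the stack, you may also want to create the folders that the Compose file below bind-mounts into the container (the directory names are taken from its volumes section):

```sh
mkdir -p config cache records
```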
56 | 57 | **Example `docker-compose.yml`:** 58 | 59 | ```yaml 60 | version: "3" 61 | 62 | services: 63 | bind9: 64 | container_name: dns-demo-2 65 | image: ubuntu/bind9:latest 66 | environment: 67 | - BIND9_USER=root 68 | - TZ=Europe/Berlin 69 | ports: 70 | - "53:53/tcp" 71 | - "53:53/udp" 72 | volumes: 73 | - ./config:/etc/bind 74 | - ./cache:/var/cache/bind 75 | - ./records:/var/lib/bind 76 | restart: unless-stopped 77 | ``` 78 | 79 | ### Create the main config file 80 | 81 | Copy the example `named.conf` file in the `./config/` folder of your project directory, and make sure you replace the values, with your desired configuration. 82 | 83 | **Example `named.conf`:** 84 | 85 | ```conf 86 | acl internal { 87 | 192.168.0.0/24; 88 | }; 89 | 90 | options { 91 | forwarders { 92 | 1.1.1.1; 93 | 1.0.0.1; 94 | }; 95 | allow-query { internal; }; 96 | }; 97 | 98 | zone "demo.clcreative.de" IN { 99 | type master; 100 | file "/etc/bind/demo-clcreative-de.zone"; 101 | }; 102 | ``` 103 | 104 | ### Prepare the zone file 105 | 106 | Copy the example `demo-clcreative-de.zone` file in the `./config/` folder of your project directory, and make sure you replace the values, with your desired configuration. 107 | 108 | **Example `demo-clcreative-de.zone`:** 109 | 110 | ```conf 111 | $TTL 2d 112 | 113 | $ORIGIN demo.clcreative.de. 114 | 115 | @ IN SOA ns.demo.clcreative.de. info.clcreative.de ( 116 | 2022121900 ; serial 117 | 12h ; refresh 118 | 15m ; retry 119 | 3w ; expire 120 | 2h ) ; minimum TTL 121 | 122 | IN NS ns.demo.clcreative.de. 123 | 124 | ns IN A 10.20.3.4 125 | 126 | srv-demo-2 IN A 10.20.3.4 127 | *.srv-demo-2 IN A 10.20.3.4 128 | ``` 129 | 130 | ### Add your DNS Records 131 | 132 | According to the following examples, you can add additional DNS Records, defined in the [IANA's DNS Resource Records TYPEs](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4). 133 | 134 | 135 | ### Start the container 136 | 137 | To start the container, execute the following command in the project directory. 138 | 139 | ```sh 140 | docker-compose up -d 141 | ``` 142 | 143 | --- 144 | ## Test Bind9 145 | 146 | To test your bind9 dns server, you can use the "nslookup" command on your local machine, followed by the IP address of your dns server. 147 | 148 | ```sh 149 | nslookup name-to-resolve.tld your-dns-server-ip 150 | ``` 151 | 152 | --- 153 | ## DNS Lookup Chain 154 | 155 | [DNS Lookup Chain](assets/dns-lookup-chain-optimized.excalidraw.md#drawing) 156 | 157 | --- 158 | ## References 159 | 160 | - [Bind9 Configuration and Zone Files](https://bind9.readthedocs.io/en/v9_18_10/chapter3.html) 161 | - [IANA's DNS Resource Records TYPEs](https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-4) 162 | -------------------------------------------------------------------------------- /bind9-docker/assets/dns-setup-old.excalidraw.md: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | excalidraw-plugin: parsed 4 | tags: [excalidraw] 5 | 6 | --- 7 | ==⚠ Switch to EXCALIDRAW VIEW in the MORE OPTIONS menu of this document. ⚠== 8 | 9 | 10 | # Text Elements 11 | PC ^8Wp2zIR5 12 | 13 | clcreative.home ^QNPZ3J9l 14 | 15 | *fake-domain ^uQqbJ9X5 16 | 17 | 1. DNS Request to: 18 | server1.clcreative.home ^A1c7gce8 19 | 20 | Sophos XG Firewall 21 | (primary DNS Server) ^8PUeCKcc 22 | 23 | server1.clcreative.home -> 10.20.0.2 24 | server2.clcreative.home -> 10.20.0.3 25 | server3.clcreative.home -> 10.20.0.4 26 | ... 
^sbu7ojQX 27 | 28 | 2. DNS Answer: 29 | server1.clcreative.home is at 30 | 10.20.0.2 ^6FLhZ8Ql 31 | 32 | %% 33 | # Drawing 34 | ```json 35 | { 36 | "type": "excalidraw", 37 | "version": 2, 38 | "source": "https://excalidraw.com", 39 | "elements": [ 40 | { 41 | "id": "ct-lXaH6ul2zobiDwAR8M", 42 | "type": "rectangle", 43 | "x": -498.4879150390625, 44 | "y": -173.67869567871094, 45 | "width": 184, 46 | "height": 154, 47 | "angle": 0, 48 | "strokeColor": "#000000", 49 | "backgroundColor": "transparent", 50 | "fillStyle": "hachure", 51 | "strokeWidth": 1, 52 | "strokeStyle": "solid", 53 | "roughness": 1, 54 | "opacity": 100, 55 | "groupIds": [], 56 | "roundness": { 57 | "type": 3 58 | }, 59 | "seed": 147995302, 60 | "version": 191, 61 | "versionNonce": 1882057062, 62 | "isDeleted": false, 63 | "boundElements": [ 64 | { 65 | "type": "text", 66 | "id": "8Wp2zIR5" 67 | } 68 | ], 69 | "updated": 1672245678324, 70 | "link": null, 71 | "locked": false 72 | }, 73 | { 74 | "id": "8Wp2zIR5", 75 | "type": "text", 76 | "x": -419.9879150390625, 77 | "y": -109.17869567871094, 78 | "width": 27, 79 | "height": 25, 80 | "angle": 0, 81 | "strokeColor": "#000000", 82 | "backgroundColor": "transparent", 83 | "fillStyle": "hachure", 84 | "strokeWidth": 1, 85 | "strokeStyle": "solid", 86 | "roughness": 1, 87 | "opacity": 100, 88 | "groupIds": [], 89 | "roundness": null, 90 | "seed": 238035622, 91 | "version": 131, 92 | "versionNonce": 1628194810, 93 | "isDeleted": false, 94 | "boundElements": null, 95 | "updated": 1672245678324, 96 | "link": null, 97 | "locked": false, 98 | "text": "PC", 99 | "rawText": "PC", 100 | "fontSize": 20, 101 | "fontFamily": 1, 102 | "textAlign": "center", 103 | "verticalAlign": "middle", 104 | "baseline": 17, 105 | "containerId": "ct-lXaH6ul2zobiDwAR8M", 106 | "originalText": "PC" 107 | }, 108 | { 109 | "id": "RcfatBkMgLmPJfQE7lDPV", 110 | "type": "rectangle", 111 | "x": -154.56329345703125, 112 | "y": -345.2449645996094, 113 | "width": 294, 114 | "height": 48, 115 | "angle": 0, 116 | "strokeColor": "#000000", 117 | "backgroundColor": "transparent", 118 | "fillStyle": "hachure", 119 | "strokeWidth": 1, 120 | "strokeStyle": "solid", 121 | "roughness": 1, 122 | "opacity": 100, 123 | "groupIds": [], 124 | "roundness": { 125 | "type": 3 126 | }, 127 | "seed": 134832570, 128 | "version": 119, 129 | "versionNonce": 2074395386, 130 | "isDeleted": false, 131 | "boundElements": [ 132 | { 133 | "type": "text", 134 | "id": "QNPZ3J9l" 135 | } 136 | ], 137 | "updated": 1672245556901, 138 | "link": null, 139 | "locked": false 140 | }, 141 | { 142 | "id": "QNPZ3J9l", 143 | "type": "text", 144 | "x": -81.06329345703125, 145 | "y": -333.7449645996094, 146 | "width": 147, 147 | "height": 25, 148 | "angle": 0, 149 | "strokeColor": "#000000", 150 | "backgroundColor": "transparent", 151 | "fillStyle": "hachure", 152 | "strokeWidth": 1, 153 | "strokeStyle": "solid", 154 | "roughness": 1, 155 | "opacity": 100, 156 | "groupIds": [], 157 | "roundness": null, 158 | "seed": 2070240102, 159 | "version": 111, 160 | "versionNonce": 272138662, 161 | "isDeleted": false, 162 | "boundElements": null, 163 | "updated": 1672245556901, 164 | "link": null, 165 | "locked": false, 166 | "text": "clcreative.home", 167 | "rawText": "clcreative.home", 168 | "fontSize": 20, 169 | "fontFamily": 1, 170 | "textAlign": "center", 171 | "verticalAlign": "middle", 172 | "baseline": 17, 173 | "containerId": "RcfatBkMgLmPJfQE7lDPV", 174 | "originalText": "clcreative.home" 175 | }, 176 | { 177 | "id": "uQqbJ9X5", 178 | "type": "text", 179 | 
"x": 48.5782470703125, 180 | "y": -292.8172836303711, 181 | "width": 126, 182 | "height": 25, 183 | "angle": 0, 184 | "strokeColor": "#000000", 185 | "backgroundColor": "transparent", 186 | "fillStyle": "hachure", 187 | "strokeWidth": 1, 188 | "strokeStyle": "solid", 189 | "roughness": 1, 190 | "opacity": 100, 191 | "groupIds": [], 192 | "roundness": null, 193 | "seed": 1619096550, 194 | "version": 255, 195 | "versionNonce": 1595374074, 196 | "isDeleted": false, 197 | "boundElements": null, 198 | "updated": 1672245517382, 199 | "link": null, 200 | "locked": false, 201 | "text": "*fake-domain", 202 | "rawText": "*fake-domain", 203 | "fontSize": 20, 204 | "fontFamily": 1, 205 | "textAlign": "left", 206 | "verticalAlign": "top", 207 | "baseline": 18, 208 | "containerId": null, 209 | "originalText": "*fake-domain" 210 | }, 211 | { 212 | "id": "A1c7gce8", 213 | "type": "text", 214 | "x": -262.71075439453125, 215 | "y": -209.74597930908203, 216 | "width": 218, 217 | "height": 50, 218 | "angle": 0, 219 | "strokeColor": "#364fc7", 220 | "backgroundColor": "transparent", 221 | "fillStyle": "hachure", 222 | "strokeWidth": 1, 223 | "strokeStyle": "solid", 224 | "roughness": 1, 225 | "opacity": 100, 226 | "groupIds": [], 227 | "roundness": null, 228 | "seed": 101246566, 229 | "version": 122, 230 | "versionNonce": 715388090, 231 | "isDeleted": false, 232 | "boundElements": null, 233 | "updated": 1672245660441, 234 | "link": null, 235 | "locked": false, 236 | "text": "1. DNS Request to:\nserver1.clcreative.home", 237 | "rawText": "1. DNS Request to:\nserver1.clcreative.home", 238 | "fontSize": 20, 239 | "fontFamily": 1, 240 | "textAlign": "left", 241 | "verticalAlign": "top", 242 | "baseline": 43, 243 | "containerId": null, 244 | "originalText": "1. DNS Request to:\nserver1.clcreative.home" 245 | }, 246 | { 247 | "id": "vENAlFKZFTKliiqHE9jiD", 248 | "type": "rectangle", 249 | "x": 221.8028564453125, 250 | "y": -177.42134857177734, 251 | "width": 267, 252 | "height": 157, 253 | "angle": 0, 254 | "strokeColor": "#364fc7", 255 | "backgroundColor": "transparent", 256 | "fillStyle": "hachure", 257 | "strokeWidth": 1, 258 | "strokeStyle": "solid", 259 | "roughness": 1, 260 | "opacity": 100, 261 | "groupIds": [], 262 | "roundness": { 263 | "type": 3 264 | }, 265 | "seed": 13249210, 266 | "version": 179, 267 | "versionNonce": 1748950010, 268 | "isDeleted": false, 269 | "boundElements": [ 270 | { 271 | "type": "text", 272 | "id": "8PUeCKcc" 273 | } 274 | ], 275 | "updated": 1672245685639, 276 | "link": null, 277 | "locked": false 278 | }, 279 | { 280 | "id": "8PUeCKcc", 281 | "type": "text", 282 | "x": 252.8028564453125, 283 | "y": -123.92134857177734, 284 | "width": 205, 285 | "height": 50, 286 | "angle": 0, 287 | "strokeColor": "#364fc7", 288 | "backgroundColor": "transparent", 289 | "fillStyle": "hachure", 290 | "strokeWidth": 1, 291 | "strokeStyle": "solid", 292 | "roughness": 1, 293 | "opacity": 100, 294 | "groupIds": [], 295 | "roundness": null, 296 | "seed": 284675194, 297 | "version": 187, 298 | "versionNonce": 1688762534, 299 | "isDeleted": false, 300 | "boundElements": null, 301 | "updated": 1672245685639, 302 | "link": null, 303 | "locked": false, 304 | "text": "Sophos XG Firewall\n(primary DNS Server)", 305 | "rawText": "Sophos XG Firewall\n(primary DNS Server)", 306 | "fontSize": 20, 307 | "fontFamily": 1, 308 | "textAlign": "center", 309 | "verticalAlign": "middle", 310 | "baseline": 43, 311 | "containerId": "vENAlFKZFTKliiqHE9jiD", 312 | "originalText": "Sophos XG Firewall\n(primary DNS Server)" 
313 | }, 314 | { 315 | "id": "dMeqEpByghXrm1UnGRfXz", 316 | "type": "arrow", 317 | "x": -273.66510009765625, 318 | "y": -140.49652862548828, 319 | "width": 441.8294677734375, 320 | "height": 1.3543724010337996, 321 | "angle": 0, 322 | "strokeColor": "#364fc7", 323 | "backgroundColor": "transparent", 324 | "fillStyle": "hachure", 325 | "strokeWidth": 1, 326 | "strokeStyle": "solid", 327 | "roughness": 1, 328 | "opacity": 100, 329 | "groupIds": [], 330 | "roundness": { 331 | "type": 2 332 | }, 333 | "seed": 1126676282, 334 | "version": 226, 335 | "versionNonce": 1214368250, 336 | "isDeleted": false, 337 | "boundElements": null, 338 | "updated": 1672245675861, 339 | "link": null, 340 | "locked": false, 341 | "points": [ 342 | [ 343 | 0, 344 | 0 345 | ], 346 | [ 347 | 441.8294677734375, 348 | 1.3543724010337996 349 | ] 350 | ], 351 | "lastCommittedPoint": null, 352 | "startBinding": null, 353 | "endBinding": null, 354 | "startArrowhead": null, 355 | "endArrowhead": "arrow" 356 | }, 357 | { 358 | "id": "sbu7ojQX", 359 | "type": "text", 360 | "x": 267.66729736328125, 361 | "y": 0.21431732177734375, 362 | "width": 354, 363 | "height": 100, 364 | "angle": 0, 365 | "strokeColor": "#000000", 366 | "backgroundColor": "transparent", 367 | "fillStyle": "hachure", 368 | "strokeWidth": 1, 369 | "strokeStyle": "solid", 370 | "roughness": 1, 371 | "opacity": 100, 372 | "groupIds": [], 373 | "roundness": null, 374 | "seed": 877019814, 375 | "version": 198, 376 | "versionNonce": 369797114, 377 | "isDeleted": false, 378 | "boundElements": null, 379 | "updated": 1672245686624, 380 | "link": null, 381 | "locked": false, 382 | "text": "server1.clcreative.home -> 10.20.0.2\nserver2.clcreative.home -> 10.20.0.3\nserver3.clcreative.home -> 10.20.0.4\n...", 383 | "rawText": "server1.clcreative.home -> 10.20.0.2\nserver2.clcreative.home -> 10.20.0.3\nserver3.clcreative.home -> 10.20.0.4\n...", 384 | "fontSize": 20, 385 | "fontFamily": 1, 386 | "textAlign": "left", 387 | "verticalAlign": "top", 388 | "baseline": 93, 389 | "containerId": null, 390 | "originalText": "server1.clcreative.home -> 10.20.0.2\nserver2.clcreative.home -> 10.20.0.3\nserver3.clcreative.home -> 10.20.0.4\n..." 
391 | }, 392 | { 393 | "id": "be2XeRwCobt1FFa-MH4D_", 394 | "type": "arrow", 395 | "x": 175.46734619140625, 396 | "y": -46.47083282470703, 397 | "width": 450.04534912109375, 398 | "height": 0.912872314453125, 399 | "angle": 0, 400 | "strokeColor": "#000000", 401 | "backgroundColor": "transparent", 402 | "fillStyle": "hachure", 403 | "strokeWidth": 1, 404 | "strokeStyle": "solid", 405 | "roughness": 1, 406 | "opacity": 100, 407 | "groupIds": [], 408 | "roundness": { 409 | "type": 2 410 | }, 411 | "seed": 555607270, 412 | "version": 60, 413 | "versionNonce": 1746381370, 414 | "isDeleted": false, 415 | "boundElements": null, 416 | "updated": 1672245627692, 417 | "link": null, 418 | "locked": false, 419 | "points": [ 420 | [ 421 | 0, 422 | 0 423 | ], 424 | [ 425 | -450.04534912109375, 426 | -0.912872314453125 427 | ] 428 | ], 429 | "lastCommittedPoint": null, 430 | "startBinding": null, 431 | "endBinding": null, 432 | "startArrowhead": null, 433 | "endArrowhead": "arrow" 434 | }, 435 | { 436 | "id": "6FLhZ8Ql", 437 | "type": "text", 438 | "x": -211.58990478515625, 439 | "y": -29.91046905517578, 440 | "width": 277, 441 | "height": 75, 442 | "angle": 0, 443 | "strokeColor": "#000000", 444 | "backgroundColor": "transparent", 445 | "fillStyle": "hachure", 446 | "strokeWidth": 1, 447 | "strokeStyle": "solid", 448 | "roughness": 1, 449 | "opacity": 100, 450 | "groupIds": [], 451 | "roundness": null, 452 | "seed": 104509030, 453 | "version": 71, 454 | "versionNonce": 1469404070, 455 | "isDeleted": false, 456 | "boundElements": null, 457 | "updated": 1672245659278, 458 | "link": null, 459 | "locked": false, 460 | "text": "2. DNS Answer:\nserver1.clcreative.home is at\n10.20.0.2", 461 | "rawText": "2. DNS Answer:\nserver1.clcreative.home is at\n10.20.0.2", 462 | "fontSize": 20, 463 | "fontFamily": 1, 464 | "textAlign": "left", 465 | "verticalAlign": "top", 466 | "baseline": 68, 467 | "containerId": null, 468 | "originalText": "2. 
DNS Answer:\nserver1.clcreative.home is at\n10.20.0.2" 469 | }, 470 | { 471 | "id": "WZ1bDSHjRiw-ULTwl21MK", 472 | "type": "line", 473 | "x": 29.40802001953125, 474 | "y": -273.7756881713867, 475 | "width": 0, 476 | "height": 0, 477 | "angle": 0, 478 | "strokeColor": "#000000", 479 | "backgroundColor": "transparent", 480 | "fillStyle": "hachure", 481 | "strokeWidth": 1, 482 | "strokeStyle": "solid", 483 | "roughness": 1, 484 | "opacity": 100, 485 | "groupIds": [], 486 | "roundness": { 487 | "type": 2 488 | }, 489 | "seed": 1011689978, 490 | "version": 16, 491 | "versionNonce": 1545489530, 492 | "isDeleted": true, 493 | "boundElements": null, 494 | "updated": 1672245426753, 495 | "link": null, 496 | "locked": false, 497 | "points": [ 498 | [ 499 | 0, 500 | 0 501 | ], 502 | [ 503 | 0, 504 | 0 505 | ] 506 | ], 507 | "lastCommittedPoint": [ 508 | 0, 509 | 0 510 | ], 511 | "startBinding": null, 512 | "endBinding": null, 513 | "startArrowhead": null, 514 | "endArrowhead": null 515 | }, 516 | { 517 | "id": "BMa9Q3BE", 518 | "type": "text", 519 | "x": 30.7545166015625, 520 | "y": -294.49151611328125, 521 | "width": 11, 522 | "height": 25, 523 | "angle": 0, 524 | "strokeColor": "#000000", 525 | "backgroundColor": "transparent", 526 | "fillStyle": "hachure", 527 | "strokeWidth": 1, 528 | "strokeStyle": "solid", 529 | "roughness": 1, 530 | "opacity": 100, 531 | "groupIds": [], 532 | "roundness": null, 533 | "seed": 929566714, 534 | "version": 3, 535 | "versionNonce": 1141627258, 536 | "isDeleted": true, 537 | "boundElements": null, 538 | "updated": 1672245425565, 539 | "link": null, 540 | "locked": false, 541 | "text": "", 542 | "rawText": "", 543 | "fontSize": 20, 544 | "fontFamily": 1, 545 | "textAlign": "center", 546 | "verticalAlign": "middle", 547 | "baseline": 17, 548 | "containerId": "RcfatBkMgLmPJfQE7lDPV", 549 | "originalText": "" 550 | }, 551 | { 552 | "id": "V9mo2CLv", 553 | "type": "text", 554 | "x": 295.0306396484375, 555 | "y": -184.94701385498047, 556 | "width": 11, 557 | "height": 25, 558 | "angle": 0, 559 | "strokeColor": "#364fc7", 560 | "backgroundColor": "transparent", 561 | "fillStyle": "hachure", 562 | "strokeWidth": 1, 563 | "strokeStyle": "solid", 564 | "roughness": 1, 565 | "opacity": 100, 566 | "groupIds": [], 567 | "roundness": null, 568 | "seed": 2026670758, 569 | "version": 3, 570 | "versionNonce": 746232102, 571 | "isDeleted": true, 572 | "boundElements": null, 573 | "updated": 1672245550626, 574 | "link": null, 575 | "locked": false, 576 | "text": "", 577 | "rawText": "", 578 | "fontSize": 20, 579 | "fontFamily": 1, 580 | "textAlign": "center", 581 | "verticalAlign": "middle", 582 | "baseline": 17, 583 | "containerId": "vENAlFKZFTKliiqHE9jiD", 584 | "originalText": "" 585 | }, 586 | { 587 | "id": "aioGMxdi", 588 | "type": "text", 589 | "x": 358.39459228515625, 590 | "y": -155.9478530883789, 591 | "width": 76, 592 | "height": 25, 593 | "angle": 0, 594 | "strokeColor": "#364fc7", 595 | "backgroundColor": "transparent", 596 | "fillStyle": "hachure", 597 | "strokeWidth": 1, 598 | "strokeStyle": "solid", 599 | "roughness": 1, 600 | "opacity": 100, 601 | "groupIds": [], 602 | "roundness": null, 603 | "seed": 1723889338, 604 | "version": 10, 605 | "versionNonce": 1895024358, 606 | "isDeleted": true, 607 | "boundElements": null, 608 | "updated": 1672245528746, 609 | "link": null, 610 | "locked": false, 611 | "text": "Sophos ", 612 | "rawText": "Sophos ", 613 | "fontSize": 20, 614 | "fontFamily": 1, 615 | "textAlign": "left", 616 | "verticalAlign": "top", 617 | "baseline": 18, 
618 | "containerId": null, 619 | "originalText": "Sophos " 620 | }, 621 | { 622 | "id": "jp4PvWFT", 623 | "type": "text", 624 | "x": 202.09564208984375, 625 | "y": -193.33792877197266, 626 | "width": 205, 627 | "height": 50, 628 | "angle": 0, 629 | "strokeColor": "#364fc7", 630 | "backgroundColor": "transparent", 631 | "fillStyle": "hachure", 632 | "strokeWidth": 1, 633 | "strokeStyle": "solid", 634 | "roughness": 1, 635 | "opacity": 100, 636 | "groupIds": [], 637 | "roundness": null, 638 | "seed": 1663446010, 639 | "version": 67, 640 | "versionNonce": 1657315110, 641 | "isDeleted": true, 642 | "boundElements": null, 643 | "updated": 1672245540312, 644 | "link": null, 645 | "locked": false, 646 | "text": "Sophos XG Firewall\n(primary DNS Server)", 647 | "rawText": "Sophos XG Firewall\n(primary DNS Server)", 648 | "fontSize": 20, 649 | "fontFamily": 1, 650 | "textAlign": "left", 651 | "verticalAlign": "top", 652 | "baseline": 43, 653 | "containerId": null, 654 | "originalText": "Sophos XG Firewall\n(primary DNS Server)" 655 | }, 656 | { 657 | "id": "VQqFhoBA", 658 | "type": "text", 659 | "x": 280.39459228515625, 660 | "y": -239.9478530883789, 661 | "width": 11, 662 | "height": 25, 663 | "angle": 0, 664 | "strokeColor": "#364fc7", 665 | "backgroundColor": "transparent", 666 | "fillStyle": "hachure", 667 | "strokeWidth": 1, 668 | "strokeStyle": "solid", 669 | "roughness": 1, 670 | "opacity": 100, 671 | "groupIds": [], 672 | "roundness": null, 673 | "seed": 105185402, 674 | "version": 3, 675 | "versionNonce": 629334330, 676 | "isDeleted": true, 677 | "boundElements": null, 678 | "updated": 1672245548683, 679 | "link": null, 680 | "locked": false, 681 | "text": "", 682 | "rawText": "", 683 | "fontSize": 20, 684 | "fontFamily": 1, 685 | "textAlign": "left", 686 | "verticalAlign": "top", 687 | "baseline": 18, 688 | "containerId": null, 689 | "originalText": "" 690 | }, 691 | { 692 | "id": "qWJWx2gA", 693 | "type": "text", 694 | "x": 244.39459228515625, 695 | "y": -203.9478530883789, 696 | "width": 11, 697 | "height": 25, 698 | "angle": 0, 699 | "strokeColor": "#364fc7", 700 | "backgroundColor": "transparent", 701 | "fillStyle": "hachure", 702 | "strokeWidth": 1, 703 | "strokeStyle": "solid", 704 | "roughness": 1, 705 | "opacity": 100, 706 | "groupIds": [], 707 | "roundness": null, 708 | "seed": 1370283258, 709 | "version": 3, 710 | "versionNonce": 1622879674, 711 | "isDeleted": true, 712 | "boundElements": null, 713 | "updated": 1672245551941, 714 | "link": null, 715 | "locked": false, 716 | "text": "", 717 | "rawText": "", 718 | "fontSize": 20, 719 | "fontFamily": 1, 720 | "textAlign": "left", 721 | "verticalAlign": "top", 722 | "baseline": 18, 723 | "containerId": null, 724 | "originalText": "" 725 | } 726 | ], 727 | "appState": { 728 | "theme": "dark", 729 | "viewBackgroundColor": "#ffffff", 730 | "currentItemStrokeColor": "#000000", 731 | "currentItemBackgroundColor": "transparent", 732 | "currentItemFillStyle": "hachure", 733 | "currentItemStrokeWidth": 1, 734 | "currentItemStrokeStyle": "solid", 735 | "currentItemRoughness": 1, 736 | "currentItemOpacity": 100, 737 | "currentItemFontFamily": 1, 738 | "currentItemFontSize": 20, 739 | "currentItemTextAlign": "left", 740 | "currentItemStartArrowhead": null, 741 | "currentItemEndArrowhead": "arrow", 742 | "scrollX": 702.1082763671875, 743 | "scrollY": 472.95269775390625, 744 | "zoom": { 745 | "value": 1 746 | }, 747 | "currentItemRoundness": "round", 748 | "gridSize": null, 749 | "colorPalette": {} 750 | }, 751 | "files": {} 752 | } 753 | ``` 
754 | %% -------------------------------------------------------------------------------- /bind9-docker/files/config/demo-clcreative-de.zone: -------------------------------------------------------------------------------- 1 | $TTL 2d 2 | 3 | $ORIGIN demo.clcreative.de. 4 | 5 | @ IN SOA ns.demo.clcreative.de. info.clcreative.de. ( 6 | 2022122800 ; serial 7 | 12h ; refresh 8 | 15m ; retry 9 | 3w ; expire 10 | 2h ; minimum ttl 11 | ) 12 | 13 | IN NS ns.demo.clcreative.de. 14 | 15 | ns IN A 10.20.3.4 16 | 17 | ; -- add dns records below 18 | 19 | srv-prod-1 IN A 10.20.0.2 20 | srv-prod-2 IN A 10.20.0.3 21 | 22 | ; -- demo servers 23 | 24 | srv-demo-2 IN A 10.20.3.4 25 | *.srv-demo-2 IN A 10.20.3.4 26 | 27 | -------------------------------------------------------------------------------- /bind9-docker/files/config/named.conf: -------------------------------------------------------------------------------- 1 | acl internal { 2 | 10.10.0.0/16; 3 | 10.11.0.0/16; 4 | 10.20.0.0/16; 5 | 10.50.0.0/16; 6 | }; 7 | 8 | options { 9 | forwarders { 10 | 1.1.1.1; 11 | 1.0.0.1; 12 | }; 13 | allow-query { internal; }; 14 | }; 15 | 16 | zone "demo.clcreative.de" IN { 17 | type master; 18 | file "/etc/bind/demo-clcreative-de.zone"; 19 | }; 20 | -------------------------------------------------------------------------------- /bind9-docker/files/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | 3 | services: 4 | bind9: 5 | container_name: dns-demo-2 6 | image: ubuntu/bind9:latest 7 | environment: 8 | - BIND9_USER=root 9 | - TZ=Europe/Berlin 10 | ports: 11 | - "53:53/tcp" 12 | - "53:53/udp" 13 | volumes: 14 | - ./config:/etc/bind 15 | - ./cache:/var/cache/bind 16 | - ./records:/var/lib/bind 17 | restart: unless-stopped 18 | -------------------------------------------------------------------------------- /bind9-terraform-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Automate local DNS with Bind and Terraform 2 | 3 | Doing manual configuration is a thing of the past! With Terraform, I’m now managing all my local DNS entries fully automatically. In this Tutorial, I’m going to show you exactly how I’ve done that. If you’re new to this, I recommend checking out my other tutorials about Terraform, and [Bind in Docker](../bind9-docker) first. 4 | 5 | Video: https://youtu.be/eSUtsDUTzuc 6 | 7 | 8 | ## Prerequisites 9 | 10 | - Local DNS Server using Bind9 11 | - Some basic knowledge about Terraform 12 | 13 | 14 | ## Prepare the DNS Server 15 | 16 | Usually, Bind9 is configured in static config files, such as the `named.conf`, and the zone config files, that contain all the IP addresses, and hostnames. 17 | 18 | To prepare BIND9 to be managed with dynamic updates, you need to generate a TSIG (Transaction Signature) key. It is used to verify the authenticity of DNS messages and prevent unauthorized updates to DNS records. 19 | 20 | When using TSIG authentication, the algorithm to use for HMAC. Valid values are `hmac-md5`, `hmac-sha1`, `hmac-sha256` or `hmac-sha512`. 21 | 22 | > Although still in common usage, the HMAC-MD5 digest is no longer considered very secure. HMAC-SHA256 is preferred. 
23 | 24 | 25 | ### Generate a new TSIG key 26 | 27 | To create an HMAC256 TSIG key with the `tsig-keygen` tool, you can use the following command: 28 | 29 | ``` 30 | tsig-keygen -a hmac-sha256 31 | ``` 32 | 33 | 34 | ### Include the TSIG key in the Bind9 configuration 35 | 36 | You can store the HMAC256 TSIG key directly in the `named.conf`, or in a separate file, typically named `tsig.key`, or `named.conf.key` and include it in your BIND configuration using the `include` statement: 37 | 38 | ``` 39 | include ""; 40 | ``` 41 | 42 | Replace `` with the actual path to the `tsig.key` file. 43 | 44 | Make sure that the file is readable only by the BIND process and the user running the BIND process, and not readable by other users on the system. This will help to ensure the security of your TSIG key. 45 | 46 | ### Allow dynamic updates to your DNS zone 47 | 48 | When configuring a BIND9 DNS server to allow dynamic updates using a TSIG key, you need to use the `update-policy` statement in your zone, to specify which keys are allowed to perform updates and which record types they are allowed to update. 49 | 50 | ```conf 51 | zone "example.com" { 52 | type master; 53 | file "example.com.zone"; 54 | update-policy { grant tsig-key zonesub any; }; 55 | }; 56 | ``` 57 | 58 | ### Restart your DNS Server 59 | 60 | Restart your DNS Server, to make these changes active. 61 | 62 | --- 63 | ## Manage DNS Records with Terraform 64 | 65 | Managing DNS records with Terraform allows you to define your infrastructure as code and automate the process of creating, updating, and deleting DNS records. Terraform is a popular tool for infrastructure as code because it provides a simple and consistent way to manage resources across multiple cloud providers and on-premises infrastructure. 66 | 67 | ### Set-up the DNS Terraform Provider 68 | 69 | First, create a new directory for your Terraform configuration and create a new file named `provider.tf`. In this file, add the following code: 70 | 71 | ```tf 72 | terraform { 73 | 74 | required_providers { 75 | dns = { 76 | source = "hashicorp/dns" 77 | version = "3.2.3" 78 | } 79 | } 80 | 81 | } 82 | 83 | provider "dns" { 84 | update { 85 | server = "your-dns-server-ip" 86 | key_name = "tsig-key." 87 | key_algorithm = "hmac-sha256" 88 | key_secret = var.tsig_key 89 | } 90 | } 91 | ``` 92 | 93 | This code sets up the required provider for the HashiCorp DNS Provider and defines the provider configuration for dynamic updates with a TSIG key. Replace `your-dns-server-ip` with the IP address of your DNS server, and replace `tsig-key.` with the name of your TSIG key. 94 | 95 | Next, add the following code to the `provider.tf` file to define a sensitive variable for your TSIG key secret: 96 | 97 | ```tf 98 | variable "tsig_key" { 99 | type = string 100 | sensitive = true 101 | } 102 | ``` 103 | 104 | This code defines a sensitive variable named `tsig_key` of type `string`, which will be used to store the secret value for your TSIG key. 105 | 106 | Save the `provider.tf` file and run `terraform init` in your terminal to initialize the provider and download any necessary dependencies. 107 | 108 | ### Define your DNS resource(s) 109 | 110 | Next, define the DNS resource(s) that you want to create or update with dynamic updates, in a new Terraform file, such as `dns.tf`. 111 | 112 | This code defines a new `dns_a_record_set` resource named `example` that creates an A record for `example.com`. 
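Since the resource block itself isn't printed above, here is a minimal sketch of what `dns.tf` could contain — the attribute values are placeholders, and the dynamic-update behaviour itself comes from the provider `update` block configured earlier in `provider.tf`:

```tf
# Sketch only: zone name and TTL are placeholders, not values from the video.
# One A record is created per entry in var.a_records (defined below).
resource "dns_a_record_set" "example" {
  for_each = var.a_records

  zone      = "example.com."   # note the trailing dot on the zone name
  name      = each.key         # e.g. "server1"
  addresses = [each.value]     # e.g. "10.0.0.1"
  ttl       = 300
}
```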
The `dynamic` block enables dynamic updates for the A record using the TSIG key, and the `update` block specifies the server IP address and TSIG key details. 113 | 114 | Note that the `for_each` attribute specifies a variable named `a_records`, which should be defined elsewhere in your Terraform configuration. This variable should be a map of DNS record names to IP addresses, like this: 115 | 116 | ```tf 117 | variable "a_records" { 118 | type = map(string) 119 | default = { 120 | "server1" = "10.0.0.1" 121 | "server2" = "10.0.0.2" 122 | } 123 | } 124 | ``` 125 | 126 | This code defines a map variable named `a_records` that maps the `server1` and `server2` DNS record names to their respective IP addresses. 127 | 128 | Save the `dns.tf` file and run `terraform plan` to preview the changes that Terraform will make to your DNS configuration. 129 | 130 | ### Apply your Terraform configuration 131 | 132 | If the preview looks correct, run `terraform apply` to apply your Terraform configuration and create or update your DNS resources with dynamic updates using the TSIG key. 133 | 134 | Terraform will prompt you to confirm the changes before applying them. If you're sure that you want to proceed, type `yes` and hit enter. 135 | 136 | --- 137 | ## Apply changes to the static config 138 | 139 | ### Sync dynamic updates using RNDC 140 | 141 | To sync dynamic DNS records to the static configuration in BIND using the `rndc` tool, you can follow these general steps: 142 | 143 | Use the `rndc sync` command to synchronize the dynamic updates with the static configuration in BIND. This will update the zone file with the latest DNS records from the dynamic update database. 144 | 145 | ``` 146 | rndc sync example.com 147 | ``` 148 | 149 | Replace `example.com` with the name of the zone that you want to synchronize. 150 | 151 | Note that the `rndc sync` command only synchronizes the dynamic updates with the static configuration. It does not apply any changes to the DNS server or reload the configuration. 152 | 153 | ### Reload the BIND configuration 154 | 155 | Use the `rndc reload` command to reload the BIND configuration and apply the changes. 156 | 157 | ``` 158 | rndc reload example.com 159 | ``` 160 | 161 | Replace `example.com` with the name of the zone that you want to reload. 162 | 163 | This will reload the BIND configuration and apply any changes that were made to the zone file. 164 | 165 | Overall, using the `rndc` tool to sync dynamic DNS records to the static configuration in BIND can help to ensure that your DNS zone files are up-to-date and consistent with the latest DNS records. 166 | 167 | --- 168 | ## References 169 | 170 | - [Docs overview | hashicorp/dns | Terraform Registry](https://registry.terraform.io/providers/hashicorp/dns/latest/docs) -------------------------------------------------------------------------------- /cloudflare-tunnel-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # How to use Cloudflare Tunnel in your Homelab (even with Traefik) 2 | 3 | This is a tutorial on how to use Cloudflare Tunnels in a Homelab to make internal applications easily accessible on the public internet in a secure way. The video covers the benefits of using Cloudflare Tunnels for personal websites, homelab dashboards, or any web-based application that needs to be accessed from outside the network without complicated router configuration. 
And it also discusses the security of Cloudflare Tunnels and shares best practices for exposing administrative interfaces. #homelab #cloudflare #cloudflaretunnel 4 | 5 | Video: https://youtu.be/yMmxw-DZ5Ec 6 | 7 | --- 8 | ## Security Considerations 9 | 10 | When using Cloudflare Tunnels, it is important to be cautious about exposing administrative interfaces to the public internet. It is best to only expose services that you need to access from outside, and set up proper authentication with user access control. 11 | 12 | It is also important to keep in mind that when you proxy your traffic through Cloudflare, they are terminating all your TLS connections, which means they can view all your traffic in plain text. This is a potential risk, even if it is a secured connection. To mitigate this risk, you can use authentication providers and other security measures. 13 | 14 | --- 15 | ## Prerequisites 16 | 17 | • A Cloudflare Account (free to sign up) 18 | • A public domain registered either directly in Cloudflare or with a different provider 19 | 20 | --- 21 | ## Add existing Domain to Cloudflare 22 | 23 | Go to the Cloudflare Dashboard and select "Add Site". Enter your domain name and follow the instructions to change the name server addresses from the ones of your DNS provider to the addresses of the Cloudflare's name servers. Note that this process may take up to 24 hours. 24 | 25 | --- 26 | ## Deploy Cloudflare Tunnel 27 | 28 | The Cloudflare Tunnel application will create a secure reverse tunnel from inside your homelab to the Cloudflare's infrastructure. 29 | 30 | ![](assets/cloudflare-tunnel-1.png) 31 | 32 | ### Create new Cloudflare Tunnel 33 | 34 | Go to the Zero Trust Dashboard and select "Set Up Cloudflare Tunnel". Give the tunnel a name and click "Next". Follow the instructions to install the Cloudflare Connector program in your network. This program will initialize a secured reverse tunnel from your internal environment to the Cloudflare's Servers. The Cloudflare Connector can be installed on Windows, MacOS, Linux, or a Docker Container. Once the program is installed, execute the command provided to make the tunnel active. 35 | 36 | 37 | ### Create Docker Compose file 38 | 39 | Create a new project folder and create a new file called `docker-compose.yml` and add the following content: 40 | 41 | ```yaml 42 | version: '3.9' 43 | 44 | networks: 45 | frontend: 46 | external: true 47 | backend: 48 | external: true 49 | 50 | services: 51 | cloudflaretunnel: 52 | container_name: cloudflaretunnel-demo-1 53 | image: cloudflare/cloudflared:2023.2.1 54 | restart: unless-stopped 55 | environment: 56 | - TUNNEL_TOKEN=$TUNNEL_TOKEN 57 | command: tunnel --no-autoupdate run 58 | networks: 59 | - frontend 60 | - backend 61 | ``` 62 | 63 | Load the tunnel token into the environment variable on your server by executing: 64 | 65 | ```sh 66 | export TUNNEL_TOKEN=xxxxx 67 | ``` 68 | 69 | Start the container by executing the `docker-compose up -d`, and check the Cloudflare dashboard to see if the tunnel is healthy. 70 | 71 | --- 72 | ## Set up public hostname 73 | 74 | 1. Go to the Cloudflare Zero Trust Dashboard and edit the Cloudflare Tunnel application. 75 | 2. Go to the Public Hostnames section. 76 | 3. Add a subdomain for the application, such as  nginx-demo-one  on your public domain. 77 | 4. Enter the target URL for the application. If the application is using an HTTP protocol and is on the same docker network, you can use the container name as the URL. 
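For example — the container name and domain below are placeholders, not values from the video — if the tunnel container shares the `frontend` Docker network with an nginx container named `nginx-demo-1` listening on port 80, the public hostname entry could look like this:

```
Subdomain:     nginx-demo-one.your-public-domain.com
Service Type:  HTTP
Service URL:   nginx-demo-1:80
```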
78 | 79 | --- 80 | ## Fix Traefik issues 81 | 82 | When using Cloudflare Tunnel with Traefik, you might encounter a problem with **404-errors**, when accessing your application. This is depending on the hostname request, which is using the public hostname, instead of the target hostname. 83 | 84 | If you have this problem, change the Label for the Traefik Rule to add a double pipe for a logical or instruction. This will allow the request to match the internal hostname as well as the external hostname. 85 | 86 | ``` 87 | Host(`your-internal-hostname`) || Host(`your-public-hostname`) 88 | ``` 89 | 90 | ![](assets/cloudflare-tunnel-2.png) 91 | 92 | --- 93 | ## Setting Up Access Control for Nginx Web Server Services 94 | 95 | This guide will walk you through the steps of setting up access control for Nginx web server services using Cloudflare Tunnel. 96 | 97 | **Step 1: Open the Settings Menu** 98 | 99 | Open the settings menu and head over to authentication. Here you can add multiple providers that offer well known implementations, such as Azure AD, GitHub, Google, SAML, or Okta. For this demonstration, we will pick the easiest one, which is the One time Pin authentication, that sends a short mail to you with a time based one time password. 100 | 101 | **Step 2: Create an Application** 102 | 103 | Create an application for all your subdomains with an asterisk, so that it will be active on every public hostname. Alternatively, you can create one application per service. Disable the application logo if you don't need it. Specify which authentication providers this application should support. In our case, we just have one, the TOTP PIN. 104 | 105 | **Step 3: Create an Access Policy** 106 | 107 | Create a new access policy and configure different rules that define which users should have access to this application. You can configure specific mail addresses, or you can add your public domain, so that everyone who can receive a mail under your public domain can access this application. 
108 | 109 | 110 | --- 111 | ## References 112 | 113 | - [Cloudflare Tunnel · Cloudflare Zero Trust docs](https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/) 114 | -------------------------------------------------------------------------------- /cloudflare-tunnel-tutorial/assets/cloudflare-tunnel-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/cloudflare-tunnel-tutorial/assets/cloudflare-tunnel-1.png -------------------------------------------------------------------------------- /cloudflare-tunnel-tutorial/assets/cloudflare-tunnel-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/cloudflare-tunnel-tutorial/assets/cloudflare-tunnel-2.png -------------------------------------------------------------------------------- /cloudflare-tunnel-tutorial/cloudflaretunnel-demo-1/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.9' 2 | 3 | networks: 4 | frontend: 5 | external: true 6 | backend: 7 | external: true 8 | 9 | services: 10 | cloudflaretunnel: 11 | container_name: cloudflaretunnel-demo-1 12 | image: cloudflare/cloudflared:2023.2.1 13 | restart: unless-stopped 14 | environment: 15 | - TUNNEL_TOKEN=$TUNNEL_TOKEN 16 | command: tunnel --no-autoupdate run 17 | networks: 18 | - frontend 19 | - backend 20 | -------------------------------------------------------------------------------- /docker-python-debugging-vscode/README.md: -------------------------------------------------------------------------------- 1 | # Docker VSCode Python Tutorial // Run your App in a Containerv 2 | Docker VSCode Python Tutorial to run your application inside a Docker Container. I will show you how to set up your development environment with VSCode, which extensions you need and how debugging works. 3 | 4 | We will use the free and open-source software VSCode. 5 | 6 | Project Homepage: https://code.visualstudio.com 7 | 8 | Video: https://youtu.be/jtBVppyfDbE 9 | 10 | ## Prerequisites 11 | 12 | - Linux, macOS or Windows 10, 11 with WSL2 (Debian or Ubuntu) running Docker Desktop 13 | 14 | ## 1. Installation and Configuration 15 | 16 | ### 1.1. Add the right VSCode extensions for Python and Docker 17 | 18 | There are also some VSCode Extensions, which I highly recommend. You can simply just search for them in VSCode Extensions, download and install them. 19 | 20 | - Docker - Makes it easy to create, manage, and debug containerized applications. 21 | - Remote WSL - Open any folder in the Windows Subsystem for Linux (WSL) and take advantage of Visual Studio Code's full feature set. 22 | 23 | ### 1.2. Connect to WSL2 in VSCode 24 | 25 | It's very easy to run VSCode remotely on WSL2. You can just click on the Remote Connection Icon in VSCode, or open VSCode with the "code" command from your Windows Terminal Application. Let's try the second method and create a new project folder for the application and open this remote workspace in VSCode. 26 | 27 | If you execute the following commands in the WSL terminal, VSCode should open on your Windows 10 and automatically connect to your remote WSL workspace. 
28 | 29 | ```bash 30 | mkdir ~/vscode-docker-python 31 | 32 | code vscode-docker-python 33 | ``` 34 | 35 | *If you're successfully connected to your WSL machine, you'll see it in the bottom statusbar.* 36 | 37 | ## 2. Build a simple Python application 38 | 39 | Finally, we can now start developing our application. This should be the easy part for you. I don't have any good example projects, so we will write a very simple application that will just add two numbers and output the result. 40 | 41 | ### 2.1. Create a new file and selection our Python interpreter 42 | 43 | Let's create a new Python file called `app.py` and place it in our workspace folder. If you get a message to select your Python Interpreter, you can simply select it. Because we need to tell WSL how we want to run Python programs. If you don't get a message, but you want to select your standard Python interpreter, only for this workspace folder you can create a new file called `.vscode/settings.json` inside your workspace folder. 44 | 45 | ```py 46 | a = 5 47 | b = 3 48 | 49 | c = a + b 50 | 51 | print(f"{a} + {b} is {c}") 52 | 53 | print("program exited.") 54 | ``` 55 | 56 | ### 2.2. Set the Python interpreter 57 | 58 | Set the Python interpreter in `.vscode/settings.json` 59 | 60 | ```json 61 | { 62 | "python.pythonPath": "/usr/bin/python3" 63 | } 64 | ``` 65 | 66 | ### 2.3. Test our Python application without a container 67 | 68 | To test and run our application without a container, we can simply execute the following command in the terminal. Or, if you have set up your Python interpreter correctly, you can also run it with F5 in VSCode and debug it. 69 | 70 | ```bash 71 | python3 app.py 72 | ``` 73 | 74 | ## 3. Build our first Docker Container 75 | 76 | But of course, we want to deploy this Python application inside a Docker container with VSCode. The Docker extension in VSCode has a pretty comfortable function to generate all necessary files automatically for us. Don't worry I will explain all the things the extension does, so you could also create all these files manually. 77 | 78 | ### 3.1. Generate Dockerfiles 79 | 80 | To generate all files, press F1 and select "Add Docker Files to Workspace". Simply follow the instructions, add your app.py as your main script and skip Docker-Compose. 81 | 82 | ### 3.2. How our Docker container is structured 83 | 84 | Then, you should see some new files in your workspace folder. Because the Python Docker extension creates a complete environment for you in VSCode. It creates a Dockerfile, a requirements.txt, a .dockerignore, and some .vscode configuration files for you. 85 | 86 | - Dockerfile - This is the main file that contains all instructions how to generate and build the Docker image file 87 | - requirements.txt - Includes all necessary libraries and plugins that should be installed inside the Docker image 88 | - .dockerignore - All files that should not be included inside the Docker image 89 | - .vscode/launch.json - How to launch the application, debug it, etc. 90 | - .vscode/tasks.json - Any tasks that should be run by VSCode before the launch tasks 91 | 92 | Let's take a closer look at the Dockerfile. 
93 | 94 | ```Dockerfile 95 | # For more information, please refer to https://aka.ms/vscode-docker-python 96 | FROM python:3.8-slim-buster 97 | 98 | # Keeps Python from generating .pyc files in the container 99 | ENV PYTHONDONTWRITEBYTECODE=1 100 | 101 | # Turns off buffering for easier container logging 102 | ENV PYTHONUNBUFFERED=1 103 | 104 | # Install pip requirements 105 | COPY requirements.txt . 106 | RUN python -m pip install -r requirements.txt 107 | 108 | WORKDIR /app 109 | COPY . /app 110 | 111 | # Creates a non-root user with an explicit UID and adds permission to access the /app folder 112 | # For more info, please refer to https://aka.ms/vscode-docker-python-configure-containers 113 | RUN adduser -u 5678 --disabled-password --gecos "" appuser && chown -R appuser /app 114 | USER appuser 115 | 116 | # During debugging, this entry point will be overridden. For more information, please refer to https://aka.ms/vscode-docker-python-debug 117 | CMD ["python", "app.py"] 118 | ``` 119 | 120 | ### 3.3. Build the Docker image file and run it 121 | 122 | The Docker extension in VSCode allows us to simply build the Docker image with a right click on the Dockerfile and select "Build Image". 123 | 124 | Open a new Terminal and type docker image list. Because then we can see a new entry called vscodedockerpython (the project folder name). We can also simply run this container and see that our application is running successfully! 125 | 126 | ## 4. Debugging inside the container 127 | 128 | ### 4.1. Configure .vscode/launch.json and .vscode/tasks.json. 129 | 130 | The Docker extension in VSCode is absolutely beautiful! Because it also allows us to debug our application inside the container with no effort. In the past, I needed to install and use debugging libraries and extensions in Python, but this is not needed anymore. The extension is smart enough to rewrite our entry point file with a debugger automatically. 131 | 132 | You simply just need to click on "Start debugging" and it works! 133 | 134 | This is only possible, because the Docker extensions created the two files .vscode/launch.json and .vscode/tasks.json. 135 | 136 | ```json 137 | { 138 | "configurations": [ 139 | { 140 | "name": "Docker: Python - General", 141 | "type": "docker", 142 | "request": "launch", 143 | "preLaunchTask": "docker-run: debug", 144 | "python": { 145 | "pathMappings": [ 146 | { 147 | "localRoot": "${workspaceFolder}", 148 | "remoteRoot": "/app" 149 | } 150 | ], 151 | "projectType": "general" 152 | } 153 | } 154 | ] 155 | } 156 | ``` 157 | 158 | ```json 159 | { 160 | "version": "2.0.0", 161 | "tasks": [ 162 | { 163 | "type": "docker-build", 164 | "label": "docker-build", 165 | "platform": "python", 166 | "dockerBuild": { 167 | "tag": "vscodedockerpython:latest", 168 | "dockerfile": "${workspaceFolder}/Dockerfile", 169 | "context": "${workspaceFolder}", 170 | "pull": true 171 | } 172 | }, 173 | { 174 | "type": "docker-run", 175 | "label": "docker-run: debug", 176 | "dependsOn": [ 177 | "docker-build" 178 | ], 179 | "python": { 180 | "file": "app.py" 181 | } 182 | } 183 | ] 184 | } 185 | ``` 186 | 187 | ### 4.2. Set breakpoints and analyzing variables 188 | 189 | This also supports breakpoints and analyzing variables natively inside the Container! Which is absolutely powerful and key to write complex applications in Python. 
-------------------------------------------------------------------------------- /docker-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # How to use Docker and migrate your existing Apps to your Linux Server? 2 | How to use Docker and migrate your existing Apps? We will cover all the cool advantages of containerization, and how to easily migrate and manage your existing services and apps into Docker Containers on your Linux Server! 3 | 4 | We will use the free and open-source software Docker. 5 | 6 | 7 | Project Homepage: https://www.docker.com/ 8 | Documentation: https://docs.docker.com/ 9 | 10 | Video: https://youtu.be/y0GGQ2F2tvs 11 | 12 | ## Prerequisites 13 | 14 | - Linux Server running Ubuntu 20.04 LTS or newer 15 | 16 | *You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands!* 17 | 18 | ## 1. Install Docker, and Docker-Compose 19 | 20 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 21 | 22 | ### 1.1. Install Docker 23 | ```bash 24 | sudo apt update 25 | 26 | sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common 27 | 28 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 29 | 30 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 31 | 32 | sudo apt update 33 | 34 | sudo apt-get install docker-ce docker-ce-cli containerd.io 35 | ``` 36 | 37 | ### 1.2. Check if Docker is installed correctly 38 | ```bash 39 | sudo docker run hello-world 40 | ``` 41 | 42 | ### 1.3. Install Docker-Compose 43 | 44 | Download the latest version (in this case it is 1.25.5, this may change whenever you read this tutorial!) 45 | 46 | ```bash 47 | sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 48 | 49 | sudo chmod +x /usr/local/bin/docker-compose 50 | ``` 51 | 52 | ### 1.4. Check if Docker-Compose is installed correctly 53 | ```bash 54 | sudo docker-compose --version 55 | ``` 56 | 57 | ### 1.5. (optional) Add your linux user to the `docker` group 58 | ```bash 59 | sudo usermod -aG docker $USER 60 | ``` 61 | 62 | ## 2. Docker Basics 63 | 64 | ### 2.1. Run a Docker Container 65 | 66 | ```bash 67 | docker run hello-world 68 | ``` 69 | 70 | ### 2.2. How to find Docker Images? 71 | 72 | To get new applications, just visit the [official Docker Hub](https://hub.docker.com/). Because most vendors and communities maintain their own images, you will find Images for many applications there. 73 | 74 | ### 2.3. Expose Ports 75 | 76 | Run a simple NGINX webserver. 77 | 78 | ```bash 79 | docker run -p 80:80 -d nginx 80 | ``` 81 | 82 | ### 2.4. Persistent Volumes 83 | 84 | ```bash 85 | docker run -p 80:80 -v nginx_data:/var/www/html -d nginx 86 | ``` 87 | 88 | ### 2.5. Migrate static files to a Docker Volume 89 | 90 | Migrate a NGINX webserver to Docker. 91 | 92 | ```bash 93 | sudo systemctl stop nginx 94 | 95 | sudo systemctl disable nginx 96 | 97 | docker run -p 80:80 -v /var/www/html:/var/www/html -d nginx 98 | ``` 99 | 100 | ### 2.6. Migrate databases to a Docker Volume 101 | 102 | Migrate a MySQL Server to Docker. 
103 | 104 | ```bash 105 | sudo systemctl stop mysql-server 106 | 107 | sudo systemctl disable mysql-server 108 | 109 | docker run -v /var/lib/mysql:/var/lib/mysql -d mysql 110 | ``` 111 | 112 | ## 3. Set up Portainer 113 | 114 | ### 3.1. Create a new Docker Volume 115 | 116 | ```bash 117 | docker volume create portainer_data 118 | ``` 119 | 120 | ### 3.2. Launch Portainer 121 | 122 | ```bash 123 | docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce 124 | ``` 125 | 126 | -------------------------------------------------------------------------------- /duplicati-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Backup in Linux Servers - Docker Volumes, and Databases 2 | Backup in Linux doesn't need to be complicated. I'll show you backup strategies and tools to create a reliable backup for your entire Linux server. You can use this perfectly in combination with Docker Volumes and Databases. 3 | 4 | We will use the free and open-source software Duplicati. 5 | 6 | 7 | Project Homepage: https://www.duplicati.com 8 | Documentation: https://duplicati.readthedocs.io/en/latest/ 9 | Source Files: https://github.com/duplicati/duplicati 10 | 11 | Video: https://youtu.be/JoA6Bezgk1c 12 | 13 | ## Prerequisites 14 | 15 | - Linux Server running Ubuntu 20.04 LTS or newer 16 | - Domain that points to the public IP of your Linux Server 17 | 18 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 19 | 20 | ## 1. Install Docker, and Docker-Compose 21 | 22 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 23 | 24 | ### 1.1. Install Docker 25 | ```bash 26 | sudo apt update 27 | 28 | sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common 29 | 30 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 31 | 32 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 33 | 34 | sudo apt update 35 | 36 | sudo apt-get install docker-ce docker-ce-cli containerd.io 37 | ``` 38 | 39 | ### 1.2. Check if Docker is installed correctly 40 | ```bash 41 | sudo docker run hello-world 42 | ``` 43 | 44 | ### 1.3. Install Docker-Compose 45 | 46 | Download the latest version (in this case it is 1.25.5, this may change whenever you read this tutorial!) 47 | 48 | ```bash 49 | sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 50 | 51 | sudo chmod +x /usr/local/bin/docker-compose 52 | ``` 53 | 54 | ### 1.4. Check if Docker-Compose is installed correctly 55 | ```bash 56 | sudo docker-compose --version 57 | ``` 58 | 59 | ### 1.5. (optional) Add your linux user to the `docker` group 60 | ```bash 61 | sudo usermod -aG docker $USER 62 | ``` 63 | 64 | ## 2. Set up Duplicati 65 | 66 | ### 2.1. Create a Docker-Compose file 67 | 68 | Create a new `docker-compose.yml` file in your project folder e.g. `/home//nextcloud`. 69 | 70 | *You can also create a new folder in the `/opt` directory, this may require different permissions.* 71 | 72 | ### 2.2. Start Duplicati 73 | 74 | Navigate to your project folder, and execute the following command. 75 | 76 | ```bash 77 | docker-compose up -d 78 | ``` 79 | 80 | ### 2.3. 
Configure Duplicati 81 | 82 | Open the web interface of **Duplicati** at `http://your-server-address:8200`, and log in with the default username, and password `...` / `...`. 83 | -------------------------------------------------------------------------------- /duplicati-tutorial/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "2.1" 2 | services: 3 | duplicati: 4 | image: ghcr.io/linuxserver/duplicati 5 | container_name: duplicati 6 | environment: 7 | - PUID=1000 8 | - PGID=1000 9 | - TZ=Europe/London 10 | # (optional) - CLI_ARGS= 11 | volumes: 12 | - ./appdata/config:/config 13 | - ./backups:/backups 14 | - ./source:/source 15 | ports: 16 | - 8200:8200 17 | restart: unless-stopped -------------------------------------------------------------------------------- /github-actions-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Let's learn GitHub Actions, in a self-hosted Homelab! 2 | 3 | Are you interested in automating your Homelab? Then learning GitHub Actions is a great way to get started! With GitHub Actions, you can automate the process of building, testing, and deploying applications on a Kubernetes Cluster. Instead of spending your day in front of a terminal, you can just sit back and let GitHub Actions do all the work for you. In this tutorial, I’ll show you how to set up a project, install a self-hosted runner, and deploy an app on your local Kubernetes Cluster. Tune in to learn more about Homelab automation with GitHub Actions! #GitHubActions #Homelab #Automation 4 | 5 | Video: https://youtu.be/tIWDpG7sNTU 6 | 7 | --- 8 | ## Prerequisites 9 | 10 | - Local Kubernetes Cluster 11 | - GitHub Account 12 | - Linux Server at Home 13 | 14 | --- 15 | ## What is GitHub Actions? 16 | 17 | GitHub Actions is a powerful tool that can help automate tasks related to your GitHub repositories. Whether you’re developing code or defining infrastructure templates, GitHub Actions can help you automate tasks that need to be performed whenever you make changes to your repository. 18 | 19 | To get started with GitHub Actions, you need to define workflows in YAML files inside your GitHub repository. These workflows can run one or more jobs, which are essentially a set of commands that are executed whenever a specific event occurs in your repository. These events can include actions like push code changes, create a pull request, or open an issue. 20 | 21 | ![](assets/what-are-github-actions.png) 22 | 23 | ## What's the plan? 24 | 25 | In this tutorial, we will learn how to automate the deployment of an Nginx webserver on a Kubernetes Demo Cluster using GitHub Actions. We will define a new workflow for a GitHub repository that contains an application template for the webserver. 26 | 27 | **Example:** 28 | ```txt 29 | actions-eval-2 30 | └── kubernetes 31 | ├── deployment.yml 32 | ├── ingress.yml 33 | └── service.yml 34 | ``` 35 | 36 | This GitHub test repository contains three separate YAML files that define the manifests we want to apply to our cluster. These files include an application deployment, a service object, and an Ingress object. 37 | 38 | Typically, we would clone the repository to our local machine and execute the Kubectl command on all three files to deploy the manifests on our cluster. However, we can automate this process using GitHub Actions. 
We can define a new workflow in the repository that triggers a new Kubectl Apply command whenever we make changes to the code and push them to GitHub. This way, the changes will automatically be applied to the cluster without us having to do it manually every time. 39 | 40 | ## Where do GitHub Actions run? 41 | 42 | When using GitHub Actions, one question that might come up is where the actions run. Since we might need a computing instance with a specific environment to execute commands like Kubectl, we need to know where our workflows are being executed. GitHub spawns a new process on one of their cloud machines for every workflow that is triggered. These machines are called GitHub Runners and come in different types, such as Ubuntu Linux, Microsoft Windows, and macOS. 43 | 44 | ![](assets/where-do-github-actions-run.png) 45 | 46 | To get started with GitHub Actions, we don’t need to set up a server or install anything. All we need is a GitHub repository, and we can start defining our workflows in YAML files for free to be executed on one of GitHub’s cloud runners. 47 | 48 | ## Why use self-hosted Runners? 49 | 50 | However, in a homelab situation, we usually have our own protected network where we’re hosting our services. If we want to execute workflows to apply manifests on our local cluster, we need to allow access from outside to our infrastructure so that the GitHub runners in the cloud can connect to it. While this is possible, it’s not the best solution. It’s better to run the actual commands and workflows inside our own environment using the same software that GitHub is running on their cloud and running it on one of our virtual Linux servers. 51 | 52 | ![](assets/why-use-self-hosted-runners.png) 53 | 54 | GitHub allows us to set up self-hosted runners for free, which is a better way to connect to any local environments, especially if we need to control access to internal resources. This also saves GitHub some cloud resources because they don’t need to compute the actual workloads. It’s a win-win situation for both GitHub and us. 55 | 56 | ## Set up an organization in GitHub 57 | 58 | If you want to use it in your own infrastructure with a self-hosted runner, you need to add it, first. 59 | 60 | Go to your GitHub Repository under **Settings -> Actions -> Runners**, and add a new self-hosted runner. 61 | 62 | ![](assets/github-actions-tutorial-asset-1.png) 63 | 64 | --- 65 | ## Install GitHub Runner locally on Linux 66 | 67 | Installing the Runner software on a Linux machine is a quick and easy process. Follow these steps to get started: 68 | 69 | 1. Download and extract the GitHub Runner code into your project folder. 70 | 71 | ```sh 72 | curl -o actions-runner-osx-x64-2.303.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.303.0/actions-runner-osx-x64-2.303.0.tar.gz 73 | 74 | tar xzf ./actions-runner-osx-x64-2.303.0.tar.gz 75 | ``` 76 | 77 | 2. Configure and Start the GitHub Runner. 78 | 79 | ```sh 80 | ./config.sh --url https://github.com/clcreative --token your-token 81 | 82 | ./run.sh 83 | ``` 84 | 85 | 3. Install Runner as a service. 86 | 87 | ```shell 88 | sudo ./svc.sh install 89 | 90 | sudo ./svc.sh start 91 | ``` 92 | 93 | ### Prepare the Runner 94 | 95 | To execute certain workflows on the Runner, specific applications or libraries may need to be installed on the OS. For example, if a workflow automation includes command line tools such as Docker, Kubectl, Ansible, or Terraform, these must be installed on the Runner’s system. 
In this tutorial, Kubectl is used to deploy a webserver onto a Kubernetes cluster. The Runner will spawn a Docker Container with Kubectl built-in, and execute the commands inside this container. Once the workflow is completed, the container will be removed. Therefore, Kubectl does not need to be installed on the Runner’s VM, but Docker Engine and CLI must be installed. 96 | 97 | On Ubuntu, you can follow these steps. 98 | 99 | **Example:** 100 | ```sh 101 | sudo apt install docker.io 102 | 103 | sudo usermod -aG docker your-username 104 | 105 | newgrp your-username 106 | ``` 107 | 108 | ## Create a new project 109 | 110 | Now that we have installed the necessary environment, we can finally start creating our project. We have set up a GitHub Account with an organization, configured the runner, and installed the right tools on it. 111 | 112 | Let’s open the **Repository** and add a new GitHub Actions workflow. 113 | 114 | 1. Create a new `yml` file into the `.github/workflows` folder. 115 | 116 | 2. Specify the type of event such as `push`, `pull_request`, ..., that should trigger the GitHub Actions workflow. 117 | 118 | 3. Specify the branch, such as `main`, where this workflow should be executed. 119 | 120 | 4. Create a new job `deploy`, that runs on the `self-hosted` Runner. 121 | 122 | 5. Add a `actions/checkout@v2` step, that will clone the Repositories code into the Runners temporary container filesystem. 123 | 124 | **Example:** 125 | ```yml 126 | name: Deploy to Kubernetes 127 | on: 128 | push: 129 | branches: 130 | - main 131 | 132 | jobs: 133 | deploy: 134 | runs-on: "self-hosted" # (optional) name of the runner labels, groups 135 | steps: 136 | - name: Checkout code 137 | uses: actions/checkout@v2 138 | 139 | # (...) 140 | ``` 141 | 142 | To do a quick test, just commit your changes to the Repository and push it. That should already execute the GitHub Actions workflow. 143 | 144 | ![](assets/github-actions-tutorial-asset-2.png) 145 | 146 | If you’re new to GitHub Actions, it’s a good idea to start by playing around with it in a test repository. Executing a few hello world commands is a great way to get familiar with how it works. 147 | 148 | ## Find GitHub Actions on the Marketplace 149 | 150 | Once you’re comfortable with the basics, you can start to do something useful with it. For example, in this project, we want to deploy three Manifests to a Kubernetes Cluster. To do this, we need to add more steps to the job in our workflow, which will execute the kubectl apply command on the files. 151 | 152 | Rather than writing shell commands to do this, we can use pre-defined environments for GitHub Actions that come with various tools installed, such as Kubectl. These environments can be found on GitHub Marketplace, and you can even create custom ones. It’s worth noting that these environments are mostly created by the community or the developers of the tools, so make sure to check who has created them before using them. 153 | 154 | GitHub also maintains a repository called Actions Hub, which provides GitHub Actions environments for Docker, Kubectl, and a few other tools. The Kubectl environment in particular contains some useful additions to handle authentication properly, which is why we’re using it in this tutorial. 155 | 156 | ![](assets/github-actions-tutorial-asset-3.png) 157 | 158 | ## Authenticate to the local Kubernetes Cluster 159 | 160 | To authenticate to a Kubernetes Cluster, we need to provide credentials to the self-hosted Runner. 
We can securely store sensitive credentials that the Runners need in the GitHub Repository Settings. 161 | 162 | 1. Go to the Repository Settings, select “Actions”, and add a new Secret called “KUBE_CONFIG”. 163 | 164 | 2. Copy your Kube Config file, converted to base64, and place it into the Secret. 165 | 166 | ![](assets/github-actions-tutorial-asset-4.png) 167 | 168 | ## Use the Secret in the workflow 169 | 170 | This environment variable will load the value from the GitHub Secret `KUBE_CONFIG`, and will be accessible to all jobs in the workflow, making it a great way to share data between jobs. 171 | 172 | **Example:** 173 | ```yml 174 | # (...) 175 | 176 | env: 177 | KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }} 178 | 179 | # (...) 180 | ``` 181 | 182 | ## Deploy a Kubernetes Application 183 | 184 | Next, we can add our Kubectl Commands to deploy our Manifests. We will create three different steps for each of these files, using the environment from the actions hub, and the tool kubectl. The first step will run the Kubectl Command at the Deployment file, which will run the application. The second command will apply the Service object, and the third command will apply the Ingress Object. 185 | 186 | 1. Add your desired Kubernetes **Manifests** into the `kubernetes/` project folder. 187 | 188 | 2. Add new jobs to apply your **Manifests** via GitHub Actions. 189 | 190 | **Examples:** 191 | ```yml 192 | 193 | # (...) 194 | - name: Deploy App(Deployment) 195 | uses: actions-hub/kubectl@master 196 | with: 197 | args: apply -f kubernetes/deployment.yml 198 | 199 | - name: Deploy Service 200 | uses: actions-hub/kubectl@master 201 | with: 202 | args: apply -f kubernetes/service.yml 203 | 204 | - name: Deploy Ingress 205 | uses: actions-hub/kubectl@master 206 | with: 207 | args: apply -f kubernetes/ingress.yml 208 | 209 | ``` 210 | 211 | ## Use Cases 212 | 213 | I’m really excited about the possibilities this opens up for my homelab. I can use it to deploy my infrastructure with Terraform, automate the creation of virtual machines on Proxmox, or automate the deployment of all my homelab servers. I can also use it to run regular security checks on them. 214 | 215 | I’m still figuring out the best way to use it in my homelab, but these are some ideas I’ve come up with. If you have any other ideas, let me know in the comments and I’ll show you some projects I’m working on with it in a few months. 
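Putting the snippets from this tutorial together, the complete workflow file (for example `.github/workflows/deploy.yml`) could look roughly like this — a sketch assembled from the fragments above, not necessarily the exact file used in the video:

```yml
name: Deploy to Kubernetes
on:
  push:
    branches:
      - main

# Workflow-level environment, loaded from the GitHub Secret
env:
  KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}

jobs:
  deploy:
    runs-on: "self-hosted"   # (optional) name of the runner labels, groups
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Deploy App(Deployment)
        uses: actions-hub/kubectl@master
        with:
          args: apply -f kubernetes/deployment.yml

      - name: Deploy Service
        uses: actions-hub/kubectl@master
        with:
          args: apply -f kubernetes/service.yml

      - name: Deploy Ingress
        uses: actions-hub/kubectl@master
        with:
          args: apply -f kubernetes/ingress.yml
```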
216 | 217 | ## Watch the full Video 218 | 219 | Video: https://youtu.be/tIWDpG7sNTU 220 | 221 | --- 222 | ## References 223 | 224 | - [GitHub Actions Documentation - GitHub Docs](https://docs.github.com/en/actions) 225 | -------------------------------------------------------------------------------- /github-actions-tutorial/assets/github-actions-tutorial-asset-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/github-actions-tutorial/assets/github-actions-tutorial-asset-1.png -------------------------------------------------------------------------------- /github-actions-tutorial/assets/github-actions-tutorial-asset-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/github-actions-tutorial/assets/github-actions-tutorial-asset-2.png -------------------------------------------------------------------------------- /github-actions-tutorial/assets/github-actions-tutorial-asset-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/github-actions-tutorial/assets/github-actions-tutorial-asset-3.png -------------------------------------------------------------------------------- /github-actions-tutorial/assets/github-actions-tutorial-asset-4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/github-actions-tutorial/assets/github-actions-tutorial-asset-4.png -------------------------------------------------------------------------------- /github-actions-tutorial/assets/what-are-github-actions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/github-actions-tutorial/assets/what-are-github-actions.png -------------------------------------------------------------------------------- /github-actions-tutorial/assets/where-do-github-actions-run.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/github-actions-tutorial/assets/where-do-github-actions-run.png -------------------------------------------------------------------------------- /github-actions-tutorial/assets/why-use-self-hosted-runners.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/github-actions-tutorial/assets/why-use-self-hosted-runners.png -------------------------------------------------------------------------------- /mailcow-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Install a mail server on Linux in 10 minutes - docker, docker-compose, mailcow 2 | 3 | Mail Server Linux Installation in 10 minutes? Here, you will learn step-by-step how to install and set up all necessary applications to have a fully featured mail server. And trust me, you can do it in about 10 minutes! 
4 | 5 | We will use the free and open-source project **Mailcow Dockerized** which is a fully featured mail server powered by Docker. 6 | 7 | Project Homepage: https://mailcow.email/ 8 | Project Source: https://github.com/mailcow/mailcow-dockerized 9 | Documentation: https://mailcow.github.io/mailcow-dockerized-docs/ 10 | 11 | Video: https://www.youtube.com/watch?v=4rzc0hWRSPg 12 | 13 | ## Prerequisites 14 | 15 | - Linux Server running Ubuntu 18.04 LTS or newer 16 | 17 | You can still install mailcow on a Linux Server that is not running Ubuntu, however, this may require different commands! 18 | 19 | ## 1. Install Docker, and Docker-Compose 20 | 21 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 22 | 23 | ### 1.1. Install Docker 24 | ```bash 25 | sudo apt update 26 | 27 | sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common 28 | 29 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 30 | 31 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 32 | 33 | sudo apt update 34 | 35 | sudo apt-get install docker-ce docker-ce-cli containerd.io 36 | ``` 37 | 38 | ### 1.2. Check if Docker is installed correctly 39 | ```bash 40 | sudo docker run hello-world 41 | ``` 42 | 43 | ### 1.3. Install Docker-Compose 44 | 45 | Download the latest version (in this case it is 1.25.5, this may change whenever you read this tutorial!) 46 | 47 | ```bash 48 | sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 49 | 50 | sudo chmod +x /usr/local/bin/docker-compose 51 | ``` 52 | 53 | ### 1.4. Check if Docker-Compose is installed correctly 54 | ```bash 55 | sudo docker-compose --version 56 | ``` 57 | 58 | ### 1.5. (optional) Add your linux user to the `docker` group 59 | ```bash 60 | sudo usermod -aG docker $USER 61 | ``` 62 | 63 | ## 2. Install mailcow-dockerized 64 | 65 | Clone mailcow into the `/opt` folder. 66 | 67 | You can also use your personal home folder `/home/`, this may require different permissions. 68 | 69 | ``` 70 | sudo git clone https://github.com/mailcow/mailcow-dockerized 71 | ``` 72 | 73 | ## 2.1. Generate your configuration file and follow the steps in the script. 74 | 75 | ``` 76 | sudo ./generate_config.sh 77 | ``` 78 | 79 | ## 2.2. Enter your mailserver FQDN (this is your mailserver hostname, not your domain name) 80 | 81 | ## 2.3. Select your timezone 82 | 83 | ## 2.4. (optional) Insert custom SSL certificate 84 | 85 | If you start "mailcow" it will automatically generate and request a letsencrypt certificate for your domains. If you don't want that, but instead use your own certificate you need to modify the `mailserver.conf` and change the line to: 86 | 87 | ``` 88 | SKIP_LETS_ENCRYPT=y 89 | ``` 90 | 91 | ## 2.5. Start mailcow 92 | 93 | ``` 94 | sudo docker-compose up -d 95 | ``` 96 | 97 | ## 2.6. Login to mailcow 98 | 99 | When all services are started successfully, you can now login to the admin dashboard and configure your domain, mailboxes, aliases, etc. 100 | 101 | The admin dashboard can be accessed by `https://` 102 | 103 | The default username is `admin`, and the password is `moohoo` 104 | 105 | ## 2.7. Set up your domain(s) 106 | 107 | You need to set up your domain first at `Configuration -> Mail Setup -> Domains`. 108 | 109 | ## 2.8. 
Set up your mailbox(es) 110 | 111 | If you want to configure your mailboxes, you can add them at `Configuration -> Mail Setup -> Mailboxes`. 112 | 113 | -------------------------------------------------------------------------------- /nextcloud-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Nextcloud Nginx Proxy Manager in 10 Minutes! 2 | Nextcloud Nginx Proxy Manager fast and easy tutorial in just 10 Minutes with trusted SSL Certs! We're using Docker, Docker-Compose, or Portainer to deploy this on a Linux Server. 3 | 4 | We will use the free and open-source software Nextcloud. 5 | 6 | **Nextcloud**: 7 | Project Homepage: https://nextcloud.com/ 8 | 9 | **Nginx Proxy Manager**: 10 | Project Homepage: https://nginxproxymanager.com/ 11 | Documentation: https://nginxproxymanager.com/guide/ 12 | 13 | Video: https://youtu.be/iFHbzWhKfuU 14 | 15 | ## Prerequisites 16 | 17 | - Linux Server running Ubuntu 20.04 LTS or newer 18 | - Domain that points to the public IP of your Linux Server 19 | 20 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 21 | 22 | ## 1. Install Docker, and Docker-Compose 23 | 24 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 25 | 26 | ### 1.1. Install Docker 27 | ```bash 28 | sudo apt update 29 | 30 | sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common 31 | 32 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 33 | 34 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 35 | 36 | sudo apt update 37 | 38 | sudo apt-get install docker-ce docker-ce-cli containerd.io 39 | ``` 40 | 41 | ### 1.2. Check if Docker is installed correctly 42 | ```bash 43 | sudo docker run hello-world 44 | ``` 45 | 46 | ### 1.3. Install Docker-Compose 47 | 48 | Download the latest version (in this case it is 1.25.5, this may change whenever you read this tutorial!) 49 | 50 | ```bash 51 | sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 52 | 53 | sudo chmod +x /usr/local/bin/docker-compose 54 | ``` 55 | 56 | ### 1.4. Check if Docker-Compose is installed correctly 57 | ```bash 58 | sudo docker-compose --version 59 | ``` 60 | 61 | ### 1.5. (optional) Add your linux user to the `docker` group 62 | ```bash 63 | sudo usermod -aG docker $USER 64 | ``` 65 | 66 | ## 2. Set up Nextcloud 67 | 68 | ### 2.1. Create a Docker-Compose file 69 | 70 | Create a new `docker-compose.yml` file in your project folder e.g. `/home//nextcloud`. 71 | 72 | *You can also create a new folder in the `/opt` directory, this may require different permissions.* 73 | 74 | ### 2.2. Start Nextcloud 75 | 76 | Navigate to your project folder, and execute the following command. 77 | 78 | ```bash 79 | docker-compose up -d 80 | ``` 81 | 82 | ### 2.3. Configure Nginx Proxy Manager 83 | 84 | Open the web interface of **Nginx Proxy Manager** at `http://your-server-address:81`, and log in with the default username, and password `admin@example.com` / `changeme`. 85 | 86 | Create a new Proxy Host, and fill in a subdomain e.g. `nextcloud.your-server-address` as the domain name and forward it to `nextcloud-app` on port `80`. 
87 | 88 | *You can obtain a free SSL certificate from Let's Encrypt to securely expose Nextcloud via HTTPS.* 89 | 90 | ### 2.4. Configure Nextcloud 91 | 92 | Open the web interface of **Nextcloud** at `https://nextcloud.your-server-address`, and complete the initial setup wizard by creating your admin username and password (Nextcloud has no default login). The database connection is already pre-configured through the environment variables in the Docker-Compose file. 93 | 94 | ## 3. (optional) Fix issues 95 | 96 | ### 3.1. Desktop Client Sync does not work 97 | 98 | If you have issues with Desktop Client Sync, you need to edit Nextcloud's `config/config.php` file (stored in the `nextcloud-data` volume, at `/var/www/html/config/config.php` inside the container) and add the following line. 99 | 100 | ``` 101 | 'overwriteprotocol' => 'https' 102 | ``` 103 | -------------------------------------------------------------------------------- /nextcloud-tutorial/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "3" 2 | volumes: 3 | nextcloud-data: 4 | nextcloud-db: 5 | npm-data: 6 | npm-ssl: 7 | npm-db: 8 | 9 | networks: 10 | frontend: 11 | # add this if the network is already existing! 12 | # external: true 13 | backend: 14 | 15 | services: 16 | nextcloud-app: 17 | image: nextcloud 18 | restart: always 19 | volumes: 20 | - nextcloud-data:/var/www/html 21 | environment: 22 | - MYSQL_PASSWORD=replace-with-secure-password 23 | - MYSQL_DATABASE=nextcloud 24 | - MYSQL_USER=nextcloud 25 | - MYSQL_HOST=nextcloud-db 26 | networks: 27 | - frontend 28 | - backend 29 | 30 | nextcloud-db: 31 | image: mariadb 32 | restart: always 33 | command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW 34 | volumes: 35 | - nextcloud-db:/var/lib/mysql 36 | environment: 37 | - MYSQL_ROOT_PASSWORD=replace-with-secure-password 38 | - MYSQL_PASSWORD=replace-with-secure-password 39 | - MYSQL_DATABASE=nextcloud 40 | - MYSQL_USER=nextcloud 41 | networks: 42 | - backend 43 | 44 | npm-app: 45 | image: jc21/nginx-proxy-manager:latest 46 | restart: always 47 | ports: 48 | - "80:80" 49 | - "81:81" 50 | - "443:443" 51 | environment: 52 | - DB_MYSQL_HOST=npm-db 53 | - DB_MYSQL_PORT=3306 54 | - DB_MYSQL_USER=npm 55 | - DB_MYSQL_PASSWORD=replace-with-secure-password 56 | - DB_MYSQL_NAME=npm 57 | volumes: 58 | - npm-data:/data 59 | - npm-ssl:/etc/letsencrypt 60 | networks: 61 | - frontend 62 | - backend 63 | 64 | npm-db: 65 | image: jc21/mariadb-aria:latest 66 | restart: always 67 | environment: 68 | - MYSQL_ROOT_PASSWORD=replace-with-secure-password 69 | - MYSQL_DATABASE=npm 70 | - MYSQL_USER=npm 71 | - MYSQL_PASSWORD=replace-with-secure-password 72 | volumes: 73 | - npm-db:/var/lib/mysql 74 | networks: 75 | - backend -------------------------------------------------------------------------------- /nginx-reverseproxy/README.md: -------------------------------------------------------------------------------- 1 | # Reverse proxy nginx letsencrypt tutorial 2 | In this tutorial, we set up a reverse proxy with NGINX and trusted HTTPS certificates from letsencrypt. We will install it directly on a Linux server and expose a sample web application. 3 | 4 | We will use the free and open-source Web Server NGINX. 5 | 6 | Project Homepage: https://www.nginx.com/ 7 | 8 | Video: https://youtu.be/DyXl4c2XN-o 9 | 10 | ## Prerequisites 11 | 12 | - Linux Server running Ubuntu 20.04 LTS or newer 13 | 14 | You can still install Nginx on a Linux Server that is not running Ubuntu, however, this may require different commands! 15 | 16 | ## Installation and Configuration 17 | 18 | 1. 
Install Nginx 19 | 20 | ```bash 21 | sudo apt update 22 | 23 | sudo apt install nginx 24 | ``` 25 | 26 | After you have installed NGINX, you can find the configuration files in the `/etc/nginx` folder. 27 | 28 | 2. Use NGINX as a reverse proxy 29 | 30 | To enable the reverse proxy feature in NGINX, we will create a new **default** configuration file in the `/etc/nginx/sites-enabled` directory. 31 | 32 | Note: you should never allow direct access to your app; otherwise, users could simply bypass the reverse proxy. I’ve changed the app's listening address from `0.0.0.0` to `127.0.0.1`, so that it only accepts connections from localhost. 33 | 34 | ```nginx 35 | server { 36 | listen 80 default_server; 37 | listen [::]:80 default_server; 38 | 39 | server_name research.the-digital-life.com; 40 | 41 | location / { 42 | proxy_pass http://127.0.0.1:5000; 43 | } 44 | } 45 | ``` 46 | 47 | The `proxy_pass` directive forwards all client requests that reach this server on to our app. You can easily change the protocol, IP, or port to whatever your app or website is listening on. 48 | 49 | 3. (optional) Additional arguments 50 | 51 | If you want to specify further arguments, you can do this within the `server {}` section or directly in the `location / {}` section. For example, I will add the `proxy_set_header` directive to pass the Host header, which may be required by some apps or websites exposed by the proxy. 52 | 53 | ```nginx 54 | location / { 55 | proxy_pass http://127.0.0.1:5000; 56 | proxy_set_header Host $host; 57 | } 58 | ``` 59 | 60 | 4. Add trusted SSL Certificates from Letsencrypt 61 | 62 | Now, let’s obtain trusted HTTPS certificates for our application. It’s also best practice to redirect all unencrypted HTTP connections to HTTPS. This is relatively easy with certbot and letsencrypt certificates. Certbot will obtain free certificates and also handle the renewal process automatically. To do that, we will install certbot and a plugin for our NGINX server. 63 | 64 | ```bash 65 | sudo apt install certbot python3-certbot-nginx 66 | ``` 67 | 68 | Once we have installed those packages, we can obtain our certificates. 69 | 70 | ```bash 71 | sudo certbot --nginx -d research.the-digital-life.com 72 | ``` 73 | 74 | It will ask you if you want to redirect all traffic from HTTP to HTTPS. Select yes (2). This automatically makes some changes to our NGINX default configuration. 75 | 76 | ```nginx 77 | server { 78 | 79 | server_name research.the-digital-life.com; 80 | 81 | location / { 82 | proxy_pass http://127.0.0.1:5000; 83 | } 84 | 85 | listen [::]:443 ssl ipv6only=on; # managed by Certbot 86 | listen 443 ssl; # managed by Certbot 87 | ssl_certificate /etc/letsencrypt/live/research.the-digital-life.com/fullchain.pem; # managed by Certbot 88 | ssl_certificate_key /etc/letsencrypt/live/research.the-digital-life.com/privkey.pem; # managed by Certbot 89 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 90 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 91 | 92 | } 93 | server { 94 | if ($host = research.the-digital-life.com) { 95 | return 301 https://$host$request_uri; 96 | } # managed by Certbot 97 | 98 | listen 80 default_server; 99 | listen [::]:80 default_server; 100 | 101 | server_name research.the-digital-life.com; 102 | return 404; # managed by Certbot 103 | } 104 | ``` 105 | 106 | 5. 
(optional) Test auto-renewal 107 | 108 | ```bash 109 | certbot renew --dry-run 110 | ``` -------------------------------------------------------------------------------- /nginxproxymanager-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Nginx Proxy Manager - How-To Installation and Configuration 2 | In this Nginx Proxy Manager How-To, I'll show you how to install and configure Nginx Proxy Manager in Docker. 3 | 4 | We will use the free and open-source software Nginx Proxy Manager. 5 | 6 | 7 | Project Homepage: https://nginxproxymanager.com/ 8 | Documentation: https://nginxproxymanager.com/guide/ 9 | 10 | Video: https://youtu.be/P3imFC7GSr0 11 | 12 | ## Prerequisites 13 | 14 | - Linux Server running Ubuntu 20.04 LTS or newer 15 | - Domain that points to the public IP of your Linux Server 16 | 17 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 18 | 19 | ## 1. Install Docker, and Docker-Compose 20 | 21 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 22 | 23 | ### 1.1. Install Docker 24 | ```bash 25 | sudo apt update 26 | 27 | sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common 28 | 29 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 30 | 31 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 32 | 33 | sudo apt update 34 | 35 | sudo apt-get install docker-ce docker-ce-cli containerd.io 36 | ``` 37 | 38 | ### 1.2. Check if Docker is installed correctly 39 | ```bash 40 | sudo docker run hello-world 41 | ``` 42 | 43 | ### 1.3. Install Docker-Compose 44 | 45 | Download the latest version (in this case it is 1.25.5, this may change whenever you read this tutorial!) 46 | 47 | ```bash 48 | sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 49 | 50 | sudo chmod +x /usr/local/bin/docker-compose 51 | ``` 52 | 53 | ### 1.4. Check if Docker-Compose is installed correctly 54 | ```bash 55 | sudo docker-compose --version 56 | ``` 57 | 58 | ### 1.5. (optional) Add your linux user to the `docker` group 59 | ```bash 60 | sudo usermod -aG docker $USER 61 | ``` 62 | 63 | ## 2. Set up Nginx Proxy Manager 64 | 65 | ### 2.1. Create a new file `docker-compose.yml`file, please refer to the nginxproxymanager documentation: https://nginxproxymanager.com/guide/. 66 | 67 | ... 68 | 69 | **Example Docker-Compose File**: 70 | ```yml 71 | version: '3' 72 | services: 73 | app: 74 | image: 'jc21/nginx-proxy-manager:latest' 75 | ports: 76 | - '80:80' 77 | - '81:81' 78 | - '443:443' 79 | environment: 80 | DB_MYSQL_HOST: "db" 81 | DB_MYSQL_PORT: 3306 82 | DB_MYSQL_USER: "npm" 83 | DB_MYSQL_PASSWORD: "npm" 84 | DB_MYSQL_NAME: "npm" 85 | volumes: 86 | - ./data:/data 87 | - ./letsencrypt:/etc/letsencrypt 88 | db: 89 | image: 'jc21/mariadb-aria:latest' 90 | environment: 91 | MYSQL_ROOT_PASSWORD: 'npm' 92 | MYSQL_DATABASE: 'npm' 93 | MYSQL_USER: 'npm' 94 | MYSQL_PASSWORD: 'npm' 95 | volumes: 96 | - ./mysql:/var/lib/mysql 97 | ``` 98 | 99 | ### 2.2. Start the Nginx Proxy Manager 100 | 101 | ```bash 102 | docker-compose up -d 103 | ``` 104 | 105 | ## 3. Login to the web UI of NGINX proxy manager 106 | 107 | Now we can log in to the web UI. 
Simply use your browser to connect to your server by using the IP address or an FQDN and connect on port `81`. Log in with the username `admin@example.com` and the password `changeme`. Next, you should change your username and password, and that’s it! 108 | 109 | -------------------------------------------------------------------------------- /passbolt-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Self hosted, open source password manager built for teams! 2 | 3 | In this Tutorial we set up a free and open-source password manager for your home labs, or professional it teams, such as DevOps, sysadmins, and so on. We will deploy Passbolt on a docker server and configure a mail server and trusted SSL certificates by using Traefik and Letsencrypt. 4 | 5 | Project Homepage: https://www.passbolt.com 6 | 7 | Video: https://youtu.be/cEDXeYStfP4 8 | 9 | --- 10 | ## Prerequisites 11 | 12 | Before you can deploy Passbolt in Docker, you need a Linux Server that has **Docker**, and **Docker-Compose** installed. You should also have a DNS Record to expose Passbolt with trusted SSL Certificates. 13 | 14 | For further References, how to use **Docker**, **Docker-Compose** and **Traefik**, check out my previous videos: 15 | - [How to use Docker and migrate your existing Apps to your Linux Server?](https://www.youtube.com/watch?v=y0GGQ2F2tvs) 16 | - [Docker-Compose Tutorial](https://www.youtube.com/watch?v=qH4ZKfwbO8w) 17 | - [Is this the BEST Reverse Proxy for Docker? // Traefik Tutorial](https://www.youtube.com/watch?v=wLrmmh1eI94) 18 | 19 | *You can still install Passbolt on a Linux Server that is not running Docker, however, this may require different commands!* 20 | 21 | --- 22 | ## Install Passbolt in Docker 23 | 24 | ### Create Docker-Compose file 25 | 26 | Copy the example `docker-compose.yml` file in your project directory, and make sure you replace the `APP_FULL_BASE_URL` value with your passbolt's server FQDN. 27 | 28 | **Example `docker-compose.yml`**: 29 | ```yml 30 | ... 31 | version: '3.9' 32 | 33 | services: 34 | db: 35 | image: mariadb:10.3 36 | restart: unless-stopped 37 | environment: 38 | - MYSQL_RANDOM_ROOT_PASSWORD=true 39 | - MYSQL_DATABASE=passbolt 40 | - MYSQL_USER=passbolt 41 | - MYSQL_PASSWORD=P4ssb0lt 42 | volumes: 43 | - database_volume:/var/lib/mysql 44 | 45 | passbolt: 46 | image: passbolt/passbolt:latest-ce 47 | restart: unless-stopped 48 | depends_on: 49 | - db 50 | environment: 51 | - APP_FULL_BASE_URL=https://passbolt.domain.tld 52 | - DATASOURCES_DEFAULT_HOST=db 53 | - DATASOURCES_DEFAULT_USERNAME=passbolt 54 | - DATASOURCES_DEFAULT_PASSWORD=P4ssb0lt 55 | - DATASOURCES_DEFAULT_DATABASE=passbolt 56 | volumes: 57 | - gpg_volume:/etc/passbolt/gpg 58 | - jwt_volume:/etc/passbolt/jwt 59 | command: ["/usr/bin/wait-for.sh", "-t", "0", "db:3306", "--", "/docker-entrypoint.sh"] 60 | 61 | volumes: 62 | database_volume: 63 | gpg_volume: 64 | jwt_volume: 65 | 66 | ... 67 | ``` 68 | 69 | ### Mail Server Configuration 70 | 71 | Passbolt sends recovery instructions and notifications via email. Therefore, it's important you configure an email account that is allowed to send emails to the users. 72 | 73 | **Example `docker-compose.yml`**: 74 | ```yml 75 | ... 76 | passbolt: 77 | ... 78 | environment: 79 | ... 
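      # These SMTP settings are example values: replace the host and port with your
      # mail provider's values. The username and password are substituted by
      # Docker-Compose from the .env file shown below, so the secrets stay out of this file.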
80 | - EMAIL_TRANSPORT_DEFAULT_HOST=your-mail-server 81 | - EMAIL_TRANSPORT_DEFAULT_PORT=587 82 | - EMAIL_TRANSPORT_DEFAULT_USERNAME=$EMAIL_TRANSPORT_DEFAULT_USERNAME 83 | - EMAIL_TRANSPORT_DEFAULT_PASSWORD=$EMAIL_TRANSPORT_DEFAULT_PASSWORD 84 | - EMAIL_TRANSPORT_DEFAULT_TLS=true 85 | - EMAIL_DEFAULT_FROM=no-reply@domain.tld 86 | ... 87 | ``` 88 | 89 | If you want to store your mail server credentials in a secure place, create an `.env` file in the project directory. 90 | 91 | **Example `.env`:** 92 | ``` 93 | ... 94 | EMAIL_TRANSPORT_DEFAULT_USERNAME=mailuser 95 | EMAIL_TRANSPORT_DEFAULT_PASSWORD=your-secret-mailuser-password 96 | ``` 97 | 98 | ### Get trusted SSL Certificates via Traefik 99 | 100 | The `docker-compose.yml` example contains labels to expose Passbolt via Traefik. Make sure you replace the `traefik.http.routers.passbolt-http.rule` and `traefik.http.routers.passbolt-https.rule` values with your custom rule set, e.g. your Passbolt server's FQDN. 101 | 102 | **Example `docker-compose.yml`**: 103 | ```yml 104 | ... 105 | passbolt: 106 | ... 107 | labels: 108 | traefik.enable: "true" 109 | traefik.http.routers.passbolt-http.entrypoints: "web" 110 | traefik.http.routers.passbolt-http.rule: "Host(`passbolt.domain.tld`)" 111 | traefik.http.routers.passbolt-http.middlewares: "SslHeader@file" 112 | traefik.http.routers.passbolt-https.middlewares: "SslHeader@file" 113 | traefik.http.routers.passbolt-https.entrypoints: "websecure" 114 | traefik.http.routers.passbolt-https.rule: "Host(`passbolt.domain.tld`)" 115 | traefik.http.routers.passbolt-https.tls: "true" 116 | traefik.http.routers.passbolt-https.tls.certresolver: "letsencrypt" 117 | ... 118 | ``` 119 | 120 | Copy the `traefik.yaml` file and the configuration snippets from the `conf/` directory (named `headers.conf` and `tls.conf` in this repository) into your project directory. 121 | 122 | Make sure you customize your settings in the `traefik.yaml`. 123 | 124 | **Example `traefik.yaml`**: 125 | ```yml 126 | ... 127 | certificatesResolvers: 128 | letsencrypt: 129 | acme: 130 | email: yourname@domain.tld 131 | storage: /shared/acme.json 132 | caServer: 'https://acme-v02.api.letsencrypt.org/directory' 133 | keyType: EC256 134 | httpChallenge: 135 | entryPoint: web 136 | tlsChallenge: {} 137 | ... 138 | ``` 139 | 140 | ### (Optional) Avoid conflicts with other containers 141 | 142 | To avoid conflicts with other running containers, you should disable the `exposedByDefault` setting in the `traefik.yaml`. 143 | 144 | **Example `traefik.yaml`**: 145 | ```yml 146 | ... 147 | providers: 148 | docker: 149 | ... 150 | exposedByDefault: false 151 | ... 152 | ``` 153 | 154 | ### (Optional) Use DNS Challenge on Cloudflare 155 | 156 | To enable the DNS Challenge for trusted SSL Certificates (e.g. via Cloudflare), make sure you customize the `docker-compose.yml` and `traefik.yaml` and insert your custom settings. 157 | 158 | **Example `docker-compose.yml`**: 159 | ```yml 160 | ... 161 | passbolt: 162 | ... 163 | labels: 164 | ... 165 | traefik.http.routers.passbolt-https.tls.certresolver: "cloudflare" 166 | traefik: 167 | ... 168 | environment: 169 | - CF_API_EMAIL=$CF_API_EMAIL 170 | - CF_API_KEY=$CF_API_KEY 171 | ``` 172 | 173 | **Example `traefik.yaml`**: 174 | ```yml 175 | ... 176 | certificatesResolvers: 177 | cloudflare: 178 | acme: 179 | email: yourname@domain.tld 180 | storage: /shared/acme.json 181 | caServer: 'https://acme-v02.api.letsencrypt.org/directory' 182 | keyType: EC256 183 | dnsChallenge: 184 | provider: cloudflare 185 | resolvers: 186 | - "1.1.1.1:53" 187 | - "8.8.8.8:53" 188 | ...
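    # Note on the dnsChallenge: Traefik proves domain ownership by creating DNS TXT records
    # through the Cloudflare API, so certificates can be issued even when ports 80/443 are
    # not reachable from the internet. The resolvers listed above are only used to check
    # that the TXT record has propagated.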
189 | ``` 190 | 191 | Make sure you pass the DNS Providers Credentials via the `.env` file into the Traefik container. 192 | 193 | **Example `.env`:** 194 | ``` 195 | ... 196 | CF_API_EMAIL=your-mail-address 197 | CF_API_KEY=your-secret-api-key 198 | ``` 199 | 200 | 201 | ### Start the container and create your first user 202 | 203 | To start the container, execute the following command in the project directory. 204 | 205 | ```bash 206 | docker-compose up -d 207 | ``` 208 | 209 | Create your first user with the following command. 210 | 211 | ```bash 212 | docker-compose exec passbolt su -m -c "/usr/share/php/passbolt/bin/cake \ 213 | passbolt register_user \ 214 | -u \ 215 | -f \ 216 | -l \ 217 | -r admin" -s /bin/sh www-data 218 | ``` 219 | 220 | Login with your user and create your private key and passphrase. Make sure, you store the private key in a secure location. 221 | 222 | --- 223 | ## References 224 | 225 | - [Docker passbolt installation](https://help.passbolt.com/hosting/install/ce/docker.html) 226 | - [Configure email providers](https://help.passbolt.com/configure/email/setup) 227 | - [Auto configure HTTPS with Let's Encrypt on Docker](https://help.passbolt.com/configure/https/pro/docker/auto) -------------------------------------------------------------------------------- /passbolt-tutorial/conf/headers.conf: -------------------------------------------------------------------------------- 1 | http: 2 | middlewares: 3 | SslHeader: 4 | headers: 5 | FrameDeny: true 6 | AccessControlAllowMethods: 'GET,OPTIONS,PUT' 7 | AccessControlAllowOriginList: 8 | - origin-list-or-null 9 | AccessControlMaxAge: 100 10 | AddVaryHeader: true 11 | BrowserXssFilter: true 12 | ContentTypeNosniff: true 13 | ForceSTSHeader: true 14 | STSIncludeSubdomains: true 15 | STSPreload: true 16 | ContentSecurityPolicy: default-src 'self' 'unsafe-inline' 17 | CustomFrameOptionsValue: SAMEORIGIN 18 | ReferrerPolicy: same-origin 19 | PermissionsPolicy: vibrate 'self' 20 | STSSeconds: 315360000 21 | -------------------------------------------------------------------------------- /passbolt-tutorial/conf/tls.conf: -------------------------------------------------------------------------------- 1 | tls: 2 | options: 3 | default: 4 | minVersion: VersionTLS12 5 | sniStrict: true 6 | curvePreferences: 7 | - CurveP521 8 | - CurveP384 9 | cipherSuites: 10 | - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 11 | - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 12 | - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 13 | -------------------------------------------------------------------------------- /passbolt-tutorial/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.9' 2 | 3 | services: 4 | db: 5 | image: mariadb:10.3 6 | restart: unless-stopped 7 | environment: 8 | - MYSQL_RANDOM_ROOT_PASSWORD=true 9 | - MYSQL_DATABASE=passbolt 10 | - MYSQL_USER=passbolt 11 | - MYSQL_PASSWORD=P4ssb0lt 12 | volumes: 13 | - database_volume:/var/lib/mysql 14 | 15 | passbolt: 16 | image: passbolt/passbolt:latest-ce 17 | restart: unless-stopped 18 | depends_on: 19 | - db 20 | environment: 21 | - APP_FULL_BASE_URL=https://passbolt.domain.tld 22 | - DATASOURCES_DEFAULT_HOST=db 23 | - DATASOURCES_DEFAULT_USERNAME=passbolt 24 | - DATASOURCES_DEFAULT_PASSWORD=P4ssb0lt 25 | - DATASOURCES_DEFAULT_DATABASE=passbolt 26 | - EMAIL_TRANSPORT_DEFAULT_HOST=your-mail-server 27 | - EMAIL_TRANSPORT_DEFAULT_PORT=587 28 | - EMAIL_TRANSPORT_DEFAULT_USERNAME=$EMAIL_TRANSPORT_DEFAULT_USERNAME 29 | - 
EMAIL_TRANSPORT_DEFAULT_PASSWORD=$EMAIL_TRANSPORT_DEFAULT_PASSWORD 30 | - EMAIL_TRANSPORT_DEFAULT_TLS=true 31 | - EMAIL_DEFAULT_FROM=no-reply@domain.tld 32 | volumes: 33 | - gpg_volume:/etc/passbolt/gpg 34 | - jwt_volume:/etc/passbolt/jwt 35 | command: ["/usr/bin/wait-for.sh", "-t", "0", "db:3306", "--", "/docker-entrypoint.sh"] 36 | labels: 37 | traefik.enable: "true" 38 | traefik.http.routers.passbolt-http.entrypoints: "web" 39 | traefik.http.routers.passbolt-http.rule: "Host(`passbolt.domain.tld`)" 40 | traefik.http.routers.passbolt-http.middlewares: "SslHeader@file" 41 | traefik.http.routers.passbolt-https.middlewares: "SslHeader@file" 42 | traefik.http.routers.passbolt-https.entrypoints: "websecure" 43 | traefik.http.routers.passbolt-https.rule: "Host(`passbolt.domain.tld`)" 44 | traefik.http.routers.passbolt-https.tls: "true" 45 | traefik.http.routers.passbolt-https.tls.certresolver: "cloudflare" 46 | 47 | traefik: 48 | image: traefik:2.6 49 | restart: always 50 | ports: 51 | - 80:80 52 | - 443:443 53 | volumes: 54 | - /var/run/docker.sock:/var/run/docker.sock:ro 55 | - ./traefik.yaml:/traefik.yaml:ro 56 | - ./conf/:/etc/traefik/conf 57 | - ./shared/:/shared 58 | environment: 59 | - CF_API_EMAIL=$CF_API_EMAIL 60 | - CF_API_KEY=$CF_API_KEY 61 | 62 | volumes: 63 | database_volume: 64 | gpg_volume: 65 | jwt_volume: 66 | -------------------------------------------------------------------------------- /passbolt-tutorial/traefik.yaml: -------------------------------------------------------------------------------- 1 | global: 2 | sendAnonymousUsage: false 3 | log: 4 | level: INFO 5 | format: common 6 | providers: 7 | docker: 8 | endpoint: 'unix:///var/run/docker.sock' 9 | watch: true 10 | exposedByDefault: false 11 | swarmMode: false 12 | file: 13 | directory: /etc/traefik/conf/ 14 | watch: true 15 | api: 16 | dashboard: false 17 | debug: false 18 | insecure: false 19 | entryPoints: 20 | web: 21 | address: ':80' 22 | http: 23 | redirections: 24 | entryPoint: 25 | to: websecure 26 | scheme: https 27 | permanent: true 28 | websecure: 29 | address: ':443' 30 | certificatesResolvers: 31 | cloudflare: 32 | acme: 33 | email: mail@domain.tld 34 | storage: /shared/acme.json 35 | caServer: 'https://acme-v02.api.letsencrypt.org/directory' 36 | keyType: EC256 37 | dnsChallenge: 38 | provider: cloudflare 39 | resolvers: 40 | - "1.1.1.1:53" 41 | - "8.8.8.8:53" -------------------------------------------------------------------------------- /podman-docker-alternative/README.md: -------------------------------------------------------------------------------- 1 | # Docker Alternative with Podman, Cockpit, and Nginx Proxy Manager 2 | Docker Alternative with Podman, Cockpit, and Nginx Proxy Manager to manage your Linux server easily and securely! We will install an Ubuntu 21.04 Server and deploy it with a nice management web UI protected with a reverse proxy. 3 | 4 | We will use the free and open-source software Podman. 5 | 6 | 7 | Project Homepage: https://podman.io/ 8 | Documentation: https://docs.podman.io/en/latest/ 9 | 10 | Video: https://youtu.be/-hJosY_M0I4 11 | 12 | ## Prerequisites 13 | 14 | - Linux Server running Ubuntu 21.04 LTS or newer 15 | 16 | You can still install Podman on a Linux Server that is not running Ubuntu, however, this may require different commands! 17 | 18 | *For older Ubuntu versions or other Linux Distributions, just follow the [official installation instructions](https://podman.io/getting-started/installation).* 19 | 20 | ## 1. 
Install Podman, and Podman-Compose 21 | 22 | ### 1.1. Install Podman 23 | 24 | You can install Podman on Linux, macOS, or Windows. In our example, because we're using Ubuntu 21.04, we can just install it from the Ubuntu repository. 25 | 26 | ```bash 27 | sudo apt install podman 28 | ``` 29 | 30 | ### 1.2. Install Podman-Compose 31 | 32 | ``` 33 | sudo apt install python3-pip 34 | pip3 install podman-compose 35 | ``` 36 | 37 | You can also put this command into your `.bashrc` or `.zshrc` file to make it persistent! 38 | 39 | ```bash 40 | export PATH=$PATH:$HOME/.local/bin 41 | ``` 42 | 43 | ## 2. Install Cockpit 44 | 45 | ### 2.1. Install Cockpit 46 | 47 | Note, if you're using an older version of Ubuntu, I wouldn't necessarily recommend upgrading yet. Especially if you are using the LTS version, keep running it. The only package that's not available in older LTS versions of Ubuntu is the cockpit-podman package. So, my experience is, that it works best with Ubuntu 21.04 and probably newer versions as well. 48 | 49 | With the following command, we install Cockpit and the Podman extension. 50 | 51 | ```bash 52 | sudo apt install cockpit cockpit-podman 53 | ``` 54 | 55 | ### 2.2. Access Cockpit 56 | 57 | When the installation was successful, just access it on port 9090. The interface is very easy and intuitive. You can manage your entire Linux server, update packages, set up basic configuration, and manage containers with Podman. 58 | 59 | ## 3. Expose Cockpit with Nginx Proxy Manager 60 | 61 | ### 3.1. Change unprivileged ports 62 | 63 | Before we can start running Podman containers rootless, we need to configure unprivileged ports. Because by default, Podman doesn't allow us to expose any ports lower than `1024`, without root privileges. 64 | 65 | Simply open the `/etc/sysctl.conf` file and add this line at the end. 66 | 67 | ``` 68 | net.ipv4.ip_unprivileged_port_start=80 69 | ``` 70 | 71 | ### 3.2. Deploy Nginx Proxy Manager 72 | 73 | Create a new file `docker-compose.yml`file, please refer to the nginxproxymanager documentation: https://nginxproxymanager.com/guide/. 74 | 75 | ... 76 | 77 | **Example Docker-Compose File**: 78 | ```yml 79 | version: '3' 80 | 81 | volumes: 82 | nginxproxymanager-data: 83 | nginxproxymanager-ssl: 84 | nginxproxymanager-db: 85 | 86 | services: 87 | nginxproxymanager: 88 | image: 'jc21/nginx-proxy-manager:latest' 89 | ports: 90 | - '80:80' 91 | - '81:81' 92 | - '443:443' 93 | environment: 94 | DB_MYSQL_HOST: "db" 95 | DB_MYSQL_PORT: 3306 96 | DB_MYSQL_USER: "npm" 97 | DB_MYSQL_PASSWORD: "npm" 98 | DB_MYSQL_NAME: "npm" 99 | volumes: 100 | - nginxproxymanager-data:/data 101 | - nginxproxymanager-ssl:/etc/letsencrypt 102 | nginxproxymanager-db: 103 | image: 'jc21/mariadb-aria:latest' 104 | environment: 105 | MYSQL_ROOT_PASSWORD: 'npm' 106 | MYSQL_DATABASE: 'npm' 107 | MYSQL_USER: 'npm' 108 | MYSQL_PASSWORD: 'npm' 109 | volumes: 110 | - nginxproxymanager-db:/var/lib/mysql 111 | ``` 112 | 113 | ### 3.3. Start the Nginx Proxy Manager 114 | 115 | ```bash 116 | podman-compose up -d 117 | ``` 118 | 119 | ### 3.4. Login to the web UI of NGINX proxy manager 120 | 121 | Now we can log in to the web UI. Simply use your browser to connect to your server by using the IP address or an FQDN and connect on port `81`. Log in with the username `admin@example.com` and the password `changeme`. Next, you should change your username and password, and that’s it! 122 | 123 | ### 3.5. 
Configure a new Proxy Host 124 | 125 | Add a new Proxy Host and make sure you select HTTPS as the scheme, because Cockpit uses HTTPS by default. As the Forward Hostname / IP, just use the internal or public IP address of your server. If you use the internal IP address, you can limit the listening address of Cockpit later. With this method, you block access from external networks unless it goes through the reverse proxy! 126 | 127 | In this example, I've used the public DNS name npm3.the-digital-life.com and forwarded it to the internal IP address of my server, using the Cockpit port 9090. 128 | 129 | ### 3.6. (optional) Stop listening on Port 9090 130 | 131 | Let's also limit access to our Cockpit web interface, because you could still reach Cockpit directly via the public IP address on port 9090. If you have used the server's internal IP address in Nginx Proxy Manager, you can now restrict Cockpit to listen on this IP address only, so that only Nginx Proxy Manager and other internal servers are able to connect to our administrative interface. 132 | 133 | Create a new file `/etc/systemd/system/cockpit.socket.d/listen.conf` and add the following lines. 134 | 135 | ```conf 136 | [Socket] 137 | ListenStream= 138 | ListenStream=:9090 139 | FreeBind=yes 140 | ``` 141 | 142 | To make these settings active, execute the following commands in the terminal. 143 | 144 | ```bash 145 | sudo systemctl daemon-reload 146 | 147 | sudo systemctl restart cockpit.socket 148 | ``` 149 | 150 | -------------------------------------------------------------------------------- /portainer-remote-tls/README.md: -------------------------------------------------------------------------------- 1 | # How to manage multiple Hosts in Portainer 2 | Portainer Remote Host Management with a proper TLS protected Docker API. Learn how to easily and securely expose the Docker API on a remote server and connect your Portainer main instance for remote server management. 3 | 4 | We will use the free and open-source software Portainer. 5 | 6 | 7 | Project Homepage: 8 | Documentation: 9 | 10 | Video: https://youtu.be/kKDoPohpiNk 11 | 12 | ## Prerequisites 13 | 14 | - Running Portainer 15 | 16 | ## 1. Generate Certificates for Portainer and the Remote Server 17 | 18 | ### 1.1. Create a Certificate Authority (CA) 19 | 20 | ```bash 21 | openssl genrsa -aes256 -out ca-key.pem 4096 22 | 23 | openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem 24 | ``` 25 | 26 | ### 1.2. Generate a Server Certificate 27 | 28 | ```bash 29 | openssl genrsa -out server-key.pem 4096 30 | 31 | openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr 32 | 33 | echo subjectAltName = DNS:$HOST,IP:$INTERNALIP1,IP:$INTERNALIP2 >> extfile.cnf 34 | 35 | echo extendedKeyUsage = serverAuth >> extfile.cnf 36 | 37 | openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf 38 | ``` 39 | 40 | ### 1.3. Generate a Client Certificate 41 | 42 | ```bash 43 | openssl genrsa -out key.pem 4096 44 | 45 | openssl req -subj "/CN=$PORTAINERDNS" -new -key key.pem -out client.csr 46 | 47 | echo extendedKeyUsage = clientAuth > extfile-client.cnf 48 | 49 | openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf 50 | ``` 51 | 52 | ### 1.4. Cleanup and Protect your Private Keys! 
53 | 54 | ```bash 55 | rm -v client.csr server.csr extfile.cnf extfile-client.cnf 56 | 57 | chmod -v 0400 ca-key.pem key.pem server-key.pem 58 | ``` 59 | 60 | ### 1.5. Enable Docker API on your Remote Server 61 | 62 | Create `/etc/docker/daemon.json` with the following settings and replace `$INTERNALIP` with your IP address of the remote server. 63 | 64 | ```json 65 | { 66 | "hosts": ["unix:///var/run/docker.sock", "tcp://$INTERNALIP:2376"], 67 | "tls": true, 68 | "tlscacert": "/root/certs/ca.pem", 69 | "tlscert": "/root/certs/server-cert.pem", 70 | "tlskey": "/root/certs/server-key.pem", 71 | "tlsverify": true 72 | } 73 | ``` 74 | 75 | Also Create a file in `/etc/systemd/system/docker.service.d/docker.conf`. 76 | 77 | ```conf 78 | [Service] 79 | ExecStart= 80 | ExecStart=/usr/bin/dockerd 81 | ``` 82 | 83 | Reload your daemon settings `systemctl daemon-reload` and restart your Docker daemon with `sudo service docker restart`. 84 | 85 | ## 2. Add the Endpoint in Portainer 86 | 87 | On the Portainer Web UI you now need to import the Certs and setup the connection. 88 | 89 | And then you should see that Portainer now has another endpoint successfully connected. And we now can manage our remote server from our main portainer instance, just like the local server. -------------------------------------------------------------------------------- /portainer-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Portainer Install Ubuntu tutorial - manage your docker containers 2 | Portainer Install Ubuntu tutorial, let’s have a look at how to install Portainer on an Ubuntu server. Portainer CE is a free and open-source tool that manages all your docker containers. It has a nice clean web UI where you can inspect and control all your docker resources. 3 | 4 | We will use the free and open-source software Portainer. 5 | 6 | 7 | Project Homepage: 8 | Documentation: 9 | 10 | Video: https://youtu.be/ljDI5jykjE8 11 | 12 | ## Prerequisites 13 | 14 | - Linux Server running Ubuntu 20.04 LTS or newer 15 | 16 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 17 | 18 | ## 1. Install Docker, and Docker-Compose 19 | 20 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 21 | 22 | ### 1.1. Install Docker 23 | ```bash 24 | sudo apt update 25 | 26 | sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common 27 | 28 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 29 | 30 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 31 | 32 | sudo apt update 33 | 34 | sudo apt-get install docker-ce docker-ce-cli containerd.io 35 | ``` 36 | 37 | ### 1.2. Check if Docker is installed correctly 38 | ```bash 39 | sudo docker run hello-world 40 | ``` 41 | 42 | ### 1.3. Install Docker-Compose 43 | 44 | Download the latest version (in this case it is 1.25.5, this may change whenever you read this tutorial!) 45 | 46 | ```bash 47 | sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 48 | 49 | sudo chmod +x /usr/local/bin/docker-compose 50 | ``` 51 | 52 | ### 1.4. Check if Docker-Compose is installed correctly 53 | ```bash 54 | sudo docker-compose --version 55 | ``` 56 | 57 | ### 1.5. 
(optional) Add your linux user to the `docker` group 58 | ```bash 59 | sudo usermod -aG docker $USER 60 | ``` 61 | 62 | ## 2. Set up Portainer 63 | 64 | ### 2.1. Create a new Docker Volume 65 | 66 | ```bash 67 | docker volume create portainer_data 68 | ``` 69 | 70 | ### 2.2. Launch Portainer 71 | 72 | ```bash 73 | docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce 74 | ``` 75 | -------------------------------------------------------------------------------- /self-signed-certificates-in-kubernetes/README.md: -------------------------------------------------------------------------------- 1 | # Self-Signed Certificates in Kubernetes 2 | 3 | ## Steps 4 | 5 | 1. Create a Certificate Authority 6 | a. Create a CA private key 7 | ```bash 8 | openssl genrsa -out ca.key 4096 9 | ``` 10 | b. Create a CA certificate 11 | ```bash 12 | openssl req -new -x509 -sha256 -days 365 -key ca.key -out ca.crt 13 | ``` 14 | c. Import the CA certificate in the `trusted Root Ca store` of your clients 15 | 2. Convert the content of the key and crt to base64 oneline 16 | ```bash 17 | cat ca.crt | base64 -w 0 18 | cat ca.key | base64 -w 0 19 | ``` 20 | 3. Create a secret object `nginx1-ca-secret.yml` and put in the key and crt content 21 | 4. Create a cluster issuer object `nginx1-clusterissuer.yml` 22 | 5. Create a new certificate `nginx1-cert.yml` for your projects 23 | 6. Add a `tls` reference in your ingress `nginx1-ingress.yml` 24 | 7. Apply all changes 25 | 26 | ## Architecture Diagram 27 | 28 | ``` 29 | Cert-Manager Objects Nginx1 Objects 30 | 31 | ┌───────────────────────┐ ┌─────────────────────────────────┐ 32 | Created CA │ kind: Secret │ │ │ 33 | private key ──►│ name: nginx1-ca-secret│◄─────────┐ │ kind: Ingress │ 34 | and cert │ tls.key: **priv key** │ │ │ name: nginx1-ingress │ 35 | │ tls.crt: **cert** │ │ │ tls: │ 36 | └───────────────────────┘ │ │ - hosts: │ 37 | │ │ - nginx1.clcreative.home │ 38 | ┌──────────────────────────────┐ │ ┌────┼───secretName: nginx1-tls-secret │ 39 | │ │ │ │ │ │ 40 | │ kind: ClusterIssuer │ │ │ └─────────────────────────────────┘ 41 | ┌───┤►name: nginx1-clusterissuer │ │ │ 42 | │ │ secretName: nginx1-ca-secret─┼───┘ │ 43 | │ │ │ │ 44 | │ └──────────────────────────────┘ │ 45 | │ │ 46 | │ ┌───────────────────────────────┐ │ 47 | │ │ │ │ 48 | │ │ kind: Certificate │ │ 49 | │ │ name: nginx1-cert │ │ 50 | └───┼─issuerRef: │ │ 51 | │ name: nginx1-clusterissuer │ │ 52 | │ kind: ClusterIssuer │ │ 53 | │ dnsNames: │ │ 54 | │ - nginx1.clcreative.home │ │ 55 | ┌───┼─secretName: nginx1-tls-secret │ │ 56 | │ │ │ │ 57 | │ └──────────┬────────────────────┘ │ 58 | │ │ │ 59 | │ │ will be created │ 60 | │ ▼ and managed automatically │ 61 | │ ┌───────────────────────────────┐ │ 62 | │ │ │ │ 63 | │ │ kind: Secret │ │ 64 | └───┤►name: nginx1-tls-secret◄──────┼───────┘ 65 | │ │ 66 | └───────────────────────────────┘ 67 | ``` 68 | 69 | -------------------------------------------------------------------------------- /self-signed-certificates-in-kubernetes/nginx1-ca-secret.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: nginx1-ca-secret 5 | namespace: cert-manager 6 | type: Opaque 7 | data: 8 | tls.crt: # content of ca.crt (converted to base64) 9 | tls.key: # content of ca.key (converted to base64) 10 | -------------------------------------------------------------------------------- 
/self-signed-certificates-in-kubernetes/nginx1-cert.yml: -------------------------------------------------------------------------------- 1 | apiVersion: cert-manager.io/v1 2 | kind: Certificate 3 | metadata: 4 | name: nginx1-cert 5 | namespace: nginx1 6 | spec: 7 | secretName: nginx1-tls-secret 8 | issuerRef: 9 | name: nginx1-clusterissuer 10 | kind: ClusterIssuer 11 | dnsNames: 12 | - nginx1.clcreative.home -------------------------------------------------------------------------------- /self-signed-certificates-in-kubernetes/nginx1-clusterissuer.yml: -------------------------------------------------------------------------------- 1 | apiVersion: cert-manager.io/v1 2 | kind: ClusterIssuer 3 | metadata: 4 | name: nginx1-clusterissuer 5 | spec: 6 | ca: 7 | secretName: nginx1-ca-secret 8 | -------------------------------------------------------------------------------- /self-signed-certificates-in-kubernetes/nginx1-deploy.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx1-deploy 5 | namespace: nginx1 6 | spec: 7 | replicas: 1 8 | selector: 9 | matchLabels: 10 | app: nginx1-app 11 | template: 12 | metadata: 13 | labels: 14 | app: nginx1-app 15 | spec: 16 | containers: 17 | - name: nginx1-app 18 | image: nginx 19 | ports: 20 | - name: web 21 | containerPort: 80 -------------------------------------------------------------------------------- /self-signed-certificates-in-kubernetes/nginx1-ingress.yml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: Ingress 3 | metadata: 4 | name: nginx1-ingress 5 | namespace: nginx1 6 | annotations: 7 | kubernetes.io/ingress.class: traefik 8 | traefik.ingress.kubernetes.io/router.entrypoints: web, websecure 9 | traefik.ingress.kubernetes.io/router.middlewares: nginx1-redirecttohttps@kubernetescrd 10 | spec: 11 | rules: 12 | - host: "nginx1.clcreative.home" 13 | http: 14 | paths: 15 | - path: / 16 | pathType: Prefix 17 | backend: 18 | service: 19 | name: nginx1-svc 20 | port: 21 | number: 80 22 | tls: 23 | - hosts: 24 | - nginx1.clcreative.home 25 | secretName: nginx1-tls-secret -------------------------------------------------------------------------------- /self-signed-certificates-in-kubernetes/nginx1-mw.yml: -------------------------------------------------------------------------------- 1 | apiVersion: traefik.containo.us/v1alpha1 2 | kind: Middleware 3 | metadata: 4 | name: redirecttohttps 5 | namespace: nginx1 6 | spec: 7 | redirectScheme: 8 | permanent: false 9 | scheme: https 10 | -------------------------------------------------------------------------------- /self-signed-certificates-in-kubernetes/nginx1-ns.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: nginx1 -------------------------------------------------------------------------------- /self-signed-certificates-in-kubernetes/nginx1-svc.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nginx1-svc 5 | namespace: nginx1 6 | labels: 7 | app: nginx1-app 8 | spec: 9 | type: ClusterIP 10 | ports: 11 | - name: http 12 | port: 80 13 | targetPort: 80 14 | protocol: TCP 15 | selector: 16 | app: nginx1-app -------------------------------------------------------------------------------- /serverless-tutorial/README.md: 
-------------------------------------------------------------------------------- 1 | # I made a YouTube Bot in Python! (Serverless Computing: DigitalOcean Functions) 2 | 3 | In this video, I explain what serverless computing is, by showing how I've used DigitalOcean Functions, to write a simple YouTube Bot in Python. This Bot is getting the latest published video on a YouTube Channel, and sends out a message to a Discord Channel. We discuss how to use APIs, store and retrieve Secrets, Tokens, use S3 Buckets, and much more. There is a lot of coding stuff in here, hope you'll enjoy it! #serverless #python #digitaloceanfunctions 4 | 5 | Video: https://www.youtube.com/watch?v=D_MUphj5tCM 6 | 7 | 8 | --- 9 | ## Prerequisites 10 | 11 | Before you can create serverless functions in **DigitalOcean**, you need to sign up. 12 | 13 | *Some examples that get information about the latest video of a YouTube channel, need to use the YouTube Data API. The YouTube Data API allows you to retrieve information about YouTube videos, channels, playlists, and more. To use the API, you'll need to have a project on the Google Cloud Platform, and you'll need to enable the YouTube Data API for that project.* 14 | 15 | --- 16 | ## Deploy a web function 17 | 18 | ### Create a namespace 19 | 20 | Log in to your **DigitalOcean** Account and in the web UI, create a new **namespace** in the **manage -> functions** section. 21 | 22 | ### Create a function 23 | 24 | In the **namespace** section, you can create a new **function** under **actions -> create function**. 25 | 26 | Select your desired **runtime**, e.g. `Python`, `NodeJS`, etc. In this tutorial I'm using `Python` You can also add your function to a **package**, which is optional. Make sure the **web function** checkbox is enabled. 27 | 28 | In the code editor, you can add your desired code or modify the example. 29 | 30 | ### Run the function 31 | 32 | To run the function from the web ui, you can click on **run**, this will simulate a web request to the function. You can also change the **parameters**, which is optional. 33 | 34 | You can also run the function from your PC by opening a web request. 35 | 36 | ```sh 37 | curl $(doctl sls fn get your-package/your-function --url) 38 | ``` 39 | 40 | ### Authentication 41 | 42 | To require a secret to run your function, you can edit the function settings it in the **settings -> access & security -> web function** menu. 43 | 44 | Enable the checkbox **secure web function**, and add a custom secret, or use the generated one. 45 | 46 | Test the authentication with the following command. 47 | 48 | ```sh 49 | curl -X GET $(doctl sls fn get your-package/your-function --url) \ 50 | -H "Content-Type: application/json" \ 51 | -H "X-Require-Whisk-Auth: your-secret" 52 | ``` 53 | 54 | --- 55 | ## Develop functions in vscode 56 | 57 | ### Install doctl 58 | 59 | Install `doctl` following the directions for your package manager or operating system, e.g. macOS: 60 | 61 | ```sh 62 | brew install doctl 63 | ``` 64 | 65 | ### Install serverless support 66 | 67 | Install the support for serverless functions in **doctl**. 68 | 69 | ```sh 70 | doctl serverless install 71 | ``` 72 | 73 | ### Connect to your namespace 74 | 75 | Connect to your **namespace**, you have created before. 76 | 77 | ```sh 78 | doctl serverless connect your-namespace 79 | ``` 80 | 81 | ### Initialize a new project 82 | 83 | You can initialize a new project by executing the following command. 
84 | 85 | ```sh 86 | doctl serverless init your-namespace --language python 87 | ``` 88 | 89 | You can now open the **namespace** project directory in a tool like **VSCode** and start developing your **packages** and **functions**. 90 | 91 | ### Edit the project file 92 | 93 | All project settings of a **namespace** , **packages**, and **functions** are described in a `project.yml` file. 94 | 95 | ```yaml 96 | packages: 97 | - name: your-package 98 | functions: 99 | - name: your-function 100 | runtime: python:default 101 | web: false 102 | ``` 103 | 104 | ### Edit packages and functions 105 | 106 | All **packages** need to be created in a separate folder e.g. `packages/`. All **functions** are created by adding files in the package folders. 107 | 108 | A project structure should look like this. 109 | 110 | ```text 111 | namespace/project 112 | ↳ packages 113 | ↳ your-package-1 114 | ↳ your-function-1.py 115 | ↳ your-function-2.py 116 | ↳ your-package-1 117 | ↳ your-function-3.py 118 | ↳ .gitignore 119 | ↳ project.yml 120 | ``` 121 | 122 | ### Deploy the package 123 | 124 | Upload the **package** code to DigitalOcean. 125 | 126 | ```sh 127 | doctl serverless deploy your-package 128 | ``` 129 | 130 | ### Invoke a function 131 | 132 | You can test and run a non-web function with the following command. 133 | 134 | ```sh 135 | doctl serverless functions invoke your-package/your-function 136 | ``` 137 | 138 | ### Invoke a function with parameters 139 | 140 | You can invoke **functions** with optional or required parameters. 141 | 142 | ```sh 143 | doctl serverless functions invoke httpreq-demo-1/hello -p name:Christian 144 | ``` 145 | 146 | In the code, you can read parameters like the following. 147 | 148 | ```py 149 | def main(args): 150 | name = args.get("name", "stranger") 151 | greeting = "Hello " + name + "!" 152 | print(greeting) 153 | return {"body": greeting} 154 | ``` 155 | 156 | ### Environment Variables 157 | 158 | You can add environment variables by updating your `project.yml` like the following. 159 | 160 | ```yml 161 | ... 162 | packages: 163 | - name: httpreq-demo-1 164 | functions: 165 | - name: hello 166 | binary: false 167 | main: "" 168 | runtime: python:default 169 | web: false 170 | websecure: false 171 | parameters: {} 172 | environment: { 173 | PERSON: "Christian" 174 | } 175 | 176 | ``` 177 | 178 | In the code, you can refer to the environment variables like the following. 179 | 180 | ```py 181 | import os 182 | 183 | 184 | def read_env_variable(variable_name): 185 | try: 186 | return os.environ.get(variable_name) 187 | except Exception: 188 | return False 189 | 190 | 191 | def main(args): 192 | greeting = f"Hello {read_env_variable('PERSON')}" 193 | print(greeting) 194 | return {"body": greeting} 195 | ``` 196 | 197 | --- 198 | ## Troubleshooting 199 | 200 | ### Build requirements 201 | 202 | You can add other Python Libraries by adding a `requirements.txt` and a `build.sh` script into your project. 203 | 204 | Change your **project** structure according to the following. 
205 | 206 | ```text 207 | namespace/project 208 | ↳ packages 209 | ↳ your-package-1 210 | ↳ your-function-1 211 | ↳ __main__.py 212 | ↳ build.sh 213 | ↳ requirements.txt 214 | ``` 215 | 216 | **Example `requirements.txt`:** 217 | ```sh 218 | google-api-python-client==2.72.0 219 | ``` 220 | 221 | **Example `build.sh`:** 222 | ```sh 223 | #!/bin/bash 224 | 225 | set -e 226 | 227 | virtualenv virtualenv 228 | source virtualenv/bin/activate 229 | pip install -r requirements.txt 230 | deactivate 231 | ``` 232 | 233 | ### Increase Limits 234 | 235 | When you get timeout errors, try increasing the function **limits**. 236 | 237 | ```yaml 238 | ... 239 | packages: 240 | - name: your-package 241 | functions: 242 | - name: your-function 243 | ... 244 | limits: 245 | timeout: 3000 # timeout in seconds 246 | memory: 512 # memory in mb 247 | ``` 248 | 249 | --- 250 | ## References 251 | 252 | - [DigitalOcean Functions Docs](https://docs.digitalocean.com/products/functions/) -------------------------------------------------------------------------------- /serverless-tutorial/assets/serverless-tutorial-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChristianLempa/videos/32f5ad7532ade7a679a22fb9e5abbfcd97060642/serverless-tutorial/assets/serverless-tutorial-architecture.png -------------------------------------------------------------------------------- /serverless-tutorial/files/youtubify-demo-2/.gitignore: -------------------------------------------------------------------------------- 1 | .nimbella 2 | .deployed 3 | __deployer__.zip 4 | __pycache__/ 5 | node_modules 6 | package-lock.json 7 | .DS_Store 8 | -------------------------------------------------------------------------------- /serverless-tutorial/files/youtubify-demo-2/packages/youtubify/getlatestvideo/__main__.py: -------------------------------------------------------------------------------- 1 | import os 2 | import requests 3 | from googleapiclient.discovery import build 4 | import boto3 5 | import botocore 6 | 7 | # read youtube api key and channel id from environment variables 8 | youtube_api_key = os.environ.get("YOUTUBE_API_KEY") 9 | youtube_channel_id = os.environ.get("YOUTUBE_CHANNEL_ID") 10 | 11 | # read spaces access key and secret from environment variables 12 | spaces_access_key = os.environ.get("SPACES_ACCESS_KEY") 13 | spaces_secret_key = os.environ.get("SPACES_SECRET_KEY") 14 | 15 | # read discord webhook url 16 | discord_webhook_url = os.environ.get("DISCORD_WEBHOOK_URL") 17 | 18 | 19 | # get latest video from a channel 20 | def get_latest_video(channel_id): 21 | # get latest video from a channel 22 | youtube = build("youtube", "v3", developerKey=youtube_api_key) 23 | 24 | request = youtube.search().list( 25 | part="snippet", 26 | channelId=channel_id, 27 | order="date", 28 | maxResults=1 29 | ) 30 | 31 | # execute the request 32 | response = request.execute() 33 | 34 | # get the video id from the response 35 | video_id = response["items"][0]["id"]["videoId"] 36 | 37 | # get the video meta data 38 | request = youtube.videos().list( 39 | part="snippet,contentDetails,statistics", 40 | id=video_id 41 | ) 42 | 43 | # execute the request 44 | response = request.execute() 45 | 46 | # return the video meta data 47 | return response["items"][0] 48 | 49 | 50 | # get latest video from s3, if it exists, if not return None 51 | def get_latest_video_from_s3(): 52 | # get latest video from s3, if it exists 53 | 54 | client = boto3.client( 55 | "s3", 56 | 
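        # DigitalOcean Spaces is S3-compatible, so the standard boto3 S3 client works here;
        # the region and endpoint below point at this demo's Space in Frankfurt (fra1).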
region_name="fra1", 57 | endpoint_url="https://fra1.digitaloceanspaces.com", 58 | aws_access_key_id=spaces_access_key, 59 | aws_secret_access_key=spaces_secret_key 60 | ) 61 | 62 | try: 63 | response = client.get_object( 64 | Bucket="youtubify-demo-2-space", 65 | Key="latest_video_id.txt" 66 | ) 67 | 68 | # return the video id 69 | return response["Body"].read().decode("utf-8") 70 | except botocore.exceptions.ClientError as e: 71 | if e.response["Error"]["Code"] == "NoSuchKey": 72 | # the object does not exist 73 | return None 74 | 75 | 76 | # push the latest video id to s3 77 | def push_latest_video_to_s3(video_id): 78 | # push the latest video id to s3 79 | 80 | client = boto3.client( 81 | "s3", 82 | region_name="fra1", 83 | endpoint_url="https://fra1.digitaloceanspaces.com", 84 | aws_access_key_id=spaces_access_key, 85 | aws_secret_access_key=spaces_secret_key 86 | ) 87 | 88 | client.put_object( 89 | Bucket="youtubify-demo-2-space", 90 | Key="latest_video_id.txt", 91 | Body=video_id 92 | ) 93 | 94 | 95 | def main(args): 96 | # get latest youtube video from a channel 97 | video = get_latest_video(youtube_channel_id) 98 | 99 | # get latest video id from s3 100 | latest_video_id = get_latest_video_from_s3() 101 | 102 | # if the latest video id is not the same as the latest video id from s3 103 | if latest_video_id != video["id"]: 104 | # create a message with video url that will be pushed to discord 105 | message = "New video: https://www.youtube.com/watch?v=" + video["id"] 106 | 107 | # push the video url to discord webhook 108 | requests.post(discord_webhook_url, json={ 109 | "content": message 110 | }) 111 | 112 | # push the latest video id to s3 113 | push_latest_video_to_s3(video["id"]) 114 | 115 | return {"body": "New video"} 116 | else: 117 | return {"body": "No new video"} 118 | -------------------------------------------------------------------------------- /serverless-tutorial/files/youtubify-demo-2/packages/youtubify/getlatestvideo/build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | 5 | virtualenv virtualenv 6 | source virtualenv/bin/activate 7 | 8 | pip install -r requirements.txt 9 | deactivate -------------------------------------------------------------------------------- /serverless-tutorial/files/youtubify-demo-2/packages/youtubify/getlatestvideo/requirements.txt: -------------------------------------------------------------------------------- 1 | google-api-python-client==2.72.0 -------------------------------------------------------------------------------- /serverless-tutorial/files/youtubify-demo-2/project.yml: -------------------------------------------------------------------------------- 1 | packages: 2 | - name: youtubify 3 | functions: 4 | - name: getlatestvideo 5 | runtime: python:default 6 | web: false 7 | environment: 8 | YOUTUBE_API_KEY: "your-youtube-api-key" 9 | YOUTUBE_CHANNEL_ID: "your-youtube-channel-id" 10 | SPACES_ACCESS_KEY: "your-spaces-access-key" 11 | SPACES_SECRET_KEY: "your-spaces-secret-key" 12 | DISCORD_WEBHOOK_URL: "your-discord-webhook-url" 13 | limits: 14 | timeout: 3000 15 | memory: 512 16 | -------------------------------------------------------------------------------- /teleport-passwordless/README.md: -------------------------------------------------------------------------------- 1 | # Teleport Passwordless 2 | The best password is NO password! Let's add my new YubiKey as a passwordless authentication method in Teleport. 
That allows me to access all my Linux Servers, Kubernetes Clusters, Web Applications, Databases and RDP without remembering any password (but still is much more secure!) 3 | 4 | We will use the free and open-source software teleport. 5 | 6 | Project Homepage: https://goteleport.com 7 | Documentation: https://goteleport.com/docs/access-controls/guides/passwordless/ 8 | 9 | Video: https://youtu.be/I10mtZfVZ1Q 10 | 11 | ## Prerequisites 12 | 13 | - A Linux server running Ubuntu (20.04 LTS or newer) with Docker, and Docker-compose installed 14 | - You must have a public domain that should match the `public_addr` in the teleport.yml config file 15 | - The ports `3023`, `3024`, `3025` and `443` need to be accessible on the server 16 | 17 | *To set up a demo server using **Ubuntu 22.04 LTS** on **CIVO** and create a public DNS Record at **Cloudflare**, you can use the **Terraform** Template and **Ansible** Playbook in the `demo-server/` directory.* 18 | 19 | ## Install and configure the teleport 20 | 21 | ### (Optional) Generate teleport config file 22 | 23 | *This step is optional because you can also copy/paste the file `.config/teleport.yml` and modify it according to your needs.* 24 | 25 | Generate a teleport config file `.config/teleport.yml` from scratch. 26 | 27 | ```bash 28 | docker run --hostname localhost --rm --platform linux/amd64 --entrypoint=/bin/sh -v /Users/xcad/Projects/videos/teleport-passwordless/config:/etc/teleport -it quay.io/gravitational/teleport:10 -c "teleport configure > /etc/teleport/teleport.yml" 29 | ``` 30 | 31 | Change the following fields in the teleport config file `.config/teleport.yml`. 32 | 33 | ``` 34 | teleport: 35 | nodename: 36 | auth_service: 37 | cluster_name: 38 | proxy_service: 39 | web_listen_addr: 0.0.0.0:443 40 | public_addr: :443 41 | acme: 42 | enabled: yes 43 | email: 44 | ``` 45 | 46 | ### Upload files to server 47 | ``` 48 | scp docker-compose.yml config data :~/teleport-passwordless/ 49 | ``` 50 | 51 | Check permissions of the directory, they should match the `uid:gid` in the compose file 52 | 53 | ### Start teleport server 54 | 55 | Start the teleport server, and check if the certificates are generated correctly. 56 | 57 | ``` 58 | docker-compose up 59 | ``` 60 | 61 | ### (Optional) Start teleport server in background mode 62 | 63 | Start the teleport server in background mode. 64 | 65 | ```bash 66 | docker-compose up -d 67 | ``` 68 | 69 | Check if the server has been started successfully. 70 | 71 | ```bash 72 | docker-compose ps 73 | ``` 74 | 75 | ## Create user 76 | 77 | //TODO description text 78 | 79 | ``` 80 | docker exec -it teleport tctl users add xcad --roles=editor --logins=root,xcad 81 | ``` 82 | 83 | //TODO steps in the web ui 84 | 85 | ## Passwordless authentication 86 | 87 | ### Enable passwordless in teleport 88 | 89 | Change the following fields in the teleport config file `.config/teleport.yml`. 90 | 91 | ``` 92 | auth_service: 93 | enabled: "yes" 94 | listen_addr: 0.0.0.0:3025 95 | proxy_listener_mode: multiplex 96 | cluster_name: 97 | authentication: 98 | type: local 99 | second_factor: on 100 | webauthn: 101 | rp_id: 102 | connector_name: passwordless 103 | ``` 104 | 105 | ### Add a passwordless multifactor hardwarekey 106 | 107 | //TODO add steps in the web ui 108 | 109 | You can also add a multifactor hardwarekey via tsh. 
110 | 111 | ```bash 112 | tsh mfa add 113 | ``` 114 | 115 | ### (Optional) Use touchid in Mac OS 116 | 117 | //TODO add description for touchid 118 | 119 | -------------------------------------------------------------------------------- /teleport-passwordless/config/teleport.yml: -------------------------------------------------------------------------------- 1 | # 2 | # A Sample Teleport configuration file. 3 | # 4 | # Things to update: 5 | # 1. license.pem: You only need a license from https://dashboard.goteleport.com 6 | # if you are an Enterprise customer. 7 | # 8 | version: v2 9 | teleport: 10 | nodename: teleport-demo.clcreative.de 11 | data_dir: /var/lib/teleport 12 | log: 13 | output: stderr 14 | severity: INFO 15 | format: 16 | output: text 17 | ca_pin: "" 18 | diag_addr: "" 19 | auth_service: 20 | enabled: "yes" 21 | listen_addr: 0.0.0.0:3025 22 | proxy_listener_mode: multiplex 23 | cluster_name: teleport-demo.clcreative.de 24 | authentication: 25 | type: local 26 | second_factor: on 27 | webauthn: 28 | rp_id: teleport-demo.clcreative.de 29 | connector_name: passwordless 30 | ssh_service: 31 | enabled: "yes" 32 | commands: 33 | - name: hostname 34 | command: [hostname] 35 | period: 1m0s 36 | proxy_service: 37 | enabled: "yes" 38 | web_listen_addr: 0.0.0.0:443 39 | public_addr: teleport-demo.clcreative.de 40 | https_keypairs: [] 41 | acme: 42 | enabled: "yes" 43 | email: "christian.lempa@clcreative.de" 44 | 45 | -------------------------------------------------------------------------------- /teleport-passwordless/demo-server/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory=inventory 3 | host_key_checking = False -------------------------------------------------------------------------------- /teleport-passwordless/demo-server/demo-server-civo.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | backend "remote" { 3 | organization = "clcreative" 4 | workspaces { 5 | name = "videos-teleport-passwordless" 6 | } 7 | } 8 | required_providers { 9 | cloudflare = { 10 | source = "cloudflare/cloudflare" 11 | version = "~> 3.22.0" 12 | } 13 | civo = { 14 | source = "civo/civo" 15 | version = "~> 1.0.21" 16 | } 17 | } 18 | } 19 | 20 | # --- 21 | 22 | provider "civo" { 23 | token = var.civo_token 24 | region = "FRA1" 25 | } 26 | 27 | provider "cloudflare" { 28 | email = var.cloudflare_email 29 | api_key = var.cloudflare_api_key 30 | } 31 | 32 | # --- 33 | 34 | variable "civo_token" { 35 | type = string 36 | sensitive = true 37 | } 38 | 39 | variable "cloudflare_email" { 40 | type = string 41 | } 42 | 43 | variable "cloudflare_api_key" { 44 | type = string 45 | sensitive = true 46 | } 47 | 48 | # --- 49 | 50 | data "cloudflare_zone" "clcreative" { 51 | name = "clcreative.de" 52 | } 53 | 54 | data "civo_size" "small" { 55 | filter { 56 | key = "name" 57 | values = ["g3.small"] 58 | match_by = "re" 59 | } 60 | } 61 | 62 | data "civo_disk_image" "ubuntu" { 63 | filter { 64 | key = "name" 65 | values = ["ubuntu-jammy"] 66 | } 67 | } 68 | 69 | data "civo_ssh_key" "ssh_xcad" { 70 | name = "xcad" 71 | } 72 | 73 | # --- 74 | 75 | resource "civo_instance" "srv_teleport-demo" { 76 | hostname = "teleport-demo.clcreative.de" 77 | notes = "This is a demo server for the teleport-passwordless video." 
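  # use the first size and disk image matched by the data source filters above (g3.small, ubuntu-jammy)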
78 | size = element(data.civo_size.small.sizes, 0).name 79 | disk_image = element(data.civo_disk_image.ubuntu.diskimages, 0).id 80 | initial_user = "xcad" 81 | sshkey_id = data.civo_ssh_key.ssh_xcad.id 82 | } 83 | 84 | # --- 85 | 86 | resource "cloudflare_record" "dns_teleport-demo" { 87 | zone_id = data.cloudflare_zone.clcreative.id 88 | name = "teleport-demo.clcreative.de" 89 | value = civo_instance.srv_teleport-demo.public_ip 90 | type = "A" 91 | proxied = false 92 | } -------------------------------------------------------------------------------- /teleport-passwordless/demo-server/demo-server-install-docker.yml: -------------------------------------------------------------------------------- 1 | - hosts: teleport-demo 2 | 3 | become: yes 4 | tasks: 5 | 6 | # Install Docker 7 | # -- 8 | # 9 | - name: install prerequisites 10 | apt: 11 | name: 12 | - apt-transport-https 13 | - ca-certificates 14 | - curl 15 | - gnupg-agent 16 | - software-properties-common 17 | update_cache: yes 18 | 19 | - name: add apt-key 20 | apt_key: 21 | url: https://download.docker.com/linux/ubuntu/gpg 22 | 23 | - name: add docker repo 24 | apt_repository: 25 | repo: deb https://download.docker.com/linux/ubuntu focal stable 26 | 27 | - name: install docker 28 | apt: 29 | name: 30 | - docker-ce 31 | - docker-ce-cli 32 | - containerd.io 33 | update_cache: yes 34 | 35 | - name: add userpermissions 36 | shell: "usermod -aG docker {{ lookup('env','USER') }}" 37 | 38 | # Installs Docker SDK and Docker-Compose 39 | # -- 40 | # 41 | - name: install python package manager 42 | apt: 43 | name: python3-pip 44 | 45 | - name: install python sdk 46 | become_user: "{{ lookup('env','USER') }}" 47 | pip: 48 | name: 49 | - docker 50 | - docker-compose -------------------------------------------------------------------------------- /teleport-passwordless/demo-server/inventory: -------------------------------------------------------------------------------- 1 | [teleport-demo] 2 | teleport-demo.clcreative.de -------------------------------------------------------------------------------- /teleport-passwordless/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | 3 | services: 4 | 5 | teleport: 6 | image: quay.io/gravitational/teleport:10.0.2 7 | user: 1000:1000 8 | container_name: teleport 9 | entrypoint: /usr/bin/dumb-init 10 | command: "teleport start -d -c /etc/teleport/teleport.yml" 11 | ports: 12 | - "3023:3023" 13 | - "3024:3024" 14 | - "3025:3025" 15 | - "443:443" 16 | volumes: 17 | - /home/xcad/teleport-passwordless/config:/etc/teleport 18 | - /home/xcad/teleport-passwordless/data:/var/lib/teleport 19 | -------------------------------------------------------------------------------- /teleport-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Manage all your SSH servers with teleport 2 | How to set up an SSH proxy server with gravitational teleport to manage all your SSH connections from a web interface. Supports 2FA two-factor authentication, multiple users, monitoring, and logging of SSH sessions. Install on DigitalOcean Cloud and manage via letsencrypt load balancer certs. 3 | 4 | We will use the free and open-source software Teleport. 
5 | 6 | 7 | Project Homepage: https://goteleport.com/ 8 | Documentation: https://goteleport.com/docs/ 9 | 10 | Video: https://youtu.be/nk1jfIAL5qE 11 | 12 | ## Prerequisites 13 | 14 | - Linux Server running Ubuntu 20.04 LTS or newer 15 | - Domain that points to the public IP of your Linux Server 16 | 17 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 18 | 19 | ## Install Docker, and Docker-Compose 20 | 21 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 22 | 23 | ### Install Docker 24 | ```bash 25 | sudo apt update 26 | 27 | sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common 28 | 29 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 30 | 31 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 32 | 33 | sudo apt update 34 | 35 | sudo apt-get install docker-ce docker-ce-cli containerd.io 36 | ``` 37 | 38 | ### Check if Docker is installed correctly 39 | ```bash 40 | sudo docker run hello-world 41 | ``` 42 | 43 | ### Install Docker-Compose 44 | 45 | Download the latest version (in this case it is 1.25.5, this may change whenever you read this tutorial!) 46 | 47 | ```bash 48 | sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 49 | 50 | sudo chmod +x /usr/local/bin/docker-compose 51 | ``` 52 | 53 | ### Check if Docker-Compose is installed correctly 54 | ```bash 55 | sudo docker-compose --version 56 | ``` 57 | 58 | ### (Optional) Add your linux user to the `docker` group 59 | ```bash 60 | sudo usermod -aG docker $USER 61 | ``` 62 | 63 | ## Set up Teleport 64 | 65 | ### Create a new file `docker-compose.yml`file, please refer to the teleport documentation: https://goteleport.com/docs/getting-started/docker-compose/. 66 | 67 | First, we will need to install an authentication and proxy server. This will handle the whole authentication process for all nodes and clients. It also runs a proxy where we can connect to. You can connect to the proxy from a terminal client or via a web client. It also comes with other cool features, I will show you later. I’m running this in a cloud environment because I can easily access this server from anywhere and it’s flexible. But in theory, you could also run this in your home lab or on-premise. 68 | 69 | I’m using docker-compose to deploy the teleport proxy and auth server, but you can also install this directly on Linux, without docker if you want. However, I like the containerized deployment because it offers me the most flexibility. 70 | 71 | **Example Docker-Compose File**: 72 | ```yml 73 | version: '2' 74 | services: 75 | configure: 76 | image: quay.io/gravitational/teleport:4.3 77 | container_name: teleport-configure 78 | entrypoint: /bin/sh 79 | hostname: dev.the-digital-life.com 80 | command: -c "if [ ! 
-f /etc/teleport/teleport.yaml ]; then teleport configure > /etc/teleport/teleport.yaml; fi" 81 | volumes: 82 | - ./config:/etc/teleport 83 | 84 | teleport: 85 | image: quay.io/gravitational/teleport:4.3 86 | container_name: teleport 87 | entrypoint: /bin/sh 88 | hostname: dev.the-digital-life.com 89 | command: -c "sleep 1 && /bin/dumb-init teleport start -c /etc/teleport/teleport.yaml" 90 | ports: 91 | - "3023:3023" 92 | - "3024:3024" 93 | - "3025:3025" 94 | - "3080:3080" 95 | volumes: 96 | - ./config:/etc/teleport 97 | - ./data:/var/lib/teleport 98 | depends_on: 99 | - configure 100 | ``` 101 | 102 | Before you start the container, you should change the `hostname` of both containers and set it to the fully-qualified domain name of your server. You can still change it later in the configuration file, but if you set this up right from the beginning it makes things a lot easier. Then start the docker container with the following command. 103 | 104 | ### Start the Teleport Server 105 | 106 | ```bash 107 | docker-compose up -d 108 | ``` 109 | 110 | ### Adjust the Config file 111 | 112 | When we start this compose file, it will automatically create a default configuration and obtain self-signed certificates. Let’s make some adjustments in the configuration file, which is located in `./config/teleport.yaml` 113 | 114 | **Example teleport.yaml**: 115 | ```yml 116 | teleport: 117 | nodename: 118 | data_dir: /var/lib/teleport 119 | auth_token: 120 | auth_servers: 121 | - 127.0.0.1:3025 122 | log: 123 | output: stderr 124 | severity: INFO 125 | ca_pin: sha256:ca-pin-hash-goes-here 126 | auth_service: 127 | enabled: "yes" 128 | listen_addr: 0.0.0.0:3025 129 | public_addr: :3025 130 | tokens: 131 | - proxy,node: 132 | license_file: /path/to/license-if-using-teleport-enterprise.pem 133 | ssh_service: 134 | enabled: "yes" 135 | labels: 136 | db_role: master 137 | db_type: postgres 138 | commands: 139 | - name: hostname 140 | command: [/usr/bin/hostname] 141 | period: 1m0s 142 | - name: arch 143 | command: [/usr/bin/uname, -p] 144 | period: 1h0m0s 145 | proxy_service: 146 | enabled: "yes" 147 | listen_addr: 0.0.0.0:3023 148 | public_addr: 149 | ssh_public_addr: 150 | web_listen_addr: 0.0.0.0:3080 151 | tunnel_listen_addr: 0.0.0.0:3024 152 | ``` 153 | 154 | Make sure you add the `public_addr` and `ssh_public_addr` on the auth_service and proxy_service. Replace the `` with the FQDN of your teleport server or reverse-proxy/load-balancer. 155 | 156 | You should also replace the ca_pin, which you can obtain by executing the following command. 157 | 158 | ```bash 159 | docker-compose exec teleport tctl status 160 | ``` 161 | 162 | After that restart your docker container with the following command. 163 | 164 | ```bash 165 | docker-compose up -d --force-recreate 166 | ``` 167 | 168 | ## How to manage teleport and connect our SSH servers 169 | 170 | Now, we will create a user on the teleport auth server. Every teleport user should also map to a user that exists on your nodes. But you can also assign multiple mappings. 171 | 172 | ```bash 173 | docker-compose exec teleport tctl users add teleport root,xcad,christian,vagrant 174 | ``` 175 | 176 | With this command, I will add a new user called `teleport` who can log in with the Linux users `root`, `xcad`, `christian`, and `vagrant` on the nodes. 177 | 178 | This will create a registration token. With the registration token, we can now set up our credentials on the teleport server. Teleport enforces 2FA by default.
Install a 2FA app like Google Authenticator or Authy on your smartphone and scan the QR code. Then you can simply enter the 2FA code that is generated on your smartphone. 179 | 180 | Now you can simply connect to the docker node with the web interface by accessing `https://:3080` 181 | 182 | ## Add additional SSH servers to teleport 183 | 184 | ### Create a new access token 185 | 186 | To add another node to the proxy server, we need to download and run the teleport client on a server. But first, we need to create a new token on the auth server. 187 | 188 | ```bash 189 | docker-compose exec teleport tctl nodes add 190 | ``` 191 | 192 | ### Install Teleport on your new server 193 | 194 | We could now just execute this command on the node, once teleport is downloaded. But I prefer to add this to a static configuration file that allows me to run teleport as a systemd service. Download the latest version of teleport and install the package on your node. In this example, I download the .deb package and install it on an Ubuntu server. 195 | 196 | ```bash 197 | wget https://get.gravitational.com/teleport_4.3.7_amd64.deb 198 | ``` 199 | 200 | ### Create a new config for your server 201 | 202 | I also create a new configuration file `/etc/teleport.yaml` and add the following values: 203 | 204 | **Example teleport.yml**: 205 | ```yml 206 | teleport: 207 | nodename: teleport-node-1 208 | data_dir: /var/lib/teleport 209 | auth_token: 210 | auth_servers: 211 | - :3025 212 | log: 213 | output: stderr 214 | severity: INFO 215 | ca_pin: 216 | auth_service: 217 | enabled: no 218 | ssh_service: 219 | enabled: yes 220 | proxy_service: 221 | enabled: no 222 | ``` 223 | 224 | *Note: If your node doesn't show up with the correct public IP address in your teleport server, you can manually enforce this by adding the attribute to the `teleport:` section in the configuration file.* 225 | 226 | ```yml 227 | advertise_ip: 228 | ``` 229 | 230 | ### Start the new server 231 | 232 | Now we can simply start the teleport service with the command: 233 | 234 | ```bash 235 | sudo systemctl enable --now teleport 236 | ``` 237 | 238 | If everything works, you should be able to see the new node in the teleport proxy server. 239 | 240 | ## How to add SSH servers through a reverse tunnel to teleport 241 | 242 | I also want to test the reverse tunnel for my home lab servers. The trick is to use a teleport reverse tunnel that establishes a connection from inside the network to the cloud server. With this solution, you’re able to connect all SSH servers behind a NAT. 243 | 244 | To create a reverse tunnel, simply change the port of the auth server to 3080 in the `/etc/teleport.yaml` configuration file.
245 | 246 | ```yml 247 | teleport: 248 | nodename: teleport-node-1 249 | data_dir: /var/lib/teleport 250 | auth_token: 251 | auth_servers: 252 | - :3080 253 | log: 254 | output: stderr 255 | severity: INFO 256 | ca_pin: 257 | auth_service: 258 | enabled: no 259 | ssh_service: 260 | enabled: yes 261 | proxy_service: 262 | enabled: no 263 | ``` -------------------------------------------------------------------------------- /teleport-tutorial/config/teleport.yml: -------------------------------------------------------------------------------- 1 | teleport: 2 | nodename: 3 | data_dir: /var/lib/teleport 4 | auth_token: 5 | auth_servers: 6 | - 127.0.0.1:3025 7 | log: 8 | output: stderr 9 | severity: INFO 10 | ca_pin: sha256:ca-pin-hash-goes-here 11 | auth_service: 12 | enabled: "yes" 13 | listen_addr: 0.0.0.0:3025 14 | public_addr: :3025 15 | tokens: 16 | - proxy,node: 17 | license_file: /path/to/license-if-using-teleport-enterprise.pem 18 | ssh_service: 19 | enabled: "yes" 20 | labels: 21 | db_role: master 22 | db_type: postgres 23 | commands: 24 | - name: hostname 25 | command: [/usr/bin/hostname] 26 | period: 1m0s 27 | - name: arch 28 | command: [/usr/bin/uname, -p] 29 | period: 1h0m0s 30 | proxy_service: 31 | enabled: "yes" 32 | listen_addr: 0.0.0.0:3023 33 | public_addr: 34 | ssh_public_addr: 35 | web_listen_addr: 0.0.0.0:3080 36 | tunnel_listen_addr: 0.0.0.0:3024 -------------------------------------------------------------------------------- /teleport-tutorial/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '2' 2 | 3 | services: 4 | configure: 5 | image: quay.io/gravitational/teleport:4.3 6 | container_name: teleport-configure 7 | entrypoint: /bin/sh 8 | hostname: dev.the-digital-life.com 9 | command: -c "if [ ! -f /etc/teleport/teleport.yaml ]; then teleport configure > /etc/teleport/teleport.yaml; fi" 10 | volumes: 11 | - ./config:/etc/teleport 12 | 13 | teleport: 14 | image: quay.io/gravitational/teleport:4.3 15 | container_name: teleport 16 | entrypoint: /bin/sh 17 | hostname: dev.the-digital-life.com 18 | command: -c "sleep 1 && /bin/dumb-init teleport start -c /etc/teleport/teleport.yaml" 19 | ports: 20 | - "3023:3023" 21 | - "3024:3024" 22 | - "3025:3025" 23 | - "3080:3080" 24 | volumes: 25 | - ./config:/etc/teleport 26 | - ./data:/var/lib/teleport 27 | depends_on: 28 | - configure -------------------------------------------------------------------------------- /truenasscale-traefik/README.md: -------------------------------------------------------------------------------- 1 | # Secure HTTPS for all your TrueNAS Scale Apps (traefik) 2 | 3 | In this Tutorial we install Traefik, a free and open-source load balancer, on TrueNAS Scale. We will cover the creation of trusted SSL Certificates via Letsencrypt and Cloudflare DNS, and also how to securely expose an application like Homeassistant. #TrueNAS #Traefik #HomeLab 4 | 5 | Project Homepage: https://www.truenas.com/ 6 | 7 | Video: https://youtu.be/QSMgfz5zrxo 8 | 9 | --- 10 | ## Prerequisites 11 | 12 | TrueNAS Scale with TrueCharts 13 | 14 | For further References, how to deploy **TrueNAS Scale** with **TrueCharts**, check out my previous video: 15 | - [TrueNAS Scale the ULTIMATE Home Server? 
Docker, Kubernetes, Apps](https://youtu.be/LJY9KBbL4j0) 16 | - [How to create a valid self signed SSL Certificate?](https://youtu.be/VH4gXcvkmOY) 17 | 18 | --- 19 | ## Setting up an ip address 20 | 21 | When using Traefik on TrueNAS Scale, it can be useful to add an alias IP address, since the ports `80` and `443` are already used by the default TrueNAS Scale web UI. The alias IP can then be used as the primary IP address for the Kubernetes load balancers, so that Traefik can use ports `80` and `443`. 22 | 23 | IP Addresses are managed under **Network -> Interfaces**. 24 | 25 | ### Add an alias ip address to the primary interface 26 | 27 | Under **IP Addresses**, add an **Alias**. 28 | 29 | ### Change kubernetes ip address 30 | 31 | Under **Apps -> Settings -> Advanced Settings**, change the **Node IP** to your alias IP address. 32 | 33 | All future load balancers deployed in Kubernetes will use this new **Node IP**. 34 | 35 | ### Change the TrueNAS Scale GUI listening ip addresses 36 | 37 | Under **System Settings -> General -> GUI -> Settings**, change the **Web Interface IPv4 Address** to include only the IP addresses the GUI should listen on. *Exclude the alias IP address you used for the Node IP in the App Settings.* 38 | 39 | ## Setting up certificates 40 | 41 | When using Traefik, you should issue trusted SSL certificates to be used by the load balancers. This can be a **wildcard** certificate that is valid for all subdomains and all future applications you want to expose. 42 | 43 | Both ACME and self-signed certificates will work. 44 | 45 | Certificates are managed under **Credentials -> Certificates**. 46 | 47 | ### Option 1: Add a new self-signed certificate 48 | 49 | Under **Certificates**, add a **Certificate**. 50 | 51 | 1. Select **Import Certificate**. 52 | 2. Upload your **Certificate**, and **Private Key**. 53 | 54 | ### Option 2: Issue a new certificate via ACME 55 | 56 | Under **ACME DNS-Authenticators**, add a **DNS Authenticator**. 57 | 58 | 1. Select your DNS Provider 59 | 2. Add your DNS Provider's Credentials, such as Email, API Key, API Secret, or API Token 60 | 61 | Under **Certificate Signing Requests**, add a **Certificate Signing Request**. 62 | 63 | 1. Select Certificate Signing Request 64 | 2. Choose your Certificate Options (RSA, or EC) 65 | 3. Enter your Certificate Subject Details 66 | 4. Enter your Subject Alternative Names according to your hostname, and domain. 67 | 5. (Optional) you can also add a wildcard in here (`*.local`, or `*.domain.tld`), ***but don't solely use a wildcard!*** 68 | 69 | Under **Certificate Signing Requests**, select your **Certificate Signing Request** and create an **ACME Certificate**. 70 | 71 | 1. Accept **Terms of Service** and choose your **Renew Certificate Days**. 72 | 2. Select **Production**, or **Staging** in **ACME Server Directory URI**. 73 | 3. Add your **Domain**, and select your **Authenticator**. 74 | 75 | ## Install Traefik 76 | 77 | Applications are managed under **Apps -> Available Applications**. 78 | 79 | *This tutorial needs additional TrueNAS Scale Charts from the Community Repo TrueCharts. Follow the instructions described on the official website of [TrueCharts](https://truecharts.org), to add the Community Repo to your TrueNAS Scale Charts.* 80 | 81 | ### Deploy Traefik as a new application 82 | 83 | 1. Create a new application using the Traefik TrueCharts Deployment. 84 | 2. Select your **desired replicas**, and the correct **timezone**. 85 | 3. Enable **ingressClass**, and **isDefaultClass**. 86 | 4.
(Optional) change the **Main Entrypoints**, **Service Type** to **ClusterIP** to protect the web interface of Traefik. 87 | 5. Change the **web Entrypoints**, **Entrypoint Port** to `80`. 88 | 6. Change the **websecure Entrypoints**, **Entrypoint Port** to `443`. 89 | 7. (Optional) change other settings according to your environment. 90 | 91 | ## Deploy a test application 92 | 93 | Applications are managed under **Apps -> Available Applications**. 94 | 95 | *Make sure, you have a dns record that's pointing to the primary node ip address of your TrueNAS Scale server, as described above.* 96 | 97 | ### Deploy homeassistant as a test 98 | 99 | 1. Create a new application using the Homeassistant TrueCharts Deployment. 100 | 2. In the **Network and Service** Settings, change the **Service Type** to `ClusterIP`. 101 | 3. In the **Ingress** Settings, enable **Ingress**. 102 | 4. In the **Ingress** Settings, add your desired **Hostname**, and **Paths**. 103 | 5. In the **Ingress** Settings, configure your desired **TLS Settings**, and the corresponding **Hostname**, and select the created **TrueNAS Scale Certifcate**. 104 | 105 | Check if you can reach your app. 106 | 107 | --- 108 | ## References 109 | 110 | - [TrueCharts ](https://truecharts.org) 111 | -------------------------------------------------------------------------------- /vagrant-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # Automated virtual machine deployment with Vagrant 2 | In this tutorial, I will show you how to easily automate the deployment of a virtual machine with Vagrant running on Windows 10 with Hyper-V 3 | 4 | We will use the free and open-source software Vagrant by Hashicorp. 5 | 6 | Project Homepage: https://www.vagrantup.com/ 7 | Documentation: https://www.vagrantup.com/docs 8 | Find Boxes: https://app.vagrantup.com/boxes/search 9 | 10 | Video: https://youtu.be/sr9pUpSAexE 11 | 12 | ## Prerequisites 13 | 14 | - Windows 10 with Microsoft Hyper-V enabled 15 | 16 | *You can still use Vagrant Docker on Windows, Linux, or macOS using different Hypervisors like Virtualbox, ..., however, this may require different commands!* 17 | 18 | ## Installation and Configuration 19 | 20 | ### Download and Install Vagrant 21 | 22 | Download Vagrant from the official Homepage https://www.vagrantup.com/downloads and follow the installation instructions. 23 | 24 | ### Set up a virtual machine deployment 25 | 26 | Let's set up our first virtual machine deployment with Vagrant. You should create a new directory with enough disk space because Vagrant will store the configuration and virtual hard disk in this location. Then, we will initialize a new Vagrantfile with the following command. 27 | 28 | ```powershell 29 | vagrant init 30 | ``` 31 | 32 | This will create a new Vagrantfile, we can easily inspect and customize. 33 | 34 | ### Modify the Vagrantfile 35 | 36 | First, we need to change the configuration file to use a specific box. In this tutorial, we're using the `hashicorp/bionic64` box as a quick example. We edit the Vagrantfile and change the box with this line. 37 | 38 | ```vagrant 39 | ... 40 | config.vm.box = "hashicorp/bionic64" 41 | ... 42 | ``` 43 | 44 | ### (Optional) Running Vagrant on Windows 10 45 | 46 | If you're running Hyper-V on Windows 10, Microsoft recommends adding the following entries according to [this official Microsoft Blog-Post](https://docs.microsoft.com/en-us/virtualization/community/team-blog/2017/20170706-vagrant-and-hyper-v-tips-and-tricks). 
47 | 48 | ```vagrant 49 | ... 50 | config.vm.provider "hyperv" 51 | config.vm.synced_folder ".", "/vagrant", disabled: true 52 | 53 | config.vm.provider "hyperv" do |h| 54 | h.enable_virtualization_extensions = true 55 | h.linked_clone = true 56 | end 57 | ... 58 | ``` 59 | 60 | Note: If you have an AMD processor like me, setting `enable_virtualization_extensions = true` will fail, because of the missing nested virtualization support in Windows 10! If this is the case, simply change it to false. 61 | 62 | ### Start the virtual machine 63 | 64 | Now, we can simply start creating and provisioning your virtual machine via the console. To start a virtual machine with Vagrant execute this command in the console. 65 | 66 | ```powershell 67 | vagrant up 68 | ``` 69 | 70 | If you're running this for the first time, it should automatically download the box image and ask you to generate an SSH key. Confirm this and continue with this tutorial. 71 | 72 | After the machine is booted and ready we can now connect via SSH to the virtual machine. 73 | 74 | ```powershell 75 | vagrant ssh 76 | ``` 77 | 78 | If you have made changes to the Vagrantfile, you need to stop and reboot the machine again with this command. 79 | 80 | ```powershell 81 | vagrant reload 82 | ``` 83 | 84 | If you want to delete a virtual machine, simply type this command. Note that this will delete the virtual disks and the config on your hypervisor, but not the Vagrantfile or the folder. 85 | 86 | ```powershell 87 | vagrant destroy 88 | ``` 89 | 90 | ### Explore Vagrant Boxes 91 | 92 | Probably, you want to install other Operating Systems like Windows Servers, Windows 10 or also other Linux distributions as well. You can find all those different boxes on the Vagrant box catalog. Don't forget to filter the results for your desired provider. Otherwise, the results may not work on your hypervisor. 93 | 94 | For example, I've also tested the `generic/ubuntu2004` box. This contains an Ubuntu 20.04 LTS image. If you want to create a second virtual machine, simply create a new folder. Note, you don't need to init a new Vagrantfile with the `vagrant init` command. You can also copy the Vagrantfile of your first virtual machine and use this as a template. 95 | 96 | If you start the virtual machine, Vagrant will automatically download the box and store this on your computer. That means, you don't need to download the box image again. If you want to check what boxes are currently stored on your computer, use this command. 97 | 98 | ```powershell 99 | vagrant box list 100 | ``` 101 | 102 | You can also manage boxes, for example remove them to clean up your system with this command. 103 | 104 | ```powershell 105 | vagrant box remove 106 | ``` 107 | 108 | ### Manage virtual machines with vagrant 109 | 110 | Check what virtual machines are currently running with Vagrant using this command. This can be executed from any directory. 111 | 112 | ```powershell 113 | vagrant global-status 114 | ``` 115 | 116 | This will probably show orphaned records as well. This is because Vagrant is caching the data and this may not be fully up to date.
To clear the cache and remove invalid entries, execute this command. 117 | 118 | ```powershell 119 | vagrant global-status --prune 120 | ``` 121 | -------------------------------------------------------------------------------- /watchtower-tutorial/README.md: -------------------------------------------------------------------------------- 1 | # How To Update Docker Container automatically with nearly zero downtime 2 | How to update docker container automatically with Watchtower. Scheduled updates without downtime for your entire docker environment. 3 | 4 | We will use the free and open-source software Watchtower. 5 | 6 | Project Homepage: https://containrrr.dev/watchtower/ 7 | 8 | Video: https://youtu.be/5lP_pdjcVMo 9 | 10 | ## Prerequisites 11 | 12 | - Linux Server running Docker 13 | 14 | ## Run Watchtower 15 | 16 | Watchtower can be easily deployed by executing a simple docker run command. 17 | 18 | ```bash 19 | docker run --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower 20 | ``` 21 | 22 | ## Run Watchtower in debug mode 23 | 24 | You might wonder why there is no log output apart from the welcome message. If you want to increase the logging level of Watchtower, you simply add an argument. 25 | 26 | ```bash 27 | docker run --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --debug 28 | ``` 29 | 30 | ## Run Watchtower only once, in debug mode 31 | 32 | ```bash 33 | docker run --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once --debug 34 | ``` 35 | 36 | ## Exclude Container from Watchtower 37 | 38 | ```bash 39 | docker run -d --label=com.centurylinklabs.watchtower.enable=false nginx 40 | ``` 41 | 42 | ## Scheduled Updates and clean up old images 43 | 44 | ```bash 45 | docker run --name watchtower -v /var/run/docker.sock:/var/run/docker.sock --restart unless-stopped containrrr/watchtower --schedule "0 0 4 * * *" --debug --cleanup 46 | ``` 47 | -------------------------------------------------------------------------------- /webserver-on-linux/README.md: -------------------------------------------------------------------------------- 1 | # Install a webserver on Linux in 15 minutes 2 | Let's install a containerized web server stack on Linux! That will also deploy an intrusion prevention system with fail2ban, obtain trusted https certificates and a database server to install a WordPress blog. 3 | 4 | We will use the free and open-source software Nginx. 5 | 6 | Project Homepage: 7 | Documentation: 8 | 9 | Video: https://youtu.be/7GTYB8RVYBc 10 | 11 | ## Prerequisites 12 | 13 | - Linux Server running Ubuntu 20.04 LTS or newer 14 | - Domain that points to the public IP of your Linux Server 15 | 16 | *For installing Docker on other Linux distributions or different versions than Ubuntu 20.04 LTS, follow the [official installation instructions](https://docs.docker.com/install/).* 17 | 18 | ## Install Docker, and Docker-Compose 19 | 20 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands!
21 | 22 | ### Install Docker 23 | ```bash 24 | sudo apt update 25 | 26 | sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common 27 | 28 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 29 | 30 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 31 | 32 | sudo apt update 33 | 34 | sudo apt-get install docker-ce docker-ce-cli containerd.io 35 | ``` 36 | 37 | ### Check if Docker is installed correctly 38 | ```bash 39 | sudo docker run hello-world 40 | ``` 41 | 42 | ### Install Docker-Compose 43 | 44 | Download the latest version (in this case it is 1.25.5, this may change whenever you read this tutorial!) 45 | 46 | ```bash 47 | sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 48 | 49 | sudo chmod +x /usr/local/bin/docker-compose 50 | ``` 51 | 52 | ### Check if Docker-Compose is installed correctly 53 | ```bash 54 | sudo docker-compose --version 55 | ``` 56 | 57 | ### (Optional) Add your linux user to the `docker` group 58 | ```bash 59 | sudo usermod -aG docker $USER 60 | ``` 61 | 62 | ## Set up an Nginx Webserver 63 | 64 | ### Create a new file `docker-compose.yml`file, please refer to the linuxserver/swag documentation: https://hub.docker.com/r/linuxserver/swag. 65 | 66 | **Example Docker-Compose File**: 67 | ```yml 68 | version: "2" 69 | services: 70 | 71 | swag: 72 | image: linuxserver/swag 73 | container_name: swag 74 | cap_add: 75 | - NET_ADMIN 76 | environment: 77 | - PUID=1001 78 | - PGID=1001 79 | - TZ=Europe/Berlin 80 | - URL=do-test-1.the-digital-life.com 81 | - SUBDOMAINS=www 82 | - VALIDATION=http 83 | volumes: 84 | - /opt/webserver_swag/config:/config 85 | ports: 86 | - 443:443 87 | - 80:80 #optional 88 | restart: unless-stopped 89 | ``` 90 | 91 | ### To start your webserver just execute the following command, while you're in the /opt/webserver_swag folder. 92 | 93 | ```bash 94 | docker-compose up -d 95 | ``` 96 | 97 | ## Add a database to the webserver 98 | 99 | ### Update the Docker-Compose file 100 | 101 | You can update your existing Docker-Compose template, if you want to add a database to your webserver. Because you often need a MySQL database, for example to deploy a WordPress Blog. 
102 | 103 | **Example Docker-Compose File**: 104 | ```yml 105 | version: "2" 106 | services: 107 | 108 | mariadb: 109 | image: linuxserver/mariadb 110 | container_name: mariadb 111 | environment: 112 | - PUID=1001 113 | - PGID=1001 114 | - MYSQL_ROOT_PASSWORD=mariadbpassword 115 | - TZ=Europe/Berlin 116 | - MYSQL_DATABASE=WP_database 117 | - MYSQL_USER=WP_dbuser 118 | - MYSQL_PASSWORD=WP_dbpassword 119 | volumes: 120 | - /opt/webserver_swag/config/mariadb:/config 121 | restart: unless-stopped 122 | 123 | swag: 124 | image: linuxserver/swag 125 | container_name: swag 126 | cap_add: 127 | - NET_ADMIN 128 | environment: 129 | - PUID=1001 130 | - PGID=1001 131 | - TZ=Europe/Berlin 132 | - URL=do-test-1.the-digital-life.com 133 | - SUBDOMAINS= 134 | - VALIDATION=http 135 | volumes: 136 | - /opt/webserver_swag/config:/config 137 | ports: 138 | - 443:443 139 | - 80:80 #optional 140 | depends_on: 141 | - mariadb 142 | restart: unless-stopped 143 | ``` -------------------------------------------------------------------------------- /webserver-on-linux/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "2" 2 | services: 3 | 4 | mariadb: 5 | image: linuxserver/mariadb 6 | container_name: mariadb 7 | environment: 8 | - PUID=1001 9 | - PGID=1001 10 | - MYSQL_ROOT_PASSWORD=mariadbpassword 11 | - TZ=Europe/Berlin 12 | - MYSQL_DATABASE=WP_database 13 | - MYSQL_USER=WP_dbuser 14 | - MYSQL_PASSWORD=WP_dbpassword 15 | volumes: 16 | - /opt/webserver_swag/config/mariadb:/config 17 | restart: unless-stopped 18 | 19 | swag: 20 | image: linuxserver/swag 21 | container_name: swag 22 | cap_add: 23 | - NET_ADMIN 24 | environment: 25 | - PUID=1001 26 | - PGID=1001 27 | - TZ=Europe/Berlin 28 | - URL=do-test-1.the-digital-life.com 29 | - SUBDOMAINS= 30 | - VALIDATION=http 31 | volumes: 32 | - /opt/webserver_swag/config:/config 33 | ports: 34 | - 443:443 35 | - 80:80 #optional 36 | depends_on: 37 | - mariadb 38 | restart: unless-stopped -------------------------------------------------------------------------------- /windows-terminal-powerlevel10k/README.md: -------------------------------------------------------------------------------- 1 | # Make your WSL or WSL2 terminal awesome - with Windows Terminal, zsh, oh-my-zsh and Powerlevel10k 2 | 3 | In this video, I will show you how you make your WSL or WSL2 terminal just awesome! We will install and configure Windows Terminal, zsh, oh-my-zsh, and Powerlevel10k theme. 4 | 5 | Video: https://youtu.be/235G6X5EAvM 6 | 7 | ## Prerequisites 8 | 9 | - Windows Subsystem for Linux, running on Windows 10 or newer 10 | 11 | ## Installation and Configuration 12 | 13 | ### Install Windows Terminal 14 | 15 | https://docs.microsoft.com/en-us/windows/terminal/install 16 | 17 | ### Install Nerd Fonts 18 | 19 | To make Windows Terminal able to render the icons we're using in **zsh** later, we need to install the **Nerd-Fonts** on Windows. First, go to the Nerd-Font homepage and select a Font you like. Note that not all of them work well with all zsh themes, you may need to try out different ones. Fonts that work for me are **Anonymice Nerd Font**, **Droid Sans Mono Nerd Font**, and **Hack Nerd Font**. Then, extract the archive and install all of the `.otf` Font files. 20 | 21 | https://www.nerdfonts.com/ 22 | 23 | ### Install zsh shell in WSL / WSL2 24 | 25 | Now we need to install the **zsh** shell in our wsl or wsl2. You can easily install it in the Ubuntu wsl by using the commands below. 
If you're using a different Linux distribution, you may check out the zsh documentation or your package a documentation. 26 | 27 | We will also install **oh-my-zsh** which is a nice configuration extension to the **zsh** shell. That will allow us to easily customize anything, install a theme, and add plugins later. 28 | 29 | ```bash 30 | sudo apt update 31 | 32 | sudo apt install zsh -y 33 | 34 | sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" 35 | ``` 36 | 37 | ### Install powerlevel10k zsh theme 38 | 39 | Next, we will install the powerlevel10k theme, which really looks nice and offers great customization features. It also has a good configuration wizard that makes it extremely easy to set up the theme for your favorite design. 40 | 41 | ```bash 42 | git clone --depth=1 https://github.com/romkatv/powerlevel10k.git ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/themes/powerlevel10k 43 | ``` 44 | 45 | To activate the theme you need to edit your `~/.zshrc` file in your personal folder and replace `ZSH_THEME="robbyrussel` with `ZSH_THEME="powerlevel10k/powerlevel10k`. After the change, you need to close and restart your terminal. 46 | 47 | ### Change Windows Terminal settings to use Nerd-Fonts 48 | 49 | Because we want Windows Terminal to be able to render the icons in the powerlevel10k theme correctly, we need to change the Windows Terminal configuration to use the Nerd-Font we've downloaded before. Click on **Settings** in the Windows Terminal menu and edit the `settings.json` file with your favorite text editor. 50 | 51 | Find your wsl or wsl2 profile and add the line `"fontFace": ""`. 52 | 53 | ### (Optional) How to install or enable plugins in zsh 54 | 55 | **Example: Auto-suggestion plugin** 56 | 57 | I found this nice auto-suggestion plugin for the **zsh** shell. Above all, it's really nice and helps me a lot when working with the Linux terminal. It will suggest you auto-completes based on your command history. 58 | 59 | ```bash 60 | git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions 61 | ``` 62 | 63 | To enable the auto-suggestion plugin or any other plugins in **zsh**, edit your `~/.zshrc` file in your personal folder. Simply change the default line `plugins=(git)` to `plugins=(git zsh-autosuggestions )`. Of course, replace the `` with any other plugins you want to enable. -------------------------------------------------------------------------------- /wireguard-docker/README.md: -------------------------------------------------------------------------------- 1 | # WireGuard installation and configuration - in Docker 2 | In this tutorial, I will show you how to easily create your own private VPN server with WireGuard running in a Docker container. I will walk you step by step through the installation, configuration, and how to add clients to your VPN server. 3 | 4 | We will use the free and open-source VPN protocol WireGuard 5 | 6 | Project Homepage: https://www.wireguard.com/ 7 | 8 | Video: https://youtu.be/GZRTnP4lyuo 9 | 10 | ## Prerequisites 11 | 12 | - Linux Server running Ubuntu 20.04 LTS or newer 13 | 14 | *For installing Docker on other Linux distriubtions or different versions than Ubuntu 20.04 LTS, follow the [official installation instructions](https://docs.docker.com/install/).* 15 | 16 | ## Install Docker, and Docker-Compose 17 | 18 | You can still install Docker on a Linux Server that is not running Ubuntu, however, this may require different commands! 
19 | 20 | ### Install Docker 21 | ```bash 22 | sudo apt update 23 | 24 | sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common 25 | 26 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 27 | 28 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 29 | 30 | sudo apt update 31 | 32 | sudo apt-get install docker-ce docker-ce-cli containerd.io 33 | ``` 34 | 35 | ### Check if Docker is installed correctly 36 | ```bash 37 | sudo docker run hello-world 38 | ``` 39 | 40 | ### Install Docker-Compose 41 | 42 | Download the latest version (in this case it is 1.25.5, this may change whenever you read this tutorial!) 43 | 44 | ```bash 45 | sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose 46 | 47 | sudo chmod +x /usr/local/bin/docker-compose 48 | ``` 49 | 50 | ### Check if Docker-Compose is installed correctly 51 | ```bash 52 | sudo docker-compose --version 53 | ``` 54 | 55 | ### (Optional) Add your linux user to the `docker` group 56 | ```bash 57 | sudo usermod -aG docker $USER 58 | ``` 59 | 60 | ## Set up Wireguard in Docker 61 | 62 | ### Create a new Docker-Compose file 63 | 64 | Create a new folder in the `/opt` directory. 65 | 66 | You can also use your personal home folder `/home/`, this may require different permissions. 67 | 68 | Create a new file `docker-compose.yml`file, please refer to the linuxserver/wireguard documentation: https://hub.docker.com/r/linuxserver/wireguard. 69 | 70 | Replace the `` with the public IP address of your WireGuard Server, because your clients will need to connect from outside your local network. You can also set this to auto, the docker container will automatically determine your public IP address and use this in the client's configuration. 71 | 72 | ```yml 73 | version: "2.1" 74 | services: 75 | wireguard: 76 | image: linuxserver/wireguard 77 | container_name: wireguard 78 | cap_add: 79 | - NET_ADMIN 80 | - SYS_MODULE 81 | environment: 82 | - PUID=1000 83 | - PGID=1000 84 | - TZ=Europe/London 85 | - SERVERURL= #optional 86 | - SERVERPORT=51820 #optional 87 | - PEERS=1 #optional 88 | - PEERDNS=auto #optional 89 | - INTERNAL_SUBNET=10.13.13.0 #optional 90 | volumes: 91 | - /opt/wireguard-server/config:/config 92 | - /lib/modules:/lib/modules 93 | ports: 94 | - 51820:51820/udp 95 | sysctls: 96 | - net.ipv4.conf.all.src_valid_mark=1 97 | restart: unless-stopped 98 | ``` 99 | 100 | ### Start the WireGuard Server 101 | ```bash 102 | docker-compose up -d 103 | ``` 104 | 105 | ### Distribute the config files to clients 106 | 107 | You could also use the docker image for your clients. But I think it's more practical for a client to install WireGuard directly on the host OS. If you want to know how to do that, you can also refer to my article about WireGuard installation and configuration on Linux. 108 | 109 | When you have started the WireGuard container, it should automatically create all configuration files in your `./config` folder. All you need to do is to copy the corresponding `./config/peer1/peer1.conf` file to your client and use that as your `wg0.conf`, for instance. 
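A rough sketch of that copy step, assuming the host path `/opt/wireguard-server/config` from the compose file above, a Linux client with the WireGuard tools installed, and placeholder SSH user/host values:

```bash
# copy the generated peer config from the Docker host to the client (adjust user, host, and paths)
scp user@server:/opt/wireguard-server/config/peer1/peer1.conf ~/peer1.conf

# install it as wg0.conf and bring the tunnel up on the client
sudo mv ~/peer1.conf /etc/wireguard/wg0.conf
sudo wg-quick up wg0
```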
If you want to connect mobile phones you can also just scan the peer1.png QR code, to print the QR code to the console, simply use the following command: 110 | 111 | ```bash 112 | docker exec -it wireguard /app/show-peer 113 | ``` 114 | 115 | ### (Optional) Add additional clients 116 | 117 | If you want to add additional clients, you simply can increase the PEERS parameter in the docker-compose.yaml file. After changing this value you need to restart your docker container with the `--force-recreate` parameter. 118 | 119 | ```bash 120 | docker-compose up -d --force-recreate 121 | ``` 122 | -------------------------------------------------------------------------------- /wireguard-on-linux/README.md: -------------------------------------------------------------------------------- 1 | # WireGuard installation and configuration - on Linux 2 | Let me show you, how to install and configure a basic VPN connection with WireGuard on a Linux server and client. We will also have a look at some advanced configuration settings like keep-alive and traffic routing. 3 | 4 | We will use the free and open-source VPN protocol WireGuard 5 | 6 | Project Homepage: https://www.wireguard.com/ 7 | 8 | Video: https://youtu.be/bVKNSf1p1d0 9 | 10 | ## Prerequisites 11 | 12 | - Linux Server running Ubuntu 20.04 LTS or newer 13 | 14 | *For installing WireGuard on other Linux distriubtions or different versions than Ubuntu 20.04 LTS, follow the [official installation instructions](https://www.wireguard.com/install/).* 15 | 16 | ## Installation and Configuration 17 | 18 | ### Install WireGuard 19 | 20 | To install WireGuard on Ubuntu 20.04 LTS we need to execute the following commands on the Server and Client. 21 | 22 | ```bash 23 | sudo apt install wireguard 24 | ``` 25 | 26 | ### Create a private and public key on Server & Client 27 | 28 | Before we can establish a secure tunnel with WireGuard we need to create a private and public key on both, Server and Client first. WireGuard comes with a simple tool that can easily generate these keys. Execute this on the Server and Client. 29 | 30 | ```bash 31 | wg genkey | tee privatekey | wg pubkey > publickey 32 | ``` 33 | 34 | Be aware, you ***MUST NOT SHARE*** the private key with anyone! Make sure to store it in a secure way on both devices. 35 | 36 | ### Configure the Server 37 | 38 | Now you can configure the server, just add a new file called `/etc/wireguard/wg0.conf`. Insert the following configuration lines and replace the `` placeholder with the previously generated private key. 39 | 40 | You need to insert a private IP address for the `` that doesn't interfere with another subnet. Next, replace the `` with your interface the server should listen on for incoming connections. 41 | 42 | ```conf 43 | [Interface] 44 | PrivateKey= 45 | Address=/ 46 | SaveConfig=true 47 | PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o -j MASQUERADE; 48 | PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o -j MASQUERADE; 49 | ListenPort = 51820 50 | ``` 51 | 52 | ### Configure the Client 53 | 54 | Now, we need to configure the client. Create a new file called `/etc/wireguard/wg0.conf`. Insert the following configuration lines and replace the `` placeholder with the previously generated private key. 55 | 56 | You need to insert a private IP address for the `` in the same subnet like the server's IP address. Next, replace the `` with the generated servers public key. 
And also replace `` with the IP address where the server listens for incoming connections. 57 | 58 | *Note that if you set the AllowedIPs to `0.0.0.0/0` the client will route ALL traffic through the VPN tunnel. That means even when the client accesses the public internet, the traffic will break out on the server side. If you don't want to route all traffic through the tunnel, you need to replace this with the target IP addresses or networks.* 59 | 60 | ```conf 61 | [Interface] 62 | PrivateKey = 63 | Address = / 64 | SaveConfig = true 65 | 66 | [Peer] 67 | PublicKey = 68 | Endpoint = :51820 69 | AllowedIPs = 0.0.0.0/0 70 | ``` 71 | 72 | Once you have created the configuration file, you need to enable the wg0 interface with the following command. 73 | 74 | ```bash 75 | wg-quick up wg0 76 | ``` 77 | 78 | You can check the status of the connection with this command. 79 | 80 | ```bash 81 | wg 82 | ``` 83 | 84 | ### Add Client to the Server 85 | 86 | Next, you need to add the client to the server configuration file. Otherwise, the tunnel will not be established. Replace the `` with the client's generated public key and the `` with the client's IP address on the wg0 interface. 87 | 88 | ```bash 89 | wg set wg0 peer allowed-ips /32 90 | ``` 91 | 92 | Now you can enable the wg0 interface on the server. 93 | 94 | ```bash 95 | wg-quick up wg0 96 | ``` 97 | 98 | ```bash 99 | wg 100 | ``` --------------------------------------------------------------------------------