├── README-first-draft.md ├── README-systemd.md ├── README-uninstall-k3s.md ├── README-volumes.md ├── README.md ├── k3s.service-containerd ├── k3s.service-docker └── nginx-deployment ├── README.md ├── nginx-all-with-pvc.yml ├── nginx-deployment.yml ├── nginx-ingress.yml └── nginx-service.yml /README-first-draft.md:
--------------------------------------------------------------------------------
1 | # k3s-getting-started
2 |
3 | Getting started with k3s - 5 less than k8s
4 |
5 | This is a quick installation guide for [k3s](https://k3s.io) covering the setup of `kubectl` and a small `nginx` deployment to get you started.
6 |
7 | ## What is k3s
8 |
9 | k3s is a stripped-down build of the official Kubernetes source: the integrations for the Amazon, Google and Azure hosting centers have been removed, along with some of the subsystems (drivers) for Volumes etc., so you can run a fully functional Kubernetes cluster at home or in remote places on small devices like a Raspberry Pi - and still use the same tools you would use to manage any Kubernetes cluster.
10 |
11 | k3s is also a great alternative to [minikube](https://kubernetes.io/docs/setup/learning-environment/minikube/) and similar tools that run the full-blown Kubernetes and hence have its requirements.
12 |
13 | Read more about `k3s` at [k3s](https://k3s.io)
14 |
15 | This guide is split into 4 sections
16 |
17 | - installing `k3s` and making it ready for use
18 | - setting up k3s as a systemd service
19 | - setting up `kubectl` to interact with `k3s`
20 | - deploying a simple `nginx` service
21 |
22 | ## Installation requirements (Linux description only)
23 |
24 | To install k3s make sure you have the following
25 |
26 | 1. A fully functioning x86_64 GNU/Linux installation (server or desktop will do)
27 | 2. Approx 500MB of free RAM at your disposal
28 | 3. A few GB of free disk space (Docker images take up space)
29 | 4. Root access (you need root to run k3s)
30 | 5. 
Internet access
31 |
32 | ## Install k3s (Linux description only)
33 |
34 | ### Step 1: Download the binary and place it in the correct location
35 |
36 | Go to [https://github.com/rancher/k3s/releases](https://github.com/rancher/k3s/releases) and download the latest stable version (i.e. not one ending in `rc1`, `beta`, etc.).
37 | There are a few binaries, but you only need the one named `k3s`.
38 |
39 | Download it to your laptop, e.g. for k3s version 0.8.1:
40 |
41 | ```bash
42 | wget https://github.com/rancher/k3s/releases/download/v0.8.1/k3s
43 | ```
44 |
45 | Make it executable and place it in /usr/local/bin/
46 |
47 | ```bash
48 | chmod +x k3s
49 | sudo mv k3s /usr/local/bin/
50 | ```
51 |
52 | ### Step 2: Decide if you want to use `Docker` or `containerd`
53 |
54 | If you already have [Docker](https://www.docker.com) installed, then go with `Step 3` below to make k3s utilize your existing Docker installation.
55 |
56 | If you don't have Docker installed and don't plan to install it either, then go with `Step 4`, which will utilize `containerd` - the core of [Docker](https://www.docker.com), but not Docker itself.
57 |
58 | ### Step 3 and Step 4: Setup k3s as a systemd service
59 |
60 | Read the [systemd integration](README-systemd.md) guide for the systemd service setup - it covers both the **Docker** (Step 3) and **containerd** (Step 4) variants.
61 |
62 | ### Step 5: Using the k3s service
63 |
64 | You can now start and stop the k3s service as you please.
65 |
66 | ## Start and stop `k3s`
67 |
68 |
69 | ### Start k3s
70 |
71 | ```bash
72 | sudo systemctl start k3s
73 | ```
74 |
75 | ### Stop k3s
76 |
77 | ```bash
78 | sudo systemctl stop k3s
79 | ```
80 |
81 | ## Running k3s as a boot service
82 |
83 | If you've followed the above installation steps and are able to start/stop k3s using the above `systemctl` commands, then you can enable (or disable) it as a boot service so it will start when you boot your server/desktop or your remote _Raspberry Pi_ out in the shed... 
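If you are unsure whether the unit is already enabled, you can ask systemd first. A small sketch (it prints a fallback on machines where systemd or the k3s unit is absent):

```shell
# Report whether the k3s unit is set to start at boot.
# Prints "not installed" if systemd or the unit is missing.
state=$(systemctl is-enabled k3s 2>/dev/null) || state="not installed"
echo "k3s boot service: $state"
```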
84 |
85 | **To enable as boot service**
86 |
87 | ```bash
88 | sudo systemctl enable k3s
89 | ```
90 |
91 | **To disable as boot service**
92 |
93 | ```bash
94 | sudo systemctl disable k3s
95 | ```
96 |
97 | Note: you'll still be able to manually stop, start and restart the k3s service while it is enabled as a boot service, using the `sudo systemctl [start|stop|restart] k3s` command(s).
98 |
99 | ## First run after installation
100 |
101 | After installing k3s as a systemd service as described above, start it using the command
102 |
103 | ```bash
104 | sudo systemctl start k3s
105 | ```
106 |
107 | The first start takes a little while as k3s needs to
108 |
109 | - Unpack some files
110 | - Create some certificates (this requires CPU)
111 | - Initiate the core services
112 |
113 | ## Setting up the `kubectl` command
114 |
115 | The k3s deployment comes with its own built-in `kubectl` command which you can either
116 |
117 | - use directly on every command (cumbersome)
118 | - set up as an alias (better)
119 | - install the full `kubectl` client (best)
120 |
121 | ### Using the built-in `kubectl` command (cumbersome)
122 |
123 | This section will help you set up `kubectl`, which can be done in several ways. Choose one; whenever `kubectl` is referenced later it will be the one you chose - and it will also be the `kubectl` you use when reading and following documentation from the Internet.
124 |
125 | `k3s` comes with a built-in `kubectl` command which can be used like this:
126 |
127 | ```bash
128 | sudo /usr/local/bin/k3s kubectl
129 | ```
130 |
131 | ### Using the built-in `kubectl` command as an alias (better)
132 |
133 | You can set up a terminal alias for the above built-in `k3s kubectl` command by updating your ~/.bashrc, which will enable the alias in every future session. 
134 |
135 | ```bash
136 | alias kubectl="sudo /usr/local/bin/k3s kubectl "
137 | ```
138 |
139 | Before you can use the alias you need to do one of the following:
140 |
141 | - run the above alias command in your terminal (once per terminal you want to work in right now)
142 | - source the ~/.bashrc file, e.g. `source ~/.bashrc`, to activate the alias
143 | - log out and then log back in, which will source the alias for you automatically
144 |
145 |
146 |
147 | Please note that this method will ask for your `sudo` password on the first run and whenever your sudo timeout has expired.
148 |
149 | ### Using the official `kubectl` command (best)
150 |
151 | Your distribution may have its own special packaging, so follow the guide at [https://kubernetes.io](https://kubernetes.io/docs/tasks/tools/install-kubectl/) to install the `kubectl` tool on your system.
152 |
153 | After installation you need to set up a `~/.kube/config` file for your user (otherwise you'll end up needing sudo and the KUBECONFIG environment variable set every time you want to use k3s - that isn't advised).
154 |
155 | 1. Create the directory `~/.kube` using the command ```mkdir ~/.kube```
156 | 2. Copy the `k3s` kube config file to your directory using the command ```sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config```
157 |
158 | You should now be able to run the command `kubectl` and have `k3s` at your fingertips :-)
159 |
160 | ## Make sure everything is working as expected
161 |
162 | Once you've started `k3s` and settled on a `kubectl` method, run the following commands to make sure `k3s` is up and running:
163 |
164 | ### Get the node list to see if it is running
165 |
166 | ```bash
167 | kubectl get nodes
168 | ```
169 |
170 | You should see two lines: a header line and a line for the computer you are running `k3s` on. 
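If you want to check the node status from a script rather than by eye, the tabular output is easy to parse. A sketch run against captured sample output (the hostname and version shown are hypothetical; in practice you would pipe real `kubectl get nodes` output in):

```shell
# Sample output as captured from `kubectl get nodes` (values are made up)
sample='NAME       STATUS   ROLES    AGE   VERSION
mylaptop   Ready    master   10m   v1.14.6-k3s.1'

# Print each node name with its status; anything other than Ready needs a look
echo "$sample" | awk 'NR > 1 { print $1, $2 }'
```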
171 |
172 | ### Get a list of all pods
173 |
174 | ```bash
175 | kubectl get pods --all-namespaces
176 | ```
177 |
178 | You should see a columned list of namespaces, pod names and some extra information for each pod.
179 |
180 | ## Your first deployment
181 |
182 | To deploy to `k3s` you need 3 things
183 |
184 | 1. a `kind: Deployment`
185 | 2. a `kind: Service`
186 | 3. a `kind: Ingress`
187 |
188 | Take a small `nginx` example:
189 |
190 | ## Preparing the `nginx` Deployment files (declarative YAML configuration)
191 |
192 | (these files are available under [nginx-deployment](nginx-deployment/))
193 |
194 | Create an `nginx` directory for the YAML files needed to deploy `nginx` to `k3s`, e.g.:
195 |
196 | ```bash
197 | mkdir ~/nginx-deployment
198 | cd ~/nginx-deployment
199 | ```
200 |
201 | In this directory you'll create 3 files (or download them from the [nginx-deployment](nginx-deployment) directory)
202 |
203 | `nginx-deployment.yml` with the following content:
204 |
205 | ```yaml
206 | apiVersion: apps/v1
207 | kind: Deployment
208 | metadata:
209 |   name: nginx
210 | spec:
211 |   selector:
212 |     matchLabels:
213 |       app: nginx
214 |   replicas: 2
215 |   template:
216 |     metadata:
217 |       labels:
218 |         app: nginx
219 |     spec:
220 |       containers:
221 |       - name: nginx
222 |         image: nginx:1.7.9
223 |         ports:
224 |         - containerPort: 80
225 | ```
226 |
227 | `nginx-service.yml` with the following content:
228 |
229 | ```yaml
230 | apiVersion: v1
231 | kind: Service
232 | metadata:
233 |   name: nginx
234 | spec:
235 |   ports:
236 |   - port: 80
237 |     protocol: TCP
238 |     targetPort: 80
239 |   type: NodePort
240 |   selector:
241 |     app: nginx
242 | ```
243 |
244 | `nginx-ingress.yml` with the following content:
245 |
246 | ```yaml
247 | apiVersion: extensions/v1beta1
248 | kind: Ingress
249 | metadata:
250 |   name: nginx
251 | spec:
252 |   rules:
253 |   - host: UNIQUE_FQDN
254 |     http:
255 |       paths:
256 |       - backend:
257 |           serviceName: nginx
258 |           servicePort: 80
259 |         path: /*
260 | ```
261 |
262 | 
**NOTE:** The **_UNIQUE_FQDN_** is either a DNS entry you already have for the server, e.g. nginx.example.com in your /etc/hosts file, or one set up in your company's DNS registry.
263 |
264 | The Ingress matches on FQDNs and not IP addresses, so `localhost` is not a valid name - nor are subdomains of it, as /etc/hosts does not do dynamic resolution.
265 |
266 | You can get the Ingress Service IP address that `k3s` has hooked into using the command
267 |
268 | ```bash
269 | kubectl -n kube-system get svc -o yaml | grep ip:
270 | ```
271 |
272 | Or, if you have [jq](https://stedolan.github.io/jq/download/) installed, you can get _only_ the IP address using this command:
273 |
274 | ```bash
275 | kubectl -n kube-system get svc -o json | jq -r '.items[].status.loadBalancer.ingress[0].ip'
276 | ```
277 |
278 | Using that IP address you can set up a `/etc/hosts` entry using the command
279 |
280 | ```bash
281 | export IP=$(kubectl -n kube-system get svc -o json | jq -r '.items[].status.loadBalancer.ingress[0].ip' | grep -v "^$" | grep -v null)
282 | echo "$IP nginx.example.com" | sudo tee -a /etc/hosts
283 | ```
284 |
285 | If you run `k3s` on a laptop that changes IP _all the time_, then use this command instead to keep only **one** updated `nginx.example.com` DNS entry in your /etc/hosts file at any given time; it will work once you deploy nginx as described below:
286 |
287 | ```bash
288 | export IP=$(kubectl -n kube-system get svc -o json | jq -r '.items[].status.loadBalancer.ingress[0].ip' | grep -v "^$" | grep -v null)
289 | sudo sed -i "s,^.*nginx.example.com,$IP nginx.example.com,g" /etc/hosts
290 | ```
291 |
292 | ## Setup a namespace for `nginx` to run in
293 |
294 | Run the following command to create a namespace for `nginx` to run in
295 |
296 | ```bash
297 | kubectl create namespace nginx
298 | ```
299 |
300 | ## Deploy `nginx` to `k3s`
301 |
302 | Make sure you are in the directory where you created the above 3 YAML files. 
303 |
304 | Run the following commands:
305 |
306 | ```bash
307 | kubectl -n nginx create -f nginx-deployment.yml
308 | kubectl -n nginx create -f nginx-service.yml
309 | kubectl -n nginx create -f nginx-ingress.yml
310 | ```
311 |
312 | You can view the status of the deployment using the command
313 |
314 | ```bash
315 | kubectl -n nginx describe deployment nginx
316 | ```
317 |
318 | Or, if you trust your Internet connection and your YAML files (no typos), just wait for it to finish with the command
319 |
320 | ```bash
321 | kubectl -n nginx rollout status deployment nginx
322 | ```
323 |
324 | Src: [https://www.mankier.com/1/kubectl-rollout-status](https://www.mankier.com/1/kubectl-rollout-status)
325 |
326 | Once the 3 `kubectl create` commands (or the `kubectl rollout status` command) report OK - aka everything is running - you should be able to open [http://nginx.example.com](http://nginx.example.com) and see the default `nginx` webpage. (Remember to update your /etc/hosts file or company DNS to point to the correct `k3s` IP address - usually the only non-local IP on the server/desktop.)
327 |
328 | If you use e.g. _curl_ then the output would be something like this:
329 |
330 | ```bash
331 | $ curl http://nginx.example.com
332 | <!DOCTYPE html>
333 | <html>
334 | <head>
335 | <title>Welcome to nginx!</title>
336 | <style>
337 |     body {
338 |         width: 35em;
339 |         margin: 0 auto;
340 |         font-family: Tahoma, Verdana, Arial, sans-serif;
341 |     }
342 | </style>
343 | </head>
344 | <body>
345 | <h1>Welcome to nginx!</h1>
346 | <p>If you see this page, the nginx web server is successfully installed and
347 | working. Further configuration is required.</p>
348 |
349 | <p>For online documentation and support please refer to
350 | <a href="http://nginx.org/">nginx.org</a>.<br/>
351 | Commercial support is available at
352 | <a href="http://nginx.com/">nginx.com</a>.</p>
353 |
354 | <p><em>Thank you for using nginx.</em></p>
355 | </body>
356 | </html>
357 | ```
358 |
--------------------------------------------------------------------------------
/README-systemd.md:
--------------------------------------------------------------------------------
1 | ## Systemd integration
2 |
3 | There are 3 ways to run `k3s`
4 |
5 | 1. Manually in a terminal using `sudo k3s server` (tedious)
6 | 2. Set it up as a service you start/stop manually (practical)
7 | 3. Set it up as a boot service so it is always ready (practical)
8 |
9 |
10 | `k3s` supports 2 types of server runtime
11 | * with **Docker**
12 | * with **containerd**
13 |
14 | For working with Kubernetes either will do just fine, but if you use - or plan on using - the computer that `k3s` runs on, then choose the **Docker** variant, as it will use Docker and you can "play around" with it as well.
15 | If you won't ever "touch" the server then use the **containerd** version.
16 |
17 |
18 |
19 | ## The 2 systemd service files
20 |
21 | **Docker**
22 | Use this systemd service file if you have Docker installed, or plan on using Docker on the same server/host that `k3s` will be running on
23 | * [k3s.service-docker](k3s.service-docker)
24 |
25 |
26 | **containerd**
27 | Use this systemd service file if you don't have Docker installed (and don't plan on using Docker), e.g. a completely remote `k3s` server such as a Raspberry Pi or similar.
28 | * [k3s.service-containerd](k3s.service-containerd)
29 |
30 |
31 | # Installing it
32 |
33 | 1. Download the above file that you want to use (e.g. the Docker version)
34 | 2. Move it to `/etc/systemd/system/k3s.service` using the command `sudo mv k3s.service-docker /etc/systemd/system/k3s.service`
35 | 3. 
Reload systemd `sudo systemctl daemon-reload`
36 |
37 |
38 | [Go back to main page](README-first-draft.md)
--------------------------------------------------------------------------------
/README-uninstall-k3s.md:
--------------------------------------------------------------------------------
1 | # How to uninstall k3s
2 |
3 | If you get tired of `k3s` and want to uninstall it (or reinstall it because an upgrade didn't work properly), then this is the way to do it:
4 |
5 | 1. Stop the `k3s` service `sudo systemctl stop k3s`
6 | 2. Disable the `k3s` service `sudo systemctl disable k3s`
7 | 3. Remove the systemd file(s) `sudo find /etc/systemd/system -iname 'k3s*' -delete`
8 | 4. If you use Docker: stop all containers `sudo docker stop $(sudo docker ps -a -q)`
9 | 5. Unmount any volumes `sudo umount $(mount | grep rancher | cut -d" " -f3)`
10 | 6. Remove all `k3s` directories `sudo rm -rf /etc/rancher /var/lib/rancher`
11 | 7. Purge/prune all Docker images `sudo docker system prune -a -f`
12 |
13 | And that is it.
--------------------------------------------------------------------------------
/README-volumes.md:
--------------------------------------------------------------------------------
1 | # `k3s` and volumes
2 |
3 | `k3s` does not come with any pre-installed StorageClasses for PersistentVolumes (used by PersistentVolumeClaims in Deployments).
4 |
5 | You can install [local-path-provisioner](https://github.com/rancher/local-path-provisioner) to do the job for you - it will enable you to utilize the local disk that `k3s` is running on. 
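Once installed, the provisioner registers a StorageClass named `local-path` that PVCs can reference. Roughly what that object looks like - a sketch with field values assumed from the project's defaults, so verify against the upstream deployment YAML:

```yaml
# Sketch of the StorageClass registered by local-path-provisioner
# (provisioner name and binding mode assumed from the project's defaults)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```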
6 |
7 | ## Install `local-path-provisioner`
8 |
9 | ```bash
10 | # Make the required directory where local-path-provisioner stores its data, i.e. the PVs backing the PVCs
11 | sudo mkdir /opt/local-path-provisioner/
12 |
13 | # Install local-path-provisioner
14 | kubectl create -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
15 | ```
16 |
17 | Wait a little for the `local-path-storage` system to be set up. You can monitor the `local-path-storage` namespace or the `rollout status`
18 |
19 | ```bash
20 | kubectl -n local-path-storage rollout status deployment local-path-provisioner
21 | ```
22 |
23 | After that you should be able to add local volumes (PVCs) like so (if you installed the nginx deployment in [README.md](README.md)):
24 |
25 | **Create a file named `nginx-pvc.yml` with the following content:**
26 | ```yaml
27 | apiVersion: v1
28 | kind: PersistentVolumeClaim
29 | metadata:
30 |   name: nginx-html
31 |   namespace: nginx
32 | spec:
33 |   accessModes:
34 |     - ReadWriteOnce
35 |   storageClassName: local-path
36 |   resources:
37 |     requests:
38 |       storage: 2Gi
39 | ```
40 | Apply the file:
41 | ```bash
42 | kubectl -n nginx create -f nginx-pvc.yml
43 | ```
44 | Check that the PVC is being created (it should be in **Pending** state)
45 | ```bash
46 | kubectl -n nginx get pv,pvc,pods
47 | ```
48 |
49 | Update the nginx deployment's containers section with the last 7 lines as shown here:
50 | ```yaml
51 | apiVersion: apps/v1
52 | kind: Deployment
53 | metadata:
54 |   name: nginx
55 | spec:
56 |   selector:
57 |     matchLabels:
58 |       app: nginx
59 |   replicas: 1
60 |   template:
61 |     metadata:
62 |       labels:
63 |         app: nginx
64 |     spec:
65 |       containers:
66 |       - name: nginx
67 |         image: nginx:1.7.9
68 |         ports:
69 |         - containerPort: 80
70 |         volumeMounts:
71 |         - name: html
72 |           mountPath: /usr/share/nginx/html
73 |       volumes:
74 |       - name: html
75 |         persistentVolumeClaim:
76 |           claimName: nginx-html
77 | ```
78 |
79 | Re-check the 
PV, PVC and pods
80 | ```bash
81 | kubectl -n nginx get pv,pvc,pods
82 | ```
83 | The output should now be something like this:
84 | ```bash
85 | kubectl -n nginx get pv,pvc,pod
86 | NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
87 | persistentvolume/pvc-1c1f0987-d4d3-11e9-b617-0800277d4863   2Gi        RWO            Delete           Bound    nginx/nginx-html   local-path              6m46s
88 |
89 | NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
90 | persistentvolumeclaim/nginx-html   Bound    pvc-1c1f0987-d4d3-11e9-b617-0800277d4863   2Gi        RWO            local-path     7m5s
91 |
92 | NAME                         READY   STATUS    RESTARTS   AGE
93 | pod/nginx-7cb66fc985-vpxrb   1/1     Running   0          6m30s
94 | ```
95 |
96 | Under the directory `/opt/local-path-provisioner/` you should now have a directory named after the PV, i.e. `pvc-1c1f0987-d4d3-11e9-b617-0800277d4863`.
97 | If you place an `index.html` file there (**as root**) then you'll be able to see it via Nginx on [http://nginx.example.com](http://nginx.example.com) or via curl:
98 | ```bash
99 | curl http://nginx.example.com
100 | ```
101 |
102 |
103 | ## Official installation documentation and examples on GitHub
104 |
105 | You can find the install and usage guide at [https://github.com/rancher/local-path-provisioner](https://github.com/rancher/local-path-provisioner)
106 |
107 | [Go back to main page](README-first-draft.md)
108 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # k3s Install Process step-by-step
2 |
3 | **Notes:**
4 | - This guide is spawned both because of a friend and [this Rancher k3s GitHub issue](https://github.com/rancher/k3s/issues/795#issuecomment-530440037).
5 | - This guide is made from a fresh install of [Ubuntu 18.04 Server](https://ubuntu.com/download/server) in VirtualBox.
6 | - The Ubuntu 18.04 Server (henceforth just 'server') only has OpenSSH pre-installed during installation. 
7 | - Docker was not pre-installed, as the default package is a [snap](https://snapcraft.io) version - **it does not work well with `k3s` - DO NOT PRE-INSTALL IT**
8 |
9 |
10 | ## Requirements before you get started
11 |
12 | * Before you start, make sure your computer is able to reach the Internet (e.g. `ping -c1 google.com`)
13 | * If needed: Install [Docker](https://docs.docker.com/v17.09/engine/installation/linux/docker-ce/ubuntu/) using the proper installation method and **DO NOT use the _snap_ version provided by default via `apt/apt-get` - it does not work properly with `k3s`**
14 | (note: only install Docker if you need it - it can be omitted)
15 |
16 | ## Installing `k3s`
17 |
18 |
19 | * Open a terminal and run the following command as your own user with **sudo** rights: ```curl -sfL https://get.k3s.io | sudo sh -```
20 | * The installation will now proceed, and once finished you'll get your command prompt back.
21 | * Test that `k3s` is operational using the command: ```sudo kubectl get nodes```
22 |
23 | ## Communicating/working with `k3s`
24 |
25 | The command line tool `kubectl` is the way to communicate with `k3s` - or any Kubernetes installation - if you do not have any other 3rd-party tools to do so.
26 |
27 | **`k3s`** comes with its own built-in `kubectl` command that is by default symlinked to `/usr/local/bin/kubectl`.
28 | This version of `kubectl` is built into `k3s` and uses the `/etc/rancher/k3s/k3s.yaml` file, which is owned by root.
29 | To use that version, run ```sudo chmod o+r /etc/rancher/k3s/k3s.yaml``` and you can run `kubectl` without the need for sudo (note that this makes the cluster credentials readable by every local user).
30 |
31 | ### Alternative: use the official `kubectl`
32 |
33 | You can install the official `kubectl` and work without compromising the security of the above `k3s.yaml` file. 
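To see concretely what the `chmod o+r` compromise does to the file mode, here is a sketch demonstrated on a temporary stand-in file rather than the real `/etc/rancher/k3s/k3s.yaml`:

```shell
# Demonstrate the permission change on a stand-in file
f=$(mktemp)
chmod 600 "$f"        # root-only style: owner read/write only
stat -c '%a' "$f"     # prints 600
chmod o+r "$f"        # what `sudo chmod o+r k3s.yaml` does
stat -c '%a' "$f"     # prints 604 - now world-readable
```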
34 |
35 | To use the official Kubernetes `kubectl`, go to [https://kubernetes.io/docs/tasks/tools/install-kubectl/](https://kubernetes.io/docs/tasks/tools/install-kubectl/) and follow the installation procedure for your distribution, or use the quick procedure below.
36 |
37 |
38 | Once you've installed the proper `kubectl` for your distribution, set up its configuration as shown in the last step below.
39 | You should then be able to work with `k3s` using `kubectl` the way it is designed. If you did the *chmod* above on the `k3s.yaml` file, then re-run it with `o-r` instead of `o+r` to reset the permissions on the file to 600, i.e. read/write only for root.
40 |
41 | ### A quick official `kubectl` install/upgrade procedure
42 | If you are on a 64-bit Intel or AMD (amd64) Debian/Ubuntu or RHEL/CentOS-like distribution, then this can be done for install and upgrade of the official `kubectl` command:
43 |
44 | ```bash
45 | cd /tmp
46 | curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
47 | sudo rm -f /usr/local/bin/kubectl
48 | chmod +x kubectl
49 | sudo mv kubectl /usr/local/bin/kubectl
50 | ```
51 |
52 | **Regardless of which official `kubectl` installation method you choose, do this once**
53 | The official `kubectl` uses the `~/.kube/config` file to get its configuration for Kubernetes or `k3s`, so make a copy of the `k3s.yaml` file in your HOME dir like this:
54 |
55 | ```bash
56 | mkdir ~/.kube
57 | sudo cat /etc/rancher/k3s/k3s.yaml > ~/.kube/config
58 | ```
59 |
60 |
61 |
62 |
63 | ## Your first deployment
64 |
65 | For the next steps you need the IP address of the `k3s` Ingress controller. 
66 | You can get it via the following command
67 |
68 | ```bash
69 | sudo kubectl get services --namespace kube-system traefik --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
70 | ```
71 |
72 | The IP address is needed for you to set up one or more DNS entries (A records or CNAMEs) pointing to it, or local /etc/hosts entries, e.g.
73 | ```bash
74 | x.x.x.x www.example.com nginx.example.com example.com
75 | ```
76 | (replace x.x.x.x with the IP from the above command)
77 |
78 | This will enable you to set up an Ingress object so you can have multiple services running on the same IP using normal FQDNs like "nginx.example.com" and the like.
79 |
80 | ### Setting up a Nginx service
81 |
82 | In this repo there is a directory named [nginx-deployment](nginx-deployment) with 3 files in it.
83 |
84 | The 3 files are
85 | - nginx-deployment.yml
86 | - nginx-service.yml
87 | - nginx-ingress.yml
88 |
89 | **You need to update the `nginx-ingress.yml` file** with the FQDN you set up in your DNS or in /etc/hosts, unless you configured it for **nginx.example.com**.
90 |
91 | Download the 3 files and apply them to `k3s` in a new namespace (namespaces are virtual "rooms" that separate applications from each other, like directories do for files)
92 |
93 | **Create a new namespace: nginx**
94 | ```bash
95 | kubectl create namespace nginx
96 |
97 | # Short form - some parameters to kubectl have a short form, e.g. ns for namespace, svc for service, etc. 
98 | kubectl create ns nginx
99 | ```
100 |
101 | **Apply the 3 files to the `nginx` namespace**
102 | ```bash
103 | kubectl -n nginx create -f nginx-deployment.yml
104 | kubectl -n nginx create -f nginx-service.yml
105 | kubectl -n nginx create -f nginx-ingress.yml
106 | ```
107 |
108 | For each of the commands (including the namespace creation above) you should get one line back from `kubectl ...`, which is
109 | - `namespace/nginx created` when creating the namespace
110 | - `deployment.apps/nginx created` when creating the Deployment
111 | - `service/nginx created` when creating the Service
112 | - `ingress.extensions/nginx created` when creating the Ingress object
113 |
114 | After a little while you'll be able to see the setup using the following commands:
115 | ```bash
116 | kubectl --namespace nginx get deployments
117 | kubectl --namespace nginx get pods
118 | kubectl --namespace nginx get ingresses
119 | kubectl --namespace nginx get services
120 | ```
121 |
122 | Each of these will show you one or more lines.
123 | Each line shows the current state of the Deployment, Pod(s), Service and Ingress object.
124 |
125 |
126 | **Check if Nginx is working**
127 |
128 | ```bash
129 | curl http://nginx.example.com
130 | ```
131 | *replace nginx.example.com with whatever you set up*
132 |
133 | **Congratulations**
134 |
135 | You have now set up Kubernetes on your desktop/server and have deployed **nginx** (though some files are missing... 
we'll get to that in [README-volumes.md](README-volumes.md))
136 |
137 |
138 | ## Kubectl tips and tricks
139 |
140 |
141 | ### Command completion for `kubectl`
142 |
143 | If you use the BASH shell (most likely you do), then run the following commands and log out and log back in again
144 | ```bash
145 | kubectl completion bash | tee -a ~/.profile
146 | source ~/.profile
147 | ```
148 | You can now use completion for most commands and pod lookups, just like on the file system, e.g.
149 | ```bash
150 | # get a list of namespaces
151 | kubectl -n <TAB><TAB>
152 |
153 | # get a list of what you can get from the namespace (it is a lot)
154 | kubectl -n nginx get <TAB><TAB>
155 |
156 | # get logs for a particular pod (if there are multiple matches they will be shown so you can type the next alphanumeric character of the name of the pod you want logs from)
157 | kubectl -n nginx lo<TAB>
158 | ```
159 | If you use an alternate shell then run `kubectl completion -h` to get help with setting up completion.
160 |
--------------------------------------------------------------------------------
/k3s.service-containerd:
--------------------------------------------------------------------------------
1 | [Unit]
2 | Description=Lightweight Kubernetes
3 | Documentation=https://k3s.io
4 | After=network.target
5 |
6 | [Service]
7 | Type=notify
8 | EnvironmentFile=/etc/systemd/system/k3s.service.env
9 | ExecStartPre=-/sbin/modprobe br_netfilter
10 | ExecStartPre=-/sbin/modprobe overlay
11 | ExecStart=/usr/local/bin/k3s server
12 | KillMode=process
13 | Delegate=yes
14 | LimitNOFILE=infinity
15 | LimitNPROC=infinity
16 | LimitCORE=infinity
17 | TasksMax=infinity
18 | TimeoutStartSec=0
19 |
20 | [Install]
21 | WantedBy=multi-user.target
--------------------------------------------------------------------------------
/k3s.service-docker:
--------------------------------------------------------------------------------
1 | [Unit]
2 | Description=Lightweight Kubernetes
3 | Documentation=https://k3s.io
4 | 
After=network.target 5 | 6 | [Service] 7 | Type=notify 8 | #EnvironmentFile=/etc/systemd/system/k3s.service.env 9 | #ExecStartPre=-/sbin/modprobe br_netfilter 10 | #ExecStartPre=-/sbin/modprobe overlay 11 | ExecStart=/usr/local/bin/k3s server --docker 12 | KillMode=process 13 | Delegate=yes 14 | LimitNOFILE=infinity 15 | LimitNPROC=infinity 16 | LimitCORE=infinity 17 | TasksMax=infinity 18 | TimeoutStartSec=0 19 | 20 | [Install] 21 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /nginx-deployment/README.md: -------------------------------------------------------------------------------- 1 | **This directory contains the 3 files described in the main [README.md](../README-first-draft.md) file** and the nginx-all-with-pvc.yml to deploy it all with a volume mounted that you can use - see below. 2 | 3 | 4 | 5 | **[nginx-deployment.yml](nginx-deployment.yml)** 6 | ``` 7 | apiVersion: apps/v1 8 | kind: Deployment 9 | metadata: 10 | name: nginx 11 | spec: 12 | selector: 13 | matchLabels: 14 | app: nginx 15 | replicas: 2 16 | template: 17 | metadata: 18 | labels: 19 | app: nginx 20 | spec: 21 | containers: 22 | - name: nginx 23 | image: nginx:stable 24 | ports: 25 | - containerPort: 80 26 | ``` 27 | 28 | **[nginx-ingress.yml](nginx-ingress.yml)** 29 | ``` 30 | apiVersion: networking.k8s.io/v1 31 | kind: Ingress 32 | metadata: 33 | name: nginx 34 | spec: 35 | rules: 36 | - host: nginx.example.com 37 | http: 38 | paths: 39 | - path: / 40 | pathType: Prefix 41 | backend: 42 | service: 43 | name: nginx 44 | port: 45 | number: 80 46 | ``` 47 | 48 | **[nginx-service.yml](nginx-service.yml)** 49 | ``` 50 | apiVersion: v1 51 | kind: Service 52 | metadata: 53 | name: nginx 54 | spec: 55 | ports: 56 | - port: 80 57 | protocol: TCP 58 | targetPort: 80 59 | type: NodePort 60 | selector: 61 | app: nginx 62 | ``` 63 | 64 | # Deploying it all in one 65 | 66 | ## With PVC (volume for static Nginx content) 67 | 68 | Deploy 
`nginx-all-with-pvc.yml` with
69 |
70 | ```bash
71 | kubectl apply -f nginx-all-with-pvc.yml
72 | ```
73 |
74 | ## Accessing Nginx using curl (or your browser)
75 |
76 | Since the file uses "nginx.example.com" you need to either
77 | 1. set up an /etc/hosts entry pointing the name to your IP address
78 | 2. use the Host header when accessing the IP address
79 |
80 |
81 | ## Setting up /etc/hosts
82 |
83 | Get the IP address of the Ingress Controller in k3s
84 | ```bash
85 | kubectl -n kube-system get svc
86 | ```
87 | Look for the IP address in the `EXTERNAL-IP` column and note it down - that is your IP address.
88 |
89 | Add the following to your /etc/hosts file (replace <EXTERNAL-IP> with the IP you noted down)
90 | ```bash
91 | echo "<EXTERNAL-IP> nginx.example.com" | sudo tee -a /etc/hosts
92 | ```
93 |
94 | After that you can run
95 | ```bash
96 | curl http://nginx.example.com
97 | ```
98 |
99 | You should see a "403 Forbidden" or similar error, as there is no index.html file in the Nginx html directory yet
100 |
101 | ## Using Host header in curl
102 |
103 | If your IP address changes "all the time" and you will only be working on the CLI using curl, then you can just use the following command:
104 | ```bash
105 | curl -H "Host: nginx.example.com" http://localhost
106 | ```
107 | You should see a "403 Forbidden" or similar error, as there is no index.html file in the Nginx html directory yet
108 |
109 | The Ingress Controller sends incoming traffic to the correct Service using the Host header. 
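The /etc/hosts bookkeeping from the first approach can be wrapped in one idempotent helper: append the entry if it is missing, replace it in place if it exists. A sketch demonstrated on a temporary file (for real use, point HOSTS at /etc/hosts and run the writes through sudo; the IPs are placeholders):

```shell
# Idempotent hosts-entry update, demonstrated on a temp file
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n' > "$HOSTS"

update_host() {
  # $1 = current ingress IP for nginx.example.com
  if grep -q 'nginx\.example\.com' "$HOSTS"; then
    sed -i "s/^.*nginx\.example\.com/$1 nginx.example.com/" "$HOSTS"
  else
    echo "$1 nginx.example.com" >> "$HOSTS"
  fi
}

update_host 192.168.1.50   # first run appends the entry
update_host 10.0.0.7       # later runs replace it in place
cat "$HOSTS"
```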
110 | 
111 | ## Accessing the Nginx HTML directory (DocumentRoot in Apache)
112 | 
113 | The nginx container in the Deployment's Pod has a local host path mounted at /usr/share/nginx/html, which you can find using the following command
114 | 
115 | ```bash
116 | kubectl -n nginx get pv -o jsonpath='{.items[].spec.hostPath.path}'
117 | ```
118 | 
119 | The directory is owned by `root` and your user cannot access it without `sudo`, so run the following:
120 | 
121 | ```bash
122 | export NGINX_HTML_PATH=$(kubectl -n nginx get pv -o jsonpath='{.items[].spec.hostPath.path}')
123 | sudo chown -R $USER ${NGINX_HTML_PATH}
124 | ```
125 | 
126 | Set up a local symbolic link to that path
127 | ```bash
128 | export NGINX_HTML_PATH=$(kubectl -n nginx get pv -o jsonpath='{.items[].spec.hostPath.path}')
129 | ln -s ${NGINX_HTML_PATH} ${HOME}/nginx-html-directory
130 | cd
131 | cd nginx-html-directory
132 | echo "Welcome to Nginx on k3s" > index.html
133 | ```
134 | 
135 | After adding the above index.html, you should be able to see your custom index.html file using curl (or your browser, depending on your setup)
136 | 
137 | ```bash
138 | curl http://nginx.example.com
139 | curl -H "Host: nginx.example.com" http://localhost
140 | ```
141 | 
142 | The output should be what is in your index.html file, e.g.:
143 | ```html
144 | Welcome to Nginx on k3s
145 | ```
146 | 
147 | 
148 | [Go back to main page](../README.md)
149 | 
--------------------------------------------------------------------------------
/nginx-deployment/nginx-all-with-pvc.yml:
--------------------------------------------------------------------------------
 1 | ---
 2 | apiVersion: v1
 3 | kind: Namespace
 4 | metadata:
 5 |   name: nginx
 6 | ---
 7 | apiVersion: v1
 8 | kind: PersistentVolumeClaim
 9 | metadata:
10 |   name: nginx-html
11 |   namespace: nginx
12 | spec:
13 |   accessModes:
14 |     - ReadWriteOnce
15 |   storageClassName: local-path
16 |   resources:
17 |     requests:
18 |       storage: 2Gi
19 | ---
20 | apiVersion: apps/v1
21 | kind: Deployment
22 | 
metadata:
23 |   name: nginx
24 |   namespace: nginx
25 | spec:
26 |   selector:
27 |     matchLabels:
28 |       app: nginx
29 |   replicas: 1
30 |   template:
31 |     metadata:
32 |       labels:
33 |         app: nginx
34 |     spec:
35 |       containers:
36 |       - name: nginx
37 |         image: nginx:stable
38 |         ports:
39 |         - containerPort: 80
40 |         volumeMounts:
41 |         - name: htmldir
42 |           mountPath: /usr/share/nginx/html
43 |       volumes:
44 |       - name: htmldir
45 |         persistentVolumeClaim:
46 |           claimName: nginx-html
47 | ---
48 | apiVersion: v1
49 | kind: Service
50 | metadata:
51 |   name: nginx
52 |   namespace: nginx
53 | spec:
54 |   ports:
55 |   - port: 80
56 |     protocol: TCP
57 |     targetPort: 80
58 |   type: ClusterIP
59 |   selector:
60 |     app: nginx
61 | ---
62 | apiVersion: networking.k8s.io/v1
63 | kind: Ingress
64 | metadata:
65 |   name: nginx
66 |   namespace: nginx
67 |   annotations:
68 |     kubernetes.io/ingress.class: traefik
69 | spec:
70 |   rules:
71 |   - host: nginx.example.com
72 |     http:
73 |       paths:
74 |       - path: /
75 |         pathType: Prefix
76 |         backend:
77 |           service:
78 |             name: nginx
79 |             port:
80 |               number: 80
81 | ---
82 | apiVersion: v1
83 | kind: ConfigMap
84 | metadata:
85 |   namespace: nginx
86 |   name: local-path-test
87 |   labels:
88 |     app.kubernetes.io/name: local-path-test
89 | data:
90 |   test.sh: |
91 |     #!/bin/sh
92 |     id
93 |     ls -al /usr/share/nginx/html && \
94 |     echo 'Hello from local-path-test' && \
95 |     cp /config/index.html /usr/share/nginx/html/index.html && \
96 |     touch /usr/share/nginx/html/foo && \
97 |     ls -al /usr/share/nginx/html
98 |   index.html: |
99 |     some test content
--------------------------------------------------------------------------------
/nginx-deployment/nginx-deployment.yml:
--------------------------------------------------------------------------------
 1 | apiVersion: apps/v1
 2 | kind: Deployment
 3 | metadata:
 4 |   name: nginx
 5 | spec:
 6 |   selector:
 7 |     matchLabels:
 8 |       app: nginx
 9 |   replicas: 2
10 |   template:
11 |     metadata:
12 |       labels:
13 |         app: nginx
14 |     spec:
15 |       containers:
16 |       - name:
nginx
17 |         image: nginx:stable
18 |         ports:
19 |         - containerPort: 80
20 | 
--------------------------------------------------------------------------------
/nginx-deployment/nginx-ingress.yml:
--------------------------------------------------------------------------------
 1 | apiVersion: networking.k8s.io/v1
 2 | kind: Ingress
 3 | metadata:
 4 |   name: nginx
 5 | spec:
 6 |   rules:
 7 |   - host: nginx.example.com
 8 |     http:
 9 |       paths:
10 |       - path: /
11 |         pathType: Prefix
12 |         backend:
13 |           service:
14 |             name: nginx
15 |             port:
16 |               number: 80
17 | 
--------------------------------------------------------------------------------
/nginx-deployment/nginx-service.yml:
--------------------------------------------------------------------------------
 1 | apiVersion: v1
 2 | kind: Service
 3 | metadata:
 4 |   name: nginx
 5 | spec:
 6 |   ports:
 7 |   - port: 80
 8 |     protocol: TCP
 9 |     targetPort: 80
10 |   type: NodePort
11 |   selector:
12 |     app: nginx
13 | 
--------------------------------------------------------------------------------
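
Because the standalone `nginx-service.yml` uses `type: NodePort`, Kubernetes also exposes the Service on a high port (30000-32767 by default) on every node. A hedged sketch of looking that port up and testing it, assuming the Service was applied to the `default` namespace on a single-node k3s install:

```shell
# Extract the node port Kubernetes assigned to the Service's port 80:
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
echo "nginx is reachable on node port ${NODE_PORT}"

# Any node's IP (here: the local node) forwards that port to the nginx Pods:
curl "http://localhost:${NODE_PORT}"
```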