├── .gitignore ├── DNS.md ├── README.md ├── configs ├── coredns.yaml ├── dhcp-daemonset.yaml ├── dns-reflector.yaml └── nfs-pv.yaml ├── dashboard.md ├── glusterfs-notes.txt ├── helm ├── README.md ├── armagetron │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ └── deployment.yaml │ └── values.yaml ├── csgo-comp │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ ├── deployment.yaml │ │ └── secret.yml │ └── values.yaml ├── csgo │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ ├── deployment.yaml │ │ └── secret.yml │ └── values.yaml ├── hl2dm │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ ├── deployment.yaml │ │ └── secret.yml │ └── values.yaml ├── lancache │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── _helpers.tpl │ │ ├── deployment.yaml │ │ ├── dns.yaml │ │ └── storage.yaml │ └── values.yaml ├── minecraft │ ├── Chart.yaml │ ├── mc.yaml │ ├── templates │ │ ├── 02deployment.yaml │ │ └── _helpers.tpl │ └── values.yaml ├── mordhau │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ └── deployment.yaml │ └── values.yaml ├── mssql │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ ├── deployment.yaml │ │ └── storage.yaml │ └── values.yaml ├── origin-docker │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ ├── deployment.yaml │ │ └── storage.yaml │ └── values.yaml ├── tf2-prophunt │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ └── deployment.yaml │ └── values.yaml ├── trackmania-forever │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── 
_helpers.tpl │ │ └── deployment.yaml │ └── values.yaml ├── unbound-values.yaml ├── unreal4 │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ └── deployment.yaml │ └── values.yaml ├── ut2004 │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ └── deployment.yaml │ └── values.yaml ├── wreckfest │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ └── deployment.yaml │ └── values.yaml └── zdaemon │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ ├── NOTES.txt │ ├── _helpers.tpl │ └── deployment.yaml │ └── values.yaml ├── installation.md ├── metallb-conf.yaml ├── network.md ├── registry-notes.txt ├── registry.md ├── storage-ceph-storageclass.yaml ├── storage-ceph.md ├── storage.md └── topology.json /.gitignore: -------------------------------------------------------------------------------- 1 | 2 | charts-master 3 | # ignore the config files for a specific event 4 | event-*.yaml 5 | 6 | *.pem 7 | *.crt 8 | *.srl 9 | 10 | # helm dependencies that are installed via `helm dep up` 11 | charts/*.tgz 12 | 13 | # Created by https://www.gitignore.io/api/go,vim,code,sublimetext 14 | 15 | ### Code ### 16 | # Visual Studio Code - https://code.visualstudio.com/ 17 | .settings/ 18 | .vscode/ 19 | tsconfig.json 20 | jsconfig.json 21 | 22 | ### Go ### 23 | # Binaries for programs and plugins 24 | *.exe 25 | *.exe~ 26 | *.dll 27 | *.so 28 | *.dylib 29 | 30 | # Test binary, build with `go test -c` 31 | *.test 32 | 33 | # Output of the go coverage tool, specifically when used with LiteIDE 34 | *.out 35 | 36 | ### Go Patch ### 37 | /vendor/ 38 | /Godeps/ 39 | 40 | ### SublimeText ### 41 | # Cache files for Sublime Text 42 | *.tmlanguage.cache 43 | *.tmPreferences.cache 44 | *.stTheme.cache 45 | 46 | # Workspace files are user-specific 47 | *.sublime-workspace 48 | 49 | # Project files should be 
checked into the repository, unless a significant 50 | # proportion of contributors will probably not be using Sublime Text 51 | # *.sublime-project 52 | 53 | # SFTP configuration file 54 | sftp-config.json 55 | 56 | # Package control specific files 57 | Package Control.last-run 58 | Package Control.ca-list 59 | Package Control.ca-bundle 60 | Package Control.system-ca-bundle 61 | Package Control.cache/ 62 | Package Control.ca-certs/ 63 | Package Control.merged-ca-bundle 64 | Package Control.user-ca-bundle 65 | oscrypto-ca-bundle.crt 66 | bh_unicode_properties.cache 67 | 68 | # Sublime-github package stores a github token in this file 69 | # https://packagecontrol.io/packages/sublime-github 70 | GitHub.sublime-settings 71 | 72 | ### Vim ### 73 | # Swap 74 | [._]*.s[a-v][a-z] 75 | [._]*.sw[a-p] 76 | [._]s[a-rt-v][a-z] 77 | [._]ss[a-gi-z] 78 | [._]sw[a-p] 79 | 80 | # Session 81 | Session.vim 82 | 83 | # Temporary 84 | .netrwhist 85 | *~ 86 | # Auto-generated tag files 87 | tags 88 | # Persistent undo 89 | [._]*.un~ 90 | 91 | 92 | # End of https://www.gitignore.io/api/go,vim,code,sublimetext 93 | 94 | 95 | -------------------------------------------------------------------------------- /DNS.md: -------------------------------------------------------------------------------- 1 | ## Using CoreDNS 2 | 3 | CoreDNS is the DNS server in kubeadm 1.11 onwards. This supports [rewriting queries](https://coredns.io/plugins/rewrite/). We can [update the coredns config.](configs/coredns.yaml) 4 | 5 | The linked config rewrites `blah.server.lan` to `blah.default.svc.cluster.local`. All that remains is exposing the DNS. We can run dnsmasq locally to forward external DNS queries to the internal address. 
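The rewrite rule itself is nothing magic - it's a plain regex substitution. As a quick standalone illustration (not taken from any config in this repo), the same mapping expressed with sed:

```shell
# Same regex mapping the CoreDNS rewrite rule applies to incoming queries:
# anything under .server.lan becomes the matching name in the default namespace.
echo "blah.server.lan" | sed -E 's/^(.*)\.server\.lan$/\1.default.svc.cluster.local/'
# prints: blah.default.svc.cluster.local
```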
6 | 7 | And then [dns-reflector.yaml](configs/dns-reflector.yaml) will run a DNS server on all kube master hosts which listens on the local LAN interface of the master (so it has a predictable address), forwards queries _only_ for `.server.lan` to the kube DNS system, and drops all other requests. 8 | 9 | ### Making services appear 10 | 11 | Assuming that you have some containers with the label `app: tron`, the below service will expose those containers to DNS in a conveniently short form. 12 | 13 | ``` 14 | apiVersion: v1 15 | kind: Service 16 | metadata: 17 | name: tron 18 | spec: 19 | selector: 20 | app: tron 21 | clusterIP: None 22 | ports: 23 | - name: foo # These ports don't matter if you're not using SRV records, but I think you need one, otherwise no A records are returned 24 | port: 1234 25 | targetPort: 1234 26 | ``` 27 | 28 | produces (with 2 replicas): 29 | 30 | ``` 31 | $ dig +short @10.96.0.10 tron.default.svc.cluster.local 32 | 10.0.0.195 33 | 10.0.0.203 34 | ``` 35 | 36 | We can use the DNS rewriting config at the top of this file to then get `tron.server.lan` or similar as a nice human-friendly name. 37 | 38 | ### Integrating the DNS with your LAN DNS 39 | 40 | Now that your Kubernetes masters are running a DNS server that responds to friendly names, it needs to be queryable from your LAN. 41 | 42 | Using BIND, you can add to your `named.conf.local` file: 43 | 44 | ``` 45 | zone "server.lan" { 46 | type forward; 47 | forward only; 48 | forwarders { 10.0.0.163; }; 49 | }; 50 | ``` 51 | 52 | Replacing, of course, `10.0.0.163` with the IP address of your master(s). 53 | 54 | ``` 55 | service bind9 reload # or equivalent 56 | ``` 57 | 58 | ``` 59 | $ host -t A byoc.server.lan 60 | byoc.server.lan has address 10.0.0.166 61 | byoc.server.lan has address 10.0.0.172 62 | ``` 63 | 64 | Winning! 65 | 66 | Other DNS servers have similar zone-forwarding functionality. pfSense, for example, has "Services -> DNS Forwarder".
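If your LAN resolver is dnsmasq rather than BIND, the equivalent of the forward zone above is a single line in `dnsmasq.conf` (again, substituting your master's IP):

```
server=/server.lan/10.0.0.163
```

This is the same option the dns-reflector deployment uses via `-S`, just pointed the other way - from your LAN resolver into the cluster.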
67 | 68 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes LAN Party 2 | 3 | This repo contains all of the info and configuration needed to get your LAN party running on Kubernetes. Or, rather, it will soon. Work in progress! 4 | 5 | ## Who is this for 6 | 7 | LAN parties have a few constraints that make Kubernetes not as straightforward as other use cases. For example, game servers need to be on the same broadcast domain to be discovered in the LAN tab of games; and for games that don't support that, we need sane DNS addresses for humans to use. And numbered servers. And simple cluster storage. So this guide is for people who want to run Kubernetes in an environment like that. 8 | 9 | ## Why Kubernetes for your LAN Party? 10 | 11 | Buzzwords. 12 | 13 | Also - Centralised management. Infrastructure as Code. Super fast and reproducible setup. 14 | 15 | Kubernetes adds complexity, so it's not for everyone. Before using it, make sure you have a backup plan, just in case :) 16 | 17 | ## What is in this 18 | 19 | * Documentation 20 | * Scripts 21 | * [Example configuration files](configs/) 22 | 23 | ## Documentation Index 24 | 25 | * [Host Installation](installation.md) 26 | * [Host and pod network configuration](network.md) 27 | * [DNS](DNS.md) 28 | * [Internal Docker registry](registry.md) 29 | * [Kubernetes Dashboard](dashboard.md) 30 | * [Persistent internal storage](storage.md) 31 | 32 | ## Join the Discord! 33 | 34 | [Join the Open Source LAN Discord!](https://discord.gg/0149LEvYPSzmnItKb). Chat with us about what you're doing with Kubernetes :) 35 | 36 | ## TODO 37 | 38 | This repo is still a WIP and hasn't been used in production yet. More content coming.
39 | 40 | TODO: 41 | 42 | * Scripts to automate most of the work (eg host deployment) 43 | * Deploying game servers 44 | * Deploying a LAN cache server 45 | * Documenting caveats 46 | * Testing in production 47 | * Much else... 48 | -------------------------------------------------------------------------------- /configs/coredns.yaml: -------------------------------------------------------------------------------- 1 | 2 | # WARNING: 3 | # Your coredns config may have changed. 4 | # You should take a backup of your coredns configmap before applying this 5 | # Or, just add the `rewrite stop {}` block to your own coredns configmap. 6 | # kubectl -n kube-system get configmap coredns > backup.yaml 7 | # 8 | # The below works with kube 1.16 9 | 10 | apiVersion: v1 11 | kind: ConfigMap 12 | metadata: 13 | name: coredns 14 | namespace: kube-system 15 | data: 16 | Corefile: | 17 | .:53 { 18 | errors 19 | health 20 | ready 21 | rewrite stop { 22 | name regex (.*)\.server\.lan {1}.default.svc.cluster.local 23 | answer name (.*)\.default\.svc\.cluster\.local {1}.server.lan 24 | } 25 | kubernetes cluster.local in-addr.arpa ip6.arpa { 26 | pods insecure 27 | fallthrough in-addr.arpa ip6.arpa 28 | } 29 | prometheus :9153 30 | forward . /etc/resolv.conf 31 | cache 30 32 | loop 33 | reload 34 | loadbalance 35 | } -------------------------------------------------------------------------------- /configs/dhcp-daemonset.yaml: -------------------------------------------------------------------------------- 1 | kind: ConfigMap 2 | apiVersion: v1 3 | metadata: 4 | name: kube-dhcp-cfg 5 | namespace: kube-system 6 | labels: 7 | tier: node 8 | app: dhcp 9 | data: 10 | # We require `br0` interface to be already created before bootstrapping 11 | # the kubernetes cluster. 
12 | bridge.conf: | 13 | { 14 | "cniVersion": "0.2.0", 15 | "name": "bridgenet", 16 | "type": "bridge", 17 | "bridge": "br0", 18 | "ipMasq": false, 19 | "isDefaultGateway": false, 20 | "hairpinMode": false, 21 | "ipam": { 22 | "type": "dhcp" 23 | } 24 | } 25 | 26 | --- 27 | apiVersion: apps/v1 28 | kind: DaemonSet 29 | metadata: 30 | name: kube-dhcp-daemon 31 | namespace: kube-system 32 | labels: 33 | tier: node 34 | app: dhcp 35 | spec: 36 | selector: 37 | matchLabels: 38 | app: dhcp 39 | tier: node 40 | template: 41 | metadata: 42 | labels: 43 | tier: node 44 | app: dhcp 45 | spec: 46 | # Needs to be hostNetwork: true for socket files to work 47 | hostNetwork: true 48 | # Needs to be hostPID: true so that the dhcp process can access all of the netns entries in /proc 49 | # for all of the containers being spun up (otherwise dhcp would only see itself in /proc) 50 | hostPID: true 51 | nodeSelector: 52 | beta.kubernetes.io/arch: amd64 53 | tolerations: 54 | - key: node-role.kubernetes.io/master 55 | operator: Exists 56 | effect: NoSchedule 57 | - key: node.kubernetes.io/not-ready 58 | operator: Exists 59 | effect: NoSchedule 60 | initContainers: 61 | - name: install-cni 62 | # This image can be any image with the `cp` binary available 63 | image: busybox 64 | # Set up the bridge CNI config file on the host. Pre-requisite is that a br0 bridge interface 65 | # was set up and bridged to the LAN 66 | command: 67 | - cp 68 | args: 69 | - -f 70 | - /cfg/bridge.conf 71 | - /etc/cni/net.d/10-bridge.conf 72 | volumeMounts: 73 | - name: cni 74 | mountPath: /etc/cni/net.d 75 | - name: dhcp-cfg 76 | mountPath: /cfg 77 | containers: 78 | - name: kube-dhcp 79 | # This can be any image with /bin/sh, rm, and the CGO_ENABLED=1 compiled 80 | # dhcp daemon at /dhcp. Busybox is a perfectly fine base tiny image.
81 | image: opensourcelan/dhcp-cni-plugin:latest 82 | command: 83 | - /bin/sh 84 | args: 85 | - -c 86 | # Gotta rm the sock file first; this daemon doesn't clean up its own and 87 | # it crashes if it already exists 88 | - 'rm -f /run/cni/dhcp.sock; exec /dhcp daemon' 89 | resources: 90 | requests: 91 | cpu: "10m" 92 | memory: "50Mi" 93 | limits: 94 | cpu: "100m" 95 | memory: "50Mi" 96 | securityContext: 97 | privileged: true 98 | volumeMounts: 99 | - name: run 100 | mountPath: /run 101 | volumes: 102 | - name: run 103 | hostPath: 104 | path: /run 105 | - name: cni 106 | hostPath: 107 | path: /etc/cni/net.d 108 | - name: dhcp-cfg 109 | configMap: 110 | name: kube-dhcp-cfg 111 | 112 | 113 | -------------------------------------------------------------------------------- /configs/dns-reflector.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: dns-server-redirector 5 | namespace: kube-system 6 | labels: 7 | app: dns-server-redirector 8 | spec: 9 | selector: 10 | matchLabels: 11 | app: dns-server-redirector 12 | template: 13 | metadata: 14 | labels: 15 | app: dns-server-redirector 16 | spec: 17 | containers: 18 | - name: dns-server-director 19 | image: andyshinn/dnsmasq 20 | args: 21 | # effectively disables forwarding of queries 22 | - -R 23 | # Except forward this specific domain to this server 24 | - -S 25 | - /server.lan/10.96.0.10 26 | resources: 27 | requests: 28 | cpu: "10m" 29 | memory: "50Mi" 30 | limits: 31 | cpu: "100m" 32 | memory: "50Mi" 33 | securityContext: 34 | capabilities: 35 | add: ["NET_ADMIN"] 36 | --- 37 | # This service uses metallb 38 | # install: https://metallb.universe.tf/installation/ 39 | # and then configure a `default` address pool https://metallb.universe.tf/configuration/ 40 | apiVersion: v1 41 | kind: Service 42 | metadata: 43 | annotations: 44 | metallb.universe.tf/address-pool: default 45 | name: test 46 | namespace: kube-system 
47 | spec: 48 | externalTrafficPolicy: Cluster 49 | ports: 50 | - port: 53 51 | protocol: UDP 52 | targetPort: 53 53 | selector: 54 | app: dns-server-redirector 55 | sessionAffinity: None 56 | type: LoadBalancer 57 | -------------------------------------------------------------------------------- /configs/nfs-pv.yaml: -------------------------------------------------------------------------------- 1 | kind: StorageClass 2 | apiVersion: storage.k8s.io/v1 3 | metadata: 4 | name: nfs-class 5 | provisioner: kubernetes.io/fake-nfs 6 | --- 7 | apiVersion: v1 8 | kind: PersistentVolume 9 | metadata: 10 | name: nfs 11 | spec: 12 | storageClassName: nfs-class 13 | capacity: 14 | storage: 10000Gi 15 | accessModes: 16 | - ReadWriteMany 17 | nfs: 18 | server: 10.0.0.163 19 | path: "/games/tron" 20 | --- 21 | apiVersion: v1 22 | kind: PersistentVolumeClaim 23 | metadata: 24 | name: nfs 25 | spec: 26 | accessModes: 27 | - ReadWriteMany 28 | storageClassName: nfs-class 29 | resources: 30 | requests: 31 | storage: 10Gi 32 | 33 | -------------------------------------------------------------------------------- /dashboard.md: -------------------------------------------------------------------------------- 1 | From https://docs.giantswarm.io/guides/install-kubernetes-dashboard/ 2 | 3 | ``` 4 | $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml 5 | $ kubectl create serviceaccount cluster-admin-dashboard-sa 6 | $ kubectl create clusterrolebinding cluster-admin-dashboard-sa \ 7 | --clusterrole=cluster-admin \ 8 | --serviceaccount=default:cluster-admin-dashboard-sa 9 | $ kubectl get secret | grep cluster-admin-dashboard-sa | awk '{print $1}' | xargs kubectl describe secret 10 | 11 | ``` 12 | 13 | Then on your local PC, with a correctly configured `.kube/config` file such that you can use other `kubectl` commands... 
14 | 15 | ``` 16 | $ kubectl proxy 17 | ``` 18 | 19 | Which will open a proxy listening on localhost:8001 .. and you can open the dashboard at .. 20 | 21 | http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ 22 | 23 | Select token authentication, and provide the token from the cluster-admin-dashboard-sa serviceaccount you created earlier 24 | 25 | -------------------------------------------------------------------------------- /glusterfs-notes.txt: -------------------------------------------------------------------------------- 1 | 2 | 3 | bugs in script: 4 | * search for extensions/v1alpha1 in kube-templates/* , replace with apps/v1, and also add a `selector` block 5 | * gk-deploy uses `--show-all`, which is removed from kubectl and is now default, so can be removed 6 | * the dynamic storage provisioner outputted at the end of the script has pod IP, no service IP 7 | ... and also it should use the admin user, not the user user 8 | ... and it uses plaintext user/pass for secrets, but should isntead use a secret reference 9 | 10 | set aside an empty block device on each of the storage hosts 11 | 12 | ``` 13 | cat < /etc/modules-load.d/modules.conf 14 | dm_snapshot 15 | dm_mirror 16 | dm_thin_pool 17 | EOF 18 | modprobe dm_snapshot 19 | modprobe dm_mirror 20 | modprobe dm_thin_pool 21 | ``` 22 | 23 | either reboot or modprobe those 24 | 25 | ``` 26 | sudo apt-get install -y glusterfs-client #(to make mount.glusterfs available) 27 | ``` 28 | 29 | Make sure the target drives are empty: 30 | 31 | ``` 32 | # delete all partitions 33 | gdisk /dev/sdb 34 | d 35 | w 36 | # wipe the first 100MB of the drive to get rid of MFT etc 37 | dd if=/dev/zero of=/dev/sdb bs=1M count=100 38 | ``` 39 | 40 | set up the topology.json file 41 | 42 | ./gk-deploy -g -t kube-templates/ --admin-key --user-key topology.json 43 | 44 | 45 | 46 | This is the storage class you need to apply - the one that the script gives you is wrong. 
Substitute the resturl IP for the service IP, not the pod IP 48 | Put the restuser admin in there 49 | 50 | apiVersion: storage.k8s.io/v1 51 | kind: StorageClass 52 | metadata: 53 | name: glusterfs-storage 54 | provisioner: kubernetes.io/glusterfs 55 | parameters: 56 | resturl: "http://10.104.18.77:8080" 57 | restuser: "admin" 58 | restuserkey: "" 59 | allowVolumeExpansion: true 60 | reclaimPolicy: Retain 61 | 62 | 63 | 64 | apiVersion: v1 65 | kind: PersistentVolumeClaim 66 | metadata: 67 | name: test 68 | spec: 69 | accessModes: 70 | - ReadWriteOnce 71 | volumeMode: Filesystem 72 | resources: 73 | requests: 74 | storage: 8Gi 75 | storageClassName: glusterfs-storage 76 | 77 | 78 | 79 | 80 | Replacing a broken host: 81 | 82 | bring up the host. 83 | 84 | Add the label `storagenode=glusterfs` to the node 85 | 86 | That will create a new glusterfs pod on the host 87 | 88 | Once that pod is ready, run `heketi-cli --user admin --secret <admin-key> node add --zone=1 --cluster=45b45db0764e976a5a5e3c9b7530a884 --management-host-name=kubeworker4 --storage-host-name=10.0.0.234` 89 | 90 | then update your topology.json file, and run `heketi-cli --user admin --secret <admin-key> topology load --json=topo.json` to have it init the device 91 | 92 | `heketi-cli --user admin --secret <admin-key> topology info` - get the Device ID of the disk on the host you want to remove 93 | 94 | `heketi-cli --user admin --secret <admin-key> device disable <device-id>` 95 | `heketi-cli --user admin --secret <admin-key> device remove <device-id>` - this step _should_ migrate all bricks to another host 96 | -------------------------------------------------------------------------------- /helm/README.md: -------------------------------------------------------------------------------- 1 | # Helm! 2 | 3 | ## Installation 4 | 5 | ### Certificates 6 | To set up a secure Helm installation, first create a Certificate Authority (CA) and generate some certificates!
8 | 9 | ``` 10 | # Generate CA private key 11 | openssl genrsa -out ./ca.key.pem 4096 12 | # Generate public cert - fine to accept all default answers to questions 13 | openssl req -key ca.key.pem -new -x509 -days 7300 -sha256 -out ca.cert.pem -extensions v3_ca 14 | # Private key for server 15 | openssl genrsa -out ./tiller.key.pem 4096 16 | # Private key for client 17 | openssl genrsa -out ./helm.key.pem 4096 18 | # Cert signing request for server and client - defaults are okay again 19 | openssl req -key tiller.key.pem -new -sha256 -out tiller.csr.pem 20 | openssl req -key helm.key.pem -new -sha256 -out helm.csr.pem 21 | # And then sign the requests 22 | openssl x509 -req -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -in tiller.csr.pem -out tiller.cert.pem -days 365 23 | openssl x509 -req -CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial -in helm.csr.pem -out helm.cert.pem -days 365 24 | ``` 25 | 26 | Whew. All the certs are generated. The `*.key.pem` files are your secrets - don't 27 | make those public! 28 | 29 | Now install Tiller to your cluster. 30 | 31 | ### Tiller (server) 32 | 33 | ``` 34 | kubectl create serviceaccount --namespace kube-system tiller 35 | kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller 36 | helm init --service-account tiller --tiller-tls --tiller-tls-cert ./tiller.cert.pem --tiller-tls-key ./tiller.key.pem --tiller-tls-verify --tls-ca-cert ca.cert.pem 37 | ``` 38 | 39 | Verify that it works with the following command. It may take a minute for Tiller to 40 | download its image and start, so be patient if it returns an error about not finding 41 | a ready tiller pod. 
42 | 43 | ``` 44 | helm ls --tls --tls-ca-cert ca.cert.pem --tls-cert helm.cert.pem --tls-key helm.key.pem 45 | ``` 46 | 47 | ### Helm (client) 48 | 49 | Copy your client certificates in to your `~/.helm` directory: 50 | 51 | ``` 52 | cp ca.cert.pem $(helm home)/ca.pem 53 | cp helm.cert.pem $(helm home)/cert.pem 54 | cp helm.key.pem $(helm home)/key.pem 55 | ``` 56 | 57 | Helm is meant to use those automatically since that's its default Helm home directory. 58 | But my version doesn't. So set the env var and test it works: 59 | 60 | ``` 61 | export HELM_HOME=$(helm home) 62 | export HELM_TLS_ENABLE=true 63 | helm ls 64 | ``` 65 | 66 | Success. Maybe put those in your `.bash_profile`? Now the client is set up! 67 | 68 | ## Run a server 69 | 70 | Set up a Trackmania server with a provided name. Starting from this directory: 71 | 72 | ``` 73 | helm install --name trackmania-demo --set server.name="Some Trackmania Server" ./trackmania-forever 74 | ``` 75 | 76 | This creates a deployment called `trackmania-demo` and passes a value to the 77 | `server.name` property. 78 | 79 | To see the status of your deployments: 80 | 81 | ``` 82 | helm ls 83 | ``` 84 | 85 | Done! 86 | -------------------------------------------------------------------------------- /helm/armagetron/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/armagetron/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: armagetron 2 | version: 0.0.1 3 | appVersion: 2.8.3.4 4 | description: Armagetron AD dedicated server 5 | keywords: 6 | - game 7 | - server 8 | - armagetron 9 | - tron 10 | home: https://www.opensourcelan.com 11 | sources: 12 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/armagetron 13 | maintainers: 14 | - name: Chris Holman 15 | email: chris@opensourcelan.com 16 | -------------------------------------------------------------------------------- /helm/armagetron/README.md: -------------------------------------------------------------------------------- 1 | # Armagetron 2 | 3 | ``` 4 | //TODO 5 | ``` -------------------------------------------------------------------------------- /helm/armagetron/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | Armagetron! 3 | 4 | Starts a tron server with FFA mode on port 34197 5 | 6 | Find it in your LAN browser with name {{ .Values.armagetron.name }} 7 | 8 | Note: currently /tron/server/var isn't externally mounted. This stores your server state for scores, logs, etc. -------------------------------------------------------------------------------- /helm/armagetron/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 
3 | */}} 4 | {{- define "armagetron.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "armagetron.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/armagetron/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "armagetron.fullname" . }} 5 | labels: 6 | app: {{ template "armagetron.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "armagetron.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "armagetron.fullname" . }} 18 | spec: 19 | imagePullSecrets: 20 | - name: regcred 21 | containers: 22 | - name: armagetron 23 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 24 | imagePullPolicy: Always 25 | resources: 26 | {{ toYaml .Values.resources | indent 10 }} 27 | env: 28 | - name: SERVER_NAME 29 | value: {{ .Values.armagetron.name | quote }} 30 | {{- if .Values.armagetron.rcon_password }} 31 | - name: RCON_PASSWORD 32 | value: {{ .Values.armagetron.rcon_password | quote }} 33 | {{- end }} 34 | 35 | ports: 36 | - name: armagetron 37 | containerPort: 4534 38 | protocol: UDP 39 | --- 40 | apiVersion: v1 41 | kind: Service 42 | metadata: 43 | name: tron 44 | spec: 45 | selector: 46 | app: {{ template "armagetron.fullname" . 
}} 47 | clusterIP: None 48 | ports: 49 | - name: foo # These ports don't matter if you're not using SRV records, but I think you need one, otherwise no A records are returned 50 | port: 1234 51 | targetPort: 1234 52 | -------------------------------------------------------------------------------- /helm/armagetron/values.yaml: -------------------------------------------------------------------------------- 1 | image: armagetron 2 | imageTag: latest 3 | imagePrefix: docker.pax.lan/ 4 | imagePullSecret: regcred 5 | 6 | resources: 7 | requests: 8 | memory: 50Mi 9 | cpu: 100m 10 | 11 | armagetron: 12 | name: Kickass Tron Server 13 | port: 34197 14 | rcon_password: null 15 | -------------------------------------------------------------------------------- /helm/csgo-comp/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line.
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/csgo-comp/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: csgo 2 | version: 0.0.1 3 | appVersion: 0.0.1 4 | description: CSGO Vanilla 5 | keywords: 6 | - game 7 | - server 8 | - csgo 9 | - counterstrike 10 | home: https://www.opensourcelan.com 11 | sources: 12 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/csgo 13 | maintainers: 14 | - name: Chris Holman 15 | email: chris@opensourcelan.com 16 | -------------------------------------------------------------------------------- /helm/csgo-comp/README.md: -------------------------------------------------------------------------------- 1 | # CS:GO 2 | 3 | ``` 4 | //TODO 5 | ``` -------------------------------------------------------------------------------- /helm/csgo-comp/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | CS GO Competition Server. 3 | 4 | Server hostname is: 5 | {{ .Values.server.name | quote }} 6 | 7 | Get the RCON password by running: 8 | printf $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "server.fullname" . }} -o jsonpath="{.data.rcon_password}" | base64 --decode);echo 9 | -------------------------------------------------------------------------------- /helm/csgo-comp/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart.
3 | */}} 4 | {{- define "server.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "server.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/csgo-comp/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "server.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "server.fullname" . }} 18 | spec: 19 | containers: 20 | - name: server 21 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 22 | imagePullPolicy: Always 23 | tty: true 24 | stdin: true 25 | resources: 26 | {{ toYaml .Values.resources | indent 10 }} 27 | env: 28 | - name: LAN 29 | value: {{ .Values.server.lan | quote }} 30 | - name: SV_HOSTNAME 31 | value: {{ .Values.server.name | quote }} 32 | - name: MAP 33 | value: {{ .Values.server.default_map | quote }} 34 | - name: RCON_PASSWORD 35 | valueFrom: 36 | secretKeyRef: 37 | name: {{ template "server.fullname" . }} 38 | key: rcon_password 39 | - name: GAME_TYPE 40 | value: {{ .Values.server.game_type | quote }} 41 | - name: GAME_MODE 42 | value: {{ .Values.server.game_mode | quote }} 43 | 44 | ports: 45 | - name: server 46 | containerPort: 27015 47 | protocol: UDP 48 | - name: rcon 49 | containerPort: 27015 50 | protocol: TCP 51 | -------------------------------------------------------------------------------- /helm/csgo-comp/templates/secret.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | type: Opaque 11 | data: 12 | {{ if .Values.server.rconPassword }} 13 | rcon_password: {{ .Values.server.rconPassword | b64enc | quote }} 14 | {{ else }} 15 | rcon_password: {{ randAlphaNum 10 | b64enc | quote }} 16 | {{ end }} -------------------------------------------------------------------------------- /helm/csgo-comp/values.yaml: -------------------------------------------------------------------------------- 1 | image: csgo-comp 2 | imageTag: "latest" 3 | imagePrefix: minipete:5000/ 4 | 5 | resources: 6 | requests: 7 | memory: 500Mi 8 | cpu: 500m 9 | 10 | server: 11 | name: CSGO Comp Server 12 | description: CSGO Comp Server of Success 13 | rcon_password: null 14 | default_map: de_overpass 15 | lan: 1 16 | game_type: 0 17 | game_mode: 1 18 | # If left blank a password will be generated for you 19 | rconPassword: -------------------------------------------------------------------------------- /helm/csgo/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !).
Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/csgo/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: csgo 2 | version: 0.0.1 3 | appVersion: 0.0.1 4 | description: CSGO Vanilla 5 | keywords: 6 | - game 7 | - server 8 | - csgo 9 | - counterstrike 10 | home: https://www.opensourcelan.com 11 | sources: 12 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/csgo 13 | maintainers: 14 | - name: Chris Holman 15 | email: chris@opensourcelan.com 16 | -------------------------------------------------------------------------------- /helm/csgo/README.md: -------------------------------------------------------------------------------- 1 | # CS:GO 2 | 3 | ``` 4 | //TODO 5 | ``` -------------------------------------------------------------------------------- /helm/csgo/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | CS GO Vanilla Server. 3 | 4 | Server hostname is: 5 | {{ .Values.server.name | quote }} 6 | 7 | Get the RCON password by running: 8 | printf $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "server.fullname" . }} -o jsonpath="{.data.rcon_password}" | base64 --decode);echo 9 | -------------------------------------------------------------------------------- /helm/csgo/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 
3 | */}} 4 | {{- define "server.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "server.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/csgo/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "server.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "server.fullname" . }} 18 | spec: 19 | containers: 20 | - name: server 21 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 22 | imagePullPolicy: Always 23 | tty: true 24 | stdin: true 25 | resources: 26 | {{ toYaml .Values.resources | indent 10 }} 27 | env: 28 | - name: LAN 29 | value: {{ .Values.server.lan | quote }} 30 | - name: SV_HOSTNAME 31 | value: {{ .Values.server.name | quote }} 32 | - name: MAP 33 | value: {{ .Values.server.default_map | quote }} 34 | - name: RCON_PASSWORD 35 | valueFrom: 36 | secretKeyRef: 37 | name: {{ template "server.fullname" . }} 38 | key: rcon_password 39 | 40 | 41 | ports: 42 | - name: server 43 | containerPort: 27015 44 | protocol: UDP 45 | - name: rcon 46 | containerPort: 27015 47 | protocol: TCP 48 | -------------------------------------------------------------------------------- /helm/csgo/templates/secret.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | type: Opaque 11 | data: 12 | {{ if .Values.server.rconPassword }} 13 | rcon_password: {{ .Values.server.rconPassword | b64enc | quote }} 14 | {{ else }} 15 | rcon_password: {{ randAlphaNum 10 | b64enc | quote }} 16 | {{ end }} -------------------------------------------------------------------------------- /helm/csgo/values.yaml: -------------------------------------------------------------------------------- 1 | image: csgo 2 | imageTag: "latest" 3 | imagePrefix: minipete:5000/ 4 | 5 | resources: 6 | requests: 7 | memory: 500Mi 8 | cpu: 500m 9 | 10 | server: 11 | name: Vanilla CSGO Server 12 | description: Vanilla CSGO Server 13 | rcon_password: null 14 | default_map: de_overpass 15 | lan: 1 16 | # If left blank a password will be generated for you 17 | rconPassword: 18 | 19 | -------------------------------------------------------------------------------- /helm/hl2dm/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line.
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/hl2dm/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: hl2dm 2 | version: 0.0.1 3 | appVersion: 0.0.1 4 | description: Half Life 2 Death Match 5 | keywords: 6 | - game 7 | - server 8 | - hl2dm 9 | home: https://www.opensourcelan.com 10 | sources: 11 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/hl2dm 12 | maintainers: 13 | - name: Chris Holman 14 | email: chris@opensourcelan.com 15 | -------------------------------------------------------------------------------- /helm/hl2dm/README.md: -------------------------------------------------------------------------------- 1 | # Half Life 2 Death Match 2 | 3 | ``` 4 | //TODO 5 | ``` -------------------------------------------------------------------------------- /helm/hl2dm/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | Half Life 2 Death Match Server. 3 | 4 | Server hostname is: 5 | {{ .Values.server.name | quote }} 6 | 7 | Get the RCON password by running: 8 | printf $(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "server.fullname" . }} -o jsonpath="{.data.rcon_password}" | base64 --decode);echo 9 | -------------------------------------------------------------------------------- /helm/hl2dm/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "server.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name.
10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "server.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/hl2dm/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "server.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "server.fullname" . }} 18 | spec: 19 | imagePullSecrets: 20 | - name: regcred 21 | containers: 22 | - name: server 23 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 24 | imagePullPolicy: Always 25 | tty: true 26 | stdin: true 27 | resources: 28 | {{ toYaml .Values.resources | indent 10 }} 29 | env: 30 | - name: MAXPLAYERS 31 | value: {{ .Values.server.maxplayers | quote }} 32 | - name: SV_HOSTNAME 33 | value: {{ .Values.server.name | quote }} 34 | - name: RCON_PASSWORD 35 | valueFrom: 36 | secretKeyRef: 37 | name: {{ template "server.fullname" . 
}} 38 | key: rcon_password 39 | - name: MAP 40 | value: {{ .Values.server.map | quote }} 41 | - name: MAPCYCLE 42 | value: | 43 | {{ join "\n" .Values.server.mapcycle | indent 12 }} 44 | - name: WINLIMIT 45 | value: {{ .Values.server.winlimit | quote }} 46 | {{- if .Values.server.sv_password }} 47 | - name: SV_PASSWORD 48 | value: {{.Values.server.sv_password }} 49 | {{- end }} 50 | ports: 51 | - name: server 52 | containerPort: 27015 53 | protocol: UDP 54 | - name: rcon 55 | containerPort: 27015 56 | protocol: TCP 57 | 58 | -------------------------------------------------------------------------------- /helm/hl2dm/templates/secret.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | type: Opaque 11 | data: 12 | {{ if .Values.server.rcon_password }} 13 | rcon_password: {{ .Values.server.rcon_password | b64enc | quote }} 14 | {{ else }} 15 | rcon_password: {{ randAlphaNum 10 | b64enc | quote }} 16 | {{ end }} -------------------------------------------------------------------------------- /helm/hl2dm/values.yaml: -------------------------------------------------------------------------------- 1 | image: hl2dm 2 | imageTag: 20221003-065414 3 | imagePrefix: docker.pax.lan/ 4 | 5 | resources: 6 | requests: 7 | memory: 500Mi 8 | cpu: 500m 9 | 10 | server: 11 | name: HL2DM Server 12 | description: HL2DM Server 13 | maxplayers: 16 14 | mapcycle: 15 | - dm_lockdown 16 | - dm_overwatch 17 | - dm_powerhouse 18 | - dm_resistance 19 | - dm_runoff 20 | - dm_steamlab 21 | - dm_underpass 22 | winlimit: 100 23 | map: dm_resistance 24 | # If left blank, no password is set 25 | sv_password: 26 | # If left blank a password will be generated for you 27 | rcon_password: 28 | 
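The hl2dm values above can be overridden per release at install time instead of being edited in place. A minimal sketch, assuming Helm 3 syntax; the release name `hl2dm-lan` and the override values shown are illustrative, not defaults from this chart:

```shell
# Install the hl2dm chart from the repo root, overriding a few values.yaml keys.
# Release name and values below are examples only.
helm install hl2dm-lan ./helm/hl2dm \
  --set server.name="LAN HL2DM" \
  --set server.maxplayers=24 \
  --set server.rcon_password=changeme
# Leave server.rcon_password unset and the secret template
# generates a random one (see templates/secret.yml).
```

After install, the chart's NOTES.txt prints the kubectl command for retrieving the RCON password from the generated secret.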
29 | -------------------------------------------------------------------------------- /helm/lancache/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/lancache/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: lancache 2 | version: 0.0.1 3 | appVersion: 0.0.1 4 | description: lancache 5 | keywords: 6 | -------------------------------------------------------------------------------- /helm/lancache/README.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/OpenSourceLAN/kubernetes-lanparty/8717d0805fc19063a62cf7756d410a22f63b91e2/helm/lancache/README.md -------------------------------------------------------------------------------- /helm/lancache/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "server.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
11 | */}} 12 | {{- define "server.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/lancache/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "server.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "server.fullname" . }} 18 | spec: 19 | {{- if .Values.server.tolerations }} 20 | tolerations: 21 | {{ toYaml .Values.server.tolerations | indent 8 }} 22 | {{- end }} 23 | {{- if .Values.server.nodeSelector }} 24 | nodeSelector: 25 | {{ toYaml .Values.server.nodeSelector | indent 8 }} 26 | {{- end }} 27 | {{- if .Values.imagePullSecret }} 28 | imagePullSecrets: 29 | - name: {{ .Values.imagePullSecret }} 30 | {{- end }} 31 | containers: 32 | - name: cache 33 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 34 | imagePullPolicy: Always 35 | resources: 36 | {{ toYaml .Values.resources | indent 10 }} 37 | ports: 38 | - name: http 39 | containerPort: 80 40 | protocol: TCP 41 | - name: https 42 | containerPort: 443 43 | protocol: TCP 44 | volumeMounts: 45 | - name: cache 46 | mountPath: /data/cache 47 | - name: logs 48 | mountPath: /data/logs 49 | volumes: 50 | - name: cache 51 | persistentVolumeClaim: 52 | claimName: {{ template "server.fullname" . }}-cache 53 | - name: logs 54 | persistentVolumeClaim: 55 | claimName: {{ template "server.fullname" . 
}}-logs 56 | --- 57 | apiVersion: v1 58 | kind: Service 59 | metadata: 60 | name: "{{ .Release.Name }}" 61 | labels: 62 | app: {{ template "server.fullname" . }} 63 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 64 | release: "{{ .Release.Name }}" 65 | heritage: "{{ .Release.Service }}" 66 | spec: 67 | ports: 68 | - port: 80 69 | name: web 70 | - port: 443 71 | name: sniproxy 72 | clusterIP: None 73 | selector: 74 | app: {{ template "server.fullname" . }} 75 | --- 76 | apiVersion: v1 77 | kind: Service 78 | metadata: 79 | name: lancachedirect 80 | annotations: 81 | metallb.universe.tf/address-pool: default 82 | spec: 83 | allocateLoadBalancerNodePorts: false 84 | externalTrafficPolicy: Local 85 | internalTrafficPolicy: Local 86 | ports: 87 | - port: 80 88 | targetPort: 80 89 | name: http 90 | - port: 443 91 | targetPort: 443 92 | name: https 93 | selector: 94 | app: {{ template "server.fullname" . }} 95 | type: LoadBalancer -------------------------------------------------------------------------------- /helm/lancache/templates/dns.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "server.fullname" . }}-dns 5 | labels: 6 | app: {{ template "server.fullname" . }}-dns 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "server.fullname" . }}-dns 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "server.fullname" . 
}}-dns 18 | spec: 19 | {{- if .Values.server.tolerations }} 20 | tolerations: 21 | {{ toYaml .Values.server.tolerations | indent 8 }} 22 | {{- end }} 23 | {{- if .Values.server.nodeSelector }} 24 | nodeSelector: 25 | {{ toYaml .Values.server.nodeSelector | indent 8 }} 26 | {{- end }} 27 | {{- if .Values.imagePullSecret }} 28 | imagePullSecrets: 29 | - name: {{ .Values.imagePullSecret }} 30 | {{- end }} 31 | containers: 32 | - name: dns 33 | image: lancachenet/lancache-dns:latest 34 | resources: 35 | {{ toYaml .Values.resources | indent 10 }} 36 | ports: 37 | - name: dns 38 | containerPort: 53 39 | protocol: UDP 40 | env: 41 | - name: USE_GENERIC_CACHE 42 | value: "true" 43 | - name: LANCACHE_IP 44 | value: 10.10.99.1 45 | 46 | 47 | 48 | --- 49 | apiVersion: v1 50 | kind: Service 51 | metadata: 52 | name: lancachedns 53 | annotations: 54 | metallb.universe.tf/address-pool: default 55 | spec: 56 | allocateLoadBalancerNodePorts: false 57 | externalTrafficPolicy: Local 58 | internalTrafficPolicy: Local 59 | ports: 60 | - port: 53 61 | targetPort: 53 62 | name: dns 63 | protocol: UDP 64 | selector: 65 | app: {{ template "server.fullname" . }}-dns 66 | type: LoadBalancer -------------------------------------------------------------------------------- /helm/lancache/templates/storage.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: {{ template "server.fullname" . }}-cache 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | capacity: 12 | storage: 10Gi 13 | accessModes: 14 | - ReadWriteMany 15 | # TODO: is ReadWriteMany going to break the cache? Logs? 16 | persistentVolumeReclaimPolicy: Retain 17 | storageClassName: {{ template "server.fullname" . 
}} 18 | local: 19 | path: {{ .Values.server.storage.path }}/cache 20 | nodeAffinity: 21 | required: 22 | nodeSelectorTerms: 23 | - matchExpressions: 24 | - key: kubernetes.io/hostname 25 | operator: In 26 | values: 27 | - {{ .Values.server.storage.hostname | quote }} 28 | 29 | --- 30 | apiVersion: v1 31 | kind: PersistentVolumeClaim 32 | metadata: 33 | name: {{ template "server.fullname" . }}-cache 34 | labels: 35 | app: {{ template "server.fullname" . }} 36 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 37 | release: "{{ .Release.Name }}" 38 | heritage: "{{ .Release.Service }}" 39 | spec: 40 | accessModes: 41 | - ReadWriteMany 42 | resources: 43 | requests: 44 | storage: 10Gi 45 | storageClassName: {{ template "server.fullname" . }} 46 | volumeName: {{ template "server.fullname" . }}-cache 47 | --- 48 | apiVersion: v1 49 | kind: PersistentVolume 50 | metadata: 51 | name: {{ template "server.fullname" . }}-logs 52 | labels: 53 | app: {{ template "server.fullname" . }} 54 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 55 | release: "{{ .Release.Name }}" 56 | heritage: "{{ .Release.Service }}" 57 | spec: 58 | capacity: 59 | storage: 10Gi 60 | accessModes: 61 | - ReadWriteMany 62 | # TODO: is ReadWriteMany going to break the cache? Logs? 63 | persistentVolumeReclaimPolicy: Retain 64 | storageClassName: {{ template "server.fullname" . }} 65 | local: 66 | path: {{ .Values.server.storage.path }}/logs 67 | nodeAffinity: 68 | required: 69 | nodeSelectorTerms: 70 | - matchExpressions: 71 | - key: kubernetes.io/hostname 72 | operator: In 73 | values: 74 | - {{ .Values.server.storage.hostname | quote }} 75 | 76 | --- 77 | apiVersion: v1 78 | kind: PersistentVolumeClaim 79 | metadata: 80 | name: {{ template "server.fullname" . }}-logs 81 | labels: 82 | app: {{ template "server.fullname" . 
}} 83 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 84 | release: "{{ .Release.Name }}" 85 | heritage: "{{ .Release.Service }}" 86 | spec: 87 | accessModes: 88 | - ReadWriteMany 89 | resources: 90 | requests: 91 | storage: 10Gi 92 | storageClassName: {{ template "server.fullname" . }} 93 | volumeName: {{ template "server.fullname" . }}-logs 94 | --- 95 | kind: StorageClass 96 | apiVersion: storage.k8s.io/v1 97 | metadata: 98 | name: {{ template "server.fullname" . }} 99 | labels: 100 | app: {{ template "server.fullname" . }} 101 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 102 | release: "{{ .Release.Name }}" 103 | heritage: "{{ .Release.Service }}" 104 | provisioner: kubernetes.io/no-provisioner 105 | volumeBindingMode: WaitForFirstConsumer -------------------------------------------------------------------------------- /helm/lancache/values.yaml: -------------------------------------------------------------------------------- 1 | image: lancachenet/monolithic 2 | imageTag: latest 3 | 4 | resources: 5 | requests: 6 | memory: 4000Mi 7 | cpu: 1 8 | 9 | server: 10 | storage: 11 | # node name that the storage is on 12 | hostname: lancache 13 | # directory on node that you want to store the logs/cache data on.
14 | # directory should always exist 15 | path: /data/lancache 16 | tolerations: 17 | - key: "dedicated" 18 | operator: "Equal" 19 | value: "cache" 20 | effect: "NoSchedule" 21 | nodeSelector: 22 | # kubectl label node lancache dedicated=lancache 23 | dedicated: "lancache" 24 | 25 | -------------------------------------------------------------------------------- /helm/minecraft/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: minecraft 2 | version: 0.0.1 3 | appVersion: 2011.02.21 4 | description: Minecraft 5 | keywords: 6 | - game 7 | - server 8 | - minecraft 9 | - bukkit 10 | home: https://www.opensourcelan.com 11 | sources: 12 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master 13 | maintainers: 14 | - name: Chris Holman 15 | email: chris@opensourcelan.com 16 | -------------------------------------------------------------------------------- /helm/minecraft/mc.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Source: minecraft/templates/01storage.yaml 3 | kind: StorageClass 4 | apiVersion: storage.k8s.io/v1 5 | metadata: 6 | name: minecraft1-minecraft 7 | labels: 8 | app: minecraft1-minecraft 9 | chart: "minecraft-0.0.1" 10 | release: "minecraft1" 11 | heritage: "Tiller" 12 | provisioner: kubernetes.io/no-provisioner 13 | volumeBindingMode: WaitForFirstConsumer 14 | --- 15 | apiVersion: v1 16 | kind: PersistentVolume 17 | metadata: 18 | name: minecraft1-minecraft-worlddata 19 | labels: 20 | app: minecraft 21 | chart: minecraft-0.0.1 22 | release: minecraft1 23 | heritage: Tiller 24 | spec: 25 | accessModes: 26 | - ReadWriteOnce 27 | capacity: 28 | storage: 10Gi 29 | local: 30 | path: /data/mcworlddata1 31 | nodeAffinity: 32 | required: 33 | nodeSelectorTerms: 34 | - matchExpressions: 35 | - key: kubernetes.io/hostname 36 | operator: In 37 | values: 38 | - "kubeworker8" 39 | persistentVolumeReclaimPolicy: Retain 40 | storageClassName: 
minecraft1-minecraft 41 | 42 | 43 | --- 44 | # Source: minecraft/templates/02deployment.yaml 45 | 46 | --- 47 | kind: PersistentVolumeClaim 48 | apiVersion: v1 49 | metadata: 50 | name: minecraft1-minecraft-worlddata 51 | labels: 52 | app: minecraft1-minecraft 53 | chart: "minecraft-0.0.1" 54 | release: "minecraft1" 55 | heritage: "Tiller" 56 | spec: 57 | accessModes: 58 | - ReadWriteOnce 59 | resources: 60 | requests: 61 | storage: 10Gi 62 | storageClassName: minecraft1-minecraft 63 | volumeName: minecraft1-minecraft-worlddata 64 | --- 65 | apiVersion: extensions/v1beta1 66 | kind: Deployment 67 | metadata: 68 | name: minecraft1-minecraft 69 | labels: 70 | app: minecraft1-minecraft 71 | chart: "minecraft-0.0.1" 72 | release: "minecraft1" 73 | heritage: "Tiller" 74 | spec: 75 | template: 76 | metadata: 77 | labels: 78 | app: minecraft1-minecraft 79 | spec: 80 | imagePullSecrets: 81 | - name: dockerreglanadminsnet 82 | containers: 83 | - name: server 84 | image: "dockerreg.lanadmins.net/bukkit:20181026-120432" 85 | imagePullPolicy: Always 86 | tty: true 87 | stdin: true 88 | resources: 89 | requests: 90 | cpu: 750m 91 | memory: 1000Mi 92 | 93 | env: 94 | - name: HOSTNAME 95 | value: "PAX Server 1" 96 | ports: 97 | - name: server 98 | containerPort: 25565 99 | protocol: UDP 100 | - name: server-udp 101 | containerPort: 25565 102 | protocol: TCP 103 | volumeMounts: 104 | - name: minecraft1-minecraft-worlddata 105 | mountPath: /bukkit/plotworld/ 106 | volumes: 107 | - name: minecraft1-minecraft-worlddata 108 | persistentVolumeClaim: 109 | claimName: minecraft1-minecraft-worlddata 110 | 111 | 112 | -------------------------------------------------------------------------------- /helm/minecraft/templates/02deployment.yaml: -------------------------------------------------------------------------------- 1 | 2 | --- 3 | kind: PersistentVolumeClaim 4 | apiVersion: v1 5 | metadata: 6 | name: {{ template "server.fullname" . 
}}-worlddata 7 | labels: 8 | app: {{ template "server.fullname" . }} 9 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 10 | release: "{{ .Release.Name }}" 11 | heritage: "{{ .Release.Service }}" 12 | spec: 13 | accessModes: 14 | - ReadWriteOnce 15 | resources: 16 | requests: 17 | storage: 2Gi 18 | storageClassName: glusterfs-storage 19 | # volumeName: {{ template "server.fullname" . }}-worlddata 20 | --- 21 | apiVersion: apps/v1 22 | kind: Deployment 23 | metadata: 24 | name: {{ template "server.fullname" . }} 25 | labels: 26 | app: {{ template "server.fullname" . }} 27 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 28 | release: "{{ .Release.Name }}" 29 | heritage: "{{ .Release.Service }}" 30 | spec: 31 | selector: 32 | matchLabels: 33 | app: {{ template "server.fullname" . }} 34 | template: 35 | metadata: 36 | labels: 37 | app: {{ template "server.fullname" . }} 38 | spec: 39 | imagePullSecrets: 40 | - name: dockerreglanadminsnet 41 | containers: 42 | - name: server 43 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 44 | imagePullPolicy: Always 45 | tty: true 46 | stdin: true 47 | resources: 48 | {{ toYaml .Values.resources | indent 10 }} 49 | env: 50 | - name: HOSTNAME 51 | value: {{ .Values.server.name | quote }} 52 | ports: 53 | - name: server 54 | containerPort: 25565 55 | protocol: UDP 56 | - name: server-tcp 57 | containerPort: 25565 58 | protocol: TCP 59 | {{ if .Values.server.enableStorage }} 60 | volumeMounts: 61 | - name: {{ template "server.fullname" . }}-worlddata 62 | mountPath: /bukkit/plotworld/ 63 | volumes: 64 | - name: {{ template "server.fullname" . }}-worlddata 65 | persistentVolumeClaim: 66 | claimName: {{ template "server.fullname" . }}-worlddata 67 | {{ end }} 68 | -------------------------------------------------------------------------------- /helm/minecraft/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart.
3 | */}} 4 | {{- define "server.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "server.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/minecraft/values.yaml: -------------------------------------------------------------------------------- 1 | image: bukkit 2 | imageTag: "latest" 3 | imagePrefix: dockerreg.lanadmins.net/ 4 | 5 | resources: 6 | requests: 7 | memory: 50Mi 8 | cpu: 100m 9 | 10 | server: 11 | enableStorage: true 12 | name: Kickass Bukkit Server 13 | rconPassword: null 14 | -------------------------------------------------------------------------------- /helm/mordhau/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: mordhau 2 | version: 0.0.1 3 | appVersion: 3525360 4 | description: Mordhau Dedicated Server 5 | keywords: 6 | - game 7 | - server 8 | - mordhau 9 | home: https://www.opensourcelan.com 10 | sources: 11 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/mordhau 12 | maintainers: 13 | - name: Chris Holman 14 | email: chris@opensourcelan.com 15 | -------------------------------------------------------------------------------- /helm/mordhau/README.md: -------------------------------------------------------------------------------- 1 | # Mordhau 2 | 3 | See [opensourcelan/gameservers-docker](https://github.com/OpenSourceLAN/gameservers-docker/tree/master/mordhau) for more info 4 | -------------------------------------------------------------------------------- /helm/mordhau/templates/NOTES.txt: 
-------------------------------------------------------------------------------- 1 | 2 | Mordhau! 3 | 4 | You won't find this game in the LAN browser because apparently only 5 | listen servers will show up in the LAN browser -------------------------------------------------------------------------------- /helm/mordhau/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "mordhau.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "mordhau.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/mordhau/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "mordhau.fullname" . }} 5 | labels: 6 | app: {{ template "mordhau.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "mordhau.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "mordhau.fullname" .
}} 18 | spec: 19 | imagePullSecrets: 20 | - name: dockerreg 21 | containers: 22 | - name: mordhau 23 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 24 | imagePullPolicy: Always 25 | resources: 26 | {{ toYaml .Values.resources | indent 10 }} 27 | env: 28 | - name: SV_HOSTNAME 29 | value: {{ .Values.server.name | quote }} 30 | {{- if .Values.server.rcon_password }} 31 | - name: RCON_PASSWORD 32 | value: {{ .Values.server.rcon_password | quote }} 33 | {{- end }} 34 | {{- if .Values.server.game_password }} 35 | - name: SV_PASSWORD 36 | value: {{ .Values.server.game_password | quote }} 37 | {{- end }} 38 | - name: MAXPLAYERS 39 | value: {{ .Values.server.maxplayers | quote }} 40 | - name: MAP 41 | value: {{ .Values.server.map | quote }} 42 | {{- if .Values.server.gamemode }} 43 | - name: GAMEMODE 44 | value: {{ .Values.server.gamemode | quote }} 45 | {{- end }} 46 | ports: 47 | - name: mordhau 48 | containerPort: 7779 49 | protocol: UDP 50 | 51 | #--- 52 | # 53 | #apiVersion: v1 54 | #kind: Service 55 | #metadata: 56 | # name: 57 | #spec: 58 | # selector: 59 | # app: {{ template "mordhau.fullname" . }} 60 | # clusterIP: None 61 | # ports: 62 | # - name: foo # Actually, no port is needed. 
63 | # port: 1234 64 | # targetPort: 123 65 | -------------------------------------------------------------------------------- /helm/mordhau/values.yaml: -------------------------------------------------------------------------------- 1 | image: mordhau-squid 2 | imageTag: latest 3 | imagePrefix: dockerreg.lanadmins.net/ 4 | 5 | resources: 6 | requests: 7 | memory: 800Mi 8 | cpu: 1 9 | 10 | server: 11 | name: Kickass Mordhau Server 12 | rcon_password: null 13 | game_password: null 14 | maxplayers: 32 15 | map: FFA_ThePit 16 | #gamemode: SKM 17 | -------------------------------------------------------------------------------- /helm/mssql/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/mssql/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: mssql 2 | version: 0.0.1 3 | appVersion: 1.10.05 4 | description: MSSQL wrapper for our needs 5 | keywords: 6 | - mssql 7 | - sql server 8 | home: https://www.opensourcelan.com 9 | sources: 10 | maintainers: 11 | - name: Chris Holman 12 | email: chris@opensourcelan.com 13 | -------------------------------------------------------------------------------- /helm/mssql/README.md: -------------------------------------------------------------------------------- 1 | # MSSQL -------------------------------------------------------------------------------- /helm/mssql/templates/NOTES.txt: 
-------------------------------------------------------------------------------- 1 | 2 | MSSQL Server Deployed 3 | -------------------------------------------------------------------------------- /helm/mssql/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "server.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "server.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/mssql/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: StatefulSet 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | replicas: 1 12 | selector: 13 | matchLabels: 14 | app: {{ template "server.fullname" . }} 15 | template: 16 | metadata: 17 | labels: 18 | app: {{ template "server.fullname" . 
}} 19 | spec: 20 | tolerations: 21 | - key: "role" 22 | value: "storage" 23 | effect: "NoSchedule" 24 | containers: 25 | - name: server 26 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 27 | imagePullPolicy: Always 28 | resources: 29 | {{ toYaml .Values.resources | indent 10 }} 30 | ports: 31 | - name: mssql 32 | containerPort: 1433 33 | protocol: TCP 34 | env: 35 | - name: ACCEPT_EULA 36 | value: "Y" 37 | - name: MSSQL_SA_PASSWORD 38 | value: {{ .Values.server.password | quote }} 39 | volumeMounts: 40 | - name: {{ template "server.fullname" . }}-cache 41 | mountPath: /var/opt/mssql 42 | volumeClaimTemplates: 43 | - metadata: 44 | name: {{ template "server.fullname" . }}-cache 45 | labels: 46 | app: {{ template "server.fullname" . }} 47 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 48 | release: "{{ .Release.Name }}" 49 | heritage: "{{ .Release.Service }}" 50 | spec: 51 | accessModes: 52 | - ReadWriteMany 53 | resources: 54 | requests: 55 | storage: 10Gi 56 | storageClassName: {{ template "server.fullname" . }} 57 | volumeName: {{ template "server.fullname" . }}-cache 58 | --- 59 | apiVersion: v1 60 | kind: Service 61 | metadata: 62 | name: "{{ .Release.Name }}" 63 | labels: 64 | app: {{ template "server.fullname" . }} 65 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 66 | release: "{{ .Release.Name }}" 67 | heritage: "{{ .Release.Service }}" 68 | spec: 69 | ports: 70 | - port: 1433 71 | name: mssql 72 | clusterIP: None 73 | selector: 74 | app: {{ template "server.fullname" . }} 75 | -------------------------------------------------------------------------------- /helm/mssql/templates/storage.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: {{ template "server.fullname" . }}-cache 5 | labels: 6 | app: {{ template "server.fullname" .
}} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | capacity: 12 | storage: 10Gi 13 | accessModes: 14 | - ReadWriteMany 15 | # TODO: is ReadWriteMany going to break the cache? Logs? 16 | persistentVolumeReclaimPolicy: Retain 17 | storageClassName: {{ template "server.fullname" . }} 18 | local: 19 | path: {{ .Values.server.storage.path }} 20 | nodeAffinity: 21 | required: 22 | nodeSelectorTerms: 23 | - matchExpressions: 24 | - key: kubernetes.io/hostname 25 | operator: In 26 | values: 27 | - {{ .Values.server.storage.hostname | quote }} 28 | 29 | --- 30 | # apiVersion: v1 31 | # kind: PersistentVolumeClaim 32 | # metadata: 33 | # name: {{ template "server.fullname" . }}-cache 34 | # labels: 35 | # app: {{ template "server.fullname" . }} 36 | # chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 37 | # release: "{{ .Release.Name }}" 38 | # heritage: "{{ .Release.Service }}" 39 | # spec: 40 | # accessModes: 41 | # - ReadWriteMany 42 | # resources: 43 | # requests: 44 | # storage: 10Gi 45 | # storageClassName: {{ template "server.fullname" . }} 46 | # volumeName: {{ template "server.fullname" . }}-cache 47 | --- 48 | kind: StorageClass 49 | apiVersion: storage.k8s.io/v1 50 | metadata: 51 | name: {{ template "server.fullname" . }} 52 | labels: 53 | app: {{ template "server.fullname" . 
}} 54 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 55 | release: "{{ .Release.Name }}" 56 | heritage: "{{ .Release.Service }}" 57 | provisioner: kubernetes.io/no-provisioner 58 | volumeBindingMode: WaitForFirstConsumer -------------------------------------------------------------------------------- /helm/mssql/values.yaml: -------------------------------------------------------------------------------- 1 | image: mssql/server 2 | imageTag: "2017-latest" 3 | imagePrefix: "mcr.microsoft.com/" 4 | 5 | resources: 6 | requests: 7 | memory: 500Mi 8 | cpu: 1 9 | 10 | server: 11 | storage: 12 | # This path must exist on this host, or the PV won't be created 13 | hostname: kubeworker3 14 | path: /data/mssql-data 15 | password: testpasswordhere -------------------------------------------------------------------------------- /helm/origin-docker/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/origin-docker/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: origin-docker 2 | version: 0.0.1 3 | appVersion: 1.10.05 4 | description: Cache server for your LAN 5 | keywords: 6 | - origin-docker 7 | - cache 8 | - lan cache 9 | - lancache 10 | - steam 11 | - origin 12 | - blizzard 13 | home: https://www.opensourcelan.com 14 | sources: 15 | - https://github.com/OpenSourceLAN/origin-docker/ 16 | maintainers: 17 | - name: Chris Holman 18 | email: chris@opensourcelan.com 19 | -------------------------------------------------------------------------------- /helm/origin-docker/README.md: -------------------------------------------------------------------------------- 1 | # Origin-docker 2 | 3 | -------------------------------------------------------------------------------- /helm/origin-docker/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | Origin-docker! 3 | 4 | Your LAN cache is deployed. Point your clients' DNS at the cache to start serving downloads from it. 5 | -------------------------------------------------------------------------------- /helm/origin-docker/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "server.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
11 | */}} 12 | {{- define "server.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/origin-docker/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "server.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "server.fullname" . }} 18 | spec: 19 | {{- if .Values.server.tolerations }} 20 | tolerations: 21 | {{ toYaml .Values.server.tolerations | indent 8 }} 22 | {{- end }} 23 | {{- if .Values.server.nodeSelector }} 24 | nodeSelector: 25 | {{ toYaml .Values.server.nodeSelector | indent 8 }} 26 | {{- end }} 27 | {{- if .Values.imagePullSecret }} 28 | imagePullSecrets: 29 | - name: {{ .Values.imagePullSecret }} 30 | {{- end }} 31 | containers: 32 | - name: cache 33 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 34 | imagePullPolicy: Always 35 | resources: 36 | {{ toYaml .Values.resources | indent 10 }} 37 | ports: 38 | - name: http 39 | containerPort: 80 40 | protocol: TCP 41 | volumeMounts: 42 | - name: cache 43 | mountPath: /cache 44 | - name: sniproxy 45 | image: "{{ .Values.imagePrefix}}{{ .Values.sniImage }}:{{ .Values.sniTag }}" 46 | ports: 47 | - name: https 48 | containerPort: 443 49 | protocol: TCP 50 | volumes: 51 | - name: cache 52 | persistentVolumeClaim: 53 | claimName: {{ template "server.fullname" . 
}}-cache 54 | --- 55 | apiVersion: v1 56 | kind: Service 57 | metadata: 58 | name: "{{ .Release.Name }}" 59 | labels: 60 | app: {{ template "server.fullname" . }} 61 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 62 | release: "{{ .Release.Name }}" 63 | heritage: "{{ .Release.Service }}" 64 | spec: 65 | ports: 66 | - port: 80 67 | name: web 68 | - port: 443 69 | name: sniproxy 70 | clusterIP: None 71 | selector: 72 | app: {{ template "server.fullname" . }} 73 | -------------------------------------------------------------------------------- /helm/origin-docker/templates/storage.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: {{ template "server.fullname" . }}-cache 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | capacity: 12 | storage: 10Gi 13 | accessModes: 14 | - ReadWriteMany 15 | # TODO: is ReadWriteMany going to break the cache? Logs? 16 | persistentVolumeReclaimPolicy: Retain 17 | storageClassName: {{ template "server.fullname" . }} 18 | local: 19 | path: {{ .Values.server.storage.path }} 20 | nodeAffinity: 21 | required: 22 | nodeSelectorTerms: 23 | - matchExpressions: 24 | - key: kubernetes.io/hostname 25 | operator: In 26 | values: 27 | - {{ .Values.server.storage.hostname | quote }} 28 | 29 | --- 30 | apiVersion: v1 31 | kind: PersistentVolumeClaim 32 | metadata: 33 | name: {{ template "server.fullname" . }}-cache 34 | labels: 35 | app: {{ template "server.fullname" . }} 36 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 37 | release: "{{ .Release.Name }}" 38 | heritage: "{{ .Release.Service }}" 39 | spec: 40 | accessModes: 41 | - ReadWriteMany 42 | resources: 43 | requests: 44 | storage: 10Gi 45 | storageClassName: {{ template "server.fullname" . 
}} 46 | volumeName: {{ template "server.fullname" . }}-cache 47 | --- 48 | kind: StorageClass 49 | apiVersion: storage.k8s.io/v1 50 | metadata: 51 | name: {{ template "server.fullname" . }} 52 | labels: 53 | app: {{ template "server.fullname" . }} 54 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 55 | release: "{{ .Release.Name }}" 56 | heritage: "{{ .Release.Service }}" 57 | provisioner: kubernetes.io/no-provisioner 58 | volumeBindingMode: WaitForFirstConsumer -------------------------------------------------------------------------------- /helm/origin-docker/values.yaml: -------------------------------------------------------------------------------- 1 | image: origin-docker 2 | imageTag: "20181025-181206" 3 | sniImage: sniproxy 4 | sniTag: "20181023-143613" 5 | imagePrefix: "dockerreg.lanadmins.net/" 6 | imagePullSecret: dockerreg 7 | 8 | resources: 9 | requests: 10 | memory: 100Mi 11 | cpu: 1 12 | 13 | server: 14 | storage: 15 | hostname: cache 16 | path: /data/cache-data 17 | tolerations: 18 | - key: "dedicated" 19 | operator: "Equal" 20 | value: "cache" 21 | effect: "NoSchedule" 22 | nodeSelector: 23 | dedicated: "cache" 24 | 25 | -------------------------------------------------------------------------------- /helm/tf2-prophunt/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/tf2-prophunt/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: tf2-prophunt 2 | version: 0.0.1 3 | appVersion: 0.0.1 4 | description: Team Fortress 2 Prop Hunt 5 | keywords: 6 | - game 7 | - server 8 | - team fortress 2 9 | - prop hunt 10 | home: https://www.opensourcelan.com 11 | sources: 12 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/tf2-prophunt 13 | maintainers: 14 | - name: Chris Holman 15 | email: chris@opensourcelan.com 16 | -------------------------------------------------------------------------------- /helm/tf2-prophunt/README.md: -------------------------------------------------------------------------------- 1 | # TF2 Prop Hunt 2 | 3 | ``` 4 | //TODO 5 | ``` -------------------------------------------------------------------------------- /helm/tf2-prophunt/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | TF2 Prop hunt! 3 | 4 | This spins up both the Prop Hunt game server and a web server to mirror the maps. 5 | 6 | The `sv_downloadurl` is auto configured. (hooray!) 7 | 8 | At time of writing, hats aren't removed from props... so have fun with that :) 9 | -------------------------------------------------------------------------------- /helm/tf2-prophunt/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart.
3 | */}} 4 | {{- define "server.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "server.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/tf2-prophunt/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "server.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "server.fullname" . }} 18 | spec: 19 | containers: 20 | - name: server 21 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 22 | imagePullPolicy: Always 23 | tty: true 24 | stdin: true 25 | resources: 26 | {{ toYaml .Values.resources | indent 10 }} 27 | env: 28 | - name: SV_HOSTNAME 29 | value: {{ .Values.server.name | quote }} 30 | - name: MAP 31 | value: {{ .Values.server.default_map | quote }} 32 | - name: MAPCYCLEFILE 33 | value: {{ .Values.server.map_cycle_file | quote }} 34 | {{- if .Values.server.rcon_password }} 35 | - name: RCON_PASSWORD 36 | value: {{ .Values.server.rcon_password | quote }} 37 | {{- end }} 38 | - name: SV_DOWNLOADURL 39 | value: auto 40 | ports: 41 | - name: server 42 | containerPort: 27015 43 | protocol: UDP 44 | - name: rcon 45 | containerPort: 27015 46 | protocol: TCP 47 | - name: web 48 | image: "{{ .Values.imagePrefix}}{{ .Values.webImage }}:{{ .Values.webImageTag }}" 49 | imagePullPolicy: Always 50 | resources: 51 | {{ toYaml .Values.resources | indent 10 }} 52 | -------------------------------------------------------------------------------- /helm/tf2-prophunt/values.yaml: -------------------------------------------------------------------------------- 1 | image: tf2-prophunt 2 | imageTag: "20181006-104423" 3 | webImage: tf2-prophunt-web 4 | webImageTag: "20181006-102327" 5 | imagePrefix: squid:5000/ 6 | 7 | resources: 8 | requests: 9 | memory: 500Mi 10 | cpu: 500m 11 | 12 | server: 13 | name: My Cool Team Fortress Prop Hunt Server 14 | description: Team Fortress 2 Prop Hunt server 15 | rcon_password: null 16 | map_cycle_file: "ph-maplist.txt" 17 | maxplayers: 32 18 | default_map: "arena_storm_b1c" 19 | -------------------------------------------------------------------------------- /helm/trackmania-forever/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages.
2 | # This supports shell glob matching, relative path matching, and 3 | negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/trackmania-forever/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: trackmania-forever 2 | version: 0.0.1 3 | appVersion: 2011.02.21 4 | description: TMNF 5 | keywords: 6 | - game 7 | - server 8 | - trackmania forever 9 | home: https://www.opensourcelan.com 10 | sources: 11 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/trackmania-forever 12 | maintainers: 13 | - name: Chris Holman 14 | email: chris@opensourcelan.com 15 | -------------------------------------------------------------------------------- /helm/trackmania-forever/README.md: -------------------------------------------------------------------------------- 1 | # Trackmania Forever 2 | 3 | ``` 4 | //TODO 5 | ``` -------------------------------------------------------------------------------- /helm/trackmania-forever/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | Trackmania Nations Forever 3 | 4 | Starts a Trackmania Nations Forever dedicated server 5 | 6 | Find it in your LAN browser with name {{ .Values.server.name }} 7 | 8 | {{- if .Values.server.rconPassword }} 9 | Because you provided an RCON password, you can find a web interface 10 | available on port 80 on the server's IP address. Use this (with the 11 | provided SuperAdmin RCON password) to change maps, kick players etc.
12 | {{- end }} 13 | -------------------------------------------------------------------------------- /helm/trackmania-forever/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "armagetron.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "armagetron.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/trackmania-forever/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: {{ template "armagetron.fullname" . }} 6 | labels: 7 | app: {{ template "armagetron.fullname" . }} 8 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 9 | release: "{{ .Release.Name }}" 10 | heritage: "{{ .Release.Service }}" 11 | spec: 12 | replicas: 1 13 | selector: 14 | matchLabels: 15 | app: {{ template "armagetron.fullname" . }} 16 | template: 17 | metadata: 18 | labels: 19 | app: {{ template "armagetron.fullname" . 
}} 20 | spec: 21 | imagePullSecrets: 22 | - name: dockerreglanadminsnet 23 | containers: 24 | - name: tmnf 25 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 26 | tty: true 27 | stdin: true 28 | env: 29 | - name: "SERVER_NAME" 30 | value: {{ .Values.server.name | quote }} 31 | - name: TRACKLIST 32 | value: {{ .Values.server.tracklist | quote }} 33 | {{ if .Values.server.rconPassword }} 34 | - name: "RCON_PASSWORD" 35 | value: {{ .Values.server.rconPassword | quote }} 36 | {{ end }} 37 | resources: 38 | {{ toYaml .Values.resources | indent 10 }} 39 | {{ if .Values.server.rconPassword }} 40 | - name: rcon 41 | image: "{{ .Values.imagePrefix}}{{ .Values.imageRcon }}:{{ .Values.imageRconTag }}" 42 | resources: 43 | requests: 44 | cpu: 10m 45 | memory: 10Mi 46 | {{- end }} 47 | 48 | 49 | # apiVersion: v1 50 | # kind: Service 51 | # metadata: 52 | # name: trackmania-forever 53 | # spec: 54 | # selector: 55 | # app: trackmania-forever 56 | # clusterIP: None 57 | # ports: 58 | # - name: foo # Actually, no port is needed. 
59 | # port: 1234 60 | # targetPort: 1234 61 | # --- -------------------------------------------------------------------------------- /helm/trackmania-forever/values.yaml: -------------------------------------------------------------------------------- 1 | image: trackmania-forever 2 | imageTag: "latest" 3 | imageRcon: trackmania-forever-rcon 4 | imageRconTag: "latest" 5 | imagePrefix: dockerreg.lanadmins.net/ 6 | 7 | resources: 8 | requests: 9 | memory: 50Mi 10 | cpu: 100m 11 | 12 | server: 13 | name: Kickass TMNF Server 14 | tracklist: tracklist-all.cfg 15 | rconPassword: null 16 | -------------------------------------------------------------------------------- /helm/unbound-values.yaml: -------------------------------------------------------------------------------- 1 | localRecords: 2 | - name: dockerreg.lanadmins.net 3 | ip: 10.10.31.234 4 | - name: pls.patch.station.sony.com 5 | ip: 10.15.192.2 6 | - name: gs2.ww.prod.dl.playstation.net 7 | ip: 10.15.192.2 8 | - name: gs2.sonycoment.loris-e.llnwd.net 9 | ip: 10.15.192.2 10 | - name: content.steampowered.com 11 | ip: 10.15.192.2 12 | - name: content1.steampowered.com 13 | ip: 10.15.192.2 14 | - name: content2.steampowered.com 15 | ip: 10.15.192.2 16 | - name: content3.steampowered.com 17 | ip: 10.15.192.2 18 | - name: content4.steampowered.com 19 | ip: 10.15.192.2 20 | - name: content5.steampowered.com 21 | ip: 10.15.192.2 22 | - name: content6.steampowered.com 23 | ip: 10.15.192.2 24 | - name: content7.steampowered.com 25 | ip: 10.15.192.2 26 | - name: content8.steampowered.com 27 | ip: 10.15.192.2 28 | - name: steamcontent.com 29 | ip: 10.15.192.2 30 | - name: client-download.steampowered.com 31 | ip: 10.15.192.2 32 | - name: hsar.steampowered.com.edgesuite.net 33 | ip: 10.15.192.2 34 | - name: akamai.steamstatic.com 35 | ip: 10.15.192.2 36 | - name: content-origin.steampowered.com 37 | ip: 10.15.192.2 38 | - name: clientconfig.akamai.steamtransparent.com 39 | ip: 10.15.192.2 40 | - name: 
steampipe.akamaized.net 41 | ip: 10.15.192.2 42 | - name: edgecast.steamstatic.com 43 | ip: 10.15.192.2 44 | - name: steam.apac.qtlglb.com.mwcloudcdn.com 45 | ip: 10.15.192.2 46 | - name: cs.steampowered.com 47 | ip: 10.15.192.2 48 | - name: cm.steampowered.com 49 | ip: 10.15.192.2 50 | - name: edgecast.steamstatic.com 51 | ip: 10.15.192.2 52 | - name: steamcontent.com 53 | ip: 10.15.192.2 54 | - name: cdn1-sea1.valve.net 55 | ip: 10.15.192.2 56 | - name: cdn2-sea1.valve.net 57 | ip: 10.15.192.2 58 | - name: steam-content-dnld-1.apac-1-cdn.cqloud.com 59 | ip: 10.15.192.2 60 | - name: steam-content-dnld-1.eu-c1-cdn.cqloud.com 61 | ip: 10.15.192.2 62 | - name: steam.apac.qtlglb.com 63 | ip: 10.15.192.2 64 | - name: edge.steam-dns.top.comcast.net 65 | ip: 10.15.192.2 66 | - name: edge.steam-dns-2.top.comcast.net 67 | ip: 10.15.192.2 68 | - name: steam.naeu.qtlglb.com 69 | ip: 10.15.192.2 70 | - name: steampipe-kr.akamaized.net 71 | ip: 10.15.192.2 72 | - name: steam.ix.asn.au 73 | ip: 10.15.192.2 74 | - name: steam.eca.qtlglb.com 75 | ip: 10.15.192.2 76 | - name: steam.cdn.on.net 77 | ip: 10.15.192.2 78 | - name: update5.dota2.wmsj.cn 79 | ip: 10.15.192.2 80 | - name: update2.dota2.wmsj.cn 81 | ip: 10.15.192.2 82 | - name: update6.dota2.wmsj.cn 83 | ip: 10.15.192.2 84 | - name: update3.dota2.wmsj.cn 85 | ip: 10.15.192.2 86 | - name: update1.dota2.wmsj.cn 87 | ip: 10.15.192.2 88 | - name: update4.dota2.wmsj.cn 89 | ip: 10.15.192.2 90 | - name: update5.csgo.wmsj.cn 91 | ip: 10.15.192.2 92 | - name: update2.csgo.wmsj.cn 93 | ip: 10.15.192.2 94 | - name: update4.csgo.wmsj.cn 95 | ip: 10.15.192.2 96 | - name: update3.csgo.wmsj.cn 97 | ip: 10.15.192.2 98 | - name: update6.csgo.wmsj.cn 99 | ip: 10.15.192.2 100 | - name: update1.csgo.wmsj.cn 101 | ip: 10.15.192.2 102 | - name: st.dl.bscstorage.net 103 | ip: 10.15.192.2 104 | - name: cdn.mileweb.cs.steampowered.com.8686c.com 105 | ip: 10.15.192.2 106 | # xbox 107 | - name: assets1.xboxlive.com 108 | ip: 10.15.192.2 109 | - 
name: assets2.xboxlive.com 110 | ip: 10.15.192.2 111 | - name: dlassets.xboxlive.com 112 | ip: 10.15.192.2 113 | - name: xboxone.loris.llnwd.net 114 | ip: 10.15.192.2 115 | - name: xboxone.vo.llnwd.net 116 | ip: 10.15.192.2 117 | - name: xbox-mbr.xboxlive.com 118 | ip: 10.15.192.2 119 | - name: assets1.xboxlive.com.nsatc.net 120 | ip: 10.15.192.2 121 | # blizzard 122 | - name: dist.blizzard.com 123 | ip: 10.15.192.2 124 | - name: dist.blizzard.com.edgesuite.net 125 | ip: 10.15.192.2 126 | - name: llnw.blizzard.com 127 | ip: 10.15.192.2 128 | - name: edgecast.blizzard.com 129 | ip: 10.15.192.2 130 | - name: blizzard.vo.llnwd.net 131 | ip: 10.15.192.2 132 | - name: blzddist1-a.akamaihd.net 133 | ip: 10.15.192.2 134 | - name: blzddist2-a.akamaihd.net 135 | ip: 10.15.192.2 136 | - name: blzddist3-a.akamaihd.net 137 | ip: 10.15.192.2 138 | - name: blzddist4-a.akamaihd.net 139 | ip: 10.15.192.2 140 | - name: level3.blizzard.com 141 | ip: 10.15.192.2 142 | - name: nydus.battle.net 143 | ip: 10.15.192.2 144 | - name: edge.blizzard.top.comcast.net 145 | ip: 10.15.192.2 146 | - name: cdn.blizzard.com 147 | ip: 10.15.192.2 148 | - name: cdn.blizzard.com 149 | ip: 10.15.192.2 150 | # league 151 | - name: l3cdn.riotgames.com 152 | ip: 10.15.192.2 153 | - name: worldwide.l3cdn.riotgames.com 154 | ip: 10.15.192.2 155 | - name: riotgamespatcher-a.akamaihd.net 156 | ip: 10.15.192.2 157 | - name: riotgamespatcher-a.akamaihd.net.edgesuite.net 158 | ip: 10.15.192.2 159 | - name: dyn.riotcdn.net 160 | ip: 10.15.192.2 161 | #epic 162 | - name: epicgames-download1.akamaized.net 163 | ip: 10.15.192.2 164 | - name: download.epicgames.com 165 | ip: 10.15.192.2 166 | - name: download2.epicgames.com 167 | ip: 10.15.192.2 168 | - name: download3.epicgames.com 169 | ip: 10.15.192.2 170 | - name: download4.epicgames.com 171 | ip: 10.15.192.2 172 | # Windows/WSUS 173 | - name: dl.delivery.mp.microsoft.com 174 | ip: 10.15.192.2 175 | allowedIpRanges: 176 | - "10.0.0.0/8" 177 | 
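For context on how unbound-values.yaml is likely consumed: an unbound-based DNS chart typically renders each `localRecords` entry into `local-zone`/`local-data` directives and each `allowedIpRanges` entry into an `access-control` rule in the generated `unbound.conf`. A sketch of the assumed output for one record (the exact rendering depends on the chart's templates, which are not part of this repo):

```
server:
  # From allowedIpRanges: permit queries from the LAN
  access-control: 10.0.0.0/8 allow
  # From one localRecords entry: answer queries for this
  # hostname (and subdomains) with the cache IP
  local-zone: "content.steampowered.com." redirect
  local-data: "content.steampowered.com. A 10.15.192.2"
```

Clients pointed at this resolver then fetch game content from the LAN cache at 10.15.192.2 instead of the upstream CDNs.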
-------------------------------------------------------------------------------- /helm/unreal4/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/unreal4/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: unreal4 2 | version: 0.0.1 3 | appVersion: 3525360 4 | description: Unreal Tournament 4 Dedicated Server 5 | keywords: 6 | - game 7 | - server 8 | - Unreal 9 | - Tournament 10 | - 4 11 | home: https://www.opensourcelan.com 12 | sources: 13 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/unreal4 14 | maintainers: 15 | - name: Chris Holman 16 | email: chris@opensourcelan.com 17 | -------------------------------------------------------------------------------- /helm/unreal4/README.md: -------------------------------------------------------------------------------- 1 | # Unreal Tournament 4 2 | 3 | See [opensourcelan/gameservers-docker](https://github.com/OpenSourceLAN/gameservers-docker/tree/master/unreal4) for more info -------------------------------------------------------------------------------- /helm/unreal4/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | unreal4! 
3 | 4 | You won't find this game in the LAN browser because apparently only 5 | listen servers will show up in the LAN browser -------------------------------------------------------------------------------- /helm/unreal4/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "unreal4.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "unreal4.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/unreal4/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "unreal4.fullname" . }} 5 | labels: 6 | app: {{ template "unreal4.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "unreal4.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "unreal4.fullname" . 
}} 18 | spec: 19 | imagePullSecrets: 20 | - name: dockerreglanadminsnet 21 | containers: 22 | - name: unreal4 23 | image: "{{ .Values.imagePrefix }}{{ .Values.image }}:{{ .Values.imageTag }}" 24 | imagePullPolicy: Always 25 | resources: 26 | {{ toYaml .Values.resources | indent 10 }} 27 | env: 28 | - name: SV_HOSTNAME 29 | value: {{ .Values.server.name | quote }} 30 | {{- if .Values.server.rcon_password }} 31 | - name: RCON_PASSWORD 32 | value: {{ .Values.server.rcon_password | quote }} 33 | {{- end }} 34 | {{- if .Values.server.game_password }} 35 | - name: SV_PASSWORD 36 | value: {{ .Values.server.game_password | quote }} 37 | {{- end }} 38 | - name: MAP 39 | value: {{ .Values.server.map | quote }} 40 | - name: MAXPLAYERS 41 | value: {{ .Values.server.maxplayers | quote }} 42 | {{- if .Values.server.timelimit }} 43 | - name: TIMELIMIT 44 | value: {{ .Values.server.timelimit | quote }} 45 | {{- end }} 46 | {{- if .Values.server.goalscore }} 47 | - name: GOALSCORE 48 | value: {{ .Values.server.goalscore | quote }} 49 | {{- end }} 58 | ports: 59 | - name: unreal4 60 | containerPort: 7779 61 | protocol: UDP 62 | 63 | --- 64 | 65 | apiVersion: v1 66 | kind: Service 67 | metadata: 68 | name: {{ template "unreal4.fullname" . }} 69 | spec: 70 | selector: 71 | app: {{ template "unreal4.fullname" . }} 72 | clusterIP: None 73 | ports: 74 | - name: foo # Actually, no port is needed. 
75 | port: 1234 76 | targetPort: 123 77 | -------------------------------------------------------------------------------- /helm/unreal4/values.yaml: -------------------------------------------------------------------------------- 1 | image: unreal4 2 | imageTag: latest 3 | imagePrefix: dockerreg.lanadmins.net/ 4 | 5 | resources: 6 | requests: 7 | memory: 800Mi 8 | cpu: 1 9 | 10 | server: 11 | name: Kickass Unreal4 Server 12 | rcon_password: null 13 | game_password: null 14 | map: "DM-Spacer" 15 | maxplayers: 32 16 | goalscore: 50 # the score to get to win 17 | timelimit: 10 18 | -------------------------------------------------------------------------------- /helm/ut2004/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/ut2004/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: ut2004 2 | version: 0.0.1 3 | appVersion: 3339 4 | description: Unreal Tournament 2004 Dedicated Server 5 | keywords: 6 | - game 7 | - server 8 | - Unreal 9 | - Tournament 10 | - ut2004 11 | home: https://www.opensourcelan.com 12 | sources: 13 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/ut2004 14 | maintainers: 15 | - name: Chris Holman 16 | email: chris@opensourcelan.com 17 | -------------------------------------------------------------------------------- /helm/ut2004/README.md: -------------------------------------------------------------------------------- 1 | 
# Unreal Tournament 2004 2 | 3 | ``` 4 | //TODO 5 | ``` -------------------------------------------------------------------------------- /helm/ut2004/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | UT2004! 3 | 4 | Find your server in your LAN browser with name {{ .Values.server.name }} 5 | -------------------------------------------------------------------------------- /helm/ut2004/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "ut2004.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "ut2004.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/ut2004/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "ut2004.fullname" . }} 5 | labels: 6 | app: {{ template "ut2004.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "ut2004.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "ut2004.fullname" . 
}} 18 | spec: 19 | containers: 20 | - name: ut2004 21 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 22 | imagePullPolicy: Always 23 | resources: 24 | {{ toYaml .Values.resources | indent 10 }} 25 | env: 26 | - name: SV_HOSTNAME 27 | value: {{ .Values.server.name | quote }} 28 | {{- if .Values.server.rcon_password }} 29 | - name: RCON_PASSWORD 30 | value: {{ .Values.server.rcon_password | quote }} 31 | {{- end }} 32 | {{- if .Values.server.game_password }} 33 | - name: SV_PASSWORD 34 | value: {{ .Values.server.game_password | quote }} 35 | {{- end }} 36 | {{- if .Values.server.instagib }} 37 | - name: INSTAGIB 38 | value: {{ .Values.server.instagib | quote }} 39 | {{- end }} 40 | {{- if .Values.server.mutators }} 41 | - name: MUTATORS 42 | value: {{ .Values.server.mutators | quote }} 43 | {{- end }} 44 | - name: MAP 45 | value: {{ .Values.server.map | quote }} 46 | - name: MAXPLAYERS 47 | value: {{ .Values.server.maxplayers | quote }} 48 | {{- if .Values.server.lan }} 49 | - name: LAN 50 | value: "1" 51 | {{- end }} 52 | {{- if .Values.server.other_opts }} 53 | - name: OTHER_OPTS 54 | value: {{ .Values.server.other_opts | quote }} 55 | {{- end }} 56 | ports: 57 | - name: ut2004 58 | containerPort: 7779 59 | protocol: UDP 60 | 61 | -------------------------------------------------------------------------------- /helm/ut2004/values.yaml: -------------------------------------------------------------------------------- 1 | image: ut2004 2 | imageTag: "20181004-220119" 3 | imagePrefix: dockerreg.lanadmins.net/ 4 | 5 | resources: 6 | requests: 7 | memory: 200Mi 8 | cpu: 500m 9 | 10 | server: 11 | name: Kickass UT2004 Server 12 | rcon_password: null 13 | game_password: null 14 | instagib: false 15 | map: "DM-Deck17" 16 | maxplayers: 32 17 | lan: true 18 | other_opts: null 19 | mutators: null 20 | -------------------------------------------------------------------------------- /helm/wreckfest/.helmignore: 
-------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/wreckfest/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: wreckfest 2 | version: 0.0.1 3 | appVersion: 1.10.05 4 | description: Wreckfest server 5 | keywords: 6 | - game 7 | - server 8 | - wreckfest 9 | home: https://www.opensourcelan.com 10 | sources: 11 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/wreckfest 12 | maintainers: 13 | - name: Chris Holman 14 | email: chris@opensourcelan.com 15 | -------------------------------------------------------------------------------- /helm/wreckfest/README.md: -------------------------------------------------------------------------------- 1 | # Wreckfest 2 | 3 | see https://github.com/OpenSourceLAN/gameservers-docker/tree/master/wreckfest for configuration details -------------------------------------------------------------------------------- /helm/wreckfest/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | Wreckfest! 3 | 4 | Find your server in your LAN browser with name {{ .Values.server.server_name }} 5 | -------------------------------------------------------------------------------- /helm/wreckfest/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 
3 | */}} 4 | {{- define "server.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | */}} 12 | {{- define "server.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/wreckfest/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "server.fullname" . }} 5 | labels: 6 | app: {{ template "server.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "server.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "server.fullname" . 
}} 18 | spec: 19 | imagePullSecrets: 20 | - name: regcred 21 | containers: 22 | - name: server 23 | image: "{{ .Values.imagePrefix }}{{ .Values.image }}:{{ .Values.imageTag }}" 24 | imagePullPolicy: Always 25 | tty: true 26 | stdin: true 27 | resources: 28 | {{ toYaml .Values.resources | indent 10 }} 29 | env: 30 | - name: SERVER_NAME 31 | value: {{ .Values.server.server_name | quote }} 32 | {{- if .Values.server.welcome_message }} 33 | - name: WELCOME_MESSAGE 34 | value: {{ .Values.server.welcome_message | quote }} 35 | {{- end }} 36 | {{- if .Values.server.game_password }} 37 | - name: GAME_PASSWORD 38 | value: {{ .Values.server.game_password | quote }} 39 | {{- end }} 40 | - name: MAX_PLAYERS 41 | value: {{ .Values.server.max_players | quote }} 42 | - name: TRACK 43 | value: {{ join "," .Values.server.track | quote }} 44 | - name: GAME_MODE 45 | value: {{ .Values.server.gamemode | quote }} 46 | - name: LAPS 47 | value: {{ .Values.server.laps | quote }} 48 | - name: TIME_LIMIT 49 | value: {{ .Values.server.time_limit | quote }} 50 | -------------------------------------------------------------------------------- /helm/wreckfest/values.yaml: -------------------------------------------------------------------------------- 1 | image: wreckfest 2 | imageTag: latest 3 | imagePrefix: docker.pax.lan/ 4 | 5 | resources: 6 | requests: 7 | memory: 200Mi 8 | cpu: 500m 9 | 10 | server: 11 | server_name: Wreckfest server 12 | welcome_message: null 13 | game_password: null 14 | max_players: 32 15 | track: [ bigstadium_demolition_arena ] # a list, since the template joins with "," 16 | gamemode: derby 17 | laps: 3 18 | time_limit: 20 19 | 20 | 21 | -------------------------------------------------------------------------------- /helm/zdaemon/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/zdaemon/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: zdaemon 2 | version: 0.0.1 3 | appVersion: 1.10.05 4 | description: Zdaemon Dedicated server 5 | keywords: 6 | - game 7 | - server 8 | - zdaemon 9 | home: https://www.opensourcelan.com 10 | sources: 11 | - https://github.com/OpenSourceLAN/gameservers-docker/tree/master/zdaemon 12 | maintainers: 13 | - name: Chris Holman 14 | email: chris@opensourcelan.com 15 | -------------------------------------------------------------------------------- /helm/zdaemon/README.md: -------------------------------------------------------------------------------- 1 | # ZDaemon 2 | 3 | see https://github.com/OpenSourceLAN/gameservers-docker/tree/master/zdaemon for configuration details -------------------------------------------------------------------------------- /helm/zdaemon/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 2 | ZDaemon! 3 | 4 | Find your server in your LAN browser with name {{ .Values.server.name }} 5 | -------------------------------------------------------------------------------- /helm/zdaemon/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "zdaemon.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 6 | {{- end -}} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
11 | */}} 12 | {{- define "zdaemon.fullname" -}} 13 | {{- $name := default .Chart.Name .Values.nameOverride -}} 14 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 15 | {{- end -}} 16 | -------------------------------------------------------------------------------- /helm/zdaemon/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "zdaemon.fullname" . }} 5 | labels: 6 | app: {{ template "zdaemon.fullname" . }} 7 | chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" 8 | release: "{{ .Release.Name }}" 9 | heritage: "{{ .Release.Service }}" 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "zdaemon.fullname" . }} 14 | template: 15 | metadata: 16 | labels: 17 | app: {{ template "zdaemon.fullname" . }} 18 | spec: 19 | containers: 20 | - name: zdaemon 21 | image: "{{ .Values.imagePrefix}}{{ .Values.image }}:{{ .Values.imageTag }}" 22 | imagePullPolicy: Always 23 | resources: 24 | {{ toYaml .Values.resources | indent 10 }} 25 | env: 26 | - name: HOSTNAME 27 | value: {{ .Values.server.name | quote }} 28 | {{- if .Values.server.motd }} 29 | - name: MOTD 30 | value: {{ .Values.server.motd | quote }} 31 | {{- end }} 32 | {{- if .Values.server.rcon_password }} 33 | - name: RCON_PASSWORD 34 | value: {{ .Values.server.rcon_password | quote }} 35 | {{- end }} 36 | {{- if .Values.server.game_password }} 37 | - name: PASSWORD 38 | value: {{ .Values.server.game_password | quote }} 39 | {{- end }} 40 | - name: MAPS 41 | value: {{ join "," .Values.server.maps | quote }} 42 | - name: MAXPLAYERS 43 | value: {{ .Values.server.maxplayers | quote }} 44 | {{- if .Values.server.lan }} 45 | - name: "LAN" 46 | value: "1" 47 | {{- end }} 48 | {{- if .Values.server.fraglimit }} 49 | - name: "CVAR_fraglimit" 50 | value: {{ .Values.server.fraglimit | quote }} 51 | {{- end }} 52 | {{- if .Values.server.timelimit }} 53 | - name: 
"CVAR_timelimit" 54 | value: {{ .Values.server.timelimit | quote }} 55 | {{- end }} 56 | ports: 57 | - name: zdaemon 58 | # I don't actually know what port number this uses 59 | containerPort: 65534 60 | protocol: UDP 61 | 62 | -------------------------------------------------------------------------------- /helm/zdaemon/values.yaml: -------------------------------------------------------------------------------- 1 | image: zdaemon 2 | imageTag: "20181009-234733" 3 | imagePrefix: dockerreg.lanadmins.net/ 4 | 5 | resources: 6 | requests: 7 | memory: 100Mi 8 | cpu: 100m 9 | 10 | server: 11 | name: Kickass ZDaemon Server 12 | motd: Welcome to ZDaemon 13 | rcon_password: null 14 | game_password: null 15 | maps: 16 | - map01 17 | - map04 18 | - map07 19 | maxplayers: 32 20 | lan: true 21 | timelimit: 10 22 | fraglimit: 50 23 | 24 | -------------------------------------------------------------------------------- /installation.md: -------------------------------------------------------------------------------- 1 | To install a kubernetes cluster from scratch, we use kubeadm. 2 | 3 | https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ 4 | 5 | TODO: automate this whole page :) 6 | 7 | ## Steps to do on both masters and slaves 8 | 9 | ``` 10 | # ip_forward needs to be enabled before docker starts otherwise docker 11 | # sets the iptables forward policy to drop 12 | cat <<EOF >/etc/sysctl.d/05-forwarding.conf 13 | net.ipv4.ip_forward=1 14 | EOF 15 | sysctl --load=/etc/sysctl.d/05-forwarding.conf 16 | 17 | curl -sSL https://get.docker.io | sudo bash #yolo 18 | 19 | # containerd needs CRI enabled, which is disabled by default 20 | rm /etc/containerd/config.toml 21 | systemctl restart containerd 22 | 23 | # kubelet crashloops if swap is enabled... 
but kubeadm only gives a warning :( 24 | swapoff -a 25 | sed -i '/swap/d' /etc/fstab 26 | 27 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - 28 | cat <<EOF >/etc/apt/sources.list.d/kubernetes.list 29 | deb http://apt.kubernetes.io/ kubernetes-xenial main 30 | EOF 31 | apt-get update 32 | apt-get install -y kubelet kubeadm kubectl 33 | sed -i 's/\(ExecStart=.*kubelet.*\)/\1 --cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 34 | systemctl daemon-reload 35 | systemctl restart kubelet 36 | ``` 37 | 38 | ## Then, for the master node 39 | 40 | ``` 41 | kubeadm init 42 | ``` 43 | 44 | If your LAN IP address is not on the same interface as the default route (eg, in VirtualBox where the route is out the private vbox NAT interface), use the `--apiserver-advertise-address=x.x.x.x` argument to the above command to hard code an address. 45 | 46 | The command will output a bunch of garbage, and eventually right at the end give you a line starting with `kubeadm join`. This is the line you need for the workers. Save it but keep it secure. This token is effectively a root password to your cluster. 47 | 48 | And then make the kubectl API easily accessible by making a .kube config directory: 49 | 50 | ``` 51 | mkdir -p $HOME/.kube 52 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 53 | sudo chown $(id -u):$(id -g) $HOME/.kube/config 54 | ``` 55 | 56 | Now you can run `kubectl` commands from your terminal. You can also copy this config file to another PC to use kubectl remotely. 57 | 58 | ## For a worker 59 | 60 | When you initialised the master, it gave you a `kubeadm join` command. Run that on the worker. 61 | 62 | Alternatively, you can use `kubeadm token create | tail -n 1` on the master to generate a token. 
Then, on your worker-to-be, run `kubeadm join --token <token> 10.0.0.123:6443` (where `<token>` is the token you just generated, and 10.0.0.123 is your master's IP address) 63 | 64 | Now go follow the [networking instructions](network.md) to set up the network for the worker - the node won't be considered Ready until it has network. 65 | 66 | Once network is set up, wait for a minute and run (back on the master) 67 | 68 | ``` 69 | kubectl get nodes 70 | ``` 71 | and you should see your worker(s) listed as Ready if it joined and the networking is set up properly. 72 | 73 | -------------------------------------------------------------------------------- /metallb-conf.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: metallb.io/v1beta1 2 | kind: IPAddressPool 3 | metadata: 4 | name: default 5 | namespace: metallb-system 6 | spec: 7 | addresses: 8 | - 10.10.99.0-10.10.99.255 9 | --- 10 | apiVersion: metallb.io/v1beta1 11 | kind: L2Advertisement 12 | metadata: 13 | name: example-advertisement3 14 | namespace: metallb-system 15 | spec: 16 | ipAddressPools: 17 | - default 18 | interfaces: # recommended if you have bond/bridge interfaces: it restricts which interfaces MetalLB sends ARP/NDP announcements out of 19 | - br0 20 | 21 | # note to self: if you are having trouble getting arp to work... the qlcnics I have sometimes need promiscuous mode turned on 22 | # ... seems like same problem I had in VyOS. TODO: test if same problem happens on other NICs 23 | 24 | # install MetalLB: 25 | # kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/master/config/manifests/metallb-native.yaml 26 | -------------------------------------------------------------------------------- /network.md: -------------------------------------------------------------------------------- 1 | For the purposes of game servers, we want our containers to have an IP address on the local LAN. We do this via a bridge interface and CNI. 
2 | 3 | ## Do network the easy way 4 | 5 | Make sure *every* host in your cluster has a `br0` bridge interface that is bridged to the LAN. 6 | 7 | Make sure *every* host in your cluster has `ip_forwarding` [enabled with a default allow forwarding iptables rule](https://github.com/OpenSourceLAN/dhcp-cni-plugin#how-to-use) - 8 | 9 | ``` 10 | cat <<EOF >/etc/sysctl.d/05-forwarding.conf 11 | net.ipv4.ip_forward=1 12 | EOF 13 | sysctl --load=/etc/sysctl.d/05-forwarding.conf 14 | iptables --policy FORWARD ACCEPT 15 | ``` 16 | 17 | Now apply the magic config file: 18 | 19 | ``` 20 | kubectl apply -f configs/dhcp-daemonset.yaml 21 | ``` 22 | 23 | Done :) 24 | 25 | ## Do network the manual way 26 | 27 | When kubernetes runs a new pod, it first invokes the CNI plugins you've specified, which will add their own networking configuration to the network namespace of the pod. Don't think of this as kubernetes configuring anything - network isn't managed by kubernetes, and all configuration is entirely host local. Kubernetes invokes the CNI plugin, which then reaches back in to the container to configure it. 28 | 29 | So on every host (kubelets and masters), we configure a single bridge and CNI network that uses the bridge like this: 30 | 31 | ``` 32 | cat <<EOF > /etc/netplan/10-bridge.yaml 33 | network: 34 | ethernets: 35 | eno1: 36 | dhcp4: false 37 | bridges: 38 | br0: 39 | interfaces: [eno1] 40 | # either dhcp4: true, or, a static address like below 41 | addresses: [ "10.0.200.1/16"] 42 | gateway4: "10.0.0.1" 43 | nameservers: 44 | addresses: [ 10.0.0.1 ] 45 | version: 2 46 | EOF 47 | 48 | netplan apply 49 | 50 | mkdir -p /etc/cni/net.d 51 | cat <<EOF > /etc/cni/net.d/10-bridge.conf 52 | { 53 | "name": "bridgenet", 54 | "type": "bridge", 55 | "bridge": "br0", 56 | "ipMasq": false, 57 | "isDefaultGateway": false, 58 | "hairpinMode": false, 59 | "ipam": { 60 | "type": "dhcp" 61 | } 62 | } 63 | EOF 64 | 65 | ``` 66 | (replacing eno1 with the host's local LAN interface; note netplan only reads files ending in `.yaml`). 
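A stray comma in that CNI conf fails silently (pods just hang in ContainerCreating), so it's worth validating the JSON after writing it. A minimal sketch; the `/tmp` path is illustrative, point it at `/etc/cni/net.d/10-bridge.conf` on a real host:

```shell
# Validate a CNI conf before the kubelet tries to use it.
# python3 -m json.tool exits non-zero on any JSON syntax error.
cat > /tmp/10-bridge.conf <<'EOF'
{
  "name": "bridgenet",
  "type": "bridge",
  "bridge": "br0",
  "ipMasq": false,
  "isDefaultGateway": false,
  "hairpinMode": false,
  "ipam": { "type": "dhcp" }
}
EOF
python3 -m json.tool /tmp/10-bridge.conf >/dev/null && echo "CNI conf OK"
# prints: CNI conf OK
```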
67 | 68 | A requirement of using bridges is that we set `iptables --policy FORWARD ACCEPT`. This is because **all** packets leaving a kube container need to be evaluated by the iptables chains, so we need iptables to allow forwarded traffic. 69 | Some websites recommend disabling `bridge-nf-call-iptables`, but that will result in the packets not hitting iptables rules, which breaks service addresses/endpoints. So we need iptables, and we need it to forward. 70 | 71 | Since a recent Docker version, if Docker starts with `ip_forward` disabled, docker will enable it and also set the default FORWARD policy to DROP. So the solution is to set the ip_forward sysctl to 1 in `/etc/sysctl.d` _before_ Docker starts, so it leaves the FORWARD policy at its default (ACCEPT) 72 | 73 | The DHCP IPAM system requires a daemon running to assign IPs. The CNI plugins repository provides us with this daemon, so we just need to compile it and run it. 74 | 75 | ``` 76 | sudo apt-get install golang 77 | git clone https://github.com/containernetworking/plugins.git 78 | cd plugins/plugins/ipam/dhcp 79 | mkdir ~/go 80 | export GOPATH=~/go 81 | go get 82 | go build 83 | sudo ./dhcp daemon & 84 | ``` 85 | 86 | Alternatively, if DHCP seems wrong for you, you can do host local static ranges instead: 87 | 88 | ``` 89 | cat <<EOF > /etc/cni/net.d/10-bridge.conf 90 | { 91 | "cniVersion": "0.2.0", 92 | "name": "bridgenet", 93 | "type": "bridge", 94 | "bridge": "br0", 95 | "ipMasq": false, 96 | "isDefaultGateway": false, 97 | "hairpinMode": false, 98 | "ipam": { 99 | "type": "host-local", "ranges": [[{ "subnet": "10.0.0.0/24", "rangeStart": "10.0.0.40", "rangeEnd": "10.0.0.54", "gateway": "10.0.0.1"}]], 100 | "routes": [{"dst": "0.0.0.0/0", "gw": "10.0.0.1"}], 101 | "dataDir": "/run/ipam-state" 102 | } 103 | } 104 | EOF 105 | ``` 106 | 107 | Inside the ranges arrays... the first level is interface. Every top level array entry will generate an interface inside a container. 
Commonly this might be for IPv6 and IPv4 to coexist. The second level is ranges for a given interface. Multiple ranges can be specified at the second level, but I'm uncertain what the behaviour is (and I don't think it matters to me) 108 | 109 | 110 | ### On the `macvlan` plugin 111 | 112 | Originally, I was using the macvlan CNI plugin. This is touted as supported by kubernetes in a number of places, and totally supported by docker. When using this, I could spin up containers and they'd be on the LAN as expected. But kubedns would not work, complaining about being unable to connect to 10.96.0.1, and then kube dashboard wouldn't work because kubedns was down. 113 | 114 | 10.96.0.1 is inaccessible because macvlan interfaces bypass all iptables rules of the host. Kube relies heavily on iptables rules for numerous reasons, including routing to "Cluster IP addresses" such as 10.96.0.1. Using a bridge interface instead forces packets through the host iptables stack, allowing correct processing of packets. Yay! 
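Coming back to the host-local `ranges` nesting from above, the two levels can be made concrete with a small local sketch. The file path and the `fd00::/64` entry are hypothetical additions for illustration, not values from this repo:

```shell
# Each entry in the outer "ranges" array is one allocation (commonly one
# per address family); the inner array holds the ranges for that allocation.
cat > /tmp/ipam.json <<'EOF'
{
  "type": "host-local",
  "ranges": [
    [ { "subnet": "10.0.0.0/24", "rangeStart": "10.0.0.40", "rangeEnd": "10.0.0.54" } ],
    [ { "subnet": "fd00::/64" } ]
  ]
}
EOF
python3 - <<'PYEOF'
import json
with open("/tmp/ipam.json") as f:
    ipam = json.load(f)
for i, alloc in enumerate(ipam["ranges"]):
    for r in alloc:
        print(f"allocation {i}: {r['subnet']}")
PYEOF
# prints:
# allocation 0: 10.0.0.0/24
# allocation 1: fd00::/64
```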
115 | 116 | -------------------------------------------------------------------------------- /registry-notes.txt: -------------------------------------------------------------------------------- 1 | docker run -d \ 2 | -p 443:5000 \ 3 | --restart=always \ 4 | --name registry \ 5 | -v /data/nvme/dockerreg/_data:/var/lib/registry \ 8 | -v `pwd`/auth:/auth \ 9 | -e "REGISTRY_AUTH=htpasswd" \ 10 | -e "REGISTRY_AUTH_HTPASSWD_REALM=Nothing To See Here" \ 11 | -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \ 12 | -v `pwd`/certs:/certs \ 13 | -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/fullchain1.pem \ 14 | -e REGISTRY_HTTP_TLS_KEY=/certs/privkey1.pem \ 15 | registry:2 16 | 17 | 18 | 19 | 20 | 21 | docker run --rm \ 22 | --entrypoint htpasswd \ 23 | registry:2 -Bbn pax password > htpasswd 24 | 25 | kubectl create secret docker-registry dockerreg --docker-username=pax --docker-password=password --docker-server='https://dockerreg.lanadmins.net' 26 | 27 | 28 | kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "dockerreg"}]}' 29 | 30 | 31 | 32 | -------------------------------------------------------------------------------- /registry.md: -------------------------------------------------------------------------------- 1 | Because you don't know which host your container will run on, your images must be pullable from a docker registry. You can't just build an image on a host and trust that it will work. 2 | 3 | To that end, instantiate a registry on your LAN. 4 | 5 | Start a docker registry somewhere. This could even live in k8s itself, but that's left as an exercise for the reader at this time. Give it a good DNS name. 6 | 7 | ``` 8 | docker run -d -p 5000:5000 --restart=always --name registry registry:2 9 | ``` 10 | 11 | **Important**: this registry has no authentication or access restrictions. 
Consider your threat model before leaving this as is. [Follow instructions on this page](https://docs.docker.com/registry/deploying/) for how to secure the registry. And then [this other page](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) for how to configure kubernetes to support that registry. 12 | 13 | With the registry up and running, now we need to configure the docker service on **every kubernetes host** to support our unsecured docker registry: 14 | 15 | ``` 16 | cat <<EOF > /etc/docker/daemon.json 17 | { 18 | "insecure-registries": [ "docker.pax.lan"] 19 | } 20 | EOF 21 | service docker restart 22 | ``` 23 | Make sure to update the hostname accordingly: `docker.pax.lan` should be an FQDN that resolves to your registry host. 24 | 25 | But that only impacts you if you're using `docker`... recent versions of Kube use the `containerd` CRI interface, which needs to be configured separately: 26 | 27 | ``` 28 | cat <<EOF > /etc/containerd/config.toml 29 | version = 2 30 | 31 | [plugins."io.containerd.grpc.v1.cri".registry.mirrors] 32 | [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.pax.lan"] 33 | endpoint = ["http://docker.pax.lan"] 34 | EOF 35 | 36 | ``` 37 | 38 | Now make an image. On your local desktop or whatever, tag and push an image: 39 | ``` 40 | docker tag armagetron:latest some.docker.registry:5000/armagetron:latest 41 | docker push some.docker.registry:5000/armagetron:latest 42 | ``` 43 | 44 | The image is now in the docker registry, and accessible by kubernetes. 
45 | 46 | ``` 47 | kubectl run armagetron-test-server --image=some.docker.registry:5000/armagetron:latest -t -i --rm 48 | 49 | ``` 50 | 51 | ## Securing the registry 52 | 53 | ``` 54 | docker run \ 55 | --entrypoint htpasswd \ 56 | registry:2 -Bbn testuser testpassword > auth/htpasswd 57 | ``` 58 | 59 | ``` 60 | docker run -d \ 61 | -p 5000:5000 \ 62 | --restart=always \ 63 | --name registry \ 64 | -v `pwd`/registry:/var/lib/registry \ 65 | -v `pwd`/auth:/auth \ 66 | -e "REGISTRY_AUTH=htpasswd" \ 67 | -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \ 68 | -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \ 69 | -v `pwd`/certs:/certs \ 70 | -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \ 71 | -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \ 72 | registry:2 73 | ``` 74 | 75 | ``` 76 | kubectl create secret docker-registry secretnamehere --docker-username=someuser --docker-password=somepassword --docker-server='https://some.docker.registry' 77 | ``` 78 | 79 | ``` 80 | kubectl run -i -t hello-world --restart=Never --rm=true \ 81 | --image=dockerreg.lanadmins.net/lan-registration:latest \ 82 | --overrides='{ "apiVersion": "v1", "spec": { "imagePullSecrets": [{"name": "dockerreglanadminsnet"}] } }' 83 | ``` 84 | 85 | ``` 86 | curl --basic --user someuser:somepassword https://some-secure-reg.com/v2/_catalog 87 | curl --basic --user someuser:somepassword https://some-secure-reg.com/v2/someimage/tags/list 88 | ``` 89 | 90 | Make this secret default for all pods in a namespace using [this method](https://stackoverflow.com/a/40646132) 91 | -------------------------------------------------------------------------------- /storage-ceph-storageclass.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: ceph.rook.io/v1 2 | kind: CephBlockPool 3 | metadata: 4 | name: replicapool 5 | namespace: rook-ceph 6 | spec: 7 | failureDomain: host 8 | replicated: 9 | size: 3 10 | --- 11 | apiVersion: storage.k8s.io/v1 12 | kind: StorageClass 13 | 
metadata: 14 | name: rook-ceph-block 15 | # Change "rook-ceph" provisioner prefix to match the operator namespace if needed 16 | provisioner: rook-ceph.rbd.csi.ceph.com 17 | parameters: 18 | # clusterID is the namespace where the rook cluster is running 19 | clusterID: rook-ceph 20 | # Ceph pool into which the RBD image shall be created 21 | pool: replicapool 22 | 23 | # (optional) mapOptions is a comma-separated list of map options. 24 | # For krbd options refer 25 | # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options 26 | # For nbd options refer 27 | # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options 28 | # mapOptions: lock_on_read,queue_depth=1024 29 | 30 | # (optional) unmapOptions is a comma-separated list of unmap options. 31 | # For krbd options refer 32 | # https://docs.ceph.com/docs/master/man/8/rbd/#kernel-rbd-krbd-options 33 | # For nbd options refer 34 | # https://docs.ceph.com/docs/master/man/8/rbd-nbd/#options 35 | # unmapOptions: force 36 | 37 | # RBD image format. Defaults to "2". 38 | imageFormat: "2" 39 | 40 | # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature. 41 | imageFeatures: layering 42 | 43 | # The secrets contain Ceph admin credentials. 44 | csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner 45 | csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph 46 | csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner 47 | csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph 48 | csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node 49 | csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph 50 | 51 | # Specify the filesystem type of the volume. If not specified, csi-provisioner 52 | # will set default as `ext4`. Note that `xfs` is not recommended due to potential deadlock 53 | # in hyperconverged settings where the volume is mounted on the same node as the osds. 
54 | csi.storage.k8s.io/fstype: ext4 55 | 56 | # Delete the rbd volume when a PVC is deleted 57 | reclaimPolicy: Delete 58 | 59 | -------------------------------------------------------------------------------- /storage-ceph.md: -------------------------------------------------------------------------------- 1 | Rather than NFS, [Rook](https://rook.io) offers nearly turnkey storage options. 2 | 3 | Multiple storage engines are offered. We are interested in Ceph, as it can provide 4 | block storage to pods via PersistentVolumeClaims. 5 | 6 | You'll need 3 or more hosts with SSDs, all part of the kube cluster. Ideally you'll 7 | want these kube hosts to have a label and taint so that the ceph resources only run 8 | on the storage nodes, and the storage nodes only run ceph. 9 | 10 | In this example, I have a 3 node cluster, all of which are master control plane nodes. 11 | I'll be adding further compute nodes to the cluster later. 12 | 13 | This is based on the [official quickstart guide](https://rook.io/docs/rook/v1.6/ceph-quickstart.html), 14 | with notes on what to change to work for my situation. 15 | 16 | **If you have used these disks for a Ceph cluster previously, [follow this cleanup guide first](https://rook.io/docs/rook/v1.10/Storage-Configuration/ceph-teardown/#delete-the-data-on-hosts)** 17 | 18 | The cleanup guide can be roughly summarised as: on every host in the kube cluster, `rm -rf /var/lib/rook`, and on every block device that has hosted Ceph data before, zero it entirely, either with `dd` or, for SSDs, `blkdiscard`. 19 | 20 | ``` 21 | git clone https://github.com/rook/rook.git 22 | cd rook/cluster/examples/kubernetes/ceph 23 | ``` 24 | 25 | Start by spinning up the operator. If you don't have any non-master nodes in your cluster, 26 | make this change to the operator.yaml file and then apply.
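For reference, the toleration in question looks like this when written as block-style YAML under the Deployment's `spec.template.spec` (a sketch; note that newer Kubernetes releases taint control-plane nodes with `node-role.kubernetes.io/control-plane` rather than `.../master`):

```yaml
# Allows the operator pod to schedule onto tainted master nodes.
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```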
27 | 28 | ``` 29 | 30 | # for the deployment at the bottom of operator.yaml, set spec.template.spec.tolerations to: 31 | # [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule", "operator": "Exists"}] 32 | # 33 | # Search the file for `CSI_PROVISIONER_TOLERATIONS`, uncomment it, and set it like this: 34 | # CSI_PROVISIONER_TOLERATIONS: | 35 | # [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule", "operator": "Exists"}] 36 | # Do the same for CSI_PLUGIN_TOLERATIONS 37 | 38 | kubectl create -f crds.yaml -f common.yaml -f operator.yaml 39 | ``` 40 | 41 | Once the operator pod is running successfully, you can customise the `cluster.yaml` file and apply it. 42 | `cluster.yaml` contains a single CR. 43 | ``` 44 | # spec.placement: 45 | # all: 46 | # tolerations: [{"key": "node-role.kubernetes.io/control-plane", "effect": "NoSchedule", "operator": "Exists"}] 47 | 48 | # storage.useAllNodes: false 49 | # storage.useAllDevices: false 50 | 51 | # storage.nodes: 52 | # - name: kubemaster1 53 | # devices: [ {"name": "sdb" }] 54 | # (there are lots of ways to configure which storage devices to use; this is the simplest) 55 | 56 | kubectl create -f cluster.yaml 57 | ``` 58 | 59 | It can take some minutes for each of the successive pods to pull and start. It may look like it's doing 60 | nothing for a while. 61 | 62 | You should eventually see multiple pods each for rook-ceph-mon, rook-ceph-mgr, and rook-ceph-osd. Note that 63 | the osd-prepare pods are different: they are short-lived preparation jobs, not the OSDs themselves. If the osd pods are not present, but the osd-prepare ones are, check the 64 | logs on the osd-prepare pods. 65 | 66 | When it is finished, the `ceph status` command via the toolbox (`kubectl apply -f toolbox.yaml`) should show 3 mon daemons, a mgr, and 3 osds. The 67 | available capacity should match the total size of the block devices you allocated. 68 | 69 | Ceph is now healthy. 70 | 71 | Create the `CephFilesystem` resource. Modify the `tolerations` as appropriate.
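The tolerations for the filesystem live under the metadata server's placement. Abbreviated from the Rook example `filesystem.yaml`, the shape is roughly as follows (a sketch; `myfs` is the example's name, not something this cluster requires):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true
    placement:
      # let the MDS pods run on tainted control-plane nodes
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
```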
72 | 73 | ``` 74 | kubectl apply -f filesystem.yaml 75 | ``` 76 | 77 | 78 | Blindly apply the storage class. It depends on the filesystem and the CephCluster resources already provisioned. 79 | ``` 80 | kubectl apply -f storage-ceph-storageclass.yaml 81 | ``` 82 | 83 | And hey presto, you should be able to provision storage: 84 | 85 | ``` 86 | cat <<EOF > /etc/exports 14 | /games 10.10.0.0/16(rw,sync,no_root_squash) 15 | EOF 16 | exportfs -a 17 | ``` 18 | 19 | If we export, say, `/games` on the server, we can have individual servers mount subfolders - so tron1 could mount `/games/tron1`. Because it's NFS, containers can start on any host and access their data. 20 | 21 | Since it's an almost-public NFS mount (restricted to the server subnet), we should probably make it a zfs/btrfs mount that gets snapshotted every 5 minutes, in case something does `rm -rf *` or a malicious user gets in. 22 | 23 | ### Using storage in your pods 24 | 25 | **Important:** Ensure the nfs-common package is installed on all hosts. Your containers will fail to start if they cannot mount nfs, and nfs cannot be mounted without an nfs client. 26 | 27 | Start by creating a StorageClass, PersistentVolume and PersistentVolumeClaim using the [provided nfs-pv.yaml](configs/nfs-pv.yaml). 28 | 29 | ``` 30 | kubectl apply -f configs/nfs-pv.yaml 31 | ``` 32 | 33 | Then you can add the `volumes` and `volumeMounts` properties to your Pod spec: 34 | 35 | ``` 36 | spec: 37 | containers: 38 | - name: tron 39 | image: squid:5000/armagetron 40 | env: 41 | - name: SERVER_NAME 42 | value: Test Server lolol222 43 | resources: 44 | requests: 45 | cpu: 100m 46 | memory: 100Mi 47 | volumeMounts: 48 | - name: nfs 49 | mountPath: "/games" 50 | volumes: 51 | - name: nfs 52 | persistentVolumeClaim: 53 | claimName: nfs 54 | 55 | ``` 56 | 57 | Note that the exported path of the PersistentVolume in `nfs-pv.yaml` was `/games/tron`, but the NFS server exports `/games/` - we can mount subdirectories of an NFS export.
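For instance, the NFS stanza on the PersistentVolume side might look like this (a sketch; the real definitions live in `configs/nfs-pv.yaml`, and the server address here is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.10.0.5   # hypothetical NFS server address
    path: /games/tron   # a subdirectory of the /games export
```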
This is useful for sharing a single export with multiple game servers. 58 | 59 | 60 | -------------------------------------------------------------------------------- /topology.json: -------------------------------------------------------------------------------- 1 | { 2 | "clusters": [ 3 | { 4 | "nodes": [ 5 | { 6 | "node": { 7 | "hostnames": { 8 | "manage": [ 9 | "kubestorage1" 10 | ], 11 | "storage": [ 12 | "10.15.0.11" 13 | ] 14 | }, 15 | "zone": 1 16 | }, 17 | "devices": [ 18 | "/dev/sdb" ] 19 | }, 20 | { 21 | "node": { 22 | "hostnames": { 23 | "manage": [ 24 | "kubestorage2" 25 | ], 26 | "storage": [ 27 | "10.15.0.12" 28 | ] 29 | }, 30 | "zone": 1 31 | }, 32 | "devices": [ 33 | "/dev/sdb" 34 | ] 35 | }, 36 | { 37 | "node": { 38 | "hostnames": { 39 | "manage": [ 40 | "kubestorage3" 41 | ], 42 | "storage": [ 43 | "10.15.0.13" 44 | ] 45 | }, 46 | "zone": 1 47 | }, 48 | "devices": [ 49 | "/dev/sdb" 50 | ] 51 | } 52 | ] 53 | } 54 | ] 55 | } 56 | 57 | --------------------------------------------------------------------------------