├── README.md
├── _config.yml
├── install_k8s.sh
├── install_new_node.sh
└── resources
    ├── glusterfs-volumeStore
    │   ├── glusterfs-enpoints.json
    │   ├── glusterfs-svc.json
    │   └── mysql-depl-example.yaml
    └── heapster
        ├── grafana-ingress.yaml
        ├── grafana-service.yaml
        ├── heapster-controller.yaml
        ├── heapster-service.yaml
        ├── influxdb-grafana-controller.yaml
        └── influxdb-service.yaml
/README.md:
--------------------------------------------------------------------------------
1 | # THIS PROJECT IS DEPRECATED
2 |
3 | I deprecated it because I can't maintain Kubernetes updates (and test them all).
4 | Currently I am focused on maintaining https://github.com/valentin2105/Kubernetes-Saltstack, which
5 | also allows the creation of a single-node cluster. Using Saltstack is generally more flexible, and
6 | it also allows the creation of a production cluster.
7 |
8 | **Other tools like Simplekube**: **[Minikube](https://github.com/kubernetes/minikube)**, **[k9s](https://k9ss.io/)**, **[microk8s](https://github.com/ubuntu/microk8s)**, etc. all focus on simple single-node installation.
9 |
10 | ----
11 |
12 |
13 |
14 | > Simple as a shell script. It allows you to easily deploy k8s for testing or learning purposes.
15 |
16 | With Simplekube, you can install Kubernetes on Linux servers without having to plug into any cloud provider.
17 |
18 | Just take an empty Linux box, clone the git repo, launch the script and have fun with k8s!
19 | It comes with a few extras such as Kube-DNS, Calico, Helm, a firewall and IPv6!
20 |
21 | If you need to, you can easily add new workers (even across multiple clouds)!
22 |
23 | ## How to use it?
24 |
25 | #### 1- Tweak the head of `install_k8s.sh`
26 |
27 | ```
28 | # please change this value :
29 | hostIP="__PUBLIC_OR_PRIVATE_IPV4__"
30 | # -----------------------
31 | k8sVersion="v1.8.1"
32 | etcdVersion="v3.2.9"
33 | dockerVersion="17.05.0-ce"
34 | cniVersion="v0.6.0"
35 | calicoCNIVersion="v1.11.0"
36 | calicoctlVersion="v1.6.1"
37 | cfsslVersion="v1.2.0"
38 | helmVersion="v2.6.2"
39 | ```
40 | #### 2- Launch the script as a user (with sudo rights)
41 |
42 | `./install_k8s.sh --master`
43 |
44 | #### 3- You can now play with k8s (...)
45 | ```
46 | $- kubectl get cs
47 | NAME STATUS MESSAGE ERROR
48 | controller-manager Healthy ok
49 | scheduler Healthy ok
50 | etcd-0 Healthy {"health": "true"}
51 |
52 | $- kubectl get pod --all-namespaces
53 | NAMESPACE NAME READY STATUS RESTARTS AGE
54 | kube-system calico-policy-controller-4180354049-63p5v 1/1 Running 0 4m
55 | kube-system kube-dns-1822236363-zzkdq 4/4 Running 0 4m
56 | kube-system kubernetes-dashboard-3313488171-lff6h 1/1 Running 0 4m
57 | kube-system tiller-deploy-1884622320-0glqq 1/1 Running 0 4m
58 |
59 | $- calicoctl get ippool
60 | CIDR
61 | 192.168.0.0/16
62 | fd80:24e2:f998:72d6::/64
63 |
64 | $- kubectl run -i -t alpine --image=alpine --restart=Never
65 | / # ping6 google.com
66 | PING google.com (2404:6800:4003:80d::200e): 56 data bytes
67 | 64 bytes from 2404:6800:4003:80d::200e: seq=0 ttl=57 time=2.129 ms
68 | ```
69 | #### 4- Cluster's integrated components:
70 |
71 | - KubeDNS
72 | - HELM ready
73 | - KubeDashboard
74 | - RBAC only by default
75 | - Calico CNI plugin
76 | - Calico Policy controller
77 | - Calicoctl tool
78 | - UFW to secure access (can be disabled)
79 | - ECDSA cluster certs w/ CFSSL
80 | - IPv4/IPv6
81 |
82 | #### 5- Expose services:
83 |
84 | You can easily expose your services in several ways (see the examples after this list):
85 |
86 | - Only reachable inside the cluster: `ClusterIP`
87 | - Exposed on high TCP ports of the node: `NodePort`
88 | - Exposed publicly: the service's `externalIPs`
89 |
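For example, with a hypothetical `my-app` Deployment (a minimal sketch, not part of this repo; the names and the IP placeholder are illustrative):

```
# Reachable only inside the cluster (the default type):
kubectl expose deployment my-app --port=80 --name=my-app-internal

# Reachable on a high TCP port (30000-32767) of the node:
kubectl expose deployment my-app --port=80 --type=NodePort --name=my-app-nodeport

# Reachable publicly, via the service's externalIPs:
kubectl expose deployment my-app --port=80 --external-ip=__PUBLIC_OR_PRIVATE_IPV4__ --name=my-app-public
```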
90 |
91 | #### 6- Add new nodes:
92 |
93 | You can easily add new nodes to your cluster by launching `./install_new_node.sh`
94 |
95 | Before launching the script, be sure to tweak its head (an example run is shown after the snippet):
96 | ```
97 | nodeIP="__PUBLIC_OR_PRIVATE_IPV4__"
98 | sshUser="root"
99 | setupFirewall="True"
100 | CAcountry="US"
101 | ```
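
Once the head is set, a typical run could look like this (a sketch; key-based SSH to the worker must already work, and the script invokes `install_k8s.sh --worker` from `/opt/Simplekube` on the worker):

```
# from the Simplekube checkout on the master:
./install_new_node.sh

# when the script returns, the new worker should appear:
kubectl get nodes -o wide
```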
102 |
103 | ## Requirements
104 |
105 | This script downloads each k8s component with `wget` and launches k8s via `systemd` units.
106 |
107 | You will need `socat`, `conntrack`, `sudo` and `git` on your servers.
108 |
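On Debian/Ubuntu, installing the prerequisites could look like this (a sketch; run as root or prefix with sudo):

```
apt-get update && apt-get install -y socat conntrack sudo git
```
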
109 | To add a node, you will need to set up key-based SSH authentication between the master and the workers.
110 |
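A minimal key-based setup could look like this (a sketch, run on the master; `root` matches the default `sshUser` in `install_new_node.sh` and the IP is a placeholder):

```
ssh-keygen -t rsa -b 4096
ssh-copy-id root@__PUBLIC_OR_PRIVATE_IPV4__
```
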
111 | If you want IPv6 on the pod side, you need working IPv6 on the hosts.
112 |
113 | Simplekube is tested on `Debian 8/9` and `Ubuntu 16.x/17.x`.
114 |
115 | Feel free to open an Issue if you need assistance!
116 |
--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-architect
--------------------------------------------------------------------------------
/install_k8s.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 | # -----------------------
3 | # please change this value :
4 | hostIP="__PUBLIC_OR_PRIVATE_IPV4__"
5 | # -----------------------
6 | k8sVersion="v1.8.1"
7 | etcdVersion="v3.2.9"
8 | dockerVersion="17.05.0-ce"
9 | cniVersion="v0.6.0"
10 | calicoCNIVersion="v1.11.0"
11 | calicoctlVersion="v1.6.1"
12 | cfsslVersion="v1.2.0"
13 | helmVersion="v2.6.2"
14 | # -----------------------
15 | clusterDomain="cluster.local" # Default k8s domain
16 | enableIPv6="true" # Enable IPv6 on pod side (need IPv6 on host)
17 | IPv6Pool="fd80:24e2:f998:72d6::/64" # Default Calico NAT IPv6 Pool
18 | setupFirewall="True" # Setup UFW
19 | enableIPinIP="True" # IPinIP is needed if VMs are not in the same LAN
20 | CAcountry="US"
21 | # -----------------------
22 | # -----------------------
23 | adminToken=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1)
24 | kubeletToken=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1)
25 |
26 | hostname=$(hostname)
27 |
28 | ## Let's go :
29 | if [[ "$1" == "--master" ]]; then
30 | apt-get update && apt-get -y install socat conntrack
31 |
32 | if [[ "$setupFirewall" == "True" ]]; then
33 | apt-get -y install ufw
34 | ufw allow ssh
35 | ufw allow 6443/tcp
36 | ufw enable
37 | sed -i -- 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/g' /etc/default/ufw
38 | service ufw restart
39 | fi
40 |
41 | ## /etc/hosts
42 | echo "$hostIP $hostname" >> /etc/hosts
43 |
44 | ## Certs
45 | wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
46 | chmod +x cfssl_linux-amd64
47 | sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
48 | wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
49 | chmod +x cfssljson_linux-amd64
50 | sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
51 |
52 | echo '{
53 | "signing": {
54 | "default": {
55 | "expiry": "8760h"
56 | },
57 | "profiles": {
58 | "kubernetes": {
59 | "usages": ["signing", "key encipherment", "server auth", "client auth"],
60 | "expiry": "8760h"
61 | }
62 | }
63 | }
64 | }' > ca-config.json
65 |
66 |
67 | cat > ca-csr.json < kubernetes-csr.json < etcd.service < token.csv < kube-apiserver.service < kube-controller-manager.service < kube-scheduler.service < docker.service < 10-calico.conf < calico.service < kubeconfig < kubelet.service < kube-proxy.service < /etc/sysctl.d/81-ipv4-forward.conf
490 |
491 |
492 | if [[ "$enableIPv6" == "true" ]]; then
493 | echo -n "1" >/proc/sys/net/ipv6/conf/all/forwarding
494 | echo "net.ipv6.conf.all.forwarding=1" > /etc/sysctl.d/80-ipv6-forward.conf
495 | calicoctl delete ippool fd80:24e2:f998:72d6::/64
496 | sleep 2
497 | cat </dev/null
825 | - --url=/healthz-dnsmasq
826 | - --cmd=nslookup kubernetes.default.svc.$clusterDomain 127.0.0.1:10053 >/dev/null
827 | - --url=/healthz-kubedns
828 | - --port=8080
829 | - --quiet
830 | ports:
831 | - containerPort: 8080
832 | protocol: TCP
833 | dnsPolicy: Default # Don't use cluster DNS.
834 | ---
835 |
836 | apiVersion: v1
837 | kind: Service
838 | metadata:
839 | name: kube-dns
840 | namespace: kube-system
841 | labels:
842 | k8s-app: kube-dns
843 | kubernetes.io/cluster-service: "true"
844 | kubernetes.io/name: "KubeDNS"
845 | spec:
846 | selector:
847 | k8s-app: kube-dns
848 | clusterIP: 10.32.0.10
849 | ports:
850 | - name: dns
851 | port: 53
852 | protocol: UDP
853 | - name: dns-tcp
854 | port: 53
855 | protocol: TCP
856 |
857 | EOF
858 |
859 |
860 | # KubeDashboard
861 | kubectl create -f https://git.io/kube-dashboard
862 |
863 | # Init HELM
864 | kubectl create serviceaccount tiller --namespace kube-system
865 |
866 | cat <> /etc/hosts
912 |
913 | if [[ "$setupFirewall" == "True" ]]; then
914 | apt-get update && apt-get -y install ufw
915 | ufw allow ssh
916 | ufw allow from $hostIP
917 | ufw enable
918 | fi
919 |
920 | wget https://get.docker.com/builds/Linux/x86_64/docker-"$dockerVersion".tgz
921 | tar -xvf docker-"$dockerVersion".tgz
922 | sudo cp docker/docker* /usr/bin/
923 |
924 | cat > docker.service < 10-calico.conf < calico.service < kubelet.service < kube-proxy.service < /etc/sysctl.d/81-ipv4-forward.conf
1100 |
1101 | if [[ "$enableIPv6" == "true" ]]; then
1102 | echo -n "1" >/proc/sys/net/ipv6/conf/all/forwarding
1103 | echo "net.ipv6.conf.all.forwarding=1" > /etc/sysctl.d/80-ipv6-forward.conf
1104 | fi
1105 |
1106 | sleep 2
1107 | #kubectl get node -o wide
1108 | exit 0
1109 | fi
1110 |
--------------------------------------------------------------------------------
/install_new_node.sh:
--------------------------------------------------------------------------------
1 | #! /bin/bash
2 | # -----------------------
3 | # Set it up:
4 | # -----------------------
5 | nodeIP="__PUBLIC_OR_PRIVATE_IPV4__"
6 | sshUser="root"
7 | setupFirewall="True"
8 | CAcountry="US"
9 |
10 | nodeHostname=$(ssh $sshUser@$nodeIP 'hostname')
11 |
12 | echo "$nodeIP $nodeHostname" >> /etc/hosts
13 |
14 | if [ ! -f ca.pem ]; then
15 | echo "ca.pem don't exist, lauch ./install_k8s --master before !"
16 | exit 1
17 | fi
18 |
19 | if [ ! -f ca-key.pem ]; then
20 | echo "ca-key.pem don't exist, lauch ./install_k8s --master before !"
21 | exit 1
22 | fi
23 |
24 | if [ ! -f ca-config.json ]; then
25 | echo "ca-config.json don't exist, lauch ./install_k8s --master before !"
26 | exit 1
27 | fi
28 |
29 | if [[ "$setupFirewall" == "True" ]]; then
30 | # apt-get update && apt-get -y install ufw
31 | ufw allow from $nodeIP
32 | fi
33 |
34 |
35 | cat > $nodeHostname-csr.json < /tmp/IP'
76 | ssh $sshUser@$nodeIP "echo '$nodeIP $nodeHostname' >> /etc/hosts"
77 |
78 | ssh $sshUser@$nodeIP '/opt/Simplekube/install_k8s.sh --worker'
79 |
--------------------------------------------------------------------------------
/resources/glusterfs-volumeStore/glusterfs-enpoints.json:
--------------------------------------------------------------------------------
1 | {
2 | "kind": "Endpoints",
3 | "apiVersion": "v1",
4 | "metadata": {
5 | "name": "glusterfs-cluster"
6 | },
7 | "subsets": [
8 | {
9 | "addresses": [
10 | {
11 | "ip": "gluster_IP01"
12 | }
13 | ],
14 | "ports": [
15 | {
16 | "port": 1
17 | }
18 | ]
19 | },
20 | {
21 | "addresses": [
22 | {
23 | "ip": "gluster_IP02"
24 | }
25 | ],
26 | "ports": [
27 | {
28 | "port": 1
29 | }
30 | ]
31 | }
32 | ]
33 | }
34 |
--------------------------------------------------------------------------------
/resources/glusterfs-volumeStore/glusterfs-svc.json:
--------------------------------------------------------------------------------
1 | {
2 | "kind": "Service",
3 | "apiVersion": "v1",
4 | "metadata": {
5 | "name": "glusterfs-cluster"
6 | },
7 | "spec": {
8 | "ports": [
9 | {"port": 1}
10 | ]
11 | }
12 | }
13 |
--------------------------------------------------------------------------------
/resources/glusterfs-volumeStore/mysql-depl-example.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | annotations:
5 | deployment.kubernetes.io/revision: "1"
6 | generation: 1
7 | labels:
8 | run: glusterdb
9 | name: glusterdb
10 | namespace: default
11 | spec:
12 | replicas: 1
13 | selector:
14 | matchLabels:
15 | run: glusterdb
16 | strategy:
17 | rollingUpdate:
18 | maxSurge: 1
19 | maxUnavailable: 0
20 | type: RollingUpdate
21 | template:
22 | metadata:
23 | creationTimestamp: null
24 | labels:
25 | run: glusterdb
26 | spec:
27 | containers:
28 | - image: mysql:latest
29 | name: glusterdb
30 | env:
31 | - name: MYSQL_ROOT_PASSWORD
32 | # Change this password!
33 | value: pAssw0rd
34 | ports:
35 | - containerPort: 3306
36 | name: glusterdb
37 | volumeMounts:
38 | - name: glusterdb
39 | mountPath: /var/lib/mysql
40 | volumes:
41 | - name: glusterdb
42 | glusterfs:
43 | endpoints: glusterfs-cluster
44 | path: kubernetes # the GlusterFS volume (gluster:/kubernetes)
45 | readOnly: false
46 |
47 |
--------------------------------------------------------------------------------
/resources/heapster/grafana-ingress.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Ingress
3 | metadata:
4 | name: monitoring-grafana
5 | namespace: kube-system
6 | spec:
7 | rules:
8 | - host: graphs.kubecluster.ouvrard.it
9 | http:
10 | paths:
11 | - backend:
12 | serviceName: monitoring-grafana
13 | servicePort: 80
14 |
--------------------------------------------------------------------------------
/resources/heapster/grafana-service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 | labels:
5 | kubernetes.io/cluster-service: 'true'
6 | kubernetes.io/name: monitoring-grafana
7 | name: monitoring-grafana
8 | namespace: kube-system
9 | spec:
10 | # In a production setup, we recommend accessing Grafana through an external Loadbalancer
11 | # or through a public IP.
12 | # type: LoadBalancer
13 | ports:
14 | - port: 80
15 | targetPort: 3000
16 | selector:
17 | name: influxGrafana
18 |
--------------------------------------------------------------------------------
/resources/heapster/heapster-controller.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ReplicationController
3 | metadata:
4 | labels:
5 | k8s-app: heapster
6 | name: heapster
7 | version: v6
8 | name: heapster
9 | namespace: kube-system
10 | spec:
11 | replicas: 1
12 | selector:
13 | k8s-app: heapster
14 | version: v6
15 | template:
16 | metadata:
17 | labels:
18 | k8s-app: heapster
19 | version: v6
20 | spec:
21 | containers:
22 | - name: heapster
23 | image: kubernetes/heapster:canary
24 | imagePullPolicy: Always
25 | command:
26 | - /heapster
27 | - --source=kubernetes:https://kubernetes.default
28 | - --sink=influxdb:http://monitoring-influxdb:8086
29 |
--------------------------------------------------------------------------------
/resources/heapster/heapster-service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 | labels:
5 | kubernetes.io/cluster-service: 'true'
6 | kubernetes.io/name: Heapster
7 | name: heapster
8 | namespace: kube-system
9 | spec:
10 | ports:
11 | - port: 80
12 | targetPort: 8082
13 | selector:
14 | k8s-app: heapster
15 |
--------------------------------------------------------------------------------
/resources/heapster/influxdb-grafana-controller.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ReplicationController
3 | metadata:
4 | labels:
5 | name: influxGrafana
6 | name: influxdb-grafana
7 | namespace: kube-system
8 | spec:
9 | replicas: 1
10 | selector:
11 | name: influxGrafana
12 | template:
13 | metadata:
14 | labels:
15 | name: influxGrafana
16 | spec:
17 | containers:
18 | - name: influxdb
19 | image: kubernetes/heapster_influxdb:v0.5
20 | volumeMounts:
21 | - mountPath: /data
22 | name: influxdb-storage
23 | - name: grafana
24 | image: gcr.io/google_containers/heapster_grafana:v2.6.0-2
25 | env:
26 | - name: INFLUXDB_SERVICE_URL
27 | value: http://monitoring-influxdb:8086
28 | # The following env variables are required to make Grafana accessible via
29 | # the kubernetes api-server proxy. On production clusters, we recommend
30 | # removing these env variables, setup auth for grafana, and expose the grafana
31 | # service using a LoadBalancer or a public IP.
32 | - name: GF_AUTH_BASIC_ENABLED
33 | value: "true"
34 | - name: GF_AUTH_ANONYMOUS_ENABLED
35 | value: "false"
36 | - name: GF_AUTH_ANONYMOUS_ORG_ROLE
37 | value: Admin
38 | - name: GF_SERVER_ROOT_URL
39 | value: /
40 | #value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
41 | volumeMounts:
42 | - mountPath: /var
43 | name: grafana-storage
44 | volumes:
45 | - name: influxdb-storage
46 | emptyDir: {}
47 | - name: grafana-storage
48 | emptyDir: {}
49 |
--------------------------------------------------------------------------------
/resources/heapster/influxdb-service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 | labels: null
5 | name: monitoring-influxdb
6 | namespace: kube-system
7 | spec:
8 | ports:
9 | - name: http
10 | port: 8083
11 | targetPort: 8083
12 | - name: api
13 | port: 8086
14 | targetPort: 8086
15 | selector:
16 | name: influxGrafana
17 |
--------------------------------------------------------------------------------