├── .gitignore ├── README.md ├── chapter-1 ├── README.md ├── vagrant │ ├── Vagrantfile │ └── golang-install.sh └── web-server.go ├── chapter-2 └── README.md ├── chapter-3 ├── Dockerfile ├── Readme.md ├── vagrant-docker │ ├── Vagrantfile │ └── docker-install.sh ├── vagrant-host-2 │ ├── Vagrantfile │ ├── docker-install.sh │ └── ubuntu-xenial-16.04-cloudimg-console.log └── web-server.go ├── chapter-4 ├── Dockerfile ├── README.md ├── connectivity-check.yaml ├── database.yaml ├── dnsutils.yaml ├── go.mod ├── go.sum ├── kind-config.yaml ├── layer_3_net_pol.yaml ├── layer_7_netpol.yml ├── web-server-netpol.yaml ├── web-server.go └── web.yaml ├── chapter-5 ├── Dockerfile ├── README.adoc ├── app-linkerd-dashboard.png ├── app-stats.png ├── container_connectivity.png ├── database.yaml ├── dnsutils.yaml ├── go.mod ├── go.sum ├── ingress-example-2.yaml ├── ingress-rule.yaml ├── ingress.yaml ├── kind-ingress.yaml ├── linkerd-dashboard.png ├── metallb-configmap.yaml ├── metallb.yaml ├── mlb-ns.yaml ├── nginx-ingress-controller.yml ├── service-clusterip.yaml ├── service-external.yml ├── service-headless.yml ├── services-loadbalancer.yaml ├── services-nodeport.yaml ├── web-server.go └── web.yaml └── chapter-6 ├── AWS ├── README.adoc ├── alb-rules.yml ├── crds.yml ├── database.yml ├── dnsutils.yml ├── iam_policy.json └── web.yml └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | .idea/ 2 | ./vagrant/.vagrant/* 3 | ./vagrant-docker/.vagrant/* 4 | ./vagrant-host-2/.vagrant/* 5 | advanced_networking_code_examples 6 | .DS_Store 7 | chapter-1/vagrant/.vagrant/ 8 | chapter-2/vagrant/.vagrant/ 9 | chapter-3/vagrant/.vagrant/ 10 | chapter-2/vagrant-docker/.vagrant/ 11 | chapter-1/vagrant/ubuntu-xenial-16.04-cloudimg-console.log -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Networking and Kubernetes 
Code Examples 2 | 3 | ### Chapter 1 4 | - [Golang Minimal Web Server](chapter-1) 5 | 6 | ### Chapter 2 7 | - [Container Networking with Vagrant and Docker](chapter-2) 8 | 9 | ### Chapter 3 10 | - [Building Docker Container for Golang Minimal Web Server](chapter-3) 11 | 12 | ### Chapter 4 13 | - [Network Policies and CNI with Cilium](chapter-4) 14 | 15 | ### Chapter 5 16 | - [Kubernetes Networking Abstractions](chapter-5) 17 | 18 | ### Chapter 6 19 | - [Kubernetes in the Cloud](chapter-6) 20 | -------------------------------------------------------------------------------- /chapter-1/README.md: -------------------------------------------------------------------------------- 1 | # Chapter 1 Networking Introduction 2 | 3 | 1. Start up our Vagrant Host 4 | 2. Start Golang web server 5 | 3. Test Webserver with Curl 6 | 7 | A client will make an HTTP request to our minimal Go web server, and the server will respond with text 8 | containing "Hello". The web server runs locally inside an Ubuntu virtual machine to test the full TCP/IP stack. 9 | 10 | ## 1. Start up our Vagrant Host 11 | Vagrant installation instructions can be found [here](https://www.vagrantup.com/) 12 | 13 | https://app.vagrantup.com/ubuntu/boxes/xenial64 14 | 15 | Let's start our Vagrant host. 16 | 17 | ```bash 18 | ± |master ?:3 ✗| → cd vagrant 19 | ± |master ✓| → vagrant up 20 | Bringing machine 'default' up with 'virtualbox' provider... 21 | ==> default: Box 'ubuntu/xenial64' could not be found. Attempting to find and install...
22 | default: Box Provider: virtualbox 23 | default: Box Version: >= 0 24 | ==> default: Loading metadata for box 'ubuntu/xenial64' 25 | default: URL: https://vagrantcloud.com/ubuntu/xenial64 26 | ==> default: Adding box 'ubuntu/xenial64' (v20200904.0.0) for provider: virtualbox 27 | default: Downloading: https://vagrantcloud.com/ubuntu/boxes/xenial64/versions/20200904.0.0/providers/virtualbox.box 28 | Download redirected to host: cloud-images.ubuntu.com 29 | ==> default: Successfully added box 'ubuntu/xenial64' (v20200904.0.0) for 'virtualbox'! 30 | ==> default: Importing base box 'ubuntu/xenial64'... 31 | ==> default: Matching MAC address for NAT networking... 32 | ==> default: Checking if box 'ubuntu/xenial64' version '20200904.0.0' is up to date... 33 | ==> default: Setting the name of the VM: advanced_networking_code_examples_default_1599414274357_7492 34 | ==> default: Clearing any previously set network interfaces... 35 | ==> default: Preparing network interfaces based on configuration... 36 | default: Adapter 1: nat 37 | ==> default: Forwarding ports... 38 | default: 22 (guest) => 2222 (host) (adapter 1) 39 | ==> default: Running 'pre-boot' VM customizations... 40 | ==> default: Booting VM... 41 | ==> default: Waiting for machine to boot. This may take a few minutes... 42 | default: SSH address: 127.0.0.1:2222 43 | default: SSH username: vagrant 44 | default: SSH auth method: private key 45 | default: 46 | default: Vagrant insecure key detected. Vagrant will automatically replace 47 | default: this with a newly generated keypair for better security. 48 | default: 49 | default: Inserting generated public key within guest... 50 | default: Removing insecure key from the guest if it's present... 51 | default: Key inserted! Disconnecting and reconnecting using new SSH key... 52 | ==> default: Machine booted and ready! 
53 | Got different reports about installed GuestAdditions version: 54 | Virtualbox on your host claims: 5.0.18 55 | VBoxService inside the vm claims: 5.1.38 56 | Going on, assuming VBoxService is correct... 57 | [default] A Virtualbox Guest Additions installation was found but no tools to rebuild or start them. 58 | Got different reports about installed GuestAdditions version: 59 | Virtualbox on your host claims: 5.0.18 60 | VBoxService inside the vm claims: 5.1.38 61 | Going on, assuming VBoxService is correct... 62 | Reading package lists... 63 | Building dependency tree... 64 | Reading state information... 65 | Package 'virtualbox-guest-dkms' is not installed, so not removed 66 | Package 'virtualbox-guest-x11' is not installed, so not removed 67 | The following packages will be REMOVED: 68 | virtualbox-guest-utils* 69 | 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. 70 | After this operation, 2,339 kB disk space will be freed. 71 | (Reading database ... 54304 files and directories currently installed.) 72 | Removing virtualbox-guest-utils (5.1.38-dfsg-0ubuntu1.16.04.3) ... 73 | Purging configuration files for virtualbox-guest-utils (5.1.38-dfsg-0ubuntu1.16.04.3) ... 74 | Processing triggers for man-db (2.7.5-1) ... 75 | Reading package lists... 76 | Building dependency tree... 77 | Reading state information... 78 | linux-headers-4.4.0-189-generic is already the newest version (4.4.0-189.219). 79 | linux-headers-4.4.0-189-generic set to manually installed. 
80 | The following additional packages will be installed: 81 | binutils cpp cpp-5 fakeroot gcc gcc-5 libasan2 libatomic1 libc-dev-bin 82 | libc6-dev libcc1-0 libcilkrts5 libfakeroot libgcc-5-dev libgomp1 libisl15 83 | libitm1 liblsan0 libmpc3 libmpx0 libquadmath0 libtsan0 libubsan0 84 | linux-libc-dev make manpages-dev 85 | Suggested packages: 86 | binutils-doc cpp-doc gcc-5-locales gcc-multilib autoconf automake libtool 87 | flex bison gdb gcc-doc gcc-5-multilib gcc-5-doc libgcc1-dbg libgomp1-dbg 88 | libitm1-dbg libatomic1-dbg libasan2-dbg liblsan0-dbg libtsan0-dbg 89 | libubsan0-dbg libcilkrts5-dbg libmpx0-dbg libquadmath0-dbg glibc-doc 90 | make-doc 91 | The following NEW packages will be installed: 92 | binutils cpp cpp-5 dkms fakeroot gcc gcc-5 libasan2 libatomic1 libc-dev-bin 93 | libc6-dev libcc1-0 libcilkrts5 libfakeroot libgcc-5-dev libgomp1 libisl15 94 | libitm1 liblsan0 libmpc3 libmpx0 libquadmath0 libtsan0 libubsan0 95 | linux-libc-dev make manpages-dev 96 | 0 upgraded, 27 newly installed, 0 to remove and 0 not upgraded. 97 | Need to get 27.9 MB of archives. 98 | After this operation, 101 MB of additional disk space will be used. 
99 | Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libmpc3 amd64 1.0.3-1 [39.7 kB] 100 | Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 binutils amd64 2.26.1-1ubuntu1~16.04.8 [2,312 kB] 101 | Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 libisl15 amd64 0.16.1-1 [524 kB] 102 | Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 cpp-5 amd64 5.4.0-6ubuntu1~16.04.12 [7,783 kB] 103 | Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 cpp amd64 4:5.3.1-1ubuntu1 [27.7 kB] 104 | Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcc1-0 amd64 5.4.0-6ubuntu1~16.04.12 [38.8 kB] 105 | Get:7 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgomp1 amd64 5.4.0-6ubuntu1~16.04.12 [55.2 kB] 106 | Get:8 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libitm1 amd64 5.4.0-6ubuntu1~16.04.12 [27.4 kB] 107 | Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libatomic1 amd64 5.4.0-6ubuntu1~16.04.12 [8,892 B] 108 | Get:10 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libasan2 amd64 5.4.0-6ubuntu1~16.04.12 [265 kB] 109 | Get:11 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 liblsan0 amd64 5.4.0-6ubuntu1~16.04.12 [105 kB] 110 | Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libtsan0 amd64 5.4.0-6ubuntu1~16.04.12 [244 kB] 111 | Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libubsan0 amd64 5.4.0-6ubuntu1~16.04.12 [95.3 kB] 112 | Get:14 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcilkrts5 amd64 5.4.0-6ubuntu1~16.04.12 [40.0 kB] 113 | Get:15 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libmpx0 amd64 5.4.0-6ubuntu1~16.04.12 [9,762 B] 114 | Get:16 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libquadmath0 amd64 5.4.0-6ubuntu1~16.04.12 [131 kB] 115 | Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libgcc-5-dev amd64 5.4.0-6ubuntu1~16.04.12 [2,239 kB] 116 | Get:18 
http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 gcc-5 amd64 5.4.0-6ubuntu1~16.04.12 [8,612 kB] 117 | Get:19 http://archive.ubuntu.com/ubuntu xenial/main amd64 gcc amd64 4:5.3.1-1ubuntu1 [5,244 B] 118 | Get:20 http://archive.ubuntu.com/ubuntu xenial/main amd64 make amd64 4.1-6 [151 kB] 119 | Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dkms all 2.2.0.3-2ubuntu11.8 [66.4 kB] 120 | Get:22 http://archive.ubuntu.com/ubuntu xenial/main amd64 libfakeroot amd64 1.20.2-1ubuntu1 [25.5 kB] 121 | Get:23 http://archive.ubuntu.com/ubuntu xenial/main amd64 fakeroot amd64 1.20.2-1ubuntu1 [61.8 kB] 122 | Get:24 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libc-dev-bin amd64 2.23-0ubuntu11.2 [68.8 kB] 123 | Get:25 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-libc-dev amd64 4.4.0-189.219 [852 kB] 124 | Get:26 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libc6-dev amd64 2.23-0ubuntu11.2 [2,083 kB] 125 | Get:27 http://archive.ubuntu.com/ubuntu xenial/main amd64 manpages-dev all 4.04-2 [2,048 kB] 126 | Fetched 27.9 MB in 3s (7,850 kB/s) 127 | Selecting previously unselected package libmpc3:amd64. 128 | (Reading database ... 54291 files and directories currently installed.) 129 | Preparing to unpack .../libmpc3_1.0.3-1_amd64.deb ... 130 | Unpacking libmpc3:amd64 (1.0.3-1) ... 131 | Selecting previously unselected package binutils. 132 | Preparing to unpack .../binutils_2.26.1-1ubuntu1~16.04.8_amd64.deb ... 133 | Unpacking binutils (2.26.1-1ubuntu1~16.04.8) ... 134 | Selecting previously unselected package libisl15:amd64. 135 | Preparing to unpack .../libisl15_0.16.1-1_amd64.deb ... 136 | Unpacking libisl15:amd64 (0.16.1-1) ... 137 | Selecting previously unselected package cpp-5. 138 | Preparing to unpack .../cpp-5_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 139 | Unpacking cpp-5 (5.4.0-6ubuntu1~16.04.12) ... 140 | Selecting previously unselected package cpp. 
141 | Preparing to unpack .../cpp_4%3a5.3.1-1ubuntu1_amd64.deb ... 142 | Unpacking cpp (4:5.3.1-1ubuntu1) ... 143 | Selecting previously unselected package libcc1-0:amd64. 144 | Preparing to unpack .../libcc1-0_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 145 | Unpacking libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 146 | Selecting previously unselected package libgomp1:amd64. 147 | Preparing to unpack .../libgomp1_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 148 | Unpacking libgomp1:amd64 (5.4.0-6ubuntu1~16.04.12) ... 149 | Selecting previously unselected package libitm1:amd64. 150 | Preparing to unpack .../libitm1_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 151 | Unpacking libitm1:amd64 (5.4.0-6ubuntu1~16.04.12) ... 152 | Selecting previously unselected package libatomic1:amd64. 153 | Preparing to unpack .../libatomic1_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 154 | Unpacking libatomic1:amd64 (5.4.0-6ubuntu1~16.04.12) ... 155 | Selecting previously unselected package libasan2:amd64. 156 | Preparing to unpack .../libasan2_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 157 | Unpacking libasan2:amd64 (5.4.0-6ubuntu1~16.04.12) ... 158 | Selecting previously unselected package liblsan0:amd64. 159 | Preparing to unpack .../liblsan0_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 160 | Unpacking liblsan0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 161 | Selecting previously unselected package libtsan0:amd64. 162 | Preparing to unpack .../libtsan0_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 163 | Unpacking libtsan0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 164 | Selecting previously unselected package libubsan0:amd64. 165 | Preparing to unpack .../libubsan0_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 166 | Unpacking libubsan0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 167 | Selecting previously unselected package libcilkrts5:amd64. 168 | Preparing to unpack .../libcilkrts5_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 169 | Unpacking libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.12) ... 170 | Selecting previously unselected package libmpx0:amd64. 
171 | Preparing to unpack .../libmpx0_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 172 | Unpacking libmpx0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 173 | Selecting previously unselected package libquadmath0:amd64. 174 | Preparing to unpack .../libquadmath0_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 175 | Unpacking libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 176 | Selecting previously unselected package libgcc-5-dev:amd64. 177 | Preparing to unpack .../libgcc-5-dev_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 178 | Unpacking libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.12) ... 179 | Selecting previously unselected package gcc-5. 180 | Preparing to unpack .../gcc-5_5.4.0-6ubuntu1~16.04.12_amd64.deb ... 181 | Unpacking gcc-5 (5.4.0-6ubuntu1~16.04.12) ... 182 | Selecting previously unselected package gcc. 183 | Preparing to unpack .../gcc_4%3a5.3.1-1ubuntu1_amd64.deb ... 184 | Unpacking gcc (4:5.3.1-1ubuntu1) ... 185 | Selecting previously unselected package make. 186 | Preparing to unpack .../archives/make_4.1-6_amd64.deb ... 187 | Unpacking make (4.1-6) ... 188 | Selecting previously unselected package dkms. 189 | Preparing to unpack .../dkms_2.2.0.3-2ubuntu11.8_all.deb ... 190 | Unpacking dkms (2.2.0.3-2ubuntu11.8) ... 191 | Selecting previously unselected package libfakeroot:amd64. 192 | Preparing to unpack .../libfakeroot_1.20.2-1ubuntu1_amd64.deb ... 193 | Unpacking libfakeroot:amd64 (1.20.2-1ubuntu1) ... 194 | Selecting previously unselected package fakeroot. 195 | Preparing to unpack .../fakeroot_1.20.2-1ubuntu1_amd64.deb ... 196 | Unpacking fakeroot (1.20.2-1ubuntu1) ... 197 | Selecting previously unselected package libc-dev-bin. 198 | Preparing to unpack .../libc-dev-bin_2.23-0ubuntu11.2_amd64.deb ... 199 | Unpacking libc-dev-bin (2.23-0ubuntu11.2) ... 200 | Selecting previously unselected package linux-libc-dev:amd64. 201 | Preparing to unpack .../linux-libc-dev_4.4.0-189.219_amd64.deb ... 202 | Unpacking linux-libc-dev:amd64 (4.4.0-189.219) ... 
203 | Selecting previously unselected package libc6-dev:amd64. 204 | Preparing to unpack .../libc6-dev_2.23-0ubuntu11.2_amd64.deb ... 205 | Unpacking libc6-dev:amd64 (2.23-0ubuntu11.2) ... 206 | Selecting previously unselected package manpages-dev. 207 | Preparing to unpack .../manpages-dev_4.04-2_all.deb ... 208 | Unpacking manpages-dev (4.04-2) ... 209 | Processing triggers for libc-bin (2.23-0ubuntu11.2) ... 210 | Processing triggers for man-db (2.7.5-1) ... 211 | Setting up libmpc3:amd64 (1.0.3-1) ... 212 | Setting up binutils (2.26.1-1ubuntu1~16.04.8) ... 213 | Setting up libisl15:amd64 (0.16.1-1) ... 214 | Setting up cpp-5 (5.4.0-6ubuntu1~16.04.12) ... 215 | Setting up cpp (4:5.3.1-1ubuntu1) ... 216 | Setting up libcc1-0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 217 | Setting up libgomp1:amd64 (5.4.0-6ubuntu1~16.04.12) ... 218 | Setting up libitm1:amd64 (5.4.0-6ubuntu1~16.04.12) ... 219 | Setting up libatomic1:amd64 (5.4.0-6ubuntu1~16.04.12) ... 220 | Setting up libasan2:amd64 (5.4.0-6ubuntu1~16.04.12) ... 221 | Setting up liblsan0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 222 | Setting up libtsan0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 223 | Setting up libubsan0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 224 | Setting up libcilkrts5:amd64 (5.4.0-6ubuntu1~16.04.12) ... 225 | Setting up libmpx0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 226 | Setting up libquadmath0:amd64 (5.4.0-6ubuntu1~16.04.12) ... 227 | Setting up libgcc-5-dev:amd64 (5.4.0-6ubuntu1~16.04.12) ... 228 | Setting up gcc-5 (5.4.0-6ubuntu1~16.04.12) ... 229 | Setting up gcc (4:5.3.1-1ubuntu1) ... 230 | Setting up make (4.1-6) ... 231 | Setting up dkms (2.2.0.3-2ubuntu11.8) ... 232 | Setting up libfakeroot:amd64 (1.20.2-1ubuntu1) ... 233 | Setting up fakeroot (1.20.2-1ubuntu1) ... 234 | update-alternatives: using /usr/bin/fakeroot-sysv to provide /usr/bin/fakeroot (fakeroot) in auto mode 235 | Setting up libc-dev-bin (2.23-0ubuntu11.2) ... 236 | Setting up linux-libc-dev:amd64 (4.4.0-189.219) ... 
237 | Setting up libc6-dev:amd64 (2.23-0ubuntu11.2) ... 238 | Setting up manpages-dev (4.04-2) ... 239 | Processing triggers for libc-bin (2.23-0ubuntu11.2) ... 240 | Copy iso file /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso 241 | Mounting Virtualbox Guest Additions ISO to: /mnt 242 | mount: /dev/loop0 is write-protected, mounting read-only 243 | Installing Virtualbox Guest Additions 5.2.38 - guest version is 5.1.38 244 | Verifying archive integrity... All good. 245 | Uncompressing VirtualBox 5.2.38 Guest Additions for Linux........ 246 | VirtualBox Guest Additions installer 247 | Copying additional installer modules ... 248 | Installing additional modules ... 249 | VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel 250 | modules. This may take a while. 251 | VirtualBox Guest Additions: To build modules for other installed kernels, run 252 | VirtualBox Guest Additions: /sbin/rcvboxadd quicksetup 253 | VirtualBox Guest Additions: Building the modules for kernel 4.4.0-189-generic. 254 | update-initramfs: Generating /boot/initrd.img-4.4.0-189-generic 255 | VirtualBox Guest Additions: Starting. 256 | Unmounting Virtualbox Guest Additions ISO from: /mnt 257 | ==> default: Checking for guest additions in VM... 258 | ==> default: Mounting shared folders... 259 | default: /vagrant => /Users/strongjz/Documents/code/advanced_networking_code_examples 260 | ``` 261 | 262 | ## 2. Start Golang web server 263 | 264 | ```bash 265 | export PATH=$PATH:/usr/local/go/bin 266 | go run web-server.go 267 | 268 | ``` 269 | 270 | ## 3. Test Webserver with Curl 271 | In a new terminal window, ssh again into the vagrant machine to execute the curl command. 272 | 273 | ```bash 274 | vagrant@ubuntu-xenial:~$ curl -vvv localhost:8080 275 | * Rebuilt URL to: localhost:8080/ 276 | * Trying 127.0.0.1... 
277 | * Connected to localhost (127.0.0.1) port 8080 (#0) 278 | > GET / HTTP/1.1 279 | > Host: localhost:8080 280 | > User-Agent: curl/7.47.0 281 | > Accept: */* 282 | > 283 | < HTTP/1.1 200 OK 284 | < Date: Sun, 20 Jun 2021 19:59:52 GMT 285 | < Content-Length: 5 286 | < Content-Type: text/plain; charset=utf-8 287 | < 288 | * Connection #0 to host localhost left intact 289 | Hellovagrant@ubuntu-xenial:~$ 290 | ``` 291 | 292 | -------------------------------------------------------------------------------- /chapter-1/vagrant/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | config.vm.box = "ubuntu/xenial64" 6 | config.vm.network "public_network", use_dhcp_assigned_default_route: true 7 | config.vm.provision "shell", path: "golang-install.sh" 8 | config.vm.provision "file", source: "../web-server.go", destination: "web-server.go" 9 | end 10 | -------------------------------------------------------------------------------- /chapter-1/vagrant/golang-install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -ex 2 | 3 | curl -L -o go1.16.5.linux-amd64.tar.gz https://golang.org/dl/go1.16.5.linux-amd64.tar.gz 4 | 5 | echo "b12c23023b68de22f74c0524f10b753e7b08b1504cb7e417eccebdd3fae49061 go1.16.5.linux-amd64.tar.gz" | sha256sum -c 6 | 7 | rm -rf /usr/local/go && tar -C /usr/local -xzf go1.16.5.linux-amd64.tar.gz 8 | 9 | export PATH=$PATH:/usr/local/go/bin 10 | 11 | go version 12 | -------------------------------------------------------------------------------- /chapter-1/web-server.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "net/http" 6 | ) 7 | 8 | func hello(w http.ResponseWriter, r *http.Request) { 9 | fmt.Fprintf(w, "Hello") 10 | } 11 | 12 | func main() { 13 | http.HandleFunc("/", hello) 14 | 
http.ListenAndServe("0.0.0.0:8080", nil) 15 | } -------------------------------------------------------------------------------- /chapter-2/README.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/strongjz/Networking-and-Kubernetes/bd0e0702ff21473d1216da8a23583473c26da09e/chapter-2/README.md -------------------------------------------------------------------------------- /chapter-3/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:1.15 AS builder 2 | WORKDIR /opt 3 | COPY web-server.go . 4 | RUN CGO_ENABLED=0 GOOS=linux go build -o web-server . 5 | 6 | FROM golang:1.15 7 | WORKDIR /opt 8 | COPY --from=0 /opt/web-server . 9 | CMD ["/opt/web-server"] 10 | -------------------------------------------------------------------------------- /chapter-3/Readme.md: -------------------------------------------------------------------------------- 1 | 2 | # Chapter 3 Container Networking Intro 3 | 4 | The following steps show how to create the networking setup. 5 | 6 | 1. Create a host with a root network namespace. 7 | 2. Create two new network namespaces. 8 | 3. Create two veth pairs. 9 | 4. Move one side of each veth pair into a new network namespace. 10 | 5. Address the side of each veth pair inside its new network namespace. 11 | 6. Create a bridge interface. 12 | 7. Attach the bridge to the host interface. 13 | 8. Attach one side of each veth pair to the bridge interface. 14 | 9. Test. 15 | 16 | ```bash 17 | br-veth0 veth0 +-------------+ 18 | +--------------------- + net0 | 19 | | 192.168.1.100 +-------------+ 20 | +--------+ 21 | | | 22 | | br1 | 192.168.1.10 23 | | | 24 | +--------+ 25 | | veth1 +-------------+ 26 | +---------------------+ net1 | 27 | br-veth1 192.168.1.101 +-------------+ 28 | ``` 29 | 30 | ## 1. Create a host with a root network namespace. 31 | 32 | Follow the steps from Chapter 1 to start a Vagrant host.
33 | 34 | Connect to the machine. 35 | 36 | ```bash 37 | vagrant ssh 38 | ``` 39 | ## 2. Create two new network namespaces. 40 | 41 | ```bash 42 | vagrant@ubuntu-xenial:~$ sudo ip netns list 43 | vagrant@ubuntu-xenial:~$ sudo ip netns add net0 44 | vagrant@ubuntu-xenial:~$ sudo ip netns add net1 45 | vagrant@ubuntu-xenial:~$ sudo ip netns list 46 | net1 47 | net0 48 | ``` 49 | 50 | ## 3. Create veth pairs. 51 | 52 | ```bash 53 | vagrant@ubuntu-xenial:~$ sudo ip link add veth0 type veth peer name br-veth0 54 | vagrant@ubuntu-xenial:~$ sudo ip link add veth1 type veth peer name br-veth1 55 | ``` 56 | 57 | ```bash 58 | vagrant@ubuntu-xenial:~$ ip link list veth1 59 | 7: veth1@br-veth1: mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 60 | link/ether 2a:92:85:81:50:50 brd ff:ff:ff:ff:ff:ff 61 | vagrant@ubuntu-xenial:~$ ip link list veth0 62 | 5: veth0@br-veth0: mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 63 | link/ether 0a:a1:be:3c:89:3d brd ff:ff:ff:ff:ff:ff 64 | ``` 65 | 66 | ## 4. Move one side of each veth pair into a new network namespace. 67 | 68 | Move veth0 into the net0 namespace, and veth1 into net1. 69 | 70 | ```bash 71 | vagrant@ubuntu-xenial:~$ sudo ip link set veth0 netns net0 72 | vagrant@ubuntu-xenial:~$ sudo ip link set veth1 netns net1 73 | ``` 74 | 75 | Examine the network namespaces. 76 | 77 | ```bash 78 | vagrant@ubuntu-xenial:~$ sudo ip netns exec net1 ip link list veth1 79 | 7: veth1@if6: mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 80 | link/ether 2a:92:85:81:50:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0 81 | 82 | vagrant@ubuntu-xenial:~$ sudo ip netns exec net0 ip link list veth0 83 | 5: veth0@if4: mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000 84 | link/ether 0a:a1:be:3c:89:3d brd ff:ff:ff:ff:ff:ff link-netnsid 0 85 | 86 | ``` 87 | 88 | ## 5. Address the side of each veth pair inside its new network namespace.
89 | 90 | Address the veths. 91 | 92 | ```bash 93 | vagrant@ubuntu-xenial:~$ sudo ip netns exec net0 ip addr add "192.168.1.100/24" dev veth0 94 | vagrant@ubuntu-xenial:~$ sudo ip netns exec net1 ip addr add "192.168.1.101/24" dev veth1 95 | ``` 96 | 97 | Bring up the veth side inside each namespace. 98 | ```bash 99 | sudo ip netns exec net0 ip link set dev veth0 up 100 | sudo ip netns exec net1 ip link set dev veth1 up 101 | ``` 102 | 103 | ```bash 104 | vagrant@ubuntu-xenial:~$ sudo ip netns exec net0 ip link list 105 | 1: lo: mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1 106 | link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 107 | 5: veth0@if4: mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000 108 | link/ether 0a:a1:be:3c:89:3d brd ff:ff:ff:ff:ff:ff link-netnsid 0 109 | vagrant@ubuntu-xenial:~$ sudo ip netns exec net1 ip link list 110 | 1: lo: mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1 111 | link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 112 | 7: veth1@if6: mtu 1500 qdisc noqueue state LOWERLAYERDOWN mode DEFAULT group default qlen 1000 113 | link/ether 2a:92:85:81:50:50 brd ff:ff:ff:ff:ff:ff link-netnsid 0 114 | 115 | ``` 116 | 117 | ## 6. Create a bridge interface. 118 | 119 | Create a bridge interface, bring it up, and address it. 120 | 121 | ```bash 122 | sudo ip link add name br1 type bridge 123 | sudo ip link set br1 up 124 | sudo ip addr add 192.168.1.10/24 brd + dev br1 125 | ``` 126 | 127 | ## 7. Attach the bridge to the host interface. 128 | 129 | ```bash 130 | sudo ip link set enp0s8 master br1 131 | ``` 132 | 133 | Bring up the bridge side of each veth pair. 134 | ```bash 135 | sudo ip link set br-veth0 up 136 | sudo ip link set br-veth1 up 137 | ``` 138 | 139 | ## 8. Attach one side of each veth pair to the bridge interface. 140 | 141 | ```bash 142 | sudo ip link set br-veth0 master br1 143 | sudo ip link set br-veth1 master br1 144 | ``` 145 | 146 | ## 9. Test.
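Before testing, it can be useful to see steps 2–8 above collected into a single script. This is a sketch, not part of the original repository: it assumes a fresh Vagrant host with root privileges and the same interface names used above (including the `enp0s8` host interface from step 7).

```shell
#!/bin/bash -ex
# Replay of steps 2-8: namespaces, veth pairs, addressing, and bridging.

# Step 2: create the network namespaces
ip netns add net0
ip netns add net1

# Step 3: create the veth pairs
ip link add veth0 type veth peer name br-veth0
ip link add veth1 type veth peer name br-veth1

# Step 4: move one side of each pair into its namespace
ip link set veth0 netns net0
ip link set veth1 netns net1

# Step 5: address the namespaced sides and bring them up
ip netns exec net0 ip addr add 192.168.1.100/24 dev veth0
ip netns exec net1 ip addr add 192.168.1.101/24 dev veth1
ip netns exec net0 ip link set dev veth0 up
ip netns exec net1 ip link set dev veth1 up

# Step 6: create, enable, and address the bridge
ip link add name br1 type bridge
ip link set br1 up
ip addr add 192.168.1.10/24 brd + dev br1

# Step 7: attach the host interface to the bridge
ip link set enp0s8 master br1

# Step 8: bring up the bridge-side veth ends and attach them to the bridge
ip link set br-veth0 up
ip link set br-veth1 up
ip link set br-veth0 master br1
ip link set br-veth1 master br1
```

Run it as root inside the Vagrant host; the ping tests below should then pass unchanged.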
147 | 148 | From the host, ping .10, .100, and .101. 149 | 150 | ```bash 151 | vagrant@ubuntu-xenial:~$ ping 192.168.1.10 152 | PING 192.168.1.10 (192.168.1.10) 56(84) bytes of data. 153 | 64 bytes from 192.168.1.10: icmp_seq=1 ttl=64 time=0.011 ms 154 | 64 bytes from 192.168.1.10: icmp_seq=2 ttl=64 time=0.038 ms 155 | 64 bytes from 192.168.1.10: icmp_seq=3 ttl=64 time=0.038 ms 156 | 64 bytes from 192.168.1.10: icmp_seq=4 ttl=64 time=0.021 ms 157 | ``` 158 | 159 | ```bash 160 | vagrant@ubuntu-xenial:~$ ping 192.168.1.100 161 | PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data. 162 | 64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=0.017 ms 163 | 64 bytes from 192.168.1.100: icmp_seq=2 ttl=64 time=0.027 ms 164 | 64 bytes from 192.168.1.100: icmp_seq=3 ttl=64 time=0.048 ms 165 | ``` 166 | 167 | ```bash 168 | vagrant@ubuntu-xenial:~$ ping 192.168.1.101 169 | PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data. 170 | 64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=0.049 ms 171 | 64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=0.028 ms 172 | 64 bytes from 192.168.1.101: icmp_seq=3 ttl=64 time=0.038 ms 173 | ``` 174 | Now, from the respective namespaces, ping each other. 175 | 176 | ```bash 177 | vagrant@ubuntu-xenial:~$ sudo ip netns exec net0 ping -c 4 192.168.1.101 178 | PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data. 179 | 64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=0.085 ms 180 | 64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=0.054 ms 181 | 64 bytes from 192.168.1.101: icmp_seq=3 ttl=64 time=0.029 ms 182 | 64 bytes from 192.168.1.101: icmp_seq=4 ttl=64 time=0.030 ms 183 | 184 | --- 192.168.1.101 ping statistics --- 185 | 4 packets transmitted, 4 received, 0% packet loss, time 3000ms 186 | rtt min/avg/max/mdev = 0.029/0.049/0.085/0.023 ms 187 | ``` 188 | 189 | ```bash 190 | vagrant@ubuntu-xenial:~$ sudo ip netns exec net1 ping -c 4 192.168.1.100 191 | PING 192.168.1.100 (192.168.1.100) 56(84) bytes of data.
192 | 64 bytes from 192.168.1.100: icmp_seq=1 ttl=64 time=0.020 ms 193 | 64 bytes from 192.168.1.100: icmp_seq=2 ttl=64 time=0.045 ms 194 | 64 bytes from 192.168.1.100: icmp_seq=3 ttl=64 time=0.029 ms 195 | 64 bytes from 192.168.1.100: icmp_seq=4 ttl=64 time=0.034 ms 196 | 197 | --- 192.168.1.100 ping statistics --- 198 | 4 packets transmitted, 4 received, 0% packet loss, time 2998ms 199 | rtt min/avg/max/mdev = 0.020/0.032/0.045/0.009 ms 200 | ``` 201 | 202 | 203 | -------------------------------------------------------------------------------- /chapter-3/vagrant-docker/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | config.vm.box = "ubuntu/xenial64" 6 | config.vm.network "public_network", use_dhcp_assigned_default_route: true 7 | config.vm.provision "shell", path: "docker-install.sh" 8 | end 9 | -------------------------------------------------------------------------------- /chapter-3/vagrant-docker/docker-install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -ex 2 | 3 | sudo apt-get update -y 4 | 5 | sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common 6 | 7 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 8 | 9 | sudo apt-key fingerprint 0EBFCD88 10 | 11 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 12 | 13 | sudo apt-get -y update 14 | 15 | sudo apt-get install -y docker-ce docker-ce-cli containerd.io 16 | 17 | sudo docker run hello-world 18 | 19 | 20 | -------------------------------------------------------------------------------- /chapter-3/vagrant-host-2/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | 
Vagrant.configure("2") do |config| 5 | config.vm.hostname = "host-2" 6 | config.vm.box = "ubuntu/xenial64" 7 | config.vm.network "public_network", use_dhcp_assigned_default_route: true 8 | config.vm.provision "shell", path: "docker-install.sh" 9 | end 10 | -------------------------------------------------------------------------------- /chapter-3/vagrant-host-2/docker-install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -ex 2 | 3 | sudo apt-get update -y 4 | 5 | sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common 6 | 7 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 8 | 9 | sudo apt-key fingerprint 0EBFCD88 10 | 11 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 12 | 13 | sudo apt-get -y update 14 | 15 | sudo apt-get install -y docker-ce docker-ce-cli containerd.io 16 | 17 | sudo docker run hello-world 18 | 19 | 20 | -------------------------------------------------------------------------------- /chapter-3/web-server.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "net/http" 6 | ) 7 | 8 | func hello(w http.ResponseWriter, r *http.Request) { 9 | fmt.Fprintf(w, "Hello") 10 | } 11 | 12 | func main() { 13 | http.HandleFunc("/", hello) 14 | http.ListenAndServe("0.0.0.0:8080", nil) 15 | } -------------------------------------------------------------------------------- /chapter-4/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:1.15 AS builder 2 | WORKDIR /opt 3 | COPY go.mod . 4 | COPY web-server.go . 5 | RUN CGO_ENABLED=0 GOOS=linux go build -o web-server . 6 | 7 | FROM golang:1.15 8 | WORKDIR /opt 9 | COPY --from=0 /opt/web-server . 
10 | CMD ["/opt/web-server"] 11 | -------------------------------------------------------------------------------- /chapter-4/README.md: -------------------------------------------------------------------------------- 1 | 2 | Tools Needed 3 | * Docker 4 | * Kind 5 | * Helm 6 | 7 | Kind install instructions can be found [here](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) 8 | 9 | Helm install instructions can be found [here](https://helm.sh/docs/intro/install/) 10 | 11 | Cilium rule basics are available [here](https://docs.cilium.io/en/v1.9/policy/intro/#rule-basics) 12 | 13 | Steps 14 | 1. Create Kind cluster 15 | 2. Add Cilium images to kind cluster 16 | 3. Install Cilium in the cluster 17 | 4. Test connectivity 18 | 5. Test Webserver and Database NetworkPolicies 19 | 20 | # 1. Create Kind cluster 21 | 22 | With the kind cluster configuration YAML, we can use kind to create the cluster with the command below. The first run takes some time while kind downloads the Docker images for the worker and control-plane nodes. 23 | 24 | ```bash 25 | kind create cluster --config=kind-config.yaml 26 | Creating cluster "kind" ... 27 | ✓ Ensuring node image (kindest/node:v1.18.2) 🖼 28 | ✓ Preparing nodes 📦 📦 📦 📦 29 | ✓ Writing configuration 📜 30 | ✓ Starting control-plane 🕹️ 31 | ✓ Installing StorageClass 💾 32 | ✓ Joining worker nodes 🚜 33 | Set kubectl context to "kind-kind" 34 | You can now use your cluster with: 35 | 36 | kubectl cluster-info --context kind-kind 37 | 38 | Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂 39 | ``` 40 | 41 | Always verify that the cluster is up and running with kubectl.
42 | 43 | ```bash 44 | kubectl cluster-info --context kind-kind 45 | Kubernetes master is running at https://127.0.0.1:59511 46 | KubeDNS is running at https://127.0.0.1:59511/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy 47 | 48 | To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. 49 | ``` 50 | 51 | # 2. Add Cilium images to kind cluster 52 | Now that our cluster is running locally we can begin installing Cilium using helm, a kubernetes deployment tool. This is the prefered way to install Cilium. First, we need to add the helm repo for Cilium. Then download the docker images for cilium, and finally instruct kind to load the cilium images into the cluster. 53 | 54 | ```bash 55 | helm repo add cilium https://helm.cilium.io/ 56 | docker pull cilium/cilium:v1.9.1 57 | kind load docker-image cilium/cilium:v1.9.1 58 | ``` 59 | 60 | # 3. Install Cilium in the cluster 61 | 62 | Now the pre-requisites for Cilium are completed we can install Cilium in our cluster with helm. There are many configuration options for Ciluim, and they are set with the helm options --set. 63 | 64 | ```bash 65 | helm install cilium cilium/cilium --version 1.9.1 \ 66 | --namespace kube-system \ 67 | --set nodeinit.enabled=true \ 68 | --set kubeProxyReplacement=partial \ 69 | --set hostServices.enabled=false \ 70 | --set externalIPs.enabled=true \ 71 | --set nodePort.enabled=true \ 72 | --set hostPort.enabled=true \ 73 | --set bpf.masquerade=false \ 74 | --set image.pullPolicy=IfNotPresent \ 75 | --set ipam.mode=kubernetes 76 | ``` 77 | 78 | # 4. Test connectivity 79 | 80 | Now that Cilium is deployed we can run the connectivity check from Cilium to ensure the CNI is installed in the cluster correctly. 
81 | 82 | ```bash 83 | kubectl create ns cilium-test 84 | namespace/cilium-test created 85 | 86 | kubectl apply -n cilium-test -f https://raw.githubusercontent.com/strongjz/advanced_networking_code_examples/master/chapter-4/connectivity-check.yaml 87 | deployment.apps/echo-a created 88 | deployment.apps/echo-b created 89 | deployment.apps/echo-b-host created 90 | deployment.apps/pod-to-a created 91 | deployment.apps/pod-to-external-1111 created 92 | deployment.apps/pod-to-a-denied-cnp created 93 | deployment.apps/pod-to-a-allowed-cnp created 94 | deployment.apps/pod-to-external-fqdn-allow-google-cnp created 95 | deployment.apps/pod-to-b-multi-node-clusterip created 96 | deployment.apps/pod-to-b-multi-node-headless created 97 | deployment.apps/host-to-b-multi-node-clusterip created 98 | deployment.apps/host-to-b-multi-node-headless created 99 | deployment.apps/pod-to-b-multi-node-nodeport created 100 | deployment.apps/pod-to-b-intra-node-nodeport created 101 | service/echo-a created 102 | service/echo-b created 103 | service/echo-b-headless created 104 | service/echo-b-host-headless created 105 | ciliumnetworkpolicy.cilium.io/pod-to-a-denied-cnp created 106 | ciliumnetworkpolicy.cilium.io/pod-to-a-allowed-cnp created 107 | ciliumnetworkpolicy.cilium.io/pod-to-external-fqdn-allow-google-cnp created 108 | ``` 109 | 110 | Cilium installs several pieces in the cluster, the agent, the client, operator and the cilium-cni plugin. 111 | 112 | Agent - The Cilium agent, cilium-agent, runs on each node in the cluster. The agent accepts configuration via Kubernetes or APIs that describes networking, service load-balancing, network policies, and visibility & monitoring requirements. 113 | 114 | Client (CLI) - The Cilium CLI client (cilium) is a command-line tool that is installed along with the Cilium agent. It interacts with the REST API of the Cilium agent running on the same node. The CLI allows inspecting the state and status of the local agent. 
It also provides tooling to directly access the eBPF maps to validate their state. 115 | 116 | Operator - The Cilium Operator is responsible for managing duties in the cluster which should logically be handled once for the entire cluster, rather than once for each node in the cluster. 117 | 118 | CNI Plugin - The CNI plugin (cilium-cni) interacts with the Cilium API of the node to trigger the configuration to provide networking, load-balancing and network policies for the pod. 119 | 120 | 121 | We can observe all these components being deployed in the cluster with the kubectl -n kube-system get pods --watch command. 122 | 123 | ```bash 124 | kubectl get pods -n cilium-test -w 125 | NAME READY STATUS RESTARTS AGE 126 | echo-a-57cbbd9b8b-szn94 1/1 Running 0 34m 127 | echo-b-6db5fc8ff8-wkcr6 1/1 Running 0 34m 128 | echo-b-host-76d89978c-dsjm8 1/1 Running 0 34m 129 | host-to-b-multi-node-clusterip-fd6868749-7zkcr 1/1 Running 2 34m 130 | host-to-b-multi-node-headless-54fbc4659f-z4rtd 1/1 Running 2 34m 131 | pod-to-a-648fd74787-x27hc 1/1 Running 1 34m 132 | pod-to-a-allowed-cnp-7776c879f-6rq7z 1/1 Running 0 34m 133 | pod-to-a-denied-cnp-b5ff897c7-qp5kp 1/1 Running 0 34m 134 | pod-to-b-intra-node-nodeport-6546644d59-qkmck 1/1 Running 2 34m 135 | pod-to-b-multi-node-clusterip-7d54c74c5f-4j7pm 1/1 Running 2 34m 136 | pod-to-b-multi-node-headless-76db68d547-fhlz7 1/1 Running 2 34m 137 | pod-to-b-multi-node-nodeport-7496df84d7-5z872 1/1 Running 2 34m 138 | pod-to-external-1111-6d4f9d9645-kfl4x 1/1 Running 0 34m 139 | pod-to-external-fqdn-allow-google-cnp-5bc496897c-bnlqs 1/1 Running 0 34m 140 | ``` 141 | 142 | # 5. Test Webserver and Database NetworkPolicies 143 | 144 | Now that the Cilium CNI is deployed into our cluster we can begin exploring the power of its Network policies. We 145 | will deploy our golang webserver that now connects to a database. 
Using a network utility pod we will test connectivity 146 | without the network policies in place, then deploy network policies that will restrict connectivity to the web 147 | server and database. 148 | 149 | 1. Deploy Containers, Web, DB and Utils 150 | 2. Test open connectivity 151 | 3. Deploy Network policies 152 | 4. Test Closed Network Connectivity 153 | 154 | #### 1. Deploy Golang web server 155 | 156 | Our Golang web server has been updated to connect to a postgres database. Let's deploy the Postgres database with 157 | the following yaml and commands. 158 | 159 | 1.1 Deploy Database 160 | 161 | ```bash 162 | kubectl apply -f database.yaml 163 | service/postgres created 164 | configmap/postgres-config created 165 | statefulset.apps/postgres created 166 | ``` 167 | 168 | Deploying our Webserver as a kubernetes deployment to our kind cluster. 169 | 170 | 1.2 Deploy Web Server 171 | 172 | ```bash 173 | kubectl apply -f web.yml 174 | deployment.apps/app created 175 | ``` 176 | 177 | To run connectivity tests inside the cluster network we will deploy and use a dns utils pod that has basic 178 | networking tools like ping and curl. 179 | 180 | 1.3 Deploy Dns Utils pod 181 | 182 | ```bash 183 | kubectl apply -f dnsutils.yaml 184 | pod/dnsutils created 185 | ``` 186 | 187 | #### 2. Test open connectivity 188 | 189 | Since we are not deploying A service with an ingress, we can use kubectl port forward to test connectivity to our 190 | webserver 191 | 192 | More information about kubectl port-forward can be found [here](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) 193 | ```bash 194 | kubectl port-forward app-5878d69796-j889q 8080:8080 195 | ``` 196 | 197 | Now from our local terminal we can reach out API. 
198 | 199 | ```bash 200 | curl localhost:8080/ 201 | Hello 202 | curl localhost:8080/healthz 203 | Healthy 204 | curl localhost:8080/data 205 | Database Connected 206 | ``` 207 | 208 | Let's test connectivity to our web server inside the cluster from other pods. In order to do that we need to get the 209 | IP address of our web server pod. 210 | 211 | ```bash 212 | kubectl get pods -l app=app -o wide 213 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 214 | app-5878d69796-j889q 1/1 Running 0 87m 10.244.1.188 kind-worker3 215 | ``` 216 | 217 | Now we can test layer4 and 7 connectivity to the web server from the DNS utils pod. 218 | 219 | ```bash 220 | kubectl exec dnsutils -- nc -z -vv 10.244.1.188 8080 221 | 10.244.1.188 (10.244.1.188:8080) open 222 | sent 0, rcvd 0 223 | ``` 224 | 225 | Layer 7 HTTP API Access 226 | ```bash 227 | kubectl exec dnsutils -- wget -qO- 10.244.1.188:8080/ 228 | Hello 229 | 230 | kubectl exec dnsutils -- wget -qO- 10.244.1.188:8080/data 231 | Database Connected 232 | 233 | kubectl exec dnsutils -- wget -qO- 10.244.1.188:8080/healthz 234 | Healthy 235 | ``` 236 | 237 | We can also test the same to the database pod. 238 | 239 | Retrieve the IP Address of database pod. 240 | ```bash 241 | kubectl get pods -l app=postgres -o wide 242 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 243 | postgres-0 1/1 Running 0 98m 10.244.2.189 kind-worker 244 | ``` 245 | 246 | DNS Utils Connectivity 247 | ```bash 248 | kubectl exec dnsutils -- nc -z -vv 10.244.2.189 5432 249 | 10.244.2.189 (10.244.2.189:5432) open 250 | sent 0, rcvd 0 251 | ``` 252 | 253 | 254 | #### 3. Deploy Network policies and Test Closed Network Connectivity 255 | 256 | Let's first restrict access to the database pod to only the Web server. 257 | 258 | The postgress port 5432 is open from dnsutils to database. 
259 | 260 | ```bash 261 | kubectl exec dnsutils -- nc -z -vv -w 5 10.244.2.189 5432 262 | 10.244.2.189 (10.244.2.189:5432) open 263 | sent 0, rcvd 0 264 | ``` 265 | 266 | Apply the Network policy that only allows traffic from the Web Server pod to the database. 267 | 268 | ```bash 269 | kubectl apply -f layer_3_net_pol.yaml 270 | ciliumnetworkpolicy.cilium.io/l3-rule-app-to-db created 271 | ``` 272 | 273 | With the network policy applied, the dnsutils pod can no longer reach the database pod. 274 | 275 | ```bash 276 | kubectl exec dnsutils -- nc -z -vv -w 5 10.244.2.189 5432 277 | nc: 10.244.2.189 (10.244.2.189:5432): Operation timed out 278 | sent 0, rcvd 0 279 | command terminated with exit code 1 280 | ``` 281 | 282 | But we the Web server is still connected to the Database. 283 | 284 | ```bash 285 | kubectl exec dnsutils -- wget -qO- 10.244.1.188:8080/data 286 | Database Connected 287 | 288 | curl localhost:8080/data 289 | Database Connected 290 | ``` 291 | 292 | The Cilium install and deploy of cilium objects creates resources that can retrieved just like pods with kubectl. 
293 | 294 | ```bash 295 | kubectl describe ciliumnetworkpolicies.cilium.io l3-rule-app-to-db 296 | Name: l3-rule-app-to-db 297 | Namespace: default 298 | Labels: 299 | Annotations: API Version: cilium.io/v2 300 | Kind: CiliumNetworkPolicy 301 | Metadata: 302 | Creation Timestamp: 2021-01-10T01:06:13Z 303 | Generation: 1 304 | Managed Fields: 305 | API Version: cilium.io/v2 306 | Fields Type: FieldsV1 307 | fieldsV1: 308 | f:metadata: 309 | f:annotations: 310 | .: 311 | f:kubectl.kubernetes.io/last-applied-configuration: 312 | f:spec: 313 | .: 314 | f:endpointSelector: 315 | .: 316 | f:matchLabels: 317 | .: 318 | f:app: 319 | f:ingress: 320 | Manager: kubectl 321 | Operation: Update 322 | Time: 2021-01-10T01:06:13Z 323 | Resource Version: 47377 324 | Self Link: /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/l3-rule-app-to-db 325 | UID: 71ee6571-9551-449d-8f3e-c177becda35a 326 | Spec: 327 | Endpoint Selector: 328 | Match Labels: 329 | App: postgres 330 | Ingress: 331 | From Endpoints: 332 | Match Labels: 333 | App: app 334 | Events: 335 | ``` 336 | 337 | Now let us apply the Layer 7 policy. Cilium is layer 7 aware, so we can block or allow certain base on HTTP URI paths. 338 | In our example policy we allow HTTP GETs on / and /data but not allow on /healthz, lets test that out. 
339 | 340 | ```bash 341 | kubectl apply -f layer_7_netpol.yml 342 | ciliumnetworkpolicy.cilium.io/l7-rule created 343 | ``` 344 | 345 | ```bash 346 | kubectl get ciliumnetworkpolicies.cilium.io 347 | NAME AGE 348 | l7-rule 6m54s 349 | 350 | kubectl describe ciliumnetworkpolicies.cilium.io l7-rule 351 | Name: l7-rule 352 | Namespace: default 353 | Labels: 354 | Annotations: API Version: cilium.io/v2 355 | Kind: CiliumNetworkPolicy 356 | Metadata: 357 | Creation Timestamp: 2021-01-10T00:49:34Z 358 | Generation: 1 359 | Managed Fields: 360 | API Version: cilium.io/v2 361 | Fields Type: FieldsV1 362 | fieldsV1: 363 | f:metadata: 364 | f:annotations: 365 | .: 366 | f:kubectl.kubernetes.io/last-applied-configuration: 367 | f:spec: 368 | .: 369 | f:egress: 370 | f:endpointSelector: 371 | .: 372 | f:matchLabels: 373 | .: 374 | f:app: 375 | Manager: kubectl 376 | Operation: Update 377 | Time: 2021-01-10T00:49:34Z 378 | Resource Version: 43869 379 | Self Link: /apis/cilium.io/v2/namespaces/default/ciliumnetworkpolicies/l7-rule 380 | UID: 0162c16e-dd55-4020-83b9-464bb625b164 381 | Spec: 382 | Egress: 383 | To Ports: 384 | Ports: 385 | Port: 8080 386 | Protocol: TCP 387 | Rules: 388 | Http: 389 | Method: GET 390 | Path: / 391 | Method: GET 392 | Path: /data 393 | Endpoint Selector: 394 | Match Labels: 395 | App: app 396 | Events: 397 | ``` 398 | 399 | As you can see, / and /data are available by not /healthz 400 | 401 | ```bash 402 | kubectl exec dnsutils -- wget -qO- 10.244.1.188:8080/data 403 | Database Connected 404 | 405 | kubectl exec dnsutils -- wget -qO- 10.244.1.188:8080/ 406 | Hello 407 | 408 | kubectl exec dnsutils -- wget -qO- -T 5 10.244.1.188:8080/healthz 409 | wget: error getting response 410 | command terminated with exit code 1 411 | ``` 412 | -------------------------------------------------------------------------------- /chapter-4/connectivity-check.yaml: -------------------------------------------------------------------------------- 1 | # Automatically 
generated by Makefile. DO NOT EDIT 2 | --- 3 | metadata: 4 | name: echo-a 5 | labels: 6 | name: echo-a 7 | topology: any 8 | component: network-check 9 | traffic: internal 10 | quarantine: "false" 11 | type: autocheck 12 | spec: 13 | template: 14 | metadata: 15 | labels: 16 | name: echo-a 17 | spec: 18 | hostNetwork: false 19 | containers: 20 | - name: echo-a-container 21 | env: 22 | - name: PORT 23 | value: "8080" 24 | ports: 25 | - containerPort: 8080 26 | image: docker.io/cilium/json-mock:1.2 27 | imagePullPolicy: IfNotPresent 28 | readinessProbe: 29 | timeoutSeconds: 7 30 | exec: 31 | command: 32 | - curl 33 | - -sS 34 | - --fail 35 | - --connect-timeout 36 | - "5" 37 | - -o 38 | - /dev/null 39 | - localhost:8080 40 | livenessProbe: 41 | timeoutSeconds: 7 42 | exec: 43 | command: 44 | - curl 45 | - -sS 46 | - --fail 47 | - --connect-timeout 48 | - "5" 49 | - -o 50 | - /dev/null 51 | - localhost:8080 52 | selector: 53 | matchLabels: 54 | name: echo-a 55 | replicas: 1 56 | apiVersion: apps/v1 57 | kind: Deployment 58 | --- 59 | metadata: 60 | name: echo-b 61 | labels: 62 | name: echo-b 63 | topology: any 64 | component: services-check 65 | traffic: internal 66 | quarantine: "false" 67 | type: autocheck 68 | spec: 69 | template: 70 | metadata: 71 | labels: 72 | name: echo-b 73 | spec: 74 | hostNetwork: false 75 | containers: 76 | - name: echo-b-container 77 | env: 78 | - name: PORT 79 | value: "8080" 80 | ports: 81 | - containerPort: 8080 82 | hostPort: 40000 83 | image: docker.io/cilium/json-mock:1.2 84 | imagePullPolicy: IfNotPresent 85 | readinessProbe: 86 | timeoutSeconds: 7 87 | exec: 88 | command: 89 | - curl 90 | - -sS 91 | - --fail 92 | - --connect-timeout 93 | - "5" 94 | - -o 95 | - /dev/null 96 | - localhost:8080 97 | livenessProbe: 98 | timeoutSeconds: 7 99 | exec: 100 | command: 101 | - curl 102 | - -sS 103 | - --fail 104 | - --connect-timeout 105 | - "5" 106 | - -o 107 | - /dev/null 108 | - localhost:8080 109 | selector: 110 | matchLabels: 111 | name: 
echo-b 112 | replicas: 1 113 | apiVersion: apps/v1 114 | kind: Deployment 115 | --- 116 | metadata: 117 | name: echo-b-host 118 | labels: 119 | name: echo-b-host 120 | topology: any 121 | component: services-check 122 | traffic: internal 123 | quarantine: "false" 124 | type: autocheck 125 | spec: 126 | template: 127 | metadata: 128 | labels: 129 | name: echo-b-host 130 | spec: 131 | hostNetwork: true 132 | containers: 133 | - name: echo-b-host-container 134 | env: 135 | - name: PORT 136 | value: "41000" 137 | ports: [] 138 | image: docker.io/cilium/json-mock:1.2 139 | imagePullPolicy: IfNotPresent 140 | readinessProbe: 141 | timeoutSeconds: 7 142 | exec: 143 | command: 144 | - curl 145 | - -sS 146 | - --fail 147 | - --connect-timeout 148 | - "5" 149 | - -o 150 | - /dev/null 151 | - localhost:41000 152 | livenessProbe: 153 | timeoutSeconds: 7 154 | exec: 155 | command: 156 | - curl 157 | - -sS 158 | - --fail 159 | - --connect-timeout 160 | - "5" 161 | - -o 162 | - /dev/null 163 | - localhost:41000 164 | affinity: 165 | podAffinity: 166 | requiredDuringSchedulingIgnoredDuringExecution: 167 | - labelSelector: 168 | matchExpressions: 169 | - key: name 170 | operator: In 171 | values: 172 | - echo-b 173 | topologyKey: kubernetes.io/hostname 174 | selector: 175 | matchLabels: 176 | name: echo-b-host 177 | replicas: 1 178 | apiVersion: apps/v1 179 | kind: Deployment 180 | --- 181 | metadata: 182 | name: pod-to-a 183 | labels: 184 | name: pod-to-a 185 | topology: any 186 | component: network-check 187 | traffic: internal 188 | quarantine: "false" 189 | type: autocheck 190 | spec: 191 | template: 192 | metadata: 193 | labels: 194 | name: pod-to-a 195 | spec: 196 | hostNetwork: false 197 | containers: 198 | - name: pod-to-a-container 199 | ports: [] 200 | image: docker.io/byrnedo/alpine-curl:0.1.8 201 | imagePullPolicy: IfNotPresent 202 | command: 203 | - /bin/ash 204 | - -c 205 | - sleep 1000000000 206 | readinessProbe: 207 | timeoutSeconds: 7 208 | exec: 209 | command: 210 
| - curl 211 | - -sS 212 | - --fail 213 | - --connect-timeout 214 | - "5" 215 | - -o 216 | - /dev/null 217 | - echo-a:8080/public 218 | livenessProbe: 219 | timeoutSeconds: 7 220 | exec: 221 | command: 222 | - curl 223 | - -sS 224 | - --fail 225 | - --connect-timeout 226 | - "5" 227 | - -o 228 | - /dev/null 229 | - echo-a:8080/public 230 | selector: 231 | matchLabels: 232 | name: pod-to-a 233 | replicas: 1 234 | apiVersion: apps/v1 235 | kind: Deployment 236 | --- 237 | metadata: 238 | name: pod-to-external-1111 239 | labels: 240 | name: pod-to-external-1111 241 | topology: any 242 | component: network-check 243 | traffic: external 244 | quarantine: "false" 245 | type: autocheck 246 | spec: 247 | template: 248 | metadata: 249 | labels: 250 | name: pod-to-external-1111 251 | spec: 252 | hostNetwork: false 253 | containers: 254 | - name: pod-to-external-1111-container 255 | ports: [] 256 | image: docker.io/byrnedo/alpine-curl:0.1.8 257 | imagePullPolicy: IfNotPresent 258 | command: 259 | - /bin/ash 260 | - -c 261 | - sleep 1000000000 262 | readinessProbe: 263 | timeoutSeconds: 7 264 | exec: 265 | command: 266 | - curl 267 | - -sS 268 | - --fail 269 | - --connect-timeout 270 | - "5" 271 | - -o 272 | - /dev/null 273 | - 1.1.1.1 274 | livenessProbe: 275 | timeoutSeconds: 7 276 | exec: 277 | command: 278 | - curl 279 | - -sS 280 | - --fail 281 | - --connect-timeout 282 | - "5" 283 | - -o 284 | - /dev/null 285 | - 1.1.1.1 286 | selector: 287 | matchLabels: 288 | name: pod-to-external-1111 289 | replicas: 1 290 | apiVersion: apps/v1 291 | kind: Deployment 292 | --- 293 | metadata: 294 | name: pod-to-a-denied-cnp 295 | labels: 296 | name: pod-to-a-denied-cnp 297 | topology: any 298 | component: policy-check 299 | traffic: internal 300 | quarantine: "false" 301 | type: autocheck 302 | spec: 303 | template: 304 | metadata: 305 | labels: 306 | name: pod-to-a-denied-cnp 307 | spec: 308 | hostNetwork: false 309 | containers: 310 | - name: pod-to-a-denied-cnp-container 311 | 
ports: [] 312 | image: docker.io/byrnedo/alpine-curl:0.1.8 313 | imagePullPolicy: IfNotPresent 314 | command: 315 | - /bin/ash 316 | - -c 317 | - sleep 1000000000 318 | readinessProbe: 319 | timeoutSeconds: 7 320 | exec: 321 | command: 322 | - ash 323 | - -c 324 | - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private' 325 | livenessProbe: 326 | timeoutSeconds: 7 327 | exec: 328 | command: 329 | - ash 330 | - -c 331 | - '! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private' 332 | selector: 333 | matchLabels: 334 | name: pod-to-a-denied-cnp 335 | replicas: 1 336 | apiVersion: apps/v1 337 | kind: Deployment 338 | --- 339 | metadata: 340 | name: pod-to-a-allowed-cnp 341 | labels: 342 | name: pod-to-a-allowed-cnp 343 | topology: any 344 | component: policy-check 345 | traffic: internal 346 | quarantine: "false" 347 | type: autocheck 348 | spec: 349 | template: 350 | metadata: 351 | labels: 352 | name: pod-to-a-allowed-cnp 353 | spec: 354 | hostNetwork: false 355 | containers: 356 | - name: pod-to-a-allowed-cnp-container 357 | ports: [] 358 | image: docker.io/byrnedo/alpine-curl:0.1.8 359 | imagePullPolicy: IfNotPresent 360 | command: 361 | - /bin/ash 362 | - -c 363 | - sleep 1000000000 364 | readinessProbe: 365 | timeoutSeconds: 7 366 | exec: 367 | command: 368 | - curl 369 | - -sS 370 | - --fail 371 | - --connect-timeout 372 | - "5" 373 | - -o 374 | - /dev/null 375 | - echo-a:8080/public 376 | livenessProbe: 377 | timeoutSeconds: 7 378 | exec: 379 | command: 380 | - curl 381 | - -sS 382 | - --fail 383 | - --connect-timeout 384 | - "5" 385 | - -o 386 | - /dev/null 387 | - echo-a:8080/public 388 | selector: 389 | matchLabels: 390 | name: pod-to-a-allowed-cnp 391 | replicas: 1 392 | apiVersion: apps/v1 393 | kind: Deployment 394 | --- 395 | metadata: 396 | name: pod-to-external-fqdn-allow-google-cnp 397 | labels: 398 | name: pod-to-external-fqdn-allow-google-cnp 399 | topology: any 400 | component: policy-check 401 | traffic: external 
402 | quarantine: "false" 403 | type: autocheck 404 | spec: 405 | template: 406 | metadata: 407 | labels: 408 | name: pod-to-external-fqdn-allow-google-cnp 409 | spec: 410 | hostNetwork: false 411 | containers: 412 | - name: pod-to-external-fqdn-allow-google-cnp-container 413 | ports: [] 414 | image: docker.io/byrnedo/alpine-curl:0.1.8 415 | imagePullPolicy: IfNotPresent 416 | command: 417 | - /bin/ash 418 | - -c 419 | - sleep 1000000000 420 | readinessProbe: 421 | timeoutSeconds: 7 422 | exec: 423 | command: 424 | - curl 425 | - -sS 426 | - --fail 427 | - --connect-timeout 428 | - "5" 429 | - -o 430 | - /dev/null 431 | - www.google.com 432 | livenessProbe: 433 | timeoutSeconds: 7 434 | exec: 435 | command: 436 | - curl 437 | - -sS 438 | - --fail 439 | - --connect-timeout 440 | - "5" 441 | - -o 442 | - /dev/null 443 | - www.google.com 444 | selector: 445 | matchLabels: 446 | name: pod-to-external-fqdn-allow-google-cnp 447 | replicas: 1 448 | apiVersion: apps/v1 449 | kind: Deployment 450 | --- 451 | metadata: 452 | name: pod-to-b-multi-node-clusterip 453 | labels: 454 | name: pod-to-b-multi-node-clusterip 455 | topology: multi-node 456 | component: services-check 457 | traffic: internal 458 | quarantine: "false" 459 | type: autocheck 460 | spec: 461 | template: 462 | metadata: 463 | labels: 464 | name: pod-to-b-multi-node-clusterip 465 | spec: 466 | hostNetwork: false 467 | containers: 468 | - name: pod-to-b-multi-node-clusterip-container 469 | ports: [] 470 | image: docker.io/byrnedo/alpine-curl:0.1.8 471 | imagePullPolicy: IfNotPresent 472 | command: 473 | - /bin/ash 474 | - -c 475 | - sleep 1000000000 476 | readinessProbe: 477 | timeoutSeconds: 7 478 | exec: 479 | command: 480 | - curl 481 | - -sS 482 | - --fail 483 | - --connect-timeout 484 | - "5" 485 | - -o 486 | - /dev/null 487 | - echo-b:8080/public 488 | livenessProbe: 489 | timeoutSeconds: 7 490 | exec: 491 | command: 492 | - curl 493 | - -sS 494 | - --fail 495 | - --connect-timeout 496 | - "5" 497 | - -o 
498 | - /dev/null 499 | - echo-b:8080/public 500 | affinity: 501 | podAntiAffinity: 502 | requiredDuringSchedulingIgnoredDuringExecution: 503 | - labelSelector: 504 | matchExpressions: 505 | - key: name 506 | operator: In 507 | values: 508 | - echo-b 509 | topologyKey: kubernetes.io/hostname 510 | selector: 511 | matchLabels: 512 | name: pod-to-b-multi-node-clusterip 513 | replicas: 1 514 | apiVersion: apps/v1 515 | kind: Deployment 516 | --- 517 | metadata: 518 | name: pod-to-b-multi-node-headless 519 | labels: 520 | name: pod-to-b-multi-node-headless 521 | topology: multi-node 522 | component: services-check 523 | traffic: internal 524 | quarantine: "false" 525 | type: autocheck 526 | spec: 527 | template: 528 | metadata: 529 | labels: 530 | name: pod-to-b-multi-node-headless 531 | spec: 532 | hostNetwork: false 533 | containers: 534 | - name: pod-to-b-multi-node-headless-container 535 | ports: [] 536 | image: docker.io/byrnedo/alpine-curl:0.1.8 537 | imagePullPolicy: IfNotPresent 538 | command: 539 | - /bin/ash 540 | - -c 541 | - sleep 1000000000 542 | readinessProbe: 543 | timeoutSeconds: 7 544 | exec: 545 | command: 546 | - curl 547 | - -sS 548 | - --fail 549 | - --connect-timeout 550 | - "5" 551 | - -o 552 | - /dev/null 553 | - echo-b-headless:8080/public 554 | livenessProbe: 555 | timeoutSeconds: 7 556 | exec: 557 | command: 558 | - curl 559 | - -sS 560 | - --fail 561 | - --connect-timeout 562 | - "5" 563 | - -o 564 | - /dev/null 565 | - echo-b-headless:8080/public 566 | affinity: 567 | podAntiAffinity: 568 | requiredDuringSchedulingIgnoredDuringExecution: 569 | - labelSelector: 570 | matchExpressions: 571 | - key: name 572 | operator: In 573 | values: 574 | - echo-b 575 | topologyKey: kubernetes.io/hostname 576 | selector: 577 | matchLabels: 578 | name: pod-to-b-multi-node-headless 579 | replicas: 1 580 | apiVersion: apps/v1 581 | kind: Deployment 582 | --- 583 | metadata: 584 | name: host-to-b-multi-node-clusterip 585 | labels: 586 | name: 
host-to-b-multi-node-clusterip 587 | topology: multi-node 588 | component: services-check 589 | traffic: internal 590 | quarantine: "false" 591 | type: autocheck 592 | spec: 593 | template: 594 | metadata: 595 | labels: 596 | name: host-to-b-multi-node-clusterip 597 | spec: 598 | hostNetwork: true 599 | containers: 600 | - name: host-to-b-multi-node-clusterip-container 601 | ports: [] 602 | image: docker.io/byrnedo/alpine-curl:0.1.8 603 | imagePullPolicy: IfNotPresent 604 | command: 605 | - /bin/ash 606 | - -c 607 | - sleep 1000000000 608 | readinessProbe: 609 | timeoutSeconds: 7 610 | exec: 611 | command: 612 | - curl 613 | - -sS 614 | - --fail 615 | - --connect-timeout 616 | - "5" 617 | - -o 618 | - /dev/null 619 | - echo-b:8080/public 620 | livenessProbe: 621 | timeoutSeconds: 7 622 | exec: 623 | command: 624 | - curl 625 | - -sS 626 | - --fail 627 | - --connect-timeout 628 | - "5" 629 | - -o 630 | - /dev/null 631 | - echo-b:8080/public 632 | affinity: 633 | podAntiAffinity: 634 | requiredDuringSchedulingIgnoredDuringExecution: 635 | - labelSelector: 636 | matchExpressions: 637 | - key: name 638 | operator: In 639 | values: 640 | - echo-b 641 | topologyKey: kubernetes.io/hostname 642 | dnsPolicy: ClusterFirstWithHostNet 643 | selector: 644 | matchLabels: 645 | name: host-to-b-multi-node-clusterip 646 | replicas: 1 647 | apiVersion: apps/v1 648 | kind: Deployment 649 | --- 650 | metadata: 651 | name: host-to-b-multi-node-headless 652 | labels: 653 | name: host-to-b-multi-node-headless 654 | topology: multi-node 655 | component: services-check 656 | traffic: internal 657 | quarantine: "false" 658 | type: autocheck 659 | spec: 660 | template: 661 | metadata: 662 | labels: 663 | name: host-to-b-multi-node-headless 664 | spec: 665 | hostNetwork: true 666 | containers: 667 | - name: host-to-b-multi-node-headless-container 668 | ports: [] 669 | image: docker.io/byrnedo/alpine-curl:0.1.8 670 | imagePullPolicy: IfNotPresent 671 | command: 672 | - /bin/ash 673 | - -c 674 
| - sleep 1000000000 675 | readinessProbe: 676 | timeoutSeconds: 7 677 | exec: 678 | command: 679 | - curl 680 | - -sS 681 | - --fail 682 | - --connect-timeout 683 | - "5" 684 | - -o 685 | - /dev/null 686 | - echo-b-headless:8080/public 687 | livenessProbe: 688 | timeoutSeconds: 7 689 | exec: 690 | command: 691 | - curl 692 | - -sS 693 | - --fail 694 | - --connect-timeout 695 | - "5" 696 | - -o 697 | - /dev/null 698 | - echo-b-headless:8080/public 699 | affinity: 700 | podAntiAffinity: 701 | requiredDuringSchedulingIgnoredDuringExecution: 702 | - labelSelector: 703 | matchExpressions: 704 | - key: name 705 | operator: In 706 | values: 707 | - echo-b 708 | topologyKey: kubernetes.io/hostname 709 | dnsPolicy: ClusterFirstWithHostNet 710 | selector: 711 | matchLabels: 712 | name: host-to-b-multi-node-headless 713 | replicas: 1 714 | apiVersion: apps/v1 715 | kind: Deployment 716 | --- 717 | metadata: 718 | name: pod-to-b-multi-node-nodeport 719 | labels: 720 | name: pod-to-b-multi-node-nodeport 721 | topology: multi-node 722 | component: nodeport-check 723 | traffic: internal 724 | quarantine: "false" 725 | type: autocheck 726 | spec: 727 | template: 728 | metadata: 729 | labels: 730 | name: pod-to-b-multi-node-nodeport 731 | spec: 732 | hostNetwork: false 733 | containers: 734 | - name: pod-to-b-multi-node-nodeport-container 735 | ports: [] 736 | image: docker.io/byrnedo/alpine-curl:0.1.8 737 | imagePullPolicy: IfNotPresent 738 | command: 739 | - /bin/ash 740 | - -c 741 | - sleep 1000000000 742 | readinessProbe: 743 | timeoutSeconds: 7 744 | exec: 745 | command: 746 | - curl 747 | - -sS 748 | - --fail 749 | - --connect-timeout 750 | - "5" 751 | - -o 752 | - /dev/null 753 | - echo-b-host-headless:31313/public 754 | livenessProbe: 755 | timeoutSeconds: 7 756 | exec: 757 | command: 758 | - curl 759 | - -sS 760 | - --fail 761 | - --connect-timeout 762 | - "5" 763 | - -o 764 | - /dev/null 765 | - echo-b-host-headless:31313/public 766 | affinity: 767 | podAntiAffinity: 768 
| requiredDuringSchedulingIgnoredDuringExecution: 769 | - labelSelector: 770 | matchExpressions: 771 | - key: name 772 | operator: In 773 | values: 774 | - echo-b 775 | topologyKey: kubernetes.io/hostname 776 | selector: 777 | matchLabels: 778 | name: pod-to-b-multi-node-nodeport 779 | replicas: 1 780 | apiVersion: apps/v1 781 | kind: Deployment 782 | --- 783 | metadata: 784 | name: pod-to-b-intra-node-nodeport 785 | labels: 786 | name: pod-to-b-intra-node-nodeport 787 | topology: intra-node 788 | component: nodeport-check 789 | traffic: internal 790 | quarantine: "false" 791 | type: autocheck 792 | spec: 793 | template: 794 | metadata: 795 | labels: 796 | name: pod-to-b-intra-node-nodeport 797 | spec: 798 | hostNetwork: false 799 | containers: 800 | - name: pod-to-b-intra-node-nodeport-container 801 | ports: [] 802 | image: docker.io/byrnedo/alpine-curl:0.1.8 803 | imagePullPolicy: IfNotPresent 804 | command: 805 | - /bin/ash 806 | - -c 807 | - sleep 1000000000 808 | readinessProbe: 809 | timeoutSeconds: 7 810 | exec: 811 | command: 812 | - curl 813 | - -sS 814 | - --fail 815 | - --connect-timeout 816 | - "5" 817 | - -o 818 | - /dev/null 819 | - echo-b-host-headless:31313/public 820 | livenessProbe: 821 | timeoutSeconds: 7 822 | exec: 823 | command: 824 | - curl 825 | - -sS 826 | - --fail 827 | - --connect-timeout 828 | - "5" 829 | - -o 830 | - /dev/null 831 | - echo-b-host-headless:31313/public 832 | affinity: 833 | podAffinity: 834 | requiredDuringSchedulingIgnoredDuringExecution: 835 | - labelSelector: 836 | matchExpressions: 837 | - key: name 838 | operator: In 839 | values: 840 | - echo-b 841 | topologyKey: kubernetes.io/hostname 842 | selector: 843 | matchLabels: 844 | name: pod-to-b-intra-node-nodeport 845 | replicas: 1 846 | apiVersion: apps/v1 847 | kind: Deployment 848 | --- 849 | metadata: 850 | name: echo-a 851 | labels: 852 | name: echo-a 853 | topology: any 854 | component: network-check 855 | traffic: internal 856 | quarantine: "false" 857 | type: 
autocheck 858 | spec: 859 | ports: 860 | - name: http 861 | port: 8080 862 | type: ClusterIP 863 | selector: 864 | name: echo-a 865 | apiVersion: v1 866 | kind: Service 867 | --- 868 | metadata: 869 | name: echo-b 870 | labels: 871 | name: echo-b 872 | topology: any 873 | component: services-check 874 | traffic: internal 875 | quarantine: "false" 876 | type: autocheck 877 | spec: 878 | ports: 879 | - name: http 880 | port: 8080 881 | nodePort: 31313 882 | type: NodePort 883 | selector: 884 | name: echo-b 885 | apiVersion: v1 886 | kind: Service 887 | --- 888 | metadata: 889 | name: echo-b-headless 890 | labels: 891 | name: echo-b-headless 892 | topology: any 893 | component: services-check 894 | traffic: internal 895 | quarantine: "false" 896 | type: autocheck 897 | spec: 898 | ports: 899 | - name: http 900 | port: 8080 901 | type: ClusterIP 902 | selector: 903 | name: echo-b 904 | clusterIP: None 905 | apiVersion: v1 906 | kind: Service 907 | --- 908 | metadata: 909 | name: echo-b-host-headless 910 | labels: 911 | name: echo-b-host-headless 912 | topology: any 913 | component: services-check 914 | traffic: internal 915 | quarantine: "false" 916 | type: autocheck 917 | spec: 918 | ports: [] 919 | type: ClusterIP 920 | selector: 921 | name: echo-b-host 922 | clusterIP: None 923 | apiVersion: v1 924 | kind: Service 925 | --- 926 | metadata: 927 | name: pod-to-a-denied-cnp 928 | labels: 929 | name: pod-to-a-denied-cnp 930 | topology: any 931 | component: policy-check 932 | traffic: internal 933 | quarantine: "false" 934 | type: autocheck 935 | spec: 936 | endpointSelector: 937 | matchLabels: 938 | name: pod-to-a-denied-cnp 939 | egress: 940 | - toPorts: 941 | - ports: 942 | - port: "53" 943 | protocol: ANY 944 | toEndpoints: 945 | - matchLabels: 946 | k8s:io.kubernetes.pod.namespace: kube-system 947 | k8s:k8s-app: kube-dns 948 | - toPorts: 949 | - ports: 950 | - port: "5353" 951 | protocol: UDP 952 | toEndpoints: 953 | - matchLabels: 954 | 
k8s:io.kubernetes.pod.namespace: openshift-dns 955 | k8s:dns.operator.openshift.io/daemonset-dns: default 956 | apiVersion: cilium.io/v2 957 | kind: CiliumNetworkPolicy 958 | --- 959 | metadata: 960 | name: pod-to-a-allowed-cnp 961 | labels: 962 | name: pod-to-a-allowed-cnp 963 | topology: any 964 | component: policy-check 965 | traffic: internal 966 | quarantine: "false" 967 | type: autocheck 968 | spec: 969 | endpointSelector: 970 | matchLabels: 971 | name: pod-to-a-allowed-cnp 972 | egress: 973 | - toPorts: 974 | - ports: 975 | - port: "8080" 976 | protocol: TCP 977 | toEndpoints: 978 | - matchLabels: 979 | name: echo-a 980 | - toPorts: 981 | - ports: 982 | - port: "53" 983 | protocol: ANY 984 | toEndpoints: 985 | - matchLabels: 986 | k8s:io.kubernetes.pod.namespace: kube-system 987 | k8s:k8s-app: kube-dns 988 | - toPorts: 989 | - ports: 990 | - port: "5353" 991 | protocol: UDP 992 | toEndpoints: 993 | - matchLabels: 994 | k8s:io.kubernetes.pod.namespace: openshift-dns 995 | k8s:dns.operator.openshift.io/daemonset-dns: default 996 | apiVersion: cilium.io/v2 997 | kind: CiliumNetworkPolicy 998 | --- 999 | metadata: 1000 | name: pod-to-external-fqdn-allow-google-cnp 1001 | labels: 1002 | name: pod-to-external-fqdn-allow-google-cnp 1003 | topology: any 1004 | component: policy-check 1005 | traffic: external 1006 | quarantine: "false" 1007 | type: autocheck 1008 | spec: 1009 | endpointSelector: 1010 | matchLabels: 1011 | name: pod-to-external-fqdn-allow-google-cnp 1012 | egress: 1013 | - toFQDNs: 1014 | - matchPattern: '*.google.com' 1015 | - toPorts: 1016 | - ports: 1017 | - port: "53" 1018 | protocol: ANY 1019 | rules: 1020 | dns: 1021 | - matchPattern: '*' 1022 | toEndpoints: 1023 | - matchLabels: 1024 | k8s:io.kubernetes.pod.namespace: kube-system 1025 | k8s:k8s-app: kube-dns 1026 | - toPorts: 1027 | - ports: 1028 | - port: "5353" 1029 | protocol: UDP 1030 | rules: 1031 | dns: 1032 | - matchPattern: '*' 1033 | toEndpoints: 1034 | - matchLabels: 1035 | 
k8s:io.kubernetes.pod.namespace: openshift-dns 1036 | k8s:dns.operator.openshift.io/daemonset-dns: default 1037 | apiVersion: cilium.io/v2 1038 | kind: CiliumNetworkPolicy 1039 | 1040 | 1041 | -------------------------------------------------------------------------------- /chapter-4/database.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: postgres 5 | labels: 6 | app: postgres 7 | spec: 8 | ports: 9 | - port: 5432 10 | name: postgres 11 | selector: 12 | app: postgres 13 | --- 14 | apiVersion: v1 15 | kind: ConfigMap 16 | metadata: 17 | name: postgres-config 18 | labels: 19 | app: postgres 20 | data: 21 | POSTGRES_DB: testpostgresdb 22 | POSTGRES_USER: postgres 23 | POSTGRES_PASSWORD: mysecretpassword 24 | --- 25 | apiVersion: apps/v1 26 | kind: StatefulSet 27 | metadata: 28 | name: postgres 29 | spec: 30 | serviceName: "postgres" 31 | replicas: 1 32 | selector: 33 | matchLabels: 34 | app: postgres 35 | template: 36 | metadata: 37 | labels: 38 | app: postgres 39 | spec: 40 | containers: 41 | - name: postgres 42 | image: postgres:12.2 43 | envFrom: 44 | - configMapRef: 45 | name: postgres-config 46 | ports: 47 | - containerPort: 5432 48 | name: postgredb 49 | volumeMounts: 50 | - name: postgredb 51 | mountPath: /var/lib/postgresql/data 52 | subPath: postgres 53 | volumeClaimTemplates: 54 | - metadata: 55 | name: postgredb 56 | spec: 57 | accessModes: [ "ReadWriteOnce" ] 58 | resources: 59 | requests: 60 | storage: 1Gi 61 | -------------------------------------------------------------------------------- /chapter-4/dnsutils.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: dnsutils 5 | namespace: default 6 | spec: 7 | containers: 8 | - name: dnsutils 9 | image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 10 | command: 11 | - sleep 12 | - "3600" 13 | imagePullPolicy: 
IfNotPresent 14 | restartPolicy: Always -------------------------------------------------------------------------------- /chapter-4/go.mod: -------------------------------------------------------------------------------- 1 | module github.com/strongjz/advanced_networking_code_examples 2 | 3 | go 1.15 4 | 5 | require ( 6 | github.com/lib/pq v1.3.0 7 | ) -------------------------------------------------------------------------------- /chapter-4/go.sum: -------------------------------------------------------------------------------- 1 | github.com/lib/pq v1.3.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= 2 | -------------------------------------------------------------------------------- /chapter-4/kind-config.yaml: -------------------------------------------------------------------------------- 1 | kind: Cluster 2 | apiVersion: kind.x-k8s.io/v1alpha4 3 | nodes: 4 | - role: control-plane 5 | - role: worker 6 | - role: worker 7 | - role: worker 8 | networking: 9 | disableDefaultCNI: true 10 | -------------------------------------------------------------------------------- /chapter-4/layer_3_net_pol.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "cilium.io/v2" 2 | kind: CiliumNetworkPolicy 3 | metadata: 4 | name: "l3-rule-app-to-db" 5 | spec: 6 | endpointSelector: 7 | matchLabels: 8 | app: postgres 9 | ingress: 10 | - fromEndpoints: 11 | - matchLabels: 12 | app: app -------------------------------------------------------------------------------- /chapter-4/layer_7_netpol.yml: -------------------------------------------------------------------------------- 1 | apiVersion: "cilium.io/v2" 2 | kind: CiliumNetworkPolicy 3 | metadata: 4 | name: "l7-rule" 5 | spec: 6 | endpointSelector: 7 | matchLabels: 8 | app: app 9 | ingress: 10 | - toPorts: 11 | - ports: 12 | - port: '8080' 13 | protocol: TCP 14 | rules: 15 | http: 16 | - method: GET 17 | path: "/" 18 | - method: GET 19 | path: "/data" 20 | 21 | 
-------------------------------------------------------------------------------- /chapter-4/web-server-netpol.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: "cilium.io/v2" 2 | kind: CiliumNetworkPolicy 3 | metadata: 4 | name: "l3-rule-app-to-db" 5 | spec: 6 | endpointSelector: 7 | matchLabels: 8 | app: postgres 9 | ingress: 10 | - fromEndpoints: 11 | - matchLabels: 12 | app: app 13 | --- -------------------------------------------------------------------------------- /chapter-4/web-server.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "database/sql" 5 | "fmt" 6 | _ "github.com/lib/pq" 7 | "log" 8 | "net/http" 9 | "os" 10 | ) 11 | 12 | func hello(w http.ResponseWriter, _ *http.Request) { 13 | fmt.Fprintf(w, "Hello") 14 | } 15 | 16 | func healthz(w http.ResponseWriter, _ *http.Request) { 17 | fmt.Fprintf(w, "Healthy") 18 | } 19 | 20 | func dataHandler(w http.ResponseWriter, _ *http.Request) { 21 | db := CreateCon() 22 | defer db.Close() 23 | err := db.Ping() 24 | if err != nil { 25 | http.Error(w, err.Error(), http.StatusInternalServerError) 26 | } else { 27 | fmt.Fprintf(w, "Database Connected") 28 | } 29 | } 30 | 31 | func main() { 32 | http.HandleFunc("/", hello) 33 | 34 | http.HandleFunc("/healthz", healthz) 35 | 36 | http.HandleFunc("/data", dataHandler) 37 | 38 | log.Fatal(http.ListenAndServe("0.0.0.0:8080", nil)) 39 | } 40 | 41 | // CreateCon creates the SQL database connection. 42 | func CreateCon() *sql.DB { 43 | user := os.Getenv("DB_USER") 44 | pass := os.Getenv("DB_PASSWORD") 45 | host := os.Getenv("DB_HOST") 46 | port := os.Getenv("DB_PORT") 47 | 48 | connStr := fmt.Sprintf("postgres://%v:%v@%v:%v?sslmode=disable", user, pass, host, port) 49 | 50 | fmt.Printf("Database Connection String: %v \n", connStr) 51 | 52 | db, err := sql.Open("postgres", connStr) 53 | 54 | if err != nil { 55 | log.Fatalf("ERROR: %v", err) 56 | } 57 | 58 | return db 59 | } 60 | 
-------------------------------------------------------------------------------- /chapter-4/web.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: app 5 | spec: 6 | selector: 7 | matchLabels: 8 | app: app 9 | replicas: 1 10 | template: 11 | metadata: 12 | labels: 13 | app: app 14 | spec: 15 | containers: 16 | - name: go-web 17 | image: strongjz/go-web:v0.0.2 18 | ports: 19 | - containerPort: 8080 20 | livenessProbe: 21 | httpGet: 22 | path: /healthz 23 | port: 8080 24 | initialDelaySeconds: 5 25 | periodSeconds: 5 26 | readinessProbe: 27 | httpGet: 28 | path: / 29 | port: 8080 30 | initialDelaySeconds: 5 31 | periodSeconds: 5 32 | env: 33 | - name: DB_HOST 34 | value: "postgres" 35 | - name: DB_USER 36 | value: "postgres" 37 | - name: DB_PASSWORD 38 | value: "mysecretpassword" 39 | - name: DB_PORT 40 | value: "5432" 41 | -------------------------------------------------------------------------------- /chapter-5/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:1.15 AS builder 2 | WORKDIR /opt 3 | COPY go.mod . 4 | COPY web-server.go . 5 | RUN CGO_ENABLED=0 GOOS=linux go build -o web-server . 6 | 7 | FROM golang:1.15 8 | WORKDIR /opt 9 | COPY --from=builder /opt/web-server . 
10 | CMD ["/opt/web-server"] 11 | -------------------------------------------------------------------------------- /chapter-5/app-linkerd-dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/strongjz/Networking-and-Kubernetes/bd0e0702ff21473d1216da8a23583473c26da09e/chapter-5/app-linkerd-dashboard.png -------------------------------------------------------------------------------- /chapter-5/app-stats.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/strongjz/Networking-and-Kubernetes/bd0e0702ff21473d1216da8a23583473c26da09e/chapter-5/app-stats.png -------------------------------------------------------------------------------- /chapter-5/container_connectivity.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/strongjz/Networking-and-Kubernetes/bd0e0702ff21473d1216da8a23583473c26da09e/chapter-5/container_connectivity.png -------------------------------------------------------------------------------- /chapter-5/database.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: postgres 5 | labels: 6 | app: postgres 7 | spec: 8 | ports: 9 | - port: 5432 10 | name: postgres 11 | selector: 12 | app: postgres 13 | --- 14 | apiVersion: v1 15 | kind: ConfigMap 16 | metadata: 17 | name: postgres-config 18 | labels: 19 | app: postgres 20 | data: 21 | POSTGRES_DB: testpostgresdb 22 | POSTGRES_USER: postgres 23 | POSTGRES_PASSWORD: mysecretpassword 24 | --- 25 | apiVersion: apps/v1 26 | kind: StatefulSet 27 | metadata: 28 | name: postgres 29 | spec: 30 | serviceName: "postgres" 31 | replicas: 2 32 | selector: 33 | matchLabels: 34 | app: postgres 35 | template: 36 | metadata: 37 | labels: 38 | app: postgres 39 | spec: 40 | containers: 41 | - name: postgres 42 | image: 
postgres:12.2 43 | envFrom: 44 | - configMapRef: 45 | name: postgres-config 46 | ports: 47 | - containerPort: 5432 48 | name: postgredb 49 | volumeMounts: 50 | - name: postgredb 51 | mountPath: /var/lib/postgresql/data 52 | subPath: postgres 53 | volumeClaimTemplates: 54 | - metadata: 55 | name: postgredb 56 | spec: 57 | accessModes: [ "ReadWriteOnce" ] 58 | resources: 59 | requests: 60 | storage: 1Gi 61 | -------------------------------------------------------------------------------- /chapter-5/dnsutils.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: dnsutils 5 | namespace: default 6 | spec: 7 | containers: 8 | - name: dnsutils 9 | image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 10 | command: 11 | - sleep 12 | - "3600" 13 | imagePullPolicy: IfNotPresent 14 | restartPolicy: Always -------------------------------------------------------------------------------- /chapter-5/go.mod: -------------------------------------------------------------------------------- 1 | module github.com/strongjz/advanced_networking_code_examples 2 | 3 | go 1.15 4 | 5 | require ( 6 | github.com/lib/pq v1.3.0 7 | ) -------------------------------------------------------------------------------- /chapter-5/go.sum: -------------------------------------------------------------------------------- 1 | github.com/lib/pq v1.3.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= 2 | -------------------------------------------------------------------------------- /chapter-5/ingress-example-2.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: app2 5 | spec: 6 | selector: 7 | matchLabels: 8 | app: app2 9 | replicas: 3 10 | template: 11 | metadata: 12 | labels: 13 | app: app2 14 | spec: 15 | containers: 16 | - name: go-web 17 | image: strongjz/go-web:v0.0.6 18 | ports: 19 | - containerPort: 
8080 20 | livenessProbe: 21 | httpGet: 22 | path: /healthz 23 | port: 8080 24 | initialDelaySeconds: 5 25 | periodSeconds: 5 26 | readinessProbe: 27 | httpGet: 28 | path: / 29 | port: 8080 30 | initialDelaySeconds: 5 31 | periodSeconds: 5 32 | env: 33 | - name: MY_NODE_NAME 34 | valueFrom: 35 | fieldRef: 36 | fieldPath: spec.nodeName 37 | - name: MY_POD_NAME 38 | valueFrom: 39 | fieldRef: 40 | fieldPath: metadata.name 41 | - name: MY_POD_NAMESPACE 42 | valueFrom: 43 | fieldRef: 44 | fieldPath: metadata.namespace 45 | - name: MY_POD_IP 46 | valueFrom: 47 | fieldRef: 48 | fieldPath: status.podIP 49 | - name: MY_POD_SERVICE_ACCOUNT 50 | valueFrom: 51 | fieldRef: 52 | fieldPath: spec.serviceAccountName 53 | - name: DB_HOST 54 | value: "postgres" 55 | - name: DB_USER 56 | value: "postgres" 57 | - name: DB_PASSWORD 58 | value: "mysecretpassword" 59 | - name: DB_PORT 60 | value: "5432" 61 | --- 62 | apiVersion: v1 63 | kind: Service 64 | metadata: 65 | name: clusterip-service-2 66 | labels: 67 | app: app2 68 | spec: 69 | selector: 70 | app: app2 71 | ports: 72 | - protocol: TCP 73 | port: 80 74 | targetPort: 8080 75 | --- 76 | apiVersion: networking.k8s.io/v1 77 | kind: Ingress 78 | metadata: 79 | name: ingress-resource 80 | annotations: 81 | kubernetes.io/ingress.class: nginx 82 | spec: 83 | rules: 84 | - http: 85 | paths: 86 | - path: /data 87 | pathType: Exact 88 | backend: 89 | service: 90 | name: clusterip-service-2 91 | port: 92 | number: 80 -------------------------------------------------------------------------------- /chapter-5/ingress-rule.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: Ingress 3 | metadata: 4 | name: ingress-resource 5 | annotations: 6 | kubernetes.io/ingress.class: nginx 7 | spec: 8 | rules: 9 | - http: 10 | paths: 11 | - path: /host 12 | pathType: Exact 13 | backend: 14 | service: 15 | name: clusterip-service 16 | port: 17 | number: 8080
-------------------------------------------------------------------------------- /chapter-5/ingress.yaml: -------------------------------------------------------------------------------- 1 | 2 | apiVersion: v1 3 | kind: Namespace 4 | metadata: 5 | name: ingress-nginx 6 | labels: 7 | app.kubernetes.io/name: ingress-nginx 8 | app.kubernetes.io/instance: ingress-nginx 9 | 10 | --- 11 | # Source: ingress-nginx/templates/controller-serviceaccount.yaml 12 | apiVersion: v1 13 | kind: ServiceAccount 14 | metadata: 15 | labels: 16 | helm.sh/chart: ingress-nginx-3.21.0 17 | app.kubernetes.io/name: ingress-nginx 18 | app.kubernetes.io/instance: ingress-nginx 19 | app.kubernetes.io/version: 0.43.0 20 | app.kubernetes.io/managed-by: Helm 21 | app.kubernetes.io/component: controller 22 | name: ingress-nginx 23 | namespace: ingress-nginx 24 | --- 25 | # Source: ingress-nginx/templates/controller-configmap.yaml 26 | apiVersion: v1 27 | kind: ConfigMap 28 | metadata: 29 | labels: 30 | helm.sh/chart: ingress-nginx-3.21.0 31 | app.kubernetes.io/name: ingress-nginx 32 | app.kubernetes.io/instance: ingress-nginx 33 | app.kubernetes.io/version: 0.43.0 34 | app.kubernetes.io/managed-by: Helm 35 | app.kubernetes.io/component: controller 36 | name: ingress-nginx-controller 37 | namespace: ingress-nginx 38 | data: 39 | --- 40 | # Source: ingress-nginx/templates/clusterrole.yaml 41 | apiVersion: rbac.authorization.k8s.io/v1 42 | kind: ClusterRole 43 | metadata: 44 | labels: 45 | helm.sh/chart: ingress-nginx-3.21.0 46 | app.kubernetes.io/name: ingress-nginx 47 | app.kubernetes.io/instance: ingress-nginx 48 | app.kubernetes.io/version: 0.43.0 49 | app.kubernetes.io/managed-by: Helm 50 | name: ingress-nginx 51 | rules: 52 | - apiGroups: 53 | - '' 54 | resources: 55 | - configmaps 56 | - endpoints 57 | - nodes 58 | - pods 59 | - secrets 60 | verbs: 61 | - list 62 | - watch 63 | - apiGroups: 64 | - '' 65 | resources: 66 | - nodes 67 | verbs: 68 | - get 69 | - apiGroups: 70 | - '' 71 | resources: 
72 | - services 73 | verbs: 74 | - get 75 | - list 76 | - watch 77 | - apiGroups: 78 | - extensions 79 | - networking.k8s.io # k8s 1.14+ 80 | resources: 81 | - ingresses 82 | verbs: 83 | - get 84 | - list 85 | - watch 86 | - apiGroups: 87 | - '' 88 | resources: 89 | - events 90 | verbs: 91 | - create 92 | - patch 93 | - apiGroups: 94 | - extensions 95 | - networking.k8s.io # k8s 1.14+ 96 | resources: 97 | - ingresses/status 98 | verbs: 99 | - update 100 | - apiGroups: 101 | - networking.k8s.io # k8s 1.14+ 102 | resources: 103 | - ingressclasses 104 | verbs: 105 | - get 106 | - list 107 | - watch 108 | --- 109 | # Source: ingress-nginx/templates/clusterrolebinding.yaml 110 | apiVersion: rbac.authorization.k8s.io/v1 111 | kind: ClusterRoleBinding 112 | metadata: 113 | labels: 114 | helm.sh/chart: ingress-nginx-3.21.0 115 | app.kubernetes.io/name: ingress-nginx 116 | app.kubernetes.io/instance: ingress-nginx 117 | app.kubernetes.io/version: 0.43.0 118 | app.kubernetes.io/managed-by: Helm 119 | name: ingress-nginx 120 | roleRef: 121 | apiGroup: rbac.authorization.k8s.io 122 | kind: ClusterRole 123 | name: ingress-nginx 124 | subjects: 125 | - kind: ServiceAccount 126 | name: ingress-nginx 127 | namespace: ingress-nginx 128 | --- 129 | # Source: ingress-nginx/templates/controller-role.yaml 130 | apiVersion: rbac.authorization.k8s.io/v1 131 | kind: Role 132 | metadata: 133 | labels: 134 | helm.sh/chart: ingress-nginx-3.21.0 135 | app.kubernetes.io/name: ingress-nginx 136 | app.kubernetes.io/instance: ingress-nginx 137 | app.kubernetes.io/version: 0.43.0 138 | app.kubernetes.io/managed-by: Helm 139 | app.kubernetes.io/component: controller 140 | name: ingress-nginx 141 | namespace: ingress-nginx 142 | rules: 143 | - apiGroups: 144 | - '' 145 | resources: 146 | - namespaces 147 | verbs: 148 | - get 149 | - apiGroups: 150 | - '' 151 | resources: 152 | - configmaps 153 | - pods 154 | - secrets 155 | - endpoints 156 | verbs: 157 | - get 158 | - list 159 | - watch 160 | - 
apiGroups: 161 | - '' 162 | resources: 163 | - services 164 | verbs: 165 | - get 166 | - list 167 | - watch 168 | - apiGroups: 169 | - extensions 170 | - networking.k8s.io # k8s 1.14+ 171 | resources: 172 | - ingresses 173 | verbs: 174 | - get 175 | - list 176 | - watch 177 | - apiGroups: 178 | - extensions 179 | - networking.k8s.io # k8s 1.14+ 180 | resources: 181 | - ingresses/status 182 | verbs: 183 | - update 184 | - apiGroups: 185 | - networking.k8s.io # k8s 1.14+ 186 | resources: 187 | - ingressclasses 188 | verbs: 189 | - get 190 | - list 191 | - watch 192 | - apiGroups: 193 | - '' 194 | resources: 195 | - configmaps 196 | resourceNames: 197 | - ingress-controller-leader-nginx 198 | verbs: 199 | - get 200 | - update 201 | - apiGroups: 202 | - '' 203 | resources: 204 | - configmaps 205 | verbs: 206 | - create 207 | - apiGroups: 208 | - '' 209 | resources: 210 | - events 211 | verbs: 212 | - create 213 | - patch 214 | --- 215 | # Source: ingress-nginx/templates/controller-rolebinding.yaml 216 | apiVersion: rbac.authorization.k8s.io/v1 217 | kind: RoleBinding 218 | metadata: 219 | labels: 220 | helm.sh/chart: ingress-nginx-3.21.0 221 | app.kubernetes.io/name: ingress-nginx 222 | app.kubernetes.io/instance: ingress-nginx 223 | app.kubernetes.io/version: 0.43.0 224 | app.kubernetes.io/managed-by: Helm 225 | app.kubernetes.io/component: controller 226 | name: ingress-nginx 227 | namespace: ingress-nginx 228 | roleRef: 229 | apiGroup: rbac.authorization.k8s.io 230 | kind: Role 231 | name: ingress-nginx 232 | subjects: 233 | - kind: ServiceAccount 234 | name: ingress-nginx 235 | namespace: ingress-nginx 236 | --- 237 | # Source: ingress-nginx/templates/controller-service-webhook.yaml 238 | apiVersion: v1 239 | kind: Service 240 | metadata: 241 | labels: 242 | helm.sh/chart: ingress-nginx-3.21.0 243 | app.kubernetes.io/name: ingress-nginx 244 | app.kubernetes.io/instance: ingress-nginx 245 | app.kubernetes.io/version: 0.43.0 246 | app.kubernetes.io/managed-by: Helm 
247 | app.kubernetes.io/component: controller 248 | name: ingress-nginx-controller-admission 249 | namespace: ingress-nginx 250 | spec: 251 | type: ClusterIP 252 | ports: 253 | - name: https-webhook 254 | port: 443 255 | targetPort: webhook 256 | selector: 257 | app.kubernetes.io/name: ingress-nginx 258 | app.kubernetes.io/instance: ingress-nginx 259 | app.kubernetes.io/component: controller 260 | --- 261 | # Source: ingress-nginx/templates/controller-service.yaml 262 | apiVersion: v1 263 | kind: Service 264 | metadata: 265 | annotations: 266 | labels: 267 | helm.sh/chart: ingress-nginx-3.21.0 268 | app.kubernetes.io/name: ingress-nginx 269 | app.kubernetes.io/instance: ingress-nginx 270 | app.kubernetes.io/version: 0.43.0 271 | app.kubernetes.io/managed-by: Helm 272 | app.kubernetes.io/component: controller 273 | name: ingress-nginx-controller 274 | namespace: ingress-nginx 275 | spec: 276 | type: NodePort 277 | ports: 278 | - name: http 279 | port: 80 280 | protocol: TCP 281 | targetPort: http 282 | - name: https 283 | port: 443 284 | protocol: TCP 285 | targetPort: https 286 | selector: 287 | app.kubernetes.io/name: ingress-nginx 288 | app.kubernetes.io/instance: ingress-nginx 289 | app.kubernetes.io/component: controller 290 | --- 291 | # Source: ingress-nginx/templates/controller-deployment.yaml 292 | apiVersion: apps/v1 293 | kind: Deployment 294 | metadata: 295 | labels: 296 | helm.sh/chart: ingress-nginx-3.21.0 297 | app.kubernetes.io/name: ingress-nginx 298 | app.kubernetes.io/instance: ingress-nginx 299 | app.kubernetes.io/version: 0.43.0 300 | app.kubernetes.io/managed-by: Helm 301 | app.kubernetes.io/component: controller 302 | name: ingress-nginx-controller 303 | namespace: ingress-nginx 304 | spec: 305 | selector: 306 | matchLabels: 307 | app.kubernetes.io/name: ingress-nginx 308 | app.kubernetes.io/instance: ingress-nginx 309 | app.kubernetes.io/component: controller 310 | revisionHistoryLimit: 10 311 | strategy: 312 | rollingUpdate: 313 | 
maxUnavailable: 1 314 | type: RollingUpdate 315 | minReadySeconds: 0 316 | template: 317 | metadata: 318 | labels: 319 | app.kubernetes.io/name: ingress-nginx 320 | app.kubernetes.io/instance: ingress-nginx 321 | app.kubernetes.io/component: controller 322 | spec: 323 | dnsPolicy: ClusterFirst 324 | containers: 325 | - name: controller 326 | image: k8s.gcr.io/ingress-nginx/controller:v0.43.0@sha256:9bba603b99bf25f6d117cf1235b6598c16033ad027b143c90fa5b3cc583c5713 327 | imagePullPolicy: IfNotPresent 328 | lifecycle: 329 | preStop: 330 | exec: 331 | command: 332 | - /wait-shutdown 333 | args: 334 | - /nginx-ingress-controller 335 | - --election-id=ingress-controller-leader 336 | - --ingress-class=nginx 337 | - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller 338 | - --validating-webhook=:8443 339 | - --validating-webhook-certificate=/usr/local/certificates/cert 340 | - --validating-webhook-key=/usr/local/certificates/key 341 | - --publish-status-address=localhost 342 | securityContext: 343 | capabilities: 344 | drop: 345 | - ALL 346 | add: 347 | - NET_BIND_SERVICE 348 | runAsUser: 101 349 | allowPrivilegeEscalation: true 350 | env: 351 | - name: POD_NAME 352 | valueFrom: 353 | fieldRef: 354 | fieldPath: metadata.name 355 | - name: POD_NAMESPACE 356 | valueFrom: 357 | fieldRef: 358 | fieldPath: metadata.namespace 359 | - name: LD_PRELOAD 360 | value: /usr/local/lib/libmimalloc.so 361 | livenessProbe: 362 | httpGet: 363 | path: /healthz 364 | port: 10254 365 | scheme: HTTP 366 | initialDelaySeconds: 10 367 | periodSeconds: 10 368 | timeoutSeconds: 1 369 | successThreshold: 1 370 | failureThreshold: 5 371 | readinessProbe: 372 | httpGet: 373 | path: /healthz 374 | port: 10254 375 | scheme: HTTP 376 | initialDelaySeconds: 10 377 | periodSeconds: 10 378 | timeoutSeconds: 1 379 | successThreshold: 1 380 | failureThreshold: 3 381 | ports: 382 | - name: http 383 | containerPort: 80 384 | protocol: TCP 385 | hostPort: 80 386 | - name: https 387 | containerPort: 443 388 | 
protocol: TCP 389 | hostPort: 443 390 | - name: webhook 391 | containerPort: 8443 392 | protocol: TCP 393 | volumeMounts: 394 | - name: webhook-cert 395 | mountPath: /usr/local/certificates/ 396 | readOnly: true 397 | resources: 398 | requests: 399 | cpu: 100m 400 | memory: 90Mi 401 | nodeSelector: 402 | ingress-ready: 'true' 403 | kubernetes.io/os: linux 404 | tolerations: 405 | - effect: NoSchedule 406 | key: node-role.kubernetes.io/master 407 | operator: Equal 408 | serviceAccountName: ingress-nginx 409 | terminationGracePeriodSeconds: 0 410 | volumes: 411 | - name: webhook-cert 412 | secret: 413 | secretName: ingress-nginx-admission 414 | --- 415 | # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml 416 | # before changing this value, check the required kubernetes version 417 | # https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites 418 | apiVersion: admissionregistration.k8s.io/v1 419 | kind: ValidatingWebhookConfiguration 420 | metadata: 421 | labels: 422 | helm.sh/chart: ingress-nginx-3.21.0 423 | app.kubernetes.io/name: ingress-nginx 424 | app.kubernetes.io/instance: ingress-nginx 425 | app.kubernetes.io/version: 0.43.0 426 | app.kubernetes.io/managed-by: Helm 427 | app.kubernetes.io/component: admission-webhook 428 | name: ingress-nginx-admission 429 | webhooks: 430 | - name: validate.nginx.ingress.kubernetes.io 431 | matchPolicy: Equivalent 432 | rules: 433 | - apiGroups: 434 | - networking.k8s.io 435 | apiVersions: 436 | - v1beta1 437 | operations: 438 | - CREATE 439 | - UPDATE 440 | resources: 441 | - ingresses 442 | failurePolicy: Fail 443 | sideEffects: None 444 | admissionReviewVersions: 445 | - v1 446 | - v1beta1 447 | clientConfig: 448 | service: 449 | namespace: ingress-nginx 450 | name: ingress-nginx-controller-admission 451 | path: /networking/v1beta1/ingresses 452 | --- 453 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml 454 | 
apiVersion: v1 455 | kind: ServiceAccount 456 | metadata: 457 | name: ingress-nginx-admission 458 | annotations: 459 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 460 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 461 | labels: 462 | helm.sh/chart: ingress-nginx-3.21.0 463 | app.kubernetes.io/name: ingress-nginx 464 | app.kubernetes.io/instance: ingress-nginx 465 | app.kubernetes.io/version: 0.43.0 466 | app.kubernetes.io/managed-by: Helm 467 | app.kubernetes.io/component: admission-webhook 468 | namespace: ingress-nginx 469 | --- 470 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml 471 | apiVersion: rbac.authorization.k8s.io/v1 472 | kind: ClusterRole 473 | metadata: 474 | name: ingress-nginx-admission 475 | annotations: 476 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 477 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 478 | labels: 479 | helm.sh/chart: ingress-nginx-3.21.0 480 | app.kubernetes.io/name: ingress-nginx 481 | app.kubernetes.io/instance: ingress-nginx 482 | app.kubernetes.io/version: 0.43.0 483 | app.kubernetes.io/managed-by: Helm 484 | app.kubernetes.io/component: admission-webhook 485 | rules: 486 | - apiGroups: 487 | - admissionregistration.k8s.io 488 | resources: 489 | - validatingwebhookconfigurations 490 | verbs: 491 | - get 492 | - update 493 | --- 494 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml 495 | apiVersion: rbac.authorization.k8s.io/v1 496 | kind: ClusterRoleBinding 497 | metadata: 498 | name: ingress-nginx-admission 499 | annotations: 500 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 501 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 502 | labels: 503 | helm.sh/chart: ingress-nginx-3.21.0 504 | app.kubernetes.io/name: ingress-nginx 505 | app.kubernetes.io/instance: ingress-nginx 506 | app.kubernetes.io/version: 0.43.0 507 | 
app.kubernetes.io/managed-by: Helm 508 | app.kubernetes.io/component: admission-webhook 509 | roleRef: 510 | apiGroup: rbac.authorization.k8s.io 511 | kind: ClusterRole 512 | name: ingress-nginx-admission 513 | subjects: 514 | - kind: ServiceAccount 515 | name: ingress-nginx-admission 516 | namespace: ingress-nginx 517 | --- 518 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml 519 | apiVersion: rbac.authorization.k8s.io/v1 520 | kind: Role 521 | metadata: 522 | name: ingress-nginx-admission 523 | annotations: 524 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 525 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 526 | labels: 527 | helm.sh/chart: ingress-nginx-3.21.0 528 | app.kubernetes.io/name: ingress-nginx 529 | app.kubernetes.io/instance: ingress-nginx 530 | app.kubernetes.io/version: 0.43.0 531 | app.kubernetes.io/managed-by: Helm 532 | app.kubernetes.io/component: admission-webhook 533 | namespace: ingress-nginx 534 | rules: 535 | - apiGroups: 536 | - '' 537 | resources: 538 | - secrets 539 | verbs: 540 | - get 541 | - create 542 | --- 543 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml 544 | apiVersion: rbac.authorization.k8s.io/v1 545 | kind: RoleBinding 546 | metadata: 547 | name: ingress-nginx-admission 548 | annotations: 549 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 550 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 551 | labels: 552 | helm.sh/chart: ingress-nginx-3.21.0 553 | app.kubernetes.io/name: ingress-nginx 554 | app.kubernetes.io/instance: ingress-nginx 555 | app.kubernetes.io/version: 0.43.0 556 | app.kubernetes.io/managed-by: Helm 557 | app.kubernetes.io/component: admission-webhook 558 | namespace: ingress-nginx 559 | roleRef: 560 | apiGroup: rbac.authorization.k8s.io 561 | kind: Role 562 | name: ingress-nginx-admission 563 | subjects: 564 | - kind: ServiceAccount 565 | name: ingress-nginx-admission 566 | 
namespace: ingress-nginx 567 | --- 568 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml 569 | apiVersion: batch/v1 570 | kind: Job 571 | metadata: 572 | name: ingress-nginx-admission-create 573 | annotations: 574 | helm.sh/hook: pre-install,pre-upgrade 575 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 576 | labels: 577 | helm.sh/chart: ingress-nginx-3.21.0 578 | app.kubernetes.io/name: ingress-nginx 579 | app.kubernetes.io/instance: ingress-nginx 580 | app.kubernetes.io/version: 0.43.0 581 | app.kubernetes.io/managed-by: Helm 582 | app.kubernetes.io/component: admission-webhook 583 | namespace: ingress-nginx 584 | spec: 585 | template: 586 | metadata: 587 | name: ingress-nginx-admission-create 588 | labels: 589 | helm.sh/chart: ingress-nginx-3.21.0 590 | app.kubernetes.io/name: ingress-nginx 591 | app.kubernetes.io/instance: ingress-nginx 592 | app.kubernetes.io/version: 0.43.0 593 | app.kubernetes.io/managed-by: Helm 594 | app.kubernetes.io/component: admission-webhook 595 | spec: 596 | containers: 597 | - name: create 598 | image: docker.io/jettech/kube-webhook-certgen:v1.5.1 599 | imagePullPolicy: IfNotPresent 600 | args: 601 | - create 602 | - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc 603 | - --namespace=$(POD_NAMESPACE) 604 | - --secret-name=ingress-nginx-admission 605 | env: 606 | - name: POD_NAMESPACE 607 | valueFrom: 608 | fieldRef: 609 | fieldPath: metadata.namespace 610 | restartPolicy: OnFailure 611 | serviceAccountName: ingress-nginx-admission 612 | securityContext: 613 | runAsNonRoot: true 614 | runAsUser: 2000 615 | --- 616 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml 617 | apiVersion: batch/v1 618 | kind: Job 619 | metadata: 620 | name: ingress-nginx-admission-patch 621 | annotations: 622 | helm.sh/hook: post-install,post-upgrade 623 | helm.sh/hook-delete-policy: 
before-hook-creation,hook-succeeded 624 | labels: 625 | helm.sh/chart: ingress-nginx-3.21.0 626 | app.kubernetes.io/name: ingress-nginx 627 | app.kubernetes.io/instance: ingress-nginx 628 | app.kubernetes.io/version: 0.43.0 629 | app.kubernetes.io/managed-by: Helm 630 | app.kubernetes.io/component: admission-webhook 631 | namespace: ingress-nginx 632 | spec: 633 | template: 634 | metadata: 635 | name: ingress-nginx-admission-patch 636 | labels: 637 | helm.sh/chart: ingress-nginx-3.21.0 638 | app.kubernetes.io/name: ingress-nginx 639 | app.kubernetes.io/instance: ingress-nginx 640 | app.kubernetes.io/version: 0.43.0 641 | app.kubernetes.io/managed-by: Helm 642 | app.kubernetes.io/component: admission-webhook 643 | spec: 644 | containers: 645 | - name: patch 646 | image: docker.io/jettech/kube-webhook-certgen:v1.5.1 647 | imagePullPolicy: IfNotPresent 648 | args: 649 | - patch 650 | - --webhook-name=ingress-nginx-admission 651 | - --namespace=$(POD_NAMESPACE) 652 | - --patch-mutating=false 653 | - --secret-name=ingress-nginx-admission 654 | - --patch-failure-policy=Fail 655 | env: 656 | - name: POD_NAMESPACE 657 | valueFrom: 658 | fieldRef: 659 | fieldPath: metadata.namespace 660 | restartPolicy: OnFailure 661 | serviceAccountName: ingress-nginx-admission 662 | securityContext: 663 | runAsNonRoot: true 664 | runAsUser: 2000 665 | -------------------------------------------------------------------------------- /chapter-5/kind-ingress.yaml: -------------------------------------------------------------------------------- 1 | kind: Cluster 2 | apiVersion: kind.x-k8s.io/v1alpha4 3 | nodes: 4 | - role: control-plane 5 | kubeadmConfigPatches: 6 | - | 7 | kind: InitConfiguration 8 | nodeRegistration: 9 | kubeletExtraArgs: 10 | node-labels: "ingress-ready=true" 11 | extraPortMappings: 12 | - containerPort: 80 13 | hostPort: 80 14 | protocol: TCP 15 | - containerPort: 443 16 | hostPort: 443 17 | protocol: TCP 18 | - role: worker 19 | - role: worker 20 | - role: worker 
-------------------------------------------------------------------------------- /chapter-5/linkerd-dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/strongjz/Networking-and-Kubernetes/bd0e0702ff21473d1216da8a23583473c26da09e/chapter-5/linkerd-dashboard.png -------------------------------------------------------------------------------- /chapter-5/metallb-configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | namespace: metallb-system 5 | name: config 6 | data: 7 | config: | 8 | address-pools: 9 | - name: default 10 | protocol: layer2 11 | addresses: 12 | - 172.18.255.200-172.18.255.250 -------------------------------------------------------------------------------- /chapter-5/metallb.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: policy/v1beta1 2 | kind: PodSecurityPolicy 3 | metadata: 4 | labels: 5 | app: metallb 6 | name: controller 7 | namespace: metallb-system 8 | spec: 9 | allowPrivilegeEscalation: false 10 | allowedCapabilities: [] 11 | allowedHostPaths: [] 12 | defaultAddCapabilities: [] 13 | defaultAllowPrivilegeEscalation: false 14 | fsGroup: 15 | ranges: 16 | - max: 65535 17 | min: 1 18 | rule: MustRunAs 19 | hostIPC: false 20 | hostNetwork: false 21 | hostPID: false 22 | privileged: false 23 | readOnlyRootFilesystem: true 24 | requiredDropCapabilities: 25 | - ALL 26 | runAsUser: 27 | ranges: 28 | - max: 65535 29 | min: 1 30 | rule: MustRunAs 31 | seLinux: 32 | rule: RunAsAny 33 | supplementalGroups: 34 | ranges: 35 | - max: 65535 36 | min: 1 37 | rule: MustRunAs 38 | volumes: 39 | - configMap 40 | - secret 41 | - emptyDir 42 | --- 43 | apiVersion: policy/v1beta1 44 | kind: PodSecurityPolicy 45 | metadata: 46 | labels: 47 | app: metallb 48 | name: speaker 49 | namespace: metallb-system 50 | spec: 51 | 
allowPrivilegeEscalation: false 52 | allowedCapabilities: 53 | - NET_ADMIN 54 | - NET_RAW 55 | - SYS_ADMIN 56 | allowedHostPaths: [] 57 | defaultAddCapabilities: [] 58 | defaultAllowPrivilegeEscalation: false 59 | fsGroup: 60 | rule: RunAsAny 61 | hostIPC: false 62 | hostNetwork: true 63 | hostPID: false 64 | hostPorts: 65 | - max: 7472 66 | min: 7472 67 | privileged: true 68 | readOnlyRootFilesystem: true 69 | requiredDropCapabilities: 70 | - ALL 71 | runAsUser: 72 | rule: RunAsAny 73 | seLinux: 74 | rule: RunAsAny 75 | supplementalGroups: 76 | rule: RunAsAny 77 | volumes: 78 | - configMap 79 | - secret 80 | - emptyDir 81 | --- 82 | apiVersion: v1 83 | kind: ServiceAccount 84 | metadata: 85 | labels: 86 | app: metallb 87 | name: controller 88 | namespace: metallb-system 89 | --- 90 | apiVersion: v1 91 | kind: ServiceAccount 92 | metadata: 93 | labels: 94 | app: metallb 95 | name: speaker 96 | namespace: metallb-system 97 | --- 98 | apiVersion: rbac.authorization.k8s.io/v1 99 | kind: ClusterRole 100 | metadata: 101 | labels: 102 | app: metallb 103 | name: metallb-system:controller 104 | rules: 105 | - apiGroups: 106 | - '' 107 | resources: 108 | - services 109 | verbs: 110 | - get 111 | - list 112 | - watch 113 | - update 114 | - apiGroups: 115 | - '' 116 | resources: 117 | - services/status 118 | verbs: 119 | - update 120 | - apiGroups: 121 | - '' 122 | resources: 123 | - events 124 | verbs: 125 | - create 126 | - patch 127 | - apiGroups: 128 | - policy 129 | resourceNames: 130 | - controller 131 | resources: 132 | - podsecuritypolicies 133 | verbs: 134 | - use 135 | --- 136 | apiVersion: rbac.authorization.k8s.io/v1 137 | kind: ClusterRole 138 | metadata: 139 | labels: 140 | app: metallb 141 | name: metallb-system:speaker 142 | rules: 143 | - apiGroups: 144 | - '' 145 | resources: 146 | - services 147 | - endpoints 148 | - nodes 149 | verbs: 150 | - get 151 | - list 152 | - watch 153 | - apiGroups: 154 | - '' 155 | resources: 156 | - events 157 | verbs: 158 | - 
create 159 | - patch 160 | - apiGroups: 161 | - policy 162 | resourceNames: 163 | - speaker 164 | resources: 165 | - podsecuritypolicies 166 | verbs: 167 | - use 168 | --- 169 | apiVersion: rbac.authorization.k8s.io/v1 170 | kind: Role 171 | metadata: 172 | labels: 173 | app: metallb 174 | name: config-watcher 175 | namespace: metallb-system 176 | rules: 177 | - apiGroups: 178 | - '' 179 | resources: 180 | - configmaps 181 | verbs: 182 | - get 183 | - list 184 | - watch 185 | --- 186 | apiVersion: rbac.authorization.k8s.io/v1 187 | kind: Role 188 | metadata: 189 | labels: 190 | app: metallb 191 | name: pod-lister 192 | namespace: metallb-system 193 | rules: 194 | - apiGroups: 195 | - '' 196 | resources: 197 | - pods 198 | verbs: 199 | - list 200 | --- 201 | apiVersion: rbac.authorization.k8s.io/v1 202 | kind: ClusterRoleBinding 203 | metadata: 204 | labels: 205 | app: metallb 206 | name: metallb-system:controller 207 | roleRef: 208 | apiGroup: rbac.authorization.k8s.io 209 | kind: ClusterRole 210 | name: metallb-system:controller 211 | subjects: 212 | - kind: ServiceAccount 213 | name: controller 214 | namespace: metallb-system 215 | --- 216 | apiVersion: rbac.authorization.k8s.io/v1 217 | kind: ClusterRoleBinding 218 | metadata: 219 | labels: 220 | app: metallb 221 | name: metallb-system:speaker 222 | roleRef: 223 | apiGroup: rbac.authorization.k8s.io 224 | kind: ClusterRole 225 | name: metallb-system:speaker 226 | subjects: 227 | - kind: ServiceAccount 228 | name: speaker 229 | namespace: metallb-system 230 | --- 231 | apiVersion: rbac.authorization.k8s.io/v1 232 | kind: RoleBinding 233 | metadata: 234 | labels: 235 | app: metallb 236 | name: config-watcher 237 | namespace: metallb-system 238 | roleRef: 239 | apiGroup: rbac.authorization.k8s.io 240 | kind: Role 241 | name: config-watcher 242 | subjects: 243 | - kind: ServiceAccount 244 | name: controller 245 | - kind: ServiceAccount 246 | name: speaker 247 | --- 248 | apiVersion: rbac.authorization.k8s.io/v1 249 
| kind: RoleBinding 250 | metadata: 251 | labels: 252 | app: metallb 253 | name: pod-lister 254 | namespace: metallb-system 255 | roleRef: 256 | apiGroup: rbac.authorization.k8s.io 257 | kind: Role 258 | name: pod-lister 259 | subjects: 260 | - kind: ServiceAccount 261 | name: speaker 262 | --- 263 | apiVersion: apps/v1 264 | kind: DaemonSet 265 | metadata: 266 | labels: 267 | app: metallb 268 | component: speaker 269 | name: speaker 270 | namespace: metallb-system 271 | spec: 272 | selector: 273 | matchLabels: 274 | app: metallb 275 | component: speaker 276 | template: 277 | metadata: 278 | annotations: 279 | prometheus.io/port: '7472' 280 | prometheus.io/scrape: 'true' 281 | labels: 282 | app: metallb 283 | component: speaker 284 | spec: 285 | containers: 286 | - args: 287 | - --port=7472 288 | - --config=config 289 | env: 290 | - name: METALLB_NODE_NAME 291 | valueFrom: 292 | fieldRef: 293 | fieldPath: spec.nodeName 294 | - name: METALLB_HOST 295 | valueFrom: 296 | fieldRef: 297 | fieldPath: status.hostIP 298 | - name: METALLB_ML_BIND_ADDR 299 | valueFrom: 300 | fieldRef: 301 | fieldPath: status.podIP 302 | # needed when another software is also using memberlist / port 7946 303 | #- name: METALLB_ML_BIND_PORT 304 | # value: "7946" 305 | - name: METALLB_ML_LABELS 306 | value: "app=metallb,component=speaker" 307 | - name: METALLB_ML_NAMESPACE 308 | valueFrom: 309 | fieldRef: 310 | fieldPath: metadata.namespace 311 | - name: METALLB_ML_SECRET_KEY 312 | valueFrom: 313 | secretKeyRef: 314 | name: memberlist 315 | key: secretkey 316 | image: quay.io/metallb/speaker:main 317 | imagePullPolicy: Always 318 | name: speaker 319 | ports: 320 | - containerPort: 7472 321 | name: monitoring 322 | resources: 323 | limits: 324 | cpu: 100m 325 | memory: 100Mi 326 | securityContext: 327 | allowPrivilegeEscalation: false 328 | capabilities: 329 | add: 330 | - NET_ADMIN 331 | - NET_RAW 332 | - SYS_ADMIN 333 | drop: 334 | - ALL 335 | readOnlyRootFilesystem: true 336 | hostNetwork: 
true 337 | nodeSelector: 338 | kubernetes.io/os: linux 339 | serviceAccountName: speaker 340 | terminationGracePeriodSeconds: 2 341 | tolerations: 342 | - effect: NoSchedule 343 | key: node-role.kubernetes.io/master 344 | --- 345 | apiVersion: apps/v1 346 | kind: Deployment 347 | metadata: 348 | labels: 349 | app: metallb 350 | component: controller 351 | name: controller 352 | namespace: metallb-system 353 | spec: 354 | revisionHistoryLimit: 3 355 | selector: 356 | matchLabels: 357 | app: metallb 358 | component: controller 359 | template: 360 | metadata: 361 | annotations: 362 | prometheus.io/port: '7472' 363 | prometheus.io/scrape: 'true' 364 | labels: 365 | app: metallb 366 | component: controller 367 | spec: 368 | containers: 369 | - args: 370 | - --port=7472 371 | - --config=config 372 | image: quay.io/metallb/controller:main 373 | imagePullPolicy: Always 374 | name: controller 375 | ports: 376 | - containerPort: 7472 377 | name: monitoring 378 | resources: 379 | limits: 380 | cpu: 100m 381 | memory: 100Mi 382 | securityContext: 383 | allowPrivilegeEscalation: false 384 | capabilities: 385 | drop: 386 | - all 387 | readOnlyRootFilesystem: true 388 | nodeSelector: 389 | kubernetes.io/os: linux 390 | securityContext: 391 | runAsNonRoot: true 392 | runAsUser: 65534 393 | serviceAccountName: controller 394 | terminationGracePeriodSeconds: 0 395 | -------------------------------------------------------------------------------- /chapter-5/mlb-ns.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: metallb-system 5 | labels: 6 | app: metallb 7 | -------------------------------------------------------------------------------- /chapter-5/nginx-ingress-controller.yml: -------------------------------------------------------------------------------- 1 | # Source https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/deploy.yaml 2 | --- 3 
| apiVersion: v1 4 | kind: Namespace 5 | metadata: 6 | name: ingress-nginx 7 | labels: 8 | app.kubernetes.io/name: ingress-nginx 9 | app.kubernetes.io/instance: ingress-nginx 10 | 11 | --- 12 | # Source: ingress-nginx/templates/controller-serviceaccount.yaml 13 | apiVersion: v1 14 | kind: ServiceAccount 15 | metadata: 16 | labels: 17 | helm.sh/chart: ingress-nginx-2.0.0 18 | app.kubernetes.io/name: ingress-nginx 19 | app.kubernetes.io/instance: ingress-nginx 20 | app.kubernetes.io/version: 0.30.0 21 | app.kubernetes.io/managed-by: Helm 22 | app.kubernetes.io/component: controller 23 | name: ingress-nginx 24 | namespace: ingress-nginx 25 | --- 26 | # Source: ingress-nginx/templates/controller-configmap.yaml 27 | apiVersion: v1 28 | kind: ConfigMap 29 | metadata: 30 | labels: 31 | helm.sh/chart: ingress-nginx-2.0.0 32 | app.kubernetes.io/name: ingress-nginx 33 | app.kubernetes.io/instance: ingress-nginx 34 | app.kubernetes.io/version: 0.30.0 35 | app.kubernetes.io/managed-by: Helm 36 | app.kubernetes.io/component: controller 37 | name: ingress-nginx-controller 38 | namespace: ingress-nginx 39 | data: 40 | --- 41 | # Source: ingress-nginx/templates/clusterrole.yaml 42 | apiVersion: rbac.authorization.k8s.io/v1 43 | kind: ClusterRole 44 | metadata: 45 | labels: 46 | helm.sh/chart: ingress-nginx-2.0.0 47 | app.kubernetes.io/name: ingress-nginx 48 | app.kubernetes.io/instance: ingress-nginx 49 | app.kubernetes.io/version: 0.30.0 50 | app.kubernetes.io/managed-by: Helm 51 | name: ingress-nginx 52 | namespace: ingress-nginx 53 | rules: 54 | - apiGroups: 55 | - '' 56 | resources: 57 | - configmaps 58 | - endpoints 59 | - nodes 60 | - pods 61 | - secrets 62 | verbs: 63 | - list 64 | - watch 65 | - apiGroups: 66 | - '' 67 | resources: 68 | - nodes 69 | verbs: 70 | - get 71 | - apiGroups: 72 | - '' 73 | resources: 74 | - services 75 | verbs: 76 | - get 77 | - list 78 | - update 79 | - watch 80 | - apiGroups: 81 | - extensions 82 | - networking.k8s.io # k8s 1.14+ 83 | 
resources: 84 | - ingresses 85 | verbs: 86 | - get 87 | - list 88 | - watch 89 | - apiGroups: 90 | - '' 91 | resources: 92 | - events 93 | verbs: 94 | - create 95 | - patch 96 | - apiGroups: 97 | - extensions 98 | - networking.k8s.io # k8s 1.14+ 99 | resources: 100 | - ingresses/status 101 | verbs: 102 | - update 103 | - apiGroups: 104 | - networking.k8s.io # k8s 1.14+ 105 | resources: 106 | - ingressclasses 107 | verbs: 108 | - get 109 | - list 110 | - watch 111 | --- 112 | # Source: ingress-nginx/templates/clusterrolebinding.yaml 113 | apiVersion: rbac.authorization.k8s.io/v1 114 | kind: ClusterRoleBinding 115 | metadata: 116 | labels: 117 | helm.sh/chart: ingress-nginx-2.0.0 118 | app.kubernetes.io/name: ingress-nginx 119 | app.kubernetes.io/instance: ingress-nginx 120 | app.kubernetes.io/version: 0.30.0 121 | app.kubernetes.io/managed-by: Helm 122 | name: ingress-nginx 123 | namespace: ingress-nginx 124 | roleRef: 125 | apiGroup: rbac.authorization.k8s.io 126 | kind: ClusterRole 127 | name: ingress-nginx 128 | subjects: 129 | - kind: ServiceAccount 130 | name: ingress-nginx 131 | namespace: ingress-nginx 132 | --- 133 | # Source: ingress-nginx/templates/controller-role.yaml 134 | apiVersion: rbac.authorization.k8s.io/v1 135 | kind: Role 136 | metadata: 137 | labels: 138 | helm.sh/chart: ingress-nginx-2.0.0 139 | app.kubernetes.io/name: ingress-nginx 140 | app.kubernetes.io/instance: ingress-nginx 141 | app.kubernetes.io/version: 0.30.0 142 | app.kubernetes.io/managed-by: Helm 143 | app.kubernetes.io/component: controller 144 | name: ingress-nginx 145 | namespace: ingress-nginx 146 | rules: 147 | - apiGroups: 148 | - '' 149 | resources: 150 | - namespaces 151 | verbs: 152 | - get 153 | - apiGroups: 154 | - '' 155 | resources: 156 | - configmaps 157 | - pods 158 | - secrets 159 | - endpoints 160 | verbs: 161 | - get 162 | - list 163 | - watch 164 | - apiGroups: 165 | - '' 166 | resources: 167 | - services 168 | verbs: 169 | - get 170 | - list 171 | - update 172 | 
- watch 173 | - apiGroups: 174 | - extensions 175 | - networking.k8s.io # k8s 1.14+ 176 | resources: 177 | - ingresses 178 | verbs: 179 | - get 180 | - list 181 | - watch 182 | - apiGroups: 183 | - extensions 184 | - networking.k8s.io # k8s 1.14+ 185 | resources: 186 | - ingresses/status 187 | verbs: 188 | - update 189 | - apiGroups: 190 | - networking.k8s.io # k8s 1.14+ 191 | resources: 192 | - ingressclasses 193 | verbs: 194 | - get 195 | - list 196 | - watch 197 | - apiGroups: 198 | - '' 199 | resources: 200 | - configmaps 201 | resourceNames: 202 | - ingress-controller-leader-nginx 203 | verbs: 204 | - get 205 | - update 206 | - apiGroups: 207 | - '' 208 | resources: 209 | - configmaps 210 | verbs: 211 | - create 212 | - apiGroups: 213 | - '' 214 | resources: 215 | - endpoints 216 | verbs: 217 | - create 218 | - get 219 | - update 220 | - apiGroups: 221 | - '' 222 | resources: 223 | - events 224 | verbs: 225 | - create 226 | - patch 227 | --- 228 | # Source: ingress-nginx/templates/controller-rolebinding.yaml 229 | apiVersion: rbac.authorization.k8s.io/v1 230 | kind: RoleBinding 231 | metadata: 232 | labels: 233 | helm.sh/chart: ingress-nginx-2.0.0 234 | app.kubernetes.io/name: ingress-nginx 235 | app.kubernetes.io/instance: ingress-nginx 236 | app.kubernetes.io/version: 0.30.0 237 | app.kubernetes.io/managed-by: Helm 238 | app.kubernetes.io/component: controller 239 | name: ingress-nginx 240 | namespace: ingress-nginx 241 | roleRef: 242 | apiGroup: rbac.authorization.k8s.io 243 | kind: Role 244 | name: ingress-nginx 245 | subjects: 246 | - kind: ServiceAccount 247 | name: ingress-nginx 248 | namespace: ingress-nginx 249 | --- 250 | # Source: ingress-nginx/templates/controller-service-webhook.yaml 251 | apiVersion: v1 252 | kind: Service 253 | metadata: 254 | labels: 255 | helm.sh/chart: ingress-nginx-2.0.0 256 | app.kubernetes.io/name: ingress-nginx 257 | app.kubernetes.io/instance: ingress-nginx 258 | app.kubernetes.io/version: 0.30.0 259 | 
app.kubernetes.io/managed-by: Helm 260 | app.kubernetes.io/component: controller 261 | name: ingress-nginx-controller-admission 262 | namespace: ingress-nginx 263 | spec: 264 | type: ClusterIP 265 | ports: 266 | - name: https-webhook 267 | port: 443 268 | targetPort: webhook 269 | selector: 270 | app.kubernetes.io/name: ingress-nginx 271 | app.kubernetes.io/instance: ingress-nginx 272 | app.kubernetes.io/component: controller 273 | --- 274 | # Source: ingress-nginx/templates/controller-service.yaml 275 | apiVersion: v1 276 | kind: Service 277 | metadata: 278 | annotations: 279 | service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp 280 | service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60' 281 | service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true' 282 | service.beta.kubernetes.io/aws-load-balancer-type: nlb 283 | labels: 284 | helm.sh/chart: ingress-nginx-2.0.0 285 | app.kubernetes.io/name: ingress-nginx 286 | app.kubernetes.io/instance: ingress-nginx 287 | app.kubernetes.io/version: 0.30.0 288 | app.kubernetes.io/managed-by: Helm 289 | app.kubernetes.io/component: controller 290 | name: ingress-nginx-controller 291 | namespace: ingress-nginx 292 | spec: 293 | type: LoadBalancer 294 | externalTrafficPolicy: Local 295 | ports: 296 | - name: http 297 | port: 80 298 | protocol: TCP 299 | targetPort: http 300 | - name: https 301 | port: 443 302 | protocol: TCP 303 | targetPort: https 304 | selector: 305 | app.kubernetes.io/name: ingress-nginx 306 | app.kubernetes.io/instance: ingress-nginx 307 | app.kubernetes.io/component: controller 308 | --- 309 | # Source: ingress-nginx/templates/controller-deployment.yaml 310 | apiVersion: apps/v1 311 | kind: Deployment 312 | metadata: 313 | labels: 314 | helm.sh/chart: ingress-nginx-2.0.0 315 | app.kubernetes.io/name: ingress-nginx 316 | app.kubernetes.io/instance: ingress-nginx 317 | app.kubernetes.io/version: 0.30.0 318 | app.kubernetes.io/managed-by: Helm 319 
| app.kubernetes.io/component: controller 320 | name: ingress-nginx-controller 321 | namespace: ingress-nginx 322 | spec: 323 | selector: 324 | matchLabels: 325 | app.kubernetes.io/name: ingress-nginx 326 | app.kubernetes.io/instance: ingress-nginx 327 | app.kubernetes.io/component: controller 328 | revisionHistoryLimit: 10 329 | minReadySeconds: 0 330 | template: 331 | metadata: 332 | labels: 333 | app.kubernetes.io/name: ingress-nginx 334 | app.kubernetes.io/instance: ingress-nginx 335 | app.kubernetes.io/component: controller 336 | spec: 337 | dnsPolicy: ClusterFirst 338 | containers: 339 | - name: controller 340 | image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0 341 | imagePullPolicy: IfNotPresent 342 | lifecycle: 343 | preStop: 344 | exec: 345 | command: 346 | - /wait-shutdown 347 | args: 348 | - /nginx-ingress-controller 349 | - --publish-service=ingress-nginx/ingress-nginx-controller 350 | - --election-id=ingress-controller-leader 351 | - --ingress-class=nginx 352 | - --configmap=ingress-nginx/ingress-nginx-controller 353 | - --validating-webhook=:8443 354 | - --validating-webhook-certificate=/usr/local/certificates/cert 355 | - --validating-webhook-key=/usr/local/certificates/key 356 | securityContext: 357 | capabilities: 358 | drop: 359 | - ALL 360 | add: 361 | - NET_BIND_SERVICE 362 | runAsUser: 101 363 | allowPrivilegeEscalation: true 364 | env: 365 | - name: POD_NAME 366 | valueFrom: 367 | fieldRef: 368 | fieldPath: metadata.name 369 | - name: POD_NAMESPACE 370 | valueFrom: 371 | fieldRef: 372 | fieldPath: metadata.namespace 373 | livenessProbe: 374 | httpGet: 375 | path: /healthz 376 | port: 10254 377 | scheme: HTTP 378 | initialDelaySeconds: 10 379 | periodSeconds: 10 380 | timeoutSeconds: 1 381 | successThreshold: 1 382 | failureThreshold: 3 383 | readinessProbe: 384 | httpGet: 385 | path: /healthz 386 | port: 10254 387 | scheme: HTTP 388 | initialDelaySeconds: 10 389 | periodSeconds: 10 390 | timeoutSeconds: 1 391 | 
successThreshold: 1 392 | failureThreshold: 3 393 | ports: 394 | - name: http 395 | containerPort: 80 396 | protocol: TCP 397 | - name: https 398 | containerPort: 443 399 | protocol: TCP 400 | - name: webhook 401 | containerPort: 8443 402 | protocol: TCP 403 | volumeMounts: 404 | - name: webhook-cert 405 | mountPath: /usr/local/certificates/ 406 | readOnly: true 407 | resources: 408 | requests: 409 | cpu: 100m 410 | memory: 90Mi 411 | serviceAccountName: ingress-nginx 412 | terminationGracePeriodSeconds: 300 413 | volumes: 414 | - name: webhook-cert 415 | secret: 416 | secretName: ingress-nginx-admission 417 | --- 418 | # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml 419 | apiVersion: admissionregistration.k8s.io/v1beta1 420 | kind: ValidatingWebhookConfiguration 421 | metadata: 422 | labels: 423 | helm.sh/chart: ingress-nginx-2.0.0 424 | app.kubernetes.io/name: ingress-nginx 425 | app.kubernetes.io/instance: ingress-nginx 426 | app.kubernetes.io/version: 0.30.0 427 | app.kubernetes.io/managed-by: Helm 428 | app.kubernetes.io/component: admission-webhook 429 | name: ingress-nginx-admission 430 | namespace: ingress-nginx 431 | webhooks: 432 | - name: validate.nginx.ingress.kubernetes.io 433 | rules: 434 | - apiGroups: 435 | - extensions 436 | - networking.k8s.io 437 | apiVersions: 438 | - v1beta1 439 | operations: 440 | - CREATE 441 | - UPDATE 442 | resources: 443 | - ingresses 444 | failurePolicy: Fail 445 | clientConfig: 446 | service: 447 | namespace: ingress-nginx 448 | name: ingress-nginx-controller-admission 449 | path: /extensions/v1beta1/ingresses 450 | --- 451 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml 452 | apiVersion: rbac.authorization.k8s.io/v1 453 | kind: ClusterRole 454 | metadata: 455 | name: ingress-nginx-admission 456 | annotations: 457 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 458 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 459 | 
labels: 460 | helm.sh/chart: ingress-nginx-2.0.0 461 | app.kubernetes.io/name: ingress-nginx 462 | app.kubernetes.io/instance: ingress-nginx 463 | app.kubernetes.io/version: 0.30.0 464 | app.kubernetes.io/managed-by: Helm 465 | app.kubernetes.io/component: admission-webhook 466 | namespace: ingress-nginx 467 | rules: 468 | - apiGroups: 469 | - admissionregistration.k8s.io 470 | resources: 471 | - validatingwebhookconfigurations 472 | verbs: 473 | - get 474 | - update 475 | --- 476 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml 477 | apiVersion: rbac.authorization.k8s.io/v1 478 | kind: ClusterRoleBinding 479 | metadata: 480 | name: ingress-nginx-admission 481 | annotations: 482 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 483 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 484 | labels: 485 | helm.sh/chart: ingress-nginx-2.0.0 486 | app.kubernetes.io/name: ingress-nginx 487 | app.kubernetes.io/instance: ingress-nginx 488 | app.kubernetes.io/version: 0.30.0 489 | app.kubernetes.io/managed-by: Helm 490 | app.kubernetes.io/component: admission-webhook 491 | namespace: ingress-nginx 492 | roleRef: 493 | apiGroup: rbac.authorization.k8s.io 494 | kind: ClusterRole 495 | name: ingress-nginx-admission 496 | subjects: 497 | - kind: ServiceAccount 498 | name: ingress-nginx-admission 499 | namespace: ingress-nginx 500 | --- 501 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml 502 | apiVersion: batch/v1 503 | kind: Job 504 | metadata: 505 | name: ingress-nginx-admission-create 506 | annotations: 507 | helm.sh/hook: pre-install,pre-upgrade 508 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 509 | labels: 510 | helm.sh/chart: ingress-nginx-2.0.0 511 | app.kubernetes.io/name: ingress-nginx 512 | app.kubernetes.io/instance: ingress-nginx 513 | app.kubernetes.io/version: 0.30.0 514 | app.kubernetes.io/managed-by: Helm 515 | 
app.kubernetes.io/component: admission-webhook 516 | namespace: ingress-nginx 517 | spec: 518 | template: 519 | metadata: 520 | name: ingress-nginx-admission-create 521 | labels: 522 | helm.sh/chart: ingress-nginx-2.0.0 523 | app.kubernetes.io/name: ingress-nginx 524 | app.kubernetes.io/instance: ingress-nginx 525 | app.kubernetes.io/version: 0.30.0 526 | app.kubernetes.io/managed-by: Helm 527 | app.kubernetes.io/component: admission-webhook 528 | spec: 529 | containers: 530 | - name: create 531 | image: jettech/kube-webhook-certgen:v1.0.0 532 | imagePullPolicy: IfNotPresent 533 | args: 534 | - create 535 | - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.ingress-nginx.svc 536 | - --namespace=ingress-nginx 537 | - --secret-name=ingress-nginx-admission 538 | restartPolicy: OnFailure 539 | serviceAccountName: ingress-nginx-admission 540 | securityContext: 541 | runAsNonRoot: true 542 | runAsUser: 2000 543 | --- 544 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml 545 | apiVersion: batch/v1 546 | kind: Job 547 | metadata: 548 | name: ingress-nginx-admission-patch 549 | annotations: 550 | helm.sh/hook: post-install,post-upgrade 551 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 552 | labels: 553 | helm.sh/chart: ingress-nginx-2.0.0 554 | app.kubernetes.io/name: ingress-nginx 555 | app.kubernetes.io/instance: ingress-nginx 556 | app.kubernetes.io/version: 0.30.0 557 | app.kubernetes.io/managed-by: Helm 558 | app.kubernetes.io/component: admission-webhook 559 | namespace: ingress-nginx 560 | spec: 561 | template: 562 | metadata: 563 | name: ingress-nginx-admission-patch 564 | labels: 565 | helm.sh/chart: ingress-nginx-2.0.0 566 | app.kubernetes.io/name: ingress-nginx 567 | app.kubernetes.io/instance: ingress-nginx 568 | app.kubernetes.io/version: 0.30.0 569 | app.kubernetes.io/managed-by: Helm 570 | app.kubernetes.io/component: admission-webhook 571 | spec: 572 | containers: 573 | - 
name: patch 574 | image: jettech/kube-webhook-certgen:v1.0.0 575 | imagePullPolicy: IfNotPresent 576 | args: 577 | - patch 578 | - --webhook-name=ingress-nginx-admission 579 | - --namespace=ingress-nginx 580 | - --patch-mutating=false 581 | - --secret-name=ingress-nginx-admission 582 | - --patch-failure-policy=Fail 583 | restartPolicy: OnFailure 584 | serviceAccountName: ingress-nginx-admission 585 | securityContext: 586 | runAsNonRoot: true 587 | runAsUser: 2000 588 | --- 589 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml 590 | apiVersion: rbac.authorization.k8s.io/v1 591 | kind: Role 592 | metadata: 593 | name: ingress-nginx-admission 594 | annotations: 595 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 596 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 597 | labels: 598 | helm.sh/chart: ingress-nginx-2.0.0 599 | app.kubernetes.io/name: ingress-nginx 600 | app.kubernetes.io/instance: ingress-nginx 601 | app.kubernetes.io/version: 0.30.0 602 | app.kubernetes.io/managed-by: Helm 603 | app.kubernetes.io/component: admission-webhook 604 | namespace: ingress-nginx 605 | rules: 606 | - apiGroups: 607 | - '' 608 | resources: 609 | - secrets 610 | verbs: 611 | - get 612 | - create 613 | --- 614 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml 615 | apiVersion: rbac.authorization.k8s.io/v1 616 | kind: RoleBinding 617 | metadata: 618 | name: ingress-nginx-admission 619 | annotations: 620 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 621 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 622 | labels: 623 | helm.sh/chart: ingress-nginx-2.0.0 624 | app.kubernetes.io/name: ingress-nginx 625 | app.kubernetes.io/instance: ingress-nginx 626 | app.kubernetes.io/version: 0.30.0 627 | app.kubernetes.io/managed-by: Helm 628 | app.kubernetes.io/component: admission-webhook 629 | namespace: ingress-nginx 630 | roleRef: 631 | apiGroup: rbac.authorization.k8s.io 632 | 
kind: Role 633 | name: ingress-nginx-admission 634 | subjects: 635 | - kind: ServiceAccount 636 | name: ingress-nginx-admission 637 | namespace: ingress-nginx 638 | --- 639 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml 640 | apiVersion: v1 641 | kind: ServiceAccount 642 | metadata: 643 | name: ingress-nginx-admission 644 | annotations: 645 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade 646 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded 647 | labels: 648 | helm.sh/chart: ingress-nginx-2.0.0 649 | app.kubernetes.io/name: ingress-nginx 650 | app.kubernetes.io/instance: ingress-nginx 651 | app.kubernetes.io/version: 0.30.0 652 | app.kubernetes.io/managed-by: Helm 653 | app.kubernetes.io/component: admission-webhook 654 | namespace: ingress-nginx 655 | -------------------------------------------------------------------------------- /chapter-5/service-clusterip.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Service 4 | metadata: 5 | name: clusterip-service 6 | labels: 7 | app: app 8 | spec: 9 | selector: 10 | app: app 11 | ports: 12 | - protocol: TCP 13 | port: 80 14 | targetPort: 8080 15 | -------------------------------------------------------------------------------- /chapter-5/service-external.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: external-service 5 | spec: 6 | type: ExternalName 7 | externalName: github.com -------------------------------------------------------------------------------- /chapter-5/service-headless.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: headless-service 5 | spec: 6 | clusterIP: None 7 | selector: 8 | app: app 9 | ports: 10 | - protocol: TCP 11 | port: 80 12 | targetPort: 8080 13 | 
-------------------------------------------------------------------------------- /chapter-5/services-loadbalancer.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: loadbalancer-service 5 | labels: 6 | app: app 7 | spec: 8 | selector: 9 | app: app 10 | ports: 11 | - name: service-port 12 | protocol: TCP 13 | port: 80 14 | targetPort: 8080 15 | type: LoadBalancer -------------------------------------------------------------------------------- /chapter-5/services-nodeport.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nodeport-service 5 | spec: 6 | selector: 7 | app: app 8 | type: NodePort 9 | ports: 10 | - name: echo 11 | port: 8080 12 | targetPort: 8080 13 | nodePort: 30040 14 | protocol: TCP -------------------------------------------------------------------------------- /chapter-5/web-server.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "database/sql" 5 | "fmt" 6 | _ "github.com/lib/pq" 7 | "log" 8 | "net/http" 9 | "os" 10 | ) 11 | 12 | func hello(w http.ResponseWriter, _ *http.Request) { 13 | fmt.Fprintf(w, "Hello") 14 | } 15 | 16 | func healthz(w http.ResponseWriter, _ *http.Request) { 17 | fmt.Fprintf(w, "Healthy") 18 | } 19 | 20 | func host(w http.ResponseWriter, _ *http.Request) { 21 | node := os.Getenv("MY_NODE_NAME") 22 | podIP := os.Getenv("MY_POD_IP") 23 | 24 | fmt.Fprintf(w,"NODE: %v, POD IP:%v",node, podIP) 25 | } 26 | 27 | func dataHandler(w http.ResponseWriter, _ *http.Request) { 28 | db := CreateCon() 29 | 30 | err := db.Ping() 31 | if err != nil { 32 | http.Error(w, err.Error(), http.StatusInternalServerError) 33 | } else { 34 | fmt.Fprintf(w, "Database Connected") 35 | } 36 | } 37 | 38 | func main() { 39 | http.HandleFunc("/", hello) 40 | 41 | http.HandleFunc("/healthz", 
healthz) 42 | 43 | http.HandleFunc("/data", dataHandler) 44 | 45 | http.HandleFunc("/host", host) 46 | 47 | log.Fatal(http.ListenAndServe("0.0.0.0:8080", nil)) 48 | } 49 | 50 | // CreateCon creates the SQL database connection. 51 | func CreateCon() *sql.DB { 52 | user := os.Getenv("DB_USER") 53 | pass := os.Getenv("DB_PASSWORD") 54 | host := os.Getenv("DB_HOST") 55 | port := os.Getenv("DB_PORT") 56 | 57 | connStr := fmt.Sprintf("postgres://%v:%v@%v:%v?sslmode=disable", user, pass, host, port) 58 | 59 | fmt.Printf("Database Connection String: %v \n", connStr) 60 | 61 | db, err := sql.Open("postgres", connStr) 62 | 63 | if err != nil { 64 | log.Fatalf("ERROR: %v", err) 65 | } 66 | 67 | return db 68 | } 69 | -------------------------------------------------------------------------------- /chapter-5/web.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: app 5 | spec: 6 | selector: 7 | matchLabels: 8 | app: app 9 | replicas: 1 10 | template: 11 | metadata: 12 | labels: 13 | app: app 14 | spec: 15 | containers: 16 | - name: go-web 17 | image: strongjz/go-web:v0.0.6 18 | ports: 19 | - containerPort: 8080 20 | livenessProbe: 21 | httpGet: 22 | path: /healthz 23 | port: 8080 24 | initialDelaySeconds: 5 25 | periodSeconds: 5 26 | readinessProbe: 27 | httpGet: 28 | path: / 29 | port: 8080 30 | initialDelaySeconds: 5 31 | periodSeconds: 5 32 | env: 33 | - name: MY_NODE_NAME 34 | valueFrom: 35 | fieldRef: 36 | fieldPath: spec.nodeName 37 | - name: MY_POD_NAME 38 | valueFrom: 39 | fieldRef: 40 | fieldPath: metadata.name 41 | - name: MY_POD_NAMESPACE 42 | valueFrom: 43 | fieldRef: 44 | fieldPath: metadata.namespace 45 | - name: MY_POD_IP 46 | valueFrom: 47 | fieldRef: 48 | fieldPath: status.podIP 49 | - name: MY_POD_SERVICE_ACCOUNT 50 | valueFrom: 51 | fieldRef: 52 | fieldPath: spec.serviceAccountName 53 | - name: DB_HOST 54 | value: "postgres" 55 | - name: DB_USER 56 | value: "postgres" 57 | - name: 
DB_PASSWORD 58 | value: "mysecretpassword" 59 | - name: DB_PORT 60 | value: "5432" 61 | -------------------------------------------------------------------------------- /chapter-6/AWS/README.adoc: -------------------------------------------------------------------------------- 1 | ==== Deploying an Application on AWS EKS Cluster 2 | 3 | Let's walk through deploying an EKS cluster to manage our Golang web server. 4 | 5 | 1. Deploy EKS Cluster 6 | 2. Deploy Web Server Application and LoadBalancer 7 | 3. Verify 8 | 4. Clean Up 9 | 10 | ===== Deploy EKS Cluster 11 | 12 | Let's deploy an EKS cluster running version 1.20, the latest version EKS supported at the time of writing. 13 | 14 | [source,bash] 15 | ---- 16 | export CLUSTER_NAME=eks-demo 17 | eksctl create cluster -N 3 --name ${CLUSTER_NAME} --version=1.20 18 | 2021-06-26 15:21:51 [ℹ] eksctl version 0.54.0 19 | 2021-06-26 15:21:51 [ℹ] using region us-west-2 20 | 2021-06-26 15:21:52 [ℹ] setting availability zones to [us-west-2b us-west-2a us-west-2c] 21 | 2021-06-26 15:21:52 [ℹ] subnets for us-west-2b - public:192.168.0.0/19 private:192.168.96.0/19 22 | 2021-06-26 15:21:52 [ℹ] subnets for us-west-2a - public:192.168.32.0/19 private:192.168.128.0/19 23 | 2021-06-26 15:21:52 [ℹ] subnets for us-west-2c - public:192.168.64.0/19 private:192.168.160.0/19 24 | 2021-06-26 15:21:52 [ℹ] nodegroup "ng-90b7a9a5" will use "ami-0a1abe779ecfc6a3e" [AmazonLinux2/1.20] 25 | 2021-06-26 15:21:52 [ℹ] using Kubernetes version 1.20 26 | 2021-06-26 15:21:52 [ℹ] creating EKS cluster "eks-demo" in "us-west-2" region with un-managed nodes 27 | 2021-06-26 15:21:52 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup 28 | 2021-06-26 15:21:52 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=eks-demo' 29 | 2021-06-26 15:21:52 [ℹ] CloudWatch logging will not be enabled for cluster "eks-demo" in "us-west-2" 30 | 2021-06-26 15:21:52 [ℹ] you can enable it
with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=eks-demo' 31 | 2021-06-26 15:21:52 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eks-demo" in "us-west-2" 32 | 2021-06-26 15:21:52 [ℹ] 2 sequential tasks: { create cluster control plane "eks-demo", 3 sequential sub-tasks: { wait for control plane to become ready, 1 task: { create addons }, create nodegroup "ng-90b7a9a5" } } 33 | 2021-06-26 15:21:52 [ℹ] building cluster stack "eksctl-eks-demo-cluster" 34 | 2021-06-26 15:21:54 [ℹ] deploying stack "eksctl-eks-demo-cluster" 35 | 2021-06-26 15:22:24 [ℹ] waiting for CloudFormation stack "eksctl-eks-demo-cluster" 36 | 37 | 2021-06-26 15:39:04 [ℹ] building nodegroup stack "eksctl-eks-demo-nodegroup-ng-90b7a9a5" 38 | 2021-06-26 15:39:04 [ℹ] --nodes-min=3 was set automatically for nodegroup ng-90b7a9a5 39 | 2021-06-26 15:39:06 [ℹ] deploying stack "eksctl-eks-demo-nodegroup-ng-90b7a9a5" 40 | 2021-06-26 15:39:06 [ℹ] waiting for CloudFormation stack "eksctl-eks-demo-nodegroup-ng-90b7a9a5" 41 | 42 | 2021-06-26 15:42:44 [ℹ] waiting for the control plane availability... 
43 | 2021-06-26 15:42:44 [✔] saved kubeconfig as "/Users/strongjz/.kube/config" 44 | 2021-06-26 15:42:44 [ℹ] no tasks 45 | 2021-06-26 15:42:44 [✔] all EKS cluster resources for "eks-demo" have been created 46 | 2021-06-26 15:42:45 [ℹ] adding identity "arn:aws:iam::1234567890:role/eksctl-eks-demo-nodegroup-ng-9-NodeInstanceRole-TLKVDDVTW2TZ" to auth ConfigMap 47 | 2021-06-26 15:42:45 [ℹ] nodegroup "ng-90b7a9a5" has 0 node(s) 48 | 2021-06-26 15:42:45 [ℹ] waiting for at least 3 node(s) to become ready in "ng-90b7a9a5" 49 | 2021-06-26 15:43:23 [ℹ] nodegroup "ng-90b7a9a5" has 3 node(s) 50 | 2021-06-26 15:43:23 [ℹ] node "ip-192-168-31-17.us-west-2.compute.internal" is ready 51 | 2021-06-26 15:43:23 [ℹ] node "ip-192-168-58-247.us-west-2.compute.internal" is ready 52 | 2021-06-26 15:43:23 [ℹ] node "ip-192-168-85-104.us-west-2.compute.internal" is ready 53 | 2021-06-26 15:45:37 [ℹ] kubectl command should work with "/Users/strongjz/.kube/config", try 'kubectl get nodes' 54 | 2021-06-26 15:45:37 [✔] EKS cluster "eks-demo" in "us-west-2" region is ready 55 | 56 | ---- 57 | 58 | In the output we can see that eksctl created a nodegroup, eksctl-eks-demo-nodegroup-ng-90b7a9a5, with 3 nodes: 59 | 60 | [source] 61 | ---- 62 | ip-192-168-31-17.us-west-2.compute.internal 63 | ip-192-168-58-247.us-west-2.compute.internal 64 | ip-192-168-85-104.us-west-2.compute.internal 65 | ---- 66 | 67 | All inside a VPC with 3 public and 3 private subnets across 3 AZs: 68 | 69 | [source] 70 | ---- 71 | public:192.168.0.0/19 private:192.168.96.0/19 72 | public:192.168.32.0/19 private:192.168.128.0/19 73 | public:192.168.64.0/19 private:192.168.160.0/19 74 | ---- 75 | 76 | [WARNING] 77 | We used the default settings of eksctl, and it deployed the Kubernetes API as a public endpoint: {publicAccess=true, 78 | privateAccess=false}. 79 | 80 | Now we can deploy our Golang web application in the cluster and expose it with a LoadBalancer service.
81 | 82 | ===== Deploy Test Application 83 | 84 | You can deploy the manifests individually or all together. dnsutils.yml is our dnsutils testing pod, database.yml is the 85 | Postgres database for pod connectivity testing, and web.yml is the Golang web server and the LoadBalancer service. 86 | 87 | [source,bash] 88 | ---- 89 | kubectl apply -f dnsutils.yml,database.yml,web.yml 90 | ---- 91 | 92 | Let's run a `kubectl get pods` to see if all the pods are running fine. 93 | 94 | [source,bash] 95 | ---- 96 | kubectl get pods -o wide 97 | NAME READY STATUS RESTARTS AGE IP NODE 98 | app-6bf97c555d-5mzfb 1/1 Running 0 9m16s 192.168.15.108 ip-192-168-0-94.us-west-2.compute.internal 99 | app-6bf97c555d-76fgm 1/1 Running 0 9m16s 192.168.52.42 ip-192-168-63-151.us-west-2.compute.internal 100 | app-6bf97c555d-gw4k9 1/1 Running 0 9m16s 192.168.88.61 ip-192-168-91-46.us-west-2.compute.internal 101 | dnsutils 1/1 Running 0 9m17s 192.168.57.174 ip-192-168-63-151.us-west-2.compute.internal 102 | postgres-0 1/1 Running 0 9m17s 192.168.70.170 ip-192-168-91-46.us-west-2.compute.internal 103 | ---- 104 | 105 | and check that the LoadBalancer service looks good. 106 | 107 | [source,bash] 108 | ---- 109 | kubectl get svc clusterip-service 110 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 111 | clusterip-service LoadBalancer 10.100.159.28 a76d1c69125e543e5b67c899f5e45284-593302470.us-west-2.elb.amazonaws.com 80:32671/TCP 29m 112 | ---- 113 | 114 | The Service has endpoints as well.
115 | 116 | [source,bash] 117 | ---- 118 | kubectl get endpoints clusterip-service 119 | NAME ENDPOINTS AGE 120 | clusterip-service 192.168.15.108:8080,192.168.52.42:8080,192.168.88.61:8080 58m 121 | ---- 122 | 123 | We should verify the application is reachable inside the cluster with the ClusterIP and 124 | port, `10.100.159.28:80`; the service name and port, `clusterip-service:80`; and finally the pod IP and port, `192.168.15.108:8080`. 125 | 126 | [source,bash] 127 | ---- 128 | kubectl exec dnsutils -- wget -qO- 10.100.159.28:80/data 129 | Database Connected 130 | 131 | kubectl exec dnsutils -- wget -qO- 10.100.159.28:80/host 132 | NODE: ip-192-168-63-151.us-west-2.compute.internal, POD IP:192.168.52.42 133 | 134 | kubectl exec dnsutils -- wget -qO- clusterip-service:80/host 135 | NODE: ip-192-168-91-46.us-west-2.compute.internal, POD IP:192.168.88.61 136 | 137 | kubectl exec dnsutils -- wget -qO- clusterip-service:80/data 138 | Database Connected 139 | 140 | kubectl exec dnsutils -- wget -qO- 192.168.15.108:8080/data 141 | Database Connected 142 | 143 | kubectl exec dnsutils -- wget -qO- 192.168.15.108:8080/host 144 | NODE: ip-192-168-0-94.us-west-2.compute.internal, POD IP:192.168.15.108 145 | 146 | ---- 147 | 148 | The database port is reachable from dnsutils with the pod IP and port, `192.168.70.170:5432`, and the service name and port, `postgres:5432`. 149 | 150 | [source,bash] 151 | ---- 152 | kubectl exec dnsutils -- nc -z -vv -w 5 192.168.70.170 5432 153 | 192.168.70.170 (192.168.70.170:5432) open 154 | sent 0, rcvd 0 155 | 156 | kubectl exec dnsutils -- nc -z -vv -w 5 postgres 5432 157 | postgres (10.100.106.134:5432) open 158 | sent 0, rcvd 0 159 | 160 | ---- 161 | 162 | The application inside the cluster is up and running. Let's test it from outside the cluster. 163 | 164 | ===== Verify LoadBalancer Services for Golang Web Server 165 | 166 | kubectl will return all the information we need to test: the cluster IP, the external IP, and all the ports.
167 | 168 | [source,bash] 169 | ---- 170 | kubectl get svc clusterip-service 171 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 172 | clusterip-service LoadBalancer 10.100.159.28 a76d1c69125e543e5b67c899f5e45284-593302470.us-west-2.elb.amazonaws.com 80:32671/TCP 29m 173 | 174 | ---- 175 | 176 | Using the external IP of the LoadBalancer: 177 | 178 | [source,bash] 179 | ---- 180 | wget -qO- a76d1c69125e543e5b67c899f5e45284-593302470.us-west-2.elb.amazonaws.com/data 181 | Database Connected 182 | 183 | ---- 184 | 185 | Let's test out the LoadBalancer and make multiple requests to our backends. 186 | 187 | [source,bash] 188 | ---- 189 | wget -qO- a76d1c69125e543e5b67c899f5e45284-593302470.us-west-2.elb.amazonaws.com/host 190 | NODE: ip-192-168-63-151.us-west-2.compute.internal, POD IP:192.168.52.42 191 | 192 | wget -qO- a76d1c69125e543e5b67c899f5e45284-593302470.us-west-2.elb.amazonaws.com/host 193 | NODE: ip-192-168-91-46.us-west-2.compute.internal, POD IP:192.168.88.61 194 | 195 | wget -qO- a76d1c69125e543e5b67c899f5e45284-593302470.us-west-2.elb.amazonaws.com/host 196 | NODE: ip-192-168-0-94.us-west-2.compute.internal, POD IP:192.168.15.108 197 | 198 | wget -qO- a76d1c69125e543e5b67c899f5e45284-593302470.us-west-2.elb.amazonaws.com/host 199 | NODE: ip-192-168-0-94.us-west-2.compute.internal, POD IP:192.168.15.108 200 | 201 | ---- 202 | 203 | Running `kubectl get pods -o wide` again will verify that our pod information matches the LoadBalancer requests.
204 | 205 | [source,bash] 206 | ---- 207 | kubectl get pods -o wide 208 | NAME READY STATUS RESTARTS AGE IP NODE 209 | app-6bf97c555d-5mzfb 1/1 Running 0 9m16s 192.168.15.108 ip-192-168-0-94.us-west-2.compute.internal 210 | app-6bf97c555d-76fgm 1/1 Running 0 9m16s 192.168.52.42 ip-192-168-63-151.us-west-2.compute.internal 211 | app-6bf97c555d-gw4k9 1/1 Running 0 9m16s 192.168.88.61 ip-192-168-91-46.us-west-2.compute.internal 212 | dnsutils 1/1 Running 0 9m17s 192.168.57.174 ip-192-168-63-151.us-west-2.compute.internal 213 | postgres-0 1/1 Running 0 9m17s 192.168.70.170 ip-192-168-91-46.us-west-2.compute.internal 214 | ---- 215 | 216 | We can also check the NodePort. Since dnsutils is running on an EC2 instance inside our VPC, it can do a DNS lookup on 217 | the private host, ip-192-168-0-94.us-west-2.compute.internal, and the `kubectl get service` command gave us the 218 | NodePort, 32671. 219 | 220 | [source,bash] 221 | ---- 222 | kubectl exec dnsutils -- wget -qO- ip-192-168-0-94.us-west-2.compute.internal:32671/host 223 | NODE: ip-192-168-0-94.us-west-2.compute.internal, POD IP:192.168.15.108 224 | ---- 225 | 226 | Everything seems to be running just fine, both externally and locally in our cluster. 227 | 228 | ==== Deploy ALB Ingress and Verify 229 | 230 | For some sections of the deployment, we will need to know the AWS account ID we are deploying to. Let's put that into 231 | an environment variable. To get your account ID you can run: 232 | 233 | [source,bash] 234 | ---- 235 | aws sts get-caller-identity 236 | { 237 | "UserId": "AIDA2RZMTHAQTEUI3Z537", 238 | "Account": "1234567890", 239 | "Arn": "arn:aws:iam::1234567890:user/eks" 240 | } 241 | 242 | export ACCOUNT_ID=1234567890 243 | ---- 244 | 245 | If it is not set up for the cluster already, we will have to set up an OIDC provider with the cluster. 246 | 247 | This step is needed to give IAM permissions to a pod running in the cluster using IAM Roles for Service Accounts (IRSA).
248 | 249 | [source,bash] 250 | ---- 251 | eksctl utils associate-iam-oidc-provider \ 252 | --region ${AWS_REGION} \ 253 | --cluster ${CLUSTER_NAME} \ 254 | --approve 255 | ---- 256 | 257 | For the service account role, we will need to create an IAM policy that determines the permissions for the ALB Controller in AWS. 258 | 259 | [source,bash] 260 | ---- 261 | aws iam create-policy \ 262 | --policy-name AWSLoadBalancerControllerIAMPolicy \ 263 | --policy-document file://iam_policy.json 264 | ---- 265 | 266 | Now we need to create the service account and attach it to the IAM role we created. 267 | 268 | [source,bash] 269 | ---- 270 | eksctl create iamserviceaccount \ 271 | > --cluster ${CLUSTER_NAME} \ 272 | > --namespace kube-system \ 273 | > --name aws-load-balancer-controller \ 274 | > --attach-policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy \ 275 | > --override-existing-serviceaccounts \ 276 | > --approve 277 | 2021-06-27 14:39:30 [ℹ] eksctl version 0.54.0 278 | 2021-06-27 14:39:30 [ℹ] using region us-west-2 279 | 2021-06-27 14:39:31 [ℹ] 1 iamserviceaccount (kube-system/aws-load-balancer-controller) was included (based on the include/exclude rules) 280 | 2021-06-27 14:39:31 [!]
metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set 281 | 2021-06-27 14:39:31 [ℹ] 1 task: { 2 sequential sub-tasks: { create IAM role for serviceaccount "kube-system/aws-load-balancer-controller", create serviceaccount "kube-system/aws-load-balancer-controller" } } 282 | 2021-06-27 14:39:31 [ℹ] building iamserviceaccount stack "eksctl-alb-ingress-3-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" 283 | 2021-06-27 14:39:31 [ℹ] deploying stack "eksctl-alb-ingress-3-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" 284 | 2021-06-27 14:39:31 [ℹ] waiting for CloudFormation stack "eksctl-eks-demo-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" 285 | 2021-06-27 14:39:48 [ℹ] waiting for CloudFormation stack "eksctl-eks-demo-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" 286 | 2021-06-27 14:40:05 [ℹ] waiting for CloudFormation stack "eksctl-eks-demo-addon-iamserviceaccount-kube-system-aws-load-balancer-controller" 287 | 2021-06-27 14:40:06 [ℹ] created serviceaccount "kube-system/aws-load-balancer-controller" 288 | ---- 289 | 290 | We can see all the details of the service account with: 291 | 292 | [source,bash] 293 | ---- 294 | kubectl get sa aws-load-balancer-controller -n kube-system -o yaml 295 | apiVersion: v1 296 | kind: ServiceAccount 297 | metadata: 298 | annotations: 299 | eks.amazonaws.com/role-arn: arn:aws:iam::1234567890:role/eksctl-eks-demo-addon-iamserviceaccount-Role1-RNXLL4UJ1NPV 300 | creationTimestamp: "2021-06-27T18:40:06Z" 301 | labels: 302 | app.kubernetes.io/managed-by: eksctl 303 | name: aws-load-balancer-controller 304 | namespace: kube-system 305 | resourceVersion: "16133" 306 | uid: 30281eb5-8edf-4840-bc94-f214c1102e4f 307 | secrets: 308 | - name: aws-load-balancer-controller-token-dtq48 309 | ---- 310 | 311 | The TargetGroupBinding Custom Resource Definition (CRD) allows the controller to bind Kubernetes 312 | service endpoints to an AWS TargetGroup. 313 | 314 | [source,bash] 315 | ---- 316 | kubectl apply -f crds.yml 317 | customresourcedefinition.apiextensions.k8s.io/ingressclassparams.elbv2.k8s.aws configured 318 | customresourcedefinition.apiextensions.k8s.io/targetgroupbindings.elbv2.k8s.aws configured 319 | ---- 320 | 321 | Now we're ready to deploy the ALB Controller with Helm. 322 | 323 | Set the version environment variable to deploy: 324 | [source,bash] 325 | ---- 326 | export ALB_LB_VERSION="v2.2.0" 327 | ---- 328 | 329 | Now deploy it: add the eks Helm repo, get the VPC ID the cluster is running in, and finally deploy via Helm. 330 | 331 | [source,bash] 332 | ---- 333 | helm repo add eks https://aws.github.io/eks-charts 334 | 335 | export VPC_ID=$(aws eks describe-cluster \ 336 | --name ${CLUSTER_NAME} \ 337 | --query "cluster.resourcesVpcConfig.vpcId" \ 338 | --output text) 339 | 340 | helm upgrade -i aws-load-balancer-controller \ 341 | eks/aws-load-balancer-controller \ 342 | -n kube-system \ 343 | --set clusterName=${CLUSTER_NAME} \ 344 | --set serviceAccount.create=false \ 345 | --set serviceAccount.name=aws-load-balancer-controller \ 346 | --set image.tag="${ALB_LB_VERSION}" \ 347 | --set region=${AWS_REGION} \ 348 | --set vpcId=${VPC_ID} 349 | 350 | Release "aws-load-balancer-controller" has been upgraded. Happy Helming! 351 | NAME: aws-load-balancer-controller 352 | LAST DEPLOYED: Sun Jun 27 14:43:06 2021 353 | NAMESPACE: kube-system 354 | STATUS: deployed 355 | REVISION: 2 356 | TEST SUITE: None 357 | NOTES: 358 | AWS Load Balancer controller installed!
359 | ---- 360 | 361 | We can watch the deploy logs here: 362 | 363 | [source,bash] 364 | ---- 365 | kubectl logs -n kube-system -f deploy/aws-load-balancer-controller 366 | ---- 367 | 368 | Now let's deploy our Ingress with the ALB. 369 | 370 | [source,bash] 371 | ---- 372 | kubectl apply -f alb-rules.yml 373 | ingress.networking.k8s.io/app configured 374 | ---- 375 | 376 | With the `kubectl describe ing app` output, we can see the ALB has been deployed. 377 | 378 | We can also see the ALB public DNS address, the rules for the instances, and the endpoints backing the service. 379 | 380 | [source,bash] 381 | ---- 382 | kubectl describe ing app 383 | Name: app 384 | Namespace: default 385 | Address: k8s-default-app-d5e5a26be4-2128411681.us-west-2.elb.amazonaws.com 386 | Default backend: default-http-backend:80 () 387 | Rules: 388 | Host Path Backends 389 | ---- ---- -------- 390 | * 391 | /data clusterip-service:80 (192.168.3.221:8080,192.168.44.165:8080,192.168.89.224:8080) 392 | /host clusterip-service:80 (192.168.3.221:8080,192.168.44.165:8080,192.168.89.224:8080) 393 | Annotations: alb.ingress.kubernetes.io/scheme: internet-facing 394 | kubernetes.io/ingress.class: alb 395 | Events: 396 | Type Reason Age From Message 397 | ---- ------ ---- ---- ------- 398 | Normal SuccessfullyReconciled 4m33s (x2 over 5m58s) ingress Successfully reconciled 399 | ---- 400 | 401 | Time to test our ALB! 402 | 403 | [source,bash] 404 | ---- 405 | wget -qO- k8s-default-app-d5e5a26be4-2128411681.us-west-2.elb.amazonaws.com/data 406 | Database Connected 407 | 408 | wget -qO- k8s-default-app-d5e5a26be4-2128411681.us-west-2.elb.amazonaws.com/host 409 | NODE: ip-192-168-63-151.us-west-2.compute.internal, POD IP:192.168.44.165 410 | ---- 411 | 412 | ===== Clean Up 413 | 414 | Once you are done working with EKS and testing, make sure to delete the application pods and the services to ensure 415 | that everything is deleted.
416 | 417 | [source,bash] 418 | ---- 419 | kubectl delete -f dnsutils.yml,database.yml,web.yml 420 | ---- 421 | 422 | Clean up the ALB. 423 | 424 | [source,bash] 425 | ---- 426 | kubectl delete -f alb-rules.yml 427 | ---- 428 | 429 | Remove the IAM policy for the ALB Controller. 430 | 431 | [source,bash] 432 | ---- 433 | aws iam delete-policy --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy 434 | ---- 435 | 436 | Verify there are no leftover EBS volumes from the PVCs for the test application. Delete any EBS volumes found for the 437 | PVCs for the Postgres test database. 438 | 439 | [source,bash] 440 | ---- 441 | aws ec2 describe-volumes --filters Name=tag:kubernetes.io/created-for/pv/name,Values=* --query "Volumes[].{ID:VolumeId}" 442 | ---- 443 | 444 | Verify there are no load balancers running, ALB or otherwise. 445 | 446 | [source,bash] 447 | ---- 448 | aws elbv2 describe-load-balancers --query "LoadBalancers[].LoadBalancerArn" 449 | ---- 450 | 451 | [source,bash] 452 | ---- 453 | aws elb describe-load-balancers --query "LoadBalancerDescriptions[].DNSName" 454 | ---- 455 | 456 | Let's make sure we delete the cluster, so you don't get charged for a cluster doing nothing!
457 | 458 | [source,bash] 459 | ---- 460 | eksctl delete cluster --name ${CLUSTER_NAME} 461 | ---- 462 | -------------------------------------------------------------------------------- /chapter-6/AWS/alb-rules.yml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: networking.k8s.io/v1 3 | kind: Ingress 4 | metadata: 5 | annotations: 6 | alb.ingress.kubernetes.io/scheme: internet-facing 7 | kubernetes.io/ingress.class: alb 8 | name: app 9 | spec: 10 | rules: 11 | - http: 12 | paths: 13 | - path: /data 14 | pathType: Exact 15 | backend: 16 | service: 17 | name: clusterip-service 18 | port: 19 | number: 80 20 | - path: /host 21 | pathType: Exact 22 | backend: 23 | service: 24 | name: clusterip-service 25 | port: 26 | number: 80 -------------------------------------------------------------------------------- /chapter-6/AWS/crds.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apiextensions.k8s.io/v1 2 | kind: CustomResourceDefinition 3 | metadata: 4 | annotations: 5 | controller-gen.kubebuilder.io/version: v0.5.0 6 | creationTimestamp: null 7 | name: ingressclassparams.elbv2.k8s.aws 8 | spec: 9 | group: elbv2.k8s.aws 10 | names: 11 | kind: IngressClassParams 12 | listKind: IngressClassParamsList 13 | plural: ingressclassparams 14 | singular: ingressclassparams 15 | scope: Cluster 16 | versions: 17 | - additionalPrinterColumns: 18 | - description: The Ingress Group name 19 | jsonPath: .spec.group.name 20 | name: GROUP-NAME 21 | type: string 22 | - description: The AWS Load Balancer scheme 23 | jsonPath: .spec.scheme 24 | name: SCHEME 25 | type: string 26 | - description: The AWS Load Balancer ipAddressType 27 | jsonPath: .spec.ipAddressType 28 | name: IP-ADDRESS-TYPE 29 | type: string 30 | - jsonPath: .metadata.creationTimestamp 31 | name: AGE 32 | type: date 33 | name: v1beta1 34 | schema: 35 | openAPIV3Schema: 36 | description: IngressClassParams is the Schema 
for the IngressClassParams API 37 | properties: 38 | apiVersion: 39 | description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' 40 | type: string 41 | kind: 42 | description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 43 | type: string 44 | metadata: 45 | type: object 46 | spec: 47 | description: IngressClassParamsSpec defines the desired state of IngressClassParams 48 | properties: 49 | group: 50 | description: Group defines the IngressGroup for all Ingresses that belong to IngressClass with this IngressClassParams. 51 | properties: 52 | name: 53 | description: Name is the name of IngressGroup. 54 | type: string 55 | required: 56 | - name 57 | type: object 58 | ipAddressType: 59 | description: IPAddressType defines the ip address type for all Ingresses that belong to IngressClass with this IngressClassParams. 60 | enum: 61 | - ipv4 62 | - dualstack 63 | type: string 64 | namespaceSelector: 65 | description: NamespaceSelector restrict the namespaces of Ingresses that are allowed to specify the IngressClass with this IngressClassParams. * if absent or present but empty, it selects all namespaces. 66 | properties: 67 | matchExpressions: 68 | description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 69 | items: 70 | description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
71 | properties: 72 | key: 73 | description: key is the label key that the selector applies to. 74 | type: string 75 | operator: 76 | description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. 77 | type: string 78 | values: 79 | description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 80 | items: 81 | type: string 82 | type: array 83 | required: 84 | - key 85 | - operator 86 | type: object 87 | type: array 88 | matchLabels: 89 | additionalProperties: 90 | type: string 91 | description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 92 | type: object 93 | type: object 94 | scheme: 95 | description: Scheme defines the scheme for all Ingresses that belong to IngressClass with this IngressClassParams. 96 | enum: 97 | - internal 98 | - internet-facing 99 | type: string 100 | tags: 101 | description: Tags defines list of Tags on AWS resources provisioned for Ingresses that belong to IngressClass with this IngressClassParams. 102 | items: 103 | description: Tag defines a AWS Tag on resources. 104 | properties: 105 | key: 106 | description: The key of the tag. 107 | type: string 108 | value: 109 | description: The value of the tag. 
110 | type: string 111 | required: 112 | - key 113 | - value 114 | type: object 115 | type: array 116 | type: object 117 | type: object 118 | served: true 119 | storage: true 120 | subresources: {} 121 | status: 122 | acceptedNames: 123 | kind: "" 124 | plural: "" 125 | conditions: [] 126 | storedVersions: [] 127 | --- 128 | apiVersion: apiextensions.k8s.io/v1 129 | kind: CustomResourceDefinition 130 | metadata: 131 | annotations: 132 | controller-gen.kubebuilder.io/version: v0.5.0 133 | creationTimestamp: null 134 | name: targetgroupbindings.elbv2.k8s.aws 135 | spec: 136 | group: elbv2.k8s.aws 137 | names: 138 | kind: TargetGroupBinding 139 | listKind: TargetGroupBindingList 140 | plural: targetgroupbindings 141 | singular: targetgroupbinding 142 | scope: Namespaced 143 | versions: 144 | - additionalPrinterColumns: 145 | - description: The Kubernetes Service's name 146 | jsonPath: .spec.serviceRef.name 147 | name: SERVICE-NAME 148 | type: string 149 | - description: The Kubernetes Service's port 150 | jsonPath: .spec.serviceRef.port 151 | name: SERVICE-PORT 152 | type: string 153 | - description: The AWS TargetGroup's TargetType 154 | jsonPath: .spec.targetType 155 | name: TARGET-TYPE 156 | type: string 157 | - description: The AWS TargetGroup's Amazon Resource Name 158 | jsonPath: .spec.targetGroupARN 159 | name: ARN 160 | priority: 1 161 | type: string 162 | - jsonPath: .metadata.creationTimestamp 163 | name: AGE 164 | type: date 165 | name: v1alpha1 166 | schema: 167 | openAPIV3Schema: 168 | description: TargetGroupBinding is the Schema for the TargetGroupBinding API 169 | properties: 170 | apiVersion: 171 | description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' 172 | type: string 173 | kind: 174 | description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 175 | type: string 176 | metadata: 177 | type: object 178 | spec: 179 | description: TargetGroupBindingSpec defines the desired state of TargetGroupBinding 180 | properties: 181 | networking: 182 | description: networking provides the networking setup for ELBV2 LoadBalancer to access targets in TargetGroup. 183 | properties: 184 | ingress: 185 | description: List of ingress rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. 186 | items: 187 | properties: 188 | from: 189 | description: List of peers which should be able to access the targets in TargetGroup. At least one NetworkingPeer should be specified. 190 | items: 191 | description: NetworkingPeer defines the source/destination peer for networking rules. 192 | properties: 193 | ipBlock: 194 | description: IPBlock defines an IPBlock peer. If specified, none of the other fields can be set. 195 | properties: 196 | cidr: 197 | description: CIDR is the network CIDR. Both IPV4 or IPV6 CIDR are accepted. 198 | type: string 199 | required: 200 | - cidr 201 | type: object 202 | securityGroup: 203 | description: SecurityGroup defines a SecurityGroup peer. If specified, none of the other fields can be set. 204 | properties: 205 | groupID: 206 | description: GroupID is the EC2 SecurityGroupID. 207 | type: string 208 | required: 209 | - groupID 210 | type: object 211 | type: object 212 | type: array 213 | ports: 214 | description: List of ports which should be made accessible on the targets in TargetGroup. 
If ports is empty or unspecified, it defaults to all ports with TCP. 215 | items: 216 | properties: 217 | port: 218 | anyOf: 219 | - type: integer 220 | - type: string 221 | description: The port which traffic must match. When NodePort endpoints(instance TargetType) is used, this must be a numerical port. When Port endpoints(ip TargetType) is used, this can be either numerical or named port on pods. if port is unspecified, it defaults to all ports. 222 | x-kubernetes-int-or-string: true 223 | protocol: 224 | description: The protocol which traffic must match. If protocol is unspecified, it defaults to TCP. 225 | enum: 226 | - TCP 227 | - UDP 228 | type: string 229 | type: object 230 | type: array 231 | required: 232 | - from 233 | - ports 234 | type: object 235 | type: array 236 | type: object 237 | serviceRef: 238 | description: serviceRef is a reference to a Kubernetes Service and ServicePort. 239 | properties: 240 | name: 241 | description: Name is the name of the Service. 242 | type: string 243 | port: 244 | anyOf: 245 | - type: integer 246 | - type: string 247 | description: Port is the port of the ServicePort. 248 | x-kubernetes-int-or-string: true 249 | required: 250 | - name 251 | - port 252 | type: object 253 | targetGroupARN: 254 | description: targetGroupARN is the Amazon Resource Name (ARN) for the TargetGroup. 255 | type: string 256 | targetType: 257 | description: targetType is the TargetType of TargetGroup. If unspecified, it will be automatically inferred. 258 | enum: 259 | - instance 260 | - ip 261 | type: string 262 | required: 263 | - serviceRef 264 | - targetGroupARN 265 | type: object 266 | status: 267 | description: TargetGroupBindingStatus defines the observed state of TargetGroupBinding 268 | properties: 269 | observedGeneration: 270 | description: The generation observed by the TargetGroupBinding controller. 
271 | format: int64 272 | type: integer 273 | type: object 274 | type: object 275 | served: true 276 | storage: false 277 | subresources: 278 | status: {} 279 | - additionalPrinterColumns: 280 | - description: The Kubernetes Service's name 281 | jsonPath: .spec.serviceRef.name 282 | name: SERVICE-NAME 283 | type: string 284 | - description: The Kubernetes Service's port 285 | jsonPath: .spec.serviceRef.port 286 | name: SERVICE-PORT 287 | type: string 288 | - description: The AWS TargetGroup's TargetType 289 | jsonPath: .spec.targetType 290 | name: TARGET-TYPE 291 | type: string 292 | - description: The AWS TargetGroup's Amazon Resource Name 293 | jsonPath: .spec.targetGroupARN 294 | name: ARN 295 | priority: 1 296 | type: string 297 | - jsonPath: .metadata.creationTimestamp 298 | name: AGE 299 | type: date 300 | name: v1beta1 301 | schema: 302 | openAPIV3Schema: 303 | description: TargetGroupBinding is the Schema for the TargetGroupBinding API 304 | properties: 305 | apiVersion: 306 | description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' 307 | type: string 308 | kind: 309 | description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' 310 | type: string 311 | metadata: 312 | type: object 313 | spec: 314 | description: TargetGroupBindingSpec defines the desired state of TargetGroupBinding 315 | properties: 316 | networking: 317 | description: networking defines the networking rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. 
318 | properties: 319 | ingress: 320 | description: List of ingress rules to allow ELBV2 LoadBalancer to access targets in TargetGroup. 321 | items: 322 | description: NetworkingIngressRule defines a particular set of traffic that is allowed to access TargetGroup's targets. 323 | properties: 324 | from: 325 | description: List of peers which should be able to access the targets in TargetGroup. At least one NetworkingPeer should be specified. 326 | items: 327 | description: NetworkingPeer defines the source/destination peer for networking rules. 328 | properties: 329 | ipBlock: 330 | description: IPBlock defines an IPBlock peer. If specified, none of the other fields can be set. 331 | properties: 332 | cidr: 333 | description: CIDR is the network CIDR. Both IPV4 or IPV6 CIDR are accepted. 334 | type: string 335 | required: 336 | - cidr 337 | type: object 338 | securityGroup: 339 | description: SecurityGroup defines a SecurityGroup peer. If specified, none of the other fields can be set. 340 | properties: 341 | groupID: 342 | description: GroupID is the EC2 SecurityGroupID. 343 | type: string 344 | required: 345 | - groupID 346 | type: object 347 | type: object 348 | type: array 349 | ports: 350 | description: List of ports which should be made accessible on the targets in TargetGroup. If ports is empty or unspecified, it defaults to all ports with TCP. 351 | items: 352 | description: NetworkingPort defines the port and protocol for networking rules. 353 | properties: 354 | port: 355 | anyOf: 356 | - type: integer 357 | - type: string 358 | description: The port which traffic must match. When NodePort endpoints(instance TargetType) is used, this must be a numerical port. When Port endpoints(ip TargetType) is used, this can be either numerical or named port on pods. if port is unspecified, it defaults to all ports. 359 | x-kubernetes-int-or-string: true 360 | protocol: 361 | description: The protocol which traffic must match. 
If protocol is unspecified, it defaults to TCP. 362 | enum: 363 | - TCP 364 | - UDP 365 | type: string 366 | type: object 367 | type: array 368 | required: 369 | - from 370 | - ports 371 | type: object 372 | type: array 373 | type: object 374 | nodeSelector: 375 | description: node selector for instance type target groups to only register certain nodes 376 | properties: 377 | matchExpressions: 378 | description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 379 | items: 380 | description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 381 | properties: 382 | key: 383 | description: key is the label key that the selector applies to. 384 | type: string 385 | operator: 386 | description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. 387 | type: string 388 | values: 389 | description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 390 | items: 391 | type: string 392 | type: array 393 | required: 394 | - key 395 | - operator 396 | type: object 397 | type: array 398 | matchLabels: 399 | additionalProperties: 400 | type: string 401 | description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 402 | type: object 403 | type: object 404 | serviceRef: 405 | description: serviceRef is a reference to a Kubernetes Service and ServicePort. 406 | properties: 407 | name: 408 | description: Name is the name of the Service. 
409 | type: string 410 | port: 411 | anyOf: 412 | - type: integer 413 | - type: string 414 | description: Port is the port of the ServicePort. 415 | x-kubernetes-int-or-string: true 416 | required: 417 | - name 418 | - port 419 | type: object 420 | targetGroupARN: 421 | description: targetGroupARN is the Amazon Resource Name (ARN) for the TargetGroup. 422 | minLength: 1 423 | type: string 424 | targetType: 425 | description: targetType is the TargetType of TargetGroup. If unspecified, it will be automatically inferred. 426 | enum: 427 | - instance 428 | - ip 429 | type: string 430 | required: 431 | - serviceRef 432 | - targetGroupARN 433 | type: object 434 | status: 435 | description: TargetGroupBindingStatus defines the observed state of TargetGroupBinding 436 | properties: 437 | observedGeneration: 438 | description: The generation observed by the TargetGroupBinding controller. 439 | format: int64 440 | type: integer 441 | type: object 442 | type: object 443 | served: true 444 | storage: true 445 | subresources: 446 | status: {} 447 | status: 448 | acceptedNames: 449 | kind: "" 450 | plural: "" 451 | conditions: [] 452 | storedVersions: [] 453 | -------------------------------------------------------------------------------- /chapter-6/AWS/database.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: postgres 5 | labels: 6 | app: postgres 7 | spec: 8 | ports: 9 | - port: 5432 10 | name: postgres 11 | selector: 12 | app: postgres 13 | --- 14 | apiVersion: v1 15 | kind: ConfigMap 16 | metadata: 17 | name: postgres-config 18 | labels: 19 | app: postgres 20 | data: 21 | POSTGRES_DB: testpostgresdb 22 | POSTGRES_USER: postgres 23 | POSTGRES_PASSWORD: mysecretpassword 24 | --- 25 | apiVersion: apps/v1 26 | kind: StatefulSet 27 | metadata: 28 | name: postgres 29 | spec: 30 | serviceName: "postgres" 31 | replicas: 1 32 | selector: 33 | matchLabels: 34 | app: postgres 35 | 
template: 36 | metadata: 37 | labels: 38 | app: postgres 39 | spec: 40 | containers: 41 | - name: postgres 42 | image: postgres:12.2 43 | envFrom: 44 | - configMapRef: 45 | name: postgres-config 46 | ports: 47 | - containerPort: 5432 48 | name: postgredb 49 | volumeMounts: 50 | - name: postgredb 51 | mountPath: /var/lib/postgresql/data 52 | subPath: postgres 53 | volumeClaimTemplates: 54 | - metadata: 55 | name: postgredb 56 | spec: 57 | accessModes: [ "ReadWriteOnce" ] 58 | storageClassName: gp2 59 | resources: 60 | requests: 61 | storage: 1Gi 62 | -------------------------------------------------------------------------------- /chapter-6/AWS/dnsutils.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: dnsutils 5 | namespace: default 6 | spec: 7 | containers: 8 | - name: dnsutils 9 | image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 10 | command: 11 | - sleep 12 | - "3600" 13 | imagePullPolicy: IfNotPresent 14 | restartPolicy: Always -------------------------------------------------------------------------------- /chapter-6/AWS/iam_policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "iam:CreateServiceLinkedRole", 8 | "ec2:DescribeAccountAttributes", 9 | "ec2:DescribeAddresses", 10 | "ec2:DescribeAvailabilityZones", 11 | "ec2:DescribeInternetGateways", 12 | "ec2:DescribeVpcs", 13 | "ec2:DescribeSubnets", 14 | "ec2:DescribeSecurityGroups", 15 | "ec2:DescribeInstances", 16 | "ec2:DescribeNetworkInterfaces", 17 | "ec2:DescribeTags", 18 | "ec2:GetCoipPoolUsage", 19 | "ec2:DescribeCoipPools", 20 | "elasticloadbalancing:DescribeLoadBalancers", 21 | "elasticloadbalancing:DescribeLoadBalancerAttributes", 22 | "elasticloadbalancing:DescribeListeners", 23 | "elasticloadbalancing:DescribeListenerCertificates", 24 | 
"elasticloadbalancing:DescribeSSLPolicies", 25 | "elasticloadbalancing:DescribeRules", 26 | "elasticloadbalancing:DescribeTargetGroups", 27 | "elasticloadbalancing:DescribeTargetGroupAttributes", 28 | "elasticloadbalancing:DescribeTargetHealth", 29 | "elasticloadbalancing:DescribeTags" 30 | ], 31 | "Resource": "*" 32 | }, 33 | { 34 | "Effect": "Allow", 35 | "Action": [ 36 | "cognito-idp:DescribeUserPoolClient", 37 | "acm:ListCertificates", 38 | "acm:DescribeCertificate", 39 | "iam:ListServerCertificates", 40 | "iam:GetServerCertificate", 41 | "waf-regional:GetWebACL", 42 | "waf-regional:GetWebACLForResource", 43 | "waf-regional:AssociateWebACL", 44 | "waf-regional:DisassociateWebACL", 45 | "wafv2:GetWebACL", 46 | "wafv2:GetWebACLForResource", 47 | "wafv2:AssociateWebACL", 48 | "wafv2:DisassociateWebACL", 49 | "shield:GetSubscriptionState", 50 | "shield:DescribeProtection", 51 | "shield:CreateProtection", 52 | "shield:DeleteProtection" 53 | ], 54 | "Resource": "*" 55 | }, 56 | { 57 | "Effect": "Allow", 58 | "Action": [ 59 | "ec2:AuthorizeSecurityGroupIngress", 60 | "ec2:RevokeSecurityGroupIngress" 61 | ], 62 | "Resource": "*" 63 | }, 64 | { 65 | "Effect": "Allow", 66 | "Action": [ 67 | "ec2:CreateSecurityGroup" 68 | ], 69 | "Resource": "*" 70 | }, 71 | { 72 | "Effect": "Allow", 73 | "Action": [ 74 | "ec2:CreateTags" 75 | ], 76 | "Resource": "arn:aws:ec2:*:*:security-group/*" 77 | }, 78 | { 79 | "Effect": "Allow", 80 | "Action": [ 81 | "ec2:CreateTags", 82 | "ec2:DeleteTags" 83 | ], 84 | "Resource": "arn:aws:ec2:*:*:security-group/*" 85 | }, 86 | { 87 | "Effect": "Allow", 88 | "Action": [ 89 | "ec2:AuthorizeSecurityGroupIngress", 90 | "ec2:RevokeSecurityGroupIngress", 91 | "ec2:DeleteSecurityGroup" 92 | ], 93 | "Resource": "*" 94 | }, 95 | { 96 | "Effect": "Allow", 97 | "Action": [ 98 | "elasticloadbalancing:CreateLoadBalancer", 99 | "elasticloadbalancing:CreateTargetGroup" 100 | ], 101 | "Resource": "*", 102 | "Condition": { 103 | "Null": { 104 | 
"aws:RequestTag/elbv2.k8s.aws/cluster": "false" 105 | } 106 | } 107 | }, 108 | { 109 | "Effect": "Allow", 110 | "Action": [ 111 | "elasticloadbalancing:CreateListener", 112 | "elasticloadbalancing:DeleteListener", 113 | "elasticloadbalancing:CreateRule", 114 | "elasticloadbalancing:DeleteRule" 115 | ], 116 | "Resource": "*" 117 | }, 118 | { 119 | "Effect": "Allow", 120 | "Action": [ 121 | "elasticloadbalancing:AddTags", 122 | "elasticloadbalancing:RemoveTags" 123 | ], 124 | "Resource": [ 125 | "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*", 126 | "arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*", 127 | "arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*" 128 | ], 129 | "Condition": { 130 | "Null": { 131 | "aws:RequestTag/elbv2.k8s.aws/cluster": "true", 132 | "aws:ResourceTag/elbv2.k8s.aws/cluster": "false" 133 | } 134 | } 135 | }, 136 | { 137 | "Effect": "Allow", 138 | "Action": [ 139 | "elasticloadbalancing:AddTags", 140 | "elasticloadbalancing:RemoveTags" 141 | ], 142 | "Resource": [ 143 | "arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*", 144 | "arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*", 145 | "arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*", 146 | "arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*" 147 | ] 148 | }, 149 | { 150 | "Effect": "Allow", 151 | "Action": [ 152 | "elasticloadbalancing:ModifyLoadBalancerAttributes", 153 | "elasticloadbalancing:SetIpAddressType", 154 | "elasticloadbalancing:SetSecurityGroups", 155 | "elasticloadbalancing:SetSubnets", 156 | "elasticloadbalancing:DeleteLoadBalancer", 157 | "elasticloadbalancing:ModifyTargetGroup", 158 | "elasticloadbalancing:ModifyTargetGroupAttributes", 159 | "elasticloadbalancing:DeleteTargetGroup" 160 | ], 161 | "Resource": "*" 162 | }, 163 | { 164 | "Effect": "Allow", 165 | "Action": [ 166 | "elasticloadbalancing:RegisterTargets", 167 | "elasticloadbalancing:DeregisterTargets" 168 | ], 169 | "Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*" 170 | 
}, 171 | { 172 | "Effect": "Allow", 173 | "Action": [ 174 | "elasticloadbalancing:SetWebAcl", 175 | "elasticloadbalancing:ModifyListener", 176 | "elasticloadbalancing:AddListenerCertificates", 177 | "elasticloadbalancing:RemoveListenerCertificates", 178 | "elasticloadbalancing:ModifyRule" 179 | ], 180 | "Resource": "*" 181 | } 182 | ] 183 | } -------------------------------------------------------------------------------- /chapter-6/AWS/web.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: app 5 | spec: 6 | selector: 7 | matchLabels: 8 | app: app 9 | replicas: 3 10 | template: 11 | metadata: 12 | labels: 13 | app: app 14 | spec: 15 | containers: 16 | - name: go-web 17 | image: strongjz/go-web:v0.0.6 18 | ports: 19 | - containerPort: 8080 20 | livenessProbe: 21 | httpGet: 22 | path: /healthz 23 | port: 8080 24 | initialDelaySeconds: 5 25 | periodSeconds: 5 26 | readinessProbe: 27 | httpGet: 28 | path: / 29 | port: 8080 30 | initialDelaySeconds: 5 31 | periodSeconds: 5 32 | env: 33 | - name: MY_NODE_NAME 34 | valueFrom: 35 | fieldRef: 36 | fieldPath: spec.nodeName 37 | - name: MY_POD_NAME 38 | valueFrom: 39 | fieldRef: 40 | fieldPath: metadata.name 41 | - name: MY_POD_NAMESPACE 42 | valueFrom: 43 | fieldRef: 44 | fieldPath: metadata.namespace 45 | - name: MY_POD_IP 46 | valueFrom: 47 | fieldRef: 48 | fieldPath: status.podIP 49 | - name: MY_POD_SERVICE_ACCOUNT 50 | valueFrom: 51 | fieldRef: 52 | fieldPath: spec.serviceAccountName 53 | - name: DB_HOST 54 | value: "postgres" 55 | - name: DB_USER 56 | value: "postgres" 57 | - name: DB_PASSWORD 58 | value: "mysecretpassword" 59 | - name: DB_PORT 60 | value: "5432" 61 | --- 62 | apiVersion: v1 63 | kind: Service 64 | metadata: 65 | name: clusterip-service 66 | labels: 67 | app: app 68 | spec: 69 | type: LoadBalancer 70 | selector: 71 | app: app 72 | ports: 73 | - protocol: TCP 74 | port: 80 75 | targetPort: 8080 76 | 
-------------------------------------------------------------------------------- /chapter-6/README.md: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/strongjz/Networking-and-Kubernetes/bd0e0702ff21473d1216da8a23583473c26da09e/chapter-6/README.md --------------------------------------------------------------------------------
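Editor's note: the manifests in chapter-6/AWS are coupled — the `web.yml` Deployment passes `DB_HOST`, `DB_USER`, `DB_PASSWORD`, and `DB_PORT` as plain env values, and those must agree with the Service name and the `postgres-config` ConfigMap defined in `database.yml` (the web pods reach Postgres via the Service's DNS name). A minimal self-contained sanity check of that coupling, with the values transcribed from the files above (in a real pipeline you would parse the YAML files themselves, e.g. with PyYAML, rather than hardcode them):

```python
# Values transcribed from chapter-6/AWS/database.yml
postgres_service_name = "postgres"   # metadata.name of the Service
postgres_service_port = 5432         # spec.ports[0].port
postgres_config = {                  # data of the postgres-config ConfigMap
    "POSTGRES_DB": "testpostgresdb",
    "POSTGRES_USER": "postgres",
    "POSTGRES_PASSWORD": "mysecretpassword",
}

# Values transcribed from the env section of chapter-6/AWS/web.yml
web_env = {
    "DB_HOST": "postgres",
    "DB_USER": "postgres",
    "DB_PASSWORD": "mysecretpassword",
    "DB_PORT": "5432",
}

# DB_HOST must match the Service name (cluster DNS resolves it),
# and the credentials/port must match what the ConfigMap injects
# into the Postgres StatefulSet via envFrom.
assert web_env["DB_HOST"] == postgres_service_name
assert web_env["DB_USER"] == postgres_config["POSTGRES_USER"]
assert web_env["DB_PASSWORD"] == postgres_config["POSTGRES_PASSWORD"]
assert int(web_env["DB_PORT"]) == postgres_service_port
print("web.yml and database.yml agree")
```

If any of these drift apart (for example, renaming the Service without updating `DB_HOST`), the web pods will fail to connect to the database even though both workloads deploy cleanly.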