├── .gitignore
├── Dockerfile
├── README.md
├── Vagrantfile
├── app
│   └── app.py
├── circle.yml
├── docker-compose.yml
└── prod.yml

/.gitignore:
--------------------------------------------------------------------------------
.DS_Store
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
FROM python:3.6
RUN pip install Flask redis
RUN useradd -ms /bin/bash admin
USER admin
COPY app /app
WORKDIR /app
CMD ["python", "app.py"]
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Docker & Kubernetes Notes

## Table of Contents

- [Hypervisor-based Virtualization](#hypervisor-based-virtualization)
- [Container-based](#container-based)
- [Docker Client](#docker-client)
- [Docker Concepts](#docker-concepts)
- [Commands](#commands)
- [Docker Ports & Logging](#docker-ports--logging)
- [Docker Images](#docker-images)
- [Ways to Build an Image](#ways-to-build-an-image)
- [Dockerfile](#dockerfile)
- [Chain Run Command](#chain-run-command)
- [Sort Multi-Lines Arguments Alphanumerically](#sort-multi-lines-arguments-alphanumerically)
- [CMD](#cmd)
- [Push Images to DockerHub](#push-images-to-dockerhub)
- [Containerize an Application](#containerize-an-application)
- [Docker Container Links](#docker-container-links)
- [Docker Compose](#docker-compose)
- [Docker Networking](#docker-networking)
- [Unit Tests in Containers](#unit-tests-in-containers)
- [Continuous Integration](#continuous-integration)
- [Running in Production](#running-in-production)
- [Kubernetes](#kubernetes)
- [MiniKube](#minikube)
- [Kops](#kops)
- [Load Balancers](#load-balancers)
- [Basics](#basics)
- [Scaling](#scaling)
- [Labels](#labels)
- [Node Labels](#node-labels)
- [Health Checks](#health-checks)
- [Secrets](#secrets)
- [Web UI](#web-ui)
- [Ingress](#ingress)
- [Advanced Topics](#advanced-topics)
- [Config Map](#config-map)
- [Volumes](#volumes)
- [Volume Provisioning](#volume-provisioning)
- [Pet Sets](#pet-sets)
- [Daemon Sets](#daemon-sets)
- [Autoscaling](#autoscaling)
- [Resource Management](#resource-management)
- [Namespaces](#namespaces)
- [User Management](#user-management)
- [Networking](#networking)
- [Node Maintenance](#node-maintenance)
- [High Availability](#high-availability)

## Hypervisor-based Virtualization

- Physical Server
- Host OS
- Hypervisor
  - Debian
    - Libs/bins
  - Ubuntu
    - Libs/bins

### Benefits

- Cost effective
- Easy to scale

### Limitations

- Kernel resource duplication
- Application portability issues

## Container-based

Containers share the same kernel resources.

### Runtime Isolation

- Uses only the minimal requirements
- Allows for fast deployment
- Guaranteed portability

Example:

- Application A - Container A
  - JRE 8
  - JVM
- Application B - Container B
  - JRE 7

## Docker Client

The Docker client sends commands to the Docker daemon:

- docker build - Docker daemon
- docker pull - Docker daemon
- docker run - Docker daemon

The daemon pulls images from a registry (e.g. the Redis image) and creates containers from those images.

The Docker engine can run against a local or a remote Docker daemon.

## Docker Concepts

### Images

- Images are read-only templates used to create containers.
- Images are created with the docker build command, either by us or by other docker users.
- Images are composed of layers of other images.
- Images are stored in a Docker registry.

### Containers

- If an image is a class, then a container is an instance of a class - a runtime object.
- Containers are lightweight and portable encapsulations of an environment in which to run programs.
- Containers are created from images. A container has all the binaries and dependencies needed to run the application.

### Registries and Repositories

- A registry is where we store our images.
- You can host your own registry, or you can use Docker's public registry, which is called DockerHub.
- Inside a registry, images are stored in repositories.
- A Docker repository is a collection of images with the same name but different tags; each tag usually denotes a different version of the image.
- DockerHub is a popular service for repositories.
- Official names are reviewed by DockerHub.
- If you don't specify a tag, it defaults to latest.
- Docker will use the local image first if it's available, otherwise it will download it from the network.

## Commands

To list available images use:

```
docker images
```

To use an image to echo "hello world":

```
docker run image echo "hello world"
```

Use -i -t for an interactive session.

To view currently running Docker containers use:

```
docker ps
```

To list all previously run containers, add -a.

To have a container removed automatically when it exits use:

```
docker run --rm image
```

To name a container use --name.

Use -d to run in detached mode.
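
Putting these flags together — a sketch, where the container name `web` and the `nginx` image are illustrative, not from the notes above:

```
# Run nginx detached, named, auto-removed on exit, host port 8080 -> container port 80
docker run -d --rm --name web -p 8080:80 nginx

# List it while it runs, then stop it (--rm also removes it on stop)
docker ps
docker stop web
```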
161 | 162 | To view low-level docker container information use: 163 | 164 | ``` 165 | docker inspect 166 | ``` 167 | 168 | ## Docker Ports & Logging 169 | 170 | To run docker on a specific port use: 171 | 172 | ``` 173 | docker run -it -p host_port:container_port tomcat:8.0 174 | ``` 175 | 176 | To view logs from a container use: 177 | 178 | ``` 179 | docker logs container_id 180 | ``` 181 | 182 | ## Docker Images 183 | 184 | Docker images contain a series of layers. To see the layers an image has use: 185 | 186 | ``` 187 | docker history image 188 | ``` 189 | 190 | 1. All changes made to a running containers will be written to a writable layer. 191 | 2. When the container is deleted, the writable layer is also deleted, but the underlying image remains unchanged. 192 | 3. Multiple containers can share access to the same underlying image. 193 | 194 | ## Ways to Build an Image 195 | 196 | 1. Create a Dockerfile. 197 | 2. Commit changes made to a Docker container. 198 | 199 | To commit your Docker changes use: 200 | 201 | ``` 202 | docker commit container_id repository_name:tag 203 | ``` 204 | 205 | ## Dockerfile 206 | 207 | To install an image use: 208 | 209 | ``` 210 | FROM image 211 | ``` 212 | 213 | To run a command use: 214 | 215 | ``` 216 | RUN apt-get update 217 | ``` 218 | 219 | To build a container from a Dockerfile use: 220 | 221 | ``` 222 | docker build 223 | ``` 224 | 225 | - Optionally you can specify the build context. 226 | - When the build starts the build context gets saved to a tarball. 227 | - The tarball is then transferred to the daemon. 228 | 229 | ## Chain Run Command 230 | 231 | - Each run command executes on the top writeable layer and then commits the change. 232 | - The new image is then used for the next step in the Dockerfile. 233 | - Each run command creates a new image layer. 234 | - Its recommended to chain the run commands to minimize the new image layers. 
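
A sketch of the chaining advice, assuming a Debian-based image and illustrative package names — one chained RUN produces a single layer instead of three:

```
FROM ubuntu:16.04
# One chained RUN = one layer; arguments sorted alphanumerically
RUN apt-get update && apt-get install -y \
    curl \
    git \
    vim \
 && rm -rf /var/lib/apt/lists/*
```

Cleaning the apt cache in the same RUN keeps the deleted files out of the committed layer, which a separate RUN could not do.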

## Sort Multi-Lines Arguments Alphanumerically

- This will help you avoid duplication.

## CMD

- Specifies which command to run when the container starts.
- If we don't specify the CMD command in the Dockerfile, Docker will use the default command from the image.
- The CMD instruction doesn't run when building the image.
- You can specify the command in either exec form or shell form.

## Push Images to DockerHub

- List docker images by using:

```
docker images
```

- In order to push an image you must first link it to a Docker account.
- Name the repository something like username/repo.
- First tag the image:

```
docker tag hash_id username/repo
```

## Latest Tag

- Docker will use latest as a default tag when no tag is provided.
- A lot of repositories use it to tag the most up-to-date stable image; however, this is only a convention and is not enforced at all.
- Images which are tagged latest will not be updated automatically when a newer version of the image is pushed to the repository.
- Avoid using the latest tag.

### Login

To login use:

```
docker login --username=dsims
```

To push your repo use:

```
docker push username/repo
```

## Containerize an Application

To build the image use:

```
docker build -t dockerapp:v1.0 .
```

To run the docker container use:

```
docker run -d -p 5000:5000 image_id
```

To find out where the docker container is running use:

```
docker-machine ls
```

To run a command inside a container use:

```
docker exec -it container_id bash
```

## Docker Container Links

Allows containers to communicate with each other. Requires a recipient container (i.e. Dockerapp) and a source container (i.e. Redis).

First run the source container with:

```
docker run -d --name redis redis:3.2.0
```

Then run the Dockerapp container by using:

```
docker run -d -p 5000:5000 --link redis dockerapp:v0.3
```

Benefits of Docker Container Links:

- When you build an application with a micro-service architecture, links allow many independent applications running in different containers to connect with one another.
- Creates a secure tunnel between containers that doesn't need to expose any ports externally.

## Docker Compose

- Manually linking containers doesn't make sense when there are a lot of different containers (20+).
- Docker Network allows all services to connect to each other.
- Removes the burden of maintaining scripts for Docker orchestration.
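
A minimal docker-compose.yml for the dockerapp/redis pair might look like this — a sketch; the service names and the compose file version are assumptions based on the examples above:

```
version: '2'
services:
  dockerapp:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis
  redis:
    image: redis:3.2.0
```

With this file in place, `docker-compose up` starts both services on a shared network, so dockerapp can reach redis by its service name without any --link flags.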

To start docker services use:

```
docker-compose up
```

To view Docker compose logs use:

```
docker-compose logs
```

To stop Docker containers use:

```
docker-compose stop
```

To remove stopped service containers use:

```
docker-compose rm
```

To force a rebuild of an image use:

```
docker-compose build
```

## Docker Networking

Each container connects to a bridge network to connect to the host machine.

### Docker Network Types

- Closed Network / None Network
- Bridge Network
- Host Network
- Overlay Network

To see Docker networks use:

```
docker network ls
```

### None Network

A container on the none network is isolated and has no connection to the outside world.

To run a container on the closed network use:

```
docker run -d --net none image
```

- Provides the maximum level of network protection.
- Not a good choice if a network or Internet connection is required.
- Suited for containers that require the maximum level of security.

### Bridge Network Interface

To create a network use:

```
docker network create --driver bridge my_bridge_network
```

To inspect a network use:

```
docker network inspect my_bridge_network
```

To disconnect a container from a network use:

```
docker network disconnect bridge container_3
```

- In a bridge network containers have access to 2 network interfaces (loopback and private).
- All containers in the same network can communicate with each other.
- Containers from different bridge networks can't connect with each other by default.
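
A short session tying the bridge-network commands together — a sketch assuming two throwaway busybox containers:

```
docker network create --driver bridge my_bridge_network
docker run -d --net my_bridge_network --name container_1 busybox sleep 3600
docker run -d --net my_bridge_network --name container_2 busybox sleep 3600
# On a user-defined bridge, containers resolve each other by name
docker exec container_1 ping -c 1 container_2
```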

### Host Network

- Least protected network model; adds the container to the host's network stack.
- Containers on the host network have full access to the host's interfaces.
- These containers are called open containers.
- No isolation.
- Host containers have the best performance.

### Overlay Network

- Supports multi-host networking out-of-the-box.
- Requires some pre-existing conditions before it can be created (running in swarm mode).

### Define Network in Compose

By default Compose sets up a single network.

## Unit Tests in Containers

- Should test some functionality in a container without any outside resources.
- Should run as quickly as possible to avoid being blocked.
- Containers spin up quickly and provide a clean, isolated environment for unit tests.

To run the tests inside of a container use:

```
docker-compose run dockerapp python test.py
```

Pros:

- A single image is used throughout development and production. Increases reliability.

Cons:

- Increases the size of the image.

## Continuous Integration

- Isolated changes are immediately tested and reported when they're added to a larger code base.
- Provides rapid feedback so if a bug is introduced it will be identified as soon as possible.

### Configuring Circle CI

Set DockerHub credentials in Circle CI by going to Project > Settings > Build Settings > Environment Variables.

Then set:

```
DOCKER_HUB_EMAIL
DOCKER_HUB_PWD
DOCKER_HUB_USER_ID
```

to their respective DockerHub credentials.

### Tag the Docker Images with Two Tags

1. Git commit hash of the source code.
2.
Latest

## Running In Production

Concerns:

- There are some missing pieces about Docker around data persistence, networking, security, and identity management.
- The ecosystem to support monitoring and logging is not fully mature yet.

A lot of companies are already using Docker in production.

### Why Run It In a VM?

- To address security concerns
- Hardware-level isolation

Amazon EC2 still uses VMs internally.

### Docker Machine

- Can provision new VMs
- Install Docker Engine
- Configure the Docker client

### Creating a New VM

To create a new Docker VM use:

```
docker-machine create --driver digitalocean --digitalocean-access-token your_token docker-app-machine
```

To see the new machine's environment variables use:

```
docker-machine env docker-app-machine
```

Then to point your Docker client at the new machine use:

```
eval $(docker-machine env docker-app-machine)
```

### Production deployment

```
docker-compose -f prod.yml up
```

### Extends keyword

- Allows for sharing of common configurations among different files or projects.
- Can be useful if you have several different environments that re-use most configuration options.

Use:

```
extends:
  file: common.yml
  service: redis
```

### Docker Swarm

- A tool that clusters many Docker Engines and schedules containers.
- Decides which host to run the container on, based on scheduling methods.

Set Digital Ocean configuration information by using:

```
export DIGITALOCEAN_ACCESS_TOKEN=
export DIGITALOCEAN_PRIVATE_NETWORKING=true
export DIGITALOCEAN_IMAGE=
```

### Service Discovery

- Key component of most distributed systems and service-oriented architectures.
- In the context of Docker Swarm, service discovery is about how the Swarm Manager keeps track of the state of all the nodes in a cluster.

### Deploy Docker App In a Swarm Cluster

1. Provision a consul machine and run it as a key-value store for service discovery.
2. Provision a Docker swarm master node.
3. Provision a Docker swarm slave node.
4. Define the overlay network to support multi-host networking.
5. Deploy our Docker app services on the Swarm cluster via Docker compose.

To display the network configuration use:

```
docker-machine ssh consul ifconfig
```

To ping the private and public IPs use:

```
ping -c 1 $(docker-machine ssh consul 'ifconfig eth0 | grep "inet addr:" | cut -d: -f2 | cut -d" " -f1')
ping -c 1 $(docker-machine ssh consul 'ifconfig eth1 | grep "inet addr:" | cut -d: -f2 | cut -d" " -f1')
```

To configure the Docker client to connect to consul use:

```
eval $(docker-machine env consul)
```

To create a Swarm master node use:

```
docker-machine create -d digitalocean --swarm --swarm-master --swarm-discovery="consul://${KP_IP}:8500" --engine-opt="cluster-store=consul://${KP_IP}:8500" --engine-opt="cluster-advertise=eth1:2376" master
```

To create slave nodes use:

```
docker-machine create -d digitalocean --swarm --swarm-discovery="consul://${KP_IP}:8500" --engine-opt="cluster-store=consul://${KP_IP}:8500" --engine-opt="cluster-advertise=eth1:2376" slave
```

## Kubernetes

Kubernetes is
an open-source orchestration system for Docker containers.

- Lets you schedule containers on a cluster of machines
- You can run multiple containers on one machine
- You can run long-running services (like web applications)
- Manages the state of these containers
- Can start containers on specific nodes
- Will restart a container when it gets killed
- Can move containers from one node to another node

- Instead of running a few docker containers manually, Kubernetes manages them for you.
- Clusters can start with one node and grow to thousands of nodes.
- Some other popular docker orchestrators are:
  - Docker Swarm
  - Mesos

Kubernetes can run:

- On-premise
- Public (AWS)
- Hybrid: public & private

It is:

- Highly modular
- Open source
- Backed by Google

Docker Engine

- Docker runtime
- Software to run docker images

Docker Hub

- Online service to store and fetch docker images
- Also allows you to build docker images online

Kubernetes can be run anywhere (though more integrations exist for AWS/GCE).

- Things like Volumes and External Load Balancers work only with supported Cloud Providers
- Minikube - run Kubernetes locally
- Kops - used to spin up highly available production clusters

### MiniKube

Starting MiniKube:

```
minikube start
```

MiniKube config:

```
cat ~/.kube/config
```

Run a test deployment:

```
kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
```

Expose the deployment and get its URL:

```
kubectl expose deployment hello-minikube --type=NodePort
minikube service hello-minikube --url
```

Stop minikube:

```
minikube stop
```

### Kops

Virtual Machine setup:

```
vagrant init ubuntu/xenial64
```

Virtual Machine run:

```
vagrant up
```

Download Kops on the machine:

```
brew update && brew install kops
```

Install the AWS CLI using pip. Create a Kops user on AWS and give it Administrator Access.

Create a subdomain using Route53 for Kubernetes (kubernetes.domain.com).

To create a kops cluster use:

```
kops create cluster --name=kubernetes.domain.com --state=s3://kops-state-blah --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=kubernetes.domain.com
```

To apply the cluster configuration use:

```
kops update cluster kubernetes.domain.com --yes
```

To edit the cluster use:

```
kops edit cluster kubernetes.domain.com
```

Or:

```
kops edit cluster kubernetes.domain.com --state=s3://kops-state-blah
```

Docker/Kubernetes remarks:

- You should only run one process in one container.
- Don't try to create one giant docker image for your app; split it up if necessary.
- Data in a container is not preserved: when a container stops, all changes within it are lost.
- You can preserve data using volumes, which are covered later in this course.

- A pod describes an application running on Kubernetes.
- A pod can contain one or more tightly coupled containers that make up the app.
- Those apps can easily communicate with each other using their local port numbers.
- Our app only has one container.
- Create pod-helloworld.yml:

```
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s-demo
    image: my-image
    ports:
    - name: nodejs-port
      containerPort: 3000
```

To create the pod from the pod definition use:

```
kubectl create -f k8s-demo/pod-helloworld.yml
```

Useful commands:

```
kubectl get pod - Get information about all running pods
kubectl describe pod <pod> - Describe one pod
kubectl expose pod <pod> --port=444 --name=frontend - Expose the port of a pod (creates a new service)
kubectl port-forward <pod> 8080 - Port forward the exposed pod port to your local machine
kubectl attach <pod> -i - Attach to the pod
kubectl exec <pod> -- command - Execute a command on the pod
kubectl label pods <pod> mylabel=awesome - Add a new label to a pod
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh - Run a shell in a pod - very useful for debugging
```

### Load Balancers

On AWS the load balancer will route the traffic to the correct pod in Kubernetes.

You can also use an haproxy or nginx load balancer in front of your cluster and expose the port directly.

To set up a service with a load balancer use:

```
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 80
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: LoadBalancer
```

### Basics

Internet traffic is directed to a load balancer, which then does a look-up on the iptables. iptables then directs traffic to pods containing Docker containers. Kubelet/kube-proxy also does a look-up against iptables.

### Scaling

If your application is stateless then you can horizontally scale it.

- Stateless means that it doesn't write any local files or keep local sessions.
- All traditional databases (MySQL/Postgres) are stateful; they have database files that can't be split over multiple instances.

Most web applications can be made stateless.

- Session management needs to be done outside the container.
- Scaling in Kubernetes can be done using the Replication Controller.
- The replication controller will ensure a specified number of pod replicas run at all times.
- Pods created with the replication controller will automatically be replaced if they fail, get deleted, or are terminated.
- Using the replication controller is also recommended if you just want to make sure 1 pod is always running, even after reboots.
- You can then run a replication controller with just 1 replica.
- This makes sure that the pod is always running.

To replicate an app 2 times using the Replication Controller:

```
apiVersion: v1
kind: ReplicationController
metadata:
  name: helloworld-controller
spec:
  replicas: 2
  selector:
    app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: k8s-demo
        image: k8s-demo
        ports:
        - containerPort: 3000
```

Via the CLI you can use:

```
kubectl scale --replicas=4 -f config.yml
```

Replica Set

- Replica Set is the next-generation Replication Controller.
- It supports a new selector that can do selection based on filtering according to a set of values.
  - e.g. "environment" can be "dev" or "qa"
  - not only based on equality

- Replica Set is used by the Deployment object.

- A deployment declaration in Kubernetes allows you to do app
deployments and updates
- When using the deployment object, you define the state of your application.
- Kubernetes will then make sure the cluster matches your desired state.
- Using the replication controller or replica set directly might be cumbersome for deploying apps.
- The Deployment object is easier to use and gives you more possibilities.

With a deployment object you can:

- Create a deployment (deploy an app)
- Update a deployment (deploy a new version of an app)
- Do rolling updates (zero-downtime deployments)
- Roll back to a previous version
- Pause/resume a deployment (e.g. to roll out to only a certain percentage)

Example deployment:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
      - name: k8s-demo
        image: k8s-demo
        ports:
        - containerPort: 3000
```

Useful commands:

```
kubectl get deployments - Get information on current deployments
kubectl get rs - Get information about replica sets
kubectl get pods --show-labels - Get pods, and also show labels attached to those pods
kubectl rollout status deployment/helloworld-deployment - Get status of the rollout
kubectl set image deployment/helloworld-deployment k8s-demo=k8s-demo:2 - Update the deployment to image version 2
kubectl edit deployment/helloworld-deployment - Edit the deployment object
kubectl rollout history deployment/helloworld-deployment - Get the rollout history
kubectl rollout undo deployment/helloworld-deployment - Roll back to the previous version
kubectl rollout undo deployment/helloworld-deployment --to-revision=n - Roll back to any version
```

### Services

- Pods are very dynamic; they come and go on the Kubernetes cluster.
- When using a Replication Controller, pods are terminated and created during scaling operations.
- When using Deployments, when updating the image version, pods are terminated and new pods take the place of older pods.
- A service is the logical bridge between the "mortal" pods and other services or end-users.
- kubectl expose creates a new service for your pod.
- Creating a service will create an endpoint for your pod(s):
  - A ClusterIP: a virtual IP address only reachable from within the cluster
  - A NodePort: a port that is the same on each node and is reachable externally
  - A LoadBalancer: a load balancer created by the cloud provider that will route external traffic to every node on the NodePort (ELB on AWS)
- The options just shown only allow you to create virtual IPs or ports.
- There is also a possibility to use DNS names:
  - ExternalName can provide a DNS name for the service
  - e.g. for service discovery using DNS
  - This only works when the DNS add-on is enabled

Example Service definition:

```
apiVersion: v1
kind: Service
metadata:
  name: helloworld-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: helloworld
  type: NodePort
```

By default, NodePort services can only use ports 30000-32767; you can change this behavior by passing --service-node-port-range= to kube-apiserver.

### Labels

- Key/value pairs that can be attached to objects.
- Labels are like tags in AWS or other cloud providers, used to tag resources.
- You can label your objects, for instance your pod.
  - Key: env - Value: dev / qa
  - Key: department - Value: engineering
- Labels are not unique and multiple labels can be added to one object.
- Once labels are attached to an object, you can use filters to narrow down results (Label Selectors).
- Using Label Selectors, you can use matching expressions to match labels.
  - For instance, a particular pod can only run on a node labeled with "env": "dev".
  - More complex matching: "env": "dev" or "qa".

### Node Labels

- You can also use labels to tag nodes.
- Once nodes are tagged, you can use label selectors to let pods only run on specific nodes.
- There are 2 steps required to run a pod on a specific set of nodes:
  - First you tag the node.
  - Then you add a nodeSelector to your pod configuration.

To add labels use:

```
kubectl label nodes node1 hardware=high-spec
kubectl label nodes node2 hardware=low-spec
```

Then add a pod that uses those labels:

```
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-service
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s
    image: k8s
    ports:
    - containerPort: 3000
  nodeSelector:
    hardware: high-spec
```

### Health Checks

- If your application malfunctions, the pod and container can still be running, but the application might not work anymore.
- To detect and resolve problems with your application, you can run health checks.
- You can run 2 different types of health checks:
  - Running a command in the container periodically
  - Periodic checks on a URL (HTTP)
- The typical production application behind a load balancer should always have health checks implemented in some way to ensure availability and resiliency of the app.

Example of health checks:

```
apiVersion: v1
kind: Pod
metadata:
  name: nodehelloworld.example.com
  labels:
    app: helloworld
spec:
  containers:
  - name: k8s
    image: k8s
    ports:
    - containerPort: 3000
    livenessProbe:
      httpGet:
        path: /
        port: 3000
      initialDelaySeconds: 15
      timeoutSeconds: 30
```

### Secrets

- Secrets provide a way in Kubernetes to distribute credentials, keys, passwords or other "secret" data to the pods.
- Kubernetes itself uses this Secrets mechanism to provide the credentials to access the internal API.
- You can also use the same mechanism to provide secrets to your application.
- Secrets are one way to provide secrets, native to Kubernetes.
- There are still other ways your container can get its secrets if you don't want to use Secrets (e.g. using an external vault service).
- Can be used as environment variables.
- Use secrets as a file in a pod
  - This setup uses volumes to be mounted in a container
  - In this volume you have files
  - Can be used for instance for dotenv files, or your app can just read the file
- Use an external image to pull secrets (from a private image registry)

To generate secrets using files:

```
kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt
```

A secret can also be generated from an SSH key or SSL certificate:

```
kubectl create secret generic ssl-certificate --from-file=ssh-privatekey=~/.ssh/id_rsa --from-file=ssl-cert=mysslcert.crt
```

To generate secrets using yaml definitions (the values under `data` must be base64-encoded):

secrets-db-secret.yml

```
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  password:
  username:
```

Then use:

```
kubectl create -f secrets-db-secret.yml
```

Using secrets as environment variables:

```
spec:
  env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password
```

Or as a volume:

```
spec:
  volumeMounts:
  - name: credvolume
    mountPath: /etc/creds
    readOnly: true
volumes:
- name: credvolume
  secret:
    secretName: db-secret
```

### Web UI

- Kubernetes comes with a Web UI you can use instead of the kubectl commands
  - Get an overview of your cluster
  - Create and modify individual Kubernetes resources
- If a password is asked for, you can retrieve it using:

```
kubectl config view
```

To launch the dashboard:

```
minikube dashboard
```

### Advanced Topics

#### Service Discovery

As of Kubernetes 1.3, DNS is a built-in service that's launched automatically.

- The addons are in the /etc/kubernetes/addons directory on the master node.
- The DNS service can be used within pods to find other services running on the same cluster
- Multiple containers within one pod don't need this service, as they can contact each other directly
  - A container in the same pod can connect to a port of another container using localhost:port.
- To make DNS work, a pod will need a Service definition

You only need the name of the service to get back the IP address of the pod, so long as they're in the same namespace.

### Config Map

- Configuration parameters that are not secret can be put in a ConfigMap.
- The input is key-value pairs.
- The ConfigMap key-value pairs can then be read by the app using:
  - Environment variables
  - Container command-line arguments
  - Volumes
- The ConfigMap can also contain full configuration files
  - This file can be mounted using volumes where the application expects its config file
  - This way you can inject configuration settings into containers without changing the container itself

To generate a configmap using files:

```
cat <<EOF > app.properties
driver=jdbc
database=postgres
lookandfeel=1
otherparams=xyz
param.with.hierarchy=xyz
EOF
```

```
kubectl create configmap app-config --from-file=app.properties
```

```
spec:
  containers:
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
  env:
    - name: DRIVER
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: driver
    - name: DATABASE
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: database
```

To return the ConfigMap use:

```
kubectl get configmap app-config
```

### Ingress

- Allows inbound connections to the cluster
- It's an alternative to the external LoadBalancer and nodePorts
- Allows you to easily expose services that need to be accessible from outside the cluster
- With Ingress you can run your own ingress controller (basically a loadbalancer) within the Kubernetes cluster
- There are default Ingress controllers available, or you can write your own ingress controller

Scenario:

Internet -> Ingress Controller (NGINX) -> pod 1
Internet -> Ingress Controller (NGINX) -> pod 2

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld-rules
spec:
  rules:
  - host: helloworld-v1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: helloworld-v1
          servicePort: 80
```

### Volumes

- Volumes in Kubernetes allow you to store state outside the container
- When a container stops, all data on the container itself is lost
- Persistent volumes in Kubernetes allow you to attach a volume to a container that will exist even when the container stops

Volumes can be attached using different volume plugins.

A volume can be mounted by the containers in a pod.
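As an illustration of containers in one pod sharing a volume, here is a sketch using a non-persistent emptyDir volume (which lives only as long as the pod); the names and images are made up:

```
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-example
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}
```

Both containers see the same files under /data; for state that must survive the pod, use a persistent volume plugin instead of emptyDir.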

Using volumes, you could deploy applications with state on your cluster, such as:

- Applications that need to read/write on the local file system and need to be persistent in time
- You could run MySQL using persistent volumes
- The volume can be attached to a new node if the original node is destroyed

To move the pods (and their volumes) off a node, drain the node:

```
kubectl drain nodename
```

To delete an AWS EC2 volume use:

```
aws ec2 delete-volume --volume-id vol-hash
```

### Volume Provisioning

- The Kubernetes plugin allows you to provision storage for you
- The AWS plugin can for instance provision storage for you by creating the volumes in AWS before attaching them to a node
- This is done using the StorageClass object
- To use the auto-provisioned volumes you can create storage.yml:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1a
```

Volume claim:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```

To launch the pod use:

```
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
```

### Pet Sets

- New feature in v1.3 alpha (later renamed StatefulSets)
- Introduced to create stateful applications that need:
  - A stable pod hostname
    - Your podname will have an index when having multiple instances of a pod
  - Multiple pods with volumes based on their ordinal number (podname-x)
  - Currently, deleting and/or scaling a PetSet down will not delete the volumes associated with the PetSet

- A pet set will allow your stateful app to use DNS to find other peers
  - Cassandra clusters and Elasticsearch clusters use DNS to find other members of the cluster
- One running node of your Pet Set is called a Pet (e.g. one node in Cassandra)
  - Using Pet Sets you can for instance run 5 Cassandra nodes on Kubernetes named cassandra-1 through cassandra-5
  - If you didn't use Pet Sets, you would get a dynamic hostname, which you wouldn't be able to use in your configuration files

- A pet set will also allow your stateful application to order the start-up and tear-down of the pets
  - Instead of randomly terminating one Pet, you'll know which one goes
  - This is useful if you first need to drain the data from a node before it can be shut down

### Daemon Sets

- Daemon sets ensure that every node in the Kubernetes cluster runs the same pod resource
- This is useful if you want to ensure that a certain pod is running on every single Kubernetes node
- When a node is added to the cluster, a new pod will be started on that node automatically
- When a node is removed, its pod is cleaned up; it will not be rescheduled on another node
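Once a daemon set is created (like the monitoring-agent example that follows), you can confirm that exactly one pod was scheduled per node; the selector below assumes the `name: monitoring-agent` template label from that example:

```
kubectl get daemonset monitoring-agent
kubectl get pods -o wide --selector=name=monitoring-agent
```

The `-o wide` output includes a NODE column, so you can see the one-pod-per-node placement directly.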

Typical use cases:

- Logging aggregators
- Monitoring
- Load Balancers / Reverse Proxies / API Gateways
- Running a daemon that only needs one instance per node

```
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: monitoring-agent
  labels:
    app: monitoring-agent
spec:
  template:
    metadata:
      labels:
        name: monitoring-agent
    spec:
      containers:
      - name: k8s
        image: k8s
        ports:
        - name: nodejs-port
          containerPort: 3000
```

### Resource Usage Monitoring

- Heapster enables container cluster monitoring and performance analysis
- It provides a monitoring platform for Kubernetes
- It's a prerequisite if you want to do pod auto-scaling
- Heapster exports cluster metrics via REST endpoints
- You can use different backends with Heapster
- Visualizations can be shown using Grafana
- The Kubernetes dashboard will also show graphs once monitoring is enabled
- All of these can be started in pods

Heapster (https://github.com/kubernetes/heapster)

### Autoscaling

- Kubernetes has the possibility to automatically scale pods based on metrics
- Kubernetes can automatically scale a Deployment, ReplicationController, or ReplicaSet
- In Kubernetes 1.3 scaling based on CPU usage is possible out of the box
- With alpha support, application-based metrics are also available (like queries per second or average request latency)
  - To enable this, the cluster has to be started with the env var ENABLE_CUSTOM_METRICS set to true
- Autoscaling will periodically query the utilization of the targeted pods
  - By default this happens every 30 seconds, but it can be changed using the '--horizontal-pod-autoscaler-sync-period' flag
when launching the controller-manager

- Autoscaling will use Heapster, the monitoring tool, to gather its metrics and make scaling decisions
  - Run a deployment with a pod with a CPU resource request of 200m
    - 200m = 200 millicpu (20% of a single CPU core of a node)
  - Horizontal Pod Autoscaling will increase/decrease pods to maintain a target CPU utilization of 50%

```
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hpa-example
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
      - name: hpa-example
        image: gcr.io/google_containers/hpa-example
        ports:
        - name: http-port
          containerPort: 80
        resources:
          requests:
            cpu: 200m
```

```
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example-autoscaler
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```

### Resource Management

- When a Kubernetes cluster is used by multiple people or teams, resource management becomes more important.
- You want to be able to manage the resources you give to a person or a team
- You don't want one person or team taking up all the resources
- You can divide your cluster into namespaces and enable resource quotas on them
  - You can do this using the ResourceQuota object (which also covers object count quotas)
- Each container can specify resource requests and resource limits
  - A resource request is an explicit request for resources
    - The scheduler can use the request to decide where to put the pod
    - You can see it as the minimum amount of resources the pod needs
  - A resource limit is a limit imposed on the container
    - The container will not be able to utilize more resources than specified

Example resource quotas:

- You run a deployment with a pod with a CPU resource request of 200m
  - 200m = 200 millicpu (200 millicores)
  - 200m = 0.2 (20% of a CPU core)
- Memory quotas are defined in MiB or GiB
- If a capacity quota (e.g.
mem/cpu) has been specified by the administrator, then each pod needs to specify one during creation
  - The administrator can specify default request values for pods that don't specify any values for capacity
  - The same holds for valid limit quotas
- If a resource is requested beyond the allowed capacity, the API server will return a 403 Forbidden error

#### Set Resource Quotas

requests.cpu - the sum of all CPU requests of all pods cannot exceed this value
requests.memory - the sum of all memory requests of all pods cannot exceed this value
requests.storage - the sum of storage requests of all persistent volume claims cannot exceed this value
limits.cpu - the sum of the CPU limits of all pods cannot exceed this value
limits.memory - the sum of memory limits of all pods cannot exceed this value

The administrator can also set quotas on the following objects:

configmaps - total number of configmaps that can exist in a namespace
persistentvolumeclaims - total number of persistent volume claims that can exist in a namespace
pods - total number of pods that can exist in a namespace
replicationcontrollers - total number of replicationcontrollers that can exist in a namespace
resourcequotas - total number of resource quotas that can exist in a namespace
services - total number of services that can exist in a namespace
services.loadbalancers - total number of load balancers that can exist in a namespace
services.nodeports - total number of node ports that can exist in a namespace
secrets - total number of secrets that can exist in a namespace

### Namespaces

- Namespaces allow you to create virtual clusters within the same physical cluster
- Namespaces logically separate your cluster
- The standard namespace is called default and that's where all resources are launched by default
- There is
also a namespace for Kubernetes-specific resources, called kube-system
- Namespaces are intended for when you have multiple teams / projects using the same cluster
- The names of resources need to be unique within a namespace, but not across namespaces
- You can limit resources on a per-namespace basis

To create a namespace:

```
kubectl create namespace myspace
```

To get namespaces use:

```
kubectl get namespaces
```

You can set the default namespace using:

```
export CONTEXT=$(kubectl config view | awk '/current-context/ {print $2}')
kubectl config set-context $CONTEXT --namespace=myspace
```

Then to create the resource limits within that namespace:

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: myspace
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```

Create object limits:

```
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: myspace
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "10"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
```

### User Management

There are 2 different types of users you can create:

- A normal user, meant to access the cluster externally
  - These users are not managed through Kubernetes objects
- A service user, which is managed by objects in Kubernetes
  - This is the type of user used to authenticate within the cluster
    - For instance from within a pod, or from kubelet
  - Credentials are managed like Secrets
- There are multiple authentication strategies for normal
users:
  - Client certificates
  - Bearer tokens
  - Authentication proxy
  - HTTP basic authentication
  - OpenID Connect
  - Webhooks

- Service users use service account tokens
  - They are stored as credentials using Secrets
    - Those Secrets are also mounted in pods to allow communication between the services
  - Service users are specific to a namespace
- Any API call that is not authenticated is considered to be from an anonymous user
- Independently from the authentication mechanism, normal users have the following attributes:
  - A username
  - A UID
  - Groups
  - Extra fields

- After a normal user authenticates, they will have access to everything
  - To limit access, you need to configure authorization
  - There are multiple authorization modes to choose from:
    - AlwaysAllow / AlwaysDeny
    - ABAC (Attribute-Based Access Control)
    - RBAC (Role-Based Access Control)
    - Webhook (authorization by a remote service)

- Authorization is still a work in progress
  - ABAC needs to be configured manually
  - RBAC uses the rbac.authorization.k8s.io API group
    - This allows admins to dynamically configure permissions through the API

### Networking

- The approach to networking is quite different from the default Docker setup
- Kubernetes assumes that pods should be able to communicate with other pods, regardless of which node they are running on
  - Every pod has its own IP address
  - Pods on different nodes need to be able to communicate with each other using those IP addresses
  - This is implemented differently depending on your networking setup

- On AWS: kubenet networking
- Alternatives:
  - Container Networking Interface (CNI)
    - Software that provides libraries / plugins for network interfaces within containers
    - Popular solutions include
Calico and Weave
  - An overlay network
    - Flannel is an easy and popular option

### Node Maintenance

- The NodeController is responsible for managing the Node objects
  - It assigns an IP space to the node when a new node is launched
  - It keeps the node list up to date with the available machines
  - The node controller also monitors the health of the node
    - If a node is unhealthy, its Node object gets deleted
    - Pods running on the unhealthy node will get rescheduled

- When adding a new node, the kubelet will attempt to register itself
  - This is called self-registration and is the default behavior
  - It allows you to easily add more nodes to the cluster without making API changes yourself
  - A new node object is automatically created with:
    - The metadata (IP or hostname)
    - Labels (cloud region / availability zone / instance size)
  - A node also has node conditions (e.g. Ready, OutOfDisk)
- When you want to decommission a node, you want to do it gracefully
  - You drain the node before you shut it down or take it out of the cluster
  - To drain a node, use the following command:

```
kubectl drain nodename --grace-period=600
```

If the node runs pods not managed by a controller, use:

```
kubectl drain nodename --force
```

### High Availability

- If you're going to run your cluster in production, you're going to want to have all master services in a high availability (HA) setup
- The setup looks like this:
  - Clustered etcd: run at least 3 etcd nodes
  - Replicated API servers with a LoadBalancer
  - Multiple running instances of the scheduler and the controllers
    - Only one is the leader; the rest are on stand-by
- A cluster like minikube doesn't need HA; it is only a single-node cluster
- If you are going
to use a cluster on AWS, kops can do the heavy lifting for you 1621 | 1622 | ## Credit 1623 | 1624 | - https://www.github.com/jleetutorial/ 1625 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | # All Vagrant configuration is done below. The "2" in Vagrant.configure 5 | # configures the configuration version (we support older styles for 6 | # backwards compatibility). Please don't change it unless you know what 7 | # you're doing. 8 | Vagrant.configure("2") do |config| 9 | # The most common configuration options are documented and commented below. 10 | # For a complete reference, please see the online documentation at 11 | # https://docs.vagrantup.com. 12 | 13 | # Every Vagrant development environment requires a box. You can search for 14 | # boxes at https://vagrantcloud.com/search. 15 | config.vm.box = "ubuntu/xenial64" 16 | 17 | # Disable automatic box update checking. If you disable this, then 18 | # boxes will only be checked for updates when the user runs 19 | # `vagrant box outdated`. This is not recommended. 20 | # config.vm.box_check_update = false 21 | 22 | # Create a forwarded port mapping which allows access to a specific port 23 | # within the machine from a port on the host machine. In the example below, 24 | # accessing "localhost:8080" will access port 80 on the guest machine. 
25 | # NOTE: This will enable public access to the opened port 26 | # config.vm.network "forwarded_port", guest: 80, host: 8080 27 | 28 | # Create a forwarded port mapping which allows access to a specific port 29 | # within the machine from a port on the host machine and only allow access 30 | # via 127.0.0.1 to disable public access 31 | # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1" 32 | 33 | # Create a private network, which allows host-only access to the machine 34 | # using a specific IP. 35 | # config.vm.network "private_network", ip: "192.168.33.10" 36 | 37 | # Create a public network, which generally matched to bridged network. 38 | # Bridged networks make the machine appear as another physical device on 39 | # your network. 40 | # config.vm.network "public_network" 41 | 42 | # Share an additional folder to the guest VM. The first argument is 43 | # the path on the host to the actual folder. The second argument is 44 | # the path on the guest to mount the folder. And the optional third 45 | # argument is a set of non-required options. 46 | # config.vm.synced_folder "../data", "/vagrant_data" 47 | 48 | # Provider-specific configuration so you can fine-tune various 49 | # backing providers for Vagrant. These expose provider-specific options. 50 | # Example for VirtualBox: 51 | # 52 | # config.vm.provider "virtualbox" do |vb| 53 | # # Display the VirtualBox GUI when booting the machine 54 | # vb.gui = true 55 | # 56 | # # Customize the amount of memory on the VM: 57 | # vb.memory = "1024" 58 | # end 59 | # 60 | # View the documentation for the provider you are using for more 61 | # information on available options. 62 | 63 | # Enable provisioning with a shell script. Additional provisioners such as 64 | # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the 65 | # documentation for more information about their specific syntax and use. 
66 | # config.vm.provision "shell", inline: <<-SHELL 67 | # apt-get update 68 | # apt-get install -y apache2 69 | # SHELL 70 | end 71 | -------------------------------------------------------------------------------- /app/app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask 2 | app = Flask(__name__) 3 | 4 | @app.route('/') 5 | def hello_world(): 6 | return 'Hello, World!' 7 | 8 | if __name__ == '__main__': 9 | app.run(host='0.0.0.0') 10 | -------------------------------------------------------------------------------- /circle.yml: -------------------------------------------------------------------------------- 1 | machine: 2 | pre: 3 | - curl -sSL https://s3.amazonaws.com/circle-downloads/install-circleci-docker.sh | bash -s -- 1.10.0 4 | services: 5 | - docker 6 | 7 | dependencies: 8 | pre: 9 | - sudo pip install docker-compose 10 | 11 | test: 12 | override: 13 | - docker-compose up -d 14 | - docker-compose run dockerapp python test.py 15 | 16 | deployment: 17 | hub: 18 | branch: /.*/ 19 | commands: 20 | - docker login -e $DOCKER_HUB_EMAIL -u $DOCKER_HUB_USER_ID -p $DOCKER_HUB_PWD 21 | - docker tag dockerapp_dockerapp $DOCKER_HUB_USER_ID/dockerapp:$CIRCLE_SHA1 22 | - docker tag dockerapp_dockerapp $DOCKER_HUB_USER_ID/dockerapp:latest 23 | - docker push $DOCKER_HUB_USER_ID/dockerapp:$CIRCLE_SHA1 24 | - docker push $DOCKER_HUB_USER_ID/dockerapp:latest 25 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '2' 2 | services: 3 | dockerapp: 4 | build: . 
5 | ports: 6 | - "5000:5000" 7 | networks: 8 | - my_net 9 | 10 | redis: 11 | image: redis:3.2.0 12 | networks: 13 | - my_net 14 | 15 | networks: 16 | my_net: 17 | driver: bridge 18 | -------------------------------------------------------------------------------- /prod.yml: -------------------------------------------------------------------------------- 1 | version: '2' 2 | services: 3 | dockerapp: 4 | image: dockerapp 5 | environment: 6 | - constraint:node==master 7 | depends_on: 8 | - redis 9 | networks: 10 | - my_net 11 | 12 | redis: 13 | image: redis:3.2.0 14 | networks: 15 | - my_net 16 | 17 | networks: 18 | my_net: 19 | driver: bridge 20 | --------------------------------------------------------------------------------