├── AWS EKS.zip ├── Chart.yaml ├── CouchDb Helm.docx ├── DaemonSets-in-Kubernetes.pdf ├── Deploy a registry server ├── Dockepyfile ├── Docker Cookbook by Sébastien Goasguen (z-lib.org).pdf ├── Docker Ppt.pptx ├── Docker-Mastery-Commands.zip ├── DockerFile CHeet Sheet ├── Dockerfilenodejs ├── EKS Installation ├── HelloWorld.java ├── HelmChart ├── IBM Resiliency Orchestration (1).doc ├── Jenkins DOcker Pipeline ├── Jenkins PPT.pptx ├── Jenkins Slave.docx ├── Jenkinspipeline ├── Kubernetes PPT.pptx ├── Kubernetes Up and Running by Brendan Burns, Joe Beda, Kelsey Hightower (z-lib.org).pdf ├── Master+Microservices+with+Spring,+Docker,+Kubernetes (2).pdf ├── Mastering Kubernetes Master The Art Of Container Management By Using The Power Of Kubernetes by Gigi Sayfan (z-lib.org).pdf ├── NodejsAPP ├── TOC.PDF ├── The Docker Book by James Turnbull (z-lib.org).pdf ├── Untitled Diagram.drawio ├── Volumes ├── Windows With Ansible.docx ├── [Jeff-Geerling]-Ansible-for-DevOps(z-lib.org).pdf ├── [Lorin-Hochstein,-Rene-Moser]-Ansible_-Up-and-Runn(z-lib.org).pdf ├── [Russ-McKendrick]-Learn-Ansible_-Automate-cloud,-s(z-lib.org).pdf ├── ansible.pptx ├── apigateway ├── centos 7 machines 26 april.xlsx ├── daemon-set.pdf ├── deployment.yaml ├── deploymentrole ├── docker-compose.yml ├── docker-compose.yml1 ├── dockerfilepy ├── eks.yaml ├── envpython ├── expose ├── fluentd-daemonset.yaml ├── fullstackspringappkubernetes.yaml ├── helm video link ├── ibmdeployment.yaml ├── jenkinsdockerpipeline ├── jenkinspipelineversioing ├── k8sclient ├── k8sdashboardaccess ├── k8smaster ├── k8sreset ├── kubecommand.txt ├── kubernetes installation ├── kubernetes user command ├── liveprobe ├── manifest.txt ├── mongodbcompose ├── mybrd.zip ├── mysqlforservicedescovery ├── mysqlpresetpod ├── mysqlservice ├── namesspace-role-rolebinding-deployment ├── nginx-deployment-ingress ├── nginx-service-ingress ├── nodeappcompose1 ├── nodecompose ├── nodejsdockerfile ├── nodejswebappmysqlservice ├── nodejswebservicemysql ├── ns-role-rolebinding ├── openshift_cheat_sheet_r1v1.pdf ├── package.json ├── pod.yaml ├── presistent volume claim ├── presistentvolume ├── probeapp ├── probedockerfile ├── pv-pod ├── pvclaimpod ├── pvcpod ├── quota command ├── quota-mem-cpu-pod ├── resourcequota ├── server.js ├── springboot ├── statefullset1nginx ├── statefullsetredis ├── taintpod ├── updatedpythoncompose ├── user-peter-role-rolebinding ├── visits.zip ├── wordpresscompose └── wordpressdockercompose /AWS EKS.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/AWS EKS.zip -------------------------------------------------------------------------------- /Chart.yaml: -------------------------------------------------------------------------------- 1 | 672 cd /root 2 | 673 mkdir charts 3 | 674 cd charts/ 4 | 675 mkdir my-nginx 5 | 676 cd my-nginx/ 6 | 677 vi Chart.yaml 7 | 678 history 8 | root@ip-172-31-29-148:~/charts/my-nginx# ^C 9 | root@ip-172-31-29-148:~/charts/my-nginx# cat Chart.yaml 10 | apiVersion: v1 11 | name: my-nginx 12 | version: 0.1.0 13 | appVersion: 1.0 14 | description: My Custom Nginx 15 | -------------------------------------------------------------------------------- /CouchDb Helm.docx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/CouchDb Helm.docx 
-------------------------------------------------------------------------------- /DaemonSets-in-Kubernetes.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/DaemonSets-in-Kubernetes.pdf -------------------------------------------------------------------------------- /Deploy a registry server: -------------------------------------------------------------------------------- 1 | Deploy a registry server 2 | 3 | Before you can deploy a registry, you need to install Docker on the host. A registry is an instance of the registry image, and runs within Docker. 4 | 5 | This topic provides basic information about deploying and configuring a registry. For an exhaustive list of configuration options, see the configuration reference. 6 | 7 | If you have an air-gapped datacenter, see Considerations for air-gapped registries. 8 | 9 | Run a local registry 10 | Use a command like the following to start the registry container: 11 | 12 | $ docker run -d -p 5000:5000 --restart=always --name registry registry:2 13 | The registry is now ready to use. 14 | 15 | Warning: These first few examples show registry configurations that are only appropriate for testing. A production-ready registry must be protected by TLS and should ideally use an access-control mechanism. Keep reading and then continue to the configuration guide to deploy a production-ready registry. 16 | 17 | Copy an image from Docker Hub to your registry 18 | You can pull an image from Docker Hub and push it to your registry. The following example pulls the ubuntu:16.04 image from Docker Hub and re-tags it as my-ubuntu, then pushes it to the local registry. Finally, the ubuntu:16.04 and my-ubuntu images are deleted locally and the my-ubuntu image is pulled from the local registry. 19 | 20 | Pull the ubuntu:16.04 image from Docker Hub. 21 | 22 | $ docker pull ubuntu:16.04 23 | Tag the image as localhost:5000/my-ubuntu. This creates an additional tag for the existing image. When the first part of the tag is a hostname and port, Docker interprets this as the location of a registry, when pushing. 24 | 25 | $ docker tag ubuntu:16.04 localhost:5000/my-ubuntu 26 | Push the image to the local registry running at localhost:5000: 27 | 28 | $ docker push localhost:5000/my-ubuntu 29 | Remove the locally-cached ubuntu:16.04 and localhost:5000/my-ubuntu images, so that you can test pulling the image from your registry. This does not remove the localhost:5000/my-ubuntu image from your registry. 30 | 31 | $ docker image remove ubuntu:16.04 32 | $ docker image remove localhost:5000/my-ubuntu 33 | Pull the localhost:5000/my-ubuntu image from your local registry. 34 | 35 | $ docker pull localhost:5000/my-ubuntu 36 | Stop a local registry 37 | To stop the registry, use the same docker container stop command as with any other container. 38 | 39 | $ docker container stop registry 40 | To remove the container, use docker container rm. 41 | 42 | $ docker container stop registry && docker container rm -v registry 43 | Basic configuration 44 | To configure the container, you can pass additional or modified options to the docker run command. 45 | 46 | The following sections provide basic guidelines for configuring your registry. For more details, see the registry configuration reference.
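A quick way to confirm that the push in the walkthrough above actually landed is to query the registry's HTTP API: the /v2/_catalog endpoint lists every repository the registry holds. A minimal check, assuming the registry from the examples above is still running on localhost:5000:

$ curl http://localhost:5000/v2/_catalog
{"repositories":["my-ubuntu"]}

The second line is the expected JSON response once my-ubuntu has been pushed; a freshly started registry returns {"repositories":[]}.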
47 | 48 | Start the registry automatically 49 | If you want to use the registry as part of your permanent infrastructure, you should set it to restart automatically when Docker restarts or if it exits. This example uses the --restart always flag to set a restart policy for the registry. 50 | 51 | $ docker run -d \ 52 | -p 5000:5000 \ 53 | --restart=always \ 54 | --name registry \ 55 | registry:2 56 | Customize the published port 57 | If you are already using port 5000, or you want to run multiple local registries to separate areas of concern, you can customize the registry’s port settings. This example runs the registry on port 5001 and also names it registry-test. Remember, the first part of the -p value is the host port and the second part is the port within the container. Within the container, the registry listens on port 5000 by default. 58 | 59 | $ docker run -d \ 60 | -p 5001:5000 \ 61 | --name registry-test \ 62 | registry:2 63 | If you want to change the port the registry listens on within the container, you can use the environment variable REGISTRY_HTTP_ADDR to change it. This command causes the registry to listen on port 5001 within the container: 64 | 65 | $ docker run -d \ 66 | -e REGISTRY_HTTP_ADDR=0.0.0.0:5001 \ 67 | -p 5001:5001 \ 68 | --name registry-test \ 69 | registry:2 70 | Storage customization 71 | Customize the storage location 72 | By default, your registry data is persisted as a docker volume on the host filesystem. If you want to store your registry contents at a specific location on your host filesystem, such as if you have an SSD or SAN mounted into a particular directory, you might decide to use a bind mount instead. A bind mount is more dependent on the filesystem layout of the Docker host, but more performant in many situations. The following example bind-mounts the host directory /mnt/registry into the registry container at /var/lib/registry/. 73 | 74 | $ docker run -d \ 75 | -p 5000:5000 \ 76 | --restart=always \ 77 | --name registry \ 78 | -v /mnt/registry:/var/lib/registry \ 79 | registry:2 80 | Customize the storage back-end 81 | By default, the registry stores its data on the local filesystem, whether you use a bind mount or a volume. You can store the registry data in an Amazon S3 bucket, Google Cloud Platform, or on another storage back-end by using storage drivers. For more information, see storage configuration options. 82 | 83 | Run an externally-accessible registry 84 | Running a registry only accessible on localhost has limited usefulness. In order to make your registry accessible to external hosts, you must first secure it using TLS. 85 | 86 | This example is extended in Run the registry as a service below. 87 | 88 | Get a certificate 89 | These examples assume the following: 90 | 91 | Your registry URL is https://myregistry.domain.com/. 92 | Your DNS, routing, and firewall settings allow access to the registry’s host on port 443. 93 | You have already obtained a certificate from a certificate authority (CA). 94 | If you have been issued an intermediate certificate instead, see use an intermediate certificate. 95 | 96 | Create a certs directory. 97 | 98 | $ mkdir -p certs 99 | Copy the .crt and .key files from the CA into the certs directory. The following steps assume that the files are named domain.crt and domain.key. 100 | 101 | Stop the registry if it is currently running. 102 | 103 | $ docker container stop registry 104 | Restart the registry, directing it to use the TLS certificate. 
This command bind-mounts the certs/ directory into the container at /certs/, and sets environment variables that tell the container where to find the domain.crt and domain.key files. The registry runs on port 443, the default HTTPS port. 105 | 106 | $ docker run -d \ 107 | --restart=always \ 108 | --name registry \ 109 | -v "$(pwd)"/certs:/certs \ 110 | -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \ 111 | -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \ 112 | -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \ 113 | -p 443:443 \ 114 | registry:2 115 | Docker clients can now pull from and push to your registry using its external address. The following commands demonstrate this: 116 | 117 | $ docker pull ubuntu:16.04 118 | $ docker tag ubuntu:16.04 myregistry.domain.com/my-ubuntu 119 | $ docker push myregistry.domain.com/my-ubuntu 120 | $ docker pull myregistry.domain.com/my-ubuntu 121 | USE AN INTERMEDIATE CERTIFICATE 122 | A certificate issuer may supply you with an intermediate certificate. In this case, you must concatenate your certificate with the intermediate certificate to form a certificate bundle. You can do this using the cat command: 123 | 124 | cat domain.crt intermediate-certificates.pem > certs/domain.crt 125 | You can use the certificate bundle just as you use the domain.crt file in the previous example. 126 | 127 | Support for Let’s Encrypt 128 | The registry supports using Let’s Encrypt to automatically obtain a browser-trusted certificate. For more information on Let’s Encrypt, see https://letsencrypt.org/how-it-works/ and the relevant section of the registry configuration. 129 | 130 | Use an insecure registry (testing only) 131 | It is possible to use a self-signed certificate, or to use your registry insecurely. Unless you have set up verification for your self-signed certificate, this is for testing only. See run an insecure registry. 132 | 133 | Run the registry as a service 134 | Swarm services provide several advantages over standalone containers. They use a declarative model, which means that you define the desired state and Docker works to keep your service in that state. Services provide automatic load balancing, scaling, and the ability to control the distribution of your service, among other advantages. Services also allow you to store sensitive data such as TLS certificates in secrets. 135 | 136 | The storage back-end you use determines whether you use a fully scaled service or a service with either only a single node or a node constraint. 137 | 138 | If you use a distributed storage driver, such as Amazon S3, you can use a fully replicated service. Each worker can write to the storage back-end without causing write conflicts. 139 | 140 | If you use a local bind mount or volume, each worker node writes to its own storage location, which means that each registry contains a different data set. You can solve this problem by using a single-replica service and a node constraint to ensure that only a single worker is writing to the bind mount. 141 | 142 | The following example starts a registry as a single-replica service, which is accessible on any swarm node on port 443. It assumes you are using the same TLS certificates as in the previous examples. 143 | 144 | First, save the TLS certificate and key as secrets: 145 | 146 | $ docker secret create domain.crt certs/domain.crt 147 | 148 | $ docker secret create domain.key certs/domain.key 149 | Next, add a label to the node where you want to run the registry. To get the node’s name, use docker node ls.
Substitute your node’s name for node1 below. 150 | 151 | $ docker node update --label-add registry=true node1 152 | Next, create the service, granting it access to the two secrets and constraining it to only run on nodes with the label registry=true. Besides the constraint, you are also specifying that only a single replica should run at a time. The example bind-mounts /mnt/registry on the swarm node to /var/lib/registry/ within the container. Bind mounts rely on the pre-existing source directory, so be sure /mnt/registry exists on node1. You might need to create it before running the following docker service create command. 153 | 154 | By default, secrets are mounted into a service at /run/secrets/. 155 | 156 | $ docker service create \ 157 | --name registry \ 158 | --secret domain.crt \ 159 | --secret domain.key \ 160 | --constraint 'node.labels.registry==true' \ 161 | --mount type=bind,src=/mnt/registry,dst=/var/lib/registry \ 162 | -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \ 163 | -e REGISTRY_HTTP_TLS_CERTIFICATE=/run/secrets/domain.crt \ 164 | -e REGISTRY_HTTP_TLS_KEY=/run/secrets/domain.key \ 165 | --publish published=443,target=443 \ 166 | --replicas 1 \ 167 | registry:2 168 | You can access the service on port 443 of any swarm node. Docker sends the requests to the node which is running the service. 169 | 170 | Load balancing considerations 171 | One may want to use a load balancer to distribute load, terminate TLS or provide high availability. While a full load balancing setup is outside the scope of this document, there are a few considerations that can make the process smoother. 172 | 173 | The most important aspect is that a load balanced cluster of registries must share the same resources. For the current version of the registry, this means the following must be the same: 174 | 175 | Storage Driver 176 | HTTP Secret 177 | Redis Cache (if configured) 178 | Differences in any of the above cause problems serving requests. As an example, if you’re using the filesystem driver, all registry instances must have access to the same filesystem root, on the same machine. For other drivers, such as S3 or Azure, they should be accessing the same resource and share an identical configuration. The HTTP Secret coordinates uploads, so also must be the same across instances. Configuring different redis instances works (at the time of writing), but is not optimal if the instances are not shared, because more requests are directed to the backend. 179 | 180 | Important/Required HTTP-Headers 181 | Getting the headers correct is very important. For all responses to any request under the “/v2/” url space, the Docker-Distribution-API-Version header should be set to the value “registry/2.0”, even for a 4xx response. This header allows the docker engine to quickly resolve authentication realms and fall back to version 1 registries, if necessary. Confirming this is set up correctly can help avoid problems with fallback. 182 | 183 | In the same train of thought, you must make sure you are properly sending the X-Forwarded-Proto, X-Forwarded-For, and Host headers to their “client-side” values. Failure to do so usually makes the registry issue redirects to internal hostnames or downgrade from https to http. 184 | 185 | A properly secured registry should return 401 when the “/v2/” endpoint is hit without credentials. The response should include a WWW-Authenticate challenge, providing guidance on how to authenticate, such as with basic auth or a token service.
If the load balancer has health checks, it is recommended to configure it to consider a 401 response as healthy and any other as down. This secures your registry by ensuring that configuration problems with authentication don’t accidentally expose an unprotected registry. If you’re using a less sophisticated load balancer, such as Amazon’s Elastic Load Balancer, that doesn’t allow one to change the healthy response code, health checks can be directed at “/”, which always returns a 200 OK response. 186 | 187 | Restricting access 188 | Except for registries running on secure local networks, registries should always implement access restrictions. 189 | 190 | Native basic auth 191 | The simplest way to achieve access restriction is through basic authentication (this is very similar to other web servers’ basic authentication mechanism). This example uses native basic authentication using htpasswd to store the secrets. 192 | 193 | Warning: You cannot use authentication with authentication schemes that send credentials as clear text. You must configure TLS first for authentication to work. 194 | 195 | Create a password file with one entry for the user testuser, with password testpassword: 196 | 197 | $ mkdir auth 198 | $ docker run \ 199 | --entrypoint htpasswd \ 200 | registry:2 -Bbn testuser testpassword > auth/htpasswd 201 | Stop the registry. 202 | 203 | $ docker container stop registry 204 | Start the registry with basic authentication. 205 | 206 | $ docker run -d \ 207 | -p 5000:5000 \ 208 | --restart=always \ 209 | --name registry \ 210 | -v "$(pwd)"/auth:/auth \ 211 | -e "REGISTRY_AUTH=htpasswd" \ 212 | -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \ 213 | -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \ 214 | -v "$(pwd)"/certs:/certs \ 215 | -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \ 216 | -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \ 217 | registry:2 218 | Try to pull an image from the registry, or push an image to the registry. These commands fail. 219 | 220 | Log in to the registry. 221 | 222 | $ docker login myregistrydomain.com:5000 223 | Provide the username and password from the first step. 224 | 225 | Test that you can now pull an image from the registry or push an image to the registry. 226 | 227 | X509 errors: X509 errors usually indicate that you are attempting to use a self-signed certificate without configuring the Docker daemon correctly. See run an insecure registry. 228 | 229 | More advanced authentication 230 | You may want to leverage more advanced basic auth implementations by using a proxy in front of the registry. See the recipes list. 231 | 232 | The registry also supports delegated authentication which redirects users to a specific trusted token server. This approach is more complicated to set up, and only makes sense if you need to fully configure ACLs and need more control over the registry’s integration into your global authorization and authentication systems. Refer to the following background information and configuration information here. 233 | 234 | This approach requires you to implement your own authentication system or leverage a third-party implementation. 235 | 236 | Deploy your registry using a Compose file 237 | If your registry invocation is advanced, it may be easier to use a Docker compose file to deploy it, rather than relying on a specific docker run invocation. Use the following example docker-compose.yml as a template. 
238 | 239 | registry: 240 | restart: always 241 | image: registry:2 242 | ports: 243 | - 5000:5000 244 | environment: 245 | REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt 246 | REGISTRY_HTTP_TLS_KEY: /certs/domain.key 247 | REGISTRY_AUTH: htpasswd 248 | REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd 249 | REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm 250 | volumes: 251 | - /path/data:/var/lib/registry 252 | - /path/certs:/certs 253 | - /path/auth:/auth 254 | Replace /path with the directory which contains the certs/ and auth/ directories. 255 | 256 | Start your registry by issuing the following command in the directory containing the docker-compose.yml file: 257 | 258 | $ docker-compose up -d 259 | Considerations for air-gapped registries 260 | You can run a registry in an environment with no internet connectivity. However, if you rely on any images which are not local, you need to consider the following: 261 | 262 | You may need to build your local registry’s data volume on a connected host where you can run docker pull to get any images which are available remotely, and then migrate the registry’s data volume to the air-gapped network. 263 | 264 | Certain images, such as the official Microsoft Windows base images, are not distributable. This means that when you push an image based on one of these images to your private registry, the non-distributable layers are not pushed, but are always fetched from their authorized location. This is fine for internet-connected hosts, but not in an air-gapped set-up. 265 | 266 | In Docker 17.06 and higher, you can configure the Docker daemon to allow pushing non-distributable layers to private registries, in this scenario. This is only useful in air-gapped set-ups in the presence of non-distributable images, or in extremely bandwidth-limited situations. You are responsible for ensuring that you are in compliance with the terms of use for non-distributable layers. 267 | 268 | Edit the daemon.json file, which is located in /etc/docker/ on Linux hosts and C:\ProgramData\docker\config\daemon.json on Windows Server. Assuming the file was previously empty, add the following contents: 269 | 270 | { 271 | "allow-nondistributable-artifacts": ["myregistrydomain.com:5000"] 272 | } 273 | The value is an array of registry addresses, separated by commas. 274 | 275 | Save and exit the file. 276 | 277 | Restart Docker. 278 | 279 | Restart the registry if it does not start automatically. 280 | 281 | When you push images to the registries in the list, their non-distributable layers are pushed to the registry. 282 | 283 | Warning: Non-distributable artifacts typically have restrictions on how and where they can be distributed and shared. Only use this feature to push artifacts to private registries and ensure that you are in compliance with any terms that cover redistributing non-distributable artifacts. 
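The air-gapped walkthrough above ends with "Restart Docker" but gives no commands. On a systemd-based Linux host (an assumption — use your init system's equivalent elsewhere), the usual sequence after editing daemon.json is:

# Check that the edited daemon.json is still valid JSON before restarting
$ python3 -m json.tool /etc/docker/daemon.json

# Restart the daemon so it picks up allow-nondistributable-artifacts
$ sudo systemctl restart docker

# Start the registry again if it was not created with --restart=always
$ docker container start registry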
284 | -------------------------------------------------------------------------------- /Dockepyfile: -------------------------------------------------------------------------------- 1 | name: connect 2 | channels: 3 | - conda-forge 4 | - defaults 5 | dependencies: 6 | - python=3.6 7 | - ipython 8 | - anaconda-client 9 | - pymongo 10 | -------------------------------------------------------------------------------- /Docker Cookbook by Sébastien Goasguen (z-lib.org).pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Docker Cookbook by Sébastien Goasguen (z-lib.org).pdf -------------------------------------------------------------------------------- /Docker Ppt.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Docker Ppt.pptx -------------------------------------------------------------------------------- /Docker-Mastery-Commands.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Docker-Mastery-Commands.zip -------------------------------------------------------------------------------- /DockerFile CHeet Sheet: -------------------------------------------------------------------------------- 1 | FROM 2 | 3 | Usage: 4 | 5 | FROM <image> 6 | FROM <image>:<tag> 7 | FROM <image>@<digest> 8 | Information: 9 | 10 | FROM must be the first non-comment instruction in the Dockerfile. 11 | FROM can appear multiple times within a single Dockerfile in order to create multiple images. Simply make a note of the last image ID output by the commit before each new FROM command. 12 | The tag or digest values are optional. If you omit either of them, the builder assumes a latest by default. The builder returns an error if it cannot match the tag value. 13 | Reference - Best Practices 14 | 15 | MAINTAINER 16 | 17 | Usage: 18 | 19 | MAINTAINER <name> 20 | The MAINTAINER instruction allows you to set the Author field of the generated images. 21 | 22 | Reference 23 | 24 | RUN 25 | 26 | Usage: 27 | 28 | RUN <command> (shell form, the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows) 29 | RUN ["<executable>", "<param1>", "<param2>"] (exec form) 30 | Information: 31 | 32 | The exec form makes it possible to avoid shell string munging, and to RUN commands using a base image that does not contain the specified shell executable. 33 | The default shell for the shell form can be changed using the SHELL command. 34 | Normal shell processing does not occur when using the exec form. For example, RUN ["echo", "$HOME"] will not do variable substitution on $HOME. 35 | Reference - Best Practices 36 | 37 | CMD 38 | 39 | Usage: 40 | 41 | CMD ["<executable>","<param1>","<param2>"] (exec form, this is the preferred form) 42 | CMD ["<param1>","<param2>"] (as default parameters to ENTRYPOINT) 43 | CMD <command> <param1> <param2> (shell form) 44 | Information: 45 | 46 | The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well. 47 | There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect. 48 | If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format. 49 | If the user specifies arguments to docker run then they will override the default specified in CMD. 50 | Normal shell processing does not occur when using the exec form. For example, CMD ["echo", "$HOME"] will not do variable substitution on $HOME. 51 | Reference - Best Practices 52 | 53 | LABEL 54 | 55 | Usage: 56 | 57 | LABEL <key>=<value> [<key>=<value> ...] 58 | Information: 59 | 60 | The LABEL instruction adds metadata to an image. 61 | To include spaces within a LABEL value, use quotes and backslashes as you would in command-line parsing. 62 | Labels are additive including LABELs in FROM images. 63 | If Docker encounters a label/key that already exists, the new value overrides any previous labels with identical keys. 64 | To view an image’s labels, use the docker inspect command. They will be under the "Labels" JSON attribute. 65 | Reference - Best Practices 66 | 67 | EXPOSE 68 | 69 | Usage: 70 | 71 | EXPOSE <port> [<port> ...] 72 | Information: 73 | 74 | Informs Docker that the container listens on the specified network port(s) at runtime. 75 | EXPOSE does not make the ports of the container accessible to the host. 76 | Reference - Best Practices 77 | 78 | ENV 79 | 80 | Usage: 81 | 82 | ENV <key> <value> 83 | ENV <key>=<value> [<key>=<value> ...] 84 | Information: 85 | 86 | The ENV instruction sets the environment variable <key> to the value <value>. 87 | The value will be in the environment of all “descendant” Dockerfile commands and can be replaced inline as well. 88 | The environment variables set using ENV will persist when a container is run from the resulting image. 89 | The first form will set a single variable to a value with the entire string after the first space being treated as the <value> - including characters such as spaces and quotes. 90 | Reference - Best Practices 91 | 92 | ADD 93 | 94 | Usage: 95 | 96 | ADD <src> [<src> ...] <dest> 97 | ADD ["<src>", ... "<dest>"] (this form is required for paths containing whitespace) 98 | Information: 99 | 100 | Copies new files, directories, or remote file URLs from <src> and adds them to the filesystem of the image at the path <dest>. 101 | <src> may contain wildcards and matching will be done using Go’s filepath.Match rules. 102 | If <src> is a file or directory, then they must be relative to the source directory that is being built (the context of the build). 103 | <dest> is an absolute path, or a path relative to WORKDIR. 104 | If <dest> doesn’t exist, it is created along with all missing directories in its path. 105 | Reference - Best Practices 106 | 107 | COPY 108 | 109 | Usage: 110 | 111 | COPY <src> [<src> ...] <dest> 112 | COPY ["<src>", ... "<dest>"] (this form is required for paths containing whitespace) 113 | Information: 114 | 115 | Copies new files or directories from <src> and adds them to the filesystem of the image at the path <dest>. 116 | <src> may contain wildcards and matching will be done using Go’s filepath.Match rules. 117 | <src> must be relative to the source directory that is being built (the context of the build). 118 | <dest> is an absolute path, or a path relative to WORKDIR. 119 | If <dest> doesn’t exist, it is created along with all missing directories in its path. 120 | Reference - Best Practices 121 | 122 | ENTRYPOINT 123 | 124 | Usage: 125 | 126 | ENTRYPOINT ["<executable>", "<param1>", "<param2>"] (exec form, preferred) 127 | ENTRYPOINT <command> <param1> <param2> (shell form) 128 | Information: 129 | 130 | Allows you to configure a container that will run as an executable. 131 | Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT and will override all elements specified using CMD. 132 | The shell form prevents any CMD or run command line arguments from being used, but the ENTRYPOINT will start via the shell. This means the executable will not be PID 1 nor will it receive UNIX signals. Prepend exec to get around this drawback. 133 | Only the last ENTRYPOINT instruction in the Dockerfile will have an effect. 134 | Reference - Best Practices 135 | 136 | VOLUME 137 | 138 | Usage: 139 | 140 | VOLUME ["<path>", ...] 141 | VOLUME <path> [<path> ...] 142 | Creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers. 143 | 144 | Reference - Best Practices 145 | 146 | USER 147 | 148 | Usage: 149 | 150 | USER <username | UID> 151 | The USER instruction sets the user name or UID to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile. 152 | 153 | Reference - Best Practices 154 | 155 | WORKDIR 156 | 157 | Usage: 158 | 159 | WORKDIR </path/to/workdir> 160 | Information: 161 | 162 | Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions that follow it. 163 | It can be used multiple times in the one Dockerfile. If a relative path is provided, it will be relative to the path of the previous WORKDIR instruction. 164 | Reference - Best Practices 165 | 166 | ARG 167 | 168 | Usage: 169 | 170 | ARG <name>[=<default value>] 171 | Information: 172 | 173 | Defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag. 174 | Multiple variables may be defined by specifying ARG multiple times. 175 | It is not recommended to use build-time variables for passing secrets like github keys, user credentials, etc. Build-time variable values are visible to any user of the image with the docker history command. 176 | Environment variables defined using the ENV instruction always override an ARG instruction of the same name. 177 | Docker has a set of predefined ARG variables that you can use without a corresponding ARG instruction in the Dockerfile. 178 | HTTP_PROXY and http_proxy 179 | HTTPS_PROXY and https_proxy 180 | FTP_PROXY and ftp_proxy 181 | NO_PROXY and no_proxy 182 | Reference 183 | 184 | ONBUILD 185 | 186 | Usage: 187 | 188 | ONBUILD <INSTRUCTION> 189 | Information: 190 | 191 | Adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile. 192 | Any build instruction can be registered as a trigger. 193 | Triggers are inherited by the "child" build only. In other words, they are not inherited by "grand-children" builds. 194 | The ONBUILD instruction may not trigger FROM, MAINTAINER, or ONBUILD instructions. 195 | Reference - Best Practices 196 | 197 | STOPSIGNAL 198 | 199 | Usage: 200 | 201 | STOPSIGNAL <signal> 202 | The STOPSIGNAL instruction sets the system call signal that will be sent to the container to exit. This signal can be a valid unsigned number that matches a position in the kernel’s syscall table, for instance 9, or a signal name in the format SIGNAME, for instance SIGKILL. 203 | 204 | Reference 205 | 206 | HEALTHCHECK 207 | 208 | Usage: 209 | 210 | HEALTHCHECK [<options>] CMD <command> (check container health by running a command inside the container) 211 | HEALTHCHECK NONE (disable any healthcheck inherited from the base image) 212 | Information: 213 | 214 | Tells Docker how to test a container to check that it is still working 215 | Whenever a health check passes, it becomes healthy. After a certain number of consecutive failures, it becomes unhealthy. 216 | The <options> that can appear are... 217 | --interval=<duration> (default: 30s) 218 | --timeout=<duration> (default: 30s) 219 | --retries=<number> (default: 3) 220 | The health check will first run interval seconds after the container is started, and then again interval seconds after each previous check completes. If a single run of the check takes longer than timeout seconds then the check is considered to have failed. It takes retries consecutive failures of the health check for the container to be considered unhealthy. 221 | There can only be one HEALTHCHECK instruction in a Dockerfile. If you list more than one then only the last HEALTHCHECK will take effect. 222 | <command> can be either a shell command or an exec JSON array. 223 | The command's exit status indicates the health status of the container. 224 | 0: success - the container is healthy and ready for use 225 | 1: unhealthy - the container is not working correctly 226 | 2: reserved - do not use this exit code 227 | The first 4096 bytes of stdout and stderr from the <command> are stored and can be queried with docker inspect. 228 | When the health status of a container changes, a health_status event is generated with the new status. 229 | Reference 230 | 231 | SHELL 232 | 233 | Usage: 234 | 235 | SHELL ["<executable>", "<param1>", "<param2>"] 236 | Information: 237 | 238 | Allows the default shell used for the shell form of commands to be overridden. 239 | Each SHELL instruction overrides all previous SHELL instructions, and affects all subsequent instructions. 240 | Allows an alternate shell to be used such as zsh, csh, tcsh, powershell, and others. 241 | Reference 242 | -------------------------------------------------------------------------------- /Dockerfilenodejs: -------------------------------------------------------------------------------- 1 | FROM node:10 2 | 3 | # Create app directory 4 | WORKDIR /usr/src/app 5 | 6 | # Install app dependencies 7 | # A wildcard is used to ensure both package.json AND package-lock.json are copied 8 | # where available (npm@5+) 9 | COPY package*.json ./ 10 | 11 | RUN npm install 12 | # If you are building your code for production 13 | # RUN npm ci --only=production 14 | 15 | # Bundle app source 16 | COPY . .
17 | 18 | EXPOSE 8080 19 | CMD [ "node", "server.js" ] 20 | -------------------------------------------------------------------------------- /EKS Installation: -------------------------------------------------------------------------------- 1 | Create EKS Cluster & Node Groups 2 | Step-00: Introduction 3 | Understand about EKS Core Objects 4 | Control Plane 5 | Worker Nodes & Node Groups 6 | Fargate Profiles 7 | VPC 8 | Create EKS Cluster 9 | Associate EKS Cluster to IAM OIDC Provider 10 | Create EKS Node Groups 11 | Verify Cluster, Node Groups, EC2 Instances, IAM Policies and Node Groups 12 | Step-01: Create EKS Cluster using eksctl 13 | It will take 15 to 20 minutes to create the Cluster Control Plane 14 | # Create Cluster 15 | eksctl create cluster --name=eksdemo1 \ 16 | --region=us-east-1 \ 17 | --zones=us-east-1a,us-east-1b \ 18 | --without-nodegroup 19 | 20 | # Get List of clusters 21 | eksctl get cluster 22 | Step-02: Create & Associate IAM OIDC Provider for our EKS Cluster 23 | To enable and use AWS IAM roles for Kubernetes service accounts on our EKS cluster, we must create & associate an OIDC identity provider. 24 | To do so using eksctl we can use the below command. 25 | Use the latest eksctl version (as of today the latest version is 0.21.0) 26 | # Template 27 | eksctl utils associate-iam-oidc-provider \ 28 | --region region-code \ 29 | --cluster <cluster-name> \ 30 | --approve 31 | 32 | # Replace with region & cluster name 33 | eksctl utils associate-iam-oidc-provider \ 34 | --region us-east-1 \ 35 | --cluster eksdemo1 \ 36 | --approve 37 | Step-03: Create EC2 Keypair 38 | Create a new EC2 Keypair with the name kube-demo 39 | We will use this keypair when creating the EKS NodeGroup. 40 | This will help us to login to the EKS Worker Nodes using Terminal. 41 | Step-04: Create Node Group with additional Add-Ons in Public Subnets 42 | These add-ons will create the respective IAM policies for us automatically within our Node Group role. 43 | # Create Public Node Group 44 | eksctl create nodegroup --cluster=eksdemo1 \ 45 | --region=us-east-1 \ 46 | --name=eksdemo1-ng-public1 \ 47 | --node-type=t3.medium \ 48 | --nodes=2 \ 49 | --nodes-min=2 \ 50 | --nodes-max=4 \ 51 | --node-volume-size=20 \ 52 | --ssh-access \ 53 | --ssh-public-key=kube-demo \ 54 | --managed \ 55 | --asg-access \ 56 | --external-dns-access \ 57 | --full-ecr-access \ 58 | --appmesh-access \ 59 | --alb-ingress-access 60 | Step-05: Verify Cluster & Nodes 61 | Verify NodeGroup subnets to confirm EC2 Instances are in Public Subnet 62 | Verify the node group subnet to ensure it was created in public subnets 63 | Go to Services -> EKS -> eksdemo1 -> eksdemo1-ng-public1 64 | Click on Associated subnet in Details tab 65 | Click on Route Table Tab.
66 | We should see the internet route via the Internet Gateway (0.0.0.0/0 -> igw-xxxxxxxx) 67 | Verify Cluster, NodeGroup in EKS Management Console 68 | Go to Services -> Elastic Kubernetes Service -> eksdemo1 69 | List Worker Nodes 70 | # List EKS clusters 71 | eksctl get cluster 72 | 73 | # List NodeGroups in a cluster 74 | eksctl get nodegroup --cluster=<clusterName> 75 | 76 | # List Nodes in current kubernetes cluster 77 | kubectl get nodes -o wide 78 | 79 | # Our kubectl context should be automatically changed to new cluster 80 | kubectl config view --minify 81 | -------------------------------------------------------------------------------- /HelloWorld.java: -------------------------------------------------------------------------------- 1 | public class HelloWorld { 2 | public static void main(String[] args){ 3 | System.out.println("Hello World :) "); 4 | } 5 | } 6 | -------------------------------------------------------------------------------- /HelmChart: -------------------------------------------------------------------------------- 1 | 672 cd /root 2 | 673 mkdir charts 3 | 674 cd charts/ 4 | 675 mkdir my-nginx 5 | 676 cd my-nginx/ 6 | 677 vi Chart.yaml 7 | root@ip-172-31-29-148:~/charts/my-nginx# cat Chart.yaml 8 | apiVersion: v1 9 | name: mg0nginx 10 | version: 0.1.0 11 | appVersion: 1.0 12 | description: My Custom Nginx 13 | -------------------------------------------------------------------------------- /IBM Resiliency Orchestration (1).doc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/IBM Resiliency Orchestration (1).doc -------------------------------------------------------------------------------- /Jenkins DOcker Pipeline: -------------------------------------------------------------------------------- 1 | 206 apt install default-jdk 2 | 207 curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null 3 | 208 echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null 4 | 209 sudo apt-get update 5 | 210 sudo apt-get install jenkins 6 | 211 apt-get install ca-certificates curl gnupg lsb-release 7 | 212 echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ 8 | $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null 9 | 213 apt-get update 10 | 214 apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin 11 | 215 apt install maven -y 12 | 216 /var/lib/jenkins/secrets/initialAdminPassword 13 | 217 cat/var/lib/jenkins/secrets/initialAdminPassword 14 | 218 cat /var/lib/jenkins/secrets/initialAdminPassword 15 | 219 mvn --version 16 | 220 usermod -aG docker jenkins 17 | 221 service jenkins restart 18 | 222 docker images 19 | 223 docker login 20 | -------------------------------------------------------------------------------- /Jenkins PPT.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Jenkins PPT.pptx -------------------------------------------------------------------------------- /Jenkins Slave.docx: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Jenkins Slave.docx -------------------------------------------------------------------------------- /Jenkinspipeline: -------------------------------------------------------------------------------- 1 | node { 2 | stage('SCM Checkout'){ 3 | git 'https://github.com/javahometech/my-app' 4 | } 5 | stage('MVN Package'){ 6 | def mvnHome = tool name: 'localMaven', type: 'maven' 7 | def mvnCMD = "${mvnHome}/bin/mvn" 8 | sh "${mvnCMD} clean package" 9 | } 10 | stage ('Build Docker Image'){ 11 | sh 'docker build -t gopal1409/my-app:1.0.0 .' 12 | } 13 | stage('Push Docker Image'){ 14 | withCredentials([string(credentialsId: 'container p', variable: 'dockerHubPwd')]) { 15 | sh 'docker login -u gopal1409 -p ${dockerHubPwd}' 16 | } 17 | sh 'docker push gopal1409/my-app:1.0.0' 18 | } 19 | stage('RUN Container on Dev Server'){ 20 | 21 | def dockerRun = 'docker run -p 8080:8080 -d --name myapp gopal1409/my-app:1.0.0' 22 | sshagent(['dev-server']) { 23 | sh "ssh -o StrictHostKeyChecking=no root@192.168.233.129 ${dockerRun}" 24 | } 25 | } 26 | 27 | } 28 | -------------------------------------------------------------------------------- /Kubernetes PPT.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Kubernetes PPT.pptx -------------------------------------------------------------------------------- /Kubernetes Up and Running by Brendan Burns, Joe Beda, Kelsey Hightower (z-lib.org).pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Kubernetes Up and Running by Brendan Burns, Joe Beda, Kelsey Hightower (z-lib.org).pdf -------------------------------------------------------------------------------- /Master+Microservices+with+Spring,+Docker,+Kubernetes (2).pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Master+Microservices+with+Spring,+Docker,+Kubernetes (2).pdf -------------------------------------------------------------------------------- /Mastering Kubernetes Master The Art Of Container Management By Using The Power Of Kubernetes by Gigi Sayfan (z-lib.org).pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Mastering Kubernetes Master The Art Of Container Management By Using The Power Of Kubernetes by Gigi Sayfan (z-lib.org).pdf -------------------------------------------------------------------------------- /NodejsAPP: -------------------------------------------------------------------------------- 1 | Dockerizing a Node.js web app 2 | The goal of this example is to show you how to get a Node.js application into a Docker container. The guide is intended for development, and not for a production deployment. The guide also assumes you have a working Docker installation and a basic understanding of how a Node.js application is structured. 3 | 4 | In the first part of this guide we will create a simple web application in Node.js, then we will build a Docker image for that application, and lastly we will instantiate a container from that image.
5 | 6 | Docker allows you to package an application with its environment and all of its dependencies into a "box", called a container. Usually, a container consists of an application running in a stripped-to-basics version of a Linux operating system. An image is the blueprint for a container, a container is a running instance of an image. 7 | 8 | Create the Node.js app 9 | First, create a new directory where all the files would live. In this directory create a package.json file that describes your app and its dependencies: 10 | 11 | { 12 | "name": "docker_web_app", 13 | "version": "1.0.0", 14 | "description": "Node.js on Docker", 15 | "author": "First Last <first.last@example.com>", 16 | "main": "server.js", 17 | "scripts": { 18 | "start": "node server.js" 19 | }, 20 | "dependencies": { 21 | "express": "^4.16.1" 22 | } 23 | } 24 | With your new package.json file, run npm install. If you are using npm version 5 or later, this will generate a package-lock.json file which will be copied to your Docker image. 25 | 26 | Then, create a server.js file that defines a web app using the Express.js framework: 27 | 28 | 'use strict'; 29 | 30 | const express = require('express'); 31 | 32 | // Constants 33 | const PORT = 8080; 34 | const HOST = '0.0.0.0'; 35 | 36 | // App 37 | const app = express(); 38 | app.get('/', (req, res) => { 39 | res.send('Hello World'); 40 | }); 41 | 42 | app.listen(PORT, HOST); 43 | console.log(`Running on http://${HOST}:${PORT}`); 44 | In the next steps, we'll look at how you can run this app inside a Docker container using the official Docker image. First, you'll need to build a Docker image of your app. 45 | 46 | Creating a Dockerfile 47 | Create an empty file called Dockerfile: 48 | 49 | touch Dockerfile 50 | Open the Dockerfile in your favorite text editor 51 | 52 | The first thing we need to do is define from what image we want to build from. Here we will use the latest LTS (long term support) version 10 of node available from the Docker Hub: 53 | 54 | FROM node:10 55 | Next we create a directory to hold the application code inside the image, this will be the working directory for your application: 56 | 57 | # Create app directory 58 | WORKDIR /usr/src/app 59 | This image comes with Node.js and NPM already installed so the next thing we need to do is to install your app dependencies using the npm binary. Please note that if you are using npm version 4 or earlier a package-lock.json file will not be generated. 60 | 61 | # Install app dependencies 62 | # A wildcard is used to ensure both package.json AND package-lock.json are copied 63 | # where available (npm@5+) 64 | COPY package*.json ./ 65 | 66 | RUN npm install 67 | # If you are building your code for production 68 | # RUN npm ci --only=production 69 | Note that, rather than copying the entire working directory, we are only copying the package.json file. This allows us to take advantage of cached Docker layers. bitJudo has a good explanation of this here. Furthermore, the npm ci command, specified in the comments, helps provide faster, reliable, reproducible builds for production environments. You can read more about this here. 70 | 71 | To bundle your app's source code inside the Docker image, use the COPY instruction: 72 | 73 | # Bundle app source 74 | COPY . . 75 | Your app binds to port 8080 so you'll use the EXPOSE instruction to have it mapped by the docker daemon: 76 | 77 | EXPOSE 8080 78 | Last but not least, define the command to run your app using CMD which defines your runtime.
Here we will use node server.js to start your server: 79 | 80 | CMD [ "node", "server.js" ] 81 | Your Dockerfile should now look like this: 82 | 83 | FROM node:10 84 | 85 | # Create app directory 86 | WORKDIR /usr/src/app 87 | 88 | # Install app dependencies 89 | # A wildcard is used to ensure both package.json AND package-lock.json are copied 90 | # where available (npm@5+) 91 | COPY package*.json ./ 92 | 93 | RUN npm install 94 | # If you are building your code for production 95 | # RUN npm ci --only=production 96 | 97 | # Bundle app source 98 | COPY . . 99 | 100 | EXPOSE 8080 101 | CMD [ "node", "server.js" ] 102 | .dockerignore file 103 | Create a .dockerignore file in the same directory as your Dockerfile with the following content: 104 | 105 | node_modules 106 | npm-debug.log 107 | This will prevent your local modules and debug logs from being copied onto your Docker image and possibly overwriting modules installed within your image. 108 | 109 | Building your image 110 | Go to the directory that has your Dockerfile and run the following command to build the Docker image. The -t flag lets you tag your image so it's easier to find later using the docker images command: 111 | 112 | docker build -t <your username>/node-web-app . 113 | Your image will now be listed by Docker: 114 | 115 | $ docker images 116 | 117 | # Example 118 | REPOSITORY TAG ID CREATED 119 | node 10 1934b0b038d1 5 days ago 120 | <your username>/node-web-app latest d64d3505b0d2 1 minute ago 121 | Run the image 122 | Running your image with -d runs the container in detached mode, leaving the container running in the background. The -p flag redirects a public port to a private port inside the container. Run the image you previously built: 123 | 124 | docker run -p 49160:8080 -d <your username>/node-web-app 125 | Print the output of your app: 126 | 127 | # Get container ID 128 | $ docker ps 129 | 130 | # Print app output 131 | $ docker logs <container id> 132 | 133 | # Example 134 | Running on http://localhost:8080 135 | If you need to go inside the container you can use the exec command: 136 | 137 | # Enter the container 138 | $ docker exec -it <container id> /bin/bash 139 | Test 140 | To test your app, get the port of your app that Docker mapped: 141 | 142 | $ docker ps 143 | 144 | # Example 145 | ID IMAGE COMMAND ... PORTS 146 | ecce33b30ebf <your username>/node-web-app:latest npm start ... 49160->8080 147 | In the example above, Docker mapped the 8080 port inside of the container to the port 49160 on your machine.
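Rather than reading the PORTS column of docker ps by eye, you can ask Docker for the mapping directly with docker port. A small check, reusing the example container ID from above (your ID and host port will differ):

$ docker port ecce33b30ebf 8080
0.0.0.0:49160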
148 | 149 | Now you can call your app using curl (install if needed via: sudo apt-get install curl): 150 | 151 | $ curl -i localhost:49160 152 | 153 | HTTP/1.1 200 OK 154 | X-Powered-By: Express 155 | Content-Type: text/html; charset=utf-8 156 | Content-Length: 12 157 | ETag: W/"c-M6tWOb/Y57lesdjQuHeB1P/qTV0" 158 | Date: Mon, 13 Nov 2017 20:53:59 GMT 159 | Connection: keep-alive 160 | 161 | Hello world 162 | -------------------------------------------------------------------------------- /TOC.PDF: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/TOC.PDF -------------------------------------------------------------------------------- /The Docker Book by James Turnbull (z-lib.org).pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/The Docker Book by James Turnbull (z-lib.org).pdf -------------------------------------------------------------------------------- /Untitled Diagram.drawio: -------------------------------------------------------------------------------- 1 | UzV2zq1wL0osyPDNT0nNUTV2VTV2LsrPL4GwciucU3NyVI0MMlNUjV1UjYwMgFjVyA2HrCFY1qAgsSg1rwSLBiADYTaQg2Y1AA== -------------------------------------------------------------------------------- /Volumes: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: task-pv-volume 5 | labels: 6 | type: local 7 | spec: 8 | storageClassName: manual 9 | capacity: 10 | storage: 10Gi 11 | accessModes: 12 | - ReadWriteOnce 13 | hostPath: 14 | path: "/mnt/data" 15 | 16 | 17 | pvc.yaml 18 | apiVersion: v1 19 | kind: PersistentVolumeClaim 20 | metadata: 21 | name: task-pv-claim 22 | spec: 23 | storageClassName: manual 24 | accessModes: 25 | - ReadWriteOnce 26 | resources: 27 | requests: 28 | storage: 3Gi 29 | -------------------------------------------------------------------------------- /Windows With Ansible.docx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/Windows With Ansible.docx -------------------------------------------------------------------------------- /[Jeff-Geerling]-Ansible-for-DevOps(z-lib.org).pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/[Jeff-Geerling]-Ansible-for-DevOps(z-lib.org).pdf -------------------------------------------------------------------------------- /[Lorin-Hochstein,-Rene-Moser]-Ansible_-Up-and-Runn(z-lib.org).pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/[Lorin-Hochstein,-Rene-Moser]-Ansible_-Up-and-Runn(z-lib.org).pdf -------------------------------------------------------------------------------- /[Russ-McKendrick]-Learn-Ansible_-Automate-cloud,-s(z-lib.org).pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/[Russ-McKendrick]-Learn-Ansible_-Automate-cloud,-s(z-lib.org).pdf 
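The Volumes file a few entries back defines only the PersistentVolume and PersistentVolumeClaim halves of the upstream Kubernetes hostPath walkthrough; the Pod that actually mounts task-pv-claim is the missing third piece. A sketch following that same upstream example (the nginx image and mount path are that example's choices, not requirements):

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim   # must match the claim name in the Volumes file
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"   # nginx serves files written to the PV
          name: task-pv-storage

Because the claim requests storageClassName: manual, it binds to the task-pv-volume defined above, so anything written to /mnt/data on the node appears in the container.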
-------------------------------------------------------------------------------- /ansible.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/ansible.pptx -------------------------------------------------------------------------------- /apigateway: -------------------------------------------------------------------------------- 1 | spring.application.name=api-gateway 2 | server.port=8082 3 | eureka.client.service-url.defaultZone=http://localhost:8761/eureka 4 | spring.cloud.gateway.discovery.locator.enabled=true 5 | -------------------------------------------------------------------------------- /centos 7 machines 26 april.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/centos 7 machines 26 april.xlsx -------------------------------------------------------------------------------- /daemon-set.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/daemon-set.pdf -------------------------------------------------------------------------------- /deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx-deployment 5 | labels: 6 | app: nginx 7 | spec: 8 | replicas: 3 9 | selector: 10 | matchLabels: 11 | app: nginx 12 | template: 13 | metadata: 14 | labels: 15 | app: nginx 16 | spec: 17 | containers: 18 | - name: nginx 19 | image: nginx:1.7.9 20 | ports: 21 | - containerPort: 80 22 | -------------------------------------------------------------------------------- /deploymentrole: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: mynamespace 5 | --- 6 | apiVersion: apps/v1 7 | kind: Deployment 8 | metadata: 9 | name: nginx-deployment 10 | namespace: mynamespace 11 | labels: 12 | app: nginx 13 | spec: 14 | replicas: 3 15 | selector: 16 | matchLabels: 17 | app: nginx 18 | template: 19 | metadata: 20 | labels: 21 | app: nginx 22 | spec: 23 | containers: 24 | - name: k8s-demo 25 | image: nginx 26 | ports: 27 | - name: nginx-port 28 | containerPort: 80 29 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.3' 2 | 3 | services: 4 | db: 5 | image: mysql:5.7 6 | volumes: 7 | - db_data:/var/lib/mysql 8 | restart: always 9 | environment: 10 | MYSQL_ROOT_PASSWORD: somewordpress 11 | MYSQL_DATABASE: wordpress 12 | MYSQL_USER: wordpress 13 | MYSQL_PASSWORD: wordpress 14 | 15 | wordpress: 16 | depends_on: 17 | - db 18 | image: wordpress:latest 19 | ports: 20 | - "8000:80" 21 | restart: always 22 | environment: 23 | WORDPRESS_DB_HOST: db:3306 24 | WORDPRESS_DB_USER: wordpress 25 | WORDPRESS_DB_PASSWORD: wordpress 26 | WORDPRESS_DB_NAME: wordpress 27 | volumes: 28 | db_data: {} 29 | -------------------------------------------------------------------------------- /docker-compose.yml1: -------------------------------------------------------------------------------- 1 | version: '3.3' 2 | services: 3 | my-nginx-service: 4 | container_name: my-first-container
5 | image: nginx 6 | cpus: 1.5 7 | mem_limit: 2048m 8 | ports: 9 | - "8080:80" 10 | -------------------------------------------------------------------------------- /dockerfilepy: -------------------------------------------------------------------------------- 1 | FROM continuumio/miniconda3:4.5.11 2 | 3 | RUN apt-get update -y; apt-get upgrade -y 4 | RUN apt-get update -y; apt-get upgrade -y; apt-get install -y vim-tiny vim-athena ssh 5 | 6 | COPY environment.yml environment.yml 7 | 8 | RUN conda env create -f environment.yml 9 | RUN echo "alias l='ls -lah'" >> ~/.bashrc 10 | RUN echo "source activate connect" >> ~/.bashrc 11 | 12 | ENV CONDA_EXE /opt/conda/bin/conda 13 | ENV CONDA_PREFIX /opt/conda/envs/connect 14 | ENV CONDA_PYTHON_EXE /opt/conda/bin/python 15 | ENV CONDA_PROMPT_MODIFIER (connect) 16 | ENV CONDA_DEFAULT_ENV connect 17 | ENV PATH /opt/conda/envs/connect/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 18 | -------------------------------------------------------------------------------- /eks.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: eksctl.io/v1alpha5 2 | kind: ClusterConfig 3 | metadata: 4 | name: gopal-cluster 5 | region: us-east-2 6 | managedNodeGroups: 7 | - name: gdnode 8 | instanceType: t2.small 9 | desiredCapacity: 3 10 | ssh: 11 | publicKeyName: dockergopal 12 | -------------------------------------------------------------------------------- /envpython: -------------------------------------------------------------------------------- 1 | name: connect 2 | channels: 3 | - conda-forge 4 | - defaults 5 | dependencies: 6 | - python=3.6 7 | - ipython 8 | - anaconda-client 9 | - pymongo 10 | -------------------------------------------------------------------------------- /expose: -------------------------------------------------------------------------------- 1 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml 2 | kubectl edit svc kubernetes-dashboard --namespace=kubernetes-dashboard 3 | kubectl get svc --all-namespaces 4 | kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}" 5 | 6 | -------------------------------------------------------------------------------- /fluentd-daemonset.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: DaemonSet 3 | metadata: 4 | name: fluentd-elasticsearch 5 | namespace: kube-system 6 | labels: 7 | k8s-app: fluentd-logging 8 | spec: 9 | selector: 10 | matchLabels: 11 | name: fluentd-elasticsearch 12 | updateStrategy: 13 | type: RollingUpdate 14 | rollingUpdate: 15 | maxUnavailable: 1 16 | template: 17 | metadata: 18 | labels: 19 | name: fluentd-elasticsearch 20 | spec: 21 | tolerations: 22 | # this toleration is to have the daemonset runnable on master nodes 23 | # remove it if your masters can't run pods 24 | - key: node-role.kubernetes.io/master 25 | effect: NoSchedule 26 | containers: 27 | - name: fluentd-elasticsearch 28 | image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2 29 | volumeMounts: 30 | - name: varlog 31 | mountPath: /var/log 32 | - name: varlibdockercontainers 33 | mountPath: /var/lib/docker/containers 34 | readOnly: true 35 | terminationGracePeriodSeconds: 30 36 | volumes: 37 | - name: varlog 38 | hostPath: 39 | path: /var/log 40 | - name: varlibdockercontainers 41 | hostPath: 42 | 
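# the node's Docker container log directory, mounted read-only above so fluentd can tail container logs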
path: /var/lib/docker/containers 43 | -------------------------------------------------------------------------------- /fullstackspringappkubernetes.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume # Create a PersistentVolume 3 | metadata: 4 | name: mysql-pv 5 | labels: 6 | type: local 7 | spec: 8 | storageClassName: standard # Storage class. A PV Claim requesting the same storageClass can be bound to this volume. 9 | capacity: 10 | storage: 250Mi 11 | accessModes: 12 | - ReadWriteOnce 13 | hostPath: # hostPath PersistentVolume is used for development and testing. It uses a file/directory on the Node to emulate network-attached storage 14 | path: "/mnt/data" 15 | persistentVolumeReclaimPolicy: Retain # Retain the PersistentVolume even after PersistentVolumeClaim is deleted. The volume is considered “released”. But it is not yet available for another claim because the previous claimant’s data remains on the volume. 16 | --- 17 | apiVersion: v1 18 | kind: PersistentVolumeClaim # Create a PersistentVolumeClaim to request a PersistentVolume storage 19 | metadata: # Claim name and labels 20 | name: mysql-pv-claim 21 | labels: 22 | app: polling-app 23 | spec: # Access mode and resource limits 24 | storageClassName: standard # Request a certain storage class 25 | accessModes: 26 | - ReadWriteOnce # ReadWriteOnce means the volume can be mounted as read-write by a single Node 27 | resources: 28 | requests: 29 | storage: 250Mi 30 | --- 31 | apiVersion: v1 # API version 32 | kind: Service # Type of kubernetes resource 33 | metadata: 34 | name: polling-app-mysql # Name of the resource 35 | labels: # Labels that will be applied to the resource 36 | app: polling-app 37 | spec: 38 | ports: 39 | - port: 3306 40 | selector: # Selects any Pod with labels `app=polling-app,tier=mysql` 41 | app: polling-app 42 | tier: mysql 43 | clusterIP: None 44 | --- 45 | apiVersion: apps/v1 46 | kind: Deployment # Type of the kubernetes resource 47 | metadata: 48 | name: polling-app-mysql # Name of the deployment 49 | labels: # Labels applied to this deployment 50 | app: polling-app 51 | spec: 52 | selector: 53 | matchLabels: # This deployment applies to the Pods matching the specified labels 54 | app: polling-app 55 | tier: mysql 56 | strategy: 57 | type: Recreate 58 | template: # Template for the Pods in this deployment 59 | metadata: 60 | labels: # Labels to be applied to the Pods in this deployment 61 | app: polling-app 62 | tier: mysql 63 | spec: # The spec for the containers that will be run inside the Pods in this deployment 64 | containers: 65 | - image: mysql:5.6 # The container image 66 | name: mysql 67 | env: # Environment variables passed to the container 68 | - name: MYSQL_ROOT_PASSWORD 69 | valueFrom: # Read environment variables from kubernetes secrets 70 | secretKeyRef: 71 | name: mysql-root-pass 72 | key: password 73 | - name: MYSQL_DATABASE 74 | valueFrom: 75 | secretKeyRef: 76 | name: mysql-db-url 77 | key: database 78 | - name: MYSQL_USER 79 | valueFrom: 80 | secretKeyRef: 81 | name: mysql-user-pass 82 | key: username 83 | - name: MYSQL_PASSWORD 84 | valueFrom: 85 | secretKeyRef: 86 | name: mysql-user-pass 87 | key: password 88 | ports: 89 | - containerPort: 3306 # The port that the container exposes 90 | name: mysql 91 | volumeMounts: 92 | - name: mysql-persistent-storage # This name should match the name specified in `volumes.name` 93 | mountPath: /var/lib/mysql 94 | volumes: # A PersistentVolume is mounted as a volume 
to the Pod 95 | - name: mysql-persistent-storage 96 | persistentVolumeClaim: 97 | claimName: mysql-pv-claim 98 | -------------------------------------------------------------------------------- /helm video link: -------------------------------------------------------------------------------- 1 | https://www.youtube.com/watch?v=3GPpm2nZb2s&t=1668s 2 | -------------------------------------------------------------------------------- /ibmdeployment.yaml: -------------------------------------------------------------------------------- 1 | #Section 1 2 | # all communication with Kubernetes happens strictly through APIs 3 | #kubectl apply -f yamlfilename 4 | apiVersion: apps/v1 #kubectl apply will execute this file from top to bottom and left to right 5 | #this will invoke your apiversion apps/v1 6 | kind: Deployment #it will invoke the Deployment API 7 | metadata: #once it invokes the deployment api 8 | name: mydeployment #using metadata we give the name of the deployment 9 | labels: #this deployment will have a label 10 | app: myapp 11 | #Section 2 12 | spec: 13 | replicas: 3 14 | selector: # 15 | matchLabels: #will have a label match expression 16 | app: myapp 17 | template: #inside this template we will define the container creation process 18 | metadata: 19 | labels: 20 | app: myapp 21 | #Section 3 : 22 | spec: #specification of the container which is going to be launched 23 | containers: 24 | - name: myapp # name of the container 25 | image: piuma/phpsysinfo #container image downloaded from Docker Hub 26 | ports: 27 | - containerPort: 80 #internal port where the container is exposed 28 | -------------------------------------------------------------------------------- /jenkinsdockerpipeline: -------------------------------------------------------------------------------- 1 | node { 2 | stage('SCM Checkout'){ 3 | git 'https://github.com/javahometech/my-app.git' 4 | } 5 | 6 | stage('Mvn Package'){ 7 | def mvnHome = tool name: 'localMaven', type: 'maven' 8 | def mvnCMD = "${mvnHome}/bin/mvn" 9 | sh "${mvnCMD} clean package" 10 | } 11 | stage('Build Docker Image'){ 12 | sh 'docker build -t gopal1409/myapp .' 13 | } 14 | stage('Push Docker Image'){ 15 | withCredentials([string(credentialsId: 'docker-pwd', variable: 'dockerHubPwd')]) { 16 | sh "docker login -u gopal1409 -p ${dockerHubPwd}" 17 | } 18 | sh 'docker push gopal1409/myapp' 19 | } 20 | stage('Run container on Dev Server'){ 21 | def dockerRun='docker run -d -p 8080:8080 --name myapp gopal1409/myapp' 22 | sshagent(['dev-server']) { 23 | sh "ssh -o StrictHostKeyChecking=no centos@52.15.51.234 ${dockerRun}" 24 | } 25 | } 26 | } 27 | -------------------------------------------------------------------------------- /jenkinspipelineversioing: -------------------------------------------------------------------------------- 1 | What is the Problem With Maven Release Plugin 2 | The Maven release plugin was created years before people started realizing that software production has to become smoother and continuous. The main problems with the plugin include: 3 | 4 | It is not atomic. If the release goal fails for some stupid reason, you have committed and broken poms. 5 | It spoils the commit history, making it unreadable if you want to have frequent releases. 6 | It is very opinionated on various things and tries to own multiple responsibilities (managing versions, committing stuff, creating tags, etc.). Your flow does not have to comply with how the release plugin sees the world. 7 | Now let’s look at another approach that works much nicer with CD.
First, we need to define a few key principles: 8 | 9 | Few Principles For Continuous Delivery 10 | A regular build is no different from a release build. What makes a release special is how we see it. 11 | No human intervention should be required. All decisions can be calculated automatically. If you have parameters that cannot be automated, something is wrong. 12 | If you have multiple branches for some reason, have a dedicated build job for each one of them, in order to see and manage the current branch status easily. 13 | Branch builds must enforce building from the top, never to be parameterized for building custom changesets. 14 | I mention branches, but avoid having them as much as possible in the first place. 15 | Avoid having continuous delivery before making code reviews enforced by the build system. 16 | Block code merges to branches except for the build user 17 | Block artifact deployments except for the build user 18 | Make it possible to start everything completely from scratch 19 | Do not have any snapshot dependency 20 | Do not use version ranges in dependencies, because it prevents reproducible builds 21 | Keep most logic inside maven to keep it reusable everywhere, without the need for a build server. Your Jenkinsfile (or whatever you use) should be very similar to running the same steps from the command line. This also makes it much easier to change the build environment without having to re-implement a lot of stuff. 22 | Do not rely on the internet to build your software. Maintain proxying caches for everything you need. 23 | Jenkins Declarative Pipelines 24 | We will use the declarative pipeline of Jenkins to define the build flow. It allows us to use basic but common building blocks to define the whole build/release process. The skeleton of a pipeline looks like this: 25 | 48 | pipeline { 49 | agent { label 'label_for_build_agent' } 50 | options { 51 | } 52 | parameters { 53 | // e.g: passing -X in order to debug something during the build 54 | string(name: 'MAVEN_OPTIONS') 55 | } 56 | environment { 57 | JAVA_OPTS='-Xsomething=something' 58 | } 59 | triggers { 60 | // How your build is going to get triggered 61 | // For branch builds, the only trigger must be a merge operation on that branch. 62 | } 63 | stages { 64 | // All your build flow will come here 65 | } 66 | post { 67 | // Place for defining actions in case of success/failure/unstable builds. 68 | } 69 | } 70 | All the magic will happen in stages. This is a single pipeline that has no relation to other pipelines. 71 | 72 | You can have a spider web of pipelines obviously, but it complicates management and debugging, and you should not risk wasting your time on that unless it is unavoidable. 73 | 74 | While using pipelines, it is better to make stages indicate the logical state of the build. I personally try to avoid using scripted pipeline for a few reasons: 75 | 76 | Jenkins Blue Ocean needs declarative pipeline 77 | Scripted pipeline tricks you into breaking the abstractions and writing ad-hoc code here and there to make quick fixes to the build, rather than doing them in the correct places. 78 | It is not as safe as relying only on the simplicity of declarative pipeline commands and sh blocks in terms of forward compatibility. 79 | Unfortunately, Jenkins does not allow us to define top level parts of declarative pipeline from libraries.
For example, it is not possible to totally omit options, parameters and environment and fetch them from a library. Declarative pipeline has to exist as a single block with no interruption. Still, you can define steps inside libraries. 80 | 81 | For the moment, we have one point left to decide: 82 | 83 | What About Versioning During CD? 84 | One of the first questions in continuous delivery when coming from the traditional maven release plugin is to decide how to set versions. 85 | 86 | Why? 87 | 88 | Since every merge to your branch will result in a new delivery that may or may not get into production, it will need a unique version. 89 | 90 | In most cases, running the maven release plugin is an explicit decision either by people or some automated logic, but it is almost never “at every single merge”. There may be hundreds of commits, but the version is bumped based on some logic. 91 | 92 | This may or may not fit CD directly, since you will have your last version digit getting incremented at every merge. It is not a concern at all if your software is consumed internally. 93 | 94 | However, if it is consumed by your clients, they will be confused by numbers increasing crazily. You will want to establish a mutual, clear, communicated understanding of what your version indicates (eg: semantic versioning). 95 | 96 | Sometimes, marketing can even interfere with this. (eg: by trying to prefix versions with the year because it is so trendy and everyone else not doing it is a dinosaur) 97 | 98 | If you decide to increase major, minor, patch numbers based on a logic not related to merges, you will need to use qualifiers to generate unique versions. 99 | 100 | At this point you have a couple of quick options: 101 | 102 | Using timestamps up to seconds granularity 103 | Using the Jenkins build number (automatically increased by Jenkins) 104 | The important point here is to make sure that version comparison results in the correct logical order, because maven has strict and well defined rules for versioning. 105 | 106 | If you want to quickly test that your hypothetical versioning scheme is not broken, you can directly use maven to make tests: 107 | 108 | java -jar /usr/local/Cellar/maven/3.5.2/libexec/lib/maven-artifact-3.5.2.jar 1.2.3-20180209010130 1.2.3-20180209010135 109 | Display parameters as parsed by Maven (in canonical form) and comparison result: 110 | 1. 1.2.3-20180209010130 == 1.2.3-20180209010130 111 | 1.2.3-20180209010130 < 1.2.3-20180209010135 112 | 2. 1.2.3-20180209010135 == 1.2.3-20180209010135 113 | So, we are done with versioning, right? 114 | 115 | Maybe not. Here is the next question. 116 | 117 | How to Trace Back to Git Revisions Using Versions? 118 | When you use the maven release plugin in a traditional way, you have far fewer releases than code merges and all your releases have associated tags, which makes it very trivial to check out the specific released code. But how are you going to easily find what 1.2.3-20180209010130 stands for? Of course you can dive into build logs to see the actual checked out revision, but it is agonizingly pointless to spend time on that. 119 | 120 | If you keep creating tags in CD, you either have to do it for every merge or you have to record the revision somewhere and use that revision to create a tag when the build actually goes into production. 121 | 122 | The first one ends you up with as many tags as merge commits. Definitely ugly. The second one requires you to add extra moving parts to the build maintenance. I do not like that either.
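For illustration, a minimal shell sketch of that second option (the revision.txt file name and the promotion step are hypothetical, not part of the original flow):

# at build time: record the exact revision that produced this version
git rev-parse HEAD > revision.txt   # archive revision.txt next to the build artifacts
# at promotion time: tag the recorded revision; VERSION is whatever the build computed
git tag -a "v${VERSION}" "$(cat revision.txt)" -m "promoted to production"
git push origin "v${VERSION}"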
123 | 124 | A third option might be appending the revision to the version. For git, we can do it like: 125 | 126 | git rev-parse --short HEAD 127 | The catch here is that if you use only the short revision, the ordering will be broken; therefore you need to prefix it either with a build number or a timestamp or something that increments. 128 | 129 | The downside of the third option is that your version string may become a bit too long, like 1.2.3-20180209010130.a1b2c3 130 | 131 | However, it is extremely trivial to check out the code in question with this. 132 | 133 | Now let's continue to the build. 134 | 135 | Jenkins Stages 206 | stages { 207 | 208 | // No checkout stage? That is not required for this case 209 | // because Jenkins will check out the whole repo that contains the Jenkinsfile, 210 | // which is also the tip of the branch that we want to build 211 | 212 | stage ('Build') { 213 | steps { 214 | // For debugging purposes, it is always useful to print info 215 | // about the build environment that is seen by the shell during the build 216 | sh 'env' 217 | script { 218 | // capture the short git revision in Groovy; a plain sh step 219 | // would not hand the variable back to the pipeline 220 | SHORTREV = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim() 221 | def pom = readMavenPom file: 'pom.xml' 222 | // Now you have access to the raw version string in pom.version 223 | // Based on your versioning scheme, automatically calculate the next one 224 | VERSION = pom.version.replaceAll('SNAPSHOT', BUILD_TIMESTAMP + "." + SHORTREV) 225 | } 226 | // We never build a SNAPSHOT 227 | // We explicitly set versions. 228 | sh """ 229 | mvn -B org.codehaus.mojo:versions-maven-plugin:2.5:set -DprocessAllModules -DnewVersion=${VERSION} $MAVEN_OPTIONS 230 | """ 231 | sh """ 232 | mvn -B clean compile $MAVEN_OPTIONS 233 | """ 234 | } 235 | } 236 | stage('Unit Tests') { 237 | // We have a separate stage for tests so 238 | // they stand out in grouping and visualizations 239 | steps { 240 | sh """ 241 | mvn -B test $MAVEN_OPTIONS 242 | """ 243 | } 244 | // Note that this requires having test results. 245 | // But you should anyway never skip tests in branch builds 246 | post { 247 | always { 248 | junit '**/target/surefire-reports/*.xml' 249 | } 250 | } 251 | } 252 | 253 | stage('Integration Tests') { 254 | steps { 255 | sh """ 256 | mvn -B integration-test $MAVEN_OPTIONS 257 | """ 258 | } 259 | post { 260 | always { 261 | junit '**/target/failsafe-reports/*.xml' 262 | } 263 | } 264 | } 265 | 266 | stage('Deploy') { 267 | steps { 268 | // Finally deploy all your jars, containers, 269 | // deliverables to their respective repositories 270 | sh """ 271 | mvn -B deploy 272 | """ 273 | } 274 | } 275 | } 276 | There is still a problem here. Integration tests may require access to the generated deliverables in order to test them. However, deploy is the last lifecycle phase.
Therefore, you may need to change the ordering such as: 277 | 298 | 299 | stage('Deploy') { 300 | steps { 301 | sh """ 302 | mvn -B deploy -DskipTests 303 | """ 304 | } 305 | } 306 | stage('Integration Tests') { 307 | steps { 308 | sh """ 309 | mvn -B integration-test $MAVEN_OPTIONS 310 | """ 311 | } 312 | post { 313 | always { 314 | junit '**/target/failsafe-reports/*.xml' 315 | } 316 | } 317 | } 318 | Still a problem exists, right? 319 | 320 | What if integration tests fail? You had deployed stuff already. How is this even possible? See the following git log (drawn by gitgraph.js) for example: 321 | 322 | [figure: git commit graph drawn by gitgraph.js] 323 | 324 | Even though Alice’s changes are verified correctly based on their own parent and merge fine with Bob’s change, together they can cause failure in tests. 325 | 326 | You can enforce rebase during reviews to avoid this as much as possible. 327 | 328 | How to Prevent Repository Pollution During CD? 329 | Repository pollution is a real problem. The bigger your repository, the more infrastructure and babysitting you will need in order to provide decent I/O performance. Again there are multiple ways to have control over this. 330 | 331 | Using Maven Deploy Options To Deploy Different Repositories 332 | The maven deploy plugin allows us to use alternate deployment repositories from the command line, and that is our solution. You can define target locations based on the nature of the build. 333 | 334 | If you manage containers and images via docker-maven-plugin in your build, it also supports setting different registries via the command line. This allows us to change destinations at each step. 335 | 365 | 366 | stage('Deploy Staging') { 367 | steps { 368 | // We deploy to staging repositories for further integration tests 369 | sh """ 370 | mvn -B deploy -DskipTests -DretryFailedDeploymentCount=5 -DaltDeploymentRepository=myrepo::default::https://my.staging.maven.repo -Ddocker.push.registry=https://my.staging.docker.registry 371 | """ 372 | } 373 | } 374 | stage('Integration Tests') { 375 | steps { 376 | sh """ 377 | mvn -B integration-test $MAVEN_OPTIONS 378 | """ 379 | } 380 | post { 381 | always { 382 | junit '**/target/failsafe-reports/*.xml' 383 | } 384 | } 385 | } 386 | stage('Deploy Official') { 387 | steps { 388 | sh """ 389 | mvn -B deploy -DskipTests -DretryFailedDeploymentCount=5 -DaltDeploymentRepository=myrepo::default::https://my.release.maven.repo -Ddocker.push.registry=https://my.release.docker.registry 390 | """ 391 | } 392 | } 393 | 394 | Now this looks better. During the integration tests, we consume artifacts from staging repositories that are supposed to be purged very frequently (daily?). Once we are sure that our product has passed all the verifications, we deploy again, but this time towards the real repository. But can we avoid pushing stuff twice? Yes, sort of. 395 | 396 | Using Artifact Promotion Features of Binary Repository Management Systems 397 | Commercial solutions like JFrog Artifactory have features to “promote” build artifacts, which basically means moving an artifact from one repository to another.
(eg: From staging to production, in case of a successful build) 398 | 399 | Downsides of this: 400 | 401 | It is expensive, if that is a concern 402 | It requires using non-standard plugins/APIs, so it forces your build code to include a vendor lock-in. Personally I would rather stay neutral and compliant with well-known interfaces. 403 | -------------------------------------------------------------------------------- /k8sclient: -------------------------------------------------------------------------------- 1 | 51 yum install -y yum-utils 2 | 52 yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo 3 | 53 yum install docker-ce docker-ce-cli containerd.io 4 | 54 systemctl start docker 5 | 55 systemctl enable docker 6 | 56 swapoff -a 7 | 57 cat < -------------------------------------------------------------------------------- /k8sdashboardaccess: -------------------------------------------------------------------------------- 443:31707/TCP 21h 71 | Dashboard has been exposed on port 31707 (HTTPS). Now you can access it from your browser at: https://<master-ip>:31707. master-ip can be found by executing kubectl cluster-info. Usually it is either 127.0.0.1 or the IP of your machine, assuming that your cluster is running directly on the machine on which these commands are executed. 72 | 73 | In case you are trying to expose Dashboard using NodePort on a multi-node cluster, then you have to find out the IP of the node on which Dashboard is running to access it. Instead of accessing https://<master-ip>:<nodePort> you should access https://<node-ip>:<nodePort>. 74 | 75 | API Server 76 | In case the Kubernetes API server is exposed and accessible from outside, you can directly access the dashboard at: https://<master-ip>:<apiserver-port>/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ 77 | 78 | Note: This way of accessing Dashboard is only possible if you choose to install your user certificates in the browser. For example, certificates used by the kubeconfig file to contact the API Server can be used. 79 | 80 | Ingress 81 | Dashboard can also be exposed using an Ingress resource. For more information check: https://kubernetes.io/docs/concepts/services-networking/ingress. 82 | 83 | Login not available 84 | If your login view displays the error below, this means that you are trying to log in over HTTP and it has been disabled for security reasons. 85 | 86 | Logging in is available only if the URL used to access Dashboard starts with: 87 | 88 | http://localhost/... 89 | http://127.0.0.1/... 90 | https://<domain-name>/...
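Logging in also requires a bearer token from an admin service account. A minimal sketch, assuming the admin-user account that the user.yaml/token.yaml steps elsewhere in this repo create (adjust names to your cluster):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
# print the bearer token for the login screen (same command as in the expose notes above)
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')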
91 | -------------------------------------------------------------------------------- /k8smaster: -------------------------------------------------------------------------------- 1 | 397 swapoff -a 2 | 398 cat < hello.tar 360 | 360 ls 361 | 361 docker login 362 | 362 ls 363 | 363 docker images 364 | 364 docker tag 365 | 365 docker tag 32af0101b17a gopal1409/helloworld 366 | 366 docker images 367 | 367 docker push gopal1409/helloworld 368 | 368 docker login 369 | 369 docker image 370 | 370 docker images 371 | 371 ls 372 | 372 vi Dockerfile 373 | 373 docker node ls 374 | 374 clear 375 | 375 docker node ls 376 | 376 docker swarm join-token 377 | 377 docker swarm join-token worker 378 | 378 docker node ls 379 | 379 clear 380 | 380 docker service create --replicas 1 --name helloworld alpine ping docker.com 381 | 381 docker service ls 382 | 382 docker service create --replicas 1 --name helloworld1 alpine ping google.com 383 | 383 docker service ls 384 | 384 docker service inspect --pretty helloworld 385 | 385 docker service inspect helloworld 386 | 386 clear 387 | 387 docker service ps helloworld 388 | 388 docker service scale helloworld=5 389 | 389 docker service ps helloworld 390 | 390 hostnamectl set-hostname centos01 391 | 391 bash 392 | 392 clear 393 | 393 ip a 394 | 394 clear 395 | 395 cat < /etc/yum.repos.d/kubernetes.repo 396 | 396 [kubernetes] 397 | 397 name=Kubernetes 398 | 398 baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch 399 | 399 enabled=1 400 | 400 gpgcheck=1 401 | 401 repo_gpgcheck=1 402 | 402 gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 403 | 403 exclude=kubelet kubeadm kubectl 404 | 404 EOF 405 | 405 setenforce 0 406 | 406 sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config 407 | 407 yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes 408 | 408 systemctl enable --now kubelet 409 | 409 systemctl daemon-reload 410 | 410 systemctl restart kubelet 411 | 411 history 412 | 412 ssh root@192.168.133.137 413 | 413 clear 414 | 414 kubeadm init 415 | 415 swapoff -a 416 | 416 kubeadm init 417 | 417 mkdir -p $HOME/.kube 418 | 418 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 419 | 419 sudo chown $(id -u):$(id -g) $HOME/.kube/config 420 | 420 ssh root@192.168.133.137 421 | 421 clear 422 | 422 kubectl get nodes 423 | 423 history 424 | 424 kubectl get nodes 425 | 425 kubectl get pods --all-namespaces 426 | 426 kubectl get ns 427 | 427 kubectl get pod 428 | 428 $ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" 429 | 429 kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" 430 | 430 kubectl get nodes 431 | 431 kubectl get pods --all-namespaces 432 | 432 kubectl get nodes 433 | 433 kubectl get ns 434 | 434 vi namespace.yaml 435 | 435 kubectl create -f namespace.yaml 436 | 436 vi namespace.yaml 437 | 437 kubectl create -f namespace.yaml 438 | 438 kubectl get ns 439 | 439 kubectl describe namesapces development 440 | 440 kubectl describe ns development 441 | 441 kubectl create ns prod 442 | 442 kubectl get ns 443 | 443 kubectl describe ns prod 444 | 444 kubectl delete ns prod 445 | 445 kubetl edit ns development 446 | 446 kubectl edit ns development 447 | 447 docker ps 448 | 448 clear 449 | 449 kubectl get nodes 450 | 450 kubectl get pod 451 | 451 kubectl autoscale deployment.v1.apps/nginx-deployment --min=10 --max=15 --cpu-percent=80 
452 | 452 kubectl describe deployment 453 | 453 kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml 454 | 454 kubectl get pod --all-namespaces 455 | 455 kubectl proxy 456 | 456 vi user.yaml 457 | 457 kubectl apply -f user.yaml 458 | 458 vi token.yaml 459 | 459 kubectl apply -f token.yaml 460 | 460 kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') 461 | 461 kubectl proxy 462 | 462 kubectl get pod --all-namespaces 463 | 463 kubectl proxy 464 | 464 ip a 465 | 465 service firewalld status 466 | 466 reboot 467 | 467 vi pod.yaml 468 | 468 kubectl apply -f pod.yaml 469 | 469 kubectl get pod 470 | 470 kubectl describe pod 471 | 471 clear 472 | 472 kubectl get pod 473 | 473 kubectl describe pod 474 | 474 kubectl get nodes 475 | 475 kubectl create -f pod.yaml 476 | 476 clear 477 | 477 kubectl get pod 478 | 478 kubectl describe pod 479 | 479 kubectl get pod 480 | 480 kubectl describe pod 481 | 481 kubectl get pod 482 | 482 cat pod.yaml 483 | 483 vi deployment.yaml 484 | 484 kubectl create -f deployment.yaml 485 | 485 vi deployment.yaml 486 | 486 kubectl create -f deployment.yaml 487 | 487 kubectl get pods 488 | 488 kubectl get deployment 489 | 489 kubectl rollout status deployment.v1.apps/nginx-deployment 490 | 490 kubectl get rs 491 | 491 kubectl get pods --show-labels 492 | 492 history 493 | 493 kubectl get pods --show-labels 494 | 494 vi deployment.yaml 495 | 495 kubectl create -f deployment.yaml 496 | 496 kubectl get deployment.yaml 497 | 497 kubectl get deployment 498 | 498 kubectl get pods --show-labels 499 | 499 kubectl rollout status deployment.v1.apps/nginx-deployment 500 | 500 vi deployment.yaml 501 | 501 kubectl create -f deployment.yaml 502 | 502 kubectl delete -f deployment.yaml 503 | 503 kubectl create -f deployment.yaml 504 | 504 kubectl rollout status deployment1.v1.apps/nginx-deployment 505 | 505 kubectl rollout status deployment.v1.apps/nginx-deployment1 506 | 506 kubectl delete -f deployment.yaml 507 | 507 kubectl get deployment 508 | 508 kubectl set image deployment/nginx-deployment nginx=nginx:1.6.1 --record 509 | 509 cat deployment.yaml 510 | 510 kubectl edit deployment.v1.apps/nginx-deployment 511 | 511 kubectl rollout status deployment.v1.apps/nginx-deployment1 512 | 512 kubectl rollout status deployment.v1.apps/nginx-deployment 513 | 513 kubectl get rs 514 | 514 kubectl rollout history deployment.v1.apps/nginx-deployment 515 | 515 kubectl set image deployment/nginx-deployment nginx=nginx:1.61 --record 516 | 516 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record 517 | 517 kubectl rollout history deployment.v1.apps/nginx-deployment 518 | 518 kubectl get pod 519 | 519 kubectl set image deployment/nginx-deployment nginx=nginx:1.162 --record 520 | 520 kubectl get pod 521 | 521 kubectl rollout history deployment.v1.apps/nginx-deployment 522 | 522 kubectl rollout history deployment.v1.apps/nginx-deployment --revision=5 523 | 523 clear 524 | 524 history 525 | 525 kubectl scale deployment.v1.apps/nginx-deployment --replicas=10 526 | 526 kubect get deployment 527 | 527 kubectl get deployment 528 | 528 kubectl get rs 529 | 529 history 530 | 530 kubectl describe deployment 531 | 531 history 532 | 532 kubectl rollout undo deployment.v1.apps/nginx-deployment --revision=5 533 | 533 kubectl rollout undo deployment.v1.apps/nginx-deployment 534 | 534 kubectl describe deployment 535 | 535 clear 536 | 536 history 537 | 537 docker images 
538 | 538 clear 539 | 539 history 540 | 540 kubectl proxy 541 | 541 kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') 542 | 542 kubectl get pod --all-namespaces 543 | 543 kubectl get pod 544 | 544 vi /etc/fstab 545 | 545 reboot 546 | 546 helm 547 | 547 exit 548 | 548 cd /usr/local/bin 549 | 549 ls 550 | 550 rm helm 551 | 551 cd /home/centos01/Downloads/ 552 | 552 ls 553 | 553 tar xvf helm-v3.2.0-linux-amd64.tar.gz 554 | 554 mv linux-amd64/helm /usr/local/bin/ 555 | 555 helm 556 | 556 bash 557 | 557 exit 558 | 558 kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') 559 | 559 kubectl proxy 560 | 560 yum install maven 561 | 561 cd /home 562 | 562 ls 563 | 563 cd centos01/ 564 | 564 ls 565 | 565 cd Downloads/ 566 | 566 ls 567 | 567 tar xvf helm-v3.2.0-linux-amd64.tar.gz 568 | 568 ls 569 | 569 pwd 570 | 570 cd linux-amd64/ 571 | 571 ls 572 | 572 sudo mv helm /usr/local/bin 573 | 573 helm version 574 | 574 bash 575 | 575 exit 576 | 576 helm 577 | 577 /home/centos01/Downloads/ 578 | 578 cd /home/centos01/Downloads/ 579 | 579 ls 580 | 580 helm 581 | 581 tar xvf helm-v3.2.0-linux-amd64.tar.gz 582 | 582 mv helm /usr/local/bin 583 | 583 mv linux-amd64/helm /usr/local/bin/ 584 | 584 helm 585 | 585 bash 586 | 586 helm 587 | 587 ./helm 588 | 588 exit 589 | 589 ls 590 | 590 helm 591 | 591 ls 592 | 592 cd /home/centos01/ 593 | 593 ls 594 | 594 cd Downloads/ 595 | 595 ls 596 | 596 tar xvf helm-v3.2.0-linux-amd64.tar.gz 597 | 597 cd linux-amd64/ 598 | 598 ls 599 | 599 ls -l 600 | 600 chmod 777 helm 601 | 601 cp helm /usr/local/bin/ 602 | 602 helm 603 | 603 cp helm /usr/bin/ 604 | 604 helm 605 | 605 clear 606 | 606 helm 607 | 607 clar 608 | 608 clear 609 | 609 helm 610 | 610 exit 611 | 611 helm delete my-release 612 | 612 helm install --name my-release --set createAdminSecret=false --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad couchdb/couchdb 613 | 613 helm install my-release --set createAdminSecret=false --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad couchdb/couchdb 614 | 614 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster"}' -u 615 | 615 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster"}' -u adminUsername 616 | 616 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster" -u adminUsername 617 | 617 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster" -u 618 | 618 helm delete my-release 619 | 619 helm install my-release1 --set createAdminSecret=false --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad couchdb/couchdb 620 | 620 kubectl get pods --namespace default -l "app=couchdb,release=my-release1" 621 | 621 helm delete my-release1 622 | 622 kubectl get pods --namespace default -l "app=couchdb,release=my-release1" 623 | 623 clear 624 | 624 helm delete my-release1 625 | 625 helm repo update 626 | 626 helm install --name my-release --set 
couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad couchdb/couchdb 627 | 627 helm install my-release --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad couchdb/couchdb 628 | 628 kubectl get pods --namespace default -l "app=couchdb,release=my-release" 629 | 629 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster"}' -u 630 | 630 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster"}' -u adminUsername 631 | 631 ip a 632 | 632 ip a | more 633 | 633* 634 | 634 service firewalld status 635 | 635 kubectl get pods --namespace default -l "app=couchdb,release=my-release" 636 | 636 $ kubectl get secret my-release-couchdb -o go-template='{{ .data.adminPassword }}' | base64 --decode 637 | 637 kubectl get secret my-release-couchdb -o go-template='{{ .data.adminPassword }}' | base64 --decode 638 | 638 $ kubectl create kubectl generic my-release-couchdb --from-literal=adminUsername=foo --from-literal=adminPassword=bar --from-literal=cookieAuthSecret=baz 639 | 639 kubectl create kubectl generic my-release-couchdb --from-literal=adminUsername=foo --from-literal=adminPassword=bar --from-literal=cookieAuthSecret=baz 640 | 640 helm repo list 641 | 641 helm repo list couchdb-helm 642 | 642 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster"}' -u adminUsername 643 | 643 kubectl get ssssssvc 644 | 644 kubectl get svc 645 | 645 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster"}' -u adminUsername 646 | 646 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster"}' -u 647 | 647 kubectl exec --namespace default -it my-release-couchdb-0 -c couchdb -- curl -s http://127.0.0.1:5984/_cluster_setup -X POST -H "Content-Type: application/json" -d '{"action": "finish_cluster"}' -u adminUsername 648 | 648 vi /etc/hosts 649 | 649 hostname 650 | 650 ping centos01 651 | 651 vi /etc/hosts 652 | 652 curl http://127.0.0.1:5984 653 | 653 kubectl gettt svc 654 | 654 kubectl get svc 655 | 655 kubectl edit svc my-release-svc-couchdb 656 | 656 kubectl get svc 657 | 657 helm upgrade my-release couchdb/couchdb 658 | 658 kubectl edit svc my-release-svc-couchdb 659 | 659 kubectl get svc 660 | 660 kubectl edit svc my-release-svc-couchdb 661 | 661 kubectl get svc 662 | 662 kubectl exec task-pv-pod bash 663 | 663 kubectl describe pod task-pv-pod 664 | 664 kubectl get nodes 665 | 665 df -h 666 | 666 clear 667 | 667 helm repo add couchdb https://apache.github.io/couchdb-helm 668 | 668 helm update repo 669 | 669 helm repo update 670 | 670 history 671 | 671 helm repo list 672 | 672 cd /home/centos01/ 673 | 673 ls 674 | 674 cd Downloads/ 675 | 675 ls 676 | 676 unzip couchdb-helm-master.zip 677 | 677 ls 678 | 678 cd couchdb-helm-master/ 679 | 679 ls 680 | 680 cd couchdb/ 681 | 681 ls 682 | 682 helm install --name my-release --set createAdminSecret=false --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad couchdb/couchdb 683 | 
683 helm install my-release --set createAdminSecret=false --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad couchdb/couchdb 684 | 684 helm install --name my-release --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad couchdb/couchdb 685 | 685 helm install my-release --set couchdbConfig.couchdb.uuid=decafbaddecafbaddecafbaddecafbad . 686 | 686 kubectl get pod 687 | 687 kubectl get svc 688 | 688 kubectl edit svc my-release-svc-couchdb 689 | 689 kubectl get svc 690 | 690 kubectl get secret my-release-couchdb -o go-template='{{ .data.adminPassword }}' | base64 --decode 691 | 691 history > kubecommand.txt 692 | -------------------------------------------------------------------------------- /kubernetes installation: -------------------------------------------------------------------------------- 1 | Master Server Configuration 2 | 256 cat < /dev/null 47 | 5 apt-get update 48 | 6 cat < /etc/docker/daemon.json <", 6 | "main": "server.js", 7 | "scripts": { 8 | "start": "node server.js" 9 | }, 10 | "dependencies": { 11 | "express": "^4.16.1" 12 | } 13 | } 14 | -------------------------------------------------------------------------------- /pod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: static-web 5 | labels: 6 | role: myrole 7 | spec: 8 | containers: 9 | - name: web 10 | image: nginx 11 | ports: 12 | - name: web 13 | containerPort: 80 14 | protocol: TCP 15 | -------------------------------------------------------------------------------- /presistent volume claim: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: task-pv-claim 5 | spec: 6 | storageClassName: manual 7 | accessModes: 8 | - ReadWriteOnce 9 | resources: 10 | requests: 11 | storage: 3Gi 12 | -------------------------------------------------------------------------------- /presistentvolume: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: task-pv-volume 5 | labels: 6 | type: local 7 | spec: 8 | storageClassName: manual 9 | capacity: 10 | storage: 10Gi 11 | accessModes: 12 | - ReadWriteOnce 13 | hostPath: 14 | path: "/mnt/data" 15 | -------------------------------------------------------------------------------- /probeapp: -------------------------------------------------------------------------------- 1 | var http = require('http'); 2 | 3 | var server = http.createServer(function(req, res) { 4 | res.writeHead(500, { "Content-type": "text/plain" }); 5 | res.end("We have run into an error\n"); 6 | }); 7 | 8 | server.listen(3000, function() { 9 | console.log('Server is running at 3000') 10 | }) 11 | -------------------------------------------------------------------------------- /probedockerfile: -------------------------------------------------------------------------------- 1 | FROM node 2 | COPY app.js / 3 | EXPOSE 3000 4 | ENTRYPOINT [ "node","/app.js" ] 5 | -------------------------------------------------------------------------------- /pv-pod: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: task-pv-pod 5 | spec: 6 | volumes: 7 | - name: task-pv-storage 8 | persistentVolumeClaim: 9 | claimName: task-pv-claim 10 | containers: 11 | - name: task-pv-container 12 | image: nginx 13 | ports: 14 | - containerPort: 80 15 | 
name: "http-server" 16 | volumeMounts: 17 | - mountPath: "/usr/share/nginx/html" 18 | name: task-pv-storage 19 | -------------------------------------------------------------------------------- /pvclaimpod: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: task-pv-pod 5 | spec: 6 | volumes: 7 | - name: task-pv-storage 8 | persistentVolumeClaim: 9 | claimName: task-pv-claim 10 | containers: 11 | - name: task-pv-container 12 | image: nginx 13 | ports: 14 | - containerPort: 80 15 | name: "http-server" 16 | volumeMounts: 17 | - mountPath: "/usr/share/nginx/html" 18 | name: task-pv-storage 19 | -------------------------------------------------------------------------------- /pvcpod: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: task-pv-pod 5 | spec: 6 | volumes: 7 | - name: task-pv-storage 8 | persistenVolumeClaim: 9 | claimName: task-pv-claim 10 | containers: 11 | - name: task-pv-container 12 | image: nginx 13 | ports: 14 | - containerPort: 80 15 | name: "http-server" 16 | volumeMounts: 17 | - mountPath: "/usr/share/nginx/html" 18 | name: task-pv-storage 19 | -------------------------------------------------------------------------------- /quota command: -------------------------------------------------------------------------------- 1 | 68 kubectl create namespace quota-mem-cpu-example 2 | 69 vi quota.yaml 3 | 70 kubectl apply -f quota.yaml --namespace=quota-mem-cpu-example 4 | 71 kubectl describe ns 5 | 72 clear 6 | 73 kubectl get ns | more 7 | 74 kubectl describe ns | more 8 | 75 clear 9 | 76 vi quota1.yaml 10 | 77 kubectl apply -f quota1.yaml --namespace=quota-mem-cpu-example 11 | 12 | 13 | 82 kubectl get resourcequota mem-cpu-demo --namespace=quota-mem-cpu-example --output=yaml 14 | 83 history 15 | -------------------------------------------------------------------------------- /quota-mem-cpu-pod: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: quota-mem-cpu-demo 5 | spec: 6 | containers: 7 | - name: quota-mem-cpu-demo-ctr 8 | image: nginx 9 | resources: 10 | limits: 11 | memory: "800Mi" 12 | cpu: "800m" 13 | requests: 14 | memory: "600Mi" 15 | cpu: "400m" 16 | -------------------------------------------------------------------------------- /resourcequota: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ResourceQuota 3 | metadata: 4 | name: mem-cpu-demo 5 | spec: 6 | hard: 7 | request.cpu: "1" # inside this namespace request total for all container not exceed 1cpu 8 | request.memory: 1Gi 9 | limits.cpu: "2" 10 | limits.memory: 2Gi 11 | -------------------------------------------------------------------------------- /server.js: -------------------------------------------------------------------------------- 1 | 'use strict'; 2 | 3 | const express = require('express'); 4 | 5 | // Constants 6 | const PORT = 8080; 7 | const HOST = '0.0.0.0'; 8 | 9 | // App 10 | const app = express(); 11 | app.get('/', (req, res) => { 12 | res.send('Hello World'); 13 | }); 14 | 15 | app.listen(PORT, HOST); 16 | console.log(`Running on http://${HOST}:${PORT}`); 17 | -------------------------------------------------------------------------------- /springboot: -------------------------------------------------------------------------------- 1 | Create a Spring Boot Application 2 | The 
first thing we will do is create a Spring Boot application. If you have one you prefer to use already in github, you could clone it in the terminal (git and java are installed already). Or you can create an application from scratch using start.spring.io: 3 | 4 | curl https://start.spring.io/starter.tgz -d dependencies=webflux,actuator | tar -xzvf - 5 | You can then build the application: 6 | 7 | ./mvnw install 8 | It will take a couple of minutes the first time, but then once the dependencies are all cached it will be fast. 9 | And you can see the result of the build. If the build was successful, you should see a JAR file, something like this: 10 | 11 | ls -l target/*.jar 12 | -rw-r--r-- 1 root root 19463334 Nov 15 11:54 target/demo-0.0.1-SNAPSHOT.jar 13 | The JAR is executable: 14 | 15 | $ java -jar target/*.jar 16 | The app has some built in HTTP endpoints by virtue of the "actuator" dependency we added when we downloaded the project. So you will see something like this in the logs on startup: 17 | 18 | ... 19 | 2019-11-15 12:12:35.333 INFO 13912 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator' 20 | 2019-11-15 12:12:36.448 INFO 13912 --- [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port(s): 8080 21 | ... 22 | So you can curl the endpoints in another terminal: 23 | 24 | $ curl localhost:8080/actuator | jq . 25 | { 26 | "_links": { 27 | "self": { 28 | "href": "http://localhost:8080/actuator", 29 | "templated": false 30 | }, 31 | "health-path": { 32 | "href": "http://localhost:8080/actuator/health/{*path}", 33 | "templated": true 34 | }, 35 | "health": { 36 | "href": "http://localhost:8080/actuator/health", 37 | "templated": false 38 | }, 39 | "info": { 40 | "href": "http://localhost:8080/actuator/info", 41 | "templated": false 42 | } 43 | } 44 | } 45 | To complete this step, send Ctrl+C to kill the application. 46 | 47 | Containerize the Application 48 | There are multiple options for containerizing a Spring Boot application. For local development and testing it makes sense to start with a Dockerfile as the docker build workflow is generally well known and understood. 49 | 50 | If you don’t have docker locally or want to automatically push an image to a registry then Jib would be a good choice. In an enterprise setting, when you need a trusted build service for a CI/CD pipeline, you could look at Cloud Native Buildpacks. 51 | First create a Dockerfile: 52 | 53 | FROM openjdk:8-jdk-alpine AS builder 54 | WORKDIR target/dependency 55 | ARG APPJAR=target/*.jar 56 | COPY ${APPJAR} app.jar 57 | RUN jar -xf ./app.jar 58 | 59 | FROM openjdk:8-jre-alpine 60 | VOLUME /tmp 61 | ARG DEPENDENCY=target/dependency 62 | COPY --from=builder ${DEPENDENCY}/BOOT-INF/lib /app/lib 63 | COPY --from=builder ${DEPENDENCY}/META-INF /app/META-INF 64 | COPY --from=builder ${DEPENDENCY}/BOOT-INF/classes /app 65 | ENTRYPOINT ["java","-cp","app:app/lib/*","com.example.demo.DemoApplication"] 66 | Then build the container image, giving it a tag (choose your own ID instead of "springguides" if you are going to push to Dockerhub): 67 | 68 | $ docker build -t springguides/demo . 69 | You can run the container locally: 70 | 71 | $ docker run -p 8080:8080 springguides/demo 72 | and check that it works in another terminal: 73 | 74 | $ curl localhost:8080/actuator/health 75 | Finish off by killing the container.
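If you did not give the container a name, one way to find and stop it (a sketch, assuming the springguides/demo tag used above):

docker ps -q --filter ancestor=springguides/demo | xargs docker stop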
76 | 77 | You won’t be able to push the image unless you authenticate with Dockerhub (docker login), but there’s an image there already that should work. If you were authenticated you could: 78 | 79 | $ docker push springguides/demo 80 | In real life the image needs to be pushed to Dockerhub (or some other accessible repository) because Kubernetes pulls the image from inside its Kubelets (nodes), which are not in general connected to the local docker daemon. For the purposes of this scenario you can omit the push and just use the image that is already there. 81 | 82 | Just for testing, there are workarounds that make docker push work with an insecure local registry, for instance, but that is out of scope for this scenario. 83 | Deploy the Application to Kubernetes 84 | You have a container that runs and exposes port 8080, so all you need to make Kubernetes run it is some YAML. To avoid having to look at or edit YAML, for now, you can ask kubectl to generate it for you. The only thing that might vary here is the --image name. If you deployed your container to your own repository, use its tag instead of this one: 85 | 86 | $ kubectl create deployment demo --image=springguides/demo --dry-run -o=yaml > deployment.yaml 87 | $ echo --- >> deployment.yaml 88 | $ kubectl create service clusterip demo --tcp=8080:8080 --dry-run -o=yaml >> deployment.yaml 89 | You can take the YAML generated above and edit it if you like, or you can just apply it: 90 | 91 | $ kubectl apply -f deployment.yaml 92 | deployment.apps/demo created 93 | service/demo created 94 | Check that the application is running: 95 | 96 | $ kubectl get all 97 | NAME READY STATUS RESTARTS AGE 98 | pod/demo-658b7f4997-qfw9l 1/1 Running 0 146m 99 | 100 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 101 | service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2d18h 102 | service/demo ClusterIP 10.43.138.213 <none> 8080/TCP 21h 103 | 104 | NAME READY UP-TO-DATE AVAILABLE AGE 105 | deployment.apps/demo 1/1 1 1 21h 106 | 107 | NAME DESIRED CURRENT READY AGE 108 | replicaset.apps/demo-658b7f4997 1 1 1 21h 109 | 110 | Keep doing kubectl get all until the demo pod shows its status as "Running". 111 | Now you need to be able to connect to the application, which you have exposed as a Service in Kubernetes.
One way to do that, which works great at development time, is to create a local port-forward tunnel: 112 | 113 | $ kubectl port-forward svc/demo 8080:8080 114 | then you can verify that the app is running in another terminal: 115 | 116 | $ curl localhost:8080/actuator/health 117 | {"status":"UP"} 118 | -------------------------------------------------------------------------------- /statefullset1nginx: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nginx 5 | labels: 6 | app: nginx 7 | spec: 8 | ports: 9 | - port: 80 10 | name: web 11 | clusterIP: None 12 | selector: 13 | app: nginx 14 | --- 15 | apiVersion: apps/v1 16 | kind: StatefulSet 17 | metadata: 18 | name: web 19 | spec: 20 | selector: 21 | matchLabels: 22 | app: nginx # has to match .spec.template.metadata.labels 23 | serviceName: "nginx" 24 | replicas: 3 # by default is 1 25 | template: 26 | metadata: 27 | labels: 28 | app: nginx # has to match .spec.selector.matchLabels 29 | spec: 30 | terminationGracePeriodSeconds: 10 31 | containers: 32 | - name: nginx 33 | image: k8s.gcr.io/nginx-slim:0.8 34 | ports: 35 | - containerPort: 80 36 | name: web 37 | volumeMounts: 38 | - name: www 39 | mountPath: /usr/share/nginx/html 40 | volumeClaimTemplates: 41 | - metadata: 42 | name: www 43 | spec: 44 | accessModes: [ "ReadWriteOnce" ] 45 | storageClassName: "my-storage-class" 46 | resources: 47 | requests: 48 | storage: 1Gi 49 | -------------------------------------------------------------------------------- /statefullsetredis: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: redis 5 | namespace: default 6 | labels: 7 | app: redis 8 | spec: 9 | ports: 10 | - port: 6379 11 | protocol: TCP 12 | selector: 13 | app: redis 14 | type: ClusterIP 15 | clusterIP: None 16 | --- 17 | apiVersion: apps/v1 18 | kind: StatefulSet 19 | metadata: 20 | name: redis 21 | spec: 22 | selector: 23 | matchLabels: 24 | app: redis 25 | serviceName: "redis" 26 | replicas: 1 27 | template: 28 | metadata: 29 | labels: 30 | app: redis 31 | spec: 32 | containers: 33 | - name: redis 34 | image: redis:5.0.4 35 | command: ["redis-server", "--appendonly", "yes"] 36 | ports: 37 | - containerPort: 6379 38 | name: web 39 | volumeMounts: 40 | - name: redis-aof 41 | mountPath: /data 42 | volumeClaimTemplates: 43 | - metadata: 44 | name: redis-aof 45 | spec: 46 | accessModes: [ "ReadWriteOnce" ] 47 | storageClassName: "gp2" 48 | resources: 49 | requests: 50 | storage: 1Gi 51 | -------------------------------------------------------------------------------- /taintpod: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: nginx 5 | labels: 6 | env: test 7 | spec: 8 | containers: 9 | - name: nginx 10 | image: nginx 11 | tolerations: 12 | - key: "example-key" 13 | operator: "Exists" 14 | effect: "NoSchedule" 15 | -------------------------------------------------------------------------------- /updatedpythoncompose: -------------------------------------------------------------------------------- 1 | version: '3' 2 | 3 | # Run as 4 | # docker-compose build; docker-compose up -d 5 | # Check with 6 | # docker ps 7 | # Then check the logs with 8 | # docker logs --tail 50 $service_name 9 | # docker-compose images 10 | # docker-compose logs --tail 20 service_name 11 | 12 | services: 13 | 14 | mongo: 15 | image: mongo 16 | restart: always
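# sample root credentials below are for local development only; change them before reusing this file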
17 | environment: 18 | MONGO_INITDB_ROOT_USERNAME: root 19 | MONGO_INITDB_ROOT_PASSWORD: password 20 | networks: 21 | - app-tier 22 | volumes: 23 | # Sample data comes from: 24 | # https://github.com/mistertandon/node-express-hbs/blob/master/movies_collection.json 25 | - ./mongo:/docker-entrypoint-initdb.d 26 | 27 | python_app: 28 | build: 29 | context: . 30 | dockerfile: Dockerfile 31 | depends_on: 32 | - mongo 33 | networks: 34 | - app-tier 35 | command: 36 | tail -f /dev/null 37 | 38 | 39 | networks: 40 | app-tier: 41 | driver: bridge 42 | -------------------------------------------------------------------------------- /user-peter-role-rolebinding: -------------------------------------------------------------------------------- 1 | kind: Role 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: peter-access 5 | namespace: mynamespace 6 | rules: 7 | - apiGroups: [""] 8 | resources: ["pods"] 9 | verbs: ["get","watch","list","create"] 10 | - apiGroups: ["","extensions","apps"] 11 | resources: ["deployments"] 12 | verbs: ["get","list","watch","create","delete"] 13 | --- 14 | kind: RoleBinding 15 | apiVersion: rbac.authorization.k8s.io/v1 16 | metadata: 17 | name: peter-role-binding 18 | namespace: mynamespace 19 | subjects: 20 | - kind: User 21 | name: peter 22 | namespace: mynamespace 23 | apiGroup: rbac.authorization.k8s.io 24 | roleRef: 25 | kind: Role 26 | name: peter-access 27 | apiGroup: rbac.authorization.k8s.io 28 | -------------------------------------------------------------------------------- /visits.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gopal1409/KubernetesDocker/eec617a1aa81f7b00694d580bb2a57b0e68b502d/visits.zip -------------------------------------------------------------------------------- /wordpresscompose: -------------------------------------------------------------------------------- 1 | version: '3.3' 2 | services: 3 | db: 4 | image: mysql:5.7 5 | volumes: 6 | - db_data:/var/lib/mysql 7 | restart: always 8 | environment: 9 | MYSQL_ROOT_PASSWORD: somewordpress 10 | MYSQL_DATABASE: wordpress 11 | MYSQL_USER: wordpress 12 | MYSQL_PASSWORD: wordpress 13 | wordpress: 14 | depends_on: 15 | - db 16 | image: wordpress:latest 17 | ports: 18 | - "8000:80" 19 | restart: always 20 | environment: 21 | WORDPRESS_DB_HOST: db:3306 22 | WORDPRESS_DB_USER: wordpress 23 | WORDPRESS_DB_PASSWORD: wordpress 24 | volumes: 25 | db_data: {} 26 | -------------------------------------------------------------------------------- /wordpressdockercompose: -------------------------------------------------------------------------------- 1 | version: '3.3' 2 | services: 3 | db: 4 | image: mysql:5.7 5 | volumes: 6 | - db_data:/var/lib/mysql 7 | restart: always 8 | environment: 9 | MYSQL_ROOT_PASSWORD: somewordpress 10 | MYSQL_DATABASE: wordpress 11 | MYSQL_USER: wordpress 12 | MYSQL_PASSWORD: wordpress 13 | wordpress: 14 | depends_on: 15 | - db 16 | image: wordpress:latest 17 | ports: 18 | - "8000:80" 19 | restart: always 20 | environment: 21 | WORDPRESS_DB_HOST: db:3306 22 | WORDPRESS_DB_USER: wordpress 23 | WORDPRESS_DB_PASSWORD: wordpress 24 | WORDPRESS_DB_NAME: wordpress 25 | volumes: 26 | db_data: {} 27 | --------------------------------------------------------------------------------
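A quick way to try either WordPress compose file above (a sketch; assumes Docker Compose is installed and the file is saved as docker-compose.yml):

docker-compose up -d              # start MySQL and WordPress
curl -I http://localhost:8000     # WordPress answers on the published port
docker-compose down               # tear down; add -v to also remove the db_data volume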