├── Contributing.md ├── Getting Started with Kubernetes.go ├── README.md └── Security Glossary.md /Contributing.md: -------------------------------------------------------------------------------- 1 | # Contributing Guidelines 2 | 3 | **Make sure your pull request follows these guidelines:** 4 | 5 | - Search through the previous pull requests before making a new one! 6 | - Adding new categories, or improving existing categories is welcome! 7 | - Make sure you've personally used or benefited from the suggested resource. 8 | - Make an individual pull request for each suggestion. 9 | - Use the following format: `[Resource Title](url link) — description.` 10 | - Expand on why the resource is useful in your pull request if needed. 11 | - Keep descriptions short and simple, but descriptive. 12 | - Please double check your spelling and grammar. 13 | 14 | **Thanks for contributing to this Project!** 15 | -------------------------------------------------------------------------------- /Getting Started with Kubernetes.go: -------------------------------------------------------------------------------- 1 | Code samples & snippets coming soon! 2 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 |

2 | 3 |
4 | Kubernetes Guide 5 |

6 | 7 | 8 | followers 9 | 10 | ![Maintenance](https://img.shields.io/maintenance/yes/2024?style=for-the-badge) 11 | ![Last-Commit](https://img.shields.io/github/last-commit/mikeroyal/kubernetes-guide?style=for-the-badge) 12 | 13 | #### A guide covering Kubernetes including the applications and tools that will make you a better and more efficient Kubernetes developer. 14 | 15 | **Note: You can easily convert this markdown file to a PDF in [VSCode](https://code.visualstudio.com/) using this handy extension [Markdown PDF](https://marketplace.visualstudio.com/items?itemName=yzane.markdown-pdf).** 16 | 17 | # Table of Contents 18 | 19 | 1. [Getting Started with Kubernetes](#getting-started-with-kubernetes) 20 | - [Developer Resources](#developer-resources) 21 | - [Kubernetes Courses & Certifications](#kubernetes-courses--certifications) 22 | - [Books](#kubernetes-books) 23 | - [YouTube Tutorials](#youtube-tutorials) 24 | - [Red Hat CodeReady Containers (CRC) OpenShift on WSL](#red-Hat-CodeReady-Containers-CRC-on-wsl) 25 | - [Setting up Podman on WSL](#setting-up-podman-on-wsl) 26 | - [Setting up Buildah on WSL](#setting-up-buildah-on-wsl) 27 | - [Installing Kubernetes on WSL with Rancher Desktop](#installing-kubernetes-on-wsl-with-rancher-desktop) 28 | - [Installing Kubernetes on WSL with Docker Desktop](#installing-kubernetes-on-wsl-with-docker-desktop) 29 | - [Installing Kubernetes on WSL with Microk8s](#installing-kubernetes-on-wsl-with-microk8s) 30 | 31 | 2. 
[Kubernetes Tools and Projects](#kubernetes-tools-and-projects) 32 | * [Getting Started with OpenShift](#getting-started-with-openshift) 33 | * [What is OpenShift?](#what-is-openshift) 34 | * [OpenShift Developer Resources](#openshift-developer-resources) 35 | * [Source-to-Image (S2I) images for building your Apps](#source-to-image-s2i-images-for-programmingbuildng-your-apps) 36 | 37 | * [Java](#Java) 38 | * [Python](#Python) 39 | * [Golang](#Golang) 40 | * [Ruby](#Ruby) 41 | * [.NET Core](#net-core) 42 | * [Node.js](#Nodejs) 43 | * [Perl](#Perl) 44 | * [PHP](#PHP) 45 | 46 | * [Builder Images for setting up Databases](#builder-images-for-setting-up-Databases) 47 | 48 | * [MySQL](#mysql) 49 | * [PostgreSQL](#postgresql) 50 | * [MongoDB](#mongodb) 51 | * [MariaDB](#mariadb) 52 | * [Redis](#redis) 53 | 54 | * [Setting up OpenShift on Microsoft Azure](#setting-up-on-Microsoft-Azure) 55 | * [Setting up OpenShift on Google Cloud Platform (GCP)](#setting-up-on-Google-Cloud-GCP) 56 | * [Setting up Red Hat OpenShift Data Science](#setting-up-Red-Hat-OpenShift-Data-Science) 57 | * [Setting up Red Hat CodeReady Containers (CRC) OpenShift](#red-Hat-CodeReady-Containers-CRC) 58 | * [Setting up Podman](#setting-up-podman) 59 | * [Setting up Buildah](#setting-up-buildah) 60 | * [Setting up Skopeo](#setting-up-skopeo) 61 | * [File systems](#file-systems) 62 | 63 | * [OpenShift Tools](#openshift-tools) 64 | 65 | 3. [Go Development](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#go-development) 66 | 67 | 4. [Python Development](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#python-development) 68 | 69 | 5. [Bash/PowerShell Development](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#bashpowershell-development) 70 | 71 | 6. [Machine Learning](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#machine-learning) 72 | 73 | 7.
[Networking](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#networking) 74 | 75 | 8. [Databases](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#databases) 76 | 77 | 9. [Telco 5G](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#telco-5g) 78 | 79 | 10. [Open Source Security](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#open-source-security) 80 | - [Security Tutorials & Resources](#Security-Tutorials--Resources) 81 | - [Security Certifications](#Security-Cerifications) 82 | 83 |

84 | 85 |
86 |

87 | 88 | # Getting Started with Kubernetes 89 | [Back to the Top](#table-of-contents) 90 | 91 | [Kubernetes (K8s)](https://kubernetes.io/) is an open-source system for automating deployment, scaling, and management of containerized applications. 92 | 93 | 94 | 95 | **Building Highly Available (HA) Clusters with kubeadm. Source: [Kubernetes.io](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/)** 96 | 97 | ### Developer Resources 98 | [Back to the Top](#table-of-contents) 99 | 100 | - [Kubernetes Certifications](https://kubernetes.io/training/) 101 | 102 | - [Getting started with Kubernetes on AWS](https://aws.amazon.com/kubernetes/) 103 | 104 | - [Kubernetes on Microsoft Azure](https://azure.microsoft.com/en-us/topic/what-is-kubernetes/) 105 | 106 | - [Intro to Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/kubernetes-dashboard) 107 | 108 | - [Getting started with Google Cloud](https://cloud.google.com/learn/what-is-kubernetes) 109 | 110 | - [Azure Red Hat OpenShift](https://azure.microsoft.com/en-us/services/openshift/) 111 | 112 | - [Getting started with Kubernetes on Red Hat](https://www.redhat.com/en/topics/containers/what-is-kubernetes) 113 | 114 | - [Getting started with Kubernetes on IBM](https://www.ibm.com/cloud/learn/kubernetes) 115 | 116 | - [Red Hat OpenShift on IBM Cloud](https://www.ibm.com/cloud/openshift) 117 | 118 | - [Kubernetes Contributors](https://www.kubernetes.dev/) 119 | 120 | - [Kubernetes Tutorials from Pulumi](https://www.pulumi.com/docs/tutorials/kubernetes/) 121 | 122 | - [Enable OpenShift Virtualization on Red Hat OpenShift](https://developers.redhat.com/blog/2020/08/28/enable-openshift-virtualization-on-red-hat-openshift/) 123 | 124 | - [YAML basics in Kubernetes](https://developer.ibm.com/technologies/containers/tutorials/yaml-basics-and-usage-in-kubernetes/) 125 | 126 | - [Elastic Cloud on Kubernetes](https://www.elastic.co/elastic-cloud-kubernetes) 127 | 128 | -
[Docker and Kubernetes](https://www.docker.com/products/kubernetes) 129 | 130 | - [Running Apache Spark on Kubernetes](http://spark.apache.org/docs/latest/running-on-kubernetes.html) 131 | 132 | - [Kubernetes Across VMware vRealize Automation](https://blogs.vmware.com/management/2019/06/kubernetes-across-vmware-cloud-automation-services.html) 133 | 134 | - [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) 135 | 136 | - [All the Ways VMware Tanzu Works with AWS](https://tanzu.vmware.com/content/blog/all-the-ways-vmware-tanzutm-works-with-aws) 137 | 138 | - [Using Ansible in a Cloud-Native Kubernetes Environment](https://www.ansible.com/blog/how-useful-is-ansible-in-a-cloud-native-kubernetes-environment) 139 | 140 | - [Managing Kubernetes (K8s) objects with Ansible](https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_module.html) 141 | 142 | - [Setting up a Kubernetes cluster using Vagrant and Ansible](https://kubernetes.io/blog/2019/03/15/kubernetes-setup-using-ansible-and-vagrant/) 143 | 144 | - [Running MongoDB with Kubernetes](https://www.mongodb.com/kubernetes) 145 | 146 | - [Kubernetes Fluentd](https://docs.fluentd.org/v/0.12/articles/kubernetes-fluentd) 147 | 148 | - [Understanding the new GitLab Kubernetes Agent](https://about.gitlab.com/blog/2020/09/22/introducing-the-gitlab-kubernetes-agent/) 149 | 150 | - [Intro Local Process with Kubernetes for Visual Studio 2019](https://devblogs.microsoft.com/visualstudio/introducing-local-process-with-kubernetes-for-visual-studio%E2%80%AF2019/) 151 | 152 | - [Kubernetes Playground by Katacoda](https://www.katacoda.com/courses/kubernetes/playground) 153 | 154 | ### Kubernetes Courses & Certifications 155 | [Back to the Top](#table-of-contents) 156 | 157 | - [Kubernetes Training & Certifications](https://kubernetes.io/training/) 158 | 159 | - [Top Kubernetes Courses Online | Coursera](https://www.coursera.org/courses?query=kubernetes) 160 | 161 | - [Top Kubernetes Courses 
Online | Udemy](https://www.udemy.com/topic/kubernetes/) 162 | 163 | - [Kubernetes Courses - IBM Developer](https://developer.ibm.com/components/kubernetes/courses/) 164 | 165 | - [Introduction to Kubernetes Courses | edX](https://www.edx.org/course/introduction-to-kubernetes) 166 | 167 | - [VMware Tanzu Education](https://tanzu.vmware.com/education) 168 | 169 | - [KubeAcademy from VMware](https://kube.academy/) 170 | 171 | - [Online Kubernetes Course: Beginners Guide to Kubernetes | Pluralsight](https://www.pluralsight.com/courses/getting-started-kubernetes) 172 | 173 | - [Getting Started with Google Kubernetes Engine | Pluralsight](https://www.pluralsight.com/courses/getting-started-google-kubernetes-engine-8) 174 | 175 | - [Scalable Microservices with Kubernetes course from Udacity](https://www.udacity.com/course/scalable-microservices-with-kubernetes--ud615) 176 | 177 | ### Kubernetes Books 178 | [Back to the Top](#table-of-contents) 179 | 180 | - [Kubernetes for Full-Stack Developers by Digital Ocean](https://assets.digitalocean.com/books/kubernetes-for-full-stack-developers.pdf) 181 | 182 | - [Kubernetes Patterns - Red Hat](https://www.redhat.com/cms/managed-files/cm-oreilly-kubernetes-patterns-ebook-f19824-201910-en.pdf) 183 | 184 | - [The Ultimate Guide to Kubernetes Deployments with Octopus](https://i.octopus.com/books/kubernetes-book.pdf) 185 | 186 | - [Learning Kubernetes (PDF)](https://riptutorial.com/Download/kubernetes.pdf) 187 | 188 | - [Certified Kubernetes Administrator (CKA) Study Guide: In-Depth Guidance and Practice](https://www.amazon.com/Certified-Kubernetes-Administrator-Study-Depth/dp/1098107225/ref=sr_1_29?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-29) 189 | 190 | - [Quick Start Kubernetes by Nigel Poulton
(2022)](https://www.amazon.com/Quick-Start-Kubernetes-Nigel-Poulton-ebook/dp/B08T21NW4Z/ref=sr_1_18?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-18) 191 | 192 | - [The Kubernetes Book by Nigel Poulton (2022)](https://www.amazon.com/Kubernetes-Book-Version-November-2018-ebook/dp/B072TS9ZQZ/ref=sr_1_4?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-4) 193 | 194 | - [Kubernetes: Up and Running: Dive into the Future of Infrastructure](https://www.amazon.com/Kubernetes-Running-Dive-Future-Infrastructure/dp/1492046531/ref=sr_1_5?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-5) 195 | 196 | - [Kubernetes and Docker - An Enterprise Guide: Effectively containerize applications, integrate enterprise systems, and scale applications in your enterprise](https://www.amazon.com/Kubernetes-Docker-Effectively-containerize-applications/dp/183921340X/ref=sr_1_24?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-24) 197 | 198 | - [Kubernetes in Action](https://www.amazon.com/Kubernetes-Action-Marko-Luksa/dp/1617293725/ref=sr_1_7?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-7) 199 | 200 | - [Kubernetes – An Enterprise Guide: Effectively containerize applications, integrate enterprise systems, and scale](https://www.amazon.com/Kubernetes-Enterprise-Effectively-containerize-applications/dp/1803230037/ref=sr_1_6?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-6) 201 | 202 | - [Production Kubernetes: Building Successful Application Platforms](https://www.amazon.com/Production-Kubernetes-Successful-Application-Platforms/dp/1492092304/ref=sr_1_8?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-8) 
203 | 204 | - [The Kubernetes Bible: The definitive guide to deploying and managing Kubernetes across major cloud platforms](https://www.amazon.com/Kubernetes-Bible-definitive-deploying-platforms/dp/1838827692/ref=sr_1_16?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-16) 205 | 206 | - [Networking and Kubernetes: A Layered Approach](https://www.amazon.com/Networking-Kubernetes-Approach-James-Strong/dp/1492081655/ref=sr_1_12?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-12) 207 | 208 | - [Kubernetes Best Practices: Blueprints for Building Successful Applications on Kubernetes](https://www.amazon.com/Kubernetes-Best-Practices-Blueprints-Applications/dp/1492056472/ref=sr_1_19?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-19) 209 | 210 | - [Kubernetes Security and Observability: A Holistic Approach to Securing Containers and Cloud Native Apps](https://www.amazon.com/Kubernetes-Security-Observability-Containers-Applications/dp/1098107101/ref=sr_1_26?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-26) 211 | 212 | - [Hands-on Kubernetes on Azure: Use Azure Kubernetes Service to automate management, scaling, and deployment of containerized apps](https://www.amazon.com/Hands-Kubernetes-Azure-containerized-applications-ebook/dp/B095H26VFY/ref=sr_1_11?crid=15963283P4C0V&keywords=kubernetes&qid=1653935057&s=books&sprefix=kubernetes%2Cstripbooks%2C174&sr=1-11) 213 | 214 | ### YouTube Tutorials 215 | [Back to the Top](#table-of-contents) 216 | 217 | [![Kubernetes in 2023](https://ytcards.demolab.com/?id=kGrpLKNi4ZI&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "Kubernetes in 2023")](https://www.youtube.com/watch?v=kGrpLKNi4ZI) 218 | [![Cloud Native Live: Introduction to platform engineering maturity 
model](https://ytcards.demolab.com/?id=Oe_mhDtb22M&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "Cloud Native Live: Introduction to platform engineering maturity model")](https://www.youtube.com/watch?v=Oe_mhDtb22M) 219 | [![Containers vs Pods](https://ytcards.demolab.com/?id=vxtq_pJp7_A&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "Containers vs Pods")](https://www.youtube.com/watch?v=vxtq_pJp7_A) 220 | [![Kubernetes Roadmap - Complete Step-by-Step Learning Path](https://ytcards.demolab.com/?id=S8eX0MxfnB4&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "Kubernetes Roadmap - Complete Step-by-Step Learning Path")](https://www.youtube.com/watch?v=S8eX0MxfnB4) 221 | [![Kubernetes Course - Full Beginners Tutorial (Containerize Your Apps!)](https://ytcards.demolab.com/?id=d6WC5n9G_sM&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "Kubernetes Course - Full Beginners Tutorial (Containerize Your Apps!)")](https://www.youtube.com/watch?v=d6WC5n9G_sM) 222 | [![What is Kubernetes | Kubernetes explained in 15 mins](https://ytcards.demolab.com/?id=VnvRFRk_51k&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "What is Kubernetes | Kubernetes explained in 15 mins")](https://www.youtube.com/watch?v=VnvRFRk_51k) 223 | [![Do NOT Learn Kubernetes Without Knowing These Concepts...](https://ytcards.demolab.com/?id=wXuSqFJVNQA&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "Do NOT Learn Kubernetes Without Knowing These Concepts...")](https://www.youtube.com/watch?v=wXuSqFJVNQA) 224 | [![Docker Containers and Kubernetes Fundamentals – Full Hands-On Course](https://ytcards.demolab.com/?id=kTp5xUtcalw&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "Docker Containers and Kubernetes Fundamentals – Full 
Hands-On Course")](https://www.youtube.com/watch?v=kTp5xUtcalw) 225 | [![Kubernetes Explained in 100 Seconds](https://ytcards.demolab.com/?id=PziYflu8cB8&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "Kubernetes Explained in 100 Seconds")](https://www.youtube.com/watch?v=PziYflu8cB8) 226 | [![Docker vs Kubernetes, what's better in a Homelab?](https://ytcards.demolab.com/?id=n-fAf2mte6M&lang=en&background_color=%230d1117&title_color=%23ffffff&stats_color=%23dedede&width=240 "Docker vs Kubernetes, what's better in a Homelab?")](https://www.youtube.com/watch?v=n-fAf2mte6M) 227 | 228 | ### Red Hat CodeReady Containers (CRC) on WSL 229 | 230 | [Back to the Top](#table-of-contents) 231 | 232 | [Red Hat CodeReady Containers (CRC)](https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/2.9.0) is a tool that provides a minimal, preconfigured OpenShift 4 cluster on a laptop or desktop machine for development and testing purposes. CRC is delivered as a platform inside of the VM. 233 | 234 | * **odo (OpenShift Do)**, a CLI tool for developers, to manage application components on the OpenShift Container Platform. 235 | 236 |

237 | 238 |
239 |

240 | 241 | **System Requirements:** 242 | 243 | * **OS:** CentOS Stream 8/RHEL 8/Fedora or later (the latest 2 releases). 244 | * **Download:** [pull-secret](https://cloud.redhat.com/openshift/install/crc/installer-provisioned?intcmp=701f20000012ngPAAQ) 245 | * **Login:** [Red Hat account](https://access.redhat.com/login) 246 | 247 | **Other physical requirements include:** 248 | 249 | * Four virtual CPUs (**4 vCPUs**) 250 | * 10GB of memory (**RAM**) 251 | * 40GB of storage space 252 | 253 | **To set up CodeReady Containers, start by creating the ```crc``` directory, and then download and extract the ```crc``` package (replace ```<username>``` and ```<version>``` below with your Linux username and the CRC version you downloaded):** 254 | 255 | ```mkdir /home/<username>/crc``` 256 | 257 | ```wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz``` 258 | 259 | ```tar -xvf crc-linux-amd64.tar.xz``` 260 | 261 | **Next, move the files to the crc directory and remove the downloaded package(s):** 262 | 263 | ```mv /home/<username>/crc-linux-<version>-amd64/* /home/<username>/crc``` 264 | 265 | ```rm /home/<username>/crc-linux-amd64.tar.xz``` 266 | 267 | ```rm -r /home/<username>/crc-linux-<version>-amd64``` 268 | 269 | **Change to the ```crc``` directory, make ```crc``` executable, and export your ```PATH``` like this:** 270 | 271 | ```cd /home/<username>/crc``` 272 | 273 | ```chmod +x crc``` 274 | 275 | ```export PATH=$PATH:/home/<username>/crc``` 276 | 277 | **Set up and start the cluster:** 278 | 279 | ```crc setup``` 280 | 281 | ```crc start -p <path-to>/pull-secret.txt``` 282 | 283 | **Set up the OC environment:** 284 | 285 | ```crc oc-env``` 286 | 287 | ```eval $(crc oc-env)``` 288 | 289 | **Log in as the developer user:** 290 | 291 | ```oc login -u developer -p developer https://api.crc.testing:6443``` 292 | 293 | ```oc logout``` 294 | 295 | **And then, log in as the platform’s admin:** 296 | 297 | ```oc login -u kubeadmin -p password https://api.crc.testing:6443``` 298 | 299 | ```oc logout``` 300 | 301 | #### Interacting with the cluster.
The most common ways include: 302 | 303 | **Starting the graphical web console:** 304 | 305 | ```crc console``` 306 | 307 | **Display the cluster’s status:** 308 | 309 | ```crc status``` 310 | 311 | **Shut down the OpenShift cluster:** 312 | 313 | ```crc stop``` 314 | 315 | **Delete or kill the OpenShift cluster:** 316 | 317 | ```crc delete``` 318 | 319 |
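The lifecycle commands above lend themselves to a small wrapper script. A minimal sketch (the `run` helper and `DRY_RUN` flag are illustrative, not part of CRC; the script defaults to only printing each command so you can review it first, and the pull-secret path is an assumption):

```shell
#!/bin/sh
# Sketch: drive the common CRC lifecycle steps from one place.
# DRY_RUN defaults to 1, so each command is printed instead of executed;
# set DRY_RUN=0 to actually run them (requires crc on your PATH).
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$*"   # show the command instead of running it
  else
    "$@"        # execute the command as given
  fi
}

run crc setup
run crc start -p "$HOME/pull-secret.txt"
run crc status
```

With `DRY_RUN=0` the same script performs the real setup; adjust the pull-secret path to wherever you saved the file.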

320 | 321 |
322 |

323 | 324 | ### Setting up Podman on WSL 325 | 326 | [Back to the Top](#table-of-contents) 327 | 328 | [Podman (the POD manager)](https://podman.io/) is an open source tool for developing, managing, and running containers on your Linux® systems. It also manages the entire container ecosystem using the libpod library. Podman’s daemonless and inclusive architecture makes it a more secure and accessible option for container management, and its accompanying tools and features, such as [Buildah](https://www.redhat.com/en/topics/containers/what-is-buildah) and [Skopeo](https://www.redhat.com/en/topics/containers/what-is-skopeo), allow developers to customize their container environments to best suit their needs. 329 | 330 | * Fedora: ```sudo dnf install podman``` 331 | * CentOS: ```sudo yum --enablerepo=extras install podman``` 332 | * Ubuntu 20.04 or later: ```sudo apt install podman``` 333 | * Debian 11 (bullseye) or later, or sid/unstable: ```sudo apt install podman``` 334 | * ArchLinux: ```sudo pacman -S podman```, then apply any tweaks needed for rootless use 335 | 336 |
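The per-distribution commands above can be captured in a small helper that maps an `ID` value from `/etc/os-release` to the matching install command. A sketch (`podman_install_cmd` is a hypothetical name; it only prints the command rather than running it):

```shell
#!/bin/sh
# Sketch: print the Podman install command for a given distro ID,
# reusing the per-distribution commands listed above. Nothing is executed.
podman_install_cmd() {
  case "$1" in
    fedora)        echo "sudo dnf install podman" ;;
    centos)        echo "sudo yum --enablerepo=extras install podman" ;;
    ubuntu|debian) echo "sudo apt install podman" ;;
    arch)          echo "sudo pacman -S podman" ;;
    *)             echo "no known install command for: $1" >&2; return 1 ;;
  esac
}

# Example lookup:
podman_install_cmd ubuntu
```

On a live system you could pass `"$(. /etc/os-release && echo "$ID")"` instead of a hard-coded distro name.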

337 | 338 |
339 | Podman 340 |

341 | 342 | ### Setting up Buildah on WSL 343 | 344 | [Back to the Top](#table-of-contents) 345 | 346 | [Buildah](https://buildah.io/) is an open source, Linux-based tool that can build Docker- and Kubernetes-compatible images, and is easy to incorporate into scripts and build pipelines. In addition, Buildah has overlapping functionality with [Podman](https://podman.io/), [Skopeo](https://github.com/containers/skopeo), and [CRI-O](https://cri-o.io/). 347 | 348 | * Fedora: ```sudo dnf -y install buildah``` 349 | * CentOS: ```sudo yum --enablerepo=extras install buildah``` 350 | * Ubuntu 20.04 or later: ```sudo apt install buildah``` 351 | * Debian 11 (bullseye) or later, or sid/unstable: ```sudo apt install -y buildah``` 352 | * ArchLinux: ```sudo pacman -S buildah```, then apply any tweaks needed for rootless use 353 | 354 |
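Once installed, Buildah builds images from an ordinary `Containerfile`/`Dockerfile` with ```buildah bud -t <tag> .```. A minimal illustrative example (the image contents and names are made up for this sketch):

```dockerfile
# Minimal Containerfile (illustrative). Build and run with:
#   buildah bud -t hello-image .
#   podman run --rm hello-image
FROM docker.io/library/alpine:3.18
RUN echo "hello from buildah" > /hello.txt
CMD ["cat", "/hello.txt"]
```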

355 | 356 |
357 | Buildah 358 |

359 | 360 | ### Installing Kubernetes on WSL with Rancher Desktop 361 | 362 | [Back to the Top](#table-of-contents) 363 | 364 | [Rancher Desktop](https://www.rancher.com/products/rancher-desktop) is an open-source desktop application for Mac, Windows and Linux. Rancher Desktop runs Kubernetes and container management on your desktop letting you choose the version of Kubernetes you want to run. It can also build, push, pull, and run container images using either the Docker CLI (with Moby/dockerd) or nerdctl (with containerd). 365 | 366 | **Features:** 367 | 368 | * Installs a new Linux VM in WSL2 that has a Kubernetes cluster based on [k3s](https://k3s.io/) as well as installs various components in it such as KIM (for building docker images on the cluster) and the [Traefik Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/). 369 | 370 | * It installs the kubectl and Helm CLIs on the Windows side linked to them. 371 | 372 | * A nice Windows app to manage its settings and help facilitate its upgrades. 373 | 374 |

375 | 376 |
377 | Rancher Desktop 378 |

379 | 380 |

381 | 382 |
383 | Rancher Desktop Kubernetes Settings 384 |

385 | 386 | #### .deb Dev Repository 387 | 388 | ```curl -s https://download.opensuse.org/repositories/isv:/Rancher:/dev/deb/Release.key | gpg --dearmor | sudo dd status=none of=/usr/share/keyrings/isv-rancher-dev-archive-keyring.gpg``` 389 | 390 | ```echo 'deb [signed-by=/usr/share/keyrings/isv-rancher-dev-archive-keyring.gpg] https://download.opensuse.org/repositories/isv:/Rancher:/dev/deb/ ./' | sudo dd status=none of=/etc/apt/sources.list.d/isv-rancher-dev.list``` 391 | 392 | ```sudo apt update``` 393 | 394 | **See available versions** 395 | 396 | ```apt list -a rancher-desktop``` 397 | 398 | ```sudo apt install rancher-desktop=<version>``` 399 | 400 | #### .rpm Dev Repository 401 | 402 | ```sudo zypper addrepo https://download.opensuse.org/repositories/isv:/Rancher:/dev/rpm/isv:Rancher:dev.repo``` 403 | 404 | ```sudo zypper refresh``` 405 | 406 | **See available versions** 407 | 408 | ```zypper search -s rancher-desktop``` 409 | 410 | ```zypper install --oldpackage rancher-desktop=<version>``` 411 |

413 | 414 |
415 | Rancher Desktop Architecture Overview 416 |

417 | 418 | ### Installing Kubernetes on WSL with Docker Desktop 419 | 420 | [Back to the Top](#table-of-contents) 421 | 422 |

423 | 424 |
425 | Enable the WSL 2 base engine in Docker Desktop 426 |

427 | 428 | We also need to select, under **Resources**, which WSL 2 distribution we want to access Docker from, as shown below using Ubuntu 20.04. Then remember to restart Docker for Windows; once the restart is complete, we can use the ```docker``` command from within WSL: 429 | 430 |

431 | 432 |
433 |

434 | 435 | 436 | We will use kind, a simple way to run Kubernetes in a container. Here we will follow the installation instructions from the official [Kind website](https://kind.sigs.k8s.io/docs/user/quick-start/). 437 | 438 | ```curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.16.0/kind-$(uname)-amd64``` 439 | 440 | ```chmod +x ./kind``` 441 | 442 | ```sudo mv ./kind /usr/local/bin/``` 443 | 444 | Now that kind is installed, we can create the Kubernetes cluster: 445 | 446 | ```echo $KUBECONFIG``` 447 | 448 | ```ls $HOME/.kube``` 449 | 450 | ```kind create cluster --name wslkube``` 451 | 452 | ```ls $HOME/.kube``` 453 | 454 | We have successfully created a single-node Kubernetes cluster. 455 | 456 | ```kubectl get nodes``` 457 | 458 | ```kubectl get all --all-namespaces``` 459 | 460 | ### Installing Kubernetes on WSL with Microk8s 461 | 462 | [Back to the Top](#table-of-contents) 463 | 464 | * **Note:** This install option requires systemd to be running on WSL. 465 | 466 | * **WSL Systemd requirements:** Windows 11 and WSL version 0.67.6 or above. 467 | 468 | [MicroK8s](https://microk8s.io/) is the simplest production-grade upstream Kubernetes setup to get up and running. 469 | 470 | Installing MicroK8s: 471 | 472 | ```sudo snap install microk8s --classic``` 473 | 474 | Checking the status while Kubernetes starts: 475 | 476 | ```microk8s status --wait-ready``` 477 | 478 | Turning on the services you want: 479 | 480 | ```microk8s enable dashboard dns registry istio``` 481 | 482 | Try **microk8s enable --help** for a list of available services and optional features. **microk8s disable** followed by the addon name turns off a service. 483 | 484 | Start using Kubernetes: 485 | 486 | ```microk8s kubectl get all --all-namespaces``` 487 | 488 | If you mainly use MicroK8s you can make its kubectl the default one on your command-line with **alias mkctl="microk8s kubectl"**.
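To keep that alias across shell sessions, it can be appended to your shell's startup file (a sketch assuming bash and `~/.bashrc`; adjust for your shell):

```shell
# Persist the alias so new shells pick it up (assumes bash reads ~/.bashrc).
echo 'alias mkctl="microk8s kubectl"' >> "$HOME/.bashrc"
```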
489 | 490 | 491 | Access the Kubernetes dashboard: 492 | 493 | ```microk8s dashboard-proxy``` 494 | 495 | 496 | # Kubernetes Tools, Frameworks, and Projects 497 | 498 | [Back to the Top](#table-of-contents) 499 | 500 | [Open Container Initiative](https://opencontainers.org/about/overview/) is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes. 501 | 502 | [Buildah](https://buildah.io/) is a command line tool to build Open Container Initiative (OCI) images. It can be used with Docker, Podman, and Kubernetes. 503 | 504 | [Podman](https://podman.io/) is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine. 505 | 506 | [Containerd](https://containerd.io) is a daemon that manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments and beyond. It is available for Linux and Windows. 507 | 508 | [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/) is a managed, production-ready environment for running containerized applications. 509 | 510 | [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/) offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance. Unite your development and operations teams on a single platform to rapidly build, deliver, and scale applications with confidence.
511 | 512 | [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) is a tool that runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. 513 | 514 | [AWS Controllers for Kubernetes (ACK)](https://aws.amazon.com/blogs/containers/aws-controllers-for-kubernetes-ack/) is a new tool that lets you directly manage AWS services from Kubernetes. ACK makes it simple to build scalable and highly-available Kubernetes applications that utilize AWS services. 515 | 516 | [Container Engine for Kubernetes (OKE)](https://www.oracle.com/cloud-native/container-engine-kubernetes/) is an Oracle-managed container orchestration service that can reduce the time and cost to build modern cloud native applications. Unlike most other vendors, Oracle Cloud Infrastructure provides Container Engine for Kubernetes as a free service that runs on higher-performance, lower-cost compute. 517 | 518 | [Anthos](https://cloud.google.com/anthos/docs/concepts/overview) is a modern application management platform that provides a consistent development and operations experience for cloud and on-premises environments. 519 | 520 | [Red Hat Openshift](https://www.openshift.com/) is a fully managed Kubernetes platform that provides a foundation for on-premises, hybrid, and multicloud deployments. 521 | 522 | [OKD](https://okd.io/) is a community distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. 523 | 524 | [Odo](https://odo.dev/) is a fast, iterative, and straightforward CLI tool for developers who write, build, and deploy applications on Kubernetes and OpenShift. 
525 | 526 | [Kata Operator](https://github.com/openshift/kata-operator) is an operator to perform lifecycle management (install/upgrade/uninstall) of [Kata Runtime](https://katacontainers.io/) on OpenShift as well as Kubernetes clusters. 527 | 528 | [Thanos](https://thanos.io/) is a set of components that can be composed into a highly available metric system with unlimited storage capacity, which can be added seamlessly on top of existing Prometheus deployments. 529 | 530 | [OpenShift Hive](https://github.com/openshift/hive) is an operator which runs as a service on top of Kubernetes/OpenShift. The Hive service can be used to provision and perform initial configuration of OpenShift 4 clusters. 531 | 532 | [Rook](https://rook.io/) is a tool that turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management. 533 | 534 | [VMware Tanzu](https://tanzu.vmware.com/tanzu) is a centralized management platform for consistently operating and securing your Kubernetes infrastructure and modern applications across multiple teams and private/public clouds. 535 | 536 | [Kubespray](https://kubespray.io/) is a tool that combines Kubernetes and Ansible to easily install Kubernetes clusters that can be deployed on [AWS](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/aws.md), GCE, [Azure](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/azure.md), [OpenStack](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/openstack.md), [vSphere](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/vsphere.md), [Packet](https://github.com/kubernetes-sigs/kubespray/blob/master/docs/packet.md) (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal.
[KubeInit](https://github.com/kubeinit/kubeinit) provides Ansible playbooks and roles for the deployment and configuration of multiple Kubernetes distributions.

[Rancher](https://rancher.com/) is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters, while providing DevOps teams with integrated tools for running containerized workloads.

[K3s](https://github.com/rancher/k3s) is a highly available, certified Kubernetes distribution designed for production workloads in unattended, resource-constrained, remote locations or inside IoT appliances.

[Helm](https://helm.sh/) is a package manager for Kubernetes that makes it easier to install and manage Kubernetes applications.

[Knative](https://knative.dev/) is a Kubernetes-based platform to build, deploy, and manage modern serverless workloads. Knative takes care of the operational overhead details of networking, autoscaling (even to zero), and revision tracking.

[KubeFlow](https://www.kubeflow.org/) is a tool dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable.

[Kubebox](https://github.com/astefanutti/kubebox) is a terminal and web console for Kubernetes.

[Kubesec](https://github.com/controlplaneio/kubesec) is a security risk analysis tool for Kubernetes resources.

[Replex](https://www.replex.io/) is a Kubernetes governance and cost management platform for the cloud-native enterprise.

[Virtual Kubelet](https://virtual-kubelet.io/) is an open-source [Kubernetes kubelet](https://kubernetes.io/docs/reference/generated/kubelet/) implementation that masquerades as a kubelet.

[Telepresence](https://www.telepresence.io/) is a tool for fast, local development of Kubernetes and OpenShift microservices.
[Weave Scope](https://www.weave.works/oss/scope/) is a tool that automatically detects processes, containers, and hosts, with no kernel modules, no agents, no special libraries, and no coding required. It integrates seamlessly with Docker, Kubernetes, DC/OS, and AWS ECS.

[Nuclio](https://nuclio.io/) is a high-performance "serverless" framework focused on data, I/O, and compute intensive workloads. It is well integrated with popular data science tools, such as [Jupyter](https://jupyter.org/) and [Kubeflow](https://www.kubeflow.org/); supports a variety of data and streaming sources; and supports execution over CPUs and GPUs.

[Supergiant Control](https://github.com/supergiant/control) is a tool that manages the lifecycle of clusters on your infrastructure and allows deployment of applications via Helm. Its deployment and configuration workflows will help you to get up and running with Kubernetes faster.

[Supergiant Capacity - Beta](https://github.com/supergiant/capacity) is a tool that ensures that the right hardware is available for the required resource load of your Kubernetes cluster at any given time. This helps prevent over-provisioning of your container environment and overspending on your hardware budget.

[Test suite for Kubernetes](https://github.com/mrahbar/k8s-testsuite) is a test suite consisting of two Helm charts for network bandwidth testing and load testing a Kubernetes cluster.

[Keel](https://github.com/keel-hq/keel) is a Kubernetes Operator to automate Helm, DaemonSet, StatefulSet & Deployment updates.

[Kube Monkey](https://github.com/asobti/kube-monkey) is an implementation of Netflix's Chaos Monkey for Kubernetes clusters. It randomly deletes Kubernetes (k8s) pods in the cluster, encouraging and validating the development of failure-resilient services.
[Kube State Metrics (KSM)](https://github.com/kubernetes/kube-state-metrics) is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. It's not focused on the health of the individual Kubernetes components, but rather on the health of the various objects inside, such as deployments, nodes, and pods.

[Sonobuoy](https://sonobuoy.io/) is a diagnostic tool that makes it easier to understand the state of a Kubernetes cluster by running a choice of configuration tests in an accessible and non-destructive manner.

[PowerfulSeal](https://github.com/powerfulseal/powerfulseal) is a powerful testing tool for your Kubernetes clusters, so that you can detect problems as early as possible.

[Test Infra](https://github.com/kubernetes/test-infra) is a repository that contains tools and configuration files for the testing and automation needs of the Kubernetes project.

[cAdvisor (Container Advisor)](https://github.com/google/cadvisor) is a tool that provides container users an understanding of the resource usage and performance characteristics of their running containers. It's a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage, and network statistics.

[Etcd](https://etcd.io/) is a distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. Etcd is used as the backend for service discovery and stores cluster state and configuration for Kubernetes.

[Nacos](https://github.com/alibaba/nacos) is an easy-to-use dynamic service discovery, configuration, and service management platform for building cloud native applications.
[Kuma](https://kuma.io/install) is a modern Envoy-based service mesh that can run on every cloud, in a single or multi-zone capacity, across both Kubernetes and VMs. Thanks to its broad universal workload support, combined with native support for Envoy as its data plane proxy technology (but with no Envoy expertise required), Kuma provides modern L4-L7 service connectivity, discovery, security, observability, routing and more across any service on any platform, databases included.

[Open Service Mesh (OSM)](https://openservicemesh.io/) is a lightweight, extensible, cloud native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.

[KServe](https://github.com/kserve/kserve) is a standardized serverless ML inference platform on Kubernetes.

[Naftis](https://github.com/XiaoMi/naftis) is an awesome dashboard for Istio built with love.

[Traefik Mesh](https://traefik.io/traefik-mesh) is a simple, yet full-featured service mesh. It is container-native and fits as your de-facto service mesh in your Kubernetes cluster. It supports the latest Service Mesh Interface specification, [SMI](https://smi-spec.io/), which facilitates integration with pre-existing solutions.

[Meshery](https://meshery.io/) is the cloud native management plane offering lifecycle, configuration, and performance management of Kubernetes, service meshes, and your workloads.

[kubectx](https://kubectx.dev/) is a tool to switch between contexts (clusters) in kubectl faster.

[Dapr](https://dapr.io/) is a portable, event-driven runtime for building distributed applications across cloud and edge.

[OpenEBS](https://openebs.io/) is a Kubernetes-based tool to create stateful applications using Container Attached Storage.
[Container Storage Interface (CSI)](https://www.architecting.it/blog/container-storage-interface/) is an API that lets container orchestration platforms like Kubernetes seamlessly communicate with stored data via a plug-in.

[MicroK8s](https://microk8s.io/) is a tool that delivers the full Kubernetes experience in a fully containerized deployment, with compressed over-the-air updates for ultra-reliable operations. It is supported on Linux, Windows, and macOS.

[Charmed Kubernetes](https://ubuntu.com/kubernetes/features) is a well-integrated, turn-key, conformant Kubernetes platform developed by Canonical and optimized for your multi-cloud environments.

[Grafana Kubernetes App](https://grafana.com/grafana/plugins/grafana-kubernetes-app) is a tool that allows you to monitor your Kubernetes cluster's performance. It includes four dashboards: Cluster, Node, Pod/Container, and Deployment. It allows for the automatic deployment of the required Prometheus exporters and a default scrape config to use with your in-cluster Prometheus deployment.

[KubeEdge](https://kubeedge.io/en/) is an open source system for extending native containerized application orchestration capabilities to hosts at the edge. It is built upon Kubernetes and provides fundamental infrastructure support for networking, application deployment, and metadata synchronization between cloud and edge.

[Lens](https://k8slens.dev/) is the most powerful IDE for people who need to deal with Kubernetes clusters on a daily basis. It has support for macOS, Windows, and Linux operating systems.

[kind](https://kind.sigs.k8s.io/) is a tool for running local Kubernetes clusters using Docker container "nodes". It was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
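kind clusters can be shaped with a small declarative config file. As a sketch (the file name `kind-config.yaml` is just an example), the following grows the default single-node cluster to one control-plane node and two workers; it would be passed as `kind create cluster --config kind-config.yaml`:

```yaml
# kind-config.yaml: a local cluster with one control-plane node and two workers.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Multi-node local clusters like this are handy for exercising scheduling and node-failure behavior before touching a real cluster.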
[Flux CD](https://fluxcd.io/) is a tool that automatically ensures that the state of your Kubernetes cluster matches the configuration you've supplied in Git. It uses an operator in the cluster to trigger deployments inside Kubernetes, which means that you don't need a separate continuous delivery tool.

[Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) is a Kubernetes-as-a-service offering that ensures fully automated Day-2 operations with a 99.9% SLA in any environment, whether in data centers, public clouds, or at the edge.

## Getting Started with OpenShift

[Back to the Top](#table-of-contents)

629 | 630 |
631 |

### What is OpenShift?

[Red Hat OpenShift](https://www.openshift.com/) is an open source container application platform based on the Kubernetes container orchestrator, built for enterprise application development and deployment in the hybrid cloud. OpenShift can manage applications written in different languages and frameworks, such as Ruby, Node.js, Java, Perl, and Python.

638 | 639 |
640 |

**Red Hat OpenShift Development Architecture. Source: [Red Hat](https://www.redhat.com/en/resources/openshift-container-storage-datasheet)**

### OpenShift Developer Resources

[Back to the Top](#table-of-contents)

* [Get Started with the CLI on OpenShift](https://docs.openshift.com/container-platform/3.9/cli_reference/get_started_cli.html)

* [CI/CD with OpenShift](https://www.openshift.com/blog/cicd-with-openshift)

* [AI/ML on OpenShift](https://www.openshift.com/learn/topics/ai-ml)

* [Red Hat OpenShift on VMware](https://www.openshift.com/learn/topics/openshift-on-vmware)

* [Understanding service mesh with OpenShift](https://docs.openshift.com/container-platform/4.1/service_mesh/service_mesh_arch/understanding-ossm.html)

* [IBM Redbooks | Red Hat](https://www.redbooks.ibm.com/domains/redhat)

* [DevOps Training & Tutorials | Red Hat Developer](https://developers.redhat.com/topics/devops)

* [All Topics for Software Developers | Red Hat Developer](https://developers.redhat.com/topics)

* [Develop Applications on OpenShift](https://developers.redhat.com/openshift)

* [Automate application security with OpenShift Pipelines](https://developers.redhat.com/topics/devsecops)

* [What is the difference between OpenShift and Kubernetes?](https://developers.redhat.com/openshift/difference-openshift-kubernetes/)

* [What books are available about OpenShift?](https://developers.redhat.com/openshift/openshift-books/)

* [Where can I try out OpenShift to see what it is like?](https://developers.redhat.com/openshift/try-openshift/)

* [How can I run OpenShift on my own computer for development?](https://developers.redhat.com/openshift/local-openshift/)

* [What hosting services are there that use OpenShift?](https://developers.redhat.com/openshift/hosting-openshift/)

### Certifications & Courses

[Back to the Top](#table-of-contents)

* [OpenShift Training from Red Hat](https://www.redhat.com/en/openshift-training)

* [OpenShift: Interactive Learning Portal](https://learn.openshift.com/)

* [Red Hat Certified Specialist in OpenShift Administration](https://www.redhat.com/en/services/certification/rhcs-paas)

* [Red Hat OpenShift Operator Certification](https://www.openshift.com/blog/red-hat-openshift-operator-certification)

* [Kubernetes and OpenShift: Community, Standards and Certifications](https://www.openshift.com/blog/kubernetes-and-openshift-community-standards-and-certifications)

* [OpenShift Courses | Udemy](https://www.udemy.com/topic/openshift/)

* [OpenShift - Deploying Applications course | Coursera](https://www.coursera.org/lecture/ibm-cloud-essentials/openshift-499y0)

* [Introduction to Containers w/ Docker, Kubernetes & OpenShift course | Coursera](https://www.coursera.org/learn/ibm-containers-docker-kubernetes-openshift)

* [Fundamentals of Containers, Kubernetes, and Red Hat OpenShift | edX](https://www.edx.org/course/fundamentals-of-containers-kubernetes-and-red-hat)

### Books

[Back to the Top](#table-of-contents)

* [OpenShift for Developers, Second Edition by Joshua Wood & Brian Tannous](https://developers.redhat.com/e-books/openshift-for-developers)

* [Introducing Istio Service Mesh for Microservices by Burr Sutter and Christian Posta](https://developers.redhat.com/e-books/introducing-istio-service-mesh-microservices-old)

* [DevOps with OpenShift by Stefano Picozzi, Mike Hepburn & Noel O'Connor](https://developers.redhat.com/topics/devops)

* [Microservices for Java Developers: A Hands-on Introduction to Frameworks and Containers by Rafael
Benevides](https://developers.redhat.com/e-books/microservices-java-developers-hands-introduction-frameworks-and-containers-old)

* [Migrating to Microservice Databases: From Relational Monolith to Distributed Data by Edson Yanaga](https://developers.redhat.com/e-books/migrating-microservice-databases-relational-monolith-distributed-data-old)

* [OpenShift 3 for Developers: A Guide for Impatient Beginners by Grant Shipley, Graham Dumpleton](https://developers.redhat.com/e-books/openshift-developers-guide-impatient-beginners-old)

* [Using the IBM Block Storage CSI Driver in a Red Hat OpenShift Environment](https://www.redbooks.ibm.com/abstracts/redp5613.html)

* [Storage Multi-tenancy for Red Hat OpenShift Container Platform with IBM Storage](https://www.redbooks.ibm.com/abstracts/redp5638.html)

* [An Implementation of Red Hat OpenShift Network Isolation Using Multiple Ingress Controllers](https://www.redbooks.ibm.com/abstracts/redp5641.html)

* [IBM Spectrum Scale as a Persistent Storage for Red Hat OpenShift on IBM Z Quick Installation Guide](https://www.redbooks.ibm.com/abstracts/redp5645.html)

* [Innovate at Scale and Deploy with Confidence in a Hybrid Cloud Environment](https://www.redbooks.ibm.com/abstracts/redp5621.html)

### Source-to-Image (S2I) images for programming/building your Apps

[Back to the Top](#table-of-contents)

#### Java

731 | 732 |
733 |

734 | 735 | * [Java - Source-to-Image (S2I) Builder Images for OpenShift](https://docs.openshift.com/online/pro/using_images/s2i_images/java.html). 736 | 737 | #### Python 738 | 739 |

740 | 741 |
742 |

743 | 744 | * [Python - Source-to-Image (S2I) Builder Images for OpenShift](https://docs.openshift.com/online/pro/using_images/s2i_images/python.html). 745 | 746 | #### Golang 747 | 748 |

749 | 750 |
751 |

* [Golang - Source-to-Image (S2I) Builder Images for OpenShift](https://github.com/sclorg/golang-container).

#### Ruby

758 | 759 |
760 |

761 | 762 | * [Ruby - Source-to-Image (S2I) Builder Images for OpenShift](https://docs.openshift.com/online/pro/using_images/s2i_images/ruby.html). 763 | 764 | #### .NET Core 765 | 766 |

767 | 768 |
769 |

* [.NET Core - Source-to-Image (S2I) Builder Images for OpenShift](https://docs.openshift.com/online/pro/using_images/s2i_images/dot_net_core.html).

#### Node.js

776 | 777 |
778 |

779 | 780 | * [Node.js - Source-to-Image (S2I) Builder Images for OpenShift](https://docs.openshift.com/online/pro/using_images/s2i_images/nodejs.html). 781 | 782 | #### Perl 783 | 784 |

785 | 786 |
787 |

788 | 789 | * [Perl - Source-to-Image (S2I) Builder Images for OpenShift](https://docs.openshift.com/online/pro/using_images/s2i_images/perl.html). 790 | 791 | #### PHP 792 | 793 |

794 | 795 |
796 |

797 | 798 | * [PHP - Source-to-Image (S2I) Builder Images for OpenShift](https://docs.openshift.com/online/pro/using_images/s2i_images/php.html). 799 | 800 | ### Builder Images for setting up Databases 801 | 802 | [Back to the Top](#table-of-contents) 803 | 804 | #### MySQL 805 | 806 |

807 | 808 |
809 |

810 | 811 | * [MySQL - Database Images for OpenShift](https://docs.openshift.com/online/pro/using_images/db_images/mysql.html) 812 | 813 | #### PostgreSQL 814 | 815 |

816 | 817 |
818 |

819 | 820 | * [PostgreSQL - Database Images for OpenShift](https://docs.openshift.com/online/pro/using_images/db_images/postgresql.html) 821 | 822 | #### MongoDB 823 | 824 |

825 | 826 |
827 |

828 | 829 | * [MongoDB - Database Images for OpenShift](https://docs.openshift.com/online/pro/using_images/db_images/mongodb.html) 830 | 831 | #### MariaDB 832 | 833 |

834 | 835 |
836 |

837 | 838 | * [MariaDB - Database Images for OpenShift](https://docs.openshift.com/online/pro/using_images/db_images/mariadb.html) 839 | 840 | #### Redis 841 | 842 |

843 | 844 |
845 |

846 | 847 | * [Redis - Database Images for OpenShift](https://github.com/sclorg/redis-container) 848 | 849 | ### Setting up on Microsoft Azure 850 | 851 | [Back to the Top](#table-of-contents) 852 | 853 |

854 | 855 |
856 |

[Microsoft Azure Red Hat OpenShift](https://learn.microsoft.com/en-us/azure/openshift/) is a fully managed offering of OpenShift running in Azure. This service is jointly managed and supported by [Microsoft](https://www.microsoft.com) and [Red Hat](https://redhat.com/).

**Requirements:**

* [Azure CLI](https://learn.microsoft.com/en-us/cli/azure/) version 2.6.0 or later.

* **56 vCPUs**, which typically means you must increase your account's vCPU limit.

By default, each cluster creates the following instances:

* One bootstrap machine, which is removed after installation

* Three control plane machines

* Three compute machines

Because the bootstrap, control plane, and worker machines use `Standard_DS4_v2` virtual machines, which use **8 vCPUs**, a default cluster requires **56 vCPUs**. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require.

* **1 VNet.** Each default cluster requires one Virtual Network (VNet), which contains two subnets.

* **7 Network interfaces.** Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces.

* **2 Network security groups.** Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets:

  * `controlplane`: Allows the control plane machines to be reached on port 6443 from anywhere.

  * `node`: Allows worker nodes to be reached from the internet on ports 80 and 443.
* **3 Network load balancers.** Each cluster creates the following load balancers:

  * `default`: Public IP address that load balances requests to ports 80 and 443 across worker machines.

  * `internal`: Private IP address that load balances requests to ports 6443 and 22623 across control plane machines.

  * `external`: Public IP address that load balances requests to port 6443 across control plane machines.

  * **Note:** If your applications create more Kubernetes `LoadBalancer` service objects, your cluster uses more load balancers.

* **2 Public IP addresses.** The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation.

* **7 Private IP addresses.** The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address.
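The machine counts above are exactly where the 56-vCPU requirement comes from; a quick sanity check of the arithmetic:

```shell
# Default cluster: 1 bootstrap + 3 control plane + 3 compute machines,
# each a Standard_DS4_v2 instance with 8 vCPUs.
machines=$((1 + 3 + 3))
vcpus_per_machine=8
total_vcpus=$((machines * vcpus_per_machine))
echo "A default cluster needs ${total_vcpus} vCPUs"   # 7 machines x 8 vCPUs = 56
```

If you change the instance type or worker count, redo this multiplication against your quota before installing.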

908 | 909 |
910 |

911 | 912 | Ingress traffic to an Azure Red Hat OpenShift cluster. Image Credit: [Red Hat](https://www.redhat.com/en/blog/how-deploy-azure-red-hat-openshift) 913 | 914 |

915 | 916 |
917 |

Egress traffic from an Azure Red Hat OpenShift cluster and connection to the cluster. Image Credit: [Red Hat](https://www.redhat.com/en/blog/how-deploy-azure-red-hat-openshift)

#### Register the Resource Providers

* [Member and guest users.](https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/users-default-permissions#member-and-guest-users)

* [Assign administrator and non-administrator roles to users with Azure Active Directory.](https://learn.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal)

**If you have multiple Azure subscriptions, specify the relevant subscription ID:**

```az account set --subscription <subscription-id>```

**Register the Microsoft.RedHatOpenShift resource provider:**

```az provider register -n Microsoft.RedHatOpenShift --wait```

**Register the Microsoft.Compute resource provider:**

```az provider register -n Microsoft.Compute --wait```

**Register the Microsoft.Storage resource provider:**

```az provider register -n Microsoft.Storage --wait```

**Register the Microsoft.Authorization resource provider:**

```az provider register -n Microsoft.Authorization --wait```

The commands below assume the `$RESOURCEGROUP`, `$LOCATION`, and `$CLUSTER` environment variables are set; for example, `export RESOURCEGROUP=aro-rg LOCATION=eastus CLUSTER=aro-cluster` (the values shown are placeholders).

**Create a Resource Group:**

```
az group create \
  --name $RESOURCEGROUP \
  --location $LOCATION
```

**Create a Virtual Network:**

```
az network vnet create \
  --resource-group $RESOURCEGROUP \
  --name aro-vnet \
  --address-prefixes 10.0.0.0/22
```

**Add an empty subnet for the master nodes:**

```
az network vnet subnet create \
  --resource-group $RESOURCEGROUP \
  --vnet-name aro-vnet \
  --name master-subnet \
  --address-prefixes 10.0.0.0/23 \
  --service-endpoints Microsoft.ContainerRegistry
```

**Add an
empty subnet for the worker nodes.** 974 | 975 | ``` 976 | az network vnet subnet create \ 977 | --resource-group $RESOURCEGROUP \ 978 | --vnet-name aro-vnet \ 979 | --name worker-subnet \ 980 | --address-prefixes 10.0.2.0/23 \ 981 | --service-endpoints Microsoft.ContainerRegistry 982 | ``` 983 | 984 | **[Disable subnet private endpoint policies](https://learn.microsoft.com/en-us/azure/private-link/disable-private-link-service-network-policy) on the master subnet.** 985 | 986 | ``` 987 | az network vnet subnet update \ 988 | --name master-subnet \ 989 | --resource-group $RESOURCEGROUP \ 990 | --vnet-name aro-vnet \ 991 | --disable-private-link-service-network-policies true 992 | 993 | ``` 994 | 995 | **Creating a Cluster** 996 | 997 | ``` 998 | az aro create \ 999 | --resource-group $RESOURCEGROUP \ 1000 | --name $CLUSTER \ 1001 | --vnet aro-vnet \ 1002 | --master-subnet master-subnet \ 1003 | --worker-subnet worker-subnet 1004 | 1005 | ``` 1006 | 1007 | ### Setting up on Google Cloud (GCP) 1008 | 1009 | [Back to the Top](#table-of-contents) 1010 | 1011 |

1012 | 1013 |
1014 |

#### Minimum Requirements:

* [gcloud CLI](https://cloud.google.com/sdk/gcloud/) or [OpenShift CLI (oc)](https://access.redhat.com/downloads/content/290).

**Master Nodes:**

* Minimum 4 vCPUs (additional vCPUs are strongly recommended).

* Minimum 16 GB RAM (additional memory is strongly recommended, especially if etcd is co-located on masters).

* Minimum 40 GB hard disk space for the file system.

**Worker Nodes:**

* 1 vCPU.

* Minimum 8 GB RAM.

* Minimum 15 GB hard disk space for the file system.

* If you don't have a GCP account already, [sign up for Cloud Platform](https://cloud.google.com/free-trial/), set up billing, and activate APIs.

* Set up a service account. A service account is a way to interact with your GCP resources by using a different identity than your primary login and is generally intended for server-to-server interaction. From the GCP Navigation Menu, click on **"Permissions."**

* Click on **"Service accounts."**

1043 | 1044 |
1045 |

1046 | 1047 | Click on **"Create service account,"** which will prompt you to enter a [service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#overview) name. Provide a name for your project and click on **"Furnish a new private key."** The default **"JSON"** Key type should be left selected. 1048 | 1049 |

1050 | 1051 |
1052 |

Once you click **"Create,"** a service account **.json** key file will be downloaded to your browser's downloads location.

* **Important:** Like any credential, this file represents an access mechanism to authenticate and use resources in your GCP account. Never place this file in a publicly accessible source repo (public GitHub or GitLab).

You can make the JSON credential available to your OpenShift cluster via a Kubernetes secret. To do so, first perform a base64 encoding of your JSON credential file:

```base64 -i ~/path/to/downloads/credentials.json```

Keep the output (a very long string) ready for use in the next step, where you'll replace `BASE64_CREDENTIAL_STRING` in the pod example (below) with the output just captured from base64 encoding.

* **Note:** base64 is an encoding (not encryption) and can be readily reversed, so this file (with the base64 string) is just as confidential as the credential file above.

Create the [Kubernetes secret](http://kubernetes.io/docs/user-guide/secrets/) inside your OpenShift cluster. A secret is the proper place to make sensitive information available to pods running in your cluster (like passwords or the credentials downloaded in the previous step).

```
apiVersion: v1
kind: Secret
metadata:
  name: google-services-secret
type: Opaque
data:
  google-services.json: BASE64_CREDENTIAL_STRING
```

**Note:** Replace `BASE64_CREDENTIAL_STRING` with the base64 output from the prior step.

**Deploy the secret to the cluster:**

```oc create -f google-secret.yaml```


### Setting up Red Hat OpenShift Data Science

[Back to the Top](#table-of-contents)

1090 | 1091 |
1092 |

1093 | 1094 | [Red Hat® OpenShift® Data Science](https://www.redhat.com/en/technologies/cloud-computing/openshift/openshift-data-science) is a fully managed cloud service for data scientists and developers of intelligent applications on [Red Hat OpenShift Dedicated](https://cloud.redhat.com/products/dedicated/) or [Red Hat OpenShift Service on AWS](https://cloud.redhat.com/products/amazon-openshift). It provides a fully supported sandbox in which to rapidly develop, train, and test machine learning (ML) models in the public cloud before deploying in production. 1095 | 1096 | * [Red Hat OpenShift Data Science learning tutorials](https://developers.redhat.com/learn/openshift-data-science) 1097 | 1098 | 1099 |

1100 | 1101 |
1102 | Installing Red Hat OpenShift Data Science 1103 |

1104 | 1105 |

1106 | 1107 |
1108 | Opening Red Hat OpenShift Data Science 1109 |

1110 | 1111 |

1112 | 1113 |
JupyterHub on Red Hat OpenShift Data Science

1116 | 1117 |

1118 | 1119 |
1120 | Exploring Tools on Red Hat OpenShift Data Science 1121 |

1122 | 1123 |

1124 | 1125 |
1126 | Setting up JupyterHub Notebook Server 1127 |

1128 | 1129 |

1130 | 1131 |
1132 | Creating a new Python 3 Notebook 1133 |

1134 | 1135 |

1136 | 1137 |
1138 | Python 3 JupyterHub Notebook 1139 |

1140 | 1141 |

1142 | 1143 |
1144 | JupyterHub Notebook Sample Demo 1145 |

1146 | 1147 |

1148 | 1149 |
1150 | OpenShift Project Models 1151 |

1152 | 1153 |

1154 | 1155 |
1156 | How OpenShift integrates with JupyterHub using Python - Source-to-Image (S2I) 1157 |

### Red Hat CodeReady Containers (CRC)

[Back to the Top](#table-of-contents)

[Red Hat CodeReady Containers (CRC)](https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/crc/2.9.0) is a tool that provides a minimal, preconfigured OpenShift 4 cluster on a laptop or desktop machine for development and testing purposes. CRC is delivered as a platform inside a virtual machine (VM).

* **odo (OpenShift Do)**: a CLI tool for developers to manage application components on the OpenShift Container Platform.

1168 | 1169 |
1170 |

**System Requirements:**

* **OS:** CentOS Stream 8/RHEL 8/Fedora or later (the latest 2 releases).
* **Download:** [pull-secret](https://cloud.redhat.com/openshift/install/crc/installer-provisioned?intcmp=701f20000012ngPAAQ)
* **Login:** [Red Hat account](https://access.redhat.com/login)

**Other physical requirements include:**

* Four virtual CPUs (**4 vCPUs**)
* 10 GB of memory (**RAM**)
* 40 GB of storage space

**To set up CodeReady Containers, start by creating the `crc` directory, and then download and extract the `crc` package:**

```mkdir /home/<user>/crc```

```wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz```

```tar -xvf crc-linux-amd64.tar.xz```

**Next, move the files to the `crc` directory and remove the downloaded package(s):**

```mv /home/<user>/crc-linux-<version>-amd64/* /home/<user>/crc```

```rm /home/<user>/crc-linux-amd64.tar.xz```

```rm -r /home/<user>/crc-linux-<version>-amd64```

**Change to the `crc` directory, make `crc` executable, and export your `PATH` like this:**

```cd /home/<user>/crc```

```chmod +x crc```

```export PATH=$PATH:/home/<user>/crc```

**Set up and start the cluster:**

```crc setup```

```crc start -p <path>/pull-secret.txt```

**Set up the `oc` environment:**

```crc oc-env```

```eval $(crc oc-env)```

**Log in as the developer user:**

```oc login -u developer -p developer https://api.crc.testing:6443```

```oc logout```

**And then, log in as the platform's admin:**

```oc login -u kubeadmin -p password https://api.crc.testing:6443```

```oc logout```

#### Interacting with the cluster
The most common ways include: 1233 | 1234 | **Start the graphical web console:** 1235 | 1236 | ```crc console``` 1237 | 1238 | **Display the cluster’s status:** 1239 | 1240 | ```crc status``` 1241 | 1242 | **Shut down the OpenShift cluster:** 1243 | 1244 | ```crc stop``` 1245 | 1246 | **Delete the OpenShift cluster:** 1247 | 1248 | ```crc delete``` 1249 | 1250 |
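The install-and-start sequence above can also be collected into a single script. The following is a minimal, untested sketch of the same steps; the pull-secret location and directory layout are assumptions you should adjust:

```shell
#!/usr/bin/env bash
# Sketch of the CRC setup sequence above as one script.
# Assumption: pull-secret.txt has already been downloaded to $HOME.
set -euo pipefail

CRC_DIR="$HOME/crc"
PULL_SECRET="$HOME/pull-secret.txt"   # adjust to wherever you saved it

mkdir -p "$CRC_DIR"
cd "$CRC_DIR"
wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz
tar -xvf crc-linux-amd64.tar.xz
mv crc-linux-*-amd64/crc .            # keep only the binary
rm -r crc-linux-*-amd64 crc-linux-amd64.tar.xz
chmod +x crc
export PATH="$PATH:$CRC_DIR"

crc setup                             # one-time host configuration
crc start -p "$PULL_SECRET"           # boot the single-node cluster
eval "$(crc oc-env)"                  # put the bundled oc on PATH
```

Running it requires a hypervisor, network access, and a valid pull secret, so treat it as an outline rather than a drop-in installer.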

1251 | 1252 |
1253 |

1254 | 1255 | ### Setting up Podman 1256 | 1257 | [Back to the Top](#table-of-contents) 1258 | 1259 | [Podman (the POD manager)](https://podman.io/) is an open source tool for developing, managing, and running containers on your Linux systems. It also manages the entire container ecosystem using the libpod library. Podman’s daemonless and inclusive architecture makes it a more secure and accessible option for container management, and its accompanying tools and features, such as [Buildah](https://www.redhat.com/en/topics/containers/what-is-buildah) and [Skopeo](https://www.redhat.com/en/topics/containers/what-is-skopeo), allow developers to customize their container environments to best suit their needs. 1260 | 1261 | * [Libpod](https://pkg.go.dev/github.com/containers/podman/libpod) provides a library for applications looking to use the Container Pod concept made popular by Kubernetes. 1262 | 1263 | **Installing Podman:** 1264 | 1265 | * Fedora: ```sudo dnf install podman``` 1266 | * CentOS Stream: ```sudo dnf install podman``` 1267 | * Ubuntu 20.04 or later: ```sudo apt install podman``` 1268 | * Debian 11 (bullseye) or later, or sid/unstable: ```sudo apt install podman``` 1269 | * openSUSE: ```sudo zypper install podman``` 1270 | * ArchLinux: ```sudo pacman -S podman```, followed by additional tweaks for rootless use 1271 | 1272 | [Podman Desktop](https://github.com/containers/podman-desktop) is a tool to manage Podman and other container engines from a single UI and system tray on your local machine. 1273 | 1274 |
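Once installed, Podman’s CLI mirrors Docker’s. A few representative commands as a sketch (the Fedora image is just an example):

```shell
# Pull an image, run a throwaway container, and list local state with Podman.
podman pull registry.fedoraproject.org/fedora:latest
podman run --rm registry.fedoraproject.org/fedora:latest echo "hello from a rootless container"
podman ps -a     # list containers, including stopped ones
podman images    # list locally stored images
```

Because Podman is daemonless, these commands work rootless out of the box on most distributions.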

1275 | 1276 |

1279 | 1280 |

1281 | 1282 |

1285 | 1286 | ### Setting up Buildah 1287 | 1288 | [Back to the Top](#table-of-contents) 1289 | 1290 | [Buildah](https://buildah.io/) is an open source, Linux-based tool that can build Docker- and Kubernetes-compatible images, and is easy to incorporate into scripts and build pipelines. In addition, Buildah overlaps in functionality with [Podman](https://podman.io/), [Skopeo](https://github.com/containers/skopeo), and [CRI-O](https://cri-o.io/). 1291 | 1292 | * Fedora: ```sudo dnf -y install buildah``` 1293 | * CentOS Stream: ```sudo dnf -y install buildah``` 1294 | * Ubuntu 20.04 or later: ```sudo apt install buildah``` 1295 | * Debian 11 (bullseye) or later, or sid/unstable: ```sudo apt install -y buildah``` 1296 | * openSUSE: ```sudo zypper install buildah``` 1297 | * ArchLinux: ```sudo pacman -S buildah```, followed by additional tweaks for rootless use 1298 | 1299 |
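As an illustration of scripted builds, a small OCI image can be assembled from the shell without a Dockerfile. This is a hedged sketch; the base image and the installed package are only examples:

```shell
# Build an image step by step with Buildah, then commit it.
ctr=$(buildah from registry.fedoraproject.org/fedora:latest)   # create a working container
buildah run "$ctr" -- dnf install -y nginx                     # modify its filesystem
buildah config --port 80 --entrypoint '["nginx","-g","daemon off;"]' "$ctr"
buildah commit "$ctr" my-nginx:latest                          # commit to a local image
buildah rm "$ctr"                                              # remove the working container
```

Each step is an ordinary command, which is what makes Buildah easy to drop into CI pipelines.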

1300 | 1301 |

1304 | 1305 | ### Setting up Skopeo 1306 | 1307 | [Back to the Top](#table-of-contents) 1308 | 1309 | [Skopeo](https://github.com/containers/skopeo) is a tool for manipulating, inspecting, signing, and transferring container images and image repositories on Linux, Windows, and macOS. In addition, Skopeo overlaps in functionality with [Podman](https://podman.io/), [Buildah](https://buildah.io/), and [CRI-O](https://cri-o.io/). 1310 | 1311 |
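Two common operations give a feel for the tool once it is installed; the image names below are examples only:

```shell
# Inspect a remote image's metadata without pulling it.
skopeo inspect docker://registry.fedoraproject.org/fedora:latest

# Copy an image from a registry to a local directory, no daemon required.
skopeo copy docker://registry.fedoraproject.org/fedora:latest dir:/tmp/fedora-image
```

Skopeo addresses images by transport (`docker://`, `dir:`, and others), which is what lets it move images between registries without a local container runtime.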

1312 | 1313 |
1314 |

1315 | 1316 | **Installing Skopeo:** 1317 | 1318 | * Fedora: ```sudo dnf install skopeo``` 1319 | * CentOS Stream: ```sudo dnf -y install skopeo``` 1320 | * Ubuntu 20.04 or later: ```sudo apt install skopeo``` 1321 | * Debian 11 (bullseye) or later, or sid/unstable: ```sudo apt install skopeo``` 1322 | * openSUSE: ```sudo zypper install skopeo``` 1323 | * Alpine Linux: ```sudo apk add skopeo``` 1324 | * ArchLinux: ```sudo pacman -S skopeo```, followed by additional tweaks for rootless use 1325 | * Nix/NixOS: ```nix-env -i skopeo``` 1326 | * macOS: ```brew install skopeo``` 1327 | 1328 | **Skopeo Usage:** 1329 | 1330 | ``` 1331 | $ skopeo --help 1332 | 1333 | Various operations with container images and container image registries 1334 | 1335 | Usage: 1336 | skopeo [command] 1337 | 1338 | Available Commands: 1339 | copy Copy an IMAGE-NAME from one location to another 1340 | delete Delete image IMAGE-NAME 1341 | help Help about any command 1342 | inspect Inspect image IMAGE-NAME 1343 | list-tags List tags in the transport/repository specified by the REPOSITORY-NAME 1344 | login Login to a container registry 1345 | logout Logout of a container registry 1346 | manifest-digest Compute a manifest digest of a file 1347 | standalone-sign Create a signature using local files 1348 | standalone-verify Verify a signature using local files 1349 | sync Synchronize one or more images from one location to another 1350 | ``` 1351 | 1352 | 1353 | ### File systems 1354 | 1355 | [Back to the Top](#table-of-contents) 1356 | 1357 | [CIFS (Common Internet File System)](https://cifs.com/) is a network filesystem protocol used for providing shared access to files and printers between machines on the network. The client application can read, write, edit and even remove files on the remote server.
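As a brief, hedged example, mounting a CIFS share on a Linux client usually looks like the following; the server name, share name, and user are placeholders, and the `cifs-utils` package must be installed:

```shell
# Mount an SMB/CIFS share at /mnt/share, owned by the current user.
sudo mkdir -p /mnt/share
sudo mount -t cifs //server/share /mnt/share -o username=alice,uid=$(id -u)

# Unmount when finished.
sudo umount /mnt/share
```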
1358 | 1359 | [Network File System (NFS)](https://learn.microsoft.com/en-us/windows-server/storage/nfs/nfs-overview) is a protocol that provides a file sharing solution for enterprises that have heterogeneous environments that include both Windows and non-Windows computers. It is most notable for its host-based authentication, it is simple to set up, and it makes it possible to connect to another service using only an IP address. 1360 | 1361 | **Additional benefits of NFS file shares include:** 1362 | 1363 | * NFS provides central management. 1364 | * NFS allows a user to log into any server and have access to their files transparently. 1365 | * It has been around for a long time, so it comes with familiarity in terms of applications. 1366 | * No manual refresh is needed for new files. 1367 | * It can be secured with firewalls and Kerberos. 1368 | 1369 | [GlusterFS](https://www.gluster.org/) is a free and open source scalable network filesystem. Using common off-the-shelf hardware, you can create large, distributed storage solutions for media streaming, data analysis, and other data- and bandwidth-intensive tasks. 1370 | 1371 | [Ceph](https://ceph.io/) is a software-defined storage solution designed to address the object, block, and file storage needs of data centers adopting open source as the new norm for high-growth block storage, object stores and data lakes. Ceph provides enterprise scalable storage while keeping [CAPEX](https://corporatefinanceinstitute.com/resources/knowledge/modeling/how-to-calculate-capex-formula/) and [OPEX](https://www.investopedia.com/terms/o/operating_expense.asp) costs in line with underlying bulk commodity disk prices. 1372 | 1373 | [Hadoop Distributed File System (HDFS)](https://www.ibm.com/analytics/hadoop/hdfs) is a distributed file system that handles large data sets running on commodity hardware. It is used to scale a single Apache Hadoop cluster to hundreds (and even thousands) of nodes.
HDFS is one of the major components of Apache Hadoop, the others being [MapReduce](https://www.ibm.com/analytics/hadoop/mapreduce) and [YARN](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html). 1374 | 1375 | [ZFS](https://docs.oracle.com/cd/E19253-01/819-5461/zfsover-2/) is an enterprise-ready open source file system and volume manager with unprecedented flexibility and an uncompromising commitment to data integrity. 1376 | 1377 | [OpenZFS](https://openzfs.org/wiki/Main_Page) is an open-source storage platform. It includes the functionality of both a traditional file system and a volume manager. It has many advanced features, including: 1378 | 1379 | - Protection against data corruption. 1380 | - Integrity checking for both data and metadata. 1381 | - Continuous integrity verification and automatic "self-healing" repair. 1382 | 1383 | [Btrfs](https://btrfs.wiki.kernel.org/index.php/Main_Page) is a modern copy-on-write (CoW) filesystem for Linux aimed at implementing advanced features while also focusing on fault tolerance, repair and easy administration. Its main features and benefits are: 1384 | 1385 | * Snapshots which do not make a full copy of files 1386 | * RAID - support for software-based RAID 0, RAID 1, RAID 10 1387 | * Self-healing - checksums for data and metadata, automatic detection of silent data corruption 1388 | 1389 | [Bcachefs](https://bcachefs.org/) is an advanced new filesystem for Linux, with an emphasis on reliability and robustness and the complete set of features one would expect from a modern filesystem. Scalability has been tested to 50+ TB, and it will eventually scale far higher. 1390 | 1391 | [Ext4](https://ext4.wiki.kernel.org/index.php/Ext4_Howto) is a journaling file system for Linux, developed as the successor to ext3. 1392 | 1393 | [Squashfs](https://www.kernel.org/doc/html/latest/filesystems/squashfs.html) is a compressed read-only filesystem for Linux.
It uses zlib, lz4, lzo, or xz compression to compress files, inodes and directories. Inodes in the system are very small and all blocks are packed to minimize data overhead. 1394 | 1395 | [NTFS (New Technology File System)](https://docs.microsoft.com/en-us/windows-server/storage/file-server/ntfs-overview) is the primary file system for recent versions of Windows and Windows Server. It provides a full set of features including security descriptors, encryption, disk quotas, and rich metadata, and can be used with Cluster Shared Volumes (CSV) to provide continuously available volumes that can be accessed simultaneously from multiple nodes of a failover cluster. 1396 | 1397 | ## OpenShift Tools 1398 | 1399 | [Back to the Top](#table-of-contents) 1400 | 1401 | 1402 | [OpenShift CLI (oc)](https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_cli/getting-started-cli.html) is a command line interface tool that extends the capabilities of kubectl with [many convenience functions](https://docs.openshift.com/container-platform/4.4/cli_reference/openshift_cli/usage-oc-kubectl.html) that make interacting with both Kubernetes and OpenShift clusters easier. 1403 | 1404 | [OpenShift Serverless CLI (kn)](https://docs.openshift.com/container-platform/4.4/serverless/serverless-getting-started.html) is a command line interface tool for deploying and controlling serverless applications through the kn command. 1405 | 1406 | [OpenShift Pipelines CLI (tkn)](https://docs.openshift.com/container-platform/4.4/pipelines/understanding-openshift-pipelines.html) is a command line interface tool for using Tekton to provide cloud-native CI/CD functionality within the cluster. The tkn command is used to manage the functionality from the CLI. 1407 | 1408 | [Red Hat CodeReady Containers](https://developers.redhat.com/products/codeready-containers) is an option to host a local, all-in-one OpenShift 4 cluster on your workstation.
CodeReady Containers replaces [minishift](https://www.okd.io/minishift/), used to run OpenShift 3 clusters on your workstation, as a quick and easy method of creating test and development clusters. 1409 | 1410 | [Helm CLI](https://docs.openshift.com/container-platform/4.4/cli_reference/helm_cli/getting-started-with-helm-on-openshift-container-platform.html) is a command line interface tool for deploying and managing Kubernetes applications to your clusters. 1411 | 1412 | [OpenShift Hive](https://github.com/openshift/hive) is an operator which runs as a service on top of Kubernetes/OpenShift. The Hive service can be used to provision and perform initial configuration of OpenShift 4 clusters. 1413 | 1414 | [OpenShift Service Mesh](https://www.openshift.com/blog/introducing-openshift-service-mesh-2.0) is a tool that provides a layer on top of OpenShift for securely connecting services in a consistent manner. This provides centralized control, security and observability across your services without having to modify your applications. 1415 | 1416 | [Azure Red Hat OpenShift](https://azure.microsoft.com/en-us/services/openshift/) is a flexible, self-service deployment of fully managed OpenShift clusters. Maintain regulatory compliance and focus on your application development, while your master, infrastructure, and application nodes are patched, updated, and monitored by both Microsoft and Red Hat. 1417 | 1418 | [Red Hat OpenShift Service on AWS (ROSA)](https://www.openshift.com/products/amazon-openshift) is a fully-managed and jointly supported Red Hat OpenShift offering that combines the power of Red Hat OpenShift, the industry's most comprehensive enterprise Kubernetes platform, and the AWS public cloud. 
1419 | 1420 | [Red Hat OpenShift on Google Cloud](https://cloud.google.com/solutions/partners/openshift-on-gcp) is a fully-managed and jointly supported Red Hat OpenShift offering that enables you to deploy stateful and stateless apps with nearly any language, framework, database, or service. It gives you the choice of a hosted environment entirely on Google Cloud, or a hybrid environment where you maintain part of your workload on-premises or in a private hosting environment and migrate the rest to Google Cloud. 1421 | 1422 | [Red Hat® Quay](https://www.openshift.com/products/quay) is a secure, private container registry that builds, analyzes and distributes container images. It provides a high level of automation and customization. 1423 | 1424 | [Kata Operator](https://github.com/openshift/kata-operator) is an operator to perform lifecycle management (install/upgrade/uninstall) of the [Kata Runtime](https://katacontainers.io/) on OpenShift as well as Kubernetes clusters. 1425 | 1426 | [Open Container Initiative](https://opencontainers.org/about/overview/) is an open governance structure for the express purpose of creating open industry standards around container formats and runtimes. 1427 | 1428 | [Buildah](https://buildah.io/) is a command line tool to build Open Container Initiative (OCI) images. It can be used with Docker, Podman, and Kubernetes. 1429 | 1430 | [Podman](https://podman.io/) is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine. 1431 | 1432 | [Containerd](https://containerd.io) is a daemon that manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage to network attachments and beyond. It is available for Linux and Windows.
1433 | 1434 | [OKD](https://okd.io/) is a community distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OKD adds developer and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. 1435 | 1436 | 1437 | # Go Development 1438 | 1439 | [Back to the Top](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#table-of-contents) 1440 | 1441 | ## Go Learning Resources 1442 | 1443 | [Go](https://golang.org/) is an open source programming language that makes it easy to build simple, reliable, and efficient software. 1444 | 1445 | [Golang Contribution Guide](https://golang.org/doc/contribute.html) 1446 | 1447 | [Google Developers Training](https://developers.google.com/training/) 1448 | 1449 | [Google Developers Certification](https://developers.google.com/certification/) 1450 | 1451 | [Uber's Go Style Guide](https://github.com/uber-go/guide/blob/master/style.md) 1452 | 1453 | [GitLab's Go standards and style guidelines](https://docs.gitlab.com/ee/development/go_guide/) 1454 | 1455 | [Effective Go](https://golang.org/doc/effective_go.html) 1456 | 1457 | [Go: The Complete Developer's Guide (Golang) on Udemy](https://www.udemy.com/course/go-the-complete-developers-guide/) 1458 | 1459 | [Getting Started with Go on Coursera](https://www.coursera.org/learn/golang-getting-started) 1460 | 1461 | [Programming with Google Go on Coursera](https://www.coursera.org/specializations/google-golang) 1462 | 1463 | [Learning Go Fundamentals on Pluralsight](https://www.pluralsight.com/courses/go-fundamentals) 1464 | 1465 | [Learning Go on Codecademy](https://www.codecademy.com/learn/learn-go) 1466 | 1467 | ## Go Tools 1468 | 1469 | [golang tools](https://pkg.go.dev/golang.org/x/tools) holds the source for various packages and tools that support the Go programming language. 
1470 | 1471 | [Go in Visual Studio Code](https://code.visualstudio.com/docs/languages/go) is an extension that gives you language features like IntelliSense, code navigation, symbol search, bracket matching, snippets, and many more that will help you in Golang development. 1472 | 1473 | [Traefik](https://github.com/traefik/traefik) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy. Traefik integrates with your existing infrastructure components (Docker, Swarm mode, Kubernetes, Marathon, Consul, Etcd, Rancher, Amazon ECS, etc.) and configures itself automatically and dynamically. Pointing Traefik at your orchestrator should be the only configuration step you need. 1474 | 1475 | [Gitea](https://github.com/go-gitea/gitea) is Git with a cup of tea, painless self-hosted git service. Using Go, this can be done with an independent binary distribution across all platforms which Go supports, including Linux, macOS, and Windows on x86, amd64, ARM and PowerPC architectures. 1476 | 1477 | [OpenFaaS](https://github.com/openfaas/faas) is Serverless Functions Made Simple. It makes it easy for developers to deploy event-driven functions and microservices to Kubernetes without repetitive, boiler-plate coding. Package your code or an existing binary in a Docker image to get a highly scalable endpoint with auto-scaling and metrics. 1478 | 1479 | [micro](https://github.com/zyedidia/micro) is a terminal-based text editor that aims to be easy to use and intuitive, while also taking advantage of the capabilities of modern terminals. As its name indicates, micro aims to be somewhat of a successor to the nano editor by being easy to install and use. It strives to be enjoyable as a full-time editor for people who prefer to work in a terminal, or those who regularly edit files over SSH. 
1480 | 1481 | [Gravitational Teleport](https://github.com/gravitational/teleport) is a modern security gateway for remote access to clusters of Linux servers via SSH or SSH-over-HTTPS in a browser, as well as to Kubernetes clusters. 1482 | 1483 | [NATS](https://nats.io/) is a simple, secure and performant communications system for digital systems, services and devices. NATS is part of the Cloud Native Computing Foundation (CNCF). NATS has over 30 client language implementations, and its server can run on-premise, in the cloud, at the edge, and even on a Raspberry Pi. NATS can secure and simplify design and operation of modern distributed systems. 1484 | 1485 | [Act](https://github.com/nektos/act) is a Go program that allows you to run your GitHub Actions locally. 1486 | 1487 | [Fiber](https://gofiber.io/) is an [Express](https://github.com/expressjs/express) inspired web framework built on top of [Fasthttp](https://github.com/valyala/fasthttp), the fastest HTTP engine for Go. It is designed to ease things up for fast development with zero memory allocation and performance in mind. 1488 | 1489 | [Glide](https://github.com/Masterminds/glide) is a vendor package management tool for Golang. 1490 | 1491 | [BadgerDB](https://github.com/dgraph-io/badger) is an embeddable, persistent and fast key-value (KV) database written in pure Go. It is the underlying database for [Dgraph](https://dgraph.io/), a fast, distributed graph database. It's meant to be a performant alternative to non-Go-based key-value stores like RocksDB. 1492 | 1493 | [Go kit](https://github.com/go-kit/kit) is a programming toolkit for building microservices (or elegant monoliths) in Go. It solves common problems in distributed systems and application architecture so you can focus on delivering business value. 1494 | 1495 | [Codis](https://github.com/CodisLabs/codis) is a proxy-based high performance Redis cluster solution written in Go.
1496 | 1497 | [zap](https://github.com/uber-go/zap) is a blazing-fast, structured, leveled logging library for Go. 1498 | 1499 | [HttpRouter](https://github.com/julienschmidt/httprouter) is a lightweight high performance HTTP request router (also called multiplexer or just mux for short) for Go. 1500 | 1501 | [Gorilla WebSocket](https://github.com/gorilla/websocket) is a Go implementation of the WebSocket protocol. 1502 | 1503 | [Delve](https://github.com/go-delve/delve) is a debugger for the Go programming language. 1504 | 1505 | [GORM](https://github.com/go-gorm/gorm) is a fantastic ORM library for Golang that aims to be developer friendly. 1506 | 1507 | [Go Patterns](https://github.com/tmrts/go-patterns) is a curated collection of idiomatic design & application patterns for the Go language. 1508 | 1509 | 1510 | # Python Development 1511 | 1512 | [Back to the Top](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#table-of-contents) 1513 | 1514 | ## Python Learning Resources 1515 | 1516 | [Python](https://www.python.org) is an interpreted, high-level programming language. Python is used heavily in the fields of Data Science and Machine Learning. 1517 | 1518 | [Python Developer’s Guide](https://devguide.python.org) is a comprehensive resource for contributing to Python – for both new and experienced contributors. It is maintained by the same community that maintains Python. 1519 | 1520 | [Get started with Kubernetes using Python](https://kubernetes.io/blog/2019/07/23/get-started-with-kubernetes-using-python/) 1521 | 1522 | [Data Science with Python & JupyterHub on Kubernetes](https://tanzu.vmware.com/developer/blog/data-science-with-python-jupyterhub-on-kubernetes-part-1/) 1523 | 1524 | [Azure Functions Python developer guide](https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-python) is an introduction to developing Azure Functions using Python.
The content below assumes that you've already read the [Azure Functions developers guide](https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference). 1525 | 1526 | [CheckiO](https://checkio.org/) is a programming learning platform and a gamified website that teaches Python through solving code challenges and competing for the most elegant and creative solutions. 1527 | 1528 | [Python Institute](https://pythoninstitute.org) 1529 | 1530 | [PCEP – Certified Entry-Level Python Programmer certification](https://pythoninstitute.org/pcep-certification-entry-level/) 1531 | 1532 | [PCAP – Certified Associate in Python Programming certification](https://pythoninstitute.org/pcap-certification-associate/) 1533 | 1534 | [PCPP – Certified Professional in Python Programming 1 certification](https://pythoninstitute.org/pcpp-certification-professional/) 1535 | 1536 | [PCPP – Certified Professional in Python Programming 2](https://pythoninstitute.org/pcpp-certification-professional/) 1537 | 1538 | [MTA: Introduction to Programming Using Python Certification](https://docs.microsoft.com/en-us/learn/certifications/mta-introduction-to-programming-using-python) 1539 | 1540 | [Getting Started with Python in Visual Studio Code](https://code.visualstudio.com/docs/python/python-tutorial) 1541 | 1542 | [Google's Python Style Guide](https://google.github.io/styleguide/pyguide.html) 1543 | 1544 | [Google's Python Education Class](https://developers.google.com/edu/python/) 1545 | 1546 | [Real Python](https://realpython.com) 1547 | 1548 | [The Python Open Source Computer Science Degree by Forrest Knight](https://github.com/ForrestKnight/open-source-cs-python) 1549 | 1550 | [Intro to Python for Data Science](https://www.datacamp.com/courses/intro-to-python-for-data-science) 1551 | 1552 | [Intro to Python by W3schools](https://www.w3schools.com/python/python_intro.asp) 1553 | 1554 | [Codecademy's Python 3 course](https://www.codecademy.com/learn/learn-python-3) 1555 | 1556 | [Learn 
Python with Online Courses and Classes from edX](https://www.edx.org/learn/python) 1557 | 1558 | [Python Courses Online from Coursera](https://www.coursera.org/courses?query=python) 1559 | 1560 | ## Python Frameworks and Tools 1561 | 1562 | [Python Package Index (PyPI)](https://pypi.org/) is a repository of software for the Python programming language. PyPI helps you find and install software developed and shared by the Python community. 1563 | 1564 | [PyCharm](https://www.jetbrains.com/pycharm/) is a full-featured Python IDE. With PyCharm, you can access the command line, connect to a database, create a virtual environment, and manage your version control system all in one place, saving time by avoiding constantly switching between windows. 1565 | 1566 | [Python Tools for Visual Studio (PTVS)](https://microsoft.github.io/PTVS/) is a free, open source plugin that turns Visual Studio into a Python IDE. It supports editing, browsing, IntelliSense, mixed Python/C++ debugging, remote Linux/macOS debugging, profiling, IPython, and web development with Django and other frameworks. 1567 | 1568 | [Pylance](https://github.com/microsoft/pylance-release) is an extension that works alongside Python in Visual Studio Code to provide performant language support. Under the hood, Pylance is powered by Pyright, Microsoft's static type checking tool. 1569 | 1570 | [Pyright](https://github.com/Microsoft/pyright) is a fast type checker meant for large Python source bases. It can run in a “watch” mode and performs fast incremental updates when files are modified. 1571 | 1572 | [Django](https://www.djangoproject.com/) is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. 1573 | 1574 | [Flask](https://flask.palletsprojects.com/) is a micro web framework written in Python. It is classified as a microframework because it does not require particular tools or libraries.
1575 | 1576 | [Web2py](http://web2py.com/) is an open-source web application framework written in Python that allows web developers to program dynamic web content. One web2py instance can run multiple web sites using different databases. 1577 | 1578 | [AWS Chalice](https://github.com/aws/chalice) is a framework for writing serverless apps in Python. It allows you to quickly create and deploy applications that use AWS Lambda. 1579 | 1580 | [Tornado](https://www.tornadoweb.org/) is a Python web framework and asynchronous networking library. Tornado uses non-blocking network I/O, which can scale to tens of thousands of open connections. 1581 | 1582 | [HTTPie](https://github.com/httpie/httpie) is a command line HTTP client that makes CLI interaction with web services as easy as possible. HTTPie is designed for testing, debugging, and generally interacting with APIs & HTTP servers. 1583 | 1584 | [Scrapy](https://scrapy.org/) is a fast high-level web crawling and web scraping framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing. 1585 | 1586 | [Sentry](https://sentry.io/) is a service that helps you monitor and fix crashes in realtime. The server is in Python, but it contains a full API for sending events from any language, in any application. 1587 | 1588 | [Pipenv](https://github.com/pypa/pipenv) is a tool that aims to bring the best of all packaging worlds (bundler, composer, npm, cargo, yarn, etc.) to the Python world. 1589 | 1590 | [Python Fire](https://github.com/google/python-fire) is a library for automatically generating command line interfaces (CLIs) from absolutely any Python object. 1591 | 1592 | [Bottle](https://github.com/bottlepy/bottle) is a fast, simple and lightweight [WSGI](https://www.wsgi.org/) micro web-framework for Python.
It is distributed as a single file module and has no dependencies other than the [Python Standard Library](https://docs.python.org/library/). 1593 | 1594 | [CherryPy](https://cherrypy.org) is a minimalist Python object-oriented HTTP web framework. 1595 | 1596 | [Sanic](https://github.com/huge-success/sanic) is a Python 3.6+ web server and web framework that's written to go fast. 1597 | 1598 | [Pyramid](https://trypyramid.com) is a small and fast open source Python web framework. It makes real-world web application development and deployment more fun and more productive. 1599 | 1600 | [TurboGears](https://turbogears.org) is a hybrid web framework able to act both as a full-stack framework and as a microframework. 1601 | 1602 | [Falcon](https://falconframework.org/) is a reliable, high-performance Python web framework for building large-scale app backends and microservices. 1603 | 1604 | [Neural Network Intelligence (NNI)](https://github.com/microsoft/nni) is an open source AutoML toolkit for automating the machine learning lifecycle, including [Feature Engineering](https://github.com/microsoft/nni/blob/master/docs/en_US/FeatureEngineering/Overview.md), [Neural Architecture Search](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/Overview.md), [Model Compression](https://github.com/microsoft/nni/blob/master/docs/en_US/Compressor/Overview.md) and [Hyperparameter Tuning](https://github.com/microsoft/nni/blob/master/docs/en_US/Tuner/BuiltinTuner.md). 1605 | 1606 | [Dash](https://plotly.com/dash) is a popular Python framework for building ML & data science web apps for Python, R, Julia, and Jupyter. 1607 | 1608 | [Luigi](https://github.com/spotify/luigi) is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, etc. It also comes with Hadoop support built in.
1609 | 1610 | [Locust](https://github.com/locustio/locust) is an easy to use, scriptable and scalable performance testing tool. 1611 | 1612 | [spaCy](https://github.com/explosion/spaCy) is a library for advanced Natural Language Processing in Python and Cython. 1613 | 1614 | [NumPy](https://www.numpy.org/) is the fundamental package needed for scientific computing with Python. 1615 | 1616 | [Pillow](https://python-pillow.org/) is a friendly PIL (Python Imaging Library) fork. 1617 | 1618 | [IPython](https://ipython.org/) is a command shell for interactive computing in multiple programming languages, originally developed for the Python programming language, that offers enhanced introspection, rich media, additional shell syntax, tab completion, and rich history. 1619 | 1620 | [GraphLab Create](https://turi.com/) is a Python library, backed by a C++ engine, for quickly building large-scale, high-performance machine learning models. 1621 | 1622 | [Pandas](https://pandas.pydata.org/) is a fast, powerful, and easy-to-use open source data structures, data analysis and manipulation tool, built on top of the Python programming language. 1623 | 1624 | [PuLP](https://coin-or.github.io/pulp/) is a linear programming modeler written in Python. PuLP can generate LP files and call highly optimized solvers (GLPK, COIN CLP/CBC, CPLEX, and GUROBI) to solve these linear problems. 1625 | 1626 | [Matplotlib](https://matplotlib.org/) is a 2D plotting library for creating static, animated, and interactive visualizations in Python. Matplotlib produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. 1627 | 1628 | [Scikit-Learn](https://scikit-learn.org/stable/index.html) is a simple and efficient tool for data mining and data analysis. It is built on NumPy, SciPy, and matplotlib.
1629 | 1630 | # Bash/PowerShell Development 1631 | 1632 | [Back to the Top](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#table-of-contents) 1633 | 1634 | ## Bash/PowerShell Learning Resources 1635 | 1636 | [Introduction to Bash Shell Scripting by Coursera](https://www.coursera.org/projects/introduction-to-bash-shell-scripting) 1637 | 1638 | [Bash: Shell Script Basics by Pluralsight](https://www.pluralsight.com/courses/bash-shell-scripting) 1639 | 1640 | [Bash/Shell by Codecademy](https://www.codecademy.com/catalog/language/bash) 1641 | 1642 | [Getting Started with PowerShell](https://docs.microsoft.com/en-us/powershell/) 1643 | 1644 | [Deploy an Azure Kubernetes Service cluster using PowerShell](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-powershell) 1645 | 1646 | [PowerShell in Azure Cloud Shell](https://aka.ms/cloudshell/powershell-docs) 1647 | 1648 | [Azure Functions using PowerShell](https://docs.microsoft.com/en-us/azure/azure-functions/functions-create-first-function-vs-code?pivots=programming-language-powershell) 1649 | 1650 | [Azure Automation runbooks](https://docs.microsoft.com/en-us/azure/automation/automation-runbook-types) 1651 | 1652 | [Using Visual Studio Code for PowerShell Development](https://docs.microsoft.com/en-us/powershell/scripting/dev-cross-plat/vscode/using-vscode?view=powershell-7) 1653 | 1654 | [Integrated Terminal in Visual Studio Code](https://code.visualstudio.com/docs/editor/integrated-terminal) 1655 | 1656 | [AWS Tools for Windows PowerShell](https://aws.amazon.com/powershell/) 1657 | 1658 | [PowerShell Best Practices and Style Guide](https://poshcode.gitbooks.io/powershell-practice-and-style) 1659 | 1660 | [AWS Command Line Interface and aws-shell Sample for AWS Cloud9](https://docs.aws.amazon.com/cloud9/latest/user-guide/sample-aws-cli.html) 1661 | 1662 | [Configuring Cloud Shell on Google Cloud](https://cloud.google.com/shell/docs/configuring-cloud-shell) 1663 | 1664 | [Google's Shell 
Style Guide](https://google.github.io/styleguide/shellguide.html) 1665 | 1666 | ## Bash/PowerShell Tools 1667 | 1668 | [Bash](https://www.gnu.org/software/bash/) is the GNU Project's shell (Bourne Again SHell), an sh-compatible shell that integrates useful features from the Korn shell (ksh) and the C shell (csh). 1669 | 1670 | [PowerShell Core](https://microsoft.com/PowerShell) is a cross-platform (Windows, Linux, and macOS) automation and configuration tool/framework that works well with your existing tools and is optimized for dealing with structured data (JSON, CSV, XML, etc.), REST APIs, and object models. It also includes a command-line shell, an associated scripting language, and a framework for processing cmdlets. 1671 | 1672 | [Azure PowerShell](https://docs.microsoft.com/en-us/powershell/azure/overview) is a set of cmdlets for managing Microsoft Azure resources directly from the PowerShell command line. 1673 | 1674 | [AWS Shell](https://aws.amazon.com/cli/) is a command-line shell program that provides convenience and productivity features to help both new and advanced users of the AWS Command Line Interface. 1675 | 1676 | [Google Cloud Shell](https://cloud.google.com/shell/) is a free admin machine with browser-based command-line access for managing your infrastructure and applications on Google Cloud Platform. 1677 | 1678 | [VS Code Bash Debug](https://marketplace.visualstudio.com/items?itemName=rogalmic.bash-debug) is a bash debugger GUI frontend based on the bashdb scripts (bashdb is now included in the package).
1679 | 1680 | [VS Code Bash IDE](https://marketplace.visualstudio.com/items?itemName=mads-hartmann.bash-ide-vscode) is a Visual Studio Code extension utilizing the [bash language server](https://github.com/bash-lsp/bash-language-server/blob/master/bash-lsp), that is based on [Tree Sitter](https://github.com/tree-sitter/tree-sitter) and its [grammar for Bash](https://github.com/tree-sitter/tree-sitter-bash) and supports [explainshell](https://explainshell.com/) integration. 1681 | 1682 | 1683 | # Machine Learning 1684 | 1685 | [Back to the Top](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#table-of-contents) 1686 | 1687 | 1688 | 1689 | ## ML Learning Resources 1690 | 1691 | [Kubernetes for Machine Learning on Platform9](https://platform9.com/blog/kubernetes-for-machine-learning/) 1692 | 1693 | [Introducing Amazon SageMaker Operators for Kubernetes](https://aws.amazon.com/blogs/machine-learning/introducing-amazon-sagemaker-operators-for-kubernetes/) 1694 | 1695 | [Deploying machine learning models on Kubernetes with Google Cloud](https://cloud.google.com/community/tutorials/kubernetes-ml-ops) 1696 | 1697 | [Create and attach Azure Kubernetes Service with Azure Machine Learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-attach-kubernetes) 1698 | 1699 | [Kubernetes for MLOps: Scaling Enterprise Machine Learning, Deep Learning, AI with HPE](https://www.hpe.com/us/en/resources/solutions/kubernetes-mlops.html) 1700 | 1701 | [Machine Learning by Stanford University from Coursera](https://www.coursera.org/learn/machine-learning) 1702 | 1703 | [Machine Learning Courses Online from Coursera](https://www.coursera.org/courses?query=machine%20learning&) 1704 | 1705 | [Machine Learning Courses Online from Udemy](https://www.udemy.com/topic/machine-learning/) 1706 | 1707 | [Learn Machine Learning with Online Courses and Classes from edX](https://www.edx.org/learn/machine-learning) 1708 | 1709 | ## ML frameworks & applications 1710 | 
1711 | [TensorFlow](https://www.tensorflow.org) is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications. 1712 | 1713 | [Tensorman](https://github.com/pop-os/tensorman) is a utility for easy management of TensorFlow containers, developed by [System76](https://system76.com). Tensorman allows TensorFlow to operate in an isolated environment that is contained from the rest of the system. This virtual environment can operate independently of the base system, allowing you to use any version of TensorFlow on any version of a Linux distribution that supports the Docker runtime. 1714 | 1715 | [Keras](https://keras.io) is a high-level neural networks API, written in Python, developed with a focus on enabling fast experimentation. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit (CNTK), Theano, or PlaidML. 1716 | 1717 | [PyTorch](https://pytorch.org) is an open source machine learning framework that provides GPU-accelerated tensor computation and a define-by-run deep learning library, primarily developed by Facebook's AI Research lab. 1718 | 1719 | [Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. SageMaker removes the heavy lifting from each step of the machine learning process to make it easier to develop high quality models. 1720 | 1721 | [Azure Databricks](https://azure.microsoft.com/en-us/services/databricks/) is a fast and collaborative Apache Spark-based big data analytics service designed for data science and data engineering.
Azure Databricks sets up your Apache Spark environment in minutes, autoscales, and lets you collaborate on shared projects in an interactive workspace. Azure Databricks supports Python, Scala, R, Java, and SQL, as well as data science frameworks and libraries including TensorFlow, PyTorch, and scikit-learn. 1722 | 1723 | [Microsoft Cognitive Toolkit (CNTK)](https://docs.microsoft.com/en-us/cognitive-toolkit/) is an open-source toolkit for commercial-grade distributed deep learning. It describes neural networks as a series of computational steps via a directed graph. CNTK allows the user to easily realize and combine popular model types such as feed-forward DNNs, convolutional neural networks (CNNs) and recurrent neural networks (RNNs/LSTMs). CNTK implements stochastic gradient descent (SGD, error backpropagation) learning with automatic differentiation and parallelization across multiple GPUs and servers. 1724 | 1725 | [Apache Airflow](https://airflow.apache.org) is an open-source workflow management platform created by the community to programmatically author, schedule and monitor workflows. Airflow has a modular architecture and uses a message queue to orchestrate an arbitrary number of workers, so it is ready to scale to infinity. 1726 | 1727 | [Open Neural Network Exchange (ONNX)](https://github.com/onnx) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. 1728 | 1729 | [Apache MXNet](https://mxnet.apache.org/) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity.
At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines, and supports Python, R, Julia, Scala, Go, JavaScript, and more. 1730 | 1731 | [AutoGluon](https://autogluon.mxnet.io/index.html) is a toolkit for deep learning that automates machine learning tasks, enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy deep learning models on tabular, image, and text data. 1732 | 1733 | [Anaconda](https://www.anaconda.com/) is a very popular Data Science platform for machine learning and deep learning that enables users to develop models, train them, and deploy them. 1734 | 1735 | [PlaidML](https://github.com/plaidml/plaidml) is an advanced and portable tensor compiler for enabling deep learning on laptops, embedded devices, or other devices where the available computing hardware is not well supported or the available software stack contains unpalatable license restrictions. 1736 | 1737 | [OpenCV](https://opencv.org) is a highly optimized library with a focus on real-time computer vision applications. The C++, Python, and Java interfaces support Linux, macOS, Windows, iOS, and Android. 1738 | 1739 | [Scikit-Learn](https://scikit-learn.org/stable/index.html) is a Python module for machine learning built on top of SciPy, NumPy, and matplotlib, making it easier to apply robust and simple implementations of many popular machine learning algorithms. 1740 | 1741 | [Weka](https://www.cs.waikato.ac.nz/ml/weka/) is an open source machine learning software that can be accessed through a graphical user interface, standard terminal applications, or a Java API.
It is widely used for teaching, research, and industrial applications, contains a plethora of built-in tools for standard machine learning tasks, and additionally gives transparent access to well-known toolboxes such as scikit-learn, R, and Deeplearning4j. 1742 | 1743 | [Caffe](https://github.com/BVLC/caffe) is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR)/The Berkeley Vision and Learning Center (BVLC) and community contributors. 1744 | 1745 | [Theano](https://github.com/Theano/Theano) is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently, with tight integration with NumPy. 1746 | 1747 | [nGraph](https://github.com/NervanaSystems/ngraph) is an open source C++ library, compiler and runtime for deep learning. The nGraph Compiler aims to accelerate developing AI workloads using any deep learning framework and deploying to a variety of hardware targets. It provides freedom, performance, and ease of use to AI developers. 1748 | 1749 | [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) is a GPU-accelerated library of primitives for [deep neural networks](https://developer.nvidia.com/deep-learning). cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN accelerates widely used deep learning frameworks, including [Caffe2](https://caffe2.ai/), [Chainer](https://chainer.org/), [Keras](https://keras.io/), [MATLAB](https://www.mathworks.com/solutions/deep-learning.html), [MxNet](https://mxnet.incubator.apache.org/), [PyTorch](https://pytorch.org/), and [TensorFlow](https://www.tensorflow.org/). 1750 | 1751 | [Jupyter Notebook](https://jupyter.org/) is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.
Jupyter is widely used in industries that do data cleaning and transformation, numerical simulation, statistical modeling, data visualization, data science, and machine learning. 1752 | 1753 | [Apache Spark](https://spark.apache.org/) is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including Spark SQL for SQL and DataFrames, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing. 1754 | 1755 | [Apache Spark Connector for SQL Server and Azure SQL](https://github.com/microsoft/sql-spark-connector) is a high-performance connector that enables you to use transactional data in big data analytics and persists results for ad-hoc queries or reporting. The connector allows you to use any SQL database, on-premises or in the cloud, as an input data source or output data sink for Spark jobs. 1756 | 1757 | [Apache PredictionIO](https://predictionio.apache.org/) is an open source machine learning framework for developers, data scientists, and end users. It supports event collection, deployment of algorithms, evaluation, and querying predictive results via REST APIs. It is based on scalable open source services like Hadoop, HBase (and other DBs), Elasticsearch, and Spark, and implements what is called a Lambda Architecture. 1758 | 1759 | [Cluster Manager for Apache Kafka (CMAK)](https://github.com/yahoo/CMAK) is a tool for managing [Apache Kafka](https://kafka.apache.org/) clusters. 1760 | 1761 | [BigDL](https://bigdl-project.github.io/) is a distributed deep learning library for Apache Spark. With BigDL, users can write their deep learning applications as standard Spark programs, which can directly run on top of existing Spark or Hadoop clusters.
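Spark's programming model generalizes the classic map-and-reduce pattern across a cluster. A toy, single-machine sketch of that pattern in plain Python (the input lines are invented); Spark distributes both steps across partitions and machines, while here everything is local:

```python
from collections import Counter
from itertools import chain

lines = ["spark makes big data simple", "big data big insights"]

# "Map": split each line into individual word occurrences.
mapped = chain.from_iterable(line.split() for line in lines)

# "Reduce": merge the occurrences by key into per-word counts.
counts = Counter(mapped)

print(counts["big"])  # 3
```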
1762 | 1763 | [Koalas](https://pypi.org/project/koalas/) is a project that makes data scientists more productive when interacting with big data, by implementing the pandas DataFrame API on top of Apache Spark. 1764 | 1765 | [MLflow](https://mlflow.org/) is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. MLflow currently offers four components: 1766 | 1767 | **[MLflow Tracking](https://mlflow.org/docs/latest/tracking.html)**: Record and query experiments: code, data, config, and results. 1768 | 1769 | **[MLflow Projects](https://mlflow.org/docs/latest/projects.html)**: Package data science code in a format to reproduce runs on any platform. 1770 | 1771 | **[MLflow Models](https://mlflow.org/docs/latest/models.html)**: Deploy machine learning models in diverse serving environments. 1772 | 1773 | **[Model Registry](https://mlflow.org/docs/latest/model-registry.html)**: Store, annotate, discover, and manage models in a central repository. 1774 | 1775 | [Eclipse Deeplearning4J (DL4J)](https://deeplearning4j.konduit.ai/) is a set of projects intended to support all the needs of a JVM-based (Scala, Kotlin, Clojure, and Groovy) deep learning application. This means starting with the raw data, loading and preprocessing it from wherever and whatever format it is in, to building and tuning a wide variety of simple and complex deep learning networks. 1776 | 1777 | [Numba](https://github.com/numba/numba) is an open source, NumPy-aware optimizing compiler for Python sponsored by Anaconda, Inc. It uses the LLVM compiler project to generate machine code from Python syntax. Numba can compile a large subset of numerically-focused Python, including many NumPy functions. Additionally, Numba has support for automatic parallelization of loops, generation of GPU-accelerated code, and creation of ufuncs and C callbacks.
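Most of the deep learning frameworks in this section (TensorFlow, PyTorch, CNTK, Chainer) are built around automatic differentiation. A toy, pure-Python sketch of the define-by-run idea, nothing like the real implementations, just the core mechanism of recording the graph as operations execute:

```python
class Value:
    """A scalar that records operations as they run, so gradients
    can be propagated backwards through the recorded graph."""
    def __init__(self, data, parents=(), grad_fns=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._grad_fns = grad_fns  # local derivative w.r.t. each parent

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other),
                     (lambda g, o=other: g * o.data,
                      lambda g, s=self: g * s.data))

    def backward(self, grad=1.0):
        # Accumulate the incoming gradient, then push it to the parents
        # scaled by each local derivative (the chain rule).
        self.grad += grad
        for parent, fn in zip(self._parents, self._grad_fns):
            parent.backward(fn(grad))

x, y = Value(3.0), Value(4.0)
z = x * y + x          # the graph is built as this expression runs
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```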
1778 | 1779 | [Chainer](https://chainer.org/) is a Python-based deep learning framework aiming at flexibility. It provides automatic differentiation APIs based on the define-by-run approach (dynamic computational graphs) as well as object-oriented high-level APIs to build and train neural networks. It also supports CUDA/cuDNN using [CuPy](https://github.com/cupy/cupy) for high performance training and inference. 1780 | 1781 | [cuML](https://github.com/rapidsai/cuml) is a suite of libraries that implement machine learning algorithms and mathematical primitive functions that share compatible APIs with other RAPIDS projects. cuML enables data scientists, researchers, and software engineers to run traditional tabular ML tasks on GPUs without going into the details of CUDA programming. In most cases, cuML's Python API matches the API from scikit-learn. 1782 | 1783 | # Networking 1784 | 1785 | [Back to the Top](https://github.com/mikeroyal/kubernetes-Guide/blob/main/README.md#table-of-contents) 1786 | 1787 | 

1788 | 1789 |
1790 |

1791 | 1792 | ## Networking Learning Resources 1793 | 1794 | [AWS Certified Security - Specialty Certification](https://aws.amazon.com/certification/certified-security-specialty/) 1795 | 1796 | [Microsoft Certified: Azure Security Engineer Associate](https://docs.microsoft.com/en-us/learn/certifications/azure-security-engineer) 1797 | 1798 | [Google Cloud Certified Professional Cloud Security Engineer](https://cloud.google.com/certification/cloud-security-engineer) 1799 | 1800 | [Cisco Security Certifications](https://www.cisco.com/c/en/us/training-events/training-certifications/certifications/security.html) 1801 | 1802 | [The Red Hat Certified Specialist in Security: Linux](https://www.redhat.com/en/services/training/ex415-red-hat-certified-specialist-security-linux-exam) 1803 | 1804 | [Linux Professional Institute LPIC-3 Enterprise Security Certification](https://www.lpi.org/our-certifications/lpic-3-303-overview) 1805 | 1806 | [Cybersecurity Training and Courses from IBM Skills](https://www.ibm.com/skills/topics/cybersecurity/) 1807 | 1808 | [Cybersecurity Courses and Certifications by Offensive Security](https://www.offensive-security.com/courses-and-certifications/) 1809 | 1810 | [Citrix Certified Associate – Networking(CCA-N)](http://training.citrix.com/cms/index.php/certification/networking/) 1811 | 1812 | [Citrix Certified Professional – Virtualization(CCP-V)](https://www.globalknowledge.com/us-en/training/certification-prep/brands/citrix/section/virtualization/citrix-certified-professional-virtualization-ccp-v/) 1813 | 1814 | [CCNP Routing and Switching](https://learningnetwork.cisco.com/s/ccnp-enterprise) 1815 | 1816 | [Certified Information Security Manager(CISM)](https://www.isaca.org/credentialing/cism) 1817 | 1818 | [Wireshark Certified Network Analyst (WCNA)](https://www.wiresharktraining.com/certification.html) 1819 | 1820 | [Juniper Networks Certification Program Enterprise (JNCP)](https://www.juniper.net/us/en/training/certification/) 1821 | 1822 
| [Networking courses and specializations from Coursera](https://www.coursera.org/browse/information-technology/networking) 1823 | 1824 | [Network & Security Courses from Udemy](https://www.udemy.com/courses/it-and-software/network-and-security/) 1825 | 1826 | [Network & Security Courses from edX](https://www.edx.org/learn/cybersecurity) 1827 | 1828 | ## Networking Tools & Concepts 1829 | 1830 | [Qt Network Authorization](https://doc.qt.io/qt-6/qtnetworkauth-index.html) is a tool that provides a set of APIs that enable Qt applications to obtain limited access to online accounts and HTTP services without exposing users' passwords. 1831 | 1832 | [cURL](https://curl.se/) is a computer software project providing a library and command-line tool for transferring data using various network protocols (HTTP, HTTPS, FTP, FTPS, SCP, SFTP, TFTP, DICT, TELNET, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SMB, SMBS, SMTP or SMTPS). cURL is also used in cars, television sets, routers, printers, audio equipment, mobile phones, tablets, set-top boxes, and media players, and is the Internet transfer engine for thousands of software applications in over ten billion installations. 1833 | 1834 | [cURL Fuzzer](https://github.com/curl/curl-fuzzer) is quality assurance testing for the curl project. 1835 | 1836 | [DoH](https://github.com/curl/doh) is a stand-alone application for DoH (DNS-over-HTTPS) name resolves and lookups. 1837 | 1838 | [Authelia](https://www.authelia.com/) is an open-source highly-available authentication server providing single sign-on capability and two-factor authentication to applications running behind [NGINX](https://nginx.org/en/). 1839 | 1840 | [nginx (engine x)](https://nginx.org/en/) is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server, originally written by Igor Sysoev.
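Command-line tools like cURL are, at bottom, HTTP clients. A self-contained sketch of the request/response cycle using only the standard library, spinning up a throwaway server on the loopback interface so there is something to talk to (the response body is invented):

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with a fixed plain-text body.
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 asks the OS for any free port; the server listens immediately.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    data = resp.read()
server.shutdown()

print(data)  # b'hello'
```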
1841 | 1842 | [Proxmox Virtual Environment (VE)](https://www.proxmox.com/en/) is a complete open-source platform for enterprise virtualization. It includes a built-in web interface with which you can easily manage VMs and containers, software-defined storage and networking, high-availability clustering, and multiple out-of-the-box tools in a single solution. 1843 | 1844 | [Wireshark](https://www.wireshark.org/) is a very popular network protocol analyzer that is commonly used for network troubleshooting, analysis, and communications protocol development. Learn more about the other useful [Wireshark Tools](https://wiki.wireshark.org/Tools) available. 1845 | 1846 | [HTTPie](https://github.com/httpie/httpie) is a command-line HTTP client. Its goal is to make CLI interaction with web services as human-friendly as possible. HTTPie is designed for testing, debugging, and generally interacting with APIs & HTTP servers. 1847 | 1848 | [HTTPStat](https://github.com/reorx/httpstat) is a tool that visualizes curl statistics in a simple layout. 1849 | 1850 | [Wuzz](https://github.com/asciimoo/wuzz) is an interactive CLI tool for HTTP inspection. It can be used to inspect/modify requests copied from the browser's network inspector with the "copy as cURL" feature. 1851 | 1852 | [Websocat](https://github.com/vi/websocat) is a command-line client for WebSockets, like netcat (or curl) for ws://, with advanced socat-like functions. 1853 | 1854 | - Connection: In networking, a connection refers to pieces of related information that are transferred through a network. This generally implies that a connection is built before the data transfer (by following the procedures laid out in a protocol) and then is deconstructed at the end of the data transfer. 1855 | 1856 | - Packet: A packet is, generally speaking, the most basic unit that is transferred over a network. When communicating over a network, packets are the envelopes that carry your data (in pieces) from one end point to the other.
1857 | 1858 | Packets have a header portion that contains information about the packet, including the source and destination, timestamps, and network hops. The main portion of a packet contains the actual data being transferred. It is sometimes called the body or the payload. 1859 | 1860 | - Network Interface: A network interface can refer to any kind of software interface to networking hardware. For instance, if you have two network cards in your computer, you can control and configure each network interface associated with them individually. 1861 | 1862 | A network interface may be associated with a physical device, or it may be a representation of a virtual interface. The "loop-back" device, which is a virtual interface to the local machine, is an example of this. 1863 | 1864 | - LAN: LAN stands for "local area network". It refers to a network or a portion of a network that is not publicly accessible to the greater internet. A home or office network is an example of a LAN. 1865 | 1866 | - WAN: WAN stands for "wide area network". It means a network that is much more extensive than a LAN. While WAN is the relevant term to use to describe large, dispersed networks in general, it usually refers to the internet as a whole. 1867 | If an interface is connected to the WAN, it is generally assumed that it is reachable through the internet. 1868 | 1869 | - Protocol: A protocol is a set of rules and standards that define a language that devices can use to communicate. There are a great number of protocols in use extensively in networking, and they are often implemented in different layers. 1870 | 1871 | Some low-level protocols are TCP, UDP, IP, and ICMP. Some familiar examples of application layer protocols, built on these lower protocols, are HTTP (for accessing web content), SSH, TLS/SSL, and FTP. 1872 | 1873 | - Port: A port is an address on a single machine that can be tied to a specific piece of software.
It is not a physical interface or location, but it allows your server to be able to communicate using more than one application. 1874 | 1875 | - Firewall: A firewall is a program that decides whether traffic coming into a server or going out should be allowed. A firewall usually works by creating rules for which type of traffic is acceptable on which ports. Generally, firewalls block ports that are not used by a specific application on a server. 1876 | 1877 | - NAT: Network address translation is a way to translate requests that are incoming into a routing server to the relevant devices or servers that it knows about in the LAN. This is usually implemented in physical LANs as a way to route requests through one IP address to the necessary backend servers. 1878 | 1879 | - VPN: A virtual private network is a means of connecting separate LANs through the internet, while maintaining privacy. This is used as a means of connecting remote systems as if they were on a local network, often for security reasons. 1880 | 1881 | ## Network Layers 1882 | 1883 | While networking is often discussed in terms of topology in a horizontal way, between hosts, its implementation is layered in a vertical fashion throughout a computer or network. This means that there are multiple technologies and protocols that are built on top of each other in order for communication to function more easily. Each successive, higher layer abstracts the raw data a little bit more, and makes it simpler to use for applications and users. It also allows you to leverage lower layers in new ways without having to invest the time and energy to develop the protocols and applications that handle those types of traffic. 1884 | 1885 | As data is sent out of one machine, it begins at the top of the stack and filters downwards. At the lowest level, actual transmission to another machine takes place. At this point, the data travels back up through the layers of the other computer.
Each layer has the ability to add its own "wrapper" around the data that it receives from the adjacent layer, which will help the layers that come after decide what to do with the data when it is passed off. 1886 | 1887 | One method of talking about the different layers of network communication is the OSI model. OSI stands for [Open Systems Interconnection](https://en.wikipedia.org/wiki/OSI_model). This model defines seven separate layers. The layers in this model are: 1888 | 1889 | - Application: The application layer is the layer that the users and user-applications most often interact with. Network communication is discussed in terms of availability of resources, partners to communicate with, and data synchronization. 1890 | 1891 | - Presentation: The presentation layer is responsible for mapping resources and creating context. It is used to translate lower level networking data into data that applications expect to see. 1892 | 1893 | - Session: The session layer is a connection handler. It creates, maintains, and destroys connections between nodes in a persistent way. 1894 | 1895 | - Transport: The transport layer is responsible for handing the layers above it a reliable connection. In this context, reliable refers to the ability to verify that a piece of data was received intact at the other end of the connection. This layer can resend information that has been dropped or corrupted and can acknowledge the receipt of data to remote computers. 1896 | 1897 | - Network: The network layer is used to route data between different nodes on the network. It uses addresses to be able to tell which computer to send information to. This layer can also break apart larger messages into smaller chunks to be reassembled on the opposite end. 1898 | 1899 | - Data Link: This layer is implemented as a method of establishing and maintaining reliable links between different nodes or devices on a network using existing physical connections.
1900 | 1901 | - Physical: The physical layer is responsible for handling the actual physical devices that are used to make a connection. This layer involves the bare software that manages physical connections as well as the hardware itself (like Ethernet). 1902 | 1903 | The TCP/IP model, more commonly known as the Internet protocol suite, is another layering model that is simpler and has been widely adopted. It defines four separate layers, some of which overlap with the OSI model: 1904 | 1905 | - Application: In this model, the application layer is responsible for creating and transmitting user data between applications. The applications can be on remote systems, and should appear to the end user as if they were operating locally. 1906 | The communication is said to take place between network peers. 1907 | 1908 | - Transport: The transport layer is responsible for communication between processes. This level of networking utilizes ports to address different services. It can build up unreliable or reliable connections depending on the type of protocol used. 1909 | 1910 | - Internet: The internet layer is used to transport data from node to node in a network. This layer is aware of the endpoints of the connections, but does not worry about the actual connection needed to get from one place to another. IP addresses are defined in this layer as a way of reaching remote systems in an addressable manner. 1911 | 1912 | - Link: The link layer implements the actual topology of the local network that allows the internet layer to present an addressable interface. It establishes connections between neighboring nodes to send data. 1913 | 1914 | ### Interfaces 1915 | **Interfaces** are networking communication points for your computer. Each interface is associated with a physical or virtual networking device. Typically, your server will have one configurable network interface for each Ethernet or wireless internet card you have.
In addition, it will define a virtual network interface called the "loopback" or localhost interface. This is used as an interface to connect applications and processes on a single computer to other applications and processes. You can see this referenced as the "lo" interface in many tools. 1916 | 1917 | ## Network Protocols 1918 | 1919 | Networking works by piggybacking a number of different protocols on top of each other. In this way, one piece of data can be transmitted using multiple protocols encapsulated within one another. 1920 | 1921 | **Media Access Control (MAC)** is a communications protocol that is used to distinguish specific devices. Each device is supposed to get a unique MAC address during the manufacturing process that differentiates it from every other device on the internet. Addressing hardware by the MAC address allows you to reference a device by a unique value even when the software on top may change the name for that specific device during operation. Media access control is one of the only protocols from the link layer that you are likely to interact with on a regular basis. 1922 | 1923 | **The IP protocol** is one of the fundamental protocols that allow the internet to work. IP addresses are unique on each network and they allow machines to address each other across a network. It is implemented on the internet layer in the IP/TCP model. Networks can be linked together, but traffic must be routed when crossing network boundaries. This protocol assumes an unreliable network and multiple paths to the same destination that it can dynamically change between. There are a number of different implementations of the protocol. The most common implementation today is IPv4, although IPv6 is growing in popularity as an alternative due to the scarcity of IPv4 addresses available and improvements in the protocol's capabilities.
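The encapsulation described above, where each layer wraps the data it receives with its own header, can be made concrete with the standard library's `struct` module. The 8-byte header format here is invented purely for illustration; it is not a real protocol header:

```python
import struct

# Toy header: source port and destination port as unsigned shorts,
# payload length as an unsigned int, all in network (big-endian) order.
HEADER = struct.Struct("!HHI")

def encapsulate(src: int, dst: int, payload: bytes) -> bytes:
    # Prepend the header to the payload, like a lower layer wrapping
    # the data handed down from the layer above.
    return HEADER.pack(src, dst, len(payload)) + payload

def decapsulate(packet: bytes):
    # Peel the header back off on the receiving side.
    src, dst, length = HEADER.unpack(packet[:HEADER.size])
    return src, dst, packet[HEADER.size:HEADER.size + length]

pkt = encapsulate(49152, 80, b"GET / HTTP/1.1")
print(decapsulate(pkt))  # (49152, 80, b'GET / HTTP/1.1')
```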
1924 | 1925 | **ICMP: Internet control message protocol** is used to send messages between devices to indicate availability or error conditions. These packets are used in a variety of network diagnostic tools, such as ping and traceroute. Usually ICMP packets are transmitted when a packet of a different kind encounters a problem. Basically, they are used as a feedback mechanism for network communications. 1926 | 1927 | **TCP: Transmission control protocol** is implemented in the transport layer of the IP/TCP model and is used to establish reliable connections. TCP is one of the protocols that encapsulates data into packets. It then transfers these to the remote end of the connection using the methods available on the lower layers. On the other end, it can check for errors, request certain pieces to be resent, and reassemble the information into one logical piece to send to the application layer. The protocol builds up a connection prior to data transfer using a system called a three-way handshake. This is a way for the two ends of the communication to acknowledge the request and agree upon a method of ensuring data reliability. After the data has been sent, the connection is torn down using a similar four-way handshake. TCP is the protocol of choice for many of the most popular uses for the internet, including WWW, FTP, SSH, and email. It is safe to say that the internet we know today would not be here without TCP. 1928 | 1929 | **UDP: User datagram protocol** is a popular companion protocol to TCP and is also implemented in the transport layer. The fundamental difference between UDP and TCP is that UDP offers unreliable data transfer. It does not verify that data has been received on the other end of the connection. This might sound like a bad thing, and for many purposes, it is. However, it is also extremely important for some functions.
Because it is not required to wait for confirmation that the data was received, or to resend data that was lost, UDP is much faster than TCP. It does not establish a connection with the remote host; it simply fires off the data to that host and doesn't care if it is accepted or not. Since UDP is a simple transaction, it is useful for simple communications like querying for network resources. It also doesn't maintain a state, which makes it great for transmitting data from one machine to many real-time clients. This makes it ideal for VOIP, games, and other applications that cannot afford delays. 1930 | 1931 | **HTTP: Hypertext transfer protocol** is a protocol defined in the application layer that forms the basis for communication on the web. HTTP defines a number of functions that tell the remote system what you are requesting. For instance, GET, POST, and DELETE all interact with the requested data in a different way. 1932 | 1933 | **FTP: File transfer protocol** is in the application layer and provides a way of transferring complete files from one host to another. It is inherently insecure, so it is not recommended for any externally facing network unless it is implemented as a public, download-only resource. 1934 | 1935 | **DNS: Domain name system** is an application layer protocol used to provide a human-friendly naming mechanism for internet resources. It is what ties a domain name to an IP address and allows you to access sites by name in your browser. 1936 | 1937 | **SSH: Secure shell** is an encrypted protocol implemented in the application layer that can be used to communicate with a remote server in a secure way. Many additional technologies are built around this protocol because of its end-to-end encryption and ubiquity. There are many other protocols that we haven't covered that are equally important. However, this should give you a good overview of some of the fundamental technologies that make the internet and networking possible.
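The TCP and UDP behaviors described above can be contrasted in a few lines of Python using the standard `socket` module over the loopback interface (a minimal sketch: the echo server and port choices are illustrative, not part of any tool mentioned here):

```python
import socket
import threading

# TCP: connection-oriented; the OS performs the three-way handshake on connect().
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 asks the OS for any free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))  # echo the bytes back

threading.Thread(target=echo_once).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    print(client.recv(1024))   # b'hello' (reliable, ordered delivery)
server.close()

# UDP: connectionless; sendto() fires a datagram with no handshake or delivery check.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", 0))
udp.sendto(b"ping", udp.getsockname())  # send a datagram to ourselves
data, addr = udp.recvfrom(1024)
print(data)                    # b'ping'
udp.close()
```

Note that the UDP half never calls `listen()` or `accept()`: there is no connection to set up or tear down, which is exactly the trade-off described above.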
1938 | 1939 | [REST(REpresentational State Transfer)](https://www.codecademy.com/articles/what-is-rest) is an architectural style for providing standards between computer systems on the web, making it easier for systems to communicate with each other. 1940 | 1941 | [JSON Web Token (JWT)](https://jwt.io) is a compact URL-safe means of representing claims to be transferred between two parties. The claims in a JWT are encoded as a JSON object that is digitally signed using JSON Web Signature (JWS). 1942 | 1943 | [OAuth 2.0](https://oauth.net/2/) is an open authorization framework that enables applications to obtain limited access to user accounts on an HTTP service, such as Amazon, Google, Facebook, Microsoft, Twitter, GitHub, and DigitalOcean. It works by delegating user authentication to the service that hosts the user account, and authorizing third-party applications to access the user account. 1944 | 1945 | ## Virtualization 1946 | 1947 | [KVM (for Kernel-based Virtual Machine)](https://www.linux-kvm.org/page/Main_Page) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor-specific module, kvm-intel.ko or kvm-amd.ko. 1948 | 1949 | [QEMU](https://www.qemu.org) is a fast processor emulator using a portable dynamic translator. QEMU emulates a full system, including a processor and various peripherals. It can be used to launch a different Operating System without rebooting the PC or to debug system code. 1950 | 1951 | [Hyper-V](https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/) enables running virtualized computer systems on top of a physical host. These virtualized systems can be used and managed just as if they were physical computer systems, however they exist in a virtualized and isolated environment.
Special software called a hypervisor manages access between the virtual systems and the physical hardware resources. Virtualization enables quick deployment of computer systems, a way to quickly restore systems to a previously known good state, and the ability to migrate systems between physical hosts. 1952 | 1953 | [VirtManager](https://github.com/virt-manager/virt-manager) is a graphical tool for managing virtual machines via libvirt. Most usage is with QEMU/KVM virtual machines, but Xen and libvirt LXC containers are well supported. Common operations for any libvirt driver should work. 1954 | 1955 | [oVirt](https://www.ovirt.org) is an open-source distributed virtualization solution, designed to manage your entire enterprise infrastructure. oVirt uses the trusted KVM hypervisor and is built upon several other community projects, including libvirt, Gluster, PatternFly, and Ansible. It was founded by Red Hat as the community project on which Red Hat Enterprise Virtualization is based, allowing for centralized management of virtual machines, compute, storage, and networking resources from an easy-to-use web-based front end with platform-independent access. 1956 | 1957 | [Xen](https://github.com/xen-project/xen) is focused on advancing virtualization in a number of different commercial and open source applications, including server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, embedded and hardware appliances, and automotive/aviation. 1958 | 1959 | [Ganeti](https://github.com/ganeti/ganeti) is a virtual machine cluster management tool built on top of existing virtualization technologies such as Xen or KVM and other open source software. Once installed, the tool assumes management of the virtual instances (Xen DomU). 1960 | 1961 | [Packer](https://www.packer.io/) is an open source tool for creating identical machine images for multiple platforms from a single source configuration.
Packer is lightweight, runs on every major operating system, and is highly performant, creating machine images for multiple platforms in parallel. Packer does not replace configuration management like Chef or Puppet. In fact, when building images, Packer is able to use tools like Chef or Puppet to install software onto the image. 1962 | 1963 | [Vagrant](https://www.vagrantup.com/) is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the "works on my machine" excuse a relic of the past. It provides easy to configure, reproducible, and portable work environments built on top of industry-standard technology and controlled by a single consistent workflow to help maximize the productivity and flexibility of you and your team. 1964 | 1965 | [VMware Workstation](https://www.vmware.com/products/workstation-pro.html) is a hosted hypervisor that runs on x64 versions of Windows and Linux operating systems; it enables users to set up virtual machines on a single physical machine, and use them simultaneously along with the actual machine. 1966 | 1967 | [VirtualBox](https://www.virtualbox.org) is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. Not only is VirtualBox an extremely feature rich, high performance product for enterprise customers, it is also freely available as open source software. 1968 | 1969 | # Databases 1970 | 1971 | [Back to the Top](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#table-of-contents) 1972 | 1973 | ## Database Learning Resources 1974 | 1975 | [SQL](https://en.wikipedia.org/wiki/SQL) is a standard language for storing, manipulating and retrieving data in relational databases.
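The storing, manipulating, and retrieving described above can be tried without installing a database server, using Python's built-in `sqlite3` module (a minimal sketch; the `pods` table and its rows are invented example data):

```python
import sqlite3

# In-memory database: demonstrates storing, manipulating, and retrieving data with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pods (name TEXT, namespace TEXT, restarts INTEGER)")
conn.executemany(
    "INSERT INTO pods VALUES (?, ?, ?)",
    [("api", "prod", 0), ("worker", "prod", 3), ("debug", "dev", 1)],
)

# Retrieve with a WHERE filter and an ORDER BY clause; ? placeholders avoid SQL injection.
rows = conn.execute(
    "SELECT name, restarts FROM pods WHERE namespace = ? ORDER BY restarts DESC",
    ("prod",),
).fetchall()
print(rows)  # [('worker', 3), ('api', 0)]
conn.close()
```

The same `SELECT`/`WHERE`/`ORDER BY` vocabulary carries over to the server-based relational databases covered below.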
1976 | 1977 | [SQL Tutorial by W3Schools](https://www.w3schools.com/sql/) 1978 | 1979 | [Learn SQL Skills Online from Coursera](https://www.coursera.org/courses?query=sql) 1980 | 1981 | [SQL Courses Online from Udemy](https://www.udemy.com/topic/sql/) 1982 | 1983 | [SQL Online Training Courses from LinkedIn Learning](https://www.linkedin.com/learning/topics/sql) 1984 | 1985 | [Learn SQL For Free from Codecademy](https://www.codecademy.com/learn/learn-sql) 1986 | 1987 | [GitLab's SQL Style Guide](https://about.gitlab.com/handbook/business-ops/data-team/platform/sql-style-guide/) 1988 | 1989 | [OracleDB SQL Style Guide Basics](https://oracle.readthedocs.io/en/latest/sql/basics/style-guide.html) 1990 | 1991 | [Tableau CRM: BI Software and Tools](https://www.salesforce.com/products/crm-analytics/overview/) 1992 | 1993 | [Databases on AWS](https://aws.amazon.com/products/databases/) 1994 | 1995 | [Best Practices and Recommendations for SQL Server Clustering in AWS EC2.](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/aws-sql-clustering.html) 1996 | 1997 | [Connecting from Google Kubernetes Engine to a Cloud SQL instance.](https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine) 1998 | 1999 | [Educational Microsoft Azure SQL resources](https://docs.microsoft.com/en-us/sql/sql-server/educational-sql-resources?view=sql-server-ver15) 2000 | 2001 | [MySQL Certifications](https://www.mysql.com/certification/) 2002 | 2003 | [SQL vs. NoSQL Databases: What's the Difference?](https://www.ibm.com/cloud/blog/sql-vs-nosql) 2004 | 2005 | [What is NoSQL?](https://aws.amazon.com/nosql/) 2006 | 2007 | ## Databases and Tools 2008 | 2009 | [Azure Data Studio](https://github.com/Microsoft/azuredatastudio) is an open source data management tool that enables working with SQL Server, Azure SQL DB and SQL DW from Windows, macOS and Linux. 
2010 | 2011 | [Azure SQL Database](https://azure.microsoft.com/en-us/services/sql-database/) is the intelligent, scalable, relational database service built for the cloud. It’s evergreen and always up to date, with AI-powered and automated features that optimize performance and durability for you. Serverless compute and Hyperscale storage options automatically scale resources on demand, so you can focus on building new applications without worrying about storage size or resource management. 2012 | 2013 | [Azure SQL Managed Instance](https://azure.microsoft.com/en-us/services/azure-sql/sql-managed-instance/) is a fully managed SQL Server Database engine instance that's hosted in Azure and placed in your network. This deployment model makes it easy to lift and shift your on-premises applications to the cloud with very few application and database changes. Managed instance has split compute and storage components. 2014 | 2015 | [Azure Synapse Analytics](https://azure.microsoft.com/en-us/services/synapse-analytics/) is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless or provisioned resources at scale. It brings together the best of the SQL technologies used in enterprise data warehousing, Spark technologies used in big data analytics, and Pipelines for data integration and ETL/ELT. 2016 | 2017 | [MSSQL for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-mssql.mssql) is an extension for developing Microsoft SQL Server, Azure SQL Database and SQL Data Warehouse everywhere with a rich set of functionalities. 
2018 | 2019 | [SQL Server Data Tools (SSDT)](https://docs.microsoft.com/en-us/sql/ssdt/download-sql-server-data-tools-ssdt) is a development tool for building SQL Server relational databases, Azure SQL Databases, Analysis Services (AS) data models, Integration Services (IS) packages, and Reporting Services (RS) reports. With SSDT, a developer can design and deploy any SQL Server content type with the same ease as they would develop an application in Visual Studio or Visual Studio Code. 2020 | 2021 | [Bulk Copy Program](https://docs.microsoft.com/en-us/sql/tools/bcp-utility) is a command-line tool that comes with Microsoft SQL Server. BCP allows you to import and export large amounts of data into and out of SQL Server databases quickly and efficiently. 2022 | 2023 | [SQL Server Migration Assistant](https://www.microsoft.com/en-us/download/details.aspx?id=54258) is a tool from Microsoft that simplifies the database migration process from Oracle to SQL Server, Azure SQL Database, Azure SQL Database Managed Instance, and Azure SQL Data Warehouse. 2024 | 2025 | [SQL Server Integration Services](https://docs.microsoft.com/en-us/sql/integration-services/sql-server-integration-services?view=sql-server-ver15) is a development platform for building enterprise-level data integration and data transformation solutions. Use Integration Services to solve complex business problems by copying or downloading files, loading data warehouses, cleansing and mining data, and managing SQL Server objects and data. 2026 | 2027 | [SQL Server Business Intelligence (BI)](https://www.microsoft.com/en-us/sql-server/sql-business-intelligence) is a collection of tools in Microsoft's SQL Server for transforming raw data into information businesses can use to make decisions. 2028 | 2029 | [Tableau](https://www.tableau.com/) is a Data Visualization software used in relational databases, cloud databases, and spreadsheets.
Tableau was acquired by [Salesforce in August 2019](https://investor.salesforce.com/press-releases/press-release-details/2019/Salesforce-Completes-Acquisition-of-Tableau/default.aspx). 2030 | 2031 | [DataGrip](https://www.jetbrains.com/datagrip/) is a professional database IDE developed by JetBrains that provides context-sensitive code completion, helping you to write SQL code faster. Completion is aware of the table structure, foreign keys, and even database objects created in code you're editing. 2032 | 2033 | [RStudio](https://rstudio.com/) is an integrated development environment for R and Python, with a console, syntax-highlighting editor that supports direct code execution, and tools for plotting, history, debugging and workspace management. 2034 | 2035 | [MySQL](https://www.mysql.com/) is a fully managed database service to deploy cloud-native applications using the world's most popular open source database. 2036 | 2037 | [PostgreSQL](https://www.postgresql.org/) is a powerful, open source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance. 2038 | 2039 | [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) is a key-value and document database that delivers single-digit millisecond performance at any scale. It is a fully managed, multiregion, multimaster, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. 2040 | 2041 | [FoundationDB](https://www.foundationdb.org/) is an open source distributed database designed to handle large volumes of structured data across clusters of commodity servers. It organizes data as an ordered key-value store and employs ACID transactions for all operations. It is especially well-suited for read/write workloads but also has excellent performance for write-intensive workloads.
FoundationDB was acquired by [Apple in 2015](https://techcrunch.com/2015/03/24/apple-acquires-durable-database-company-foundationdb/). 2042 | 2043 | [CouchbaseDB](https://www.couchbase.com/) is an open source distributed [multi-model NoSQL document-oriented database](https://en.wikipedia.org/wiki/Multi-model_database). It creates a key-value store with managed cache for sub-millisecond data operations, with purpose-built indexers for efficient queries and a powerful query engine for executing SQL queries. 2044 | 2045 | [IBM DB2](https://www.ibm.com/analytics/db2) is a collection of hybrid data management products offering a complete suite of AI-empowered capabilities designed to help you manage both structured and unstructured data on premises as well as in private and public cloud environments. Db2 is built on an intelligent common SQL engine designed for scalability and flexibility. 2046 | 2047 | [MongoDB](https://www.mongodb.com/) is a document database, meaning it stores data in JSON-like documents. 2048 | 2049 | [OracleDB](https://www.oracle.com/database/) is a powerful fully managed database that helps developers manage business-critical data with the highest availability, reliability, and security. 2050 | 2051 | [MariaDB](https://mariadb.com/) is an enterprise open source database solution for modern, mission-critical applications. 2052 | 2053 | [SQLite](https://sqlite.org/index.html) is a C-language library that implements a small, fast, self-contained, high-reliability, full-featured SQL database engine. SQLite is the most used database engine in the world. SQLite is built into all mobile phones and most computers and comes bundled inside countless other applications that people use every day. 2054 | 2055 | [SQLite Database Browser](https://sqlitebrowser.org/) is an open source SQL tool that allows users to create, design, and edit SQLite database files. It lets users show a log of all the SQL commands that have been issued by them and by the application itself.
2056 | 2057 | [dbWatch](https://www.dbwatch.com/) is a complete database monitoring/management solution for SQL Server, Oracle, PostgreSQL, Sybase, MySQL and Azure. Designed for proactive management and automation of routine maintenance in large scale on-premise, hybrid/cloud database environments. 2058 | 2059 | [Cosmos DB Profiler](https://hibernatingrhinos.com/products/cosmosdbprof) is a real-time visual debugger allowing a development team to gain valuable insight and perspective into their usage of Cosmos DB database. It identifies over a dozen suspicious behaviors from your application’s interaction with Cosmos DB. 2060 | 2061 | [Adminer](https://www.adminer.org/) is an SQL management client tool for managing databases, tables, relations, indexes, and users. Adminer has support for all the popular database management systems such as MySQL, MariaDB, PostgreSQL, SQLite, MS SQL, Oracle, Firebird, SimpleDB, Elasticsearch and MongoDB. 2062 | 2063 | [DBeaver](https://dbeaver.io/) is an open source database tool for developers and database administrators. It offers support for JDBC compliant databases such as MySQL, Oracle, IBM DB2, SQL Server, Firebird, SQLite, Sybase, Teradata, Apache Hive, Phoenix, and Presto. 2064 | 2065 | [DbVisualizer](https://dbvis.com/) is a SQL management tool that allows users to manage a wide range of databases such as Oracle, Sybase, SQL Server, MySQL, H2, and SQLite. 2066 | 2067 | [AppDynamics Database](https://www.appdynamics.com/supported-technologies/database) is a management product for Microsoft SQL Server. With AppDynamics you can monitor and trend key performance metrics such as resource consumption, database objects, schema statistics and more, allowing you to proactively tune and fix issues in a High-Volume Production Environment. 2068 | 2069 | [Toad](https://www.quest.com/toad/) is a SQL Server DBMS toolset developed by Quest.
It increases productivity by using extensive automation, intuitive workflows, and built-in expertise. This SQL management tool resolves issues, manages change, and promotes the highest levels of code quality for both relational and non-relational databases. 2070 | 2071 | [Lepide SQL Server](https://www.lepide.com/sql-storage-manager/) is an open source storage manager utility to analyse the performance of SQL Servers. It provides a complete overview of all configuration and permission changes being made to your SQL Server environment through an easy-to-use, graphical user interface. 2072 | 2073 | [Sequel Pro](https://sequelpro.com/) is a fast macOS database management tool for working with MySQL. This SQL management tool is helpful for interacting with your database, making it easy to add new databases, new tables, and new rows. 2074 | 2075 | # Telco 5G 2076 | 2077 | [Back to the Top](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#table-of-contents) 2078 | 2079 | 2080 | 2081 | **VMware Cloud First Approach. Source: [VMware](https://www.vmware.com/products/telco-cloud-automation.html).** 2082 | 2083 | 2084 | 2085 | 2086 | **VMware Telco Cloud Automation Components.
Source: [VMware](https://www.vmware.com/products/telco-cloud-automation.html).** 2087 | 2088 | 2089 | ## Telco Learning Resources 2090 | 2091 | [HPE(Hewlett Packard Enterprise) Telco Blueprints overview](https://techhub.hpe.com/eginfolib/servers/docs/Telco/Blueprints/infocenter/index.html#GUID-9906A227-C1FB-4FD5-A3C3-F3B72EC81CAB.html) 2092 | 2093 | [Network Functions Virtualization Infrastructure (NFVI) by Cisco](https://www.cisco.com/c/en/us/solutions/service-provider/network-functions-virtualization-nfv-infrastructure/index.html) 2094 | 2095 | [Introduction to vCloud NFV Telco Edge from VMware](https://docs.vmware.com/en/VMware-vCloud-NFV-OpenStack-Edition/3.1/vloud-nfv-edge-reference-arch-31/GUID-744C45F1-A8D5-4523-9E5E-EAF6336EE3A0.html) 2096 | 2097 | [VMware Telco Cloud Automation(TCA) Architecture Overview](https://docs.vmware.com/en/VMware-Telco-Cloud-Platform-5G-Edition/1.0/telco-cloud-platform-5G-edition-reference-architecture/GUID-C19566B3-F42D-4351-BA55-DE70D55FB0DD.html) 2098 | 2099 | [5G Telco Cloud from VMware](https://telco.vmware.com/) 2100 | 2101 | [Maturing OpenStack Together To Solve Telco Needs from Red Hat](https://www.redhat.com/cms/managed-files/4.Nokia%20CloudBand%20&%20Red%20Hat%20-%20Maturing%20Openstack%20together%20to%20solve%20Telco%20needs%20Ehud%20Malik,%20Senior%20PLM,%20Nokia%20CloudBand.pdf) 2102 | 2103 | [Red Hat telco ecosystem program](https://connect.redhat.com/en/programs/telco-ecosystem) 2104 | 2105 | [OpenStack for Telcos by Canonical](https://ubuntu.com/blog/openstack-for-telcos-by-canonical) 2106 | 2107 | [Open source NFV platform for 5G from Ubuntu](https://ubuntu.com/telco) 2108 | 2109 | [Understanding 5G Technology from Verizon](https://www.verizon.com/5g/) 2110 | 2111 | [Verizon and Unity partner to enable 5G & MEC gaming and enterprise applications](https://www.verizon.com/about/news/verizon-unity-partner-5g-mec-gaming-enterprise) 2112 | 2113 | [Understanding 5G Technology from 
Intel](https://www.intel.com/content/www/us/en/wireless-network/what-is-5g.html) 2114 | 2115 | [Understanding 5G Technology from Qualcomm](https://www.qualcomm.com/invention/5g/what-is-5g) 2116 | 2117 | [Telco Acceleration with Xilinx](https://www.xilinx.com/applications/wired-wireless/telco.html) 2118 | 2119 | [VIMs on OSM Public Wiki](https://osm.etsi.org/wikipub/index.php/VIMs) 2120 | 2121 | [Amazon EC2 Overview and Networking Introduction for Telecom Companies](https://docs.aws.amazon.com/whitepapers/latest/ec2-networking-for-telecom/ec2-networking-for-telecom.pdf) 2122 | 2123 | [Citrix Certified Associate – Networking(CCA-N)](http://training.citrix.com/cms/index.php/certification/networking/) 2124 | 2125 | [Citrix Certified Professional – Virtualization(CCP-V)](https://www.globalknowledge.com/us-en/training/certification-prep/brands/citrix/section/virtualization/citrix-certified-professional-virtualization-ccp-v/) 2126 | 2127 | [CCNP Routing and Switching](https://learningnetwork.cisco.com/s/ccnp-enterprise) 2128 | 2129 | [Certified Information Security Manager(CISM)](https://www.isaca.org/credentialing/cism) 2130 | 2131 | [Wireshark Certified Network Analyst (WCNA)](https://www.wiresharktraining.com/certification.html) 2132 | 2133 | [Juniper Networks Certification Program Enterprise (JNCP)](https://www.juniper.net/us/en/training/certification/) 2134 | 2135 | [Cloud Native Computing Foundation Training and Certification Program](https://www.cncf.io/certification/training/) 2136 | 2137 | 2138 | ## Tools 2139 | 2140 | [OpenStack](https://www.openstack.org/) is an open source cloud platform, deployed as infrastructure-as-a-service (IaaS) to orchestrate data center operations on bare metal, private cloud hardware, public cloud resources, or both (hybrid/multi-cloud architecture).
OpenStack includes advanced use of virtualization and SDN for network traffic optimization to handle the core cloud-computing services of compute, networking, storage, identity, and image services. 2141 | 2142 | [StarlingX](https://www.starlingx.io/) is a complete cloud infrastructure software stack for the edge used by the most demanding applications in industrial IoT, telecom, video delivery and other ultra-low latency use cases. 2143 | 2144 | [Airship](https://www.airshipit.org/) is a collection of open source tools for automating cloud provisioning and management. Airship provides a declarative framework for defining and managing the life cycle of open infrastructure tools and the underlying hardware. 2145 | 2146 | [Network functions virtualization (NFV)](https://www.vmware.com/topics/glossary/content/network-functions-virtualization-nfv) is the replacement of network appliance hardware with virtual machines. The virtual machines use a hypervisor to run networking software and processes such as routing and load balancing. NFV allows for the separation of communication services from dedicated hardware, such as routers and firewalls. This separation means network operations can provide new services dynamically and without installing new hardware. Deploying network components with network functions virtualization only takes hours, compared to the months required with traditional networking solutions. 2147 | 2148 | [Software Defined Networking (SDN)](https://www.vmware.com/topics/glossary/content/software-defined-networking) is an approach to networking that uses software-based controllers or application programming interfaces (APIs) to communicate with underlying hardware infrastructure and direct traffic on a network. This model differs from that of traditional networks, which use dedicated hardware devices (routers and switches) to control network traffic.
2149 | 2150 | [Virtualized Infrastructure Manager (VIM)](https://www.cisco.com/c/en/us/td/docs/net_mgmt/network_function_virtualization_Infrastructure/3_2_2/install_guide/Cisco_VIM_Install_Guide_3_2_2/Cisco_VIM_Install_Guide_3_2_2_chapter_00.html) delivers services and reduces costs with high performance lifecycle management. It manages the full lifecycle of the software and hardware comprising your NFV infrastructure (NFVI), maintaining a live inventory and allocation plan of both physical and virtual resources. 2151 | 2152 | [Management and Orchestration (MANO)](https://www.etsi.org/technologies/open-source-mano) is an ETSI-hosted initiative to develop an Open Source NFV Management and Orchestration (MANO) software stack aligned with ETSI NFV. Two of the key components of the ETSI NFV architectural framework are the NFV Orchestrator and VNF Manager, known as NFV MANO. 2153 | 2154 | [Magma](https://www.magmacore.org/) is an open source software platform that gives network operators an open, flexible and extendable mobile core network solution. Their mission is to connect the world to a faster network by enabling service providers to build cost-effective and extensible carrier-grade networks. Magma is agnostic to 3GPP generation (2G, 3G, 4G or upcoming 5G networks) and access network (cellular or WiFi). It can flexibly support a radio access network with minimal development and deployment effort. 2155 | 2156 | [OpenRAN](https://open-ran.org/) is an intelligent Radio Access Network (RAN) integrated on general purpose platforms with open interfaces between software defined functions. The Open RAN ecosystem enables enormous flexibility and interoperability, with complete openness to multi-vendor deployments. 2157 | 2158 | [Open vSwitch (OVS)](https://www.openvswitch.org/) is an open source, production quality, multilayer virtual switch licensed under the Apache 2.0 license.
It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, 802.1ag). 2159 | 2160 | [Edge](https://www.ibm.com/cloud/what-is-edge-computing) is a distributed computing framework that brings enterprise applications closer to data sources such as IoT devices or local edge servers. This proximity to data at its source can deliver strong business benefits, including faster insights, improved response times and better bandwidth availability. 2161 | 2162 | [Multi-access edge computing (MEC)](https://www.etsi.org/technologies/multi-access-edge-computing) is an Industry Specification Group (ISG) within ETSI to create a standardized, open environment which will allow the efficient and seamless integration of applications from vendors, service providers, and third-parties across multi-vendor Multi-access Edge Computing platforms. 2163 | 2164 | A [virtualized network function (VNF)](https://www.juniper.net/documentation/en_US/cso4.1/topics/concept/nsd-vnf-overview.html) is a software application used in a Network Functions Virtualization (NFV) implementation that has well-defined interfaces, and provides one or more component networking functions in a defined way. For example, a security VNF provides Network Address Translation (NAT) and firewall component functions. 2165 | 2166 | A [cloud-native network function (CNF)](https://www.cncf.io/announcements/2020/11/18/cloud-native-network-functions-conformance-launched-by-cncf/) is a network function designed and implemented to run inside containers. CNFs inherit all the cloud native architectural and operational principles including Kubernetes (K8s) lifecycle management, agility, resilience, and observability. 2167 | 2168 | A [physical network function (PNF)](https://www.mpirical.com/glossary/pnf-physical-network-function) is a physical network node which has not undergone virtualization.
Both PNFs and VNFs (Virtualized Network Functions) can be used to form an overall Network Service. 2169 | 2170 | [Network functions virtualization infrastructure (NFVI)](https://docs.vmware.com/en/VMware-vCloud-NFV/2.0/vmware-vcloud-nfv-reference-architecture-20/GUID-FBEA6C6B-54D8-4A37-87B1-D825F9E0DBC7.html) is the foundation of the overall NFV architecture. It provides the physical compute, storage, and networking hardware that hosts the VNFs. Each NFVI block can be thought of as an NFVI node, and many nodes can be deployed and controlled geographically. 2171 | 2172 | # Open Source Security 2173 | 2174 | [Back to the Top](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#table-of-contents) 2175 | 2176 | 2177 | [Open Source Security Foundation (OpenSSF)](https://openssf.org/) is a cross-industry collaboration that brings together leaders to improve the security of open source software by building a broader community, targeted initiatives, and best practices. The OpenSSF brings together open source security initiatives under one foundation to accelerate work through cross-industry support. It combines the Core Infrastructure Initiative and the Open Source Security Coalition, and includes new working groups that address vulnerability disclosures, security tooling and more.
2178 | 2179 | ## Security Standards, Frameworks and Benchmarks 2180 | 2181 | [STIGs Benchmarks - Security Technical Implementation Guides](https://public.cyber.mil/stigs/) 2182 | 2183 | [CIS Benchmarks - CIS Center for Internet Security](https://www.cisecurity.org/cis-benchmarks/) 2184 | 2185 | [CIS Top 18 Critical Security Controls](https://www.cisecurity.org/controls/cis-controls-list) 2186 | 2187 | [OSSTMM (Open Source Security Testing Methodology Manual) PDF](https://github.com/mikeroyal/Open-Source-Security-Guide/files/8834704/osstmm.en.2.1.pdf) 2188 | 2189 | [NIST Technical Guide to Information Security Testing and Assessment (PDF)](https://github.com/mikeroyal/Open-Source-Security-Guide/files/8834705/nistspecialpublication800-115.pdf) 2190 | 2191 | [NIST - Current FIPS](https://www.nist.gov/itl/current-fips) 2192 | 2193 | [ISO Standards Catalogue](https://www.iso.org/standards.html) 2194 | 2195 | [Common Criteria for Information Technology Security Evaluation (CC)](https://www.commoncriteriaportal.org/cc/) is an international standard (ISO/IEC 15408) for computer security. It allows an objective evaluation to validate that a particular product satisfies a defined set of security requirements. 2196 | 2197 | [ISO 22301](https://www.iso.org/en/contents/data/standard/07/51/75106.html) is the international standard that provides a best-practice framework for implementing an optimised BCMS (business continuity management system). 2198 | 2199 | [ISO 27001](https://www.iso.org/isoiec-27001-information-security.html) is the international standard that describes the requirements for an ISMS (information security management system). The framework is designed to help organizations manage their security practices in one place, consistently and cost-effectively. 2200 | 2201 | [ISO 27701](https://www.iso.org/en/contents/data/standard/07/16/71670.html) specifies the requirements for a PIMS (privacy information management system) based on the requirements of ISO 27001.
2202 | It is extended by a set of privacy-specific requirements, control objectives and controls. Companies that have implemented ISO 27001 will be able to use ISO 27701 to extend their security efforts to cover privacy management. 2203 | 2204 | [EU GDPR (General Data Protection Regulation)](https://gdpr.eu/) is a privacy and data protection law that supersedes existing national data protection laws across the EU, bringing uniformity by introducing just one main data protection law for companies/organizations to comply with. 2205 | 2206 | [CCPA (California Consumer Privacy Act)](https://www.oag.ca.gov/privacy/ccpa) is a data privacy law that took effect on January 1, 2020 in the State of California. It applies to businesses that collect California residents’ personal information, and its privacy requirements are similar to those of the EU’s GDPR (General Data Protection Regulation). 2207 | 2208 | [Payment Card Industry (PCI) Data Security Standards (DSS)](https://docs.microsoft.com/en-us/microsoft-365/compliance/offering-pci-dss) is a global information security standard designed to prevent fraud through increased control of credit card data. 2209 | 2210 | [SOC 2](https://www.aicpa.org/interestareas/frc/assuranceadvisoryservices/aicpasoc2report.html) is an auditing procedure that ensures your service providers securely manage your data to protect the interests of your company/organization and the privacy of its clients. 2211 | 2212 | [NIST CSF](https://www.nist.gov/national-security-standards) is a voluntary framework primarily intended for critical infrastructure organizations to manage and mitigate cybersecurity risk based on existing best practice. 2213 | 2214 | ## Security Tools 2215 | 2216 | [AppArmor](https://www.apparmor.net/) is an effective and easy-to-use Linux application security system.
AppArmor proactively protects the operating system and applications from external or internal threats, even zero-day attacks, by enforcing good behavior and preventing both known and unknown application flaws from being exploited. AppArmor supplements the traditional Unix discretionary access control (DAC) model by providing mandatory access control (MAC). It has been included in the mainline Linux kernel since version 2.6.36 and its development has been supported by Canonical since 2009. 2217 | 2218 | [SELinux](https://github.com/SELinuxProject/selinux) is a security enhancement to Linux which allows users and administrators more control over access control. Access can be constrained on such variables as which users and applications can access which resources. These resources may take the form of files. Standard Linux access controls, such as file modes (-rwxr-xr-x), are modifiable by the user and the applications which the user runs. Conversely, SELinux access controls are determined by a policy loaded on the system which may not be changed by careless users or misbehaving applications. 2219 | 2220 | [Control Groups (cgroups)](https://www.redhat.com/sysadmin/cgroups-part-one) is a Linux kernel feature that allows you to allocate resources such as CPU time, system memory, network bandwidth, or any combination of these resources for user-defined groups of tasks (processes) running on a system. 2221 | 2222 | [EarlyOOM](https://github.com/rfjakob/earlyoom) is a daemon for Linux that enables users to more quickly recover and regain control over their system in low-memory situations with heavy swap usage. 2223 | 2224 | [Libgcrypt](https://www.gnupg.org/related_software/libgcrypt/) is a general-purpose cryptographic library originally based on code from GnuPG.
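The cgroups mechanism described above is driven entirely by files under the cgroup filesystem. A minimal Python sketch of capping a process's memory with cgroup v2 (assumptions: cgroup v2 mounted at the conventional `/sys/fs/cgroup` path, and root privileges to actually perform the writes; the `demo` group name is illustrative):

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup"  # conventional cgroup v2 mount point (assumption)

def memory_limit_writes(group: str, limit_bytes: int, pid: int):
    """Return the (path, value) writes that cap a process's memory with
    cgroup v2: set memory.max for the group, then move the PID into it."""
    base = os.path.join(CGROUP_ROOT, group)
    return [
        (os.path.join(base, "memory.max"), str(limit_bytes)),
        (os.path.join(base, "cgroup.procs"), str(pid)),
    ]

def apply(group: str, limit_bytes: int, pid: int) -> None:
    """Perform the writes for real; requires root on a cgroup v2 system."""
    base = os.path.join(CGROUP_ROOT, group)
    os.makedirs(base, exist_ok=True)
    for path, value in memory_limit_writes(group, limit_bytes, pid):
        with open(path, "w") as f:
            f.write(value)

if __name__ == "__main__":
    # Dry run: show what would be written to cap group "demo" at 256 MiB.
    for path, value in memory_limit_writes("demo", 256 * 1024 * 1024, os.getpid()):
        print(path, "<-", value)
```

This is the same file interface that container runtimes (and Kubernetes resource limits) ultimately drive; the sketch only shows the shape of the writes, not a production controller.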
2225 | 2226 | [Pi-hole](https://pi-hole.net/) is a [DNS sinkhole](https://en.wikipedia.org/wiki/DNS_Sinkhole) that protects your devices from unwanted content, without installing any client-side software, intended for use on a private network. It is designed for use on embedded devices with network capability, such as the Raspberry Pi, but it can be used on other machines running Linux and cloud implementations. 2227 | 2228 | [Aircrack-ng](https://www.aircrack-ng.org/) is a network software suite consisting of a detector, packet sniffer, WEP and WPA/WPA2-PSK cracker and analysis tool for 802.11 wireless LANs. It works with any wireless network interface controller whose driver supports raw monitoring mode and can sniff 802.11a, 802.11b and 802.11g traffic. 2229 | 2230 | [Acra](https://cossacklabs.com/acra) is a single database security suite with nine strong security controls: application-level encryption, searchable encryption, data masking, data tokenization, secure authentication, data leakage prevention, database request firewall, cryptographically signed audit logging, and security events automation. It is designed to cover the most important data security requirements with SQL and NoSQL databases and distributed apps in a fast, convenient, and reliable way. 2231 | 2232 | [Netdata](https://github.com/netdata/netdata) is a high-fidelity infrastructure monitoring and troubleshooting tool. Its real-time monitoring agent collects thousands of metrics from systems, hardware, containers, and applications with zero configuration. It runs permanently on all your physical/virtual servers, containers, cloud deployments, and edge/IoT devices, and is perfectly safe to install on your systems mid-incident without any preparation. 2233 | 2234 | [Trivy](https://aquasecurity.github.io/trivy/) is a comprehensive security scanner for vulnerabilities in container images, file systems, and Git repositories, as well as for configuration issues and hard-coded secrets.
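The DNS-sinkhole idea behind Pi-hole is simple: answer queries for blocklisted domains with an unroutable address instead of forwarding them upstream. A toy Python sketch of that lookup logic (the blocklist entries and the `resolve_upstream` stub are illustrative, not Pi-hole's actual implementation):

```python
BLOCKLIST = {"ads.example.com", "tracker.example.net"}  # illustrative entries
SINKHOLE_ADDR = "0.0.0.0"  # unroutable answer returned for blocked names

def resolve_upstream(domain: str) -> str:
    # Stand-in for a real forwarding DNS query to an upstream resolver.
    return "93.184.216.34"

def resolve(domain: str) -> str:
    """Sinkhole blocked domains (and their subdomains); forward the rest."""
    labels = domain.lower().rstrip(".").split(".")
    # Check the name itself and every parent domain against the blocklist.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE_ADDR
    return resolve_upstream(domain)
```

Because the check walks parent domains, `sub.ads.example.com` is sinkholed by the `ads.example.com` entry, which is why one blocklist entry can cover a whole ad-serving zone.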
2235 | 2236 | [Lynis](https://cisofy.com/lynis/) is a security auditing tool for Linux, macOS, and UNIX-based systems. Assists with compliance testing (HIPAA/ISO27001/PCI DSS) and system hardening. Agentless, and installation optional. 2237 | 2238 | [OWASP Nettacker](https://github.com/OWASP/Nettacker) is a project created to automate information gathering, vulnerability scanning and eventually generating a report for networks, including services, bugs, vulnerabilities, misconfigurations, and other information. This software will utilize TCP SYN, ACK, ICMP, and many other protocols in order to detect and bypass Firewall/IDS/IPS devices. 2239 | 2240 | [Terrascan](https://runterrascan.io/) is a static code analyzer for Infrastructure as Code to mitigate risk before provisioning cloud native infrastructure. 2241 | 2242 | [Sliver](https://github.com/BishopFox/sliver) is an open source cross-platform adversary emulation/red team framework, it can be used by organizations of all sizes to perform security testing. Sliver's implants support C2 over Mutual TLS (mTLS), WireGuard, HTTP(S), and DNS and are dynamically compiled with per-binary asymmetric encryption keys. 2243 | 2244 | [Attack Surface Analyzer](https://github.com/microsoft/AttackSurfaceAnalyzer) is a [Microsoft](https://github.com/microsoft/) developed open source security tool that analyzes the attack surface of a target system and reports on potential security vulnerabilities introduced during the installation of software or system misconfiguration. 2245 | 2246 | [Intel Owl](https://intelowl.readthedocs.io/) is an Open Source Intelligence, or OSINT solution to get threat intelligence data about a specific file, an IP or a domain from a single API at scale. It integrates a number of analyzers available online and a lot of cutting-edge malware analysis tools. 
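Scanners like OWASP Nettacker layer protocol tricks (SYN, ACK, ICMP) on top of a very simple primitive: the TCP connect test, where a port is considered open if a full connection to it succeeds. A minimal sketch of that primitive, not of any particular tool:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP connect scan: True if a connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Probe a few common ports on the local machine.
    for port in (22, 80, 443):
        state = "open" if port_is_open("127.0.0.1", port) else "closed"
        print(f"127.0.0.1:{port} {state}")
```

A connect scan completes the full TCP handshake and is therefore easy to log; the SYN ("half-open") scans mentioned above avoid completing the handshake, which is why they need raw sockets and elevated privileges.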
2247 | 2248 | [Deepfence ThreatMapper](https://deepfence.io/) is a runtime tool that hunts for vulnerabilities in your cloud native production platforms (Linux, K8s, AWS Fargate, and more) and ranks these vulnerabilities based on their risk of exploit. 2249 | 2250 | [Dockle](https://containers.goodwith.tech/) is a container image linter for security that helps you build best-practice Docker images. 2251 | 2252 | [RustScan](https://github.com/RustScan/RustScan) is a modern port scanner. 2253 | 2254 | [gosec](https://github.com/securego/gosec) is a Golang security checker that inspects source code for security problems by scanning the Go AST. 2255 | 2256 | [Prowler](https://github.com/prowler-cloud/prowler) is an open source security tool to perform AWS security best practices assessments, audits, incident response, continuous monitoring, hardening and forensics readiness. It contains more than 240 controls covering CIS, PCI-DSS, ISO 27001, GDPR, HIPAA, FFIEC, SOC 2, AWS FTR, ENS and custom security frameworks. 2257 | 2258 | [Burp Suite](https://portswigger.net/burp) is a leading range of cybersecurity tools. 2259 | 2260 | [KernelCI](https://foundation.kernelci.org/) is a community-based open source distributed test automation system focused on upstream kernel development. The primary goal of KernelCI is to use an open testing philosophy to ensure the quality, stability and long-term maintenance of the Linux kernel. 2261 | 2262 | [Continuous Kernel Integration project](https://github.com/cki-project) helps find bugs in kernel patches before they are committed to an upstream kernel tree. The project is run by a team of kernel developers, kernel testers, and automation engineers. 2263 | 2264 | [eBPF](https://ebpf.io) is a revolutionary technology that can run sandboxed programs in the Linux kernel without changing kernel source code or loading kernel modules.
By making the Linux kernel programmable, infrastructure software can leverage existing layers, making them more intelligent and feature-rich without continuing to add additional layers of complexity to the system. 2265 | 2266 | [Cilium](https://cilium.io/) uses eBPF to accelerate getting data in and out of L7 proxies such as Envoy, enabling efficient visibility into API protocols like HTTP, gRPC, and Kafka. 2267 | 2268 | [Hubble](https://github.com/cilium/hubble) provides network, service, and security observability for Kubernetes using eBPF. 2269 | 2270 | [Istio](https://istio.io/) is an open platform to connect, manage, and secure microservices. Istio's control plane provides an abstraction layer over the underlying cluster management platform, such as Kubernetes and Mesos. 2271 | 2272 | [Certgen](https://github.com/cilium/certgen) is a convenience tool to generate and store certificates for Hubble Relay mTLS. 2273 | 2274 | [Scapy](https://scapy.net/) is a Python-based interactive packet manipulation program and library. 2275 | 2276 | [syzkaller](https://github.com/google/syzkaller) is an unsupervised, coverage-guided kernel fuzzer. 2277 | 2278 | [SchedViz](https://github.com/google/schedviz) is a tool for gathering and visualizing kernel scheduling traces on Linux machines. 2279 | 2280 | [oss-fuzz](https://google.github.io/oss-fuzz/) aims to make common open source software more secure and stable by combining modern fuzzing techniques with scalable, distributed execution. 2281 | 2282 | [OSSEC](https://www.ossec.net/) is a free, open-source host-based intrusion detection system. It performs log analysis, integrity checking, Windows registry monitoring, rootkit detection, time-based alerting, and active response. 2283 | 2284 | [Metasploit Project](https://www.metasploit.com/) is a computer security project that provides information about security vulnerabilities and aids in penetration testing and IDS signature development.
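Fuzzers like syzkaller and OSS-Fuzz add coverage guidance and massive scale, but the core loop they build on is just this: generate semi-random inputs, feed them to the target, and record which inputs crash it. A stripped-down sketch of that loop (the `target` function is a deliberately buggy stand-in, not real parser code):

```python
import random

def target(data: bytes) -> None:
    """Deliberately buggy stand-in for a parser under test."""
    if data and data[0] == 0xFF:
        raise ValueError("crash on 0xFF header byte")

def fuzz(iterations: int = 10_000, seed: int = 0):
    """Throw random byte strings at the target; collect crashing inputs."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            target(data)
        except Exception:
            crashes.append(data)  # a real fuzzer would also minimize this input
    return crashes
```

Coverage-guided fuzzers improve on this blind loop by mutating inputs that reached new code paths, which is what lets them find bugs far deeper than a single magic byte.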
2285 | 2286 | [Wfuzz](https://github.com/xmendez/wfuzz) was created to facilitate web application security assessments and is based on a simple concept: it replaces any reference to the FUZZ keyword with the value of a given payload. 2287 | 2288 | [Nmap](https://nmap.org/) is a security scanner used to discover hosts and services on a computer network, thus building a "map" of the network. 2289 | 2290 | [Patchwork](https://github.com/getpatchwork/patchwork) is a web-based patch tracking system designed to facilitate the contribution and management of contributions to an open-source project. 2291 | 2292 | [pfSense](https://www.pfsense.org/) is a free and open source firewall and router that also features unified threat management, load balancing, multi-WAN, and more. 2293 | 2294 | [Snowpatch](https://github.com/ruscur/snowpatch) is a continuous integration tool for projects using a patch-based, mailing-list-centric git workflow. This workflow is used by a number of well-known open source projects such as the Linux kernel. 2295 | 2296 | [Snort](https://www.snort.org/) is an open-source, free and lightweight network intrusion detection system (NIDS) software for Linux and Windows to detect emerging threats. 2297 | 2298 | [Wireshark](https://www.wireshark.org/) is a free and open-source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. 2299 | 2300 | [OpenSCAP](https://www.open-scap.org/) is a U.S. standard maintained by the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/). It provides multiple tools to assist administrators and auditors with assessment, measurement, and enforcement of security baselines. OpenSCAP maintains great flexibility and interoperability by reducing the costs of performing security audits. Whether you want to evaluate DISA STIGs, NIST's USGCB, or Red Hat's Security Response Team's content, all are supported by OpenSCAP.
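The FUZZ-keyword substitution described above for Wfuzz is easy to picture: every occurrence of the keyword in a request template is replaced with each payload in turn, producing one request per payload. A minimal sketch of that expansion (the `target.example` host and wordlist are illustrative, not tied to Wfuzz's actual internals):

```python
from typing import Iterable, Iterator

def expand(template: str, payloads: Iterable[str],
           keyword: str = "FUZZ") -> Iterator[str]:
    """Yield the template once per payload, with every occurrence of the
    keyword replaced by that payload."""
    for payload in payloads:
        yield template.replace(keyword, payload)

if __name__ == "__main__":
    # Expand a directory-brute-forcing template over a tiny wordlist.
    for url in expand("https://target.example/FUZZ", ["admin", "login", "backup"]):
        print(url)
```

A real run would send each expanded request and compare status codes and response sizes to spot interesting paths; the expansion step itself is all the FUZZ keyword does.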
2301 | 2302 | [Tink](https://github.com/google/tink) is a multi-language, cross-platform, open source library that provides cryptographic APIs that are secure, easy to use correctly, and harder to misuse. 2303 | 2304 | [OWASP](https://www.owasp.org/index.php/Main_Page) is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the field of web application security. 2305 | 2306 | [Open Vulnerability and Assessment Language](https://oval.mitre.org/) is a community effort to standardize how to assess and report upon the machine state of computer systems. OVAL includes a language to encode system details, and community repositories of content. Tools and services that use OVAL provide enterprises with accurate, consistent, and actionable information to improve their security. 2307 | 2308 | [ClamAV](https://www.clamav.net/) is an open source antivirus engine for detecting trojans, viruses, malware & other malicious threats. 2309 | 2310 | ### Security Tutorials & Resources 2311 | 2312 | - [Microsoft Open Source Software Security](https://www.microsoft.com/en-us/securityengineering/opensource) 2313 | 2314 | - [Cloudflare Open Source Security](https://cloudflare.github.io) 2315 | 2316 | - [The Seven Properties of Highly Secure Devices](https://www.microsoft.com/en-us/research/publication/seven-properties-highly-secure-devices/) 2317 | 2318 | - [How Layer 7 of the Internet Works](https://www.cloudflare.com/learning/ddos/what-is-layer-7/) 2319 | 2320 | - [The 7 Kinds of Security](https://www.veracode.com/sites/default/files/Resources/eBooks/7-kinds-of-security.pdf) 2321 | 2322 | - [The Libgcrypt Reference Manual](https://www.gnupg.org/documentation/manuals/gcrypt/) 2323 | 2324 | - [The Open Web Application Security Project (OWASP) Foundation Top 10](https://owasp.org/www-project-top-ten/) 2325 | 2326 | - [Best Practices for Using Open Source Code from The Linux
Foundation](https://www.linuxfoundation.org/blog/2017/11/best-practices-using-open-source-code/) 2327 | 2328 | ### Security Certifications 2329 | 2330 | - [AWS Certified Security - Specialty Certification](https://aws.amazon.com/certification/certified-security-specialty/) 2331 | 2332 | - [Microsoft Certified: Azure Security Engineer Associate](https://docs.microsoft.com/en-us/learn/certifications/azure-security-engineer) 2333 | 2334 | - [Google Cloud Certified Professional Cloud Security Engineer](https://cloud.google.com/certification/cloud-security-engineer) 2335 | 2336 | - [Cisco Security Certifications](https://www.cisco.com/c/en/us/training-events/training-certifications/certifications/security.html) 2337 | 2338 | - [The Red Hat Certified Specialist in Security: Linux](https://www.redhat.com/en/services/training/ex415-red-hat-certified-specialist-security-linux-exam) 2339 | 2340 | - [Linux Professional Institute LPIC-3 Enterprise Security Certification](https://www.lpi.org/our-certifications/lpic-3-303-overview) 2341 | 2342 | - [Cybersecurity Training and Courses from IBM Skills](https://www.ibm.com/skills/topics/cybersecurity/) 2343 | 2344 | - [Cybersecurity Courses and Certifications by Offensive Security](https://www.offensive-security.com/courses-and-certifications/) 2345 | 2346 | - [RSA Certification Program](https://community.rsa.com/community/training/certification) 2347 | 2348 | - [Check Point Certified Security Expert (CCSE) Certification](https://training-certifications.checkpoint.com/#/courses/Check%20Point%20Certified%20Expert%20(CCSE)%20R80.x) 2349 | 2350 | - [Check Point Certified Security Administrator (CCSA) Certification](https://training-certifications.checkpoint.com/#/courses/Check%20Point%20Certified%20Admin%20(CCSA)%20R80.x) 2351 | 2352 | - [Check Point Certified Security Master (CCSM) Certification](https://training-certifications.checkpoint.com/#/courses/Check%20Point%20Certified%20Master%20(CCSM)%20R80.x) 2353 | 2354 | - [Certified Cloud
Security Professional (CCSP) Certification](https://www.isc2.org/Certifications/CCSP) 2355 | 2356 | - [Certified Information Systems Security Professional (CISSP) Certification](https://www.isc2.org/Certifications/CISSP) 2357 | 2358 | - [CCNP Routing and Switching](https://learningnetwork.cisco.com/s/ccnp-enterprise) 2359 | 2360 | - [Certified Information Security Manager (CISM)](https://www.isaca.org/credentialing/cism) 2361 | 2362 | - [Wireshark Certified Network Analyst (WCNA)](https://www.wiresharktraining.com/certification.html) 2363 | 2364 | - [Juniper Networks Certification Program Enterprise (JNCP)](https://www.juniper.net/us/en/training/certification/) 2365 | 2366 | - [Security Training Certifications and Courses from Udemy](https://www.udemy.com/courses/search/?src=ukw&q=secuirty) 2367 | 2368 | - [Security Training Certifications and Courses from Coursera](https://www.coursera.org/search?query=security&) 2369 | 2370 | - [Security Certifications Training from Pluralsight](https://www.pluralsight.com/browse/information-cyber-security/security-certifications) 2371 | 2372 | 2373 | ## Contribute 2374 | 2375 | - [x] If you would like to contribute to this guide, simply make a [Pull Request](https://github.com/mikeroyal/Kubernetes-Guide/pulls). 2376 | 2377 | 2378 | ## License 2379 | [Back to the Top](https://github.com/mikeroyal/Kubernetes-Guide/blob/main/README.md#table-of-contents) 2380 | 2381 | Distributed under the [Creative Commons Attribution 4.0 International (CC BY 4.0) Public License](https://creativecommons.org/licenses/by/4.0/). 2382 | -------------------------------------------------------------------------------- /Security Glossary.md: -------------------------------------------------------------------------------- 1 | A list of Key Information Security Terms for Software and Hardware.
**Sources:** [NIST Federal Information Processing Standards (FIPS)](https://csrc.nist.gov/publications/fips), the [Special Publication (SP) 800 series](https://csrc.nist.gov/publications/sp), [NIST Interagency Reports (NISTIRs)](https://csrc.nist.gov/publications/nistir), and the [Committee for National Security Systems Instruction 4009 (CNSSI-4009)](https://www.cnss.gov/CNSS/issuances/Instructions.cfm). 2 | 3 | 4 | A 5 | 6 | Access – Ability and means to communicate with or otherwise interact with a system, to use system resources to handle information, to gain knowledge of the information the system contains, or to control system components and functions. 7 | SOURCE: CNSSI-4009 8 | 9 | Access Authority – An entity responsible for monitoring and granting access privileges for other authorized entities. 10 | SOURCE: CNSSI-4009 11 | 12 | Access Control – The process of granting or denying specific requests to: 1) obtain and use information and related information processing services; and 2) enter specific physical facilities (e.g., federal buildings, military establishments, border crossing entrances). 13 | SOURCE: FIPS 201; CNSSI-4009 14 | 15 | Access Control List (ACL) – 16 | * 1. A list of permissions associated with an object. The list specifies who or what is allowed to access the object and what operations are allowed to be performed on the object. 17 | * 2. A mechanism that implements access control for a system resource by enumerating the system entities that are permitted to access the resource and stating, either implicitly or explicitly, the access modes granted to each entity. 18 | SOURCE: CNSSI-4009 19 | 20 | Access Control Lists (ACLs) – A register of: 21 | * 1. users (including groups, machines, processes) who have been given permission to use a particular system resource, and 22 | * 2. the types of access they have been permitted.
23 | SOURCE: SP 800-12 24 | 25 | Access Control Mechanism – Security safeguards (i.e., hardware and software features, physical controls, operating procedures, management procedures, and various combinations of these) designed to detect and deny unauthorized 26 | access and permit authorized access to an information system. 27 | SOURCE: CNSSI-4009 28 | 29 | Access Level – A category within a given security classification limiting entry or 30 | system connectivity to only authorized persons. 31 | SOURCE: CNSSI-4009 32 | 33 | Access List – Roster of individuals authorized admittance to a controlled area. 34 | SOURCE: CNSSI-4009 35 | 36 | Access Point – A device that logically connects wireless client devices operating in infrastructure mode to one another and provides access to a distribution system, if connected, which is typically an organization’s enterprise wired network. 37 | SOURCE: SP 800-48; SP 800-121 38 | 39 | Access Profile – Association of a user with a list of protected objects the user may access. 40 | SOURCE: CNSSI-4009 41 | 42 | Access Type – Privilege to perform action on an object. Read, write, execute, append, modify, delete, and create are examples of access types. 43 | SOURCE: CNSSI-4009 44 | 45 | Activation Data – Private data, other than keys, that are required to access cryptographic modules. 46 | SOURCE: SP 800-32 47 | 48 | Active Attack – An attack that alters a system or data. 49 | SOURCE: CNSSI-4009 50 | 51 | Active Content – Software in various forms that is able to automatically carry out or trigger actions on a computer platform without the intervention of a user. 52 | SOURCE: CNSSI-4009 53 | 54 | Active Security Testing – Security testing that involves direct interaction with a target, such as sending packets to a target. 55 | SOURCE: SP 800-115 56 | 57 | Advanced Encryption Standard (AES) – The Advanced Encryption Standard specifies a U.S. government-approved cryptographic algorithm that can be used to protect electronic data.
The AES algorithm is a symmetric block cipher that can encrypt (encipher) and decrypt (decipher) information. This standard specifies the Rijndael algorithm, a symmetric block cipher that can process data blocks of 128 bits, using cipher keys with lengths of 128, 192, and 256 bits. 58 | SOURCE: FIPS 197 59 | 60 | B 61 | 62 | Blacklisting – The process of the system invalidating a user ID based on the user’s inappropriate actions. A blacklisted user ID cannot be used to log on to the system, even with the correct authenticator. Blacklisting and lifting of a blacklisting are both security-relevant events. Blacklisting also applies to blocks placed against IP addresses to prevent inappropriate or unauthorized use of Internet resources. 63 | SOURCE: CNSSI-4009 64 | 65 | Blue Team – 66 | * 1. The group responsible for defending an enterprise’s use of 67 | information systems by maintaining its security posture against a 68 | group of mock attackers (i.e., the Red Team). Typically the Blue 69 | Team and its supporters must defend against real or simulated 70 | attacks 1) over a significant period of time, 2) in a representative 71 | operational context (e.g., as part of an operational exercise), and 3) 72 | according to rules established and monitored with the help of a 73 | neutral group refereeing the simulation or exercise (i.e., the White 74 | Team). 75 | * 2. The term Blue Team is also used for defining a group of 76 | individuals that conduct operational network vulnerability 77 | evaluations and provide mitigation techniques to customers who have 78 | a need for an independent technical review of their network security 79 | posture. The Blue Team identifies security threats and risks in the 80 | operating environment, and in cooperation with the customer, 81 | analyzes the network environment and its current state of security 82 | readiness. 
Based on the Blue Team findings and expertise, 83 | they provide recommendations that integrate into an overall 84 | community security solution to increase the customer's cyber security 85 | readiness posture. Oftentimes a Blue Team is employed by itself or 86 | prior to a Red Team employment to ensure that the customer's 87 | networks are as secure as possible before having the Red Team test 88 | the systems. 89 | SOURCE: CNSSI-4009 90 | 91 | Body of Evidence (BoE) – The set of data that documents the information system’s adherence to the security controls applied. The BoE will include a Requirements Verification Traceability Matrix (RVTM) delineating where the selected security controls are met and evidence to that fact can be found. The BoE content required by an Authorizing Official will be adjusted according to the impact levels selected. 92 | SOURCE: CNSSI-4009 93 | 94 | Boundary – Physical or logical perimeter of a system. 95 | SOURCE: CNSSI-4009 96 | 97 | C 98 | 99 | Capstone Policies – Those policies that are developed by governing or coordinating institutions of Health Information Exchanges (HIEs). They provide overall requirements and guidance for protecting health information within those HIEs. Capstone Policies must address the requirements imposed by: (1) all laws, regulations, and guidelines at the federal, state, and local levels; (2) business needs; and (3) policies at the institutional and HIE levels. 100 | SOURCE: NISTIR-7497 101 | 102 | Capture – The method of taking a biometric sample from an end user. 103 | SOURCE: FIPS 201 104 | 105 | Certificate Management – Process whereby certificates (as defined above) are generated, stored, protected, transferred, loaded, used, and destroyed. 106 | SOURCE: CNSSI-4009 107 | 108 | Certificate Management Authority – A Certification Authority (CA) or a Registration Authority (RA).
109 | SOURCE: SP 800-32 110 | 111 | Certificate Policy (CP) – A specialized form of administrative policy tuned to electronic transactions performed during certificate management. A Certificate Policy addresses all aspects associated with the generation, production, distribution, accounting, compromise recovery, and administration of digital certificates. Indirectly, a certificate policy can also govern the transactions conducted using a communications system protected by a certificate-based security system. By controlling critical certificate extensions, such policies and associated enforcement technology can support provision of the security services required by particular applications. 112 | SOURCE: CNSSI-4009; SP 800-32 113 | 114 | Certification Practice Statement – A statement of the practices that a Certification Authority employs in issuing, suspending, revoking, and renewing certificates and providing access to them, in accordance with specific requirements (i.e., requirements specified in this Certificate Policy, or requirements specified in a contract for services). 115 | SOURCE: SP 800-32; CNSSI-4009 116 | 117 | Certification Test and Evaluation – Software and hardware security tests conducted during development of an information system. 118 | SOURCE: CNSSI-4009 119 | 120 | Checksum – Value computed on data to detect error or manipulation. 121 | SOURCE: CNSSI-4009 122 | 123 | Cloud Computing – A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
This cloud model is composed of five essential characteristics, three service models, and four deployment models. 124 | SOURCE(s): NISTIR 8006 under Cloud computing from NIST SP 800-145 - Adapted 125 | 126 | Cryptographic Initialization – Function used to set the state of a cryptographic logic prior to key generation, encryption, or other operating mode. 127 | SOURCE: CNSSI-4009 128 | 129 | Cryptographic Key – A value used to control cryptographic operations, such as decryption, encryption, signature generation, or signature verification. 130 | SOURCE: SP 800-63 131 | 132 | D 133 | 134 | Data – A subset of information in an electronic format that allows it to be retrieved or transmitted. 135 | SOURCE: CNSSI-4009 136 | 137 | Data Aggregation – Compilation of individual data systems and data that could result in the totality of the information being classified, or classified at a higher level, or of beneficial use to an adversary. 138 | SOURCE: CNSSI-4009 139 | 140 | Data Origin Authentication – The process of verifying that the source of the data is as claimed and that the data has not been modified. 141 | SOURCE: CNSSI-4009 142 | 143 | Data Security – Protection of data from unauthorized (accidental or intentional) modification, destruction, or disclosure. 144 | SOURCE: CNSSI-4009 145 | 146 | Data Transfer Device (DTD) – Fill device designed to securely store, transport, and transfer electronically both COMSEC and TRANSEC key, designed to be backward compatible with the previous generation of COMSEC common fill devices, and programmable to support modern mission systems. 147 | SOURCE: CNSSI-4009 148 | 149 | Denial of Service (DoS) – The prevention of authorized access to resources or the delaying of time-critical operations. (Time-critical may be milliseconds or it may be hours, depending upon the service provided.)
SOURCE: CNSSI-4009

Differential Power Analysis – An analysis of the variations of the electrical power consumption of a cryptographic module, using advanced statistical methods and/or other techniques, for the purpose of extracting information correlated to cryptographic keys used in a cryptographic algorithm.
SOURCE: FIPS 140-2

Digital Evidence – Electronic information stored or transferred in digital form.
SOURCE: SP 800-72

Digital Forensics – The application of science to the identification, collection, examination, and analysis of data while preserving the integrity of the information and maintaining a strict chain of custody for the data.
SOURCE: SP 800-86

Digital Signature – An asymmetric key operation where the private key is used to digitally sign data and the public key is used to verify the signature. Digital signatures provide authenticity protection, integrity protection, and non-repudiation.
SOURCE: SP 800-63

Disaster Recovery Plan (DRP) – A written plan for recovering one or more information systems at an alternate facility in response to a major hardware or software failure or destruction of facilities.
SOURCE: SP 800-34

E

Embedded Cryptographic System – Cryptosystem performing or controlling a function as an integral element of a larger system or subsystem.
SOURCE: CNSSI-4009

Embedded Cryptography – Cryptography engineered into an equipment or system whose basic function is not cryptographic.
SOURCE: CNSSI-4009

Encipher – Convert plain text to cipher text by means of a cryptographic system.
SOURCE: CNSSI-4009

Encode – Convert plain text to cipher text by means of a code.
SOURCE: CNSSI-4009

Encrypt – Generic term encompassing encipher and encode.
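* **Note:** As an illustration of the encipher/decipher pair, here is a toy repeating-key XOR cipher (a sketch only; repeating-key XOR is trivially breakable and is not an Approved algorithm):

```python
from itertools import cycle

def xor_encipher(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR each byte of the input with a repeating key byte."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# XOR is its own inverse, so the same function also deciphers.
ciphertext = xor_encipher(b"attack at dawn", b"secret")
recovered = xor_encipher(ciphertext, b"secret")
assert recovered == b"attack at dawn"
```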
SOURCE: CNSSI-4009

Encrypted Key – A cryptographic key that has been encrypted using an Approved security function with a key encrypting key, a PIN, or a password in order to disguise the value of the underlying plaintext key.
SOURCE: FIPS 140-2

Encrypted Network – A network on which messages are encrypted (e.g., using DES, AES, or other appropriate algorithms) to prevent reading by unauthorized parties.
SOURCE: SP 800-32

Encryption – Conversion of plaintext to ciphertext through the use of a cryptographic algorithm.
SOURCE: FIPS 185

End-to-End Encryption – Encryption of information at its origin and decryption at its intended destination without intermediate decryption.
SOURCE: CNSSI-4009

End-to-End Security – Safeguarding information in an information system from point of origin to point of destination.
SOURCE: CNSSI-4009

F

[Federal Risk and Authorization Management Program (FedRAMP)](https://www.gsa.gov/technology/government-it-initiatives/fedramp) is a government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. FedRAMP empowers agencies to use modern cloud technologies, with emphasis on security and protection of federal information, and helps accelerate the adoption of secure cloud solutions.

[Federal Information Security Management Act (FISMA)](https://csrc.nist.gov/topics/laws-and-regulations/laws/fisma) is a United States federal law that defines a comprehensive framework to protect government information, operations, and assets against natural and manmade threats. This risk management framework was signed into law as part of the Electronic Government Act of 2002.
Since 2002, FISMA's scope has widened to apply to state agencies that administer federal programs, and to private businesses and service providers that hold a contract with the U.S. government.

False Positive – An alert that incorrectly indicates that malicious activity is occurring.
SOURCE: SP 800-61

False Rejection – In biometrics, the instance of a security system failing to verify or identify an authorized person. It does not necessarily indicate a flaw in the biometric system; for example, in a fingerprint-based system, an incorrectly aligned finger on the scanner or dirt on the scanner can result in the scanner misreading the fingerprint, causing a false rejection of the authorized user.
SOURCE: CNSSI-4009

Federal Information Processing Standard (FIPS) – A standard for adoption and use by federal departments and agencies that has been developed within the Information Technology Laboratory and published by the National Institute of Standards and Technology, a part of the U.S. Department of Commerce. A FIPS covers some topic in information technology in order to achieve a common level of quality or some level of interoperability.
SOURCE: FIPS 201

File Encryption – The process of encrypting individual files on a storage medium and permitting access to the encrypted data only after proper authentication is provided.
SOURCE: SP 800-111

File Name Anomaly –
* 1. A mismatch between the internal file header and its external extension.
* 2. A file name inconsistent with the content of the file (e.g., renaming a graphics file with a non-graphical extension).
SOURCE: SP 800-72

File Protection – Aggregate of processes and procedures designed to inhibit unauthorized access, contamination, elimination, modification, or destruction of a file or any of its contents.
SOURCE: CNSSI-4009

File Security – Means by which access to computer files is limited to authorized users only.
SOURCE: CNSSI-4009

Firewall – A gateway that limits access between networks in accordance with local security policy.
SOURCE: SP 800-32

Forensics – The practice of gathering, retaining, and analyzing computer-related data for investigative purposes in a manner that maintains the integrity of the data.
SOURCE: CNSSI-4009

G

Gateway – Interface providing compatibility between networks by converting transmission speeds, protocols, codes, or security measures.
SOURCE: CNSSI-4009

H

Handshaking Procedures – Dialogue between two information systems for synchronizing, identifying, and authenticating themselves to one another.
SOURCE: CNSSI-4009

Hard Copy Key – Physical keying material, such as printed key lists, punched or printed key tapes, or programmable, read-only memories (PROM).
SOURCE: CNSSI-4009

Hardening – Configuring a host’s operating systems and applications to reduce the host’s security weaknesses.
SOURCE: SP 800-123

Hardware – The physical components of an information system.
SOURCE: CNSSI-4009

High Availability – A failover feature to ensure availability during device or component interruptions.
SOURCE: SP 800-113

I

Identification – The process of verifying the identity of a user, process, or device, usually as a prerequisite for granting access to resources in an IT system.
SOURCE: SP 800-47

Identifier – Unique data used to represent a person’s identity and associated attributes. A name or a card number are examples of identifiers.
SOURCE: FIPS 201

Identity – A set of attributes that uniquely describe a person within a given context.
SOURCE: SP 800-63

Identity – The set of physical and behavioral characteristics by which an individual is uniquely recognizable.
SOURCE: FIPS 201

Identity Token – Smart card, metal key, or other physical object used to authenticate identity.
SOURCE: CNSSI-4009

Identity Validation – Tests enabling an information system to authenticate users or resources.
SOURCE: CNSSI-4009

Incident – A violation or imminent threat of violation of computer security policies, acceptable use policies, or standard security practices.
SOURCE: SP 800-61

Intellectual Property – Creations of the mind such as musical, literary, and artistic works; inventions; and symbols, names, images, and designs used in commerce, including copyrights, trademarks, patents, and related rights. Under intellectual property law, the holder of one of these abstract “properties” has certain exclusive rights to the creative work, commercial symbol, or invention by which it is covered.
SOURCE: CNSSI-4009

Internet Protocol (IP) – Standard protocol for transmission of data from source to destinations in packet-switched communications networks and interconnected systems of such networks.
SOURCE: CNSSI-4009

Intranet – A private network that is employed within the confines of a given enterprise (e.g., internal to a business or agency).
SOURCE: CNSSI-4009

Intrusion – Unauthorized act of bypassing the security mechanisms of a system.
SOURCE: CNSSI-4009

Intrusion Detection Systems (IDS) – Hardware or software product that gathers and analyzes information from various areas within a computer or a network to identify possible security breaches, which include both intrusions (attacks from outside the organization) and misuse (attacks from within the organization).
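* **Note:** A minimal sketch of the signature-matching analysis an IDS performs (the log formats and attack patterns below are hypothetical, for illustration only):

```python
import re

# Hypothetical attack signatures: failed-login attempts and path traversal.
SIGNATURES = [
    re.compile(r"Failed password for (?:invalid user )?\S+"),
    re.compile(r"(?:\.\./){2,}"),
]

def scan(log_lines):
    """Return the log lines that match any known attack signature."""
    return [line for line in log_lines
            if any(sig.search(line) for sig in SIGNATURES)]

alerts = scan([
    "Accepted password for alice from 10.0.0.5",
    "Failed password for invalid user admin from 203.0.113.9",
    "GET /../../etc/passwd HTTP/1.1",
])
assert len(alerts) == 2  # the two malicious-looking lines are flagged
```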
SOURCE: CNSSI-4009

J

Jamming – An attack in which a device is used to emit electromagnetic energy on a wireless network’s frequency to make it unusable.
SOURCE: SP 800-48

K

Kerberos – A means of verifying the identities of principals on an open network. It accomplishes this without relying on the authentication, trustworthiness, or physical security of hosts while assuming all packets can be read, modified, and inserted at will. It uses a trust broker model and symmetric cryptography to provide authentication and authorization of users and systems on the network.
SOURCE: SP 800-95

Key – A value used to control cryptographic operations, such as decryption, encryption, signature generation, or signature verification.
SOURCE: SP 800-63

Key Logger – A program designed to record which keys are pressed on a computer keyboard; used to obtain passwords or encryption keys and thus bypass other security measures.
SOURCE: SP 800-82

L

Least Privilege – The security objective of granting users only those accesses they need to perform their official duties.
SOURCE: SP 800-12

Level of Protection – Extent to which protective measures, techniques, and procedures must be applied to information systems and networks based on risk, threat, vulnerability, system interconnectivity considerations, and information assurance needs. Levels of protection are:
* 1. Basic: information systems and networks requiring implementation of standard minimum security countermeasures.
* 2. Medium: information systems and networks requiring layering of additional safeguards above the standard minimum security countermeasures.
* 3. High: information systems and networks requiring the most stringent protection and rigorous security countermeasures.
SOURCE: CNSSI-4009

Likelihood of Occurrence – In Information Assurance risk analysis, a weighted factor based on a subjective analysis of the probability that a given threat is capable of exploiting a given vulnerability.
SOURCE: CNSSI-4009

M

Malicious Code – Software or firmware intended to perform an unauthorized process that will have adverse impact on the confidentiality, integrity, or availability of an information system. A virus, worm, Trojan horse, or other code-based entity that infects a host. Spyware and some forms of adware are also examples of malicious code.
SOURCE: SP 800-53; CNSSI-4009

Malware – A program that is inserted into a system, usually covertly, with the intent of compromising the confidentiality, integrity, or availability of the victim’s data, applications, or operating system or of otherwise annoying or disrupting the victim.
SOURCE: SP 800-83

Man-in-the-middle Attack (MitM) – A form of active wiretapping attack in which the attacker intercepts and selectively modifies communicated data to masquerade as one or more of the entities involved in a communication association.
SOURCE: CNSSI-4009

Mandatory Access Control (MAC) – A means of restricting access to system resources based on the sensitivity (as represented by a label) of the information contained in the system resource and the formal authorization (i.e., clearance) of users to access information of such sensitivity.
SOURCE: SP 800-44

Mandatory Access Control – Access controls (which) are driven by the results of a comparison between the user’s trust level or clearance and the sensitivity designation of the information.
SOURCE: FIPS 191

Masquerading – When an unauthorized agent claims the identity of another agent, it is said to be masquerading.
SOURCE: SP 800-19

Multilevel Security (MLS) – A concept of processing information with different classifications and categories that simultaneously permits access by users with different security clearances and denies access to users who lack authorization.
SOURCE: CNSSI-4009

N

Needs Assessment (IT Security Awareness and Training) – A process that can be used to determine an organization’s awareness and training needs. The results of a needs assessment can provide justification to convince management to allocate adequate resources to meet the identified awareness and training needs.
SOURCE: SP 800-50

Network – Information system(s) implemented with a collection of interconnected components. Such components may include routers, hubs, cabling, telecommunications controllers, key distribution centers, and technical control devices.
SOURCE: SP 800-53; CNSSI-4009

Network Access – Access to an organizational information system by a user (or a process acting on behalf of a user) communicating through a network (e.g., local area network, wide area network, Internet).
SOURCE: SP 800-53; CNSSI-4009

Network Access Control (NAC) – A feature provided by some firewalls that allows access based on a user’s credentials and the results of health checks performed on the telework client device.
SOURCE: SP 800-41

Network Address Translation (NAT) – A routing technology used by many firewalls to hide internal system addresses from an external network through use of an addressing schema.
SOURCE: SP 800-41

O

Object Identifier – A specialized formatted number that is registered with an internationally recognized standards organization. The unique alphanumeric/numeric identifier registered under the ISO registration standard to reference a specific object or object class.
In the federal government PKI, they are used to uniquely identify each of the four policies and cryptographic algorithms supported.
SOURCE: SP 800-32

Open Storage – Any storage of classified national security information outside of approved containers. This includes classified information that is resident on information systems media and outside of an approved storage container, regardless of whether or not that media is in use (i.e., unattended operations).
SOURCE: CNSSI-4009

Operating System (OS) Fingerprinting – Analyzing characteristics of packets sent by a target, such as packet headers or listening ports, to identify the operating system in use on the target.
SOURCE: SP 800-115

Operations Code – Code composed largely of words and phrases suitable for general communications use.
SOURCE: CNSSI-4009

Organization – A federal agency, or, as appropriate, any of its operational elements.
SOURCE: FIPS 200

Overwrite Procedure – A software process that replaces data previously stored on storage media with a predetermined set of meaningless data or random patterns.
SOURCE: CNSSI-4009

P

Packet Filter – A routing device that provides access control functionality for host addresses and communication sessions.
SOURCE: SP 800-41

Packet Sniffer – Software that observes and records network traffic.
SOURCE: CNSSI-4009

Password – A protected character string used to authenticate the identity of a computer system user or to authorize access to system resources.
SOURCE: FIPS 181

Password Cracking – The process of recovering secret passwords stored in a computer system or transmitted over a network.
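* **Note:** A toy dictionary attack against an unsalted SHA-256 password hash (illustrative only; it also shows why salted, deliberately slow password hashes such as PBKDF2 or bcrypt are preferred over fast unsalted digests):

```python
import hashlib

def crack(target_hex, wordlist):
    """Try each candidate word until its SHA-256 digest matches the target."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
            return candidate
    return None

# A hypothetical stolen hash of the password "letmein".
stolen = hashlib.sha256(b"letmein").hexdigest()
assert crack(stolen, ["123456", "password", "letmein"]) == "letmein"
```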
SOURCE: SP 800-115

Password Protected – The ability to protect a file using a password access control, protecting the data contents from being viewed with the appropriate viewer unless the proper password is entered.
SOURCE: SP 800-72

Patch – An update to an operating system, application, or other software issued specifically to correct particular problems with the software.
SOURCE: SP 800-123

Payload – The input data to the CCM generation-encryption process that is both authenticated and encrypted.
SOURCE: SP 800-38C

Penetration Testing – A test methodology in which assessors, using all available documentation (e.g., system design, source code, manuals) and working under specific constraints, attempt to circumvent the security features of an information system.
SOURCE: SP 800-53A

Personal Identification Number (PIN) – A secret that a claimant memorizes and uses to authenticate his or her identity. PINs are generally only decimal digits.
SOURCE: FIPS 201

Phishing – A digital form of social engineering that uses authentic-looking but bogus emails to request information from users or direct them to a fake Web site that requests information.
SOURCE: SP 800-115

Plaintext – Data input to the Cipher or output from the Inverse Cipher.
SOURCE: FIPS 197

Policy Mapping – Recognizing that, when a CA in one domain certifies a CA in another domain, a particular certificate policy in the second domain may be considered by the authority of the first domain to be equivalent (but not necessarily identical in all respects) to a particular certificate policy in the first domain.
SOURCE: SP 800-15

Port – A physical entry or exit point of a cryptographic module that provides access to the module for physical signals, represented by logical information flows (physically separated ports do not share the same physical pin or wire).
SOURCE: FIPS 140-2

Port Scanning – Using a program to remotely determine which ports on a system are open (e.g., whether systems allow connections through those ports).
SOURCE: CNSSI-4009

Portal – A high-level remote access architecture that is based on a server that offers teleworkers access to one or more applications through a single centralized interface.
SOURCE: SP 800-46

Privilege – A right granted to an individual, a program, or a process.
SOURCE: CNSSI-4009

Privileged Accounts – Individuals who have access to set “access rights” for users on a given system. Sometimes referred to as system or network administrative accounts.
SOURCE: SP 800-12

Probe – A technique that attempts to access a system to learn something about the system.
SOURCE: CNSSI-4009

Profiling – Measuring the characteristics of expected activity so that changes to it can be more easily identified.
SOURCE: SP 800-61; CNSSI-4009

Protocol – Set of rules and formats, semantic and syntactic, permitting information systems to exchange information.
SOURCE: CNSSI-4009

Protocol Data Unit – A unit of data specified in a protocol and consisting of protocol information and, possibly, user data.
SOURCE: FIPS 188

Protocol Entity – Entity that follows a set of rules and formats (semantic and syntactic) that determines the communication behavior of other entities.
SOURCE: FIPS 188

Proxy – A proxy is an application that “breaks” the connection between client and server.
The proxy accepts certain types of traffic entering or leaving a network, processes it, and forwards it. This effectively closes the straight path between the internal and external networks, making it more difficult for an attacker to obtain internal addresses and other details of the organization’s internal network. Proxy servers are available for common Internet services; for example, a Hyper Text Transfer Protocol (HTTP) proxy used for Web access, and a Simple Mail Transfer Protocol (SMTP) proxy used for email.
SOURCE: SP 800-44

Proxy Server – A server that services the requests of its clients by forwarding those requests to other servers.
SOURCE: CNSSI-4009

Public Domain Software – Software not protected by copyright laws of any nation that may be freely used without permission of, or payment to, the creator, and that carries no warranties from, or liabilities to, the creator.
SOURCE: CNSSI-4009

Public Key – A cryptographic key used with a public key cryptographic algorithm, uniquely associated with an entity, and which may be made public; it is used to verify a digital signature; this key is mathematically linked with a corresponding private key.
SOURCE: FIPS 196

Q

Qualitative Assessment – Use of a set of methods, principles, or rules for assessing risk based on nonnumeric categories or levels.
SOURCE: SP 800-30

Quality of Service – The measurable end-to-end performance properties of a network service, which can be guaranteed in advance by a Service-Level Agreement between a user and a service provider, so as to satisfy specific customer application requirements. Note: These properties may include throughput (bandwidth), transit delay (latency), error rates, priority, security, packet loss, packet jitter, etc.
SOURCE: CNSSI-4009

Quantitative Assessment – Use of a set of methods, principles, or rules for assessing risks based on the use of numbers where the meanings and proportionality of values are maintained inside and outside the context of the assessment.
SOURCE: SP 800-30

Quarantine – Store files containing malware in isolation for future disinfection or examination.
SOURCE: SP 800-69

R

Radio Frequency Identification (RFID) – A form of automatic identification and data capture (AIDC) that uses electric or magnetic fields at radio frequencies to transmit information.
SOURCE: SP 800-98

Read – Fundamental operation in an information system that results only in the flow of information from an object to a subject.
SOURCE: CNSSI-4009

Read Access – Permission to read information in an information system.
SOURCE: CNSSI-4009

Real-Time Reaction – Immediate response to a penetration attempt that is detected and diagnosed in time to prevent access.
SOURCE: CNSSI-4009

Red Team – A group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The Red Team’s objective is to improve enterprise Information Assurance by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment.
SOURCE: CNSSI-4009

Red Team Exercise – An exercise, reflecting real-world conditions, that is conducted as a simulated adversarial attempt to compromise organizational missions and/or business processes to provide a comprehensive assessment of the security capability of the information system and organization.
SOURCE: SP 800-53

Remote Access – Access to an organizational information system by a user (or an information system acting on behalf of a user) communicating through an external network (e.g., the Internet).
SOURCE: SP 800-53

Repository – A database containing information and data relating to certificates as specified in a CP; may also be referred to as a directory.
SOURCE: SP 800-32

Risk Assessment – The process of identifying risks to organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations, and the Nation, arising through the operation of an information system. Part of risk management, it incorporates threat and vulnerability analyses and considers mitigations provided by security controls planned or in place. Synonymous with risk analysis.
SOURCE: SP 800-53; SP 800-53A; SP 800-37

Risk Assessment Methodology – A risk assessment process, together with a risk model, assessment approach, and analysis approach.
SOURCE: SP 800-30

Risk Assessment Report – The report which contains the results of performing a risk assessment or the formal output from the process of assessing risk.
SOURCE: SP 800-30

Root Certification Authority – In a hierarchical Public Key Infrastructure, the Certification Authority whose public key serves as the most trusted datum (i.e., the beginning of trust paths) for a security domain.
SOURCE: SP 800-32; CNSSI-4009

Rootkit – A set of tools used by an attacker after gaining root-level access to a host to conceal the attacker’s activities on the host and permit the attacker to maintain root-level access to the host through covert means.
SOURCE: CNSSI-4009

S

Safeguards – Protective measures prescribed to meet the security requirements (i.e., confidentiality, integrity, and availability) specified for an information system.
Safeguards may include security features, management constraints, personnel security, and security of physical structures, areas, and devices. Synonymous with security controls and countermeasures.
SOURCE: SP 800-53; SP 800-37; FIPS 200; CNSSI-4009

Sandboxing – A restricted, controlled execution environment that prevents potentially malicious software, such as mobile code, from accessing any system resources except those for which the software is authorized.
SOURCE: CNSSI-4009

Scanning – Sending packets or requests to another system to gain information to be used in a subsequent attack.
SOURCE: CNSSI-4009

Secure Socket Layer (SSL) – A protocol used for protecting private information during transmission via the Internet.
* **Note:** SSL works by using a public key to encrypt data that's transferred over the SSL connection. Most Web browsers support SSL, and many Web sites use the protocol to obtain confidential user information, such as credit card numbers. By convention, URLs that require an SSL connection start with “https:” instead of “http:”.
SOURCE: CNSSI-4009

Security Content Automation Protocol (SCAP) – A method for using specific standardized testing methods to enable automated vulnerability management, measurement, and policy compliance evaluation against a standardized set of security requirements.
SOURCE: CNSSI-4009

Signature – A recognizable, distinguishing pattern associated with an attack, such as a binary string in a virus or a particular set of keystrokes used to gain unauthorized access to a system.
SOURCE: SP 800-61

Signature Certificate – A public key certificate that contains a public key intended for verifying digital signatures rather than encrypting data or performing any other cryptographic functions.
SOURCE: SP 800-32; CNSSI-4009

Smart Card – A credit card-sized card with embedded integrated circuits that can store, process, and communicate information.
SOURCE: CNSSI-4009

Social Engineering – An attempt to trick someone into revealing information (e.g., a password) that can be used to attack systems or networks.
SOURCE: SP 800-61

Spam – Electronic junk mail or the abuse of electronic messaging systems to indiscriminately send unsolicited bulk messages.
SOURCE: CNSSI-4009

Spoofing – “IP spoofing” refers to sending a network packet that appears to come from a source other than its actual source.
SOURCE: SP 800-48

Spyware – Software that is secretly or surreptitiously installed into an information system to gather information on individuals or organizations without their knowledge; a type of malicious code.
SOURCE: SP 800-53; CNSSI-4009

Steganography – The art and science of communicating in a way that hides the existence of the communication. For example, a child pornography image can be hidden inside another graphic image file, audio file, or other file format.
SOURCE: SP 800-72; SP 800-101

Supply Chain Attack – Attacks that allow the adversary to utilize implants or other vulnerabilities inserted prior to installation in order to infiltrate data, or manipulate information technology hardware, software, operating systems, peripherals (information technology products) or services at any point during the life cycle.
SOURCE: CNSSI-4009

System Development Life Cycle (SDLC) – The scope of activities associated with a system, encompassing the system’s initiation, development and acquisition, implementation, operation and maintenance, and ultimately its disposal that instigates another system initiation.
SOURCE: SP 800-34; CNSSI-4009

System Development Methodologies – Methodologies developed through software engineering to manage the complexity of system development. Development methodologies include software engineering aids and high-level design analysis tools.
SOURCE: CNSSI-4009

System Integrity – The quality that a system has when it performs its intended function in an unimpaired manner, free from unauthorized manipulation of the system, whether intentional or accidental.
SOURCE: SP 800-27

T

Tailoring – The process by which a security control baseline is modified based on: (i) the application of scoping guidance; (ii) the specification of compensating security controls, if needed; and (iii) the specification of organization-defined parameters in the security controls via explicit assignment and selection statements.
SOURCE: SP 800-37; SP 800-53; SP 800-53A; CNSSI-4009

Tampering – An intentional event resulting in modification of a system, its intended behavior, or data.
SOURCE: CNSSI-4009

Telecommunications – Preparation, transmission, communication, or related processing of information (writing, images, sounds, or other data) by electrical, electromagnetic, electromechanical, electro-optical, or electronic means.
SOURCE: CNSSI-4009

Threat – Any circumstance or event with the potential to adversely impact organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations, or the Nation through an information system via unauthorized access, destruction, disclosure, modification of information, and/or denial of service.
SOURCE: SP 800-53; SP 800-53A; SP 800-27; SP 800-60; SP 800-37; CNSSI-4009

Threat Analysis – The examination of threat sources against system vulnerabilities to determine the threats for a particular system in a particular operational environment.
SOURCE: SP 800-27

Threat Assessment – Formal description and evaluation of threat to an information system.
SOURCE: SP 800-53; SP 800-18

Threat Monitoring – Analysis, assessment, and review of audit trails and other information collected for the purpose of searching out system events that may constitute violations of system security.
SOURCE: CNSSI-4009

Token – Something that the Claimant possesses and controls (typically a key or password) that is used to authenticate the Claimant’s identity.
SOURCE: SP 800-63

Tracking Cookie – A cookie placed on a user’s computer to track the user’s activity on different Web sites, creating a detailed profile of the user’s behavior.
SOURCE: SP 800-83

Traffic Analysis – A form of passive attack in which an intruder observes information about calls (although not necessarily the contents of the messages) and makes inferences, e.g., from the source and destination numbers, or frequency and length of the messages.
SOURCE: SP 800-24

Trojan Horse – A computer program that appears to have a useful function, but also has a hidden and potentially malicious function that evades security mechanisms, sometimes by exploiting legitimate authorizations of a system entity that invokes the program.
SOURCE: CNSSI-4009

U

Unauthorized Access – Occurs when a user, legitimate or unauthorized, accesses a resource that the user is not permitted to use.
SOURCE: FIPS 191

Unauthorized Disclosure – An event involving the exposure of information to entities not authorized access to the information.
620 | SOURCE: SP 800-57 Part 1; CNSSI-4009 621 | 622 | User – Individual or (system) process authorized to access an information system. 623 | SOURCE: FIPS 200 624 | 625 | User Initialization – A function in the life cycle of keying material; the process whereby a user initializes its cryptographic application (e.g., installing and initializing software and hardware). 626 | SOURCE: SP 800-57 Part 1 627 | 628 | V 629 | 630 | Validation – The process of demonstrating that the system under consideration meets in all respects the specification of that system. 631 | SOURCE: FIPS 201 632 | 633 | Verification – Confirmation, through the provision of objective evidence, that specified requirements have been fulfilled (e.g., an entity’s requirements have been correctly defined, or an entity’s attributes have been correctly presented; or a procedure or function performs as intended and leads to the expected outcome). 634 | SOURCE: CNSSI-4009 635 | 636 | Virtual Machine (VM) – Software that allows a single host to run one or more guest operating systems. 637 | SOURCE: SP 800-115 638 | 639 | Virtual Private Network (VPN) – A virtual network, built on top of existing physical networks, that provides a secure communications tunnel for data and other information transmitted between networks. 640 | SOURCE: SP 800-46 641 | 642 | Virus – A computer program that can copy itself and infect a computer without permission or knowledge of the user. A virus might corrupt or delete data on a computer, use email programs to spread itself to other computers, or even erase everything on a hard disk. 644 | SOURCE: CNSSI-4009 645 | 646 | Vulnerability – Weakness in an information system, system security procedures, internal controls, or implementation that could be exploited or triggered by a threat source. 
647 | SOURCE: SP 800-53; SP 800-53A; SP 800-37; SP 800-60; SP 800-115; FIPS 200 648 | 649 | Vulnerability Assessment – Formal description and evaluation of the vulnerabilities in an information system. 650 | SOURCE: SP 800-53; SP 800-37 651 | 652 | W 653 | 654 | Web Content Filtering Software – A program that prevents access to undesirable Web sites, typically by comparing a requested Web site address to a list of known bad Web sites. 655 | SOURCE: SP 800-69 656 | 657 | Web Risk Assessment – Processes for ensuring Web sites are in compliance with applicable policies. 658 | SOURCE: CNSSI-4009 659 | 660 | Whitelist – A list of discrete entities, such as hosts or applications, that are known to be benign and are approved for use within an organization and/or information system. 661 | SOURCE: SP 800-128 662 | 663 | Wi-Fi Protected Access-2 (WPA2) – The approved Wi-Fi Alliance interoperable implementation of the IEEE 802.11i security standard. For federal government use, the implementation must use FIPS-approved encryption, such as AES. 664 | SOURCE: CNSSI-4009 665 | 666 | Wireless Local Area Network (WLAN) – A group of wireless networking devices within a limited geographic area, such as an office building, that exchange data through radio communications. The security of each WLAN is heavily dependent on how well each WLAN component—including client devices, APs, and wireless switches—is secured throughout the WLAN lifecycle, from initial WLAN design and deployment through ongoing maintenance and monitoring. 667 | SOURCE: SP 800-153 668 | 669 | Write – Fundamental operation in an information system that results only in the flow of information from a subject to an object. See Access Type. 670 | SOURCE: CNSSI-4009 671 | 672 | Write Access – Permission to write to an object in an information system. 673 | SOURCE: CNSSI-4009 674 | 675 | Z 676 | 677 | Zeroize – To remove or eliminate the key from a cryptographic equipment or fill device. 
678 | SOURCE: CNSSI-4009 679 | 680 | Zombie – A program that is installed on a system to cause it to attack other systems. 681 | SOURCE: SP 800-83 682 | --------------------------------------------------------------------------------
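As a concrete illustration of the Zeroize entry above, here is a minimal Go sketch that overwrites key material in place. This is purely illustrative (the function name `zeroize` is ours, not from any library): real cryptographic modules zeroize keys per FIPS 140 requirements and must also guard against the compiler optimizing the writes away.

```go
package main

import "fmt"

// zeroize overwrites key material in place so the key cannot later be
// recovered from memory. Illustrative only: production code must also
// ensure the compiler does not elide the writes (e.g., via
// platform-specific secure-wipe primitives).
func zeroize(key []byte) {
	for i := range key {
		key[i] = 0
	}
}

func main() {
	key := []byte{0xde, 0xad, 0xbe, 0xef}
	zeroize(key)
	fmt.Println(key) // prints [0 0 0 0]
}
```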