├── .github
│   └── PULL_REQUEST_TEMPLATE.md
├── doc_source
│   ├── security_iam_troubleshoot.md
│   ├── eks-partner-amis.md
│   ├── deep-learning-containers.md
│   ├── roadmap.md
│   ├── eks-integrations.md
│   ├── storage.md
│   ├── eks-optimized-amis.md
│   ├── gs-app-mesh.md
│   ├── using-service-linked-roles.md
│   ├── eks-distro.md
│   ├── getting-started.md
│   ├── managing-auth.md
│   ├── update-workers.md
│   ├── manage-secrets.md
│   ├── clusters.md
│   ├── eks-workloads.md
│   ├── metrics-server.md
│   ├── default-roles-users.md
│   ├── logging-monitoring.md
│   ├── delete-managed-node-group.md
│   ├── disaster-recovery-resiliency.md
│   ├── alternate-cni-plugins.md
│   ├── iam-roles-for-service-accounts-minimum-sdk.md
│   ├── eks-ami-build-scripts.md
│   ├── managed-node-update-behavior.md
│   ├── retrieve-ami-id-bottlerocket.md
│   ├── node-taints-managed-node-groups.md
│   ├── eks-managing.md
│   ├── retrieve-ami-id.md
│   ├── local-zones.md
│   ├── creating-resources-with-cloudformation.md
│   ├── retrieve-windows-ami-id.md
│   ├── eks-custom-ami-windows.md
│   ├── configuration-vulnerability-analysis.md
│   ├── pod-multiple-network-interfaces.md
│   ├── compliance.md
│   ├── helm.md
│   ├── add-ons-images.md
│   ├── api-server-flags.md
│   ├── iam-roles-for-service-accounts.md
│   ├── worker.md
│   ├── enable-iam-roles-for-service-accounts.md
│   ├── infrastructure-security.md
│   ├── monitoring-fargate-usage.md
│   ├── eks-on-outposts.md
│   ├── security.md
│   ├── external-snat.md
│   ├── storage-classes.md
│   ├── best-practices-security.md
│   ├── specify-service-account-role.md
│   ├── fargate-pod-configuration.md
│   ├── eks-add-ons.md
│   ├── service-quotas.md
│   ├── pod-execution-role.md
│   ├── eksctl.md
│   ├── restrict-service-external-ip.md
│   ├── add-ons-configuration.md
│   ├── what-is-eks.md
│   ├── fargate.md
│   ├── service_IAM_role.md
│   ├── prometheus.md
│   ├── pod-networking.md
│   ├── launch-node-bottlerocket.md
│   ├── cni-metrics-helper.md
│   ├── using-service-linked-roles-eks.md
│   ├── create-node-role.md
│   ├── network_reqs.md
│   ├── eks-networking.md
│   ├── view-nodes.md
│   └── using-service-linked-roles-eks-nodegroups.md
├── CODE_OF_CONDUCT.md
├── LICENSE-SUMMARY
├── README.md
├── LICENSE-SAMPLECODE
└── CONTRIBUTING.md
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | *Issue #, if available:*
2 |
3 | *Description of changes:*
4 |
5 |
6 | By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
7 |
--------------------------------------------------------------------------------
/doc_source/security_iam_troubleshoot.md:
--------------------------------------------------------------------------------
1 | # Troubleshooting Amazon EKS identity and access
2 |
3 | To diagnose and fix common issues that you might encounter when working with Amazon EKS and IAM, see [Troubleshooting IAM](troubleshooting_iam.md)\.
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
5 |
--------------------------------------------------------------------------------
/LICENSE-SUMMARY:
--------------------------------------------------------------------------------
1 | Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 |
3 | The documentation is made available under the Creative Commons Attribution-ShareAlike 4.0 International License. See the LICENSE file.
4 |
5 | The sample code within this documentation is made available under a modified MIT license. See the LICENSE-SAMPLECODE file.
6 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## Amazon EKS User Guide
2 |
3 | The open source version of the Amazon EKS user guide. You can submit feedback & requests for changes by submitting issues in this repo or by making proposed changes & submitting a pull request.
4 |
5 | ## License Summary
6 |
7 | The documentation is made available under the Creative Commons Attribution-ShareAlike 4.0 International License. See the LICENSE file.
8 |
9 | The sample code within this documentation is made available under a modified MIT license. See the LICENSE-SAMPLECODE file.
10 |
--------------------------------------------------------------------------------
/doc_source/eks-partner-amis.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS optimized Ubuntu Linux AMIs
2 |
5 | Canonical has partnered with Amazon EKS to create node AMIs that you can use in your clusters\.
6 |
7 | [Canonical](https://www.canonical.com/) delivers a built\-for\-purpose Kubernetes Node OS image\. This minimized Ubuntu image is optimized for Amazon EKS and includes the custom AWS kernel that is jointly developed with AWS\. For more information, see [Ubuntu and Amazon Elastic Kubernetes Service](https://cloud-images.ubuntu.com/aws-eks/) and [Optimized support for Amazon EKS on Ubuntu 18\.04](http://aws.amazon.com/blogs/opensource/optimized-support-amazon-eks-ubuntu-1804/)\.
--------------------------------------------------------------------------------
/doc_source/deep-learning-containers.md:
--------------------------------------------------------------------------------
1 | # Deep Learning Containers
2 |
3 | AWS Deep Learning Containers are a set of Docker images for training and serving models in TensorFlow on Amazon EKS and Amazon Elastic Container Service \(Amazon ECS\)\. Deep Learning Containers provide optimized environments with TensorFlow, Nvidia CUDA \(for GPU instances\), and Intel MKL \(for CPU instances\) libraries and are available in Amazon ECR\.
4 |
5 | To get started using AWS Deep Learning Containers on Amazon EKS, see [AWS Deep Learning Containers on Amazon EKS](https://docs.aws.amazon.com/dlami/latest/devguide/deep-learning-containers-eks.html) in the *AWS Deep Learning AMI Developer Guide*\.
--------------------------------------------------------------------------------
/doc_source/roadmap.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS new features and roadmap
2 |
3 | You can learn about new Amazon EKS features by scrolling to the What's New feed on the [What's New with AWS](http://aws.amazon.com/new/?whats-new-content-all.sort-by=item.additionalFields.postDateTime&whats-new-content-all.sort-order=desc&awsf.whats-new-compute=*all&awsf.whats-new-containers=general-products%23amazon-eks) page\. You can also review the [roadmap](https://github.com/aws/containers-roadmap/projects/1?card_filter_query=eks) on GitHub, which lets you know about upcoming features and priorities so that you can plan how you want to use Amazon EKS in the future\. You can provide direct feedback to us about the roadmap priorities\.
--------------------------------------------------------------------------------
/doc_source/eks-integrations.md:
--------------------------------------------------------------------------------
1 | # AWS services integrated with Amazon EKS
2 |
3 | Amazon EKS works with other AWS services to provide additional solutions for your business challenges\. This topic identifies services that either use Amazon EKS to add functionality or that Amazon EKS uses to perform tasks\.
4 |
5 | **Topics**
6 | + [Creating Amazon EKS resources with AWS CloudFormation](creating-resources-with-cloudformation.md)
7 | + [Logging Amazon EKS API calls with AWS CloudTrail](logging-using-cloudtrail.md)
8 | + [Amazon EKS on AWS Outposts](eks-on-outposts.md)
9 | + [Use AWS App Mesh with Kubernetes](gs-app-mesh.md)
10 | + [Amazon EKS on AWS Local Zones](local-zones.md)
11 | + [Deep Learning Containers](deep-learning-containers.md)
--------------------------------------------------------------------------------
/doc_source/storage.md:
--------------------------------------------------------------------------------
1 | # Storage
2 |
3 | This chapter covers storage options for Amazon EKS clusters\.
4 |
5 | The [Storage classes](storage-classes.md) topic uses the in\-tree Amazon EBS storage provisioner\.
6 |
7 | **Note**
8 | The existing [in\-tree Amazon EBS plugin](https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore) is still supported, but by using a CSI driver, you benefit from the decoupling of the Kubernetes upstream release cycle from the CSI driver release cycle\. Eventually, the in\-tree plugin will be discontinued in favor of the CSI driver\.
9 |
10 | **Topics**
11 | + [Storage classes](storage-classes.md)
12 | + [Amazon EBS CSI driver](ebs-csi.md)
13 | + [Amazon EFS CSI driver](efs-csi.md)
14 | + [Amazon FSx for Lustre CSI driver](fsx-csi.md)
--------------------------------------------------------------------------------
/doc_source/eks-optimized-amis.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS optimized AMIs
2 |
3 | You can deploy nodes with pre\-built Amazon EKS optimized [Amazon Machine Images](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) \(AMIs\), or your own custom AMIs\. For more information about each type of Amazon EKS optimized AMI, see one of the following topics\. For more information about creating your own custom AMI, see [Amazon EKS optimized Amazon Linux AMI build script](eks-ami-build-scripts.md)\.
4 |
5 | **Topics**
6 | + [Amazon EKS optimized Amazon Linux AMIs](eks-optimized-ami.md)
7 | + [Amazon EKS optimized Ubuntu Linux AMIs](eks-partner-amis.md)
8 | + [Amazon EKS optimized Bottlerocket AMIs](eks-optimized-ami-bottlerocket.md)
9 | + [Amazon EKS optimized Windows AMIs](eks-optimized-windows-ami.md)
--------------------------------------------------------------------------------
/doc_source/gs-app-mesh.md:
--------------------------------------------------------------------------------
1 | # Use AWS App Mesh with Kubernetes
2 |
5 | AWS App Mesh \(App Mesh\) is a service mesh that makes it easy to monitor and control services\. App Mesh standardizes how your services communicate, giving you end\-to\-end visibility and helping to ensure high availability for your applications\. App Mesh gives you consistent visibility and network traffic controls for every service in an application\. You can get started using App Mesh with Kubernetes by completing the [Getting started with AWS App Mesh and Kubernetes](https://docs.aws.amazon.com/app-mesh/latest/userguide/getting-started-kubernetes.html) tutorial in the AWS App Mesh User Guide\. The tutorial recommends that you have existing services deployed to Kubernetes that you want to use App Mesh with\.
--------------------------------------------------------------------------------
/doc_source/using-service-linked-roles.md:
--------------------------------------------------------------------------------
1 | # Using service\-linked roles for Amazon EKS
2 |
3 | Amazon Elastic Kubernetes Service uses AWS Identity and Access Management \(IAM\) [service\-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role)\. A service\-linked role is a unique type of IAM role that is linked directly to Amazon EKS\. Service\-linked roles are predefined by Amazon EKS and include all the permissions that the service requires to call other AWS services on your behalf\.
4 |
5 | **Topics**
6 | + [Using roles for Amazon EKS clusters](using-service-linked-roles-eks.md)
7 | + [Using roles for Amazon EKS node groups](using-service-linked-roles-eks-nodegroups.md)
8 | + [Using roles for Amazon EKS Fargate profiles](using-service-linked-roles-eks-fargate.md)
--------------------------------------------------------------------------------
/LICENSE-SAMPLECODE:
--------------------------------------------------------------------------------
1 | Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this
4 | software and associated documentation files (the "Software"), to deal in the Software
5 | without restriction, including without limitation the rights to use, copy, modify,
6 | merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
7 | permit persons to whom the Software is furnished to do so.
8 |
9 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
10 | INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
11 | PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
12 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
13 | OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
14 | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
15 |
--------------------------------------------------------------------------------
/doc_source/eks-distro.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS Distro
2 |
3 | Amazon EKS Distro is a distribution of the same open source Kubernetes software and dependencies deployed by Amazon EKS in the cloud\. With Amazon EKS Distro, you can create reliable and secure clusters wherever your applications are deployed\. You can rely on the same versions of Kubernetes, `etcd`, CoreDNS, and the upstream CNI and CSI sidecars that Amazon EKS deploys, with the latest updates and extended security patching support\. Amazon EKS Distro follows the same Kubernetes version release cycle as Amazon EKS and is provided as an open\-source project\.
4 |
5 | **Note**
6 | The source code for the Amazon EKS Distro is available on [GitHub](https://github.com/aws/eks-distro)\. The latest documentation is available on the Amazon EKS Distro [website](https://distro.eks.amazonaws.com/)\. If you find any issues, you can report them by connecting with us on [GitHub](https://github.com/aws/eks-distro), where you can open issues, provide feedback, and report bugs\.
--------------------------------------------------------------------------------
/doc_source/getting-started.md:
--------------------------------------------------------------------------------
1 | # Getting started with Amazon EKS
2 |
3 | There are two getting started guides available for creating a new Kubernetes cluster with nodes in Amazon EKS:
4 | + [Getting started with Amazon EKS – `eksctl`](getting-started-eksctl.md) – This getting started guide helps you to install all of the required resources to get started with Amazon EKS using `eksctl`, a simple command line utility for creating and managing Kubernetes clusters on Amazon EKS\. At the end of the tutorial, you will have a running Amazon EKS cluster that you can deploy applications to\. This is the fastest and simplest way to get started with Amazon EKS\.
5 | + [Getting started with Amazon EKS – AWS Management Console and AWS CLI](getting-started-console.md) – This getting started guide helps you to create all of the required resources to get started with Amazon EKS using the AWS Management Console and AWS CLI\. At the end of the tutorial, you will have a running Amazon EKS cluster that you can deploy applications to\. In this guide, you manually create each resource required for an Amazon EKS cluster\. The procedures give you visibility into how each resource is created and how they interact with each other\.
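
For a concrete sense of the `eksctl` path, the following is a minimal sketch of creating a cluster with a default node group\. The cluster name, Region, and node count are illustrative placeholders, not requirements:

```
eksctl create cluster \
    --name my-cluster \
    --region us-west-2 \
    --nodes 2
```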
--------------------------------------------------------------------------------
/doc_source/managing-auth.md:
--------------------------------------------------------------------------------
1 | # Cluster authentication
2 |
3 | Amazon EKS uses IAM to provide authentication to your Kubernetes cluster \(through the `aws eks get-token` command, available in version 1\.16\.156 or later of the AWS CLI, or the [AWS IAM Authenticator for Kubernetes](https://github.com/kubernetes-sigs/aws-iam-authenticator)\), but it still relies on native Kubernetes [Role Based Access Control](https://kubernetes.io/docs/admin/authorization/rbac/) \(RBAC\) for authorization\. This means that IAM is only used for authentication of valid IAM entities\. All permissions for interacting with your Amazon EKS cluster’s Kubernetes API are managed through the native Kubernetes RBAC system\.
4 |
5 | ![\[Amazon EKS and IAM integration\]](http://docs.aws.amazon.com/eks/latest/userguide/images/eks-iam.png)
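
For example, you can generate and inspect the token that the AWS CLI presents to your cluster\. This is a sketch; `my-cluster` is a placeholder for your cluster name:

```
aws eks get-token --cluster-name my-cluster
```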
6 |
7 | **Topics**
8 | + [Managing users or IAM roles for your cluster](add-user-role.md)
9 | + [Authenticating users for your cluster from an OpenID Connect identity provider](authenticate-oidc-identity-provider.md)
10 | + [Create a `kubeconfig` for Amazon EKS](create-kubeconfig.md)
11 | + [Installing `aws-iam-authenticator`](install-aws-iam-authenticator.md)
12 | + [Default Amazon EKS Kubernetes roles and users](default-roles-users.md)
--------------------------------------------------------------------------------
/doc_source/update-workers.md:
--------------------------------------------------------------------------------
1 | # Self\-managed node updates
2 |
3 | When a new Amazon EKS optimized AMI is released, you should consider replacing the nodes in your self\-managed node group with the new AMI\. Likewise, if you have updated the Kubernetes version for your Amazon EKS cluster, you should also update your nodes to the same Kubernetes version\.
4 |
5 | **Important**
6 | This topic covers node updates for self\-managed nodes\. If you are using [Managed node groups](managed-node-groups.md), see [Updating a managed node group](update-managed-node-group.md)\.
7 |
8 | There are two basic ways to update self\-managed node groups in your clusters to use a new AMI:
9 | + [Migrating to a new node group](migrate-stack.md) – Create a new node group and migrate your pods to that group\. Migrating to a new node group is more graceful than simply updating the AMI ID in an existing AWS CloudFormation stack, because the migration process taints the old node group as `NoSchedule` and drains the nodes after a new stack is ready to accept the existing pod workload\.
10 | + [Updating an existing self\-managed node group](update-stack.md) – Update the AWS CloudFormation stack for an existing node group to use the new AMI\. This method is not supported for node groups that were created with `eksctl`\.
--------------------------------------------------------------------------------
/doc_source/manage-secrets.md:
--------------------------------------------------------------------------------
1 | # Using AWS Secrets Manager secrets with Kubernetes
2 |
3 | To show secrets from Secrets Manager and parameters from Parameter Store as files mounted in [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) pods, you can use the AWS Secrets and Configuration Provider \(ASCP\) for the [Kubernetes Secrets Store CSI Driver](https://secrets-store-csi-driver.sigs.k8s.io/)\. The ASCP works with Amazon Elastic Kubernetes Service \(Amazon EKS\) version 1\.17 or later\.
4 |
5 | With the ASCP, you can store and manage your secrets in Secrets Manager and then retrieve them through your workloads running on Amazon EKS\. You can use IAM roles and policies to limit access to your secrets to specific Kubernetes pods in a cluster\. The ASCP retrieves the pod identity and exchanges it for an IAM role\. The ASCP then assumes the IAM role of the pod, which allows it to retrieve secrets from Secrets Manager that are authorized for that role\.
6 |
7 | If you use Secrets Manager automatic rotation for your secrets, you can also use the Secrets Store CSI Driver rotation reconciler feature to ensure you are retrieving the latest secret from Secrets Manager\.
8 |
9 | For more information, see [Using Secrets Manager secrets in Amazon EKS](https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_csi_driver.html) in the AWS Secrets Manager User Guide\.
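
As a sketch of one common installation path, the Secrets Store CSI Driver can be installed with Helm and the ASCP with a manifest from the `aws/secrets-store-csi-driver-provider-aws` repository\. The chart repository and manifest locations below are taken from the upstream projects and may change:

```
helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts
helm install -n kube-system csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver
kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/deployment/aws-provider-installer.yaml
```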
--------------------------------------------------------------------------------
/doc_source/clusters.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS clusters
2 |
3 | An Amazon EKS cluster consists of two primary components:
4 | + The Amazon EKS control plane
5 | + Amazon EKS nodes that are registered with the control plane
6 |
7 | The Amazon EKS control plane consists of control plane nodes that run the Kubernetes software, such as `etcd` and the Kubernetes API server\. The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the Amazon EKS endpoint associated with your cluster\. Each Amazon EKS cluster control plane is single\-tenant and unique, and runs on its own set of Amazon EC2 instances\.
8 |
9 | All of the data stored by the `etcd` nodes and associated Amazon EBS volumes is encrypted using AWS KMS\. The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic Load Balancing Network Load Balancer\. Amazon EKS also provisions elastic network interfaces in your VPC subnets to provide connectivity from the control plane instances to the nodes \(for example, to support `kubectl exec`, `logs`, and `proxy` data flows\)\.
10 |
11 | Amazon EKS nodes run in your AWS account and connect to your cluster's control plane via the API server endpoint and a certificate file that is created for your cluster\.
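
For example, you can view the endpoint and certificate data that nodes use to connect\. This is a sketch; `my-cluster` is a placeholder for your cluster name:

```
aws eks describe-cluster \
    --name my-cluster \
    --query "cluster.[endpoint, certificateAuthority.data]" \
    --output text
```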
12 |
13 | **Note**
14 | You can find out how the different components of Amazon EKS work in [Amazon EKS networking](eks-networking.md)\.
--------------------------------------------------------------------------------
/doc_source/eks-workloads.md:
--------------------------------------------------------------------------------
1 | # Workloads
2 |
3 | Your workloads are deployed in containers, which are deployed in pods in Kubernetes\. A pod includes one or more containers\. Typically, one or more pods that provide the same service are deployed in a Kubernetes service\. Once you've deployed multiple pods that provide the same service, you can:
4 | + [View information about the workloads](view-workloads.md) running on each of your clusters using the AWS Management Console\.
5 | + Vertically scale pods up or down with the Kubernetes [Vertical Pod Autoscaler](vertical-pod-autoscaler.md)\.
6 | + Horizontally scale the number of pods needed to meet demand up or down with the Kubernetes [Horizontal Pod Autoscaler](horizontal-pod-autoscaler.md)\.
7 | + Create an external \(for internet\-accessible pods\) or an internal \(for private pods\) [network load balancer](network-load-balancing.md) to balance network traffic across pods\. The load balancer routes traffic at Layer 4 of the OSI model\.
8 | + Set up [Application load balancing on Amazon EKS](alb-ingress.md) to balance application traffic across pods\. The Application Load Balancer routes traffic at Layer 7 of the OSI model\.
9 | + If you're new to Kubernetes, see [Deploy a sample Linux workload](sample-deployment.md)\.
10 | + You can [restrict IP addresses that can be assigned to a service](restrict-service-external-ip.md) with `externalIPs`\.
--------------------------------------------------------------------------------
/doc_source/metrics-server.md:
--------------------------------------------------------------------------------
1 | # Installing the Kubernetes Metrics Server
2 |
3 | The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster, and it is not deployed by default in Amazon EKS clusters\. For more information, see [Kubernetes Metrics Server](https://github.com/kubernetes-sigs/metrics-server) on GitHub\. The Metrics Server is commonly used by other Kubernetes add\-ons, such as the [Horizontal Pod Autoscaler](horizontal-pod-autoscaler.md) or the [Kubernetes Dashboard](dashboard-tutorial.md)\. For more information, see [Resource metrics pipeline](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/) in the Kubernetes documentation\. This topic explains how to deploy the Kubernetes Metrics Server on your Amazon EKS cluster\.
4 |
5 | **Important**
6 | Don't use Metrics Server when you need an accurate source of resource usage metrics or as a monitoring solution\.
7 |
8 | **Deploy the Metrics Server**
9 |
10 | 1. Deploy the Metrics Server with the following command:
11 |
12 | ```
13 | kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
14 | ```
15 |
16 | 1. Verify that the `metrics-server` deployment is running the desired number of pods with the following command\.
17 |
18 | ```
19 | kubectl get deployment metrics-server -n kube-system
20 | ```
21 |
22 | Output
23 |
24 | ```
25 | NAME READY UP-TO-DATE AVAILABLE AGE
26 | metrics-server 1/1 1 1 6m
27 | ```
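
Once the deployment is available, you can confirm that metrics are being collected\. For example, the following command returns CPU and memory usage for each node \(it may take a minute or two after deployment before data is available\):

```
kubectl top nodes
```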
--------------------------------------------------------------------------------
/doc_source/default-roles-users.md:
--------------------------------------------------------------------------------
1 | # Default Amazon EKS Kubernetes roles and users
2 |
3 | Starting with version 1\.20, Amazon EKS creates identities for each of the Kubernetes components\. The Amazon EKS team created the following objects to provide RBAC permissions for the components:
4 |
5 | **Note**
6 | These roles and users only apply to 1\.20 clusters\.
7 |
8 | **Roles**
9 | + `eks:authenticator`
10 | + `eks:certificate-controller-approver`
11 | + `eks:certificate-controller`
12 | + `eks:cluster-event-watcher`
13 | + `eks:fargate-scheduler`
14 | + `eks:k8s-metrics`
15 | + `eks:nodewatcher`
16 | + `eks:pod-identity-mutating-webhook`
17 |
18 | **RoleBindings**
19 | + `eks:authenticator`
20 | + `eks:certificate-controller-approver`
21 | + `eks:certificate-controller`
22 | + `eks:cluster-event-watcher`
23 | + `eks:fargate-scheduler`
24 | + `eks:k8s-metrics`
25 | + `eks:nodewatcher`
26 | + `eks:pod-identity-mutating-webhook`
27 |
28 | **Users**
29 | + `eks:authenticator`
30 | + `eks:certificate-controller`
31 | + `eks:cluster-event-watcher`
32 | + `eks:fargate-scheduler`
33 | + `eks:k8s-metrics`
34 | + `eks:nodewatcher`
35 | + `eks:pod-identity-mutating-webhook`
36 |
37 | In addition to the objects above, Amazon EKS uses a special user identity `eks:cluster-bootstrap` for `kubectl` operations during cluster bootstrap\. Amazon EKS also uses a special user identity `eks:support-engineer` for cluster management operations\. All the user identities will appear in the `kube` audit logs available to customers through CloudWatch\.
38 |
39 | Run `kubectl describe clusterrole` with the name of a role to see its permissions\.
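
For example, the following sketch inspects one of the roles listed above, using the command form described in this topic:

```
kubectl describe clusterrole eks:k8s-metrics
```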
--------------------------------------------------------------------------------
/doc_source/logging-monitoring.md:
--------------------------------------------------------------------------------
1 | # Logging and monitoring in Amazon EKS
2 |
3 | Amazon EKS control plane logging provides audit and diagnostic logs directly from the Amazon EKS control plane to CloudWatch Logs in your account\. These logs make it easy for you to secure and run your clusters\. You can select the exact log types you need, and logs are sent as log streams to a group for each Amazon EKS cluster in CloudWatch\. For more information, see [Amazon EKS Control Plane Logging](control-plane-logs.md)\.
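
For example, individual log types can be enabled with a single AWS CLI call\. This is a sketch; the cluster name and the selection of log types are illustrative:

```
aws eks update-cluster-config \
    --name my-cluster \
    --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'
```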
4 |
5 | **Note**
6 | When you check the Amazon EKS authenticator logs in Amazon CloudWatch, you'll see entries that contain text similar to the following example text\.
7 |
8 | ```
9 | level=info msg="mapping IAM role" groups="[]" role="arn:aws:iam::<111122223333>:role/-NodeManagerRole-" username="eks:node-manager"
10 | ```
11 | Entries that contain this text are expected\. The `username` is an Amazon EKS internal service role that performs specific operations for managed node groups and Fargate\.
12 |
13 | Amazon EKS is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in Amazon EKS\. CloudTrail captures all API calls for Amazon EKS as events\. The calls captured include calls from the Amazon EKS console and code calls to the Amazon EKS API operations\. For more information, see [Logging Amazon EKS API calls with AWS CloudTrail](logging-using-cloudtrail.md)\.
14 |
15 | The Kubernetes API server exposes a number of metrics that are useful for monitoring and analysis\. For more information, see [Control plane metrics with Prometheus](prometheus.md)\.
--------------------------------------------------------------------------------
/doc_source/delete-managed-node-group.md:
--------------------------------------------------------------------------------
1 | # Deleting a managed node group
2 |
3 | This topic describes how you can delete an Amazon EKS managed node group\.
4 |
5 | When you delete a managed node group, Amazon EKS will first set the minimum, maximum, and desired size of your Auto Scaling group to zero, which will trigger a scale down of your node group\. Before each instance is terminated, Amazon EKS will send a signal to drain the pods from that node and wait a few minutes\. If the pods haven't drained after a few minutes, Amazon EKS will let Auto Scaling continue the termination of the instance\. Once all instances are terminated, the Auto Scaling group is deleted\.
6 |
7 | **Important**
8 | If you delete a managed node group that uses a node IAM role that isn't used by any other managed node group in the cluster, the role is removed from the [`aws-auth` ConfigMap](add-user-role.md)\. If any self\-managed node groups in the cluster are using the same node IAM role, the self\-managed nodes move to the `NotReady` status, and cluster operations are also disrupted\. You can add the mapping back to the ConfigMap to minimize disruption\.
9 |
10 | **To delete a managed node group**
11 |
12 | 1. Open the Amazon EKS console at [https://console\.aws\.amazon\.com/eks/home\#/clusters](https://console.aws.amazon.com/eks/home#/clusters)\.
13 |
14 | 1. Choose the cluster that contains the node group to delete\.
15 |
16 | 1. Select the **Configuration** tab\. On the **Compute** tab, select the node group to delete, and choose **Delete**\.
17 |
18 | 1. On the **Delete Node group** page, type the name of the node group in the text field and choose **Delete**\.
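
If you prefer the AWS CLI to the console, a managed node group can also be deleted with the following sketch, where the cluster and node group names are placeholders:

```
aws eks delete-nodegroup \
    --cluster-name my-cluster \
    --nodegroup-name my-nodegroup
```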
--------------------------------------------------------------------------------
/doc_source/disaster-recovery-resiliency.md:
--------------------------------------------------------------------------------
1 | # Resilience in Amazon EKS
2 |
3 | The AWS global infrastructure is built around AWS Regions and Availability Zones\. AWS Regions provide multiple physically separated and isolated Availability Zones, which are connected with low\-latency, high\-throughput, and highly redundant networking\. With Availability Zones, you can design and operate applications and databases that automatically fail over between Availability Zones without interruption\. Availability Zones are more highly available, fault tolerant, and scalable than traditional single or multiple data center infrastructures\.
4 |
5 | Amazon EKS runs and scales the Kubernetes control plane across multiple AWS Availability Zones to ensure high availability\. Amazon EKS automatically scales control plane instances based on load, detects and replaces unhealthy control plane instances, and it provides automated version updates and patching for them\.
6 |
7 | This control plane consists of at least two API server instances and three `etcd` instances that run across three Availability Zones within a Region\. Amazon EKS:
8 | + Actively monitors the load on control plane instances and automatically scales them to ensure high performance\.
9 | + Automatically detects and replaces unhealthy control plane instances, restarting them across the Availability Zones within the Region as needed\.
10 | + Leverages the architecture of AWS Regions to maintain high availability\. Because of this, Amazon EKS is able to offer an [SLA for API server endpoint availability](http://aws.amazon.com/eks/sla)\.
11 |
12 | For more information about AWS Regions and Availability Zones, see [AWS global infrastructure](http://aws.amazon.com/about-aws/global-infrastructure/)\.
--------------------------------------------------------------------------------
/doc_source/alternate-cni-plugins.md:
--------------------------------------------------------------------------------
1 | # Alternate compatible CNI plugins
2 |
3 | Amazon EKS only officially supports the [Amazon VPC CNI plugin](pod-networking.md)\. However, because Amazon EKS runs upstream Kubernetes and is certified Kubernetes conformant, alternate CNI plugins will work with Amazon EKS clusters\. If you plan to use an alternate CNI plugin in production, then we strongly recommend that you either obtain commercial support, or have the in\-house expertise to troubleshoot and contribute fixes to the open source CNI plugin project\.
4 |
5 | Amazon EKS maintains relationships with a network of partners that offer support for alternate compatible CNI plugins\. Refer to the following partners' documentation for details on supported Kubernetes versions, as well as the qualifications and testing performed\.
6 |
7 |
8 | | Partner | Product | Documentation |
9 | | --- | --- | --- |
10 | | Tigera | [Calico](https://www.tigera.io/partners/aws/) | [Installation instructions](https://docs.projectcalico.org/getting-started/kubernetes/managed-public-cloud/eks) |
11 | | Isovalent | [Cilium](https://cilium.io/contact-us-eks/) | [Installation instructions](https://docs.cilium.io/en/v1.9/gettingstarted/k8s-install-eks/) |
12 | | Weaveworks | [Weave Net](https://www.weave.works/contact/) | [Installation instructions](https://www.weave.works/docs/net/latest/kubernetes/kube-addon/#-installing-on-eks) |
13 | | VMware | [Antrea](https://antrea.io/) | [Installation instructions](https://antrea.io/docs/v0.11.1/eks-installation/) |
14 |
15 | Amazon EKS aims to give you a wide selection of options to cover all use cases\. If you develop a commercially supported Kubernetes CNI plugin that is not listed here, then please contact our partner team at [aws\-container\-partners@amazon\.com](mailto:aws-container-partners@amazon.com) for more information\.
--------------------------------------------------------------------------------
/doc_source/iam-roles-for-service-accounts-minimum-sdk.md:
--------------------------------------------------------------------------------
1 | # Using a supported AWS SDK
2 |
3 | The containers in your pods must use an AWS SDK version that supports assuming an IAM role via an OIDC web identity token file\. AWS SDKs that are included in Linux distribution package managers may not be new enough to support this feature\. Be sure to use at least the minimum SDK versions listed below:
4 | + Java \(Version 2\) — [2\.10\.11](https://github.com/aws/aws-sdk-java-v2/releases/tag/2.10.11)
5 | + Java — [1\.11\.704](https://github.com/aws/aws-sdk-java/releases/tag/1.11.704)
6 | + Go — [1\.23\.13](https://github.com/aws/aws-sdk-go/releases/tag/v1.23.13)
7 | + Python \(Boto3\) — [1\.9\.220](https://github.com/boto/boto3/releases/tag/1.9.220)
8 | + Python \(botocore\) — [1\.12\.200](https://github.com/boto/botocore/releases/tag/1.12.200)
9 | + AWS CLI — [1\.16\.232](https://github.com/aws/aws-cli/releases/tag/1.16.232)
10 | + Node — [2\.521\.0](https://github.com/aws/aws-sdk-js/releases/tag/v2.521.0)
11 | + Ruby — [2\.11\.345](https://github.com/aws/aws-sdk-ruby/releases/tag/v2.11.345)
12 | + C\+\+ — [1\.7\.174](https://github.com/aws/aws-sdk-cpp/releases/tag/1.7.174)
13 | + \.NET — [3\.3\.659\.1](https://github.com/aws/aws-sdk-net/releases/tag/3.3.659.1)
14 | + PHP — [3\.110\.7](https://github.com/aws/aws-sdk-php/releases/tag/3.110.7)
15 |
16 | Many popular Kubernetes add\-ons, such as the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) and the [AWS Load Balancer Controller](aws-load-balancer-controller.md), support IAM roles for service accounts\. The [Amazon VPC CNI plugin for Kubernetes](https://github.com/aws/amazon-vpc-cni-k8s) also supports IAM roles for service accounts\.
17 |
18 | To ensure that you are using a supported SDK, follow the installation instructions for your preferred SDK at [Tools for Amazon Web Services](https://aws.amazon.com/tools/) when you build your containers\.
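
As a quick sanity check against the minimums above, you can print the SDK versions that are actually present in a container image\. A sketch for the AWS CLI and the Python SDKs:

```
aws --version
python3 -c "import boto3, botocore; print(boto3.__version__, botocore.__version__)"
```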
--------------------------------------------------------------------------------
/doc_source/eks-ami-build-scripts.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS optimized Amazon Linux AMI build script
2 |
3 | Amazon Elastic Kubernetes Service \(Amazon EKS\) has open\-sourced the build scripts that are used to build the Amazon EKS optimized AMI\. These build scripts are now available [on GitHub](https://github.com/awslabs/amazon-eks-ami)\.
4 |
5 | The Amazon EKS optimized Amazon Linux AMI is built on top of Amazon Linux 2, specifically for use as a node in Amazon EKS clusters\. You can use this repository to view the specifics of how the Amazon EKS team configures `kubelet`, Docker, the AWS IAM Authenticator for Kubernetes, and more\.
6 |
7 | The build scripts repository includes a [HashiCorp Packer](https://www.packer.io/) template and build scripts to generate an AMI\. These scripts are the source of truth for Amazon EKS optimized AMI builds, so you can follow the GitHub repository to monitor changes to our AMIs\. For example, perhaps you want your own AMI to use the same version of Docker that the Amazon EKS team uses for the official AMI\.
8 |
9 | The GitHub repository also contains the specialized [bootstrap script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh) that runs at boot time to configure your instance's certificate data, control plane endpoint, cluster name, and more\.
10 |
11 | Additionally, the GitHub repository contains our Amazon EKS node AWS CloudFormation templates\. These templates make it easier to spin up an instance running the Amazon EKS optimized AMI and register it with a cluster\.
12 |
13 | For more information, see the repositories on GitHub at [https://github\.com/awslabs/amazon\-eks\-ami](https://github.com/awslabs/amazon-eks-ami)\.
14 |
15 | The Amazon EKS optimized Amazon Linux 2 AMI contains an optional bootstrap flag to enable the containerd runtime\. When this flag is used with the Amazon EKS optimized accelerated Amazon Linux AMIs for v1\.21, [AWS Inferentia](http://aws.amazon.com/machine-learning/inferentia/) workloads are not supported\.
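
For illustration, the bootstrap script described above is typically invoked from instance user data\. The following is a sketch with placeholder values for the cluster name, API server endpoint, and certificate data; the flag names come from `bootstrap.sh` in the repository:

```
#!/bin/bash
# Placeholder values; see bootstrap.sh in the repository for the full flag list.
/etc/eks/bootstrap.sh my-cluster \
    --apiserver-endpoint https://EXAMPLE0123456789.gr7.us-west-2.eks.amazonaws.com \
    --b64-cluster-ca "${CLUSTER_CA_BASE64}" \
    --container-runtime containerd
```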
--------------------------------------------------------------------------------
/doc_source/managed-node-update-behavior.md:
--------------------------------------------------------------------------------
1 | # Managed node update behavior
2 |
3 | When you update a managed node group version to the latest AMI release version for your node group's Kubernetes version or to a newer Kubernetes version to match your cluster, Amazon EKS automatically completes the following tasks:
4 |
5 | 1. Creates a new Amazon EC2 launch template version for the Auto Scaling group associated with your node group\. The new template uses the target AMI for the update\.
6 |
7 | 1. Updates the Auto Scaling group to use the latest launch template with the new AMI\.
8 |
9 | 1. Determines the maximum number of nodes to upgrade in parallel, using the `updateConfig` property configured for the node group\. The maximum number of unavailable nodes is capped at 100\.
10 |
11 | 1. Increments the Auto Scaling group maximum size and desired size by the larger of either:
12 |
13 |    \- Up to twice the number of Availability Zones in the Region that the Auto Scaling group is deployed in, or
14 |
15 |    \- The maximum concurrency of the upgrade\.
16 |
17 | 1. Checks the nodes in the node group for the `eks.amazonaws.com/nodegroup-image` label, and applies a `eks.amazonaws.com/nodegroup=unschedulable:NoSchedule` taint on all of the nodes in the node group that aren't labeled with the latest AMI ID\. This prevents nodes that were already updated by a previous failed update from being tainted again\.
18 |
19 | 1. Randomly selects up to the maximum number of nodes to upgrade in parallel\.
20 |
21 | 1. Cordons the node after all of the pods are evicted\. This is done so that the service controller doesn't send any new requests to this node and removes it from its list of healthy, active nodes\.
22 |
23 | 1. Sends a termination request to the Auto Scaling group for the cordoned node\.
24 |
25 | 1. Repeats steps 6\-8 until there are no nodes in the node group that are deployed with the earlier version of the launch template\.
26 |
27 | 1. Decrements the Auto Scaling group maximum size and desired size by 1 to return to your pre\-update values\.
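
The behavior above is triggered by a single API call\. For example, the following sketch starts an update for a node group; the names are placeholders, and you can omit `--kubernetes-version` to stay on the current version and pick up only the latest AMI release:

```
aws eks update-nodegroup-version \
    --cluster-name my-cluster \
    --nodegroup-name my-nodegroup \
    --kubernetes-version 1.21
```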
--------------------------------------------------------------------------------
/doc_source/retrieve-ami-id-bottlerocket.md:
--------------------------------------------------------------------------------
1 | # Retrieving Amazon EKS optimized Bottlerocket AMI IDs
2 |
3 | You can programmatically retrieve the Amazon Machine Image \(AMI\) ID for Amazon EKS optimized AMIs by querying the AWS Systems Manager Parameter Store API\. This parameter eliminates the need for you to manually look up Amazon EKS optimized AMI IDs\. For more information about the Systems Manager Parameter Store API, see [GetParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html)\. Your user account must have the `ssm:GetParameter` IAM permission to retrieve the Amazon EKS optimized AMI metadata\.
4 |
5 | You can retrieve the AMI ID with the AWS CLI or the AWS Management Console\.
6 | + **AWS CLI** – You can retrieve the image ID of the latest recommended Amazon EKS optimized Bottlerocket AMI with the following AWS CLI command by using the sub\-parameter `image_id`\. Replace *<1\.21>* with a [supported version](platform-versions.md) and *<region\-code>* with an [Amazon EKS supported Region](https://docs.aws.amazon.com/general/latest/gr/eks.html) for which you want the AMI ID\.
7 |
8 | ```
9 | aws ssm get-parameter --name /aws/service/bottlerocket/aws-k8s-<1.21>/x86_64/latest/image_id --region <region-code> --query "Parameter.Value" --output text
10 | ```
11 |
12 | Example output:
13 |
14 | ```
15 | ami-<068ed1c8e99b4810c>
16 | ```
17 | + **AWS Management Console** – You can query for the recommended Amazon EKS optimized AMI ID using a URL in the AWS Management Console\. The URL opens the Amazon EC2 Systems Manager console with the value of the ID for the parameter\. In the following URL, replace *<1\.21>* with a [supported version](platform-versions.md) and *<region\-code>* with an [Amazon EKS supported Region](https://docs.aws.amazon.com/general/latest/gr/eks.html) for which you want the AMI ID\.
18 |
19 | ```
20 | https://console.aws.amazon.com/systems-manager/parameters/aws/service/bottlerocket/aws-k8s-<1.21>/x86_64/latest/image_id/description?region=<region-code>
21 | ```
--------------------------------------------------------------------------------
/doc_source/node-taints-managed-node-groups.md:
--------------------------------------------------------------------------------
1 | # Node taints on managed node groups
2 |
3 | Amazon EKS supports configuring Kubernetes taints through managed node groups\. Taints and tolerations work together to ensure that pods aren't scheduled onto inappropriate nodes\.
4 |
5 | One or more taints can be applied to a node\. This marks that the node should not accept any pods that do not tolerate the taints\. Tolerations are applied to pods and allow, but don't require, the pods to schedule onto nodes with matching taints\.
6 |
7 | Kubernetes node taints can be applied to new and existing managed node groups using the AWS Management Console or through the Amazon EKS API\.
8 |
9 | The following is an example of creating a node group with a taint using the AWS CLI:
10 |
11 | ```
12 | aws eks create-nodegroup \
13 | --cli-input-json '
14 | {
15 | "clusterName": "my-cluster",
16 | ...
17 | "taints": [
18 | {
19 | "key": "dedicated",
20 | "value": "gpuGroup",
21 | "effect": "NO_SCHEDULE"
22 | }
23 | ]
24 | }'
25 | ```
26 |
27 | For more information on taints and tolerations, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)\. For more information and examples of usage, see the [Kubernetes reference documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#taint)\.
28 |
29 | **Note**
30 | A maximum of 50 taints per node group is allowed\.
31 | Taints can be updated after node group creation by using the `UpdateNodegroupConfig` API, as shown in the example after this note\.
32 | The taint key must begin with a letter or number\. It may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters\.
33 | Optionally, the taint key can begin with a DNS subdomain prefix and a single `/`\. If it begins with a DNS subdomain prefix, it can be 253 characters long\.
34 | The value is optional and must begin with a letter or number\. It may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters\.
35 | The effect must be one of `NO_SCHEDULE`, `PREFER_NO_SCHEDULE`, or `NO_EXECUTE`\.
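
The following sketch updates taints on an existing node group through the `UpdateNodegroupConfig` API mentioned in the note above\. The cluster and node group names are placeholders, and the shorthand syntax assumes a current version of the AWS CLI:

```
aws eks update-nodegroup-config \
    --cluster-name my-cluster \
    --nodegroup-name my-nodegroup \
    --taints 'addOrUpdateTaints=[{key=dedicated,value=gpuGroup,effect=NO_SCHEDULE}]'
```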
--------------------------------------------------------------------------------
/doc_source/eks-managing.md:
--------------------------------------------------------------------------------
1 | # Cluster management
2 |
3 | This chapter includes the following topics to help you manage your cluster\. You can also view information about your [nodes](view-nodes.md) and [workloads](eks-workloads.md) using the AWS Management Console\.
4 | + [Installing `kubectl`](install-kubectl.md) – Learn how to install `kubectl`, a command line tool for managing Kubernetes\.
5 | + [The `eksctl` command line utility](eksctl.md) – Learn how to install a simple command line utility for creating and managing Kubernetes clusters on Amazon EKS\.
6 | + [Tutorial: Deploy the Kubernetes Dashboard \(web UI\)](dashboard-tutorial.md) – Learn how to install the dashboard, a web\-based user interface for your Kubernetes cluster and applications\.
7 | + [Installing the Kubernetes Metrics Server](metrics-server.md) – The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster\. It is not deployed by default in your cluster, but is used by Kubernetes add\-ons, such as the Kubernetes Dashboard and [Horizontal Pod Autoscaler](horizontal-pod-autoscaler.md)\. In this topic you learn how to install the Metrics Server\.
8 | + [Control plane metrics with Prometheus](prometheus.md) – The Kubernetes API server exposes a number of metrics that are useful for monitoring and analysis\. This topic explains how to deploy Prometheus and some of the ways that you can use it to view and analyze what your cluster is doing\.
9 | + [Using Helm with Amazon EKS](helm.md) – The Helm package manager for Kubernetes helps you install and manage applications on your Kubernetes cluster\. This topic helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer\.
10 | + [Tagging your Amazon EKS resources](eks-using-tags.md) – To help you manage your Amazon EKS resources, you can assign your own metadata to each resource in the form of *tags*\. This topic describes tags and shows you how to create them\.
11 | + [Amazon EKS service quotas](service-quotas.md) – Your AWS account has default quotas, formerly referred to as limits, for each AWS service\. Learn about the quotas for Amazon EKS and how to increase them\.
--------------------------------------------------------------------------------
/doc_source/retrieve-ami-id.md:
--------------------------------------------------------------------------------
1 | # Retrieving Amazon EKS optimized Amazon Linux AMI IDs
2 |
3 | You can programmatically retrieve the Amazon Machine Image \(AMI\) ID for Amazon EKS optimized AMIs by querying the AWS Systems Manager Parameter Store API\. This parameter eliminates the need for you to manually look up Amazon EKS optimized AMI IDs\. For more information about the Systems Manager Parameter Store API, see [GetParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html)\. Your user account must have the `ssm:GetParameter` IAM permission to retrieve the Amazon EKS optimized AMI metadata\.
4 |
5 | You can retrieve the AMI ID with the AWS CLI or the AWS Management Console\.
6 | + **AWS CLI** – You can retrieve the image ID of the latest recommended Amazon EKS optimized Amazon Linux AMI with the following command by using the sub\-parameter `image_id`\. Replace *<1\.21>* with a [supported version](platform-versions.md) and *<region\-code>* with an [Amazon EKS supported Region](https://docs.aws.amazon.com/general/latest/gr/eks.html) for which you want the AMI ID\. Replace *<amazon\-linux\-2>* with `amazon-linux-2-gpu` to see the accelerated AMI ID and `amazon-linux-2-arm64` to see the Arm ID\.
7 |
8 | ```
9 | aws ssm get-parameter --name /aws/service/eks/optimized-ami/<1.21>/<amazon-linux-2>/recommended/image_id --region <region-code> --query "Parameter.Value" --output text
10 | ```
11 |
12 | Example output:
13 |
14 | ```
15 | ami-
16 | ```
17 | + **AWS Management Console** – You can query for the recommended Amazon EKS optimized AMI ID using a URL\. The URL opens the Amazon EC2 Systems Manager console with the value of the ID for the parameter\. In the following URL, replace *<1\.21>* with a [supported version](platform-versions.md) and *<region\-code>* with an [Amazon EKS supported Region](https://docs.aws.amazon.com/general/latest/gr/eks.html) for which you want the AMI ID\. Replace *<amazon\-linux\-2>* with `amazon-linux-2-gpu` to see the accelerated AMI ID and `amazon-linux-2-arm64` to see the Arm ID\.
18 |
19 | ```
20 | https://console.aws.amazon.com/systems-manager/parameters/%252Faws%252Fservice%252Feks%252Foptimized-ami%252F<1.21>%252F<amazon-linux-2>%252Frecommended%252Fimage_id/description?region=<region-code>
21 | ```
--------------------------------------------------------------------------------
/doc_source/local-zones.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS on AWS Local Zones
2 |
5 | An AWS Local Zone is an extension of an AWS Region in geographic proximity to your users\. Local Zones have their own connections to the internet and support AWS Direct Connect\. Resources created in a Local Zone can serve local users with low\-latency communications\. For more information, see [Local Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-local-zones)\.
6 |
7 | Amazon EKS supports running certain infrastructure in Local Zones as part of your cluster\. This includes Amazon EC2 instances, Amazon EBS volumes, and Application Load Balancers \(ALBs\)\. We recommend that you consider the following when using Local Zone infrastructure as part of your Amazon EKS cluster\.
8 |
9 | **Kubernetes versions**
10 | Only Amazon EKS clusters that run Kubernetes versions 1\.17 and later can use Local Zone compute resources\.
11 |
12 | **Nodes**
13 | You can't create managed node groups in AWS Local Zones with Amazon EKS\. You must create self\-managed nodes using the Amazon EC2 API or AWS CloudFormation\. **Do not use eksctl to create your cluster or nodes in Local Zones**\. For more information, see [Self\-managed nodes](worker.md)\.
14 |
15 | **Network architecture**
16 | The Amazon EKS managed Kubernetes control plane always runs in the AWS Region; it can't run in a Local Zone\. Because Local Zones appear as a subnet within your VPC, Kubernetes sees your Local Zone resources as part of that subnet\.
17 |
18 | The Amazon EKS Kubernetes cluster communicates with the Amazon EC2 instances you run in the AWS Region or Local Zone using Amazon EKS managed [elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html)\. To learn more about Amazon EKS networking architecture, see [Amazon EKS networking](eks-networking.md)\.
19 |
20 | Unlike with regional subnets, Amazon EKS can't place network interfaces into your Local Zone subnets\. This means that you must not specify Local Zone subnets when you create your cluster\.
21 |
22 | After the cluster is created, tag your Local Zone subnets with the Amazon EKS cluster name\. For more information, see [Subnet tagging](network_reqs.md#vpc-subnet-tagging)\. You can then deploy self\-managed nodes to the Local Zone subnets and the nodes join your Amazon EKS cluster\.
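
For example, a Local Zone subnet can be tagged with the cluster name as follows\. This is a sketch; the subnet ID and cluster name are placeholders, and the exact tag key and value to use are described in the subnet tagging topic linked above:

```
aws ec2 create-tags \
    --resources subnet-1234567890abcdef0 \
    --tags Key=kubernetes.io/cluster/my-cluster,Value=shared
```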
--------------------------------------------------------------------------------
/doc_source/creating-resources-with-cloudformation.md:
--------------------------------------------------------------------------------
1 | # Creating Amazon EKS resources with AWS CloudFormation
2 |
3 | Amazon EKS is integrated with AWS CloudFormation, a service that helps you model and set up your AWS resources so that you can spend less time creating and managing your resources and infrastructure\. You create a template that describes all the AWS resources that you want, for example an Amazon EKS cluster, and AWS CloudFormation takes care of provisioning and configuring those resources for you\.
4 |
5 | When you use AWS CloudFormation, you can reuse your template to set up your Amazon EKS resources consistently and repeatedly\. Just describe your resources once, and then provision the same resources over and over in multiple AWS accounts and Regions\.
6 |
7 | ## Amazon EKS and AWS CloudFormation templates
8 |
9 | To provision and configure resources for Amazon EKS and related services, you must understand [AWS CloudFormation templates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-guide.html)\. Templates are formatted text files in JSON or YAML\. These templates describe the resources that you want to provision in your AWS CloudFormation stacks\. If you're unfamiliar with JSON or YAML, you can use AWS CloudFormation Designer to help you get started with AWS CloudFormation templates\. For more information, see [What is AWS CloudFormation Designer?](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/working-with-templates-cfn-designer.html) in the *AWS CloudFormation User Guide*\.
10 |
11 | Amazon EKS supports creating clusters and node groups in AWS CloudFormation\. For more information, including examples of JSON and YAML templates for your Amazon EKS resources, see [Amazon EKS resource type reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_EKS.html) in the *AWS CloudFormation User Guide*\.
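
As a minimal illustration, a YAML template that declares only a cluster might look like the following sketch\. The role ARN and subnet IDs are placeholders; see the resource type reference above for the full list of supported properties:

```
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ExampleCluster:
    Type: AWS::EKS::Cluster
    Properties:
      Name: my-cluster
      RoleArn: arn:aws:iam::111122223333:role/eksClusterRole
      ResourcesVpcConfig:
        SubnetIds:
          - subnet-1111aaaa2222bbbb3
          - subnet-4444cccc5555dddd6
```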
12 |
13 | ## Learn more about AWS CloudFormation
14 |
15 | To learn more about AWS CloudFormation, see the following resources:
16 | + [AWS CloudFormation](http://aws.amazon.com/cloudformation/)
17 | + [AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)
18 | + [AWS CloudFormation Command Line Interface User Guide](https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/what-is-cloudformation-cli.html)
--------------------------------------------------------------------------------
/doc_source/retrieve-windows-ami-id.md:
--------------------------------------------------------------------------------
1 | # Retrieving Amazon EKS optimized Windows AMI IDs
2 |
3 | You can programmatically retrieve the Amazon Machine Image \(AMI\) ID for Amazon EKS optimized AMIs by querying the AWS Systems Manager Parameter Store API\. This parameter eliminates the need for you to manually look up Amazon EKS optimized AMI IDs\. For more information about the Systems Manager Parameter Store API, see [GetParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html)\. Your user account must have the `ssm:GetParameter` IAM permission to retrieve the Amazon EKS optimized AMI metadata\.
4 |
5 | You can retrieve the AMI ID with the AWS CLI or the AWS Management Console\.
6 | + **AWS CLI** – You can retrieve the image ID of the latest recommended Amazon EKS optimized Windows AMI with the following command by using the sub\-parameter `image_id`\. You can replace *`<1.21>`* \(including *`<>`*\) with any supported Amazon EKS version and can replace *`<region-code>`* with an [Amazon EKS supported Region](https://docs.aws.amazon.com/general/latest/gr/eks.html) for which you want the AMI ID\. Replace *`<Core>`* with `Full` to see the Windows Server full AMI ID\. You can also replace *`<2019>`* with `2004` for the `Core` version only\.
7 |
8 | ```
9 | aws ssm get-parameter --name /aws/service/ami-windows-latest/Windows_Server-<2019>-English-<Core>-EKS_Optimized-<1.21>/image_id --region <region-code> --query "Parameter.Value" --output text
10 | ```
11 |
12 | Example output:
13 |
14 | ```
15 | ami-1234567890abcdef0
16 | ```
17 | + **AWS Management Console** – You can query for the recommended Amazon EKS optimized AMI ID using a URL\. The URL opens the AWS Systems Manager console with the value of the ID for the parameter\. In the following URL, you can replace *`<1.21>`* \(including *`<>`*\) with any supported Amazon EKS version and can replace *`<region-code>`* with an [Amazon EKS supported Region](https://docs.aws.amazon.com/general/latest/gr/eks.html) for which you want the AMI ID\. Replace *`<Core>`* with `Full` to see the Windows Server full AMI ID\. You can also replace *`<2019>`* with `2004` for the `Core` version only\.
18 |
19 | ```
20 | https://console.aws.amazon.com/systems-manager/parameters/%252Faws%252Fservice%252Fami-windows-latest%252FWindows_Server-<2019>-English-<Core>-EKS_Optimized-<1.21>%252Fimage_id/description?region=<region-code>
21 | ```
--------------------------------------------------------------------------------
/doc_source/eks-custom-ami-windows.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS optimized Windows AMI
2 |
3 | You can use Amazon EC2 Image Builder to create custom Amazon EKS optimized Windows AMIs\. You must create your own Image Builder recipe\. For more information, see [Create image recipes and versions](https://docs.aws.amazon.com/imagebuilder/latest/userguide/create-image-recipes.html) in the *EC2 Image Builder User Guide*\. When creating a recipe and selecting a **Source image**, you have the following options:
4 | + **Select managed images** – If you select this option, you can choose one of the following options for **Image origin**\.
5 | + **Quick start \(Amazon\-managed\)** – In the **Image name** dropdown list, select an [Amazon EKS supported Windows Server version](eks-optimized-windows-ami.md)\.
6 | + **Images owned by me** – For **Image name**, select the ARN of your own image with your own license\. The image that you provide can't already have Amazon EKS components installed\.
7 | + **Enter custom AMI ID** – For **AMI ID**, enter the ID for your AMI with your own license\. The image that you provide can't already have Amazon EKS components installed\.
8 |
9 | In the search box under **Build components \- Windows**, select **Amazon\-managed** in the dropdown list and then search for `eks`\. Select the **eks\-optimized\-ami\-windows** search result, even though the result returned may not be the version that you want\. Under **Selected components**, select **Versioning options**, then select **Specify component version**\. Enter `<1.21>.x`, replacing `<1.21>` \(including `<>`\) with a [supported Kubernetes version](kubernetes-versions.md)\. If you enter `1.21.x` as the component version, your Image Builder pipeline builds an AMI with the latest `1.21.x` `kubelet` version\.
10 |
11 | To determine which `kubelet` and Docker versions are installed with the component, select **Components** in the left navigation\. Under **Components**, change **Owned by me** to **Quick start \(Amazon\-managed\)**\. In the **Find components by name** box, enter `eks`\. The search results show the `kubelet` and Docker version in the component returned for each supported Kubernetes version\. The components go through functional testing on the Amazon EKS supported Windows versions\. Any other Windows versions are not supported and might not be compatible with the component\.
12 |
13 | You should also include the `update-windows` component so that your AMI includes the latest Windows operating system patches\.
--------------------------------------------------------------------------------
/doc_source/configuration-vulnerability-analysis.md:
--------------------------------------------------------------------------------
1 | # Configuration and vulnerability analysis in Amazon EKS
2 |
3 | Security is a critical consideration for configuring and maintaining Kubernetes clusters and applications\. The [Center for Internet Security \(CIS\) Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes/) provides guidance for Amazon EKS node security configurations\. The benchmark:
4 | + Is applicable to Amazon EC2 nodes \(both managed and self\-managed\) where you are responsible for security configurations of Kubernetes components\.
5 | + Provides a standard, community\-approved way to ensure that you have configured your Kubernetes cluster and nodes securely when using Amazon EKS\.
6 | + Consists of four sections: control plane logging configuration, node security configurations, policies, and managed services\.
7 | + Supports all of the Kubernetes versions currently available in Amazon EKS and can be run using [kube\-bench](https://github.com/aquasecurity/kube-bench), a standard open source tool for checking configuration using the CIS benchmark on Kubernetes clusters\.
8 |
9 | To learn more, see [Introducing The CIS Amazon EKS Benchmark](http://aws.amazon.com/blogs/containers/introducing-cis-amazon-eks-benchmark/)\.
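
For example, you can run the benchmark with `kube-bench` as a one\-time Kubernetes job\. This is a sketch that assumes the `job-eks.yaml` manifest published in the `kube-bench` GitHub repository; verify the current path and release before you use it\.

```
kubectl apply -f https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-eks.yaml
# After the job completes, review the CIS benchmark results.
kubectl logs job/kube-bench
```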
10 |
11 | Amazon EKS platform versions represent the capabilities of the cluster control plane, including which Kubernetes API server flags are enabled and the current Kubernetes patch version\. New clusters are deployed with the latest platform version\. For details, see [Amazon EKS platform versions](platform-versions.md)\.
12 |
13 | You can [update an Amazon EKS cluster](update-cluster.md) to newer Kubernetes versions\. As new Kubernetes versions become available in Amazon EKS, we recommend that you proactively update your clusters to use the latest available version\. For more information about Kubernetes versions in EKS, see [Amazon EKS Kubernetes versions](kubernetes-versions.md)\.
14 |
15 | Track security or privacy events for Amazon Linux 2 at the [Amazon Linux Security Center](https://alas.aws.amazon.com/alas2.html) or subscribe to the associated [RSS feed](https://alas.aws.amazon.com/AL2/alas.rss)\. Security and privacy events include an overview of the issue, the affected packages, and instructions for updating your instances to correct the issue\.
16 |
17 | You can use [Amazon Inspector](https://docs.aws.amazon.com/inspector/latest/userguide/inspector_introduction.html) to check for unintended network accessibility of your nodes and for vulnerabilities on those Amazon EC2 instances\.
--------------------------------------------------------------------------------
/doc_source/pod-multiple-network-interfaces.md:
--------------------------------------------------------------------------------
1 | # Multiple network interfaces for pods
2 |
3 | Multus CNI is a container network interface \(CNI\) plugin for Amazon EKS that enables attaching multiple network interfaces to a pod\. For more information, see the [Multus\-CNI](https://github.com/k8snetworkplumbingwg/multus-cni) documentation on GitHub\.
4 |
5 | In Amazon EKS, each pod has one network interface assigned by the Amazon VPC CNI plugin\. With Multus, you can create a multi\-homed pod that has multiple interfaces\. Multus accomplishes this by acting as a "meta\-plugin", a CNI plugin that can call multiple other CNI plugins\. AWS support for Multus comes configured with the Amazon VPC CNI plugin as the default delegate plugin\.
6 |
7 | **Considerations**
8 | + Amazon EKS doesn't build or publish single root I/O virtualization \(SR\-IOV\) and Data Plane Development Kit \(DPDK\) CNI plugins\. However, you can achieve packet acceleration by connecting directly to Amazon EC2 Elastic Network Adapters \(ENA\) through Multus managed `host-device` and `ipvlan` plugins\.
9 | + Amazon EKS supports Multus, which provides a generic process that enables simple chaining of additional CNI plugins\. Multus and the process of chaining are supported, but AWS doesn't provide support for all compatible CNI plugins that can be chained, or for issues in those CNI plugins that are unrelated to the chaining configuration\.
10 | + Amazon EKS provides support and life cycle management for the Multus plugin, but isn't responsible for any IP address or additional management associated with the additional network interfaces\. The IP address and management of the default network interface utilizing the Amazon VPC CNI plugin remain unchanged\.
11 | + Only the Amazon VPC CNI plugin is officially supported as the default delegate plugin\. You need to modify the published Multus installation manifest to reconfigure the default delegate plugin to an alternate CNI if you choose not to use the Amazon VPC CNI plugin for primary networking\.
12 | + To prevent the Amazon VPC CNI plugin from trying to manage additional network interfaces assigned to pods, you can tag the network interfaces with `node.k8s.amazonaws.com/no_manage`\.
13 | + Multus is compatible with network policies, but the policy has to be enriched to include ports and IP addresses that may be part of additional network interfaces attached to pods\.
14 |
15 | For an implementation walk through, see the [Multus Setup Guide](https://github.com/aws-samples/eks-install-guide-for-multus/blob/main/README.md) on GitHub\.
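
As a minimal illustration \(not part of the walkthrough above\), attaching an additional interface typically involves creating a `NetworkAttachmentDefinition` and referencing it from a pod annotation\. The interface name, subnet, and object name below are placeholder values\.

```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "ipvlan",
      "master": "eth1",
      "ipam": { "type": "host-local", "subnet": "198.19.0.0/24" }
    }'
```

A pod then requests the interface by adding the annotation `k8s.v1.cni.cncf.io/networks: ipvlan-conf` to its metadata\.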
--------------------------------------------------------------------------------
/doc_source/compliance.md:
--------------------------------------------------------------------------------
1 | # Compliance validation for Amazon Elastic Kubernetes Service
2 |
3 | Third\-party auditors assess the security and compliance of AWS services as part of multiple AWS compliance programs, such as SOC, PCI, FedRAMP, and HIPAA\.
4 |
5 | To learn whether Amazon EKS or other AWS services are in scope of specific compliance programs, see [AWS Services in Scope by Compliance Program](http://aws.amazon.com/compliance/services-in-scope/)\. For general information, see [AWS Compliance Programs](http://aws.amazon.com/compliance/programs/)\.
6 |
7 | You can download third\-party audit reports using AWS Artifact\. For more information, see [Downloading Reports in AWS Artifact](https://docs.aws.amazon.com/artifact/latest/ug/downloading-documents.html)\.
8 |
9 | Your compliance responsibility when using AWS services is determined by the sensitivity of your data, your company's compliance objectives, and applicable laws and regulations\. AWS provides the following resources to help with compliance:
10 | + [Security and Compliance Quick Start Guides](http://aws.amazon.com/quickstart/?awsf.quickstart-homepage-filter=categories%23security-identity-compliance) – These deployment guides discuss architectural considerations and provide steps for deploying baseline environments on AWS that are security and compliance focused\.
11 | + [Architecting for HIPAA Security and Compliance Whitepaper ](https://d0.awsstatic.com/whitepapers/compliance/AWS_HIPAA_Compliance_Whitepaper.pdf) – This whitepaper describes how companies can use AWS to create HIPAA\-compliant applications\.
12 | **Note**
13 | Not all services are compliant with HIPAA\.
14 | + [AWS Compliance Resources](http://aws.amazon.com/compliance/resources/) – This collection of workbooks and guides might apply to your industry and location\.
15 | + [Evaluating Resources with Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) in the *AWS Config Developer Guide* – The AWS Config service assesses how well your resource configurations comply with internal practices, industry guidelines, and regulations\.
16 | + [AWS Security Hub](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) – This AWS service provides a comprehensive view of your security state within AWS that helps you check your compliance with security industry standards and best practices\.
17 | + [AWS Audit Manager](https://docs.aws.amazon.com/audit-manager/latest/userguide/what-is.html) – This AWS service helps you continuously audit your AWS usage to simplify how you manage risk and compliance with regulations and industry standards\.
--------------------------------------------------------------------------------
/doc_source/helm.md:
--------------------------------------------------------------------------------
1 | # Using Helm with Amazon EKS
2 |
3 | The Helm package manager for Kubernetes helps you install and manage applications on your Kubernetes cluster\. For more information, see the [Helm documentation](https://docs.helm.sh/)\. This topic helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local system\.
4 |
5 | **Important**
6 | Before you can install Helm charts on your Amazon EKS cluster, you must configure `kubectl` to work for Amazon EKS\. If you have not already done this, see [Create a `kubeconfig` for Amazon EKS](create-kubeconfig.md) before proceeding\. If the following command succeeds for your cluster, you're properly configured\.
7 |
8 | ```
9 | kubectl get svc
10 | ```
11 |
12 | **To install the Helm binaries on your local system**
13 |
14 | 1. Run the appropriate command for your client operating system\.
15 | + If you're using macOS with [Homebrew](https://brew.sh/), install the binaries with the following command\.
16 |
17 | ```
18 | brew install helm
19 | ```
20 | + If you're using Windows with [Chocolatey](https://chocolatey.org/), install the binaries with the following command\.
21 |
22 | ```
23 | choco install kubernetes-helm
24 | ```
25 | + If you're using Linux, install the binaries with the following commands\.
26 |
27 | ```
28 | curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
29 | chmod 700 get_helm.sh
30 | ./get_helm.sh
31 | ```
32 |
33 | 1. To pick up the new binary in your `PATH`, close your current terminal window and open a new one\.
34 |
35 | 1. Confirm that Helm is running with the following command\.
36 |
37 | ```
38 | helm help
39 | ```
40 |
41 | 1. At this point, you can run any Helm commands \(such as `helm install <chart-name>`\) to install, modify, delete, or query Helm charts in your cluster\. If you're new to Helm and don't have a specific chart to install, you can:
42 | + Experiment by installing an example chart\. See [Install an example chart](https://helm.sh/docs/intro/quickstart#install-an-example-chart) in the Helm [Quickstart guide](https://helm.sh/docs/intro/quickstart/)\.
43 | + Create an example chart and push it to Amazon ECR\. For more information, see [Pushing a Helm chart](https://docs.aws.amazon.com/AmazonECR/latest/userguide/push-oci-artifact.html) in the *Amazon Elastic Container Registry User Guide*\.
44 | + Install an Amazon EKS chart from the [eks\-charts](https://github.com/aws/eks-charts#eks-charts) GitHub repo or from [ArtifactHub](https://artifacthub.io/packages/search?page=1&repo=aws)\.
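
For example, the following sketch adds the `eks-charts` repository and installs a chart from it\. The release and chart names are placeholders; run `helm search repo eks` to list the available charts\.

```
helm repo add eks https://aws.github.io/eks-charts
helm repo update
# List the charts available in the repository.
helm search repo eks
# Install a chart (placeholder names).
helm install <my-release> eks/<chart-name>
```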
--------------------------------------------------------------------------------
/doc_source/add-ons-images.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS add\-on container image addresses
2 |
3 | When you deploy Amazon EKS add\-ons such as the [AWS Load Balancer Controller](aws-load-balancer-controller.md), the [Amazon VPC CNI](managing-vpc-cni.md#updating-vpc-cni-add-on), [`kube-proxy`](managing-kube-proxy.md#updating-kube-proxy-add-on), [CoreDNS](managing-coredns.md#updating-coredns-add-on), or [storage drivers](storage.md), you pull an image from an Amazon ECR repository\. The image name and tag are listed in the topics for each add\-on\.
4 |
5 | The following table contains a list of Regions and the addresses you can use to pull images from\.
6 |
7 |
8 | | Region | Address |
9 | | --- | --- |
10 | | af\-south\-1 | 877085696533\.dkr\.ecr\.af\-south\-1\.amazonaws\.com/ |
11 | | ap\-east\-1 | 800184023465\.dkr\.ecr\.ap\-east\-1\.amazonaws\.com/ |
12 | | ap\-northeast\-1 | 602401143452\.dkr\.ecr\.ap\-northeast\-1\.amazonaws\.com/ |
13 | | ap\-northeast\-2 | 602401143452\.dkr\.ecr\.ap\-northeast\-2\.amazonaws\.com/ |
14 | | ap\-northeast\-3 | 602401143452\.dkr\.ecr\.ap\-northeast\-3\.amazonaws\.com/ |
15 | | ap\-south\-1 | 602401143452\.dkr\.ecr\.ap\-south\-1\.amazonaws\.com/ |
16 | | ap\-southeast\-1 | 602401143452\.dkr\.ecr\.ap\-southeast\-1\.amazonaws\.com/ |
17 | | ap\-southeast\-2 | 602401143452\.dkr\.ecr\.ap\-southeast\-2\.amazonaws\.com/ |
18 | | ca\-central\-1 | 602401143452\.dkr\.ecr\.ca\-central\-1\.amazonaws\.com/ |
19 | | cn\-north\-1 | 918309763551\.dkr\.ecr\.cn\-north\-1\.amazonaws\.com\.cn/ |
20 | | cn\-northwest\-1 | 961992271922\.dkr\.ecr\.cn\-northwest\-1\.amazonaws\.com\.cn/ |
21 | | eu\-central\-1 | 602401143452\.dkr\.ecr\.eu\-central\-1\.amazonaws\.com/ |
22 | | eu\-north\-1 | 602401143452\.dkr\.ecr\.eu\-north\-1\.amazonaws\.com/ |
23 | | eu\-south\-1 | 590381155156\.dkr\.ecr\.eu\-south\-1\.amazonaws\.com/ |
24 | | eu\-west\-1 | 602401143452\.dkr\.ecr\.eu\-west\-1\.amazonaws\.com/ |
25 | | eu\-west\-2 | 602401143452\.dkr\.ecr\.eu\-west\-2\.amazonaws\.com/ |
26 | | eu\-west\-3 | 602401143452\.dkr\.ecr\.eu\-west\-3\.amazonaws\.com/ |
27 | | me\-south\-1 | 558608220178\.dkr\.ecr\.me\-south\-1\.amazonaws\.com/ |
28 | | sa\-east\-1 | 602401143452\.dkr\.ecr\.sa\-east\-1\.amazonaws\.com/ |
29 | | us\-east\-1 | 602401143452\.dkr\.ecr\.us\-east\-1\.amazonaws\.com/ |
30 | | us\-east\-2 | 602401143452\.dkr\.ecr\.us\-east\-2\.amazonaws\.com/ |
31 | | us\-gov\-east\-1 | 151742754352\.dkr\.ecr\.us\-gov\-east\-1\.amazonaws\.com/ |
32 | | us\-gov\-west\-1 | 013241004608\.dkr\.ecr\.us\-gov\-west\-1\.amazonaws\.com/ |
33 | | us\-west\-1 | 602401143452\.dkr\.ecr\.us\-west\-1\.amazonaws\.com/ |
34 | | us\-west\-2 | 602401143452\.dkr\.ecr\.us\-west\-2\.amazonaws\.com/ |
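
For example, the following sketch pulls the Amazon VPC CNI image from the `us-west-2` address in the table\. The image tag is a placeholder; use the tag listed in the add\-on's topic\.

```
aws ecr get-login-password --region us-west-2 | docker login --username AWS \
    --password-stdin 602401143452.dkr.ecr.us-west-2.amazonaws.com
docker pull 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:<tag>
```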
--------------------------------------------------------------------------------
/doc_source/api-server-flags.md:
--------------------------------------------------------------------------------
1 | # Viewing API server flags
2 |
3 | You can use the control plane logging feature for Amazon EKS clusters to view the API server flags that were enabled when a cluster was created\. For more information, see [Amazon EKS control plane logging](control-plane-logs.md)\. This topic shows you how to view the API server flags for an Amazon EKS cluster in the Amazon CloudWatch console\.
4 |
5 | When a cluster is first created, the initial API server logs include the flags that were used to start the API server\. If you enable API server logs when you launch the cluster, or shortly thereafter, these logs are sent to CloudWatch Logs and you can view them there\.
6 |
7 | **To view API server flags for a cluster**
8 |
9 | 1. If you have not already done so, enable API server logs for your Amazon EKS cluster\.
10 |
11 | 1. Open the Amazon EKS console at [https://console\.aws\.amazon\.com/eks/home\#/clusters](https://console.aws.amazon.com/eks/home#/clusters)\.
12 |
13 | 1. Choose the name of the cluster to display your cluster information\.
14 |
15 | 1. Select the **Configuration** tab\. On the **Logging** tab, choose **Manage logging**\.
16 |
17 | 1. For **API server**, make sure that the log type is **Enabled**\.
18 |
19 | 1. Choose **Save changes** to finish\.
20 |
21 | 1. Open the CloudWatch console at [https://console\.aws\.amazon\.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/)
22 |
23 | 1. Choose **Logs**, then **Log groups** in the side menu\. Choose the log group for the cluster whose logs you want to view, then choose the **Log streams** tab\.
24 |
25 | 1. In the list of log streams, find the earliest log stream whose name begins with `kube-apiserver-`\. Use the **Last Event Time** column to determine the log stream ages\.
26 |
27 | 1. Scroll up to the earliest events \(the beginning of the log stream\)\. You should see the initial API server flags for the cluster\.
28 | ![\[control plane logs\]](http://docs.aws.amazon.com/eks/latest/userguide/images/server-logs.png)
29 | **Note**
30 | If you don't see the API server logs at the beginning of the log stream, then it is likely that the API server log file was rotated on the server before you enabled API server logging on the server\. Any log files that are rotated before API server logging is enabled cannot be exported to CloudWatch\.
31 | However, you can create a new cluster with the same Kubernetes version and enable the API server logging when you create the cluster\. Clusters with the same platform version have the same flags enabled, so your flags should match the new cluster's flags\. When you finish viewing the flags for the new cluster in CloudWatch, you can delete the new cluster\.
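
As a sketch of an alternative to the console steps, you can read the same log events with the AWS CLI\. Replace `<cluster-name>` and `<log-stream-name>` \(including `<>`\) with your own values\.

```
# List the kube-apiserver log streams for the cluster.
aws logs describe-log-streams --log-group-name /aws/eks/<cluster-name>/cluster \
    --log-stream-name-prefix kube-apiserver

# Read the earliest events of a stream; the first events include the startup flags.
aws logs get-log-events --log-group-name /aws/eks/<cluster-name>/cluster \
    --log-stream-name <log-stream-name> --start-from-head --limit 25
```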
--------------------------------------------------------------------------------
/doc_source/iam-roles-for-service-accounts.md:
--------------------------------------------------------------------------------
1 | # IAM roles for service accounts
2 |
3 | You can associate an IAM role with a Kubernetes service account\. This service account can then provide AWS permissions to the containers in any pod that uses that service account\. With this feature, you no longer need to provide extended permissions to the [Amazon EKS node IAM role](create-node-role.md) so that pods on that node can call AWS APIs\.
4 |
5 | Applications must sign their AWS API requests with AWS credentials\. This feature provides a strategy for managing credentials for your applications, similar to the way that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances\. Instead of creating and distributing your AWS credentials to the containers or using the Amazon EC2 instance’s role, you can associate an IAM role with a Kubernetes service account\. The applications in the pod’s containers can then use an AWS SDK or the AWS CLI to make API requests to authorized AWS services\.
6 |
7 | **Important**
8 | Even if you assign an IAM role to a Kubernetes service account, the pod still also has the permissions assigned to the [Amazon EKS node IAM role](create-node-role.md), unless you block pod access to the IMDS\. For more information, see [Restricting access to the IMDS and Amazon EC2 instance profile credentials](best-practices-security.md#restrict-ec2-credential-access)\.
9 |
10 | The IAM roles for service accounts feature provides the following benefits:
11 | + **Least privilege —** By using the IAM roles for service accounts feature, you no longer need to provide extended permissions to the node IAM role so that pods on that node can call AWS APIs\. You can scope IAM permissions to a service account, and only pods that use that service account have access to those permissions\. This feature also eliminates the need for third\-party solutions such as `kiam` or `kube2iam`\.
12 | + **Credential isolation —** A container can only retrieve credentials for the IAM role that is associated with the service account to which it belongs\. A container never has access to credentials that are intended for another container that belongs to another pod\.
13 | + **Auditability —** Access and event logging is available through CloudTrail to help ensure retrospective auditing\.
14 |
15 | **Enable service accounts to access AWS resources in three steps**
16 |
17 | 1. **[Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md)** – You only need to do this once for a cluster\.
18 |
19 | 1. **[Create an IAM role and attach an IAM policy to it with the permissions that your service accounts need](create-service-account-iam-policy-and-role.md)** – We recommend creating separate roles for each unique collection of permissions that pods need\.
20 |
21 | 1. **[Associate an IAM role with a service account](specify-service-account-role.md)** – Complete this task for each Kubernetes service account that needs access to AWS resources\.
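
If you use `eksctl`, the second and third steps can be combined into a single command\. The following is a sketch; the cluster name, namespace, service account name, and example policy ARN are placeholders\.

```
eksctl create iamserviceaccount --cluster <cluster-name> \
    --namespace <default> --name <my-service-account> \
    --attach-policy-arn arn:aws:iam::aws:policy/<AmazonS3ReadOnlyAccess> \
    --approve
```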
--------------------------------------------------------------------------------
/doc_source/worker.md:
--------------------------------------------------------------------------------
1 | # Self\-managed nodes
2 |
3 | A cluster contains one or more Amazon EC2 nodes that pods are scheduled on\. Amazon EKS nodes run in your AWS account and connect to your cluster's control plane via the cluster API server endpoint\. You deploy one or more nodes into a node group\. A node group is one or more Amazon EC2 instances that are deployed in an [Amazon EC2 Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html)\. All instances in a node group must:
4 | + Be the same instance type
5 | + Be running the same Amazon Machine Image \(AMI\)
6 | + Use the same [Amazon EKS node IAM role](create-node-role.md)
7 |
8 | A cluster can contain several node groups\. As long as each node group meets the previous requirements, the cluster can contain node groups that contain different instance types and host operating systems\. Each node group can contain several nodes\.
9 |
10 | Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on normal EC2 prices\. For more information, see [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/)\.
11 |
12 | Amazon EKS provides specialized Amazon Machine Images \(AMIs\) called Amazon EKS optimized AMIs\. The AMIs are configured to work with Amazon EKS and include Docker, `kubelet`, and the AWS IAM Authenticator\. The AMIs also contain a specialized [bootstrap script](https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh) that allows nodes to discover and connect to your cluster's control plane automatically\.
13 |
14 | If you restrict access to your cluster's public endpoint using CIDR blocks, we recommend that you also enable private endpoint access so that nodes can communicate with the cluster\. Without the private endpoint enabled, the CIDR blocks that you specify for public access must include the egress sources from your VPC\. For more information, see [Amazon EKS cluster endpoint access control](cluster-endpoint.md)\.
15 |
16 | To add self\-managed nodes to your Amazon EKS cluster, see the topics that follow\. If you launch self\-managed nodes manually, then you must add the following tag to each node\. For more information, see [Adding and deleting tags on an individual resource](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#adding-or-deleting-tags)\. If you follow the steps in the guides that follow, then the required tag is automatically added to nodes for you\.
17 |
18 |
19 | | Key | Value |
20 | | --- | --- |
21 | | `kubernetes.io/cluster/<cluster-name>` | `owned` |
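
For example, the following sketch adds the tag to a manually launched node\. Replace the instance ID and cluster name placeholders with your own values\.

```
aws ec2 create-tags --resources <instance-id> \
    --tags Key=kubernetes.io/cluster/<cluster-name>,Value=owned
```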
22 |
23 | For more information about nodes from a general Kubernetes perspective, see [Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) in the Kubernetes documentation\.
24 |
25 | **Topics**
26 | + [Launching self\-managed Amazon Linux nodes](launch-workers.md)
27 | + [Launching self\-managed Bottlerocket nodes](launch-node-bottlerocket.md)
28 | + [Launching self\-managed Windows nodes](launch-windows-workers.md)
29 | + [Self\-managed node updates](update-workers.md)
--------------------------------------------------------------------------------
/doc_source/enable-iam-roles-for-service-accounts.md:
--------------------------------------------------------------------------------
1 | # Create an IAM OIDC provider for your cluster
2 |
3 | Your cluster has an [OpenID Connect](https://openid.net/connect/) issuer URL associated with it\. To use IAM roles for service accounts, an IAM OIDC provider must exist for your cluster\.
4 |
5 | **Prerequisites**
6 | An existing cluster\. If you don't have one, you can create one using one of the [Getting started with Amazon EKS](getting-started.md) guides\.
7 |
8 | **To create an IAM OIDC identity provider for your cluster with `eksctl`**
9 |
10 | 1. Determine whether you have an existing IAM OIDC provider for your cluster\.
11 |
12 | View your cluster's OIDC provider URL\.
13 |
14 | ```
15 | aws eks describe-cluster --name <cluster-name> --query "cluster.identity.oidc.issuer" --output text
16 | ```
17 |
18 | Example output:
19 |
20 | ```
21 | https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E
22 | ```
23 |
24 | List the IAM OIDC providers in your account\. Replace *`<EXAMPLED539D4633E53DE1B716D3041E>`* \(including *`<>`*\) with the value returned from the previous command\.
25 |
26 | ```
27 | aws iam list-open-id-connect-providers | grep <EXAMPLED539D4633E53DE1B716D3041E>
28 | ```
29 |
30 | Example output:
31 |
32 | ```
33 | "Arn": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"
34 | ```
35 |
36 | If output is returned from the previous command, then you already have a provider for your cluster\. If no output is returned, then you must create an IAM OIDC provider\.
37 |
38 | 1. Create an IAM OIDC identity provider for your cluster with the following command\. Replace `<cluster-name>` \(including `<>`\) with your own value\.
39 |
40 | ```
41 | eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve
42 | ```
43 |
44 | **To create an IAM OIDC identity provider for your cluster with the AWS Management Console**
45 |
46 | 1. Open the Amazon EKS console at [https://console\.aws\.amazon\.com/eks/home\#/clusters](https://console.aws.amazon.com/eks/home#/clusters)\.
47 |
48 | 1. Select the name of your cluster and then select the **Configuration** tab\.
49 |
50 | 1. In the **Details** section, note the value of the **OpenID Connect provider URL**\.
51 |
52 | 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
53 |
54 | 1. In the navigation panel, choose **Identity Providers**\. If a **Provider** is listed that matches the URL for your cluster, then you already have a provider for your cluster\. If a provider isn't listed that matches the URL for your cluster, then you must create one\.
55 |
56 | 1. To create a provider, choose **Add Provider**\.
57 |
58 | 1. For **Provider Type**, choose **OpenID Connect**\.
59 |
60 | 1. For **Provider URL**, paste the OIDC issuer URL for your cluster, and then choose **Get thumbprint**\.
61 |
62 | 1. For **Audience**, enter `sts.amazonaws.com` and choose **Add provider**\.
--------------------------------------------------------------------------------
/doc_source/infrastructure-security.md:
--------------------------------------------------------------------------------
1 | # Infrastructure security in Amazon EKS
2 |
3 | As a managed service, Amazon EKS is protected by the AWS global network security procedures that are described in the [Amazon Web Services: Overview of security processes](https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Whitepaper.pdf) paper\.
4 |
5 | You use AWS published API calls to access Amazon EKS through the network\. Clients must support Transport Layer Security \(TLS\) 1\.0 or later\. We recommend TLS 1\.2 or later\. Clients must also support cipher suites with perfect forward secrecy \(PFS\) such as Ephemeral Diffie\-Hellman \(DHE\) or Elliptic Curve Ephemeral Diffie\-Hellman \(ECDHE\)\. Most modern systems such as Java 7 and later support these modes\.
6 |
7 | Additionally, requests must be signed by using an access key ID and a secret access key that is associated with an IAM principal\. Or you can use the [AWS Security Token Service](https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html) \(AWS STS\) to generate temporary security credentials to sign requests\.
8 |
9 | When you create an Amazon EKS cluster, you specify the VPC subnets for your cluster to use\. Amazon EKS requires subnets in at least two Availability Zones\. We recommend a VPC with public and private subnets so that Kubernetes can create public load balancers in the public subnets that load balance traffic to pods running on nodes that are in private subnets\.
10 |
11 | For more information about VPC considerations, see [Cluster VPC considerations](network_reqs.md)\.
12 |
13 | If you create your VPC and node groups with the AWS CloudFormation templates provided in the [Getting started with Amazon EKS](getting-started.md) walkthrough, then your control plane and node security groups are configured with our recommended settings\.
14 |
15 | For more information about security group considerations, see [Amazon EKS security group considerations](sec-group-reqs.md)\.
16 |
17 | When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster \(using Kubernetes management tools such as `kubectl`\)\. By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management \(IAM\) and native Kubernetes [Role Based Access Control](https://kubernetes.io/docs/admin/authorization/rbac/) \(RBAC\)\.
18 |
19 | You can enable private access to the Kubernetes API server so that all communication between your nodes and the API server stays within your VPC\. You can limit the IP addresses that can access your API server from the internet, or completely disable internet access to the API server\.
20 |
21 | For more information about modifying cluster endpoint access, see [Modifying cluster endpoint access](cluster-endpoint.md#modify-endpoint-access)\.
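
For example, the following sketch enables private access and restricts public access to a single CIDR block\. The cluster name and CIDR block are placeholders\.

```
aws eks update-cluster-config --name <cluster-name> \
    --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="<203.0.113.0/24>",endpointPrivateAccess=true
```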
22 |
23 | You can implement network policies with tools such as [Project Calico](calico.md)\. Project Calico is a third party open source project\. For more information, see the [Project Calico documentation](https://docs.projectcalico.org/v3.7/introduction/)\.
--------------------------------------------------------------------------------
/doc_source/monitoring-fargate-usage.md:
--------------------------------------------------------------------------------
1 | # AWS Fargate usage metrics
2 |
3 | You can use CloudWatch usage metrics to provide visibility into your account's usage of resources\. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards\.
4 |
5 | AWS Fargate usage metrics correspond to AWS service quotas\. You can configure alarms that alert you when your usage approaches a service quota\. For more information about Fargate service quotas, see [Amazon EKS service quotas](service-quotas.md)\.
6 |
7 | AWS Fargate publishes the following metrics in the `AWS/Usage` namespace\.
8 |
9 |
10 | | Metric | Description |
11 | | --- | --- |
12 | | `ResourceCount` | The total number of the specified resource running on your account\. The resource is defined by the dimensions associated with the metric\. |
13 |
14 | The following dimensions are used to refine the usage metrics that are published by AWS Fargate\.
15 |
16 |
17 | | Dimension | Description |
18 | | --- | --- |
19 | | `Service` | The name of the AWS service containing the resource\. For AWS Fargate usage metrics, the value for this dimension is `Fargate`\. |
20 | | `Type` | The type of entity that's being reported\. Currently, the only valid value for AWS Fargate usage metrics is `Resource`\. |
21 | | `Resource` | The type of resource that is running\. Currently, AWS Fargate returns information on your Fargate On\-Demand usage\. The resource value for Fargate On\-Demand usage is `OnDemand`\. Fargate On\-Demand usage combines Amazon EKS pods using Fargate, Amazon ECS tasks using the Fargate launch type, and Amazon ECS tasks using the `FARGATE` capacity provider\. |
22 | | `Class` | The class of resource being tracked\. Currently, AWS Fargate does not use the class dimension\. |
23 |
24 | ## Creating a CloudWatch alarm to monitor Fargate resource usage metrics
25 |
26 | AWS Fargate provides CloudWatch usage metrics that correspond to the AWS service quotas for Fargate On\-Demand resource usage\. In the Service Quotas console, you can visualize your usage on a graph\. You can also configure alarms that alert you when your usage approaches a service quota\. For more information, see [AWS Fargate usage metrics](#monitoring-fargate-usage)\.
27 |
28 | Use the following steps to create a CloudWatch alarm based on the Fargate resource usage metrics\.
29 |
30 | **To create an alarm based on your Fargate usage quotas \(AWS Management Console\)**
31 |
32 | 1. Open the Service Quotas console at [https://console\.aws\.amazon\.com/servicequotas/](https://console.aws.amazon.com/servicequotas/)\.
33 |
34 | 1. In the navigation pane, choose **AWS services**\.
35 |
36 | 1. From the **AWS services** list, search for and select **AWS Fargate**\.
37 |
38 | 1. In the **Service quotas** list, select the Fargate usage quota you want to create an alarm for\.
39 |
40 | 1. In the **Amazon CloudWatch Events alarms** section, choose **Create**\.
41 |
42 | 1. For **Alarm threshold**, choose the percentage of your applied quota value that you want to set as the alarm value\.
43 |
44 | 1. For **Alarm name**, enter a name for the alarm and then choose **Create**\.
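
You can create a similar alarm with the AWS CLI\. The following is a sketch; the dimensions match the table above, and the alarm name, threshold, and SNS topic ARN are placeholders that you should derive from your own applied quota value\.

```
aws cloudwatch put-metric-alarm --alarm-name <fargate-ondemand-usage> \
    --namespace AWS/Usage --metric-name ResourceCount \
    --dimensions Name=Service,Value=Fargate Name=Type,Value=Resource Name=Resource,Value=OnDemand \
    --statistic Maximum --period 300 --evaluation-periods 1 \
    --threshold <800> --comparison-operator GreaterThanThreshold \
    --alarm-actions <sns-topic-arn>
# If the metric is published with the Class dimension, also include Name=Class,Value=None.
```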
--------------------------------------------------------------------------------
/doc_source/eks-on-outposts.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS on AWS Outposts
2 |
3 | You can create and run Amazon EKS nodes on AWS Outposts\. AWS Outposts enables native AWS services, infrastructure, and operating models in on\-premises facilities\. In AWS Outposts environments, you can use the same AWS APIs, tools, and infrastructure that you use in the AWS Cloud\. Amazon EKS nodes on AWS Outposts is ideal for low\-latency workloads that need to be run in close proximity to on\-premises data and applications\. For more information about AWS Outposts, see the [AWS Outposts User Guide](https://docs.aws.amazon.com/outposts/latest/userguide/)\.
4 |
5 | ## Prerequisites
6 |
7 | The following are the prerequisites for using Amazon EKS nodes on AWS Outposts:
8 | + You must have installed and configured an Outpost in your on\-premises data center\. For more information, see [Create an Outpost and order Outpost capacity](https://docs.aws.amazon.com/outposts/latest/userguide/order-outpost-capacity.html) in the AWS Outposts User Guide\.
9 | + You must have a reliable network connection between your Outpost and its AWS Region\. We recommend that you provide highly available and low\-latency connectivity between your Outpost and its AWS Region\. For more information, see [Outpost connectivity to the local network](https://docs.aws.amazon.com/outposts/latest/userguide/local-network-connectivity.html) in the AWS Outposts User Guide\.
10 | + The AWS Region for the Outpost must support Amazon EKS\. For a list of supported Regions, see [Amazon EKS service endpoints](https://docs.aws.amazon.com/general/latest/gr/eks.html) in the *AWS General Reference*\.
11 |
12 | ## Considerations
13 | + AWS Identity and Access Management, Network Load Balancer, Classic Load Balancer, and Amazon Route 53 run in the AWS Region, not on Outposts\. This increases latencies between the services and the containers\.
14 | + You can deploy self\-managed nodes to AWS Outposts, but not managed or Fargate nodes\. For more information, see [Launching self\-managed Amazon Linux nodes](launch-workers.md), [Launching self\-managed Bottlerocket nodes](launch-node-bottlerocket.md), or [Launching self\-managed Windows nodes](launch-windows-workers.md)\.
15 | + You can't pass Outposts subnets in when creating a cluster\. For more information, see [Creating an Amazon EKS cluster](create-cluster.md)\.
16 | + You can't use AWS Outposts in China Regions\.
17 |
19 | + If network connectivity between your Outpost and its AWS Region is lost, your nodes continue to run\. However, you can't create new nodes or take new actions on existing deployments until connectivity is restored\. If an instance fails, it isn't automatically replaced\. The Kubernetes control plane runs in the Region, and missed heartbeats caused by events such as a loss of connectivity to the Availability Zone can lead to failures\. The failed heartbeats lead to pods on the Outposts being marked as unhealthy, and eventually the node status times out and pods are marked for eviction\. For more information, see [Node Controller](https://kubernetes.io/docs/concepts/architecture/nodes/#node-controller) in the Kubernetes documentation\.
--------------------------------------------------------------------------------
/doc_source/security.md:
--------------------------------------------------------------------------------
1 | # Security in Amazon EKS
2 |
3 | Cloud security at AWS is the highest priority\. As an AWS customer, you benefit from a data center and network architecture that is built to meet the requirements of the most security\-sensitive organizations\.
4 |
5 | Security is a shared responsibility between AWS and you\. The [shared responsibility model](http://aws.amazon.com/compliance/shared-responsibility-model/) describes this as security *of* the cloud and security *in* the cloud:
6 | + **Security of the cloud** – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud\. For Amazon EKS, AWS is responsible for the Kubernetes control plane, which includes the control plane nodes and `etcd` database\. Third\-party auditors regularly test and verify the effectiveness of our security as part of the [AWS compliance programs](http://aws.amazon.com/compliance/programs/)\. To learn about the compliance programs that apply to Amazon EKS, see [AWS Services in Scope by Compliance Program](http://aws.amazon.com/compliance/services-in-scope/)\.
7 | + **Security in the cloud** – Your responsibility includes the following areas\.
8 | + The security configuration of the data plane, including the configuration of the security groups that allow traffic to pass from the Amazon EKS control plane into the customer VPC
9 | + The configuration of the nodes and the containers themselves
10 | + The node's operating system \(including updates and security patches\)
11 | + Other associated application software:
12 | + Setting up and managing network controls, such as firewall rules
13 | + Managing platform\-level identity and access management, either with or in addition to IAM
14 | + The sensitivity of your data, your company’s requirements, and applicable laws and regulations
15 |
16 | This documentation helps you understand how to apply the shared responsibility model when using Amazon EKS\. The following topics show you how to configure Amazon EKS to meet your security and compliance objectives\. You also learn how to use other AWS services that help you to monitor and secure your Amazon EKS resources\.
17 |
18 | **Note**
19 | Linux containers are made up of control groups \(cgroups\) and namespaces that help limit what a container can access, but all containers share the same Linux kernel as the host Amazon EC2 instance\. Running a container as the root user \(UID 0\) or granting a container access to host resources or namespaces such as the host network or host PID namespace are strongly discouraged, because doing so reduces the effectiveness of the isolation that containers provide\.
20 |
21 | **Topics**
22 | + [Identity and access management for Amazon EKS](security-iam.md)
23 | + [Logging and monitoring in Amazon EKS](logging-monitoring.md)
24 | + [Compliance validation for Amazon Elastic Kubernetes Service](compliance.md)
25 | + [Resilience in Amazon EKS](disaster-recovery-resiliency.md)
26 | + [Infrastructure security in Amazon EKS](infrastructure-security.md)
27 | + [Configuration and vulnerability analysis in Amazon EKS](configuration-vulnerability-analysis.md)
28 | + [Amazon EKS security best practices](best-practices-security.md)
29 | + [Pod security policy](pod-security-policy.md)
30 | + [Using AWS Secrets Manager secrets with Kubernetes](manage-secrets.md)
--------------------------------------------------------------------------------
/doc_source/external-snat.md:
--------------------------------------------------------------------------------
1 | # External source network address translation \(SNAT\)
2 |
3 | Communication within a VPC \(such as pod to pod\) is direct between private IP addresses and requires no source network address translation \(SNAT\)\. When traffic is destined for an address outside of the VPC, the [Amazon VPC CNI plugin for Kubernetes](https://github.com/aws/amazon-vpc-cni-k8s) translates the private IP address of each pod to the primary private IP address assigned to the primary [network interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) of the Amazon EC2 node that the pod is running on, by default\. SNAT:
4 | + Enables pods to communicate bi\-directionally with the internet\. The node must be in a [public subnet](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-subnet-basics) and have a [public](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html) or [elastic](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-eips.html) IP address assigned to the primary private IP address of its primary network interface\. The traffic is translated to and from the public or Elastic IP address and routed to and from the internet by an [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html), as shown in the following picture\.
5 | ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/eks/latest/userguide/images/SNAT-enabled.jpg)
6 |
7 | SNAT is necessary because the internet gateway can only translate between the primary private and public or Elastic IP address assigned to the primary network interface of the Amazon EC2 instance node that pods are running on\.
8 | + Prevents a device in other private IP address spaces \(for example, [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), [Transit VPC](https://docs.aws.amazon.com/aws-technical-content/latest/aws-vpc-connectivity-options/transit-vpc.html), or [Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html)\) from communicating directly to a pod that is not assigned the primary private IP address of the primary network interface of the Amazon EC2 instance node\.
9 |
10 | If the internet or devices in other private IP address spaces need to communicate with a pod that isn't assigned the primary private IP address assigned to the primary network interface of the Amazon EC2 instance node that the pod is running on, then:
11 | + The node must be deployed in a private subnet that has a route to a [NAT device](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-comparison.html) in a public subnet\.
12 | + You need to enable external SNAT in the CNI plugin `aws-node` DaemonSet with the following command:
13 |
14 | ```
15 | kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true
16 | ```
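
You can confirm that the variable was applied with the following check \(this isn't part of the official procedure\)\.

```
kubectl describe daemonset aws-node -n kube-system | grep AWS_VPC_K8S_CNI_EXTERNALSNAT
```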
17 |
18 | After external SNAT is enabled, the CNI plugin doesn't translate a pod's private IP address to the primary private IP address assigned to the primary network interface of the Amazon EC2 instance node that the pod is running on when traffic is destined for an address outside of the VPC\. Traffic from the pod to the internet is externally translated to and from the public IP address of the NAT device and routed to and from the internet by an internet gateway, as shown in the following picture\.
19 |
20 | ![\[Image NOT FOUND\]](http://docs.aws.amazon.com/eks/latest/userguide/images/SNAT-disabled.jpg)
--------------------------------------------------------------------------------
/doc_source/storage-classes.md:
--------------------------------------------------------------------------------
1 | # Storage classes
2 |
3 | Amazon EKS clusters that were created prior to Kubernetes version 1\.11 were not created with any storage classes\. You must define storage classes for your cluster to use and you should define a default storage class for your persistent volume claims\. For more information, see [Storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes) in the Kubernetes documentation\.
4 |
5 | **Note**
6 | This topic uses the [in\-tree Amazon EBS storage provisioner](https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore)\. The existing [in\-tree Amazon EBS plugin](https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore) is still supported, but by using a CSI driver, you benefit from decoupling the Kubernetes upstream release cycle from the CSI driver release cycle\. Eventually, the in\-tree plugin will be discontinued in favor of the CSI driver\.
7 |
8 | **To create an AWS storage class for your Amazon EKS cluster**
9 |
10 | 1. Determine which storage classes your cluster already has\.
11 |
12 | ```
13 | kubectl get storageclass
14 | ```
15 |
16 | Output
17 |
18 | ```
19 | NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
20 | gp2 (default) kubernetes.io/aws-ebs Delete WaitForFirstConsumer false 34m
21 | ```
22 |
23 | If your cluster returns the previous output, then it already has the storage class defined in the remaining steps\. You can define other storage classes using the steps for deploying any of the CSI drivers in the [Storage](storage.md) chapter\. Once deployed, you can set one of the storage classes as your [default](#define-default-storage-class) storage class\.
24 |
25 | 1. Create an AWS storage class manifest file for your storage class\. The `gp2-storage-class.yaml` example below defines a storage class called `gp2` that uses the Amazon EBS `gp2` volume type\.
26 |
27 | For more information about the options available for AWS storage classes, see [AWS EBS](https://kubernetes.io/docs/concepts/storage/storage-classes/#aws-ebs) in the Kubernetes documentation\.
28 |
29 | ```
30 | kind: StorageClass
31 | apiVersion: storage.k8s.io/v1
32 | metadata:
33 | name: gp2
34 | annotations:
35 | storageclass.kubernetes.io/is-default-class: "true"
36 | provisioner: kubernetes.io/aws-ebs
37 | parameters:
38 | type: gp2
39 | fsType: ext4
40 | ```
41 |
42 | 1. Use `kubectl` to create the storage class from the manifest file\.
43 |
44 | ```
45 | kubectl create -f gp2-storage-class.yaml
46 | ```
47 |
48 | Output:
49 |
50 | ```
51 | storageclass "gp2" created
52 | ```
53 |
54 | **To define a default storage class**
55 |
56 | 1. List the existing storage classes for your cluster\. A storage class must be defined before you can set it as a default\.
57 |
58 | ```
59 | kubectl get storageclass
60 | ```
61 |
62 | Output:
63 |
64 | ```
65 | NAME PROVISIONER AGE
66 | gp2 kubernetes.io/aws-ebs 8m
67 | ```
68 |
69 | 1. Choose a storage class and set it as your default by setting the `storageclass.kubernetes.io/is-default-class=true` annotation\.
70 |
71 | ```
72 | kubectl annotate storageclass gp2 storageclass.kubernetes.io/is-default-class=true
73 | ```
74 |
75 | Output:
76 |
77 | ```
78 | storageclass "gp2" patched
79 | ```
80 |
81 | 1. Verify that the storage class is now set as default\.
82 |
83 | ```
84 | kubectl get storageclass
85 | ```
86 |
87 | Output:
88 |
89 | ```
90 | gp2 (default) kubernetes.io/aws-ebs 12m
91 | ```
--------------------------------------------------------------------------------
/doc_source/best-practices-security.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS security best practices
2 |
3 | This topic provides security best practices for your cluster\.
4 |
5 | ## Restricting access to the IMDS and Amazon EC2 instance profile credentials
6 |
7 | By default, the Amazon EC2 [instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) \(IMDS\) provides the credentials assigned to the [node IAM role](create-node-role.md) to the instance, and to any container running on the instance\. When you use [IAM roles for service accounts](iam-roles-for-service-accounts.md), the credential chain of the pod is updated to use the IAM roles for service accounts token\. The pod, however, can still inherit the rights of the instance profile assigned to the node\. We recommend that you block pod access to IMDS to minimize the permissions available to your containers if:
8 | + You’ve implemented IAM roles for service accounts and have assigned necessary permissions directly to all pods that require access to AWS services\.
9 | + No pods in your cluster require access to IMDS for other reasons, such as retrieving the current Region\.
10 |
11 | For more information, see [Retrieving Security Credentials from Instance Metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials)\. You can prevent access to IMDS from your instance and containers using one of the following options\.
12 |
13 | **Important**
14 | If you use the AWS Load Balancer Controller in your cluster, you may need to change your load balancer configuration\. For more information, see [To deploy the AWS Load Balancer Controller to an Amazon EKS cluster](aws-load-balancer-controller.md#deploy-lb-controller)\.
15 | + **Block access to IMDSv1 from the node and all containers and block access to IMDSv2 for all containers that don't use host networking** – Your instance and pods that have `hostNetwork: true` in their pod spec use host networking\. To implement this option, complete the steps in the row and column that apply to your situation\.
16 | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/best-practices-security.html)
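
A minimal sketch of one typical implementation of this option \(requiring IMDSv2 and setting the hop limit to `1` so that containers without host networking can't reach the IMDS\) follows\. The instance ID is a placeholder, and you can also set the equivalent metadata options in your launch template\.

```
aws ec2 modify-instance-metadata-options --instance-id <instance-id> \
    --http-tokens required --http-put-response-hop-limit 1
```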
17 | + **Block access to IMDSv1 and IMDSv2 for all containers that don't use host networking** – Your instance and pods that have `hostNetwork: true` in their pod spec use host networking, but for legacy reasons still require access to IMDSv1\. Run the following `iptables` commands on each of your Amazon Linux nodes \(as root\) or include them in your instance bootstrap user data script\.
18 |
19 | ```
20 | yum install -y iptables-services
21 | iptables --insert FORWARD 1 --in-interface eni+ --destination 169.254.169.254/32 --jump DROP
22 | iptables-save | tee /etc/sysconfig/iptables
23 | systemctl enable --now iptables
24 | ```
25 | **Important**
26 | The previous rule applies only to network interfaces within the node that have a name that starts with `eni`, which is all network interfaces that the CNI plugin creates for pods that don't use host networking\. Traffic to the IMDS is not dropped for the node, or for pods that use host networking, such as `kube-proxy` and the CNI plugin\.
27 | If you implement network policy, using a tool such as [Calico](calico.md), the previous rule may be overridden\. When implementing network policy, ensure that it doesn't override this rule, or that your policy includes this rule\.
28 | If you've applied security groups to pods and therefore, have branch network interfaces, in addition to the previous command, also run the following command\.
29 |
30 | ```
31 | iptables -t mangle -A POSTROUTING -o vlan+ --destination 169.254.169.254/32 --jump DROP
32 | ```
33 | For more information about branch network interfaces, see [Security groups for pods](security-groups-for-pods.md)\.
--------------------------------------------------------------------------------
/doc_source/specify-service-account-role.md:
--------------------------------------------------------------------------------
1 | # Associate an IAM role to a service account
2 |
3 | In Kubernetes, you define the IAM role to associate with a service account in your cluster by adding the following annotation to the service account\.
4 |
5 | **Note**
6 | If you created an IAM role to use with your service account using `eksctl`, this has already been done for you with the service account that you specified when creating the role\.
7 |
8 | ```
9 | apiVersion: v1
10 | kind: ServiceAccount
11 | metadata:
12 | annotations:
13 | eks.amazonaws.com/role-arn: arn:aws:iam::<111122223333>:role/<IAM-role-name>
14 | ```
15 |
16 | **Prerequisites**
17 | + An existing cluster\. If you don't have one, you can create one using one of the [Getting started with Amazon EKS](getting-started.md) guides\.
18 | + An existing IAM OIDC provider for your cluster\. For more information, see [Create an IAM OIDC provider for your cluster](enable-iam-roles-for-service-accounts.md)\.
19 | + An existing service account\. If you don't have one, see [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) in the Kubernetes documentation\.
20 | + An existing IAM role with an attached IAM policy\. If you don't have one, see [Creating an IAM role and policy for your service account](create-service-account-iam-policy-and-role.md)\.
21 |
22 | **To annotate a service account with an IAM role**
23 |
24 | 1. Use the following command to annotate your service account with the ARN of the IAM role that you want to use with your service account\. Be sure to replace the *`<example values>`* \(including `<>`\) with your own\.
25 |
26 | ```
27 | kubectl annotate serviceaccount -n <service-account-namespace> <service-account-name> \
28 | eks.amazonaws.com/role-arn=arn:aws:iam::<111122223333>:role/<IAM-role-name>
29 | ```
30 | **Note**
31 | If you don't have an existing service account, then you need to create one\. For more information, see [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/) in the Kubernetes documentation\. For the service account to be able to use Kubernetes permissions, you must create a `Role` or `ClusterRole`, and then bind the role to the service account\. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation\. When the [AWS VPC CNI plugin](pod-networking.md) is deployed, for example, the deployment manifest creates a service account, cluster role, and cluster role binding\. You can view the [manifest](https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.9/config/v1.9/aws-k8s-cni.yaml) on GitHub to use as an example\.
32 |
33 | 1. Delete and re\-create any existing pods that are associated with the service account to apply the credential environment variables\. The mutating webhook does not apply them to pods that are already running\. The following command deletes the existing `aws-node` DaemonSet pods and deploys them with the service account annotation\. You can modify the namespace, deployment type, and label to update your specific pods\.
34 |
35 | ```
36 | kubectl delete pods -n kube-system -l k8s-app=aws-node
37 | ```
38 |
39 | 1. Confirm that the pods all restarted\.
40 |
41 | ```
42 | kubectl get pods -n kube-system -l k8s-app=aws-node
43 | ```
44 |
45 | 1. View the environment variables of one of the pods and verify that the `AWS_WEB_IDENTITY_TOKEN_FILE` and `AWS_ROLE_ARN` environment variables exist\.
46 |
47 | ```
48 | kubectl exec -n kube-system <aws-node-pod-name> env | grep AWS
49 | ```
50 |
51 | Output:
52 |
53 | ```
54 | AWS_VPC_K8S_CNI_LOGLEVEL=DEBUG
55 | AWS_ROLE_ARN=arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>
56 | AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
57 | ```
--------------------------------------------------------------------------------
/doc_source/fargate-pod-configuration.md:
--------------------------------------------------------------------------------
1 | # Fargate pod configuration
2 |
3 | This section describes some of the unique pod configuration details for running Kubernetes pods on AWS Fargate\.
4 |
5 | ## Pod CPU and memory
6 |
7 | With Kubernetes, you can define requests, which specify the minimum amount of vCPU and memory resources that are allocated to each container in a pod\. Pods are scheduled by Kubernetes to ensure that at least the requested resources for each pod are available on the compute resource\. For more information, see [Managing compute resources for containers](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/) in the Kubernetes documentation\.
8 |
9 | When pods are scheduled on Fargate, the vCPU and memory reservations within the pod specification determine how much CPU and memory to provision for the pod\.
10 | + The maximum request out of any Init containers is used to determine the Init request vCPU and memory requirements\.
11 | + Requests for all long\-running containers are added up to determine the long\-running request vCPU and memory requirements\.
12 | + The larger of the above two values is chosen for the vCPU and memory request to use for your pod\.
13 | + Fargate adds 256 MB to each pod's memory reservation for the required Kubernetes components \(`kubelet`, `kube-proxy`, and `containerd`\)\.
14 |
15 | Fargate rounds up to the compute configuration shown below that most closely matches the sum of vCPU and memory requests in order to ensure pods always have the resources that they need to run\.
16 |
17 | If you do not specify a vCPU and memory combination, then the smallest available combination is used \(\.25 vCPU and 0\.5 GB memory\)\.
18 |
19 | The following table shows the vCPU and memory combinations that are available for pods running on Fargate\.
20 |
21 |
22 | | vCPU value | Memory value |
23 | | --- | --- |
24 | | \.25 vCPU | 0\.5 GB, 1 GB, 2 GB |
25 | | \.5 vCPU | 1 GB, 2 GB, 3 GB, 4 GB |
26 | | 1 vCPU | 2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB |
27 | | 2 vCPU | Between 4 GB and 16 GB in 1\-GB increments |
28 | | 4 vCPU | Between 8 GB and 30 GB in 1\-GB increments |
29 |
30 | The additional memory reserved for the Kubernetes components can cause a Fargate task with more vCPUs than requested to be provisioned\. For example, a request for 1 vCPU and 8 GB memory will have 256 MB added to its memory request, and will provision a Fargate task with 2 vCPUs and 9 GB memory, since no task with 1 vCPU and 9 GB memory is available\.
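
As an illustrative sketch \(the pod name and images are placeholders\), the following pod spec requests a total of 0\.75 vCPU and 1\.5 GB of memory across its long\-running containers\. After Fargate adds 256 MB for the Kubernetes components, the closest matching configuration from the previous table is provisioned: 1 vCPU and 2 GB of memory\.

```
apiVersion: v1
kind: Pod
metadata:
  name: sizing-example
spec:
  containers:
    - name: app
      image: public.ecr.aws/nginx/nginx:latest
      resources:
        requests:
          cpu: 500m       # 0.5 vCPU
          memory: 1Gi
    - name: sidecar
      image: public.ecr.aws/amazonlinux/amazonlinux:2
      command: ["sleep", "infinity"]
      resources:
        requests:
          cpu: 250m       # 0.25 vCPU
          memory: 512Mi
```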
31 |
32 | There is no correlation between the size of the pod running on Fargate and the node size reported by Kubernetes with `kubectl get nodes`\. The reported node size is often larger than the pod's capacity\. You can verify pod capacity with the following command\. Replace `<pod_name>` \(including `<>`\) with the name of your pod\.
33 |
34 | ```
35 | kubectl describe pod <pod_name>
36 | ```
37 |
38 | The output is as follows\.
39 |
40 | ```
41 | ...
42 | annotations:
43 | CapacityProvisioned: 0.25vCPU 0.5GB
44 | ...
45 | ```
46 |
47 | The `CapacityProvisioned` annotation represents the enforced pod capacity and it determines the cost of your pod running on Fargate\. For pricing information for the compute configurations, see [AWS Fargate Pricing](http://aws.amazon.com/fargate/pricing/)\.
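
If you only want the enforced capacity value, you can read the annotation directly\. This one\-liner is a sketch that assumes the annotation key appears exactly as shown in the previous output\.

```
kubectl get pod <pod_name> -o jsonpath='{.metadata.annotations.CapacityProvisioned}'
```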
48 |
49 | ## Fargate storage
50 |
51 | When provisioned, each pod running on Fargate receives 20 GB of container image layer storage\. Pod storage is ephemeral\. After a pod stops, the storage is deleted\. New pods launched onto Fargate on or after May 28, 2020, have encryption of the ephemeral storage volume enabled by default\. The ephemeral pod storage is encrypted with an AES\-256 encryption algorithm using AWS Fargate managed keys\.
52 |
53 | **Note**
54 | The usable storage for Amazon EKS pods that run on Fargate is less than 20 GB because some space is used by the `kubelet` and other Kubernetes modules that are loaded inside the pod\.
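
To see how much ephemeral storage is actually usable, you can check from inside a running Fargate pod\. This is a hedged sketch; it assumes that your container image includes the `df` utility\.

```
kubectl exec <pod_name> -- df -h /
```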
--------------------------------------------------------------------------------
/doc_source/eks-add-ons.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS add\-ons
2 |
3 | An add\-on is software that provides supporting operational capabilities to Kubernetes applications, but is not specific to the application\. This includes software like observability agents or Kubernetes drivers that allow the cluster to interact with underlying AWS resources for networking, compute, and storage\. Add\-on software is typically built and maintained by the Kubernetes community, cloud providers like AWS, or third\-party vendors\. Amazon EKS automatically installs self\-managed add\-ons such as the Amazon VPC CNI, `kube-proxy`, and CoreDNS for every cluster\. You can change the default configuration of the add\-ons and update them when desired\.
4 |
5 | Amazon EKS add\-ons provide installation and management of a curated set of add\-ons for Amazon EKS clusters\. All Amazon EKS add\-ons include the latest security patches and bug fixes, and are validated by AWS to work with Amazon EKS\. Amazon EKS add\-ons allow you to consistently ensure that your Amazon EKS clusters are secure and stable, and reduce the amount of work that you need to do in order to install, configure, and update add\-ons\. If a self\-managed add\-on, such as `kube-proxy`, is already running on your cluster and is available as an Amazon EKS add\-on, then you can install the `kube-proxy` Amazon EKS add\-on to start benefiting from the capabilities of Amazon EKS add\-ons\.
6 |
7 | You can update specific Amazon EKS managed configuration fields for Amazon EKS add\-ons through the Amazon EKS API\. You can also modify configuration fields not managed by Amazon EKS directly within the Kubernetes cluster once the add\-on starts\. This includes defining specific configuration fields for an add\-on where applicable\. These changes are not overridden by Amazon EKS once they are made\. This is made possible using the Kubernetes server\-side apply feature\. For more information, see [Amazon EKS add\-on configuration](add-ons-configuration.md)\.
8 |
9 | Amazon EKS add\-ons can be used with any 1\.18 or later Amazon EKS cluster\. The cluster can include self\-managed and Amazon EKS managed node groups, and Fargate\.
10 |
11 | **Considerations**
12 | + To configure add\-ons for the cluster, your IAM user must have IAM permissions to work with add\-ons\. For more information, see the actions with `Addon` in their name in [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions)\.
13 | + Amazon EKS add\-ons are only available with Amazon EKS clusters running Kubernetes version 1\.18 and later\.
14 | + Amazon EKS add\-ons run on the nodes that you provision or configure for your cluster\. Node types include Amazon EC2 instances and Fargate\.
15 | + You can modify fields that aren't managed by Amazon EKS to customize the installation of an Amazon EKS add\-on\. For more information, see [Amazon EKS add\-on configuration](add-ons-configuration.md)\.
16 | + If you create a cluster with the AWS Management Console, the Amazon EKS `kube-proxy`, Amazon VPC CNI, and CoreDNS Amazon EKS add\-ons are automatically added to your cluster\. If you use eksctl to create your cluster with a `config` file, `eksctl` can also create the cluster with Amazon EKS add\-ons\. If you create your cluster using `eksctl` without a `config` file or with any other tool, the self\-managed `kube-proxy`, Amazon VPC CNI, and CoreDNS add\-ons are installed, rather than the Amazon EKS add\-ons\. You can either manage them yourself or add the Amazon EKS add\-ons manually after cluster creation\.
17 |
18 | You can add, update, or delete Amazon EKS add\-ons using the Amazon EKS API, AWS Management Console, AWS CLI, and `eksctl`\. For detailed steps when using the AWS Management Console, AWS CLI, and `eksctl`, see the topics for the following add\-ons:
19 | + [Amazon VPC CNI](managing-vpc-cni.md)
20 | + [CoreDNS](managing-coredns.md)
21 | + [kube\-proxy](managing-kube-proxy.md)
22 |
23 | You can also create Amazon EKS add\-ons using [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-addon.html)\.
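
As a quick sketch of the AWS CLI workflow \(replace `<my-cluster>`, including `<>`, with your cluster name\), you can list the add\-ons installed on a cluster, install one, and then inspect it\.

```
aws eks list-addons --cluster-name <my-cluster>
aws eks create-addon --cluster-name <my-cluster> --addon-name vpc-cni --resolve-conflicts OVERWRITE
aws eks describe-addon --cluster-name <my-cluster> --addon-name vpc-cni
```

`--resolve-conflicts OVERWRITE` lets Amazon EKS overwrite existing configuration that conflicts with the add\-on's defaults; omit the option if you'd rather the installation fail on conflicts\.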
--------------------------------------------------------------------------------
/doc_source/service-quotas.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS service quotas
2 |
3 | Amazon EKS has integrated with Service Quotas, an AWS service that enables you to view and manage your quotas from a central location\. For more information, see [What Is Service Quotas?](https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html) in the *Service Quotas User Guide*\. Service Quotas makes it easy to look up the value of your Amazon EKS and AWS Fargate service quotas using the AWS Management Console and AWS CLI\.
4 |
5 | ------
6 | #### [ AWS Management Console ]
7 |
8 | **To view Amazon EKS and Fargate service quotas using the AWS Management Console**
9 |
10 | 1. Open the Service Quotas console at [https://console\.aws\.amazon\.com/servicequotas/](https://console.aws.amazon.com/servicequotas/)\.
11 |
12 | 1. In the navigation panel, choose **AWS services**\.
13 |
14 | 1. From the **AWS services** list, search for and select **Amazon Elastic Kubernetes Service \(Amazon EKS\)** or **AWS Fargate**\.
15 |
16 | In the **Service quotas** list, you can see the service quota name, applied value \(if it is available\), AWS default quota, and whether the quota value is adjustable\.
17 |
18 | 1. To view additional information about a service quota, such as the description, choose the quota name\.
19 |
20 | 1. \(Optional\) To request a quota increase, select the quota that you want to increase, select **Request quota increase**, enter or select the required information, and select **Request**\.
21 |
22 | To work more with service quotas using the AWS Management Console, see the [Service Quotas User Guide](https://docs.aws.amazon.com/servicequotas/latest/userguide/intro.html)\. To request a quota increase, see [Requesting a Quota Increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html) in the *Service Quotas User Guide*\.
23 |
24 | ------
25 | #### [ AWS CLI ]
26 |
27 | **To view Amazon EKS and Fargate service quotas using the AWS CLI**
28 | Run the following command to view your Amazon EKS quotas\.
29 |
30 | ```
31 | aws service-quotas list-aws-default-service-quotas \
32 | --query 'Quotas[*].{Adjustable:Adjustable,Name:QuotaName,Value:Value,Code:QuotaCode}' \
33 | --service-code eks \
34 | --output table
35 | ```
36 |
37 | Run the following command to view your Fargate quotas\.
38 |
39 | ```
40 | aws service-quotas list-aws-default-service-quotas \
41 | --query 'Quotas[*].{Adjustable:Adjustable,Name:QuotaName,Value:Value,Code:QuotaCode}' \
42 | --service-code fargate \
43 | --output table
44 | ```
45 |
46 | **Note**
47 | The quota returned is the maximum number of Amazon ECS tasks or Amazon EKS pods running concurrently on Fargate in this account in the current Region\.
48 |
49 | To work more with service quotas using the AWS CLI, see the [Service Quotas AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/service-quotas/index.html#cli-aws-service-quotas)\. To request a quota increase, see the [request\-service\-quota\-increase](https://docs.aws.amazon.com/cli/latest/reference/service-quotas/request-service-quota-increase.html) command in the [AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/reference/service-quotas/index.html#cli-aws-service-quotas)\.
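
To retrieve a single quota rather than the full table, you can use `get-service-quota` with a quota code taken from the `Code` column of the previous output\. The quota code below is a placeholder\.

```
aws service-quotas get-service-quota --service-code eks --quota-code <quota_code>
```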
50 |
51 | ------
52 |
53 | ## Service quotas
54 |
55 | [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/service-quotas.html)
56 |
57 | ### AWS Fargate service quotas
58 |
59 | The following are Amazon EKS on AWS Fargate service quotas\.
60 |
61 | These service quotas are listed under the AWS Fargate namespace in the Service Quotas console\. To request a quota increase, see [Requesting a quota increase](https://docs.aws.amazon.com/servicequotas/latest/userguide/request-increase.html) in the *Service Quotas User Guide*\.
62 |
63 |
64 | | Service quota | Description | Default quota value | Adjustable |
65 | | --- | --- | --- | --- |
66 | | Fargate On\-Demand resource count | The maximum number of Amazon ECS tasks and Amazon EKS pods running concurrently on Fargate in this account in the current Region\. | 1,000 | Yes |
--------------------------------------------------------------------------------
/doc_source/pod-execution-role.md:
--------------------------------------------------------------------------------
1 | # Pod execution role
2 |
3 | The Amazon EKS pod execution role is required to run pods on AWS Fargate infrastructure\.
4 |
5 | When your cluster creates pods on AWS Fargate infrastructure, the components running on the Fargate infrastructure need to make calls to AWS APIs on your behalf to do things like pull container images from Amazon ECR or route logs to other AWS services\. The Amazon EKS pod execution role provides the IAM permissions to do this\.
6 |
7 | When you create a Fargate profile, you must specify a pod execution role for the Amazon EKS components that run on the Fargate infrastructure using the profile\. This role is added to the cluster's Kubernetes [Role based access control](https://kubernetes.io/docs/admin/authorization/rbac/) \(RBAC\) for authorization, so that the `kubelet` that is running on the Fargate infrastructure can register with your Amazon EKS cluster\. This is what allows Fargate infrastructure to appear in your cluster as nodes\.
8 |
9 | The containers running in the Fargate pod cannot assume the IAM permissions associated with the pod execution role\. To give the containers in your Fargate pod permissions to access other AWS services, you must use [IAM roles for service accounts](iam-roles-for-service-accounts.md)\.
10 |
11 | Before you create a Fargate profile, you must create an IAM role with the following IAM policy:
12 | + `[AmazonEKSFargatePodExecutionRolePolicy](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy%24jsonEditor)`
13 |
14 | ## Check for an existing pod execution role
15 |
16 | You can use the following procedure to check and see if your account already has the Amazon EKS pod execution role\.
17 |
18 | **To check for the `AmazonEKSFargatePodExecutionRole` in the IAM console**
19 |
20 | 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
21 |
22 | 1. In the navigation panel, choose **Roles**\.
23 |
24 | 1. Search the list of roles for `AmazonEKSFargatePodExecutionRole`\. If the role does not exist, see [Creating the Amazon EKS pod execution role](#create-pod-execution-role) to create the role\. If the role does exist, select the role to view the attached policies\.
25 |
26 | 1. Choose **Permissions**\.
27 |
28 | 1. Ensure that the **AmazonEKSFargatePodExecutionRolePolicy** Amazon managed policy is attached to the role\. If the policy is attached, then your Amazon EKS pod execution role is properly configured\.
29 |
30 | 1. Choose **Trust Relationships**, **Edit Trust Relationship**\.
31 |
32 | 1. Verify that the trust relationship contains the following policy\. If the trust relationship matches the policy below, choose **Cancel**\. If the trust relationship does not match, copy the policy into the **Policy Document** window and choose **Update Trust Policy**\.
33 |
34 | ```
35 | {
36 | "Version": "2012-10-17",
37 | "Statement": [
38 | {
39 | "Effect": "Allow",
40 | "Principal": {
41 | "Service": "eks-fargate-pods.amazonaws.com"
42 | },
43 | "Action": "sts:AssumeRole"
44 | }
45 | ]
46 | }
47 | ```
48 |
49 | ## Creating the Amazon EKS pod execution role
50 |
51 | You can use the following procedure to create the Amazon EKS pod execution role if you do not already have one for your account\.
52 |
53 | **To create an AWS Fargate pod execution role with the AWS Management Console**
54 |
55 | 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
56 |
57 | 1. Choose **Roles**, then **Create role**\.
58 |
59 | 1. Choose **EKS** from the list of services, **EKS \- Fargate pod** for your use case, and then **Next: Permissions**\.
60 |
61 | 1. Choose **Next: Tags**\.
62 |
63 | 1. \(Optional\) Add metadata to the role by attaching tags as key–value pairs\. For more information about using tags in IAM, see [Tagging IAM Entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide*\.
64 |
65 | 1. Choose **Next: Review**\.
66 |
67 | 1. For **Role name**, enter a unique name for your role, such as `AmazonEKSFargatePodExecutionRole`, then choose **Create role**\.
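
Alternatively, the following is a minimal AWS CLI sketch that creates the same role\. It assumes that you saved the trust policy shown earlier in this topic to a file named `pod-execution-trust-policy.json` \(a hypothetical file name\)\.

```
aws iam create-role \
  --role-name AmazonEKSFargatePodExecutionRole \
  --assume-role-policy-document file://pod-execution-trust-policy.json

aws iam attach-role-policy \
  --role-name AmazonEKSFargatePodExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy
```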
--------------------------------------------------------------------------------
/doc_source/eksctl.md:
--------------------------------------------------------------------------------
1 | # The `eksctl` command line utility
2 |
3 | This topic covers `eksctl`, a simple command line utility for creating and managing Kubernetes clusters on Amazon EKS\. The `eksctl` command line utility provides the fastest and easiest way to create a new cluster with nodes for Amazon EKS\.
4 |
5 | For more information and to see the official documentation, visit [https://eksctl\.io/](https://eksctl.io/)\.
6 |
7 | ## Installing or upgrading `eksctl`
8 |
9 | This section helps you to install or upgrade the latest version of the `eksctl` command line utility\. Select the tab with the name of the operating system that you want to install `eksctl` on\.
10 |
11 | ------
12 | #### [ macOS ]
13 |
14 | **To install or upgrade `eksctl` on macOS using Homebrew**
15 |
16 | The easiest way to get started with Amazon EKS and macOS is by installing `eksctl` with [Homebrew](https://brew.sh/)\. The `eksctl` Homebrew recipe installs `eksctl` and any other dependencies that are required for Amazon EKS, such as `kubectl`\. The recipe also installs the [`aws-iam-authenticator`](install-aws-iam-authenticator.md), which is required if you don't have the AWS CLI version 1\.16\.156 or higher installed\.
17 |
18 | 1. If you do not already have Homebrew installed on macOS, install it with the following command\.
19 |
20 | ```
21 | /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
22 | ```
23 |
24 | 1. Install the Weaveworks Homebrew tap\.
25 |
26 | ```
27 | brew tap weaveworks/tap
28 | ```
29 |
30 | 1. Install or upgrade `eksctl`\.
31 | + Install `eksctl` with the following command:
32 |
33 | ```
34 | brew install weaveworks/tap/eksctl
35 | ```
36 | + If `eksctl` is already installed, run the following command to upgrade:
37 |
38 | ```
39 | brew upgrade eksctl && brew link --overwrite eksctl
40 | ```
41 |
42 | 1. Test that your installation was successful with the following command\.
43 |
44 | ```
45 | eksctl version
46 | ```
47 | **Note**
48 | The `GitTag` version should be at least `0.61.0`\. If not, check your terminal output for any installation or upgrade errors, or manually download an archive of the release from [https://github\.com/weaveworks/eksctl/releases/download/0\.61\.0/eksctl\_Darwin\_amd64\.tar\.gz](https://github.com/weaveworks/eksctl/releases/download/0.61.0/eksctl_Darwin_amd64.tar.gz), extract `eksctl`, and then run it\.
49 |
50 | ------
51 | #### [ Linux ]
52 |
53 | **To install or upgrade `eksctl` on Linux using `curl`**
54 |
55 | 1. Download and extract the latest release of `eksctl` with the following command\.
56 |
57 | ```
58 | curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
59 | ```
60 |
61 | 1. Move the extracted binary to `/usr/local/bin`\.
62 |
63 | ```
64 | sudo mv /tmp/eksctl /usr/local/bin
65 | ```
66 |
67 | 1. Test that your installation was successful with the following command\.
68 |
69 | ```
70 | eksctl version
71 | ```
72 | **Note**
73 | The `GitTag` version should be at least `0.61.0`\. If not, check your terminal output for any installation or upgrade errors, or replace the address in step 1 with `https://github.com/weaveworks/eksctl/releases/download/0.61.0/eksctl_Linux_amd64.tar.gz` and complete steps 1\-3 again\.
74 |
75 | ------
76 | #### [ Windows ]
77 |
78 | **To install or upgrade `eksctl` on Windows using Chocolatey**
79 |
80 | 1. If you do not already have Chocolatey installed on your Windows system, see [Installing Chocolatey](https://chocolatey.org/install)\.
81 |
82 | 1. Install or upgrade `eksctl`\.
83 | + Install the binaries with the following command:
84 |
85 | ```
86 | choco install -y eksctl
87 | ```
88 | + If they are already installed, run the following command to upgrade:
89 |
90 | ```
91 | choco upgrade -y eksctl
92 | ```
93 |
94 | 1. Test that your installation was successful with the following command\.
95 |
96 | ```
97 | eksctl version
98 | ```
99 | **Note**
100 | The `GitTag` version should be at least `0.61.0`\. If not, check your terminal output for any installation or upgrade errors, or manually download an archive of the release from [https://github\.com/weaveworks/eksctl/releases/download/0\.61\.0/eksctl\_Windows\_amd64\.zip](https://github.com/weaveworks/eksctl/releases/download/0.61.0/eksctl_Windows_amd64.zip), extract `eksctl`, and then run it\.
101 |
102 | ------
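
After installation, you can create a cluster with a single command\. The following is a minimal sketch; the cluster name and Region are placeholders, and `eksctl` chooses defaults for anything that you don't specify\.

```
eksctl create cluster --name <my-cluster> --region <region-code>
```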
--------------------------------------------------------------------------------
/doc_source/restrict-service-external-ip.md:
--------------------------------------------------------------------------------
1 | # Restricting external IP addresses that can be assigned to services
2 |
3 | Kubernetes services can be reached from inside of a cluster through:
4 | + A cluster IP address that is assigned automatically by Kubernetes
5 | + Any IP address that you specify for the `externalIPs` property in a service spec\. External IP addresses are not managed by Kubernetes and are the responsibility of the cluster administrator\. External IP addresses specified with `externalIPs` are different from the external IP address assigned to a service of type `LoadBalancer` by a cloud provider\.
6 |
7 | To learn more about Kubernetes services, see [Service](https://kubernetes.io/docs/concepts/services-networking/service/) in the Kubernetes documentation\. You can restrict the IP addresses that can be specified for `externalIPs` in a service spec\.
8 |
9 | **To restrict the IP addresses that can be specified for `externalIPs` in a service spec**
10 |
11 | 1. Deploy cert\-manager to manage webhook certificates\. For more information, see the [cert\-manager](https://cert-manager.io/docs/) documentation\.
12 |
13 | ```
14 | kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.1.1/cert-manager.yaml
15 | ```
16 |
17 | 1. Verify that the cert\-manager pods are running\.
18 |
19 | ```
20 | kubectl get pods -n cert-manager
21 | ```
22 |
23 | Output
24 |
25 | ```
26 | NAME READY STATUS RESTARTS AGE
27 | cert-manager-58c8844bb8-nlx7q 1/1 Running 0 15s
28 | cert-manager-cainjector-745768f6ff-696h5 1/1 Running 0 15s
29 | cert-manager-webhook-67cc76975b-4v4nk 1/1 Running 0 14s
30 | ```
31 |
32 | 1. Review your existing services to ensure that none of them have external IP addresses assigned to them that aren't contained within the CIDR block you want to limit addresses to\.
33 |
34 | ```
35 | kubectl get services --all-namespaces
36 | ```
37 |
38 | Output
39 |
40 | ```
41 | NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
42 | cert-manager cert-manager ClusterIP 10.100.102.137 9402/TCP 20m
43 | cert-manager cert-manager-webhook ClusterIP 10.100.6.136 443/TCP 20m
44 | default kubernetes ClusterIP 10.100.0.1 443/TCP 2d1h
45 | externalip-validation-system externalip-validation-webhook-service ClusterIP 10.100.234.179 443/TCP 16s
46 | kube-system kube-dns ClusterIP 10.100.0.10 53/UDP,53/TCP 2d1h
47 | my-namespace my-service ClusterIP 10.100.128.10 192.168.1.1 80/TCP 149m
48 | ```
49 |
50 | If any of the values are IP addresses that are not within the block you want to restrict access to, you'll need to change the addresses to be within the block, and redeploy the services\. For example, the `my-service` service in the previous output has an external IP address assigned to it that isn't within the CIDR block example in step 5\.
51 |
52 | 1. Download the external IP webhook manifest\. You can also view the [source code for the webhook](https://github.com/kubernetes-sigs/externalip-webhook) on GitHub\.
53 |
54 | ```
55 | curl -o externalip-webhook.yaml https://s3.us-west-2.amazonaws.com/amazon-eks/docs/externalip-webhook.yaml
56 | ```
57 |
58 | 1. Open the downloaded file in your editor and remove the `#` at the start of the following lines\.
59 |
60 | ```
61 | #args:
62 | #- --allowed-external-ip-cidrs=10.0.0.0/8
63 | ```
64 |
65 | Replace `10.0.0.0/8` with your own CIDR block\. You can specify as many blocks as you like\. If specifying multiple blocks, add a comma between blocks\.
66 |
67 | 1. If your cluster is not in the `us-west-2` Region, replace *`us-west-2`*, *602401143452*, and *\.amazonaws\.com/* with the appropriate values for your Region from the list in [Amazon EKS add\-on container image addresses](add-ons-images.md)\.
68 |
69 | ```
70 | image: 602401143452.dkr.ecr.us-west-2.amazonaws.com/externalip-webhook:v1.0.0
71 | ```
72 |
73 | 1. Apply the manifest to your cluster\.
74 |
75 | ```
76 | kubectl apply -f externalip-webhook.yaml
77 | ```
78 |
79 | An attempt to deploy a service to your cluster with an IP address specified for `externalIPs` that is not contained in the blocks that you specified in step 5 will fail\.
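
To verify the webhook, you can attempt to create a service with a disallowed external IP address\. The following is a hedged sketch; it uses an address from the `203.0.113.0/24` documentation range, which should fall outside any CIDR block that you allowed in step 5\.

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: external-ip-test
spec:
  selector:
    app: test
  ports:
    - port: 80
  externalIPs:
    - 203.0.113.10
EOF
```

If the webhook is configured correctly, the request is denied by the validating webhook rather than creating the service\.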
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Guidelines for contributing
2 |
3 | Thank you for your interest in contributing to AWS documentation! We greatly value feedback and contributions from our community.
4 |
5 | Please read through this document before you submit any pull requests or issues. It will help us work together more effectively.
6 |
7 | ## What to expect when you contribute
8 |
9 | When you submit a pull request, our team is notified and will respond as quickly as we can. We'll do our best to work with you to ensure that your pull request adheres to our style and standards. If we merge your pull request, we might make additional edits later for style or clarity.
10 |
11 | The AWS documentation source files on GitHub aren't published directly to the official documentation website. If we merge your pull request, we'll publish your changes to the documentation website as soon as we can, but they won't appear immediately or automatically.
12 |
13 | We look forward to receiving your pull requests for:
14 |
15 | * New content you'd like to contribute (such as new code samples or tutorials)
16 | * Inaccuracies in the content
17 | * Information gaps in the content that need more detail to be complete
18 | * Typos or grammatical errors
19 | * Suggested rewrites that improve clarity and reduce confusion
20 |
21 | **Note:** We all write differently, and you might not like how we've written or organized something currently. We want that feedback. But please be sure that your request for a rewrite is supported by the previous criteria. If it isn't, we might decline to merge it.
22 |
23 | ## How to contribute
24 |
25 | To contribute, send us a pull request. For small changes, such as fixing a typo or adding a link, you can use the [GitHub Edit Button](https://blog.github.com/2011-04-26-forking-with-the-edit-button/). For larger changes:
26 |
27 | 1. [Fork the repository](https://help.github.com/articles/fork-a-repo/).
28 | 2. In your fork, make your change in a branch that's based on this repo's **master** branch.
29 | 3. Commit the change to your fork, using a clear and descriptive commit message.
30 | 4. [Create a pull request](https://help.github.com/articles/creating-a-pull-request-from-a-fork/), answering any questions in the pull request form.
31 |
32 | Before you send us a pull request, please be sure that:
33 |
34 | 1. You're working from the latest source on the **master** branch.
35 | 2. You check [existing open](https://github.com/awsdocs/amazon-eks-user-guide/pulls), and [recently closed](https://github.com/awsdocs/amazon-eks-user-guide/pulls?q=is%3Apr+is%3Aclosed), pull requests to be sure that someone else hasn't already addressed the problem.
36 | 3. You [create an issue](https://github.com/awsdocs/amazon-eks-user-guide/issues/new) before working on a contribution that will take a significant amount of your time.
37 |
38 | For contributions that will take a significant amount of time, [open a new issue](https://github.com/awsdocs/amazon-eks-user-guide/issues/new) to pitch your idea before you get started. Explain the problem and describe the content you want to see added to the documentation. Let us know if you'll write it yourself or if you'd like us to help. We'll discuss your proposal with you and let you know whether we're likely to accept it. We don't want you to spend a lot of time on a contribution that might be outside the scope of the documentation or that's already in the works.
39 |
40 | ## Finding contributions to work on
41 |
42 | If you'd like to contribute, but don't have a project in mind, look at the [open issues](https://github.com/awsdocs/amazon-eks-user-guide/issues) in this repository for some ideas. Any issues with the [help wanted](https://github.com/awsdocs/amazon-eks-user-guide/labels/help%20wanted) or [enhancement](https://github.com/awsdocs/amazon-eks-user-guide/labels/enhancement) labels are a great place to start.
43 |
44 | In addition to written content, we really appreciate new examples and code samples for our documentation, such as examples for different platforms or environments, and code samples in additional languages.
45 |
46 | ## Code of conduct
47 |
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct). For more information, see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact [opensource-codeofconduct@amazon.com](mailto:opensource-codeofconduct@amazon.com) with any additional questions or comments.
49 |
50 | ## Security issue notifications
51 |
52 | If you discover a potential security issue, please notify AWS Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public issue on GitHub.
53 |
54 | ## Licensing
55 |
56 | See the [LICENSE](https://github.com/awsdocs/amazon-eks-user-guide/blob/master/LICENSE) file for this project's licensing. We will ask you to confirm the licensing of your contribution. We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes.
57 |
--------------------------------------------------------------------------------
/doc_source/add-ons-configuration.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS add\-on configuration
2 |
3 | Amazon EKS add\-ons are installed to your cluster using standard, best practice configurations\. For more information about Amazon EKS add\-ons, see [Amazon EKS add\-ons](eks-add-ons.md)\.
4 |
5 | You may want to customize the configuration of an Amazon EKS add\-on to enable advanced features\. Amazon EKS uses the Kubernetes server\-side apply feature to enable management of an add\-on by Amazon EKS without overwriting your configuration for settings that aren't managed by Amazon EKS\. For more information, see [Server\-Side Apply](https://kubernetes.io/docs/reference/using-api/server-side-apply/) in the Kubernetes documentation\. To achieve this, Amazon EKS manages a minimum set of fields for every add\-on that it installs\. You can modify all fields that aren't managed by Amazon EKS, or another Kubernetes control plane process such as `kube-controller-manager`, without issue\.
6 |
7 | **Important**
8 | Modifying a field managed by Amazon EKS prevents Amazon EKS from managing the add\-on and may result in your changes being overwritten when an add\-on is updated\.
9 |
10 | **Prerequisites**
11 | + An existing 1\.18 or later Amazon EKS cluster\.
12 | + An Amazon EKS add\-on added to the cluster\. For more information about adding an Amazon EKS add\-on to your cluster, see [Amazon EKS add\-ons](eks-add-ons.md)\.
13 |
14 | ## View field management status
15 |
16 | You can use `kubectl` to see which fields are managed by Amazon EKS for any Amazon EKS add\-on\.
17 |
18 | **To see the management status of a field**
19 |
20 | 1. Determine which add\-on you want to examine\. To see all of the Deployments and DaemonSets deployed to your cluster, see [View workloads](view-workloads.md)\.
21 |
22 | 1. View the managed fields for an add\-on by running the following command:
23 |
24 | ```
25 | kubectl get <type>/<add-on-name> -n <namespace> -o yaml
26 | ```
27 |
28 | For example, you can see the managed fields for the CoreDNS add\-on with the following command\.
29 |
30 | ```
31 | kubectl get deployment/coredns -n kube-system -o yaml
32 | ```
33 |
34 | Field management is listed in the following section in the returned output\.
35 |
36 | ```
37 | ...
38 | managedFields:
39 | - apiVersion: apps/v1
40 | fieldsType: FieldsV1
41 | fieldsV1:
42 | ...
43 | ```
44 |
45 | ## Understanding field management syntax in the Kubernetes API
46 |
47 | When you view details for a Kubernetes object, the managed fields are returned in the output\. Each key is either a `.` representing the field itself, which always maps to an empty set, or a string that represents a sub\-field or item\. The output for field management consists of the following types of declarations:
48 | + `f:<name>`, where `<name>` is the name of a field in a list\.
49 | + `k:<keys>`, where `<keys>` is a map of a list item's fields\.
50 | + `v:<value>`, where `<value>` is the exact JSON formatted value of a list item\.
51 | + `i:<index>`, where `<index>` is the position of an item in the list\.
52 |
53 | For more information, see [FieldsV1 v1 meta](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#fieldsv1-v1-meta) in the Kubernetes documentation\.
54 |
55 | The following portions of output for the CoreDNS add\-on illustrate the previous declarations:
56 | + **Managed fields** – If a managed field has an `f:` \(field\) specified, but no `k:` \(key\), then the entire field is managed\. Modifications to any values in this field cause a conflict\.
57 |
58 | In the following output, you can see that the container named `coredns` is managed by `eks`\.
59 |
60 | ```
61 | ...
62 | f:containers:
63 | k:{"name":"coredns"}:
64 | .: {}
65 | f:args: {}
66 | f:image: {}
67 | f:imagePullPolicy: {}
68 | ...
69 | manager: eks
70 | ...
71 | ```
72 | + **Managed keys** – If a managed key has a value specified, the declared keys are managed for that field\. Modifying the specified keys causes a conflict\.
73 |
74 | In the following output, you can see that `eks` manages the `config-volume` and `tmp` volumes set with the `name` key\.
75 |
76 | ```
77 | ...
78 | f:volumes:
79 | k:{"name":"config-volume"}:
80 | .: {}
81 | f:configMap:
82 | f:items: {}
83 | f:name: {}
84 | f:name: {}
85 | k:{"name":"tmp"}:
86 | .: {}
87 | f:name: {}
88 | ...
89 | manager: eks
90 | ...
91 | ```
92 | + **Managed fields and keys** – If only a specific key value is managed, you can safely add additional keys, such as arguments, to a field without causing a conflict\. If you add additional keys, make sure that the field isn't managed first\. Adding or modifying any value that is managed causes a conflict\.
93 |
94 | In the following output, you can see that both the `name` key and `name` field are managed\. Adding or modifying any container name causes a conflict with this managed key\.
95 |
96 | ```
97 | ...
98 | f:containers:
99 | k:{"name":"coredns"}:
100 | ...
101 | f:name: {}
102 | ...
103 | manager: eks
104 | ...
105 | ```
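
On `kubectl` version 1\.21 and later, managed fields are hidden from `-o yaml` output by default\. If the `managedFields` section doesn't appear for you, add the `--show-managed-fields` flag, as in this sketch\.

```
kubectl get deployment/coredns -n kube-system -o yaml --show-managed-fields
```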
--------------------------------------------------------------------------------
/doc_source/what-is-eks.md:
--------------------------------------------------------------------------------
1 | # What is Amazon EKS?
2 |
3 | Amazon Elastic Kubernetes Service \(Amazon EKS\) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes\. Kubernetes is an open\-source system for automating the deployment, scaling, and management of containerized applications\. Amazon EKS:
4 | + Runs and scales the Kubernetes control plane across multiple AWS Availability Zones to ensure high availability\.
5 | + Automatically scales control plane instances based on load, detects and replaces unhealthy control plane instances, and provides automated version updates and patching for them\.
6 | + Is integrated with many AWS services to provide scalability and security for your applications, including the following capabilities:
7 | + Amazon ECR for container images
8 | + Elastic Load Balancing for load distribution
9 | + IAM for authentication
10 | + Amazon VPC for isolation
11 | + Runs up\-to\-date versions of the open\-source Kubernetes software, so you can use all of the existing plugins and tooling from the Kubernetes community\. Applications that are running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether they're running in on\-premises data centers or public clouds\. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification\.
12 |
13 | ## Amazon EKS control plane architecture
14 |
15 | Amazon EKS runs a single\-tenant Kubernetes control plane for each cluster\. The control plane infrastructure is not shared across clusters or AWS accounts\. The control plane consists of at least two API server instances and three `etcd` instances that run across three Availability Zones within a Region\. Amazon EKS:
16 | + Actively monitors the load on control plane instances and automatically scales them to ensure high performance\.
17 | + Automatically detects and replaces unhealthy control plane instances, restarting them across the Availability Zones within the Region as needed\.
18 | + Leverages the architecture of AWS Regions in order to maintain high availability\. Because of this, Amazon EKS is able to offer an [SLA for API server endpoint availability](http://aws.amazon.com/eks/sla)\.
19 |
20 | Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components to within a single cluster\. Control plane components for a cluster can't view or receive communication from other clusters or other AWS accounts, except as authorized with Kubernetes RBAC policies\. This secure and highly available configuration makes Amazon EKS reliable and recommended for production workloads\.
21 |
22 | ## How does Amazon EKS work?
23 |
24 | ![\[How Amazon EKS works\]](http://docs.aws.amazon.com/eks/latest/userguide/images/what-is-eks.png)
25 |
26 | Getting started with Amazon EKS is easy:
27 |
28 | 1. Create an Amazon EKS cluster in the AWS Management Console or with the AWS CLI or one of the AWS SDKs\.
29 |
30 | 1. Launch managed or self\-managed Amazon EC2 nodes, or deploy your workloads to AWS Fargate\.
31 |
32 | 1. When your cluster is ready, you can configure your favorite Kubernetes tools, such as `kubectl`, to communicate with your cluster\.
33 |
34 | 1. Deploy and manage workloads on your Amazon EKS cluster the same way that you would with any other Kubernetes environment\. You can also view information about your workloads using the AWS Management Console\.
35 |
36 | To create your first cluster and its associated resources, see [Getting started with Amazon EKS](getting-started.md)\.
37 |
38 | ## Pricing
39 |
40 | An Amazon EKS cluster consists of a control plane and the Amazon EC2 or AWS Fargate compute that you run pods on\. For more information about pricing for the control plane, see [Amazon EKS pricing](http://aws.amazon.com/eks/pricing)\. Both Amazon EC2 and Fargate provide:
41 | + **On\-Demand Instances** – Pay for the instances that you use by the second, with no long\-term commitments or upfront payments\. For more information, see [Amazon EC2 On\-Demand Pricing](http://aws.amazon.com/ec2/pricing/on-demand/) and [AWS Fargate Pricing](http://aws.amazon.com/fargate/pricing/)\.
42 | + **Savings Plans** – You can reduce your costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years\. For more information, see [Pricing with Savings Plans](http://aws.amazon.com/savingsplans/pricing/)\.
43 |
44 | ## Aligning with Amazon EKS for your self\-managed Kubernetes clusters
45 |
46 | Amazon EKS Distro is a distribution of the same open\-source Kubernetes software and dependencies deployed by Amazon EKS in the cloud\. With Amazon EKS Distro, you can create reliable and secure clusters wherever your applications are deployed\. You can rely on the same versions of Kubernetes, `etcd`, CoreDNS, upstream CNI, and CSI sidecars that Amazon EKS deploys, with the latest updates and extended security patching support\. Amazon EKS Distro follows the same Kubernetes version release cycle as Amazon EKS and is provided as an open\-source project\.
47 |
48 | **Note**
49 | The source code for the Amazon EKS Distro is available on [GitHub](https://github.com/aws/eks-distro)\. The latest documentation is available on the Amazon EKS Distro [website](https://distro.eks.amazonaws.com/)\. If you find any issues, you can report them with Amazon EKS Distro by connecting with us on [GitHub](https://github.com/aws/eks-distro)\. There you can open issues, provide feedback, and report bugs\.
--------------------------------------------------------------------------------
/doc_source/fargate.md:
--------------------------------------------------------------------------------
1 | # AWS Fargate
2 |
3 | This topic discusses using Amazon EKS to run Kubernetes pods on AWS Fargate\.
4 |
5 | AWS Fargate is a technology that provides on\-demand, right\-sized compute capacity for [containers](https://aws.amazon.com/what-are-containers)\. With AWS Fargate, you don't have to provision, configure, or scale groups of virtual machines on your own to run containers\. You also don't need to choose server types, decide when to scale your node groups, or optimize cluster packing\. You can control which pods start on Fargate and how they run with [Fargate profiles](fargate-profile.md)\. Fargate profiles are defined as part of your Amazon EKS cluster\.
6 |
7 | Amazon EKS integrates Kubernetes with AWS Fargate by using controllers that are built by AWS using the upstream, extensible model provided by Kubernetes\. These controllers run as part of the Amazon EKS managed Kubernetes control plane and are responsible for scheduling native Kubernetes pods onto Fargate\. The Fargate controllers include a new scheduler that runs alongside the default Kubernetes scheduler in addition to several mutating and validating admission controllers\. When you start a pod that meets the criteria for running on Fargate, the Fargate controllers that are running in the cluster recognize, update, and schedule the pod onto Fargate\.
8 |
9 | This topic describes the different components of pods that run on Fargate, and calls out special considerations for using Fargate with Amazon EKS\.
10 |
11 | ## AWS Fargate considerations
12 |
13 | Here are some things to consider about using Fargate on Amazon EKS\.
14 | + AWS Fargate with Amazon EKS is available in all Amazon EKS Regions except China \(Beijing\), China \(Ningxia\), AWS GovCloud \(US\-East\), and AWS GovCloud \(US\-West\)\.
15 | + Each pod that runs on Fargate has its own isolation boundary\. They don't share the underlying kernel, CPU resources, memory resources, or elastic network interface with another pod\.
16 | + Network Load Balancers and Application Load Balancers \(ALBs\) can be used with Fargate with IP targets only\. For more information, see [Create a network load balancer](network-load-balancing.md#network-load-balancer) and [Application load balancing on Amazon EKS](alb-ingress.md)\.
17 | + Pods must match a Fargate profile at the time that they're scheduled to run on Fargate\. Pods that don't match a Fargate profile might be stuck as `Pending`\. If a matching Fargate profile exists, you can delete pending pods that you have created to reschedule them onto Fargate\.
18 | + You can only use [Security groups for pods](security-groups-for-pods.md) with pods that run on Fargate if one of the following conditions is met: your cluster is 1\.18 with platform version `eks.7` or later, your cluster is 1\.19 with platform version `eks.5` or later, or your cluster is 1\.20 or later\.
19 | + Daemonsets aren't supported on Fargate\. If your application requires a daemon, reconfigure that daemon to run as a sidecar container in your pods\.
20 | + Privileged containers aren't supported on Fargate\.
21 | + Pods running on Fargate can't specify `HostPort` or `HostNetwork` in the pod manifest\.
22 | + The default `nofile` and `nproc` soft limit is 1024 and the hard limit is 65535 for Fargate pods\.
23 | + GPUs aren't currently available on Fargate\.
24 | + Pods that run on Fargate are only supported on private subnets \(with NAT gateway access to AWS services, but not a direct route to an Internet Gateway\), so your cluster's VPC must have private subnets available\. For clusters without outbound internet access, see [Private clusters](private-clusters.md)\.
25 | + You can use the [Vertical Pod Autoscaler](vertical-pod-autoscaler.md) to initially right size the CPU and memory for your Fargate pods, and then use the [Horizontal Pod Autoscaler](horizontal-pod-autoscaler.md) to scale those pods\. If you want the Vertical Pod Autoscaler to automatically re\-deploy pods to Fargate with larger CPU and memory combinations, set the mode for the Vertical Pod Autoscaler to either `Auto` or `Recreate` to ensure correct functionality\. For more information, see the [Vertical Pod Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#quick-start) documentation on GitHub\.
26 | + DNS resolution and DNS hostnames must be enabled for your VPC\. For more information, see [Viewing and updating DNS support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-updating)\.
27 | + Fargate runs each pod in a VM\-isolated environment without sharing resources with other pods\. However, because Kubernetes is a single\-tenant orchestrator, Fargate can't guarantee pod\-level security isolation\. Run sensitive workloads or untrusted workloads that need complete security isolation using separate Amazon EKS clusters\.
28 | + Fargate profiles support specifying subnets from VPC secondary CIDR blocks\. You might want to specify a secondary CIDR block\. This is because there's a limited number of IP addresses available in a subnet\. As a result, there's also a limited number of pods that can be created in the cluster\. By using different subnets for pods, you can increase the number of available IP addresses\. For more information, see [Adding IPv4 CIDR blocks to a VPC\.](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-resize)
29 | + The Amazon EC2 instance metadata service \(IMDS\) isn't available to pods that are deployed to Fargate nodes\. If you have pods that are deployed to Fargate that need IAM credentials, assign them to your pods using [IAM roles for service accounts](iam-roles-for-service-accounts.md)\. If your pods need access to other information available through IMDS, then you must hard code this information into your pod spec\. This includes the Region or Availability Zone that a pod is deployed to\.
30 | + You can't deploy Fargate pods to AWS Outposts, AWS Wavelength or AWS Local Zones\.
--------------------------------------------------------------------------------
/doc_source/service_IAM_role.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS cluster IAM role
2 |
3 | Kubernetes clusters managed by Amazon EKS make calls to other AWS services on your behalf to manage the resources that you use with the service\. Before you can create Amazon EKS clusters, you must create an IAM role with the following IAM policies:
4 | + `[AmazonEKSClusterPolicy](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonEKSClusterPolicy%24jsonEditor)`
5 |
6 | **Note**
7 | Prior to April 16, 2020, [AmazonEKSServicePolicy](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonEKSServicePolicy%24jsonEditor) was also required and the suggested name was `eksServiceRole`\. With the `AWSServiceRoleForAmazonEKS` service\-linked role, that policy is no longer required for clusters created on or after April 16, 2020\.
8 |
9 | ## Check for an existing cluster role
10 |
11 | You can use the following procedure to check and see if your account already has the Amazon EKS cluster role\.
12 |
13 | **To check for the `eksClusterRole` in the IAM console**
14 |
15 | 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
16 |
17 | 1. In the navigation panel, choose **Roles**\.
18 |
19 | 1. Search the list of roles for `eksClusterRole`\. If a role that includes `eksClusterRole` does not exist, then see [Creating the Amazon EKS cluster role](#create-service-role) to create the role\. If a role that includes `eksClusterRole` does exist, then select the role to view the attached policies\.
20 |
21 | 1. Choose **Permissions**\.
22 |
23 | 1. Ensure that the **AmazonEKSClusterPolicy** managed policy is attached to the role\. If the policy is attached, your Amazon EKS cluster role is properly configured\.
24 |
25 | 1. Choose **Trust Relationships**, **Edit Trust Relationship**\.
26 |
27 | 1. Verify that the trust relationship contains the following policy\. If the trust relationship matches the policy below, choose **Cancel**\. If the trust relationship does not match, copy the policy into the **Policy Document** window and choose **Update Trust Policy**\.
28 |
29 | ```
30 | {
31 | "Version": "2012-10-17",
32 | "Statement": [
33 | {
34 | "Effect": "Allow",
35 | "Principal": {
36 | "Service": "eks.amazonaws.com"
37 | },
38 | "Action": "sts:AssumeRole"
39 | }
40 | ]
41 | }
42 | ```
43 |
44 | ## Creating the Amazon EKS cluster role
45 |
46 | You can use the AWS Management Console or AWS CloudFormation to create the cluster role\. Select the tab with the name of the tool that you want to use to create the role\.
47 |
48 | ------
49 | #### [ AWS Management Console ]
50 |
51 | **To create your Amazon EKS cluster role in the IAM console**
52 |
53 | 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
54 |
55 | 1. Choose **Roles**, then **Create role**\.
56 |
57 | 1. Choose **EKS** from the list of services, then **EKS \- Cluster** for your use case, and then **Next: Permissions**\.
58 |
59 | 1. Choose **Next: Tags**\.
60 |
61 | 1. \(Optional\) Add metadata to the role by attaching tags as key–value pairs\. For more information about using tags in IAM, see [Tagging IAM Entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide*\.
62 |
63 | 1. Choose **Next: Review**\.
64 |
65 | 1. For **Role name**, enter a unique name for your role, such as `eksClusterRole`, then choose **Create role**\.
66 |
67 | ------
68 | #### [ AWS CloudFormation ]
69 |
70 | **To create your Amazon EKS cluster role with AWS CloudFormation**
71 |
72 | 1. Save the following AWS CloudFormation template to a text file on your local system\.
73 |
74 | ```
75 | ---
76 | AWSTemplateFormatVersion: '2010-09-09'
77 | Description: 'Amazon EKS Cluster Role'
78 |
79 |
80 | Resources:
81 |
82 | eksClusterRole:
83 | Type: AWS::IAM::Role
84 | Properties:
85 | AssumeRolePolicyDocument:
86 | Version: '2012-10-17'
87 | Statement:
88 | - Effect: Allow
89 | Principal:
90 | Service:
91 | - eks.amazonaws.com
92 | Action:
93 | - sts:AssumeRole
94 | ManagedPolicyArns:
95 | - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
96 |
97 | Outputs:
98 |
99 | RoleArn:
100 | Description: The role that Amazon EKS will use to create AWS resources for Kubernetes clusters
101 | Value: !GetAtt eksClusterRole.Arn
102 | Export:
103 | Name: !Sub "${AWS::StackName}-RoleArn"
104 | ```
105 | **Note**
106 | Prior to April 16, 2020, `ManagedPolicyArns` had an entry for `arn:aws:iam::aws:policy/AmazonEKSServicePolicy`\. With the `AWSServiceRoleForAmazonEKS` service\-linked role, that policy is no longer required\.
107 |
108 | 1. Open the AWS CloudFormation console at [https://console\.aws\.amazon\.com/cloudformation](https://console.aws.amazon.com/cloudformation/)\.
109 |
110 | 1. Choose **Create stack**\.
111 |
112 | 1. For **Specify template**, select **Upload a template file**, and then choose **Choose file**\.
113 |
114 | 1. Choose the file you created earlier, and then choose **Next**\.
115 |
116 | 1. For **Stack name**, enter a name for your role, such as `eksClusterRole`, and then choose **Next**\.
117 |
118 | 1. On the **Configure stack options** page, choose **Next**\.
119 |
120 | 1. On the **Review** page, review your information, acknowledge that the stack might create IAM resources, and then choose **Create stack**\.
121 |
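You can also deploy the same template from the AWS CLI\. This is a minimal sketch that assumes you saved the template to a file named `eks-cluster-role.yaml` \(a hypothetical file name\)\.

```
aws cloudformation create-stack \
  --stack-name eksClusterRole \
  --template-body file://eks-cluster-role.yaml \
  --capabilities CAPABILITY_IAM
```
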
122 | ------
--------------------------------------------------------------------------------
/doc_source/prometheus.md:
--------------------------------------------------------------------------------
1 | # Control plane metrics with Prometheus
2 |
3 | The Kubernetes API server exposes a number of metrics that are useful for monitoring and analysis\. These metrics are exposed internally through a metrics endpoint that refers to the `/metrics` HTTP API\. Like other endpoints, this endpoint is exposed on the Amazon EKS control plane\. This topic explains some of the ways you can use this endpoint to view and analyze what your cluster is doing\.
4 |
5 | ## Viewing the raw metrics
6 |
7 | To view the raw metrics output, use `kubectl` with the `--raw` flag\. This command allows you to pass any HTTP path and returns the raw response\.
8 |
9 | ```
10 | kubectl get --raw /metrics
11 | ```
12 |
13 | Example output:
14 |
15 | ```
16 | ...
17 | # HELP rest_client_requests_total Number of HTTP requests, partitioned by status code, method, and host.
18 | # TYPE rest_client_requests_total counter
19 | rest_client_requests_total{code="200",host="127.0.0.1:21362",method="POST"} 4994
20 | rest_client_requests_total{code="200",host="127.0.0.1:443",method="DELETE"} 1
21 | rest_client_requests_total{code="200",host="127.0.0.1:443",method="GET"} 1.326086e+06
22 | rest_client_requests_total{code="200",host="127.0.0.1:443",method="PUT"} 862173
23 | rest_client_requests_total{code="404",host="127.0.0.1:443",method="GET"} 2
24 | rest_client_requests_total{code="409",host="127.0.0.1:443",method="POST"} 3
25 | rest_client_requests_total{code="409",host="127.0.0.1:443",method="PUT"} 8
26 | # HELP ssh_tunnel_open_count Counter of ssh tunnel total open attempts
27 | # TYPE ssh_tunnel_open_count counter
28 | ssh_tunnel_open_count 0
29 | # HELP ssh_tunnel_open_fail_count Counter of ssh tunnel failed open attempts
30 | # TYPE ssh_tunnel_open_fail_count counter
31 | ssh_tunnel_open_fail_count 0
32 | ```
33 |
34 | This raw output returns verbatim what the API server exposes\. These metrics are represented in a [Prometheus format](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md)\. This format allows the API server to expose different metrics broken down by line\. Each line includes a metric name, tags, and a value\.
35 |
36 | ```
37 | metric_name{"tag"="value"[,...]} value
38 | ```
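39 | 
40 | If you only need a single metric family from this output, you can filter the raw response on the client side\. For example, the following command returns only the `rest_client_requests_total` lines shown in the example output above\.
41 | 
42 | ```
43 | kubectl get --raw /metrics | grep rest_client_requests_total
44 | ```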
39 |
40 | While this endpoint is useful if you are looking for a specific metric, you typically want to analyze these metrics over time\. To do this, you can deploy [Prometheus](https://prometheus.io/) into your cluster\. Prometheus is a monitoring and time series database that scrapes exposed endpoints and aggregates data, allowing you to filter, graph, and query the results\.
41 |
42 | ## Deploying Prometheus
43 |
44 | This topic helps you deploy Prometheus into your cluster with Helm V3, a package manager for Kubernetes clusters\. If you already have Helm installed, you can check your version with the `helm version` command\. For more information about Helm and how to install it, see [Using Helm with Amazon EKS](helm.md)\.
45 |
46 | After you configure Helm for your Amazon EKS cluster, you can use it to deploy Prometheus with the following steps\.
47 |
48 | **To deploy Prometheus using Helm**
49 |
50 | 1. Create a Prometheus namespace\.
51 |
52 | ```
53 | kubectl create namespace prometheus
54 | ```
55 |
56 | 1. Add the `prometheus-community` chart repository\.
57 |
58 | ```
59 | helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
60 | ```
61 |
62 | 1. Deploy Prometheus\.
63 |
64 | ```
65 | helm upgrade -i prometheus prometheus-community/prometheus \
66 | --namespace prometheus \
67 | --set alertmanager.persistentVolume.storageClass="gp2",server.persistentVolume.storageClass="gp2"
68 | ```
69 | **Note**
70 | If you get the error ``Error: failed to download "stable/prometheus" (hint: running `helm repo update` may help)`` when executing this command, run `helm repo update`, and then try running the Step 3 command again\.
71 | If you get the error `Error: rendered manifests contain a resource that already exists`, run `helm uninstall your-release-name -n namespace`, then try running the Step 3 command again\.
72 |
73 | 1. Verify that all of the pods in the `prometheus` namespace are in the `READY` state\.
74 |
75 | ```
76 | kubectl get pods -n prometheus
77 | ```
78 |
79 | Output:
80 |
81 | ```
82 | NAME READY STATUS RESTARTS AGE
83 | prometheus-alertmanager-59b4c8c744-r7bgp 1/2 Running 0 48s
84 | prometheus-kube-state-metrics-7cfd87cf99-jkz2f 1/1 Running 0 48s
85 | prometheus-node-exporter-jcjqz 1/1 Running 0 48s
86 | prometheus-node-exporter-jxv2h 1/1 Running 0 48s
87 | prometheus-node-exporter-vbdks 1/1 Running 0 48s
88 | prometheus-pushgateway-76c444b68c-82tnw 1/1 Running 0 48s
89 | prometheus-server-775957f748-mmht9 1/2 Running 0 48s
90 | ```
91 |
92 | 1. Use `kubectl` to port forward the Prometheus console to your local machine\.
93 |
94 | ```
95 | kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
96 | ```
97 |
98 | 1. Point a web browser to [localhost:9090](localhost:9090) to view the Prometheus console\.
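99 | 
100 |    You can also query the same data from the command line through the Prometheus HTTP API\. The following is a minimal sketch that assumes the port forward from the previous step is still running\. It queries the built\-in `up` metric, which reports whether each scrape target is reachable\.
101 | 
102 |    ```
103 |    # Query the current value of the "up" metric through the port-forwarded console.
104 |    curl -s 'http://localhost:9090/api/v1/query?query=up'
105 |    ```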
99 |
100 | 1. Choose a metric from the **\- insert metric at cursor** menu, then choose **Execute**\. Choose the **Graph** tab to show the metric over time\. The following image shows `container_memory_usage_bytes` over time\.
101 | ![\[Prometheus metrics\]](http://docs.aws.amazon.com/eks/latest/userguide/images/prometheus-metric.png)
102 |
103 | 1. From the top navigation bar, choose **Status**, then **Targets**\.
104 | ![\[Prometheus console\]](http://docs.aws.amazon.com/eks/latest/userguide/images/prometheus.png)
105 |
106 | All of the Kubernetes endpoints that are connected to Prometheus using service discovery are displayed\.
--------------------------------------------------------------------------------
/doc_source/pod-networking.md:
--------------------------------------------------------------------------------
1 | # Pod networking \(CNI\)
2 |
3 | Amazon EKS supports native VPC networking with the Amazon VPC Container Network Interface \(CNI\) plugin for Kubernetes\. This plugin assigns an IP address from your VPC to each pod\. The plugin is an open\-source project that is maintained on GitHub\. For more information, see [amazon\-vpc\-cni\-k8s](https://github.com/aws/amazon-vpc-cni-k8s) and [Proposal: CNI plugin for Kubernetes networking over Amazon VPC](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/cni-proposal.md) on GitHub\. The Amazon VPC CNI plugin is fully supported for use on Amazon EKS and self\-managed Kubernetes clusters on AWS\.
4 |
5 | **Note**
6 | Kubernetes can use the [Container Networking Interface \(CNI\)](https://github.com/containernetworking/cni) for configurable networking setups\. The Amazon VPC CNI plugin might not meet requirements for all use cases\. Amazon EKS maintains a network of partners that offer alternative CNI solutions with commercial support options\. For more information, see [Alternate compatible CNI plugins](alternate-cni-plugins.md)\.
7 |
8 | When you create an Amazon EKS node, it has one network interface\. All Amazon EC2 instance types support more than one network interface\. The network interface attached to the instance when the instance is created is called the *primary network interface*\. Any additional network interface attached to the instance is called a *secondary network interface*\. Each network interface can be assigned multiple private IP addresses\. One of the private IP addresses is the *primary IP address*, whereas all other addresses assigned to the network interface are *secondary IP addresses*\. For more information about network interfaces, see [Elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) in the *Amazon EC2 User Guide for Linux Instances*\. For more information about how many network interfaces and private IP addresses are supported for each network interface, see [IP addresses per network interface per instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) in the *Amazon EC2 User Guide for Linux Instances*\. For example, an `m5.large` instance type supports three network interfaces and ten private IP addresses for each network interface\.
9 |
10 | The Amazon VPC Container Network Interface \(CNI\) plugin for Kubernetes is deployed with each of your Amazon EC2 nodes in a Daemonset with the name `aws-node`\. The plugin consists of two primary components:
11 | + **L\-IPAM daemon** – Responsible for creating network interfaces and attaching the network interfaces to Amazon EC2 instances, assigning secondary IP addresses to network interfaces, and maintaining a warm pool of IP addresses on each node for assignment to Kubernetes pods when they are scheduled\. When the number of pods running on the node exceeds the number of addresses that can be assigned to a single network interface, the plugin starts allocating a new network interface, as long as the maximum number of network interfaces for the instance aren't already attached\. There are configuration variables that allow you to change the default value for when the plugin creates new network interfaces\. For more information, see [`WARM_ENI_TARGET`, `WARM_IP_TARGET` and `MINIMUM_IP_TARGET`](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/eni-and-ip-target.md) on GitHub\.
12 |
13 | Each pod that you deploy is assigned one secondary private IP address from one of the network interfaces attached to the instance\. Previously, it was mentioned that an `m5.large` instance supports three network interfaces and ten private IP addresses for each network interface\. Even though an `m5.large` instance supports 30 private IP addresses, you can't deploy 30 pods to that node\. To determine how many pods you can deploy to a node, use the following formula:
14 |
15 | ```
16 | (Number of network interfaces for the instance type × (the number of IP addresses per network interface - 1)) + 2
17 | ```
18 |
19 | Using this formula, an `m5.large` instance type can support a maximum of 29 pods\. For a list of the maximum number of pods supported by each instance type, see [eni\-max\-pods\.txt](https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt) on GitHub\. System pods count towards the maximum pods\. For example, the CNI plugin and `kube-proxy` pods run on every node in a cluster, so you're only able to deploy 27 additional pods to an `m5.large` instance, not 29\. Further, CoreDNS runs on some of the nodes in the cluster, which decrements the maximum pods by another one for the nodes it runs on\.
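20 | 
21 |   As a quick sanity check, you can compare the formula's result with the allocatable pod capacity that a running node reports\. The following is a minimal sketch; the node names and values in your cluster will differ\.
22 | 
23 |   ```
24 |   # List each node with the maximum number of pods it can schedule.
25 |   kubectl get nodes -o custom-columns=NAME:.metadata.name,MAXPODS:.status.allocatable.pods
26 |   ```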
20 |
21 | By default, all pods deployed to a node are assigned the same security groups and are assigned private IP addresses from a CIDR block that is assigned to the subnet that one of the instance's network interfaces is connected to\. You can assign IP addresses from a different CIDR block than the subnet that the primary network interface is connected to by configuring [CNI custom networking](cni-custom-network.md)\. You can also use CNI custom networking to assign all pods on a node the same security groups\. The security groups assigned to all pods can be different than the security groups assigned to the primary network interface\. You can assign unique security groups to pods deployed to many Amazon EC2 instance types using security groups for pods\. For more information, see [Security groups for pods](security-groups-for-pods.md)\.
22 | + **CNI plugin** – Responsible for wiring the host network \(for example, configuring the network interfaces and virtual Ethernet pairs\) and adding the correct network interface to the pod namespace\.
23 |
24 | **Important**
25 | If you are using version 1\.7\.0 or later of the CNI plugin and you assign a custom pod security policy to the `aws-node` Kubernetes service account used for the `aws-node` pods deployed by the Daemonset, then the policy must have `NET_ADMIN` in its `allowedCapabilities` section along with `hostNetwork: true` and `privileged: true` in the policy's `spec`\. For more information, see [Pod security policy](pod-security-policy.md)\.
--------------------------------------------------------------------------------
/doc_source/launch-node-bottlerocket.md:
--------------------------------------------------------------------------------
1 | # Launching self\-managed Bottlerocket nodes
2 |
3 | This topic helps you to launch an Auto Scaling group of [Bottlerocket](http://aws.amazon.com/bottlerocket/) nodes that register with your Amazon EKS cluster\. Bottlerocket is a Linux\-based open\-source operating system that is purpose\-built by AWS for running containers on virtual machines or bare metal hosts\. After the nodes join the cluster, you can deploy Kubernetes applications to them\. For more information about Bottlerocket, see the [documentation](https://github.com/bottlerocket-os/bottlerocket/blob/develop/README.md) on GitHub\.
4 |
5 | **Important**
6 | Amazon EKS nodes are standard Amazon EC2 instances, and you are billed for them based on normal Amazon EC2 instance prices\. For more information, see [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/)\.
7 |
8 | **Important**
9 | You can deploy to Amazon EC2 instances with x86 or Arm processors, but not to instances that have GPUs or Inferentia chips\.
10 | You can't deploy to the following regions: China \(Beijing\) \(`cn-north-1`\), China \(Ningxia\) \(`cn-northwest-1`\), AWS GovCloud \(US\-East\) \(`us-gov-east-1`\), or AWS GovCloud \(US\-West\) \(`us-gov-west-1`\)\.
11 | There is no AWS CloudFormation template to deploy nodes with\.
12 |
13 | **To launch Bottlerocket nodes using `eksctl`**
14 |
15 | This procedure requires `eksctl` version `0.61.0` or later\. You can check your version with the following command:
16 |
17 | ```
18 | eksctl version
19 | ```
20 |
21 | For more information on installing or upgrading `eksctl`, see [Installing or upgrading `eksctl`](eksctl.md#installing-eksctl)\.
22 | **Note**
23 | This procedure only works for clusters that were created with `eksctl`\.
24 |
25 | 1. Create a file named *bottlerocket\.yaml* with the following contents\. Replace the *`example values`* with your own values\. If you want to deploy on Arm instances, then replace `m5.large` with an Arm instance type\. If specifying an Arm Amazon EC2 instance type, then review the considerations in [Amazon EKS optimized Arm Amazon Linux AMIs](eks-optimized-ami.md#arm-ami) before deploying\. If you want to deploy using a custom AMI, then see [Building Bottlerocket](https://github.com/bottlerocket-os/bottlerocket/blob/develop/BUILDING.md) on GitHub and [Custom AMI support](https://eksctl.io/usage/custom-ami-support/) in the `eksctl` documentation\. If you want to deploy a managed node group then you must deploy a custom AMI using a launch template\. For more information, see [Launch template support](launch-templates.md)\.
26 | **Important**
27 | If you want to deploy a node group to AWS Outposts, AWS Wavelength, or AWS Local Zones subnets, then the AWS Outposts, AWS Wavelength, or AWS Local Zones subnets must not have been passed in when you created the cluster, and you must specify the subnets in the following example\. For more information see [Create a nodegroup from a config file](https://eksctl.io/usage/managing-nodegroups/#creating-a-nodegroup-from-a-config-file) and [Config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation\.
28 |
29 | ```
30 | ---
31 | apiVersion: eksctl.io/v1alpha5
32 | kind: ClusterConfig
33 |
34 | metadata:
35 | name: my-cluster
36 | region: us-west-2
37 | version: '1.21'
38 |
39 | iam:
40 | withOIDC: true
41 |
42 | nodeGroups:
43 | - name: ng-bottlerocket
44 | instanceType: m5.large
45 | desiredCapacity: 3
46 | amiFamily: Bottlerocket
47 | ami: auto-ssm
48 | iam:
49 | attachPolicyARNs:
50 | - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
51 | - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
52 | - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
53 | ssh:
54 | allow: true
55 | publicKeyName: YOUR_EC2_KEYPAIR_NAME
56 | ```
57 |
58 | 1. Deploy your nodes with the following command\.
59 |
60 | ```
61 | eksctl create cluster --config-file=bottlerocket.yaml
62 | ```
63 |
64 | Output:
65 |
66 | You'll see several lines of output as the nodes are created\. One of the last lines of output is the following example line\.
67 |
68 | ```
69 | [✔] created 1 nodegroup(s) in cluster "my-cluster"
70 | ```
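71 | 
72 |    \(Optional\) You can confirm that the new nodes registered with your cluster and reached the `Ready` status with the following command\.
73 | 
74 |    ```
75 |    kubectl get nodes
76 |    ```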
71 |
72 | 1. \(Optional\) Create a Kubernetes [persistent volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) on a Bottlerocket node using the [Amazon EBS CSI Plugin](https://github.com/kubernetes-sigs/aws-ebs-csi-driver)\. The default Amazon EBS driver relies on file system tools that are not included with Bottlerocket\. For more information about creating a storage class using the driver, see [Amazon EBS CSI driver](ebs-csi.md)\.
73 |
74 | 1. \(Optional\) By default `kube-proxy` sets the `nf_conntrack_max` kernel parameter to a default value that may differ from what Bottlerocket originally sets at boot\. If you prefer to keep Bottlerocket's [default setting](https://github.com/bottlerocket-os/bottlerocket/blob/develop/packages/release/release-sysctl.conf), then edit the kube\-proxy configuration with the following command\.
75 |
76 | ```
77 | kubectl edit -n kube-system daemonset kube-proxy
78 | ```
79 |
80 | Add `--conntrack-max-per-core` and `--conntrack-min` to the `kube-proxy` arguments as shown in the following example\. A setting of `0` implies no change\.
81 |
82 | ```
83 | containers:
84 | - command:
85 | - kube-proxy
86 | - --v=2
87 | - --config=/var/lib/kube-proxy-config/config
88 | - --conntrack-max-per-core=0
89 | - --conntrack-min=0
90 | ```
91 |
92 | 1. \(Optional\) Deploy a [sample application](sample-deployment.md) to test your Bottlerocket nodes\.
93 |
94 | 1. \(Optional\) If you plan to assign IAM roles to all of your Kubernetes service accounts so that pods only have the minimum permissions that they need, and no pods in the cluster require access to the Amazon EC2 instance metadata service \(IMDS\) for other reasons, such as retrieving the current Region, then we recommend blocking pod access to IMDS\. For more information, see [IAM roles for service accounts](iam-roles-for-service-accounts.md) and [Restricting access to the IMDS and Amazon EC2 instance profile credentials](best-practices-security.md#restrict-ec2-credential-access)\.
--------------------------------------------------------------------------------
/doc_source/cni-metrics-helper.md:
--------------------------------------------------------------------------------
1 | # CNI metrics helper
2 |
3 | The CNI metrics helper is a tool that you can use to scrape network interface and IP address information, aggregate metrics at the cluster level, and publish the metrics to Amazon CloudWatch\.
4 |
5 | When managing an Amazon EKS cluster, you may want to know how many IP addresses have been assigned and how many are available\. The CNI metrics helper helps you to:
6 | + Track these metrics over time
7 | + Troubleshoot and diagnose issues related to IP assignment and reclamation
8 | + Provide insights for capacity planning
9 |
10 | When a node is provisioned, the CNI plugin automatically allocates a pool of secondary IP addresses from the node’s subnet to the primary network interface \(`eth0`\)\. This pool of IP addresses is known as the *warm pool*, and its size is determined by the node’s instance type\. For example, a `c4.large` instance can support three network interfaces and ten IP addresses per interface\. However, only nine of the ten addresses on each interface are available for pods, because one IP address is reserved for the elastic network interface itself\. For more information, see [IP Addresses Per Network Interface Per Instance Type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) in the *Amazon EC2 User Guide for Linux Instances*\.
11 |
12 | As the pool of IP addresses is depleted, the plugin automatically attaches another elastic network interface to the instance and allocates another set of secondary IP addresses to that interface\. This process continues until the node can no longer support additional elastic network interfaces\.
13 |
14 | The following metrics are collected for your cluster and exported to CloudWatch:
15 | + The maximum number of network interfaces that the cluster can support
16 | + The number of network interfaces that have been allocated to pods
17 | + The number of IP addresses currently assigned to pods
18 | + The total and maximum numbers of IP addresses available
19 | + The number of ipamD errors
20 |
21 | ## Deploying the CNI metrics helper
22 |
23 | The CNI metrics helper requires `cloudwatch:PutMetricData` permissions to send metric data to CloudWatch\. This section helps you to create an IAM policy with those permissions, apply it to your node instance role, and then deploy the CNI metrics helper\.
24 |
25 | **To create an IAM policy for the CNI metrics helper**
26 |
27 | 1. Create a file called `allow_put_metrics_data.json` and populate it with the following policy document\.
28 |
29 | ```
30 | {
31 | "Version": "2012-10-17",
32 | "Statement": [
33 | {
34 | "Effect": "Allow",
35 | "Action": "cloudwatch:PutMetricData",
36 | "Resource": "*"
37 | }
38 | ]
39 | }
40 | ```
41 |
42 | 1. Create an IAM policy called `CNIMetricsHelperPolicy` for your node instance profile that allows the CNI metrics helper to make calls to AWS APIs on your behalf\. Use the following AWS CLI command to create the IAM policy in your AWS account\.
43 |
44 | ```
45 | aws iam create-policy --policy-name CNIMetricsHelperPolicy \
46 | --description "Grants permission to write metrics to CloudWatch" \
47 | --policy-document file://allow_put_metrics_data.json
48 | ```
49 |
50 | Take note of the policy ARN that is returned\.
51 |
52 | 1. Get the IAM role name for your nodes\. Use the following command to print the `aws-auth` configmap\.
53 |
54 | ```
55 | kubectl -n kube-system describe configmap aws-auth
56 | ```
57 |
58 | Output:
59 |
60 | ```
61 | Name: aws-auth
62 | Namespace: kube-system
63 | Labels:
64 | Annotations:
65 |
66 | Data
67 | ====
68 | mapRoles:
69 | ----
70 | - groups:
71 | - system:bootstrappers
72 | - system:nodes
73 | rolearn: arn:aws:iam::<111122223333>:role/
74 | username: system:node:{{EC2PrivateDNSName}}
75 |
76 | Events:
77 | ```
78 |
79 | Record the role name for any `rolearn` values that have the `system:nodes` group assigned to them\. In the above example output, the role name is the text that follows `role/` in the `rolearn` value\. You should have one value for each node group in your cluster\.
80 |
81 | 1. Attach the new `CNIMetricsHelperPolicy` IAM policy to each of the node IAM roles you identified earlier with the following command\. Replace `<111122223333>` with your AWS account number and `<your-node-role-name>` with one of the node IAM role names that you recorded earlier\.
82 |
83 | ```
84 | aws iam attach-role-policy \
85 | --policy-arn arn:aws:iam::<111122223333>:policy/CNIMetricsHelperPolicy \
86 |     --role-name <your-node-role-name>
87 | ```
88 |
89 | **To deploy the CNI metrics helper**
90 | + Apply the CNI metrics helper manifest\.
91 |
92 | ```
93 | kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.9/config/v1.9/cni-metrics-helper.yaml
94 | ```
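95 | 
96 | You can confirm that the helper is running with the following command\. This assumes the manifest deployed the helper into the `kube-system` namespace, as the release manifest linked above does\.
97 | 
98 | ```
99 | kubectl get deployment cni-metrics-helper -n kube-system
100 | ```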
95 |
96 | ## Creating a metrics dashboard
97 |
98 | After you have deployed the CNI metrics helper, you can view the CNI metrics in the CloudWatch console\. This topic helps you to create a dashboard for viewing your cluster's CNI metrics\.
99 |
100 | **To create a CNI metrics dashboard**
101 |
102 | 1. Open the CloudWatch console at [https://console\.aws\.amazon\.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/)\.
103 |
104 | 1. In the left navigation, choose **Metrics**\.
105 |
106 | 1. Under **Custom Namespaces**, choose **Kubernetes**\.
107 |
108 | 1. Choose **CLUSTER\_ID**\.
109 |
110 | 1. On the **All metrics** tab, select the metrics you want to add to the dashboard\.
111 |
112 | 1. Choose **Actions**, and then **Add to dashboard**\.
113 |
114 | 1. In the **Select a dashboard** section, choose **Create new** and enter a name for your dashboard, such as "EKS\-CNI\-metrics"\.
115 |
116 | 1. In the **Select a widget type** section, choose **Number**\.
117 |
118 | 1. In the **Customize the widget title** section, enter a logical name for your dashboard title, such as "EKS CNI metrics"\.
119 |
120 | 1. Choose **Add to dashboard** to finish\. Now your CNI metrics are added to a dashboard that you can monitor, as shown below\.
121 | ![\[EKS CNI metrics\]](http://docs.aws.amazon.com/eks/latest/userguide/images/EKS_CNI_metrics.png)
--------------------------------------------------------------------------------
/doc_source/using-service-linked-roles-eks.md:
--------------------------------------------------------------------------------
1 | # Using roles for Amazon EKS clusters
2 |
3 | Amazon Elastic Kubernetes Service uses AWS Identity and Access Management \(IAM\) [service\-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role)\. A service\-linked role is a unique type of IAM role that is linked directly to Amazon EKS\. Service\-linked roles are predefined by Amazon EKS and include all the permissions that the service requires to call other AWS services on your behalf\.
4 |
5 | A service\-linked role makes setting up Amazon EKS easier because you don't have to manually add the necessary permissions\. Amazon EKS defines the permissions of its service\-linked roles, and unless defined otherwise, only Amazon EKS can assume its roles\. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity\.
6 |
7 | You can delete a service\-linked role only after first deleting its related resources\. This protects your Amazon EKS resources because you can't inadvertently remove permission to access the resources\.
8 |
9 | For information about other services that support service\-linked roles, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) and look for the services that have **Yes** in the **Service\-linked role** column\. Choose a **Yes** with a link to view the service\-linked role documentation for that service\.
10 |
11 | ## Service\-Linked Role Permissions for Amazon EKS
12 |
13 | Amazon EKS uses the service\-linked role named `AWSServiceRoleForAmazonEKS`\. This role allows Amazon EKS to manage clusters in your account\. The attached policies allow the role to manage the following resources: network interfaces, security groups, logs, and VPCs\.
14 |
15 | **Note**
16 | The `AWSServiceRoleForAmazonEKS` service\-linked role is distinct from the role required for cluster creation\. For more information, see [Amazon EKS cluster IAM role](service_IAM_role.md)\.
17 |
18 | The `AWSServiceRoleForAmazonEKS` service\-linked role trusts the following services to assume the role:
19 | + `eks.amazonaws.com`
20 |
21 | The role permissions policy allows Amazon EKS to complete the following actions on the specified resources:
22 | + [AmazonEKSServiceRolePolicy](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/aws-service-role/AmazonEKSServiceRolePolicy$jsonEditor)
23 |
24 | You must configure permissions to allow an IAM entity \(such as a user, group, or role\) to create, edit, or delete a service\-linked role\. For more information, see [Service\-Linked Role Permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*\.
25 |
26 | ## Creating a Service\-Linked Role for Amazon EKS
27 |
28 | You don't need to manually create a service\-linked role\. When you create a cluster in the AWS Management Console, the AWS CLI, or the AWS API, Amazon EKS creates the service\-linked role for you\.
29 |
30 | If you delete this service\-linked role, and then need to create it again, you can use the same process to recreate the role in your account\. When you create a cluster, Amazon EKS creates the service\-linked role for you again\.
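31 | 
32 | You can also recreate the role explicitly with the AWS CLI instead of waiting for cluster creation to do it\. The following uses the standard IAM command for creating service\-linked roles\.
33 | 
34 | ```
35 | aws iam create-service-linked-role --aws-service-name eks.amazonaws.com
36 | ```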
31 |
32 | ## Editing a service\-linked role for Amazon EKS
33 |
34 | Amazon EKS does not allow you to edit the `AWSServiceRoleForAmazonEKS` service\-linked role\. After you create a service\-linked role, you cannot change the name of the role because various entities might reference the role\. However, you can edit the description of the role using IAM\. For more information, see [Editing a service\-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#edit-service-linked-role) in the *IAM User Guide*\.
35 |
36 | ## Deleting a service\-linked role for Amazon EKS
37 |
38 | If you no longer need to use a feature or service that requires a service\-linked role, we recommend that you delete that role\. That way you don't have an unused entity that is not actively monitored or maintained\. However, you must clean up your service\-linked role before you can manually delete it\.
39 |
40 | ### Cleaning up a service\-linked role
41 |
42 | Before you can use IAM to delete a service\-linked role, you must first delete any resources used by the role\.
43 |
44 | **Note**
45 | If the Amazon EKS service is using the role when you try to delete the resources, then the deletion might fail\. If that happens, wait for a few minutes and try the operation again\.
46 |
47 | **To delete Amazon EKS resources used by the `AWSServiceRoleForAmazonEKS` role**
48 |
49 | 1. Open the Amazon EKS console at [https://console\.aws\.amazon\.com/eks/home\#/clusters](https://console.aws.amazon.com/eks/home#/clusters)\.
50 |
51 | 1. In the left navigation pane, choose **Clusters**\.
52 |
53 | 1. If your cluster has any node groups or Fargate profiles, you must delete them before you can delete the cluster\. For more information, see [Deleting a managed node group](delete-managed-node-group.md) and [Deleting a Fargate profile](fargate-profile.md#delete-fargate-profile)\.
54 |
55 | 1. On the **Clusters** page, choose the cluster that you want to delete and choose **Delete**\.
56 |
57 | 1. Type the name of the cluster in the deletion confirmation window, and then choose **Delete**\.
58 |
59 | 1. Repeat this procedure for any other clusters in your account\. Wait for all of the delete operations to finish\.
60 |
61 | ### Manually delete the service\-linked role
62 |
63 | Use the IAM console, the AWS CLI, or the AWS API to delete the `AWSServiceRoleForAmazonEKS` service\-linked role\. For more information, see [Deleting a service\-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#delete-service-linked-role) in the *IAM User Guide*\.
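64 | 
65 | For example, with the AWS CLI, a command like the following deletes the role after its related resources are cleaned up\.
66 | 
67 | ```
68 | aws iam delete-service-linked-role --role-name AWSServiceRoleForAmazonEKS
69 | ```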
64 |
65 | ## Supported regions for Amazon EKS service\-linked roles
66 |
67 | Amazon EKS supports using service\-linked roles in all of the regions where the service is available\. For more information, see [Amazon EKS Service Endpoints and Quotas](https://docs.aws.amazon.com/general/latest/gr/eks.html)\.
--------------------------------------------------------------------------------
/doc_source/create-node-role.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS node IAM role
2 |
3 | The Amazon EKS node `kubelet` daemon makes calls to AWS APIs on your behalf\. Nodes receive permissions for these API calls through an IAM instance profile and associated policies\. Before you can launch nodes and register them into a cluster, you must create an IAM role for those nodes to use when they are launched\. This requirement applies to nodes launched with the Amazon EKS optimized AMI provided by Amazon, or with any other node AMIs that you intend to use\. Before you create nodes, you must create an IAM role with the following IAM policies:
4 | + [`AmazonEKSWorkerNodePolicy`](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy%24jsonEditor)
5 | + [`AmazonEC2ContainerRegistryReadOnly`](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly%24jsonEditor)
6 |
7 | ## Check for an existing node role
8 |
9 | You can use the following procedure to check and see if your account already has the Amazon EKS node role\.
10 |
11 | **To check for the `eksNodeRole` in the IAM console**
12 |
13 | 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
14 |
15 | 1. In the navigation panel, choose **Roles**\.
16 |
17 | 1. Search the list of roles for `eksNodeRole`\. If a role that contains `eksNodeRole` or `NodeInstanceRole` does not exist, then see [Creating the Amazon EKS node IAM role](#create-worker-node-role) to create the role\. If a role that contains `eksNodeRole` or `NodeInstanceRole` does exist, then select the role to view the attached policies\.
18 |
19 | 1. Choose **Permissions**\.
20 |
21 | 1. Ensure that the **AmazonEKSWorkerNodePolicy** and **AmazonEC2ContainerRegistryReadOnly** managed policies are attached to the role\. If the policies are attached, your Amazon EKS node role is properly configured\.
22 | **Note**
23 | If the **AmazonEKS\_CNI\_Policy** policy is attached to the role, we recommend removing it and attaching it to an IAM role that is mapped to the `aws-node` Kubernetes service account instead\. For more information, see [Configuring the Amazon VPC CNI plugin to use IAM roles for service accounts](cni-iam-role.md)\.
24 |
25 | 1. Choose **Trust Relationships**, **Edit Trust Relationship**\.
26 |
27 | 1. Verify that the trust relationship contains the following policy\. If the trust relationship matches the policy below, choose **Cancel**\. If the trust relationship does not match, copy the policy into the **Policy Document** window and choose **Update Trust Policy**\.
28 |
29 | ```
30 | {
31 | "Version": "2012-10-17",
32 | "Statement": [
33 | {
34 | "Effect": "Allow",
35 | "Principal": {
36 | "Service": "ec2.amazonaws.com"
37 | },
38 | "Action": "sts:AssumeRole"
39 | }
40 | ]
41 | }
42 | ```
43 |
44 | ## Creating the Amazon EKS node IAM role
45 |
46 | You can create the node IAM role with the AWS Management Console or AWS CloudFormation\. Select the tab with the name of the tool that you want to create the role with\.
47 |
48 | ------
49 | #### [ AWS Management Console ]
50 |
51 | **To create your Amazon EKS node role in the IAM console**
52 |
53 | 1. Open the IAM console at [https://console\.aws\.amazon\.com/iam/](https://console.aws.amazon.com/iam/)\.
54 |
55 | 1. Choose **Roles**, then **Create role**\.
56 |
57 | 1. Choose **EC2** from the list of **Common use cases** under **Choose a use case**, then choose **Next: Permissions**\.
58 |
59 | 1. In the **Filter policies** box, enter `AmazonEKSWorkerNodePolicy`\. Check the box to the left of **AmazonEKSWorkerNodePolicy**\.
60 |
61 | 1. In the **Filter policies** box, enter `AmazonEC2ContainerRegistryReadOnly`\. Check the box to the left of **AmazonEC2ContainerRegistryReadOnly**\.
62 |
63 | 1. The **AmazonEKS\_CNI\_Policy** policy must be attached to either this role or to a different role that is mapped to the `aws-node` Kubernetes service account\. We recommend assigning the policy to the role associated to the Kubernetes service account instead of assigning it to this role\. For more information, see [Configuring the Amazon VPC CNI plugin to use IAM roles for service accounts](cni-iam-role.md)\.
64 |
65 | 1. Choose **Next: Tags**\.
66 |
67 | 1. \(Optional\) Add metadata to the role by attaching tags as key–value pairs\. For more information about using tags in IAM, see [Tagging IAM Entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_tags.html) in the *IAM User Guide*\.
68 |
69 | 1. Choose **Next: Review**\.
70 |
71 | 1. For **Role name**, enter a unique name for your role, such as `NodeInstanceRole`\. For **Role description**, replace the current text with descriptive text such as *Amazon EKS \- Node Group Role*, then choose **Create role**\.
72 |
73 | ------
74 | #### [ AWS CloudFormation ]
75 |
76 | **To create your Amazon EKS node role using AWS CloudFormation**
77 |
78 | 1. Open the AWS CloudFormation console at [https://console\.aws\.amazon\.com/cloudformation](https://console.aws.amazon.com/cloudformation/)\.
79 |
80 | 1. Choose **Create stack** and then choose **With new resources \(standard\)**\.
81 |
82 | 1. For **Specify template**, select **Amazon S3 URL**\.
83 |
84 | 1. Paste the following URL into the **Amazon S3 URL** text area and choose **Next** twice:
85 |
86 | ```
87 | https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-10-29/amazon-eks-nodegroup-role.yaml
88 | ```
89 |
90 | 1. On the **Specify stack details** page, for **Stack name** enter a name such as **eks\-node\-group\-instance\-role** and choose **Next**\.
91 |
92 | 1. \(Optional\) On the **Configure stack options** page, you can choose to tag your stack resources\. Choose **Next**\.
93 |
94 | 1. On the **Review** page, check the box in the **Capabilities** section and choose **Create stack**\.
95 |
96 | 1. When your stack is created, select it in the console and choose **Outputs**\.
97 |
98 | 1. Record the **NodeInstanceRole** value for the IAM role that was created\. You need this when you create your node group\.
99 |
100 | 1. \(Optional, but recommended\) One of the IAM policies attached to the role by the AWS CloudFormation template in a previous step is the **AmazonEKS\_CNI\_Policy** managed policy\. The policy must be attached to this role or to a role associated to the Kubernetes `aws-node` service account that is used for the Amazon EKS VPC CNI plugin\. We recommend assigning the policy to the role associated to the Kubernetes service account\. For more information, see [Configuring the Amazon VPC CNI plugin to use IAM roles for service accounts](cni-iam-role.md)\.
101 |
102 | ------
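103 | 
104 | If you manage IAM from the AWS CLI instead, the following sketch creates an equivalent role\. It assumes that you saved the trust policy shown earlier in this topic as `node-role-trust-policy.json` \(an illustrative file name\)\.
105 | 
106 | ```
107 | # Create the role with the node trust policy (file name is illustrative).
108 | aws iam create-role \
109 |   --role-name NodeInstanceRole \
110 |   --assume-role-policy-document file://node-role-trust-policy.json
111 | 
112 | # Attach the two required managed policies.
113 | aws iam attach-role-policy \
114 |   --role-name NodeInstanceRole \
115 |   --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
116 | 
117 | aws iam attach-role-policy \
118 |   --role-name NodeInstanceRole \
119 |   --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
120 | ```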
--------------------------------------------------------------------------------
/doc_source/network_reqs.md:
--------------------------------------------------------------------------------
1 | # Cluster VPC considerations
2 |
3 | Amazon EKS recommends running a cluster in a VPC with public and private subnets so that Kubernetes can create public load balancers in the public subnets that load balance traffic to pods running on nodes that are in private subnets\. This configuration is not required, however\. You can run a cluster in a VPC with only private or only public subnets, depending on your networking and security requirements\. For more information about clusters deployed to a VPC with only private subnets, see [Private clusters](private-clusters.md)\.
4 |
5 | When you create an Amazon EKS cluster, you specify the VPC subnets where Amazon EKS can place Elastic Network Interfaces\. Amazon EKS requires subnets in at least two Availability Zones, and creates up to four network interfaces across these subnets to facilitate control plane communication to your nodes\. This communication channel supports Kubernetes functionality such as `kubectl exec` and `kubectl logs`\. The Amazon EKS created [cluster security group](sec-group-reqs.md#cluster-sg) and any additional security groups that you specify when you create your cluster are applied to these network interfaces\. Each Amazon EKS created network interface has `Amazon EKS <cluster name>` in its description\.
6 |
7 | Make sure that the subnets that you specify during cluster creation have enough available IP addresses for the Amazon EKS created network interfaces\. We recommend creating small \(`/28`\), dedicated subnets for Amazon EKS created network interfaces, and only specifying these subnets as part of cluster creation\. Other resources, such as nodes and load balancers, should be launched in separate subnets from the subnets specified during cluster creation\.
8 |
9 | **Important**
10 | Nodes and load balancers can be launched in any subnet in your cluster’s VPC, including subnets not registered with Amazon EKS during cluster creation\. Subnets do not require any tags for nodes\. For Kubernetes load balancing auto discovery to work, subnets must be tagged as described in [Subnet tagging](#vpc-subnet-tagging)\.
11 | Subnets associated with your cluster cannot be changed after cluster creation\. If you need to control exactly which subnets the Amazon EKS created network interfaces are placed in, then specify only two subnets during cluster creation, each in a different Availability Zone\.
12 | Do not select a subnet in AWS Outposts, AWS Wavelength, or an AWS Local Zone when creating your cluster\.
13 | Clusters created using v1\.14 or earlier contain a `kubernetes.io/cluster/<cluster-name>` tag on your VPC\. This tag was only used by Amazon EKS and can be safely removed\.
14 |
15 | Your VPC must have DNS hostname and DNS resolution support, or your nodes can't register with your cluster\. For more information, see [Using DNS with Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the Amazon VPC User Guide\.
16 |
17 | ## VPC IP addressing
18 |
19 | Nodes must be able to communicate with the control plane and other AWS services\. If your nodes are deployed in a private subnet, then the subnet must meet one of the following requirements:
20 | + Has a default route to a [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html)\. The NAT gateway must be assigned a public IP address to provide internet access for the nodes\.
21 | + Is configured with the necessary settings and requirements in [Private clusters](private-clusters.md)\.
22 |
23 | If self\-managed nodes are deployed to a public subnet, the subnet must be configured to auto\-assign public IP addresses\. Otherwise, your node instances must be assigned a public IP address when they're launched\. For more information, see [Assigning a public IPv4 address during instance launch](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#vpc-public-ip) in the Amazon VPC User Guide\. If managed nodes are deployed to a public subnet, the subnet must be configured to auto\-assign public IP addresses\. If the subnet is not configured to auto\-assign public IP addresses, then the nodes aren't assigned a public IP address\. Determine whether your public subnets are configured to auto\-assign public IP addresses with the following command\. Replace `<vpc-id>` \(including `<>`\) with your own value\.
24 |
25 | ```
26 | aws ec2 describe-subnets \
27 |     --filters "Name=vpc-id,Values=<vpc-id>" | grep 'SubnetId\|MapPublicIpOnLaunch'
28 | ```
29 |
30 | Output
31 |
32 | ```
33 | "MapPublicIpOnLaunch": ,"SubnetId": "","MapPublicIpOnLaunch": ,"SubnetId": "",
34 | ```
35 |
36 | For any subnets that have `MapPublicIpOnLaunch` set to `false`, change the setting to `true`\.
37 |
38 | ```
39 | aws ec2 modify-subnet-attribute --map-public-ip-on-launch --subnet-id <subnet-id>
40 | ```
41 |
42 | **Important**
43 | If you used an [Amazon EKS AWS CloudFormation template](create-public-private-vpc.md) to deploy your VPC before March 26, 2020, then you need to change the setting for your public subnets\.
44 | You can define both private \(RFC 1918\) and public \(non\-RFC 1918\) classless inter\-domain routing \(CIDR\) ranges within the VPC used for your Amazon EKS cluster\. For more information, see [Adding IPv4 CIDR blocks to a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html#vpc-resize) in the Amazon VPC User Guide\. When choosing the CIDR blocks for your VPC and subnets, make sure that the blocks contain enough IP addresses for all of the Amazon EC2 nodes and pods that you plan to deploy\. There should be at least one IP address for each of your pods\. You can conserve IP address use by implementing a transit gateway with a shared services VPC\. For more information, see [Isolated VPCs with shared services](https://docs.aws.amazon.com/vpc/latest/tgw/transit-gateway-isolated-shared.html) and [Amazon EKS VPC routable IP address conservation patterns in a hybrid network](http://aws.amazon.com/blogs/containers/eks-vpc-routable-ip-address-conservation/)\.
45 |
46 | ## Subnet tagging
47 |
48 | For 1\.18 and earlier clusters, Amazon EKS adds the following tag to all subnets passed in during cluster creation\. Amazon EKS does not add the tag to subnets passed in when creating 1\.19 clusters\. If the tag exists on subnets used by a cluster created on a version earlier than 1\.19, and you update the cluster to 1\.19, the tag is not removed from the subnets\.
49 | + **Key** – `kubernetes.io/cluster/<cluster-name>`
50 | + **Value** – `shared`
51 |
52 | You can optionally use this tag to control where Elastic Load Balancers are provisioned, in addition to the required subnet tags for using automatically provisioned Elastic Load Balancers\. For more information about load balancer subnet tagging, see [Application load balancing on Amazon EKS](alb-ingress.md) and [Network load balancing on Amazon EKS](network-load-balancing.md)\.
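53 | 
54 | If you need to add this tag to a subnet yourself, a command like the following works\. Replace `<subnet-id>` and `<cluster-name>` with your own values\.
55 | 
56 | ```
57 | aws ec2 create-tags \
58 |   --resources <subnet-id> \
59 |   --tags Key=kubernetes.io/cluster/<cluster-name>,Value=shared
60 | ```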
--------------------------------------------------------------------------------
/doc_source/eks-networking.md:
--------------------------------------------------------------------------------
1 | # Amazon EKS networking
2 |
3 | This chapter provides an overview of Amazon EKS networking\. The following diagram shows key components of an Amazon EKS cluster, and the relationship of these components to a VPC\.
4 |
5 | ![\[EKS networking\]](http://docs.aws.amazon.com/eks/latest/userguide/images/networking-overview.png)
6 |
7 | The following explanations help you understand how the components of the diagram relate to each other, and which topics in this guide and other AWS guides you can reference for more information\.
8 | + **Amazon VPC and subnets** – All Amazon EKS resources are deployed to one Region in an existing subnet in an existing VPC\. For more information, see [VPCs and subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html) in the Amazon VPC User Guide\. Each subnet exists in one Availability Zone\. The VPC and subnets must meet requirements such as the following:
9 | + VPCs and subnets must be tagged appropriately, so that Kubernetes knows that it can use them for deploying resources, such as load balancers\. For more information, see [Subnet tagging](network_reqs.md#vpc-subnet-tagging)\. If you deploy the VPC using an Amazon EKS provided [AWS CloudFormation template](create-public-private-vpc.md#create-vpc) or using `eksctl`, then the VPC and subnets are tagged appropriately for you\.
10 | + A subnet may or may not have internet access\. If a subnet does not have internet access, the pods deployed within it must be able to access other AWS services, such as Amazon ECR, to pull container images\. For more information about using subnets that don't have internet access, see [Private clusters](private-clusters.md)\.
11 | + Any public subnets that you use must be configured to auto\-assign public IP addresses for Amazon EC2 instances launched within them\. For more information, see [VPC IP addressing](network_reqs.md#vpc-cidr)\.
12 | + The nodes and control plane must be able to communicate over all ports through appropriately tagged [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)\. For more information, see [Amazon EKS security group considerations](sec-group-reqs.md)\.
13 | + You can implement a network segmentation and tenant isolation network policy\. Network policies are similar to AWS security groups in that you can create network ingress and egress rules\. Instead of assigning instances to a security group, you assign network policies to pods using pod selectors and labels\. For more information, see [Installing Calico on Amazon EKS](calico.md)\.
14 |
15 | You can deploy a VPC and subnets that meet the Amazon EKS requirements through manual configuration, or by deploying the VPC and subnets using [`eksctl`](eksctl.md), or an Amazon EKS provided AWS CloudFormation template\. Both `eksctl` and the AWS CloudFormation template create the VPC and subnets with the required configuration\. For more information, see [Creating a VPC for your Amazon EKS cluster](create-public-private-vpc.md#create-vpc)\.
16 | + **Amazon EKS control plane** – Deployed and managed by Amazon EKS in an Amazon EKS managed VPC\. When you create the cluster, Amazon EKS creates and manages network interfaces in your account that have `Amazon EKS <cluster name>` in their description\. These network interfaces allow AWS Fargate and Amazon EC2 instances to communicate with the control plane\.
17 |
18 | By default, the control plane exposes a public endpoint so that clients and nodes can communicate with the cluster\. You can limit the internet client source IP addresses that can communicate with the public endpoint\. Alternatively, you can enable a private endpoint and disable the public endpoint or enable both the public and private endpoints\. To learn more about cluster endpoints, see [Amazon EKS cluster endpoint access control](cluster-endpoint.md)\.
19 |
20 | Clients in your on\-premises network or other VPCs can communicate with the public or private\-only endpoint, if you've configured connectivity between the VPC that the cluster is deployed to and the other networks\. For more information about connecting your VPC to other networks, see the AWS [Network\-to\-Amazon VPC connectivity options](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/network-to-amazon-vpc-connectivity-options.html) and [Amazon VPC\-to\-Amazon VPC connectivity options](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/amazon-vpc-to-amazon-vpc-connectivity-options.html) technical papers\.
21 | + **Amazon EC2 instances** – Each Amazon EC2 node is deployed to one subnet\. Each node is assigned a [private IP address](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-private-addresses) from a CIDR block assigned to the subnet\. If the subnets were created using one of the [Amazon EKS provided AWS CloudFormation templates](create-public-private-vpc.md#create-vpc), then nodes deployed to public subnets are automatically assigned a [public IP address](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-public-addresses) by the subnet\. Each node is deployed with the [Pod networking \(CNI\)](pod-networking.md) plugin, which, by default, assigns each pod a private IP address from the CIDR block assigned to the subnet that the node is in and adds the IP address as a secondary IP address to one of the [network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) attached to the instance\. This AWS resource is referred to as a *network interface* in the AWS Management Console and the Amazon EC2 API\. Therefore, we use "network interface" in this documentation instead of "elastic network interface"\. The term "network interface" in this documentation always means "elastic network interface"\.
22 |
23 | You can change this behavior by assigning additional CIDR blocks to your VPC and enabling [CNI custom networking](cni-custom-network.md), which assigns IP addresses to pods from different subnets than the node is deployed to\. To use custom networking, you must enable it when you launch your nodes\. You can also associate unique security groups with some of the pods running on many Amazon EC2 instance types\. For more information, see [Security groups for pods](security-groups-for-pods.md)\.
24 |
25 | By default, the source IP address of each pod that communicates with resources outside of the VPC is translated through network address translation \(NAT\) to the primary IP address of the primary network interface attached to the node\. You can change this behavior to instead have a NAT device in a private subnet translate each pod's IP address to the NAT device's IP address\. For more information, see [External source network address translation \(SNAT\)](external-snat.md)\.
26 | + **Fargate pods** – Deployed to private subnets only\. Each pod is assigned a private IP address from the CIDR block assigned to the subnet\. Fargate does not support all pod networking options\. For more information, see [AWS Fargate considerations](fargate.md#fargate-considerations)\.
--------------------------------------------------------------------------------
/doc_source/view-nodes.md:
--------------------------------------------------------------------------------
1 | # View nodes
2 |
3 | The Amazon EKS console shows information about all of your cluster's nodes, including Amazon EKS managed nodes, self\-managed nodes, and Fargate\. Nodes represent the compute resources provisioned for your cluster from the perspective of the Kubernetes API\. For more information, see [Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/) in the Kubernetes documentation\. To learn more about the different types of Amazon EKS nodes that you can deploy your [workloads](view-workloads.md) to, see [Amazon EKS nodes](eks-compute.md)\.
4 |
5 | **Prerequisites**
6 |
7 | The IAM user or IAM role that you sign into the AWS Management Console with must meet the following requirements\.
8 | + Have the `eks:AccessKubernetesApi` permission, and other necessary IAM permissions to view nodes, attached to it\. For an example IAM policy, see [View nodes and workloads for all clusters in the AWS Management Console](security_iam_id-based-policy-examples.md#policy_example3)\.
9 | + Is mapped to a Kubernetes user or group in the `aws-auth` `configmap`\. For more information, see [Managing users or IAM roles for your cluster](add-user-role.md)\.
10 | + The Kubernetes user or group that the IAM user or role is mapped to in the configmap must be bound to a Kubernetes `role` or `clusterrole` that has permissions to view the resources in the namespaces that you want to view\. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation\. You can download the following example manifests that create a `clusterrole` and `clusterrolebinding` or a `role` and `rolebinding`:
11 | + **View Kubernetes resources in all namespaces** – The group name in the file is `eks-console-dashboard-full-access-group`, which is the group that your IAM user or role needs to be mapped to in the `aws-auth` configmap\. You can change the name of the group before applying it to your cluster, if desired, and then map your IAM user or role to that group in the configmap\. To download the file, select the appropriate link for the Region that your cluster is in\.
12 | + [All Regions other than Beijing and Ningxia China](https://amazon-eks.s3.us-west-2.amazonaws.com/docs/eks-console-full-access.yaml)
13 | + [Beijing and Ningxia China Regions](https://amazon-eks.s3.cn-north-1.amazonaws.com.cn/docs/eks-console-full-access.yaml)
14 | + **View Kubernetes resources in a specific namespace** – The namespace in this file is `default`, so if you want to specify a different namespace, edit the file before applying it to your cluster\. The group name in the file is `eks-console-dashboard-restricted-access-group`, which is the group that your IAM user or role needs to be mapped to in the `aws-auth` configmap\. You can change the name of the group before applying it to your cluster, if desired, and then map your IAM user or role to that group in the configmap\. To download the file, select the appropriate link for the Region that your cluster is in\.
15 | + [All Regions other than Beijing and Ningxia China](https://amazon-eks.s3.us-west-2.amazonaws.com/docs/eks-console-restricted-access.yaml)
16 | + [Beijing and Ningxia China Regions](https://amazon-eks.s3.cn-north-1.amazonaws.com.cn/docs/eks-console-restricted-access.yaml)
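17 | 
18 | After you download and, if necessary, edit one of these files, you can apply it to your cluster with `kubectl`\. The following example assumes that you downloaded the full\-access file and kept its default file name\.
19 | 
20 | ```
21 | kubectl apply -f eks-console-full-access.yaml
22 | ```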
17 |
18 | **To view nodes using the AWS Management Console**
19 |
20 | 1. Open the Amazon EKS console at [https://console\.aws\.amazon\.com/eks/home\#/clusters](https://console.aws.amazon.com/eks/home#/clusters)\.
21 |
22 | 1. In the left navigation panel, select **Clusters**, and then in the **Clusters** list, select the cluster that you want to view compute resources for\.
23 |
24 | 1. On the **Overview** tab, you see a list of all compute **Nodes** for your cluster and the nodes' status\.
25 | **Important**
26 | If you can't see any **Nodes** on the **Overview** tab, or you see a **Your current user or role does not have access to Kubernetes objects on this EKS cluster** error, see the prerequisites for this topic\. If you don't resolve the issue, you can still view and manage your Amazon EKS cluster on the **Configuration** tab, but you won't see self\-managed nodes or some of the information that you see for managed nodes and Fargate under **Nodes**\.
27 | **Note**
28 | Each pod that runs on Fargate is registered as a separate Kubernetes node within the cluster\. This is because Fargate runs each pod in an isolated compute environment and independently connects to the cluster control plane\. For more information, see [AWS Fargate](fargate.md)\.
29 |
30 | 1. In the **Nodes** list, you see a list of all of the managed, self\-managed, and Fargate nodes for your cluster\. Selecting the link for one of the nodes provides the following information about the node:
31 | + The Amazon EC2 **Instance type**, **Kernel version**, **Kubelet version**, **Container runtime**, **OS**, and **OS image** for managed and self\-managed nodes\.
32 | + Deep links to the Amazon EC2 console and the Amazon EKS managed node group \(if applicable\) for the node\.
33 | + The **Resource allocation**, which shows baseline and allocatable capacity for the node\.
34 | + **Conditions** describe the current operational status of the node\. This is useful information for troubleshooting issues on the node\.
35 |
36 | Conditions are reported to the Kubernetes control plane by `kubelet`, the Kubernetes node agent that runs locally on each node\. For more information, see [kubelet](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/) in the Kubernetes documentation\. Conditions are always reported as part of the node detail, and the **Status** of each condition, along with its **Message**, indicates the health of the node for that condition \(a command\-line sketch for inspecting conditions follows this list\)\. The following common conditions are reported for a node:
37 | + **Ready** – This condition is **TRUE** if the node is healthy and can accept pods\. The condition is **FALSE** if the node is not ready and will not accept pods\. **UNKNOWN** indicates that the Kubernetes control plane has not recently received a heartbeat signal from the node\. The heartbeat timeout period is set to the Kubernetes default of 40 seconds for Amazon EKS clusters\.
38 | + **Memory pressure** – This condition is **FALSE** under normal operation and **TRUE** if node memory is low\.
39 | + **Disk pressure** – This condition is **FALSE** under normal operation and **TRUE** if disk capacity for the node is low\.
40 | + **PID pressure** – This condition is **FALSE** under normal operation and **TRUE** if there are too many processes running on the node\. On the node, each container runs as a process with a unique *Process ID*, or PID\.
41 | + **NetworkUnavailable** – This condition is **FALSE**, or not present, under normal operation\. If **TRUE**, the network for the node is not properly configured\.
42 | + The Kubernetes **Labels** and **Annotations** assigned to the node\. These could have been assigned by you, by Kubernetes, or by the Amazon EKS API when the node was created\. These values can be used by your workloads for scheduling pods\.
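43 |
44 | As a complement to the console view, you can inspect the same node details from the command line\. The following is a minimal sketch, assuming that `kubectl` is configured for your cluster; the node name is a placeholder\.
45 |
46 | ```bash
47 | # List all nodes, including the one node that's registered for each Fargate pod.
48 | kubectl get nodes -o wide
49 |
50 | # List only Fargate nodes by filtering on the compute-type label that
51 | # Amazon EKS applies to Fargate nodes.
52 | kubectl get nodes -l eks.amazonaws.com/compute-type=fargate
53 |
54 | # Print the type and status of each condition for a single node.
55 | kubectl get node my-node-name \
56 |     -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\n"}{end}'
57 | ```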
--------------------------------------------------------------------------------
/doc_source/using-service-linked-roles-eks-nodegroups.md:
--------------------------------------------------------------------------------
1 | # Using roles for Amazon EKS node groups
2 |
3 | Amazon EKS uses AWS Identity and Access Management \(IAM\) [service\-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role)\. A service\-linked role is a unique type of IAM role that is linked directly to Amazon EKS\. Service\-linked roles are predefined by Amazon EKS and include all the permissions that the service requires to call other AWS services on your behalf\.
4 |
5 | A service\-linked role makes setting up Amazon EKS easier because you don't have to manually add the necessary permissions\. Amazon EKS defines the permissions of its service\-linked roles, and unless defined otherwise, only Amazon EKS can assume its roles\. The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity\.
6 |
7 | You can delete a service\-linked role only after first deleting its related resources\. This protects your Amazon EKS resources because you can't inadvertently remove permission to access them\.
8 |
9 | For information about other services that support service\-linked roles, see [AWS services that work with IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-services-that-work-with-iam.html) and look for the services that have **Yes** in the **Service\-linked role** column\. Choose a **Yes** with a link to view the service\-linked role documentation for that service\.
10 |
11 | ## Service\-linked role permissions for Amazon EKS
12 |
13 | Amazon EKS uses the service\-linked role named `AWSServiceRoleForAmazonEKSNodegroup`\. This role allows Amazon EKS to manage node groups in your account\. The attached policies allow the role to manage the following resources: Auto Scaling groups, security groups, launch templates, and IAM instance profiles\.
14 |
15 | The `AWSServiceRoleForAmazonEKSNodegroup` service\-linked role trusts the following services to assume the role:
16 | + `eks-nodegroup.amazonaws.com`
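17 |
18 | This trust relationship corresponds to a trust policy like the following\. This is a sketch based on the standard shape of service\-linked role trust policies, shown for illustration; you can view the actual document on the role in the IAM console\.
19 |
20 | ```json
21 | {
22 |   "Version": "2012-10-17",
23 |   "Statement": [
24 |     {
25 |       "Effect": "Allow",
26 |       "Principal": { "Service": "eks-nodegroup.amazonaws.com" },
27 |       "Action": "sts:AssumeRole"
28 |     }
29 |   ]
30 | }
31 | ```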
17 |
18 | The role permissions policy allows Amazon EKS to complete the following actions on the specified resources:
19 | + [AWSServiceRoleForAmazonEKSNodegroup](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AWSServiceRoleForAmazonEKSNodegroup%24jsonEditor)
20 |
21 | You must configure permissions to allow an IAM entity \(such as a user, group, or role\) to create, edit, or delete a service\-linked role\. For more information, see [Service\-Linked Role Permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#service-linked-role-permissions) in the *IAM User Guide*\.
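22 |
23 | For example, a statement like the following allows an entity to create this specific service\-linked role\. This is a sketch based on the common IAM pattern for granting `iam:CreateServiceLinkedRole`; adjust it to your own requirements\.
24 |
25 | ```json
26 | {
27 |   "Version": "2012-10-17",
28 |   "Statement": [
29 |     {
30 |       "Effect": "Allow",
31 |       "Action": "iam:CreateServiceLinkedRole",
32 |       "Resource": "arn:aws:iam::*:role/aws-service-role/eks-nodegroup.amazonaws.com/AWSServiceRoleForAmazonEKSNodegroup",
33 |       "Condition": {
34 |         "StringLike": { "iam:AWSServiceName": "eks-nodegroup.amazonaws.com" }
35 |       }
36 |     }
37 |   ]
38 | }
39 | ```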
22 |
23 | ## Creating a service\-linked role for Amazon EKS
24 |
25 | You don't need to manually create a service\-linked role\. When you create a managed node group \(the `CreateNodegroup` API operation\) in the AWS Management Console, the AWS CLI, or the AWS API, Amazon EKS creates the service\-linked role for you\.
26 |
27 | **Important**
28 | This service\-linked role can appear in your account if you completed an action in another service that uses the features supported by this role\. If you were using the Amazon EKS service before January 1, 2017, when it began supporting service\-linked roles, then Amazon EKS created the `AWSServiceRoleForAmazonEKSNodegroup` role in your account\. To learn more, see [A New Role Appeared in My IAM Account](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_roles.html#troubleshoot_roles_new-role-appeared)\.
29 |
30 | ### Creating a service\-linked role in Amazon EKS \(AWS API\)
31 |
32 | You don't need to manually create a service\-linked role\. When you create a managed node group in the AWS Management Console, the AWS CLI, or the AWS API, Amazon EKS creates the service\-linked role for you\.
33 |
34 | If you delete this service\-linked role and then need to create it again, you can use the same process to recreate the role in your account\. When you create another managed node group, Amazon EKS creates the service\-linked role for you again\.
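35 |
36 | If you need to recreate the role without creating a node group, you can also create it directly in IAM\. The following is a sketch using the AWS CLI:
37 |
38 | ```bash
39 | # Create the service-linked role for Amazon EKS node groups directly in IAM.
40 | aws iam create-service-linked-role \
41 |     --aws-service-name eks-nodegroup.amazonaws.com
42 | ```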
35 |
36 | ## Editing a service\-linked role for Amazon EKS
37 |
38 | Amazon EKS does not allow you to edit the `AWSServiceRoleForAmazonEKSNodegroup` service\-linked role\. After you create a service\-linked role, you cannot change the name of the role because various entities might reference the role\. However, you can edit the description of the role using IAM\. For more information, see [Editing a service\-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#edit-service-linked-role) in the *IAM User Guide*\.
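39 |
40 | For example, you can update the description from the AWS CLI\. The following is a sketch; the description text is a placeholder:
41 |
42 | ```bash
43 | # Only the description of a service-linked role can be edited, not its name or policies.
44 | aws iam update-role-description \
45 |     --role-name AWSServiceRoleForAmazonEKSNodegroup \
46 |     --description "Allows Amazon EKS to manage node groups in this account"
47 | ```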
39 |
40 | ## Deleting a service\-linked role for Amazon EKS
41 |
42 | If you no longer need to use a feature or service that requires a service\-linked role, we recommend that you delete that role\. That way you don't have an unused entity that is not actively monitored or maintained\. However, you must clean up your service\-linked role before you can manually delete it\.
43 |
44 | ### Cleaning up a service\-linked role
45 |
46 | Before you can use IAM to delete a service\-linked role, you must first delete any resources used by the role\.
47 |
48 | **Note**
49 | If the Amazon EKS service is using the role when you try to delete the resources, then the deletion might fail\. If that happens, wait for a few minutes and try the operation again\.
50 |
51 | **To delete Amazon EKS resources used by the `AWSServiceRoleForAmazonEKSNodegroup` role** \(an AWS CLI sketch of the same cleanup follows these steps\)
52 |
53 | 1. Open the Amazon EKS console at [https://console\.aws\.amazon\.com/eks/home\#/clusters](https://console.aws.amazon.com/eks/home#/clusters)\.
54 |
55 | 1. In the left navigation pane, choose **Clusters**\.
56 |
57 | 1. Choose the **Configuration** tab, and then choose the **Compute** tab\.
58 |
59 | 1. In the **Node Groups** section, choose the node group to delete\.
60 |
61 | 1. Type the name of the node group in the deletion confirmation window, and then choose **Delete**\.
62 |
63 | 1. Repeat this procedure for any other node groups in the cluster\. Wait for all of the delete operations to finish\.
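64 |
65 | The following is an equivalent sketch using the AWS CLI; the cluster and node group names are placeholders:
66 |
67 | ```bash
68 | # List the node groups in the cluster.
69 | aws eks list-nodegroups --cluster-name my-cluster
70 |
71 | # Delete a node group and wait for the deletion to finish.
72 | aws eks delete-nodegroup --cluster-name my-cluster --nodegroup-name my-nodegroup
73 | aws eks wait nodegroup-deleted --cluster-name my-cluster --nodegroup-name my-nodegroup
74 | ```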
64 |
65 | ### Manually delete the service\-linked role
66 |
67 | Use the IAM console, the AWS CLI, or the AWS API to delete the `AWSServiceRoleForAmazonEKSNodegroup` service\-linked role\. For more information, see [Deleting a service\-linked role](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#delete-service-linked-role) in the *IAM User Guide*\.
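68 |
69 | The following is a sketch using the AWS CLI\. Deletion is asynchronous; the second command checks progress by using the deletion task ID that the first command returns\.
70 |
71 | ```bash
72 | # Submit the deletion task for the service-linked role.
73 | aws iam delete-service-linked-role \
74 |     --role-name AWSServiceRoleForAmazonEKSNodegroup
75 |
76 | # Check the status of the deletion task by using the DeletionTaskId
77 | # value that the previous command returned (placeholder shown here).
78 | aws iam get-service-linked-role-deletion-status \
79 |     --deletion-task-id <deletion-task-id-from-previous-output>
80 | ```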
68 |
69 | ## Supported regions for Amazon EKS service\-linked roles
70 |
71 | Amazon EKS supports using service\-linked roles in all of the AWS Regions where the service is available\. For more information, see [Amazon EKS Service Endpoints and Quotas](https://docs.aws.amazon.com/general/latest/gr/eks.html)\.
--------------------------------------------------------------------------------