Amazon EKS 0.67.1 (last updated in version 0.67.0)

EKS Cluster Control Plane Module


This Terraform Module launches an Amazon Elastic Kubernetes Service (EKS) cluster.

This module is responsible for the EKS Control Plane in the EKS cluster topology. You must launch worker nodes to schedule Pods on your cluster. See the eks-cluster-workers module for managing EKS worker nodes.

What is the EKS Control Plane?

The EKS Control Plane is a service managed entirely by AWS. It contains the resources and endpoint used to run and access the Kubernetes master components. The resources are deployed into your VPC so that they inherit the network rules you configure for your VPC.

Specifically, the control plane consists of:

  • etcd: A distributed key-value store that Kubernetes uses to hold cluster metadata and state.
  • kube-apiserver: The web service that exposes the Kubernetes API. This is the main entrypoint for interacting with the Kubernetes cluster.
  • kube-scheduler: This component watches for newly created Pods on the cluster and schedules them onto the available worker nodes.
  • kube-controller-manager: This component runs the controller logic. Controllers are responsible for managing the Pods on the cluster. For example, you can use a Deployment controller to ensure that a specified number of replicas of a Pod are running on the cluster.
  • cloud-controller-manager: This component manages the cloud provider resources that Kubernetes depends on, such as load balancers.

You can read more about the different components of EKS in the project README.

What security group rules are created?

This module will create a security group for the EKS cluster master nodes to allow them to function as a Kubernetes cluster. The rules are based on the recommendations provided by AWS for configuring an EKS cluster.

How do you add additional security group rules?

To add additional security group rules to the EKS cluster master nodes, you can use the aws_security_group_rule resource, and set its security_group_id argument to the Terraform output of this module called eks_control_plane_security_group_id. For example, here is how you can allow incoming HTTPS requests on port 443 to the master nodes from an additional security group that is not the workers:

module "eks_cluster" {
# (arguments omitted)
}

resource "aws_security_group_rule" "allow_inbound_http_from_anywhere" {
type = "ingress"
from_port = 443
to_port = 443
protocol = "tcp"

security_group_id = module.eks_cluster.eks_control_plane_security_group_id
source_security_group_id = var.source_aws_security_group_id
}

What IAM policies are attached to the EKS Cluster?

This module will create IAM roles for the EKS cluster master nodes with the minimum set of policies necessary for the cluster to function as a Kubernetes cluster. The policies attached to the roles are the same as those documented in the AWS getting started guide for EKS.

How do you add additional IAM policies?

To add additional IAM policies to the EKS cluster master nodes, you can use the aws_iam_role_policy or aws_iam_policy_attachment resources, and set the IAM role to the Terraform output of this module called eks_control_plane_iam_role_name. For example, here is how you can allow the master nodes in this cluster to access an S3 bucket:

module "eks_cluster" {
# (arguments omitted)
}

resource "aws_iam_role_policy" "access_s3_bucket" {
name = "access_s3_bucket"
role = module.eks_cluster.eks_control_plane_iam_role_name
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect":"Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::examplebucket/*"
}
]
}
EOF
}

How do I associate IAM roles to the Pods?

NOTE: This configuration depends on kubergrunt, minimum version 0.5.3

This module will set up the OpenID Connect Provider that can be used with the IAM Roles for Service Accounts feature. When this feature is enabled, you can exchange the Kubernetes Service Account Tokens for IAM role credentials using the sts:AssumeRoleWithWebIdentity AWS API in the STS service.

To allow Kubernetes Service Accounts to assume the roles, you need to grant the proper assume role IAM policies to the role that is being assumed. Specifically, you need to:

  • Allow the OpenID Connect Provider to assume the role.
  • Specify any conditions on assuming the role. You can restrict by:
    • Service Accounts that can assume the role
    • Which Namespaces have full access to assume the role (meaning, all Service Accounts in the Namespace can assume that role).

You can use the eks-iam-role-assume-role-policy-for-service-account module to construct the policy using a more convenient interface. Refer to the module documentation for more info.
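For example, here is a minimal sketch that constructs the assume role policy with raw Terraform resources instead of the convenience module. It assumes this module exposes the OIDC provider through outputs named eks_openid_connect_provider_arn and eks_openid_connect_provider_url (check the module's outputs for the exact names), and uses a hypothetical my-app Namespace and Service Account:

# Allow the "my-app" Service Account in the "my-app" Namespace (hypothetical names)
# to assume this role via the cluster's OIDC provider.
data "aws_iam_policy_document" "allow_service_account" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [module.eks_cluster.eks_openid_connect_provider_arn]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(module.eks_cluster.eks_openid_connect_provider_url, "https://", "")}:sub"
      values   = ["system:serviceaccount:my-app:my-app"]
    }
  }
}

resource "aws_iam_role" "my_app" {
  name               = "my-app-irsa-role"
  assume_role_policy = data.aws_iam_policy_document.allow_service_account.json
}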

Once you have an IAM Role that can be assumed by the Kubernetes Service Account, you can configure your Pods to exchange them for IAM role credentials. EKS will automatically configure the correct environment variables that the SDK expects on the Pods when you annotate the associated Service Account with the role it should assume.

The following shows an example Kubernetes manifest that configures the Service Account to assume the IAM role arn:aws:iam::123456789012:role/myrole:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: myserviceaccount  # the Service Account your Pods reference
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/myrole

Note that the AWS SDK will automatically assume the role if you are using a compatible version. The following is a list of the minimum SDK version for various platforms that support the AWS_WEB_IDENTITY_TOKEN_FILE environment variable used by IRSA:

Platform   Minimum SDK version
Java       1.11.623
Java 2     2.7.36
Go         1.23.13
Python     1.9.220
Node       2.521.0
Ruby       2.11.345
PHP        3.110.7
.NET       3.3.580.0

How do I SSH into the nodes?

By design, AWS does not allow you to SSH into the master nodes of an EKS cluster.

API Access and Networking

By default this module enables both the Public Kubernetes API Endpoint and the Private Kubernetes API VPC Endpoint. The public endpoint is used for network requests originating from outside the VPC, while requests originating from within the VPC (including worker nodes) use the private VPC endpoint.

To restrict access to the public endpoint, you can use the endpoint_public_access_cidrs input variable. When set, only requests originating from the list of CIDR blocks will be allowed access from outside the VPC.

To restrict access to the private VPC endpoint, you can use the endpoint_private_access_cidrs and endpoint_private_access_security_group_ids input variables. When set, requests originating from within the VPC and from the list of CIDRs/Security Group IDs will be allowed access.

Note that even if an IP is allowed access to the public endpoint via the endpoint_public_access_cidrs variable, if that IP originates from within the VPC of the EKS cluster, the request will not be allowed unless that IP is also allowed to access the private endpoint. That is, setting endpoint_public_access_cidrs = ["0.0.0.0/0"] will not automatically allow access to the Kubernetes API from within the VPC. You must configure endpoint_private_access_cidrs or endpoint_private_access_security_group_ids to allow requests originating from within the VPC.

The public endpoint makes operations easier when configuring the EKS cluster control plane. However, for added security, you can disable the public endpoint by setting the endpoint_public_access input variable to false.
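For example, here is a sketch of how these variables fit together (the CIDR blocks are placeholder values):

module "eks_cluster" {
  # (other arguments omitted)

  # Only allow access to the public endpoint from a corporate network (placeholder CIDR).
  endpoint_public_access_cidrs = ["203.0.113.0/24"]

  # Additionally allow requests to the private endpoint from this CIDR inside the VPC.
  endpoint_private_access_cidrs = ["10.0.0.0/16"]

  # Or disable the public endpoint entirely for a private-only cluster.
  # endpoint_public_access = false
}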

Control Plane Logging

EKS supports exporting various logs to CloudWatch. By default, none of the logging options are enabled by this module. To enable logs, you can pass in the relevant type strings to the enabled_cluster_log_types input variable. For example, to enable API server and audit logs, you can pass in the list ["api", "audit"]. See the official documentation for a list of available log types.
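For example, here is a minimal sketch that enables API server and audit logs and manages the log group's retention through this module:

module "eks_cluster" {
  # (other arguments omitted)

  # Stream API server and audit logs to CloudWatch.
  enabled_cluster_log_types = ["api", "audit"]

  # Precreate the log group so its retention (in days) can be customized.
  should_create_cloudwatch_log_group     = true
  cloudwatch_log_group_retention_in_days = 30
}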

How do I configure encryption at rest for Secrets?

Kubernetes Secrets are resources in the cluster designed to store and manage sensitive information. These behave like ConfigMaps, but have a few extra properties that enhance their security profile.

All EKS clusters encrypt Kubernetes Secrets at rest at the disk level using shared AWS managed KMS keys. Alternatively, you can provide your own KMS Customer Master Key (CMK) to use for envelope encryption. In envelope encryption, Kubernetes will use the provided CMK to encrypt the secret keys used to encrypt the Kubernetes Secrets. For each Secret, Kubernetes will dynamically generate a new data encryption key (DEK) for the purposes of encrypting and decrypting the secret. This key is then encrypted using the provided CMK before being stored in the cluster. In this way, you can manage access to the Secret (indirectly by restricting access to the DEK) through the KMS permissions. For example, you can disable all access to any Secrets in the EKS cluster by removing the permissions to encrypt/decrypt using the KMS key in case of a breach.

To enable envelope encryption, provide the ARN of the KMS key you would like to use via the secret_envelope_encryption_kms_key_arn variable. Note that if the KMS key belongs to another account, you will need to grant the account holding the EKS cluster access to manage permissions for the key. See Allowing users in other accounts to use a CMK from the official AWS docs for more information.
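For example, here is a minimal sketch that creates a dedicated CMK and uses it for envelope encryption:

resource "aws_kms_key" "eks_secrets" {
  description = "CMK used for envelope encryption of Kubernetes Secrets"
}

module "eks_cluster" {
  # (other arguments omitted)
  secret_envelope_encryption_kms_key_arn = aws_kms_key.eks_secrets.arn
}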

How do I deploy Pods on Fargate?

AWS Fargate is an AWS managed infrastructure for running ECS Tasks and EKS Pods without any worker nodes. With Fargate, your EKS Pods will automatically be assigned a node from a shared pool of VMs that are fully managed by AWS. This means that you can focus entirely on the application you are deploying and not have to worry about servers, clusters, and the underlying infrastructure as a whole.

To use Fargate with your EKS Pods, you need to create a Fargate Profile to select the Pods that you want to run. You can use Namespaces and Labels to restrict which Pods of the EKS cluster will run on Fargate. This means that Pods that match the specifications of the Fargate Profile will automatically be deployed to Fargate without any further configuration.

Some additional notes on using Fargate:

  • Fargate Profiles require a Pod Execution Role, which is an IAM role that will be assigned to the underlying kubelet of the Fargate instance. At a minimum, this role must be given enough permissions to pull the images used by the Pod. Note that this role is NOT made available to the Pods! Use the IAM Role for Service Accounts (IRSA) feature of EKS to assign IAM roles for use by the Pods themselves.
  • If you set the input variable schedule_control_plane_services_on_fargate on this module, the module will automatically allocate a Fargate Profile that selects the core control plane services deployed in the kube-system Namespace (e.g., CoreDNS). This profile is highly selective and will most likely not match any other Pods in the cluster. To deploy additional Pods onto Fargate, you must manually create Fargate Profiles that select those Pods (use the aws_eks_fargate_profile resource to provision Fargate Profiles with Terraform; see the sketch after this list). The Pod Execution Role created by the module may be reused for other Fargate Profiles.
  • Fargate does not support DaemonSets. This means that you can't rely on the eks-container-logs module to forward logs to CloudWatch. Instead, you need to manually configure a sidecar fluentd container that forwards the log entries to CloudWatch Logs. Refer to this AWS blog post for documentation on how to setup fluentd with Fargate.
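For example, here is a sketch of a Fargate Profile that schedules all Pods in a hypothetical my-app Namespace (with a matching label) onto Fargate. The output names used for the cluster name and the Pod Execution Role are assumptions; check this module's outputs for the exact names:

resource "aws_eks_fargate_profile" "my_app" {
  # Assumed output names for the cluster name and the reusable Pod Execution Role.
  cluster_name           = module.eks_cluster.eks_cluster_name
  pod_execution_role_arn = module.eks_cluster.eks_default_fargate_execution_role_arn

  fargate_profile_name = "my-app"
  subnet_ids           = var.vpc_worker_subnet_ids

  # Only Pods in the "my-app" Namespace with this label will run on Fargate.
  selector {
    namespace = "my-app"
    labels = {
      scheduler = "fargate"
    }
  }
}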

How do I upgrade the Kubernetes Version of the cluster?

To upgrade the minor version of Kubernetes deployed on the EKS cluster, you need to update the kubernetes_version input variable. You must upgrade one minor version at a time, as EKS does not support upgrading by more than one minor version.

Updating core components

When you upgrade the cluster, you can update the cluster core components with either kubergrunt or Amazon EKS add-ons. If use_upgrade_cluster_script is set to true, then kubergrunt is used to update the core components. If enable_eks_addons is set to true, then EKS add-ons are used. If both are set to true, then enable_eks_addons takes precedence.

Note that customized VPC CNI configurations (e.g., enabling prefix delegation) are not fully supported with add-ons, as the automated add-on lifecycles could potentially undo the configuration changes. As such, it is not recommended to use EKS add-ons if you wish to use the VPC CNI customization features.

Using Kubergrunt

When you bump minor versions, the module will automatically update the deployed Kubernetes components as described in the official upgrade guide. This is handled by kubergrunt (minimum version 0.6.2) using the eks sync-core-components command, which will look up the deployed Kubernetes version and make the required kubectl calls to deploy the updated components.

Using EKS add-ons

If you have specified an explicit addon_version in eks_addons, you must update the addon_version to match the new cluster version. All add-on version details can be found in the official documentation. If you omit the addon_version, the correct versions are applied automatically.
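For example, here is a sketch of pinning add-on versions alongside a cluster version bump (the version strings are placeholders; look up the versions that match your cluster in the official documentation):

module "eks_cluster_control_plane" {
  # (other arguments omitted)

  kubernetes_version = "1.29"
  enable_eks_addons  = true

  eks_addons = {
    # Placeholder version string; must be compatible with kubernetes_version.
    coredns    = { addon_version = "v1.11.1-eksbuild.4" }
    # Omitting addon_version lets the correct default version be applied.
    kube-proxy = {}
    vpc-cni    = {}
  }
}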

Updating worker node AMIs

Note that you must update the nodes to use the corresponding kubelet version as well. This means that when you update minor versions, you will also need to update the AMIs used by the worker nodes to match the version and rotate the workers. For more information on rotating worker nodes, refer to How do I roll out an update to the instances? in the eks-cluster-workers module README.

Detailed upgrade steps

Here are detailed steps on how to update your cluster:

  1. Bump the kubernetes_version in the module variable to the next minor version in your module block for eks-cluster-control-plane.
  2. For self-managed worker nodes (eks-cluster-workers module), build a new AMI for your worker nodes based on the EKS optimized AMI for the new Kubernetes minor version. Update the asg_default_instance_ami variable to the new AMI in your module block for eks-cluster-workers (see the sketch after this list).
  3. Apply the changes. This will update the Kubernetes version on the EKS control plane, and stage the updates for your workers. Note that your cluster will continue to function as Kubernetes supports worker nodes that are 1 minor version behind.
  4. Roll out the AMI update using kubergrunt eks deploy.
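As a sketch of steps 1 and 2, the change is just two variable bumps (the version numbers and AMI ID are placeholders):

# Step 1: bump the control plane one minor version (e.g., from 1.28 to 1.29).
module "eks_cluster_control_plane" {
  # (other arguments omitted)
  kubernetes_version = "1.29"
}

# Step 2: point the self-managed workers at an AMI built from the matching
# EKS optimized AMI (placeholder AMI ID).
module "eks_cluster_workers" {
  # (other arguments omitted)
  asg_default_instance_ami = "ami-0123456789abcdef0"
}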

How do I increase the number of Pods for my worker nodes?

By default, this module deploys an EKS cluster that uses the AWS VPC CNI to manage internal networking for the cluster. This plugin sources IP addresses from the VPC assigned to the cluster and assigns one to each Pod in Kubernetes.

The AWS VPC CNI works by allocating secondary IP addresses and Elastic Network Interfaces to the worker nodes to assign to the Pods that are scheduled on them. This means that there is a limit on the number of IP addresses that can be made available to the Pods per node. You can look up the various limits per instance type in the official AWS documentation.

Unfortunately, these limits are typically significantly less than the available compute and memory resources of the node. This means that the worker nodes will often hit the IP address limit well before they reach the compute and memory limits of the nodes, greatly reducing the Pod scheduling potential of your cluster.

To address this, you can use prefix delegation mode for managing the available IP addresses on your workers. In prefix delegation mode, each ENI is assigned an IPv4 address prefix instead of an individual IP for each secondary address slot. This means that for each individual IP address that was previously available, you now have up to 16 IP addresses that the worker node can assign to the container, greatly increasing the number of IP addresses that each worker can assign to the Pods.

To enable prefix delegation mode, set the vpc_cni_enable_prefix_delegation input variable to true.

Note that prefix delegation mode greatly increases the number of IP addresses that each worker node will keep in standby for the Pods. This is because worker nodes can only allocate IP addresses in blocks of 16. This means that each worker will consume a minimum of 16 IP addresses from the VPC, and potentially more depending on the number of Pods that are scheduled (e.g., a worker with 17 Pods will consume 32 IP addresses - 2 prefixes of 16 IP addresses each).

You can tweak the allocation behavior by configuring the vpc_cni_warm_ip_target and vpc_cni_minimum_ip_target variables.

The warm IP target indicates the target number of IP addresses each node should have available. For example, if you set the warm IP target to 5, then the node will only preallocate the next prefix of 16 IP addresses when the current prefix reaches 68.75% utilization (11 out of 16 used). On the other hand, if the warm IP target is set to 16 (the default), then the next prefix will be allocated as soon as one Pod is scheduled on the current prefix.

The minimum IP target indicates the target number of IP addresses that should be available on each node during initialization. For example, if you set this to 32, then each node will start with 2 prefixes being preallocated at launch time. On the other hand, if the minimum IP target is 16 (the default), then each node starts with only 1 prefix.
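For example, here is a sketch that enables prefix delegation and tunes both targets (the values are illustrative):

module "eks_cluster" {
  # (other arguments omitted)

  # The vpc_cni_* variables only take effect when the customization script is enabled.
  use_vpc_cni_customize_script     = true
  vpc_cni_enable_prefix_delegation = true

  # Keep roughly 5 free IPs per node instead of a full spare prefix,
  # and start each node with a single /28 prefix (16 addresses).
  vpc_cni_warm_ip_target    = 5
  vpc_cni_minimum_ip_target = 16
}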

You can learn more details about how prefix delegation mode works, and the behavior of warm IP target and minimum IP target in the official AWS blog post about the feature.

Troubleshooting

AccessDenied when provisioning Services of LoadBalancer type

On brand new accounts, AWS needs to provision a Service-Linked Role for ELBs when an ELB is first provisioned. EKS automatically creates the Service-Linked Role if it doesn't exist, but it needs more permissions than are provided by default. Since the permissions are only needed once, permanently binding them to the control plane role would violate the principle of least privilege.

As such, this module does not bind the requisite permissions, and instead we recommend taking one of the following approaches:

  • Create a one-time wrapper module that appends the following IAM permissions to the control plane IAM role (the output eks_master_iam_role_arn), and deploy the EKS cluster with the LoadBalancer Service (a sketch follows this list):

    • ec2:DescribeAccountAttributes
    • ec2:DescribeInternetGateways
  • Create an ELB using the AWS console, or the modules in terraform-aws-load-balancer.

  • Create the service linked role using the Landing Zone modules.
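For the first option, here is a sketch of what the wrapper's extra policy could look like, attached using the eks_control_plane_iam_role_name output described earlier:

resource "aws_iam_role_policy" "allow_describe_for_elb_service_linked_role" {
  name = "allow-describe-for-elb-service-linked-role"
  role = module.eks_cluster.eks_control_plane_iam_role_name

  # One-time permissions EKS needs to create the ELB service-linked role.
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeAccountAttributes",
        "ec2:DescribeInternetGateways"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}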

Sample Usage

main.tf

# ------------------------------------------------------------------------------------------------------
# DEPLOY GRUNTWORK'S EKS-CLUSTER-CONTROL-PLANE MODULE
# ------------------------------------------------------------------------------------------------------

module "eks_cluster_control_plane" {

source = "git::git@github.com:gruntwork-io/terraform-aws-eks.git//modules/eks-cluster-control-plane?ref=v0.67.1"

# ----------------------------------------------------------------------------------------------------
# REQUIRED VARIABLES
# ----------------------------------------------------------------------------------------------------

# The name of the EKS cluster (e.g. eks-prod). This is used to namespace all
# the resources created by these templates.
cluster_name = <string>

# A list of CIDR blocks that should be allowed network access to the
# Kubernetes public API endpoint. When null or empty, allow access from the
# whole world (0.0.0.0/0). Note that this only restricts network reachability
# to the API, and does not account for authentication to the API. Note also
# that this only controls access to the public API endpoint, which is used for
# network access from outside the VPC. If you want to control access to the
# Kubernetes API from within the VPC, then you must use the
# endpoint_private_access_cidrs and endpoint_private_access_security_group_ids
# variables.
endpoint_public_access_cidrs = <list(string)>

# A list of the subnets into which the EKS Cluster's control plane nodes will
# be launched. These should usually be all private subnets and include one in
# each AWS Availability Zone.
vpc_control_plane_subnet_ids = <list(string)>

# The ID of the VPC in which the EKS Cluster's EC2 Instances will reside.
vpc_id = <string>

# ----------------------------------------------------------------------------------------------------
# OPTIONAL VARIABLES
# ----------------------------------------------------------------------------------------------------

# A list of additional security group IDs to attach to the control plane.
additional_security_groups = []

# Automatically download and install Kubergrunt if it isn't already installed
# on this OS. Only used if var.use_kubergrunt_verification is true.
auto_install_kubergrunt = true

# The AWS partition used for default AWS Resources.
aws_partition = "aws"

# The ID (ARN, alias ARN, AWS ID) of a customer managed KMS Key to use for
# encrypting log data in the CloudWatch log group for EKS control plane logs.
cloudwatch_log_group_kms_key_id = null

# The number of days to retain log events in the CloudWatch log group for EKS
# control plane logs. Refer to
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_group#retention_in_days
# for all the valid values. When null, the log events are retained forever.
cloudwatch_log_group_retention_in_days = null

# Tags to apply on the CloudWatch Log Group for EKS control plane logs,
# encoded as a map where the keys are tag keys and values are tag values.
cloudwatch_log_group_tags = null

# ARN of permissions boundary to apply to the cluster IAM role - the IAM role
# created for the EKS cluster as well as the default fargate IAM role.
cluster_iam_role_permissions_boundary = null

# The IP family used to assign Kubernetes pod and service addresses. Valid
# values are ipv4 (default) and ipv6. You can only specify an IP family when
# you create a cluster, changing this value will force a new cluster to be
# created.
cluster_network_config_ip_family = "ipv4"

# The CIDR block to assign Kubernetes pod and service IP addresses from. If
# you don't specify a block, Kubernetes assigns addresses from either the
# 10.100.0.0/16 or 172.20.0.0/16 CIDR blocks. You can only specify a custom
# CIDR block when you create a cluster, changing this value will force a new
# cluster to be created.
cluster_network_config_service_ipv4_cidr = null

# Whether or not to automatically configure kubectl on the current operator
# machine. To use this, you need a working python install with the AWS CLI
# installed and configured.
configure_kubectl = false

# When set to true, this will inform the module to set up the OpenID Connect
# Provider for use with the IAM Roles for Service Accounts feature of EKS.
configure_openid_connect_provider = true

# When true, IAM role will be created and attached to Fargate control plane
# services. When true, requires that the
# schedule_control_plane_services_on_fargate variable be set to true.
create_default_fargate_iam_role = true

# The name to use for the default Fargate execution IAM role that is created
# when create_default_fargate_iam_role is true. When null, defaults to
# CLUSTER_NAME-fargate-role.
custom_fargate_iam_role_name = null

# A map of custom tags to apply to the EKS add-ons. The key is the tag name
# and the value is the tag value.
custom_tags_eks_addons = {}

# A map of custom tags to apply to the EKS Cluster. The key is the tag name
# and the value is the tag value.
custom_tags_eks_cluster = {}

# A map of custom tags to apply to the Security Group for this EKS Cluster.
# The key is the tag name and the value is the tag value.
custom_tags_security_group = {}

# Configuration object for the EBS CSI Driver EKS AddOn
ebs_csi_driver_addon_config = {}

# A map of custom tags to apply to the EBS CSI Driver AddOn. The key is the
# tag name and the value is the tag value.
ebs_csi_driver_addon_tags = {}

# If using KMS encryption of EBS volumes, provide the KMS Key ARN to be used
# for a policy attachment.
ebs_csi_driver_kms_key_arn = null

# The namespace for the EBS CSI Driver. This will almost always be the
# kube-system namespace.
ebs_csi_driver_namespace = "kube-system"

# The Service Account name to be used with the EBS CSI Driver
ebs_csi_driver_sa_name = "ebs-csi-controller-sa"

# Map of EKS add-ons, where key is name of the add-on and value is a map of
# add-on properties.
eks_addons = {}

# When set to true, the module configures and installs the EBS CSI Driver as an
# EKS managed AddOn
# (https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html). To
# use this feature, `configure_openid_connect_provider` must be set to true
# (the default value).
enable_ebs_csi_driver = false

# When set to true, the module configures EKS add-ons
# (https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html)
# specified with `eks_addons`. VPC CNI configurations with
# `use_vpc_cni_customize_script` aren't fully supported with add-ons, as the
# automated add-on lifecycles could potentially undo the configuration
# changes.
enable_eks_addons = false

# A list of the desired control plane logging to enable. See
# https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html for
# the list of available logs.
enabled_cluster_log_types = []

# A list of CIDR blocks that should be allowed network access to the private
# Kubernetes API endpoint. Note that worker nodes automatically get access to
# the private endpoint, so this controls additional access. Note that this
# only restricts network reachability to the API, and does not account for
# authentication to the API. Note also that this only controls access to the
# private API endpoint, which is used for network access from inside the VPC.
# If you want to control access to the Kubernetes API from outside the VPC,
# then you must use the endpoint_public_access_cidrs.
endpoint_private_access_cidrs = []

# Same as endpoint_private_access_cidrs, but exposes access to the provided
# list of security groups instead of CIDR blocks. The keys in the map are
# unique user defined identifiers that can be used for resource tracking
# purposes.
endpoint_private_access_security_group_ids = {}

# Whether or not to enable public API endpoints which allow access to the
# Kubernetes API from outside of the VPC. Note that private access within the
# VPC is always enabled.
endpoint_public_access = true

# Create a dependency between the control plane services Fargate Profile in
# this module to the interpolated values in this list (and thus the source
# resources). In other words, the resources in this module will now depend on
# the resources backing the values in this list such that those resources need
# to be created before the resources in this module, and the resources in this
# module need to be destroyed before the resources in the list.
fargate_profile_dependencies = []

# Name of the kubectl config file context for accessing the EKS cluster.
kubectl_config_context_name = ""

# Path to the kubectl config file. Defaults to $HOME/.kube/config
kubectl_config_path = ""

# The URL from which to download Kubergrunt if it's not installed already.
# Only used if var.use_kubergrunt_verification and var.auto_install_kubergrunt
# are true.
kubergrunt_download_url = "https://github.com/gruntwork-io/kubergrunt/releases/download/v0.15.0/kubergrunt"

# Version of Kubernetes to use. Refer to EKS docs for list of available
# versions
# (https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html).
kubernetes_version = "1.29"

# The thumbprint to use for the OpenID Connect Provider. You can retrieve the
# thumbprint by following the instructions in the AWS docs:
# https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc_verify-thumbprint.html.
# When set to null, this module will dynamically retrieve the thumbprint from
# AWS. You should only set this if you have strict requirements around HTTP
# access in your organization (e.g., you require an HTTP proxy).
openid_connect_provider_thumbprint = null

# When true, configures control plane services to run on Fargate so that the
# cluster can run without worker nodes. When true, requires kubergrunt to be
# available on the system.
schedule_control_plane_services_on_fargate = false

# ARN for KMS Key to use for envelope encryption of Kubernetes Secrets. By
# default Secrets in EKS are encrypted at rest using shared AWS managed keys.
# Setting this variable will configure Kubernetes to encrypt Secrets using
# this KMS key. Can only be used on clusters created after 2020-03-05.
secret_envelope_encryption_kms_key_arn = null

# When true, precreate the CloudWatch Log Group to use for EKS control plane
# logging. This is useful if you wish to customize the CloudWatch Log Group
# with various settings such as retention periods and KMS encryption. When
# false, EKS will automatically create a basic log group to use. Note that
# logs are only streamed to this group if var.enabled_cluster_log_types is
# non-empty.
should_create_cloudwatch_log_group = true

# When set to true, the sync-core-components command will skip updating
# coredns. This variable is ignored if `use_upgrade_cluster_script` is false.
upgrade_cluster_script_skip_coredns = false

# When set to true, the sync-core-components command will skip updating
# kube-proxy. This variable is ignored if `use_upgrade_cluster_script` is
# false.
upgrade_cluster_script_skip_kube_proxy = false

# When set to true, the sync-core-components command will skip updating
# aws-vpc-cni. This variable is ignored if `use_upgrade_cluster_script` is
# false.
upgrade_cluster_script_skip_vpc_cni = false

# When set to true, the sync-core-components command will wait until the new
# versions are rolled out in the cluster. This variable is ignored if
# `use_upgrade_cluster_script` is false.
upgrade_cluster_script_wait_for_rollout = true

# When set to true, this will enable the kubergrunt eks cleanup-security-group
# command using a local-exec provisioner. This script ensures that no known
# residual resources managed by EKS are left behind after the cluster has been
# deleted.
use_cleanup_cluster_script = true

# When set to true, this will enable kubergrunt verification to wait for the
# Kubernetes API server to come up before completing. If false, reverts to a
# 30 second timed wait instead.
use_kubergrunt_verification = true

# When true, all IAM policies will be managed as dedicated policies rather
# than inline policies attached to the IAM roles. Dedicated managed policies
# are friendlier to automated policy checkers, which may scan a single
# resource for findings. As such, it is important to avoid inline policies
# when targeting compliance with various security standards.
use_managed_iam_policies = true

# When set to true, this will enable the kubergrunt eks sync-core-components
# command using a local-exec provisioner. This script ensures that the
# Kubernetes core components are upgraded to a matching version every time the
# cluster's Kubernetes version is updated.
use_upgrade_cluster_script = true

# When set to true, this will enable management of the aws-vpc-cni
# configuration options using kubergrunt running as a local-exec provisioner.
# If you set this to false, the vpc_cni_* variables will be ignored.
use_vpc_cni_customize_script = true

# When true, enable prefix delegation mode for the AWS VPC CNI component of
# the EKS cluster. In prefix delegation mode, each ENI will be allocated 16 IP
# addresses (/28) instead of 1, allowing you to pack more Pods per node. Note
# that by default, AWS VPC CNI will always preallocate 1 full prefix - this
# means that you can potentially take up 32 IP addresses from the VPC network
# space even if you only have 1 Pod on the node. You can tweak this behavior
# by configuring the var.vpc_cni_warm_ip_target input variable.
vpc_cni_enable_prefix_delegation = false

# The minimum number of IP addresses (free and used) each node should start
# with. When null, defaults to the aws-vpc-cni application setting (currently
# 16 as of version 1.9.0). For example, if this is set to 25, every node will
# allocate 2 prefixes (32 IP addresses). On the other hand, if this was set to
# the default value, then each node will allocate only 1 prefix (16 IP
# addresses).
vpc_cni_minimum_ip_target = null

# The number of free IP addresses each node should maintain. When null,
# defaults to the aws-vpc-cni application setting (currently 16 as of version
# 1.9.0). In prefix delegation mode, determines whether the node will
# preallocate another full prefix. For example, if this is set to 5 and a node
# currently has 9 Pods scheduled, then the node will NOT preallocate a new
# prefix block of 16 IP addresses. On the other hand, if this was set to the
# default value, then the node will allocate a new block when the first pod is
# scheduled.
vpc_cni_warm_ip_target = null

# A list of the subnets into which the EKS Cluster's administrative pods will
# be launched. These should usually be all private subnets and include one in
# each AWS Availability Zone. Required when
# var.schedule_control_plane_services_on_fargate is true.
vpc_worker_subnet_ids = []

}


Reference

Required

cluster_name string required

The name of the EKS cluster (e.g. eks-prod). This is used to namespace all the resources created by these templates.

endpoint_public_access_cidrs list(string) required

A list of CIDR blocks that should be allowed network access to the Kubernetes public API endpoint. When null or empty, allow access from the whole world (0.0.0.0/0). Note that this only restricts network reachability to the API, and does not account for authentication to the API. Note also that this only controls access to the public API endpoint, which is used for network access from outside the VPC. If you want to control access to the Kubernetes API from within the VPC, then you must use the endpoint_private_access_cidrs and endpoint_private_access_security_group_ids variables.

vpc_control_plane_subnet_ids list(string) required

A list of the subnets into which the EKS Cluster's control plane nodes will be launched. These should usually be all private subnets and include one in each AWS Availability Zone.

vpc_id string required

The ID of the VPC in which the EKS Cluster's EC2 Instances will reside.

Optional

additional_security_groups list(string) optional

A list of additional security group IDs to attach to the control plane.

[]

auto_install_kubergrunt bool optional

Automatically download and install Kubergrunt if it isn't already installed on this OS. Only used if use_kubergrunt_verification is true.

true
aws_partition string optional

The AWS partition used for default AWS Resources.

"aws"

cloudwatch_log_group_kms_key_id string optional

The ID (ARN, alias ARN, AWS ID) of a customer managed KMS Key to use for encrypting log data in the CloudWatch log group for EKS control plane logs.

null

cloudwatch_log_group_retention_in_days number optional

The number of days to retain log events in the CloudWatch log group for EKS control plane logs. Refer to https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudwatch_log_group#retention_in_days for all the valid values. When null, the log events are retained forever.

null
cloudwatch_log_group_tags map(string) optional

Tags to apply on the CloudWatch Log Group for EKS control plane logs, encoded as a map where the keys are tag keys and values are tag values.

null

cluster_iam_role_permissions_boundary string optional

ARN of permissions boundary to apply to the cluster IAM role - the IAM role created for the EKS cluster as well as the default fargate IAM role.

null

cluster_network_config_ip_family string optional

The IP family used to assign Kubernetes pod and service addresses. Valid values are ipv4 (default) and ipv6. You can only specify an IP family when you create a cluster, changing this value will force a new cluster to be created.

"ipv4"

cluster_network_config_service_ipv4_cidr string optional

The CIDR block to assign Kubernetes pod and service IP addresses from. If you don't specify a block, Kubernetes assigns addresses from either the 10.100.0.0/16 or 172.20.0.0/16 CIDR blocks. You can only specify a custom CIDR block when you create a cluster, changing this value will force a new cluster to be created.

null
configure_kubectl bool optional

Whether or not to automatically configure kubectl on the current operator machine. To use this, you need a working python install with the AWS CLI installed and configured.

false

configure_openid_connect_provider bool optional

When set to true, this will inform the module to set up the OpenID Connect Provider for use with the IAM Roles for Service Accounts feature of EKS.

true

create_default_fargate_iam_role bool optional

When true, an IAM role will be created and attached to the Fargate control plane services. When true, requires that the schedule_control_plane_services_on_fargate variable be set to true.

true

custom_fargate_iam_role_name string optional

The name to use for the default Fargate execution IAM role that is created when create_default_fargate_iam_role is true. When null, defaults to CLUSTER_NAME-fargate-role.

null
custom_tags_eks_addons map(string) optional

A map of custom tags to apply to the EKS add-ons. The key is the tag name and the value is the tag value.

{}
Example
{
  key1 = "value1"
  key2 = "value2"
}

custom_tags_eks_cluster map(string) optional

A map of custom tags to apply to the EKS Cluster. The key is the tag name and the value is the tag value.

{}
Example
{
  key1 = "value1"
  key2 = "value2"
}

custom_tags_security_group map(string) optional

A map of custom tags to apply to the Security Group for this EKS Cluster. The key is the tag name and the value is the tag value.

{}
Example
{
  key1 = "value1"
  key2 = "value2"
}

ebs_csi_driver_addon_config any optional

Configuration object for the EBS CSI Driver EKS AddOn

Any types represent complex values of variable type. For details, please consult `variables.tf` in the source repo.
{}
Details

EKS add-on advanced configuration via configuration_values must follow the configuration schema for the deployed version of the add-on.
See the following AWS Blog for more details on advanced configuration of EKS add-ons: https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/
Example:
{
  addon_version            = "v1.14.0-eksbuild.1"
  configuration_values     = {}
  preserve                 = false
  resolve_conflicts        = "NONE"
  service_account_role_arn = "arn:aws:iam::123456789012:role/role-name"
}

ebs_csi_driver_addon_tags map(string) optional

A map of custom tags to apply to the EBS CSI Driver AddOn. The key is the tag name and the value is the tag value.

{}
Example
{
  key1 = "value1"
  key2 = "value2"
}

ebs_csi_driver_kms_key_arn string optional

If using KMS encryption of EBS volumes, provide the KMS Key ARN to be used for a policy attachment.

null

ebs_csi_driver_namespace string optional

The namespace for the EBS CSI Driver. This will almost always be the kube-system namespace.

"kube-system"

ebs_csi_driver_sa_name string optional

The Service Account name to be used with the EBS CSI Driver

"ebs-csi-controller-sa"
eks_addons any optional

Map of EKS add-ons, where key is name of the add-on and value is a map of add-on properties.

Any types represent complex values of variable type. For details, please consult `variables.tf` in the source repo.
{}
Details

EKS add-on advanced configuration via configuration_values must follow the configuration schema for the deployed version of the add-on.
See the following AWS Blog for more details on advanced configuration of EKS add-ons: https://aws.amazon.com/blogs/containers/amazon-eks-add-ons-advanced-configuration/
Example:
eks_addons = {
  coredns    = {}
  kube-proxy = {}
  vpc-cni = {
    addon_version = "1.10.1-eksbuild.1"
    configuration_values = {
      ipvs      = {}
      mode      = "iptables"
      resources = {}
    }
    preserve                 = false
    resolve_conflicts        = "NONE"
    service_account_role_arn = "arn:aws:iam::123456789012:role/role-name"
  }
}

enable_ebs_csi_driver bool optional

When set to true, the module configures and installs the EBS CSI Driver as an EKS managed AddOn (https://docs.aws.amazon.com/eks/latest/userguide/managing-ebs-csi.html). To use this feature, configure_openid_connect_provider must be set to true (the default value).

false
enable_eks_addons bool optional

When set to true, the module configures EKS add-ons (https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html) specified with eks_addons. VPC CNI configurations with use_vpc_cni_customize_script isn't fully supported with addons, as the automated add-on lifecycles could potentially undo the configuration changes.

false
enabled_cluster_log_types list(string) optional

A list of the desired control plane logging to enable. See https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html for the list of available logs.

[]
endpoint_private_access_cidrs list(string) optional

A list of CIDR blocks that should be allowed network access to the private Kubernetes API endpoint. Note that worker nodes automatically get access to the private endpoint, so this controls additional access. Note that this only restricts network reachability to the API, and does not account for authentication to the API. Note also that this only controls access to the private API endpoint, which is used for network access from inside the VPC. If you want to control access to the Kubernetes API from outside the VPC, then you must use the endpoint_public_access_cidrs.

[]

endpoint_private_access_security_group_ids map(string) optional

Same as endpoint_private_access_cidrs, but exposes access to the provided list of security groups instead of CIDR blocks. The keys in the map are unique user defined identifiers that can be used for resource tracking purposes.

{}

endpoint_public_access bool optional

Whether or not to enable public API endpoints which allow access to the Kubernetes API from outside of the VPC. Note that private access within the VPC is always enabled.

true
fargate_profile_dependencies list(string) optional

Create a dependency between the control plane services Fargate Profile in this module to the interpolated values in this list (and thus the source resources). In other words, the resources in this module will now depend on the resources backing the values in this list such that those resources need to be created before the resources in this module, and the resources in this module need to be destroyed before the resources in the list.

[]

kubectl_config_context_name string optional

Name of the kubectl config file context for accessing the EKS cluster.

""
kubectl_config_path string optional

Path to the kubectl config file. Defaults to $HOME/.kube/config

""

kubergrunt_download_url string optional

The URL from which to download Kubergrunt if it's not installed already. Only used if use_kubergrunt_verification and auto_install_kubergrunt are true.

"https://github.com/gruntwork-io/kubergrunt/releases/download/v0.15.0/kubergrunt"
kubernetes_version string optional

Version of Kubernetes to use. Refer to EKS docs for list of available versions (https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html).

"1.29"

openid_connect_provider_thumbprint string optional

The thumbprint to use for the OpenID Connect Provider. You can retrieve the thumbprint by following the instructions in the AWS docs: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc_verify-thumbprint.html. When set to null, this module will dynamically retrieve the thumbprint from AWS. You should only set this if you have strict requirements around HTTP access in your organization (e.g., you require an HTTP proxy).

null

schedule_control_plane_services_on_fargate bool optional

When true, configures control plane services to run on Fargate so that the cluster can run without worker nodes. When true, requires kubergrunt to be available on the system.

false

secret_envelope_encryption_kms_key_arn string optional

ARN for KMS Key to use for envelope encryption of Kubernetes Secrets. By default Secrets in EKS are encrypted at rest using shared AWS managed keys. Setting this variable will configure Kubernetes to encrypt Secrets using this KMS key. Can only be used on clusters created after 2020-03-05.

null

should_create_cloudwatch_log_group bool optional

When true, precreate the CloudWatch Log Group to use for EKS control plane logging. This is useful if you wish to customize the CloudWatch Log Group with various settings such as retention periods and KMS encryption. When false, EKS will automatically create a basic log group to use. Note that logs are only streamed to this group if enabled_cluster_log_types is non-empty.

true

upgrade_cluster_script_skip_coredns bool optional

When set to true, the sync-core-components command will skip updating coredns. This variable is ignored if use_upgrade_cluster_script is false.

false

upgrade_cluster_script_skip_kube_proxy bool optional

When set to true, the sync-core-components command will skip updating kube-proxy. This variable is ignored if use_upgrade_cluster_script is false.

false

upgrade_cluster_script_skip_vpc_cni bool optional

When set to true, the sync-core-components command will skip updating aws-vpc-cni. This variable is ignored if use_upgrade_cluster_script is false.

false

upgrade_cluster_script_wait_for_rollout bool optional

When set to true, the sync-core-components command will wait until the new versions are rolled out in the cluster. This variable is ignored if use_upgrade_cluster_script is false.

true

use_cleanup_cluster_script bool optional

When set to true, this will enable the kubergrunt eks cleanup-security-group command using a local-exec provisioner. This script ensures that no known residual resources managed by EKS are left behind after the cluster has been deleted.

true

use_kubergrunt_verification bool optional

When set to true, this will enable kubergrunt verification to wait for the Kubernetes API server to come up before completing. If false, reverts to a 30 second timed wait instead.

true

use_managed_iam_policies bool optional

When true, all IAM policies will be managed as dedicated policies rather than inline policies attached to the IAM roles. Dedicated managed policies are friendlier to automated policy checkers, which may scan a single resource for findings. As such, it is important to avoid inline policies when targeting compliance with various security standards.

true

use_upgrade_cluster_script bool optional

When set to true, this will enable the kubergrunt eks sync-core-components command using a local-exec provisioner. This script ensures that the Kubernetes core components are upgraded to a matching version every time the cluster's Kubernetes version is updated.

true

use_vpc_cni_customize_script bool optional

When set to true, this will enable management of the aws-vpc-cni configuration options using kubergrunt running as a local-exec provisioner. If you set this to false, the vpc_cni_* variables will be ignored.

true

vpc_cni_enable_prefix_delegation bool optional

When true, enable prefix delegation mode for the AWS VPC CNI component of the EKS cluster. In prefix delegation mode, each ENI will be allocated 16 IP addresses (/28) instead of 1, allowing you to pack more Pods per node. Note that by default, AWS VPC CNI will always preallocate 1 full prefix - this means that you can potentially take up 32 IP addresses from the VPC network space even if you only have 1 Pod on the node. You can tweak this behavior by configuring the vpc_cni_warm_ip_target input variable.

false

vpc_cni_minimum_ip_target number optional

The minimum number of IP addresses (free and used) each node should start with. When null, defaults to the aws-vpc-cni application setting (currently 16 as of version 1.9.0). For example, if this is set to 25, every node will allocate 2 prefixes (32 IP addresses). On the other hand, if this was set to the default value, then each node will allocate only 1 prefix (16 IP addresses).

null

vpc_cni_warm_ip_target number optional

The number of free IP addresses each node should maintain. When null, defaults to the aws-vpc-cni application setting (currently 16 as of version 1.9.0). In prefix delegation mode, determines whether the node will preallocate another full prefix. For example, if this is set to 5 and a node currently has 9 Pods scheduled, then the node will NOT preallocate a new prefix block of 16 IP addresses. On the other hand, if this was set to the default value, then the node will allocate a new block when the first pod is scheduled.

null
vpc_worker_subnet_ids list(string) optional

A list of the subnets into which the EKS Cluster's administrative pods will be launched. These should usually be all private subnets and include one in each AWS Availability Zone. Required when schedule_control_plane_services_on_fargate is true.

[]