Gruntwork release 2020-12
Guides / Update Guides / Releases / 2020-12
This page lists all the updates to the Gruntwork Infrastructure as Code
Library that were released in 2020-12. For instructions
on how to use these updates in your code, check out the updating
documentation.
Here are the repos that were updated:
Published: 12/8/2020 | Release notes
Published: 12/7/2020 | Release notes
DO NOT USE: integration testing release
Published: 12/2/2020 | Release notes
Published: 12/18/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Module versions have been updated for compatibility with Terraform 0.13. Additionally, the required versions in all modules have been updated to reflect usage with 0.13.
Several backwards incompatible changes were pulled in as a result. Refer to the Migration Guide down below for details on state changes (if any) that need to be applied.
Most modules do not require any changes to adopt the Terraform 0.13 compatibility versions or to update to Terraform 0.13. Below is the list of modules that require state migrations or include expected resource destructions. Any module that is not listed does not require any state migration to apply cleanly.
The cloudtrail module has several internal changes to how the S3 bucket is managed. You will need to perform state migrations to avoid recreating the bucket. Refer to the upgrade guide for detailed instructions on updating to this release.
The eks-cluster module has several changes to avoid using external information in destroy provisioners. As a result, a simple version bump will lead to various terraform errors due to incompatibility with the state. Refer to the upgrade guide for detailed instructions on how to resolve those errors.
The k8s-service module includes a change to how the ALB Access Log S3 bucket is managed. You will need to perform state migrations to avoid recreating the bucket. Refer to the upgrade guide for detailed instructions on updating to this release.
The k8s-namespace module includes a rename for one of the RBAC roles that are included with the Namespace. Refer to the upgrade guide for more information on the specifics of the rename, and how to avoid losing access in your services that depend on those roles.
The rds module includes a few changes to the CloudWatch alarms that are provisioned. Specifically, the replica related alarms are now only created if there are Read Replicas being deployed (previously we always created these alarms). You may see these alarms destroyed when you update to this release. These alarm deletions are expected and safe to perform for clusters that do not have any Read Replicas.
The alb module includes a change to how the ALB Access Log S3 bucket is managed. You will need to perform state migrations to avoid recreating the bucket. Refer to the upgrade guide for detailed instructions on updating to this release.
Published: 12/18/2020 | Release notes
Initial release of the architecture catalog!
Published: 12/17/2020 | Modules affected: ecs-deploy-runner, build-helpers | Release notes
- build-helpers: A bug has been fixed in build-packer-artifact where multiple filters were not producing the desired result.
- ecs-deploy-runner: The Dockerfile for the ecs-deploy-runner Docker image has been updated to use the new build-packer-artifact script. The image also now installs Terraform 0.13.5 and newer versions of Terragrunt and Kubergrunt by default.
Published: 12/10/2020 | Modules affected: build-helpers | Release notes
This release fixes a bug in the build-packer-artifact script, where the --idempotency flag did not properly handle images with multiple tags.
Published: 12/10/2020 | Modules affected: ecs-deploy-runner | Release notes
- The default versions of the tools installed in the ecs-deploy-runner Docker containers have been updated: module_ci_version is now v0.29.2, and kaniko is now v1.3.0.
Published: 12/17/2020 | Modules affected: cloudtrail | Release notes
Configures data event logging for cloudtrail buckets, as per the 3.10 and 3.11 requirements of CIS AWS Foundations Benchmark.
Published: 12/15/2020 | Modules affected: cleanup-expired-certs, cloudtrail, cloudwatch-logs-metric-filters | Release notes
- Adds a new module, cleanup-expired-certs, to ensure that all expired SSL/TLS certificates stored in AWS IAM are removed, as per the 1.19 requirement of the CIS AWS Foundations Benchmark.
- Adds a metric filter and alarm for AWS Organizations changes, as per the 4.15 requirement of the CIS AWS Foundations Benchmark.
Published: 12/8/2020 | Modules affected: aurora, rds | Release notes
- You can now tell the rds and aurora modules to ignore changes to the master_password parameter by setting the new ignore_password_changes input variable to true. This is useful when managing the password outside of Terraform, such as with auto-rotating passwords in AWS Secrets Manager.
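For example, when the password is rotated out-of-band in AWS Secrets Manager, the flag might be wired up as follows. This is a minimal sketch: only the ignore_password_changes variable name comes from the release notes; the module source ref and the other inputs shown are illustrative.

module "rds" {
  # Illustrative source; point this at the rds module in module-data-storage at the version you use
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/rds?ref=v0.17.x"

  name            = "example-db"
  master_username = "admin"
  master_password = "managed-in-secrets-manager"  # rotated outside of Terraform

  # Don't treat out-of-band password rotations as drift
  ignore_password_changes = true

  # ... other required inputs (engine, instance type, VPC/subnets, etc.) ...
}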
Published: 12/15/2020 | Modules affected: ecs-cluster | Release notes
You can now enable container insights on the ECS cluster deployed with the ecs-cluster module.
Published: 12/4/2020 | Modules affected: ecs-cluster | Release notes
- You can now configure the ecs-cluster module to create one capacity provider and one ASG per AZ / subnet by setting the multi_az_capacity_provider input variable to true.
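A minimal sketch of turning this on (only the multi_az_capacity_provider variable name comes from the release notes; the source ref and other inputs are illustrative):

module "ecs_cluster" {
  # Illustrative source; point this at the ecs-cluster module in module-ecs at the version you use
  source = "git::git@github.com:gruntwork-io/module-ecs.git//modules/ecs-cluster?ref=v0.23.x"

  cluster_name = "example-ecs-cluster"

  # Create one capacity provider and one ASG per AZ / subnet instead of a single cluster-wide ASG
  multi_az_capacity_provider = true

  # ... other required inputs (instance type, AMI, VPC/subnets, etc.) ...
}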
Published: 12/19/2020 | Modules affected: eks-cluster-managed-workers, eks-cluster-workers, eks-cluster-control-plane | Release notes
- You can now configure the EKS control plane with additional security groups that are managed outside the module. (NOTE: You will need to recreate the EKS cluster to append additional security groups to the control plane).
- Fix a bug where certain cases can cause list indexing errors.
- Various updates to the documentation
Published: 12/16/2020 | Modules affected: eks-cluster-control-plane | Release notes
- This release is a minor bugfix to use the latest kubergrunt (v0.6.8) required dependency.
Published: 12/15/2020 | Modules affected: eks-cluster-control-plane, eks-cluster-workers | Release notes
Various instance parameters are now overrideable in autoscaling_group_configurations. Refer to the updated variable definition for more details on which attributes are available to override.
Published: 12/14/2020 | Modules affected: eks-cluster-control-plane, eks-alb-ingress-controller, All | Release notes
- eks-cluster-control-plane [BACKWARD INCOMPATIBLE]
- eks-alb-ingress-controller [BACKWARD INCOMPATIBLE]
- All other modules (backward compatible changes)
This release includes backward incompatible changes. Please refer to the migration guide below.
Terraform 0.13 upgrade: We have verified that this repo is compatible with Terraform 0.13.x!
- From this release onward, we will be running tests against this repo with Terraform 0.13.x only, so we recommend that you upgrade your local Terraform to 0.13.x soon!
- To give you more time to upgrade, for the time being, all modules still support Terraform 0.12.26 and above, as that version has several features in it (required_providers with source URLs) that make it more forward compatible with 0.13.x.
- Once all Gruntwork module repos have been updated to work with 0.13.x, we will publish a migration guide with a version compatibility table and announce it all via the Gruntwork Newsletter.
Remove references to the following variables from the module block if you have them set.
We can no longer dynamically configure destroy provisioners starting with Terraform 0.13. As a result, we had to remove the ability to dynamically configure the destroy provisioners on the Helm Release in eks-alb-ingress-controller. If you have a need for destroy hooks (such as culling all the Ingress resources prior to destroying the controller), consider using a tool like terragrunt or forking the module to implement it directly.
- var.destroy_lifecycle_environment
- var.destroy_lifecycle_command
We no longer allow kubergrunt_install_dir to be configurable. Kubergrunt is primarily used to clean up leftover resources that would otherwise not be removed when running terraform destroy on the EKS cluster and related resources managed by this module. Because Terraform >= 0.13.0 can no longer reference any variables in destroy provisioners, we must hardcode kubergrunt's install_dir so that this module can reliably call it from a known location to clean up leftover resources.
- var.kubergrunt_install_dir
- These steps assume you have a running EKS cluster that was deployed using an earlier version of terraform-aws-eks and using terraform 0.12.x. The following steps have been verified using terraform 0.12.26, so if you have an older version of terraform, you may run into some unforeseen issues. You may first want to upgrade your terraform to at least 0.12.26 (but still not 0.13.x) before proceeding.
- 🎉 Terraform 0.12.29 handles the changes to state much better than previous versions of 0.12. This means you can probably skip steps 5-7 below!
- If you're using a version of terraform-aws-eks older than v0.29.x, you should address all backward-incompatible changes from your current version to v0.29.x. That means going through every v0.X.0 release.
- Make particular note of changes in v0.28.0: if you're using a version older than this, you can follow the instructions in the release notes for v0.28.0 to ensure your Load Balancer resources are compatible with AWS Load Balancer Controller version 2. Otherwise, you may end up with Load Balancer resources, such as Security Groups, left behind when destroying the EKS cluster using the current (v0.30.0) version.
- Make sure your state files are up to date. Before making changes to upgrade the module code, make sure your state is in sync with the current version of the code by running terraform apply.
- Upgrade. Update the module blocks referencing terraform-aws-eks to version v0.30.0.
- Update providers. Run terraform init. This will update the providers and make no changes to your state.
- Run plan to see errors. Run terraform plan. If you see errors for provider configuration, the next steps help you fix these issues. If you do not see errors, skip to step 8.
First, some background on how the state changes. We've removed data sources and null_resources in this release, so in order to upgrade, you also need to remove these resources from state. It is safe to remove these resources because null_resources are virtual resources in terraform with no cloud resources backing them. In the next step, we've offered an example of the state rm command you need to run, but the prefix for each state address may be different for you. The prefix of each address is the module label you assigned to the block for the eks-cluster-control-plane module. So if you had:
module "eks_cluster" &
source = "git::git@github.com:gruntwork-io/terraform-aws-eks.git
...
&
the prefix will be module.eks_cluster. If you had labeled the module block as my_cluster (e.g., module "my_cluster" {}), the prefix will be module.my_cluster. Reliable ways to figure out the full address:
- Use terraform state list.
- Look at the errors you get from running terraform plan.
- Dry-run state changes. The following is an example of the state change you'll have to make. Run it in -dry-run mode first, and use -backup. Look at the list of modules it will remove and compare to the errors in the previous step. As we remove these resources, the errors will go away.
MODULE_PREFIX='module.eks_cluster'
terraform state rm -dry-run -backup=tfstate.backup \
"$MODULE_PREFIX".null_resource.cleanup_eks_cluster_resources_script_hook \
"$MODULE_PREFIX".module.cleanup_eks_cluster_resources.null_resource.run_pex \
"$MODULE_PREFIX".module.cleanup_eks_cluster_resources.module.pex_env \
"$MODULE_PREFIX".module.cleanup_eks_cluster_resources.module.pex_env.module.os \
"$MODULE_PREFIX".module.cleanup_eks_cluster_resources.module.pex_env.module.pex_module_path.module.os \
"$MODULE_PREFIX".module.cleanup_eks_cluster_resources.module.pex_env.module.python2_pex_path.module.os \
"$MODULE_PREFIX".module.cleanup_eks_cluster_resources.module.pex_env.module.python3_pex_path.module.os \
"$MODULE_PREFIX".null_resource.local_kubectl
- State change. Once all the resources to be removed match the errors you saw, you are ready to run the command without the -dry-run flag to remove those resources from state.
- Re-run terraform plan. You should no longer see those errors. You should not see any changes to add or destroy. You may see some resources that will be updated in place depending on your configuration.
- Run terraform apply to apply any changes and save everything to state. Even if you don't see changes, run apply anyway.
From this point onward, you should be able to make changes to the module as normal.
When you want to destroy the EKS cluster, you can run terraform destroy, which should not only destroy the resources created by the module, but also remove extraneous resources that otherwise wouldn't get cleaned up, such as Security Groups managed by AWS and CoreDNS changes.
Note: At this point terraform 0.14.x has been released, but be aware that these modules have not been tested with it.
These steps assume you've upgraded the modules separately using terraform 0.12.x, preferably 0.12.26 or later, as described in the previous step.
- Upgrade your local terraform to 0.13.x. We've tested with 0.13.4, but later versions should work.
- Run terraform plan.
- If there are any minor changes, go ahead and run terraform apply.
- Note: If in any of the previous commands you get a provider-related error, you may need to run terraform init first.
From this point onward, you should be all good to continue using terraform 0.13.x.
We made big changes to how we clean up leftover resources when running terraform destroy in these modules. While most of the time things will work smoothly, there is a known case with an issue:
If you start with a running cluster that was created with terraform 0.12.x using an older version of the modules (prior to this release), then upgrade to the new module version and terraform 0.13.x as described above, and then try to destroy, the destroy step might not go as planned. If you're spinning up and down a lot of EKS clusters programmatically, resolving errors and timeouts during destroy can be a headache. For these situations, we recommend switching to the new modules and terraform 0.13.x exclusively once you're ready to do so. Destroying a cluster that was deployed using this version of the modules and applied with terraform 0.13.x works much more smoothly.
We've documented specific known issues regarding the destroy below.
- The destroy step depends on Kubergrunt version ~0.6.7. Normally, if you use the eks-cluster-control-plane module with default values for var.auto_install_kubergrunt and var.use_kubergrunt_verification, the right version of kubergrunt will be installed during terraform plan. If you change these values to avoid that install, or if you have installed an older version of kubergrunt, you will get an error when running terraform destroy that advises you to install it. For installation instructions, look here.
- If you have deployed the AWS Load Balancer Ingress Controller (previously called AWS ALB Ingress Controller), you need to undeploy it before destroying the EKS cluster. If the Ingress Controller is still up while the EKS cluster is being destroyed, the clean up routine can deadlock with the controller because the resources being destroyed will be recreated by the controller. The destroy process will eventually time out. For example, if you are destroying an EKS cluster with supporting services (as in these examples: Fargate, Managed Workers), you will need to first destroy nginx-service, then core-services, then eks-cluster.
- Deleting Fargate profiles in AWS can take longer than anticipated. This can result in timeout errors during the destroy process. If you run into this issue, be advised that you may have to re-run terraform destroy a few times before it is able to proceed.
- If you end up needing to re-run terraform destroy multiple times because of timeouts, be advised that you may still have to clean up Security Groups and the VPC associated with your cluster manually in the AWS Console UI. This is because the cleanup process that we run in kubergrunt will not re-run on the next terraform destroy call if the parent resource (the EKS cluster) is already destroyed. The unfortunate consequence is that any VPC you intended to delete will not be cleaned up because the Security Group remains. Since VPCs incur expenses, please make sure to clean up the leftover Security Group and VPC.
Published: 12/14/2020 | Modules affected: eks-cluster-managed-workers | Release notes
- You can now set the capacity_type on the Managed Node Groups created with eks-cluster-managed-workers.
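For example, to run the node group on Spot capacity. This is a minimal sketch: only the capacity_type variable name comes from the release notes; we assume it accepts the EKS API values ON_DEMAND or SPOT, and the source ref and other inputs are illustrative.

module "eks_workers" {
  # Illustrative source; point this at the eks-cluster-managed-workers module at the version you use
  source = "git::git@github.com:gruntwork-io/terraform-aws-eks.git//modules/eks-cluster-managed-workers?ref=v0.30.x"

  # ... cluster name, subnets, and other required inputs ...

  # Assumed to accept the EKS API values ON_DEMAND or SPOT; check the variable definition for
  # whether it applies module-wide or per node group
  capacity_type = "SPOT"
}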
Published: 12/1/2020 | Modules affected: eks-alb-ingress-controller, eks-k8s-cluster-autoscaler, eks-k8s-external-dns, eks-cluster-managed-workers | Release notes
- The type of the pod_tolerations input var was incorrect for eks-alb-ingress-controller, eks-k8s-cluster-autoscaler, and eks-k8s-external-dns.
- eks-cluster-managed-workers now supports specifying launch templates.
Published: 12/1/2020 | Modules affected: logs/load-balancer-access-logs | Release notes
This release contains backwards incompatible changes. Make sure to follow the instructions in the migration guide below!
The load-balancer-access-logs module has been refactored to use the private-s3-bucket module under the hood to configure the access logging S3 bucket.
Published: 12/3/2020 | Modules affected: openvpn-server | Release notes
This release contains backwards incompatible changes. Make sure to follow the instructions in the migration guide below!
The openvpn-server module has been refactored to use the private-s3-bucket module under the hood to configure the S3 bucket.
Published: 12/18/2020 | Modules affected: account-baseline-root, account-baseline-security, iam-access-analyzer-multi-region | Release notes
As part of upgrading the modules to align with CIS 1.3.0 compliance, IAM Access Analyzer needs to be enabled across all AWS regions in use, as recommended.
In this release:
- We've added a new module wrapper, iam-access-analyzer-multi-region, which enables the IAM Access Analyzer service across multiple AWS regions, along with a related example.
- We've updated account-baseline-root and account-baseline-security and their respective code examples to showcase using the new module.
The iam-access-analyzer-multi-region module has been added, but it is disabled at the level of the Landing Zone product (the account-baseline-* modules) for backward compatibility. To enable this feature, set enable_iam_access_analyzer to true in the variables.tf for each of these modules or examples.
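For example, when consuming one of the account baseline modules, the feature might be switched on like this. This is a minimal sketch: only the enable_iam_access_analyzer variable name comes from the release notes; the module label, source placeholder, and other inputs are illustrative.

module "account_baseline_security" {
  # Illustrative source; point this at the account-baseline-security module and version you use
  source = "git::git@github.com:gruntwork-io/<account-baseline-repo>.git//modules/account-baseline-security?ref=<version>"

  # Enable the new iam-access-analyzer-multi-region wrapper (disabled by default for backward compatibility)
  enable_iam_access_analyzer = true

  # ... other required account baseline inputs ...
}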
Published: 12/16/2020 | Modules affected: cloudtrail | Release notes
This release adds support for configuring data event logging for cloudtrail buckets. Data event logging is configured using the newly introduced variables: data_logging_enabled, data_logging_read_write_type, data_logging_include_management_events, data_logging_resource_type, and data_logging_resource_values. For detailed instructions, see the descriptions of these variables.
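A minimal sketch of enabling data event logging for a single S3 bucket (the variable names come from the release notes; the source ref, bucket ARN, and values are illustrative):

module "cloudtrail" {
  # Illustrative source; point this at the cloudtrail module in module-security at the version you use
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/cloudtrail?ref=v0.44.x"

  # ... existing CloudTrail inputs (trail name, S3 bucket, KMS key, etc.) ...

  data_logging_enabled                   = true
  data_logging_read_write_type           = "All"               # log both read and write data events
  data_logging_include_management_events = true
  data_logging_resource_type             = "AWS::S3::Object"
  data_logging_resource_values           = ["arn:aws:s3:::my-example-bucket/"]  # objects in this bucket only
}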
Published: 12/15/2020 | Modules affected: account-baseline-app, account-baseline-root, account-baseline-security, cloudtrail-bucket | Release notes
This fixes a bug that was introduced in v0.44.3, where the cloudtrail module now needed kms:DescribeKey access to the KMS key, which was not provided by default. This release reverts back to the behavior in v0.44.2, unless you enable the following flags:
allow_kms_describe_key_to_external_aws_accounts = true
kms_key_arn_is_alias = true
You can now attach kms:DescribeKey permissions to IAM entities on CMKs managed with kms-master-key by setting cmk_describe_only_user_iam_arns.
Published: 12/10/2020 | Modules affected: cloudtrail | Release notes
This fixes a perpetual diff issue with the cloudtrail module when kms_key_arn is a loose KMS ID (e.g., a KMS Alias).
Published: 12/10/2020 | Modules affected: kms-grant-multi-region | Release notes
kms-grant-multi-region now supports using aliases for KMS Key IDs.
Published: 12/3/2020 | Modules affected: private-s3-bucket | Release notes
- You can now configure the bucket ownership settings using the new bucket_ownership input variable in private-s3-bucket.
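A minimal sketch (only the bucket_ownership variable name comes from the release notes; we assume it accepts the standard S3 object ownership values, and the source ref and bucket name are illustrative):

module "private_bucket" {
  # Illustrative source; point this at the private-s3-bucket module at the version you use
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/private-s3-bucket?ref=v0.44.x"

  name = "my-example-private-bucket"

  # Assumed to accept S3 object ownership values such as "BucketOwnerPreferred" or "ObjectWriter"
  bucket_ownership = "BucketOwnerPreferred"
}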
Published: 12/16/2020 | Modules affected: single-server | Release notes
- Replace template_file usage with locals to avoid data source dependency graphs.
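To illustrate the general pattern (this is not the module's actual code, just a sketch of swapping a template_file data source for a local value):

# Before: rendering user data through the template_file data source adds a data source
# node to the dependency graph
data "template_file" "user_data" {
  template = file("${path.module}/user-data.sh")
  vars = {
    server_name = var.name
  }
}

# After: render the same template in a local value, so no data source is involved
locals {
  user_data = templatefile("${path.module}/user-data.sh", {
    server_name = var.name
  })
}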
Published: 12/18/2020 | Modules affected: services, mgmt, networking, base | Release notes
- The ecs-service module accepts a new optional variable, secrets_access, which can be used to automatically create an IAM policy with GetSecretValue permission on the given secrets (see the sketch after this list).
- Update dependency gruntwork-io/module-ci to v0.29.5 (release notes)
- Update dependency gruntwork-io/terraform-aws-vpc to v0.12.4 (release notes)
- Update dependency gruntwork-io/module-server to v0.9.4 (release notes)
- Update dependency gruntwork-io/module-security to v0.44.5
- Update dependency gruntwork-io/module-ecs to v0.23.3
- Update dependency gruntwork-io/terratest to v0.31.2
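A minimal sketch of the new input (the secrets_access variable name and GetSecretValue behavior come from the release notes; we assume it takes a list of AWS Secrets Manager ARNs, and the ARN below is illustrative):

# In the module block (or Terragrunt inputs) for your ecs-service wrapper:
secrets_access = [
  "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-app/db-password",
]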
Published: 12/15/2020 | Modules affected: base, data-stores, landingzone, mgmt | Release notes
- Update dependency gruntwork-io/module-security: v0.44.3 => v0.44.4 (Release notes: v0.44.4).
Published: 12/14/2020 | Modules affected: base, data-stores, landingzone, mgmt/openvpn-server | Release notes
- Update dependency gruntwork-io/module-security: v0.44.2 => v0.44.3 (Release notes: v0.44.3).
- Update dependency gruntwork-io/terraform-aws-vpc: v0.12.2 => v0.12.3 (Release notes: v0.12.3).
- Update dependency gruntwork-io/module-ci: v0.29.3 => v0.29.4 (Release notes: v0.29.4).
- Update dependency gruntwork-io/terratest: v0.30.23 => v0.31.1 (Release notes: v0.30.24, v0.30.25, v0.30.26, v0.30.27, v0.31.0, v0.31.1).
- Update dependency gruntwork-io/terraform-aws-eks: v0.29.0 => v0.29.1 (Release notes: v0.29.1).
- Update dependency gruntwork-io/kubergrunt: => v0.6.7 (Release notes: v0.6.7).
- Update dependency gruntwork-io/terraform-aws-monitoring: v0.23.4 => v0.24.0 (Release notes: v0.24.0). NOTE: This includes a backwards incompatible change that affects the k8s-service and alb modules. Please read the migration guide in the terraform-aws-monitoring module release notes for more details!
- Update dependency gruntwork-io/module-ecs: v0.23.0 => v0.23.2 (Release notes: v0.23.1, v0.23.2).
- Update dependency gruntwork-io/package-openvpn: v0.12.1 => v0.13.0 (Release notes: v0.13.0). NOTE: This includes a backwards incompatible change that affects the openvpn-server module. Please read the migration guide in the package-openvpn module release notes for more details!
Published: 12/10/2020 | Modules affected: networking/vpc-mgmt, networking/vpc, data-stores, base | Release notes
- Update dependency gruntwork-io/module-data-storage: v0.16.3 => v0.17.1 (Release notes: v0.17.0; v0.17.1).
- Update dependency gruntwork-io/terraform-aws-vpc: v0.11.0 => v0.12.2 (Release notes: v0.12.0; v0.12.1; v0.12.2). NOTE: This includes a backwards incompatible change. Please read the migration guide in the terraform-aws-vpc module release notes for more details!
- Update dependency gruntwork-io/module-security: v0.44.1 => v0.44.2 (release notes).
- Address a silent failure in KMS grant dependencies in the account baseline modules.
Published: 12/4/2020 | Modules affected: data-stores | Release notes
- Exposed SSE algorithm settings in s3-bucket: bucket_sse_algorithm and replica_sse_algorithm.
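A minimal sketch, assuming you consume the s3-bucket module via Terragrunt (the variable names come from the release notes; the values are illustrative):

inputs = {
  # ... existing s3-bucket inputs ...
  bucket_sse_algorithm  = "aws:kms"  # SSE-KMS for the primary bucket
  replica_sse_algorithm = "AES256"   # SSE-S3 for the replica bucket
}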
Published: 12/4/2020 | Modules affected: mgmt, data-stores, base, landingzone | Release notes
- Update dependency gruntwork-io/terragrunt to v0.26.7.
- Access permissions for the access log and replica buckets in s3-bucket are now controlled via the separate input variables access_logging_bucket_policy_statements and replica_bucket_policy_statements instead. This is a backwards incompatible change. See the Migration Guide below.
- Expose bucket ownership settings in s3-bucket via the bucket_ownership, access_logging_bucket_ownership, and replica_bucket_ownership input variables.
Published: 12/1/2020 | Modules affected: data-stores, networking | Release notes
- Expose the Redis parameter group name from the underlying module (input variable parameter_group_name).
- Expose engine_version for Aurora.
- Expose enable_deletion_protection for RDS modules (see the sketch below for example usage).
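For example, in the Terragrunt configuration for an Aurora data store, the newly exposed inputs might be set like this (the variable names come from the release notes; the values are illustrative):

inputs = {
  # ... existing Aurora inputs ...
  engine_version             = "5.7.mysql_aurora.2.09.1"
  enable_deletion_protection = true
}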
Published: 12/17/2020 | Modules affected: vpc-app-network-acls | Release notes
- Fix a bug where vpc-app-network-acls would not work correctly if some of the subnet tiers in the VPC were disabled.
Published: 12/10/2020 | Modules affected: vpc-app | Release notes
- The vpc-app module now allows you to configure the ingress and egress rules for the default Security Group and NACL using the new default_security_group_ingress_rules, default_security_group_egress_rules, default_nacl_ingress_rules, and default_nacl_egress_rules input variables. You can also control tags on these resources using the existing custom_tags input variable.
Published: 12/10/2020 | Modules affected: vpc-flow-logs | Release notes
- Fix a bug in how the vpc-flow-logs module looked up the KMS key when create_resources was set to false.
Published: 12/9/2020 | Modules affected: vpc-app, vpc-mgmt, vpc-mgmt-network-acls | Release notes
- The vpc-app module now allows you to disable any of the three tiers of subnets (public, private-app, private-persistence) by setting the new input variables create_public_subnets, create_private_app_subnets, or create_private_persistence_subnets to false. This is convenient, for example, if you want to create a VPC with no public subnets because you get all public Internet access through some other mechanism (e.g., Direct Connect, VPC peering, etc.). See the sketch after this list.
- IMPORTANT NOTE: as of this release, vpc-mgmt is now deprecated. The main difference between vpc-mgmt and vpc-app was that vpc-app had three tiers of subnets (public, private-app, private-persistence) and vpc-mgmt had two (public, private). As of this release, since vpc-app allows you to disable any of the subnet tiers, it can now support 1, 2, or 3 tiers of subnets, as needed. Therefore, we recommend using vpc-app for all your VPCs in the future. If you're already using vpc-mgmt, we will continue to maintain it for a little while longer, but please be aware that, in a future release, once we feel the new functionality in vpc-app is fully baked, we will remove vpc-mgmt entirely.
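A minimal sketch of a two-tier VPC with no public subnets (the create_*_subnets variable names come from the release notes; the source ref and other inputs are illustrative):

module "vpc" {
  # Illustrative source; point this at the vpc-app module in terraform-aws-vpc at the version you use
  source = "git::git@github.com:gruntwork-io/terraform-aws-vpc.git//modules/vpc-app?ref=v0.12.x"

  vpc_name   = "example-vpc"
  cidr_block = "10.0.0.0/16"

  # Skip the public tier entirely, e.g., because all inbound access comes via Direct Connect or VPC peering
  create_public_subnets              = false
  create_private_app_subnets         = true
  create_private_persistence_subnets = true

  # ... other required inputs ...
}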
Published: 12/3/2020 | Modules affected: vpc-flow-logs | Release notes
This release contains backwards incompatible changes. Make sure to follow the instructions in the migration guide below!
The vpc-flow-logs module has been refactored to use the private-s3-bucket module under the hood to configure the S3 bucket.
Published: 12/15/2020 | Release notes
(no description found in release notes)