Gruntwork release 2020-06
Guides / Update Guides / Releases / 2020-06
This page lists all the updates to the Gruntwork Infrastructure as Code
Library that were released in 2020-06. For instructions
on how to use these updates in your code, check out the updating
documentation.
Here are the repos that were updated:
Published: 6/12/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Updates in this version:
- Update EKS modules to latest version.
- Update k8s-service to use Helm v3.
- Update k8s-service to use latest chart versions.
Refer to the migration guide in infrastructure-modules-multi-account-acme for instructions on how to update existing reference architectures.
Published: 6/8/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Updates in this version:
- Fix compatibility issues with latest terragrunt
- Bump instances to the t3 class.
Published: 6/12/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Updates in this version:
- Update EKS modules to latest version.
- Update k8s-service to use Helm v3.
- Update k8s-service to use latest chart versions.
If you would like to take an existing Reference Architecture and update to this version, see the guide below.
IMPORTANT: This has been updated to allow upgrades post deprecation of helm v2 repository.
If you are running an EKS flavored Reference Architecture deployed prior to this release (all Reference Architectures before 06/11/2020), you can follow the guides in the following order to update your EKS cluster to this version.
This upgrade moves you to Kubernetes 1.16, the Gruntwork terraform-aws-eks
module to v0.20.1, and Helm 3. You will first update the cluster itself, then the core services, and finally, your own services that run in the cluster.
NOTE: You must fully roll out the changes at each bullet point prior to moving on to the next step, unless stated otherwise.
- Update your EKS cluster to run Kubernetes version 1.14 (instructions). Note that you must update the module versions to upgrade beyond 1.14, so if you want to upgrade to 1.15 and 1.16, wait until the end of the guide.
- Upgrade the Gruntwork library modules eks-cluster-control-plane and eks-cluster-workers in the eks-cluster service module to version v0.9.8 (instructions).
- Update the eks-clusters service module (instructions).
- At this point, you can repeat step (1) to upgrade the Kubernetes version to 1.15 and then 1.16.
- Upgrade the k8s-service service module to use Helm v3 (instructions). This must be rolled out to ALL your services before you can move on to the next step.
- Update k8s-service to use chart version 0.1.0 (instructions).
- Update the eks-core-services service module (instructions).
- Update the k8s-namespace-with-tiller module to remove references to Tiller (instructions).
Published: 6/8/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Updates in this version:
- Support for nvme-cli.
- Bump instances to t3.micro.
- Bump to latest module-ci for jenkins-server.
- Bug fixes with Helm.
- Bug fixes in tls-scripts.
- Compatibility update for the latest terragrunt version.
- Update default Kubernetes version to 1.14.
Published: 6/17/2020 | Modules affected: asg-rolling-deploy | Release notes
The variable aws_region was removed from the module; its value is now retrieved from the region configured on the provider. When updating to this new version, make sure to remove the aws_region parameter from the module call.
Published: 6/14/2020 | Modules affected: asg-rolling-deploy | Release notes
- You can now configure the asg-rolling-deploy module to NOT use ELB health checks during a deploy by setting the use_elb_health_checks variable to false. This is useful for testing connectivity before health check endpoints are available.
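As a rough sketch (the module source ref and the surrounding variables are placeholders; see the asg-rolling-deploy module docs for the full interface), disabling ELB health checks looks like:

```hcl
# Sketch only: source ref and other variables are placeholders.
module "asg" {
  source = "git::git@github.com:gruntwork-io/module-asg.git//modules/asg-rolling-deploy?ref=<VERSION>"

  # Skip ELB health checks during the deploy, e.g. to test connectivity
  # before health check endpoints are available.
  use_elb_health_checks = false

  # ... launch configuration, min/max size, and other required variables ...
}
```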
Published: 6/25/2020 | Modules affected: memcached | Release notes
- Updated the memcached module to support passing an empty list of allowed CIDR blocks.
Published: 6/24/2020 | Modules affected: git-helpers, terraform-helpers | Release notes
terraform-update-variable now supports committing updates to a separate branch. Note that as part of this change, the --skip-git option has been updated to take a value instead of being a bare flag. If you were using the --skip-git flag previously, you will now need to pass --skip-git true.
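For example (the --name and --value flags here are illustrative; check the script's help output for the exact interface):

```
# Before: --skip-git was a bare flag
terraform-update-variable --name ami_version --value v1.2.3 --skip-git

# After: --skip-git takes an explicit value
terraform-update-variable --name ami_version --value v1.2.3 --skip-git true
```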
Published: 6/2/2020 | Modules affected: ecs-deploy-runner | Release notes
- Added ecs_task_iam_role_arn as an output on the ecs-deploy-runner module.
Published: 6/22/2020 | Modules affected: rds, aurora | Release notes
- rds [BREAKING CHANGES]
- aurora [BREAKING CHANGES]

The rds and aurora modules have been updated to remove redundant/duplicate resources by taking advantage of Terraform 0.12 syntax (i.e., for_each, null defaults, and dynamic blocks). This greatly simplifies the code and makes it more maintainable, but because many resources were renamed, this is a backwards incompatible change, so make sure to follow the migration guide below when upgrading!
All input and output variables are the same, so you will not need to make any code changes. There are no changes in functionality either, so there shouldn't be anything new to apply (i.e., when you finish the migration, the plan should show no changes). The only thing that changed in this upgrade is that several resources were renamed in the Terraform code, so you'll need to update your Terraform state so it knows about the new names. You do this using the state mv command (Note: if you're using Terragrunt, replace terraform with terragrunt in all the commands in this migration guide):
terraform state mv OLD_ADDRESS NEW_ADDRESS
Where OLD_ADDRESS is the resource address with the old resource name and NEW_ADDRESS is the resource address with the new name. The easiest way to get the old and new addresses is to upgrade to the new version of this module and run terraform plan. When you do so, you'll see output like this:
$ terraform plan
[...]
# module.aurora_serverless.aws_rds_cluster.cluster will be created
+ resource "aws_rds_cluster" "cluster" {
+ apply_immediately = false
+ arn = (known after apply)
+ availability_zones = (known after apply)
+ backup_retention_period = 21
+ cluster_identifier = "aurora-serverless-example"
+ cluster_identifier_prefix = (known after apply)
+ cluster_members = (known after apply)
[...]
# module.aurora_serverless.aws_rds_cluster.cluster_with_encryption_serverless[0] will be destroyed
- resource "aws_rds_cluster" "cluster_with_encryption_serverless" {
- apply_immediately = false -> null
- arn = "arn:aws:rds:us-east-1:087285199408:cluster:aurora-serverless-example" -> null
- availability_zones = [
- "us-east-1a",
- "us-east-1b",
- "us-east-1e",
] -> null
- backtrack_window = 0 -> null
- backup_retention_period = 21 -> null
- cluster_identifier = "aurora-serverless-example" -> null
The lines that show resources being removed (with a - in front of them) show the old addresses in a comment above the resource:
# module.aurora_serverless.aws_rds_cluster.cluster_with_encryption_serverless[0] will be destroyed
- resource "aws_rds_cluster" "cluster_with_encryption_serverless" {
And the lines that show the very same resources being added (with a + in front of them) show the new addresses in a comment above the resource:
# module.aurora_serverless.aws_rds_cluster.cluster will be created
+ resource "aws_rds_cluster" "cluster" {
You'll want to run terraform state mv (or terragrunt state mv) on each pair of these resources:
terraform state mv \
module.aurora_serverless.aws_rds_cluster.cluster_with_encryption_serverless[0] \
module.aurora_serverless.aws_rds_cluster.cluster
Here are the renames that have happened:
Old resource name | New resource name |
---|---|
aws_rds_cluster.cluster_with_encryption_global_primary | aws_rds_cluster.cluster |
aws_rds_cluster.cluster_with_encryption_global_secondary | aws_rds_cluster.cluster |
aws_rds_cluster.cluster_with_encryption_serverless | aws_rds_cluster.cluster |
aws_rds_cluster.cluster_with_encryption_provisioned | aws_rds_cluster.cluster |
aws_rds_cluster.cluster_without_encryption | aws_rds_cluster.cluster |
aws_db_instance.primary_with_encryption | aws_db_instance.primary |
aws_db_instance.primary_without_encryption | aws_db_instance.primary |
aws_db_instance.replicas_with_encryption | aws_db_instance.replicas |
aws_db_instance.replicas_without_encryption | aws_db_instance.replicas |
When you've run terraform state mv on all the pairs of resources, you know you've done it correctly if you can run terraform plan and see no changes:
$ terraform plan
[...]
------------------------------------------------------------------------
No changes. Infrastructure is up-to-date.
Published: 6/17/2020 | Modules affected: aurora | Release notes
- Improved the Aurora documentation and added a dedicated Aurora Serverless example. This release also adds support for specifying a scaling_configuration_timeout_action when using the aurora module in serverless mode.
Published: 6/17/2020 | Modules affected: efs | Release notes
- The efs module can now create EFS access points and corresponding IAM policies for you. Use the efs_access_points input variable to specify what access points you want and configure the user settings, root directory, read-only access, and read-write access for each one.
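A sketch of what this might look like (the exact shape of efs_access_points is defined by the efs module; the field names below are illustrative, mirroring the settings mentioned above):

```hcl
# Sketch only: field names are illustrative; check the efs module docs
# for the exact shape of the efs_access_points variable.
module "efs" {
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/efs?ref=<VERSION>"

  name = "example"
  # ... VPC and subnet settings elided ...

  efs_access_points = {
    app = {
      root_directory    = "/app"
      posix_user        = { uid = 1000, gid = 1000 }
      read_write_access = ["arn:aws:iam::<ACCOUNT_ID>:role/app-task-role"]
      read_only_access  = []
    }
  }
}
```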
Published: 6/14/2020 | Modules affected: rds | Release notes
Published: 6/4/2020 | Modules affected: rds | Release notes
- Fix issue where restoring from snapshot wasn't setting master_password.
Published: 6/30/2020 | Modules affected: ecs-service | Release notes
- The ecs-service module now allows you to mount EFS volumes in your ECS tasks (including Fargate tasks) using the new efs_volumes input variable. See also the efs module for creating EFS volumes.
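A sketch of mounting an EFS volume in a task (field names in efs_volumes are illustrative; consult the ecs-service module docs for the exact variable shape):

```hcl
# Sketch only: field names are illustrative.
module "ecs_service" {
  source = "git::git@github.com:gruntwork-io/module-ecs.git//modules/ecs-service?ref=<VERSION>"

  service_name = "example"
  # ... cluster ARN, task definition, and other required variables ...

  efs_volumes = {
    shared_data = {
      file_system_id = module.efs.id  # e.g., an output from the efs module
      container_path = "/mnt/shared"
      read_only      = false
    }
  }
}
```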
Published: 6/17/2020 | Modules affected: ecs-cluster | Release notes
- The ecs-cluster module now attaches the ecs:UpdateContainerInstancesState permission to the ECS cluster's IAM role. This is required for automated ECS instance draining (e.g., when receiving a spot instance termination notice).
Published: 6/8/2020 | Modules affected: ecs-cluster | Release notes
- Added new module output ecs_instance_iam_role_id, which contains the ID of the aws_iam_role mapped to ECS instances.
Published: 6/5/2020 | Modules affected: ecs-service | Release notes
You can now bind different containers and ports to each target group created for the ECS service. This can be used to expose multiple containers or ports to existing ALBs or NLBs.
Published: 6/17/2020 | Modules affected: eks-k8s-external-dns, eks-alb-ingress-controller | Release notes
eks-k8s-external-dns now uses a more up-to-date Helm chart to deploy external-dns. Additionally, you can now configure the logging format between text and json.
eks-alb-ingress-controller now supports selecting a different container version of the ingress controller. This can be used to deploy the v2 alpha image with shared ALB support.
Published: 6/11/2020 | Modules affected: eks-cluster-control-plane | Release notes
The control plane Python PEX binaries now support long path names on Windows. Previously the scripts were causing errors when attempting to unpack the dependent libraries.
Published: 6/2/2020 | Modules affected: eks-cloudwatch-container-logs, eks-cluster-control-plane | Release notes
The cluster upgrade script now supports updating to Kubernetes version 1.16. The eks-cloudwatch-container-logs module is also now compatible with Kubernetes version 1.16.
Published: 6/1/2020 | Modules affected: lambda-edge, lambda | Release notes
The lambda and lambda-edge modules now support configuring a dead letter queue for subscribing to errors from the functions.
Published: 6/22/2020 | Modules affected: sqs | Release notes
The sqs module can now be turned off by setting create_resources = false. When this option is passed in, the module will disable all of its resources, effectively simulating a conditional.
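Since create_resources is a boolean input of the module, a minimal sketch (source ref and other variables are placeholders) looks like:

```hcl
# Sketch only: source ref and other variables are placeholders.
module "sqs" {
  source = "git::git@github.com:gruntwork-io/package-messaging.git//modules/sqs?ref=<VERSION>"

  name = "example-queue"

  # Terraform 0.12 does not support count on module blocks, so this flag
  # simulates a conditional: when false, no resources are created.
  create_resources = var.queue_enabled
}
```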
Published: 6/3/2020 | Modules affected: sns | Release notes
- The sns module will now allow display names of up to 100 characters.
Published: 6/16/2020 | Modules affected: account-baseline-security | Release notes
As outlined in the AWS docs, the key policy in the security account should allow trail/* so that all trails in external accounts can use the key for encryption (but not decryption). Without this, running the account baseline in a sub account results in InsufficientEncryptionPolicyException.
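The relevant key policy statement, expressed as an aws_iam_policy_document statement block, looks roughly like this (the account ID is a placeholder, and the exact actions and conditions should be taken from the AWS CloudTrail KMS documentation):

```hcl
# Sketch only: allow CloudTrail trails in an external account to encrypt
# with the key by matching trail/* in the encryption context.
statement {
  sid     = "AllowExternalAccountTrailsToEncrypt"
  effect  = "Allow"
  actions = ["kms:GenerateDataKey*"]

  principals {
    type        = "Service"
    identifiers = ["cloudtrail.amazonaws.com"]
  }

  condition {
    test     = "StringLike"
    variable = "kms:EncryptionContext:aws:cloudtrail:arn"
    values   = ["arn:aws:cloudtrail:*:<EXTERNAL_ACCOUNT_ID>:trail/*"]
  }
}
```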
Published: 6/14/2020 | Modules affected: iam-users | Release notes
- The iam-users module can now associate a public SSH key with each IAM user using the ssh_public_key parameter.
Published: 6/2/2020 | Modules affected: account-baseline-app, account-baseline-security, kms-master-key-multi-region, cloudtrail | Release notes
This minor release includes a number of documentation changes and renamed files.
- vars.tf has been renamed to variables.tf throughout the repository.
- The suggestion to set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables has been dropped, since most users now use aws-auth or aws-vault.
- Added documentation on using 1Password with aws-auth.
Published: 6/26/2020 | Modules affected: single-server | Release notes
- Added iam_role_name and iam_role_arn outputs to the single-server module.
- Updated the repo README to the new format.
Published: 6/26/2020 | Modules affected: vpc-dns-forwarder-rules, vpc-dns-forwarder, vpc-flow-logs | Release notes
This release adds the ability to create tags with the modules mentioned above.
Published: 6/14/2020 | Modules affected: vpc-interface-endpoint | Release notes
- The vpc-interface-endpoint module now supports endpoints for SSM, SSM Messages, and EC2 Messages.