Gruntwork release 2020-05
This page lists all the updates to the Gruntwork Infrastructure as Code Library that were released in 2020-05. For instructions on how to use these updates in your code, check out the updating documentation.
Here are the repos that were updated:
Published: 5/26/2020 | Release notes
Published: 5/6/2020 | Release notes
Published: 5/15/2020 | Modules affected: server-group | Release notes
- You can now enable encryption for the root block device by using the root_block_device_encrypted input variable.
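As a rough illustration, the new variable would be set in your server-group module block something like this (the module source and other arguments are placeholders, not taken from the release notes):

```hcl
# Sketch only: the module source and other arguments are placeholders.
module "server_group" {
  source = "<path or URL to the server-group module>"

  # ... other required arguments for your deployment ...

  # New in this release: encrypt the root block device of each instance
  root_block_device_encrypted = true
}
```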
Published: 5/29/2020 | Modules affected: ecs-deploy-runner | Release notes
ecs-deploy-runner now outputs the security group used by the ECS task so that you can append additional rules to it.
Published: 5/28/2020 | Modules affected: jenkins-server | Release notes
This release bumps the version of the ALB module used by Jenkins to v0.20.1
to fix an issue related to outputs from the ALB module.
Migration guide
The jenkins-server module no longer takes the aws_account_id variable. To update to this release, do not pass the variable as an input.
Published: 5/27/2020 | Modules affected: infrastructure-deployer, ecs-deploy-runner | Release notes
The infrastructure-deployer now supports selecting the container to run in a multi-container deployment for the ecs-deploy-runner. Note that this version of the infrastructure-deployer is only compatible with an ecs-deploy-runner that is deployed with this version.
Published: 5/26/2020 | Modules affected: ecs-deploy-runner, infrastructure-deploy-script | Release notes
The infrastructure-deploy-script now supports running destroy. Note that the threat model of running destroy in the CI/CD pipeline is not well thought out, and doing so is not recommended. Instead, directly call the ECS task to run destroy using privileged credentials.
Published: 5/15/2020 | Modules affected: build-helpers | Release notes
build-packer-artifact now supports building a Packer template from a git repository. See the updated docs for more info.
Published: 5/15/2020 | Modules affected: ecs-deploy-runner | Release notes
ecs-deploy-runner now supports specifying multiple container images and choosing a container image based on a user-defined name. This allows you to configure and use different Docker containers for different purposes in your infrastructure pipeline.
Published: 5/7/2020 | Modules affected: infrastructure-deployer, infrastructure-deploy-script, install-jenkins | Release notes
- The CLI arg for setting the log level in infrastructure-deployer and infrastructure-deploy-script has been renamed to --log-level instead of --loglevel.
- The infrastructure-deploy-script no longer supports passing in the private SSH key via CLI args. You must pass it in with the environment variable DEPLOY_SCRIPT_SSH_PRIVATE_KEY.
- install-jenkins will automatically disable Jenkins so that it won't start on boot. This ensures that Jenkins will not be started unless it has been successfully configured with run-jenkins. To get the previous behavior, pass in --module-param "run-on-boot=true".
Published: 5/8/2020 | Modules affected: aws-securityhub | Release notes
aws-securityhub no longer depends on Python to get enabled regions, and instead uses a Terraform-native data source.
Published: 5/28/2020 | Modules affected: aurora | Release notes
- You can now enable cross-region replication for Aurora by setting source_region and replication_source_identifier to the region and ARN, respectively, of a primary Aurora DB.
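A minimal sketch of what a cross-region replica configuration might look like (the module source, region, and ARN below are placeholder values):

```hcl
# Sketch only: the module source, region, and ARN are placeholders.
module "aurora_replica" {
  source = "<path or URL to the aurora module>"

  # Region and ARN of the primary Aurora cluster to replicate from
  source_region                 = "us-east-1"
  replication_source_identifier = "arn:aws:rds:us-east-1:111122223333:cluster:example-primary"

  # ... other required arguments for your deployment ...
}
```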
Published: 5/26/2020 | Modules affected: aurora | Release notes
- Allow changing the auto minor version upgrade behavior
Published: 5/18/2020 | Modules affected: efs | Release notes
- Bugfix for EFS: create mount targets in correct security group
Published: 5/14/2020 | Modules affected: efs | Release notes
Published: 5/5/2020 | Modules affected: aurora | Release notes
- You can now pass in an optional list of IAM roles to attach to the Aurora cluster using the new cluster_iam_roles input variable.
Published: 5/1/2020 | Modules affected: rds, aurora | Release notes
You can now provide an existing DB subnet group to use with the RDS clusters instead of creating a new one.
Published: 5/9/2020 | Modules affected: ecs-service | Release notes
You can now configure the platform version of ECS Fargate using the platform_version variable.
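For example, assuming a typical ecs-service module block (the source and other arguments are placeholders):

```hcl
# Sketch only: the module source and other arguments are placeholders.
module "ecs_service" {
  source = "<path or URL to the ecs-service module>"

  # New in this release: pin the Fargate platform version explicitly
  platform_version = "1.4.0"

  # ... other required arguments for your deployment ...
}
```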
Published: 5/29/2020 | Modules affected: eks-cluster-workers, eks-cluster-control-plane, eks-k8s-role-mapping | Release notes
This release introduces first-class support for using the EKS cluster security group with self-managed workers:
- The eks-cluster-control-plane module now outputs the cluster security group ID so that you can extend it with additional rules.
- The eks-cluster-workers module now appends the cluster security group to the nodes instead of rolling out its own group by default. Note that it still creates its own group to make it easier to append rules that apply only to the self-managed workers.
This release also fixes a bug with the eks-k8s-role-mapping module, where previously it did not support including the Fargate execution role. If you don't include the Fargate execution role in the mapping, Terraform may delete the configuration that enables Fargate to communicate with the Kubernetes API as workers.
Published: 5/28/2020 | Modules affected: eks-k8s-role-mapping | Release notes
eks-k8s-role-mapping is now a pure Terraform module and no longer uses Python to assist in generating the role mapping. Note that this will cause a drift in the configuration state because some of the attributes have been reorganized, but the configuration is semantically equivalent (thus the rollout is backwards compatible).
Published: 5/16/2020 | Modules affected: eks-cluster-workers | Release notes
You can now specify the max_instance_lifetime on the autoscaling group created with eks-cluster-workers.
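A rough sketch of the new setting (the module source and the lifetime value are placeholders):

```hcl
# Sketch only: the module source and other arguments are placeholders.
module "eks_workers" {
  source = "<path or URL to the eks-cluster-workers module>"

  # New in this release: recycle worker instances at least every 30 days
  # (the underlying ASG setting is expressed in seconds)
  max_instance_lifetime = 2592000

  # ... other required arguments for your deployment ...
}
```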
Published: 5/8/2020 | Modules affected: eks-cluster-control-plane | Release notes
The eks-cluster-control-plane module will now automatically download and install kubergrunt if it is not available on the target system. This behavior can be disabled by setting the input variable auto_install_kubergrunt to false.
This release also includes several documentation fixes to READMEs of various modules.
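If you prefer to manage kubergrunt yourself, the opt-out might look roughly like this (the module source and other arguments are placeholders):

```hcl
# Sketch only: the module source and other arguments are placeholders.
module "eks_cluster" {
  source = "<path or URL to the eks-cluster-control-plane module>"

  # Disable the new automatic kubergrunt download/install behavior
  auto_install_kubergrunt = false

  # ... other required arguments for your deployment ...
}
```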
Published: 5/7/2020 | Modules affected: lambda | Release notes
The lambda module is now more robust to partial failures. Previously you could end up in a state where you couldn't apply or destroy the module if it had only partially applied the resources, due to output errors. This release addresses that by changing the output logic.
Note that previously this module output null for all the outputs when create_resources was false. However, with this release the outputs are converted to "". If you depended on the behavior of null outputs, you will need to adjust your code to convert null checks to "" checks.
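For example, if you gate downstream logic on one of the module's outputs, the check changes shape roughly as follows (a sketch: the module source and the function_arn output name are stand-ins, not taken from the release notes):

```hcl
# Sketch only: the source and the "function_arn" output name are placeholders
# for whichever lambda module output you consume.
module "lambda" {
  source           = "<path or URL to the lambda module>"
  create_resources = false
  # ... other required arguments ...
}

locals {
  # Before this release the disabled outputs were null:
  #   lambda_created = module.lambda.function_arn != null
  # With this release they are empty strings instead:
  lambda_created = module.lambda.function_arn != ""
}
```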
Published: 5/20/2020 | Modules affected: alb | Release notes
- ALB outputs have been adjusted to use for syntax as opposed to zipmap for the listener port => cert ARN mapping. This was due to an obscure Terraform bug that is not yet fixed/released.
Published: 5/29/2020 | Modules affected: alarms | Release notes
- Added alarms for Replica Lag and Replication Errors.
Published: 5/19/2020 | Modules affected: alarms | Release notes
- Update README.md (fixes minor typo)
- Add RDS storage alarms for Aurora engine type
Published: 5/4/2020 | Modules affected: metrics, logs | Release notes
- The install.sh scripts for the cloudwatch-log-aggregation-scripts, syslog, and cloudwatch-memory-disk-metrics-scripts modules were unnecessarily using eval to execute scripts used in the install steps. This led to unexpected behavior, such as --module-param arguments being shell expanded. We've removed the calls to eval and replaced them with a straight call to the underlying scripts.
This release is marked as backwards incompatible, but this only applies if you were (intentionally or otherwise) relying on the eval behavior (which is not likely or recommended!).
Published: 5/26/2020 | Modules affected: account-baseline-app, account-baseline-security, kms-master-key-multi-region, kms-master-key | Release notes
kms-master-key now supports configuring service principal permissions with conditions. As part of this change, the way CloudTrail is set up in the Landing Zone modules has been updated to better support the multi-account configuration. Refer to the updated docs on multi-account CloudTrail for more information.
Published: 5/21/2020 | Modules affected: cloudtrail, account-baseline-app, account-baseline-root, account-baseline-security | Release notes
The cloudtrail module now supports reusing an existing KMS key in your account, as opposed to creating a new one. To use an existing key, set the kms_key_already_exists variable to true and provide the ARN of the key in the kms_key_arn variable.
Note that as part of this change, the aws_account_id variable was removed from the module; it will now look up the account ID based on the configured authentication credentials of the provider. Remove the variable from your module block to have a backwards compatible deployment.
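Putting both changes together, an upgraded module block might look roughly like this (the module source and key ARN are placeholders):

```hcl
# Sketch only: the module source and the key ARN are placeholders.
module "cloudtrail" {
  source = "<path or URL to the cloudtrail module>"

  # Reuse an existing CMK instead of creating a new one
  kms_key_already_exists = true
  kms_key_arn            = "arn:aws:kms:us-east-1:111122223333:key/example-key-id"

  # Note: aws_account_id is no longer accepted; the account ID is read from
  # the provider credentials.

  # ... other required arguments for your deployment ...
}
```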
Published: 5/19/2020 | Modules affected: iam-policies, account-baseline-root | Release notes
- The iam-policies module now allows sts:TagSession for the automation users.
- In v0.29.0, we updated account-baseline-app and account-baseline-security to allow for centralizing Config output in a single bucket. In this release, we take the same approach with account-baseline-root: it now supports using the Config bucket in the security account.
Migration guide
To centralize logs in S3, use the same migration guide as in v0.29.0.
To keep logs in the existing S3 bucket and make no change, set should_create_s3_bucket=true.
Published: 5/7/2020 | Modules affected: iam-policies | Release notes
This release grants permissions to describe/list EKS clusters to the read-only policy.
Published: 5/4/2020 | Modules affected: aws-config, aws-config-multi-region, account-baseline-security, account-baseline-root | Release notes
The aws-config module has been refactored to better support multi-region, multi-account configurations. Previously, running the aws-config-multi-region module would create an S3 bucket, an IAM role, and an SNS topic in each region. When run in multiple accounts, such as when using the Gruntwork Reference Architecture, each account would have the aforementioned resources in each region. This configuration was impractical to use, since Config would be publishing data to dozens of buckets and topics, making it difficult to monitor and triage.
With this release, the aws-config-multi-region module has been modified as follows:
- Only one IAM role is created. The AWS Config configuration recorder in each region assumes this role.
- One S3 bucket is created in the same region as the global_recorder_region. The AWS Config configuration recorder in each region writes to this bucket.
- One SNS topic is created per region. According to the AWS documentation, the topic must exist in the same region as the configuration recorder.
- An aggregator resource is created to capture Config data from all regions into the global_recorder_region. The aggregated view in the AWS console interface will show results from all regions.
In addition, the account-baseline-* modules can now be configured in the following way:
- The account-baseline-security module can be configured as the “central” account in which to aggregate all other accounts.
- The account-baseline-app module can be configured to use the central/security account.
In this configuration, the central account will be configured with an S3 bucket in the same region as the global_recorder_region and an SNS topic in each region. Any account configured with account-baseline-app can publish to the S3 bucket in the central account and send SNS notifications to the topic in the corresponding region of the central account. In addition, all configuration recorders across all accounts will be aggregated into the global_recorder_region of the central account.
Migration guide
First, remove the now-unused regional AWS Config buckets from the Terraform state so that the data remains intact. If you don't need the data, you can delete the buckets after removing them from the Terraform state. If you're using bash, the following loop should do the trick:
for region in eu_north_1 eu_west_3 ap_southeast_2 ap_southeast_1 eu_west_1 us_east_2 sa_east_1 ap_northeast_2 ca_central_1 ap_south_1 eu_central_1 ap_northeast_1 us_east_1 eu_west_2 us_west_2 us_west_1; do
  terraform state rm "module.config.module.aws_config_${region}"
done
Find additional migration instructions below for the modules affected by this change.
For aws-config:
- s3_bucket_name remains a required variable.
- If should_create_s3_bucket=true (the default), an S3 bucket will be created. If it is false, AWS Config will be configured to use an existing bucket with the name provided by s3_bucket_name.
- sns_topic_name is now optional. If sns_topic_name is provided, an SNS topic will be created. If sns_topic_arn is provided, AWS Config will be configured to use that topic.
- If should_create_iam_role is true (the default), an IAM role will be created with the default name of AWSConfigRole.
For aws-config-multi-region:
- global_recorder_region is no longer required. The default is now us-east-1.
- The name_prefix variable has been removed.
- s3_bucket_name is now required. In addition, if should_create_s3_bucket=true (the default), an S3 bucket will be created in the same region as global_recorder_region. If should_create_s3_bucket=false, the configuration recorder will be configured to use an existing bucket with the name provided by s3_bucket_name.
- If a list of account IDs is provided in the linked_accounts variable, the S3 bucket and SNS topic policies will be configured to allow write access from those accounts.
- If an account ID is provided in the central_account_id variable, AWS Config will be configured to publish to the S3 bucket and SNS topic in that account (see the sketch after this list).
- If kms_key_arn is provided, the S3 bucket and SNS topic will be encrypted with the provided key. If kms_key_arn is left as null, the S3 bucket will be encrypted with the default aws/s3 key, and the SNS topic will not be encrypted.
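To tie these options together, here is a rough sketch of aws-config-multi-region in a non-central account that publishes to a central account (the module source, bucket name, and account ID are placeholders):

```hcl
# Sketch only: the module source, bucket name, and account ID are placeholders.
module "aws_config" {
  source = "<path or URL to the aws-config-multi-region module>"

  global_recorder_region = "us-east-1"

  # Use the existing central bucket rather than creating one in this account
  should_create_s3_bucket = false
  s3_bucket_name          = "example-central-config-bucket"

  # Publish Config data to the bucket and SNS topics in the central account
  central_account_id = "111111111111"
}
```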
For account-baseline-security:
- If a list of account IDs is provided in config_linked_accounts, those accounts will be granted access to the S3 bucket and SNS topic in the security account.
- If the config_s3_bucket_name variable is provided, the S3 bucket will be created with that name. If no name is provided, the bucket will have the default name of ${var.name_prefix}-config.
For account-baseline-app:
- The config_central_account_id variable should be configured with the ID of the account that contains the S3 bucket and SNS topic. This will typically be the account that is configured with account-baseline-security.
- If the config_s3_bucket_name variable is provided, AWS Config will be configured to use that name (but the bucket will not be created within the account). If no name is provided, AWS Config will be configured to use a default name of ${var.name_prefix}-config. This bucket must already exist and should have appropriate permissions to allow access from this account. To set up permissions, provide this account's ID in the config_linked_accounts variable of the account-baseline-security module.
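A rough end-to-end sketch of the pairing described above, with the security account acting as the central Config account (module sources, names, and account IDs are placeholders):

```hcl
# Sketch only: module sources, names, and account IDs are placeholders.

# In the security (central) account: create the Config bucket and SNS topics
# and allow the app accounts to write to them.
module "security_baseline" {
  source = "<path or URL to the account-baseline-security module>"

  name_prefix            = "example"
  config_linked_accounts = ["222222222222"] # app account IDs
  # config_s3_bucket_name defaults to "${var.name_prefix}-config" if not set
}

# In each app account: point AWS Config at the central account's bucket/topics.
module "app_baseline" {
  source = "<path or URL to the account-baseline-app module>"

  name_prefix               = "example"
  config_central_account_id = "111111111111" # the security account ID
}
```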
Published: 5/8/2020 | Modules affected: enabled-aws-regions | Release notes
Published: 5/7/2020 | Modules affected: executable-dependency | Release notes
- Added a new module called executable-dependency that can be used to install an executable if it's not installed already. This is useful if your Terraform code depends on external dependencies, such as terraform-aws-eks, which depends on kubergrunt.
Published: 5/29/2020 | Modules affected: vpc-peering | Release notes
The vpc-peering module can now optionally create resources using the create_resources variable. This weird parameter exists solely because Terraform does not support conditional modules. Therefore, this is a hack to allow you to conditionally decide if the VPC Peering function and other resources should be created or not.
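As a sketch, disabling the module in an environment that doesn't need peering might look like this (the module source and other arguments are placeholders):

```hcl
# Sketch only: the module source and other arguments are placeholders.
module "vpc_peering" {
  source = "<path or URL to the vpc-peering module>"

  # Terraform modules can't be made conditional directly, so this flag lets
  # you skip creating the peering resources entirely
  create_resources = false

  # ... other required arguments for your deployment ...
}
```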
Published: 5/14/2020 | Modules affected: vpc-app | Release notes
- This fixes a bug with vpc-app: previously, the DynamoDB endpoint routes mistakenly referenced the S3 endpoint.
Special thanks to @jdhornsby for the fix!