Gruntwork release 2020-01
This page lists all the updates to the Gruntwork Infrastructure as Code Library that were released in 2020-01. For instructions on how to use these updates in your code, check out the updating documentation.
Here are the repos that were updated:
Published: 1/17/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/7/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/17/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/10/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/8/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/17/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/8/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/17/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/10/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/8/2020 | Release notes
Since this repo is solely used for examples/demonstrations, and NOT meant for direct production use, we simply publish all changes at v0.0.1, with a date marker for when it was published.
Published: 1/17/2020 | Modules affected: server-group | Release notes
- The server-group module now outputs the instance profile name via the output variable iam_instance_profile_name.
Published: 1/6/2020 | Modules affected: server-group | Release notes
- Remove duplicate comment
- Fix broken URL
Published: 1/27/2020 | Modules affected: iam-policies | Release notes
- The modules under iam-policies now allow you to set the create_resources parameter to false to have the module not create any resources. This is a workaround for Terraform not supporting the count parameter on module { ... } blocks.
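For example, a minimal sketch of the new flag (the source path and ref are placeholders, and the module's other required arguments are omitted):
module "iam_policies" {
  # Placeholder source; point this at the iam-policies module you use.
  source = "git::git@github.com:gruntwork-io/<REPO>.git//modules/iam-policies?ref=<VERSION>"

  # Workaround for the lack of count on module { ... } blocks: when false,
  # the module creates no resources, so you can toggle it per environment.
  create_resources = false
}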
Published: 1/21/2020 | Modules affected: infrastructure-deploy-script, iam-policies | Release notes
- Move docs-generator tests to deptest, since it is still using dep
- Fix broken links
Published: 1/14/2020 | Modules affected: iam-policies, git-helpers | Release notes
- Added a flag to skip running Git pull automatically in git-add-commit-push.
- Documentation improvements.
Published: 1/22/2020 | Modules affected: aws-config | Release notes
Published: 1/7/2020 | Modules affected: cloudtrail | Release notes
The cloudtrail module now lets you define custom metric filters in addition to the default filters required by the Benchmark. Previously this was only available through the cloudwatch-logs-metric-filters module.
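As an illustration only (the variable name and object shape below are hypothetical; check the module README for the actual interface), a custom filter alongside the Benchmark defaults might look like this inside your cloudtrail module block:
# Hypothetical variable name and shape:
custom_metric_filters = {
  ssm-session-started = {
    pattern          = "{ $.eventName = \"StartSession\" }"
    metric_namespace = "CISBenchmark"
    metric_value     = "1"
  }
}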
Published: 1/7/2020 | Modules affected: cloudwatch-logs-metric-filters | Release notes
- Adds the ability to define custom metric filters in addition to the default filters required by the Benchmark. Thanks to @frankzieglermbc for his contribution.
Published: 1/30/2020 | Modules affected: rds | Release notes
- You can now limit the Availability Zones the rds module uses for replicas via the allowed_replica_zones parameter.
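For example, inside your rds module block (the AZ values are illustrative):
# Only place read replicas in these Availability Zones:
allowed_replica_zones = ["us-east-1a", "us-east-1b"]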
Published: 1/27/2020 | Modules affected: lambda-create-snapshot, lambda-copy-shared-snapshot | Release notes
Published: 1/14/2020 | Modules affected: rds | Release notes
This release exposes the ca_cert_identifier argument for aws_db_instance. This argument configures which CA certificate bundle RDS uses. The previous CA bundle expires on March 5, 2020, at which point TLS connections that haven't been updated will break. Refer to the AWS documentation on this.
The argument defaults to rds-ca-2019. Once you run terraform apply with this update, it will update the instance, but the change will not take effect until the next DB maintenance window. You can use apply_immediately=true to restart the instance. Until the instance is restarted, the Terraform plan will show a perpetual diff.
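For example, a sketch of picking up the new bundle immediately in your rds module block (note that apply_immediately triggers an instance restart):
ca_cert_identifier = "rds-ca-2019"  # the new default CA bundle
apply_immediately  = true           # restart now instead of waiting for the maintenance window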
Published: 1/31/2020 | Modules affected: ecs-service | Release notes
This update adds tags for ECS services and task definitions. To add tags to a service, provide a map with the service_tags variable. Similarly, to tag task definitions, provide a map with the task_definition_tags variable. For example:
service_tags = {
foo = "bar"
}
Use the propagate_tags variable to propagate tags to ECS tasks. If you set propagate_tags to SERVICE, the tags from service_tags will be set on tasks. If you want to propagate tags from task definitions, set propagate_tags="TASK_DEFINITION". If you set propagate_tags=null, tasks will be created with no tags. The default is SERVICE.
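Putting the three variables together in an ecs-service module block (tag values are illustrative):
service_tags         = { team = "search" }  # tags for the ECS service
task_definition_tags = { team = "search" }  # tags for the task definition
propagate_tags       = "SERVICE"            # copy service_tags onto each task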
Compatibility note
Tag propagation requires that you adopt the new ARN and resource ID format. If you don't do this, you may encounter the following error:
InvalidParameterException: The new ARN and resource ID format must be enabled to propagate tags. Opt in to the new format and try again.
To opt in to the new format as the account default using the AWS CLI, run the following aws commands:
$ aws ecs put-account-setting-default --name containerInstanceLongArnFormat --value enabled
$ aws ecs put-account-setting-default --name taskLongArnFormat --value enabled
$ aws ecs put-account-setting-default --name serviceLongArnFormat --value enabled
This will set the account default, but note that the setting is per-user, per-region. The commands above should be executed within each region that uses ECS.
Furthermore, you may also need to run the commands for IAM users that already exist in the account but haven't opted in to the new format. To do so, authenticate as the IAM user who will be running Terraform (such as a CI machine user), and use the put-account-setting variant of the command within the appropriate regions. For example:
$ aws --region us-east-2 ecs put-account-setting --name containerInstanceLongArnFormat --value enabled
$ aws --region us-east-2 ecs put-account-setting --name taskLongArnFormat --value enabled
$ aws --region us-east-2 ecs put-account-setting --name serviceLongArnFormat --value enabled
Repeat as necessary for all in-scope regions and IAM users.
Published: 1/30/2020 | Modules affected: ecs-service, ecs-deploy-check-binaries | Release notes
This release introduces support for ECS capacity providers in the ecs-service module. This allows you to provide a strategy for how to run the ECS tasks of the service, such as distributing the load between Fargate and Fargate Spot.
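As a sketch only (the variable name and exact object shape here are assumptions; consult the ecs-service module README for the actual interface), a strategy that keeps a baseline on Fargate and scales out on Fargate Spot might look like:
# Hypothetical variable name and shape:
capacity_provider_strategy = [
  { capacity_provider = "FARGATE",      weight = 1, base = 2 },  # always keep 2 tasks on Fargate
  { capacity_provider = "FARGATE_SPOT", weight = 3, base = 0 },  # run extra load on Spot
]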
Published: 1/20/2020 | Modules affected: ecs-service | Release notes
- Fix description field of the health_check_grace_period_seconds input variable in the ecs-service module
- Add tests verifying that this repo works with Amazon Linux 2
- Fix broken links in the README
Published: 1/21/2020 | Modules affected: eks-cluster-control-plane | Release notes
The eks-cluster-control-plane module now supports specifying a CIDR block to restrict access to the public Kubernetes API endpoint. Note that this only applies to the public endpoint: you cannot yet restrict access by CIDR for the private endpoint.
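As an illustration (the variable name below is an assumption, not confirmed by these notes; see the module README for the actual one), restricting the public endpoint to a single CIDR might look like this in the eks-cluster-control-plane module block:
# Hypothetical variable name; the CIDR value is illustrative:
endpoint_public_access_cidrs = ["203.0.113.0/24"]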
Published: 1/14/2020 | Modules affected: eks-cluster-control-plane, eks-cluster-workers, eks-k8s-role-mapping, eks-k8s-external-dns | Release notes
This release includes the following feature enhancements:
- You can now specify the encryption mode of the root volume for EC2 instances deployed using the eks-cluster-workers module via the cluster_instance_root_volume_encryption input variable. (See the sketch after this entry.)
- You can now define the --txt-owner-id argument using the txt_owner_id input variable for external-dns. This argument is used to uniquely tag DNS records on the Hosted Zone so that multiple instances of external-dns can manage records against the same Hosted Zone.
- The eks-k8s-role-mapping module now outputs the YAML file in a deterministic order. Previously the YAML was non-deterministic, causing potential perpetual diffs when nothing had actually changed.
This release also includes a number of minor bug fixes:
- All examples have been improved to use the correct IAM Role ARN for the EKS role mapping for authentication.
- Broken links in the READMEs have been fixed.
- The root README has an updated architecture diagram for Fargate and Managed Node Groups.
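A sketch of the two new input variables described above (values are illustrative, each line goes in its respective module block, and cluster_instance_root_volume_encryption is assumed here to be a boolean flag):
# In the eks-cluster-workers module block: encrypt worker root volumes
cluster_instance_root_volume_encryption = true

# In the eks-k8s-external-dns module block: unique owner ID for this external-dns instance
txt_owner_id = "eks-prod"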
Published: 1/9/2020 | Modules affected: eks-k8s-role-mapping, eks-k8s-external-dns, eks-k8s-external-dns-iam-policy, eks-k8s-cluster-autoscaler | Release notes
Starting this release, the modules in this repo have official support for Fargate:
- eks-cluster-control-plane now has a new input variable fargate_only, which will create Fargate Profiles for the default and kube-system namespaces so that all Pods in those namespaces are routed to Fargate. This also adjusts the core administrative Pods to run on Fargate so that you can have a functioning EKS cluster without worker nodes. (See the sketch after this list.)
- eks-k8s-external-dns, eks-k8s-cluster-autoscaler, eks-cloudwatch-container-logs, and eks-alb-ingress-controller now support deploying with IAM Roles for Service Accounts, creating the IAM roles inline and associating them with the Service Accounts within the modules.
- The underlying Helm charts used in the modules eks-k8s-external-dns, eks-k8s-cluster-autoscaler, eks-cloudwatch-container-logs, and eks-alb-ingress-controller have been bumped to the most recent versions.
- eks-k8s-external-dns, eks-k8s-cluster-autoscaler, eks-cloudwatch-container-logs, and eks-alb-ingress-controller now support scheduling on Fargate if you have mixed worker pools.
- eks-k8s-external-dns-iam-policy, eks-k8s-cluster-autoscaler-iam-policy, and eks-alb-ingress-controller-iam-policy now support conditionally turning off creation of the IAM policy with the create_resources input variable.
- The worker IAM role is no longer required for eks-k8s-role-mapping.
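A minimal sketch of running a worker-free cluster with the new flag (the source path and ref are placeholders, and other required arguments are omitted):
module "eks_cluster" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-eks.git//modules/eks-cluster-control-plane?ref=<VERSION>"

  # Create Fargate Profiles for the default and kube-system namespaces so the
  # cluster can run without any worker nodes.
  fargate_only = true
}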
Published: 1/7/2020 | Modules affected: eks-cluster-managed-workers | Release notes
This release introduces a new module, eks-cluster-managed-workers, which provisions EKS Managed Node Groups. This is an alternative worker pool to the existing eks-cluster-workers module that has some nice properties. You can read more about the differences from self-managed workers in the module README.
Published: 1/6/2020 | Modules affected: eks-k8s-role-mapping, eks-cluster-control-plane, eks-cloudwatch-container-logs | Release notes
- The Python scripts used in eks-k8s-role-mapping and eks-cluster-control-plane no longer support Mac OSX 12. If you are on OSX 12, please use prior versions of this module or upgrade your OSX version.
- The Python scripts used in eks-k8s-role-mapping and eks-cluster-control-plane now support Python 3.8.
- The EKS components upgrade script now updates to the latest versions of the components mentioned in https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html
- The EKS components upgrade script now supports skipping the wait for rollout. This is useful in the initial rollout when worker nodes may not be ready.
- The fluentd Helm charts used in eks-cloudwatch-container-logs have been updated to the latest version.
Published: 1/27/2020 | Modules affected: (none) | Release notes
- No changes to underlying modules.
- Fix broken links in READMEs
Published: 1/27/2020 | Modules affected: logs | Release notes
- Fix broken links in READMEs
- Fix incorrect syntax in the event metric filter pattern
Published: 1/17/2020 | Modules affected: alarms/sqs-alarms | Release notes
- Update the SQS alarm period to a minimum of 60 seconds, as AWS now pushes those metrics at one-minute intervals.
Published: 1/6/2020 | Modules affected: logs/cloudwatch-log-aggregation-iam-policy | Release notes
The logs/cloudwatch-log-aggregation-iam-policy module can now be conditionally excluded based on the create_resources input variable. When create_resources is false, the module will not create any resources and becomes a no-op.
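For example, a sketch of toggling the module per environment (var.enable_cloudwatch_log_aggregation is a hypothetical wrapper variable, not part of the module):
# When the flag is false, the module creates nothing and is a no-op:
create_resources = var.enable_cloudwatch_log_aggregation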
Published: 1/21/2020 | Modules affected: init-openvpn | Release notes
- You can now configure a custom MTU for OpenVPN to use via the --link-mtu parameter.
Published: 1/9/2020 | Modules affected: openvpn-server | Release notes
- You can now run your VPN server on spot instances by specifying the spot_price input variable.
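For example, in your openvpn-server module block (the price is illustrative, and the value type is assumed here to be a string):
# Maximum hourly price to bid for the spot instance:
spot_price = "0.0116"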
Published: 1/30/2020 | Modules affected: cloudtrail | Release notes
This release fixes a bug where the cloudtrail module sometimes fails because it cannot see the IAM role that grants access to CloudWatch Logs.
Published: 1/29/2020 | Modules affected: codegen/generator | Release notes
None of the Terraform modules have been updated in this release. The codegen generator Go library has been updated to allow rendering explicit blocks at the end of main.tf and outputs.tf, separate from each region configuration.
Published: 1/29/2020 | Modules affected: aws-organizations | Release notes
- Addresses an issue of a perpetual diff with the AWS Organization child account property iam_user_access_to_billing.
Published: 1/25/2020 | Modules affected: guardduty-single-region, guardduty-multi-region, aws-config, aws-config-multi-region | Release notes
This release introduces a new module, aws-config-multi-region, which can be used to configure AWS Config in multiple regions of an account.
The following additional fixes are also included in this release:
- The guardduty-multi-region module now supports automatically detecting which regions are enabled on your account. This means that you no longer need to manually maintain the opt_out_regions list.
- Fix a bug in the aws-config module where the aws_config_delivery_channel resource sometimes fails due to a race condition with the IAM policy to write to SNS.
- Fix broken links in numerous READMEs.
Published: 1/21/2020 | Modules affected: ssh-grunt, kms-master-key, guardduty-multi-region | Release notes
Published: 1/13/2020 | Modules affected: guardduty-single-region, guardduty-multi-region | Release notes
- guardduty-single-region [NEW]
- guardduty-multi-region [NEW]
- New modules for configuring AWS GuardDuty, a service for detecting threats and continuously monitoring your AWS accounts and workloads for malicious activity and unauthorized behavior.
- https://github.com/gruntwork-io/module-security/pull/193
Published: 1/5/2020 | Modules affected: single-server | Release notes
- You can now enable EC2 Instance Termination Protection using the new disable_api_termination input variable.
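For example, in your single-server module block (assuming the variable is a boolean flag):
# Enable termination protection so the instance can't be terminated via the API:
disable_api_termination = true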
Published: 1/27/2020 | Modules affected: s3-static-website | Release notes
- Fix a few broken links in READMEs
- Update CODEOWNERS
Published: 1/6/2020 | Release notes
run-pex-as-resource now outputs pex_done, which can be used as a dependency for linking resources that depend on the pex script being run.
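A sketch of wiring the new output into a dependency (module.run_pex is a hypothetical module label for wherever you call run-pex-as-resource):
resource "null_resource" "after_pex" {
  # Referencing pex_done in triggers forces Terraform to wait until the pex
  # script has run before creating this resource.
  triggers = {
    pex_done = module.run_pex.pex_done
  }
}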
Published: 1/27/2020 | Release notes
- Fix broken links in READMEs
Published: 1/6/2020 | Modules affected: vpc-mgmt, vpc-app | Release notes
Now vpc-app and vpc-mgmt create a single VPC endpoint for all tiers. Previously we created separate endpoints per tier, but that makes it more likely to hit AWS's limit on VPC endpoints per region as you add more VPCs, which does not scale. By consolidating, we bring the VPC endpoint count per VPC down from 6 to 2.
NOTE: Since the VPC endpoints need to be recreated with this change, existing VPCs will experience a brief outage when trying to reach these endpoints (S3 and DynamoDB) while the endpoints are recreated during the upgrade to this release. This cannot be avoided: you can only have one VPC endpoint per route table, so the new consolidated endpoints cannot be created before the old ones are removed. You can expect up to 10 seconds of endpoint access downtime while Terraform performs the recreation.