Gruntwork release 2017-03
This page lists all the updates to the Gruntwork Infrastructure as Code Library that were released in 2017-03. For instructions on how to use these updates in your code, check out the updating documentation.
Here are the repos that were updated:
Published: 3/9/2017 | Release notes
Published: 3/8/2017 | Release notes
Published: 3/17/2017 | Release notes
Published: 3/3/2017 | Release notes
https://github.com/gruntwork-io/module-ci/pull/22: The scheduled-lambda-job module now makes running in a VPC optional. It exposes a new input variable called run_in_vpc which, if set to true, will give the lambda function access to a VPC you specify via the vpc_id and subnet_ids input variables. However, by default, it's set to false, and you can omit vpc_id and subnet_ids.
This is useful for lambda functions that use the AWS APIs and don't need direct access to a VPC anyway. Moreover, a recent bug in Terraform causes issues when you try to delete a lambda function that was deployed into a VPC.
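For example, here is a minimal sketch of opting in to VPC access; run_in_vpc, vpc_id, and subnet_ids come from the release note above, while the module source path, version tag, and the surrounding variable values are assumptions for illustration.
```hcl
module "scheduled_lambda_job" {
  # The exact source path and version tag are assumptions for illustration.
  source = "git::git@github.com:gruntwork-io/module-ci.git//modules/scheduled-lambda-job?ref=v0.x.x"

  # New in this release: opt in to running the lambda function inside a VPC.
  run_in_vpc = true
  vpc_id     = "${var.vpc_id}"
  subnet_ids = ["${var.private_subnet_ids}"]

  # Other required inputs of the module are omitted here.
}
```
If your lambda function only calls AWS APIs, you can leave run_in_vpc at its default of false and omit vpc_id and subnet_ids entirely.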
Published: 3/31/2017 | Release notes
Published: 3/8/2017 | Release notes
https://github.com/gruntwork-io/module-data-storage/pull/14: To allow the bastion host to talk to RDS or Aurora, you now have to explicitly set the allow_connections_from_bastion_host input variable to true. Before, we only exposed the bastion_host_security_group_id input variable, but if you fed dynamic data into that variable (e.g. from a terraform_remote_state data source), you'd get an error. This is now fixed.
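A minimal sketch of the new pattern, assuming the bastion host's security group ID is read from a terraform_remote_state data source; the backend settings, module source path, and the remote state output name are assumptions for illustration.
```hcl
data "terraform_remote_state" "bastion" {
  # Backend settings are assumptions for illustration.
  backend = "s3"
  config {
    bucket = "my-terraform-state"
    key    = "bastion/terraform.tfstate"
    region = "us-east-1"
  }
}

module "aurora" {
  # The exact source path and version tag are assumptions for illustration.
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/aurora?ref=v0.x.x"

  # New in this release: explicitly opt in to connections from the bastion host.
  allow_connections_from_bastion_host = true
  bastion_host_security_group_id      = "${data.terraform_remote_state.bastion.bastion_security_group_id}"

  # Other required inputs of the module are omitted here.
}
```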
Published: 3/6/2017 | Release notes
Published: 3/3/2017 | Release notes
https://github.com/gruntwork-io/module-data-storage/pull/12: We've added four new modules (see the wiring sketch after this list):
- lambda-create-snapshot: A lambda function that runs on a scheduled basis to take snapshots of an RDS DB. Useful if the once-nightly snapshots aren't enough and, even more importantly, this is the first step if you want to back up your snapshots to another AWS account.
- lambda-share-snapshot: A lambda function that can share an RDS snapshot with another AWS account. This is the second step in backing up your snapshots to another AWS account.
- lambda-copy-snapshot: A lambda function that runs on a scheduled basis to make local copies of RDS snapshots shared from an external AWS account. This is the third step, and it needs to run in the AWS account you're using to back up your snapshots.
- lambda-cleanup-snapshots: A lambda function that runs on a scheduled basis to delete old RDS snapshots. You configure it with a maximum number of snapshots to keep, and once that number is exceeded, it deletes the oldest snapshots. This is useful to keep the number of snapshots from steps 1 and 3 above from getting out of hand.
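As a rough sketch, the first two steps might be wired up in the source account as shown below; the module source paths and every input variable name shown (rds_db_identifier, schedule_expression, target_account_id) are hypothetical placeholders for illustration, not the modules' documented interfaces.
```hcl
# Hypothetical wiring of the snapshot backup workflow in the source AWS account.
# All input variable names below are illustrative placeholders, not the
# modules' documented interfaces.

module "create_snapshot" {
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/lambda-create-snapshot?ref=v0.x.x"

  rds_db_identifier   = "${var.rds_db_identifier}" # the RDS DB to snapshot (placeholder name)
  schedule_expression = "rate(6 hours)"            # how often to take snapshots (placeholder name)
}

module "share_snapshot" {
  source = "git::git@github.com:gruntwork-io/module-data-storage.git//modules/lambda-share-snapshot?ref=v0.x.x"

  rds_db_identifier = "${var.rds_db_identifier}" # placeholder name
  target_account_id = "${var.backup_account_id}" # the account to share snapshots with (placeholder name)
}

# lambda-copy-snapshot and lambda-cleanup-snapshots would then run in the
# backup account to copy the shared snapshots locally and prune old ones.
```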
Published: 3/9/2017 | Release notes
Published: 3/1/2017 | Release notes
Published: 3/29/2017 | Release notes
Published: 3/9/2017 | Release notes
https://github.com/gruntwork-io/module-load-balancer/pull/9
BREAKING CHANGE
Two bug fixes:
- Due to a Terraform bug with merge and zipmap, some of the listener outputs were simply disappearing. For example, if your ALB had only HTTP listeners, the outputs for the HTTPS listeners would disappear, as would the aggregate output that contained both HTTP and HTTPS listeners. Since we have other modules that depend on these outputs, this made the ALB unusable. As a result, the listener_arns and https_listener_arns outputs have been removed. The available outputs are now http_listener_arns, https_listener_non_acm_cert_arns, and https_listener_acm_cert_arns (see the sketch after this list).
- There was a bug in the previous release that caused an error to show up any time you tried to use an ACM cert. This has now been fixed.
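If you referenced the removed outputs in downstream code, you'd switch to the new, more specific ones. This is a minimal sketch; the output names come from the release note above, while the exact shape of each output (a map or list of listener ARNs) is an assumption here.
```hcl
# Before (no longer available):
#   "${module.alb.listener_arns}"
#   "${module.alb.https_listener_arns}"

# After: reference the more specific outputs instead.
output "alb_http_listener_arns" {
  value = "${module.alb.http_listener_arns}"
}

output "alb_https_listener_acm_cert_arns" {
  value = "${module.alb.https_listener_acm_cert_arns}"
}
```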
Published: 3/8/2017 | Release notes
https://github.com/gruntwork-io/module-load-balancer/pull/8: To add an HTTPS listener, the ALB module originally had you pass in the https_listener_ports_and_ssl_certs input variable, which was a map of HTTPS ports to the ARNs of TLS certs (e.g. 443 = "arn:aws:acm:us-east-1:123456789012:certificate/12345678"). The module now exposes a new input variable called https_listener_ports_and_acm_ssl_certs, which is a more user-friendly map of HTTPS ports to the domain name of a TLS cert issued by the AWS Certificate Manager (e.g. 443 = *.foo.com).
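A minimal sketch of the two styles side by side; the two input variable names and their map shapes come from the release note, while the module source path, version tag, and the example values are assumptions for illustration. In practice you would typically use one style or the other, not both.
```hcl
module "alb" {
  # The exact source path and version tag are assumptions for illustration.
  source = "git::git@github.com:gruntwork-io/module-load-balancer.git//modules/alb?ref=v0.x.x"

  # Original style: map HTTPS ports to full TLS cert ARNs.
  https_listener_ports_and_ssl_certs = {
    "443" = "arn:aws:acm:us-east-1:123456789012:certificate/12345678"
  }

  # New, more user-friendly style: map HTTPS ports to ACM cert domain names.
  https_listener_ports_and_acm_ssl_certs = {
    "443" = "*.foo.com"
  }

  # Other required inputs of the module are omitted here.
}
```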
Published: 3/5/2017 | Release notes
Published: 3/28/2017 | Release notes
Published: 3/24/2017 | Release notes
Published: 3/23/2017 | Release notes
Published: 3/19/2017 | Release notes
- ENHANCEMENT: The tls-cert-private module can now generate a TLS certificate that is valid for multiple domain names.
Published: 3/2/2017 | Release notes
Published: 3/1/2017 | Release notes
- NEW MODULE: We are pleased to introduce the os-hardening module!
This module is our first step in providing a path to using a hardened OS Image based on the Center for Internet Security Benchmarks. These Benchmarks are freely downloadable and specific to a technology, which makes them straightforward to reference.
At present, we support only a hardened OS for Amazon Linux, though we are open to adding support for additional OSes if customers request it. The primary OS hardening implemented in this release is the ability to create multiple disk partitions on the root volume in a Packer build, and mount each disk partition to a file system path with unique mount options.
For example, we can now mount /tmp to its own disk partition so that a runaway program that fills up all of /tmp will not affect disk space available on other paths like /var/log where logs are stored. In addition, we can mount /tmp with the nosuid, nodev, and noexec options, which mean, respectively, that no file in /tmp is allowed to assume the permissions of its file owner (a security risk), no external devices (like a block device) can be attached under /tmp, and no files in /tmp can be executed.
Published: 3/1/2017 | Release notes
https://github.com/gruntwork-io/module-security/pull/15: Added support for easy cross-account access. You can now define all your IAM users in one AWS account (e.g. a users account), give those IAM users access to specific IAM roles in your other AWS accounts (e.g. a stage or prod account), and they will be able to switch accounts in the AWS console with just a few clicks.
To use this, you need to configure the new iam_groups_for_cross_account_access input variable in the iam-groups module in your users account and deploy the new cross-account-iam-roles module in the stage and prod accounts.
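Roughly, the setup in the users account might look like the sketch below; the module source path, the version tag, and the shape of iam_groups_for_cross_account_access (the group_name and iam_role_arns fields, the account IDs, and the role names) are assumptions for illustration, not the module's documented interface.
```hcl
# In the users account: define IAM groups whose members can assume IAM roles
# in other AWS accounts. The shape of this variable is an assumption for
# illustration, not the module's documented interface.
module "iam_groups" {
  source = "git::git@github.com:gruntwork-io/module-security.git//modules/iam-groups?ref=v0.x.x"

  iam_groups_for_cross_account_access = [
    {
      group_name    = "access-stage"
      iam_role_arns = ["arn:aws:iam::111111111111:role/allow-full-access-from-other-accounts"]
    },
    {
      group_name    = "access-prod"
      iam_role_arns = ["arn:aws:iam::222222222222:role/allow-read-only-access-from-other-accounts"]
    },
  ]
}

# In the stage and prod accounts, you would also deploy the new
# cross-account-iam-roles module to create the roles being assumed.
```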
Published: 3/17/2017 | Release notes
- BUG FIX: The mount-ebs-volume script in the persistent-ebs-volume module now correctly formats a volume with xfs. Previously, it worked only for ext4.
Published: 3/9/2017 | Release notes
- ENHANCEMENT: The persistent-ebs-volume module's mount script now supports a parameter that specifies file system mounting options, and explicitly supports creating file systems of type XFS.
Previously, you could pass in alternative file systems to this script, but since even blank EBS Volumes are formatted as ext4 by default, the script would not attempt to format the EBS Volume with the new file system type. That is now fixed.
Published: 3/7/2017 | Release notes
Published: 3/7/2017 | Release notes
Published: 3/7/2017 | Release notes