Gruntwork release 2019-02
This page lists all the updates to the Gruntwork Infrastructure as Code Library that were released in 2019-02. For instructions on how to use these updates in your code, check out the updating documentation.
Here are the repos that were updated:
Published: 2/14/2019 | Release notes
Published: 2/21/2019 | Modules affected: server-group | Release notes
Published: 2/24/2019 | Modules affected: terraform-helpers | Release notes
Published: 2/20/2019 | Modules affected: ecs-cluster | Release notes
Published: 2/20/2019 | Modules affected: ecs-cluster | Release notes
- The ecs-cluster module now exposes its launch configuration via the output ecs_cluster_launch_configuration_id. This allows subscribing to changes in the launch configuration to automatically roll out cluster changes.
- https://github.com/gruntwork-io/module-ecs/pull/119
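For example, here is a minimal sketch of one way to wire that output into a roll-out step; the module label and roll-out command are illustrative, not part of the release:

```hcl
# Re-run a cluster roll-out whenever the launch configuration changes.
# "module.ecs_cluster" and the script name are assumptions for illustration.
resource "null_resource" "roll_out_on_launch_config_change" {
  triggers = {
    launch_configuration_id = module.ecs_cluster.ecs_cluster_launch_configuration_id
  }

  provisioner "local-exec" {
    command = "./redeploy-ecs-cluster.sh" # hypothetical roll-out script
  }
}
```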
Published: 2/20/2019 | Modules affected: lambda | Release notes
- Thanks to @josh-taylor for the contribution.
Published: 2/22/2019 | Modules affected: alb | Release notes
Published: 2/7/2019 | Modules affected: alb-alarms | Release notes
- Fix errors in the new connection count and low request count alarms to remove the "client-tls-negotiation-error" portion that was accidentally copy/pasted into them.
Published: 2/4/2019 | Modules affected: alarms/alb-alarms, alarms/alb-target-group-alarms, alarms/rds-alarms | Release notes
- The alarms in alb-alarms, alb-target-group-alarms, and rds-alarms now support directly setting the datapoints_to_alarm setting. You can read more about datapoints_to_alarm in the official AWS documentation.
Special thanks to @ksemaev for these contributions.
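As a rough sketch, passing the new setting through looks like the following; the source ref and the remaining required inputs are omitted or assumed:

```hcl
module "alb_alarms" {
  # Pin to the release you're using; path follows the "alarms/alb-alarms" module listed above
  source = "git::git@github.com:gruntwork-io/module-aws-monitoring.git//modules/alarms/alb-alarms?ref=<VERSION>"

  # ... other required inputs for your ALB and alarm SNS topics go here ...

  # Only alarm once this many data points within the evaluation period breach the threshold
  datapoints_to_alarm = 3
}
```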
Published: 2/20/2019 | Modules affected: gruntsam | Release notes
- This release adds support for Lambda Layers in the gruntsam utility. Refer to the README for more information.
Published: 2/18/2019 | Modules affected: fail2ban, os-hardening | Release notes
- Update the fail2ban module so it works properly on Amazon Linux 2. We've also updated how we install it on Ubuntu (using pip to install aws instead of apt) and changed the jail files a bit to take advantage of fail2ban interpolation.
- Update the ami-builder in os-hardening to support a new parallel_build param that lets you control whether the builds run in parallel. It's set to true by default, as before, but you may need to disable it for use with NVMe.
- Call udevadm settle in the partition-volume script to ensure all symlinks are in place before going on to subsequent steps (e.g., formatting).
Published: 2/11/2019 | Modules affected: iam-groups | Release notes
The iam-groups module now creates an additional IAM group that has the iam-user-self-mgmt IAM policy already attached, making it easier to associate the rules of that policy with an IAM user via the group.
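One way to put a user into that new group is sketched below; the output name used for the group is an assumption, not a documented output of the module:

```hcl
# Add an existing IAM user to the new self-management group created by iam-groups.
# The output name "iam_self_mgmt_iam_group_name" is an assumption for illustration.
resource "aws_iam_group_membership" "self_mgmt" {
  name  = "iam-self-mgmt-membership"
  group = module.iam_groups.iam_self_mgmt_iam_group_name
  users = ["alice"]
}
```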
Published: 2/20/2019 | Modules affected: persistent-ebs-volume | Release notes
This release introduces automated tests for the NVMe features of the mount-ebs-volume and unmount-ebs-volume scripts. Refer to the new section in the module documentation for how to use the scripts with NVMe block devices: How do you use this on Nitro-based instances?
Published: 2/12/2019 | Modules affected: s3-cloudfront | Release notes
Published: 2/6/2019 | Modules affected: k8s-service-account, k8s-namespace-roles | Release notes
- This release adds another set of permissions to the rbac_tiller_resource_access role that allows Tiller to manage PodDisruptionBudgets.
- In the k8s-tiller-minikube example, the Tiller undeploy sometimes fails because the service account role is removed before undeploy, stripping the Tiller pod of its ability to nuke itself. This fixes that by adding a depends_on to the service account output so that the role binding is only deleted once all resources referencing the service account have been deleted.
Published: 2/5/2019 | Modules affected: k8s-namespace, k8s-namespace-roles | Release notes
- We broke out the role creation pieces of k8s-namespace into its own submodule, k8s-namespace-roles. This allows you to create the same roles on a preexisting namespace (e.g., default or kube-system).
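A minimal sketch of applying the roles to a preexisting namespace follows; the source path and the input variable name are assumptions:

```hcl
module "kube_system_roles" {
  # Source path is illustrative; pin to the release you're using
  source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-namespace-roles?ref=<VERSION>"

  # Create the default roles against the preexisting kube-system namespace
  namespace = "kube-system"
}
```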
Published: 2/5/2019 | Modules affected: k8s-namespace | Release notes
This introduces an example Terraform module that deploys Tiller using kubergrunt. The example also shows how to set up a Namespace and ServiceAccount for Tiller. See the example quickstart guide for how you can combine the modules in this repo with kubergrunt to deploy a best-practices Tiller instance.
Other changes:
- k8s-namespace now exports additional roles: namespace-tiller-metadata-access, which grants Tiller the minimal permissions needed to manage its Secrets, and namespace-tiller-resource-access, which grants the minimal permissions needed to deploy resources from Helm charts into a target namespace.
Published: 2/2/2019 | Modules affected: k8s-namespace, k8s-service-account | Release notes
- k8s-namespace and k8s-service-account now implement the input variable dependencies, which can be used to specify module dependencies.
- k8s-service-account now also requires RBAC role namespaces to be included when binding RBAC roles. This allows binding roles that are not in the same namespace as the created ServiceAccount. As a result, the rbac_roles input variable is now a list of maps containing the keys name and namespace.
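Here is a rough sketch of the new rbac_roles shape and the dependencies input; the source paths, the other variable names, and the namespace module's output name are assumptions:

```hcl
module "tiller_namespace" {
  source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-namespace?ref=<VERSION>"

  name = "tiller-world"
}

module "tiller_service_account" {
  source = "git::git@github.com:gruntwork-io/terraform-kubernetes-helm.git//modules/k8s-service-account?ref=<VERSION>"

  name      = "tiller"
  namespace = module.tiller_namespace.name # output name assumed

  # rbac_roles is now a list of maps with the keys "name" and "namespace",
  # so you can bind roles that live in a different namespace than the ServiceAccount
  rbac_roles = [
    {
      name      = "namespace-tiller-metadata-access" # role exported by k8s-namespace (see above)
      namespace = "tiller-world"
    },
  ]

  # The new "dependencies" input can be used to express module dependencies
  dependencies = [module.tiller_namespace.name]
}
```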