Knowledge Base

terraform-aws-ecs v0.25.0 upgrade issues

Answer

We are upgrading [terraform-aws-ecs](https://github.com/gruntwork-io/terraform-aws-ecs) `terraform-aws-ecs.git//modules/ecs-service` from version 0.23.4 to 0.25.0 (we can't use 0.24.0 due to the bug with the `service_autoscaling_iam_role_arn` output). When we update the version, with no other changes in our Terraform file, it wants to remove our autoscaling role. We have `use_auto_scaling` set to `true`. Why is the autoscaling role getting removed? Is there a way to prevent this, or is the autoscaling role supposed to be removed? `terraform plan` gives this output:

```
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
  - destroy
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.ecs_service.aws_ecs_service.service_with_auto_scaling[0] will be updated in-place
  ~ resource "aws_ecs_service" "service_with_auto_scaling" {
        cluster                            = "arn:aws:ecs:us-east-1:XXXX:cluster/xxxx-stage"
        deployment_maximum_percent         = 200
        deployment_minimum_healthy_percent = 100
        desired_count                      = 1
        enable_ecs_managed_tags            = false
        enable_execute_command             = false
        health_check_grace_period_seconds  = 15
        iam_role                           = "aws-service-role"
        id                                 = "arn:aws:ecs:us-east-1:XXXXX:service/XXXXX-stage"
        launch_type                        = "FARGATE"
        name                               = "XXXXX-stage"
        platform_version                   = "1.4.0"
        propagate_tags                     = "NONE"
        scheduling_strategy                = "REPLICA"
        tags                               = {}
        tags_all                           = {}
      ~ task_definition                    = "arn:aws:ecs:us-east-1:XXXXX:task-definition/XXXXX-stage:1220" -> (known after apply)
        wait_for_steady_state              = false

        deployment_circuit_breaker {
            enable   = false
            rollback = false
        }

        deployment_controller {
            type = "ECS"
        }

        load_balancer {
            container_name   = "branchcms-stage"
            container_port   = 3000
            target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:XXXXX:targetgroup/branchcms-stage/XXXXX"
        }

        network_configuration {
            assign_public_ip = true
            security_groups  = [
                "sg-XXXXXX",
            ]
            subnets          = [
                "subnet-XXXXX",
                "subnet-XXXXX",
                "subnet-XXXXX",
                "subnet-XXXXX",
                "subnet-XXXXX",
                "subnet-XXXXX",
            ]
        }
    }

  # module.ecs_service.aws_ecs_task_definition.task must be replaced
-/+ resource "aws_ecs_task_definition" "task" {
      ~ arn                   = "arn:aws:ecs:us-east-1:XXXXX:task-definition/XXXXX-stage:1220" -> (known after apply)
      ~ container_definitions = jsonencode(
          ~ [ # forces replacement
              ~ {
                    cpu              = 1024
                    environment      = [
                        {
                            name  = "AWS_REGION"
                            value = "us-east-1"
                        },
                        {
                            name  = "BACKEND_PORT"
                            value = "80"
                        },
                        {
                            name  = "DB_URL"
                            value = "aurora-stage.cluster-XXXXX.us-east-1.rds.amazonaws.com"
                        },
                        {
                            name  = "DEPLOY_ENV"
                            value = "stage"
                        },
                        {
                            name  = "FTP_DB_URL"
                            value = "site-files-ftp-user-db.cluster-XXXXX.us-east-1.rds.amazonaws.com"
                        },
                        {
                            name  = "REDIS_URL"
                            value = "redis-stage.XXXX.ng.0001.use1.cache.amazonaws.com"
                        },
                        {
                            name  = "SITE_FILES_DYNAMODB_META"
                            value = "branchms-files-stage"
                        },
                        {
                            name  = "SITE_FILES_LAMBDA_IMAGE_PROCESS"
                            value = "BranchCMS-Files-S3-Image-Process"
                        },
                        {
                            name  = "SITE_FILES_S3_BUCKET"
                            value = "fcms-stage"
                        },
                        {
                            name  = "SSL_S3_BUCKET"
                            value = "branchcms-stage-ssl-certificates"
                        },
                        {
                            name  = "TASK_ROLE_ARN"
                            value = "arn:aws:iam::XXXXX:role/branchcms-stage-stage-task"
                        },
                    ]
                    essential        = true
                  ~ image            = "826252675753.dkr.ecr.us-east-1.amazonaws.com/XXXXXXX" -> "826252675753.dkr.ecr.us-east-1.amazonaws.com/branchcms:XXXXXXX"
                    logConfiguration = {
                        logDriver = "awslogs"
                        options   = {
                            awslogs-group         = "branchcms-stage"
                            awslogs-region        = "us-east-1"
                            awslogs-stream-prefix = "branchcms-stage-ecs-fargate"
                        }
                    }
                    memory           = 2048
                    mountPoints      = []
                    name             = "branchcms-stage"
                    portMappings     = [
                        {
                            containerPort = 3000
                            hostPort      = 3000
                            protocol      = "tcp"
                        },
                    ]
                    volumesFrom      = []
                } # forces replacement,
            ]
        )
        cpu                      = "1024"
        execution_role_arn       = "arn:aws:iam::XXXXXX:role/branchcms-stage-stage-task-execution-role"
        family                   = "branchcms-stage"
      ~ id                       = "branchcms-stage" -> (known after apply)
        memory                   = "2048"
        network_mode             = "awsvpc"
        requires_compatibilities = [
            "FARGATE",
        ]
      ~ revision                 = 1220 -> (known after apply)
        skip_destroy             = false
      - tags                     = {} -> null
      ~ tags_all                 = {} -> (known after apply)
        task_role_arn            = "arn:aws:iam::XXXXXXX:role/branchcms-stage-stage-task"
    }

  # module.ecs_service.aws_iam_role.ecs_service_autoscaling_role[0] will be destroyed
  - resource "aws_iam_role" "ecs_service_autoscaling_role" {
      - arn                   = "arn:aws:iam::XXXXX:role/branchcms-stage-stage-autoscaling" -> null
      - assume_role_policy    = jsonencode(
            {
              - Statement = [
                  - {
                      - Action    = "sts:AssumeRole"
                      - Effect    = "Allow"
                      - Principal = {
                          - Service = "application-autoscaling.amazonaws.com"
                        }
                      - Sid       = ""
                    },
                ]
              - Version   = "2012-10-17"
            }
        ) -> null
      - create_date           = "2020-07-08T11:27:16Z" -> null
      - force_detach_policies = false -> null
      - id                    = "branchcms-stage-stage-autoscaling" -> null
      - managed_policy_arns   = [] -> null
      - max_session_duration  = 3600 -> null
      - name                  = "branchcms-stage-stage-autoscaling" -> null
      - path                  = "/" -> null
      - tags                  = {} -> null
      - tags_all              = {} -> null
      - unique_id             = "AROAWMHU6GFAHMLXERNTM" -> null

      - inline_policy {
          - name   = "branchcms-stage-ecs-service-autoscaling-policy" -> null
          - policy = jsonencode(
                {
                  - Statement = [
                      - {
                          - Action   = [
                              - "ecs:UpdateService",
                              - "ecs:DescribeServices",
                            ]
                          - Effect   = "Allow"
                          - Resource = "*"
                          - Sid      = ""
                        },
                      - {
                          - Action   = "cloudwatch:DescribeAlarms"
                          - Effect   = "Allow"
                          - Resource = "*"
                          - Sid      = ""
                        },
                    ]
                  - Version   = "2012-10-17"
                }
            ) -> null
        }
    }

  # module.ecs_service.aws_iam_role_policy.ecs_service_autoscaling_policy[0] will be destroyed
  - resource "aws_iam_role_policy" "ecs_service_autoscaling_policy" {
      - id     = "branchcms-stage-stage-autoscaling:branchcms-stage-ecs-service-autoscaling-policy" -> null
      - name   = "branchcms-stage-ecs-service-autoscaling-policy" -> null
      - policy = jsonencode(
            {
              - Statement = [
                  - {
                      - Action   = [
                          - "ecs:UpdateService",
                          - "ecs:DescribeServices",
                        ]
                      - Effect   = "Allow"
                      - Resource = "*"
                      - Sid      = ""
                    },
                  - {
                      - Action   = "cloudwatch:DescribeAlarms"
                      - Effect   = "Allow"
                      - Resource = "*"
                      - Sid      = ""
                    },
                ]
              - Version   = "2012-10-17"
            }
        ) -> null
      - role   = "branchcms-stage-stage-autoscaling" -> null
    }

  # module.ecs_service.null_resource.ecs_deployment_check[0] must be replaced
-/+ resource "null_resource" "ecs_deployment_check" {
      ~ id       = "5496600579459245589" -> (known after apply)
      ~ triggers = {
          - "desired_count"           = "1"
          - "ecs_service_arn"         = "arn:aws:ecs:us-east-1:XXXXX:service/branchcms-stage"
          - "ecs_task_definition_arn" = "arn:aws:ecs:us-east-1:XXXXXX:task-definition/branchcms-stage:1220"
        } -> (known after apply) # forces replacement
    }

Plan: 2 to add, 1 to change, 4 to destroy.

Changes to Outputs:
  ~ aws_ecs_task_definition_arn = "arn:aws:ecs:us-east-1:XXXXXX:task-definition/branchcms-stage:1220" -> (known after apply)
```

---

<ins datetime="2022-07-06T21:24:27Z">
<p><a href="https://support.gruntwork.io/hc/requests/108932">Tracked in ticket #108932</a></p>
</ins>
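For context, the upgrade in question amounts to bumping the `ref` on the module's git source; this illustrative snippet (module name, source path, and variables are assumptions, not the asker's actual configuration) shows the shape of the change:

```hcl
# Hypothetical sketch of the version bump; only the ?ref= value changes.
module "ecs_service" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-ecs.git//modules/ecs-service?ref=v0.25.0" # was ?ref=v0.23.4

  # ... all other variables left unchanged ...
  use_auto_scaling = true
}
```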

Hi Eric! First, to get on the same page, here are the release notes for [v0.24.0](https://github.com/gruntwork-io/terraform-aws-ecs/releases/tag/v0.24.0), and those for [v0.25.0](https://github.com/gruntwork-io/terraform-aws-ecs/releases/tag/v0.25.0). Within these changes:

1. You no longer need to set `autoscaling_role_permissions_boundary_arn`.
2. If you were using the output `service_autoscaling_iam_role_arn`, you can now use `arn:aws:iam::<accountID>:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS` instead.

The main gist of the change is:

> This release replaces the [legacy custom IAM role for ECS Auto Scaling](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-legacy-iam-roles.html) with a [service-linked role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using-service-linked-roles.html) that is managed by AWS.

Now, to interpret your plan output (thank you for providing it!):

1. `module.ecs_service.aws_ecs_service.service_with_auto_scaling[0]` is updated in-place because the task definition ARN changes.
2. `module.ecs_service.aws_ecs_task_definition.task` is destroyed and recreated because the container definition's image is changing.
3. `module.ecs_service.aws_iam_role.ecs_service_autoscaling_role[0]` is destroyed because it is being replaced with the AWS-managed service-linked role.
4. `module.ecs_service.aws_iam_role_policy.ecs_service_autoscaling_policy[0]` is destroyed for the same reason.
5. `module.ecs_service.null_resource.ecs_deployment_check[0]` is destroyed and recreated because its triggers changed.

Based on that, the auto scaling on your ECS service will remain unaffected. The legacy role and its policy will be destroyed, and the remaining resources are updated (or destroyed and recreated) so that auto scaling can use the service-linked role instead. I think this will be a safe upgrade for you!
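If any of your own configuration referenced the removed `service_autoscaling_iam_role_arn` output, one way to migrate is to construct the service-linked role ARN yourself. A minimal sketch, assuming the local name and the places it is referenced are hypothetical:

```hcl
# Hypothetical sketch: building the AWS-managed service-linked role ARN
# to replace references to the removed module output.
data "aws_caller_identity" "current" {}

locals {
  # The service-linked role that replaces the legacy custom autoscaling role.
  ecs_service_linked_role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/aws-service-role/ecs.amazonaws.com/AWSServiceRoleForECS"
}

# Anywhere you previously used module.ecs_service.service_autoscaling_iam_role_arn,
# reference local.ecs_service_linked_role_arn instead.
```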