Knowledge Base

How to rebuild a RefArch for a particular AWS account?

Answer

Hello, I am trying to destroy the RefArch for the dev account and rebuild it. The `terragrunt run-all destroy` command executed from the `infrastructure-live/dev` folder leads to the below error:

```
│ Error: local-exec provisioner error
│
│   with module.ecs_deploy_runner.module.ec2_ecs_cluster.aws_ecs_cluster.ecs[0],
│   on .terraform/modules/ecs_deploy_runner.ec2_ecs_cluster/modules/ecs-cluster/main.tf line 64, in resource "aws_ecs_cluster" "ecs":
│   64: provisioner "local-exec" {
│
│ Error running command 'python .terraform/modules/ecs_deploy_runner.ec2_ecs_cluster/modules/ecs-cluster/shut-down-container-instances.py arn:aws:ecs:us-east-2:863730053613:cluster/ecs-deploy-runner': exit status 1.
│ Output: [INFO] [shut-down-container-instances] 2022-05-11 17:21:43 Starting shutdown process for container instances...
│ [INFO] [shut-down-container-instances] 2022-05-11 17:21:43 Looking up container instances in ECS cluster arn:aws:ecs:us-east-2:863730053613:cluster/ecs-deploy-runner in us-east-2
│ Traceback (most recent call last):
│   File "/home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/ecs-deploy-runner/.terragrunt-cache/6UY0AdyeQXdycU0rUo2NG4PFWNM/fCw8UgbtyEKZwmAHcT_oAWkqDzs/modules/mgmt/ecs-deploy-runner/.terraform/modules/ecs_deploy_runner.ec2_ecs_cluster/modules/ecs-cluster/shut-down-container-instances.py", line 322, in <module>
│     run_shutdown_process(sys.argv)
│   File "/home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/ecs-deploy-runner/.terragrunt-cache/6UY0AdyeQXdycU0rUo2NG4PFWNM/fCw8UgbtyEKZwmAHcT_oAWkqDzs/modules/mgmt/ecs-deploy-runner/.terraform/modules/ecs_deploy_runner.ec2_ecs_cluster/modules/ecs-cluster/shut-down-container-instances.py", line 309, in run_shutdown_process
│     container_instance_arns = get_container_instance_arns(ecs_cluster_arn, aws_region, logger)
│   File "/home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/ecs-deploy-runner/.terragrunt-cache/6UY0AdyeQXdycU0rUo2NG4PFWNM/fCw8UgbtyEKZwmAHcT_oAWkqDzs/modules/mgmt/ecs-deploy-runner/.terraform/modules/ecs_deploy_runner.ec2_ecs_cluster/modules/ecs-cluster/shut-down-container-instances.py", line 98, in get_container_instance_arns
│     container_instances_output = run_aws_cli(list_args, aws_region)
│   File "/home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/ecs-deploy-runner/.terragrunt-cache/6UY0AdyeQXdycU0rUo2NG4PFWNM/fCw8UgbtyEKZwmAHcT_oAWkqDzs/modules/mgmt/ecs-deploy-runner/.terraform/modules/ecs_deploy_runner.ec2_ecs_cluster/modules/ecs-cluster/shut-down-container-instances.py", line 78, in run_aws_cli
│     output = subprocess.check_output(['aws'] + args + common_args)
│   File "/home/user/.pyenv/versions/3.9.1/lib/python3.9/subprocess.py", line 420, in check_output
│     return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
│   File "/home/user/.pyenv/versions/3.9.1/lib/python3.9/subprocess.py", line 501, in run
│     with Popen(*popenargs, **kwargs) as process:
│   File "/home/user/.pyenv/versions/3.9.1/lib/python3.9/subprocess.py", line 947, in __init__
│     self._execute_child(args, executable, preexec_fn, close_fds,
│   File "/home/user/.pyenv/versions/3.9.1/lib/python3.9/subprocess.py", line 1819, in _execute_child
│     raise child_exception_type(errno_num, err_msg, err_filename)
│ FileNotFoundError: [Errno 2] No such file or directory: 'aws'
│
╵
Releasing state lock. This may take a few moments...
ERRO[1766] Module /home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/ecs-deploy-runner has finished with an error: 1 error occurred:
	* exit status 1
 prefix=[/home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/ecs-deploy-runner]
ERRO[1766] Dependency /home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/ecs-deploy-runner of module /home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/networking/vpc just finished with an error. Module /home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/networking/vpc will have to return an error too. prefix=[/home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/networking/vpc]
ERRO[1766] Module /home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/networking/vpc has finished with an error: Cannot process module Module /home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/networking/vpc (excluded: false, assume applied: false, dependencies: []) because one of its dependencies, Module /home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/ecs-deploy-runner (excluded: false, assume applied: false, dependencies: [/home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/networking/vpc]), finished with an error: 1 error occurred:
	* exit status 1
 prefix=[/home/user/Projects/infrastructure-live/dev/us-east-2/mgmt/networking/vpc]
ERRO[1766] 11 errors occurred:
	* Cannot process module Module /home/user/Projects/infrastructure-live/dev/us-east-2/dev/networking/alb (excluded: false, assume applied: false, dependencies: [/home/user/Projects/infrastructure-live/dev/us-east-2/dev/networking/vpc, /home/user/Projects/infrastructure-live/dev/_global/route53-public]) because one of its dependencies, Module /home/user/Projects/infrastructure-live/dev/us-east-2/dev/services/ecs-cluster (excluded: false, assume applied: false, dependencies: [/home/user/Projects/infrastructure-live/dev/us-east-2/dev/networking/vpc, /home/user/Projects/infrastructure-live/dev/us-east-2/dev/networking/openvpn-server, /home/user/Projects/infrastructure-live/dev/us-east-2/_regional/sns-topic, /home/user/Projects/infrastructure-live/dev/us-east-2/dev/networking/alb, /home/user/Projects/infrastructure-live/dev/us-east-2/dev/networking/alb-internal]), finished with an error: 1 error occurred:
	* exit status 1
```

What are the proper steps to destroy and then rebuild the RefArch in AWS? Thanks!
--- <ins datetime="2022-05-13T20:10:20Z"> <p><a href="https://support.gruntwork.io/hc/requests/108604">Tracked in ticket #108604</a></p> </ins>

Hi @armartirosyan,

In this particular case, the key line in the stack trace you've shared is `FileNotFoundError: [Errno 2] No such file or directory: 'aws'`. The `shut-down-container-instances.py` script shells out to the AWS CLI (the `aws` binary via `subprocess.check_output(['aws'] + ...)`), and that binary is not on your `PATH`, so the command fails before the cluster can be shut down. The Python scripts in our ecs-cluster module also require the AWS Python SDK (boto3) to be installed locally. Please make sure the AWS CLI is installed and on your `PATH`, and review [these instructions in the ecs-module README](https://github.com/gruntwork-io/terraform-aws-ecs/tree/master/modules/ecs-cluster#how-to-use-the-roll-out-ecs-cluster-updatepy-script) for the SDK prerequisite (the instructions are to run `pip install boto3`). Once both are in place, re-run `terragrunt run-all destroy` and the dependent modules should be able to proceed. Hope this helps!
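Before re-running the destroy, you can verify both prerequisites from the shell. This is a minimal sketch (it only reports what is missing; install commands vary by platform, so it prints a suggestion rather than installing anything):

```shell
#!/bin/sh
# Check that the `aws` binary the provisioner script invokes is on PATH.
if command -v aws >/dev/null 2>&1; then
  echo "aws CLI found at: $(command -v aws)"
else
  echo "aws CLI missing -- install it (e.g. via your package manager or AWS's installer)"
fi

# Check that boto3 is importable by the same `python` the script runs under.
# Per the ecs-cluster README, the fix is `pip install boto3`.
if python -c 'import boto3' >/dev/null 2>&1; then
  echo "boto3 is installed"
else
  echo "boto3 missing -- run: pip install boto3"
fi
```

Note that the `python` on your `PATH` (here, a pyenv-managed 3.9.1) must be the interpreter that has boto3 installed, since the `local-exec` provisioner invokes plain `python`.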