Gruntwork Pipelines: What are my options for optimizations?
Our Ref Arch pipeline takes about 20 minutes or more to run through all the deployments, especially when rolling out to multiple accounts. What if we need to apply a critical change ASAP to production?

*[Tracked in ticket #109228](https://support.gruntwork.io/hc/requests/109228) (2022-09-09)*
There are a few things you can do to speed up the pipeline:

- The pipeline is set up to only apply changes for modules that have been touched in the commit. This means you can narrow the scope of your changes instead of batching them all into one PR, avoiding the need to tie all the updates together in one push.
- ECS Fargate can add latency to the deployment process due to the way Fargate provisioning works. We have observed that many tasks take 1-2 minutes to boot and start, especially since AWS has to do a fresh pull of the ECR images when running the task. Switching to the [EC2 based ECS Deploy Runner](https://github.com/gruntwork-io/knowledge-base/discussions/353#discussioncomment-2560116) can help tasks start faster, as it can take advantage of the local Docker cache.

In general, keep in mind that the pipeline is bound by the limitations of Terraform and AWS. That is, if it takes 1 hour to roll out a change in AWS (e.g., updating a CloudFront distribution), nothing in the pipeline can speed it up, so it is important to know where the time is actually being spent. If you notice time inefficiencies in any part of the Gruntwork product (e.g., in the ECS Deploy Runner or Terragrunt), we may be able to help optimize those pieces, but other aspects of the pipeline (e.g., GitHub Actions reaction time) are out of our control.