
Upgrade eks-core-services in CircleCI

Question

Hello all, I ran into a problem during the EKS cluster upgrade: we recently deployed the ECS deploy runner and have not yet had much experience with it. When I upgraded the `eks-core-services` module, the CircleCI pipeline failed with these errors:

```
[ecs-deploy-runner][2023-01-16T16:42:43+0000] ╷
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ with module.alb_ingress_controller["enable"].helm_release.aws_alb_ingress_controller,
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ on .terraform/modules/alb_ingress_controller/modules/eks-alb-ingress-controller/main.tf line 48, in resource "helm_release" "aws_alb_ingress_controller":
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ 48: resource "helm_release" "aws_alb_ingress_controller" {
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │
[ecs-deploy-runner][2023-01-16T16:42:43+0000] ╵
[ecs-deploy-runner][2023-01-16T16:42:43+0000] ╷
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ with module.aws_for_fluent_bit["enable"].helm_release.aws_for_fluent_bit,
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ on .terraform/modules/aws_for_fluent_bit/modules/eks-container-logs/main.tf line 48, in resource "helm_release" "aws_for_fluent_bit":
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ 48: resource "helm_release" "aws_for_fluent_bit" {
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │
[ecs-deploy-runner][2023-01-16T16:42:43+0000] ╵
[ecs-deploy-runner][2023-01-16T16:42:43+0000] ╷
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ with module.k8s_external_dns["enable"].helm_release.k8s_external_dns,
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ on .terraform/modules/k8s_external_dns/modules/eks-k8s-external-dns/main.tf line 54, in resource "helm_release" "k8s_external_dns":
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │ 54: resource "helm_release" "k8s_external_dns" {
[ecs-deploy-runner][2023-01-16T16:42:43+0000] │
[ecs-deploy-runner][2023-01-16T16:42:43+0000] ╵
```

My understanding is that the `ecs-deploy-runner` ECS task does not perform Kubernetes authentication and does not have a Kubernetes configuration. Does anybody know how to work around this?

---

*Edit (2023-01-17): tracked in [ticket #109797](https://support.gruntwork.io/hc/requests/109797).*

Answer

Without knowing the full details of your configuration, I'll try my best to explain.

For the `ecs-deploy-runner` to interact with the EKS cluster, the IAM role the runner uses must be mapped in the `aws-auth` ConfigMap. Had the cluster been created with the same IAM role that `ecs-deploy-runner` uses, this would be unnecessary, because EKS implicitly grants admin RBAC to the IAM role that created the cluster. I'm assuming the cluster was created with a different role?

To fix the issue, the ECS Deploy Runner IAM role has to be added to the `aws-auth` ConfigMap. If you're using the [`eks-aws-auth-merger`](https://github.com/gruntwork-io/terraform-aws-eks/tree/master/modules/eks-aws-auth-merger), you can use the [`eks-k8s-role-mapping`](https://github.com/gruntwork-io/terraform-aws-eks/tree/master/modules/eks-k8s-role-mapping) module to create an entry in the `aws-auth` ConfigMap, e.g.:

```
module "ecs_deploy_runner_eks_k8s_role_mapping" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-eks.git//modules/eks-k8s-role-mapping?ref=v0.x.x"

  name      = "ecs-deploy-runner"
  namespace = "whatever-namespace-you-use-for-auth-merger"

  eks_worker_iam_role_arns                   = []
  eks_fargate_profile_executor_iam_role_arns = []

  iam_role_to_rbac_group_mappings = {
    # I'm assuming you want admin-level permissions in the cluster, because
    # you'll be deploying RBAC resources, hence the system:masters group.
    "your-ecs-deploy-runner-iam-role" = ["system:masters"]
  }

  config_map_labels = {
    eks-cluster = module.eks_cluster.eks_cluster_name
  }
}
```

Make sure you're not overwriting the entire `aws-auth` ConfigMap 😅 and check the plan results carefully before applying. Note that you'll have to deploy this module with an IAM role that already has sufficient permissions in the EKS cluster. After the `aws-auth` ConfigMap has been updated, applying with the `ecs-deploy-runner` should work.

Hope this helps!
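For reference, once the merger has combined the entries, a mapping like the one above should surface in the central `aws-auth` ConfigMap as a `mapRoles` entry roughly like the following sketch. The account ID, role name, and `username` here are placeholders; the exact username the module emits may differ:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  # The account ID and role name below are illustrative placeholders.
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/your-ecs-deploy-runner-iam-role
      username: ecs-deploy-runner
      groups:
        - system:masters
```

You can sanity-check the merged result with `kubectl -n kube-system get configmap aws-auth -o yaml` before re-running the pipeline.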