
Issue upgrading terraform-aws-service-catalog to v0.100.0 for eks-core-components through the pipeline (upgrade from k8s 1.23 to 1.24)

Answer

Hello, I am starting to upgrade our EKS cluster from k8s 1.23 to k8s 1.24. We are currently on version v0.95.1 of the terraform-aws-service-catalog repository. My understanding is that I need to upgrade eks-core-components first and then upgrade eks-cluster. I was able to upgrade the core components from my local PC without any issues. I also upgraded my local kubergrunt version to v0.10.0 to support the upgrade. This is how the plan looked (locally):

```
  # module.fargate_fluent_bit["enable"].kubernetes_config_map.logging will be updated in-place
  ~ resource "kubernetes_config_map" "logging" {
      ~ data = {
          + "filters.conf" = <<-EOT
                [FILTER]
                    Name                kubernetes
                    Match               *
                    Merge_Log           Off
                    Buffer_Size         0
                    Kube_Meta_Cache_TTL 300s
            EOT
            # (1 unchanged element hidden)
        }
        id = "aws-observability/aws-logging"
        # (2 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

  # module.k8s_external_dns["enable"].helm_release.k8s_external_dns will be updated in-place
  ~ resource "helm_release" "k8s_external_dns" {
        id      = "external-dns"
        name    = "external-dns"
      ~ version = "6.2.4" -> "6.12.2"
        # (26 unchanged attributes hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.
```

This applied successfully locally, but I was never able to run **even the plan from the pipeline**. It would throw the error:

`Error: Get "https://a219bdea5fe77ddb324de47fa9153a05.yl4.eu-central-1.eks.amazonaws.com/api/v1/namespaces/aws-observability": getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/runtime/scheme.go:100"`

Am I correct in understanding that this is an issue with kubergrunt versions, or is this something else? (I can see that the ability to specify the kubergrunt version URL was introduced only for eks-cluster, not for eks-core-components.)

---

<ins datetime="2023-06-07T11:27:37Z">
<p><a href="https://support.gruntwork.io/hc/requests/110238">Tracked in ticket #110238</a></p>
</ins>
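As a quick diagnostic (a sketch, assuming the environment authenticates via an exec-based credential plugin in the kubeconfig), you can inspect which `ExecCredential` API version the exec plugin declares. Kubernetes 1.24 client libraries dropped `client.authentication.k8s.io/v1alpha1`, which is what older kubergrunt releases emit:

```shell
# Print the apiVersion declared by each exec-based auth plugin in the
# current kubeconfig. On a 1.24-compatible setup this should show
# client.authentication.k8s.io/v1beta1 (or v1), not v1alpha1.
kubectl config view --raw -o jsonpath='{.users[*].user.exec.apiVersion}'
```

If this prints `client.authentication.k8s.io/v1alpha1` in the pipeline environment but not locally, the pipeline is using an older credential helper.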

Hi @sewmiuraj,

You are correct about rebuilding the Docker images as you described, and I think I see the root cause of the issue. If the ecs-deploy-runner image was built with the default configuration (no build args changed), then an incompatible version of `kubergrunt` is used, which causes the `client.authentication.k8s.io/v1alpha1` API error. Here is the offending [line](https://github.com/gruntwork-io/terraform-aws-ci/blob/v0.50.7/modules/ecs-deploy-runner/docker/deploy-runner/Dockerfile#L46).

The dependency (the `kubergrunt` version) looks like it wasn't updated to `v0.10.0`, which is where support for EKS `1.24` was added. Updating to `v0.100.0` of the Service Catalog _should have_ provided the proper support by default, but this looks like it was missed. So version `0.8.0` of `kubergrunt` is still being used by default, and it does not support EKS `1.24`.

Can you try rebuilding the `ecs-deploy-runner` Docker image as you were planning to do, and provide a build arg that overrides the default `kubergrunt` version to use `v0.10.0` instead? Something like: `--build-arg kubergrunt_version=v0.10.0`
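A minimal sketch of that rebuild, using the deploy-runner Dockerfile path from the terraform-aws-ci repo linked above (the image tag here is a placeholder; use whatever tag your pipeline expects):

```shell
# Clone the CI modules repo at the pinned release and rebuild the
# deploy-runner image, overriding the default kubergrunt version with
# v0.10.0 (the first release with EKS 1.24 support).
git clone --branch v0.50.7 https://github.com/gruntwork-io/terraform-aws-ci.git
docker build \
  --build-arg kubergrunt_version=v0.10.0 \
  -t ecs-deploy-runner:kubergrunt-v0.10.0 \
  terraform-aws-ci/modules/ecs-deploy-runner/docker/deploy-runner
```

After the rebuild, push the image to the registry your pipeline pulls from so that plans and applies run with the updated `kubergrunt`.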