What is the latest way to configure the Helm and Kubernetes providers?
We had been following Gruntwork's recommendation for configuring the Helm provider so that the EKS auth token is refreshed on every run:

```
provider "helm" {
  kubernetes {
    host                   = "${eks_cluster_endpoint}"
    cluster_ca_certificate = base64decode("${eks_certificate_authority}")

    exec {
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", "${eks_cluster_name}"]
    }
  }
}
```

However, more recently we have been seeing a lot of `apply` errors like:

```
Error: Kubernetes cluster unreachable: Get "https://xxxxxxx.eks.amazonaws.com/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```

Has something changed? Should we be using a different Helm provider configuration?
We still use this method both internally at Gruntwork and in our Reference Architecture, and AFAIK we haven't run into any issues with this configuration. The error message `request canceled while waiting for connection` suggests this is more likely a network error reaching the EKS Kubernetes endpoint than a provider configuration problem.

Do you use private endpoints (where the k8s API is only accessible from within the VPC)? It's possible that there is an issue with the VPN connection: we've had issues in the past where the MTU settings from OpenVPN led to dropped packets, causing all sorts of network errors when reaching private endpoints.
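To help narrow it down, here is a rough diagnostic sketch you could run from the machine where `apply` fails. It assumes your cluster is named `my-cluster` and substitutes the redacted endpoint from your error message as a placeholder; the `ping` flags are Linux-specific:

```shell
# 1. Check whether the cluster endpoint is private-only (public, private):
aws eks describe-cluster --name my-cluster \
  --query 'cluster.resourcesVpcConfig.[endpointPublicAccess,endpointPrivateAccess]'

# 2. Check raw reachability of the API server (replace with your real endpoint).
#    A hang here, rather than an auth error, points at the network path:
curl -sk --max-time 10 https://xxxxxxx.eks.amazonaws.com/version

# 3. If small requests succeed but larger ones hang, test for MTU problems by
#    sending pings with the don't-fragment flag at a full 1500-byte frame
#    (1472 payload + 28 bytes of headers); failures suggest an MTU mismatch:
ping -M do -s 1472 -c 3 xxxxxxx.eks.amazonaws.com
```

If step 3 fails at 1472 but succeeds at smaller sizes, lowering the tunnel MTU in your OpenVPN client config is usually the fix.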