Knowledge Base

EKS cluster exhausted all our private app subnet's IP addresses

Answer

_This message was extracted from a discussion that originally took place in Gruntwork Community Slack. Names and URLs have been removed where appropriate._

**From a customer** Hey Gruntwork folks! :wave: We ran into a problem in our Dev environment today where our EKS cluster exhausted all of our private app subnet's IP addresses. Fun! Looking into this, I saw that our subnets (created with the `vpc-app` module) use a default prefix of `/21`, giving them 2,048 addresses each. Our VPC CIDR uses `/16`, which allows for up to 65,536 total addresses, so we have some room to grow.

My question for you is how to accomplish this. I see that I can specify `private_app_subnet_cidr_blocks`, but I'd rather not manually calculate each CIDR block to pass into the module for every one of our VPCs. Is there a better way? Perhaps with the `subnet_spacing` and `subnet_bits` vars? I tinkered with them a little but wasn't sure how to calculate the proper spacing for the bits I gave them.
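For readers wondering what the CIDR arithmetic actually looks like: a minimal sketch using Python's standard `ipaddress` module (this is the math behind subnet sizing, not the Gruntwork module itself; the `carve_subnets` helper and the `10.0.0.0/16` VPC range are illustrative assumptions). The "new bits" value is how many bits are added to the VPC prefix, so a `/16` VPC with 5 new bits yields the default `/21` subnets of 2,048 addresses each, while 4 new bits yields `/20` subnets of 4,096 addresses each:

```python
import ipaddress

def carve_subnets(vpc_cidr: str, new_bits: int, count: int):
    """Split a VPC CIDR into equal subnets by adding `new_bits` to the prefix.

    Illustrative helper, not part of any Gruntwork module.
    """
    vpc = ipaddress.ip_network(vpc_cidr)
    return [str(net) for net in list(vpc.subnets(prefixlen_diff=new_bits))[:count]]

# Default layout: /21 subnets, 2,048 addresses each.
print(carve_subnets("10.0.0.0/16", 5, 3))
# -> ['10.0.0.0/21', '10.0.8.0/21', '10.0.16.0/21']

# Roomier layout: /20 subnets, 4,096 addresses each.
print(carve_subnets("10.0.0.0/16", 4, 3))
# -> ['10.0.0.0/20', '10.0.16.0/20', '10.0.32.0/20']
```

Terraform's built-in `cidrsubnet(prefix, newbits, netnum)` function performs the same calculation per subnet, which is presumably what module vars like `subnet_bits` feed into; for example, `cidrsubnet("10.0.0.0/16", 4, 2)` returns `10.0.32.0/20`.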

**From a grunt** Jumping in a bit late... AFAIK, there are a few alternatives for dealing with the lack of IPs, with varying levels of difficulty:

- Add more subnets to EKS itself (I don't remember which specific configuration it is, but EKS itself only uses some subnets). If you create new subnets and then allocate them to be used by EKS, it will use them. (This can be a band-aid solution when you still have space in your VPC and cannot migrate to a bigger one.) This seems like a newer tutorial; I don't remember if we had to do all the steps, though. Maybe we used this one instead :thinking_face:
- Use an alternate compatible CNI plugin; with these you will not be restricted by the IP limitations of the AWS CNI. I've heard interesting things about these alternative CNIs, like the fact that IPs and pods do not end up in a 1:1 relationship, but as I've never used them, I don't know how they work nor how complex/challenging they are to configure :disappointed:
- I couldn't find it, but I remember reading/watching a presentation about the "million pods club", with some debate about how to reach astounding numbers of pods in EKS.
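To make the IP pressure from the default AWS VPC CNI concrete: each pod gets its own VPC IP address, and each node is capped by its instance type's ENI limits. A rough sketch of that arithmetic follows, using the commonly cited max-pods formula for the AWS CNI; the m5.large figures (3 ENIs, 10 IPv4 addresses per ENI) are assumptions worth verifying against the AWS instance-type documentation for your own instance types:

```python
def max_pods(enis: int, ipv4_per_eni: int) -> int:
    """Commonly cited max-pods formula for the AWS VPC CNI.

    One IP per ENI is used by the node itself, and 2 is added for pods
    that run on the host network (e.g. aws-node, kube-proxy).
    """
    return enis * (ipv4_per_eni - 1) + 2

# Assumed ENI limits for m5.large: 3 ENIs, 10 IPv4 addresses each.
print(max_pods(3, 10))  # 29

# A /21 subnet has 2,048 addresses, of which AWS reserves 5 per subnet,
# so fully packed nodes at ~29 IPs apiece drain it quickly:
print((2048 - 5) // 29)  # roughly how many such nodes one subnet supports
```

This is why a handful of busy node groups can exhaust a `/21` app subnet, and why alternate CNIs that break the 1:1 pod-to-VPC-IP mapping relieve the pressure.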