Custom Networking for EKS Auto Mode: Solving IP Exhaustion
Since June 13, 2025, EKS Auto Mode supports custom networking through the NodeClass resource. This brings the same IP conservation benefits of CGNAT overlay networks to Auto Mode clusters, but with a cleaner implementation that doesn't require managing the aws-node DaemonSet or ENIConfig resources.
If you're running out of IP addresses in your VPC subnets, this feature lets you place pods in a separate CIDR block while keeping nodes in your main subnets. This is the Auto Mode equivalent of the custom networking setup for managed node groups, but significantly simpler to configure.
Separate IP Spaces for Nodes and Pods
Custom networking in Auto Mode lets you split the IP address space:
- Nodes live in your existing VPC subnets (e.g., 10.196.24.0/24)
- Pods get IPs from a secondary CIDR block (e.g., 100.64.0.0/16)
This approach uses the Carrier-Grade NAT (CGNAT) IP range commonly used by ISPs, giving you 65,536 pod IP addresses from a single /16 block without touching your main VPC allocation.
How It Works
Unlike managed node groups that require ENIConfig resources and DaemonSet environment variables, Auto Mode uses the NodeClass resource with two key fields:
- `podSubnetSelectorTerms` - Selects which subnets should host pod ENIs
- `podSecurityGroupSelectorTerms` - Selects which security groups apply to pod ENIs
A few important technical details:
- No overlay networking - Pod ENIs still receive their /28 prefixes as usual, but those ENIs are attached in the pod subnets you specify
- VPC-native routing - AWS automatically creates local routes between your primary and secondary CIDRs, so pods and nodes communicate directly without tunneling
- Slightly lower pod density - When pods use separate subnets, the node's primary interface can't host pods. You lose the first ENI's prefix and start with the first secondary ENI (see the rough arithmetic below)
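To make the density impact concrete, here's rough arithmetic. The ENI and prefix counts below are illustrative assumptions; actual limits vary by instance type:

```text
# Assume an instance type with 3 ENIs, each holding 10 /28 prefixes
# (16 addresses per prefix) - illustrative numbers only
Default networking: 3 ENIs x 10 prefixes x 16 IPs = 480 pod IPs
Custom networking:  2 ENIs x 10 prefixes x 16 IPs = 320 pod IPs (primary ENI excluded)
```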
Infrastructure Setup
The VPC infrastructure setup is identical to the managed node groups approach. You need to add a secondary CIDR block and create pod subnets in each availability zone.
Step 1: Add Secondary CIDR to VPC
# The existing VPC to extend
variable "vpc_id" {
  description = "The ID of the existing VPC"
  type        = string
}

# Add a secondary CIDR block to the existing VPC for the CGNAT pod network
resource "aws_vpc_ipv4_cidr_block_association" "cgnat_cidr" {
  vpc_id     = var.vpc_id
  cidr_block = "100.64.0.0/16"
}

This adds the 100.64.0.0/16 CIDR block to your existing VPC. AWS restricts secondary CIDR blocks to between /16 and /28, so we can't use the full 100.64.0.0/10 CGNAT range, but /16 provides more than enough addresses.
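To confirm the association took effect before building subnets on top of it, you can query the VPC (the VPC ID below is a placeholder):

```sh
# Both the primary CIDR and 100.64.0.0/16 should show as "associated"
aws ec2 describe-vpcs --vpc-ids vpc-0abc123example \
  --query 'Vpcs[0].CidrBlockAssociationSet[*].{CIDR:CidrBlock,State:CidrBlockState.State}' \
  --output table
```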
Step 2: Create Pod Subnets
Create one subnet per availability zone where your cluster runs:
# Create CGNAT overlay subnets for Kubernetes pods
# One subnet per availability zone where the EKS cluster runs
resource "aws_subnet" "pods_eu_west_1b" {
  vpc_id            = aws_vpc_ipv4_cidr_block_association.cgnat_cidr.vpc_id
  cidr_block        = "100.64.0.0/17"
  availability_zone = "eu-west-1b"

  tags = {
    Name                     = "Internal/pods-eu-west-1b"
    "kubernetes.io/role/pod" = "1" # Tag for Auto Mode NodeClass selector
  }
}

resource "aws_subnet" "pods_eu_west_1c" {
  vpc_id            = aws_vpc_ipv4_cidr_block_association.cgnat_cidr.vpc_id
  cidr_block        = "100.64.128.0/17"
  availability_zone = "eu-west-1c"

  tags = {
    Name                     = "Internal/pods-eu-west-1c"
    "kubernetes.io/role/pod" = "1" # Tag for Auto Mode NodeClass selector
  }
}

The `kubernetes.io/role/pod` tag is important: it's what the NodeClass will use to identify which subnets should host pod ENIs.
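You can preview which subnets the selector will match by running the same tag filter the NodeClass uses:

```sh
# Mirrors the tag-based lookup the NodeClass performs
aws ec2 describe-subnets \
  --filters "Name=tag:kubernetes.io/role/pod,Values=1" \
  --query 'Subnets[*].{ID:SubnetId,CIDR:CidrBlock,AZ:AvailabilityZone}' \
  --output table
```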
Step 3: Create Pod Security Group
# The CIDR block of the main VPC
variable "main_vpc_cidr" {
  description = "The CIDR block of the main VPC"
  type        = string
}

# Security group for CGNAT overlay network pods
resource "aws_security_group" "cgnat_overlay_sg" {
  name        = "cgnat-overlay-sg"
  description = "Security group for Carrier-Grade NAT overlay network pods"
  vpc_id      = aws_vpc_ipv4_cidr_block_association.cgnat_cidr.vpc_id

  tags = {
    Name = "cgnat-overlay-sg"
  }
}

# Allow all inbound traffic from the main VPC CIDR
resource "aws_vpc_security_group_ingress_rule" "allow_vpc_traffic" {
  security_group_id = aws_security_group.cgnat_overlay_sg.id
  cidr_ipv4         = var.main_vpc_cidr # e.g., "10.196.24.0/24"
  ip_protocol       = "-1"
}

# Allow all inbound traffic from the CGNAT CIDR (pod-to-pod)
resource "aws_vpc_security_group_ingress_rule" "allow_cgnat_traffic" {
  security_group_id = aws_security_group.cgnat_overlay_sg.id
  cidr_ipv4         = "100.64.0.0/16"
  ip_protocol       = "-1"
}

# Allow all outbound traffic
# NOTE: AWS auto-creates this rule for new security groups, but it must
# be explicitly defined in Terraform or it will be removed
resource "aws_vpc_security_group_egress_rule" "allow_all_outbound" {
  security_group_id = aws_security_group.cgnat_overlay_sg.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1"
}

This security group allows traffic between pods in the CGNAT range and resources in the main VPC.
Step 4: Configure Route Table
# Create route table for the CGNAT subnets
# Note: AWS automatically creates local routes for all CIDR blocks within
# the VPC, enabling communication between the overlay subnets and the
# main VPC subnets
resource "aws_route_table" "cgnat_route_table" {
  vpc_id = aws_vpc_ipv4_cidr_block_association.cgnat_cidr.vpc_id

  # The local route between 100.64.0.0/16 and the main VPC CIDR is
  # created automatically by AWS
  tags = {
    Name = "cgnat-route-table"
  }
}

# Associate the route table with the CGNAT subnets
resource "aws_route_table_association" "cgnat_rt_association_1b" {
  subnet_id      = aws_subnet.pods_eu_west_1b.id
  route_table_id = aws_route_table.cgnat_route_table.id
}

resource "aws_route_table_association" "cgnat_rt_association_1c" {
  subnet_id      = aws_subnet.pods_eu_west_1c.id
  route_table_id = aws_route_table.cgnat_route_table.id
}

AWS automatically creates local routes between CIDR blocks in the same VPC, so pods can communicate with nodes without additional configuration.
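You can verify those automatic local routes with a quick lookup:

```sh
# Expect local routes for both the main VPC CIDR and 100.64.0.0/16
aws ec2 describe-route-tables \
  --filters "Name=tag:Name,Values=cgnat-route-table" \
  --query 'RouteTables[0].Routes'
```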
Kubernetes Configuration
This is where Auto Mode shines compared to managed node groups. Instead of configuring the aws-node DaemonSet and creating ENIConfig resources for each availability zone, you simply define a NodeClass and attach it to a NodePool.
Define the NodeClass
apiVersion: eks.amazonaws.com/v1
kind: NodeClass
metadata:
  name: split-ip-space
spec:
  # IAM role for EC2 instances
  role: MyNodeRole

  # Where the EC2 nodes live (your existing main VPC subnets)
  subnetSelectorTerms:
    - tags:
        kubernetes.io/role/internal-elb: "1"
  securityGroupSelectorTerms:
    - tags:
        Name: "eks-cluster-sg"

  # Where the pods live (CGNAT overlay subnets)
  podSubnetSelectorTerms:
    - tags:
        kubernetes.io/role/pod: "1"
  podSecurityGroupSelectorTerms:
    - tags:
        Name: "cgnat-overlay-sg"

  # Optional: SNAT policy for outbound traffic
  snatPolicy: Random

  # Optional: Network policy enforcement
  networkPolicy: DefaultAllow

The key sections here:
- `subnetSelectorTerms` - Selects subnets for node ENIs (your existing main VPC subnets)
- `podSubnetSelectorTerms` - Selects subnets for pod ENIs (the CGNAT overlay subnets we created)
- `securityGroupSelectorTerms` - Security groups for node ENIs
- `podSecurityGroupSelectorTerms` - Security groups for pod ENIs
Replace the tag values with your actual subnet and security group tags.
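Assuming the manifest is saved as nodeclass.yaml, apply it and inspect the result; the status should reflect the resolved subnets and security groups:

```sh
kubectl apply -f nodeclass.yaml
# Check the status conditions for selector resolution problems
kubectl describe nodeclass split-ip-space
```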
Attach to a NodePool
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: workers-split-ip
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: split-ip-space

  # Example configuration - adjust to your needs
  disruption:
    consolidationPolicy: WhenEmpty
    budgets:
      - nodes: "10%"

The NodePool references the NodeClass, and Auto Mode handles the rest. All nodes created by this pool will place their pods in the CGNAT subnets.
Verifying the Setup
After applying the configuration and creating nodes, verify that the IP separation is working:
# Check node IPs - should show addresses from the main VPC CIDR
kubectl get nodes -o wide

# Check pod IPs - should show addresses from the CGNAT CIDR (100.64.x.x)
kubectl get pods -A -o wide

Nodes should have IPs like 10.196.24.x while pods should have IPs like 100.64.x.x.
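For a cluster-wide spot check, this one-liner prints any pod IPs outside the CGNAT range; note that host-network pods will legitimately show node IPs:

```sh
# Anything printed here is either a host-network pod or a misplaced pod
kubectl get pods -A -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}' \
  | grep -v '^100\.64\.' | sort -u
```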
Important Considerations
NAT Gateway Required
Pods in the 100.64.0.0/16 range need a NAT Gateway or Transit Gateway to reach the internet. These IPs are not routable outside your VPC, so outbound internet traffic must be NATted to a public IP.
Add a NAT Gateway to your pod subnet route table:
resource "aws_route" "pod_subnet_nat" {
route_table_id = aws_route_table.cgnat_route_table.id
destination_cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.main.id
}IP Utilization
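If the VPC doesn't already have a NAT Gateway, here's a minimal sketch; `aws_subnet.public_a` is a placeholder for an existing public subnet in the main VPC:

```hcl
# Elastic IP for the NAT Gateway
resource "aws_eip" "nat" {
  domain = "vpc"
}

# The NAT Gateway must live in a public subnet of the main VPC
# (aws_subnet.public_a is assumed to exist already)
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}
```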
IP Utilization

Each node still receives at least one /28 prefix in both the node subnet and the pod subnet. Plan your capacity accordingly. You're not reducing per-node IP consumption, just moving pod IPs to a different address space.
Security Groups Per Pod
Auto Mode does not yet support per-pod security groups. The security group you specify in podSecurityGroupSelectorTerms is shared by all pods on each node.
Monitoring Limitations
CloudWatch export of network policy logs is not available in Auto Mode. If you need detailed network flow logging, you'll need to use VPC Flow Logs.
Comparison: Auto Mode vs Managed Node Groups
| Feature | Managed Node Groups | Auto Mode |
|---|---|---|
| Configuration | ENIConfig + DaemonSet env vars | NodeClass only |
| Per-AZ setup | Separate ENIConfig per AZ | Single NodeClass with tag selectors |
| Maintenance | Manual DaemonSet updates | Managed by AWS |
| Complexity | Higher - multiple resources | Lower - declarative config |
The Auto Mode approach is cleaner and more maintainable. You define the desired state in a NodeClass, and Auto Mode ensures nodes are configured correctly.
When to Use This
Use this setup when your VPC subnets are running out of IP addresses and you can't easily expand the primary CIDR. If you're not IP-constrained, the added complexity isn't worth it: stick with the default networking configuration.
Alternative: IPv6
If you're hitting IPv4 limits even with CGNAT, consider an IPv6 Auto Mode cluster. Each node gets a /80 prefix by default, which provides effectively unlimited IP addresses. However, this requires your applications and network infrastructure to support IPv6.
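For reference, a minimal Terraform sketch of the cluster-level setting (the IP family can only be chosen at cluster creation; the role and subnets referenced here are assumed to exist, and the VPC must have IPv6 CIDRs assigned):

```hcl
# IP family is immutable after cluster creation; Auto Mode settings
# (compute config, etc.) are omitted for brevity
resource "aws_eks_cluster" "ipv6" {
  name     = "ipv6-auto-mode"
  role_arn = aws_iam_role.cluster.arn # assumed to exist

  kubernetes_network_config {
    ip_family = "ipv6"
  }

  vpc_config {
    subnet_ids = var.cluster_subnet_ids # assumed IPv6-enabled subnets
  }
}
```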
Key Takeaways
EKS Auto Mode's custom networking feature brings IP conservation to Auto Mode clusters without the complexity of managing ENIConfig resources and DaemonSet configurations.
- Tag-based selection - Use Kubernetes-style label selectors to identify pod subnets and security groups
- Single configuration resource - One `NodeClass` replaces multiple `ENIConfig` resources
- Same infrastructure - VPC setup is identical to managed node groups (secondary CIDR, subnets, security groups)
- NAT required - Pods in CGNAT space need a NAT Gateway for internet access
- Simpler than managed nodes - No DaemonSet configuration or per-AZ ENI resources
Additional Resources
- EKS Auto Mode NodeClass Specification
- AWS Documentation History - Track new features as they're released
- Custom Networking for Managed Node Groups - The alternative approach using `ENIConfig`
