CIDR Planning for Azure Red Hat OpenShift
If you're deploying Azure Red Hat OpenShift and think CIDR planning is just "pick some RFC1918 ranges and call it a day," I have bad news for you. ARO has some very specific opinions about which IP ranges you can and cannot use, and these opinions changed dramatically between OpenShift 4.13 and 4.14 when OVN-Kubernetes became the default network provider.
I learned this the hard way when clusters that would have deployed fine on 4.13 suddenly failed validation on 4.14 because the CGNAT range was now off-limits. The migration from OpenShift SDN to OVN-Kubernetes introduced new reserved ranges that conflict with common choices, and the documentation didn't make this immediately obvious.
This guide exists to save you from the CIDR planning headaches I went through. It covers what ranges are actually safe to use, why certain ranges became forbidden after 4.14, and how to plan your address space without painting yourself into a corner.
Why CIDR Planning Actually Matters
ARO networking involves three separate CIDR blocks that all need to coexist without conflicts:
- Machine CIDR: Your Azure VNet address space where nodes live
- Pod CIDR: The internal range OpenShift assigns to pods
- Service CIDR: The virtual IPs used by Kubernetes services
The trick is that you can't just pick any private ranges. You need to avoid conflicts with existing Azure infrastructure, on-premises networks connected via VPN or ExpressRoute, and the internal ranges that OVN-Kubernetes reserves for its own plumbing.
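If you want to sanity-check a plan before touching the Azure portal, a few lines of Python with the standard-library ipaddress module will do it. This is only a sketch; the three example values are the OpenShift/ARO defaults covered later in this guide, so substitute your own ranges.

```python
from ipaddress import ip_network
from itertools import combinations

# Example values: the OpenShift/ARO defaults discussed later in this guide.
cidrs = {
    "machine": ip_network("10.0.0.0/16"),    # Azure VNet
    "pod":     ip_network("10.128.0.0/14"),  # OpenShift default
    "service": ip_network("172.30.0.0/16"),  # OpenShift default
}

# The three blocks must not overlap each other.
for (name_a, net_a), (name_b, net_b) in combinations(cidrs.items(), 2):
    if net_a.overlaps(net_b):
        raise ValueError(f"{name_a} CIDR {net_a} overlaps {name_b} CIDR {net_b}")

print("No overlaps between machine, pod, and service CIDRs")
```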
The OVN-Kubernetes Reserved Range Problem
Here's where it gets interesting. Starting with OpenShift 4.14, OVN-Kubernetes became the default CNI, replacing OpenShift SDN. OVN-Kubernetes internally reserves specific IP ranges for cluster networking operations, and if you happen to pick those same ranges for your pods, services, or node subnets, you're going to have a bad time.
CRITICAL
These ranges are hard-coded internal reservations by OVN-Kubernetes. If you configure your cluster to use any of these addresses, you'll get routing conflicts and mysterious connectivity issues that are painful to debug.
IPv4 Reserved Ranges
OVN-Kubernetes claims three specific ranges for internal routing:
- Join Subnet: 100.64.0.0/16. Connects gateway routers to distributed routers via the join switch. Part of the CGNAT block (100.64.0.0/10).
- Transit Switch Subnet: 100.88.0.0/16. Routes traffic between zones across all nodes. Also part of the CGNAT block.
- Masquerade Subnet: 169.254.0.0/17 (OpenShift 4.17+). Prevents IP collisions for hairpin traffic. On upgraded clusters, the old masquerade subnet is preserved.
IPv6 Reserved Ranges
If you're running dual-stack, these IPv6 ranges are also off-limits:
- Join Subnet: fd98::/64
- Transit Switch Subnet: fd97::/64
- Masquerade Subnet: fd69::/112 (OpenShift 4.17+)
Where You Cannot Use These Ranges
Do not configure these reserved addresses in:
- Pod CIDR
- Service CIDR
- Machine CIDR (node subnet ranges)
- Connected Azure VNets or peered networks
- VPN or ExpressRoute connected networks
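A quick way to enforce that rule is to test every planned CIDR against the reserved list before deploying. Here's a minimal sketch with Python's ipaddress module; the "planned" values are placeholders for your own ranges, and the reserved list is just the ranges from the tables above.

```python
from ipaddress import ip_network

# OVN-Kubernetes internal reservations (see the lists above).
RESERVED = [
    ip_network("100.64.0.0/16"),   # join subnet
    ip_network("100.88.0.0/16"),   # transit switch subnet
    ip_network("169.254.0.0/17"),  # masquerade subnet (4.17+)
    ip_network("fd98::/64"),       # IPv6 join subnet
    ip_network("fd97::/64"),       # IPv6 transit switch subnet
    ip_network("fd69::/112"),      # IPv6 masquerade subnet (4.17+)
]

# Placeholder values - replace with your planned ranges.
planned = {
    "machine": ip_network("10.0.0.0/16"),
    "pod": ip_network("10.128.0.0/14"),
    "service": ip_network("172.30.0.0/16"),
}

for name, net in planned.items():
    for reserved in RESERVED:
        # Only compare networks of the same IP version.
        if net.version == reserved.version and net.overlaps(reserved):
            raise ValueError(f"{name} CIDR {net} overlaps reserved range {reserved}")

print("No conflicts with OVN-Kubernetes reserved ranges")
```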
The CGNAT Problem
Before OpenShift 4.14, using the CGNAT range (100.64.0.0/10) for Pod or Service CIDRs was generally safe because OpenShift SDN didn't reserve it. After the switch to OVN-Kubernetes in 4.14, that entire range became problematic because OVN carves out 100.64.0.0/16 and 100.88.0.0/16 for internal use.
This is why clusters that validated fine on 4.13 would fail on 4.14 if you'd chosen CGNAT ranges for pods or services.
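You can see the conflict directly: both OVN-Kubernetes internal ranges sit inside the CGNAT block, so any pod or service CIDR carved out of 100.64.0.0/10 risks landing on top of them. A quick illustration:

```python
from ipaddress import ip_network

cgnat = ip_network("100.64.0.0/10")
print(ip_network("100.64.0.0/16").subnet_of(cgnat))  # True - join subnet
print(ip_network("100.88.0.0/16").subnet_of(cgnat))  # True - transit switch subnet
```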
Safe CIDR Choices That Actually Work
After dealing with multiple failed deployments due to range conflicts, here's what actually works reliably across OpenShift versions.
The Default Configuration
OpenShift's default ranges are safe and well-tested:
- Machine CIDR: 10.0.0.0/16 (your Azure VNet)
- Pod CIDR: 10.128.0.0/14 (OpenShift default)
- Service CIDR: 172.30.0.0/16 (OpenShift default)
If you have no specific requirements or conflicts with existing infrastructure, just use these. They work, they're tested, and you won't run into weird edge cases.
Custom Ranges
If the defaults conflict with your existing network infrastructure, here are safe alternatives:
For Pod CIDR:
- 10.128.0.0/14 (default, preferred)
- 192.168.0.0/16 (if 10.x is taken)
For Service CIDR:
- 172.30.0.0/16 (default)
- 192.168.0.0/24 (smaller, works for most use cases)
For Machine CIDR (Azure VNet):
- 10.0.0.0/16 (default)
- Any RFC1918 range that doesn't conflict with your existing Azure or on-premises networks
AVOID THESE RANGES
Do not use:
- CGNAT block: 100.64.0.0/10 (conflicts with OVN-Kubernetes internals)
- Link-local: 169.254.0.0/16 (conflicts with the masquerade subnet)
- Any ranges currently used in your Azure infrastructure or connected networks
Validation Tool
Before deploying, use the Red Hat OpenShift Network Calculator to validate your CIDR choices. It requires a Red Hat account, but it'll catch conflicts before you waste time on a failed deployment.
What You Can and Can't Change After Installation
Here's something that bit me: most of these CIDR decisions are permanent. Once the cluster is created, you're stuck with them.
Immutable After Creation
These cannot be changed without rebuilding the cluster:
- Pod CIDR - Each node gets a /23 allocation from this range, and it's baked into the network config
- Machine CIDR - Your VNet address space is locked at cluster creation
- Service CIDR - The virtual IP range for services is permanent
PLAN AHEAD
Size these ranges generously at creation time. It's better to have unused IP space than to need more and be unable to expand.
Can Be Modified (Advanced)
These can technically be changed post-installation, but it's not for the faint of heart:
- Join subnet
- Masquerade subnet
- Transit CIDR ranges
Modifying these requires deep knowledge of OVN-Kubernetes internals. Don't attempt this unless you really know what you're doing.
Network Expansion
You can expand the Pod CIDR range after installation, but you cannot change it to a different range entirely. For example, if you start with 10.128.0.0/16, you could expand it to 10.128.0.0/14, but you can't switch to 192.168.0.0/16.
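In other words, the new range has to contain the old one. A one-line check, using the same example ranges:

```python
from ipaddress import ip_network

original = ip_network("10.128.0.0/16")
print(original.subnet_of(ip_network("10.128.0.0/14")))   # True  - valid expansion
print(original.subnet_of(ip_network("192.168.0.0/16")))  # False - a different range, not allowed
```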
Sizing Your CIDR Blocks
Getting the size right matters because you can't easily fix it later. Here's how to calculate what you actually need.
Pod CIDR Sizing
Each node in your cluster gets a /23 subnet from the Pod CIDR block. That's 512 IP addresses per node.
Example calculation for 10.128.0.0/14:
- Total available addresses: ~262,144 IPs
- Per-node allocation: /23 (512 IPs)
- Maximum nodes supported: ~512 nodes
For most production clusters, a /14 or /16 Pod CIDR is sufficient. Don't go smaller than /18 unless you're absolutely certain you'll never need more than about 32 nodes (a /18 holds 32 per-node /23 blocks).
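If you want to check the math for your own range, node capacity is just a power of two: the number of /23 blocks that fit inside the Pod CIDR. A small helper, assuming the default /23 per-node allocation:

```python
from ipaddress import ip_network

def max_nodes(pod_cidr: str, host_prefix: int = 23) -> int:
    """Number of per-node /host_prefix blocks that fit inside the Pod CIDR."""
    return 2 ** (host_prefix - ip_network(pod_cidr).prefixlen)

print(max_nodes("10.128.0.0/14"))  # 512
print(max_nodes("10.128.0.0/16"))  # 128
print(max_nodes("10.128.0.0/18"))  # 32
```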
MINIMUM SIZES
- Pod CIDR minimum: /18 (supports ~32 nodes with the default /23 per node)
- Service CIDR minimum: /24 (254 services)
- Node subnet minimum: /27 (27 usable IPs per subnet)
Service CIDR Sizing
Each Kubernetes Service consumes one IP from this range.
- Minimum: /24 (254 usable IPs)
- Recommended for production: /22 or larger (1,022+ IPs)
If you're running a large platform with hundreds of microservices, size this accordingly. Running out of service IPs is not fun.
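The arithmetic is simple, but it's worth running for your candidate range. The usable counts below subtract the network and broadcast addresses, matching the /24 figure above:

```python
from ipaddress import ip_network

for cidr in ("192.168.0.0/24", "172.30.0.0/22", "172.30.0.0/16"):
    print(cidr, "->", ip_network(cidr).num_addresses - 2, "service IPs")
```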
Machine CIDR (VNet) Sizing
Your Azure VNet needs enough space for both control plane and worker nodes.
- Master subnet: Minimum /27 (32 IPs, ~27 usable)
- Worker subnet: Minimum /27 (32 IPs, ~27 usable)
- Recommended for production: /24 or larger per subnet
Remember that Azure reserves five IPs in each subnet (the first four addresses plus the broadcast address), so factor that into your calculations.
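A quick sketch of that calculation, with the five-address Azure reservation baked in (the example subnets are placeholders):

```python
from ipaddress import ip_network

AZURE_RESERVED_PER_SUBNET = 5  # network address, gateway, two Azure DNS IPs, broadcast

for cidr in ("10.0.0.0/27", "10.0.1.0/24"):
    usable = ip_network(cidr).num_addresses - AZURE_RESERVED_PER_SUBNET
    print(cidr, "->", usable, "usable IPs")
```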
Key Takeaways
CIDR planning for ARO is more complicated than it should be, but here's what you need to remember:
- Use the defaults if you can. OpenShift's default ranges (10.128.0.0/14 for pods, 172.30.0.0/16 for services) are tested and safe.
- Avoid the CGNAT block. The 100.64.0.0/10 range is off-limits on OpenShift 4.14+ due to OVN-Kubernetes internal reservations.
- Size generously. You can't easily change these ranges after cluster creation, so plan for growth.
- Validate before deploying. Use the Red Hat network calculator to catch conflicts before you waste time on failed deployments.
- Watch out for version changes. If you're upgrading from pre-4.14 clusters, be aware that range restrictions changed with the OVN-Kubernetes migration.
The good news is that once you get the CIDR planning right, it's one less thing to worry about. The bad news is that getting it wrong means rebuilding the cluster from scratch.