I'm trying to deploy a highly available Kubernetes control-plane across multiple Availability Zones (AZs) in OpenStack. Each AZ has its own dedicated subnet, and I need to assign control-plane nodes to different subnets based on their AZ placement.
Currently, I can achieve this for worker nodes by using a separate MachineDeployment with a different OpenStackMachineTemplate per AZ, each specifying its own subnet. For the control-plane, however, the OpenStackCluster spec only allows a single subnet/network configuration, which applies to all control-plane nodes.
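For reference, the per-AZ worker pattern looks roughly like the sketch below. This is a minimal illustration, not a tested manifest: the resource names, UUIDs, flavor, image, and Kubernetes version are placeholders, and the exact port/subnet field names should be checked against your CAPO API version.

```yaml
# One OpenStackMachineTemplate + MachineDeployment per AZ (names illustrative).
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackMachineTemplate
metadata:
  name: worker-az1
spec:
  template:
    spec:
      flavor: m1.large
      image:
        filter:
          name: ubuntu-22.04        # placeholder image name
      ports:
        - network:
            id: net-az1-uuid        # AZ-local network (placeholder UUID)
          fixedIPs:
            - subnet:
                id: subnet-az1-uuid # AZ-local subnet (placeholder UUID)
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: workers-az1
spec:
  clusterName: my-cluster
  replicas: 2
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: my-cluster
      failureDomain: az1            # pins these machines to the AZ
      version: v1.29.0              # placeholder version
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: workers-az1
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: OpenStackMachineTemplate
        name: worker-az1
```

Repeating this pair for az2 and az3 gives per-AZ subnets for workers; there is no equivalent mechanism for the control-plane, which is the gap this issue describes.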
Describe the solution you'd like
Similar to what AWS Cluster API Provider (CAPA) offers, I would like the ability to specify multiple subnets for the control-plane, with each subnet mapped to a specific Availability Zone.
Example of desired configuration:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
metadata:
  name: my-cluster
spec:
  controlPlaneAvailabilityZones:
    - az1
    - az2
    - az3
  network:
    subnets:
      - id: subnet-az1-uuid
        availabilityZone: az1
      - id: subnet-az2-uuid
        availabilityZone: az2
      - id: subnet-az3-uuid
        availabilityZone: az3
```
Or alternatively, allow KubeadmControlPlane to reference different OpenStackMachineTemplate per replica/AZ (similar to how MachineDeployment works).
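To make the alternative concrete, a hypothetical shape for per-AZ control-plane templates might look like the sketch below. The `failureDomainTemplates` field does not exist in any current API; it is purely an illustration of the kind of mapping this issue is asking for.

```yaml
# Hypothetical extension (NOT an existing API): KubeadmControlPlane with
# a per-failure-domain template mapping in addition to the default template.
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  replicas: 3
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: OpenStackMachineTemplate
      name: control-plane-default   # fallback template
  # Hypothetical field: map each failure domain (AZ) to an
  # OpenStackMachineTemplate that carries that AZ's subnet.
  failureDomainTemplates:
    az1: control-plane-az1
    az2: control-plane-az2
    az3: control-plane-az3
```

Either approach (AZ-mapped subnets on OpenStackCluster, or per-AZ templates on the control-plane) would solve the underlying problem.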
Describe alternatives you've considered
- **Single stretched subnet across all AZs:** works, but is not always possible due to network architecture constraints (separate L2 domains per AZ).
- **Manual post-deployment network configuration:** attaching additional ports after VM creation; this breaks the declarative model and is error-prone.
- **Using worker nodes as a pseudo control-plane:** not a viable solution for production environments.
- **Multiple single-node control-planes behind an external load balancer:** complex to manage and defeats the purpose of using Cluster API.
Additional context
- **CAPA reference:** in AWS CAPA, you can specify multiple subnets in `AWSCluster.spec.network.subnets` with AZ mapping, and control-plane nodes are automatically distributed across them.
- **Use case:** multi-AZ deployments in enterprise OpenStack environments where each AZ has isolated network segments for compliance/security reasons.
- **Current behavior:** the OpenStackCluster spec accepts a single subnet field, and all control-plane nodes are created in that same subnet regardless of their failureDomain (AZ) placement.
- **Similar feature in CAPA:** https://cluster-api-aws.sigs.k8s.io/topics/multi-az-control-planes
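For contrast with the desired configuration above, the current cluster-level network shape (as I understand the v1beta1 API; the UUIDs are placeholders) looks roughly like:

```yaml
# Current shape: a single cluster-wide network/subnet, with no way to
# associate a subnet with an availability zone.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
metadata:
  name: my-cluster
spec:
  controlPlaneAvailabilityZones:
    - az1
    - az2
    - az3
  network:
    id: net-uuid                    # placeholder UUID
  subnets:
    - id: subnet-uuid               # one subnet; all control-plane ports land here
```

Control-plane machines are spread across the listed AZs as failure domains, but their ports all attach to the same subnet, which fails when each AZ is a separate L2 domain.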