Hosted Control Plane and tenant networking

Official documentation: Not yet available

Tested with:

Component        Version
OpenShift        v4.21.9
OpenShift Virt   v4.21.0

Overview

Challenge: running a hosted cluster in a different tenant network segment or VLAN without wide-open access from the tenant segment to the management segment.

Additional requirement: the hub cluster must not have arbitrary addressing or routing into the tenant network segment. The hub may only attach hosted-cluster workloads (for example, KubeVirt VMs) to that segment.

Worker nodes are straightforward: attach them to the tenant network segment (DHCP or equivalent addressing is required).

Exposing hosted control plane endpoints into the tenant segment is harder. The following components must be reachable from workers and clients in that segment:

  • API Server
  • OAuth
  • Konnectivity
  • Ignition

Here is a summary of common publishing options for these components:

Component/Service   Exposing strategy (servicePublishingStrategy)
API Server          LoadBalancer (recommended; Kubernetes Service of type LoadBalancer)
                    NodePort* (not for production)
OAuth               Route (default; via the OpenShift router)
                    NodePort* (not for production)
Konnectivity        Route (default; via the OpenShift router)
                    LoadBalancer (Kubernetes Service of type LoadBalancer)
                    NodePort* (not for production)
Ignition            Route (default; via the OpenShift router)
                    NodePort* (not for production)

For this proof of concept, endpoints are exposed as follows:

    • API Server: LoadBalancer (fronted by external api-lb in the tenant segment; see below)
    • OAuth, Konnectivity, Ignition: Route via a dedicated ingress controller shard on the hub, fronted by external ingress-shared-lb with VIPs/DNS in the tenant segment

    Exposing components via a dedicated router shard

    Use a dedicated OpenShift Ingress Controller shard on the hub so only the hosted-cluster control-plane Routes are served by that shard. Tenant clients resolve OAuth, Konnectivity, and Ignition hostnames to ingress-shared-lb, which forwards to the shard’s NodePorts on the management network.

    Place an external load balancer in front of that shard (for example F5 BIG-IP or NetScaler) that can reach the hub’s management network and present stable tenant-facing VIPs or addresses.
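
    Once the shard is up, verify on the hub that the hosted control plane Routes are admitted by the tenant-a shard rather than the default router. A minimal check, assuming the hosted control plane namespace is clusters-tenant-a (it is listed in the shard's namespaceSelector below):

    oc -n clusters-tenant-a get route \
      -o custom-columns=NAME:.metadata.name,HOST:.spec.host,ROUTER:.status.ingress[*].routerName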

    Proof of concept environment overview

    Router between Mgmt and Tenant-A

    VyOS acts as router and firewall between management and Tenant-A. Restrict lateral traffic between the two segments (no full mesh); allow only what you need (for example DNS to resolvers, default route or NAT for internet egress). Hosted-cluster control-plane traffic from tenant nodes should flow to the external load balancer VIPs in the tenant segment (not directly into arbitrary management subnets).

    VyOS config commands
    set firewall group address-group ALLOWED-IPS address '10.32.96.1'
    set firewall group address-group ALLOWED-IPS address '10.32.96.31'
    set firewall group address-group ALLOWED-IPS address '10.32.111.254'
    set firewall ipv4 forward filter rule 49 action 'accept'
    set firewall ipv4 forward filter rule 49 description 'Allow IPs'
    set firewall ipv4 forward filter rule 49 destination group address-group 'ALLOWED-IPS'
    set firewall ipv4 forward filter rule 50 action 'drop'
    set firewall ipv4 forward filter rule 50 description 'Drop entire coe lab'
    set firewall ipv4 forward filter rule 50 destination address '10.32.96.0/20'
    
    set interfaces ethernet eth0 address 'dhcp'
    set interfaces ethernet eth1 address '192.168.203.1/24'
    
    set nat source rule 100 outbound-interface name 'eth0'
    set nat source rule 100 source address '192.168.203.0/24'
    set nat source rule 100 translation address 'masquerade'
    set service dhcp-server listen-interface 'eth1'
    set service dhcp-server shared-network-name coe-2003 authoritative
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 option default-router '192.168.203.1'
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 option name-server '10.32.96.1'
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 range 1 start '192.168.203.100'
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 range 1 stop '192.168.203.200'
    set service dhcp-server shared-network-name coe-2003 subnet 192.168.203.0/24 subnet-id '1'
    set service ssh
    set system host-name 'router-2003'
    set system name-server '10.32.96.1'
    set system name-server '10.32.96.31'
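
    To sanity-check the result after committing, VyOS operational mode offers show commands; the command names below assume VyOS 1.4:

    show firewall
    show dhcp server leases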
    

    Ingress Sharding at Hub Cluster

    Ingress Controller
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      name: tenant-a
      namespace: openshift-ingress-operator
    spec:
      domain: tenant-a.coe.muc.redhat.com
    
      endpointPublishingStrategy:
        type: NodePortService
      namespaceSelector:
        matchExpressions:
          - key: kubernetes.io/metadata.name
            operator: In
            values:
              - ingress-test
              - clusters-tenant-a
    
    % oc get svc -n openshift-ingress router-nodeport-tenant-a
    NAME                       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                     AGE
    router-nodeport-tenant-a   NodePort   172.30.141.209   <none>        80:32460/TCP,443:32488/TCP,1936:32095/TCP   106s
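
    The NodePorts allocated to the shard (32460 for HTTP and 32488 for HTTPS above) are the ports the external load balancer must target. They can also be read programmatically:

    oc -n openshift-ingress get svc router-nodeport-tenant-a \
      -o jsonpath='{range .spec.ports[*]}{.name}={.nodePort}{"\n"}{end}'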
    

    The ingress shard load balancer (ingress-shared-lb) is an RHEL 9 host running HAProxy.

    • Install HAProxy: dnf install haproxy
    • Configure SELinux: setsebool -P haproxy_connect_any 1
    • Apply the example haproxy configuration (update ports to match your NodePort service)
    • Enable and start HAProxy: systemctl enable --now haproxy
    HAProxy config
    global
      log         127.0.0.1 local2
      pidfile     /var/run/haproxy.pid
      maxconn     4000
      daemon
    defaults
      mode                    http
      log                     global
      option                  dontlognull
      option http-server-close
      option                  redispatch
      retries                 3
      timeout http-request    10s
      timeout queue           1m
      timeout connect         10s
      timeout client          1m
      timeout server          1m
      timeout http-keep-alive 10s
      timeout check           10s
      maxconn                 3000
    
    listen ingress-router-443
      bind *:443
      mode tcp
      balance source
      server ucs-blade-server-5 10.32.96.105:32488 check inter 1s
      server ucs-blade-server-6 10.32.96.106:32488 check inter 1s        
      server ucs-blade-server-7 10.32.96.107:32488 check inter 1s
      server ucs-blade-server-8 10.32.96.108:32488 check inter 1s
    
    listen ingress-router-80
      bind *:80
      mode tcp
      balance source
      server ucs-blade-server-5 10.32.96.105:32460 check inter 1s
      server ucs-blade-server-6 10.32.96.106:32460 check inter 1s        
      server ucs-blade-server-7 10.32.96.107:32460 check inter 1s
      server ucs-blade-server-8 10.32.96.108:32460 check inter 1s
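
    Before the DNS records below exist, the path through ingress-shared-lb can be tested from a Tenant-A host by pinning a hostname to the LB address (192.168.203.111 in this lab) with curl's --resolve option; any HTTP response, even a router 503 while the hosted cluster does not exist yet, shows that the LB -> NodePort -> shard chain works:

    curl -kI --resolve oauth.tenant-a.coe.muc.redhat.com:443:192.168.203.111 \
      https://oauth.tenant-a.coe.muc.redhat.com/healthz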
    

    Add DNS records

    konnectivity.tenant-a.coe.muc.redhat.com.       IN A 192.168.203.111
    oauth.tenant-a.coe.muc.redhat.com.              IN A 192.168.203.111
    ignition.tenant-a.coe.muc.redhat.com.           IN A 192.168.203.111
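
    Verify resolution from a host in the tenant segment:

    dig +short oauth.tenant-a.coe.muc.redhat.com
    dig +short konnectivity.tenant-a.coe.muc.redhat.com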
    

    Deployment sequence (reference)

    Three external load balancers appear in this write-up; keep their roles distinct:

    Name                Role
    ingress-shared-lb   Tenant-facing VIPs for the OAuth, Konnectivity, and Ignition Routes on the hub ingress shard
    api-lb              Tenant-facing VIP for the hosted cluster API (APIServer publishing)
    ingress-lb          Tenant-facing VIP for hosted cluster application Routes (*.apps…)

    Suggested order: (1) hub ingress shard + ingress-shared-lb + DNS for the three control-plane hostnames, (2) api-lb + API DNS, (3) ingress-lb + wildcard apps DNS, then (4) apply HostedCluster and NodePool. Adjust if your automation creates services first and you backfill DNS once NodePorts or service endpoints are known.

    The following two subsections describe (2) and (3); the hub shard and DNS for OAuth, Konnectivity, and Ignition are covered above.

    Deploy External Load Balancer for API (api-lb)

    Use an RHEL 9 virtual machine with HAProxy.

    • Install HAProxy: dnf install haproxy
    • Configure SELinux: setsebool -P haproxy_connect_any 1
    • Apply the example haproxy configuration (update ports to match your environment)
    • Enable and start HAProxy: systemctl enable --now haproxy
    HAProxy config
    global
      log         127.0.0.1 local2
      pidfile     /var/run/haproxy.pid
      maxconn     4000
      daemon
    defaults
      mode                    http
      log                     global
      option                  dontlognull
      option http-server-close
      option                  redispatch
      retries                 3
      timeout http-request    10s
      timeout queue           1m
      timeout connect         10s
      timeout client          1m
      timeout server          1m
      timeout http-keep-alive 10s
      timeout check           10s
      maxconn                 3000
    
    listen api
      bind *:6443
      mode tcp
      balance source
      server ucs-blade-server-5 10.32.96.105:30918 check inter 1s
      server ucs-blade-server-6 10.32.96.106:30918 check inter 1s        
      server ucs-blade-server-7 10.32.96.107:30918 check inter 1s
      server ucs-blade-server-8 10.32.96.108:30918 check inter 1s
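
    The backend port (30918 above) is the node port the hub allocates for the hosted cluster's kube-apiserver service, so it is only known once the HostedCluster exists (see the open topics below). One way to read it back, assuming the control plane namespace is clusters-tenant-a and the service keeps its usual kube-apiserver name:

    oc -n clusters-tenant-a get svc kube-apiserver \
      -o jsonpath='{.spec.ports[0].nodePort}'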
    

    Add DNS record:

    api.tenant-a.coe.muc.redhat.com.       IN A 192.168.203.<IP of VM>
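
    With DNS and the load balancer in place, reachability can be checked from Tenant-A without credentials; even an HTTP 403 shows that TLS terminates at the hosted API server:

    curl -k https://api.tenant-a.coe.muc.redhat.com:6443/version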
    

    Deploy External Load Balancer for Ingress (ingress-lb) of hosted cluster

    Use an RHEL 9 virtual machine with HAProxy.

    • Install HAProxy: dnf install haproxy
    • Configure SELinux: setsebool -P haproxy_connect_any 1
    • Apply the example haproxy configuration (update ports to match your environment)
    • Enable and start HAProxy: systemctl enable --now haproxy
    HAProxy config
    global
      log         127.0.0.1 local2
      pidfile     /var/run/haproxy.pid
      maxconn     4000
      daemon
    defaults
      mode                    http
      log                     global
      option                  dontlognull
      option http-server-close
      option                  redispatch
      retries                 3
      timeout http-request    10s
      timeout queue           1m
      timeout connect         10s
      timeout client          1m
      timeout server          1m
      timeout http-keep-alive 10s
      timeout check           10s
      maxconn                 3000
    
    listen ingress-router-443
      bind *:443
      mode tcp
      balance source
      server tenant-a-gngj5-mfwp6 192.168.203.101:30190 check inter 1s
      server tenant-a-gngj5-rrbmv 192.168.203.102:30190 check inter 1s        
    
    listen ingress-router-80
      bind *:80
      mode tcp
      balance source
      server tenant-a-gngj5-mfwp6 192.168.203.101:30282 check inter 1s
      server tenant-a-gngj5-rrbmv 192.168.203.102:30282 check inter 1s        
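
    The backends here are the tenant worker VMs and the NodePorts of the hosted cluster's default router, so they are only known once the NodePool is up. One way to collect them, assuming the hcp CLI is available and the hosted cluster's default router is published as a NodePort service (as the backend ports above suggest):

    hcp create kubeconfig --name tenant-a > tenant-a.kubeconfig
    oc --kubeconfig tenant-a.kubeconfig get nodes -o wide
    oc --kubeconfig tenant-a.kubeconfig -n openshift-ingress get svc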
    

    Add DNS record:

    *.apps.tenant-a.coe.muc.redhat.com.       IN A 192.168.203.<IP of VM>
    

    Start hosted control plane and nodepool

    HostedCluster
    apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    metadata:
      name: 'tenant-a'
      namespace: 'clusters'
      labels:
        "cluster.open-cluster-management.io/clusterset": 'default'
    spec:
      configuration:
        ingress:
          appsDomain: apps.tenant-a.coe.muc.redhat.com # (1)
          domain: ''
          loadBalancer:
            platform:
              type: ''
      channel: fast-4.21
      etcd:
        managed:
          storage:
            persistentVolume:
              size: 8Gi
            type: PersistentVolume
        managementType: Managed
      release:
        image: quay.io/openshift-release-dev/ocp-release:4.21.11-multi
      pullSecret:
        name: pullsecret-cluster-tenant-a
      sshKey:
        name: sshkey-cluster-tenant-a
      networking:
        clusterNetwork:
          - cidr: 10.132.0.0/14
        serviceNetwork:
          - cidr: 172.31.0.0/16
        networkType: OVNKubernetes
      controllerAvailabilityPolicy: SingleReplica
      infrastructureAvailabilityPolicy: SingleReplica
      platform:
        type: KubeVirt
        kubevirt:
          baseDomainPassthrough: false
      infraID: 'tenant-a'
      services:
        - service: APIServer
          servicePublishingStrategy:
            type: LoadBalancer
            loadBalancer:
              hostname: api.tenant-a.coe.muc.redhat.com  # (2)
        - service: OAuthServer
          servicePublishingStrategy:
            type: Route
            route:
              hostname: oauth.tenant-a.coe.muc.redhat.com  # (3)
        - service: OIDC
          servicePublishingStrategy:
            type: Route
        - service: Konnectivity
          servicePublishingStrategy:
            type: Route
            route:
              hostname: konnectivity.tenant-a.coe.muc.redhat.com  # (4)
        - service: Ignition
          servicePublishingStrategy:
            type: Route
            route:
              hostname: ignition.tenant-a.coe.muc.redhat.com  # (5)
    
    1. appsDomain: resolve names under apps.tenant-a.coe.muc.redhat.com to ingress-lb (hosted cluster ingress), not the hub shard.
    2. API server loadBalancer.hostname: resolve to api-lb, which forwards to the APIServer publishing target on the hub.
    3. OAuth route.hostname: resolve to ingress-shared-lb (hub dedicated shard).
    4. Konnectivity route.hostname: resolve to ingress-shared-lb.
    5. Ignition route.hostname: resolve to ingress-shared-lb.
    NodePool
    apiVersion: hypershift.openshift.io/v1beta1
    kind: NodePool
    metadata:
      name: 'tenant-a'
      namespace: 'clusters'
    spec:
      arch: amd64
      clusterName: 'tenant-a'
      replicas: 2
      management:
        autoRepair: false
        upgradeType: Replace
      platform:
        type: KubeVirt
        kubevirt:
          compute:
            cores: 2
            memory: 8Gi
          rootVolume:
            type: Persistent
            persistent:
              size: 32Gi
          additionalNetworks:
          - name: default/cudn-localnet1-2003 # (1)
          attachDefaultNetwork: false
      release:
        image: quay.io/openshift-release-dev/ocp-release:4.21.11-multi
    
    1. Attach NodePool VMs to the tenant segment using a user-defined network (UDN) localnet attachment (default/cudn-localnet1-2003 in this lab); a sketch of such an attachment follows below.
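
    The cudn-localnet1-2003 attachment itself is created outside of HyperShift. For orientation, a ClusterUserDefinedNetwork with Localnet topology along the following lines could provide it; the physical network name, the VLAN ID (2003), and the exact schema are assumptions for this lab, so adjust them to your OVN-Kubernetes version and bridge mappings:

    apiVersion: k8s.ovn.org/v1
    kind: ClusterUserDefinedNetwork
    metadata:
      name: cudn-localnet1-2003
    spec:
      namespaceSelector:              # must cover the namespace used in the NodePool reference (default/...)
        matchExpressions:
          - key: kubernetes.io/metadata.name
            operator: In
            values:
              - default
      network:
        topology: Localnet
        localnet:
          role: Secondary             # localnet networks are secondary attachments; the VM consumes it via additionalNetworks
          physicalNetworkName: localnet1  # assumed OVS bridge mapping on the hub nodes
          vlan:
            mode: Access
            access:
              id: 2003                # assumed VLAN of the Tenant-A segment
          # no subnets/ipam: the VMs get addresses from DHCP on the segment (VyOS above)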

    Open topics

    • Disable or constrain cloud provider integration so that Kubernetes LoadBalancer Service requests for the hosted cluster are not satisfied by the hub cluster cloud integration unless that is intentional.
    • WebUI bug: ACM shows https://console-openshift-console.apps.tenant-a.apps.ocp5.stormshift.coe.muc.redhat.com/ for the console, but the URL should be https://console-openshift-console.apps.tenant-a.coe.muc.redhat.com/.
    • Add custom endpoint publishing strategy
    • Find a solution for the NodePort chicken-and-egg problem of the external API load balancer: the backend NodePort is only allocated once the HostedCluster exists, while the load balancer and its DNS record are ideally in place beforehand.

    2026-05-01 2026-05-12 Contributors: Robert Bohne