Challenge: running a hosted cluster in a different tenant network segment or VLAN without wide-open access from the tenant segment to the management segment.
Additional requirement: the hub cluster must not have arbitrary addressing or routing into the tenant network segment. The hub may only attach hosted-cluster workloads (for example, KubeVirt VMs) to that segment.
Worker nodes are straightforward: attach them to the tenant network segment (DHCP or equivalent addressing is required).
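On the hub side, that attachment is typically a NetworkAttachmentDefinition the KubeVirt VMs plug into. A minimal sketch, assuming a Linux bridge br-tenant-a already exists on the hub nodes (for example via NMState) and carries the tenant VLAN; name and namespace are illustrative:

```yaml
# Hypothetical secondary network for Tenant-A; the bridge br-tenant-a
# must already exist on the hub nodes and terminate the tenant VLAN.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: tenant-a
  namespace: clusters
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "tenant-a",
      "type": "cnv-bridge",
      "bridge": "br-tenant-a"
    }
```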
Exposing hosted control plane endpoints into the tenant segment is harder. The following components must be reachable from workers and clients in that segment:
- API Server
- OAuth
- Konnectivity
- Ignition
The common options for publishing these components are LoadBalancer, NodePort, and Route (the servicePublishingStrategy types in the HostedCluster API). For this proof of concept, endpoints are exposed as follows (see the sketch after this list):
- API Server: LoadBalancer (fronted by the external api-lb in the tenant segment; see below)
- OAuth, Konnectivity, Ignition: Route via a dedicated ingress controller shard on the hub, fronted by the external ingress-shared-lb with VIPs/DNS in the tenant segment
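Expressed against the HostedCluster API, that mapping is roughly the following sketch; metadata names are illustrative, and the Route hostnames match the tenant DNS records shown further down:

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: tenant-a
  namespace: clusters
spec:
  services:
  - service: APIServer
    servicePublishingStrategy:
      type: LoadBalancer            # fronted by the external api-lb
  - service: OAuthServer
    servicePublishingStrategy:
      type: Route
      route:
        hostname: oauth.tenant-a.coe.muc.redhat.com
  - service: Konnectivity
    servicePublishingStrategy:
      type: Route
      route:
        hostname: konnectivity.tenant-a.coe.muc.redhat.com
  - service: Ignition
    servicePublishingStrategy:
      type: Route
      route:
        hostname: ignition.tenant-a.coe.muc.redhat.com
```

The explicit Route hostnames are what the dedicated ingress shard described next selects and serves.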
Use a dedicated OpenShift Ingress Controller shard on the hub so that only the hosted-cluster control-plane Routes are served by it (see the sketch below). Tenant clients resolve the OAuth, Konnectivity, and Ignition hostnames to ingress-shared-lb, which forwards to the shard's NodePorts on the management network.
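A sketch of such a shard, assuming NodePort publishing so that ingress-shared-lb has stable ports to target; the domain and the routeSelector label are assumptions — match whatever label your hosted control-plane Routes actually carry (or use a namespaceSelector on the control-plane namespace):

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: tenant-a-shard
  namespace: openshift-ingress-operator
spec:
  domain: tenant-a.coe.muc.redhat.com   # covers the three control-plane hostnames
  endpointPublishingStrategy:
    type: NodePortService               # ingress-shared-lb targets these NodePorts
  routeSelector:
    matchLabels:
      network-segment: tenant-a         # illustrative label on the control-plane Routes
```

Remember to exclude the same label in the default IngressController's routeSelector so these Routes are served only by the shard.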
Place an external load balancer (for example, F5 BIG-IP or NetScaler) in front of that shard; it must be able to reach the hub's management network while presenting stable, tenant-facing VIPs.
VyOS acts as router and firewall between the management segment and Tenant-A. Restrict lateral traffic between the two segments (no full mesh) and allow only what is needed, for example DNS to the resolvers and a default route or NAT for internet egress. Hosted-cluster control-plane traffic from tenant nodes should flow to the external load balancer VIPs in the tenant segment, not directly into arbitrary management subnets.
Tenant-side DNS resolves the three control-plane hostnames to the ingress-shared-lb VIP:

```
konnectivity.tenant-a.coe.muc.redhat.com. IN A 192.168.203.111
oauth.tenant-a.coe.muc.redhat.com.        IN A 192.168.203.111
ignition.tenant-a.coe.muc.redhat.com.     IN A 192.168.203.111
```
Three external load balancers appear in this write-up; keep their roles distinct:
| Name | Role |
| --- | --- |
| ingress-shared-lb | Tenant-facing VIPs for the OAuth, Konnectivity, and Ignition Routes on the hub ingress shard |
| api-lb | Tenant-facing VIP for the hosted cluster API (APIServer publishing) |
| ingress-lb | Tenant-facing VIP for hosted cluster application Routes (`*.apps…`) |
Suggested order: (1) hub ingress shard + ingress-shared-lb + DNS for the three control-plane hostnames; (2) api-lb + API DNS; (3) ingress-lb + wildcard apps DNS; (4) apply the HostedCluster and NodePool (see the NodePool sketch below). Adjust if your automation creates services first and backfills DNS once NodePorts or service endpoints are known.
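For step (4), a minimal NodePool sketch that attaches the KubeVirt workers only to the tenant segment; it assumes the NetworkAttachmentDefinition clusters/tenant-a from earlier, and the replica count, release image, and sizing are illustrative:

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: tenant-a
  namespace: clusters
spec:
  clusterName: tenant-a
  replicas: 3                           # illustrative
  management:
    upgradeType: Replace
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.16.0-x86_64  # illustrative
  platform:
    type: KubeVirt
    kubevirt:
      attachDefaultNetwork: false       # no leg into the hub pod network
      additionalNetworks:
      - name: clusters/tenant-a         # NetworkAttachmentDefinition from above
      compute:
        cores: 4
        memory: 8Gi
```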
The following two subsections describe (2) and (3); the hub shard and DNS for OAuth, Konnectivity, and Ignition are covered above.
Disable or constrain cloud provider integration so that Kubernetes LoadBalancer Service requests for the hosted cluster are not satisfied by the hub cluster's cloud integration unless that is intentional.
- Web UI bug: ACM shows https://console-openshift-console.apps.tenant-a.apps.ocp5.stormshift.coe.muc.redhat.com/ for the console, but the URL should be https://console-openshift-console.apps.tenant-a.coe.muc.redhat.com/.
- Add a custom endpoint publishing strategy.
- Find a solution for the NodePort chicken-and-egg problem of the external API load balancer: the load balancer must be configured with NodePort values that exist only after the corresponding services have been created.