Multi-Architecture Compute: Managing User Provisioned Infrastructure Load Balancers with Post-Installation Workers

From https://community.ibm.com/community/user/powerdeveloper/blogs/paul-bastide/2024/03/21/multi-architecture-compute-managing-user-provision?CommunityKey=daf9dca2-95e4-4b2c-8722-03cd2275ab63

Multi-Arch Compute for Red Hat OpenShift Container Platform on IBM Power systems lets you use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This feature opens new possibilities for versatility and optimization in composite solutions that span multiple architectures. The cluster owner is able to add additional workers post-installation.

With User Provisioned Infrastructure (UPI), the cluster owner may have used automation or manual setup for the front-end load balancers. The IBM team provides the PowerVS ocp4-upi-powervs, PowerVM ocp4-upi-powervm and HMC ocp4-upi-powervm-hmc automation.

When installing a cluster, the cluster is set up with an external load balancer, such as haproxy. The external load balancer routes traffic to backend pools for the Ingress pods, the API server and the MachineConfig server. The haproxy configuration is stored at /etc/haproxy/haproxy.cfg.

For instance, the configuration for the ingress-https load balancer would look like the following:

frontend ingress-https
        bind *:443
        default_backend ingress-https
        mode tcp
        option tcplog

backend ingress-https
        balance source
        mode tcp
        server master0 10.17.15.11:443 check
        server master1 10.17.19.70:443 check
        server master2 10.17.22.204:443 check
        server worker0 10.17.26.89:443 check
        server worker1 10.17.30.71:443 check
        server worker2 10.17.30.225:443 check
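
The same haproxy.cfg also carries frontend and backend sections for the API server (port 6443) and the MachineConfig server (port 22623). Those pools typically point only at the control-plane nodes (plus the bootstrap node during installation), so they do not need new entries when workers are added. As a rough sketch, reusing the master IPs from the example above, the api backend looks like this:

backend api
        balance source
        mode tcp
        server master0 10.17.15.11:6443 check
        server master1 10.17.19.70:6443 check
        server master2 10.17.22.204:6443 check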

When adding a post-installation worker to a UPI cluster, one must update the ingress-http and ingress-https backends, as shown in the steps below.

  1. Get the IP and hostname of the new workers:
# oc get nodes -l kubernetes.io/arch=amd64 --no-headers=true -o json | jq -c '.items[].status.addresses'
[{"address":"10.17.15.11","type":"InternalIP"},{"address":"worker-amd64-0","type":"Hostname"}]
[{"address":"10.17.19.70","type":"InternalIP"},{"address":"worker-amd64-1","type":"Hostname"}]
  2. Edit /etc/haproxy/haproxy.cfg:

a. Find backend ingress-http, then add the worker hostnames and IPs before the first server entry:

        server worker-amd64-0 10.17.15.11:80 check
        server worker-amd64-1 10.17.19.70:80 check

b. Find backend ingress-https, then add the worker hostnames and IPs before the first server entry:

        server worker-amd64-0 10.17.15.11:443 check
        server worker-amd64-1 10.17.19.70:443 check

c. Save the config file.

  3. Restart haproxy:
# systemctl restart haproxy

You now have the additional workers incorporated into haproxy, and traffic continues to flow as the ingress pods move between Power and Intel nodes. You have a fully functional environment.
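
If you add workers regularly, generating the server entries can be scripted. The following is a minimal sketch, assuming the haproxy host can reach the cluster with oc and has jq installed; it only prints candidate server lines (the backend names and file path are taken from the examples above) and validates the configuration before restarting, while the edit to haproxy.cfg itself stays manual:

# Print candidate server lines for the new amd64 workers; paste the ":80" lines
# into "backend ingress-http" and the ":443" lines into "backend ingress-https".
oc get nodes -l kubernetes.io/arch=amd64 -o json \
  | jq -r '.items[].status.addresses
           | (map(select(.type=="Hostname"))[0].address) as $name
           | (map(select(.type=="InternalIP"))[0].address) as $ip
           | "        server \($name) \($ip):80 check",
             "        server \($name) \($ip):443 check"'

# After saving /etc/haproxy/haproxy.cfg, validate it before restarting haproxy.
haproxy -c -f /etc/haproxy/haproxy.cfg && systemctl restart haproxy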

Best wishes.

Paul

P.S. You can learn more about scaling up the Ingress Controller at Scaling an Ingress Controller:

$ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge
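
To confirm where the router replicas end up after scaling, you can list the pods in the openshift-ingress namespace along with the nodes they landed on:

$ oc get pods -n openshift-ingress -o wide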

P.P.S. If you are running very advanced scenarios, you can change the IngressController spec.nodePlacement.nodeSelector to place the workload on specific architectures. See Configuring an Ingress Controller:

nodePlacement:
  nodeSelector:
    matchLabels:
      kubernetes.io/arch: ppc64le
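
As a sketch, the same node placement can be applied with a merge patch, mirroring the replicas example above:

$ oc patch -n openshift-ingress-operator ingresscontroller/default --type=merge --patch '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"kubernetes.io/arch":"ppc64le"}}}}}'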
