I found a cool article on Cert Manager with IPI PowerVS
Simplify certificate management on OpenShift across multiple architectures
Chirag Kyal is a Software Engineer at Red Hat… he has authored an article about deploying IPI PowerVS and cert-manager on IBM Cloud.
Check out the article about efficient certificate management techniques on Red Hat OpenShift using the cert-manager Operator for OpenShift’s multi-architecture support.
I’ve developed the following script to help you get started deploying multi-architecture applications and to elaborate on techniques for controlling multi-arch compute. The script uses the sock-shop application, which is available at https://github.com/ocp-power-demos/sock-shop-demo . This series of instructions for sock-shop-demo requires kustomize, and you should follow the readme.md in the repository to set up the username and password for mongodb.
You do not need to do every step that follows; please feel free to install and use what you’d like. I recommend the kustomize install with multi-no-ns, and then playing with the features you find interesting. Note that multi-no-ns requires no namespace.
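If you want to jump straight to that recommended install, here is a minimal sketch; it assumes the repository layout referenced below, with the overlay at manifests/overlays/multi-no-ns:

kustomize build manifests/overlays/multi-no-ns | oc apply -f -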
The layout of the application is described in this diagram:
Deploying a non-multiarch Intel App
This deployment shows the Exec errors and pod scheduling errors that are encountered when scheduling Intel-only Pods on Power.
For these steps, you are going to clone the ocp-power-demos sock-shop-demo and then experiment to resolve errors so the application is up and running.
kustomize is used because of the sort-order feature in the binary.
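To follow along, clone the repository first:

git clone https://github.com/ocp-power-demos/sock-shop-demo
cd sock-shop-demo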
Update the manifests/overlays/single/env.secret file with a username and password for mongodb. openssl rand -hex 10 is a good way to generate a random password. You’ll need to copy this env.secret into each overlays/ folder that is used in the demo.
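A minimal sketch of that preparation, assuming the overlay folders sit under manifests/overlays/ as described above and that the single overlay is what you deploy for this part of the demo:

# generate a value to use as the mongodb password
openssl rand -hex 10
# after editing manifests/overlays/single/env.secret, copy it to the other overlays you plan to use
for overlay in manifests/overlays/*/; do
  [ "$overlay" = "manifests/overlays/single/" ] || cp manifests/overlays/single/env.secret "$overlay"
done
# deploy the Intel-only variant of the application
kustomize build manifests/overlays/single | oc apply -f -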
You might be lucky enough for the scheduler to assign these to Intel-only nodes.
If, at this point, they are all Running with no restarts, the application is up.
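A quick way to check where things landed (the sock-shop project name comes from the repo’s manifests):

oc get pods -n sock-shop -o wide
# the NODE column shows where each pod was scheduled; list node architectures with:
oc get nodes -L kubernetes.io/arch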
Grab the external URL
❯ oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
sock-shop sock-shop-test-user-4.apps.rdr-mac-cust-d.rdr-xyz.net front-end 8079 edge/Redirect None
Open a Browser, and navigate around. Try registering a user.
It failed for me.
Cordon Power nodes
The purpose is to cordon the Power Nodes and delete the existing pod so you get the Pod running on the architecture you want. This is only recommended on a dev/test system and on the worker nodes.
Find the Power workers
oc get nodes -l kubernetes.io/arch=ppc64le | grep worker
For each of the Power workers, cordon the node:
oc adm cordon node/<worker>
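If you have several Power workers, a one-liner sketch to cordon them all; it reuses the label selector from above and assumes you only want worker nodes:

for n in $(oc get nodes -l kubernetes.io/arch=ppc64le,node-role.kubernetes.io/worker -o name); do oc adm cordon "$n"; done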
List the front-end app pods
❯ oc get pods -l name=front-end
NAME READY STATUS RESTARTS AGE
front-end-648fdf6957-bjk9m 0/1 CrashLoopBackOff 13 (26s ago) 42m
Delete the front-end pods.
oc delete pod/front-end-648fdf6957-bjk9m
The app should be running correctly at this point.
Use a Node Selector for the Application
This demonstrates how to use a node selector to place the workload on the right nodes.
These microservices use Deployments. We can modify the deployment to use NodeSelectors.
Edit the manifests/overlays/single/09-front-end-dep.yaml or oc edit deployment/front-end
Find the nodeSelector field and add an architecture limitation using a Node label:
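For example, to pin the front-end Deployment to Intel nodes, the pod template would carry a nodeSelector like this sketch (kubernetes.io/arch is the standard node label; amd64 is the value for Intel):

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64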
With many of these applications, there are architecture specific alternatives. You can run without NodeSelectors to get the workload scheduled where there is support.
To switch to using node selectors across Power and Intel:
Switch to the project with oc project sock-shop
Delete the Pods and Recreate (this is a manifest-listed set of images)
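A sketch of that step, assuming sock-shop is the current project:

oc delete pods --all

The Deployments recreate the pods, and the scheduler now honors the nodeSelector you added; the manifest-listed images resolve to the right architecture on each node.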
❯ oc get pod -l name=rabbitmq -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rabbitmq-65c75db8db-9jqbd 2/2 Running 0 96s 10.130.2.31 mac-01a7-worker-1 <none> <none>
Process the template with the NFS_PATH and NFS_SERVER
# oc process -f storage-class-nfs-template.yaml -p NFS_PATH=/data -p NFS_SERVER=10.17.2.138 | oc apply -f -
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
storageclass.storage.k8s.io/nfs-client created
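To exercise the new storage class, a claim like the following sketch should bind (the claim name and size here are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Gi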
FYI: I was made aware of kubernetes-sigs/kube-scheduler-simulator and the release simulator/v0.2.0.
That’s why we are developing a simulator for kube-scheduler — you can try out the behavior of the scheduler while checking which plugin made what decision for which Node.
Optimal logical partition (LPAR) placement can be important for improving the performance of workloads, as it favors efficient use of the memory and CPU resources on the system. However, for certain configurations and settings, such as I/O device allocation to the partition, the amount of memory allocated, CPU entitlement to the partition, and so on, we might not get a desired LPAR placement. In such situations, the technique described in this blog can enable you to place the LPAR in a desired optimal configuration.
Enhancing container security with Aqua Trivy on IBM Power
… IBM Power development team found that Trivy is as effective as other open source scanners in detecting vulnerabilities. Not only is Trivy suitable for container security in IBM Power clients’ DevSecOps pipelines, but the scanning process is also simple. IBM Power’s support for Aqua Trivy underscores the industry’s recognition of its efficacy as an open source scanner.
The Red Hat OpenShift Container Platform runs on IBM Power systems, offering a secure and reliable foundation for modernizing applications and running containerized workloads.
Multi-Arch Compute for OpenShift Container Platform lets you use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This exciting feature opens new possibilities for versatility and optimization for composite solutions that span multiple architectures.
Join Paul Bastide, IBM Senior Software Engineer, as he introduces the background behind Multi-Arch Compute and then gets you started setting up, configuring, and scheduling workloads. After, Paul will take you through a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.
More and more IBM® Power® clients are modernizing securely with lower risk and faster time to value with cloud-native microservices on Red Hat® OpenShift® running alongside their existing banking and industry applications on AIX, IBM i, and Linux. With the availability of Red Hat OpenShift 4.15 on March 19th, Red Hat and IBM introduced a long-awaited innovation called Multi-Architecture Compute that enables clients to mix Power and x86 worker nodes in a single Red Hat OpenShift cluster. With the release of Red Hat OpenShift 4.15, clients can now run the control plane for a Multi-Architecture Compute cluster natively on Power.
Some tips for setting up a Multi-Arch Compute Cluster
If you are setting up a multi-arch compute cluster manually, rather than with automation, you’ll want to follow this process:
Set up the initial cluster with the multi payload on Intel or Power for the control plane (a quick verification sketch follows this list).
Open the network ports between the two environments
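One way to confirm the cluster is on the multi payload is the release-metadata check; this is a sketch, and the output shown is what a multi-architecture payload is expected to report:

oc adm release info -o jsonpath="{ .metadata.metadata}"
# expect something like: {"release.openshift.io/architecture":"multi","url":"..."}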
Multi-Arch Compute for Red Hat OpenShift Container Platform on IBM Power systems lets one use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This feature opens new possibilities for versatility and optimization for composite solutions that span multiple architectures. The cluster owner is able to add an additional worker post-installation.
With User Provisioned Infrastructure (UPI), the cluster owner may have used automation or manual setup of front-end load balancers. The IBM team provides PowerVS ocp4-upi-powervs, PowerVM ocp4-upi-powervm and HMC ocp4-upi-powervm-hmc automation.
When installing a cluster, the cluster is set up with an external load balancer, such as haproxy. The external load balancer routes traffic to pools for the Ingress pods, the API server, and the MachineConfig server. The haproxy configuration is stored at /etc/haproxy/haproxy.cfg.
For instance, the configuration for ingress-https load balancer would look like the following:
frontend ingress-https
  bind *:443
  default_backend ingress-https
  mode tcp
  option tcplog

backend ingress-https
  balance source
  mode tcp
  server master0 10.17.15.11:443 check
  server master1 10.17.19.70:443 check
  server master2 10.17.22.204:443 check
  server worker0 10.17.26.89:443 check
  server worker1 10.17.30.71:443 check
  server worker2 10.17.30.225:443 check
When adding a post-installation worker to a UPI cluster, one must update the ingress-http and ingress-https backends:
a. Find backend ingress-http, then, before the first server entry, add the worker hostnames and IPs:
server worker-amd64-0 10.17.15.11:80 check
server worker-amd64-1 10.17.19.70:80 check
b. Find backend ingress-https, then, before the first server entry, add the worker hostnames and IPs:
server worker-amd64-0 10.17.15.11:443 check
server worker-amd64-1 10.17.19.70:443 check
c. Save the config file.
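Before restarting, you can ask haproxy to validate the edited file (a standard haproxy configuration check):

haproxy -c -f /etc/haproxy/haproxy.cfg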
d. Restart haproxy
# systemctl restart haproxy
You now have the additional workers incorporated into haproxy, so as the ingress pods move between Power and Intel and back, you have a fully functional environment.
P.P.S. If you are running very advanced scenarios, you can change the ingresscontroller spec.nodePlacement.nodeSelector to put the workload on specific architectures. See Configuring an Ingress Controller.
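A hedged sketch of what that might look like, pinning the default IngressController to Intel nodes (the amd64 value is just an example; nodePlacement.nodeSelector takes a label selector with matchLabels):

oc patch ingresscontroller/default -n openshift-ingress-operator --type=merge \
  -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"kubernetes.io/arch":"amd64"}}}}}'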