In this article we’ll explore how to use the built-in build capabilities of Red Hat OpenShift 4 in a multi-arch compute environment, and how to use nodeSelectors to schedule builds on nodes of the architecture of our choosing.
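For example, here is a hedged sketch of a BuildConfig that pins its build pod to a chosen architecture via spec.nodeSelector; the name, repository, and output tag are placeholders:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-build                # placeholder name
spec:
  nodeSelector:
    kubernetes.io/arch: ppc64le     # run the build pod on Power nodes; use amd64 for Intel
  source:
    type: Git
    git:
      uri: https://github.com/example/sample-app.git   # placeholder repository
  strategy:
    type: Docker
    dockerStrategy: {}
  output:
    to:
      kind: ImageStreamTag
      name: sample-app:latest       # placeholder image stream tag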
The items below highlight some nice security features and tools for IBM Power containers.
OpenShift Routes for cert-manager
The OpenShift Routes project supports automatically getting a certificate for OpenShift routes from any cert-manager Issuer, similar to annotating an Ingress or Gateway resource in vanilla Kubernetes.
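As a sketch, annotating a Route looks something like the following; the issuer, host, and service names are placeholders, and the annotation names should be double-checked against the openshift-routes project README:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-route                            # placeholder name
  annotations:
    cert-manager.io/issuer-name: my-issuer  # assumed annotation; issuer-kind/issuer-group can also be set
spec:
  host: app.apps.example.com                # placeholder hostname
  to:
    kind: Service
    name: my-service                        # placeholder service
  tls:
    termination: edge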
I found a cool article on Cert Manager with IPI PowerVS
Simplify certificate management on OpenShift across multiple architectures
Chirag Kyal, a Software Engineer at Red Hat, has authored an article about deploying IPI PowerVS and cert-manager on IBM Cloud.
Check out the article to learn efficient certificate management techniques on Red Hat OpenShift using the multi-architecture support of the cert-manager Operator for OpenShift.
I’ve developed the following series of instructions to help you get started deploying multi-architecture applications and to elaborate on the techniques for controlling multi-arch compute. It uses the sock-shop application, which is available at https://github.com/ocp-power-demos/sock-shop-demo . The instructions require kustomize, and you’ll need to follow the readme.md in the repository to set up the username and password for mongodb.
You do not need to do every step that follows; feel free to install and use what you’d like. I recommend the kustomize install with the multi-no-ns overlay, and then playing with the features you find interesting. Note that multi-no-ns does not include a namespace (see the sketch below).
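A minimal sketch of applying that overlay, assuming it lives at manifests/overlays/multi-no-ns (check the repository for the exact path) and that you switch to the target project first since the overlay carries no namespace:

oc new-project sock-shop
kustomize build manifests/overlays/multi-no-ns | oc apply -f -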
The layout of the application is described in this diagram:
Deploying a non-multiarch Intel App
This deployment shows the Exec errors and pod scheduling errors that are encountered when scheduling Intel only Pods on Power.
For these steps, you are going to clone the ocp-power-demos’s sock-shop-demo and then experiment to resolve errors so the application is up and running.
kustomize is used because of the sort-order feature in the binary, which applies the manifests in a predictable order.
Update the manifests/overlays/single/env.secret file with a username and password for mongodb; openssl rand -hex 10 is a good way to generate a random password. You’ll need to copy this env.secret into each overlays/ folder that is used in the demo.
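A minimal sketch for generating the file; the key names here are assumptions, so use the exact keys from the repository’s readme.md:

# generate a random mongodb password (key names are assumed; see readme.md)
cat > manifests/overlays/single/env.secret <<EOF
MONGO_USER=mongouser
MONGO_PASS=$(openssl rand -hex 10)
EOF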
You might be lucky enough that the scheduler assigns these pods to Intel-only nodes.
If the pods are all Running with no restarts at this point, the application is up.
Grab the external URL
❯ oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
sock-shop sock-shop-test-user-4.apps.rdr-mac-cust-d.rdr-xyz.net front-end 8079 edge/Redirect None
Open a Browser, and navigate around. Try registering a user.
It failed for me.
Cordon Power nodes
The purpose is to cordon the Power Nodes and delete the existing pod so you get the Pod running on the architecture you want. This is only recommended on a dev/test system and on the worker nodes.
Find the Power workers
oc get nodes -l kubernetes.io/arch=ppc64le | grep worker
For each of the Power workers, cordon the node:
oc adm cordon node/<worker>
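Or, as a one-liner that combines the two steps (this sketch assumes the Power workers carry the standard node-role.kubernetes.io/worker label):

for n in $(oc get nodes -l kubernetes.io/arch=ppc64le,node-role.kubernetes.io/worker= -o name); do oc adm cordon "$n"; done

Use oc adm uncordon to restore scheduling when you are done.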
List the front-end app pods
❯ oc get pods -l name=front-end
NAME READY STATUS RESTARTS AGE
front-end-648fdf6957-bjk9m 0/1 CrashLoopBackOff 13 (26s ago) 42m
Delete the front-end pods.
oc delete pod/front-end-648fdf6957-bjk9m
The app should be running correctly at this point.
Use a Node Selector for the Application
This step demonstrates how to use a nodeSelector to put the workload on the right nodes.
These microservices use Deployments, so we can modify each Deployment to use a nodeSelector.
Edit manifests/overlays/single/09-front-end-dep.yaml, or run oc edit deployment/front-end.
Find the nodeSelector field and add an architecture limitation using a Node label:
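A minimal sketch of the relevant fragment in the Deployment’s pod template; amd64 pins the Intel-only front-end to Intel nodes, and ppc64le would do the same for Power:

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64    # schedule this workload only on Intel nodes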
Many of these applications have architecture-specific alternatives. You can also run without nodeSelectors and let the workload be scheduled wherever there is support.
To switch to using node selectors across Power and Intel:
Switch to the project with oc project sock-shop
Delete the pods so they are recreated (these are manifest-listed images, so each pod pulls the variant that matches its node’s architecture).
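For example, deleting by label lets the Deployment recreate the pod (the rabbitmq label below matches the check that follows):

oc delete pod -l name=rabbitmq

Then verify where the new pod was scheduled: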
❯ oc get pod -l name=rabbitmq -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
rabbitmq-65c75db8db-9jqbd 2/2 Running 0 96s 10.130.2.31 mac-01a7-worker-1 <none> <none>
Process the template with the NFS_PATH and NFS_SERVER parameters:
# oc process -f storage-class-nfs-template.yaml -p NFS_PATH=/data -p NFS_SERVER=10.17.2.138 | oc apply -f -
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
storageclass.storage.k8s.io/nfs-client created
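With the nfs-client storage class in place, a PersistentVolumeClaim can reference it. A minimal sketch, with a placeholder name and size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-claim              # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client    # the class created above
  resources:
    requests:
      storage: 1Gi                # placeholder size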
FYI: I was made aware of kubernetes-sigs/kube-scheduler-simulator and the release simulator/v0.2.0.
That’s why we are developing a simulator for kube-scheduler — you can try out the behavior of the scheduler while checking which plugin made what decision for which Node.
Optimal logical partition (LPAR) placement can be important for improving the performance of workloads, as it favors efficient use of the memory and CPU resources on the system. However, for certain configurations and settings, such as I/O device allocation to the partition, the amount of memory allocated, CPU entitlement for the partition, and so on, we might not get the desired LPAR placement. In such situations, the technique described in this blog can enable you to place the LPAR in a desired, optimal configuration.
Enhancing container security with Aqua Trivy on IBM Power
… IBM Power development team found that Trivy is as effective as other open source scanners in detecting vulnerabilities. Not only does Trivy prove to be suitable for container security in IBM Power clients’ DevSecOps pipelines, but the scanning process is simple. IBM Power’s support for Aqua Trivy underscores the industry’s recognition of Trivy’s efficacy as an open source scanner.
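As a quick sketch, scanning a Power image might look like the following; the image reference is a placeholder, and --platform selects the architecture variant of a multi-arch image:

trivy image --platform linux/ppc64le quay.io/example/my-app:latest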
The Red Hat OpenShift Container Platform runs on IBM Power systems, offering a secure and reliable foundation for modernizing applications and running containerized workloads.
Multi-Arch Compute for OpenShift Container Platform lets you use a pair of compute architectures, such as ppc64le and amd64, within a single cluster. This exciting feature opens new possibilities for versatility and optimization for composite solutions that span multiple architectures.
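A quick way to see which architectures are present in a cluster:

oc get nodes -L kubernetes.io/arch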
Join Paul Bastide, IBM Senior Software Engineer, as he introduces the background behind Multi-Arch Compute and then gets you started setting up, configuring, and scheduling workloads. Afterward, Paul will take you through a brief demonstration showing common problems and solutions for running multiple architectures in the same cluster.