My colleague, Paul Chapman, has a nice article on .NET 9.0 running on IBM Power.
https://community.ibm.com/community/user/powerdeveloper/blogs/paul-chapman/2024/11/21/dotnet9
Best wishes.
With the third new release this year, Red Hat OpenShift 4.17 is now generally available, including for IBM® Power®. You can read the release notes here and find the guide for installing OpenShift 4.17 on Power here. This release builds on features included in Red Hat OpenShift 4.15 and 4.16, including an important update to multi-architecture compute that helps clients automate their modernization journeys with Power. Other updates and enhancements for clients deploying on Power focus on scalability, resource optimization, security, developer and system administrator productivity, and more. Here is an overview of key new features and improvements specifically relevant to Power:
Multiarch Tuning Operator
Included with Red Hat OpenShift 4.17 is an update to multi-architecture compute called the Multiarch Tuning Operator. The Multiarch Tuning Operator optimizes workload management across different architectures such as IBM Power, IBM Z, and x86, including single-architecture clusters transitioning to multi-architecture environments. It allows systems administrators to handle scheduling and resource allocation across these different architectures by ensuring workloads are correctly directed to the nodes of compatible architectures. The Multiarch Tuning Operator in OpenShift 4.17 further helps clients optimize resource allocation with policies that automatically place workloads on the most appropriate architecture. This also improves system administrator productivity and is especially useful with business-critical workloads that require high performance or need specific architecture capabilities, such as data-intensive applications often found running on Power.
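For example, after installing the operator you typically enable pod placement cluster-wide with a ClusterPodPlacementConfig resource. The sketch below is a minimal example and assumes the multiarch.openshift.io/v1beta1 API and the spec field shown; verify both against the 4.17 documentation:
# enable architecture-aware pod placement across the cluster
cat << EOF | oc apply -f -
apiVersion: multiarch.openshift.io/v1beta1
kind: ClusterPodPlacementConfig
metadata:
  name: cluster
spec:
  logVerbosity: Normal
EOF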
The scheduler-plugins project has a new release, v0.30.6, which aligns with the Kubernetes version v1.30.6. This feature is used in concert with the Secondary Scheduler Operator.
IPI with FIPS mode creates certificates that are FIPS compliant and makes sure the Nodes/Operators are using the proper cryptographic profiles.
The host needs FIPS mode enabled and a RHEL 9 equivalent stream; you can verify with fips-mode-setup --check.
Note, you must reboot after enabling FIPS or this binary will not function.
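On a RHEL 9 bastion the check and enablement look roughly like this (reboot required, as noted above):
# check whether FIPS mode is already on
fips-mode-setup --check
# enable FIPS mode, then reboot so the change takes effect
fips-mode-setup --enable
reboot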
oc
# curl -O https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/openshift-client-linux-ppc64le-rhel9.tar.gz
# tar xvf openshift-client-linux-ppc64le-rhel9.tar.gz
oc
kubectl
README.md
You can optionally move the oc and kubectl binaries to /usr/local/bin/.
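For example:
# put the extracted binaries on the PATH and confirm the client version
install -m 0755 oc kubectl /usr/local/bin/
oc version --client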
ccoctl
# curl -O https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/ccoctl-linux-rhel9.tar.gz
# tar xvf ccoctl-linux-rhel9.tar.gz ccoctl
ccoctl
# chmod 755 ccoctl
Copy over your pull-secret.txt
Get the Credentials Request pull spec from the release image https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/release.txt
Pull From: quay.io/openshift-release-dev/ocp-release@sha256:6507d5a101294c670a283f5b56c5595fb1212bd6946b2c3fee01de2ef661625f
# mkdir -p credreqs
# oc adm release extract --cloud=powervs --credentials-requests quay.io/openshift-release-dev/ocp-release@sha256:6507d5a101294c670a283f5b56c5595fb1212bd6946b2c3fee01de2ef661625f --to=./credreqs -a pull-secret.txt
...
Extracted release payload created at 2024-10-02T21:38:57Z
# ls credreqs/
0000_26_cloud-controller-manager-operator_15_credentialsrequest-powervs.yaml
0000_30_cluster-api_01_credentials-request.yaml
0000_30_machine-api-operator_00_credentials-request.yaml
0000_50_cluster-image-registry-operator_01-registry-credentials-request-powervs.yaml
0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml
0000_50_cluster-storage-operator_03_credentials_request_powervs.yaml
# export IBMCLOUD_API_KEY=<your ibmcloud apikey>
# ./ccoctl ibmcloud create-service-id --credentials-requests-dir ./credreqs --name fips-svc --resource-group-name ocp-dev-resource-group
2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-cloud-controller-manager-ibm-cloud-credentials-credentials.yaml
2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-machine-api-powervs-credentials-credentials.yaml
2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-image-registry-installer-cloud-credentials-credentials.yaml
2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-ingress-operator-cloud-credentials-credentials.yaml
2024/11/01 08:22:12 Saved credentials configuration to: /root/install/t/manifests/openshift-cluster-csi-drivers-ibm-powervs-cloud-credentials-credentials.yaml
curl -O https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/openshift-install-rhel9-ppc64le.tar.gz
Note, with a FIPS host, you'll want to use the rhel9 binaries as they support FIPS: https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp-dev-preview/4.18.0-ec.2/openshift-client-linux-ppc64le-rhel9.tar.gz
Unarchive openshift-install-rhel9-ppc64le.tar.gz
Create the install-config.yaml using openshift-install-fips create install-config, per https://developer.ibm.com/tutorials/awb-deploy-ocp-on-power-vs-ipi/
Edit install-config.yaml and add a new line at the end: fips: true
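For example, appending the flag and confirming it is present:
# add the FIPS flag to the end of install-config.yaml and verify it
echo 'fips: true' >> install-config.yaml
grep '^fips' install-config.yaml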
[root@fips-ocp-7219-bastion-0 t]# mkdir -p 20241031c
[root@fips-ocp-7219-bastion-0 t]# cp install-config.yaml-old 20241031c/install-config.yaml
Run openshift-install-fips create manifests:
# openshift-install-fips create manifests
WARNING Release Image Architecture not detected. Release Image Architecture is unknown
INFO Consuming Install Config from target directory
INFO Adding clusters...
INFO Manifests created in: cluster-api, manifests and openshift
# cp credreqs/manifests/openshift-*yaml 20241031c/openshift/
# ls openshift/
99_feature-gate.yaml 99_openshift-machineconfig_99-master-ssh.yaml
99_kubeadmin-password-secret.yaml 99_openshift-machineconfig_99-worker-fips.yaml
99_openshift-cluster-api_master-machines-0.yaml 99_openshift-machineconfig_99-worker-multipath.yaml
99_openshift-cluster-api_master-machines-1.yaml 99_openshift-machineconfig_99-worker-ssh.yaml
99_openshift-cluster-api_master-machines-2.yaml openshift-cloud-controller-manager-ibm-cloud-credentials-credentials.yaml
99_openshift-cluster-api_master-user-data-secret.yaml openshift-cluster-csi-drivers-ibm-powervs-cloud-credentials-credentials.yaml
99_openshift-cluster-api_worker-machineset-0.yaml openshift-config-secret-pull-secret.yaml
99_openshift-cluster-api_worker-user-data-secret.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml
99_openshift-machine-api_master-control-plane-machine-set.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml
99_openshift-machineconfig_99-master-fips.yaml openshift-install-manifests.yaml
99_openshift-machineconfig_99-master-multipath.yaml openshift-machine-api-powervs-credentials-credentials.yaml
BASE_DOMAIN=powervs-openshift-ipi.cis.ibm.net RELEASE_ARCHITECTURE="ppc64le" openshift-install-fips create cluster
INFO Creating infrastructure resources...
INFO Started local control plane with envtest
INFO Stored kubeconfig for envtest in: /root/install/t/20241031c/.clusterapi_output/envtest.kubeconfig
INFO Running process: Cluster API with args [-v=2 --diagnostics-address=0 --health-addr=127.0.0.1:45201 --webhook-port=40159 --webhook-cert-dir=/tmp/envtest-serving-certs-1721884268 --kubeconfig=/root/install/t/20241031c/.clusterapi_output/envtest.kubeconfig]
INFO Running process: ibmcloud infrastructure provider with args [--provider-id-fmt=v2 --v=5 --health-addr=127.0.0.1:37207 --webhook-port=35963 --webhook-cert-dir=/tmp/envtest-serving-certs-3500602992 --kubeconfig=/root/install/t/20241031c/.clusterapi_output/envtest.kubeconfig]
INFO Creating infra manifests...
INFO Created manifest *v1.Namespace, namespace= name=openshift-cluster-api-guests
INFO Created manifest *v1beta1.Cluster, namespace=openshift-cluster-api-guests name=fips-fd4f6
INFO Created manifest *v1beta2.IBMPowerVSCluster, namespace=openshift-cluster-api-guests name=fips-fd4f6
INFO Created manifest *v1beta2.IBMPowerVSImage, namespace=openshift-cluster-api-guests name=rhcos-fips-fd4f6
INFO Done creating infra manifests
INFO Creating kubeconfig entry for capi cluster fips-fd4f6
INFO Waiting up to 30m0s (until 9:06AM EDT) for network infrastructure to become ready...
INFO Network infrastructure is ready
INFO Created manifest *v1beta2.IBMPowerVSMachine, namespace=openshift-cluster-api-guests name=fips-fd4f6-bootstrap
INFO Created manifest *v1beta2.IBMPowerVSMachine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-0
INFO Created manifest *v1beta2.IBMPowerVSMachine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-1
INFO Created manifest *v1beta2.IBMPowerVSMachine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-2
INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=fips-fd4f6-bootstrap
INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-0
INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-1
INFO Created manifest *v1beta1.Machine, namespace=openshift-cluster-api-guests name=fips-fd4f6-master-2
INFO Created manifest *v1.Secret, namespace=openshift-cluster-api-guests name=fips-fd4f6-bootstrap
INFO Created manifest *v1.Secret, namespace=openshift-cluster-api-guests name=fips-fd4f6-master
INFO Waiting up to 15m0s (until 9:02AM EDT) for machines [fips-fd4f6-bootstrap fips-fd4f6-master-0 fips-fd4f6-master-1 fips-fd4f6-master-2] to provision...
INFO Control-plane machines are ready
INFO Cluster API resources have been created. Waiting for cluster to become ready...
INFO Consuming Cluster API Manifests from target directory
INFO Consuming Cluster API Machine Manifests from target directory
INFO Waiting up to 20m0s (until 9:21AM EDT) for the Kubernetes API at https://api.fips.powervs-openshift-ipi.cis.ibm.net:6443...
INFO API v1.31.1 up
INFO Waiting up to 45m0s (until 9:47AM EDT) for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 5m0s for bootstrap machine deletion openshift-cluster-api-guests/fips-fd4f6-bootstrap...
INFO Shutting down local Cluster API controllers...
INFO Stopped controller: Cluster API
INFO Stopped controller: ibmcloud infrastructure provider
INFO Shutting down local Cluster API control plane...
INFO Local Cluster API system has completed operations
INFO no post-destroy requirements for the powervs provider
INFO Finished destroying bootstrap resources
INFO Waiting up to 40m0s (until 10:16AM EDT) for the cluster at https://api.fips.powervs-openshift-ipi.cis.ibm.net:6443 to initialize...
If you have any doubts, you can start a second terminal session and use the kubeconfig to verify access:
# oc --kubeconfig=auth/kubeconfig get nodes
NAME STATUS ROLES AGE VERSION
fips-fd4f6-master-0 Ready control-plane,master 41m v1.31.1
fips-fd4f6-master-1 Ready control-plane,master 41m v1.31.1
fips-fd4f6-master-2 Ready control-plane,master 41m v1.31.1
fips-fd4f6-worker-srwf2 Ready worker 7m37s v1.31.1
fips-fd4f6-worker-tc28p Ready worker 7m13s v1.31.1
fips-fd4f6-worker-vrlrq Ready worker 7m12s v1.31.1
You can also check oc --kubeconfig=auth/kubeconfig get co
When it's complete, you can log in and use your FIPS-enabled cluster.
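If you want to spot-check that FIPS is actually active on a node (using one of the node names from the output above), the kernel flag should read 1:
# 1 means the node's kernel is running in FIPS mode
oc --kubeconfig=auth/kubeconfig debug node/fips-fd4f6-master-0 -- cat /proc/sys/crypto/fips_enabled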
This is a recommended article on oc-mirror and getting started with a fundamental tool in OpenShift.
This guide demonstrates the use of oc-mirror v2 to assist in populating a local Red Hat Quay registry that will be used for a disconnected installation, and includes the steps used to configure openshift-marketplace to use catalog sources that point to the local Red Hat Quay registry.
The Linux Pressure Stall Information (PSI) feature, part of Control Group v2, provides an accurate accounting of a container's CPU, memory, and I/O. The psi stats allow accurate and tightly limited access to resources – no over-committing and no over-sizing.
However, it is sometimes difficult to see whether a container is being limited and could use more resources.
This article is designed to help you diagnose and check your pods so you can get the best out of your workloads.
You can check the container in your Pod’s cpu.stat:
[root@cpi-c7b2-bastion-0 ~]# oc get pod -n test test-pod -oyaml | grep -i containerID
- containerID: cri-o://c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea
[root@cpi-c7b2-bastion-0 ~]# oc rsh -n test test-pod
sh-4.4# find /sys -iname '*c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea*'
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4b90d9_20f9_427d_9414_9964f32379dc.slice/crio-c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea.scope
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4b90d9_20f9_427d_9414_9964f32379dc.slice/crio-conmon-c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea.scope
/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d4b90d9_20f9_427d_9414_9964f32379dc.slice/crio-conmon-c050804396004e6b5d822541a58f299ea2b0e48936709175d6d57f3507cc6cea.scope/cpu.stat
usage_usec 11628232854
user_usec 8689145332
system_usec 2939087521
core_sched.force_idle_usec 0
nr_periods 340955
nr_throttled 8
throttled_usec 8012
nr_bursts 0
burst_usec 0
Look at nr_throttled and throttled_usec. In this case, the impact on the container is really minor:
nr_throttled 8
throttled_usec 8012
If the container had a higher number of throttled events, you'll want to check the number of CPUs or the amount of memory that your container is limited to, for example:
nr_throttled 103
throttled_usec 22929315
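To put those counters in perspective, you can compute throttled time as a fraction of total CPU time straight from the cpu.stat file you located earlier:
# CPUSTAT is the container's cpu.stat path from the find output above
CPUSTAT=/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/<pod-slice>/<crio-scope>/cpu.stat
awk '/^usage_usec/ {u=$2} /^throttled_usec/ {t=$2} END {printf "throttled %.4f%% of cpu time\n", 100*t/u}' ${CPUSTAT}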
⯠NS=test
⯠POD=test-pod
⯠oc get -n ${NS} pod ${POD} -ojson | jq -r '.spec.containers[].resources.limits.cpu'
8
You can check the real-time stats, similar to top, for your container pressure. Log on to your host.
find /sys/fs/cgroup/kubepods.slice/ -iname cpu.pressure | xargs -t -I {} cat {} | grep -v total=0
find /sys/fs/cgroup/kubepods.slice/ -iname memory.pressure | xargs -t -I {} cat {} | grep -v total=0
find /sys/fs/cgroup/kubepods.slice/ -iname io.pressure | xargs -t -I {} cat {} | grep -v total=0
This will show you all the pods that are under pressure.
for PRESSURE in $(find /sys/fs/cgroup/kubepods.slice/ -iname io.pressure)
do
    if [ ! -z "$(cat ${PRESSURE} | grep -v total=0)" ]
    then
        if [ ! -z "$(cat ${PRESSURE} | grep -v "avg10=0.00 avg60=0.00 avg300=0.00")" ]
        then
            echo ${PRESSURE}
        fi
    fi
done
⯠cat /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde03ef16_000a_4198_9e04_ac96d0ea33c5.slice/crio-d200161683a680588c4de8346ff58d633201eae2ffd558c8d707c4836215645e.scope/io.pressure
some avg10=14.02 avg60=14.16 avg300=13.99 total=4121355556
full avg10=14.02 avg60=14.16 avg300=13.99 total=4121050788
In this case, I was able to go in and increase the total I/O.
You can temporarily tweak the cpu.pressure settings for a pod or the system so the evaluation window is extended (the example below uses the longest window possible).
The maximum window size is 10 seconds, and if your kernel version is less than 6.5, the minimum window size is 500ms.
cat << EOF > /sys/fs/cgroup/cpu.pressure
some 10000000 10000000
full 10000000 10000000
EOF
There are two methods to disable psi in OpenShift: the first is to set a kernel parameter, and the second is to switch from cgroups v2 to cgroups v1.
You can switch from cgroups v2 to cgroups v1 – see Configuring the Linux cgroup version on your nodes.
⯠oc patch nodes.config cluster --type merge -p '{"spec": {"cgroupMode": "v1"}}'
You’ll have to wait for each of the Nodes to restart.
In OpenShift, you can disable psi using a MachineConfig:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 99-worker-psi-disable
spec:
  kernelArguments:
    - psi=0
You can check whether psi is disabled by reading one of the cpu.pressure, io.pressure, or memory.pressure files. If it is off, you'll see "Operation not supported".
sh-5.1# cat /sys/fs/cgroup/cpu.pressure
cat: /sys/fs/cgroup/cpu.pressure: Operation not supported
or
oc debug node/<node_name>
chroot /host
stat -c %T -f /sys/fs/cgroup
tmpfs
Linux PSI is pretty awesome. However, you should check your workload and verify it’s running correctly.
kernel/sched/psi.c
Our colleague at Red Hat, Dylan Orzel, posted an article on Building multi-architecture container images on OpenShift Container Platform clusters.
In this article we’ll explore how to make use of the built-in build capabilities available in Red Hat OpenShift 4 in a multi-arch compute environment, and how to make use of nodeSelectors to schedule builds on nodes of the architecture of our choosing.
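As a hedged sketch of the nodeSelector approach, pinning an existing BuildConfig's builds to Power nodes might look like the following (the BuildConfig name myapp is hypothetical):
# schedule builds from this BuildConfig only on ppc64le nodes
oc patch bc/myapp --type merge -p '{"spec":{"nodeSelector":{"kubernetes.io/arch":"ppc64le"}}}'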
Here are some things around IBM Power Systems and Red Hat OpenShift you should know about:
The IBM Power team has updated the list of containers they build with support for ppc64le. The list is kept at https://community.ibm.com/community/user/powerdeveloper/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr
The updates are:
Container | Version | Pull command | Updated |
system-logger | v1.19.0 | podman pull icr.io/ppc64le-oss/system-logger-ppc64le:v1.14.0 | July 18, 2024 |
postgres-operator | v15.7 | podman pull icr.io/ppc64le-oss/postgres-operator-ppc64le:v15.7 | July 18, 2024 |
postgresql | v14.12.0-bv | podman pull icr.io/ppc64le-oss/postgresql:v14.12.0-bv | July 9, 2024 |
mongodb | 5.0.26 | podman pull icr.io/ppc64le-oss/mongodb-ppc64le:5.0.26 | April 9, 2024 |
mongodb | 6.0.13 | podman pull icr.io/ppc64le-oss/mongodb-ppc64le:6.0.13 | April 9, 2024 |
Trivy and Starboard are now available per https://community.ibm.com/community/user/powerdeveloper/blogs/gerrit-huizenga/2024/07/17/aqua-trivy-and-starboard-for-scanning-gitlab-on-ib
You can download the Trivy RPM using:
rpm -ivh https://github.com/aquasecurity/trivy/releases/download/v0.19.2/trivy_0.19.2_Linux-PPC64LE.rpm
Or you could use Starboard directly from https://github.com/aquasecurity/trivy-operator/releases/tag/v0.22.0
These provide some nice security features and tools for IBM Power containers.
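For example, once the RPM is installed, you can scan one of the Power images listed above:
# scan a ppc64le image for known vulnerabilities
trivy image icr.io/ppc64le-oss/mongodb-ppc64le:6.0.13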
The OpenShift Routes project supports automatically getting a certificate for OpenShift routes from any cert-manager Issuer, similar to annotating an Ingress or Gateway resource in vanilla Kubernetes.
You can download the helm chart from https://github.com/cert-manager/openshift-routes/releases
Or you can use:
helm install openshift-routes -n cert-manager oci://ghcr.io/cert-manager/charts/openshift-routes
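As a rough sketch, once the controller is running, you annotate a Route to reference an existing cert-manager Issuer. The route, namespace, and issuer names below are made up; check the project README for the exact annotation set:
# ask openshift-routes to request a certificate from the Issuer "my-issuer" for this route
oc annotate route my-route -n my-app \
  cert-manager.io/issuer-name=my-issuer \
  cert-manager.io/issuer-kind=Issuer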
OpenBao exists to provide a software solution to manage, store, and distribute sensitive data including secrets, certificates, and keys.
OpenBao has released v2.0.0.
You can use Helm to install it on IBM Power; use the values.openshift.yaml (link).
helm repo add openbao https://openbao.github.io/openbao-helm
helm install openbao openbao/openbao
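To apply the OpenShift-specific values, something like the following should work; the raw path within the openbao-helm repository is an assumption, so adjust it to wherever you obtained values.openshift.yaml:
# download the OpenShift values file (repo path is assumed) and install the chart with it
curl -O https://raw.githubusercontent.com/openbao/openbao-helm/main/charts/openbao/values.openshift.yaml
helm install openbao openbao/openbao -f values.openshift.yaml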
The Containers are at https://quay.io/repository/openbao/openbao?tab=tags&tag=latest
Red Hat OpenShift 4.16 is generally available for upgrades and new installations, and was announced today. It is based on Kubernetes 1.29 with the CRI-O 1.29 runtime and RHEL CoreOS 9.4. You can read the release notes at https://docs.openshift.com/container-platform/4.16/release_notes/ocp-4-16-release-notes.html
Some cool features you can use are:
– oc adm upgrade status command, which decouples status information from the existing oc adm upgrade command and provides specific information regarding a cluster update, including the status of the control plane and worker node updates (see the example after this list). https://docs.openshift.com/container-platform/4.16/updating/updating_a_cluster/updating-cluster-cli.html#update-upgrading-oc-adm-upgrade-status_updating-cluster-cli
– Tech Preview and Generally Available Table – https://docs.openshift.com/container-platform/4.16/release_notes/ocp-4-16-release-notes.html#ocp-4-16-technology-preview-tables_release-notes
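A hedged example of the new status command; in 4.16 it may still need to be enabled as a Technology Preview command via an environment variable, and the variable name below is an assumption, so check the linked docs if it differs:
# show the decoupled update status view (env var may be required while Tech Preview)
OC_ENABLE_CMD_UPGRADE_STATUS=true oc adm upgrade status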
FYI: google/go-containerregistry has a new release v0.19.2. This adds a new feature we care about:
crane mutate myimage --set-platform linux/arm64
This release also supports using podman's authfile via the REGISTRY_AUTH_FILE environment variable.
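For example, reusing podman's auth file when copying an image with crane (the image names and auth file location below are illustrative):
# point crane at podman's existing credentials instead of the docker config
REGISTRY_AUTH_FILE=${XDG_RUNTIME_DIR}/containers/auth.json crane copy \
  quay.io/example/myimage:latest registry.example.com/myimage:latest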