Category: IBM Power Systems

  • Setting up nx-gzip in a non-privileged container

    In order to use nx-gzip on Power Systems with a non-privileged container, use the following recipe:

On each of the nodes, create the SELinux policy file power-nx-gzip.cil:

    (block nx
        (blockinherit container)
        (allow process container_file_t ( chr_file ( map )))
    )
    

Install the CIL on each worker node:

    sudo semodule -i power-nx-gzip.cil /usr/share/udica/templates/base_container.cil
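Before launching the container, a quick sanity check (my own addition, not part of the original recipe) confirms the accelerator device node actually exists on the node:

```shell
# Check that the NX gzip accelerator device node is present and is a
# character device (requires a POWER9/POWER10 kernel with nx-gzip support).
if [ -c /dev/crypto/nx-gzip ]; then
  echo "nx-gzip device present"
else
  echo "nx-gzip device missing"
fi
```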
    

    I ran the following:

    podman run -it --security-opt label=type:nx.process --device=/dev/crypto/nx-gzip registry.access.redhat.com/ubi9/ubi@sha256:a1804302f6f53e04cc1c6b20bc2204d5c9ae6e5a664174b38fbeeb30f7983d4e sh
    

I copied the files into the container using the CONTAINER ID:

    podman ps
    podman cp temp 6a4d967f3b6b:/tmp
    podman cp gzfht_test 6a4d967f3b6b:/tmp
    

    Then running in the container:

    sh-5.1# cd /tmp
    sh-5.1# ./gzfht_test temp
    file temp read, 1048576 bytes
    compressed 1048576 to 1105922 bytes total, crc32 checksum = 3c56f054
    

You can use ausearch -m avc -ts recent | audit2allow to track down any missing permissions.

    Hope this helps you…

    Reference

    https://github.com/libnxz/power-gzip
  • Help… My Ingress is telling me OAuthServerRouteEndpointAccessibleControllerDegraded

    My teammate hit an issue with Ingress Certificates not being valid:

    oc get co ingress -oyaml
        message: |-
          OAuthServerRouteEndpointAccessibleControllerDegraded: Get "https://oauth-openshift.apps.mycluster.local/healthz": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-04-02T17:58:35Z is after 2025-02-13T20:04:16Z
          RouterCertsDegraded: secret/v4-0-config-system-router-certs.spec.data[apps.mycluster.local] -n openshift-authentication: certificate could not validate route hostname oauth-openshift.apps.mycluster.local: x509: certificate has expired or is not yet valid: current time 2025-04-02T17:58:33Z is after 2025-02-13T20:04:16Z
    

The Red Hat docs and tech articles are great. I found “How to redeploy/renew an expired default ingress certificate in RHOCP4?”

    I ran the following on a non-production cluster:

1. Renewed the ingress CA:
oc get secret router-ca -oyaml -n openshift-ingress-operator > router-ca-2025-04-02.yaml
oc delete secret router-ca -n openshift-ingress-operator
oc delete pod --all -n openshift-ingress-operator
sleep 30
oc get secret router-ca -n openshift-ingress-operator
oc get po -n openshift-ingress-operator

2. Recreated the wildcard ingress certificate using the new ingress CA:
oc get secret router-certs-default -o yaml -n openshift-ingress > router-certs-default-2025-04-02.yaml
oc delete secret router-certs-default -n openshift-ingress
oc delete pod --all -n openshift-ingress
sleep 30
oc get secret router-certs-default -n openshift-ingress
oc get po -n openshift-ingress

3. Checked the ingress certificate:
curl -v https://oauth-openshift.apps.mycluster.local/healthz -k
*  subject: CN=*.apps.mycluster.local
*  start date: Apr  2 19:08:33 2025 GMT
*  expire date: Apr  2 19:08:34 2027 GMT

4. Updated the CA trust:
oc -n openshift-ingress-operator get secret router-ca -o jsonpath="{ .data.tls\.crt }" | base64 -d -i > ingress-ca-2025-04-02.crt
cp /root/ingress-ca-2025-04-02.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust

5. Logged in successfully:
oc login -u kubeadmin -p YOUR_PASSWORD https://api.mycluster.local:6443
    

That’s how to recreate the cert manually.

To avoid repeating this exercise, consider the cert-manager Operator for Red Hat OpenShift, which can automate certificate renewal.
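A proactive expiry check could have caught this before the operator degraded. Here is a sketch using openssl -checkend; the self-signed certificate generated below is only a stand-in for the real one, which you would extract from secret/router-certs-default in openshift-ingress as shown earlier:

```shell
# Stand-in certificate (on a real cluster, extract tls.crt from
# secret/router-certs-default in the openshift-ingress namespace instead).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/key.pem \
  -out /tmp/cert.pem -days 90 -subj "/CN=*.apps.mycluster.local" 2>/dev/null

# Warn when the certificate expires within the next 30 days;
# -checkend exits non-zero if the cert will expire inside that window.
if openssl x509 -checkend $((30*24*3600)) -noout -in /tmp/cert.pem; then
  echo "certificate is valid for at least 30 more days"
else
  echo "certificate expires within 30 days"
fi
```

Run from cron against the extracted cert, this gives an early warning well before the OAuthServerRouteEndpointAccessibleControllerDegraded condition appears.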

  • Multi-Arch Tuning Operator 1.1.0 Released

    The Red Hat team has released a new version of the Multi-Arch Tuning Operator.

In Multi-Arch Compute clusters, the Multiarch Tuning Operator influences the scheduling of Pods so that applications run on a supported architecture.

    You can learn more about it at https://catalog.redhat.com/software/containers/multiarch-tuning/multiarch-tuning-operator-bundle/661659e9c5bced223a7f7244

    Addendum

    My colleague, Punith, worked with the Red Hat team to add NodeAffinityScoring and plugin support to the Multi-Arch Tuning Operator and ClusterPodPlacementConfig. This feature allows users to define cluster-wide preferences for specific architectures, influencing how the Kubernetes scheduler places pods. It helps optimize workload distribution based on preferred node architecture.

    spec:
      plugins:
        nodeAffinityScoring:
          enabled: true
          platforms:
          - architecture: ppc64le
            weight: 100
          - architecture: amd64
            weight: 50
  • FIPS support in Go 1.24

Kudos to the Red Hat team.

    The benefits of native FIPS support in Go 1.24

    The introduction of the FIPS Cryptographic Module in Go 1.24 marks a watershed moment for the language’s security capabilities. This new module provides FIPS 140-3-compliant implementations of cryptographic algorithms, seamlessly integrated into the standard library. What makes this particularly noteworthy is its transparent implementation. Existing Go applications can leverage FIPS-compliant cryptography without requiring code changes.

Build-time configuration is available through the GOFIPS140 environment variable, which allows developers to select a specific version of the Go Cryptographic Module:

GOFIPS140=latest go build

Runtime control is available via the fips140 GODEBUG setting, which enables dynamic FIPS mode activation:

GODEBUG=fips140=on

Keep these in your toolbox along with GOARCH=ppc64le.

  • Updates to Open Source Container images for Power on IBM Container Registry

The IBM Linux on Power team pushed new open source container images to their public repositories in the IBM Container Registry (ICR). This should assure end users that IBM built these containers authentically in a secure environment.

    The new container images are:

• fluentd-kubernetes-daemonset (tag v1.14.3-debian-forward-1.0, Apache-2.0, published March 17, 2025)
  podman pull icr.io/ppc64le-oss/fluentd-kubernetes-daemonset:v1.14.3-debian-forward-1.0
• cloudnative-pg/pgbouncer (tag 1.23.0, Apache-2.0, published March 17, 2025)
  podman pull icr.io/ppc64le-oss/cloudnative-pg/pgbouncer:1.23.0
  • Red Hat OpenShift Container Platform 4.18 Now Available on IBM Power

Red Hat® OpenShift® 4.18 has been released and adds improvements and new capabilities to OpenShift Container Platform components. Based on Kubernetes 1.31 and CRI-O 1.31, Red Hat OpenShift 4.18 focuses on core improvements with enhanced network flexibility.

    You can download 4.18.1 from the mirror at https://mirror.openshift.com/pub/openshift-v4/multi/clients/ocp/4.18.1/ppc64le/

  • Nest Accelerator and Urandom… I think

    The NX accelerator has random number generation capabilities.

What happens if the random-number entropy pool runs out of numbers? If you are reading from the /dev/random device, your application will block, waiting for new numbers to be generated. Alternatively, the /dev/urandom device is non-blocking and will create random numbers on the fly, re-using some of the entropy in the pool. This can lead to numbers that are less random than required for some use cases.
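For a quick hands-on look at the non-blocking behavior, reading a few bytes from /dev/urandom returns immediately regardless of the entropy estimate:

```shell
# Read 16 bytes from the non-blocking urandom device and print them as hex;
# this returns immediately even when the kernel's entropy estimate is low.
head -c 16 /dev/urandom | od -An -tx1
```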

Well, the Power9 and Power10 servers use the nest accelerator to generate pseudo-random numbers and maintain the pool.

    Each processor chip in a Power9 and Power10 server has an on-chip “nest” accelerator called the NX unit that provides specialized functions for general data compression, gzip compression, encryption, and random number generation. These accelerators are used transparently across the systems software stack to speed up operations related to Live Partition Migration, IPSec, JFS2 Encrypted File Systems, PKCS11 encryption, and random number generation through /dev/random and /dev/urandom.

    Kind of cool, I’ll have to find some more details to verify it and use it.

  • vim versus plain vi: One Compelling Reason

    My colleague, Michael Q, introduced me to a vim extension that left me saying… that’s awesome.

set cuc enables Cursor Column, and when I use it with set number, it’s awesome for seeing correct indenting.

    The commands are:

    1. Shift + :
    2. set cuc and enter
    3. Shift + :
    4. set number and enter

    Use set nocuc to disable
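To avoid retyping the commands every session, the settings can be persisted in ~/.vimrc (a common convention, not part of the original tip):

```shell
# Append the two settings to the user vimrc so they load on every vim start.
cat >> "$HOME/.vimrc" <<'EOF'
set cuc
set number
EOF
```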

    Good luck…

    Post Script

    • Install vim with dnf install -y vim

    Reference VimTrick: set cuc

  • Cool Plugin… kube-health

    kube-health has a new release v0.3.0. I’ve been following along on this tool for a while.

    Here’s why:

    1. It allows you to poll a single resource and see if it’s OK… in the aggregate. You can see the status of subresources at the same time.
    2. It’s super simple to watch the resource until it exits cleanly or fails…

    Kudos to iNecas for a wonderful tool.

See the demo.svg animation on the project’s GitHub site.

  • Custom nftable firewall rules in OpenShift

Here is a good reference for using OpenShift:

    Custom nftable firewall rules in OpenShift: https://access.redhat.com/articles/7090422

The article describes a supported method for implementing custom nftables firewall rules in OpenShift clusters. It is intended for cluster administrators who are responsible for managing network security policies within their OpenShift environments.
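As a taste of what such a policy can carry, here is a minimal nftables ruleset of the kind the article covers. This sketch is illustrative only: the table, chain, and rule are my own examples, and the rules would be delivered to the nodes via the supported mechanism the article describes.

```
table inet custom-rules {
    chain input {
        type filter hook input priority 0; policy accept;
        # example custom rule: drop inbound telnet
        tcp dport 23 drop
    }
}
```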