Blog

  • New Containers for IBM Power

    New container images for IBM Power are now available. Here are the latest four:

    | Image Name | Tag | Project License | Image Pull Command | Last Published |
    | --- | --- | --- | --- | --- |
    | rocketmq | 5.3.3 | Apache-2.0 | docker pull icr.io/ppc64le-oss/rocketmq-ppc64le:5.3.3 | December 9, 2025 |
    | elasticsearch | 7.17.28 | Server Side Public License v1 and Elastic License 2.0 | docker pull icr.io/ppc64le-oss/elasticsearch-ppc64le:7.17.28 | November 14, 2025 |
    | zookeeper | v3.9.3-debian-12-r19-bv | Apache License 2.0 | docker pull icr.io/ppc64le-oss/zookeeper-ppc64le:v3.9.3-debian-12-r19-bv | November 14, 2025 |
    | vllm | 0.10.1 | Apache-2.0 | docker pull icr.io/ppc64le-oss/vllm-ppc64le:0.10.1.dev852.gee01645db.d20250827 | September 11, 2025 |

    Reference

    https://community.ibm.com/community/user/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

  • Kernel Module Management Operator 2.5: Now Supporting IBM Power

    Originally posted to https://community.ibm.com/community/user/blogs/paul-bastide/2025/12/03/kernel-module-management-operator-25-now-supportin

    Note: Since this was written, 2.5.1 has been released and is the recommended version.

    Supporting KMM on IBM Power is part of my work, and my write-up is included on this site.

    We are excited to announce the release of Kernel Module Management (KMM) Operator 2.5, which brings significant enhancements to how you deploy specialized drivers on OpenShift Container Platform on IBM Power.

    The KMM Operator streamlines the management of out-of-tree kernel modules and their associated device plugins. The operator centrally manages, builds, signs, and deploys these components across your cluster.

    What is KMM?

    At its core, KMM utilizes a Module Custom Resource Definition (CRD). This resource allows you to configure everything necessary for an out-of-tree kernel module:

    • How to load the module.
    • Defining ModuleLoader images for specific kernel versions.
    • Including instructions for building and signing modules.

    One of KMM’s most powerful features is its ability to handle multiple kernel versions at once for any given module. This capability is critical for achieving seamless node upgrades and reduced application downtime on your OpenShift clusters. A prime example is the effortless management of specialized storage drivers that require custom kernel modules to function.

    Other features include In-Cluster Building and Signing: KMM can build DriverContainer images and sign kernel modules in-cluster to ensure compatibility, including support for Secure Boot environments.

    Please note that IBM Power does not provide a Real-Time kernel, so the KMM Operator features for the Real-Time kernel are not applicable to IBM Power.

    🛠️ Installation

    KMM is supported on OpenShift Container Platform 4.20 and later on IBM Power.

    Using the Web Console

    As a cluster administrator, you can install KMM through the OpenShift web console:

    1. Log in to the OpenShift web console.
    2. Navigate to Ecosystem → Software Catalog.
    3. Select the Kernel Module Management Operator and click Install.
    4. Choose the openshift-kmm namespace from the Installed Namespace list.
    5. Click Install.

    To verify the installation, navigate to Ecosystem → Installed Operators and ensure the Kernel Module Management Operator in the openshift-kmm project shows a status of InstallSucceeded.
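
    You can also verify from the CLI; a quick check (assuming the default openshift-kmm namespace) is:

    ```shell
    # The operator's ClusterServiceVersion should report PHASE Succeeded.
    oc get csv -n openshift-kmm
    ```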

    💡 Usage Example: Deploying a Module

    The Module Custom Resource (CR) is used to define and deploy your kernel module.

    A Module CR specifies the following:

    • spec.selector: A node selector (e.g., node-role.kubernetes.io/worker: "") to determine which nodes are eligible.
    • spec.moduleLoader.container.kernelMappings: A list of kernel versions or regular expressions (regexp) and the corresponding container image to use.
    • spec.devicePlugin (Optional): Configuration for an associated device plugin.

    Example Module CR (Annotated)

    The following example shows how to configure a module named my_kmod to be deployed to all worker nodes. It uses kernel mappings to specify different container images for different kernel versions and includes configuration for building/signing the module if the image doesn’t exist.

    apiVersion: kmm.sigs.x-k8s.io/v1beta1
    kind: Module
    metadata:
      name: my-kmod
    spec:
      # Selects all worker nodes
      selector:
        node-role.kubernetes.io/worker: ""
    
      moduleLoader:
        container:
          # Required name of the kernel module to load
          modprobe:
            moduleName: my-kmod 
    
          # Defines container images based on kernel version
          kernelMappings:  
            # Literal match for a specific kernel version
            - literal: 6.0.15-300.fc37.x86_64
              containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64
    
            # Regular expression match for any other kernel 
            - regexp: '^.+$' 
              containerImage: "some.registry/org/my-kmod:${KERNEL_FULL_VERSION}"
              # Instructions for KMM to build the image if it doesn't exist
              build:
                dockerfileConfigMap:  
                  name: my-kmod-dockerfile
              # Instructions for KMM to sign the module if Secure Boot is required
              sign:
                certSecret:
                  name: cert-secret 
                keySecret:
                  name: key-secret 
                filesToSign:
                  - /opt/lib/modules/${KERNEL_FULL_VERSION}/my-kmod.ko

    The KMM reconciliation loop will then handle listing matching nodes, finding the correct image for the running kernel, building/signing the image if needed, and creating worker pods to execute modprobe and load the kernel module.
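
    The image-selection step of that loop can be illustrated with a small sketch. This is a toy model of how literal and regexp kernelMappings resolve a container image for a node's running kernel, not KMM's actual code:

    ```python
    import re

    # Toy illustration of kernelMappings resolution: literal entries match the
    # kernel version exactly; regexp entries match via regular expression, and
    # ${KERNEL_FULL_VERSION} in the image name is replaced with the node's
    # running kernel version.
    def resolve_image(kernel_version, kernel_mappings):
        for mapping in kernel_mappings:
            if "literal" in mapping:
                if mapping["literal"] == kernel_version:
                    return mapping["containerImage"]
            elif "regexp" in mapping:
                if re.match(mapping["regexp"], kernel_version):
                    return mapping["containerImage"].replace(
                        "${KERNEL_FULL_VERSION}", kernel_version)
        return None  # no mapping matched; the node would get no module

    mappings = [
        {"literal": "6.0.15-300.fc37.x86_64",
         "containerImage": "some.registry/org/my-kmod:6.0.15-300.fc37.x86_64"},
        {"regexp": r"^.+$",
         "containerImage": "some.registry/org/my-kmod:${KERNEL_FULL_VERSION}"},
    ]

    # A kernel matching the literal entry uses the pinned image; any other
    # kernel falls through to the catch-all regexp with version substitution.
    print(resolve_image("6.0.15-300.fc37.x86_64", mappings))
    print(resolve_image("5.14.0-570.el9.ppc64le", mappings))
    ```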

    Summary

    The Kernel Module Management (KMM) Operator 2.5 release enhances OpenShift’s ability to manage specialized hardware drivers. A key addition is support for IBM Power (ppc64le), enabling seamless, automated deployment of out-of-tree kernel modules and specialized storage drivers on this architecture. KMM continues to minimize disruption during node maintenance by supporting multiple kernel versions. However, Real-Time kernel support remains unavailable for IBM Power.

    References

    For more details on the KMM Operator and the latest changes, please consult the official documentation:

    1. KMM Operator 2.5 Release Notes
    2. Chapter 4. Kernel Module Management Operator Overview
  • 🚀 Red Hat Compliance Operator 1.8 GA: Custom Rules Made Easy!

    We are thrilled to announce the GA release of Red Hat Compliance Operator version 1.8, a key tool for auditing and enforcing security compliance on Red Hat OpenShift.

    The focus of this release is significantly lowering the barrier to creating custom compliance definitions:

    • ‼️ [Tech Prev] CustomRule CRDs with Common Expression Language (CEL): Customers can now define custom compliance checks using CEL. This eliminates the need to learn complex SCAP data streams or OVAL, enabling faster development of tailored compliance rules. (A detailed blog post is coming in early December.)
    • Simplified Configuration: The Compliance Operator team has decoupled PV storage from scan result processing, greatly simplifying the operator configuration, especially for customers focused on detecting cluster changes.

    Enhanced Security Profiles:

    • Updated: DISA-STIG profile to V2R3 🏛️.
    • Removed Deprecated Profiles: CIS OpenShift 1.4.0/1.5.0 and DISA STIG V1R1/V2R1 have been removed.

    See the release notes on the Red Hat Customer Portal for full details.

  • 🚀Announcing the Availability of Red Hat OpenShift AI 3.0 on IBM Power

    IBM announced the availability of Red Hat OpenShift AI 3.0 on IBM® Power®:

    This milestone represents over a year of collaboration and engineering dedication to bring the latest capabilities in open and production-ready AI development to IBM Power clients. Built on Kubernetes, Red Hat OpenShift AI provides a flexible and scalable MLOps platform for building, training, deploying, and monitoring machine learning and generative AI models. With version 3.0 now available on IBM Power, clients can unify their AI workloads from experimentation to production on a single enterprise-grade platform.

    Learn more at IBM Blog | IBM Power Modernization

    Credit to Author : Brandon Pederson

  • 🚀 Build Event-Driven Serverless Apps with OpenShift & Kafka!

    Discover how Red Hat OpenShift Serverless, powered by Knative, integrates seamlessly with Apache Kafka to enable scalable, event-driven architectures.

    In the latest Power Developer Exchange blog, walk through:
    ✅ What Knative brings to serverless workloads
    ✅ How to deploy a sample serverless app on OpenShift Container Platform 4.18.9
    ✅ Configuring Streams for Apache Kafka to route real-time events

    This integration empowers developers to create responsive, cloud-native applications that dynamically scale with incoming Kafka messages—perfect for modern, reactive systems.
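
    As a rough sketch of the wiring described above, a minimal Knative KafkaSource that routes messages from a topic to a Knative service could look like this (all names and addresses are placeholders, not taken from the blog):

    ```yaml
    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: my-kafka-source
    spec:
      bootstrapServers:
        - my-cluster-kafka-bootstrap.kafka.svc:9092   # placeholder broker address
      topics:
        - my-topic                                    # placeholder topic
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display                         # placeholder Knative service
    ```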

    👉 Read the full blog to learn how to combine OpenShift Serverless and Kafka for enterprise-grade scalability and reliability!


    https://community.ibm.com/community/user/blogs/kumar-abhishek/2025/11/13/red-hat-openshift-serverless-with-apache-kafka

    #OpenShift #Serverless #Knative #ApacheKafka #CloudNative #EventDrivenArchitecture

  • Announcing Red Hat OpenShift 4.20 Now Generally Available on IBM Power

    Red Hat OpenShift Container Platform 4.20 is now generally available on IBM® Power® servers, advancing hybrid cloud and AI-ready infrastructure. This release delivers expanded architecture support, accelerator enablement for IBM Spyre™, and enhanced security with the Security Profiles Operator. Together, IBM and Red Hat continue driving enterprise-grade container orchestration optimized for Power, enabling high-performance workloads and modern AI applications. Organizations can now build, deploy, and scale mission-critical workloads with confidence on a secure, resilient platform.

    Learn more at IBM Blog | IBM Power Modernization

    Credit to Author : Brandon Pederson

  • Help… My SystemMemoryExceedsReservation

    Red Hat explains the SystemMemoryExceedsReservation alert received in OCP 4. There is also some detail in alerts/machine-config-operator/SystemMemoryExceedsReservation.md.

    It is a warning triggered when the *memory usage* of the *system processes* exceeds 95% of the reservation, not the total memory of the node.

    You can check your configuration by ssh’ing to one of the workers and running sudo ps -ef | grep /usr/bin/kubelet | grep system-reserved:

    [root@worker-0 core]# sudo ps -ef | grep  /usr/bin/kubelet | grep system-reserved
    root        2733       1 15 Nov04 ?        12:21:38 /usr/bin/kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime-endpoint=/var/run/crio/crio.sock --runtime-cgroups=/system.slice/crio.service --node-labels=node-role.kubernetes.io/worker,node.openshift.io/os_id=rhel, --node-ip=10.20.29.240 --minimum-container-ttl-duration=6m0s --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --cloud-provider= --hostname-override= --provider-id= --pod-infra-container-image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d2f23cbaebe30a59f7af3b5a9e9cf6157f8ed143af494594e1c9dcf924ce0ec --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi --v=2

    You’ll notice the default is half a core and 1 GiB of memory (cpu=500m,memory=1Gi).

    You can tweak the configuration using:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: set-allocatable
    spec:
      machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/worker: ""
      kubeletConfig:
        systemReserved:
          cpu: 1000m
          memory: 3Gi

    Wait for the change to roll out. In 4.19, this should just restart the kubelet, without rebooting the node.
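
    To apply the change, save the manifest (the filename here is arbitrary) and watch the worker MachineConfigPool roll it out:

    ```shell
    # Apply the KubeletConfig and watch the worker pool converge.
    oc apply -f set-allocatable.yaml
    oc get mcp worker -w
    ```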

  • Notes on Adding Intel Worker

    1. You need to grab the latest ignition on your Intel bastion:
    curl -k -H "Accept: application/vnd.coreos.ignition+json;version=3.4.0" -o /var/www/html/ignition/worker.ign https://localhost:22623/config/worker
    restorecon -R /var/www/html/ignition/
    
    2. Clone the repository: git clone https://github.com/ocp-power-automation/ocp4-upi-multiarch-compute
    3. Change directory to ocp4-upi-multiarch-compute/tf/add-powervm-workers
    4. Create a tfvars file
    auth_url    = "https://<vl>:5000/v3"
    user_name   = ""
    password    = ""
    insecure    = true
    tenant_name = "ocp-qe"
    domain_name = "Default"
    
    network_name                = "vlan"
    ignition_ip                 = "10.10.19.16"
    resolver_ip                 = "10.10.19.16"
    resolve_domain              = "pavan-421ec3.ocpqe"
    power_worker_prefix         = "rhcos9-worker"
    flavor_id                   = "8ee61c00-b803-49c5-b243-62da02220ed6"
    image_id                    = "f48b00dc-d672-4f9a-bac8-a3383bea4a3f"
    openstack_availability_zone = "e980"
    
    # the number of workers to create
    worker_count = 1
    
    5. Run Terraform: terraform apply -var-file=data/var.tfvars
    6. On a Power bastion node, add a dhcpd entry to /etc/dhcp/dhcpd.conf and a named forwarder pointing to your Intel bastion (forwarders { 8.8.4.4; };) in /etc/named.conf. Then restart each using systemctl restart dhcpd and systemctl restart named.
    7. The VM is created in the ‘Stopped’ state; you can manually ‘Start’ it.
    8. Approve the CSRs that are generated.
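
    The CSR approval step can be done from the CLI. This one-liner lists the CSRs and approves the pending ones (it assumes cluster-admin access and a GNU xargs that supports --no-run-if-empty):

    ```shell
    # Show all CSRs, then approve those still pending (no status yet).
    oc get csr
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
      | xargs --no-run-if-empty oc adm certificate approve
    ```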

    Public docs are at https://github.com/ocp-power-automation/ocp4-upi-multiarch-compute/tree/main/tf/add-powervm-workers#add-powervm-workers-to-intel-cluster

  • IBM Cloud Pak for AIOps supports Multi-Arch Compute on IBM Power

    :information_source: Our second Cloud Pak supporting Multi-Arch Compute with IBM Power has arrived: IBM Cloud Pak for AIOps supports installation on an Intel node in a Power cluster.

    IBM Cloud Pak for AIOps can be deployed on a multi-architecture Red Hat OpenShift cluster, provided the nodes with compatible architecture (x86_64 or s390x) fulfill the necessary hardware prerequisites for IBM Cloud Pak for AIOps. To install IBM Cloud Pak for AIOps on a multi-architecture Red Hat OpenShift cluster, you must annotate your IBM Cloud Pak for AIOps namespace. For more information, see Create a custom namespace.

    You must apply an annotation to limit the architecture to amd64.

  • IBM Cloud Pak for Business Automation adds support for Multi-Arch Compute clusters on IBM Power

    :information_source: With our partners in Cloud Pak for Business Automation, we are pleased to share the first Cloud Pak to support Multi-Arch Compute clusters on IBM Power:

    *Support for OCP multi-architecture clusters*
    An OpenShift Container Platform (OCP) multi-architecture cluster supports compute machines with different architectures, including ppc64le for Power, s390x for IBM Z, and amd64 (x86-64) for Intel and AMD. A CP4BA 25.0.0-IF002 deployment can be assigned to nodes that match the appropriate image architecture. For more information about assigning pods to nodes, see Placing pods on particular nodes.

    Ref: https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/25.0.0?topic=notes-whats-new-in-2500