Blog

  • Beyond the Static Dashboard: The Power of the Dynamic ConsolePlugin in OpenShift

    In the fast-evolving world of cloud-native platforms, a “one size fits all” user interface is no longer enough. As your ecosystem grows, your console needs to grow with it—without requiring a full platform reboot every time you want to add a new feature.

    Enter Dynamic Plugins. By shifting away from hardcoded UI components toward a flexible, runtime-loaded architecture, developers can now inject custom pages and extensions directly into the console on the fly. Leveraging the power of the Operator Lifecycle Manager (OLM), these plugins are delivered as self-contained micro-services that integrate seamlessly into your existing workflow. In this post, we’ll explore how this architecture turns your cluster console into a living, extensible platform.

    Here is the recipe to test the ConsolePlugin.

    With the test setup and configuration changes, you'll need to rebuild the container image.

    1. Set up the external route for the Image Registry
    $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
    config.imageregistry.operator.openshift.io/cluster patched
    
    1. Check the OpenShift image registry route; you should see the hostname printed.
    $ oc get route default-route -n openshift-image-registry --template='{{.spec.host }}'
    default-route-openshift-image-registry.apps.rct-ocp-pra-fbac.ibm.com
    
    1. Make the local registry lookup use relative names
    $ oc set image-lookup --all
    
    1. Set a temporary login
    export KUBECONFIG=~/local_config
    oc login -u kubeadmin -p $(cat openstack-upi/auth/kubeadmin-password) api.rct-ocp-pra-fbac.ibm.com:6443
    
    1. Log in to the registry (use the token from oc whoami -t)
    $ podman login --tls-verify=false -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps.rct-ocp-pra-fbac.ibm.com
    Login Succeeded
    
    1. Revert back to the default kubeconfig
    $ unset KUBECONFIG
    
    1. Create the test plugin
    oc new-project console-demo-plugin
    oc label namespace/console-demo-plugin security.openshift.io/scc.podSecurityLabelSync=false --overwrite=true
    oc label namespace/console-demo-plugin pod-security.kubernetes.io/enforce=privileged --overwrite=true
    oc label namespace/console-demo-plugin pod-security.kubernetes.io/enforce-version=v1.24 --overwrite=true
    oc label namespace/console-demo-plugin pod-security.kubernetes.io/audit=privileged --overwrite=true
    oc label namespace/console-demo-plugin pod-security.kubernetes.io/warn=privileged --overwrite=true
    
    1. Clone the test repo
    git clone https://github.com/openshift/console-plugin-template
    cd console-plugin-template/
    
    1. Build Container Image
    $ oc project console-demo-plugin
    $ podman build -t $(oc get route default-route -n openshift-image-registry --template='{{.spec.host }}')/$(oc project --short=true)/console-demo-plugin:plugin -f Dockerfile .
    

    :warning: If the build stalls, add the IP of the primary interface to /etc/resolv.conf as a nameserver. For instance, nameserver 10.20.184.190 is added as a new line.

    1. Push Container Image
    $ podman push --tls-verify=false $(oc get route default-route -n openshift-image-registry --template='{{.spec.host }}')/$(oc project --short=true)/console-demo-plugin:plugin
    
    1. Helm install the console-plugin-template
    $ helm upgrade -i console-plugin-template charts/openshift-console-plugin \
        -n console-demo-plugin \
        --set plugin.image=$(oc get route default-route -n openshift-image-registry --template='{{.spec.host }}')/$(oc project --short=true)/console-demo-plugin:plugin \
        --set plugin.jobs.patchConsoles.image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16816f988db21482c309e77207364b7c282a0fef96e6d7da129928aa477dcfa7
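
Once the Helm release is installed, the plugin must also be enabled in the Console Operator configuration. The chart's patch-consoles job typically handles this for you; the commands below verify it, or enable the plugin manually if needed (the plugin name console-plugin-template is an assumption; check it with the first command):

```shell
# List the ConsolePlugin resources created by the chart
oc get consoleplugins

# Enable the plugin by appending it to the console operator's plugin list
# (plugin name assumed; substitute the name from the previous command)
oc patch consoles.operator.openshift.io cluster --type=json \
  -p '[{"op":"add","path":"/spec/plugins/-","value":"console-plugin-template"}]'

# Confirm the plugin appears in the enabled list
oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'
```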
    

    In Conclusion: Seamless Extensibility Through Automation

    Dynamic plugins represent a major leap forward in UI flexibility. By utilizing OLM Operators to manage the underlying infrastructure, the process of extending a console is both automated and scalable. To recap the workflow:

    • Deployment: An Operator spins up a dedicated HTTP server and Kubernetes service to host the plugin’s assets.
    • Registration: The ConsolePlugin custom resource acts as the bridge, announcing the plugin’s presence to the system.
    • Activation: The cluster administrator retains ultimate control, enabling the plugin through the Console Operator configuration.
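
For reference, the ConsolePlugin resource behind the Registration step looks roughly like this (a sketch; the name, namespace, port, and basePath are illustrative):

```yaml
apiVersion: console.openshift.io/v1
kind: ConsolePlugin
metadata:
  name: console-demo-plugin
spec:
  displayName: "Console Demo Plugin"
  backend:
    type: Service
    service:
      # Service that hosts the plugin's static assets
      name: console-demo-plugin
      namespace: console-demo-plugin
      port: 9443
      basePath: /
```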

    This decoupled approach ensures that your console remains lightweight and stable while providing the “pluggable” freedom necessary for modern, customized cloud environments.

    Reference

    Dynamic Plugins in 4.20

  • Deploy OpenShift on IBM PowerVS with Ease

    Deploying Red Hat OpenShift on IBM Power Systems Virtual Server (PowerVS) just got faster. The openshift-install-power project provides a streamlined bash script that automates the deployment process using Infrastructure as Code (IaC).

    By wrapping the Terraform logic of the ocp4-upi-powervs pattern into an interactive script, this tool removes the manual friction of setting up enterprise clusters.

    Release v1.14.0 further refines Terraform lifecycle management and improves the automation flow for a more seamless user experience.

    To get started:

    1. Prep: Ensure your PowerVS instance is prepped for deployment.
    2. Clone: git clone https://github.com/ocp-power-automation/openshift-install-power.git
    3. Run: Execute the installer script and follow the prompts.
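
The three steps above can be sketched as follows (the script name openshift-install-powervs and its subcommands are taken from the repository README and may change between releases):

```shell
# Clone the automation wrapper
git clone https://github.com/ocp-power-automation/openshift-install-power.git
cd openshift-install-power

# Install dependencies (terraform, IBM Cloud CLI) and create the cluster;
# the script prompts interactively for API keys and cluster variables
./openshift-install-powervs setup
./openshift-install-powervs create
```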

    For a full demo and documentation, visit the GitHub Repository.

  • IBM Power adds Limited Live Migration Support to OpenShift 4.16

    IBM Power Systems adds official support for Limited Live Migration from OpenShiftSDN to OVN-Kubernetes. Administrators can migrate cluster networks from OpenShiftSDN to OVN-Kubernetes without service interruption. As the preferred migration path, it ensures that enterprise workloads running on OpenShift Container Platform on IBM Power maintain continuous availability. For environments where a live transition is not feasible, IBM Power also supports the offline migration method to ensure a successful network evolution.

    Steps

    1. Verifying Setup
       a. Ensure you are on the latest eus-4.16 release, which is 4.16.54; this is the version we used when testing. See the OpenShift Upgrade Path.
       b. Ensure oc get co returns all Operators Ready and none are degraded.
       c. Review the Diagnostic Steps in the Knowledge Base article: Limited Live Migration from OpenShift SDN to OVN-Kubernetes https://access.redhat.com/solutions/7057169
    2. If everything is OK, you can initiate the limited live migration per 19.5.1.5.4. Initiating the limited live migration process
    oc patch Network.config.openshift.io cluster --type='merge' --patch '{"metadata":{"annotations":{"network.openshift.io/network-type-migration":""}},"spec":{"networkType":"OVNKubernetes"}}'
    
    1. Watch the network.config resource until the migration is complete.
    oc get network.config cluster -o yaml --watch
    
    1. After a successful migration, remove the network.openshift.io/network-type-migration annotation from the network.config custom resource by entering the following command:
    oc annotate network.config cluster network.openshift.io/network-type-migration-
    
    1. Afterwards, you may see the following output in network.config; this is OK and expected.
      # oc get network.config -oyaml
      apiVersion: config.openshift.io/v1
      kind: Network
      metadata:
        creationTimestamp: "2025-12-09T07:03:09Z"
        generation: 18
        name: cluster
        resourceVersion: "545748"
        uid: b3ec83d9-f1ba-4a44-959a-0c60f3e19866
      spec:
        clusterNetwork:
        - cidr: 10.128.0.0/14
          hostPrefix: 23
        externalIP:
          policy: {}
        networkType: OVNKubernetes
        serviceNetwork:
        - 172.30.0.0/16
      status:
        clusterNetwork:
        - cidr: 10.128.0.0/14
          hostPrefix: 23
        clusterNetworkMTU: 1350
        conditions:
        - lastTransitionTime: "2025-12-10T07:25:55Z"
          message: ""
          reason: AsExpected
          status: "True"
          type: NetworkDiagnosticsAvailable
        - lastTransitionTime: "2025-12-10T07:41:38Z"
          message: Network type migration is not in progress
          reason: NetworkTypeMigrationNotInProgress
          status: Unknown
          type: NetworkTypeMigrationMTUReady
        - lastTransitionTime: "2025-12-10T07:41:38Z"
          message: Network type migration is not in progress
          reason: NetworkTypeMigrationNotInProgress
          status: Unknown
          type: NetworkTypeMigrationTargetCNIAvailable
        - lastTransitionTime: "2025-12-10T07:41:38Z"
          message: Network type migration is not in progress
          reason: NetworkTypeMigrationNotInProgress
          status: Unknown
          type: NetworkTypeMigrationTargetCNIInUse
        - lastTransitionTime: "2025-12-10T07:41:38Z"
          message: Network type migration is not in progress
          reason: NetworkTypeMigrationNotInProgress
          status: Unknown
          type: NetworkTypeMigrationOriginalCNIPurged
        - lastTransitionTime: "2025-12-10T07:41:38Z"
          message: Network type migration is completed
          reason: NetworkTypeMigrationCompleted
          status: "False"
          type: NetworkTypeMigrationInProgress
        networkType: OVNKubernetes
        serviceNetwork:
        - 172.30.0.0/16
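
As a convenience, the pre-flight checks from step 1 can be scripted (a sketch; the column positions follow the default oc get co output):

```shell
# Cluster version: confirm you are on the expected 4.16 z-stream
oc get clusterversion

# Flag any operator that is not Available or is Degraded
# (columns: NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE)
oc get co --no-headers | awk '$3 != "True" || $5 == "True" {print "check operator:", $1}'
```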
    

    Best wishes with your migration.

    Reference

    1. 19.5.1.1. Supported platforms when using the limited live migration method https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/ovn-kubernetes-network-plugin#supported-platforms-live-migrating-ovn-kubernetes
  • A Reference on Optimizing Vector Search on IBM Power

    The vector-distance-reference repository provides high-performance implementations of distance computation kernels—the backbone of vector databases like FAISS, PGVector, and Knowhere—specifically optimized for IBM Power architectures.

    The project achieves significant speedups by moving beyond generic C++ code to leverage Power-specific hardware features:

    • Source-Level Optimization: Utilizing vector data types for better compiler auto-vectorization.
    • Intrinsic-Level Optimization: Directly invoking AltiVec and IBM-specific built-in functions for maximum control over hardware registers.

    Whether on RHEL (using gcc) or AIX (using IBM Open XL C/C++), the build process is streamlined via specialized Makefiles.

    To test a 32-dimension vector using intrinsic optimizations:

    make
    ./bin/test -s 32 --run_intrinsic_code
    

    The repository includes a testing framework that compares base implementations against Power-optimized versions.

    Early benchmarks show optimized ppc64le code can reduce execution time to roughly 40% of the original, delivering a 2.5x performance boost for critical Euclidean and Hamming distance calculations. Note: Hamming distance optimizations require POWER8 or later due to the vec_popcnt() requirement.

    Reference

    • https://github.com/IBM/vector-distance-reference/tree/main
  • New Images on the IBM Container Registry for AI on Power

    The IBM Linux on Power team has released some new open source container images into the IBM Container Registry (ICR). The new images for ollama and docling are particularly interesting for those working on AI.

    rocketmq 5.3.3      podman pull icr.io/ppc64le-oss/rocketmq-ppc64le:5.3.3
    openssh-server
    - 8.1_p1-r0-ls20 	podman pull icr.io/ppc64le-oss/openssh-server-ppc64le:8.1_p1-r0-ls20
    - 8.4_p1-r3-ls48 	podman pull icr.io/ppc64le-oss/openssh-server-ppc64le:8.4_p1-r3-ls48
    ollama v0.13.1      podman pull icr.io/ppc64le-oss/ollama-ppc64le:v0.13.1
    docling 2.60.1      podman pull icr.io/ppc64le-oss/docling-ppc64le:2.60.1
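
As a quick smoke test of the new ollama image (the exposed port and API path below assume the image uses the upstream ollama defaults):

```shell
# Run the server container; upstream ollama listens on 11434 by default
podman run -d --name ollama -p 11434:11434 icr.io/ppc64le-oss/ollama-ppc64le:v0.13.1

# Verify the API responds
curl -s http://localhost:11434/api/version
```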
    

    Refer to https://community.ibm.com/community/user/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr for more details.

  • MTO 1.2.1

    The Multiarch Tuning Operator v1.2.1 is released. Version 1.2.1 enhances the operational experience within multi-architecture clusters, as well as single-architecture clusters that are migrating to a multi-architecture compute configuration.
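
Once the operator is installed, pod placement control is typically switched on with a ClusterPodPlacementConfig resource like the following (a minimal sketch; the singleton resource is named cluster):

```yaml
apiVersion: multiarch.openshift.io/v1beta1
kind: ClusterPodPlacementConfig
metadata:
  name: cluster
spec:
  # Standard logging; raise the verbosity when troubleshooting
  logVerbosity: Normal
```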

    source: https://github.com/openshift/multiarch-tuning-operator/compare/v1.2.0...v1.2.1 https://ftp.redhat.com/pub/redhat/containers/src.index.html

    catalog: https://catalog.redhat.com/en/software/container-stacks/detail/663a095de3f7eaafee6879a8 https://catalog.redhat.com/en/software/containers/multiarch-tuning/multiarch-tuning-rhel9-operator/6616582895a35187a06ba2ce?architecture=ppc64le&image=

  • Great News… Red Hat AI Inference Server on IBM® Power

    A team I work closely with just announced the general availability of Red Hat AI Inference Server on IBM® Power®. This brings a high-performance, cost-efficient option for generative AI inferencing to organizations running enterprise workloads across hybrid cloud environments.

    Running natively on IBM Power processor-based servers, clients gain a secure, resilient, and scalable platform for AI alongside mission-critical workloads. With IBM Spyre™ Accelerator for Power, enterprises can accelerate AI while maintaining governance and reducing latency by keeping inferencing close to their data.

    This collaboration between IBM and Red Hat delivers open, hybrid cloud freedom and enterprise-grade performance for AI at scale. Learn more:

  • New Containers for IBM Power

    New container images for IBM Power are available; here are the last four images:

    | Image Name | Tag | Project License | Image Pull Command | Last Published |
    | --- | --- | --- | --- | --- |
    | rocketmq | 5.3.3 | Apache-2.0 | docker pull icr.io/ppc64le-oss/rocketmq-ppc64le:5.3.3 | December 9, 2025 |
    | elasticsearch | 7.17.28 | Server Side Public License v1 and Elastic License 2.0 | docker pull icr.io/ppc64le-oss/elasticsearch-ppc64le:7.17.28 | November 14, 2025 |
    | zookeeper | v3.9.3-debian-12-r19-bv | Apache License 2.0 | docker pull icr.io/ppc64le-oss/zookeeper-ppc64le:v3.9.3-debian-12-r19-bv | November 14, 2025 |
    | vllm | 0.10.1 | Apache-2.0 | docker pull icr.io/ppc64le-oss/vllm-ppc64le:0.10.1.dev852.gee01645db.d20250827 | September 11, 2025 |

    Reference

    https://community.ibm.com/community/user/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr

  • Kernel Module Management Operator 2.5: Now Supporting IBM Power

    Originally posted to https://community.ibm.com/community/user/blogs/paul-bastide/2025/12/03/kernel-module-management-operator-25-now-supportin

    Note: Since this was written, KMM 2.5.1 has been released and is the recommended version.

    Supporting KMM on IBM Power is part of my work, and I have included my write-up on this site.

    We are excited to announce the release of Kernel Module Management (KMM) Operator 2.5, which brings significant enhancements to how you deploy specialized drivers on OpenShift Container Platform to IBM Power.

    The KMM Operator streamlines the management of out-of-tree kernel modules and their associated device plugins. The operator centrally manages, builds, signs, and deploys these components across your cluster.

    What is KMM?

    At its core, KMM utilizes a Module Custom Resource Definition (CRD). This resource allows you to configure everything necessary for an out-of-tree kernel module:

    • How to load the module.
    • Defining ModuleLoader images for specific kernel versions.
    • Including instructions for building and signing modules.

    One of KMM’s most powerful features is its ability to handle multiple kernel versions at once for any given module. This capability is critical for achieving seamless node upgrades and reduced application downtime on your OpenShift clusters. A prime example of this support includes the effortless management of specialized storage drivers that require custom kernel modules to function.

    Other features include In-Cluster Building and Signing: KMM can build DriverContainer images and sign kernel modules in-cluster to ensure compatibility, including support for Secure Boot environments.

    Please note that IBM Power does not provide a Real-Time kernel, so the KMM Operator features for the Real-Time kernel are not applicable to IBM Power.

    🛠️ Installation

    KMM is supported on OpenShift Container Platform on IBM Power 4.20 and later.

    Using the Web Console

    As a cluster administrator, you can install KMM through the OpenShift web console:

    1. Log in to the OpenShift web console.
    2. Navigate to Ecosystem → Software Catalog.
    3. Select the Kernel Module Management Operator and click Install.
    4. Choose the openshift-kmm namespace from the Installed Namespace list.
    5. Click Install.

    To verify the installation, navigate to Ecosystem → Installed Operators and ensure the Kernel Module Management Operator in the openshift-kmm project shows a status of InstallSucceeded.
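
The same verification can be done from the CLI, if you prefer (namespace as chosen during installation):

```shell
# The ClusterServiceVersion should report phase Succeeded
oc get csv -n openshift-kmm

# The controller pods should be Running
oc get pods -n openshift-kmm
```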

    💡 Usage Example: Deploying a Module

    The Module Custom Resource (CR) is used to define and deploy your kernel module.

    A Module CR specifies the following:

    • spec.selector: A node selector (e.g., node-role.kubernetes.io/worker: "") to determine which nodes are eligible.
    • spec.moduleLoader.container.kernelMappings: A list of kernel versions or regular expressions (regexp) and the corresponding container image to use.
    • spec.devicePlugin (Optional): Configuration for an associated device plugin.

    Example Module CR (Annotated)

    The following example shows how to configure a module named my-kmod to be deployed to all worker nodes. It uses kernel mappings to specify different container images for different kernel versions and includes configuration for building/signing the module if the image doesn’t exist.

    apiVersion: kmm.sigs.x-k8s.io/v1beta1
    kind: Module
    metadata:
      name: my-kmod
    spec:
      # Selects all worker nodes
      selector:
        node-role.kubernetes.io/worker: ""
    
      moduleLoader:
        container:
          # Required name of the kernel module to load
          modprobe:
            moduleName: my-kmod 
    
          # Defines container images based on kernel version
          kernelMappings:  
            # Literal match for a specific kernel version
            - literal: 6.0.15-300.fc37.x86_64
              containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64
    
            # Regular expression match for any other kernel 
            - regexp: '^.+$' 
              containerImage: "some.registry/org/my-kmod:${KERNEL_FULL_VERSION}"
              # Instructions for KMM to build the image if it doesn't exist
              build:
                dockerfileConfigMap:  
                  name: my-kmod-dockerfile
              # Instructions for KMM to sign the module if Secure Boot is required
              sign:
                certSecret:
                  name: cert-secret 
                keySecret:
                  name: key-secret 
                filesToSign:
                  - /opt/lib/modules/${KERNEL_FULL_VERSION}/my-kmod.ko

    The KMM reconciliation loop will then handle listing matching nodes, finding the correct image for the running kernel, building/signing the image if needed, and creating worker pods to execute modprobe and load the kernel module.
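
After applying the Module CR, the rollout can be confirmed with commands like these (the node name is a placeholder; note that lsmod reports dashes as underscores):

```shell
# Check the Module resource KMM is reconciling
oc get modules.kmm.sigs.x-k8s.io my-kmod

# Confirm the module is loaded on a node
oc debug node/<node-name> -- chroot /host lsmod | grep my_kmod
```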

    Summary

    The Kernel Module Management (KMM) Operator 2.5 release enhances OpenShift’s ability to manage specialized hardware drivers. A key addition is support for IBM Power (ppc64le), enabling seamless, automated deployment of out-of-tree kernel modules and specialized storage drivers on this architecture. KMM continues to minimize disruption during node maintenance by supporting multiple kernel versions. However, Real-Time kernel support remains unavailable for IBM Power.

    References

    For more details on the KMM Operator and the latest changes, please consult the official documentation:

    1. KMM Operator 2.5 Release Notes
    2. Chapter 4. Kernel Module Management Operator Overview