Category: Uncategorized

  • OpenShift Container Platform 4.21.0 has been released


    • ppc64le payload: https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.21.0/

    • multi payload: https://mirror.openshift.com/pub/openshift-v4/multi/clients/ocp/4.21.0/ppc64le/

    New features are:

    • Added Installer-Provisioned Infrastructure (IPI) support for PowerVC [Technology Preview]

    • Enable Spyre Accelerator on IBM Power®

    • CIFS/SMB CSI Driver Operator

    • Red Hat build of Kueue

    • Kernel Module Management Operator

    • Serviceability notes for kdump on IBM Power

    Release Notes https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/release_notes/ocp-4-21-release-notes#ocp-release-notes-ibm-power_release-notes

    Video: What’s New in OpenShift 4.21 – Key Updates and New Features (YouTube)

    See https://community.ibm.com/community/user/blogs/brandon-pederson1/2026/02/04/red-hat-openshift-421-is-now-generally-available-o

  • MTO 1.2.1

    The Multiarch Tuning Operator v1.2.1 is released. Version 1.2.1 enhances the operational experience within multi-architecture clusters, as well as single-architecture clusters that are migrating to a multi-architecture compute configuration.

    source: https://github.com/openshift/multiarch-tuning-operator/compare/v1.2.0...v1.2.1 https://ftp.redhat.com/pub/redhat/containers/src.index.html

    catalog: https://catalog.redhat.com/en/software/container-stacks/detail/663a095de3f7eaafee6879a8 https://catalog.redhat.com/en/software/containers/multiarch-tuning/multiarch-tuning-rhel9-operator/6616582895a35187a06ba2ce?architecture=ppc64le&image=

  • Kernel Module Management Operator 2.5: Now Supporting IBM Power

    Originally posted to https://community.ibm.com/community/user/blogs/paul-bastide/2025/12/03/kernel-module-management-operator-25-now-supportin

    Note: Since this was written, 2.5.1 has been released and is the recommended version.

    Supporting KMM on IBM Power is part of my work, and my write-up is included on this site.

    We are excited to announce the release of Kernel Module Management (KMM) Operator 2.5, which brings significant enhancements to how you deploy specialized drivers on OpenShift Container Platform on IBM Power.

    The KMM Operator streamlines the management of out-of-tree kernel modules and their associated device plugins. The operator centrally manages, builds, signs, and deploys these components across your cluster.

    What is KMM?

    At its core, KMM utilizes a Module Custom Resource Definition (CRD). This resource allows you to configure everything necessary for an out-of-tree kernel module:

    • How to load the module.
    • Which ModuleLoader images to use for specific kernel versions.
    • Instructions for building and signing modules.

    One of KMM’s most powerful features is its ability to handle multiple kernel versions at once for any given module. This capability is critical for seamless node upgrades and reduced application downtime on your OpenShift clusters. A prime example is the management of specialized storage drivers that require custom kernel modules to function.

    KMM also supports in-cluster building and signing: it can build DriverContainer images and sign kernel modules in-cluster to ensure compatibility, including support for Secure Boot environments.

    Please note that IBM Power does not provide a Real-Time kernel, so the Real-Time kernel features of the KMM Operator are not applicable to IBM Power.

    🛠️ Installation

    KMM is supported on OpenShift Container Platform 4.20 and later on IBM Power.

    Using the Web Console

    As a cluster administrator, you can install KMM through the OpenShift web console:

    1. Log in to the OpenShift web console.
    2. Navigate to Ecosystem → Software Catalog.
    3. Select the Kernel Module Management Operator and click Install.
    4. Choose the openshift-kmm namespace from the Installed Namespace list.
    5. Click Install.

    To verify the installation, navigate to Ecosystem → Installed Operators and ensure the Kernel Module Management Operator in the openshift-kmm project shows a status of InstallSucceeded.
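    As an alternative to the web console, the installation can be sketched with OLM resources from the CLI. The channel and catalog source names below are assumptions based on typical Red Hat operator packaging; verify them against the official KMM documentation:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-kmm
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kernel-module-management
  namespace: openshift-kmm
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kernel-module-management
  namespace: openshift-kmm
spec:
  channel: stable
  name: kernel-module-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

    Apply the file with oc apply -f, then watch oc get csv -n openshift-kmm until the phase reports Succeeded.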

    💡 Usage Example: Deploying a Module

    The Module Custom Resource (CR) is used to define and deploy your kernel module.

    A Module CR specifies the following:

    • spec.selector: A node selector (e.g., node-role.kubernetes.io/worker: "") to determine which nodes are eligible.
    • spec.moduleLoader.container.kernelMappings: A list of kernel versions or regular expressions (regexp) and the corresponding container image to use.
    • spec.devicePlugin (Optional): Configuration for an associated device plugin.

    Example Module CR (Annotated)

    The following example shows how to configure a module named my-kmod to be deployed to all worker nodes. It uses kernel mappings to specify different container images for different kernel versions and includes configuration for building/signing the module if the image doesn’t exist.

    apiVersion: kmm.sigs.x-k8s.io/v1beta1
    kind: Module
    metadata:
      name: my-kmod
    spec:
      # Selects all worker nodes
      selector:
        node-role.kubernetes.io/worker: ""
    
      moduleLoader:
        container:
          # Required name of the kernel module to load
          modprobe:
            moduleName: my-kmod 
    
          # Defines container images based on kernel version
          kernelMappings:  
            # Literal match for a specific kernel version
            - literal: 6.0.15-300.fc37.x86_64
              containerImage: some.registry/org/my-kmod:6.0.15-300.fc37.x86_64
    
            # Regular expression match for any other kernel 
            - regexp: '^.+$' 
              containerImage: "some.registry/org/my-kmod:${KERNEL_FULL_VERSION}"
              # Instructions for KMM to build the image if it doesn't exist
              build:
                dockerfileConfigMap:  
                  name: my-kmod-dockerfile
              # Instructions for KMM to sign the module if Secure Boot is required
              sign:
                certSecret:
                  name: cert-secret 
                keySecret:
                  name: key-secret 
                filesToSign:
                  - /opt/lib/modules/${KERNEL_FULL_VERSION}/my-kmod.ko

    The KMM reconciliation loop will then handle listing matching nodes, finding the correct image for the running kernel, building/signing the image if needed, and creating worker pods to execute modprobe and load the kernel module.
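    The build stanza in the example references a ConfigMap named my-kmod-dockerfile. A minimal sketch of such a ConfigMap follows; the Dockerfile content is an assumption (in-cluster builds on OpenShift typically compile against the Driver Toolkit via the DTK_AUTO build argument), and the source repository is hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-kmod-dockerfile
data:
  dockerfile: |
    # DTK_AUTO is resolved by KMM to the Driver Toolkit image
    # matching the target kernel (assumed OpenShift environment)
    ARG DTK_AUTO
    FROM ${DTK_AUTO} as builder
    ARG KERNEL_FULL_VERSION
    # Hypothetical location of the module sources
    RUN git clone https://example.com/org/my-kmod.git /usr/src/my-kmod
    WORKDIR /usr/src/my-kmod
    RUN make -C /usr/src/kernels/${KERNEL_FULL_VERSION} M=$(pwd) modules
    FROM registry.access.redhat.com/ubi9/ubi-minimal
    ARG KERNEL_FULL_VERSION
    RUN microdnf install -y kmod
    COPY --from=builder /usr/src/my-kmod/my-kmod.ko /opt/lib/modules/${KERNEL_FULL_VERSION}/
    RUN depmod -b /opt ${KERNEL_FULL_VERSION}
```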

    Summary

    The Kernel Module Management (KMM) Operator 2.5 release enhances OpenShift’s ability to manage specialized hardware drivers. A key addition is support for IBM Power (ppc64le), enabling seamless, automated deployment of out-of-tree kernel modules and specialized storage drivers on this architecture. KMM continues to minimize disruption during node maintenance by supporting multiple kernel versions. However, Real-Time kernel support remains unavailable for IBM Power.

    References

    For more details on the KMM Operator and the latest changes, please consult the official documentation:

    1. KMM Operator 2.5 Release Notes
    2. Chapter 4. Kernel Module Management Operator Overview

  • 🔥 Boost Your VS Code Workflow with a Custom Hotkey

    Sometimes, the smallest automation can make a big difference in your coding flow. If you frequently type . // >—maybe as part of a comment convention, markdown formatting, or a custom syntax—you can streamline your workflow by creating a hotkey in Visual Studio Code to insert it instantly.

    Here’s how to do it without installing any extensions.


    ✅ Step 1: Create a Keybinding to Trigger the Snippet

    While VS Code doesn’t allow direct keybinding to a named snippet, you can work around this by using the built-in editor.action.insertSnippet command with an inline snippet.

    🔧 How to Set It Up:

    1. Open the Command Palette: Ctrl+Shift+P (or Cmd+Shift+P on macOS)
    2. Type and select: Preferences: Open Keyboard Shortcuts (JSON)
    3. Add the following entry to your keybindings.json file:

    JSON

    [
      {
        "key": "ctrl+shift+q",
        "command": "editor.action.insertSnippet",
        "args": {
          "snippet": ". // >"
        },
        "when": "editorTextFocus"
      }
    ]

    💡 You can change "ctrl+shift+q" to any key combination that suits your workflow.


    ✅ Step 2: Test It

    Now, whenever you’re focused in a text editor in VS Code and press Ctrl+Shift+Q, it will instantly insert:

    . // >
    

    No extensions. No fuss. Just a clean, efficient shortcut.


    🧠 Bonus Tip

    Want to scope this to specific file types like Markdown or Python? You can add a condition to the "when" clause, such as:

    JSON

    "when": "editorTextFocus && editorLangId == 'markdown'"
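    Putting it together, a Markdown-only version of the keybinding entry would look like this (same assumed key combination as above):

```json
[
  {
    "key": "ctrl+shift+q",
    "command": "editor.action.insertSnippet",
    "args": {
      "snippet": ". // >"
    },
    "when": "editorTextFocus && editorLangId == 'markdown'"
  }
]
```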

  • Optimizing Workloads with NUMA-Aware CPU Distribution in Kubernetes

    DRAFT This is not a complete article. I haven’t yet fully tested and vetted the steps I built. I will come back and hopefully update.

    Recent Kubernetes releases bring a powerful enhancement to CPU resource management: the ability to distribute CPUs across NUMA nodes via the CPUManager policy option distribute-cpus-across-numa. This feature, part of KEP-2902, enables better performance and resource utilization on multi-NUMA systems by spreading workloads instead of concentrating them on a single node.

    Non-Uniform Memory Access (NUMA) is a memory design used in modern multi-socket systems where each CPU socket has its own local memory. Accessing local memory is faster than accessing memory attached to another CPU. Therefore, NUMA-aware scheduling is crucial for performance-sensitive workloads.

    Traditionally, Kubernetes’ CPUManager used a “packed” allocation, taking CPUs from a single NUMA node to reduce latency. However, this can lead to resource contention and underutilization on systems with multiple NUMA nodes, particularly for:

    • High-throughput applications like databases or analytics engines
    • Multi-threaded workloads that benefit from parallelism
    • NUMA-aware applications that manage memory locality explicitly

    The distribute-cpus-across-numa option spreads CPU allocations across NUMA nodes, improving parallelism and overall system throughput.

    To enable this behavior, here is a step-by-step guide for OpenShift clusters based on Kubernetes v1.30+:

    1. Label the MachineConfigPool whose nodes should receive the custom CPUManager configuration.
    oc label machineconfigpool worker custom-kubelet=cpumanager-enabled
    
    2. Create a custom KubeletConfig that enables the static cpuManagerPolicy with the distribute-cpus-across-numa policy option.
    cat << EOF | oc apply -f -
    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: cpumanager-enabled
    spec:
      machineConfigPoolSelector:
        matchLabels:
          custom-kubelet: cpumanager-enabled
      kubeletConfig:
        cpuManagerPolicy: static
        cpuManagerPolicyOptions:
          distribute-cpus-across-numa: "true"
        cpuManagerReconcilePeriod: 5s
    EOF
    
    3. Wait for the node to restart the kubelet.
    4. Create a Pod that requests Guaranteed QoS by specifying equal CPU requests and limits:
    apiVersion: v1
    kind: Pod
    metadata:
      name: numa-aware-pod
    spec:
      containers:
      - name: workload
        image: your-image
        resources:
          requests:
            cpu: "4"
          limits:
            cpu: "4"
    

    Kubernetes will now distribute the 4 CPUs across NUMA nodes instead of packing them on one.
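    To confirm which CPUs a container was actually pinned to, you can read the kernel’s view of the process’s CPU affinity from inside the pod (for example via oc rsh); the same check works in any Linux shell:

```shell
# Print the list of CPUs the current process may run on.
# In a Guaranteed pod under the static CPUManager policy, this
# reflects the exclusively assigned CPUs.
grep Cpus_allowed_list /proc/self/status
```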

    To visualize this, here’s a conceptual graphic illustrating the difference between the two policies:

    Packed Policy (4 CPUs requested):

    NUMA Node 0: [CPU0, CPU1, CPU2, CPU3] ← All assigned here
    NUMA Node 1: [ ]
    

    Distributed Policy (4 CPUs requested):

    NUMA Node 0: [CPU0, CPU1]
    NUMA Node 1: [CPU4, CPU5] ← Balanced across nodes
    

    This balance reduces memory contention and improves cache locality for distributed workloads.

    This enhancement gives Kubernetes administrators more control over CPU topology, enabling better performance tuning for complex workloads. It’s a great step forward in making Kubernetes more NUMA-aware and suitable for high-performance computing environments.

  • 🚀 In-Place Pod Resize in Kubernetes: What You Need to Know

    DRAFT This is not a complete article. I haven’t yet fully tested and vetted the steps I built. I will come back and hopefully update.

    In Kubernetes v1.33, In-Place Pod Resize has entered Beta. This feature allows you to resize the CPU and memory resources of containers in a running Pod without restarting them. It is particularly attractive for Power customers who scale their systems vertically. Note that on clusters where the feature gate is not enabled by default, turning it on requires restarting the kubelet.

    Previously, changing a Pod’s resource requests or limits in Kubernetes meant restarting the Pod, which was disruptive for long-running workloads.

    With in-place pod resize, vertically autoscaling workloads and resizing stateful applications without downtime is a real win.

    1. Enable the InPlacePodVerticalScaling feature gate in a kind config called kind-cluster-config.yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    featureGates:
      InPlacePodVerticalScaling: true
    nodes:
    - role: control-plane
      kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        apiServer:
            extraArgs:
              v: "1"
        scheduler:
            extraArgs:
              v: "1"
        controllerManager:
            extraArgs:
              v: "1"
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            v: "1"
    - role: worker
      kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            v: "1"
    
    2. Download kind
    mkdir -p dev-cache
    GOBIN=$(pwd)/dev-cache go install sigs.k8s.io/kind@v0.29.0
    
    3. Start the kind cluster
    KIND_EXPERIMENTAL_PROVIDER=podman dev-cache/kind create cluster \
    		--image quay.io/powercloud/kind-node:v1.33.1 \
    		--name test \
    		--config kind-cluster-config.yaml\
    		--wait 5m
    
    4. Create a namespace
    apiVersion: v1
    kind: Namespace
    metadata:
      labels:
        kubernetes.io/metadata.name: resize-test
        pod-security.kubernetes.io/audit: restricted
        pod-security.kubernetes.io/audit-version: v1.24
        pod-security.kubernetes.io/enforce: restricted
        pod-security.kubernetes.io/warn: restricted
        pod-security.kubernetes.io/warn-version: v1.24
      name: resize-test
    
    5. Create a Pod
    apiVersion: v1
    kind: Pod
    metadata:
      name: resize-test
    spec:
      containers:
      - name: resize-test
        image: registry.access.redhat.com/ubi9/ubi
        resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: NotRequired
        resources:
          limits:
            memory: "200Mi"
            cpu: "1"
          requests:
            memory: "200Mi"
            cpu: "1"
    
    6. Edit the Pod’s resources: kubectl edit pod/resize-test -n resize-test and change the CPU or memory values.
    7. Check the result: kubectl describe pod/resize-test -n resize-test
    8. Check inside the Pod: kubectl exec -it pod/resize-test -n resize-test -- cat /sys/fs/cgroup/cpu.max shows the updated CPU quota (note that lscpu reports the host’s CPUs, not the cgroup limit).
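    The edit step above amounts to changing the container’s resources in place. Since v1.33 the resize goes through the Pod’s resize subresource; the merge-patch body (for example with kubectl patch --subresource resize, whose client-side flag support is an assumption to verify against your kubectl version) would look like:

```json
{
  "spec": {
    "containers": [
      {
        "name": "resize-test",
        "resources": {
          "limits": { "cpu": "2", "memory": "200Mi" },
          "requests": { "cpu": "2", "memory": "200Mi" }
        }
      }
    ]
  }
}
```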

    You’ve now seen how this feature works in Kubernetes and how to resize a Pod without a restart.

    References

    1. Kubernetes v1.33: In-Place Pod Resize Graduated to Beta
    2. Resize CPU and Memory Resources assigned to Containers

  • odf: Disk type is not compatible with the selected backing storage

    Here is how I worked around the error “Disk type is not compatible with the selected backing storage”:

    1. Encode a udev rule that sets queue/rotational to 0
    cat << EOF | base64 -w0
    ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dm-[1,3]",  ATTR{queue/rotational}="0"
    EOF
    

    Encoded

    QUNUSU9OPT0iYWRkfGNoYW5nZSIsIFNVQlNZU1RFTT09ImJsb2NrIiwgS0VSTkVMPT0iZG0tWzEsM10iLCAgQVRUUntxdWV1ZS9yb3RhdGlvbmFsfT0iMCIK
    
    2. Create the MachineConfig – 99-worker-udev-non-rotational
    cat << EOF > ./99-worker-udev-configuration.yaml
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 99-worker-udev-non-rotational
    spec:
      config:
        ignition:
          version: 3.2.0  
        storage:
          files:
          - contents:
              source: data:text/plain;charset=utf-8;base64,QUNUSU9OPT0iYWRkfGNoYW5nZSIsIFNVQlNZU1RFTT09ImJsb2NrIiwgS0VSTkVMPT0iZG0tWzEsM10iLCAgQVRUUntxdWV1ZS9yb3RhdGlvbmFsfT0iMCIK
              verification: {}
            filesystem: root
            mode: 420
            path: /etc/udev/rules.d/99-worker-udev-non-rotational.rules
    EOF
    oc apply -f 99-worker-udev-configuration.yaml
    

    Check the MachineConfigPool/worker status, and then proceed with the setup.
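    The base64 payload embedded in the MachineConfig can be sanity-checked locally before applying; a quick round trip:

```shell
# Encode the udev rule the same way the MachineConfig expects,
# then decode it again to confirm nothing was mangled.
RULE='ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="dm-[1,3]",  ATTR{queue/rotational}="0"'
ENCODED=$(printf '%s\n' "$RULE" | base64 -w0)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
printf '%s\n' "$ENCODED"
```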

    YAMLs

    Here are the YAMLs:

    apiVersion: local.storage.openshift.io/v1alpha1
    kind: LocalVolumeDiscovery
    metadata:
      name: auto-discover-devices
      namespace: openshift-local-storage
    spec:
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker1.removed
                  - worker2.removed
                  - worker3.removed
    
    apiVersion: local.storage.openshift.io/v1alpha1
    kind: LocalVolumeSet
    metadata:
      name: lvs-san-x
      namespace: openshift-local-storage
    spec:
      deviceInclusionSpec:
        deviceTypes:
          - disk
          - part
          - mpath
        minSize: 1Gi
      nodeSelector:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker1.removed
                  - worker2.removed
                  - worker3.removed
      storageClassName: lvs-san-x
      volumeMode: Block

    References

    1. Override device rotational flag in OCS/ODF environments https://access.redhat.com/articles/6547891

  • Reshare: Help Shape HashiCorp Integration for IBM Power

    For those integrating with Power, and interested in HashiCorp, you might be interested in this post.

    IBM Power is collecting insights on how you use HashiCorp products to manage infrastructure and security—including your specific use cases. Even if you’re not currently using these tools, we’d still love to hear from you. The survey takes just 10 minutes, and your feedback will help shape the future integration of HashiCorp into IBM Power.

    Click here to take the survey [ibm.biz/hashicorp_ibmpower]

    See https://community.ibm.com/community/user/question/help-shape-hashicorp-integration-for-ibm-power-2