Author: Paul Bastide

  • Red Hat Service Interconnect (RHSI) now supports IBM Power (ppc64le)!

    We are excited to announce that, with the release of Red Hat Service Interconnect (RHSI) v2.1.2, RHSI now runs on IBM Power Systems and workloads can seamlessly join your cross-architecture service mesh!

    Red Hat Service Interconnect is based on the Skupper project, allowing you to create a Layer-7 service interconnect across different clouds and clusters. It allows your apps to talk to each other as if they were on the same local network — without complex VPNs or firewall headaches.

    A few key things to know about RHSI:

    • Security First: All traffic is encrypted automatically using mTLS.
    • No Root Needed: Operates at the application layer; no cluster-admin rights are required to get started.
    • Seamless Integration: Easily connect a frontend in the public cloud to a database on a Power system in your private datacenter.
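    To make the "seamless integration" point concrete, here is a minimal sketch using Skupper 1.x-style CLI commands. Note this is my illustration, not from the release notes: the command names may differ in the Skupper version underlying RHSI 2.x, and the deployment and file names are hypothetical.

    ```shell
    # On the public-cloud cluster (frontend side): create a site and a connection token
    skupper init
    skupper token create ~/power-site.token

    # On the Power cluster (database side): create a site and link it using the token
    skupper init
    skupper link create ~/power-site.token

    # Expose the database so the frontend can reach it by service name across clusters
    skupper expose deployment/db --port 5432
    ```

    Once linked, the frontend resolves the database service locally while traffic transits the mTLS-encrypted Skupper network.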

    Check out the official release notes and supported configurations here:
    https://docs.redhat.com/en/documentation/red_hat_service_interconnect/2.1/html/release_notes/supported-configurations

  • A Simplified Script to Install Kubernetes with kubeadm

    The provider-ibmcloud-test-infra project introduces a simplified kubeadm installation script designed for ease of use and consistency. This script streamlines common setup steps, reduces manual intervention, and helps users get a functional Kubernetes cluster up and running faster.

    As shared by Manjunath Kumatagi, the goal is to make Kubernetes installation more accessible for developers and operators alike. Try running the script from the repository, explore how it fits your workflow, and share feedback to help improve it further.
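    I have not reproduced the script here, but for orientation, a typical kubeadm bootstrap — the kind of steps such a script automates — looks roughly like this (the pod CIDR and CNI choice are illustrative assumptions, not details from the project):

    ```shell
    # On the control-plane node: initialize the cluster
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Set up kubectl for the current user
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Install a CNI plugin (Flannel shown as one example)
    kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

    # Print the join command to run on each worker node
    kubeadm token create --print-join-command
    ```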

  • Updated Blog on Navigating Red Hat OpenShift Licensing on IBM Power

    Optimizing infrastructure costs starts with understanding how your software licensing interacts with your hardware capabilities. In his updated blog post, IBM’s Maarten Kreuger breaks down the nuances of Red Hat OpenShift subscriptions specifically for IBM Power systems.

    The post explores how the unique features of the Power Hypervisor (PowerVM) allow for highly granular licensing. Whether you are using dedicated cores or leveraging Shared Processor LPARs, understanding the math behind “core-pairs”, bare metal and “socket models” is essential for a cost-effective deployment.

    Key highlights from the blog include:

    • The Power Advantage: How PowerVM’s hardware-enforced hypervisor allows for per-core licensing and fine-grained increments (as small as 0.05 cores).
    • Subscription Models: A comparison between the simple Socket Model (ideal for scale-out servers like the S1122) and the Core-Pair Model (best for shared or co-hosted environments).
    • The SMT Variable: Why SMT (Simultaneous Multi-Threading) doesn’t increase your license costs, despite reporting more vCPUs.
    • Optimization Tactics: How to use Shared Processor Pools to cap CPU usage and prevent paying for the same physical core twice.

    Whether you’re running a single cluster or managing complex Power Enterprise Pools 2.0, this guide provides the clarity needed to ensure you aren’t over-subscribing.
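    As a back-of-the-envelope illustration of the core-pair math (my own simplification for this newsletter, not Red Hat's official calculator), the idea is: sum the fractional entitled cores across your LPARs, then round up to whole core-pairs. The entitlement values below are hypothetical.

    ```shell
    # Hypothetical entitled-core values for three LPARs
    # (PowerVM allows increments as small as 0.05 cores)
    entitlements="1.55 0.80 2.25"

    # Sum the fractional entitlements
    total=$(echo "$entitlements" | tr ' ' '\n' | awk '{s+=$1} END {printf "%.2f", s}')

    # One core-pair subscription covers 2 cores; round up to the next whole pair
    pairs=$(awk -v t="$total" 'BEGIN { p = t / 2; c = int(p); if (p > c) c++; print c }')

    echo "Total entitled cores: $total"
    echo "Core-pair subscriptions needed: $pairs"
    ```

    Here 4.60 entitled cores rounds up to 3 core-pair subscriptions — read the blog for the authoritative rules, including when the socket model is cheaper.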

    Read the full technical deep-dive at OpenShift Subscriptions on Power

  • Bash Fu: ${%%}

    Thanks to Gerrit for cluing me in.

    In Bash, symbols like # and % aren’t just random noise—they are powerful operators used for Parameter Expansion. They allow you to “trim” or “slice” strings stored in variables without needing external tools like sed or awk.

    To understand ${%%}, we have to break down how Bash sees those symbols.

    1. The Core Logic: Front vs. Back

    Think of these symbols as “knives” that cut parts of your string based on a pattern:

    Symbol	Action	Mnemonic
    #	Removes from the front (left)	The # is on the left side of a standard keyboard (Shift+3).
    %	Removes from the back (right)	The % is on the right side of the # (Shift+5).

    2. Doubling Up: Small vs. Large

    The number of symbols determines how “aggressive” the cut is:

    • Single (# or %): Non-greedy. It removes the shortest possible match.
    • Double (## or %%): Greedy. It removes the longest possible match.

    3. Practical Examples

    Let’s say we have a variable: file="image.jpg.backup"

    Using # and ## (Removing from the Front)

    • ${file#*.} → Result: jpg.backup (Cut the shortest bit ending in a dot).
    • ${file##*.} → Result: backup (Cut everything up to and including the last dot).

    Using % and %% (Removing from the Back)

    • ${file%.*} → Result: image.jpg (Cut the shortest bit starting from a dot at the end).
    • ${file%%.*} → Result: image (Cut everything from the first dot to the end).

    If you have VAR="long.file.name.txt":

    Syntax	Logic	Result
    ${VAR#*.}	Delete shortest match from front	file.name.txt
    ${VAR##*.}	Delete longest match from front	txt
    ${VAR%.*}	Delete shortest match from back	long.file.name
    ${VAR%%.*}	Delete longest match from back	long

    Quick Tip: If you ever forget which is which, remember that on the keyboard, # is to the left of %. Therefore, # handles the left (start) of the string, and % handles the right (end).
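    To make the mnemonic concrete, here is a paste-able snippet (pure Bash, no external tools) that exercises all four operators at once:

    ```shell
    file="image.jpg.backup"

    echo "${file#*.}"    # shortest match removed from the front -> jpg.backup
    echo "${file##*.}"   # longest match removed from the front  -> backup
    echo "${file%.*}"    # shortest match removed from the back  -> image.jpg
    echo "${file%%.*}"   # longest match removed from the back   -> image
    ```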

  • Another Image on the IBM Container Registry for Caching on Power

    The IBM Linux on Power team has released some new open source container images into the IBM Container Registry (ICR). The new image for traefik is particularly interesting for those working with ingress.

    traefik v3.3 	MIT 	podman pull icr.io/ppc64le-oss/traefik-ppc64le:v3.3 	March 27, 2026

    Refer to https://community.ibm.com/community/user/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr for more details.

  • Docling with IBM Power

    Originally posted to https://community.ibm.com/community/user/blogs/paul-bastide/2026/03/20/docling-with-ibm-power

    If you’ve been following the rapid evolution of document parsing in AI, you’ve likely encountered Docling. It’s a powerhouse for converting complex PDFs and documents into machine-readable formats. The AI Services team and the IBM Power Python Ecosystem team have provided all of the requirements so you can use Docling and, as it iterates rapidly, stay up to date.

    For Python developers using IBM Power, this article provides a recipe for using Docling on IBM Power. You can also learn more about using the Python Ecosystem at https://community.ibm.com/community/user/blogs/janani-janakiraman/2025/09/10/developing-apps-using-python-packages-on-ibm-power

    The Recipe: Step-by-Step Installation

    This guide assumes you are working in a Linux environment (specifically optimized for ppc64le architectures, though the logic holds for most setups).

    1. Prepare Your Environment

    Start by setting up a fresh virtual environment to avoid dependency issues:

    # Create and activate an isolated virtual environment
    python3 -m venv ./test-venv
    source ./test-venv/bin/activate
    # Upgrade the environment in place to Python 3.12
    python3.12 -m venv --upgrade test-venv/
    

    2. Define the Requirements

    The AI Services team has identified a specific “golden set” of versions that play well together. Create a requirements.txt file containing the necessary packages, including docling, torch, and transformers.

    accelerate==1.13.0
    annotated-doc==0.0.4
    annotated-types==0.7.0
    antlr4-python3-runtime==4.9.3
    attrs==26.1.0
    beautifulsoup4==4.14.3
    certifi==2026.2.25
    charset-normalizer==3.4.6
    click==8.3.1
    colorlog==6.10.1
    defusedxml==0.7.1
    dill==0.4.1
    docling==2.77.0
    docling-core==2.70.2
    docling-ibm-models==3.12.0
    docling-parse==5.3.2
    et_xmlfile==2.0.0
    Faker==40.11.0
    filelock==3.25.2
    filetype==1.2.0
    fsspec==2026.2.0
    huggingface_hub==0.36.2
    idna==3.11
    Jinja2==3.1.6
    jsonlines==4.0.0
    jsonref==1.1.0
    jsonschema==4.26.0
    jsonschema-specifications==2025.9.1
    latex2mathml==3.79.0
    lxml==6.0.2
    markdown-it-py==4.0.0
    marko==2.2.2
    MarkupSafe==3.0.3
    mdurl==0.1.2
    mpire==2.10.2
    mpmath==1.3.0
    multiprocess==0.70.19
    networkx==3.6.1
    numpy==2.4.1
    omegaconf==2.3.0
    opencv-python==4.10.0.84+ppc64le2
    openpyxl==3.1.5
    packaging==26.0
    pandas==2.3.3
    pillow==12.1.1
    pip==26.0.1
    pluggy==1.6.0
    polyfactory==3.3.0
    psutil==7.2.2
    pyclipper==1.4.0
    pydantic==2.12.5
    pydantic_core==2.41.5
    pydantic-settings==2.13.1
    Pygments==2.19.2
    pylatexenc==2.10
    pypdfium2==5.6.0
    python-dateutil==2.9.0.post0
    python-docx==1.2.0
    python-dotenv==1.2.2
    python-pptx==1.0.2
    pytz==2026.1.post1
    PyYAML==6.0.3
    rapidocr==3.7.0
    referencing==0.37.0
    regex==2026.2.28
    requests==2.32.5
    rich==14.3.3
    rpds-py==0.30.0
    rtree==1.4.1
    safetensors==0.7.0
    scipy==1.17.0
    semchunk==3.2.5
    shapely==2.1.2
    shellingham==1.5.4
    six==1.17.0
    soupsieve==2.8.3
    sympy==1.14.0
    tabulate==0.10.0
    tokenizers==0.22.2
    torch==2.9.1
    torchvision==0.24.1
    tqdm==4.67.3
    transformers==4.57.6
    tree-sitter==0.25.2
    tree-sitter-c==0.24.1
    tree-sitter-javascript==0.25.0
    tree-sitter-python==0.25.0
    tree-sitter-typescript==0.23.2
    typer==0.21.2
    typing_extensions==4.15.0
    typing-inspection==0.4.2
    tzdata==2025.3
    urllib3==2.6.3
    xlsxwriter==3.2.9

    Note: Ensure you include the full list of dependencies (like docling==2.77.0 and docling-core==2.70.2) to maintain stability across your build.

    If you need OCR, you will need to run:

     yum install -y --setopt=tsflags=nodocs python3.12-devel python3.12-pip \
            lcms2-devel openblas-devel freetype libicu libjpeg-turbo && \
        yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm && \
        yum install -y spatialindex-devel

    3. The Installation Secret Sauce

    Before running the install, ensure pip is at its latest version. Then, use the --extra-index-url flag to point to the optimized IBM developer wheels. This is the trick to getting the faster compilation mentioned earlier.

    pip install --upgrade pip
    pip install -r requirements.txt \
        --extra-index-url=https://wheels.developerfirst.ibm.com/ppc64le/linux \
        --prefer-binary
    

    Verifying the Build

    Once the installation completes, it’s a good idea to run a “smoke test” to ensure the models can be fetched properly. You can use a simple script to trigger the model downloads:

    # download_docling_models.py
    from docling.pipeline.standard_pdf_pipeline import StandardPdfPipeline
    
    # This triggers the download of Layout & TableFormer models
    pipeline = StandardPdfPipeline()
    print("Download complete.")
    

    When you see the output Downloading ds4sd--docling-models (Layout & TableFormer)..., you’re officially ready to start parsing.
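    From there, a quick end-to-end check is to convert a real document with the docling CLI installed by the wheel. This is my suggested sanity check rather than part of the original recipe; sample.pdf is a placeholder for any local document, and exact CLI behavior can vary across Docling releases.

    ```shell
    # Convert a local PDF; Docling emits a machine-readable (Markdown) rendition
    docling sample.pdf
    ```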

    Why This Matters

    By focusing on the dependencies rather than the wheel itself, the AI Services team has given us a way to stay agile. We get the latest features of Docling without the overhead of waiting for official distribution builds to catch up to the repo’s velocity.

    Special credit to Yussuf and his test!

  • RoCE (RDMA over Converged Ethernet) Demo

    The following is a research project I investigated… and notes on what I would do, saving for others to take advantage of:

    To demonstrate RoCE (RDMA over Converged Ethernet) usage across nodes on Red Hat OpenShift, you need a container image that includes the RDMA core libraries, OFED drivers, and performance testing tools like perftest (which provides ib_write_bw, ib_send_lat, etc.).

    Based on the Red Hat learning path, here is an optimized Podman/Docker Dockerfile and the necessary configuration to run it.

    1. The Podman/Docker image

    This Dockerfile uses Red Hat Universal Base Image (UBI) 9 and installs the essential RDMA stack and the perftest suite.

    # Use RHEL 9 UBI as the base
    FROM registry.access.redhat.com/ubi9/ubi:latest
    
    LABEL maintainer="OpenShift RoCE Demo"
    
    # Install RDMA core libraries, drivers, and performance testing tools
    # 'perftest' contains the ib_write_bw, ib_read_bw, etc. commands
    RUN dnf install -y \
        libibverbs \
        libibverbs-utils \
        rdma-core \
        iproute \
        pciutils \
        ethtool \
        perftest \
        && dnf clean all
    
    # Set working directory
    WORKDIR /root
    
    # Default command to keep the container running so you can 'exec' into it
    CMD ["sleep", "infinity"]
    

    2. Build and Push the Image

    Use Podman to build the image and push it to a registry accessible by your OpenShift cluster (e.g., Quay.io or your internal OpenShift registry).

    # Build the image
    podman build -t quay.io/<your-username>/roce-test:latest .
    
    # Push the image
    podman push quay.io/<your-username>/roce-test:latest
    
    

    3. Demonstrating Cross-Node Usage (The Test)

    To prove RoCE is working across nodes, you must bypass the standard SDN (Software Defined Network) by using Host Networking or a Secondary Network (Multus). For a quick demonstration, we use hostNetwork: true.

    Step A: Deploy two Pods on different nodes

    Create a file named roce-demo.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: roce-server
      labels:
        app: roce-test
    spec:
      hostNetwork: true # Required to access the host's RDMA/RoCE hardware
      containers:
      - name: main
        image: quay.io/<your-username>/roce-test:latest
        securityContext:
          privileged: true # Required for RDMA device access
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: roce-client
      labels:
        app: roce-test
    spec:
      hostNetwork: true
      containers:
      - name: main
        image: quay.io/<your-username>/roce-test:latest
        securityContext:
          privileged: true
    
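    Nothing in the specs above actually forces the scheduler to place the two pods on different nodes. If your cluster might co-locate them, a podAntiAffinity stanza keyed on the existing app: roce-test label can enforce the spread (a sketch to merge under each pod's spec):

    ```yaml
    # Keeps pods sharing the app=roce-test label off the same node
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: roce-test
          topologyKey: kubernetes.io/hostname
    ```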

    Step B: Run the Performance Benchmark

    1. Identify the IP of the Server Node:
    oc get pod roce-server -o wide
    # Note the IP (since it's hostNetwork, this is the Node's IP)
    
    2. Start the Server:
    oc exec -it roce-server -- ib_write_bw -d <rdma_device_name> -a
    

    (Note: Use ibv_devinfo inside the pod to find your device name, e.g., mlx5_0)

    3. Run the Client (from the other pod):

    oc exec -it roce-client -- ib_write_bw -d <rdma_device_name> <server_ip> -a
    

    How this demonstrates RoCE:

    • Zero-Copy: The ib_write_bw tool performs memory-to-memory transfers without involving the CPU’s TCP/IP stack.
    • Performance: If RoCE is correctly configured in your OpenShift cluster (via the Node Network Configuration Policy), you will see bandwidth near the line rate (e.g., ~95Gbps on a 100G link) with extremely low latency compared to standard Ethernet.
    • Verification: You can run ethtool -S <interface> on the host while the test is running to see the rdma_ counters increasing, confirming the traffic is not using standard TCP.
  • Even more Images on the IBM Container Registry for Caching on Power

    The IBM Linux on Power team has released some new open source container images into the IBM Container Registry (ICR). The new images for opensearch, seaweedfs, and langflow are particularly interesting for those working with search, distributed storage, and AI workflows.

    opensearch	3.3.0 	Apache-2.0 	docker pull icr.io/ppc64le-oss/opensearch-ppc64le:3.3.0 	Feb 26, 2026
    seaweedfs	4.0.8 	Apache-2.0 	docker pull icr.io/ppc64le-oss/seaweedfs-ppc64le:4.08 	Feb 27, 2026
    langflow	v1.7.3 	MIT 	docker pull icr.io/ppc64le-oss/langflow-ppc64le:v1.7.3
    

    Refer to https://community.ibm.com/community/user/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr for more details.

  • Bridging the Gap: Shared NFS Storage Between VMs and OpenShift

    When migrating workloads to OpenShift, one of the most common hurdles is data sharing. You might have a legacy VM writing logs or processing files and a modern containerized app that needs to read them—or vice versa.

    While OpenShift natively supports NFSv4, getting “identical visibility” across both environments requires a bit of finesse. Here is how to handle NFS mounting without compromising security or breaking the OpenShift security model.

    The “Don’t Do This” List

    Before we dive into the solution, it’s important to understand why the “obvious” paths often lead to trouble:

    • Avoid Custom SCCs for Direct Container Mounts You could technically mount the NFS share directly inside the container. However, in OpenShift, Pods run under restricted Security Context Constraints (SCCs). Bypassing these with a custom SCC opens up attack vectors. It’s better to let the platform handle the mount.
    • Don’t Hack the CSI Driver You might be tempted to force a dynamic provisioner to use a specific root path. This is a bad move. CSI drivers create unique subfolders for a reason: to prevent App A from accidentally (or maliciously) seeing App B’s data. Breaking this breaks your security isolation.

    The Solution: Static PersistentVolumes

    The most robust way to ensure a VM and a Pod see the exact same folder is by using a Static PersistentVolume (PV). This bypasses the dynamic provisioner’s tendency to create unique subfolders, allowing you to point OpenShift exactly where the VM is looking.

    1. Define the Static PersistentVolume

    You must manually define the PV. This allows you to hardcode the server and path to match the VM’s mount point.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-pv
    spec:
      capacity:
        storage: 100Gi # Required but not used with nfs
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain # Data survives PVC deletion
      nfs:
        path: /srv/nfs_dir  # The identical path used by your VM
        server: nfs-server.example.com

    Important Server-Side Config: To avoid “UID/GID havoc,” ensure your NFS server export is configured with rw,sync,no_root_squash,no_all_squash. This prevents the server from rewriting IDs, which is vital because OpenShift’s secure containers use random UIDs. See the Cloud Pak for Data article for more details.

    2. Create the PersistentVolumeClaim (PVC)

    Next, create a PVC that binds specifically to the static PV you just created. By setting the storageClassName to an empty string, you tell OpenShift not to look for a dynamic provisioner.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-pvc
      namespace: your-app-namespace
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      volumeName: shared-pv # Direct binding to the static PV
      storageClassName: "" # Keep this empty to avoid dynamic provisioning

    3. Mount the PVC in your Pod

    Finally, reference the PVC in your Pod’s volume definition. This is where the magic happens: the container sees the filesystem exactly as the VM does.

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-data-app
    spec:
      containers:
      - name: app-container
        image: my-app-image
        volumeMounts:
        - name: nfs-storage
          mountPath: /var/data/shared
      volumes:
      - name: nfs-storage
        persistentVolumeClaim:
          claimName: shared-pvc

    Mount Options

    NFS can be picky. If your server requires specific versions or security flavors, add a mountOptions section to your PV definition to match the VM’s parameters exactly (e.g., nfsvers=4.1 or sec=sys).
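    For example, a fragment like the following could be merged into the shared-pv spec (the option values here are illustrative; copy whatever your VM's /etc/fstab or mount command actually uses):

    ```yaml
    spec:
      mountOptions:
        - nfsvers=4.1
        - sec=sys
    ```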

    Summary

    By using a Static PV, you treat the NFS share as a known, fixed resource rather than a disposable volume. This keeps your OpenShift environment secure, your SCCs restricted, and your data perfectly synced between your infrastructure layers.

  • Introducing OpenShift Installer Provisioned Infrastructure (IPI) for IBM PowerVC (TechPreview)

    With Red Hat OpenShift Container Platform (OCP) 4.21, there is a new Tech Preview of the powervc platform. The technical preview provides early access to this installer platform, enabling you to test and provide feedback on the simplified deployment of OCP on IBM Power Virtualization Center (PowerVC) managed infrastructure, offering a powerful combination of enterprise-grade reliability and container orchestration. For administrators using IBM PowerVC, the Installer Provisioned Infrastructure (IPI) method simplifies deployment by automating the provisioning of underlying infrastructure resources.
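    The day-one flow follows the standard openshift-install pattern; a sketch is below (the directory name is arbitrary, and the powervc platform option only appears with the Tech Preview installer):

    ```shell
    # Generate an install-config interactively, selecting the powervc platform
    openshift-install create install-config --dir=ocp-powervc

    # Drive the full IPI installation from the generated config
    openshift-install create cluster --dir=ocp-powervc --log-level=info
    ```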

    We welcome feedback and any thoughts.

    Thanks to the dev leaders and QE Team

    More details are at https://community.ibm.com/community/user/blogs/paul-bastide/2026/02/24/introducing-openshift-installer-provisioned-infras