Blog

  • REPOST: Using Red Hat Service Interconnect with OpenShift and RHEL on IBM Power

    Original is at https://community.ibm.com/community/user/blogs/paul-bastide/2026/04/27/using-red-hat-service-interconnect-with-ibm-power#ItemCommentPanel

    • Author: Kaushik Talathi, IBM Power
    • Author: Michael Turek, IBM Power
    • Author: Paul Bastide, IBM Power

As organizations adopt open hybrid cloud and cloud-native architectures, developers face complexity in connecting applications across multiple clouds — public, private, and on-premises systems. Traditional VPNs and firewall rules require extensive network planning, taking days or weeks to deploy, which delays development and project delivery.

Red Hat Service Interconnect enables developers to create secure Layer-7 connections on-demand. Based on the open source Skupper project, Red Hat Service Interconnect enables application connectivity across Red Hat Enterprise Linux, Red Hat OpenShift Container Platform clusters, and non-Red Hat environments. Your applications can talk to each other as if they were on the same local network — without complex VPNs or firewall headaches.

With the release of Red Hat Service Interconnect (RHSI) v2.1.2, RHSI runs on IBM Power Systems. Using a simple CLI, your workloads can join your cross-architecture service mesh in minutes – no extensive network planning or added security risk.

This document walks you through installation and setup, and a Hello World example, to ease your adoption of RHSI.

    Installation and Setup

    Installing RHSI Operator using the OpenShift Console

To install RHSI on OpenShift Container Platform 4.18 or later, go to the OperatorHub:

1. Log in with a user ID that has cluster-admin access.
    2. In the OpenShift Container Platform web console, navigate to Ecosystem → Software Catalog.
3. Search for Red Hat Service Interconnect, then click Install.
4. Keep the default Installation mode and namespace selections to ensure that the Operator is installed in the openshift-operators namespace.
    5. Click Install.
    6. Repeat the same process for Red Hat Service Interconnect Network Observer.
7. Verify the installation succeeded by inspecting the ClusterServiceVersion (CSV) resources:
    $ oc project openshift-operators
    
    $ oc get csv
    NAME                                  DISPLAY                                         VERSION      REPLACES                              PHASE
    skupper-netobs-operator.v2.1.3-rh-1   Red Hat Service Interconnect Network Observer   2.1.3-rh-1   skupper-netobs-operator.v2.1.2-rh-2   Succeeded
    skupper-operator.v2.1.3-rh-1          Red Hat Service Interconnect                    2.1.3-rh-1   skupper-operator.v2.1.2-rh-2          Succeeded
    
8. Verify that Red Hat Service Interconnect (RHSI) is up and running:
    $ oc get deploy -n openshift-operators
    NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
    skupper-controller                           1/1     1            1           9m59s
    skupper-netobs-operator-controller-manager   1/1     1            1           101s
    
9. Check the pods created for Red Hat Service Interconnect (RHSI) from the command line:
    $ oc get pods
    NAME                                                         READY   STATUS    RESTARTS   AGE
    skupper-controller-779d985989-vqvvb                          1/1     Running   0          11m
    skupper-netobs-operator-controller-manager-85957676f-p98tc   1/1     Running   0          3m40s
    

    Installing Red Hat Service Interconnect CLI

To install the RHSI CLI on the OCP bastion node and on a RHEL system, enable the Red Hat package repositories:

    1. Use the subscription-manager command to subscribe to the required package repositories.

    Red Hat Enterprise Linux 8

    $ sudo subscription-manager repos --enable=service-interconnect-2-for-rhel-8-ppc64le-rpms
    

    Red Hat Enterprise Linux 9

    $ sudo subscription-manager repos --enable=service-interconnect-2-for-rhel-9-ppc64le-rpms
    
2. Use yum or dnf to install the RHSI CLI and router.
    $ sudo dnf install skupper-cli skupper-router
    
3. Verify that the CLI and router are installed correctly.
    $ skupper version
    Warning: Docker is not installed. Skipping image digests search.
    COMPONENT               VERSION
    router                  3.4.2
    network-observer        2.1.3
    cli                     2.1.3
    system-controller       2.1.3
    prometheus              v4.16.0
    origin-oauth-proxy      v4.16.0
    

    Hello World Example

To show RHSI in action, you need an application to use – an HTTP Hello World application with a frontend and a backend service. The frontend uses the backend to process requests. In this scenario, the backend is deployed in the hello-world-east namespace of the rhsi-east cluster, and the frontend is deployed in the hello-world-west namespace of another cluster, rhsi-west, as well as in the local-west namespace on a RHEL system. You can use multiple namespaces, typically on different clusters, or work from a single machine.

    1. Configure access to multiple namespaces on OCP Clusters and Local Systems

1. Start a console session for each of your namespaces. Set the KUBECONFIG environment variable to a different path in each session. For the local system, ensure the skupper CLI is installed.
    ## Console for West cluster
    $ export KUBECONFIG=$HOME/.kube/config-hello-world-west
    
    ## Console for East cluster
    $ export KUBECONFIG=$HOME/.kube/config-hello-world-east
    
    ## Local System
    $ systemctl --user enable --now podman.socket
    $ loginctl enable-linger <username>
    $ export REGISTRY_AUTH_FILE=/path/to/auth-file
    $ export SKUPPER_PLATFORM=podman
$ podman login registry.redhat.io
    $ skupper system install
    Platform podman is now configured for Skupper
    
2. Create and set the namespaces.
    ## Console for West cluster
    $ oc create namespace hello-world-west
    $ oc config set-context --current --namespace hello-world-west
    
    ## Console for East cluster
    $ oc create namespace hello-world-east
    $ oc config set-context --current --namespace hello-world-east
    

    2. Creating Sites on Clusters and Local System

1. Create a Site with link access enabled for external connections, using the context set earlier. On the clusters, apply the following manifests with oc apply -f; on the local system, use the skupper CLI:
    ## West Cluster
    apiVersion: skupper.io/v2alpha1
    kind: Site
    metadata:
      name: west
      namespace: hello-world-west
    spec:
      linkAccess: default
    
    ## East Cluster
    apiVersion: skupper.io/v2alpha1
    kind: Site
    metadata:
      name: east
      namespace: hello-world-east
    spec:
      linkAccess: default
    
    ## Local System
    $ skupper site create local-west-site -n local-west --enable-link-access
    File written to /var/lib/skupper/namespaces/local-west/input/resources/Site-local-west-site.yaml
    File written to /var/lib/skupper/namespaces/local-west/input/resources/RouterAccess-router-access-local-west-site.yaml
    
    $ skupper system start -n local-west
    Sources will be consumed from namespace "local-west"
    Site "local-west-site" has been created on namespace "local-west"
    Platform: podman
    Static links have been defined at: /var/lib/skupper/namespaces/local-west/runtime/links
    Definition is available at: /var/lib/skupper/namespaces/local-west/input/resources
    
2. Validate that the sites are in Ready state.
    ## West Cluster
    $ oc get site
    NAME   STATUS    SITES IN NETWORK   MESSAGE
    west   Pending                      containers with unready status: [router kube-adaptor]
    
    $ oc get site
    NAME   STATUS   SITES IN NETWORK   MESSAGE
    west   Ready    1                  OK
    
    
    ## East Cluster
    $ oc get site
    NAME   STATUS    SITES IN NETWORK   MESSAGE
east   Pending                      containers with unready status: [router kube-adaptor]
    
    $ oc get site
    NAME   STATUS   SITES IN NETWORK   MESSAGE
east   Ready    1                  OK
    
    ## Local System
    $ skupper site status -n local-west
    NAME            STATUS  MESSAGE
    local-west-site Ready   OK
    

The message containers with unready status: [router kube-adaptor] is expected while the site starts up.

    3. Linking Sites

Once sites are linked, services can be exposed and consumed across the application network without the need to open ports or manage inter-site connectivity. There are two key types of link connection:

    • Connecting site: The site that initiates the link connection.
• Listening site: The site that receives the link connection.

The link direction is not significant, and is typically determined by ease of connectivity. For example, if the east site is behind a firewall and the west site is a cluster on the public cloud, linking from east to west is the easiest option.

    AccessGrant – Permission on a listening site enabling access token redemption to create links. Grants access to the GrantServer (HTTPS server) which provides a URL, secret code, and cert bundled into an AccessToken. Token redemption limits and duration are configurable. Exposed via Route (OpenShift) or LoadBalancer (other systems).

    AccessToken – Short-lived, typically single-use credential containing the GrantServer URL, secret code, and cert. A connecting site redeems this token to establish a link to the listening site.

To link sites, create AccessGrant and AccessToken resources on the listening site, then apply the AccessToken resource on the connecting site to create the link.

1. On the listening (for example, west) site, create an AccessGrant resource for the connecting Kubernetes site, east. For local-system linking, generate a link resource on the Kubernetes cluster site – for example, east – to which the system site needs to connect.
    ## West Cluster to East cluster
    apiVersion: skupper.io/v2alpha1
    kind: AccessGrant
    metadata:
      name: grant-west
    spec:
      redemptionsAllowed: 2        # default 1
      expirationWindow: 25m        # default 15m
    
    ## East Cluster link to local system
    $ skupper link generate > link-for-local-site.yaml
    
2. Validate the AccessGrant resource on the listening (for example, west) site:
    $ oc get accessgrant
    NAME         REDEMPTIONS ALLOWED   REDEMPTIONS MADE   EXPIRATION             STATUS   MESSAGE
    grant-west   2                     0                  2026-04-22T17:11:12Z   Ready    OK
    
3. On the listening (for example, west) site, populate environment variables to allow token generation:
    URL="$(oc get accessgrant grant-west -o template --template '{{ .status.url }}')"
    CODE="$(oc get accessgrant grant-west -o template --template '{{ .status.code }}')"
    CA_RAW="$(oc get accessgrant grant-west -o template --template '{{ .status.ca }}')"
    

• URL is the URL of the GrantServer.
• CODE is the secret code to access the GrantServer.
• CA_RAW is the certificate required to establish an HTTPS connection to the GrantServer.

4. On the listening (for example, west) site, create a token YAML file named token.yaml.

    NOTE Access to this file provides access to the application network. Protect it appropriately.

    cat > token.yaml <<EOF
    apiVersion: skupper.io/v2alpha1
    kind: AccessToken
    metadata:
      name: token-to-west
    spec:
      code: "$(printf '%s' "$CODE")"
      ca: |- 
    $(printf '%s\n' "$CA_RAW" | sed 's/^/    /')
      url: "$(printf '%s' "$URL")"
    EOF
    
5. Securely transfer the token.yaml file to the context of the connecting (for example, east) site, and apply it. For the local system, copy the link-for-local-site.yaml file to the local-west site and apply it.
    ## East Cluster
    $ oc apply -f token.yaml
    
    ## Local-west site
    $ skupper system apply -f link-for-local-site.yaml -n local-west
    File written to /var/lib/skupper/namespaces/local-west/input/resources/Link-link-east-skupper-router.yaml
    Link link-east-skupper-router added
    File written to /var/lib/skupper/namespaces/local-west/input/resources/Secret-link-east.yaml
    Secret link-east added
    Custom resources are applied. If a site is already running, run `skupper system reload` to make effective the changes.
    
    $ skupper system reload -n local-west
    Sources will be consumed from namespace "local-west"
    ...
    2026/04/24 12:43:40 WARN certificate will not be overwritten path=/var/lib/skupper/namespaces/local-west/runtime/issuers/skupper-site-ca/tls.key
    Site "local-west-site" has been created on namespace "local-west"
    Platform: podman
    Static links have been defined at: /var/lib/skupper/namespaces/local-west/runtime/links
    Definition is available at: /var/lib/skupper/namespaces/local-west/input/resources
    
6. On the connecting (for example, east and local-west) sites, check the token and link status. The GrantServer has validated the AccessToken and redeemed it for a Link resource. The connecting site uses the Link resource to establish an mTLS connection between routers.
    ## East site
    $ oc get accesstoken
    NAME            URL                                                                                 REDEEMED   STATUS   MESSAGE
    token-to-west   https://<skupper-grant-server-west-site>:443/cc4e6668-1869-4fd9-a9e7-a0a86abbe15d   true       Ready    OK
    
$ oc get link
    NAME            STATUS   REMOTE SITE   MESSAGE
    token-to-west   Ready    west          OK
    
    ## Local-west site 
    $  skupper link status -n local-west
    NAME                            STATUS  COST    MESSAGE
    link-east-skupper-router        Ready   1       OK
    

    4. Exposing services on the application network

After creating an application network by linking sites, services can be exposed from one site using connectors and consumed on other sites using listeners.

1. Create a workload to expose on the network, for example, the backend server of the Hello World example.
    ## East site
    $ oc create deployment backend --image quay.io/skupper/hello-world-backend --replicas 3
    
    ## West site
    $ oc create deployment frontend --image quay.io/skupper/hello-world-frontend
    
    ## Local site
$ podman run --name frontend --detach --rm -p 9090:8080 quay.io/skupper/hello-world-frontend
    Trying to pull quay.io/skupper/hello-world-frontend:latest...
    Getting image source signatures
    Copying blob b4c4646a26d4 done   | 
    Copying blob c8939585957e done   | 
    Copying blob b530b5dc825c done   | 
    Copying blob 76789c06b573 done   | 
    Copying blob 10643c2bc08d done   | 
    Copying blob 42c663ca3696 done   | 
    Copying blob 938062c0e7a6 done   | 
    Copying blob 4f2321e928b3 done   | 
    Copying config 75a7a6cc39 done   | 
    Writing manifest to image destination
    84d9a4bd4399ec332faf0f7555278ecdf240ddbf5d4f4773f1fe2893264e933f
    
2. Create a Connector resource on the east site and apply it with oc apply -f:
    apiVersion: skupper.io/v2alpha1
    kind: Connector
    metadata:
      name: backend
      namespace: hello-world-east
    spec:
      routingKey: backend
      selector: app=backend
      port: 8080
    
3. Validate the connector status:
    $ oc get connector
    NAME      ROUTING KEY   PORT   HOST   SELECTOR      STATUS    HAS MATCHING LISTENER   MESSAGE
    backend   backend       8080          app=backend   Pending                           No matching listeners 
    
4. Create a Listener resource on the west and local-west sites:

Note: Identify a connector that you want to use and note its routing key.

    ## West site
    apiVersion: skupper.io/v2alpha1
    kind: Listener
    metadata:
      name: frontend
      namespace: hello-world-west
    spec:
      routingKey: backend
      host: backend
      port: 8080
    
    ## local-west site
    $ skupper listener create local-frontend --routing-key backend  8080 -n local-west
    File written to /var/lib/skupper/namespaces/local-west/input/resources/Listener-local-frontend.yaml
    
    $ skupper system reload -n local-west
    Sources will be consumed from namespace "local-west"
    2026/04/24 12:47:18 WARN certificate will not be overwritten path=/var/lib/skupper/namespaces/local-west/runtime/issuers/skupper-site-ca/tls.crt
    ...
    Site "local-west-site" has been created on namespace "local-west"
    Platform: podman
    Static links have been defined at: /var/lib/skupper/namespaces/local-west/runtime/links
    Definition is available at: /var/lib/skupper/namespaces/local-west/input/resources
    
5. Validate the listener status:
    ## West site
    $ oc get listener
    NAME       ROUTING KEY   PORT   HOST      STATUS   HAS MATCHING CONNECTOR   MESSAGE
    frontend   backend       8080   backend   Ready    true                     OK
    
    ## local-west site
    $ skupper listener status -n local-west
    NAME            STATUS  ROUTING-KEY     HOST    PORT    MATCHING-CONNECTOR      MESSAGE
    local-frontend  Ready   backend         0.0.0.0 8080    true                    OK
    
6. Test the Hello World application

    To test our Hello World, we need external access to the frontend. Use oc port-forward to make the frontend available at localhost:8080.

    ## West site
    $ oc port-forward deployment/frontend 8080:8080 &
    

If everything is in order, you can now access the web interface by navigating to http://localhost:8080/ in your browser.

    The frontend assigns each new user a name. Click Say hello to send a greeting to the backend and get a greeting in response.

For local system tests, you can run the following command:

    $ curl -s http://localhost:8080/api/hello  -d '{"name":"Jack Sparrow"}' | jq -r '.text'
    Hi, Jack Sparrow.  I am Astonishing Application (backend-66dbcb9494-t7wlg).
    

    5. Setting up Network Observer

    The console provides a visual overview of the sites, links, services, and communication metrics.

1. Create a NetworkObserver object:
    apiVersion: observability.skupper.io/v2alpha1
    kind: NetworkObserver
    metadata:
      name: networkobserver-sample
      namespace: hello-world-west
    spec: {}
    
2. Determine the console URL, then use it to log in to the Skupper console in a browser. When prompted, log in with your OCP credentials.
$ oc get --namespace hello-world-west -o jsonpath="{.spec.host}" route networkobserver-sample-network-observer
    <NetworkObserver-URL>
    

The Skupper console is used to monitor and troubleshoot the application network.

    Sites view

The Sites tab displays the network topology showing three interconnected sites in the Hello World example: the east site (OCP cluster hosting the backend service), the west site (OCP cluster with the frontend service), and local-west-site (a RHEL local system running Skupper on Podman with the frontend service). The dashed lines represent active links connecting these sites, enabling the frontend services in different environments to access the backend service seamlessly.

    Components view

The Components tab shows the logical service architecture with three key elements: hello-world-frontend and the local system site, which consume the service, and hello-world-backend, the exposed service. The directional arrow illustrates how the frontend components communicate with the backend through the Skupper service network, demonstrating cross-site service connectivity.

    Processes view

The Processes tab displays the actual running pods and real-time traffic metrics, showing the backend service in the east site (OCP cluster) processing requests from two frontend clients: one in the west site (OCP cluster) with a total of 2.1 KB of traffic, and one in local-west-site (the local Linux system with Podman) with 2.9 KB of traffic. This validates that RHSI can be configured across a hybrid cloud setup.

    Cleaning up

    To remove Skupper and the other resources from this exercise, use the following commands:

    ## West site
    $ skupper site delete --all
    $ oc delete deployment/frontend
    
    ## East
    $ skupper site delete --all
    $ oc delete deployment/backend
    
    ## Local System
    $ skupper system stop -n local-west
    

    Conclusion

    Red Hat Service Interconnect (RHSI) simplifies hybrid cloud connectivity by enabling secure, on-demand application connections across diverse environments without complex VPNs or firewall headaches. RHSI support on IBM Power showcases seamless interconnect between services across OpenShift clusters and local RHEL systems, with the Network Observer console providing real-time visibility into the distributed service mesh.

    Best wishes and good luck with your RHSI journey! 🚀


  • Red Hat Service Interconnect (RHSI) now supports IBM Power (ppc64le)!

We are excited to announce that with the release of Red Hat Service Interconnect (RHSI) v2.1.2, RHSI runs on IBM Power Systems and workloads can now seamlessly join your cross-architecture service mesh!

    Red Hat Service Interconnect is based on the Skupper project, allowing you to create a Layer-7 service interconnect across different clouds and clusters. It allows your apps to talk to each other as if they were on the same local network — without complex VPNs or firewall headaches.

    A few key things to know about RHSI:

• Security First: All traffic is encrypted automatically using mTLS.
• No Root Needed: Operates at the application layer; no cluster-admin rights required to get started.
• Seamless Integration: Easily connect a frontend in the public cloud to a database on a Power system in your private datacenter (a minimal sketch follows this list).
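As an illustration of that last point, here is a minimal, hypothetical sketch using the skupper CLI commands demonstrated in the walkthrough above. The site, routing-key, and host names are placeholders, and the exact flags may vary by release (check skupper connector create --help and skupper listener create --help):

    ## On the Power system site hosting the database (names are placeholders)
    $ skupper connector create database 5432 --routing-key database --host db.example.internal

    ## On the public-cloud cluster site running the frontend
    $ skupper listener create database 5432 --routing-key database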

    Check out the official release notes and supported configurations here:
    https://docs.redhat.com/en/documentation/red_hat_service_interconnect/2.1/html/release_notes/supported-configurations

  • A Simplified script to install Kubernetes with kubeadm

    The provider-ibmcloud-test-infra project introduces a simplified kubeadm installation script designed for ease of use and consistency. This script streamlines common setup steps, reduces manual intervention, and helps users get a functional Kubernetes cluster up and running faster.

    As shared by Manjunath Kumatagi, the goal is to make Kubernetes installation more accessible for developers and operators alike. Try running the script from the repository, explore how it fits your workflow, and share feedback to help improve it further.

  • Updated Blog on Navigating Red Hat OpenShift Licensing on IBM Power

    Optimizing infrastructure costs starts with understanding how your software licensing interacts with your hardware capabilities. In his updated blog post, IBM’s Maarten Kreuger breaks down the nuances of Red Hat OpenShift subscriptions specifically for IBM Power systems.

    The post explores how the unique features of the Power Hypervisor (PowerVM) allow for highly granular licensing. Whether you are using dedicated cores or leveraging Shared Processor LPARs, understanding the math behind “core-pairs”, bare metal and “socket models” is essential for a cost-effective deployment.

    Key highlights from the blog include:

    • The Power Advantage: How PowerVM’s hardware-enforced hypervisor allows for per-core licensing and fine-grained increments (as small as 0.05 cores).
    • Subscription Models: A comparison between the simple Socket Model (ideal for scale-out servers like the S1122) and the Core-Pair Model (best for shared or co-hosted environments).
    • The SMT Variable: Why SMT (Simultaneous Multi-Threading) doesn’t increase your license costs, despite reporting more vCPUs.
    • Optimization Tactics: How to use Shared Processor Pools to cap CPU usage and prevent paying for the same physical core twice.

    Whether you’re running a single cluster or managing complex Power Enterprise Pools 2.0, this guide provides the clarity needed to ensure you aren’t over-subscribing.

    Read the full technical deep-dive at OpenShift Subscriptions on Power

• Bash Fu: ${%%}

    Thanks to Gerrit for cluing me in.

    In Bash, symbols like # and % aren’t just random noise—they are powerful operators used for Parameter Expansion. They allow you to “trim” or “slice” strings stored in variables without needing external tools like sed or awk.

    To understand ${%%}, we have to break down how Bash sees those symbols.

1. The Core Logic: Front vs. Back — Think of these symbols as “knives” that cut parts of your string based on a pattern:
    • # removes from the front (left). Mnemonic: the # is on the left side of a standard keyboard (Shift+3).
    • % removes from the back (right). Mnemonic: the % is on the right side of the # (Shift+5).
2. Doubling Up: Small vs. Large — The number of symbols determines how “aggressive” the cut is:
    • Single (# or %): Non-greedy. It removes the shortest possible match.
    • Double (## or %%): Greedy. It removes the longest possible match.
3. Practical Examples — Let’s say we have a variable: file="image.jpg.backup"

Using # and ## (Removing from the Front)

• ${file#*.} → Result: jpg.backup (Cut the shortest bit ending in a dot).
• ${file##*.} → Result: backup (Cut everything up to the last dot).

    Using % and %% (Removing from the Back)

    • ${file%.*} → Result: image.jpg (Cut the shortest bit starting from a dot at the end).
    • ${file%%.*} → Result: image (Cut everything from the first dot to the end).

    If you have VAR="long.file.name.txt":

Syntax          Logic                              Result
${VAR#*.}       Delete shortest match from front   file.name.txt
${VAR##*.}      Delete longest match from front    txt
${VAR%.*}       Delete shortest match from back    long.file.name
${VAR%%.*}      Delete longest match from back     long
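These expansions are easy to verify interactively; the following snippet simply echoes each form for the example variable above:

    VAR="long.file.name.txt"
    echo "${VAR#*.}"    # file.name.txt  (shortest match removed from the front)
    echo "${VAR##*.}"   # txt            (longest match removed from the front)
    echo "${VAR%.*}"    # long.file.name (shortest match removed from the back)
    echo "${VAR%%.*}"   # long           (longest match removed from the back)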

    Quick Tip: If you ever forget which is which, remember that on the keyboard, # is to the left of %. Therefore, # handles the left (start) of the string, and % handles the right (end).

  • Even another Image on the IBM Container Registry for Caching on Power

The IBM Linux on Power team has released some new open source container images into the IBM Container Registry (ICR). New images for traefik are particularly interesting for those working with ingress.

    traefik v3.3 	MIT 	podman pull icr.io/ppc64le-oss/traefik-ppc64le:v3.3 	March 27, 2026

    Refer to https://community.ibm.com/community/user/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr for more details.

  • Docling with IBM Power

    Originally posted to https://community.ibm.com/community/user/blogs/paul-bastide/2026/03/20/docling-with-ibm-power

If you’ve been following the rapid evolution of document parsing in AI, you’ve likely encountered Docling. It’s a powerhouse for converting complex PDFs and documents into machine-readable formats. The AI Services team and the IBM Power Python Ecosystem team have provided all of the requirements so you can use docling and, as it iterates rapidly, stay up to date.

For Python developers using IBM Power, this article provides a recipe for using docling on IBM Power. You can also learn more about using the Python Ecosystem at https://community.ibm.com/community/user/blogs/janani-janakiraman/2025/09/10/developing-apps-using-python-packages-on-ibm-power

    The Recipe: Step-by-Step Installation

    This guide assumes you are working in a Linux environment (specifically optimized for ppc64le architectures, though the logic holds for most setups).

    1. Prepare Your Environment

    Start by setting up a fresh virtual environment to avoid dependency issues

    python3 -m venv ./test-venv
    source ./test-venv/bin/activate
    python3.12 -m venv --upgrade test-venv/
    

    2. Define the Requirements

The AI Services team has identified a specific “golden set” of versions that play well together. Create a requirements.txt file containing the necessary packages, including docling, torch, and transformers.

    accelerate==1.13.0
    annotated-doc==0.0.4
    annotated-types==0.7.0
    antlr4-python3-runtime==4.9.3
    attrs==26.1.0
    beautifulsoup4==4.14.3
    certifi==2026.2.25
    charset-normalizer==3.4.6
    click==8.3.1
    colorlog==6.10.1
    defusedxml==0.7.1
    dill==0.4.1
    docling==2.77.0
    docling-core==2.70.2
    docling-ibm-models==3.12.0
    docling-parse==5.3.2
    et_xmlfile==2.0.0
    Faker==40.11.0
    filelock==3.25.2
    filetype==1.2.0
    fsspec==2026.2.0
    huggingface_hub==0.36.2
    idna==3.11
    Jinja2==3.1.6
    jsonlines==4.0.0
    jsonref==1.1.0
    jsonschema==4.26.0
    jsonschema-specifications==2025.9.1
    latex2mathml==3.79.0
    lxml==6.0.2
    markdown-it-py==4.0.0
    marko==2.2.2
    MarkupSafe==3.0.3
    mdurl==0.1.2
    mpire==2.10.2
    mpmath==1.3.0
    multiprocess==0.70.19
    networkx==3.6.1
    numpy==2.4.1
    omegaconf==2.3.0
    opencv-python==4.10.0.84+ppc64le2
    openpyxl==3.1.5
    packaging==26.0
    pandas==2.3.3
    pillow==12.1.1
    pip==26.0.1
    pluggy==1.6.0
    polyfactory==3.3.0
    psutil==7.2.2
    pyclipper==1.4.0
    pydantic==2.12.5
    pydantic_core==2.41.5
    pydantic-settings==2.13.1
    Pygments==2.19.2
    pylatexenc==2.10
    pypdfium2==5.6.0
    python-dateutil==2.9.0.post0
    python-docx==1.2.0
    python-dotenv==1.2.2
    python-pptx==1.0.2
    pytz==2026.1.post1
    PyYAML==6.0.3
    rapidocr==3.7.0
    referencing==0.37.0
    regex==2026.2.28
    requests==2.32.5
    rich==14.3.3
    rpds-py==0.30.0
    rtree==1.4.1
    safetensors==0.7.0
    scipy==1.17.0
    semchunk==3.2.5
    shapely==2.1.2
    shellingham==1.5.4
    six==1.17.0
    soupsieve==2.8.3
    sympy==1.14.0
    tabulate==0.10.0
    tokenizers==0.22.2
    torch==2.9.1
    torchvision==0.24.1
    tqdm==4.67.3
    transformers==4.57.6
    tree-sitter==0.25.2
    tree-sitter-c==0.24.1
    tree-sitter-javascript==0.25.0
    tree-sitter-python==0.25.0
    tree-sitter-typescript==0.23.2
    typer==0.21.2
    typing_extensions==4.15.0
    typing-inspection==0.4.2
    tzdata==2025.3
    urllib3==2.6.3
    xlsxwriter==3.2.9

Note: Ensure you include the full list of dependencies (like docling==2.77.0 and docling-core==2.70.2) to maintain stability across your build.

    If you need OCR, you will need to run:

     yum install -y --setopt=tsflags=nodocs python3.12-devel python3.12-pip \
            lcms2-devel openblas-devel freetype libicu libjpeg-turbo && \
        yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm && \
        yum install -y spatialindex-devel

    3. The Installation Secret Sauce

    Before running the install, ensure pip is at its latest version. Then, use the --extra-index-url flag to point to the optimized IBM developer wheels. This is the trick to getting the faster compilation mentioned earlier.

    pip install --upgrade pip
    pip install -r requirements.txt \
        --extra-index-url=https://wheels.developerfirst.ibm.com/ppc64le/linux \
        --prefer-binary
    

    Verifying the Build

    Once the installation completes, it’s a good idea to run a “smoke test” to ensure the models can be fetched properly. You can use a simple script to trigger the model downloads:

    # download_docling_models.py
    from docling.pipeline.standard_pdf_pipeline import StandardPdfPipeline
    
    # This triggers the download of Layout & TableFormer models
    pipeline = StandardPdfPipeline()
    print("Download complete.")
    

    When you see the output Downloading ds4sd--docling-models (Layout & TableFormer)..., you’re officially ready to start parsing.
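As a further check, you can convert a document end to end. The following is a minimal sketch using the docling command-line entry point installed by the package; sample.pdf is a placeholder for your own file, and the available options can be listed with docling --help:

    # Convert a local PDF (placeholder name) to a machine-readable rendition
    $ docling sample.pdf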

    Why This Matters

    By focusing on the dependencies rather than the wheel itself, the AI Services team has given us a way to stay agile. We get the latest features of Docling without the overhead of waiting for official distribution builds to catch up to the repo’s velocity.

    Special credit to Yussuf and his test!

• RoCE (RDMA over Converged Ethernet) Demo

The following is a research project I investigated, along with notes on what I would do, saved here for others to take advantage of:

To demonstrate RoCE (RDMA over Converged Ethernet) usage across nodes on Red Hat OpenShift, you need a container image that includes the RDMA core libraries, OFED drivers, and performance testing tools like perftest (which provides ib_write_bw, ib_send_lat, etc.).

Based on the Red Hat learning path, here is an optimized Podman/Docker Dockerfile and the necessary configuration to run it.

    1. The Podman/Docker image

    This Dockerfile uses Red Hat Universal Base Image (UBI) 9 and installs the essential RDMA stack and the perftest suite.

    # Use RHEL 9 UBI as the base
    FROM registry.access.redhat.com/ubi9/ubi:latest
    
    LABEL maintainer="OpenShift RoCE Demo"
    
    # Install RDMA core libraries, drivers, and performance testing tools
    # 'perftest' contains the ib_write_bw, ib_read_bw, etc. commands
    RUN dnf install -y \
        libibverbs \
        libibverbs-utils \
        rdma-core \
        iproute \
        pciutils \
        ethtool \
        perftest \
        && dnf clean all
    
    # Set working directory
    WORKDIR /root
    
    # Default command to keep the container running so you can 'exec' into it
    CMD ["sleep", "infinity"]
    

    2. Build and Push the Image

    Use Podman to build the image and push it to a registry accessible by your OpenShift cluster (e.g., Quay.io or your internal OpenShift registry).

    # Build the image
    podman build -t quay.io/<your-username>/roce-test:latest .
    
    # Push the image
    podman push quay.io/<your-username>/roce-test:latest
    
    

    3. Demonstrating Cross-Node Usage (The Test)

    To prove RoCE is working across nodes, you must bypass the standard SDN (Software Defined Network) by using Host Networking or a Secondary Network (Multus). For a quick demonstration, we use hostNetwork: true.

    Step A: Deploy two Pods on different nodes

    Create a file named roce-demo.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: roce-server
      labels:
        app: roce-test
    spec:
      hostNetwork: true # Required to access the host's RDMA/RoCE hardware
      containers:
      - name: main
        image: quay.io/<your-username>/roce-test:latest
        securityContext:
          privileged: true # Required for RDMA device access
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: roce-client
      labels:
        app: roce-test
    spec:
      hostNetwork: true
      containers:
      - name: main
        image: quay.io/<your-username>/roce-test:latest
        securityContext:
          privileged: true
    

    Step B: Run the Performance Benchmark

    1. Identify the IP of the Server Node:
    oc get pod roce-server -o wide
    # Note the IP (since it's hostNetwork, this is the Node's IP)
    
2. Start the Server:
    oc exec -it roce-server -- ib_write_bw -d <rdma_device_name> -a
    

(Note: Use ibv_devinfo inside the pod to find your device name, e.g., mlx5_0)

3. Run the Client (from the other pod):

    oc exec -it roce-client -- ib_write_bw -d <rdma_device_name> <server_ip> -a
    

    How this demonstrates RoCE:

    • Zero-Copy: The ib_write_bw tool performs memory-to-memory transfers without involving the CPU’s TCP/IP stack.
    • Performance: If RoCE is correctly configured in your OpenShift cluster (via the Node Network Configuration Policy), you will see bandwidth near the line rate (e.g., ~95Gbps on a 100G link) with extremely low latency compared to standard Ethernet.
• Verification: You can run ethtool -S <interface> on the host while the test is running to see the rdma_ counters increasing, confirming the traffic is not using standard TCP (a quick sketch of this check follows).
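For example, a simple way to watch those counters during a run might be the following; the interface name eno5 is a placeholder for your RoCE-capable NIC:

    # On the node carrying the RoCE traffic, refresh the NIC statistics every second
    $ watch -n 1 'ethtool -S eno5 | grep -i rdma'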
  • Even more Images on the IBM Container Registry for Caching on Power

The IBM Linux on Power team has released some new open source container images into the IBM Container Registry (ICR). New images for valkey are particularly interesting for those working on caching.

    opensearch	3.3.0 	Apache-2.0 	docker pull icr.io/ppc64le-oss/opensearch-ppc64le:3.3.0 	Feb 26, 2026
seaweedfs	4.0.8 	Apache-2.0 	docker pull icr.io/ppc64le-oss/seaweedfs-ppc64le:4.0.8 	Feb 27, 2026
    langflow	v1.7.3 	MIT 	docker pull icr.io/ppc64le-oss/langflow-ppc64le:v1.7.3
    

    Refer to https://community.ibm.com/community/user/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr for more details.

  • Bridging the Gap: Shared NFS Storage Between VMs and OpenShift

    When migrating workloads to OpenShift, one of the most common hurdles is data sharing. You might have a legacy VM writing logs or processing files and a modern containerized app that needs to read them—or vice versa.

    While OpenShift natively supports NFSv4, getting “identical visibility” across both environments requires a bit of finesse. Here is how to handle NFS mounting without compromising security or breaking the OpenShift security model.

    The “Don’t Do This” List

    Before we dive into the solution, it’s important to understand why the “obvious” paths often lead to trouble:

• Avoid Custom SCCs for Direct Container Mounts: You could technically mount the NFS share directly inside the container. However, in OpenShift, Pods run under restricted Security Context Constraints (SCCs). Bypassing these with a custom SCC opens up attack vectors. It’s better to let the platform handle the mount.
• Don’t Hack the CSI Driver: You might be tempted to force a dynamic provisioner to use a specific root path. This is a bad move. CSI drivers create unique subfolders for a reason: to prevent App A from accidentally (or maliciously) seeing App B’s data. Breaking this breaks your security isolation.

    The Solution: Static PersistentVolumes

    The most robust way to ensure a VM and a Pod see the exact same folder is by using a Static PersistentVolume (PV). This bypasses the dynamic provisioner’s tendency to create unique subfolders, allowing you to point OpenShift exactly where the VM is looking.

    1. Define the Static PersistentVolume

    You must manually define the PV. This allows you to hardcode the server and path to match the VM’s mount point.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-pv
    spec:
      capacity:
        storage: 100Gi # Required but not used with nfs
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain # Data survives PVC deletion
      nfs:
        path: /srv/nfs_dir  # The identical path used by your VM
        server: nfs-server.example.com

Important Server-Side Config: To avoid “UID/GID havoc,” ensure your NFS server export is configured with rw,sync,no_root_squash,no_all_squash. This prevents the server from rewriting IDs, which is vital when OpenShift’s secure containers use random UIDs. See the Cloud Pak for Data article for more details.
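As an illustration, the corresponding export entry on the NFS server might look like the following sketch; the client subnet is a placeholder, and the path matches the PV above:

    # /etc/exports on the NFS server
    /srv/nfs_dir 192.0.2.0/24(rw,sync,no_root_squash,no_all_squash)

    # Re-export the updated table without restarting the NFS service
    sudo exportfs -ra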

    2. Create the PersistentVolumeClaim (PVC)

    Next, create a PVC that binds specifically to the static PV you just created. By setting the storageClassName to an empty string, you tell OpenShift not to look for a dynamic provisioner.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-pvc
      namespace: your-app-namespace
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      volumeName: shared-pv # Direct binding to the static PV
      storageClassName: "" # Keep this empty to avoid dynamic provisioning

    3. Mount the PVC in your Pod

    Finally, reference the PVC in your Pod’s volume definition. This is where the magic happens: the container sees the filesystem exactly as the VM does.

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-data-app
    spec:
      containers:
      - name: app-container
        image: my-app-image
        volumeMounts:
        - name: nfs-storage
          mountPath: /var/data/shared
      volumes:
      - name: nfs-storage
        persistentVolumeClaim:
          claimName: shared-pvc

    Mount Options

    NFS can be picky. If your server requires specific versions or security flavors, add a mountOptions section to your PV definition to match the VM’s parameters exactly (e.g., nfsvers=4.1 or sec=sys).
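For example, a sketch of the same static PV with explicit mount options might look like this; the versions and flavors shown are placeholders, so match whatever your VM uses:

    cat <<'EOF' > shared-pv.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-pv
    spec:
      capacity:
        storage: 100Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      mountOptions:
        - nfsvers=4.1   # match the NFS version the VM mounts with
        - sec=sys       # match the VM's security flavor
      nfs:
        path: /srv/nfs_dir
        server: nfs-server.example.com
    EOF
    oc apply -f shared-pv.yaml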

    Summary

    By using a Static PV, you treat the NFS share as a known, fixed resource rather than a disposable volume. This keeps your OpenShift environment secure, your SCCs restricted, and your data perfectly synced between your infrastructure layers.