If you are like me, developing code for OpenShift and reviewing logs daily, then omc is for you.
omc is a tool engineers use to inspect resources from an OpenShift must-gather in the same way they would be retrieved with the oc command. It’s a game changer. You can see sophisticated examples at https://github.com/gmeghnag/omc?tab=readme-ov-file#examples
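A minimal sketch of a typical workflow (the must-gather path and namespace are placeholders; the subcommands follow the project README):
$ omc use ./must-gather.local.123456
$ omc get nodes
$ omc get pods -n openshift-etcd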
The IBM Linux on Power team has released some new open source container images into the IBM Container Registry (ICR). New images for ollama and redis-exporter are particularly interesting for those working with AI workloads and caching.
ollama v0.17.6: docker pull icr.io/ppc64le-oss/ollama-ppc64le:v0.17.6 (released April 6, 2026)
redis-exporter 1.81.0 (MIT license): docker pull icr.io/ppc64le-oss/redis-exporter-ppc64le:1.81.0 (released May 14, 2026)
As organizations adopt open hybrid cloud and cloud-native architectures, developers face complexity in connecting applications across multiple clouds: public, private, and on-premises systems. Traditional VPNs and firewall rules require extensive network planning, taking days or weeks to deploy, which delays development and project delivery.
Red Hat Service Interconnect enables developers to create secure Layer-7 connections on demand. Based on the open source Skupper project, Red Hat Service Interconnect enables application connectivity across Red Hat Enterprise Linux, Red Hat OpenShift Container Platform clusters, and non-Red Hat environments. Your applications can talk to each other as if they were on the same local network, without complex VPNs or firewall headaches.
With the release of Red Hat Service Interconnect (RHSI) v2.1.2, RHSI runs on IBM Power Systems and, using a simple CLI, your workloads can seamlessly join your cross-architecture service mesh in minutes, with no extensive network planning or added security risk.
This document walks you through installation and setup, plus a Hello World example, to ease your adoption of RHSI.
Installation and Setup
Installing RHSI Operator using the OpenShift Console
To install RHSI on your OpenShift Container Platform 4.18 or later cluster, go to the OperatorHub:
Log in with a user ID that has cluster-admin access.
In the OpenShift Container Platform web console, navigate to Ecosystem → Software Catalog.
Search for Red Hat Service Interconnect, then click Install.
Keep the default Installation mode and namespace selections so that the Operator is installed in the openshift-operators namespace.
Click Install.
Repeat the same process for Red Hat Service Interconnect Network Observer.
Verify that the installation succeeded by inspecting the ClusterServiceVersion (CSV) resources:
$ oc project openshift-operators
$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
skupper-netobs-operator.v2.1.3-rh-1 Red Hat Service Interconnect Network Observer 2.1.3-rh-1 skupper-netobs-operator.v2.1.2-rh-2 Succeeded
skupper-operator.v2.1.3-rh-1 Red Hat Service Interconnect 2.1.3-rh-1 skupper-operator.v2.1.2-rh-2 Succeeded
Verify that Red Hat Service Interconnect (RHSI) is up and running:
$ oc get deploy -n openshift-operators
NAME READY UP-TO-DATE AVAILABLE AGE
skupper-controller 1/1 1 1 9m59s
skupper-netobs-operator-controller-manager 1/1 1 1 101s
Check the pods created for Red Hat Service Interconnect (RHSI) through the command line interface:
$ oc get pods
NAME READY STATUS RESTARTS AGE
skupper-controller-779d985989-vqvvb 1/1 Running 0 11m
skupper-netobs-operator-controller-manager-85957676f-p98tc 1/1 Running 0 3m40s
Installing Red Hat Service Interconnect CLI
To install the RHSI CLI on the OCP bastion node or another Linux system, enable the required Red Hat repositories:
Use the subscription-manager command to subscribe to the required package repositories.
Use yum or dnf to install the RHSI CLI and router packages.
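For example, the repository-enable step from the first item might look like the following on RHEL 9 (the repository ID here is an assumption; check the RHSI documentation or your subscription for the exact name):
$ sudo subscription-manager repos --enable=service-interconnect-2-for-rhel-9-ppc64le-rpms
Then install the packages: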
$ sudo dnf install skupper-cli skupper-router
Verify that the CLI and router are installed correctly:
$ skupper version
Warning: Docker is not installed. Skipping image digests search.
COMPONENT VERSION
router 3.4.2
network-observer 2.1.3
cli 2.1.3
system-controller 2.1.3
prometheus v4.16.0
origin-oauth-proxy v4.16.0
Hello World Example
To show RHSI in action, you need an application to use: an HTTP Hello World application with a frontend and a backend service. The frontend uses the backend to process requests. In this scenario, the backend is deployed in the hello-world-east namespace of the rhsi-east cluster, and the frontend is deployed both in the hello-world-west namespace of a second cluster, rhsi-west, and in the local-west namespace on a RHEL system. You can use multiple namespaces, typically on different clusters, or run everything from a single machine.
1. Configure access to multiple namespaces on OCP Clusters and Local Systems
Start a console session for each of your namespaces. Set the KUBECONFIG environment variable to a different path in each session. For the local system, ensure the skupper CLI is installed.
## Console for West cluster
$ export KUBECONFIG=$HOME/.kube/config-hello-world-west
## Console for East cluster
$ export KUBECONFIG=$HOME/.kube/config-hello-world-east
## Local System
$ systemctl --user enable --now podman.socket
$ loginctl enable-linger <username>
$ export REGISTRY_AUTH_FILE=/path/to/auth-file
$ export SKUPPER_PLATFORM=podman
$ podman login registry.redhat.io
$ skupper system install
Platform podman is now configured for Skupper
Create and set the namespaces.
## Console for West cluster
$ oc create namespace hello-world-west
$ oc config set-context --current --namespace hello-world-west
## Console for East cluster
$ oc create namespace hello-world-east
$ oc config set-context --current --namespace hello-world-east
2. Creating Sites on Clusters and Local System
Create a Site with link access enabled for external connections, using the contexts set earlier.
## West Cluster
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: west
  namespace: hello-world-west
spec:
  linkAccess: default
## East Cluster
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: east
  namespace: hello-world-east
spec:
  linkAccess: default
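Save each Site resource to a file and apply it in the matching console session (the file names are illustrative):
## West Cluster
$ oc apply -f site-west.yaml
## East Cluster
$ oc apply -f site-east.yaml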
## Local System
$ skupper site create local-west-site -n local-west --enable-link-access
File written to /var/lib/skupper/namespaces/local-west/input/resources/Site-local-west-site.yaml
File written to /var/lib/skupper/namespaces/local-west/input/resources/RouterAccess-router-access-local-west-site.yaml
$ skupper system start -n local-west
Sources will be consumed from namespace "local-west"
Site "local-west-site" has been created on namespace "local-west"
Platform: podman
Static links have been defined at: /var/lib/skupper/namespaces/local-west/runtime/links
Definition is available at: /var/lib/skupper/namespaces/local-west/input/resources
Validate sites are in Ready state.
## West Cluster
$ oc get site
NAME STATUS SITES IN NETWORK MESSAGE
west Pending containers with unready status: [router kube-adaptor]
$ oc get site
NAME STATUS SITES IN NETWORK MESSAGE
west Ready 1 OK
## East Cluster
$ oc get site
NAME STATUS SITES IN NETWORK MESSAGE
east Pending containers with unready status: [router kube-adaptor]
$ oc get site
NAME STATUS SITES IN NETWORK MESSAGE
east Ready 1 OK
## Local System
$ skupper site status -n local-west
NAME STATUS MESSAGE
local-west-site Ready OK
The message "containers with unready status: [router kube-adaptor]" is expected while the site starts up.
3. Linking Sites
Once sites are linked, services can be exposed and consumed across the application network without opening ports or managing inter-site connectivity. A link involves two site roles:
Connecting site: The site that initiates the link connection.
Listening site: The site that receives the link connection.
The link direction is not significant and is typically determined by ease of connectivity. For example, if the east site is behind a firewall and the west site is a cluster on the public cloud, linking from east to west is the easiest option.
AccessGrant – Permission on a listening site enabling access token redemption to create links. Grants access to the GrantServer (HTTPS server) which provides a URL, secret code, and cert bundled into an AccessToken. Token redemption limits and duration are configurable. Exposed via Route (OpenShift) or LoadBalancer (other systems).
AccessToken – Short-lived, typically single-use credential containing the GrantServer URL, secret code, and cert. A connecting site redeems this token to establish a link to the listening site.
To link sites, create AccessGrant and AccessToken resources on the listening site, then apply the AccessToken resource on the connecting site to create the link.
On the listening site (for example, west), create an AccessGrant resource for the Kubernetes connecting site (east). For local-system linking, generate a link resource on the Kubernetes cluster site, for example east, to which the system site needs to connect.
## West Cluster to East cluster
apiVersion: skupper.io/v2alpha1
kind: AccessGrant
metadata:
  name: grant-west
spec:
  redemptionsAllowed: 2 # default 1
  expirationWindow: 25m # default 15m
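Save and apply the AccessGrant in the west console session (the file name is illustrative):
$ oc apply -f grant-west.yaml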
## East Cluster link to local system
$ skupper link generate > link-for-local-site.yaml
Validate the AccessGrant resource on the listening site (for example, west):
$ oc get accessgrant
NAME REDEMPTIONS ALLOWED REDEMPTIONS MADE EXPIRATION STATUS MESSAGE
grant-west 2 0 2026-04-22T17:11:12Z Ready OK
On the listening site (for example, west), populate environment variables to allow token generation:
URL="$(oc get accessgrant grant-west -o template --template '{{ .status.url }}')"
CODE="$(oc get accessgrant grant-west -o template --template '{{ .status.code }}')"
CA_RAW="$(oc get accessgrant grant-west -o template --template '{{ .status.ca }}')"
URL is the URL of the GrantServer.
CODE is the secret code to access the GrantServer.
CA_RAW is the cert required to establish an HTTPS connection to the GrantServer.
On the listening site (for example, west), create a token YAML file named token.yaml.
NOTE: Access to this file provides access to the application network. Protect it appropriately.
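A minimal sketch of token.yaml, assuming the AccessToken spec fields url, code, and ca carry the values captured above (the sed call indents the CA bundle to fit the YAML block scalar):
$ cat > token.yaml <<EOF
apiVersion: skupper.io/v2alpha1
kind: AccessToken
metadata:
  name: token-to-west
spec:
  url: ${URL}
  code: ${CODE}
  ca: |
$(echo "${CA_RAW}" | sed 's/^/    /')
EOF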
Securely transfer the token.yaml file to the context of the connecting site (for example, east) and apply it. For the local system, copy the link-for-local-site.yaml file to the local-west site and apply it.
## East Cluster
$ oc apply -f token.yaml
## Local-west site
$ skupper system apply -f link-for-local-site.yaml -n local-west
File written to /var/lib/skupper/namespaces/local-west/input/resources/Link-link-east-skupper-router.yaml
Link link-east-skupper-router added
File written to /var/lib/skupper/namespaces/local-west/input/resources/Secret-link-east.yaml
Secret link-east added
Custom resources are applied. If a site is already running, run `skupper system reload` to make effective the changes.
$ skupper system reload -n local-west
Sources will be consumed from namespace "local-west"
...
2026/04/24 12:43:40 WARN certificate will not be overwritten path=/var/lib/skupper/namespaces/local-west/runtime/issuers/skupper-site-ca/tls.key
Site "local-west-site" has been created on namespace "local-west"
Platform: podman
Static links have been defined at: /var/lib/skupper/namespaces/local-west/runtime/links
Definition is available at: /var/lib/skupper/namespaces/local-west/input/resources
On the connecting sites (for example, east and local-west), check the token and link status. The GrantServer has validated the AccessToken and redeemed it for a Link resource; the connecting site uses the Link resource to establish an mTLS connection between routers.
## East site
$ oc get accesstoken
NAME URL REDEEMED STATUS MESSAGE
token-to-west https://<skupper-grant-server-west-site>:443/cc4e6668-1869-4fd9-a9e7-a0a86abbe15d true Ready OK
$ oc get link
NAME STATUS REMOTE SITE MESSAGE
token-to-west Ready west OK
## Local-west site
$ skupper link status -n local-west
NAME STATUS COST MESSAGE
link-east-skupper-router Ready 1 OK
4. Exposing services on the application network
After creating an application network by linking sites, services can be exposed from one site using connectors and consumed on other sites using listeners.
Create a workload to expose on the network, for example, the backend server of the Hello World example, together with a Connector resource that selects it (see the sketch below).
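A minimal sketch, assuming the backend image from the upstream Skupper Hello World example (quay.io/skupper/hello-world-backend is an assumption here) and a Connector whose fields match the status output below:
## East site
$ oc create deployment backend --image=quay.io/skupper/hello-world-backend --replicas=2
$ cat <<EOF | oc apply -f -
apiVersion: skupper.io/v2alpha1
kind: Connector
metadata:
  name: backend
  namespace: hello-world-east
spec:
  routingKey: backend
  selector: app=backend
  port: 8080
EOF
Then check the connector status: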
$ oc get connector
NAME ROUTING KEY PORT HOST SELECTOR STATUS HAS MATCHING LISTENER MESSAGE
backend backend 8080 app=backend Pending No matching listeners
Create a Listener resource on the west and local-west sites:
Note: Identify the connector that you want to use and note its routing key.
## West site
apiVersion: skupper.io/v2alpha1
kind: Listener
metadata:
  name: frontend
  namespace: hello-world-west
spec:
  routingKey: backend
  host: backend
  port: 8080
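Apply the Listener in the west console session (the file name is illustrative):
$ oc apply -f listener-frontend.yaml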
## local-west site
$ skupper listener create local-frontend --routing-key backend 8080 -n local-west
File written to /var/lib/skupper/namespaces/local-west/input/resources/Listener-local-frontend.yaml
$ skupper system reload -n local-west
Sources will be consumed from namespace "local-west"
2026/04/24 12:47:18 WARN certificate will not be overwritten path=/var/lib/skupper/namespaces/local-west/runtime/issuers/skupper-site-ca/tls.crt
...
Site "local-west-site" has been created on namespace "local-west"
Platform: podman
Static links have been defined at: /var/lib/skupper/namespaces/local-west/runtime/links
Definition is available at: /var/lib/skupper/namespaces/local-west/input/resources
Validate the listener status:
## West site
$ oc get listener
NAME ROUTING KEY PORT HOST STATUS HAS MATCHING CONNECTOR MESSAGE
frontend backend 8080 backend Ready true OK
## local-west site
$ skupper listener status -n local-west
NAME STATUS ROUTING-KEY HOST PORT MATCHING-CONNECTOR MESSAGE
local-frontend Ready backend 0.0.0.0 8080 true OK
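Before testing, make sure the frontend workload itself is running on the west site. A minimal sketch, assuming the frontend image from the upstream Skupper Hello World example (quay.io/skupper/hello-world-frontend is an assumption here); it reaches the backend through the Listener at backend:8080:
## West site
$ oc create deployment frontend --image=quay.io/skupper/hello-world-frontend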
Test the Hello World Application
To test our Hello World, we need external access to the frontend. Use oc port-forward to make the frontend available at localhost:8080.
## West site
$ oc port-forward deployment/frontend 8080:8080 &
If everything is in order, you can now access the web interface by navigating to this URL in your browser: http://localhost:8080/
The frontend assigns each new user a name. Click Say hello to send a greeting to the backend and get a greeting in response.
For local system tests, you can run the following command:
$ curl -s http://localhost:8080/api/hello -d '{"name":"Jack Sparrow"}' | jq -r '.text'
Hi, Jack Sparrow. I am Astonishing Application (backend-66dbcb9494-t7wlg).
5. Setting up Network Observer
The console provides a visual overview of the sites, links, services, and communication metrics.
Determine the console URL, open it in a browser, and log in with your OCP credentials when prompted.
$ oc get --namespace hello-world-west -o jsonpath="{.spec.host}" route networkobserver-sample-network-observer
<NetworkObserver-URL>
Use the Skupper console to monitor and troubleshoot the application network.
Sites view
The Sites tab displays the network topology, showing three interconnected sites in the Hello World example: the east site (an OCP cluster hosting the backend service), the west site (an OCP cluster with the frontend service), and local-west-site (a RHEL local system running Skupper on Podman with a frontend service). The dashed lines represent active links connecting these sites, enabling frontend services from different environments to access the backend service seamlessly.
Components view
The Components tab shows the logical service architecture with three key elements: hello-world-frontend and the local system site, both consuming the service, and hello-world-backend, the exposed service. The directional arrow illustrates how the frontend components communicate with the backend through the Skupper service network, demonstrating cross-site service connectivity.
Processes view
The Processes tab displays the actual running pods and real-time traffic metrics, showing the backend service in the east site (OCP cluster) processing requests from two frontend clients: one in the west site (OCP cluster) with 2.1 KB of total traffic, and one in local-west-site (the local Linux system with Podman) with 2.9 KB of traffic. This validates that RHSI can be configured across a hybrid cloud setup.
Cleaning up
To remove Skupper and the other resources from this exercise, use the following commands:
## West site
$ skupper site delete --all
$ oc delete deployment/frontend
## East site
$ skupper site delete --all
$ oc delete deployment/backend
## Local System
$ skupper system stop -n local-west
Conclusion
Red Hat Service Interconnect (RHSI) simplifies hybrid cloud connectivity by enabling secure, on-demand application connections across diverse environments without complex VPNs or firewall headaches. RHSI support on IBM Power showcases seamless interconnect between services across OpenShift clusters and local RHEL systems, with the Network Observer console providing real-time visibility into the distributed service mesh.
Best wishes and good luck with your RHSI journey! 🚀
We are excited to announce that with the release of Red Hat Service Interconnect (RHSI) v2.1.2, RHSI runs on IBM Power Systems and workloads can now seamlessly join your cross-architecture service mesh!
Red Hat Service Interconnect is based on the Skupper project, allowing you to create a Layer-7 service interconnect across different clouds and clusters. It allows your apps to talk to each other as if they were on the same local network — without complex VPNs or firewall headaches.
A few key things to know about RHSI:
Security First: All traffic is encrypted automatically using mTLS.
No Root Needed: Operates at the application layer; no cluster-admin rights required to get started.
Seamless Integration: Easily connect a frontend in the public cloud to a database on a Power system in your private datacenter.
The provider-ibmcloud-test-infra project introduces a simplified kubeadm installation script designed for ease of use and consistency. This script streamlines common setup steps, reduces manual intervention, and helps users get a functional Kubernetes cluster up and running faster.
As shared by Manjunath Kumatagi, the goal is to make Kubernetes installation more accessible for developers and operators alike. Try running the script from the repository, explore how it fits your workflow, and share feedback to help improve it further.
Optimizing infrastructure costs starts with understanding how your software licensing interacts with your hardware capabilities. In his updated blog post, IBM’s Maarten Kreuger breaks down the nuances of Red Hat OpenShift subscriptions specifically for IBM Power systems.
The post explores how the unique features of the Power Hypervisor (PowerVM) allow for highly granular licensing. Whether you are using dedicated cores or leveraging Shared Processor LPARs, understanding the math behind “core-pairs”, bare metal and “socket models” is essential for a cost-effective deployment.
Key highlights from the blog include:
The Power Advantage: How PowerVM’s hardware-enforced hypervisor allows for per-core licensing and fine-grained increments (as small as 0.05 cores).
Subscription Models: A comparison between the simple Socket Model (ideal for scale-out servers like the S1122) and the Core-Pair Model (best for shared or co-hosted environments).
The SMT Variable: Why SMT (Simultaneous Multi-Threading) doesn’t increase your license costs, despite reporting more vCPUs.
Optimization Tactics: How to use Shared Processor Pools to cap CPU usage and prevent paying for the same physical core twice.
Whether you’re running a single cluster or managing complex Power Enterprise Pools 2.0, this guide provides the clarity needed to ensure you aren’t over-subscribing.
In Bash, symbols like # and % aren’t just random noise—they are powerful operators used for Parameter Expansion. They allow you to “trim” or “slice” strings stored in variables without needing external tools like sed or awk.
To understand expansions like ${VAR%%pattern}, we have to break down how Bash sees those symbols.
The Core Logic: Front vs. Back
Think of these symbols as “knives” that cut parts of your string based on a pattern:
Symbol   Action                          Mnemonic
#        Removes from the front (left)   The # is on the left side of a standard keyboard (Shift+3).
%        Removes from the back (right)   The % is on the right side of the # (Shift+5).
Doubling Up: Small vs. Large
The number of symbols determines how “aggressive” the cut is:
Single (# or %): Non-greedy. It removes the shortest possible match.
Double (## or %%): Greedy. It removes the longest possible match.
Practical Examples
Let’s say we have a variable: file="image.jpg.backup"
Using # and ## (Removing from the Front)
${file#*.} → Result: jpg.backup (Cut the shortest bit ending in a dot).
${file##*.} → Result: backup (Cut everything up to the last dot).
Using % and %% (Removing from the Back)
${file%.*} → Result: image.jpg (Cut the shortest bit starting from a dot at the end).
${file%%.*} → Result: image (Cut everything from the first dot to the end).
If you have VAR="long.file.name.txt":
Syntax        Logic                              Result
${VAR#*.}     Delete shortest match from front   file.name.txt
${VAR##*.}    Delete longest match from front    txt
${VAR%.*}     Delete shortest match from back    long.file.name
${VAR%%.*}    Delete longest match from back     long
Quick Tip: If you ever forget which is which, remember that on the keyboard, # is to the left of %. Therefore, # handles the left (start) of the string, and % handles the right (end).
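To check these behaviors yourself, the following snippet can be pasted into any Bash shell; the echoed values match the tables above:
file="image.jpg.backup"
echo "${file#*.}"    # jpg.backup  (shortest '*.' removed from the front)
echo "${file##*.}"   # backup      (longest '*.' removed from the front)
echo "${file%.*}"    # image.jpg   (shortest '.*' removed from the back)
echo "${file%%.*}"   # image       (longest '.*' removed from the back)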
The IBM Linux on Power team has released some new open source container images into the IBM Container Registry (ICR). A new image for traefik is particularly interesting for those working with ingress.
traefik v3.3 (MIT license): podman pull icr.io/ppc64le-oss/traefik-ppc64le:v3.3 (released March 27, 2026)
If you’ve been following the rapid evolution of document parsing in AI, you’ve likely encountered Docling. It’s a powerhouse for converting complex PDFs and documents into machine-readable formats. The AI Services team and the IBM Power Python Ecosystem team have provided all of the requirements so you can use Docling and, as it iterates rapidly, stay up to date.
The AI Services team has identified a specific “golden set” of versions that play well together. Create a requirements.txt file containing the necessary packages, including docling, torch, and transformers.
Before running the install, ensure pip is at its latest version. Then, use the --extra-index-url flag to point to the optimized IBM developer wheels. This is the trick to getting the faster compilation mentioned earlier.
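A hedged sketch of those two steps (the extra index URL is a placeholder; use the one provided by the AI Services team):
$ python3 -m pip install --upgrade pip
$ python3 -m pip install -r requirements.txt --extra-index-url <ibm-ppc64le-wheels-index-url>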
Once the installation completes, it’s a good idea to run a “smoke test” to ensure the models can be fetched properly. You can use a simple script to trigger the model downloads:
# download_docling_models.py
from docling.pipeline.standard_pdf_pipeline import StandardPdfPipeline
# This triggers the download of Layout & TableFormer models
pipeline = StandardPdfPipeline()
print("Download complete.")
When you see the output Downloading ds4sd--docling-models (Layout & TableFormer)..., you’re officially ready to start parsing.
Why This Matters
By focusing on the dependencies rather than the wheel itself, the AI Services team has given us a way to stay agile. We get the latest features of Docling without the overhead of waiting for official distribution builds to catch up to the repo’s velocity.
The following are notes from a research project I investigated, saved so others can take advantage of them:
To demonstrate RoCE (RDMA over Converged Ethernet) usage across nodes on Red Hat OpenShift, you need a container image that includes the RDMA core libraries, OFED drivers, and performance testing tools like perftest (which provides ib_write_bw, ib_send_lat, etc.).
Based on the Red Hat learning path, here is an optimized Podman/Docker Dockerfile and the necessary configuration to run it.
1. The Podman/Docker image
This Dockerfile uses Red Hat Universal Base Image (UBI) 9 and installs the essential RDMA stack and the perftest suite.
# Use RHEL 9 UBI as the base
FROM registry.access.redhat.com/ubi9/ubi:latest
LABEL maintainer="OpenShift RoCE Demo"
# Install RDMA core libraries, drivers, and performance testing tools
# 'perftest' contains the ib_write_bw, ib_read_bw, etc. commands
RUN dnf install -y \
libibverbs \
libibverbs-utils \
rdma-core \
iproute \
pciutils \
ethtool \
perftest \
&& dnf clean all
# Set working directory
WORKDIR /root
# Default command to keep the container running so you can 'exec' into it
CMD ["sleep", "infinity"]
2. Build and Push the Image
Use Podman to build the image and push it to a registry accessible by your OpenShift cluster (e.g., Quay.io or your internal OpenShift registry).
# Build the image
podman build -t quay.io/<your-username>/roce-test:latest .
# Push the image
podman push quay.io/<your-username>/roce-test:latest
3. Demonstrating Cross-Node Usage (The Test)
To prove RoCE is working across nodes, you must bypass the standard SDN (Software Defined Network) by using Host Networking or a Secondary Network (Multus). For a quick demonstration, we use hostNetwork: true.
Step A: Deploy two Pods on different nodes
Create a file named roce-demo.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: roce-server
  labels:
    app: roce-test
spec:
  hostNetwork: true # Required to access the host's RDMA/RoCE hardware
  containers:
  - name: main
    image: quay.io/<your-username>/roce-test:latest
    securityContext:
      privileged: true # Required for RDMA device access
---
apiVersion: v1
kind: Pod
metadata:
  name: roce-client
  labels:
    app: roce-test
spec:
  hostNetwork: true
  containers:
  - name: main
    image: quay.io/<your-username>/roce-test:latest
    securityContext:
      privileged: true
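Note that nothing in this manifest forces the two Pods onto different nodes. One simple option (a sketch; the node names are placeholders) is to add a nodeName field under each Pod's spec, using a different worker for each Pod, then apply the file and confirm placement:
spec:
  hostNetwork: true
  nodeName: <worker-node-1>   # use <worker-node-2> for roce-client
$ oc apply -f roce-demo.yaml
$ oc get pods -o wide   # confirm roce-server and roce-client are on different nodes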
Step B: Run the Performance Benchmark
Identify the IP of the Server Node:
oc get pod roce-server -o wide
# Note the IP (since it's hostNetwork, this is the Node's IP)
Start the Server:
oc exec -it roce-server -- ib_write_bw -d <rdma_device_name> -a
(Note: Use ibv_devinfo inside the pod to find your device name, e.g., mlx5_0)
Run the Client (from the other pod):
oc exec -it roce-client -- ib_write_bw -d <rdma_device_name> <server_ip> -a
How this demonstrates RoCE:
Zero-Copy: The ib_write_bw tool performs memory-to-memory transfers without involving the CPU’s TCP/IP stack.
Performance: If RoCE is correctly configured in your OpenShift cluster (via the Node Network Configuration Policy), you will see bandwidth near the line rate (e.g., ~95Gbps on a 100G link) with extremely low latency compared to standard Ethernet.
Verification: You can run ethtool -S <interface> on the host while the test is running to see the rdma_ counters increasing, confirming the traffic is not using standard TCP.