- Author: Kaushik Talathi, IBM Power
- Author: Michael Turek, IBM Power
- Author: Paul Bastide, IBM Power
As organizations adopt open hybrid cloud and cloud-native architectures, developers face complexity in connecting applications across multiple clouds: public, private, and on-premises systems. Traditional VPNs and firewall rules require extensive network planning and can take days or weeks to deploy, which delays development and project delivery.
Red Hat Service Interconnect enables developers to create secure Layer-7 connections on demand. Based on the open source Skupper project, Red Hat Service Interconnect provides application connectivity across Red Hat Enterprise Linux, Red Hat OpenShift Container Platform clusters, and non-Red Hat environments. Your applications can talk to each other as if they were on the same local network, without complex VPNs or firewall headaches.
With the release of Red Hat Service Interconnect (RHSI) v2.1.2, RHSI runs on IBM Power Systems, and with a simple CLI your workloads can seamlessly join your cross-architecture service mesh in minutes, with no extensive network planning or added security risk.
This document walks you through installation and setup, plus a Hello World example, to ease your adoption of RHSI.
Installation and Setup
Installing RHSI Operator using the OpenShift Console
To install RHSI on your OpenShift Container Platform 4.18 or later cluster, go to the OperatorHub:
- Log in with a user ID that has cluster-admin access.
- In the OpenShift Container Platform web console, navigate to Ecosystem → Software Catalog.
- Search for Red Hat Service Interconnect, then click Install.
- Keep the default Installation mode and namespace selections to ensure that the Operator is installed in the openshift-operators namespace.
- Click Install.
- Repeat the same process for Red Hat Service Interconnect Network Observer.
- Verify that the installation succeeded by inspecting the ClusterServiceVersion (csv) resource:
$ oc project openshift-operators
$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
skupper-netobs-operator.v2.1.3-rh-1 Red Hat Service Interconnect Network Observer 2.1.3-rh-1 skupper-netobs-operator.v2.1.2-rh-2 Succeeded
skupper-operator.v2.1.3-rh-1 Red Hat Service Interconnect 2.1.3-rh-1 skupper-operator.v2.1.2-rh-2 Succeeded
- Verify that Red Hat Service Interconnect (RHSI) is up and running:
$ oc get deploy -n openshift-operators
NAME READY UP-TO-DATE AVAILABLE AGE
skupper-controller 1/1 1 1 9m59s
skupper-netobs-operator-controller-manager 1/1 1 1 101s
- Check the pods created for Red Hat Service Interconnect (RHSI) through the command line interface:
$ oc get pods
NAME READY STATUS RESTARTS AGE
skupper-controller-779d985989-vqvvb 1/1 Running 0 11m
skupper-netobs-operator-controller-manager-85957676f-p98tc 1/1 Running 0 3m40s
Installing Red Hat Service Interconnect CLI
To install the RHSI CLI on the OCP bastion node and a Linux system, enable the Red Hat package repositories:
- Use the subscription-manager command to subscribe to the required package repositories.
Red Hat Enterprise Linux 8
$ sudo subscription-manager repos --enable=service-interconnect-2-for-rhel-8-ppc64le-rpms
Red Hat Enterprise Linux 9
$ sudo subscription-manager repos --enable=service-interconnect-2-for-rhel-9-ppc64le-rpms
- Use the yum or dnf command to install the RHSI CLI and router.
$ sudo dnf install skupper-cli skupper-router
- Verify that the CLI and router are installed correctly.
$ skupper version
Warning: Docker is not installed. Skipping image digests search.
COMPONENT VERSION
router 3.4.2
network-observer 2.1.3
cli 2.1.3
system-controller 2.1.3
prometheus v4.16.0
origin-oauth-proxy v4.16.0
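As an optional extra check (not part of the original steps), you can also confirm the installed RPM versions directly:
$ rpm -q skupper-cli skupper-router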
Hello World Example
To show RHSI in action, you need an application to use: an HTTP Hello World application with a frontend and a backend service. The frontend uses the backend to process requests. In this scenario, the backend is deployed in the hello-world-east namespace of the rhsi-east cluster, and the frontend is deployed both in the hello-world-west namespace of another cluster, rhsi-west, and in the local-west namespace on a RHEL system. You can use multiple namespaces, typically on different clusters or from a single machine.

1. Configure access to multiple namespaces on OCP Clusters and Local Systems
- Start a console session for each of your namespaces. Set the KUBECONFIG environment variable to a different path in each session. For the local system, ensure that the skupper CLI is installed.
## Console for West cluster
$ export KUBECONFIG=$HOME/.kube/config-hello-world-west
## Console for East cluster
$ export KUBECONFIG=$HOME/.kube/config-hello-world-east
## Local System
$ systemctl --user enable --now podman.socket
$ loginctl enable-linger <username>
$ export REGISTRY_AUTH_FILE=/path/to/auth-file
$ export SKUPPER_PLATFORM=podman
$ podman login registry.redhat.io
$ skupper system install
Platform podman is now configured for Skupper
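Optionally, you can confirm that the Podman socket is active before continuing (an extra check, not in the original steps):
$ systemctl --user is-active podman.socket
active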
- Create and set the namespaces.
## Console for West cluster
$ oc create namespace hello-world-west
$ oc config set-context --current --namespace hello-world-west
## Console for East cluster
$ oc create namespace hello-world-east
$ oc config set-context --current --namespace hello-world-east
2. Creating Sites on Clusters and Local System
- Create a Site with link access enabled for external connections, using the contexts set earlier. For the clusters, apply the Site definitions with oc apply, as shown after the listing.
## West Cluster
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: west
  namespace: hello-world-west
spec:
  linkAccess: default
## East Cluster
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: east
  namespace: hello-world-east
spec:
  linkAccess: default
## Local System
$ skupper site create local-west-site -n local-west --enable-link-access
File written to /var/lib/skupper/namespaces/local-west/input/resources/Site-local-west-site.yaml
File written to /var/lib/skupper/namespaces/local-west/input/resources/RouterAccess-router-access-local-west-site.yaml
$ skupper system start -n local-west
Sources will be consumed from namespace "local-west"
Site "local-west-site" has been created on namespace "local-west"
Platform: podman
Static links have been defined at: /var/lib/skupper/namespaces/local-west/runtime/links
Definition is available at: /var/lib/skupper/namespaces/local-west/input/resources
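The cluster Site definitions must be applied before they can be validated. Assuming you save them to files with hypothetical names such as site-west.yaml and site-east.yaml, apply each one in the matching console session:
## Console for West cluster
$ oc apply -f site-west.yaml
## Console for East cluster
$ oc apply -f site-east.yaml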
- Validate that the sites are in the Ready state.
## West Cluster
$ oc get site
NAME STATUS SITES IN NETWORK MESSAGE
west Pending containers with unready status: [router kube-adaptor]
$ oc get site
NAME STATUS SITES IN NETWORK MESSAGE
west Ready 1 OK
## East Cluster
$ oc get site
NAME STATUS SITES IN NETWORK MESSAGE
east Pending containers with unready status: [router kube-adaptor]
$ oc get site
NAME STATUS SITES IN NETWORK MESSAGE
east Ready 1 OK
## Local System
$ skupper site status -n local-west
NAME STATUS MESSAGE
local-west-site Ready OK
The message containers with unready status: [router kube-adaptor] is expected while the site starts.
3. Linking Sites
Once sites are linked, services can be exposed and consumed across the application network without the need to open ports or manage inter-site connectivity. There are two roles in a link connection:
- Connecting site: The site that initiates the link connection.
- Listening site: The site that receives the link connection.
The link direction is not significant and is typically determined by ease of connectivity. For example, if the east site is behind a firewall and the west site is a cluster on a public cloud, linking from east to west is the easiest option.
AccessGrant – Permission on a listening site that enables access-token redemption to create links. It grants access to the GrantServer (an HTTPS server), which provides a URL, secret code, and certificate bundled into an AccessToken. Token redemption limits and duration are configurable. The GrantServer is exposed via a Route (OpenShift) or a LoadBalancer (other systems).
AccessToken – A short-lived, typically single-use credential containing the GrantServer URL, secret code, and certificate. A connecting site redeems this token to establish a link to the listening site.
To link sites, create AccessGrant and AccessToken resources on the listening site, then apply the AccessToken resource on the connecting site to create the link.
- On the listening site (for example, west), create an AccessGrant resource for the Kubernetes connecting site (east), and apply it with oc apply as shown after the listing. For local-system linking, generate a link resource on the Kubernetes cluster site (for example, east) that the system site needs to connect to.
## West Cluster to East cluster
apiVersion: skupper.io/v2alpha1
kind: AccessGrant
metadata:
  name: grant-west
spec:
  redemptionsAllowed: 2 # default 1
  expirationWindow: 25m # default 15m
## East Cluster link to local system
$ skupper link generate > link-for-local-site.yaml
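Apply the AccessGrant definition on the west cluster before validating it, assuming a hypothetical file name grant-west.yaml:
## West Cluster
$ oc apply -f grant-west.yaml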
- Validate the AccessGrant resource on the listening site (for example, west):
$ oc get accessgrant
NAME REDEMPTIONS ALLOWED REDEMPTIONS MADE EXPIRATION STATUS MESSAGE
grant-west 2 0 2026-04-22T17:11:12Z Ready OK
- On the listening site (for example, west), populate environment variables to allow token generation:
URL="$(oc get accessgrant grant-west -o template --template '{{ .status.url }}')"
CODE="$(oc get accessgrant grant-west -o template --template '{{ .status.code }}')"
CA_RAW="$(oc get accessgrant grant-west -o template --template '{{ .status.ca }}')"
- URL is the URL of the GrantServer.
- CODE is the secret code to access the GrantServer.
- CA_RAW is the certificate required to establish an HTTPS connection to the GrantServer.
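As a quick optional sanity check (not part of the original steps), confirm that the values were captured before building the token; the second command should print the PEM certificate header:
$ printf 'URL=%s\nCODE=%s\n' "$URL" "$CODE"
$ printf '%s\n' "$CA_RAW" | head -n 1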
- On the listening site (for example, west), create a token YAML file named token.yaml.
NOTE: Access to this file provides access to the application network. Protect it appropriately.
cat > token.yaml <<EOF
apiVersion: skupper.io/v2alpha1
kind: AccessToken
metadata:
  name: token-to-west
spec:
  code: "$(printf '%s' "$CODE")"
  ca: |-
$(printf '%s\n' "$CA_RAW" | sed 's/^/    /')
  url: "$(printf '%s' "$URL")"
EOF
- Securely transfer the token.yaml file to the context of the connecting site (for example, east), and apply it. For the local system, copy the link-for-local-site.yaml file to the local-west site and apply it.
## East Cluster
$ oc apply -f token.yaml
## Local-west site
$ skupper system apply -f link-for-local-site.yaml -n local-west
File written to /var/lib/skupper/namespaces/local-west/input/resources/Link-link-east-skupper-router.yaml
Link link-east-skupper-router added
File written to /var/lib/skupper/namespaces/local-west/input/resources/Secret-link-east.yaml
Secret link-east added
Custom resources are applied. If a site is already running, run `skupper system reload` to make effective the changes.
$ skupper system reload -n local-west
Sources will be consumed from namespace "local-west"
...
2026/04/24 12:43:40 WARN certificate will not be overwritten path=/var/lib/skupper/namespaces/local-west/runtime/issuers/skupper-site-ca/tls.key
Site "local-west-site" has been created on namespace "local-west"
Platform: podman
Static links have been defined at: /var/lib/skupper/namespaces/local-west/runtime/links
Definition is available at: /var/lib/skupper/namespaces/local-west/input/resources
- On the connecting sites (for example, east and local-west), check the token and link status. The GrantServer has validated the AccessToken and redeemed it for a Link resource. The connecting site uses the Link resource to establish an mTLS connection between routers.
## East site
$ oc get accesstoken
NAME URL REDEEMED STATUS MESSAGE
token-to-west https://<skupper-grant-server-west-site>:443/cc4e6668-1869-4fd9-a9e7-a0a86abbe15d true Ready OK
$ oc get link
NAME STATUS REMOTE SITE MESSAGE
token-to-west Ready west OK
## Local-west site
$ skupper link status -n local-west
NAME STATUS COST MESSAGE
link-east-skupper-router Ready 1 OK
4. Exposing services on the application network
After creating an application network by linking sites, services can be exposed from one site using connectors and consumed on other sites using listeners.
- Create a workload to expose on the network, for example, the backend server of the Hello World example.
## East site
$ oc create deployment backend --image quay.io/skupper/hello-world-backend --replicas 3
## West site
$ oc create deployment frontend --image quay.io/skupper/hello-world-frontend
## Local site
$ podman run --name frontend --detach --rm -p 9090:8080 quay.io/skupper/hello-world-frontend
Trying to pull quay.io/skupper/hello-world-frontend:latest...
Getting image source signatures
Copying blob b4c4646a26d4 done |
Copying blob c8939585957e done |
Copying blob b530b5dc825c done |
Copying blob 76789c06b573 done |
Copying blob 10643c2bc08d done |
Copying blob 42c663ca3696 done |
Copying blob 938062c0e7a6 done |
Copying blob 4f2321e928b3 done |
Copying config 75a7a6cc39 done |
Writing manifest to image destination
84d9a4bd4399ec332faf0f7555278ecdf240ddbf5d4f4773f1fe2893264e933f
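Optionally, wait for the cluster deployments to finish rolling out before wiring them together (a standard oc check, not part of the original steps):
## East site
$ oc rollout status deployment/backend
## West site
$ oc rollout status deployment/frontend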
- Create a Connector resource on the east site and apply it with oc apply, as shown after the listing.
apiVersion: skupper.io/v2alpha1
kind: Connector
metadata:
  name: backend
  namespace: hello-world-east
spec:
  routingKey: backend
  selector: app=backend
  port: 8080
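Apply the Connector definition on the east site, assuming a hypothetical file name connector-backend.yaml:
$ oc apply -f connector-backend.yaml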
- Validate the connector status:
$ oc get connector
NAME ROUTING KEY PORT HOST SELECTOR STATUS HAS MATCHING LISTENER MESSAGE
backend backend 8080 app=backend Pending No matching listeners
- Create a Listener resource on the west and local-west sites, applying the west definition with oc apply as shown after the listing:
Note: Identify the connector that you want to use, and note the routing key of that connector.
## West site
apiVersion: skupper.io/v2alpha1
kind: Listener
metadata:
  name: frontend
  namespace: hello-world-west
spec:
  routingKey: backend
  host: backend
  port: 8080
## local-west site
$ skupper listener create local-frontend --routing-key backend 8080 -n local-west
File written to /var/lib/skupper/namespaces/local-west/input/resources/Listener-local-frontend.yaml
$ skupper system reload -n local-west
Sources will be consumed from namespace "local-west"
2026/04/24 12:47:18 WARN certificate will not be overwritten path=/var/lib/skupper/namespaces/local-west/runtime/issuers/skupper-site-ca/tls.crt
...
Site "local-west-site" has been created on namespace "local-west"
Platform: podman
Static links have been defined at: /var/lib/skupper/namespaces/local-west/runtime/links
Definition is available at: /var/lib/skupper/namespaces/local-west/input/resources
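Apply the west Listener definition on the west cluster, assuming a hypothetical file name listener-frontend.yaml (the local-west listener above was already applied by skupper system reload):
## West site
$ oc apply -f listener-frontend.yaml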
- Validate the listener status:
## West site
$ oc get listener
NAME ROUTING KEY PORT HOST STATUS HAS MATCHING CONNECTOR MESSAGE
frontend backend 8080 backend Ready true OK
## local-west site
$ skupper listener status -n local-west
NAME STATUS ROUTING-KEY HOST PORT MATCHING-CONNECTOR MESSAGE
local-frontend Ready backend 0.0.0.0 8080 true OK
- Test the Hello World application
To test the Hello World application, you need external access to the frontend. Use oc port-forward to make the frontend available at localhost:8080.
## West site
$ oc port-forward deployment/frontend 8080:8080 &
If everything is in order, you can now access the web interface by navigating to http://localhost:8080/ in your browser.

The frontend assigns each new user a name. Click Say hello to send a greeting to the backend and get a greeting in response.
For local system tests, where the frontend was published on host port 9090, you can run the following command:
$ curl -s http://localhost:9090/api/hello -d '{"name":"Jack Sparrow"}' | jq -r '.text'
Hi, Jack Sparrow. I am Astonishing Application (backend-66dbcb9494-t7wlg).
5. Setting up Network Observer
The console provides a visual overview of the sites, links, services, and communication metrics.
- Create a NetworkObserver object and apply it, as shown after the listing:
apiVersion: observability.skupper.io/v2alpha1
kind: NetworkObserver
metadata:
  name: networkobserver-sample
  namespace: hello-world-west
spec: {}
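Apply the NetworkObserver definition on the west cluster, assuming a hypothetical file name network-observer.yaml, and confirm that its pod starts:
$ oc apply -f network-observer.yaml
$ oc get pods -n hello-world-west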
- Determine the console URL, open it in a browser, and when prompted log in using your OCP credentials.
$ oc get --namespace hello-world-west -o jsonpath="{.spec.host}" route networkobserver-sample-network-observer
<NetworkObserver-URL>
The Skupper console is used to monitor and troubleshoot the application network.
Sites view
The Sites tab displays the network topology, showing three interconnected sites in the Hello World example: the east site (OCP cluster hosting the backend service), the west site (OCP cluster with the frontend service), and local-west-site (a RHEL local system running Skupper on Podman with the frontend service). The dashed lines represent active links connecting these sites, enabling the frontend services from different environments to access the backend service seamlessly.
Components view
The Components tab shows the logical service architecture with three key elements: the hello-world-frontend components on the cluster and local system sites, which consume the service, and hello-world-backend, the exposed service. The directional arrow illustrates how the frontend components communicate with the backend through the Skupper service network, demonstrating cross-site service connectivity.

Processes view
The Processes tab displays the actual running pods and real-time traffic metrics, showing the backend service in the east site's OCP cluster processing requests from two frontend clients: one in the west site's OCP cluster with a total of 2.1 KB of traffic, and one on local-west-site, the local Linux system with Podman, showing 2.9 KB of traffic. This validates that RHSI can be configured across a hybrid cloud setup.

Cleaning up
To remove Skupper and the other resources from this exercise, use the following commands:
## West site
$ skupper site delete --all
$ oc delete deployment/frontend
## East site
$ skupper site delete --all
$ oc delete deployment/backend
## Local System
$ skupper system stop -n local-west
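If you no longer need the demo namespaces, you can also remove them (an optional step beyond the original exercise):
## West cluster
$ oc delete namespace hello-world-west
## East cluster
$ oc delete namespace hello-world-east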
Conclusion
Red Hat Service Interconnect (RHSI) simplifies hybrid cloud connectivity by enabling secure, on-demand application connections across diverse environments without complex VPNs or firewall headaches. RHSI support on IBM Power enables seamless interconnection between services across OpenShift clusters and local RHEL systems, with the Network Observer console providing real-time visibility into the distributed service mesh.
Best wishes and good luck with your RHSI journey! 🚀