Category: OpenShift

  • 🚀 Build Event-Driven Serverless Apps with OpenShift & Kafka!

    Discover how Red Hat OpenShift Serverless, powered by Knative, integrates seamlessly with Apache Kafka to enable scalable, event-driven architectures.

    In the latest Power Developer Exchange blog, walk through:
    ✅ What Knative brings to serverless workloads
    ✅ How to deploy a sample serverless app on OpenShift Container Platform 4.18.9
    ✅ Configuring Streams for Apache Kafka to route real-time events

    This integration empowers developers to create responsive, cloud-native applications that dynamically scale with incoming Kafka messages—perfect for modern, reactive systems.
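    As a taste of the configuration involved, a Knative KafkaSource routes messages from a topic to a service. The sketch below is illustrative only: the broker address, topic, and sink service name are placeholders, not values from the blog.

    ```yaml
    # Hypothetical KafkaSource: forwards events from a Kafka topic to a
    # Knative service (all names and addresses are placeholders).
    apiVersion: sources.knative.dev/v1beta1
    kind: KafkaSource
    metadata:
      name: kafka-source-example
    spec:
      bootstrapServers:
        - my-cluster-kafka-bootstrap.kafka:9092
      topics:
        - my-topic
      sink:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: event-display
    ```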

    👉 Read the full blog to learn how to combine OpenShift Serverless and Kafka for enterprise-grade scalability and reliability!


    https://community.ibm.com/community/user/blogs/kumar-abhishek/2025/11/13/red-hat-openshift-serverless-with-apache-kafka

    #OpenShift #Serverless #Knative #ApacheKafka #CloudNative #EventDrivenArchitecture

  • Announcing Red Hat OpenShift 4.20 Now Generally Available on IBM Power

    Red Hat OpenShift Container Platform 4.20 is now generally available on IBM® Power® servers, advancing hybrid cloud and AI-ready infrastructure. This release delivers expanded architecture support, accelerator enablement for IBM Spyre™, and enhanced security with the Security Profiles Operator. Together, IBM and Red Hat continue driving enterprise-grade container orchestration optimized for Power, enabling high-performance workloads and modern AI applications. Organizations can now build, deploy, and scale mission-critical workloads with confidence on a secure, resilient platform.

    Learn more at IBM Blog | IBM Power Modernization

    Credit to Author : Brandon Pederson

  • Help… My SystemMemoryExceedsReservation

    Red Hat explains the SystemMemoryExceedsReservation alert received in OCP 4. There is also some detail in alerts/machine-config-operator/SystemMemoryExceedsReservation.md.

    It is a warning triggered when the *memory usage* of the *system processes* exceeds 95% of the reservation, not the total memory on the node.

    You can check your configuration by SSHing to one of the workers and running sudo ps -ef | grep /usr/bin/kubelet | grep system-reserved:

    [root@worker-0 core]# sudo ps -ef | grep  /usr/bin/kubelet | grep system-reserved
    root        2733       1 15 Nov04 ?        12:21:38 /usr/bin/kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime-endpoint=/var/run/crio/crio.sock --runtime-cgroups=/system.slice/crio.service --node-labels=node-role.kubernetes.io/worker,node.openshift.io/os_id=rhel, --node-ip=10.20.29.240 --minimum-container-ttl-duration=6m0s --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --cloud-provider= --hostname-override= --provider-id= --pod-infra-container-image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d2f23cbaebe30a59f7af3b5a9e9cf6157f8ed143af494594e1c9dcf924ce0ec --system-reserved=cpu=500m,memory=1Gi,ephemeral-storage=1Gi --v=2

    You’ll notice the default is half a core and 1Gi of memory: cpu=500m,memory=1Gi.
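    To make the 95% rule concrete, the threshold implied by the default reservation can be computed with plain shell arithmetic (no cluster needed; the 1Gi figure is the default shown above):

    ```shell
    # SystemMemoryExceedsReservation fires when system-process memory
    # exceeds 95% of the reservation; compute that for memory=1Gi.
    reserved_bytes=$((1024 * 1024 * 1024))          # memory=1Gi
    threshold_bytes=$((reserved_bytes * 95 / 100))  # 95% of the reservation
    echo "alert fires above ${threshold_bytes} bytes of system-process memory"
    ```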

    You can tweak the configuration using:

    apiVersion: machineconfiguration.openshift.io/v1
    kind: KubeletConfig
    metadata:
      name: set-allocatable
    spec:
      machineConfigPoolSelector:
        matchLabels:
          pools.operator.machineconfiguration.openshift.io/worker: ""
      kubeletConfig:
        systemReserved:
          cpu: 1000m
          memory: 3Gi

    Wait for the change to roll out; in 4.19 it almost certainly just restarts the kubelet, without rebooting the node.
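    For context on what raising systemReserved does: the kubelet computes node Allocatable as capacity minus system-reserved, kube-reserved, and the hard-eviction threshold. A sketch with made-up numbers (a hypothetical 32Gi worker and an assumed 100Mi eviction threshold):

    ```shell
    # Hypothetical 32Gi worker: Allocatable = capacity - systemReserved
    # - kubeReserved - evictionHard (all values in Mi; numbers are examples).
    capacity_mi=32768         # 32Gi node
    system_reserved_mi=3072   # memory: 3Gi from the KubeletConfig
    kube_reserved_mi=0        # not set in this example
    eviction_hard_mi=100      # assumed memory.available eviction threshold
    allocatable_mi=$((capacity_mi - system_reserved_mi - kube_reserved_mi - eviction_hard_mi))
    echo "Allocatable memory: ${allocatable_mi}Mi"
    ```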

  • Notes on Adding Intel Worker

    1. You need to grab the latest ignition on your Intel bastion:
    curl -k -H "Accept: application/vnd.coreos.ignition+json;version=3.4.0" -o /var/www/html/ignition/worker.ign https://localhost:22623/config/worker
    restorecon -R /var/www/html/ignition/
    
    2. Clone the repository: git clone https://github.com/ocp-power-automation/ocp4-upi-multiarch-compute
    3. Change directory to ocp4-upi-multiarch-compute/tf/add-powervm-workers
    4. Create a tfvars file:
    auth_url    = "https://<vl>:5000/v3"
    user_name   = ""
    password    = ""
    insecure    = true
    tenant_name = "ocp-qe"
    domain_name = "Default"
    
    network_name                = "vlan"
    ignition_ip                 = "10.10.19.16"
    resolver_ip                 = "10.10.19.16"
    resolve_domain              = "pavan-421ec3.ocpqe"
    power_worker_prefix         = "rhcos9-worker"
    flavor_id                   = "8ee61c00-b803-49c5-b243-62da02220ed6"
    image_id                    = "f48b00dc-d672-4f9a-bac8-a3383bea4a3f"
    openstack_availability_zone = "e980"
    
    # the number of workers to create
    worker_count = 1
    
    5. Run Terraform: terraform apply -var-file=data/var.tfvars
    6. On a Power bastion node, add a dhcpd entry to /etc/dhcp/dhcpd.conf and a named forwarder pointing to your Intel bastion (forwarders { 8.8.4.4; };) in /etc/named.conf. Then restart each using systemctl restart dhcpd and systemctl restart named.
    7. The VM is created in the ‘Stopped’ state; manually ‘Start’ it.
    8. Approve the CSRs that are generated.
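    The dhcpd entry and named forwarder mentioned above look roughly like the following; the MAC address, IP, and hostname are placeholders for your worker's values:

    ```
    # /etc/dhcp/dhcpd.conf -- placeholder MAC, IP, and hostname
    host rhcos9-worker-0 {
      hardware ethernet 52:54:00:aa:bb:cc;
      fixed-address 10.10.19.40;
      option host-name "rhcos9-worker-0";
    }

    # /etc/named.conf -- forward lookups to the Intel bastion's resolvers
    options {
      forwarders { 8.8.4.4; };
    };
    ```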

    Public docs are at https://github.com/ocp-power-automation/ocp4-upi-multiarch-compute/tree/main/tf/add-powervm-workers#add-powervm-workers-to-intel-cluster

  • IBM Cloud Pak for AIOps supports Multi-Arch Compute on IBM Power

    :information_source: Our second Cloud Pak supporting Multi-Arch Compute with IBM Power has arrived: IBM Cloud Pak for AIOps now supports installation on an Intel node in a Power cluster.

    IBM Cloud Pak for AIOps can be deployed on a multi-architecture Red Hat OpenShift cluster, provided the nodes with compatible architecture (x86_64 or s390x) fulfill the necessary hardware prerequisites for IBM Cloud Pak for AIOps. To install IBM Cloud Pak for AIOps on a multi-architecture Red Hat OpenShift cluster, you must annotate your IBM Cloud Pak for AIOps namespace. For more information, see Create a custom namespace.

    You must apply an annotation to limit the architecture to amd64.
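    The exact annotation key is in the AIOps documentation; the usual OpenShift mechanism for pinning a namespace to a single architecture is the openshift.io/node-selector namespace annotation. A sketch, with a placeholder namespace name:

    ```yaml
    apiVersion: v1
    kind: Namespace
    metadata:
      name: aiops   # placeholder; use your custom AIOps namespace
      annotations:
        # Schedules every pod in this namespace onto amd64 nodes.
        openshift.io/node-selector: kubernetes.io/arch=amd64
    ```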

  • IBM Cloud Pak for Business Automation adds support for Multi-Arch Compute clusters on IBM Power

    :information_source: With our partners in Cloud Pak for Business Automation, we are pleased to share the first Cloud Pak to support Multi-Arch Compute clusters on IBM Power:

    *Support for OCP multi-architecture clusters*
    An OpenShift Container Platform (OCP) multi-architecture cluster supports compute machines with different architectures, including ppc64le for IBM Power, s390x for IBM Z, and amd64 (x86_64) for Intel and AMD. A CP4BA 25.0.0-IF002 deployment can be assigned to nodes that match the appropriate image architecture. For more information about assigning pods to nodes, see Placing pods on particular nodes.
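    Architecture-aware placement is typically expressed with a nodeSelector on the workload; a minimal sketch (the deployment name and image are placeholders, not CP4BA artifacts):

    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: arch-pinned-example   # placeholder name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: arch-pinned-example
      template:
        metadata:
          labels:
            app: arch-pinned-example
        spec:
          # Pin this workload to amd64 nodes in the multi-arch cluster.
          nodeSelector:
            kubernetes.io/arch: amd64
          containers:
            - name: app
              image: registry.example.com/app:latest   # placeholder image
    ```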

    Ref: https://www.ibm.com/docs/en/cloud-paks/cp-biz-automation/25.0.0?topic=notes-whats-new-in-2500

  • 🎉 A New Deployable Architecture variation Quickstart OpenShift for Power Virtual Server

    IBM Cloud introduces a new Deployable Architecture variation, Quickstart OpenShift, for the *Power Virtual Server with VPC landing zone* deployable architecture. This quickstart accelerates the deployment of an OpenShift cluster fully configured with IBM Cloud services.

    For more information, go to OpenShift for the PowerVS with VPC Landing Zone Deployable Architecture and Power Virtual Server with VPC landing zone

  • 🎉 Multiarch Tuning Operator v1.2.0 is released

    The Multiarch Tuning Operator v1.2.0 is released. It continues to enhance the user experience for administrators of OpenShift clusters with multi-architecture compute.

    If you’ve ever run an Intel container on a Power node, v1.2.0 alerts you using an eBPF program that monitors for ENOEXEC (aka Exec Format Error). It’s super helpful when you are migrating to IBM Power.

    You can install the Multiarch Tuning Operator right from OperatorHub on any cluster running 4.16 or later.

    To enable the monitoring, configure your global ClusterPodPlacementConfig:

    oc create -f - <<EOF
    apiVersion: multiarch.openshift.io/v1beta1
    kind: ClusterPodPlacementConfig
    metadata:
      name: cluster
    spec:
      logVerbosity: Normal
      namespaceSelector:
        matchExpressions:
          - key: multiarch.openshift.io/exclude-pod-placement
            operator: DoesNotExist
      plugins:
        execFormatErrorMonitor:
          enabled: true
    EOF
    

    References

    1. docs
    2. container
    3. Enhancement: MTO-0004-enoexec-monitoring.md

  • 🎉 Red Hat Build of Kueue v1.1 Now Available on IBM Power

    We’re excited to let you know that Red Hat Build of Kueue v1.1 is now available on IBM Power systems! This marks an important step in enabling AI and HPC workloads on IBM Power.

    A little background: Kueue is a Kubernetes-native job queueing system designed to manage workloads efficiently in shared clusters. It provides a set of APIs and controllers that act as a job-level manager, making intelligent decisions about:

    • When a job should start – allowing pods to be created when resources are available.
    • When a job should stop – ensuring active pods are deleted when the job completes or resources need to be reallocated.

    This approach helps organizations optimize resource utilization on IBM Power OpenShift clusters.
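    To illustrate the model, here is a minimal sketch using the upstream Kueue v1beta1 API: jobs are submitted to a LocalQueue, which draws quota from a ClusterQueue. The names and quotas below are illustrative, not from the Red Hat documentation.

    ```yaml
    apiVersion: kueue.x-k8s.io/v1beta1
    kind: ResourceFlavor
    metadata:
      name: default-flavor
    ---
    apiVersion: kueue.x-k8s.io/v1beta1
    kind: ClusterQueue
    metadata:
      name: power-queue          # illustrative name
    spec:
      namespaceSelector: {}      # admit workloads from any namespace
      resourceGroups:
        - coveredResources: ["cpu", "memory"]
          flavors:
            - name: default-flavor
              resources:
                - name: "cpu"
                  nominalQuota: 16      # illustrative quota
                - name: "memory"
                  nominalQuota: 64Gi
    ---
    apiVersion: kueue.x-k8s.io/v1beta1
    kind: LocalQueue
    metadata:
      name: team-queue           # illustrative name
      namespace: default
    spec:
      clusterQueue: power-queue
    ```

    A Job then opts in by carrying the kueue.x-k8s.io/queue-name: team-queue label; Kueue holds it until the ClusterQueue has capacity to admit it.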

    Documentation is available at https://docs.redhat.com/en/documentation/openshift_container_platform/4.19/html/ai_workloads/red-hat-build-of-kueue

  • Notes from Testing SMB/CIFS CSI driver with OpenShift

    1. Log in to the OpenShift cluster with oc login. You’ll need to do this with a password, not a kubeconfig.
    2. Clone the repository: git clone https://github.com/prb112/openshift-samba
    3. Change directory: cd openshift-samba
    4. Create the project: oc new-project samba-test
    5. Update the project permissions:
    oc label namespace/samba-test security.openshift.io/scc.podSecurityLabelSync=false --overwrite
    oc label namespace/samba-test pod-security.kubernetes.io/enforce=privileged --overwrite
    oc label namespace/samba-test pod-security.kubernetes.io/audit=privileged --overwrite
    oc label namespace/samba-test pod-security.kubernetes.io/warn=privileged --overwrite
    
    6. Enable the registry’s default route:
    $ oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge
    
    7. Run ./enable-registry-and-push.sh:
    $ ./enable-registry-and-push.sh
    === Image successfully pushed to OpenShift registry ===
    You can now use this image in your deployments with: default-route-openshift-image-registry.apps.kt-test-cp4ba-1174.powervs-openshift-ipi.cis.ibm.net/samba-test/samba:latest
    
    8. Create the secret with oc create secret generic smbcreds --from-literal username=USERNAME --from-literal password="PASSWORD"
    9. Set up the SMB server:
    cat << EOF | oc apply -f -
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: smb-server
      namespace: samba-test
      labels:
        app: smb-server
    spec:
      type: ClusterIP
      selector:
        app: smb-server
      ports:
        - port: 445
          name: smb-server
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: smb-client-provisioner
      namespace: samba-test
    ---
    kind: StatefulSet
    apiVersion: apps/v1
    metadata:
      name: smb-server
      namespace: samba-test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: smb-server
      template:
        metadata:
          name: smb-server
          labels:
            app: smb-server
        spec:
          serviceAccountName: smb-client-provisioner
          nodeSelector:
            kubernetes.io/os: linux
            kubernetes.io/hostname: worker-0
          containers:
            - name: smb-server
              image: image-registry.openshift-image-registry.svc:5000/samba-test/samba:latest
              ports:
                - containerPort: 445
              securityContext:
                privileged: true
                capabilities:
                    add:
                    - CAP_SYS_ADMIN
                    - CAP_FOWNER
                    - NET_ADMIN
                    - SYS_ADMIN
                    drop:
                    - ALL
                runAsUser: 0
                runAsNonRoot: false
                readOnlyRootFilesystem: false
                allowPrivilegeEscalation: true
              volumeMounts:
                - mountPath: /export/smbshare
                  name: data-volume
          volumes:
            - name: data-volume
              hostPath:
                path: /var/smb
                type: DirectoryOrCreate
    EOF
    
    10. Set the permissions: oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:samba-test:smb-client-provisioner and oc adm policy add-scc-to-user privileged -z smb-client-provisioner -n samba-test
    11. Kill the existing pods: oc delete rs --all -n samba-test
    12. Reset the samba-test permissions:
    oc rsh smb-server-0
    chmod -R 777 /export
    
    13. Check that the connectivity works:
    # oc rsh smb-server-0
    $ smbclient //smb-server.samba-test.svc.cluster.local/data -U USERNAME --password=PASSWORD -W WORKGROUP
    $ mkdir /export/abcd
    
    14. Then you can create the SMB test using:
    cat <<EOF | oc apply -f -
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: smb
    provisioner: smb.csi.k8s.io
    parameters:
      source: //smb-server.samba-test.svc.cluster.local/data
      csi.storage.k8s.io/provisioner-secret-name: smbcreds
      csi.storage.k8s.io/provisioner-secret-namespace: samba-test
      csi.storage.k8s.io/node-stage-secret-name: smbcreds
      csi.storage.k8s.io/node-stage-secret-namespace: samba-test
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    mountOptions:
      - dir_mode=0777
      - file_mode=0777
      - uid=1001
      - gid=1001
      - noperm
      - mfsymlinks
      - cache=strict
      - noserverino  # required to prevent data corruption
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: pvc-smb-1005
      namespace: samba-test
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: smb
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-smb
      namespace: samba-test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-smb
      template:
        metadata:
          labels:
            app: nginx-smb
        spec:
          containers:
            - image: registry.access.redhat.com/ubi10/nginx-126@sha256:8e282961aa38ee1070b69209af21e4905c2ca27719502e7eaa55925c016ecb76
              name: nginx-smb
              command:
                - "/bin/sh"
                - "-c"
                - while true; do echo $(date) >> /mnt/outfile; sleep 1; done
              volumeMounts:
                - name: smb01
                  mountPath: "/mnt"
                  readOnly: false
          volumes:
            - name: smb01
              persistentVolumeClaim:
                claimName: pvc-smb-1005
    EOF
    
    15. Find the test pod: oc get pod -l app=nginx-smb
    16. Connect to the test pod and load a test file:
    # oc rsh pod/nginx-smb-6b55dc568-mbk9t
    $ dd if=/dev/random of=/mnt/testfile bs=1M count=10
    $ sha256sum /mnt/testfile
    2bc558e0ccf2995a23cfa14c5cc500d9b4192b046796eb9fbfce772140470223  /mnt/testfile
    
    17. Rollout restart:
    # oc rollout restart deployment nginx-smb
    
    18. Find the test pod: oc get pod -l app=nginx-smb
    19. Connect to the test pod and verify the test file:
    # oc rsh pod/nginx-smb-64cbbb9f56-7zfv7
    $ sha256sum /mnt/testfile
    2bc558e0ccf2995a23cfa14c5cc500d9b4192b046796eb9fbfce772140470223  /mnt/testfile
    

    The sha256sum should agree with the first one.

    20. Restart the SMB server:
    oc rollout restart statefulset smb-server
    
    21. Connect to the test pod and verify the test file:
    # oc rsh pod/nginx-smb-64cbbb9f56-7zfv7
    $ sha256sum /mnt/testfile
    2bc558e0ccf2995a23cfa14c5cc500d9b4192b046796eb9fbfce772140470223  /mnt/testfile
    

    These should all agree.
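    The restart checks above reduce to comparing digests of the same file over time. A local sketch of that logic (the temp file and its contents are stand-ins for the file on the SMB share):

    ```shell
    # If the data survives pod restarts, its SHA-256 digest is unchanged.
    printf 'payload' > /tmp/smb-demo-file
    before=$(sha256sum /tmp/smb-demo-file | awk '{print $1}')
    # ...in the cluster, pods restart here; the real file lives on the share...
    after=$(sha256sum /tmp/smb-demo-file | awk '{print $1}')
    [ "$before" = "$after" ] && echo "digests agree"
    ```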

    That’s all for testing. (I tried it out on your system.)