Blog

  • Bridging the Gap: Shared NFS Storage Between VMs and OpenShift

    When migrating workloads to OpenShift, one of the most common hurdles is data sharing. You might have a legacy VM writing logs or processing files and a modern containerized app that needs to read them—or vice versa.

    While OpenShift natively supports NFSv4, getting “identical visibility” across both environments requires a bit of finesse. Here is how to handle NFS mounting without compromising security or breaking the OpenShift security model.

    The “Don’t Do This” List

    Before we dive into the solution, it’s important to understand why the “obvious” paths often lead to trouble:

    • Avoid Custom SCCs for Direct Container Mounts: You could technically mount the NFS share directly inside the container. However, in OpenShift, Pods run under restricted Security Context Constraints (SCCs). Bypassing these with a custom SCC opens up attack vectors. It's better to let the platform handle the mount.
    • Don't Hack the CSI Driver: You might be tempted to force a dynamic provisioner to use a specific root path. This is a bad move. CSI drivers create unique subfolders for a reason: to prevent App A from accidentally (or maliciously) seeing App B's data. Breaking this breaks your security isolation.

    The Solution: Static PersistentVolumes

    The most robust way to ensure a VM and a Pod see the exact same folder is by using a Static PersistentVolume (PV). This bypasses the dynamic provisioner’s tendency to create unique subfolders, allowing you to point OpenShift exactly where the VM is looking.

    1. Define the Static PersistentVolume

    You must manually define the PV. This allows you to hardcode the server and path to match the VM’s mount point.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-pv
    spec:
      capacity:
        storage: 100Gi # Required but not used with nfs
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain # Data survives PVC deletion
      nfs:
        path: /srv/nfs_dir  # The identical path used by your VM
        server: nfs-server.example.com

    Important Server-Side Config: To avoid "UID/GID havoc," ensure your NFS server export is configured with rw,sync,no_root_squash,no_all_squash. This prevents the server from rewriting IDs, which is vital because OpenShift's restricted containers run with random UIDs. See the Cloud Pak for Data article for more details.
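    For reference, the matching server-side export (assuming the /srv/nfs_dir path from the PV example above; adjust the path and client scope to your environment) would look like this in /etc/exports, applied with exportfs -ra:

    ```text
    /srv/nfs_dir  *(rw,sync,no_root_squash,no_all_squash)
    ```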

    2. Create the PersistentVolumeClaim (PVC)

    Next, create a PVC that binds specifically to the static PV you just created. By setting the storageClassName to an empty string, you tell OpenShift not to look for a dynamic provisioner.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-pvc
      namespace: your-app-namespace
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      volumeName: shared-pv # Direct binding to the static PV
      storageClassName: "" # Keep this empty to avoid dynamic provisioning

    3. Mount the PVC in your Pod

    Finally, reference the PVC in your Pod’s volume definition. This is where the magic happens: the container sees the filesystem exactly as the VM does.

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-data-app
    spec:
      containers:
      - name: app-container
        image: my-app-image
        volumeMounts:
        - name: nfs-storage
          mountPath: /var/data/shared
      volumes:
      - name: nfs-storage
        persistentVolumeClaim:
          claimName: shared-pvc

    Mount Options

    NFS can be picky. If your server requires specific versions or security flavors, add a mountOptions section to your PV definition to match the VM’s parameters exactly (e.g., nfsvers=4.1 or sec=sys).
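    For example, if the VM mounts the share with -o nfsvers=4.1,sec=sys, the PV spec would gain a fragment like the following (the option values are illustrative; copy the exact flags from the VM's /etc/fstab):

    ```yaml
    spec:
      mountOptions:
        - nfsvers=4.1
        - sec=sys
    ```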

    Summary

    By using a Static PV, you treat the NFS share as a known, fixed resource rather than a disposable volume. This keeps your OpenShift environment secure, your SCCs restricted, and your data perfectly synced between your infrastructure layers.

  • Introducing OpenShift Installer Provisioned Infrastructure (IPI) for IBM PowerVC (TechPreview)

    With Red Hat OpenShift Container Platform (OCP) 4.21, there is a new Technology Preview of the powervc install platform. The Tech Preview gives you early access to test, and provide feedback on, the simplified deployment of OCP on IBM Power Virtualization Center (PowerVC) managed infrastructure, which offers a powerful combination of enterprise-grade reliability and container orchestration. For administrators using IBM PowerVC, the Installer Provisioned Infrastructure (IPI) method simplifies the deployment process by automating the provisioning of the underlying infrastructure resources.

    We welcome feedback and any thoughts.

    Thanks to the dev leaders and the QE team.

    More details are at https://community.ibm.com/community/user/blogs/paul-bastide/2026/02/24/introducing-openshift-installer-provisioned-infras

  • Quick Fix: Resolving tmpfs Space Constraints on IBM Power (ppc64le)

    If you are deploying OpenShift on IBM Power and encountering failures during the node-image pull, you are likely hitting a known capacity limit in the temporary filesystem.

    The Problem

    In recent 4.x builds (specifically identified in OCPBUGS-70168), the default 4GiB allocation for /var/ostreecontainer in tmpfs is not enough for Power architectures. As the node-image attempts to pull and pivot to the latest operating system levels, it falls short by approximately 200MiB.

    The failure manifests in the logs as a “min-free-space” error:

    error: Writing content object: min-free-space-percent '3%' would be exceeded...


    The Solution: Manual Remount

    While PR #10304 is being reviewed and merged, you can resolve this manually by increasing the size of the mount.

    When to apply: This must be executed early in the boot process, specifically after the tmpfs is initialized but before the node-image-pull.sh script attempts to fetch the release layers.

    sudo mount -o remount,size=5G /var/ostreecontainer
    

    Key Considerations

    • Memory Usage: Increasing this limit “steals” an additional 1GB from main system memory. Ensure your LPAR or VM has enough overhead to accommodate this temporary spike.
    • Persistence: This is a volatile change to tmpfs. Once the node pivots and reboots into the new image, the temporary filesystem is cleared.
  • Yet more Images on the IBM Container Registry for Caching on Power

    The IBM Linux on Power team has released some new open source container images in the IBM Container Registry (ICR). The new images for valkey are particularly interesting for those working on caching.

    Image 	Version 	License 	Pull command 	Released
    opensearch-operator 	3.2.0 	Apache-2.0 	docker pull icr.io/ppc64le-oss/opensearch-operator-ppc64le:3.2.0 	Feb 17, 2026
    litellm-pgvector 	0.0.1 	MIT 	docker pull icr.io/ppc64le-oss/litellm-pgvector-ppc64le:0.0.1 	Feb 17, 2026
    

    Refer to https://community.ibm.com/community/user/blogs/priya-seth/2023/04/05/open-source-containers-for-power-in-icr for more details.

  • For Your Awareness: Selecting the Right OpenShift Cluster Type for Power Virtual Server IPI

    My colleague Christy has posted a new article, Selecting the Right OpenShift Cluster Type for Power Virtual Server IPI.

    The net: don't select a cluster type at random; be intentional. The key difference lies in Access and Internet Connectivity.

    ✅ Public Cluster: Reachable by anyone, simple setup via CIS DNS.
    ✅ Private Cluster: Internal users only. Uses a Bastion/VPN. Outbound updates stay enabled.
    ✅ Disconnected: Air-gapped. High security, high complexity (requires image mirroring).

    Pro-Tip: Most enterprise production workloads land in the Private category—it offers the best balance of security without the operational headache of a fully air-gapped environment.

    Learn more at https://community.ibm.com/community/user/blogs/christy-norman/2026/02/16/selecting-the-right-openshift-cluster-install-meth

  • Mirroring for OpenShift on IBM Power in Disconnected Environments

    Many IBM customers run IBM Power workloads in secure environments, where the convenience of a direct internet connection for the workload is restricted or strictly prohibited.

    For OpenShift Container Platform clusters in these disconnected environments, release images are pulled from a mirror registry, and update paths and recommendations are retrieved from a secure location. To run without a direct internet connection, you configure the cluster to access that secure location inside the disconnected environment.

    There are two levels of disconnected environments: Disconnected (Restricted Network) and Air-Gapped (Fully Isolated). Air-Gapped further restricts access, with no physical or logical connection to any outside network, whereas Restricted has limited network access and passes connections through a jumpbox.

    The OpenShift team has updated their support for disconnected environments with the release of oc-mirror version 2. Customers are encouraged to move from the prior standard, oc adm release mirror, to the new oc-mirror --v2 to get the best disconnected experience (signatures, attestations, performance).

    • oc adm mirror: Manual, script-heavy, and struggles with Operator Lifecycle Manager (OLM). You often ended up mirroring thousands of images you didn’t need just to get one Operator working.
    • oc-mirror (v2): A declarative, plugin-based approach. You define an ImageSetConfiguration file (YAML), and the tool calculates exactly which versions and dependencies are needed.

    This article shows how to mirror a ppc64le payload (release).

    1. Download the latest oc-mirror binary from the OpenShift mirror site and ensure it’s in your $PATH. You can download from https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.21.2/oc-mirror.tar.gz

    Note: As of OpenShift 4.18+, v1 is officially deprecated.

    2. Define your ImageSet and call it imageset-config.yaml. You can mirror the multi payload by switching from ppc64le to multi.
    apiVersion: mirror.openshift.io/v2alpha1
    kind: ImageSetConfiguration
    mirror:
      platform:
        architectures:
          - "ppc64le"
        channels:
          - name: stable-4.21
            minVersion: 4.21.0
            maxVersion: 4.21.1
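
    The same ImageSetConfiguration can also scope operator content, so you mirror only the packages you need instead of an entire catalog. A sketch of the operators stanza (the catalog tag and package name are examples; verify them against your operator index):

    ```yaml
    mirror:
      operators:
        - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.21
          packages:
            - name: kernel-module-management
    ```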
    
    3. Run the Mirroring

    a. For Disconnected (Mirror-to-Mirror):

    oc mirror --config imageset-config.yaml docker://local-registry.internal:5000 --v2
    

    b. For Air-Gapped (Mirror-to-Disk):

    1. On the Internet-connected machine:
    oc mirror --config imageset-config.yaml file://my-mirror-bundle --v2
    
    2. Move the my-mirror-bundle folder to your isolated environment.
    3. On the air-gapped machine:
    oc mirror --from file://my-mirror-bundle docker://local-registry.internal:5000 --v2
    

    Summary

    To support IBM Power workloads in high-security environments, oc-mirror v2 replaces manual, script-heavy mirroring with a declarative, YAML-based approach that automatically calculates required dependencies for Disconnected or Air-Gapped clusters. This transition to a plugin-based architecture (v2) ensures better performance, security, and reproducibility for maintaining ppc64le payloads without a direct internet connection.

    We encourage you to switch to using oc-mirror --v2 as soon as possible.

  • 🚀 Multi-Arch Tuning Operator v1.2.2 is now live!

    Red Hat released 1.2.2 of the Multi-Arch Tuning Operator. This update focuses on enhancing stability, security, and developer experience for managing workloads across diverse CPU architectures in your OpenShift cluster’s compute plane.

    ✨ What’s New in v1.2.2?

    • UBI 9 Minimal Base Image: The operator has been migrated to use the Red Hat Universal Base Image (UBI) 9 Minimal. This change provides a smaller, more secure footprint and better compatibility with modern OpenShift ecosystems.
    • API Cleanup: We’ve removed unused fields from the CustomResource to streamline configuration and reduce clutter.
    • Upgraded Dependencies: To ensure the highest performance and security standards, the operator now uses:
      • Operator SDK v1.33
      • Go 1.25.3
      • Controller-Runtime 1.34.1
    • Security Patches: Includes critical updates to underlying libraries (e.g., sigstore/fulcio to fix CVE-2025-66564).

    If you’re running a cluster with mixed architectures (like x86_64 and ppc64le), this operator is the “easy button” for ensuring your pods land on the right nodes without manual headaches.

  • OpenShift Container Platform 4.21.0 has been released

    • ppc64le payload: https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/4.21.0/

    • multi payload: https://mirror.openshift.com/pub/openshift-v4/multi/clients/ocp/4.21.0/ppc64le/

    New features are:

    • Added Installer-Provisioned Infrastructure (IPI) support for PowerVC [Technology Preview]

    • Enable Spyre Accelerator on IBM Power®

    • CIFS/SMB CSI Driver Operator

    • Red Hat build of Kueue

    • Kernel Module Management Operator

    • Serviceability notes for kdump on IBM Power

    Release Notes https://docs.redhat.com/en/documentation/openshift_container_platform/4.21/html/release_notes/ocp-4-21-release-notes#ocp-release-notes-ibm-power_release-notes

    Video: What’s New in OpenShift 4.21 – Key Updates and New Features (YouTube)

    See https://community.ibm.com/community/user/blogs/brandon-pederson1/2026/02/04/red-hat-openshift-421-is-now-generally-available-o

  • OpenShift 4.21.0: Using rhel-coreos-10 nodes

    With OpenShift 4.21.0, you will see two new entries in the release payload for rhel-coreos-10 and rhel-coreos-10-extensions. These are node images for RHEL 10. This capability is a DevPreview/Tech Preview, and will be available as a stable/generally available feature in a future release.

    An example of the release payload entry:

    rhel-coreos-10                                 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca2fc3c3f56da0772fe6d428b78c4ce4ed36afaf9cc5a71f6c310ffb1673772a
    rhel-coreos-10-extensions                      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:077b897ed0cd007ed9b88762d6834fcfc0b9508538ddc9df1dbc0917231e45fc
    

    Note: there is a limitation that RHEL 10 nodes can only be used with Power10 or Power11 based systems.

    You can create a new MachineConfigPool named rhel10 by following Multi-Architecture Compute: Supporting Architecture Specific Operating System and Kernel Parameters. The matchExpressions clause applies both the worker and rhel10 MachineConfig objects to members of the MachineConfigPool, and membership is determined by the nodeSelector's matchLabels.

    cat <<EOF | oc apply -f -
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    metadata:
      name: rhel10
    spec:
      machineConfigSelector:
        matchExpressions:
          - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,rhel10]}
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/rhel10: ""
    EOF
    

    Verify the rhel10 MachineConfigPool is created, for example with oc get mcp rhel10. Nodes join the pool once they carry the node-role.kubernetes.io/rhel10 label from the matchLabels above.

    Find the rhel-coreos-10 image in the release payload:

    oc adm release info --image-for rhel-coreos-10
    

    Now that the rhel10 MachineConfigPool is created, apply a MachineConfig that overrides the pool's OS image:

    cat <<EOF | oc apply -f -
    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfig
    metadata:
      labels:
        machineconfiguration.openshift.io/role: rhel10
      name: 99-override-image-rhel10
    spec:
      osImageURL: "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca2fc3c3f56da0772fe6d428b78c4ce4ed36afaf9cc5a71f6c310ffb1673772a"
    EOF
    

    You have seen how to use RHEL10 nodes with OpenShift on IBM Power.

  • OpenShift 4.21.0 ClusterImagePolicy Feature enforces signature verification

    With OpenShift Container Platform 4.21, Red Hat further improved supply chain security with signature verification for the release image. The verification is controlled by the ClusterImagePolicy which specifies the Root CA and a scope of quay.io/openshift-release-dev/ocp-release. When an image is used on a node, the signatures are pulled from the mirror along with the image, and verified before starting a container.

    If you are running a disconnected cluster and the signature is missing from your mirror, you’ll see the following:

    • Install: The bootstrap node hangs and the installation does not complete, because the release signature cannot be verified.
    • Upgrade: The ClusterImagePolicy enforces signature verification. If the signatures are missing from your mirror, the Cluster Version Operator (CVO) will be blocked, preventing node updates.

    In order to continue, you may do one of the following:

    1. Use oc mirror --v2 to mirror your content. This feature automatically honors signatures; see Mirroring images for a disconnected installation using the oc-mirror plugin.
    2. If you are currently using oc adm release mirror, you can copy the sig file for the release payload:
    $ oc image mirror quay.io/openshift-release-dev/ocp-release:${RELEASE_DIGEST}.sig registry.example.com/openshift/whatever:${RELEASE_DIGEST}.sig
    

    RELEASE_DIGEST: The release digest with the : character replaced by a - character. For example, sha256:884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee becomes sha256-884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee.sig.
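    Because the rename is purely mechanical, it can be scripted. A minimal sketch in bash (the digest is the example value from above):

    ```shell
    #!/usr/bin/env bash
    # Turn a release digest into its signature-file name by replacing ':' with '-'
    RELEASE_DIGEST="sha256:884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee"
    SIG_NAME="${RELEASE_DIGEST/:/-}.sig"   # bash pattern substitution
    echo "${SIG_NAME}"   # → sha256-884e1ff5effeaa04467fab9725900e7f0ed1daa89a7734644f14783014cebdee.sig
    ```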

    It is recommended that you switch to using oc mirror --v2.

    Good luck with your disconnected clusters, and ensure image signatures are present in your local mirror using one of the mirroring methods.

    References

    1. Red Hat OpenShift Docs: Chapter 12. Manage secure signatures with sigstore
    2. Red Hat Developer: How to verify container signatures in disconnected OpenShift
    3. Red Hat Developer: Verify Cosign bring-your-own PKI signature on OpenShift
    4. Red Hat Developer: How oc-mirror version 2 enables disconnected installations in OpenShift 4.16