Bridging the Gap: Shared NFS Storage Between VMs and OpenShift

When migrating workloads to OpenShift, one of the most common hurdles is data sharing. You might have a legacy VM writing logs or processing files and a modern containerized app that needs to read them—or vice versa.

While OpenShift natively supports NFSv4, getting “identical visibility” across both environments requires a bit of finesse. Here is how to handle the NFS mount without breaking the OpenShift security model.

The “Don’t Do This” List

Before we dive into the solution, it’s important to understand why the “obvious” paths often lead to trouble:

  • Avoid custom SCCs for direct container mounts. You could technically mount the NFS share from inside the container itself. However, in OpenShift, Pods run under restricted Security Context Constraints (SCCs), and bypassing these with a custom (or privileged) SCC widens your attack surface. Let the platform handle the mount instead.
  • Don’t hack the CSI driver. You might be tempted to force a dynamic provisioner to use a specific root path. This is a bad move: CSI drivers create unique subfolders for a reason, namely to prevent App A from accidentally (or maliciously) seeing App B’s data. Breaking this breaks your security isolation.

The Solution: Static PersistentVolumes

The most robust way to ensure a VM and a Pod see the exact same folder is by using a Static PersistentVolume (PV). This bypasses the dynamic provisioner’s tendency to create unique subfolders, allowing you to point OpenShift exactly where the VM is looking.

1. Define the Static PersistentVolume

You must manually define the PV. This allows you to hardcode the server and path to match the VM’s mount point.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 100Gi # Required field, but capacity is not enforced for NFS
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain # Data survives PVC deletion
  nfs:
    path: /srv/nfs_dir  # The identical path used by your VM
    server: nfs-server.example.com
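
For reference, the VM side of this arrangement is just a conventional NFS mount of the same export. A typical /etc/fstab entry might look like the following (the local mount point and NFS version are illustrative; match whatever your VM already uses):

nfs-server.example.com:/srv/nfs_dir  /mnt/shared  nfs  nfsvers=4.1,rw  0 0

As long as the server and path here match the nfs block of the PV, both sides operate on the exact same directory tree.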

Important Server-Side Config: To avoid “UID/GID havoc,” ensure your NFS server export is configured with rw,sync,no_root_squash,no_all_squash. This stops the server from rewriting IDs, which matters because OpenShift’s restricted SCC assigns containers arbitrary UIDs. See the Cloud Pak for Data article for more details.
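
On the server, that advice translates to an /etc/exports entry along these lines (the client subnet is an assumption; scope it to your cluster and VM networks):

/srv/nfs_dir  192.168.1.0/24(rw,sync,no_root_squash,no_all_squash)

After editing the file, run exportfs -ra to apply the change without restarting the NFS service.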

2. Create the PersistentVolumeClaim (PVC)

Next, create a PVC that binds specifically to the static PV you just created. By setting the storageClassName to an empty string, you tell OpenShift not to look for a dynamic provisioner.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
  namespace: your-app-namespace
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: shared-pv # Direct binding to the static PV
  storageClassName: "" # Keep this empty to avoid dynamic provisioning
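
Once the PVC is applied, it is worth confirming that the direct binding took effect. Both objects should report a Bound status (names as defined above):

oc get pv shared-pv
oc get pvc shared-pvc -n your-app-namespace

If the PVC sits in Pending instead, double-check that volumeName matches the PV name and that the access modes and requested size are compatible.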

3. Mount the PVC in your Pod

Finally, reference the PVC in your Pod’s volume definition. This is where the magic happens: the container sees the filesystem exactly as the VM does.

apiVersion: v1
kind: Pod
metadata:
  name: shared-data-app
spec:
  containers:
  - name: app-container
    image: my-app-image
    volumeMounts:
    - name: nfs-storage
      mountPath: /var/data/shared
  volumes:
  - name: nfs-storage
    persistentVolumeClaim:
      claimName: shared-pvc
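
A quick sanity check of the shared view: write a file from the VM side of the mount, then list the mount path from inside the Pod (pod name and path as defined above):

oc exec shared-data-app -- ls -l /var/data/shared

The file created by the VM should appear immediately, since both sides are reading the same export directory.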

Mount Options

NFS can be picky. If your server requires specific versions or security flavors, add a mountOptions section to your PV definition to match the VM’s parameters exactly (e.g., nfsvers=4.1 or sec=sys).
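
For example, to pin the mount to NFSv4.1 with AUTH_SYS security (values illustrative; copy whatever options your VM’s mount uses), add this alongside the nfs block in the PV spec:

spec:
  mountOptions:
    - nfsvers=4.1
    - sec=sys

Mismatched versions or security flavors between the VM and the Pod are a common source of “works on one side only” symptoms, so keeping these in lockstep is worth the extra lines.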

Summary

By using a Static PV, you treat the NFS share as a known, fixed resource rather than a disposable volume. This keeps your OpenShift environment secure, your SCCs restricted, and your data perfectly synced between your infrastructure layers.
